
suprjami
2,425 karma
superjamie.github.io

  1. While I agree with the sentiment that LLM coding can produce a lot of inefficient junk code which works, with holes, if you're lucky...

    What you're describing is 7 days of productivity supported by probably 7+ years (or 27+ years) of experience and learning and getting things wrong and restarting over again.

    It is definitely wonderful to see though.

  2. Extremely cringe article.

    The biggest thing to affect laptops in "decades" is solid state storage. No longer do you need to worry about killing your entire device simply by putting it down on a solid surface.

    There are also plenty of other things, like modern dense lithium-ion batteries with 12+ hour runtimes, super power-friendly CPUs of all architectures, the ultra-thin metal body popularised by Apple, LCD panels without ghosting, and external power bricks instead of literally a PC power supply in a briefcase.

    But yeah sure, the infinite slop plagiarism machine is coming. Gotta get some clicks!

  3. Buy from listings with many sales and good reviews.

    Look for reviews with real images and real-seeming phrases in many languages, not 10 accounts all posting the same phrase with no pictures.

    Buy from stores with a name, preferably who have established a "brand" for themselves across many products. UGreen are a great example of this for USB gadgets.

    Don't buy from stores named Shop195772040, these will take your money and disappear or ship fakes. Don't buy suspiciously cheap items with no sales, these will do the same.

  4. If you want to do it cheap, get a desktop motherboard with two PCIe slots and two GPUs.

    Cheap tier is dual 3060 12G. Runs 24B Q6 and 32B Q4 at 16 tok/sec. The limitation is VRAM for large context: 1000 lines of code is ~20k tokens, and 32k tokens of context is ~10G of VRAM.

    Expensive tier is dual 3090 or 4090 or 5090. You'd be able to run 32B Q8 with large context, or a 70B Q6.

    For software, llama.cpp and llama-swap. GGUF models from HuggingFace. It just works.
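
    As a rough sketch of the serving side (the model filename is just an illustrative 32B Q4 GGUF, and the context size matches the ~10G figure above):

      # -c 32768 gives 32k context, -ngl 99 offloads every layer to the GPUs;
      # llama.cpp spreads the layers across both cards automatically
      llama-server -m ~/models/Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf \
        -c 32768 -ngl 99 --port 8080

    llama-swap then just launches a command like this per model and swaps between them as requests come in.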

    If you need more than that, you're into enterprise hardware with 4+ PCIe slots, which costs as much as a car and has the power consumption of a small country. You're better off just paying for Claude Code.

  5. I'm gonna give the AI a pass on this one.

    Student was intentionally dressed in military camo as part of a dress-up.

    The camera is probably grainy garbage at a poor angle.

    It's a reasonable assessment; a human would probably do a double take in the same situation.

    The problem is not having a human check the AI alert before locking down the school.

    The problem is having a society where people regularly take guns into schools and public places and commit mass murder.

  6. World IPv6 Day 6-6-26: just turn IPv4 off. Let the world catch up.

    I said the same thing for 6-6-16 too.

  7. Because this is way easier. It's effectively a printf debugger and editor you can just slot in the middle of the data stream.
  8. Ostensibly nerds. Linux users and maybe Mac users. Technical people who understand more about the software industry than all Mozilla Corp management since Brendan.

    It's difficult to monetize us when the product is a zero dollar intangible, especially when trust has been eroded such that we've all fled to Librewolf like you said.

    It's difficult to monetize normies when they don't use the software due to years of continuous mismanagement.

    I think giving Mozilla a new CEO is like assigning a new captain to the Titanic. I will be surprised if this company still exists by 2030.

  9. You really only need to make $2M before you can live off the interest forever. That's the goal of these people imo.
  10. You want "Trust"?

    Cut executive pay by 75%, back to what Brendan was getting paid, and invest that money in the company instead of lining your own pockets.

    Ditch the AI crap that nobody wants or needs and focus on making a good browser and email application, and advertising them to increase user count.

    Anything less than this is not trustworthy, it's just another lecherous MBA who is hastening the death of Mozilla.

  11. Unlikely. These patches have been carried out-of-tree for over a decade precisely because upstream OpenSSH won't accept them.
  12. All good, no snark inferred. Yes I have considered this, and I keep considering it every time I get a bad result. Sorry this response is so long.

    I think I have a good idea how these things work. I have run local LLMs for a couple of years on a pair of video cards here, trying out many open-weight models. I have watched the 3blue1brown ML course. I have done several LinkedIn Learning courses (which weren't that helpful, just mandatory). I understand about prompting precisely and personas (though I am not sold that personas are a good idea).

    I understand LLMs do not "know" anything, they just generate the next most likely token. I understand LLMs are not a database with accurate retrieval. I understand "reasoning" is not actual thinking, just manipulating tokens to steer a conversation in vector space. I understand LLMs are better for some tasks (summarisation, sentiment analysis, etc) than others (retrieval, math, etc). I understand they can only predict what's in their training data. I feel I have a pretty good understanding of how to get results from LLMs (or at least the ways people say you can get results).

    I have had some small success with LLMs. They are reasonably good at generating sub-100 line test code when given a precise prompt, probably because that is in training data scraped from StackOverflow. I did a certification earlier this year, threw ~1000 lines of Markdown notes into Gemini, and had it quiz me, which was very useful revision; it only got one question wrong of the couple of hundred I had it ask me.

    I'll give a specific example of a recent failure. My job is mostly troubleshooting and reading code, all of which is public open source (so accessible via LLM search tooling). I was trying to understand something where I didn't know the answer, and it was difficult code for me, so I was really not confident at all in my understanding. I wrote up my thoughts with references; the person I normally ask was busy, so I asked Gemini Pro. It confidently told me "yep you got it!".

    I asked someone else, who saw a (now obvious) flaw in my reasoning. At some point I'd switched from a hash algorithm which generates Thing A to a hash algorithm which generates Thing B. The error was clearly visible: one of my references had "Thing B" in the commit message title, which was in my notes with the public URL, while my whole argument was about "Thing A".

    This wasn't even a technical or code error, it was a text analysis and pattern matching error, which I didn't see because I was so focused on algorithms. Even Gemini, apparently the best LLM in the world and the one causing a "code red" at OpenAI, did not pick this up, when text analysis is supposed to be one of its core functionalities.

    I also have a lot of LLM-generated summarisation forced on me at work, and it's often so bad I now don't even read it. I've seen it generate text which makes no logical sense, and/or which uses a lot of words without really saying anything at all.

    I have tried LLM-based products where someone else is supposed to have done all the prompt crafting and added RAG embeddings, and I can just behave like a naive user asking questions. Even when I ask these things questions which I know are in the RAG, they cannot retrieve an accurate answer ~80% of the time. I have read papers which support the idea that most RAG falls apart after ~40k words, and our document set is much larger than that.

    Generally I find LLMs are at the point where to evaluate the LLM response I need to either know the answer beforehand so it was pointless to ask, or I need to do all the work myself to verify the answer which doesn't improve my productivity at all.

    About the only thing I find consistently useful about LLMs is writing my question down and not actually asking it, which is a form of Rubber Duck Debugging (https://en.wikipedia.org/wiki/Rubber_duck_debugging) which I have already practiced for many years because it's so helpful.

    Meanwhile trillions of dollars of VC-backed marketing assures me that these things are a huge productivity increaser and will usher in 25% unemployment because they are so good at doing every task even very smart people can do. I just don't see it.

    If you have any suggestions for me I will be very willing to look into them and try them.

  13. I don't see it that way. Tabs, spaces, curly brace placement, Vim, Emacs, VSCode, etc are largely aesthetic choices with some marginal unproven cognitive implications.

    I find people mostly prefer what they are used to, and if your preference was so superior then how could so many people build fantastic software using the method you don't like?

    AI isn't like that. AI is a bunch of people telling me this product can do wonderful things that will change society and replace workers, yet almost every time I use it, it falls far short of that promise. AI is certainly not reliable enough for me to jeopardize the quality of my work by using it heavily.

  14. No, RealSound was not a Covox-like hardware dongle. It was PC speaker only. Play the first few minutes of Mean Streets or Martian Memorandum or Countdown in DOSBox and you'll hear it.
  15. I miss it too, but I don't see how Mozilla Prism can be related to the company's success or failure.

    How do you suggest Electron makes money for OpenJS?

    You can still make PWAs backed by Firefox:

    https://github.com/linuxmint/webapp-manager

    This has not made Linux Mint any richer.

  16. No, that isn't correct.

    Brendan was in charge of a company built on LGBT people. For him to turn around and donate to anti-LGBT political causes was not appropriate. The company fairly lost faith in him. How can you work somewhere your boss hates you so badly he campaigns against your basic human rights?

    However, Brendan also did not increase his own pay by hundreds of percent while laying off over a quarter of the staff, which is what the executive teams since Brendan have done.

    The bad stuff in Firefox started when the C-suite, now led by an ex-McKinsey CEO, started lining their own pockets instead of running a technology company.

