- Captcha is a completely useless system, trivially solved by many agents and services. The only thing captcha does is annoy humans. I do agree with the problem, but I don't know what a solution would look like outside of government identification.
- Isn't static site generation exactly what hugo does?
- Over the six years you use your computer, do you ever expect to run into versioning issues and conflicts? Homebrew packages conflicting with local packages, something you compile needing a different python/ruby/node/rust/whatever version than the one you have locally installed, wanting to quickly try out a new package or upgrade without changing your system while keeping the option of rolling back safely, needing to quickly install a database, wanting to try out a new shell and shell config without bricking your system, with the option to roll back, etc. Nix gives you all of that and more for a one-time setup cost. Your argument is correct only if you expect to never change anything on your computer for the six years. But if I think about how often I have fought with Homebrew or some kind of versioning/path/binary conflicts in the past, then the investment in Nix has paid off exponentially.
It's also about peace of mind like you said. Before nix I sometimes felt anxiety installing or upgrading certain things on my computer. "Will this upgrade break stuff?" - and often it did and I'd have to spend the next few hours debugging. With nix I don't worry about any of that anymore.
- I think what it comes down to, and where many people get confused, is separating the technology itself from how we use it. The technology itself is incredible for learning new skills, but at the same time it incentivizes people to not learn. Just because you have an LLM doesn't mean you can skip the hard parts of doing textbook exercises and thinking hard about what you are learning. It's a bit similar to passively watching YouTube videos. You'd think that having all these amazing university lectures available on YouTube makes people learn much faster, but in reality it makes people lazy, because they believe they can passively sit there, watch a video, do nothing else, and expect that to replace a classroom education. That's not how humans learn. But it's not because YouTube videos or LLMs are bad learning tools, it's because people use them as a mental shortcut where they shouldn't.
- You absolutely need to spend money in PoE to buy stash tabs. It's basically mandatory if you play regularly. The difference from most dark patterns is that the spending has a very low cap. Once you've spent $50 or so on stash tabs you are set forever and never need to spend again. So it's not so different from buying a $50 game, except that you get to try it out for free first.
- I don't really see how examples are useful because you're not going to understand the context. My prompt may be something like "We recently added a new transcription backend api (see recent git commits), integrate it into the service worker. Before implementing, create a detailed plan, ask clarifying questions, and ask for approval before writing code"
Does that help you? I doubt it. But there you go.
- It doesn't feel off to me because that's the exact experience I've had as well. So it's unsurprising to me that many other people share that experience. I'm sure there is a bunch of paid promotion going on for all kinds of stuff on HN (especially what gets onto the front page), but I don't think this is one of those cases.
- Interesting, my experience has been the opposite. I've been running Codex and Sonnet 4.5 side by side the past few weeks, and Codex gives me better results 90% of the time, pretty much across all tasks. Where Claude really shines is that it's much faster than Codex. So if I know exactly what I want, or if it's a simpler task, I feel comfortable giving it to Claude because I don't want to wait for Codex to work through it. The Claude CLI is also a much better user experience than the Codex CLI. But Codex gets complex things right more consistently.
- Yeah, reading the docs it seems you are right. The landing page mentions AI-native at the very top and all over the place, so I got the wrong impression that it's somehow tightly coupled to an AI integration. But looks like it's optional.
- I wish this didn't have AI in it. I've been looking for a Jupyter alternative that is pure python and can be modified from a regular text editor. Jupytext works okay, but I miss the advanced Jupyter features. But I really don't want to deal with yet another AI assistant, especially not a custom one when I'm already using Claude/etc from the CLI and I want those agents to help me edit the notebooks.
Take out all the AI stuff and I'd give it a try. I use AI coding agents as my daily driver, but I really don't need this AI enshittification in every tool/library I'm using.
- That sounds more like an organizational problem. If you are an employee that doesn't care about maintainability of code, e.g. a freelancer working on a project you will never touch again after your contract is over, your incentive has always been to write crappy code as quickly as possible. Previously that took the form of copying cheap templates, copying and pasting code from StackOverflow as-is without adjustments, not caring about style, using tools to autogenerate bindings, and so on. I remember a long time ago I took over a web project that a freelancer had worked on, and when I opened it I saw one large file of mixed Python and HTML. He literally just copied and pasted whole HTML pages into the render statements in the server code.
The same is true for many people submitting PRs to OSS. They don't care about making real contributions, they just want to put something on their resume.
AI is probably making it more common, but it really isn't a new issue, and is not directly related to LLMs.
- I've experienced similar things, but my conclusion has usually been that the model is not receiving enough context in such cases. I don't know your specific example, but in general it may not be incorrect to put an Arc/Lock on many things at once (or use Arc instead of Rc, etc.) if your future plans are to parallelize several parts of your codebase. The model just doesn't know what your future plans are, and it errs on the side of "overengineering" solutions for all kinds of future possibilities. I found that this is a bias these models tend to have: many times their code is overengineered for features I will never need and I have to tell them to simplify - but that's expected. How would the model know what I do and don't need in the future without me giving it all the right context?
The same thing is true for tests. I found their tests to be massively overengineered, but that's easily fixed by telling them to adopt the testing style from the rest of the codebase.
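For anyone not deep in Rust, the Arc-vs-Rc tradeoff the comment refers to is easy to demonstrate: Rc uses cheap non-atomic reference counting but can't cross threads (it isn't Send), while Arc pays for atomic counts precisely so that it can. A minimal sketch (variable names are just illustrative):

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Rc: non-atomic refcount, cheaper, but not Send -- the compiler
    // would reject moving this into another thread.
    let local = Rc::new(vec![1, 2, 3]);
    let also_local = Rc::clone(&local); // just bumps a counter
    assert_eq!(also_local.len(), 3);

    // Arc: atomic refcount, slightly more expensive, but Send + Sync,
    // so it works once parallelism is actually in the plan.
    let shared = Arc::new(vec![1, 2, 3]);
    let handle = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || shared.iter().sum::<i32>())
    };
    assert_eq!(handle.join().unwrap(), 6);
}
```

So a model that anticipates (rightly or wrongly) that the code will later be parallelized will default to Arc, since switching Rc to Arc after the fact touches every type signature involved.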
- Yeah, it has been really good in my experience. I've done some niche WASM stuff with custom memory layouts and parallelism and it did great there too, probably better than I could've done without spending several hours reading up on stuff.
- Rust, Python, and a bit of C++. Around 80% Rust probably
- > Beyond this, if you’re working on novel code, LLMs are absolutely horrible at doing anything. A lot of assumptions are made, non-existent libraries are used, and agents are just great at using tokens to generate no tangible result whatsoever.
Not my experience. I've used LLMs to write highly specific scientific/niche code and they did great, but obviously I had to feed them the right context (compiled from various websites and books converted to markdown in my case) to understand the problem well enough. That adds additional work on my part, but the net productivity is still very much positive because it's a one-time setup cost.
Telling LLMs which files they should look at was indeed necessary 1-2 years ago in early models, but I have not done that for the last half year or so, and I'm working on codebases with millions of lines of code. I've also never had modern LLMs use nonexistent libraries. Sometimes they try to use outdated libraries, but it fails very quickly once they try to compile and they quickly catch the error and follow up with a web search (I use a custom web search provider) to find the most appropriate library.
I'm convinced that anybody who says that LLMs don't work for them just doesn't have a good mental model of HOW LLMs work, and thus can't use them effectively. Or their experience is just outdated.
That being said, the original issue that they don't always follow instructions from CLAUDE/AGENT.md files is quite true and can be somewhat annoying.
- Sad but true.
- I often use that time to spec out a future task. Either by going through GitHub issues, doing some research and adding details, or by spinning up another Codex/Claude session to create a detailed design document for a future task and iterating on that. So one agent is coding while another is helping me spec out future work. When the coding agent is done I can immediately start on the next task with a proper spec, reducing the margin for error.
- Reading HN I seem to be in the minority but AI has made programming a lot more fun for me. I've been an engineer for nearly 25 years and 95% of the work is rather mindless boilerplate. I know exactly what I need to do next, it just takes time and iteration.
The "you think about the problem and draw diagrams" part you describe probably makes up less than 5% of a typical engineering workflow, depending on what you work on. I work in a scientific field where it's probably more than for someone working in web dev, but even here it's very little, and usually only at the beginning of a project. Afterwards it's all about iteration. And using AI doesn't change that part at all: you still need to design the high-level solution for an LLM to produce anything remotely useful.
I never encountered the problem of not understanding details of the AI's implementation that people here seem to describe. I still review all the code and need to ask the LLM to make small adjustments if I'm not happy with it, especially around not-so-elegant abstractions.
Tasks that I previously avoided because they seemed like a hassle, like large refactorings, I no longer avoid because I can ask an AI to do most of the work. I feel so much more productive, and work is more satisfying because I get to knock out all these chores I had resistance to before.
Brainstorming with an AI about potential solutions to a hard problem is also more fun for me, and more productive, than doing research the old ways. So instead of drawing diagrams I now just have conversations.
I can't say for certain whether using LLMs has made me much more productive (overall it likely has but for certain tasks it hasn't), but it definitely has made work more fun for me.
Another side effect has been that I'm learning new things more frequently when using AI. When I brainstorm solutions with an AI or ask for an implementation, it sometimes uses libraries and abstractions I have not seen before, especially around very low level code that I'm not super familiar with. Previously I was much more likely to use or do things the one way I know.
- For the long videos I just relied on ffmpeg to remove silence. It has lots of options for it, but you may need to fiddle with the parameters to make it work. Using the ffmpeg-python bindings, I ended up with something like:

```
# Strip silent stretches; tune the thresholds/durations per recording.
stream = ffmpeg.filter(
    stream,
    'silenceremove',
    detection='rms',
    start_periods=1,
    start_duration=0,
    start_threshold='-40dB',
    stop_periods=-1,
    stop_duration=0.15,
    stop_threshold='-35dB',
    stop_silence=0.15,
)
```
- I've built something similar before for my own use cases, and one thing I'd push back on is official subtitles. Basically no video I care about has ever had "official" subtitles, and the auto-generated subtitles are significantly worse than what you get by piping content through an LLM. I used Gemini because it was the cheapest option and it still did very well.
The biggest challenge with this approach is that you probably need to pass extra context to the LLM depending on the content. If you are researching a niche topic, there will be lots of mistakes if the audio isn't of high quality, because that knowledge isn't in the LLM weights.
Another challenge is that I often wanted to extract content from live streams, but they are very long with lots of pauses, so I needed to do some cutting and processing on the audio clips.
In the app I built, I feed in an RSS feed of YouTube video subscriptions, and out the other end comes a fully built website with summaries, analysis, and transcriptions that is automatically updated as new videos appear in the feed.
- You can also reach research parity by downloading a Github repository. Is that impressive too?
- Claude Code. He always tells me I'm right.
- I hope the EditorBird added subtitles.
- Looks crazy, but this isn't really anything new. Before AI, these people became obsessed with fictional characters from romance novels, movies, or video games, treated them as their partners, and fantasized about them. Now it's more interactive with AI, but it's still the same thing.
- The funny thing is that I find HN especially useful for non-tech news. It's heavily biased toward tech news, but only the most important general news makes it to the front page, so it's a great filter.
- TL;DR: If you give the agent an access token that has permissions to access private repos, it can use it to... access private repos!?