I'm amazed at this question and the responses you're getting.

These last few years, I've noticed that the tone around AI on HN changes quite a bit by waking time zone.

EU waking hours have comments that seem disconnected from genAI. And, while the US hours show a lot of resistance, it's more fear than a feeling that the tools are worthless.

It's really puzzling to me. This is the first time I've noticed such a disconnect in the community about what the reality of things is.

To answer your question personally: genAI has drastically changed the way I code about every six months over the last two years. The subtle capability differences change what sorts of problems I can offload. The tasks I can trust them with get larger and larger.

It started with better autocomplete, and now, well, agents are writing new features as I write this comment.


Despite the latest and greatest models, I still see glaring logic errors in the code produced for anything beyond basic CRUD apps. They still make up fields that don’t exist and assign nonsensical values to variables. I’ll give you an example: in the code in question, Codex assigned the required field LoanAmount a value from a variable called assessedFeeAmount, apparently because, as far as I can tell, it had no idea how to get the correct value from the current function/class.
That's why I don't get people who claim to be letting an agent run for an hour on some task. LLMs tend to make so many small errors like that, which are so hard to catch if you aren't super careful.

I wouldn't want to have to review the output of an agent going wild for an hour.

The agent reviews the code. The agent has access to tools. It writes the code, runs it through a test, reads the error, fixes the code, keeps going. It passes the code off to another agent with a prompt to review code and give it notes. They pass it back and forth, another agent reads and creates documentation. It keeps going and passes things back.

Now that's the idea, anyway. Of course, they all lie to each other and there are hallucinations every step of the way. If you want to see a great example, look at the documentation for the TEMU marketplace API. The whole API system, docs, examples, etc. appears to be vibe coded: lots of nonsensical formatting, methods that don't work, and parameters in examples that just say "test" or "parameters". They are presented as working examples with actual response examples (like a normal API), but it largely appears to just be made up!

Who says anyone’s reviewing anything? I’m seeing more and more influencers and YouTubers playing engineer or just buying an app from an overseas app farm. Do you think anyone in that chain gives the first shit what the code is like?

It’s the worst kind of disposable software.

If the LLM can test the code, it will fix those issues automatically. That’s how it can keep going for hours and produce something useful. Obviously, you still need to review the code and tests afterwards.
The main line of contention is how much autonomy these agents are capable of handling in a competitive environment. One side generally argues that they should be fully driven by humans (i.e. offloading tedious tasks you know the exact output of but want to save time not doing) while the other side generally argues that AI agents should handle tasks end-to-end with minimal oversight.

Both sides have valid observations in their experiences and circumstances. And perhaps this is simply another engineering "it depends" phenomenon.

The disconnect is quite simple: there are people that are professionals and are willing to put the time in to learn, and then there's the vast majority of others who don't and will bitch and moan about how it is shit, etc. If you can't get these tools to make your job easier and more productive, you ought to be looking for a different career…
You're not doing yourself any favors by labeling people who disagree with you undereducated or uninformed. There are enough over-hyped products/techniques/models/magical-thinking to warrant skepticism. At the root of this thread is an argument (paraphrasing) encouraging people to just wait until someone solves major problems instead of tackling them themselves. This is a broad statement of faith, if I've ever seen one, in a very religious sense: "Worry not, the researchers and foundation models will provide."

My skepticism, and my intuition that AI innovations are not exponential but sigmoid, are not because I don't understand what gradient descent, transformers, RAG, CoT, or multi-head attention are. My statement of faith is: the ROI economics are going to catch up with the exuberance way before AGI/ASI is achieved. Sure, you're getting improving agents for now, but that's not going to justify the 12- or 13-digit USD investments. The music will stop, and improvements will slow to a drip.

Edit: I think at its root, the argument is between folk who think AI will follow the same curve as past technological trends, and those who believe "it's different this time".

> labeling people who disagree with you undereducated or uninformed

I did neither of these two things... :) I personally couldn't care less about:

- (over)hype

- 12/13/14/15 ... digit USD investment

- exponential vs. sigmoid

There are basically two groups of industry folk:

1. those that see technology as absolutely transformational and are already doing amazeballs shit with it

2. those that argue how it is bad/not-exponential/ROI/...

If I were a professional (I am), I would do everything in my power to learn everything there is to learn (and then some) and join Group #1. But it is easier to be in Group #2, as being in Group #1 requires time and effort and frustration and throwing your laptop out the window and ... :)

> I did neither of those 2 things

>> ...there are people that are professionals and are willing to put the time in to learn and then there’s vast majority of others who don’t...

Being lazy and unprofessional is just a tad different from being uneducated :)
Mutually exclusive Groups 1 and 2 are a false dichotomy. One can have a grasp of the field, keep up to date with recent papers, have an active Claude subscription, use agents, and still have a net-negative view of "AI" as a whole, considering the false promises, hucksters, charlatans, and an impending economic reckoning.

tl;dr version: having a negative view of the industry is decoupled from one's familiarity with and usage of the tools, or the willingness to learn.

> considering the false promises, hucksters, charlatans and an impending economic reckoning.

I hack for a living. I could hardly give two hoots about “false promises” or “hucksters” or some “impending economic reckoning…” I made a general comment that a whole lot of people simply discount technology on technical grounds (a favorite here on HN)…

I see the first half of group 1, but where's the second half? Don't get me wrong, there's some cool and interesting stuff in this space, but nothing I'd describe as close to "amazeballs shit."
You should see what I’ve seen (as have many other people). After 30 years of watching humans do it (fairly poorly, as there is an extremely small percentage of truly great SWEs), the stuff I am seeing is ridiculously amazing.
Can you describe some of it? On one hand, it is amazing that a computer can go from prose to code at all. On the other hand, it’s what I like to describe as a dancing bear. The bear is not a very good dancer, but it’s amazing that it can dance at all.

I’d make the distinction between these systems and what they’re used for. The systems themselves are amazing. What people do with them is pretty mundane so far. Doing the same work somewhat faster is nice, and it’s amazing that computers can do it, but the result is just a little more of the same output.

If there is really amazing stuff happening with this technology, how did we have two recent major outages that were caused by embarrassing problems? I would guess that, at least in the Cloudflare instance, some of the responsible code was AI generated.
> I would guess that at least in the cloud flare instance some of the responsible code was ai generated

Your whole point isn't supported by anything but ... a guess?

If given the chance to work with an AI who hallucinates sometimes or a human who makes logical leaps like this

I think I know what I'd pick.

Seriously, just what even? "I can imagine a scenario where AI was involved, therefore I will treat my imagination as evidence."

Good thing that before “AI”, when humans coded, we had many decades of no outages… phew.
Can you show me some of the “amazeballs shit” people are doing with it?
Amazeballs shit yet precious little actual products.
This is a tool; you use it to create, just as you use brush and oils to paint a masterpiece. It is not a product in and of itself…
They were obviously asking where are the masterpieces.
They're not logistic; this is a species of nonsense claim that irks me even more than claiming "capabilities gains are exponential, singularity 2026!" It actually includes the exponential-gains claim and then tries to tack on epicycles to preempt the lack of singularities.

Remember, a logistic curve is an exponential (so, roughly, a process whose outputs feed its growth, the classic example being population growth, where more population makes more population) with a carrying capacity (the classic example is again population, where you need to eat to be able to reproduce).
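Concretely, the standard logistic model is just exponential growth with a carrying-capacity term bolted on (a textbook equation, not specific to AI capabilities):

```latex
\frac{dP}{dt} = rP\left(1 - \frac{P}{K}\right)
```

When $P \ll K$ the bracket is close to 1 and growth looks exponential; as $P$ approaches the carrying capacity $K$, growth stalls out.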

Singularity 2026 is open and honest, wearing its heart on its sleeve. It's a much more respectable wrong position.

It's disheartening. I have a colleague, very senior, who dislikes AI for a myriad of reasons and doesn't want to adapt unless forced by management. I feel that from 2022-2024 the majority of my colleagues were in this camp, either afraid of AI or because they looked at it as not something a "real" developer would ever use. In 2025 that seemed to change a bit. American HN seemed to adapt more quickly, while EU companies are still lacking the foresight to see what is happening on the grand scale.
I'm pretty senior and I just don't find it very useful. It is useful for certain things (deep code search, writing non-production helper scripts, etc.) and I'm happy to use it for those things, but it still seems like a long way off for it to be able to really change things. I don't foresee any of my coworkers being left behind if they don't adopt it.
AI gives you either free expertise or free time. If you can make software above the level of Gemini or Claude output, then have it write your local tools, or have it write synthetic data for tests, or have it optimize your zshrc or bash profile. Maybe have it implement changes your skip level wants to see made, which you know are amateurish, unsound garbage with a revolting UI. Rather than waste your day writing ill-advised but high-quality code just to show them it's a bad idea, you can have AI write the code for you, to illustrate your point without spending any real work hours on it.

Just in my office, I have seen “small tools” like Charles Proxy almost entirely disappear. Everyone writes/shares their AI-generated solutions now rather than asking cyber to approve a 3rd party envfile values autoloader to be whitelisted across the entire organization.

Senior as well, a few years from finishing up my career. I run 8 to 12 terminals the entire day. They're changing existing code and writing new stuff all day, every day: hundreds of thousands of lines of changed/added/removed code in production… and a lot fewer issues than when every line was typed in by me (or another human).
What sort of work do you do? I suspect a lot of the differences of opinion here are caused by these systems being a lot better at some kinds of programming than others.

I do lower level operating systems work. My bread and butter is bit-packing shenanigans, atomics, large-scale system performance, occasionally assembly language. It’s pretty bad at those things. It comes up with code that looks like what you’d expect, but doesn’t actually work.

It’s good for searching big codebases. “I’m crashing over here because this pointer has the low bit set; what would do that?” It’s not consistent, but it’s easy to check what it finds, and it saves time overall. It can be good for making tests, especially when given an example to work from. And it’s really good for helper scripts. But so far, production code is a no-go for me.
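For readers unfamiliar with the "low bit set" symptom: aligned allocations have zero low address bits, so runtimes sometimes stash flags there (pointer tagging), and a tag bit that leaks into a dereference crashes. A toy illustration on plain integers; the mask and helper names are made up for this example:

```python
ALIGNMENT = 8            # typical allocation alignment leaves the low 3 bits free
TAG_MASK = ALIGNMENT - 1

def is_tagged(addr: int) -> bool:
    # An address with any low bit set cannot be a valid aligned pointer.
    return (addr & TAG_MASK) != 0

def untag(addr: int) -> int:
    # Clear the tag bits to recover the underlying aligned address.
    return addr & ~TAG_MASK
```

A crash like the one quoted usually means some code path dereferenced a tagged value without calling the equivalent of `untag` first.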

Can you show us the features AI wrote while you wrote this comment?
AI is useless. Anyone claiming otherwise is dishonest.
I use GenAI for text translation, text-to-speech, and speech-to-text, where it is extremely useful. For coding, I often have the feeling it is useless, but sometimes it is useful too, like most tools...
Exactly. It's really weird to see all these people claiming these wonderful things about LLMs. Maybe it's really just different levels of amazement, but I understand how LLMs work, and I actually use ChatGPT quite a bit for certain things (searching, asking about stuff I know it can find online, discussing ideas or questions I have, etc.).

But in all the times I've tried using LLMs to help me code, it performs best when I give it some sample code (more or less isolated) and ask for a specific modification I want.

More often than not, it makes seemingly random mistakes, and I have to be looking at the details to see if there's something I didn't catch, so the smaller the scope, the better.

If I ask for something more complex or more broad, it’s almost certain it will make many things completely wrong.

At some point, it's such hard work to detail exactly what you want, with all the context, that it's better to just do it yourself, because you're writing a wall of text for a one-time thing.

But anyway, I guess I remain waiting. Waiting until FreeBSD catches up with Linux, because it should be easy, right? The code is there in the Linux kernel, just tell an agent to port it to FreeBSD.

I’m waiting for the explosion of open-source software that isn't bloated and that runs optimized, because I guess agents should be able to optimize code? I’m waiting for my operating system to get better over time instead of worse.

Instead, I noticed the last move from WhatsApp was to kill the desktop app in favor of a single web wrapper. I guess maintaining different codebases didn't get cheaper with the rise of LLMs? Who knows. Now Windows releases updates that break localhost. Ever since the rise of LLMs, I haven't seen software release features any faster, nor any Cambrian explosion of open-source software copying old commercial leaders.

What are you doing at your job that AI can't help with at all, to consider it completely useless?
Useless for what?
That could even be argued (with an honest interlocutor, which you clearly are not)

The usefulness of your comment, on the other hand, is beyond any discussion.

"Anyone who disagrees with me is dishonest" is some kindergarten level logic.

[Deleted as Hackernews is not for discussion of divergent opinions]
> It's not useless but it's not good for humanity as a whole.

Ridiculous statement. Is Google also not good for humanity as a whole? Is the Internet not good for humanity as a whole? Wikipedia?

I think it is an interesting thought experiment to try to visualize 2025 without the internet ever existing because we take it completely for granted that the internet has made life better.

It seems pretty clear to me that culture, politics and relationships are all objectively worse.

Even with remote work, I am not completely sure I am happier than when I used to go to the office. I know I am certainly not walking as much as I did when I would go to the office.

Amazon is vastly more efficient than any kind of shopping in the pre-internet days but I can remember shopping being far more fun. Going to a store and finding an item I didn't know I wanted because I didn't know it existed. That experience doesn't exist for me any longer.

Information retrieval has been made vastly more efficient, so instead of spending huge amounts of time at the library, I get all that back as free time. What I would have spent my free time doing before the internet, though, has largely disappeared.

I think we want to take the internet for granted because the idea that the internet is a long term, giant mistake is unthinkable to the point of almost having a blasphemous quality.

Childhood? Wealth inequality?

It is hard to see how AI as an extension of the internet makes any of this better.

Chlorofluorocarbons, microplastics, UX dark patterns, mass surveillance, planned obsolescence, fossil fuels, TikTok, ultra-processed food, antibiotic overuse in livestock, nuclear weapons.

It's a defensible claim I think. Things that people want are not always good for humanity as a whole, therefore things can be useful and also not good for humanity as a whole.

You’re delusional if you think LLMs/AI are in the ballpark of these. I’ve listed things in my comment for a reason.
> EU waking hours have comments that seem disconnected from genAI. And, while the US hours show a lot of resistance, it's more fear than a feeling that the tools are worthless.

I don't think it's because the audience is different but because the moderators are asleep when Europeans are up. There are certain topics which don't really survive on the frontpage when moderators are active.

I'm unsure how you're using "moderators." We, the audience, are all 'moderators' if we have the karma. The operators of the site are pretty hands-off as far as content in general.

This would mean it is because the audience is different.

The people who "operate" the website are different from the people who "moderate" the website but both are paid positions.

This frou-frou about how "we all play a part" only serves to obscure the reality.

I'm sure this site works quite differently from what you say. There's no paid team of moderators flicking stories and comments off the site because management doesn't like them.

There's dang, who I've seen edit headlines to match the site rules. Then there's the army of users upvoting and flagging stories, voting (up and down) and flagging comments. If you have some data to back up your sentiments, please do share it - we'd certainly like to evaluate it.

HN brought on a second mod (Tim, this year, iirc)

My email exchanges with Dang, as part of the moderation that happens around here, have all been positive

1. I've been moderated, got a slowdown timeout for a while

2. I've emailed about specific accounts, (some egregious stuff you've probably never seen)

3. Dang once emailed me to ask why I flagged a story that was near the top, but getting heavily flagged by many users. He sought understanding before making moderation choices

I will defend HN moderation people & policies 'til the cows come home. There is nothing close to what we have here on HN, which is largely about us being involved in the process and HN having a unique UX and size

dang announced they were moved from a volunteer to a paid position a few years ago. There have been more rumblings about additional mods brought on since then. What makes you say you're "so sure"?
> There's no paid team of moderators flicking stories and comments off the site because management doesn't like them.

Emphasis mine. The question is does the paid moderation team disappear unfavorable posts and comments, or are they merely downranked and marked dead (which can still be seen by turning on showdead in your profile).

As an anonymous coward on HN for at least a decade, I'd say that's not really true.

When paul graham was more active and respected here, I spoke negatively about how revered he was. I was upvoted.

I also think VC-backed companies are not good for society. And have expressed as much. To positive response here.

We shouldn't shit on one of the few bastions of the internet we have left.

I regret my negativity around pg - he was right about a lot and seems to be a good guy.

Yeah, I avoid telling people about HN. It’s too rare and pleasant of an anomaly to risk getting morphed into X/Bluesky/Reddit.
I’m referring to the actual moderators of this website removing posts from the front page.
that's a conspiracy theory

The far more common action is for the mods to restore a story which has been flagged to oblivion by a subset of the HN community, where it then lands on the front page because it already has sufficient pointage.

It's not controversial to say that submissions are being moderated, that's how this (and many other) sites work. I haven't made any claims about how often it happens, or how it relates to second-chance moderation.

What I'm pointing out is just that moderation isn't the same at different times of the day and that this sometimes can explain what content you see during EU and US waking hours. If you're active during EU daytime hours and US morning hours, you can see the pattern yourself. Tools like hnrankings [1] make it easy to watch how many top-10 stories fall off the front page at different times of day over a few days.

[1]: https://hnrankings.info/

> I’m referring to the actual moderators of this website removing posts from the front page.

This is what you said. There has only been one until this year, so now we have two.

The moderation patterns you see are the community and certainly have significant time factors that play into that. The idea that someone is going into the system and making manual changes to remove content is the conspiracy theory

Anything about sovereign AI or whatever is gone immediately when the mods wake up. Got an EU cloud article? Publish it at 11am CET, and it disappears around 12:30.
