
agentcoops
It's wild -- I've never seen such a persistent split in the Hacker News audience as this one. The skeptics read one set of AI articles, everyone else the others; a similar comment will be praised in one thread and down-voted to oblivion in another.

throwawayqqq11
IMO the split is between people who understand the heuristic nature of AI and people who don't, and thus think of it as an all-knowing, all-solving oracle. Your elderly parents having nice conversations with ChatGPT is fine as long as it doesn't make big life-changing decisions for them, which already happens today.

You have to know the tool's limits and use cases.

agentcoops OP
I can’t see that proposed division as anything but a straw-man. You would be hard-pressed to find anyone who genuinely thinks of LLMs as an “all-knowing, all-solving oracle,” and yet, even in specialist fields, their utility is certainly more than a mere “heuristic” — which of course isn’t to say they don’t have limits. See, for example, Terence Tao’s reports on his ongoing experiments.

Do you genuinely think it’s worse that someone makes a decision, whether good or bad, after consulting with GPT versus making it in solitude? I spoke with a handyman the other day who, unprompted, told me he was building a side-business and found GPT a great aid — of course they might make some terrible decisions together, but it’s unimaginable to me that increasing agency isn’t a good thing. The interesting question at this stage isn’t just about “elderly parents having nice conversations,” but about computers actually becoming useful for the general population through an intuitive natural-language interface. I think that’s a pretty sober assessment of where we’re at today, not hyperbole. Even as an experienced engineer and researcher myself, LLMs continue to transform how I interact with computers.

johneth
> Do you genuinely think it’s worse that someone makes a decision, whether good or bad, after consulting with GPT versus making it in solitude?

Depending on the decision, yes. An LLM might confidently hallucinate incorrect information and misinform the user, which is worse than simply not knowing.

daveguy
Yup. Exactly this. As soon as enough people get screwed by the ~80% accuracy rate, the whole facade will crumble. Unless AI companies manage to bring the accuracy up 20% in the next year, by either limiting scope or finding new methods, it will crumble. That kind of accuracy gain isn't happening with LLMs alone (i.e. foundation models).

agentcoops OP
Charitably, I don’t understand what people like you mean by the “whole facade,” or why you use old machine-learning metrics like “accuracy rate” to assess what’s going on. “Facade” implies that the unprecedented and still exponential organic uptake of GPT (again, see the actual data I linked earlier from Mary Meeker) is just a hype-generated fad, rather than people finding it actually useful to whatever end. Indeed, the main issue with the “facade” argument is that it’s actually what dominates the media (Marcus et al.) much more than any hyperbolic pro-AI “hype.”

This “80-20” framing, moreover, implies we’re just trying to asymptotically optimize a classification model or some information retrieval system… If you’ve worked with LLMs daily on hard problems (non-trivial programming and scholarly research, for example), the progress over even just the last year is phenomenal — and even with the presently existing models I find most problems arise from failures of context management and the integration of LLMs with IR systems.

daveguy
Time will tell.

12345hn6789
My team has measurably gotten our LLM feature to ~94% accuracy in widespread, reliable tests. Seems fairly confident, speaking as an SWE, not a DS or ML engineer, though.
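
An accuracy number like this typically comes from an eval harness that runs a fixed set of labeled cases through the feature and reports the fraction of matches. A minimal sketch — all names and cases here are hypothetical illustrations, not from the thread:

```python
# Minimal accuracy-eval sketch: score a feature's outputs against
# labeled expected results. `run_feature` is a stand-in for the real
# LLM-backed feature under test (hypothetical, for illustration only).

def run_feature(prompt: str) -> str:
    # Canned lookup standing in for a model call.
    return {"2+2": "4", "capital of France": "Paris"}.get(prompt, "?")

def accuracy(cases: list[tuple[str, str]]) -> float:
    """Fraction of cases where the feature's output matches the label."""
    hits = sum(1 for prompt, expected in cases if run_feature(prompt) == expected)
    return hits / len(cases)

cases = [
    ("2+2", "4"),
    ("capital of France", "Paris"),
    ("capital of Australia", "Canberra"),  # deliberately unhandled -> a miss
]
print(f"accuracy: {accuracy(cases):.0%}")  # -> accuracy: 67%
```

In practice the hard parts are the match criterion (exact match vs. semantic scoring) and making the case set representative, which is presumably what "widespread, reliable tests" refers to.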

sevensor
I think of the two camps like this: one group sees a lot of value in LLMs. They post about how they use them, what their techniques and workflows look like, the vast array of new technologies springing up around them. And then there’s the other camp: reading the article, scratching their heads, and not understanding what this could realistically do to benefit them. It’s unprecedented in intensity, perhaps, but it’s not unlike the Rails and Haskell camps we had here about a dozen years ago.
anal_reactor
I think there are two problems:

1. AI is a genuine threat to lots of white-collar jobs, and people instinctively deny this reality. Note that very few articles here are "I found a nice use case for AI"; most of them are "I found a use case where AI doesn't work (yet)". Does that sound like tech enthusiasts? Or rather like people terrified of tech?

2. Current AI is advanced enough to have us ask deeper questions about consciousness and intelligence. Some answers might be very uncomfortable and threaten the social contract, hence the denial.

agentcoops OP
On the second point, it’s worth noting how many of the most vocal and well-positioned critics of LLMs (Marcus and Pinker in particular) represent the academically dominant but, as we now know, losing side of the debate over connectionism. The 90s anthology Talking Nets is a phenomenal record of how institutionally marginalized figures like Hinton were until very recently.

Off-topic, but I couldn’t find your contact info and just saw your now closed polyglot submission from last year. Look into technical sales/solution architecture roles at high growth US startups expanding into the EU. Often these companies hire one or two non-technical native speakers per EU country/region, but only have a handful of SAs from a hub office so language skills are of much more use. Given your interest in the topic, check out OpenAI and Anthropic in particular.

anal_reactor
Thanks for the advice. Currently I have a €100k job where I sit and do nothing. I'm wondering if I should coast while it lasts, or find something more meaningful.
