- hansmayer: I completely agree with the main sentiment, which is: I want the browser to be a User Agent and nothing else. I don't need a crappy, unreliable intermediary between the already perfectly fine UA and the Internet.
- > I'm not a professional SWE
It was already obvious from your first paragraph - in that context, even the sentence "everything works like I think it should" makes perfect sense, because it fits the limited understanding of a non-engineer. From your POV it does all work perfectly, API secrets in the frontend and five levels of JSON transformation on the backend be damned, right ;) Yay, vibe-coding for everyone - even if it takes longer than programming the conventional way, who cares, right?
- What's "on me", mate? Not being impressed with the 101st ToDo app that vibe-coding hobbyists elatedly put together with the help of the statistical magic box?
- Or... infrastructure, public services and schools go unmaintained? And the magic technology supposedly enabling all of this efficiency, while it still imagines a human has six fingers - who will maintain that?
- Well, duhhh - do you think the rich folks are pushing for mass unemployment so that they could pay more tax and achieve a more just society? Where are we getting these silly, silly ideas from :)
- Well, you don't have to agree with that statement. But I haven't seen a serious rebuttal of my arguments either.
- Please, for once, answer the question being asked without replacing both the question and the stated intention with something else. I was willing to give you the benefit of the doubt, but I am now really wondering where your motivation for these vaguely constructed "analogies" is coming from - is the LLM industry that desperate? We were all "positive" about LLM possibilities once.
I am asking you: when will LLMs be so reliable that they can be used in place of service dogs for blind people? Do you believe this technology will ever be that safe? Have you ever actually seen a service dog? I don't think you can distract a service dog with a steak - did you know they start their training basically from their first year of age, and that it takes up to two years to train them? Do you think they spend those two years learning to fetch properly?
Also, I never said people should not be allowed to "try" a technology. But like drugs, tools for the impaired, the sick etc. also undergo a verification and licensing process - I am surprised you did not know that. So I am asking you again: can you ever imagine an LLM passing those high regulatory hurdles, so that it can safely be used to assist impaired people? Service dogs must be doing something right, if so many of them are safely assisting so many people today, don't they?
- Whether it's blast furnaces or carbon fiber, the wear and tear (macroscopic changes) as well as the material fatigue (molecular changes) will be specified by the manufacturer within some margin of error, and you pretty much know what to expect - unless you are a smartass billionaire building an improvised sub out of carbon fiber whose expiry date was long past. But the carbon fiber or your blast furnace won't break just on their own. So it's a weak analogy, and a stretch at that.
Now for your experiment: it has no value, because a) you and I both know that if you told your LLM its output was shit, it would immediately "agree" with you and go off to produce some other crap, and b) for this to be a scientifically valid experiment at all, I'd expect on the order of 10,000 repetitions, each producing exactly the same output. But you and I both know that already the 2nd iteration will introduce some changes. So stop fighting the obvious and repeat after me: LLMs are shit for any serious work.
- > One of us is misleading people here, and I don't think it's me.
Firstly, I am not the one with an LLM-influencer side gig. Secondly - no, sorry, please don't move the goalposts. You did not answer my main argument, which is: how does a "tool" which constantly changes its behaviour deserve to be called a tool at all? If a tailor had scissors which sometimes cut the fabric just a bit, and sometimes completely differently every time they were used, would you tell the tailor he was not using them right, too? Thirdly, you are now contradicting yourself. First you said we need to live with the fact that they are unpredictable. Now you are sugarcoating that into "a bit unpredictable", or "not nearly as unpredictable". I am not sure whether you are doing this intentionally or you really want to believe in the "magic", but either way you are ignoring the basic tenets of how this technology works. I'd be fine if they used it to generate cheap holiday novels or erotica - but four years of experimenting with the crap machines for writing code have clearly created a huge pushback in the community. We don't need the proverbial scissors which cut our fabric differently each time!
- Ever heard of service dogs? Or police dogs? Now tell me, when will LLMs ever be safe enough to be used as assistance for blind people? Or will big tech at some point release some sloppy LLM-based tool for blind people and unleash LLM-influencers like yourself to start gaslighting the users into thinking they were "not holding it right"? For mission- and life-critical problems, I'll take a dog any day, thank you very much!
- Spot on. Not to mention all the fouls and traveling the demented "all-star" racks up for your team, effectively negating any point gains.
- No, please, stop misleading people, Simon. People use tools to make things easier for themselves, not harder. And a tool which I cannot steer predictably is not a goddamn tool at all! The sheer persistence AI-promoters like you are willing to invest just to gaslight us all into thinking we were dumb and did not know how to use the shit-generators is really baffling. Understand that a lot of us are early adopters, and we see this shit for what it is: the most serious mess-up of "Big Tech" since Zuckerberg burned 77B on his metaverse idiocy. By the way - animals are not tools. People do not use them - they engage with them as helpers, companions and, for some people, even friends of sorts. Drop your LLM and try engaging with someone who has a hunting dog, for example - they'd be quite surprised if you referred to their beloved retriever as a "tool". And you might learn something about real intelligence.
- I have been an early adopter since 2021, buddy. "It works" for trivial use cases; for anything more complex it is utter crap.
- I am sorry, but what do I have to learn? That the tool does not work as advertised? That sometimes it will work as advertised, and sometimes not? That it will sometimes expose critical secrets in plain text, and some other time suggest solving a problem in a function by removing the function's code completely? What are you even talking about, comparing this to shells and text editors? Those are still bloody deterministic tools. You learn how they work, and their usage does not change unpredictably every day! How can you learn something that does not have predictable outputs?
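The determinism contrast above can be sketched in a few lines. This is a toy illustration, not any real LLM API: `sampled_generator` is a hypothetical stand-in for temperature sampling, where a decoder picks among candidate continuations at random, while `deterministic_tool` behaves like a classic text-processing utility.

```python
import random

def deterministic_tool(text):
    # A plain text transformation: same input, same output, every time.
    return text.upper()

def sampled_generator(prompt, seed=None):
    # Toy stand-in for temperature sampling: picks one of several
    # candidate "answers" at random, as a decoder does at temperature > 0.
    rng = random.Random(seed)
    candidates = [
        prompt + " -> patch the bug",
        prompt + " -> rewrite the function",
        prompt + " -> delete the function entirely",
    ]
    return rng.choice(candidates)

# The deterministic tool is repeatable across any number of runs:
distinct = {deterministic_tool("refactor this") for _ in range(1000)}
print(len(distinct))  # 1

# The sampled generator drifts between runs unless a seed is pinned:
distinct = {sampled_generator("refactor this") for _ in range(1000)}
print(len(distinct))
```

Pinning a seed (`sampled_generator("x", seed=7)`) makes the toy version repeatable, which is precisely the knob hosted LLM services typically do not let users fully control end to end.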
- I should not have to fight my tooling, especially the supposedly "intelligent" kind. What's the point of it, if we always have to adapt to the tool instead of the other way around?
- Ah yes, of course. But no one really asked for the code. Just show us the app. Or is it some kind of super-duper secret military stuff you are not even supposed to discuss, let alone show?
- Not borderline - it is just straight snake-oil peddling.
- +1 here. Let's see those productivity gains!
- "basic prompt engineering" - Since when has writing English language sentences become nothing less than "engineering" ?