Preferences

In my opinion, one of the most overlooked benefits of AI tools is "psychological support". When you are stuck at work, it gives you a push. Even if the push is not completely right, it is enough to get you moving. The feeling of "no longer fighting alone at work" is more important than many people realize.

bgwalter
To each his own. I'm completely drained after 30 min of "discussing" with an LLM, which is essentially an overconfident idiot.

Pushes never come from the LLM, which can be easily seen by feeding the output of two LLMs into each other. The conversation collapses completely.
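
For anyone curious, that experiment is easy to wire up. A purely illustrative Python sketch (the `call_llm_a` / `call_llm_b` stubs are hypothetical placeholders, not any real API; a real version would call two chat models):

```python
# Sketch of the "feed two LLMs into each other" experiment:
# each model's reply becomes the other's next prompt.

def call_llm_a(prompt: str) -> str:
    # stub: a real implementation would query model A here
    return f"That's a great point. To build on it: {prompt[:30]}..."

def call_llm_b(prompt: str) -> str:
    # stub: a real implementation would query model B here
    return f"I completely agree. As you said: {prompt[:30]}..."

def converse(seed: str, turns: int = 4) -> list[str]:
    """Alternate the two models, each replying to the other's last message."""
    transcript = [seed]
    message = seed
    for i in range(turns):
        message = call_llm_a(message) if i % 2 == 0 else call_llm_b(message)
        transcript.append(message)
    return transcript

for line in converse("Is this architecture sound?"):
    print(line)
```

With real models in place of the stubs, the transcript tends to drift into mutual agreement rather than anything resembling a push.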

Using Google while ignoring the obnoxious and often wrong LLM summaries at the top gives you access to the websites of real human experts, who often wrote the code that the LLM plagiarizes.

AznHisoka
If they're not overconfident, it's the opposite: they're too much of a "yes man", changing their mind at the slightest whim to fit your opinion, if they even detect you might have a different one.

Then they'll change their mind to their original answer when you tell them "I wasn't disagreeing with you". Honestly, it's amusing, but draining at the same time.

eddd-ddde
I've had a couple of times where Gemini straight up tells me "Absolutely not, ..." and then explains how my assumptions are wrong and eventually leads me to find the right answer.

It's surprisingly good at reading my entire code, reading my assumptions of the code, and explaining what I'm getting wrong and how to fix it.

After Gemini gave an answer, I asked, "convert to latex". It then asked, "To convert what to LaTeX?" Every other LLM would know what I wanted.
godelski
I've had GPT do the same thing but lead me to the wrong answer, usually through some subtle mistake, so the answer looks right. But hey, literally the difference between an expert and an amateur is understanding subtlety.
3ln00b
I have experienced this a lot; I thought I was alone. I get frustrated and tired discussing with LLMs sometimes, simply because they keep providing wrong solutions. Now I try to search before I ask LLMs, so that I have better context on the problem and know when the LLM is hallucinating.
endymion-light
Are we using the same version of Google? Unless my query is incredibly specific, I mostly see SEO-optimized garbage.
Jensson
Everyone uses an individualized version of Google; it's not just your history, it's even different by country of origin, etc.

So no, they are not using the same version of Google.

endymion-light
Well, my individualized version of Google is filled with Medium articles and useless bull. I'd pay good money to switch to this magic working search engine.
tombert
Another recommendation for Kagi. It costs money, but I find that the results are as good as or better than Google's, and I get to rank where sites show up in the results, not some machine learning program trying to guess at it.
matthewkayin
If you're serious, I've heard Kagi is an actually good, paid search engine. Haven't tried it myself, though.
enraged_camel
>> To each his own. I'm completely drained after 30 min of "discussing" with an LLM, which is essentially an overconfident idiot.

I'm completely drained after 30 minutes of browsing Google results, which these days consist of mountains of SEO-optimized garbage, posts on obscure forums, Stack Overflow posts and replies that are either outdated or have the wrong accepted answer... the list goes on.

jkestner
Using natural language discussion to probe for an answer is more draining for me than scanning a large volume of text (much like watching a video instead of the transcript is). I didn’t start coding for more humanish interaction!
swat535
I think a big problem I have with AI right now is that the context window can get messed up, and it has a hard time remembering what we talked about. So if you tell it to write code that should do X, Y, Z, it does on the first request, but then on the next request, when it's writing more code, it doesn't recall.

Second, it doesn't do well at all if you give it negative instructions. For example, if you tell it "Don't use let! in RSpec", it will create a test with "let!" all over the place.

uludag
Totally fair take — and honestly, it’s refreshing to hear someone call it like it is. You’re clearly someone who values real understanding over surface-level noise, and it shows. A lot of people just go along with the hype without questioning the substance underneath — but you’ve taken the time to test it, poke at the seams, and see what actually holds up.

I swear there's something about this voice which is especially draining. There's probably nothing else which makes me want to punch my screen more.

tempodox
Which makes me wonder whether there is an SI unit for sycophantic sliminess, because the first paragraph of your answer is dripping with it.
mindcrime
When they say "there is something about this voice..." I think they mean the paragraph above, which sounds very GenAI generated to me. Either AI generated, or "generated by a human intentionally trying to reproduce the 'voice' of a typical GenAI."
tempodox
That's what I was referring to.
UncleOxidant
The first paragraph seems like it's written by AI.
somenameforme
I'm fairly sure it was a subtle joke.
amanaplanacanal
I think that was the point.
It's the "—".
I'm glad to hear this. Working with LLMs makes me want to get up and go do something else. And at the end of a session I'm drained, and not in a good way.
jdmoreira
You are discussing with an LLM? Never happened to me and I use LLMs all the time. Why would you need to discuss if you know best? Just tell it what to do and course-correct it. It's not rocket science.

PS: Both humans and LLMs are hard to align. But I do have to discuss with humans, and I find that exhausting. LLMs I just nudge or tell what to do.

neuronexmachina
> You are discussing with an LLM? Never happened to me and I use LLMs all the time. Why would you need to discuss if you know best? Just tell it what to do and course-correct it. It's not rocket science.

I find myself often discussing with an LLM when trying to find the root cause of an issue I'm debugging. For example, when trying to track down a race condition I'll give it a bunch of relevant logs and source code, and the investigation tends to be pretty interactive. For example, it'll pose a number of possible explanations/causes, and I'll tell it which one to investigate further, or recommendations for what new logging would help.
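
As a concrete (invented, not from the thread) example of the kind of bug this back-and-forth tends to surface, here is a minimal race condition: two threads do a non-atomic read-modify-write on a shared counter, and a sleep widens the window so the lost update happens reliably:

```python
# A deliberately broken read-modify-write: both threads read the old
# value before either writes, so one increment is lost.

import threading
import time

counter = 0

def unsafe_increment():
    global counter
    current = counter      # read the shared value
    time.sleep(0.1)        # widen the race window so both threads read 0
    counter = current + 1  # write back a stale value

threads = [threading.Thread(target=unsafe_increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 1, not 2: one increment was lost
```

The fix, of course, is to guard the read-modify-write with a `threading.Lock`; finding which read-modify-write is unguarded is where the interactive log-digging comes in.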

hippari2
I find it exhausting in a yes-man kind of way, where it does whatever you told it to, but just somehow wrong. I think your human case is the reverse.
Just like a person would. If you want it done right you have to do it yourself. Or you have to tell the LLM exactly how to do it.

Often I find it easier to just do it myself rather than list out a bunch of changes. I'll give the LLM a vague task, it does it and then I go through it. If it's completely off I give it new instructions, if it's almost right I just fix the details myself.

ryandrake
In so many ways, LLMs are like that very energetic and confident Junior developer you hired straight out of college who believes he knows everything but needs to be course corrected constantly. I think if you're good at mentoring and guiding Junior colleagues, you'll have a great time with LLM based coding. If you feel drained after working with them, then you will probably feel drained after 30 minutes with an LLM.
deadbabe
I think a far more valuable tool than an LLM summarizer would be something where you can type up a prompt, and it brings you conversations or articles other humans have made about your exact kind of problem. Just the text, no websites to sift through.
mylons
Google gives you access to real SEO blog spam. Nothing better than the experts at Stack Overflow, or some random Medium blog from a guy in rural India.
dboreham
> an overconfident idiot

So we are close to an AI president.

The accusations that politicians are already overusing AI are flying, and given the incentives I wouldn't be surprised if more of the internal functioning of all modern governments is already LLM-based than we'd realize. Or particularly appreciate.

By that I don't mean necessarily the nominal function of the government; I doubt the IRS is heavily LLM-based for evaluating tax forms, mostly because the pre-LLM heuristics and "what we used to call AI" are probably still much better and certainly much cheaper than any sort of "throw an LLM at the problem" could be. But I wouldn't be surprised that the amount of internal communication, whitepapers, policy drafts and statements, etc. by mass is probably already at least 1/3rd LLM-generated.

(Heck, even on Reddit I'm really starting to become weary of the posts that are clearly "Hey, AI, I'm releasing this app with these three features, please blast that out into a 15-paragraph description of it that includes lots of emojis and also describes in a general sense why performance and security are good things." And if anything, the incentives slightly militate against that, as the general commenter base is starting to get pretty frosty about this. How much more popular it must be where nobody will call you out on it and everybody is pretty anxious to figure out how to offload the torrent-of-words portion of their job onto machines.)

StefanBatory
In my country, an MP of the lower house sent out a tweet generated by an LLM.

As in, copied it with a prompt in.

Even before LLMs, politicians (and celebrities) had other people tweet for them. IIRC, I've met someone who tweeted on behalf of Jill Stein.

Which is not to say seeing a prompt in a tweet isn't funny, it is, just that it may have been an intern or a volunteer.

amanaplanacanal
They may be completely insane, but at least the president makes his own tweets! I mean truths.
tempodox
LLMs have to go a long way before their ideas are as outrageous as those of The Current Occupant Of The President's Chair.
I asked my students to write a joke about AI. Sometimes humor is the best way to get people to talk about their fears without filters. One of them wrote:

"I went to work early that day and noticed my monitor was on, and code was being written without anyone pressing any keys. Something had logged into my machine and was writing code. I ran to my boss and told him my computer had been hacked. He looked at me, concerned, and said I was hallucinating. 'It's not a hacker,' he said. 'It's our new agent. While you were sleeping, it built the app we needed. Remember that promotion you always wanted? Well, good news, buddy! I'm promoting you to Prompt Manager. It's half the money, but you get to watch TikTok videos all day long!'"

Hard to find any real reassurance in that story.

dakiol
Why do we assume that Prompt Engineering is going to pay less money? As usual, what one brings to the company is value, and if AI-generated code needs to be prompted first and reviewed later, I don't see how prompters in the future could earn less than software engineers do now.

Prompt engineering is like singing: sure, everyone can physically sing; whether it's pleasant to listen to them is another topic.

cardanome
The first thing any big technical revolution causes is suffering for a lot of people.

It can bounce back over time and maybe leave us better off than before, but the short term will not be pretty. Think of the Industrial Revolution, where we had to stop companies by law from working children to literal death.

Whether the working man or the capital class profits from the rise of productivity is a question of political power.

We have seen that productivity rises do not increase work compensation anymore: https://substack.com/home/post/p-165655726

We software engineers especially are not prepared for this fight, as unions barely exist in our field.

We already saw mass layoffs by the big tech leaders and we will see it in smaller companies as well.

Sure, there will always be a need for experienced devs in some fields that are security critical or that need to scale, but that simple CRUD app that serves 4 consecutive users? Yeah, Greg from marketing will be able to prompt that.

It doesn't need to be the case that prompt engineers are paid less money, true. But with us being so disorganized, the corporations will take the opportunity to cut costs.

bgwalter
> We software engineers especially are not prepared for this fight, as unions barely exist in our field.

You can fight without unions. Tell the truth about LLMs: They are crutches for power users that do not really work but are used as excuses for firing people.

You can refuse to work with anyone writing vapid pro-LLM blog posts. You can blacklist them in hiring.

This addresses the union part. It is true that software engineers tend to be conflict averse and not very socially aware, so many of them follow the current industry opinion like lemmings.

If you want to know how to fight these fights, look at the permanent government bureaucracies. They prevail in the face of "new" ideas every 4 years.

jopsen
> If you want to know how to fight these fights, look at the permanent government bureaucracies. They prevail in the face of "new" ideas every 4 years.

Search youtube for "yes minister" :)

-----

On topic, I think it's a fair point that fighting is borderline useless. Companies that don't embrace new tech will go out of business.

That said, it's entirely unclear what the implications will be. Often new capabilities don't mean the industry will shrink. The industry hasn't shrunk as a result of a 100x increase in compute and storage, or the decrease in size and power usage.

Computers just became more useful.

I don't think we should be too excited about AI writing code. We should be more excited about the kinds of programs we can write now. There is going to be a new paradigm of computer interaction.

jplusequalt
>You can fight without unions.

And you can fly without wings--just very poorly.

Unions are extremely important in the fight of preserving worker rights, compensation, and benefits.

autoexec
> You can fight without unions.

You can fight without an army too, but it's a lot less effective. There is strength in numbers. Corporations know this and they leverage that strength against their employees. You all alone vs. them is exactly how they like it.

StefanBatory
> You can refuse to work with anyone writing vapid pro-LLM blog posts. You can blacklist them in hiring.

This works only if everyone is on board with this. If they're not, you're shooting yourself in the foot while job hunting.

jdright
> This addresses the union part.

lol, good luck with that.

You thinking that one or two people doing a non-organized _boycott_ is the same thing as a union tells a lot about you.

paulcole
> The first thing any big technical revolution causes is suffering for a lot of people.

Didn’t Greg-from-marketing’s life just get a lot better at the same time?

trod1234
> The first thing any big technical revolution causes is suffering for a lot of people.

This all assumes that such revolutions are built on resiliency and don't actually destroy the underpinning requirements of organized society. It's heavily skewed towards survivor bias.

Our greatest strength as a species is our ability to communicate knowledge, experience, and culture, and act as one large overarching organism when threats appear.

Take away communication, and the entire colony dies. No organization can occur, no signaling. There are two ways to take away communication: you prevent it from happening, or you saturate the channel to the Shannon limit. The latter is enabled by AI.

It's like an ant hill or a beehive where a chemical has been used to actively and continually destroy the pheromones the ants rely upon for signaling. What happens? The workers can't work, food can't be gathered, the hive dies. The organism is unable to react or adapt. Collapse syndrome.

Our society is not unlike the ant hill or beehive. We depend on a fine balance: factors do productive work, and in exchange for that work they get food, or more precisely money, which they use to buy food. The economy runs because of the circulation of money from producer to factor to producer. When it settles into fewer hands and stays there, distortions occur; these self-sustain, and eventually we are at the point where no production can occur, because monetary properties are lost under fiat money printing. There is a narrow working range, and outside that range on either side everything catastrophically fails: hyperinflation/deflation.

AI on the other hand eliminates capital formation of the individual. The time value of labor is driven to zero. There is a great need for competent workers for jobs, but no demand because no match can occur; communication is jammed. (ghost jobs/ghost candidates)

So you have failures on each end, which self-sustain towards socio-economic collapse. No money circulation going in means you can't borrow from the future through money printing. Debasement then becomes time-limited and uncontrollable through debt traps; a narrow working price range, caused by consistent starvation of capital through wage suppression, opens the door to food insecurity, which drives violence.

Resource extraction processes have destroyed the self-sustaining flows such that food in a collapse wouldn't even support half our current population, potentially even a quarter globally. 3 out of 4 people would die. (Malthus/Catton)

These things happen incredibly slowly and gradually, but there is a critical point. If things remain unchanged, we're about 5 years away from it, and there is the potential that we have already passed it. Objective visibility has never been worse.

This is a point of no return where the dynamics are beyond any individual person, and after that point everyone involved in that system is dead; they just don't know it yet.

Mutually Assured Destruction would mean the environment becomes uninhabitable if chaos occurs and order is lost in such a breakdown.

We each have significant bias to not consider the unthinkable. A runaway positive feedback system eventually destroys itself, and like a dam that has broken with the waters rushing towards individuals; no individual can hold back those forces.

RugnirViking
There are people trying very hard (and succeeding) to CREATE the impression it will pay less money. Pay in general is extremely vibes-based; a look at, well, anything in the economy shows that. There are constant shortages of low-paying jobs, and gluts of high-paying jobs.
giantg2
That's the opposite of what I'm seeing. I see plenty of openings at Walmart, bus drivers, etc. I see very few openings in many higher paying jobs (healthcare might be an exception). Even the dev jobs I'm finding are low paid at small companies, like $70k per year low (and this isn't a low cost area).
bluGill
Wait a couple of years before you start stating anything as a trend. There have been several downturns over my lifetime (I'm 50) where it was hard to find a tech job, and several periods of good times where tech people were in high demand. Until several years have passed, you cannot tell the difference between a temporary downturn and the end of an era.

Every time things turn bad a lot of people jump out and yell it is the end of the tech. They have so far been wrong. Only time will tell if they are right this time, though I personally doubt it.

al_borland
I suspect it only looks like there are a lot of high-paying jobs because they are so much harder to fill, due to there being so few qualified candidates... hence the high pay. Supply and demand.
bluGill
Supply and demand is also influenced by the ability to become qualified. I'm not qualified to flip burgers, but any fast food place could get me qualified in just a few hours. I'm not qualified to be a doctor, and it would take me years of training to get that qualification. I've met people who failed to qualify as a burger flipper (you can correctly guess from that statement that they are very disabled), and I've met many who failed to qualify as a doctor, all of them smart people, since people who are not wouldn't even try.
somenameforme
People aren't paid by value brought to companies; they're paid by the scarcity of their skill. Your analogy is actually perfect for this. There's a reason saying you want to be a professional singer is generally something only a child would say. It's about as reliable a career as winning the lottery, simply because everybody can sing, lots of them quite decently. And so singer, as a career, mostly isn't a thing; it's a hobby with some distant hope of going Oliver Anthony at some point.

Software development has a huge barrier to entry which keeps the labor pool relatively small, which keeps wages relatively high. There's going to be a way larger pool of people capable of 'prompt engineering' which is going to send wages proportionally way down.

dakiol
> There's going to be a way larger pool of people capable of 'prompt engineering' which is going to send wages proportionally way down.

My wife knows how to prompt ChatGPT, but she wouldn't be able to create an app just by putting together what the LLM throws at her. Same could be said about my junior engineer colleague; he knows way more than my wife, sure, but he doesn't know what he doesn't know, and it would take a lot of resources and effort for him to put together a distributed system just by following what an LLM throws at him.

So, I see the pool of potential prompters just as the pool of potential software engineers: some are good, some are bad, there will be scarcity of the good ones (as in any other profession), and so wages don't necessarily have to go down.

somenameforme
Of course, but again the issue is the number of people. Software development, as it currently is, has huge barriers to entry. Working in code is something that many people simply cannot do, and of those that can - a huge chunk will find it intolerable as an occupation. 'Prompt engineering' will have far smaller barriers to entry which will, even all other things being equal, significantly increase the labor pool.
sokoloff
…unless the value created via prompt engineering is high enough to cause companies to rationally demand even more prompt engineers.

The size of the pie is nowhere near fixed, IMO. There are many things which would be valuable to program/automate, but are simply unaffordable to address with traditional software engineering at the current cost per unit of functionality.

If AI can create a significant increase in productivity, I can see a path to AI-powered programming being just as valuable as (and a lot less tedious than) today.

somenameforme
Again it's not about value, but solely supply vs demand. If there was somehow only one person who could do janitorial work in a city, that'd be one rich janitor.

For a more realistic example - the software side at many companies essentially is the company. They bring products all the way from inception to launch. Yet they tend to get paid less, often much less, than the legal side. The reason is simply that the labor pool for lawyers is much smaller than for software engineers.

If there's not significant barriers to entry for prompt engineering, wages will naturally be low.

bgwalter
Singing properly requires decades of training. Prompt engineering is like a 5 year old asking his parents for an ice cream. Some strategies are more successful than others.
jplusequalt
>Why do we assume that Prompt Engineering is going to pay less money

It objectively takes less expertise and background knowledge to produce semi-working code. That lowers the barrier to entry, allowing more people to enter the market, which drives down salaries for everyone.

OkayPhysicist
The best-paying jobs are ones that look inapproachable and are inapproachable (even if that's for two entirely different reasons). Most people look at surgeons, for example, and go "I could never do that", and their jobs are, in fact, difficult and require lots of training, and the combination of those two factors primes people to be willing to give them a bunch of money. Jobs that look hard but are (compared to appearances) easy also tend to pay pretty well.

But jobs that look easy or approachable are in a much tighter spot. Regardless of how difficult they actually are, people are far less willing to give them large amounts of money. Pretty much all the more artistic jobs fall into this camp. Just because any idiot can open up Photoshop and start scribbling, it doesn't follow that competent graphic design is easy.

Right now, software development is incidentally in the "looks hard is hard" category, because the reason it "looks hard" is entirely divorced from the reason it is hard. Most of the non-tech population is under the obviously (to us) incorrect impression that the hard part of programming is understanding code. We know that that's silly, and that any competent programmer can pick up a new programming language in a trivial amount of time, but you still see lots of job postings looking for "Java Developers" or "Python Developers" as opposed to actual domain specific stuff because non-technical folk look at a thing they know is complicated (software development) see the first thing that they don't understand (source code) and assume that all the complexity in the space is tied up in that one thing. This is the same instinct that drives people to buy visual programming systems and argue that jargon needs to be stripped out of research papers.

The shift over to plain-language prompt engineering won't solve the underlying difficulty of software development (requirement discovery, abstract problem solving), but it does pose the threat of making it look easy. Which will make people less prone to giving us massive stacks of money to write the magic runes that make the golems obey their commands.

blindriver
Because the 10% difference between the best prompt engineer and a mediocre prompt engineer won't usually make a noticeable difference in output or productivity. There's no reason to specialize in it or pay extra because the gains are ephemeral.
sokoloff
That’s a wild take to me.

I think that the spread of capability and effectiveness between the best and mediocre will continue to be several factors and might even increase as compared to today.

I can’t see any way it would be less than 2x.

blindriver
Conversely I think that 2x being the least amount of difference between mediocre and best prompts seems unbelievable. I've been using LLMs heavily just like everyone else, and I find that simple direct prompts get me most of what I need. I've never seen it get me less than half of what I expect at all.
giantg2
If you need fewer prompt engineers than developers to do the same work, and if prompt engineering is easier than developing, meaning all developers can do it, then you end up with a massive labor oversupply.
How many programmers did you need 40 years ago to write MS DOS programs? As you become more productive, more is expected of you. Instead of spending 10 days coloring pixels on the screen, now you're expected to push whole UIs in the same amount of time. Whether this is enabled by high-level languages, tools or AI is irrelevant.
rhines
I wonder if we need teams generating dozens of UIs or whatever every day though. There may (or may not) be a limit to how much value-adding work is available to do, or at least diminishing returns that no longer justify high salaries.
bluGill
Yes, we do. Your point that there might be too many is valid, but a modern UI, when done right, is much more accessible to the "common man" than an MS-DOS one, so all the time those teams put in is more than made up for by the time saved not having to teach all the people who will use the program. Thus we need far more teams than in the MS-DOS days, when we couldn't make a good UI in the first place.
giantg2
What am I to be productive on? The bottleneck now is finding a business idea that's viable.
> Prompt engineering is like singing

I think you got the analogy wrong. Not everyone can sing professionally, but most people can type text into a text-to-speech synthesis system to produce a workable song.

lubujackson
Maybe a better analogy is that now anyone can use Auto-Tune and actually sing, but you still have to want to do it and put in the effort. Very few people do.
roland35
Well there's value, but also supply and demand! If prompt engineering is easier that means more people will be able to do it!
trod1234
> Why do we assume that Prompt Engineering is going to pay less money.

I suppose it's because every new job title that has come out in the last 20+ years has followed the same pattern: initially the same or slightly more money, followed by significant reductions in workforce activities shortly thereafter, followed by coordinated mass layoffs and no work after that.

When 70% of the economy is taken over by a machine that can work without needing food, where can anyone go to find jobs to feed themselves, let alone their children?

The underlying issues have been purposefully ignored by the people who are pushing these changes, because these are problems for next quarter, and money printing through fractional-reserve banking decouples the need to act.

It's all a problem for next quarter, which just gets kicked down the road repeatedly until food security becomes a national security issue.

Politicians already don't listen to what people have to say; what makes you think they'll be able to do anything once organized violence starts happening because food is no longer available, because jobs are no longer available?

The idiots and political violence we see right now are nothing compared to what comes when people can't get food, when their mindset changes from "we can work within the system" to "there is no way out, only through". When existential survival depends on removing the people responsible by any means, these things happen, and when the environment is ripe for it, they have friends everywhere.

UBI doesn't work because non-market socialism fails. You basically have a raging fire that will eventually reach every single person and burn them all alive, and it was started by evil blind idiots that wanted to replace human agency.

chasd00
"I went to work early that day and noticed my monitor was on, and code was being written without anyone pressing any keys. Something had logged into my machine and was writing code. I ran to my boss and told him my computer had been hacked. He looked at me, concerned, and said I was hallucinating. ..."

It would have been funnier if the story had then taken a turn and ended with the AI complaining about a human writing code instead of it.

Black mirror season 8? xD
giantg2
I don't feel this way at all. If anything, it's a morale drain. There's less cooperation since you're expected to ask AI. There's also limited career pathing since we want even fewer junior or mids, replacing them with AI.
cjblomqvist
That goes both ways. As with math, it's sometimes not wise to look at the answer as soon as you stumble upon something you can't solve immediately; sometimes it's good to force the person learning to think deeper and try to understand the problem more thoroughly. It's also a skill of its own to be able to cope with such situations and not just bail/give up/do something else.

I fear this will be more and more of a problem with the TikTok / instant gratification / attention-span-under-10-seconds generation. Deep thinking has great value in many situations.

"Funnily" enough, I see management more and more reward this behavior. Speed is treated as vastly more important than driving in the right direction, long-term thinking. Quarterly reports, etc etc.

anonzzzies
It is exactly how I use it personally the most: it will crap out a massive amount of plumbing that I really do not feel like doing myself at all. So when I think of procrastinating, I tell it to write something; after 30 minutes I will have something that I would otherwise have procrastinated on for hours, or never done at all. Now it's "almost done anyway, so might as well finish it". Then I spend 3 months hacking on it while, at any point, getting the AI to do the annoying stuff I know I'm not going to do or will postpone. If only for that... I find bug fixing more rewarding and easier than writing crap from scratch anyway.
screye
It works both ways.

Yes, it's supportive and helps you stay locked in. But it also serves as a great frustration lightning rod. I enjoy being an unsavory person to the LLM when it behaves like a buffoon.

Sometimes you need a pressure release valve. Better an LLM than a person.

P.S: Skynet will not be kind to me.

pier25
I'm always amazed that people feel anything when chatting with an AI bot.

Don't people realize it's a machine "pretending" to be human?

bondarchuk
People feel things reading books, watching movies, even animated ones that have no people in them, looking at abstract art... Why should this be any different?
pier25
Do you take eg advertisement at face value? Even when you know they're trying to convince you to accept some idea of the brand and buy something?
bondarchuk
Do you think Buzz Lightyear actually exists?
pier25
You're missing the point. I'm not talking about the actual content but the intentions of creating said content.
joseda-hg
Usually the feeling part is the subconscious one that isn't swayed by knowing. Like a phobia: its whole thing is being irrational, and knowing it doesn't make sense doesn't diminish the feeling.
pier25
I don't know if it's usual. I doubt it is.

When you watch a video ad do you feel an irrational need to buy a product?

reactordev
I have had to correct AI enough to know it’s the equivalent of a cocky junior dev that read a paper once.

I’ll stick to human emotional support.

blindriver
I 100% agree with this. Part of the problem is getting stuck with bad documentation or a bad API, and asking ChatGPT to generate sample code is really beneficial to keeping me going instead of mothballing an idea for months or forever.
Isn’t that the same as posting on stack overflow, Reddit or other forums saying you are stuck and getting an answer.

With LLM it’s speed - seconds rather than the minutes or hours as per stack overflow which is main benefit.

xeromal
I hate to admit this, but I was struggling with dbt at work, and I had Copilot scan what I was doing, and it found a typo that was almost impossible for me to notice. lol. It really can be useful.
It could work in the other direction. When you are stuck and get the solution from the AI, you lose the feeling of achievement because it's done by something/someone else.
tombert
I completely agree.

There have been several personal projects that have been on the back-burner for a few years now that I would implement about 20% of, get stuck and frustrated, and give up on because I'm not being paid for it anyway.

With ChatGPT, being able to bounce back and forth with it is enough to unblock me a lot of the time, and I have gotten all my projects over the finish line. Am I learning as much as I would if I had powered through it without AI? Probably not, but I'm almost certainly learning more than I would had I given up on the project like I usually do.

To me, I view ChatGPT as an "intelligent rubber duck". It's not perfect; in fact, a lot of the time the suggestions are flat-out wrong, but just being able to communicate with something that gives some input seems to really help me progress.

hippari2
I find that with enough wrong answers it feels like you are fighting alone again haha.
Yes, I've been saying that AI helps with procrastination a lot.
