>Frequent LLM usage impairs thinking

Is there hard evidence on this?


bgwalter
If you are the type who prefers studies:

https://time.com/7295195/ai-chatgpt-google-learning-school/

Otherwise, read pro-LLM blogs, which are mostly rambling nonsense that overpromises while almost no actual LLM-written software exists.

You can also see how the few open source developers who have jumped on the LLM bandwagon now have worse blogging and programming output than they had pre-LLM.

Workaccount2
I have 7 different 100% LLM-written programs in use at my company daily, some going back to GPT-4 and some as recent as Gemini 2.5.

Software engineers are so lost in the weeds of sprawling, feature-packed, endlessly flexible programs that they have completely lost sight of simple, narrow-scope programs. I can tell an LLM exactly how we need the program to work (forgoing endless settings and option menus) and exactly what it needs to do (forgoing endless branching possibilities for every conceivable user workflow) and get a lean, lightweight program that takes the user from A to B in 3k LOC.

Is the program something that could be sold? No. Would it work for other companies/users? Probably not. Does it replace a massive 1M+ LOC $20/mo software package for that user in our bespoke use case? Yes.

infecto
Short answer: no.

Longer answer: there was that study posted this week that compared it to using search and then, what was it…raw thinking or something similar. I could totally understand that in certain cases you are not activating parts of your brain as much, but I don't know that any of it proves much in aggregate.
