
When I started coding at the age of 11, in machine code and assembly on the C64, the dream was to create software that creates software. Nowadays it's almost reality; almost, because the devil is always in the details. When you're used to writing code, writing code is relatively fast, and you need that knowledge to debug issues with generated code. Yet now you're telling the AI to fix the bugs in the code it generated. I see it a bit like layering: machine code gets overlaid with assembly, which gets overlaid with C or some other higher-level language, which then adopts methodologies like MVC, and on top of all that there is now the AI prompting and generation layer.

But it's not widely available. Affording more than one computer is a luxury; many households are struggling just to get by. When you see those setups of five or seven Mac Minis, which average Joe can afford that, or even has the knowledge to set up an LLM at home? I don't. This is a toy for rich people, just like the public clouds (AWS, GCP), which I left out because the cost is too high; running my own is also too expensive, and there are cheaper alternatives that not only cost less but also have far less overhead.

What would be interesting to see is what those kids produced with their vibe coding.


kordlessagain
Kids? Think about all the domain experts, entrepreneurs, researchers, designers, and creative people who have incredible ideas but have been locked out of software development because they couldn't invest 5-10 years learning to code.

A 50-year-old doctor who wants to build a specialized medical tool, a teacher who sees exactly what educational software should look like, a small business owner who knows their industry's pain points better than any developer. These people have been sitting on the sidelines because the barrier to entry was so high.

The "vibe coding" revolution isn't really about kids (though that's cute) - it's about unleashing all the pent-up innovation from people who understand problems deeply but couldn't translate that understanding into software.

It's like the web democratized publishing, or smartphones democratized photography. Suddenly expertise in the domain matters more than expertise in the tools.

pton_xd
> Think about all the domain experts, entrepreneurs, researchers, designers, and creative people who have incredible ideas but have been locked out of software development because they couldn't invest 5-10 years learning to code.

> it's about unleashing all the pent-up innovation from people who understand problems deeply but couldn't translate that understanding into software.

This is just a fantasy. People with "incredible ideas" and "pent-up innovation" also need incredible determination and motivation to make something happen. LLMs aren't going to magically help these people gain the energy and focus needed to pursue an idea to fruition. Coding is just a detail; it's not the key ingredient all these "locked out" people were missing.

agentultra
100% this. There have been generations of tools built to help realize this idea and there is... not a lot of demand for it. COBOL, BASIC, Hypercard, the wasteland of no-code and low-code tools. The audience for these is incredibly small.

A doctor has an idea. Great. Takes a lot more than a eureka moment to make it reality. Even if you had a magic machine that could turn it into the application you thought of. All of the iterations, testing with users, refining, telemetry, managing data, policies and compliance... it's a lot of work. Code is such a small part. Most doctors want to do doctor stuff.

We've had mind-blowing music production software available to the masses for decades now... not a significant shift in people lining up to be the musicians they always wanted to be but were held back by limited access to the tools to record their ideas.

pphysch
> These people have been sitting on the sidelines because the barrier to entry was so high.

This comment is wildly out of touch. The SMB owner can now generate some Python code. Great. Where do they deploy it? How do they deploy it? How do they update it? How do they handle disaster recovery? And so on and so forth.

LLMs accelerate only the easiest part of software engineering, writing greenfield code. The remaining 80% is left as an exercise to the reader.

bongodongobob
All the devs I work with would have to go through me to touch the infra anyway, so I'm not sure I see the issue here. No one is saying they need to deploy fully through the stack. It's a great start for them and I can help them along the way just like I would with anyone else deploying anything.
pphysch
In other words, most of the barriers to leveraging custom software are still present.
bongodongobob
Yes, the parts we aren't talking about that have nothing to do with LLMs, ie normal business processes.
nevertoolate
It sounds too good to be true. Why do you think an LLM is better at coding than at knowing how educational software should be designed?
diggan
> those kids produced with their vibe coding

No one, including Karpathy in this video, is advocating for "vibe coding". If nothing else, an LLM paired with configurable tool usage is basically a highly advanced, contextual search engine you can ask questions. Aren't you using a search engine today?

Even without LLMs being able to produce code or act as agents they'd be useful, because of that.
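The "LLM plus tools as a contextual search engine" idea boils down to a dispatch loop: the model emits a tool call, the host runs the tool, and the result flows back. A minimal sketch, with a stubbed model and a toy corpus standing in for a real LLM API and a real search backend (all names here are illustrative assumptions):

```python
# Minimal sketch of "LLM + tool use = contextual search engine".
# stub_model stands in for a real LLM deciding to call a tool.

def search_tool(query: str, corpus: dict) -> str:
    """Toy 'search engine' tool: return entries whose key matches the query."""
    hits = [text for key, text in corpus.items() if query.lower() in key.lower()]
    return "\n".join(hits) or "no results"

def stub_model(question: str) -> dict:
    """Stand-in for an LLM that extracts a keyword and requests a search."""
    return {"tool": "search", "arguments": {"query": question.split()[0]}}

CORPUS = {
    "python gil": "The GIL serializes bytecode execution.",
    "rust borrow": "The borrow checker enforces aliasing rules.",
}

def answer(question: str) -> str:
    call = stub_model(question)        # the model chooses a tool call
    if call["tool"] == "search":       # the host dispatches the requested tool
        return search_tool(call["arguments"]["query"], CORPUS)
    return "unsupported tool"

print(answer("python threading question"))  # -> The GIL serializes bytecode execution.
```

A real setup replaces `stub_model` with an API call and `search_tool` with an actual index, but the loop shape is the same.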

But it sucks that we cannot run competitive models locally; I agree it is somewhat of a "rich people" tool today. Going by the talk and its theme, I'd agree it's a phase, like computing itself had phases. But you're going to have to actually watch and listen to the talk itself; right now you're basically agreeing with the video, yet you wrote your comment as if you disagree.

infecto
This is most definitely not a toy for rich people. Depending on your country it may be considered one, but I would comfortably say that for most of the developed world the costs of these tools are absolutely attainable; there is a reason ChatGPT has such a large subscriber base.

Also, the disconnect for me here is that when I think back on the cost of electronics, prices for a given level of compute have generally gone down significantly over time. The C64 launched at around the $500-600 price level, not adjusted for inflation. You can buy a Mac mini for that price today.

bawana
I suspect that economies of scale are different for software and hardware. With hardware, iteration optimizes the supply chain, volume discounts kick in because the marginal cost is so much lower than the fixed cost, and prices fall over time; the purpose of the device remains fixed. With software, the product grows ever more complex with technical debt: featuritis, patches, bugs, vulnerabilities, and an evolution of purpose that tries to pull more disparate functions under one environment in an attempt to capture and lock in users. Price tends to increase over time. (This trajectory, incidentally, is the opposite of the Unix philosophy of multiple small, fast, independent tools that can be concatenated to achieve a purpose.) At equilibrium, this yields ever-increasing profits for software and decreasing profits for hardware.

In the development of AI we are already seeing this: first we had GPT, then chatbots, then agents, now integration with existing software architectures. Not only is each model ever larger and more complex (RNN -> transformer -> multi-head attention -> fine-tuning/LoRA -> MCP), but the bean counters will find ways to make you pay for each added feature. And bugs will multiply: prompt injection attacks are already a concern, so now another layer is needed to mitigate them.

For the general public, these increasing costs will be subsidized by advertising. I can't wait for ads to start appearing in ChatGPT; it will be very insidious, as the advertising will be commingled with the output, leaving no way to avoid it.

infecto
I’m struggling to follow your argument, it feels more speculative than evidence-based. Runtime costs have consistently fallen.

As for advertising, it’s possible, but with so many competitors and few defensible moats, there’s real pressure to stay ad-free. These tools are also positioned to command pricing power in a way search never was, given search has been free for decades.

The hardware vs. software angle seems like a distraction. My original point was in response to the claim that LLMs are “toys for the rich.” The C64 was a rich kid’s toy too—and far less capable.

kapildev
>What would be interesting to see is what those kids produced with their vibe coding.

I think you are referring to what those kids in the vibe coding event produced. Wasn't their output available in the video itself?

dist-epoch
> This is a toy for rich people

GitHub copilot has a free tier.

Google gives you thousands of free LLM API calls per day.

There are other free providers too.

guappa
1st dose is free
palmfacehn
Agreed. It is worth noting how search has evolved over the years.
infecto
LLM APIs are pretty darn cheap for most of the developed worlds income levels.
guappa
Yeah, because they're bleeding money like crazy now.

You should consider how much it actually costs, not how much they charge.

How do people fail to consider this?

NitpickLawyer
No, there are 3rd party providers that run open-weights models and they are (most likely) not bleeding money. Their prices are kind of similar, and make sense in a napkin-math kind of way (we looked into this when ordering hardware).

You are correct that some providers might reduce prices for market capture, but the alternatives are still cheap, and some are close to being competitive in quality to the API providers.
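The napkin math mentioned above is easy to reproduce. A minimal sketch, where every number is an assumption for illustration (GPU rental price and aggregate batched throughput), not a measurement:

```python
# Back-of-envelope: serving cost of an open-weights model per million tokens.
# All inputs are assumed round numbers, not benchmarks.

gpu_cost_per_hour = 2.00       # assumed rental price for one 80 GB GPU, USD
tokens_per_second = 1000       # assumed aggregate throughput with batching

tokens_per_hour = tokens_per_second * 3600          # 3.6M tokens/hour
cost_per_million_tokens = gpu_cost_per_hour / tokens_per_hour * 1_000_000

print(round(cost_per_million_tokens, 3))  # -> 0.556 (USD per million tokens)
```

Under these assumptions a provider charging on the order of a dollar per million tokens has room for margin, which is why the third-party prices "make sense" without assuming subsidies.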

Eggpants
Just wait for the enshittification of LLM services.

It's going to get wild when the tech-bro investors demand that ads be included in responses.

It would be trivial to build a version of AdWords where someone pays for response words to be replaced: "car" replaced by "Honda", variable names like "index" by "this_index_variable_is_sponsored_by_coinbase", etc.

I'm trying to be funny with that last one, but something like this will be coming sooner rather than later. Remember, Google search used to be good and was ruined by bonus-seeking executives.

bdangubic
how much does it cost?
infecto
>You should consider how much it actually costs, not how much they charge. How do people fail to consider this?

Sure, nobody can predict the long-term economics with certainty but companies like OpenAI already have compelling business fundamentals today. This isn’t some scooter startup praying for margins to appear; it’s a platform with real, scaled revenue and enterprise traction.

But yeah, tell me more about how my $200/mo plan is bankrupting them.

It's cheap now. But if you take all the training costs into account, then at these prices there is no way they can turn a profit. This is called dumping to capture the market.
infecto
No doubt the complete cost of training, and of getting to where we are today, has been significant, and I don't know how the accounting will look years from now. But you are making up the rest based on feelings. We know OpenAI is operationally profitable purely on the runtime side; nobody knows how that will look once R&D is accounted for, but you have no basis to say they cannot make a profit in any way.
diggan
> But if you take into account all the training costs

Not everyone has to pay that cost, as some companies release weights for download and local use (like Llama), and others go further and release open-source models plus weights (like OLMo). If you're a provider hosting those, I don't think it makes sense to factor the training cost into your own infrastructure planning.

Although it doesn't make much sense to me personally, it seemingly makes sense for other companies.

dist-epoch
There is no "capture" here; it's trivial to switch LLMs/providers, since they all expose the OpenAI API. It's literally a URL change.
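The "it's literally a URL change" claim can be sketched concretely. With OpenAI-compatible APIs, a client is parameterized by a base URL, a key, and a model name; everything else in the request stays the same. The provider URLs and model names below are illustrative assumptions, not endorsements:

```python
# Sketch: switching OpenAI-compatible providers is a base_url/model swap.
# URLs and model names are illustrative assumptions.

PROVIDERS = {
    "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o-mini"},
    "local":  {"base_url": "http://localhost:8000/v1",  "model": "llama-3-8b"},
}

def client_kwargs(provider: str, api_key: str) -> dict:
    """Keyword arguments for an OpenAI-compatible client constructor."""
    cfg = PROVIDERS[provider]
    return {"base_url": cfg["base_url"], "api_key": api_key}

# With the official openai SDK, switching provider would look like:
#   client = openai.OpenAI(**client_kwargs("local", "sk-..."))
#   client.chat.completions.create(model=PROVIDERS["local"]["model"], ...)
# Only base_url and model change; the request code is identical.
```

This is the mechanism behind local servers such as vLLM or Ollama advertising OpenAI-compatible endpoints: existing client code works against them unchanged.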
