- "The parents get to decide when sexually explicit material is appropriate for their children."
It's always strange to me when the concern is "sexually explicit material" and not "violently explicit material".
- I don't think you can argue the definition is settled based on popular usage when the replies to your comment show there's still plenty of contention about what it means. It hasn't entered the general lexicon yet because most people have never heard it and could only guess at its meaning. Not even my students agree on what vibe coding means, and they're the ones who do it most. And regardless, terms of art can have technical meanings distinct from the general lexicon (like "theory", or the example you used, "hacker", which means something different around here).
- If you want to do X, "build a programming language first then use it to do X" is a tried and true way to never do X.
- AI is allowing a lot of "non-SWEs" to speedrun the failed project lifecycle.
The exuberance of rapid early-stage development is mirrored by the despair of the late-stage realization that you've painted yourself into a corner, that you don't understand enough about the code or the problem domain to move forward at all, and that your AI coding assistant can't help either, because the program is too large for it to reason about fully.
AI lets you make all the classic engineering project mistakes faster.
- I disagree: "vibe" characterizes how the LLM is being used, not that AI is being used at all. The "vibe" part means it's not rigorous. Using an LLM to autocomplete a line in an otherwise traditionally coded project would not be considered "vibe" coding, despite being AI-assisted, because the programmer can easily read and verify that the line is as correct as if they had typed it character by character.
- It's funny that you listed 1-based index as a strength, and another poster here lists it as a weakness. Goes to show there's really no agreement when it comes to indexing!
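For context, the disagreement comes down to a real trade-off. A minimal Python sketch of both conventions (the 1-based accessor is a hypothetical helper for contrast, since Python itself is 0-based):

    xs = [10, 20, 30, 40]

    # 0-based (C, Python): half-open ranges compose without +/-1 fudging.
    front, back = xs[:2], xs[2:]   # split at k=2, no adjustment needed
    assert front + back == xs

    def at(seq, i):
        """Hypothetical 1-based accessor (Lua/Fortran/Julia style)."""
        return seq[i - 1]

    assert at(xs, 1) == 10   # "element 1" is the first element, as in everyday counting

Each camp is optimizing for a different reading of the same code, which is why the argument never settles.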
- Yes, because LLMs don't change the fact that different programming languages have different expressive capabilities. It's easier to say some things in some languages than in others. That doesn't change when an LLM is writing the code; LLMs have finite context windows and limited attention. If you can express an algorithm in 3000 lines of code in one language but 30 in another, the more expressive language is still preferable, even if the LLM can spit out the 3000 lines in a second, because a codebase that is 10-100x larger than it needs to be carries real costs that LLMs and agents don't mitigate. All things being equal, you'd still prefer the right tool for the job. That doesn't imply we should use Python for everything because it dominates the training set; it means we should make sure LLMs can write other programming languages equally well before we rely on them too much.
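To make the cost concrete, here's a toy contrast within a single language (purely illustrative; the gap between languages is the same phenomenon at larger scale):

    groups = {"a": [1, 2, 3], "b": [4, 5]}

    # Expressive form: the intent fits on one line.
    totals = {k: sum(v) for k, v in groups.items()}

    # Verbose form: same result, but more state to track and more
    # surface area for the LLM (or a reviewer) to get wrong.
    totals2 = {}
    for k in groups:
        acc = 0
        for x in groups[k]:
            acc += x
        totals2[k] = acc

    assert totals == totals2 == {"a": 6, "b": 9}

Multiply that ratio across an entire codebase and the review and maintenance burden dominates whatever time the LLM saved typing.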
- Camera-only is still a bigger mistake, because without LiDAR, the EV delivery trucks with built-in self-driving will not work.
- That apology is for posting it, not for believing it.
- It's the commercials that get me. They depict people playing the games constantly: while cleaning the house, going to the bathroom, hanging out with friends. It shows them with their minds elsewhere, as if they're in a fancy casino surrounded by rich, beautiful people wearing suits and glamorous clothes. Meanwhile they are in their dull, ordinary lives, craving the dopamine hit the app gives them.
The commercials are a celebration of addiction, and it's disgusting to those of us who have struggled with addiction and know, like you say, that the damage is clear. And they tacitly admit it too, at the end of the commercial, where they hurriedly say "Struggling with gambling addiction? Call this number." As if that absolves anything.
And it's not just the gambling either. A typical commercial break these days consists of: gambling ads where they try to get you addicted, crypto ads where they try to bilk you, political ads where they lie to you, and then there's the omnipresent pharmaceutical ads. Now we've got AI ads on top of it all. Every one of those ad categories should be made illegal, like tobacco advertising.
- You already said that. It does not answer the question. Moving to another app doesn't solve anything, because we still haven't answered the question of why they should have had to move in the first place! It's the same situation if they move to a new app; nothing has changed.
At this point we have gone in a circle, so I must assume I won't get a genuine answer to the only thing I have asked, despite trying to engage genuinely in conversation. Have a good day.
- I can't call my new formula translation language FORTRAN because it's been taken, as have many other names. So now to avoid collisions, it's named after my cat.
- I didn't make clear I was responding to your question:
"Where do my 20 years of software dev experience fit into this except beyond imparting my aesthetic preferences?"
Anyway, I think you kind of unintentionally proved my point. These two examples are pretty trivial as far as software goes, and they enabled someone with a little technical experience to implement them where before they couldn't have.
They work well because:
a) the full implementation for these apps doesn't even fill up the AI context window. It's easy to keep the LLM on task.
b) they're tutorial-style apps that people often write as "babby's first UI widget", so there are thousands of examples of exactly this kind of thing online; therefore the LLM has little trouble summoning the correct code in its entirety.
But still, someone with zero technical experience is going to be immediately thwarted by the prompts you provided.
Take the first one: "I want a menubar app that shows me the current weather".
https://chatgpt.com/share/693b20ac-dcec-8001-8ca8-50c612b074...
ChatGPT response: "Nice — here's a ready-to-run macOS menubar app you can drop into Xcode..."
She's already out of her depth by word 11. You expect your mom to use Xcode? Mine certainly can't. Even I have trouble with Xcode, and I use it for work. Almost every single word in that response would need to be explained to her; it might as well be a foreign language.
Now, the LLM could help explain it to her, and that's what's great about them. But by the time she knows enough to actually find the original response actionable, she would have gained... enough knowledge and experience to operate at just the level of writing that particular weather app. And even having done that, it would still be unreasonable to believe she could then use the LLM to write a bytecode compiler just because people with a Ph.D. in CS can. The LLM doesn't level the playing field; it's still lopsided toward the Ph.D.s and senior devs with 20 years of experience.
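To be concrete about the gap: below is roughly the scale of the "simple" app as a Python sketch, sidestepping ChatGPT's Xcode route entirely (this assumes the third-party rumps menubar library and the free wttr.in service; both are my assumptions, not what the LLM produced). Nearly every line presupposes a concept (a terminal, pip, an interpreter, an exception) that she'd need explained first:

    import requests  # assumes: pip install requests rumps
    import rumps     # third-party macOS menubar framework

    class WeatherBar(rumps.App):
        @rumps.timer(600)  # refresh every 10 minutes
        def refresh(self, _timer):
            try:
                # wttr.in's format=3 returns a one-liner like "London: +14C"
                resp = requests.get("https://wttr.in/?format=3", timeout=10)
                self.title = resp.text.strip()  # the text shown in the menubar
            except requests.RequestException:
                self.title = "weather: ?"

    if __name__ == "__main__":
        WeatherBar("weather").run()

A dozen-odd lines, and every one of them is a small lecture waiting to happen, which is exactly the point.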
- I've tried getting this set up at my university; it was hell dealing with them. We ended up going with GitLab.
- Here's how I look at it as a roboticist:
The LLM prompt space is an N-dimensional space where you can start at any point, and then the LLM carves a path through the space for so many tokens using the instructions you provided, until it stops and asks for another direction. This frames LLM prompt coding as a sort of navigation task.
The problem is difficult because at every decision point, there's an infinite number of things you could say that could lead to better or worse results in the future.
Think of a robot going down the sidewalk. It controls itself autonomously, but it stops at every intersection and asks, "Where to next, boss?" You can tell it to cross the street, or to drive directly into traffic, or to do any number of other things that could bring it closer to its destination, take it further away, or even obliterate it.
In the concrete world, it's easy to direct this robot, and to direct it such that it avoids bad outcomes, and to see that it's achieving good outcomes -- it's physically getting closer to the destination.
But when prompting in an abstract sense, it's hard to see where the robot is going unless you're an expert in that abstract field. As an expert, you know the right way to go is across the street. As a novice, you might tell the LLM to just drive into traffic, and it will happily oblige.
The other problem is feedback. When you direct the physical robot to drive into traffic, you witness its demise; its fate is catastrophic, and if you didn't realize the danger before, you'd see it then. The robot is also incapacitated, so it can't report falsely about its continued progress.
But in the abstract case, the LLM isn't obliterated; it continues to report on progress that isn't real, and as a non-expert, you can't tell it's been flattened into a pancake. The whole output chain is now completely and thoroughly off the rails, but you can't see the smoldering ruins of your navigation instructions, because it's told you "Exactly, you're absolutely right!"
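A toy sketch of that feedback asymmetry (every name here is illustrative, nothing real):

    def drive(pos, instruction):
        """Physical robot: a bad instruction fails loudly and terminally."""
        if instruction == "into_traffic":
            raise RuntimeError("robot obliterated")  # feedback is unmissable
        return pos + 1                               # visible progress

    def prompt(state, instruction):
        """Abstract 'robot': a bad instruction just keeps generating tokens."""
        if instruction == "into_traffic":
            state["derailed"] = True                 # silently off the rails
        state["report"] = "Exactly, you're absolutely right!"
        return state                                 # progress is claimed either way

    state = {"derailed": False}
    for step in ["cross_street", "into_traffic", "cross_street"]:
        state = prompt(state, step)
        print(state["report"])  # a non-expert can't tell this from real progress

The physical loop is self-correcting because failure is observable; the abstract loop isn't, which is the whole problem.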
- Intrinsic to learning is teaching. You haven't learned something until you've successfully taught it to someone else.