- Also imagine that basic interactions were mediated by those monopolies: you had to print your bus ticket personally with software only available on your IBM.
- The fact that the strategic wedge sustaining a successful, relatively socially-positive business isn't universally accessible doesn't negate its value.
The Venn diagram between people who shop at dollar stores and people who shop at Costco isn't empty.
- Absolutely. In areas where there are known quality options, people are clearly willing to pay more. Toyota is a solid example.
Automobiles are large, expensive purchases with a relatively small set of options though... For most purchases, it's impossible to determine quality ahead of time.
- > Is that not also true of human written software that costs more per hour than the monthly cost of a coding agent?
The difference is that a human can learn and grow.
From your examples, it sounds like we're talking about completely different applications of code. I'm a software engineer responding to the original topic of reviewing PRs full of LLM slop. It sounds like you're a hobbyist who uses LLMs to vibe-code personal apps. Your use case is, frankly, exactly what LLMs should be used for. It's analogous to how using a consumer-grade 3D printer to make toys for your kids is fine, but nobody would want to be on the hook for maintaining full-scale structural systems that were printed the same way.
- It does matter, because it's a worthwhile investment of my time to deeply review, understand, and provide feedback for the work of a junior engineer on my team. That human being can learn and grow.
It is not a worthwhile use of my time to similarly "coach" LLM slop.
The classic challenge with junior engineers is that helping them ship something is often more work than just doing it yourself. I'm willing to do that extra work for a human.
- > I’ll take badly made software for free
No, not if I have to maintain it.
Code is a liability. LLM-written PRs often bring net-negative value: they make the whole system larger, more brittle, and less integrated. They come at the cost of end-user quality and maintainer velocity.
- Totally agree. For me, the hard part has been figuring out the distinction with junior engineers... Is this poorly thought-out, inefficient solution that's 3x as long as necessary due to AI, or to inexperience?
- I also use Brave on all my devices, and it works on Amazon Prime too. Prime frequently made me offers to upgrade to an ad-free experience that I didn't understand... surely this is a bug, I already have an ad-free experience. Then I installed the Prime app on my TV and realized the constant barrage of ads that Brave had been protecting me from!
- > You can't just code the website, zip the code and mail it to the client.
The suggestion that the only alternative to paying $96 million AUD ($62 million USD) for a website is getting one that was "coded, zipped, and mailed" is absurd.
> That's why you have thousands of employees in tech companies with seemingly a simple product that you can fully code in a week(at least the user facing part of it).
I've worked at Salesforce, Facebook, and Adobe. I couldn't code even the thinnest sliver of a vertical slice of any of their products in a week.
- This sounds more like your personal journey, and less like some broad trend.
A quick check of just one of your examples shows the term "3d printer" is googled literally twice as frequently today as it was in 2016, for instance.
- E-Bikes are sold & regulated in "classes" (at least where I live in the US).
A class 1 e-bike is pedal-assist and stops assisting beyond 20mph (mine, for instance, tapers off starting probably around 15 mph).
A class 2 is the above, plus a throttle.
A class 3 is anything that assists over 20mph. The "basically motorcycle" set exists here.
- Sure, but "favor x over y" or, put another way, "use y only if x is unsuitable" is compatible with this. Nothing in "prefer composition over inheritance" says that composition is the only correct way.
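To make that concrete, here's a minimal sketch (invented names, Python just for illustration) showing that both forms express the same behavior; the principle only sets the default:

```python
# Invented toy example: the same behavior via inheritance and via
# composition. "Prefer composition" picks a default, not an only option.

class Logger:
    def log(self, msg: str) -> None:
        print(f"[log] {msg}")

# Inheritance: the service *is a* Logger, coupling it to Logger's API.
class InheritingService(Logger):
    def handle(self, request: str) -> None:
        self.log(f"handling {request}")

# Composition: the service *has a* Logger, which can be swapped freely.
class ComposedService:
    def __init__(self, logger: Logger) -> None:
        self.logger = logger

    def handle(self, request: str) -> None:
        self.logger.log(f"handling {request}")

ComposedService(Logger()).handle("ping")  # the favored default
InheritingService().handle("ping")        # still valid where an is-a fit exists
```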
- This gets me too. I generally agree that success is basically luck * effort, so I don't judge people who haven't been able to "make it." Similarly, I don't really admire people for having "made it"... If I don't know them personally, there's no way for me to gauge the ratio of luck and effort.
However, I do judge adults who aren't in good circumstances who also decide to bring children into their hardship. I have two kids, which is the most I felt I could provide for (time, money, attention, energy, etc).
- When I look up the actual release dates on viable head mounted displays, it turns out I'm wrong: not years, more like "year."
You should check out the xreal one!
- I copied the exact same sentence but of course someone already highlighted it!
That's the worst, most tortured string of English I've read all month.
- I don't understand your comment. What you're describing has existed for years.
- Apparently not.
- Your example is not passive voice.
- ...I guess you didn't read the article? Because the entire article is about how the artists intentionally skewed the digital colors so that they'd look as intended on film (and wrong / exaggerated on digital displays).
- This is inherent in the architecture of ChatGPT. It's a unified model: text, images, etc. all become tokenized input. It's similar to re-encoding your image in a lossy format; the format is just the black box of ChatGPT's latent space.
This leads to incredibly efficient, dense semantic consistency because every object in an image is essentially recreated from (intuitively) an entire chapter of a book dedicated to describing that object's features.
However, it loses direct pixel reference. ChatGPT is architecturally unable to reproduce the input pixels exactly; they're always encoded into tokens, then decoded. For many subjects that loss doesn't matter much, but humans are very discerning about faces.
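A rough sketch of the analogy (emphatically not ChatGPT's actual pipeline): round-trip an image through an ordinary lossy codec and measure how far the pixels drift. Requires Pillow and NumPy; "face.png" is a placeholder input.

```python
# Illustrates the lossy round-trip analogy only; ChatGPT's latent-space
# "codec" is a black box, JPEG just stands in for "encode, then decode."
import io

import numpy as np
from PIL import Image

original = Image.open("face.png").convert("RGB")  # placeholder input

buf = io.BytesIO()
original.save(buf, format="JPEG", quality=30)  # lossy "encode"
buf.seek(0)
round_tripped = Image.open(buf).convert("RGB")  # "decode"

drift = np.abs(
    np.asarray(original, dtype=np.int16)
    - np.asarray(round_tripped, dtype=np.int16)
)
print(f"mean per-pixel drift: {drift.mean():.2f} / 255")
# The image still reads as the same scene (semantics survive), but the
# exact input pixels are gone: the same class of loss described above.
```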
- "curl it" has been a common (tech) term for at least 15 years: https://hn.algolia.com/?dateRange=all&page=5&prefix=true&que...
- > No reason to use command line for them, ever.
That hasn't been my experience. I suspect most others who also daily drive Linux would find it remarkable if someone used Linux every day for a year and never needed to open a terminal: to install something, fix something, reset something, update something, or follow instructions from software they found and wanted to use.
- Found the guy who wears a fedora.
- While you don't have to use it much, if you spend a year daily driving Linux, it's a near certainty that you'll have to use the command line.
- Progressive disclosure can be intensely annoying to actual power users.
Definitionally, it means you're hiding (non-disclosing) features behind at least one secondary screen. Usually, it means hiding features behind several layers of disclosures.
Making a very simple product more powerful via progressive disclosure can be a good way to give more power to non-power users.
Making a powerful product "simpler" via progressive disclosure can annoy the hell out of power users who already use the product.
- The examples given are not what I would consider "tinkering." Changing editor configs? Tuning mouse sensitivity? Really?
- I agree, and I'm not sure why it feels off but I have a theory.
AI is good at local coherence, but loses the plot over longer thoughts (paragraphs, pages). I don't think I could identify AI sentences but I'm totally confident I could identify an AI book.
This includes opening a long text with a way of thinking that isn't reflected several paragraphs later, and maintaining a "beat" in the rhythm of the writing that is fine locally but becomes obnoxious and repetitive over longer stretches. Maybe that's just regression to the mean of "voice"?
- This is what I'd like to know as well. $20k ($12k at "dumping stock" prices!) for a digital item in a video game is just incomprehensible to me.
But clearly it's happening, so I'd like to better understand the Venn diagram of people who have $20k of completely disposable income and people who are so highly motivated by their appearance in a video game. My assumptions are obviously wrong.
- ...and now the site is down. Hope the cursors were worth it!
- To me, the most salient point was this:
> Code reviewing coworkers are rapidly losing their minds as they come to the crushing realization that they are now the first layer of quality control instead of one of the last. Asked to review; forced to pick apart. Calling out freshly added functions that are never called, hallucinated library additions, and obvious runtime or compilation errors. All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”
LLMs have made Brandolini's law ("The amount of energy needed to refute bullshit is an order of magnitude larger than to produce it") seem, if anything, understated. When an inexperienced or simply inexpert developer can generate thousands of lines of code in minutes, the responsibility for keeping a system correct and sane gets offloaded to the reviewers who still know how to reason with human intelligence.
As a litmus test, look at a PR's added/removed LoC delta. LLM-written ones are almost entirely additive, whereas good senior engineers often remove as much code as they add.
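A minimal sketch of that litmus test, assuming it runs inside a git checkout; the base branch name is a placeholder, and binary files are skipped:

```python
# Sum added/removed lines for a branch with `git diff --numstat`.
# Assumes a git repo; "main" is a placeholder base branch.
import subprocess

def loc_delta(base: str = "main", head: str = "HEAD") -> tuple[int, int]:
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...{head}"],
        capture_output=True, text=True, check=True,
    ).stdout
    added = removed = 0
    for line in out.splitlines():
        a, r, _path = line.split("\t", 2)
        if a != "-":  # numstat prints "-" for binary files; skip those
            added += int(a)
            removed += int(r)
    return added, removed

a, r = loc_delta()
print(f"+{a} / -{r} lines ({'mostly additive' if a > 2 * r else 'balanced'})")
```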