- Does your significant other know about your car collection? You may have a car hoarding problem.
- You would have to make sure your search footprint supported that, i.e. fully private, non-publicly-visible profiles everywhere.
- Inference is cash positive: it's research that takes up all the money. So, if you can get ahold of enough users, the volume eventually works in your favour.
- > starting to be demonstrably harmful
Starting?
- This isn't that.
This is VCs FOMOing as global-economy-threatening levels of leverage are bet on an AI transformation that, by even the most optimistic estimates, cannot achieve even a tiny portion of the required ROI in the required time.
- What if it's transformational but takes a decade or so, instead of a year or so?
It's not like this isn't following exactly the same hype cycle as every other technological transformation.
- VC isn't "getting back to its roots", though it is certainly displaying one of its fundamental drives: FOMO.
- That, too, is easier than ever.
It's just work; there are no secrets to it.
- IMHO, there's never been a better time to build your own product and learn to sell it. The effort that AI implementation requires clearly grows exponentially with the complexity of the organization.
You can build faster now than you ever have: I am building faster than I have in 25 years of engineering. You also have more capable support than ever for all the unfamiliar processes of building a business.
And almost everyone larger than you is finding it harder to achieve similar productivity gains from implementing AI, if not outright struggling with it. This is a golden moment and won't last long.
- That was my first assumption, quite a while ago now.
- I've claimed neither. I actually prefer restarting or rolling back quickly rather than trying to rework suboptimal outputs - less chance of being rabbit-holed. Just add what I've learned to the original ticket/prompt.
'Git gud' isn't much of a truism.
- And I'm arguing that if the output wasn't sufficient, neither was your input.
You could also be asking for too much in one go, though that's becoming less and less of a problem as LLMs improve.
- I think maybe there's another step too - breaking the design up into small enough pieces that the LLM can follow it, and you can understand the output.
- My point is that, if I can do it right, others can too. If someone's LLM is outputting slop, they are obviously doing something different: I'm using the same LLMs.
All the LLM hate here isn't observation, it's sour grapes. Complaining about slop and poor code quality is confessing that you haven't taken the time to understand what is reasonable to ask for, and aren't educating your junior engineers on how to interact with LLMs.
- 9000-line PRs were never a good idea; they only seemed plausible because we were forced to accept bad PR review practices. Coding was expensive, and management beat us into LGTMing them into the codebase to keep the features churning.
Those days are gone. Coding is cheap. The same LLMs that enable people to submit 9000 line PRs of chaos can be used to quickly turn them into more sensible work. If they genuinely can't do a better job, rejecting the PR is still the right response. Just push back.
- If you are getting garbage out, you are asking it for too much at once. Don't ask for solutions - ask for implementations.
- And if you are doing something fabulously unique, the LLM can still write all the code around it, likely help with many of the components, give you at least a first pass at tests, and enable rapid, meaningful refactors after each feature PR.
- Our intelligence, yes. But that doesn't establish it as essential for thought.
- They do, for many people. Perhaps you need to change your approach.
- It's still a SaaS, with components that couldn't be replicated client-side, such as AI.
- I have pretty much the same amount of confidence when I receive AI-generated or non-AI-generated code to review: my confidence is based on the person guiding the LLM, and their ability to do that.
Much more so than before, I'll comfortably reject a PR that is hard to follow, for any reason, including size. IMHO, the biggest change that LLMs have brought to the table is that clean code and refactoring are no longer expensive, and should no longer be bargained for, neglected or given the lip service that they have received throughout most of my career. Test suites and documentation, too.
(Given the nature of working with LLMs, I also suspect that clean, idiomatic code is more important than ever, since LLMs have presumably been trained on that - but this is just a personal superstition, one that is probably increasingly false and also feels harmless.)
The only time I think it is appropriate to land a large amount of code at once is if it is a single act of entirely brain dead refactoring, doing nothing new, such as renaming a single variable across an entire codebase, or moving/breaking/consolidating a single module or file. And there better be tests. Otherwise, get an LLM to break things up and make things easier for me to understand, for crying out loud: there are precious few reasons left not to make reviewing PRs as easy as possible.
So, I posit that the emotional reaction from certain audiences is still the largest, most exhausting difference.
- I contend that far and away the biggest difference between entirely human-generated slop and AI-assisted stupidity is the irrational reaction that some people have to AI-assisted stuff.
- Apparently, coffee contains compounds other than caffeine that cause gastric acid secretion: tryptamides and catechols.
“Espresso and French press both tend to extract higher concentrations of tryptamides,” Sebastian says. “Meanwhile, tryptamide concentrations in filter coffee are usually quite low because they are absorbed by the paper filter.”
“However, tryptamides are only some of the compounds which contribute to the increased secretion of gastric acid,” he adds. “In our research, we are also analysing the effect of chlorogenic acids [on the stomach], but more evidence needs to be gathered.”
https://perfectdailygrind.com/2023/02/can-too-much-coffee-ca...
- Acidity hitting the stomach?
- More than one way to reliably differentiate CFS patients from controls has been announced this year. However, there's no clinical diagnostic yet.
- Claude from Nantucket
- IMHO, it will be autonomous robotics - one way or another.