- I don't think anyone disagrees with that. But it's a good time to learn now, to jump on the train and follow the progress.
It will give the developer a leg up in the future when the mature tools are ready. Just like the people who surfed the 90s internet seem to do better with advanced technology than the youngsters who've only seen the latest sleek modern GUI tools and apps of today.
- >> It's too bad people spend energy for generating them now.
How do you mean?
Some quick back of the napkin math.
Creating a 'throwaway' banner image by hand, maybe 15 minutes on a 100W CPU in Photoshop: 15 minutes human work time + 0.025 kWh (100W*0.25h)
Creating a 'throwaway' banner image by stable diffusion on a 600W GPU. In reality it's probably less than 20 seconds to generate, but let's round it up to one full minute of compute time: 5 minutes human work time + 0.01 kWh (600W*(1/60)h)
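That arithmetic as a tiny Python sketch, in case anyone wants to plug in their own numbers (the wattages and durations are the rough guesses above, not measurements):

    # Back-of-the-napkin energy comparison using the guesses above.
    def kwh(watts, hours):
        return watts * hours / 1000.0

    photoshop = kwh(100, 15 / 60)  # 15 min on a ~100W machine -> 0.025 kWh
    diffusion = kwh(600, 1 / 60)   # 1 min on a ~600W GPU      -> 0.01 kWh
    print(f"Photoshop: {photoshop:.3f} kWh, diffusion: {diffusion:.3f} kWh")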
The way I see it, it seems to spend less energy, regardless of whether you're talking about human energy or electrical energy. What's the issue here exactly?
- Sounds like a solution looking for a problem.
- Perhaps. But I can't see a reason why they couldn't still write endless—and theoretically valuable—poems, dissertations, or blog posts, about all things red and the nature of redness itself. I imagine it would certainly take some studying for them, likely interviewing red-seers, or reading books about all things red. But I'm sure they could contribute to the larger red discourse eventually, their unique perspective might even help them draw conclusions the rest of us are blind to.
So perhaps the fact that they "cannot know red" is ultimately irrelevant for an LLM too?
- Human entitlement really is the bane of game theory.
- There are use cases where even low accuracy could be useful. I can't predict future products, but here are two that are already in place today:
- On the iPhone keyboard, some sort of tiny language model suggests what it thinks are the most likely follow-up words as you write. You only have to pick a suggested next word if it matches what you were planning on typing.
- Speculative decoding is a technique which uses smaller models to speed up inference for bigger models (a rough sketch of the idea follows after this list).
I'm sure smart people will invent other future use cases too.
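For the curious, here is a rough greedy sketch of how speculative decoding works. The functions target_next and draft_next are hypothetical stand-ins for "ask the model for the next token given a context"; real implementations verify the whole draft in one batched forward pass of the big model and use a probabilistic accept/reject rule, but the shape of the idea is the same:

    # Toy speculative decoding: a cheap draft model guesses a few tokens ahead,
    # the expensive target model verifies them and keeps the agreeing prefix.
    def speculative_step(target_next, draft_next, prefix, k=4):
        # 1. Draft model proposes k tokens (cheap, sequential).
        ctx = list(prefix)
        draft = []
        for _ in range(k):
            token = draft_next(ctx)
            draft.append(token)
            ctx.append(token)

        # 2. Target model checks the proposals; keep the longest matching prefix.
        #    In a real system this verification is a single batched forward pass,
        #    which is where the speed-up over token-by-token decoding comes from.
        ctx = list(prefix)
        accepted = []
        for token in draft:
            if target_next(ctx) != token:
                break
            accepted.append(token)
            ctx.append(token)

        # 3. Always emit one token from the target so every step makes progress.
        accepted.append(target_next(ctx))
        return accepted

With greedy decoding on both sides the output is exactly what the big model would have produced on its own; the draft model only changes how much of it can be verified per pass.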
- That's a clickbait title.
What they are actually saying: Given one correct quoted sentence, the model has a 42% chance of predicting the next sentence correctly.
So, assuming you start with the first sentence and tell it to keep going, it has a 0.42^n probability of staying on track, where n is the n-th sentence.
It seems to me that if they didn't keep correcting it over and over again with real quotes, it wouldn't even get to the end of the first page without descending into wild fanfiction territory, with errors accumulating and growing as the text progressed.
EDIT: As the article states, for an entire 50-token excerpt to be correct, the probability of each output has to be fairly high. So perhaps it would be more accurate to view it as 0.985^n, where n is the n-th token. Still the same result in the long run: unless every token is correct, it will stray further and further from the correct source.
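To put numbers on how quickly that compounds, here's a tiny sketch using the two rates above (42% per sentence, 98.5% per token); the passage lengths are just my own round figures:

    # Chance of reproducing a passage verbatim when every step must be right.
    p_sentence = 0.42   # per-sentence rate quoted in the article
    p_token = 0.985     # rough per-token rate implied by the 50-token excerpts

    for n in (1, 5, 10, 20):
        print(f"{n:>3} sentences: {p_sentence ** n:.3g}")
    for n in (50, 250, 500):   # ~500 tokens is very roughly a page
        print(f"{n:>3} tokens:    {p_token ** n:.3g}")

Even at 98.5% per token, the odds of a verbatim page come out well under a tenth of a percent.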
- What about the free open weights models then? And the open source tooling to go with them?
Sure, they are perhaps 6 months behind the closed-source models, and the hardware to run the biggest and best models isn't really consumer-grade yet (how many years could it be before regular people have GPUs with 200+ gigabytes of VRAM? That's merely one order of magnitude away).
But they're already out there. They will only ever get better. And they will never disappear due to the company going out of business or investors raising prices.
I personally only care about the closed-source proprietary models insofar as they let me get a glimpse of what I'll soon have access to freely and privately on my own machine. Even if all of them went out of business today, LLMs would still have a permanent effect on our future and on how I'd be working.
- I think one issue is that you won't always be able to invoice those extra 999 hours to your customer. Sometimes you'll still only be able to get paid for 1 hour, depending on the task and contract.
But the LLM provider will always bill you for all of the saved work regardless.
- The way I see it:
* The world is increasingly run on computers.
* Software/Computer Engineers are the only people who actually truly know how computers work.
Thus it seems to me highly unlikely that we won't have a job.
What that job entails I do not know. Programming like we do today might not be something that we spend a considerable amount of time doing in the future. Just like most people today don't spend much time handling punched cards or replacing vacuum tubes. But there will still be other work to do, I don't doubt that.
- I won't deny that in a context with perfect information, a future LLM will most likely produce flawless code. I too believe that is inevitable.
However, in real-life work situations, that 'perfect information' prerequisite will be a big hurdle I think. Design can depend on any number of vague agreements and lots of domain-specific knowledge, things a senior software architect has only learnt because they've been at the company for a long time. It will be very hard for an LLM to take all the correct decisions without that knowledge.
Sure, if you feed a summary of each and every meeting you've attended for the past 12 months, along with your entire company Confluence, into the prompt, perhaps then the LLM can design the right architecture. But is that realistic?
More likely I think the human will do the initial design and specification documents, with the aforementioned things in mind, and then the LLM can do the rest of the coding.
Not because it would have been technically impossible for the LLM to do the code design, but because it would have been practically impossible to craft the correct prompt that would have given the desired result from a blank sheet.
- Depends on what front end you use. But for text-generation-webui for example, Prompt Caching is simply a checkbox under the Model tab you can select before you click "load model".
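If your front end is a script built on the llama-cpp-python bindings instead of a GUI, you can turn it on in code too. A minimal sketch from memory; treat the exact names as assumptions and check them against your installed version:

    # Prompt caching with llama-cpp-python (names from memory, please verify).
    # The cache keeps already-evaluated prompt prefixes in RAM so a shared
    # prefix isn't re-processed on every request.
    from llama_cpp import Llama, LlamaCache

    llm = Llama(model_path="./model.gguf")  # placeholder path
    llm.set_cache(LlamaCache())

    shared = "Long system prompt that stays the same between requests. "
    print(llm(shared + "First question?", max_tokens=64)["choices"][0]["text"])
    print(llm(shared + "Second question?", max_tokens=64)["choices"][0]["text"])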
- Here you go, enjoy!
- Great take. A mixtape can be greater than the sum of its parts!
I believe that a lot of the judgement is also connected to the quality of the works. "Slop", while doubtlessly accurate for today, may be a rather weird description in a couple of years if the rate of progress continues to accelerate like it has.
Although I've already heard people starting to refer to DeviantArt and the like as full of "human slop" so perhaps this is just modern language that's evolving and completely unrelated to AI.
- I agree, the story behind the work is part of how we humans view a creation and cannot be dismissed.
I think we're a long way away from 100% algorithmically created content. So far all I've seen is content that is created based on human inputs and ideas. I'm not aware of the Instagram incident you mentioned, but it too seems like the brainchild of a human if I'm not mistaken.
There have been trending AI-generated videos floating around lately, for example, which I found surprising at first. But they still had a human script writer (prompt writer?), a director, and a human editor. Someone who had a vision of what they wanted to create and share. My prediction is that this human-directed, tool-like usage will be the standard for a long time, so I'm not particularly worried about humans getting removed from the process.
- While I have no doubt that individual companies, such as OpenAI for example, will eventually introduce enshittification features, I doubt the industry as a whole can be summarized that easily.
I believe, overall, development will go forward and things will get better. A rising tide lifts all ships, even if some of them decide to be shitty leaking vessels. If nothing else we always have open source software to fall back on when the enshittification of the proprietary models starts.
For a practical example: The cars we drive today are a lot better than 100 years ago. A bad future isn't always inevitable.
- My theory is that as the quality of these generative tools increases, we'll see the public opinion of them slowly shift. Regardless of philosophy (although discussing it is always fun), it just seems inevitable since there are so many more consumers than producers. And as you say, consumers are the ones that will primarily benefit from this new technology. As consumers we care primarily (some could argue solely) about our own emotional reaction to the music, or more generally put, the art piece.
In practical terms I also believe that this will give rise to a lot of new consumer behavior and, as you so aptly put it, "creative consumers" will become normal.
The ability to create more content on demand to fill out some very narrow niche is a great example ("Today I want 24 hours of non-stop Mongolian throat singing neo-industrial Christmas music"). Or maybe to create covers of songs in the voice of your favorite long-dead artist. Anything from minor tweaks of existing works ("I wish this love song was dedicated specifically to ME") to completely new works (just look at how much the parody-music genre has grown since Suno and the like first appeared). The possibilities are near endless.
- Personally I don't like to gatekeep art.
For example: If someone walks out into the wilderness and encounters a particularly fascinating rock formation or plant, something that was created completely by accident and without an artist or designer, but they find that the sight instills in them strong emotions or deeper thought, I believe they should be allowed to call that art.
Maybe this is just petty linguistics and semantics though, in which case we're drifting away from the topic at hand, and I'm sorry.
- Trying to look at the bigger picture for a moment. A lot of the philosophical debate about art I see here, and elsewhere on social media, is often very shallow and can be reduced to:
Does one believe that the value of the art piece (be it music, paintings, film, or whatever) is created in the mind of the artist, or is it created in the mind of the consumer?
If you believe only in the former, AI art is an oxymoron and pointless. If you believe only in the latter, you're likely to rejoice at the explosion of new content and culture we can expect in the coming years.
As far as I can tell though, most regular people think that the truth is somewhere in between these two extremes, where both the creator's and the consumer's thoughts are important in unison. That culture is about where the two meet each other and help each other grow. But most of the arguments I've seen online seem to ignore or miss this dichotomy of views entirely, which unfortunately reduces the quality of the debate considerably.
“1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you're thirty-five is against the natural order of things.”
― Douglas Adams