raincole
If these topics are word salad, colleges might have been training word saucier chefs way before GPT-2 became a thing.
It's not just that it's word salad; it's that it's exactly the same salad every time. There's a multi-trillion-dollar attempt to replace your individuality with bland, amorphous slop """content""". This doesn't bother you in the slightest?
I now have a visceral reaction to being told that I'm ABSOLUTELY RIGHT!, for example. It seemed an innocuous phrase before -- rather like em dashes -- but has now become grating and meaningless. Robotic and no longer human.
I'm launching a new service to tell people that they are absolutely, 100% wrong. That what they are considering is a terrible idea, has been done before, and will never work.
Possibly I can outsource the work to HN comments :)
This sounds like a terrible idea that has been done before and will never work.
BrandonM as a Service?
After the reply to his infamous comment¹, BrandonM responded with “You are correct”.
https://www.hackerneue.com/item?id=9224
¹ Which people really should read in full and consider all the context. https://www.hackerneue.com/item?id=27068148
Yeah, BrandonM got unfairly maligned, but it's still funny.
You're exactly right, this really gets to the heart of the issue and demonstrates that you're already thinking like a linguist.
For what most of us are using it for (generating code), that's not a bad outcome. This audience might have less of a problem with it than the general population.
Whether we have the discipline to limit our use of the tool to its strengths... well, I doubt it. Just look at how social media turned out.
(Idle thought: I wonder if a model fine-tuned on one specific author would give more "original" titles).
This is just the default setting. The true test would be whether LLMs CAN'T produce distinct outputs even when asked to. I suspect this is solvable with prompt engineering. Has anyone tried this with Kimi K2?
I had the exact same thought! Wow!
Now tell me, which one of us is redundant?