- I'm sorry, but I'm not familiar with the context you mention. I have not worked in a job where I had to communicate with clients, and I find it hard to imagine a job where a junior would have to communicate with a client on a 2-hour task. Why would you want a junior to be the public face of your company?
- > well I can't leave a task with the LLM and come back to it tomorrow
You could actually just do that: leave an agent on a problem you would give a junior, go back to your main task, and check the agent's work whenever you feel like it.
- Recently they added a setting for default language
- Never gonna happen. Cash has no intrinsic value, except maybe for use as fire starter / toilet paper. GPUs, while currently inflated in price, will always find enough value. Their price might go down 50-75% but never 99%.
- Is political correctness necessary to have a thriving community / open source project?
Linux seems to be doing fine.
I wouldn't personally care either way but it is non-obvious to me that the first version would actually hurt the community.
- > And for context, search advertising is 40% of digital ad revenue.
But all the search companies have their own AI so how would OAI make money in this sector?
- Maybe don't use JavaScript on the backend.
- I'd like to see you make popcorn or an omelette in a kettle. Or heat up rice / soup / stew
- I might get worried when mainstream computers won't be able to run Linux. Until then.. I'm not worried.
Seems there are efforts to bring openness to platforms that inherently have an interest in resisting it, and while the progress is slow.. there is progress.
- I'd say the only ones capable of really approaching anything like a scientific understanding of how to prompt these for maximum efficacy are the providers, not the users.
Users can get a glimpse and can try their best to be scientific in their approach; however, the tool is of such complexity that we can barely skim the surface of what's possible.
That is why you see "folk magic": people love to share anecdata because.. that's what most people have. They either don't have the patience, the training, or simply the time to approach these tools with rational rigor.
Frankly, it would be enormously costly in both time and API costs to get anywhere near best practices backed up by experimental data, let alone coherent and valid theories about why a prompt technique works the way it does. And even if you built up this understanding or set of techniques, it might only work for one specific model. You might have to start all over again in a couple of months.
- Maybe have Claude coordinate Codex?
- I view it more as fun and spicy. Now we are moving away from the paradigm that the computer is "the dumbest thing in existence", and that requires a bit of flailing around, which is exciting!
Folk magic is (IMO) a necessary step in our understanding of these new.. magical.. tools.
- So if different LLMs have different political views, then you're saying it's more likely they trained on different data than that they're being manipulated to suit their owners' interests?
- You get music discovery, radios, go to album, go to artist
- +1, this is the most annoying YouTube feature I've ever come across. Gave them feedback on it.. maybe more people should complain
- Scratch that, I thought it was a different version. The one you linked has no support for filtergraphs, so it isn't even comparable to the old one.
- Thing is, if you want to use LLMs for mockups you've got to use the old one.
- That sounds good, save the LLM generated workflows and have them edited by more seasoned users.
Or you could go one step further and create a special workflow that lets you define some inputs and iterate with an LLM until the user gets what they want. For this you would need to generate outputs and have the user validate what the LLM has created before finally saving the recipe.
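Roughly what I have in mind (just a sketch; every callable here is a placeholder you'd wire up yourself, not an actual API):

```python
def build_workflow_with_llm(generate, render_preview, ask_user, save_recipe, user_inputs):
    """Iterate with an LLM until the user approves the generated workflow."""
    feedback = None
    while True:
        workflow = generate(user_inputs, feedback)    # LLM proposes (or revises) a workflow
        print(render_preview(workflow, user_inputs))  # show the user what it actually produces
        answer = ask_user("Looks good? (yes / describe what to change): ")
        if answer.strip().lower() == "yes":
            save_recipe(workflow)                     # persist only the approved version
            return workflow
        feedback = answer                             # feed the correction into the next attempt
```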
- Oh, I have no doubt that I could, but I don't see why, since Linux already does what I need and I don't see any compelling reason to switch. I was just curious to see what all the hype was about with the new M1 CPUs and give it a shot.
- I have written a lot of ffmpeg-python and plain ffmpeg commands using LLMs, and while I am amazed at how well Gemini or ChatGPT can handle ffmpeg prompts, it is still not 100%, so this seems to me like a big gamble on your part. However, it might work for most users that only ask for simple things.
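If you do go this route, one thing that might help (just a sketch, assuming you can dry-run the generated arguments on a short test clip before touching real files):

```python
import subprocess

def command_seems_ok(ffmpeg_args, test_input, test_output):
    """Dry-run LLM-generated ffmpeg arguments on a small test file.

    ffmpeg_args is the argument list the LLM produced (minus the input/output
    paths); we substitute a short clip and only trust the command if ffmpeg
    exits cleanly and ffprobe can read the result.
    """
    cmd = ["ffmpeg", "-y", "-i", test_input, *ffmpeg_args, test_output]
    if subprocess.run(cmd, capture_output=True).returncode != 0:
        return False
    probe = subprocess.run(["ffprobe", "-v", "error", test_output], capture_output=True)
    return probe.returncode == 0
```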