jablongo
- It is what it's thinking consciously / its internal narrative. For example, if we really lean into the analogy between human psychology and LLMs, a supervillain's internal narrative about their plans would go into their CoT notepad. The "internal reasoning" that people keep referencing in this thread (the transformer weights and the inscrutable inner workings of a GPT) isn't reasoning, but more like instinct, or the subconscious.
- Upcoming: Selective pressure of AI coevolution leads to humans with a fear of unplugging things and the ability to sleep while sitting.
- Not sure what we should make of this. Besides the tragedy of losing a human being and a scientist, is this significant in another way?
- I tried to make ChatGPT sing "Mary Had a Little Lamb" recently; it's atonal but vaguely resembles the melody, which is interesting.
- I had Claude make a web interface and it was very similar stylistically. Looks like Claude has some design preferences of its own!
- I'm curious -- did you make the interface with Claude? I have a hunch you did, can you confirm/deny?
- Why would it be more like OG YouTube, when the content they demoed so closely resembles YouTube Shorts? The key difference is that OG YouTube was long form.
- To be clear I'm not disgusted by AI in general, I'm disgusted by short form video and AI/ML in service of dopamine reward loop hacking.
- It's already well underway.
- I think the last one takes the cake.
- Sam Altman has made statements in the past that I found encouraging, calling short-form video like TikTok the best current example of misaligned AI. While this release references policies to combat "Doomscrolling and RL-sloptimization", it's curious that OpenAI would devote resources to building a social app around AI-generated short-form video, which seems to be a core problem in our world. IMO you can't tweak the TikTok/YouTube Shorts format and suddenly make it a societal good, especially with exclusively AI content. This is a disturbing development for Altman's leadership, and it sort of explains what happened in 2023 when the board tried to remove him: he says one thing and does the opposite.
- It lets you inspect what actually constitutes a given cluster; for example, it seems like the outer clusters are variations of individual words and their direct translations rather than synonyms (the ones I saw, at least).
- Usually PCA doesn't look quite like this, so this was likely done with t-SNE or UMAP, which are non-parametric embeddings (they optimize a loss by modifying the embedded points directly). I can see labels if I mouse over the dots.
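For anyone curious what that looks like in practice, here is a minimal sketch using scikit-learn's t-SNE on a placeholder embedding matrix (the actual data and tooling behind the plot are unknown to me):

```python
# Hypothetical sketch: project word vectors to 2D with t-SNE.
# X is a stand-in; the real embeddings behind the plot are unknown.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 300))  # 500 fake 300-d word vectors

# Unlike PCA, t-SNE has no projection matrix: it initializes the 2D
# points and moves them directly to minimize a KL-divergence loss.
Y = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(X)
print(Y.shape)  # (500, 2)
```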
- GPT-5 claims it is just GPT-4o. Is OpenAI sending overflow requests to an earlier model? How could this not be the first thing they checked when they updated GPT-5?
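A quick way to probe the overflow theory (a sketch assuming the standard OpenAI Python SDK; the "gpt-5" model name is my assumption) is to compare the model ID the API reports serving against the model's self-description:

```python
# Sketch: compare the API-reported model ID with the model's self-report.
# Self-reports are unreliable, so a mismatch is suggestive, not proof.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-5",  # assumed model name
    messages=[{"role": "user", "content": "Which model are you?"}],
)
print("API says:  ", resp.model)                       # model that served the request
print("Model says:", resp.choices[0].message.content)  # what it claims to be
```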
- It's also worth considering that past some threshold, it may be very difficult for us as users to discern which model is better. I don't think that's what's going on here, but we should be ready for it. For example, if you were an Elo 1000 chess player, would you be able to tell whether Magnus Carlsen or another grandmaster was stronger by playing each of them individually? To the extent that our AGI/ASI metrics are based on human judgment, the cluster effect they create may be an illusion.
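The standard Elo expected-score formula makes this concrete (the ratings below are illustrative; Carlsen's classical rating is roughly 2830):

```python
# Elo expected score: the fraction of points a player is expected
# to take off an opponent, E = 1 / (1 + 10 ** ((R_opp - R_you) / 400)).
def expected_score(r_you: float, r_opp: float) -> float:
    return 1 / (1 + 10 ** ((r_opp - r_you) / 400))

# A 1000-rated player scores essentially zero against either opponent,
# so game outcomes carry almost no signal about which one is stronger.
print(expected_score(1000, 2830))  # vs. Carlsen-level: ~2.7e-05
print(expected_score(1000, 2500))  # vs. a typical GM:  ~1.8e-04
```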
- Tao is now transitioning to psychohistory.
- I'd like to hear about the tools and use cases that lead people to hit these limits. How many sub-agents are they spawning? How are they monitoring them?
- Establishing ground truth for this isn't easy. The labeled calories on foods are often quite inaccurate themselves, sometimes based on n=1 bomb-calorimetry tests. There are also incentives that may lead to lower-than-actual reported calories on the label.