Socials:
- github.com/Askir
- linkedin.com/in/jaschabeste
Opinions are my own
- So what are its responsibilities? Does it actually do anything?
- Depending on your workload, you might also be able to use Timescale to get very fast analytical queries inside Postgres directly. That avoids having to replicate the data altogether.
Note that I work for the company that built Timescale (Tiger Data). ClickHouse is cool though, just throwing another option into the ring.
Tbf, in terms of speed ClickHouse pulls ahead on most benchmarks, but if you want to join a lot with your Postgres data directly, you might benefit from having everything in one place. And of course you avoid the sync overhead.
- This always sounds super messy to me, but I guess Supabase is kind of the same thing, and especially for side projects it seems like a very efficient setup.
- Why replace it at all? Just remove it. I use AI every day and don't use MCP. I've built LLM powered tools that are used daily and don't use MCP. What is the point of this thing in the first place?
It's just a complex abstraction over a fundamentally trivial concept. The only issue it solves is if you want to bring your own tools to an existing chatbot. But I've not had that problem yet.
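To make the "trivial concept" point concrete, here's roughly the pattern I use instead of MCP. This is only a sketch: `call_llm`, the tool name, and the message shapes are hypothetical stand-ins, not any particular SDK's API.

```python
import json

# The "tools" are just functions the application already has.
def search_orders(customer_id: str) -> list[dict]:
    # Placeholder: in a real app this would hit your own database or API.
    return [{"order_id": "A-1", "status": "shipped"}]

TOOLS = {"search_orders": search_orders}

def call_llm(messages: list[dict], tool_specs: list[dict]) -> dict:
    """Hypothetical stand-in for the actual model call; returns either text or a tool request."""
    raise NotImplementedError

def run_turn(messages: list[dict]) -> dict:
    specs = [{"name": "search_orders", "parameters": {"customer_id": "string"}}]
    reply = call_llm(messages, specs)
    if reply.get("tool"):  # the model asked to use a tool
        result = TOOLS[reply["tool"]](**reply["arguments"])
        messages.append({"role": "tool", "content": json.dumps(result)})
        reply = call_llm(messages, specs)  # let the model incorporate the result
    return reply
```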
- Verification is key, and the issue is that almost all AI-generated code looks plausible, so just reading the code is usually not enough. You need to build extremely good testing systems and actually run through the scenarios you want to ensure work before you can be confident in the results. This can be preview deployments, AI-generated end-to-end tests that produce video output you can watch, or just a very good test suite with guard rails.
Without such automation and guard rails, AI-generated code eventually becomes a burden on your team, because you simply can't manually verify every scenario.
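As a concrete example of the kind of guard rail I mean, here's a minimal sketch of an end-to-end test that records a video you can skim afterwards. It assumes Playwright for Python; the staging URL, selectors, and credentials are made up for illustration.

```python
from playwright.sync_api import sync_playwright

# Minimal end-to-end check of a login flow that also records a video of the run.
with sync_playwright() as p:
    browser = p.chromium.launch()
    context = browser.new_context(record_video_dir="videos/")  # every page in this context is recorded
    page = context.new_page()

    page.goto("https://staging.example.com/login")
    page.fill("#email", "test@example.com")
    page.fill("#password", "not-a-real-password")
    page.click("button[type=submit]")
    page.wait_for_url("**/dashboard")  # fails loudly if the scenario breaks

    context.close()  # the video file is finalized when the context closes
    browser.close()
```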
- OpenAI and Anthropic at least are both pretty clear about the fact that you need to check the output:
https://openai.com/policies/row-terms-of-use/
https://www.anthropic.com/legal/aup
OpenAI:
> When you use our Services you understand and agree:
> Output may not always be accurate. You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice. You must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing Output from the Services. You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them. Our Services may provide incomplete, incorrect, or offensive Output that does not represent OpenAI’s views. If Output references any third party products or services, it doesn’t mean the third party endorses or is affiliated with OpenAI.
Anthropic:
> When using our products or services to provide advice, recommendations, or in subjective decision-making directly affecting individuals or consumers, a qualified professional in that field must review the content or decision prior to dissemination or finalization. You or your organization are responsible for the accuracy and appropriateness of that information.
So I don't think we can say they are lying.
A poor workman blames his tools, so please take responsibility for what you deliver. And if the result is bad, you can learn from it. That doesn't have to mean giving up AI, but it definitely means you need to fact-check more thoroughly.
- I found this report, which claims the RAM market in 2024 was worth almost 100 billion: https://www.grandviewresearch.com/industry-analysis/random-a...
I assume this includes more than just the raw price of modules, but OpenAI only has 60 billion in funding altogether and was aiming for 20 billion ARR this year. That sounds like they would be spending maybe half their money on RAM they never use? It just doesn't add up.
- I'm curious how OpenAI has the funds to pay for 40% of the world's RAM production? Sure, they are big and have a few billion, but I kind of assumed that 40% for a year, or whatever they are buying, is easily double-digit billions? That has to hurt even them, especially because they can't buy anything else?
Also, what are these contracts? Surely Samsung could decide to cancel a contract by paying a large fee, but is that fee truly so large that getting their RAM back, now that prices are 4x what they used to be, is not worth it?
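A rough back-of-envelope using only the numbers quoted in these comments (the ~100 billion market size from the linked report and the claimed 40% share; none of these figures are verified here):

```python
# Back-of-envelope sanity check with the numbers quoted above.
ram_market_2024 = 100e9      # ~$100B global RAM market (Grand View Research claim)
claimed_share = 0.40         # "40% of the world's RAM production" from the article
openai_funding = 60e9        # rough total funding figure cited in the comment
target_arr = 20e9            # ARR OpenAI was reportedly aiming for this year

implied_spend = ram_market_2024 * claimed_share
print(f"Implied RAM spend: ${implied_spend / 1e9:.0f}B")                # ~$40B
print(f"Share of total funding: {implied_spend / openai_funding:.0%}")  # ~67%
print(f"Multiple of target ARR: {implied_spend / target_arr:.1f}x")     # 2.0x
```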
- > reportedly in the low single-digit billions at best
They are expected to hit 9 billion by the end of the year, meaning the valuation multiple is only about 30x. That is still steep, but at that growth rate not totally unreasonable.
https://techcrunch.com/2025/11/04/anthropic-expects-b2b-dema...
- Do they actually spend that much, though? I think they are getting similar results with far fewer resources.
It's also a bit funny that providing free models is probably the most communist thing China has done in a long time.
- I think the main issue is that when content is hand-written, you can be certain someone put in at least the effort it takes to write it. And while some people write fast, I would assume that at least means they have read their own writing once.
AI slop you can produce faster than you're able to read it, which makes it incredibly costly to filter out in comparison. It just messes so much with the signal-to-noise ratio on the web.
- Oh, they need control of the models to be able to censor and to ensure that whatever happens with AI inside the country stays under their control. But the open-source part? Idk, I think they do it to mess with US investment and for the typical corporate open-source reasons: community, marketing, etc. But tbh, as a European with no serious competitor of our own, the messing with US investment especially is something I can get behind.
- The problem really is that it is impossible to verify that the content someone uploads came from their mind and not from a computer program. And at some point probably all content is at least influenced by AI. The real issue is also not that I used ChatGPT to look up a synonym or asked a question before writing an article; the problem is when I copy-paste the content and claim I wrote it.
- Anthropic is leaning heavily into agentic coding, so it makes sense to use SWE-bench Verified as their main benchmark. It is also the one benchmark where Google did not take the top spot last week. Claude remains king; that's all that matters here.
- I feel like this is so core to any LLM automation that it's crazy Anthropic is only adding it now.
I built a customized deep-research tool internally earlier this year that is made up of multiple "agentic" steps, each focusing on specific information to find. The outputs of those steps are always JSON, which then becomes the input for the next step. Sure, you can work your way around failures by doing retries, but it's just one less thing to think about if you can guarantee that the random LLM output adheres at least to some sort of structure.
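A minimal sketch of the pattern I mean, using Pydantic for the schema. `call_llm` is a hypothetical stand-in for whatever model call you use, and the schema itself is made up; before native structured output, you'd wrap the call in a validate-and-retry loop like this:

```python
from pydantic import BaseModel, ValidationError

class ResearchFinding(BaseModel):
    topic: str
    summary: str
    sources: list[str]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for the actual model call; returns raw text."""
    raise NotImplementedError

def run_step(prompt: str, max_retries: int = 3) -> ResearchFinding:
    # Ask for JSON matching the schema; validate it, and feed errors back on retry.
    schema_hint = ResearchFinding.model_json_schema()
    for _ in range(max_retries):
        raw = call_llm(f"{prompt}\nRespond only with JSON matching this schema: {schema_hint}")
        try:
            return ResearchFinding.model_validate_json(raw)
        except ValidationError as err:
            prompt = f"{prompt}\nYour previous output was invalid: {err}"
    raise RuntimeError("Step failed to produce valid structured output")

# The validated object from one step becomes the typed input to the next step.
```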
> This video features an in-depth interview with Yann LeCun, Chief AI Scientist at Meta and a Turing Award winner, hosted on The Information Bottleneck podcast. LeCun discusses his new startup, the limitations of current Large Language Models (LLMs), his vision for "World Models," and his optimistic outlook on AI safety.
Executive Summary
Yann LeCun argues that the current industry focus on scaling LLMs is a dead end for achieving human-level intelligence. He believes the future lies in World Models: systems that can understand the physical world, plan, and reason using abstract representations rather than just predicting the next token. To pursue this, he is launching a new company, Advanced Machine Intelligence (AMI), which will focus on research and productizing these architectures.
Key Insights from Yann LeCun
1. The "LLM Pill" & The Limits of Generative AI
LeCun is highly critical of the Silicon Valley consensus that simply scaling up LLMs and adding more data will lead to Artificial General Intelligence (AGI).
The "LLM Pill": He disparages the idea that you can reach superintelligence just by scaling LLMs, calling it "complete bullshit" [01:13:02].
Data Inefficiency: LLMs require trillions of tokens to learn what a 4-year-old learns from just living. He notes that a child sees about 16,000 hours of visual data in four years, which contains far more information than all the text on the internet [25:23].
Lack of Grounding: LLMs do not understand the physical world (e.g., object permanence, gravity) and only "regurgitate" answers based on fine-tuning rather than genuine understanding [36:22].
2. The Solution: World Models & JEPA
LeCun advocates for Joint Embedding Predictive Architectures (JEPA).
Prediction in Abstract Space: Unlike video generation models (like Sora) that try to predict every pixel (which is inefficient and hallucinatory), a World Model should predict in an abstract representation space. It filters out irrelevant details (noise) and focuses on what matters [15:35] (a toy sketch of this idea follows after this section).
The Sailing Analogy: He compares sailing to running a world model. You don't simulate every water molecule (Computational Fluid Dynamics); you use an intuitive, abstract physics model to predict how the wind and waves will affect the boat [01:30:29].
Planning vs. Autocomplete: True intelligence requires planning—predicting the consequences of a sequence of actions to optimize an objective. LLMs just autocomplete text [07:26].
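To make the latent-prediction idea concrete, here is a toy sketch (mine, not LeCun's actual architecture): encode a context and a target, predict the target's latent from the context's latent, and compute the loss in representation space rather than in pixels. A real JEPA also needs an anti-collapse mechanism (e.g., an EMA target encoder), which is omitted here.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps raw inputs (e.g., flattened frames) to a compact latent representation."""
    def __init__(self, in_dim=784, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))

    def forward(self, x):
        return self.net(x)

encoder = Encoder()
predictor = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))

x_context = torch.randn(32, 784)  # stand-in for the observed frame
x_target = torch.randn(32, 784)   # stand-in for the future frame

z_context = encoder(x_context)
with torch.no_grad():             # the target branch is not trained through this loss
    z_target = encoder(x_target)

z_pred = predictor(z_context)
loss = nn.functional.mse_loss(z_pred, z_target)  # compare in abstract space, not pixel space
loss.backward()
```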
3. A New Startup: Advanced Machine Intelligence (AMI)
LeCun is starting AMI to focus on these "World Models" and planning systems.
Open Research: He insists that upstream research must be published openly to be reliable. Closed research leads to "delusion" about one's own progress [04:59].
Goal: To become a supplier of intelligent systems that can reason and plan, moving beyond the capabilities of current chatbots.
4. AI Safety is an Engineering Problem
LeCun dismisses "doomer" narratives about AI taking over the world, viewing safety as a solvable engineering challenge akin to building reliable jet engines.
Objective-Driven Safety: He proposes "Objective-Driven AI". Instead of trying to fine-tune an LLM (which can be jailbroken), you build a system that generates actions by solving an optimization problem. Safety constraints (e.g., "don't hurt humans") are hard-coded into the objective function, making the system intrinsically safe by construction [01:02:04] (a toy sketch follows after this section).
The Jet Engine Analogy: Early jet engines were dangerous and unreliable, but through engineering, they became the safest mode of transport. AI will follow the same trajectory [58:25].
Dominance vs. Intelligence: He argues that the desire to dominate is a biological trait tied to social species, not a necessary byproduct of intelligence. A machine can be super-intelligent without having the drive to rule humanity [01:35:13].
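As a toy illustration of the objective-driven idea (my sketch, not LeCun's system): instead of sampling from a fine-tuned model, the agent scores candidate actions with a world model, discards any action that violates a hard safety constraint, and only then optimizes the task objective over what remains.

```python
def predict_outcome(state, action):
    # Hypothetical world model: predicts the next state for a candidate action.
    return {"speed": state["speed"] + action,
            "distance_to_person": state["distance_to_person"] - action}

def task_cost(outcome, goal_speed=10.0):
    return abs(outcome["speed"] - goal_speed)       # how far we end up from the goal

def violates_safety(outcome):
    return outcome["distance_to_person"] < 2.0      # hard constraint, not a soft preference

def choose_action(state, candidate_actions):
    safe = [a for a in candidate_actions if not violates_safety(predict_outcome(state, a))]
    if not safe:
        return 0.0                                  # fall back to a do-nothing action
    return min(safe, key=lambda a: task_cost(predict_outcome(state, a)))

state = {"speed": 5.0, "distance_to_person": 6.0}
print(choose_action(state, candidate_actions=[0.0, 1.0, 2.0, 3.0, 4.0]))  # picks 4.0 here
```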
5. Advice for Students
Don't Just Study CS: LeCun advises students to focus on subjects with a "long shelf life" like mathematics, physics, and engineering (control theory, signal processing).
Avoid Trends: Computer Science trends change too rapidly. Foundational knowledge in how to model reality (physics/math) is more valuable for future AI research than learning the specific coding framework of the month [01:36:20].
6. AGI Timelines
He rejects the term "AGI" because human intelligence is specialized, not general.
Prediction: Optimistically, we might have systems with "cat-level" or "dog-level" intelligence in 5–10 years. Reaching human level might take 20+ years if unforeseen obstacles arise [51:24].