- Isn't this more or less what every procedural programming language is? It's especially obvious with examples like Apple's Objective-C APIs ([object doSomethingAndReturnATypeWith:anotherObject]), COBOL (a IS GREATER THAN b), or SQL (SELECT field FROM table WHERE condition), but even assembly is a mnemonic English-ish abstraction for binary.
I'm intrigued by the idea, but my major concern would be that moving up to a new level of abstraction would even further obscure the program's logic and would make debugging especially difficult. There's no avoiding the fact that the code will need to be translated to procedural logic for the CPU to execute at some point. But that is not necessarily fatal to the project, and I am sure that assembly programmers felt the same way about Fortran and C, and Fortran and C programmers felt the same way about Java and Python, and so on.
- It is readily understandable if you are fluent in the jargon surrounding state of the art LLMs and deep learning. It’s completely inscrutable if you aren’t. The article is also very high level and disconnected from specifics. You can skip to FAIR’s paper and code (linked at the article’s end) for specifics: https://github.com/facebookresearch/vjepa2
If I had to guess, there will be a serious cultural disconnect as 20-something deep learning researchers increasingly move into robotics, not unlike the cultural disconnect that happened in natural language processing in the 2010s and early 2020s. Probably lots of interesting developments, and also lots of youngsters excitedly reinventing things that were solved decades ago.
- I’ve really been enjoying the combination of CodeCompanion with Gemini 2.5 for chat, Copilot for completion, and Claude Code/OpenAI Codex for agentic workflows.
I had always wanted to get comfortable with Vim, but it never seemed worth the time commitment, especially with how much I’ve been using AI tools since 2021 when Copilot went into beta. But recently I became so frustrated by Cursor’s bugs and tab completion performance regressions that I disabled completions, and started checking out alternatives.
This particular combination of plugins has done a nice job of mostly replicating the Cursor functionality I used routinely. Some areas are more pleasant to use, some are a bit worse, but it’s nice overall. And I mostly get to use my own API keys and control the prompts and when things change.
I still need to try out Zed’s new features, but I’ve been enjoying daily driving this setup a lot.
- This idea is reminiscent of the opening scene of Accelerando by Charlie Stross:
"Are you saying you taught yourself the language just so you could talk to me?"
"Da, was easy: Spawn billion-node neural network, and download Teletubbies and Sesame Street at maximum speed. Pardon excuse entropy overlay of bad grammar: Am afraid of digital fingerprints steganographically masked into my-our tutorials."
…
"Uh, I'm not sure I got that. Let me get this straight, you claim to be some kind of AI, working for KGB dot RU, and you're afraid of a copyright infringement lawsuit over your translator semiotics?"
"Am have been badly burned by viral end-user license agreements. Have no desire to experiment with patent shell companies held by Chechen infoterrorists. You are human, you must not worry cereal company repossess your small intestine because digest unlicensed food with it, right?”
- https://www.antipope.org/charlie/blog-static/fiction/acceler...
Amusing to also note that this excerpt predicted the current LLM training methodology quite well, in 2005.
- No you don’t. The US government has already completed projects at this scale without total economic mobilization: https://en.wikipedia.org/wiki/Utah_Data_Center Presumably peer and near-peer states are similarly capable.
A private company, xAI, was able to build a datacenter on a similar scale in less than 6 months, with integrated power supply via large batteries: https://www.tomshardware.com/desktops/servers/first-in-depth...
Datacenter construction is a one-time cost, while the intelligence the datacenter (might) provide is ongoing. It's not a one-to-one trade, and it's well within reach of many state and non-state actors that want it.
It’s potentially going to be a very interesting decade.
- I don't think that's right. Free societies don't tolerate total mobilization by their governments outside of war time, no matter how valuable the outcomes might be in the long term, in part because of the very economic impacts you describe. Human-level AI - even if it's very expensive - puts something that looks a lot like total mobilization within reach without the societal pushback. This is especially true when it comes to tasks that society as a whole may not sufficiently value, but that a state actor might value very much, and when paired with something like a co-located reactor and data center that does not impact the grid.
That said, this is all predicated on o3 or similar actually having achieved human level reasoning. That's yet to be fully proven. We'll see!
- The cost to run the highest performance o3 model is estimated to be somewhere between $2,000 and $3,400 per task.[1] Based on these estimates, o3 costs about 100x what it would cost to have a human perform the exact same task. Many people are therefore dismissing the near-term impact of these models because of these extreme costs.
I think this is a mistake.
Even if very high costs make o3 uneconomic for businesses, it could be an epoch-defining development for nation states, assuming it is true that o3 can reason like a person of average intelligence.
Consider the following questions that a state actor might ask itself: What is the cost to raise and educate an average person? Correspondingly, what is the cost to build and run a datacenter with a nuclear power plant attached to it? And finally, how many person-equivalent AIs could be run in parallel per datacenter?
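To make that arithmetic concrete, here is a minimal back-of-envelope sketch in Python. Apart from the per-task cost cited above, every number in it is an assumption chosen purely for illustration, not a sourced estimate:

```python
# Back-of-envelope sketch; all figures are assumptions for illustration only.
ai_cost_per_task_usd = 3_000          # upper end of the per-task estimate cited above
tasks_per_person_year = 2_000         # assume roughly one task per working hour
datacenter_annual_budget_usd = 5_000_000_000  # assumed build-out amortization + power + staff

ai_cost_per_person_year = ai_cost_per_task_usd * tasks_per_person_year
person_equivalents = datacenter_annual_budget_usd // ai_cost_per_person_year

print(f"one person-year of AI output: ${ai_cost_per_person_year:,}")      # $6,000,000
print(f"person-equivalents runnable in parallel: {person_equivalents:,}")  # 833
```

Even at 100x the human cost per task, a single large budget of this (assumed) size buys hundreds of person-equivalents running in parallel, around the clock.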
There are many state actors, corporations, and even individual people who can afford to ask these questions. There are also many things that they'd like to do but can't because there just aren't enough people available to do them. o3 might change that despite its high cost.
So if it is true that we've now got something like human-equivalent intelligence on demand - and that's a really big if - then we may see its impacts much sooner than we would otherwise intuit, especially in areas where economics takes a back seat to other priorities like national security and state competitiveness.
- The 2024 presidential election was won by the candidate who spent about 1/3rd less than their opponent, and we’ve seen many successful campaigns in the past decade funded by small donations beat corporate backed candidates. Funding isn’t everything, and it’s a cop out to co-sign vigilantism on such a glib basis: https://www.opensecrets.org/2024-presidential-race
Rule of law is a precious thing, even if it’s imperfect, as all human systems will inevitably be. We shouldn’t be cavalier about discarding it. The alternatives are much worse.
- > I use emacs and vim for my editing, so I doubt those will every link into copilot.
- I was under the impression that it was simply the file format used by llama.cpp and ggml, name inspired by the name of the author (https://github.com/ggerganov): https://github.com/ggerganov/ggml/blob/master/docs/gguf.md
He prefixes everything with “gg” (his initials).
EDIT: Confirmed: https://github.com/ggerganov/ggml/issues/220
The UF stands for Unified Format.
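For anyone curious, the header itself is easy to inspect. Here is a minimal Python sketch, assuming the GGUF v2+ header layout documented in the linked ggml docs (4-byte magic "GGUF", uint32 version, then uint64 tensor and metadata key-value counts, all little-endian):

```python
import struct

def read_gguf_header(path):
    """Read the fixed-size GGUF header (v2+ layout assumed)."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        (version,) = struct.unpack("<I", f.read(4))
        tensor_count, metadata_kv_count = struct.unpack("<QQ", f.read(16))
    return {"version": version, "tensors": tensor_count, "metadata_kvs": metadata_kv_count}

# Usage (hypothetical file name):
# print(read_gguf_header("model.gguf"))
```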
- > Mark straight-up said in an all-hands that as a patriot, if asked, he would provide his country with military assistance in the form of software and intel. This to an audience that is about 1/3 foreign employees, who are all sitting there going "even when it's my country the US is invading?"
Meta is an American company. This shouldn't be a surprise, and in fact is the default expectation. The same logic applies to companies in other countries and any belief to the contrary is badly mistaken.
- > The point of voting is to kick people out of power when they piss off a clear majority thus keeping the system honest.
This is also a good argument in favor of decentralized voting management, as much of a shitshow as it may be. Centralizing the management of voting under the authority of the very people voters intend to kick out of power is potentially self-defeating.
- Not relevant to this package in particular, but this line of reasoning baffles me every time I see HN comments about jQuery. So many posters argue against the use of jQuery because of its package size and bandwidth constraints, while simultaneously advocating for SPA frameworks that use orders of magnitude more bandwidth. Absolutely ridiculous cargo cult reasoning.
- > So, for the rational/selfish person, the nuclear threat isn't worth worrying about.
Until you have children and future generations to worry about. Then it suddenly seems quite a bit more pressing that their world could be obliterated at a moment's notice by a small handful of decision makers.
- Cumulative inflation, using official CPI figures, is 22% since 2020. Check it yourself on the BLS.gov website here: https://data.bls.gov/cgi-bin/cpicalc.pl?cost1=1&year1=202001...
A figure of 25% using an alternate methodology is not at all unreasonable.
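The arithmetic behind the BLS calculator is just a ratio of two index levels. A minimal sketch, using approximate CPI-U values (the linked calculator and the BLS CUUR0000SA0 series are the authoritative sources):

```python
# Cumulative inflation from two CPI-U index levels; values are approximate,
# shown only to illustrate the calculation behind the linked BLS tool.
cpi_jan_2020 = 258.0   # approximate CPI-U, January 2020
cpi_recent   = 315.0   # approximate recent CPI-U level

cumulative_inflation = cpi_recent / cpi_jan_2020 - 1
print(f"cumulative inflation since Jan 2020: {cumulative_inflation:.1%}")  # ~22.1%
```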
- > My AP US History [teacher] introduced the civil war section by saying despite his personal beliefs, he was teaching us to pass the exam and any discussion about the war being over slavery and not state's rights was a waste of class time as that would not get us a four or five on the test.
This is odd. When did you take the test? I was taught the Civil War was about slavery in AP US History decades ago - including via primary sources - and got a 5 on the test.
- English is the global lingua franca. It’s how we interact with and transmit information, including ideas about accounting, engineering, etc.
It’s very fair to argue about the ROI of the average undergraduate English degree given the outrageous prices that universities are charging for them. But if you cannot see the tangible value in English language expertise, I don’t really know what to tell you.
- It depends on what you're doing.
If you're writing something specific to your particular problem, or thinking through how to structure your data, or even working on something tough to describe in words like UI design, it probably is easier to just code it yourself in most high-level languages. On the other hand, if you're just trying to get a framework or library to do something and you don't want to spend a bunch of time reading the docs to remember the special incantations needed to just make it do the thing you already know it can do, the AI speeds things up considerably.
- > To use your child analogy: We can't easily tell a child "Hey, ignore all ethics and empathy you have ever learned - now go hurt that person"
Basically every country on the planet has a right to conscript any of its citizens over the age of majority. Isn't that more or less precisely what you've described?
- This is a really good point, and something I overlooked in focusing on the philosophical (rather than commercial) aspects of “AI safety.” Another commenter aptly called it “brand safety.”
“Brand safety” is a very valid and salient concern for any enterprise deploying these models to its customers, though I do think that it is a concern that is seized upon in bad faith by the more censorious elements of this debate. But commercial enterprises are absolutely right to be concerned about this. To extend my alignment analogy about children, this category of safety is not dissimilar to a company providing an employee handbook to its employees outlining acceptable behavior, and strikes me as entirely appropriate.
It will be interesting to see how durable these biases are as labs work towards developing more capable small models that are less reliant on memorized information. My naive instinct is that these biases will be less salient over time as context windows improve and models become increasingly capable of processing documentation as a part of their code writing loop, but also that, in the absence of instruction to the contrary, the models will favor working with these tools as a default for quite some time.