- For now, I don't think Sora will actually be able to produce all the text. Maybe next year.
- Student unions tend to focus on all sorts of other issues; I wouldn't trust them to handle cases like this.
The only way to reliably prevent the use of AI tools without punishing innocent students is to monitor the students while they work.
Schools can do that by having essays written on premises, either by hand or on computers managed by the school.
But students who are worried about being targeted can also do this themselves, by setting up their phone to film them while they work.
And if they do, and a teacher then tries to punish someone who can prove they wrote the essay themselves, either the teacher or the school should hopefully learn that such tools can't be trusted.
- Certainly, and I don't think anyone really doubts this.
Still, people are sometimes surprised by how DNA may affect more parts of behavior than they previously thought.
Not necessarily by directly coding for the behavior. In many cases, the DNA will just modulate how we learn from the environment. And if the environment is fairly constant, observed behavior can correlate more strongly with DNA than one might have expected.
- I don't think human DNA generally codes for behavior directly. Rather, DNA can code for how the brain learns from incoming data streams.
If the brain naturally tunes into some sources or patterns of input rather than others, it may learn very quickly from the preferred sources. And as long as those sources carry signals that are fairly invariant over time, it may seem like those signals are instinctual.
For instance, it may appear that humans instinctively build relationships with kin (both parents and children) and friends, build revenue streams (or gather food in more primitive societies), and reproduce.
Instead, the brain may come preloaded to release brain chemicals when it detects certain stimuli, like oxytocin near caregivers (as children) or small fluffy things (as adults). This triggers around parents and babies, but it can also trigger around toys, pets, adopted children, etc.
Friendship-seeking can be, in part, related to serotonin production in certain social situations, but may be hijacked by social media.
Revenue-seeking behavior can come from dopamine stimulus in certain goal-optimizing situations, but may also be triggered by video games.
And the best known part: reproductive behavior may primarily come from sexual arousal, and be hijacked by porn or birth control.
Each of the above may be coded by a limited number of bytes of DNA; it's really the learning algorithm, combined with the data stream of natural environments, that produces the specific behaviors.
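As a toy illustration of that division of labor (my own sketch, not anything from biology: the stimuli, reward values, and update rule are all invented), the innate part can be as small as a reward table, while the behavior itself is never written down anywhere:

    import random

    # The "DNA" part: a few bytes saying which stimuli release reward chemicals.
    INNATE_REWARD = {"caregiver": 1.0, "pet": 0.8, "food": 0.5, "stranger": 0.0}

    # The learned part starts empty; a generic update rule fills it in.
    value = {s: 0.0 for s in INNATE_REWARD}
    alpha = 0.1  # learning rate

    for _ in range(1000):  # the "data stream" of the environment
        s = random.choice(list(INNATE_REWARD))
        value[s] += alpha * (INNATE_REWARD[s] - value[s])  # simple TD-style update

    # The agent now prefers caregivers and pets, though no such rule was coded.
    print(sorted(value.items(), key=lambda kv: -kv[1]))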
- As technology changes over history, governments tend to emerge that reflect the part of the population that can maintain a monopoly of violence.
In the Classical Period, it was the citizen soldiers of Rome and Greece, at least in the west. These produced the ancient republics and proto-democracies.
These were later replaced by professional standing armies under leaders like Alexander and the Caesars. This allowed kings and emperors.
In the Early to Mid Medieval period, those were in turn displaced by knights, elites whose equipment and training allowed a few men to defeat commoners many times their number. This produced feudalism.
Near the end of the period, pikes, crossbows, and improved logistics shifted power back to central governments, primarily kings and emperors.
Then rifles swung the pendulum all the way back to citizen soldiers between the 18th and early 20th centuries, which brought back democracies and republics.
Now the pendulum is swinging in the opposite direction. Technology and capital distribution have already effectively moved a lot of power back to an oligarchic elite.
And if full AGI is combined with robots more physically capable than humans, it can swing all the way. In principle, a single monarch could gain a monopoly of violence over an entire country.
Do not take for granted that our current understanding of what government is will stay the same.
Some kind of merger between capital and power seems likely, where democratic elections quickly become completely obsolete.
Once the police and military have been mostly automated, I don't think our current system is going to last very long.
- The fears from Three Mile Island and Fukushima were almost completely irrational. The death toll from those was too low to measure.
And the fears from Chernobyl were MOSTLY irrational.
The extreme fear generated by even very moderate releases from nuclear plants comes partly from the association with nuclear bombs and partly from fear of the unknown.
A lot of (if not most) people shut off their rational thinking when the word "nuclear" is used, even those who SHOULD understand that far more people die from coal and gas plants EVERY YEAR than have died from nuclear energy throughout history.
The safety level at Chernobyl may indeed have been atrocious. But so was the coal industry in the USSR. Even considering the USSR alone, coal caused a similar number of deaths (or a bit more) EVERY YEAR as Chernobyl caused in total [1].
[1] https://www.science.org/doi/pdf/10.1126/science.238.4823.11....
- The "next token prediction" is a distraction. That's not where the interesting part of an AI model happens.
If you think of the tokenization near the end as a serializer, something like turning an object model into JSON, you get a better picture. The interesting part of an OOP program is not the JSON, but what happens in memory before the JSON is created.
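To make the analogy concrete (a deliberately trivial sketch of my own; the class and fields are made up):

    import json

    class Order:
        def __init__(self):
            # The rich internal state: references, derived values, invariants...
            self.items = [("widget", 2), ("gadget", 1)]
            self.total = sum(n for _, n in self.items)

    order = Order()  # the interesting computation happens here, in memory
    print(json.dumps({"items": order.items, "total": order.total}))
    # The JSON is only a flat projection of that state, like sampled tokens.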
Likewise, the interesting parts of a neural net model, whether it's an LLM, AlphaProteo, or some diffusion-based video model, happen in the steps that operate in its latent space, which is in many ways similar to our subconscious thinking.
In those layers, the AI models detect deeper and deeper patterns of reality. Much deeper than the surface pattern of the text, images, video etc used to train them. Also, many of these patterns generalize when different modalities are combined.
From this latent space, you can "serialize" outputs in several different ways. Text is one, image/video another. For now, the latent spaces are not general enough to do all of them equally well; instead, models are created that specialize in one modality.
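A minimal sketch of that idea (mine, in PyTorch; the dimensions and heads are arbitrary, and real multimodal models are far more involved): one shared latent, several "serializer" heads.

    import torch
    import torch.nn as nn

    latent_dim, vocab_size, image_dim = 128, 1000, 784

    encoder = nn.Sequential(nn.Linear(256, latent_dim), nn.ReLU())
    text_head = nn.Linear(latent_dim, vocab_size)   # latent -> token logits
    image_head = nn.Linear(latent_dim, image_dim)   # latent -> pixels

    x = torch.randn(1, 256)       # some input embedding
    z = encoder(x)                # the shared latent: where the "thinking" lives
    token_logits = text_head(z)   # one way to serialize z
    pixel_values = image_head(z)  # another serialization of the same z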
I think the step to AGI does not require throwing a lot more compute at the models, but rather having them straddle multiple modalities better, in particular these:
- Physical world modelling at the level of Veo 3 (possibly with some lessons from self-driving or robotics models for elements like object permanence and perception)
- Symbolic processing at the level of the best LLMs
- The ability to be goal-oriented and iterate towards a goal, similar to the Alpha* family of systems
- Optionally: optimization for the use of a few specific tools, including a humanoid robot
Once all of these are integrated into the same latent space, I think we basically have what it takes to replace most human thought.
- For now, the people able to glue all the necessary ingredients together are the same ones who can understand the output if they drill into it.
Indeed, these may be the last people to be fired, as they can become efficient enough to one day do the jobs of everyone else.
- Ironically, finding ways to spin stories like that IS one way of taking responsibility, even if it's only a way to take responsibility for the narratives that are created after things happen.
Because those narratives play an important role in the next outcome.
The error is when you expect them to play for your team. Most people will (at best) be on the same team as those they interact with directly on a typical day. Loyalty 2-3 steps down a chain of command tends to be mostly theoretical. That's just human nature.
So what happens when the "#¤%" hits the fan is that those near the top take responsibility for themselves, their families, and their direct reports and managers first. Meaning they externalize damage elsewhere, which includes "you and me".
Now, this is baseline human nature. Indeed, this is what natural empathy dictates, because empathy as an emotion is primarily triggered by those we interact with directly.
Exceptions exist. Some leaders really are idealists, governed more by the theories/principles they believe in than the basic human impulses.
But those are the minority, and this may even be a sign of autism or something similar, where empathy for oneself and one's immediate surroundings is disabled or toned down.
- It would not collapse. But it would shift some purchasing power from the middle class to the working class if all of them were to leave, as working-class salaries would go up even faster than the inflation it would cause.
- Presidents may not be able to pardon themselves, but they ARE immune from prosecution through the regular legal system for any actions taken as part of the office of president.
The only way to go after them (given the current SCOTUS, which made the ruling above) is impeachment. And for that, the president has to do something so bad that 67 senators are willing to find the president guilty.
- Language models are closing the remaining gaps at an amazing rate. There are still a few, but consider what has happened just in the last year, and extrapolate 2-3 years out...
- If the training data had a lot of humans saying "I don't know", then the LLMs would say it too.
Humans don't, and LLMs are essentially trained to resemble most humans.
- Authority, yes, accountable, not so much.
Basically at the level of other publishers, meaning they can be as biased as MSNBC or Fox News, depending on who controls them.
- Wikipedia is one of the better sources out there for topics that are not seen as political.
For politically loaded topics, though, Wikipedia has become increasingly biased towards one side over the past 10-15 years.
- Somebody, or SOMETHING.
There will not be much work that cannot be done by Figure, Optimus, Atlas, Claude, Grok or GPT by 2035.
- With all due respect, this attitude typically comes with age. I see it in myself, too (I'm over 50).
You're right that it's hard to replace those 30+ year old systems, and that part of the reason is that the current devs are not necessarily at the same level as those who built the originals. But at least in part, this is due to survivorship bias.
Plenty of the systems that were built 30-50 years ago HAVE been shut down, and those that were not tend to be the most useful ones.
A more important tell, though, is that you use traditional IT systems as the measuring stick for progress. If you review history, you'll see that what serves as the measuring stick changes over time.
For instance, in the 50s and 60s, the speed of cars and airplanes was a key measuring stick. Today, we don't even HAVE planes in operation that match the SR-71 or Concorde, and car improvements are more incremental and practical than spectacular.
In the 70s and into the 80s, space exploration and flying cars had that role. We still don't have flying cars, and very little happened in space from 1985 until Elon (who grew up in that era) resumed it, based on his dream of going to Mars.
In the 90s, as Gen X'ers (who had grown up with C64s and Amigas) became adults, computers (PCs) were the rage. But over the last 20 years little has happened with the hardware (and traditional software), except that the number of cores per socket has been going up.
In the 2000s, mobile phones were the New Thing, alongside apps like social media, Uber, etc. Since 2015, though, that has been pretty slow, too.
Every generation tends to devalue the breakthroughs that came after they turned 30.
Boomers were not impressed by computers. Many loved their cars, but remained nostalgic about the old ones.
X'ers would often stay with PCs as the Millennials switched to phones only. Some X'ers may still be a bit disappointed that there are no flying cars, no Moon base, and no Mars colony yet (though Elon, an X'er, is working on those).
And now, some Millennials do not seem to realize that we're in the middle of the greatest revolution in human history (or pre-history, for that matter).
And developers (both X'ers and Millennials) in particular seem to resist it more than most. They want to keep their dependable von Neumann computing paradigm, the skills they have been building up over their careers, the source of their pride and their dignity.
They don't WANT AI to be the next paradigm. Instead, they want THEIR paradigm to improve even further. They hold on to it as long as they can get away with it. They downplay how revolutionary it is.
The fact, though, is that every kid today walks around with R2-D2 and C-3PO in their pocket. And production of physical robots has gone exponential, too. A few more years at this rate, and it will be everywhere.
Walking around today, 2025 isn't all that different from 2015. But 2035 may well be as different from 2025 as 2025 is to 1925.
And you say the West is declining?
Well, for Europe (including Russia), this is true. Apart from DeepMind (London), very little happens in Europe now.
Also, China is a competitor now. But so was the USSR a couple of generations ago, especially with Sputnik.
The US is still in the leadership position, though, if only barely. China is catching up, but they're still behind in many areas.
Just like with Sputnik, the US may need to pull itself together to maintain the lead.
But if you think all development has ended, you're like a boomer in 2010 using planes and cars as the measuring stick and concluding that nothing significant has happened since 1985.
- Try to live below that poverty line for a few months, and I'm pretty sure you will understand it.
- > Is this because of a fundamental limitation, or because the post-training sets are currently too small (or otherwise deficient in some way) to induce good thinking patterns?
"Thinking" isn't a singular thing. Humans learn to think in layer upon layer of understandig the world, physical, social and abstract, all at many different levels.
Embodiment will allow them to use RL on the physical world, and this in combination with access to not only means of communication but also interacting in ways where there is skin in the game, will help them navigate social and digital spaces.
- Things have changed since the Reagan era. There are a couple of elements to ICBM defense:
1) If you can strike the ICBMs before the MIRVs separate, you only need a fraction of the number of interceptors. To do this, you need to already have the interceptors (or whatever else is used to shoot them down) in orbit before the ICBMs launch.
Independently of AI, Starship is making it much cheaper to place objects in orbit, which can help with this. (Though it could trigger a first strike if detected; it might be possible to hide interceptors within Starlink satellites, for instance.)
2) Coordination and precision. This is what wasn't in place at all in the 80s. I'm old enough to remember when this was going on and was labelled impossible. I still remember thinking, back then: "This is impossible now, but it will not remain impossible forever."
Whether it involves interceptors already placed in orbit, novel weapons such as lasers (typically also placed in orbit), or interceptors intended to stop reentry vehicles, one faces a coordination problem with time restrictions that is very hard for humans or even traditional computer algorithms to solve properly.
This, more than the volume, was the fundamental showstopper in the 80s (the willingness to pay was pretty significant).
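To give a feel for just the simplest core of that coordination problem, here is a toy sketch (my own; the numbers are invented, and the real problem adds moving targets, launch windows, and degraded communications) that assigns interceptors to targets so total intercept time is minimized, using scipy's Hungarian-algorithm solver:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(0)
    n = 6  # interceptors and incoming targets

    # cost[i, j]: hypothetical time (s) for interceptor i to reach target j
    cost = rng.uniform(30, 300, size=(n, n))

    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
    for i, j in zip(rows, cols):
        print(f"interceptor {i} -> target {j} ({cost[i, j]:.0f} s)")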
Now, with AI tech, plenty of known options open up, and an unknown number of things we haven't thought of yet could open up as well.
Accuracy and coordination are the most fundamental ones. Here AI may help distribute the compute load across satellites and even independent interceptor vehicles (both by making them more autonomous and by improving the algorithms or control systems of the dumber ones).
But beyond that, AI may (if one side achieves a significant lead) also open a path to much cheaper large-scale manufacturing, meaning one could far more easily scale up the volume to match whatever the enemy can deploy. Also, with more advanced tech (enabled by ASI), interceptors can potentially be made much smaller. Even a pebble-sized chunk of metal can stop most rockets, given the velocities in space. The hard part is making it hit the target.
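To put a rough number on the pebble (my own back-of-envelope, with assumed mass and closing speed): a 10 g pebble at a 10 km/s closing velocity carries

    E = \tfrac{1}{2} m v^2 = \tfrac{1}{2}\,(0.01\ \mathrm{kg})\,(10^4\ \mathrm{m/s})^2 = 5 \times 10^5\ \mathrm{J}

which is on the order of the energy in 0.1 kg of TNT, concentrated at a point.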
Basically, what I'm saying is that whoever gets ASI first may, at minimum, gain a window of technological superiority during which the opponent's ICBMs may be rendered more or less obsolete.
In fact, I think Russia's development of the Poseidon was a response to realizing, decades ago, that ICBMs would eventually be counterable.
However, AI tech will possibly be even more suitable for detecting and countering this kind of stealthy threat. Just as it is currently revolutionizing radiology, it will be able to find patterns in data from sonars, radars, satellites, etc. that humans and traditional algorithms have little chance of detecting in time.
- I've heard that war games indicate the US would lose at least a few carriers if they tried to defend Taiwan.
That may be more than worth it if they succeed.
Taiwan is not like Ukraine. As long as TSMC has a monopoly on the latest AI chips, it's at least as important as access to oil.
- Being unpredictable has advantages and also disadvantages, depending on setting.
Though with an AI race going on and Musk practically living in the White House, I can't imagine the US would let China have Taiwan without a fight right now.
Also, forcing TSMC to build a number of modern fabs in the US is sort of a warning to China to stay away AT LEAST until those fabs are done. If China attacks right now, I think we would see the full might of the US forces coming to Taiwan's defense.
AI right now has the same role as nukes had during the cold war. Nobody really knows how quickly it will develop, and many scenarios would allow those who get it first to take out all enemy nukes without much risk of receiving a retaliatory strike.
For instance, AI may make it possible to build a virtually perfect missile defense against ICBMs, it may allow perfect tracking of subs and other underwater threats, and it may power drone swarms capable of disabling any integrated air defense network, and even of destroying all enemy missile silos and nuclear subs while minimizing loss of life.
The US is not going to let China get there first, if they can stop it.
- Asking what charge, momentum, or energy (or other conserved quantities like QCD color) ARE in Physics basically boils down to identifying something that is invariant under some symmetry.
Momentum and energy may feel more intuitive, but I'm not sure they really are, especially within QM.
I'm not sure if we have any deeper explanations than these symmetries.
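For reference, the formal statement behind this is Noether's theorem, here in its simplest Lagrangian form (a standard textbook result, not something specific to this thread):

    % If the Lagrangian L(q, \dot{q}, t) is invariant under a continuous
    % transformation q -> q + \epsilon\,\delta q, this quantity is conserved:
    \frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}}\,\delta q\right) = 0
    % Time-translation symmetry gives energy, spatial translation gives
    % momentum, and a global U(1) phase symmetry gives electric charge.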
Btw, questions of the form "What is X?" can have this kind of problem in any domain, especially if we expect an answer that is both intuitive and provides an essentialist explanation.
For instance: "What is consciousness?", "What is intelligence?", "What is the meaning of life?"
What I've come to think is that these questions stem from the same type of mistake:
As any physicist would know, the world as described by Physics and the world as we intuitively come to understand it as small children are quite different, especially at scales far removed from our senses (like in QM or cosmology).
Humans simply don't have access to the full extent of reality, nor would our brains be able to do anything useful with it if we did, since we have nowhere near the processing power to comprehend it.
What we're always stuck in is an inner world model that is some kind of rough representation of the outside world. Now let's assume the outside world actually EXISTS, even if we don't know all that much about it. Physics is just a hint of this mismatch.
If we simply let go of the assumption that there is a close correspondence between our internal model of the world and the actual world, we no longer have an obligation to form strict correspondences between objects within our internal, simplified simulation and the outside world.
Now we're prepared for the next step: to understand that there probably is a REASON why we have this internal representation. It's there for evolutionary purposes; it helps us act in the world. Even concepts that do not have a 1:1 correspondence with something in the physical world may very well correspond to aspects of the world we're simply not able to comprehend otherwise. For instance, fully understanding what "consciousness" represents (how it emerges) may not even be possible without extreme amounts of computational power (the computation may be irreducible).
Concepts like charge are similar, except that we DO (through some advanced math) have some ability to build mental models that DO (perhaps) capture what gives rise to them in the physical world.
But it still will not map onto our intuition in a way that gives us the feeling of "understanding" what it "is". It's like being told that "consciousness is an emergent property of sufficiently large-scale computational systems that build world models including themselves": it still doesn't correspond to how we "feel" that consciousness "is".
But if we simply stop insisting on full correspondence between our intuitive representation of the world and the "real" one (or rather, the one represented through accumulated scientific knowledge), and instead realize that the intuition MAY still be useful, we not only avoid the stress of the disconnect, we even allow ourselves to bring concepts (like "free will") back into our intuitive world model without worrying about whether they're "real".
This provides two benefits:
1) We are "allowed" to use concepts that we know are not 100% accurate representations, and even have good reason to believe they're fairly useful simplifications of aspects of the world that ARE real, but too complex for us to grasp (like QM charge for a 5-year-old).
2) As opposed to idealists (who think the inner word is primary), we don't fall into the trap of applying those concepts out of context. Many idealist philosophies and ideologies can fail catastrophically by treating such simplified ideas as fundamental axioms from which they can deduce all sorts of absurdities.
- If you define "junior" based mostly on age, then LLM's aren't yet at the level of a good "junior".
If you base it on ability, then an LLM can be be more useful to a good developer than 1 or more less competent "junior" team members (regardless of their age).
Not because it can do all the things like any "junior" can (like make coffee), but because the things it can do on top of what a "junior" can do, more than makes up for it.
- A very small percentage of orgs, a not-as-small percentage of developers, and at the higher end of the value scale, the percentage is not small at all.
- A lot has been written about this. The jury system has many purposes, but I don't think protection against individual subjective or corrupt judges is among the most important. (If anything, judges are much more likely to be objective than jurors.)
Rather, juries are generally a mechanism to prevent overreach by the executive (primarily) but also by other branches of government (including the legislative). Not necessarily by going directly against laws (though that could also happen), but, for instance, by identifying laws that contradict the principles of the constitution (or other "deeper" laws, for that matter).
Jefferson wrote: "Another apprehension is that a majority cannot be induced to adopt the trial by jury; and I consider that as the only anchor, ever yet imagined by man, by which a government can be held to the principles of it’s constitution." [1]
Now, a liberal interpretation of this is that a jury has an independent power, possibly even a duty, to disregard laws it considers to be against the legal basis (constitution, legal traditions, etc.) of a country, basically overruling the legislature in such cases.
In fact, this type of thinking is probably a big part of the reason why the SCOTUS will not, and cannot, override many aspects of jury verdicts. Specifically, even the SCOTUS cannot overturn an acquittal.
In other cases (such as when the SCOTUS thinks the jury has violated the constitution, due process hasn't been followed, etc.), it will instead invalidate a decision, which can lead to the case being dismissed or to a retrial, depending on the details.
[1]: https://founders.archives.gov/documents/Jefferson/01-15-02-0...
- During Watergate, Congress was still aware of its role as a counterweight to the executive. Nixon would likely have been convicted if he hadn't resigned first; at least he must have thought so, since he resigned.
But since then, Congress has become more and more partisan, with less and less ability to act together on important issues. This was particularly obvious in all three impeachment processes since then. In all three cases, impeachment was pursued without the bipartisan basis needed for a conviction, basically just to achieve short-term political gain.
Like the boy who cried wolf, each repetition lowers the probability that people will take it seriously the next time.
And when the day comes that a president does something that really requires a bipartisan conviction in an impeachment, Congress may be so used to voting along party lines that conviction becomes impossible.
And maybe worse: presidents may even begin to consider such a conviction an impossibility, and act with fewer inhibitions.
- In most cases, I would think the SCOTUS would point to Congress in a situation like this.
It's Congress's role to step up and provide checks and balances for a president who goes off the rails.
As long as a president has enough support from their fellow party members in Congress to be immune to impeachment, I doubt the SCOTUS would step in.
Unless, perhaps, the actions of the president were bad enough to pose an imminent risk to the whole system of government, like pardoning people who (provably) organize large-scale voting fraud, etc.
- Don't you think this is precisely why there is a jury system in the first place?
It seems that most legal systems were, from the start, intended to codify what was considered "just and fair" in the eyes of "the people".
Juries seem to have been put in place specifically to ensure that the legal system operates within this mandate, and to prevent overreach or abuse.
- For instance, the pattern the brain seeks to optimize when learning to walk may be much smaller than the full algorithm for walking.
And if the brain learns quickly enough (and if a newborn animal starts learning elements such as balance and leg movement before even being born), walking may be learned in minutes instead of months.