
BanditDefender
Karma: 5

  1. There is quite a bit of bait-and-switch in AI, isn't there?

    "Oh, machine learning certainly is not real learning! It is a purely statistical process, but perhaps you need to take some linear algebra. Okay... Now watch this machine learn some theoretical physics!"

    "Of course chain-of-thought is not analogous to real thought. Goodness me, it was a metaphor! Okay... now let's see what ChatGPT is really thinking!"

    "Nobody is claiming that LLMs are provably intelligent. We are Serious Scientists. We have a responsibility. Okay... now let's prove this LLM is intelligent by having it take a Putnam exam!"

    One day AI researchers will be as honest as other researchers. Until then, Demis Hassabis will continue to tell people that MuZero improves via self-play. (MuZero is not capable of play and never will be.)

  2. LLMs aren't actually able to do that, though, are they? They are simply incapable of keeping track of consistent behaviors and beliefs. I recognize that certain prompts require an LLM to do it. But as long as we're using transformers, it'll never actually work.
  3. It is better to say humans formalized it :) All birds and mammals are capable of arithmetic in the sense of quantitative reasoning. E.g. a rat quickly learns that after being shown a plate with two rocks and a plate with three rocks, picking the plate with five rocks (rather than some other count) earns it a treat. That is to say rats understand addition intuitively, even if they can't write large numbers the way humans can.

    Too many AI people are completely uninterested in how rats are able to figure stuff like that out. It is not like they are being prompted; they are being manipulated.

  4. > Animal evolution operates on generational timescales, but LLM "commercial evolution" happens in months.

    But LLMs have all been variations on transformer neural networks. And that is simply not true of animals. A nematode brain has about 300 neurons; a bee has nearly a million. But the bee's individual neurons are much more sophisticated than the nematode's. Likewise between insects and fish, between fish and birds, between rodents and primates...

    Animal evolution also includes "architectural breakthroughs", but that's not happening with LLMs right now. "Attention Is All You Need" came out in 2017, and we've been fine-tuning that paper ever since. What we need is a new historically important paper.

  5. I think it goes way too far to say that!

    We've had automated theorem proving since the 60s. What we need is automated theorem discovery. Erdős discovered these theorems even if he wasn't really able to prove them. Euler and Gauss discovered a ton of stuff they couldn't prove. It is weird that nobody considers this to be intelligence. Instead intelligence is a little game AI plays with Lean.

    AI researchers keep trying to reduce intelligence to something tiny and approachable, like automated theorem proving. It's easy: you write the theorem you want proven and hope you get a proof. It works or it doesn't. Nice and benchmarkable. (There's a small Lean sketch of this loop after this list.)

    Automated axiom creation seems a lot harder. How is an LLM supposed to know that "between any two points there is a line" formalizes an important property of physical space? Or how is it supposed to suggest an alternative to Turing machines / lambda calculus that expresses the same underlying idea?

  6. "Mathematicial superintelligence" is so obnoxious. Why exactly do they think it is called an Erdős problem when Erdős didn't find the solution? Because Erdős discovered the real mathematics: the conjecture!

    These people treat math research as if it is a math homework assignment. There needs to be an honest discussion about what the LLM is doing here. When you bang your head against a math problem you blindly try a bunch of dumb ideas that don't actually work. It wastes a lot of paper. The LLM automates a lot of that.

    It is actually pretty cool that modern AI can help speed this up and waste less paper. It is very similar to how classical symbolic AI sped up math research and wasted less paper. But we need to have an honest discussion about how we are using the computer as a tool. Instead malicious idiots like Vlad Tenev are making confident predictions about mathematical superintelligence. So depressing.

  7. Those aren't mutually exclusive: stimulus and bodily perception enable higher-level thoughts about the physical world. Once I was driving a big cheap pickup with a heavy load on an interstate, and a rear tire blew out violently, making the truck sway hard. I operated entirely by feel + my 3D mental model of a moving truck to work out what went wrong, where, and how to pull over safely. It was too fast and too difficult for any stupid words to get in the way.

    I am glad humans are meaningfully smarter than chimps, and not merely more vocal. Helen Keller herself seemed to think that learning language finally helped her understand what this weird language thing was:

      I stood still, my whole attention fixed upon the motions of her fingers. Suddenly I felt a misty consciousness as of something forgotten—a thrill of returning thought; and somehow the mystery of language was revealed to me. I knew then that w-a-t-e-r meant the wonderful cool something that was flowing over my hand. The living word awakened my soul, gave it light, hope, set it free!
    
    It is not like she was constantly dehydrated because she didn't understand what water was. She realized even a somewhat open-ended concept like "water" could be given a name by virtue of being recognizable via stimulus and bodily perception. That in and of itself is quite a high-level thought!
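
    To make the point in comment 5 concrete, here is a minimal Lean 4 sketch of the "write the theorem, hope for a proof" loop versus axiom creation. It is only illustrative: Nat.add_comm is a real core lemma, but the theorem name and the Point / Line / lineThrough declarations are hypothetical, not taken from any actual benchmark.

      -- The "nice and benchmarkable" part: a human writes the statement,
      -- and the prover's (or LLM's) only job is to fill in the proof.
      theorem add_comm' (a b : Nat) : a + b = b + a := by
        exact Nat.add_comm a b   -- the machine either finds this step, or it doesn't

      -- The part nobody benchmarks: deciding which statements are worth
      -- writing down at all. Something like Euclid's postulate has to be
      -- invented before anything can be proven from it.
      axiom Point : Type
      axiom Line : Type
      axiom lineThrough : Point → Point → Line  -- "between any two points there is a line"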

