- florilegiumson: I like the idea of reimagining the whole stack so as to make AI more productive, but why stop at languages (as x86 asm is still a language)? Why not the operating system? Why not the hardware layer? Why not LLM-optimized Verilog, or an AI-tuned HDL?
- If AI really is likely to cause a mass extinction event, then non-proliferation becomes critical, as it was with nuclear weapons. Otherwise, what does it really mean for AI to "replace people," beyond people needing to retool or socially awkward people having to learn to talk to others better? AI will surely change a lot, but I don't understand the steps needed to get to the highly existential threat that has become a cliché in every "Learn CLAUDE/MCP" ad I see. A period of serious unemployment, sure, but this article is talking about population collapse, as if we are all only being kept alive and fed to increase shareholder value for people several orders of magnitude more intelligent than us, and with more opposable thumbs. Do people think 1.2B people are going to die because of AI? What is the economy but people?
- Thank you: these are excellent.
- Really cool project. I love the animations that go with the songs.
I’d go through all of the chord progressions and make sure they actually match what is being played. There are quite a few errors. Happens to everyone.
Also, you and everyone else should remember that while the band is mostly playing power chords and omitting the thirds, what Cobain sings is part of the chord as it’s heard. This means that a lot of the songs do sound major, but Smells Like Teen Spirit is probably in F minor.
I find determining key in popular music tricky. Most progressions consist of something like four chords, and there isn’t the teleology you see in something like Tin Pan Alley or Chopin to give the sense of where one is meant to arrive. Even the Axis of Awesome progression can be heard as major or minor depending on how you end the song.
- Really cool to see GPUs applied to sound synthesis. I didn’t realize that all one needed to do to keep up with the audio thread was to batch computations at the audio buffer size. I’m fascinated by the idea of doing the same kind of thing for continua in the manner of Stefan Bilbao: https://www.amazon.com/Numerical-Sound-Synthesis-Difference-...
Although I wonder if mathematically it’s the same thing …
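To make the batching idea concrete, here is a minimal sketch in plain NumPy (sample rate and buffer size are my assumptions, and a real GPU version would do the same per-batch math on the device): synthesize one buffer-sized block of samples at a time, carrying phase across blocks so the audio callback never waits on per-sample work.

```python
import numpy as np

SR = 48_000   # sample rate, assumed
BLOCK = 512   # audio callback buffer size, assumed

def sine_block(freq, phase, n=BLOCK, sr=SR):
    """Synthesize one buffer-sized batch of sine samples.

    Returns the block and the phase to start the next block from,
    so consecutive blocks are continuous.
    """
    t = np.arange(n)
    block = np.sin(phase + 2.0 * np.pi * freq * t / sr)
    new_phase = (phase + 2.0 * np.pi * freq * n / sr) % (2.0 * np.pi)
    return block.astype(np.float32), new_phase

# Usage: each audio callback asks for exactly one block.
phase = 0.0
buf, phase = sine_block(440.0, phase)
```

The point is that the synthesis loop runs once per buffer, not once per sample, which is what makes offloading each batch to a GPU feasible.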
- L-systems were proposed for music even earlier. Here's a link to an article from 1986: https://quod.lib.umich.edu/cgi/p/pod/dod-idx/score-generatio...
It definitely is not a glorified PRNG. The idea is that you can create patterns that have both variety and repetition within them. I don’t like the results, generally, but they are not random.
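A tiny illustration of why it isn’t random (the rewriting rules are the classic Lindenmayer "algae" system, and the pitch mapping is my own invented example, not the 1986 paper’s): the output is fully deterministic, yet each expansion contains repeated motifs at several scales alongside new material.

```python
# Classic deterministic L-system: A -> AB, B -> A.
RULES = {"A": "AB", "B": "A"}

def expand(axiom, steps):
    """Apply the rewrite rules `steps` times to the axiom string."""
    s = axiom
    for _ in range(steps):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

# Hypothetical symbol-to-pitch mapping, just to show the musical reading.
PITCHES = {"A": "C4", "B": "E4"}
melody = [PITCHES[ch] for ch in expand("A", 5)]
```

Running `expand("A", 5)` always yields the same string, and earlier expansions reappear as prefixes of later ones, which is exactly the variety-plus-repetition property: structured, self-similar, and not random at all.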
- The author is right that there is nothing new about making music with AI. However, earlier uses of AI were for symbol manipulation, whereas currently AI has the potential to be a new kind of sound synthesis method. I’ve heard demos where sounds come from these interstitial regions of latent space and so it sounds like I’m listening to two things at once. I wonder if quantum computers will have the ability to do something similarly freaky.
It’s really cool to use quantum computers to compose music, but I’d love to see them used for things other than control of “frequency modulation (FM), additive synthesis, and granular synthesis.”