- > "you'll get lots of exposure!"
"Backed by the White House"
I don't think this is the kind of exposure most people are going to want, nor will they want this on their resume.
- There's a new serif in town.
- > It's really funny how much better the AI is at writing python and javascript than it is C/C++. For one thing it proves the point that those languages really are just way harder to write.
I have not found this to be the case. I mean, yeah, they're really good with Python and yeah that's a lot easier, but I had one recently (IIRC it was the pre-release GPT5.1) code me up a simulator for a kind of a microcoded state machine in C++ and it did amazingly well - almost in one-shot. It can single-step through the microcode, examine IOs, allows you to set input values, etc. I was quite impressed. (I had asked it to look at the C code for a compiler that targets this microcoded state machine in addition to some Verilog that implements the machine in order for it to figure out what the simulator should be doing). I didn't have high expectations going in, but was very pleasantly surprised to have a working simulator with single-stepping capabilities within an afternoon all in what seems to be pretty-well written C++.
- > I have successfully vibe-coded features in C. I still don't like C.
Same here. I've been vibe-coding in C for the sake of others in my group who only know C (no C++ or Rust). And I have to say that the agent did do pretty well with memory management. There were some early problems, but it was able to debug them pretty quickly (and certainly if I had had to dig into the intricacies of GDB to do that on my own, it would've taken a lot longer). I'm glad that it takes care of things like memory management and dealing with strings in C (things that I do not find pleasant).
- But I have been vibe coding in C. Created a parser/compiler for a subset of the C programming language that compiles to microcode for a novel computing architecture. Could I have done this on my own? Sure, but if I did I would've probably have done it in OCaml. And it would've taken me a lot longer to get it to the point where it is now. I think the advantage of vibe coding this (at least for me) is that I would have a hard time getting started due to procrastination - and I'd have a hard time keeping interested if there wasn't something working (yeah, maybe I'm a little ADHD, but aren't we all at this point?). Vibe coding it got me to something that was working pretty well in a surprisingly short amount of time which tended to make me more engaged without losing interest and attention. I didn't have to get caught up in remembering the intricacies of creating a makefile to build the code, for example. That's one of many places where I can get bogged down.
- So basically bring a burner phone.
- As is mentioned in the article, depends on when they bought their DRAM contracts. If they were in before this then they'll be fine for a while.
- Or the Hunt brothers and silver which was just a few years before that.
How'd that turn out? https://en.wikipedia.org/wiki/Silver_Thursday#:~:text=On%20J...
- The current Justice Department? You're kidding, right?
- Altman was already unpopular. After this will he be able to show his face in Silicon Valley?
- I think it would be more like 5-7 years from now if they started breaking ground on new fabs today.
- Or maybe models that are much more task-focused? Like models that are trained on just math & coding?
- Even their TPU based systems need RAM.
- I feel like I can get all of that for free already. Not sure why I would pay a monthly subscription when I'm already getting Gemini across the Google ecosystem.
- Wondering if we're going to have a situation in the future where we end up having to buy the hand-me-downs from industry after they're done with them (and thus kind of outdated tech)? Kind of seems like the days of building your own PC are numbered.
- TPUs are also cheaper because GPUs need to be more general purpose whereas TPUs are designed with a focus on LLM workloads meaning there's not wasted silicon. Nothing's there that doesn't need to be there. The potential downside would be if a significantly different architecture arises that would be difficult for TPUs to handle and easier for GPUs (given their more general purpose). But even then Google could probably pivot fairly quickly to a different TPU design.
- Right, and the inevitable bubble pop will just slow things down for a few years - it's not like those TPUs will suddenly be useless, Google will still have them deployed, it's just that instead of upgrading to a newer TPU they'll stay with the older ones longer. It seems like Google will experience much less repercussions when the bubble pops compared to Nvidia, OpenAI, Anthropic, Oracle etc. as they're largely staying out of the money circles between those companies.
- But the thing is, the Voyager project came about in a much more stable period for the US - and in a more optimistic cultural climate (we could say similar for the Apollo project which wasn't that much earlier). When we used to prioritize spending on basic science and projects like this that basically had no ROI (NASA didn't even much think in those terms back in the 70s). Now we're in a very different place where, in the US anyway, we're very pessimistic about the future. To create a Voyager project you have to have some hope, like you said "with the assumption that that future would exist and might care" - now we're in a very different place where people don't have a lot of hope about the future. And it's also different in that we now ask "what's the payback going to be?" - everything now seems to need to pay it's way.
Not saying that other countries won't be able to do stuff like this - probably China is going to take the position that the US used to hold for this kind of exploration. It seems to be a more optimistic culture at this point, but hard to say how long that lasts.
- > Some things had to be done in "adversarial" mode where Claude coded and Codex criticized/reviewed
How does one set up this kind of adversarial mode? What tools would you need to use? I generally use Cline or KiloCode - is this possible with those?
There was story like this on NPR recently where a professor used this method to weed out students who were using AI to answer an essay question about a book the class was assigned to read. The book mentioned nothing about Marxism, but the prof inserted unseeable text into the question such that when it was copy&pasted into an AI chat it added an extra instruction to make sure to talk about Marxism in relation to this book (which wasn't at all related to Marxism). When he got answers that talked extensively about the book in Marxist terms he knew that they had used AI.