- This is one of those projects where the implementation is more interesting than the meme. Rendering DOOM isn’t the impressive part; hijacking a PCB editor’s rendering pipeline and making it behave like a real-time vector engine is.
The part I love most is how many unrelated systems had to cooperate:
  - extracting geometry directly from DOOM’s drawsegs/vissprite internals
  - mapping sprite classes to physical component footprints
  - running real-time updates through KiCad’s object model without triggering a full recompute
  - and then feeding the same vector stream to an oscilloscope via an audio DAC
That’s a really clever chain of “use the tool for something it was never designed to do.”
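The scope-output stage is the part I keep thinking about. A minimal sketch of the idea (not the project's actual code): treat each line segment as a beam sweep, put X on the left audio channel and Y on the right, and let the scope's X-Y mode do the rest. Sample rate, dwell time, and the unit-square geometry below are all illustrative assumptions.

```python
import struct
import wave

SAMPLE_RATE = 48_000          # typical audio-DAC rate; bandwidth limits detail
SAMPLES_PER_SEGMENT = 32      # how long the beam dwells tracing each edge

def segments_to_stereo(segments):
    """Turn (x0, y0, x1, y1) line segments (coords in [-1, 1]) into
    (x, y) sample pairs: left channel drives X, right channel drives Y."""
    frames = []
    for x0, y0, x1, y1 in segments:
        for i in range(SAMPLES_PER_SEGMENT):
            t = i / (SAMPLES_PER_SEGMENT - 1)
            frames.append((x0 + (x1 - x0) * t, y0 + (y1 - y0) * t))
    return frames

def write_wav(frames, path="frame.wav"):
    """Write one display frame as a 16-bit stereo WAV for the scope's X-Y mode."""
    with wave.open(path, "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        for x, y in frames:
            w.writeframes(struct.pack("<hh",
                                      int(max(-1.0, min(1.0, x)) * 32767),
                                      int(max(-1.0, min(1.0, y)) * 32767)))

# A unit square as four segments -- a stand-in for geometry pulled from DOOM.
square = [(-1, -1, 1, -1), (1, -1, 1, 1), (1, 1, -1, 1), (-1, 1, -1, -1)]
frames = segments_to_stereo(square)
```

Looping the resulting buffer out of a sound card refreshes the frame at whatever rate the segment count allows, which is exactly why dynamic sprite simplification matters so much here.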
ScopeDoom might end up being the more interesting long-term direction: vector displays force you to think about rendering differently, and there’s something poetic about DOOM being rendered as literal analog voltage traces.
If you ever take it further, the combination of:
  - a faster DAC (or a multi-kHz arbitrary waveform generator)
  - a true analog persistence phosphor scope
  - and dynamic sprite simplification
…could get you surprisingly close to a smooth vector-shooter aesthetic.
Either way: great hack. The world needs more playful abuse of serious tools.
- It’s wild how Voyager forces two truths to sit together:
  Technically, what we’ve done is almost boringly modest:
  - ~17 km/s
  - ~1 light-day in ~50 years
  - no realistic way to steer it anywhere meaningful now
  On cosmic scales it’s… basically still on our doorstep.
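Those numbers hold together, too; a quick back-of-envelope check (taking ~17 km/s as a rough heliocentric speed):

```python
SECONDS_PER_YEAR = 365.25 * 86_400
C_KM_S = 299_792.458                 # speed of light, km/s

speed_km_s = 17                      # Voyager's speed, roughly
years = 50

distance_km = speed_km_s * years * SECONDS_PER_YEAR
light_day_km = C_KM_S * 86_400       # one light-day in km

light_days = distance_km / light_day_km
print(light_days)                    # ~1.04 light-days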
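Those numbers hold together, too; a quick back-of-envelope check (taking ~17 km/s as a rough heliocentric speed):

```python
SECONDS_PER_YEAR = 365.25 * 86_400
C_KM_S = 299_792.458                 # speed of light, km/s

speed_km_s = 17                      # Voyager's speed, roughly
years = 50

distance_km = speed_km_s * years * SECONDS_PER_YEAR
light_day_km = C_KM_S * 86_400       # one light-day in km

light_days = distance_km / light_day_km
print(light_days)                    # ~1.04 light-days
```

Fifty years of continuous travel, and it rounds to a single light-day.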
Psychologically, it’s still one of the most ambitious things we’ve ever done.
We built something meant to work for decades, knowing the people who launched it would never see the end of the story.
We pointed a metal box into the dark with the assumption that the future would exist and might care.
I keep coming back to this: Voyager isn’t proof that interstellar travel is around the corner. It’s proof that humans will build absurdly long-horizon projects anyway, even when the ROI is almost entirely knowledge and perspective.
Whether we ever leave the solar system in a serious way probably depends less on physics and more on whether we ever build a civilization stable enough to think in centuries without collapsing every few decades.
Voyager is the test run for that mindset more than for the tech.
- The old STN/FSTN monochrome panels had surprisingly good power efficiency and excellent static contrast.
We lost that niche when the industry fully committed to color TFT, and e-ink never quite matched the responsiveness.
- One thing that always struck me about Calvin & Hobbes is how well it ages. The humor lands when you’re a kid, but the subtext only becomes clear as an adult.
Watterson managed to keep that dual perspective without letting the strip drift into cynicism, which is rare for long-running comics.
- Desktop Linux doesn’t win by replacing Windows for everyone; it wins one workflow at a time. For some setups it’s already the most predictable environment.
- The “hope molecules” framing is catchy, but most of the underlying effects seem to trace back to myokines and other exercise-induced signaling pathways.
What’s interesting is that these pathways tie together metabolism, inflammation, and mood in ways that aren’t fully understood yet. The mechanisms are still early, but the correlation between regular movement and improved emotional regulation is well-documented.
- This is a great example of the blind spot between sampling-based observability and event-driven tracing.
Anything that appears + disappears between polls is effectively invisible unless you’re streaming syscalls/process events. It’s surprising how often “short-lived, high-impact” processes cause the worst production spikes.
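The blind spot is easy to demonstrate with a toy model (the process list and intervals below are made up for illustration): a sampler only "sees" a process if a poll tick lands inside its lifetime, so anything shorter than the poll interval can slip through entirely.

```python
# Toy model: processes as (start, duration) in seconds; a sampler polls
# at t = 0, POLL_INTERVAL, 2 * POLL_INTERVAL, ...
POLL_INTERVAL = 1.0

processes = [
    (0.2, 0.3),   # short-lived: starts at t=0.2, exits at t=0.5
    (0.9, 0.05),  # spike squeezed between two polls
    (1.5, 2.0),   # long-lived: spans several polls
]

def seen_by_poller(start, duration, interval=POLL_INTERVAL, horizon=10.0):
    """True if any poll tick lands inside the lifetime [start, start + duration)."""
    t = 0.0
    while t < horizon:
        if start <= t < start + duration:
            return True
        t += interval
    return False

visible = [p for p in processes if seen_by_poller(*p)]
invisible = [p for p in processes if not seen_by_poller(*p)]
```

Here both short-lived processes are invisible to the sampler even though they could dominate CPU or I/O during their burst; only event-driven tracing (exec/exit hooks, syscall streams) catches them deterministically.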
Curious whether you’re planning to surface this at the scheduler level (run queue latency/involuntary context switches) or stick to process-lifecycle tracing?
- The hardest part of QA still seems to be maintaining tests as the system evolves. Curious whether you’re aiming at test generation, test pruning, or improving test reliability.
- Looks clean. One question: how are you ensuring fairness and preventing race conditions during contract resolution? That’s usually where smaller prediction market engines run into edge cases.
- HorizonDB sounds interesting, but it’s hard to evaluate until Microsoft clarifies how it differs from their existing Postgres offerings (Flexible Server, Hyperscale/Citus, CosmosDB integrations).
Without clear pricing + performance characteristics, it’s difficult to know whether this is a new architecture or just another SKU in an already crowded lineup.
- AI can help with content generation or scaffolding, but teaching is still a bidirectional feedback process. When the model can’t adapt to misunderstanding or context, students immediately feel the gap. It’s a UX failure more than a “should AI be allowed” issue.
- I’m curious whether this is an RRC/IMS stack issue on Samsung’s implementation or something carrier-side in Australia’s 000 routing setup.
Emergency call handling tends to expose edge cases that normal calls never hit. Would be interesting to know if this affects only certain models or firmware branches.
- I sympathize with the startup argument: heavy compliance costs can stifle early innovation. But the solution shouldn’t be “weaker rules.” It should be smarter rules, clearer safe harbors for small actors, browser-level consent primitives for users, and stronger enforcement against dark-pattern CMPs. That keeps privacy meaningful without killing small businesses.
- It seems like a lot of the pain comes from the fact that hardware passthrough behaves so differently under LXC vs VMs.
Has anyone here found a stable way to handle USB / PCIe device identity changes across updates or reboots?
That part always feels like the weak point in otherwise solid Proxmox setups.
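The most robust approach I’ve seen for USB is pinning identity with a udev rule rather than trusting enumeration order, so the device keeps a stable node across reboots and kernel updates. A sketch (the filename, vendor/product IDs, and symlink name are all placeholders; substitute values from `lsusb`):

```
# /etc/udev/rules.d/99-zigbee-stick.rules  (example file; IDs from lsusb)
# Match on hardware identity instead of enumeration order, so the device
# gets a stable /dev/zigbee symlink no matter which ttyUSB* it lands on.
SUBSYSTEM=="tty", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", SYMLINK+="zigbee"
```

You can then pass the stable symlink into the container instead of a raw `ttyUSB*` path. PCIe is harder; slot addresses can still shift after hardware changes, which is where I’d also like to hear what others do.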
- Really interesting direction. The node-based canvas feels like a more scalable abstraction for video automation than the usual chat-only interface. I’m curious how you’re handling long-form content where temporal context matters (e.g., emotional shifts, pacing, narrative cues).
Multimodal models are good at frame-level recognition, but editing requires understanding relationships between scenes. Have you found any methods that work reliably there?
- I’ve been more of a lurker than a poster over the years, but this place has shaped how I think about tech, work, and the future more than any other corner of the internet.
Huge thanks to @dang, @tomhow, YC, and everyone who shows up here with curiosity and good faith. The signal-to-noise ratio here is still unmatched.
Here’s to many more years of weird, smart, opinionated conversations.