
hadlock
karma: 3,164
https://github.com/hadlock

  1. I noticed I am not hitting limits either. My guess is OpenAI sees CC as a real competitor/serious threat. Had OAI not given me virtually unlimited use, I probably would have jumped ship to CC by now. Burning tons of cash at this stage is likely Very Worth It to maintain "market leader" status, if only in the eyes of the media/investors. It's going to be real hard to claw back current usage limits, though.
  2. We call ours "bombing-range"

    We maintain an internal service that hosts two endpoints: /random-cat-picture (a random >512KB image plus a UUID and text timestamp to defeat caching) and /api/v1/generic.json, which lets developers and platform folks test new ideas end-to-end, from commit to deploy behind a load balancer. It has saved countless headaches over the years.
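A smoke-test service like this mostly comes down to making every response unique. A minimal sketch of the two payload generators (the function names and padding scheme are my own invention, not the actual service):

```python
import json
import time
import uuid

def cat_picture_response(image_bytes: bytes) -> bytes:
    """Pad the image past 512KB and append a UUID + timestamp so no
    two responses are byte-identical, defeating intermediate caches."""
    padding = b"\0" * max(0, 512 * 1024 + 1 - len(image_bytes))
    stamp = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    marker = f"{uuid.uuid4()}|{stamp}".encode()
    return image_bytes + padding + marker

def generic_json_response() -> str:
    """Trivially parseable payload for commit-to-deploy smoke tests."""
    return json.dumps({"ok": True, "id": str(uuid.uuid4()), "ts": int(time.time())})
```

Hitting either endpoint from a freshly deployed build exercises the whole path (DNS, load balancer, app, serialization) in one request.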

  3. Lots of systems are "fine" until they aren't. As you pointed out, Jenkins being super-customizable means it isn't strongly opinionated, and there is plenty of opportunity for a well-meaning developer to add several foot-guns with some simple point-and-click in the GUI. Or the worst-case scenario: cleaning up someone else's Jenkins mess after they leave the company.

    Contrast with a declarative system like GitHub Actions: "I would like an immutable environment like this, then perform X actions and send the logs/report back to the centralized single pane of glass in GitHub". Google's Cloud Run product is pretty good in this regard as well. Sure, developers can add foot-guns to your GHA/Cloud Run workflow, but since it is inherently git-tracked, you can simply revert those atomically.

    I used Jenkins for 5-7 years across several jobs and I don't miss it at all.
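The declarative shape described above looks roughly like this in a GitHub Actions workflow (a generic sketch, not any particular project's config; the `make test` step is a placeholder):

```yaml
# .github/workflows/ci.yml
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest    # immutable, GitHub-hosted environment
    steps:
      - uses: actions/checkout@v4
      - run: make test        # "perform X actions"; logs land in the GitHub UI
```

Because the workflow file lives in the repo, a foot-gun someone adds is one git revert away from being gone.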

  4. GitHub being a single pane of glass for developers with a single login is pretty powerful. GitHub hosting the runners is also pretty useful; ask anyone who has had to actually manage and scale runners what their opinion of Jenkins is. Being a "Jenkins farmer" is a thankless job that means a lot of on-call work to fix the build system at 2am on a Sunday. Paying a small monthly fee is absolutely worth it to rescue the morale of your infra/platform/devops/SRE team.

    Nothing kills morale faster than wrenching on the unreliable piece of infrastructure everyone hates. Every time I see an alert in Slack that GitHub is having issues with Actions (again), all I think is, "I'm glad that isn't me," and go about my day.

  5. Yes, I extracted the physics engine from MS Flight Simulator 3.0 (C) and ported it into my own project (Rust) using Ghidra, as a complete novice: from never having opened the app to working Rust code in just over three hours. It helped a lot that I have previous experience writing my own similar software, so I knew what to start looking for; also, MS FS 3.0 is only about 9,500 LOC, much of it graphics.

    But yeah, Codex will totally hold your hand and teach you Ghidra if you have a few hours to spare and the barest grasp of assembly.

  6. Giving the LLM access to Ghidra so it can directly read and iterate through the Sudoku puzzle that is a decompiled binary seems like a good one. Ghidra has a CLI mode and various bindings, so you can automate decompiling binaries. For example, right now if you want to isolate the physics step of Microsoft Flight Simulator 3.0, Codex will hold your hand and walk you through (over the course of 3-4 hours, using the GUI) finding the main loop and making educated guesses about which decompiled C functions in there are likely physics-related. But it would be a lot easier to just give it the "Ghidra" skill and say, "isolate the physics engine and export it as a portable cargo package in Rust". If you're an NSA analyst, you can probably use it to disassemble and isolate interesting behavior in binaries from state actors a lot faster.
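Ghidra's CLI mode mentioned above is the headless analyzer, which an agent could drive with something like the following (paths, the binary name, and the post-script are placeholders; treat this as a sketch rather than a recipe):

```shell
# Import a binary into a throwaway Ghidra project, run a post-analysis
# script (ExportDecompiled.py is a hypothetical script that dumps the
# decompiled C for each function), then delete the project.
$GHIDRA_HOME/support/analyzeHeadless /tmp/proj demo \
    -import ./fs3.exe \
    -postScript ExportDecompiled.py \
    -deleteProject
```

Since the output is plain text on disk, an LLM can grep and read it the same way it reads any source tree.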
  7. I'm excited to use this with the Ghidra CLI mode to rapidly decompile physics engines from various games. Do I want my flight simulator to behave like the Cessna in Flight Simulator 3.0 in the air? Codex can already do that. Do I want the plane to handle like Yoshi from Mario Kart 64 when taxiing? That hasn't been done yet, but Claude Code is apparently pretty good at pulling apart N64 ROMs, so it seems within the realm of possibility.
  8. >vibe-coding

    A surprising amount of programming is building cardboard services or apps that only need to last six months to a year and are then thrown away when temporary business needs change. Execs are constantly clamoring for semi-persistent dashboards and visualized ETL data that lasts just long enough to rein in the problem and move on to the next fire. Agentic coding is good enough for cardboard services that collapse when they get wet. I wouldn't build an industrial data lake service with it, but you can certainly build cardboard consumers of the data lake.

  9. He says in his article:

    >Is C the ideal language for vibe coding? I think I could mount an argument for why it is not, but surely Rust is even less ideal.

    I've been using Rust with LLMs for a long time now (since mid-2023?); cargo check and the cargo package system make it very easy for LLMs to check their work and produce high-quality code that almost never breaks and always compiles.
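The feedback loop that makes this work is simple enough to sketch. Here's a generic check-and-fix loop in Python (the fix_fn callback standing in for the LLM is my own framing, not any particular tool's API); with Rust, cmd would be ["cargo", "check"]:

```python
import subprocess
import sys

def check_until_clean(fix_fn, cmd, max_rounds=5):
    """Run a compiler/linter; feed diagnostics back to fix_fn until the build is clean."""
    for _ in range(max_rounds):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True  # clean compile: the model's work is machine-verified
        fix_fn(result.stdout + result.stderr)  # hand the errors back to the LLM
    return False

# Stand-in for `cargo check`: any command with a pass/fail exit code works.
ok = check_until_clean(lambda errs: None, [sys.executable, "-c", "pass"])
```

Rust's advantage in this loop is that rustc's diagnostics are unusually precise, so each round gives the model concrete, localized errors to fix.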

  10. This blog post was written by someone who speaks/reads English as a first language.
  11. The market is ripe for ChatGPT in a box, replacing Google Home or Alexa desktop pucks. God knows the Google Home assistant has been detuned and detuned to the point where it barely works for turning the lights on and off. There's a handful of golf-ball-shaped objects on AliExpress for $25 that provide this functionality, powered by an ESP32 IoT chip, but they don't have wake-word capability (yet). I picked up two for a Home Assistant voice assistant project but haven't had time to dive into it yet.
  12. YouTube maintains an independent campus away from the Google/Alphabet mothership. I'm curious how much direction they get, as (outwardly, at least) they appear to run semi-autonomously.
  13. Probably something like this: git reset --hard HEAD
  14. >AI has failed.

    >The rumor mill has it that about 95% of generative AI projects in the corporate world are failures.

    AI tooling has only just barely reached the point where enterprise CRUD developers can start thinking about adopting it. Langchain only reached v1.0.0 in the last 60 days (Q4 2025); OpenAI effectively announced support for MCP in Q2 2025, and the spec didn't even approach maturity until Q4 2024. Heck, most LLMs didn't have support for tools in 2024.

    In 2-3 years a lot of these libraries will be partway through their roadmaps toward v2.0.0, fixing many of the pain points, fleshing out QOL improvements, and settling on standard patterns for integrating different workflows. Consumer streaming of audio and video on the web was a disastrous mess until around 2009, despite browsers having plugins for it going back over a decade. LLMs continue to improve at a rapid rate, but tooling matures more slowly.

    Of course previous experiments failed or were abandoned; the technology has been moving faster than the average CRUD developer can implement features. A lot of the "cutting edge" technology we put into our product in 2023 is now a standard feature of the free tier of market leaders like ChatGPT. Why bother maintaining a custom fork of 2023-era (effectively stone-age) technology when free-tier APIs do it better in 2025? MCP might not be the be-all, end-all, but it is at least a standard, maintainable interface, one that developers of mature software can conceive of integrating into their product as a permanent feature rather than a curiosity MVP built at the behest of a non-technical exec.

    A lot of the AI-adjacent libraries we've been using finally hit v1.0.0 this year, or are creeping close to it, providing stable interfaces for maintainable software. It's time to hit the reset button on "X% of internal AI initiatives failed".

  15. I went to look up whether this guy was in his early-to-mid 20s when I got to this point:

    > That <Command> tag isn't Markdown at all; it's a React component.

    Turns out he's in his 40s, so he lived through MS Word, FrontPage, and the JavaScript wars; this is almost certainly satire.

  16. For whatever reason, it really bothers me when people call containers "Dockers" in 2025.
  17. I've been really impressed with Codex so far. I have been working on a flight simulator hobby project for the last 6 months and finally came to the conclusion that I needed to switch from a floating origin, which my physics engine's coordinate system assumes, to a true ECEF coordinate system (what underpins GPS). This involved a major rewrite of the coordinate system, the physics engine, even the graphics system and auxiliary stuff like asset loading/unloading that depended on local X,Y,Z. It even rewrote the PD autopilot to account for the changes in the coordinate system. I gave it about a paragraph of instructions with a couple of FYIs and... it just worked! No major glitches except some minor graphical jitter, which it fixed on the first try. In total it took about 45 minutes, but I was very impressed.

    I was unconvinced it had actually, fully ripped out the floating-origin logic, so I had it write up a summary and then used that as a high-level guide to pick through the code, and it had, as you said, followed the instructions to the letter. Hugely impressive. In March 2023, OpenAI's products struggled to draw a floating wireframe cube.
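For context on the coordinate switch above: ECEF puts the origin at Earth's center rather than near the aircraft, and the standard WGS84 geodetic-to-ECEF conversion looks like this (a textbook sketch, not the project's actual code):

```python
import math

# WGS84 ellipsoid constants (the datum GPS uses)
A = 6378137.0              # semi-major axis, meters
F = 1 / 298.257223563      # flattening
E2 = F * (2 - F)           # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert latitude/longitude/altitude to Earth-Centered, Earth-Fixed X,Y,Z (meters)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    # Prime vertical radius of curvature at this latitude
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + alt_m) * math.sin(lat)
    return x, y, z
```

At sea level on the equator/prime meridian this returns roughly (6378137, 0, 0). A floating-origin engine keeps coordinates small by recentering the world around the camera, so moving to ECEF typically forces 64-bit floats throughout; that's part of why it ripples into graphics and asset streaming, not just physics.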

  18. Your knowledge of the topic is at least six months out of date; April 2025 was a huge leap forward in usability, and releases in the last 30 days are what I would call a full generation newer than June 2024's technology. Summer 2025 was arguably the dawn of true AI-assisted coding. Heck, reasoning models were still bleeding-edge in late December 2024. They might not be 10x better, but their ability to competently use (and build their own) tools makes them almost incomparable to last year's technology.
  19. sounds exactly like blogging 20 years ago
  20. A lot of data centers are near the Columbia River, as power is cheap there thanks to hydroelectric dams. The river flows through an arid, desert-like region, but it is also the largest river in the western US, and it's simply impossible to pump too much water out of it.
