john@ouachitalabs.com
https://linkedin.com/in/jomccr https://ouachitalabs.com/
Building https://macrosforhumans.com - a voice-first macro tracker
- My boss (a great engineer) had been complaining about this with his internal GitHub Copilot: poor quality no matter the model or task. Turns out he never cleared the context. It was just the same conversation spread thin across nearly a dozen completely separate repositories, because they were all in his massive VS Code workspace at once.
This was earlier this year... After that I started giving internal presentations on basic context management, best practices, etc. for our engineering team.
- I've run across more and more Strudel musicians (developers?) doing a kind of live-coding performance art and posting clips on TikTok and Reels. It's really entertaining to watch. I've been meaning to dabble in it.
- > If Microsoft misappropriates GPL code how exactly is that "stealing" from me, the user, of that code? I'm not deprived in any way, the author is, so I can't make sense of your premise here.
The user in this example is deprived of freedoms 1, 2, and 3 (and probably freedom 0 as well if there are terms on what machines you can run the derivative binary on).
Read more here: https://www.gnu.org/philosophy/free-sw.html
Whether or not the user values these freedoms is another thing entirely. As the software author, licensing your code under the GPL is making a conscious effort to ensure that your software is and always will be free (not just as in beer) software.
- There are many misconceptions about the GPL, GNU, and the free software movement. I love the idealism of free software and you hit the nail on the head.
Below are the four freedoms for those who are interested. Straight from the horse's mouth: https://www.gnu.org/philosophy/free-sw.html
The freedom to run the program as you wish, for any purpose (freedom 0).
The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
The freedom to redistribute copies so you can help others (freedom 2).
The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
- This landing page is vibe coded and littered with mistakes/typos (per 1,000 tokens), outdated models (Gemini 1.5?), the security link at the bottom of the page is an href=#, and I can see the "dashboard" without logging in or signing up.
> Message Privacy: Your API requests are processed and immediately forwarded. We never store or log conversation content.
> Minimal Data: We only store your email and usage records. Nothing else. Your data stays yours.
Source: trust me bro.
- I think they're referring to this study on LLM poisoning in the pretraining step: https://arxiv.org/abs/2510.07192 (related article: https://www.anthropic.com/research/small-samples-poison)
I'll admit I'm out of my element when discussing this stuff. Maybe somebody more plugged into the research can enlighten.
- Point 2 is very often overlooked. Building products that are worse than the baseline ChatGPT website is very common.
- Yes, it's still a terrible UX. Anybody claiming otherwise is either using Apple only (which still has trouble, albeit a bit less than mixed ecosystems) or has Stockholm syndrome.
- Copilot, as a harness for the model, is generic enough to work with every model. Claude Sonnet, Haiku, and Opus are trained with Claude Code specifically in mind.
Also, as a heavy user of both, there are small paper cuts with Copilot that seriously add up. Things that are missing, like sub-agents. The options and requests for feedback that CC can give (interactive picker style instead of prompt based). Most annoyingly, commands running in a new integrated VS Code terminal instance and immediately, mistakenly "finishing" even though execution has just begun.
It's just a better harness than Copilot. You should give it a shot for a while and see how you like it! I'm not saying it's the best for everybody. At the end of the day these issues turn into something like the old vi/emacs wars.
Not sponsored, just a heavy user of both. Claude code is not allowed at work, so we use copilot. I purchased cc for my side projects and pay for the $125/m plan for now.
- I worked at a company like this. I was a wide-eyed intern watching the migration to git via Bitbucket in the year ... 2018? What a sight to see.
That company had its own data center, tape archives, etc. It had been running largely the same way continuously since the 90s. When I left for a better job, the company had split into two camps: the old curmudgeonly on-prem activists and the over-optimistic, cloud-native, AWS/GCP-certified evangelists with no real experience in the cloud (because they worked at a company with no cloud presence). I'm humble enough to admit that I was part of the second camp and I didn't know shit; I was cargo culting.
This migration is still not complete as far as I'm aware. Hopefully the teams that resisted this long and never left for the cloud get to settle in for another decade of on-prem superiority lol.
- Absolutely, it's wildly fun to read the outputs of even a tiny 0.8M model trained on a CPU. And now I've actually got a much better understanding of the transformer architecture after playing around with it for a day. This repo is probably going to get some new folks trying out ideas, and some of them will turn into new researchers in the field, no doubt.
- This weekend I just cracked into nanoGPT (https://github.com/karpathy/nanoGPT), an older but fabulous learning exercise where you build and train a crappy Shakespeare GPT with ~0.8M parameters on a CPU. The results are about what you'd expect (they suck), but you can start to feel the magic, especially if you're not a deep learning professional and you just want to poke around and hack on it.
I started writing up a blog post on my weekend with nanoGPT but it's not done yet... Would have been great to link here, lol. Oh well.
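For anyone who wants a feel for what that kind of exercise boils down to, here's a tiny character-level toy in plain PyTorch. To be clear: this is not nanoGPT's code, and it's only a bigram model (no attention), just a sketch of the same train-on-text, sample-from-it loop, assuming an input.txt corpus like the tiny Shakespeare file.

    # Toy character-level language model in the spirit of the shakespeare_char
    # exercise -- a bigram model, not a transformer, and not nanoGPT's code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    text = open("input.txt").read()              # e.g. the tiny Shakespeare corpus
    chars = sorted(set(text))
    stoi = {c: i for i, c in enumerate(chars)}
    itos = {i: c for c, i in stoi.items()}
    data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

    block_size, batch_size = 64, 32

    def get_batch():
        # random contiguous chunks; targets are the inputs shifted by one character
        ix = torch.randint(len(data) - block_size - 1, (batch_size,))
        x = torch.stack([data[i:i + block_size] for i in ix])
        y = torch.stack([data[i + 1:i + block_size + 1] for i in ix])
        return x, y

    class Bigram(nn.Module):
        # predict the next character from the current one via an embedding table
        def __init__(self, vocab):
            super().__init__()
            self.table = nn.Embedding(vocab, vocab)

        def forward(self, idx):
            return self.table(idx)               # (B, T, vocab) logits

    model = Bigram(len(chars))
    opt = torch.optim.AdamW(model.parameters(), lr=1e-2)

    for step in range(2000):                     # a few minutes on a laptop CPU
        xb, yb = get_batch()
        logits = model(xb)
        loss = F.cross_entropy(logits.view(-1, logits.size(-1)), yb.view(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

    # sample 200 characters from the trained model
    idx = torch.zeros((1, 1), dtype=torch.long)
    for _ in range(200):
        probs = F.softmax(model(idx)[:, -1, :], dim=-1)
        idx = torch.cat([idx, torch.multinomial(probs, 1)], dim=1)
    print("".join(itos[int(i)] for i in idx[0]))

Swapping the Bigram out for a small stack of attention blocks is roughly the jump nanoGPT makes; the surprising part either way is how little ceremony is involved.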
- My side project - https://macrosforhumans.com - is a traditional mobile macro tracker with first-class support for voice (and soon image and text blob) inputs for your recipes, ingredients, measurements, units, etc. Kind of a neat project that may never make it too far off the ground, considering I am not a mobile dev, but it's been fun to build so far with the help of Claude Code. It's built with Flutter and a FastAPI backend.
In the AI macro food logging world, there's really only Cal AI, which estimates macros based on an image. I use Cronometer personally, and it's super annoying to have to type everything in manually, so it makes sense why folks reach for something like Cal AI. However, the problem with something like Cal AI is accuracy: it's at best a guess based on the image. Macros for Humans tries to be more of a traditional weigh-your-food, log-it kind of app, while updating the main interface for how users input that info into something more friendly.
I set myself a hard deadline to present a live demo at a local showcase/pitch event thing at the end of the month. I bet the procrastination will kick in hard enough to get the backend hosted with a proper database and a bit more UI polish running on my phone. :-)
Here's a really early demo video I recorded a few weeks ago. I had just spoken the recipe on the left, and when I stop recording you can see my backend stream the objects out as they're parsed from the LLM: https://www.youtube.com/watch?v=K4wElkvJR7I
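For a sense of what that streaming looks like server-side, here's a rough FastAPI sketch. This is not the actual Macros for Humans backend: parse_ingredients_stream is a hypothetical stand-in for the incremental parsing that sits on top of the LLM's streamed output, and the two hard-coded ingredients are fake.

    # Sketch of streaming parsed ingredient objects to the client as they come
    # out of the LLM -- hypothetical, not the real Macros for Humans backend.
    import json
    from typing import AsyncIterator

    from fastapi import FastAPI
    from fastapi.responses import StreamingResponse

    app = FastAPI()

    async def parse_ingredients_stream(transcript: str) -> AsyncIterator[dict]:
        # Hypothetical stand-in: in reality each dict would be yielded as soon
        # as it can be parsed out of the model's token stream.
        for item in [{"name": "rolled oats", "amount": 40, "unit": "g"},
                     {"name": "whole milk", "amount": 250, "unit": "ml"}]:
            yield item

    @app.post("/recipes/parse")
    async def parse_recipe(transcript: str):
        async def ndjson_stream():
            async for obj in parse_ingredients_stream(transcript):
                # newline-delimited JSON: the app can render each ingredient
                # row the moment its line arrives
                yield json.dumps(obj) + "\n"
        return StreamingResponse(ndjson_stream(), media_type="application/x-ndjson")

NDJSON keeps the client side simple: the Flutter app just splits on newlines and renders each object as it shows up, which is what produces that rows-appearing-as-you-watch effect in the demo.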
- Pointers in and of themselves are not difficult to learn, but when you're learning them alongside your first programming language, it just adds to the difficulty, I think.
I think a lot of noobs learning C struggle with pointers especially because there are no good error messages besides "segmentation fault" :D
- This is some of the cleanest, most modern-looking, beautiful C code I've seen in a while. I know it's not the kernel, and there are probably good reasons for lots of #ifdef conditionals, random underscored types, etc. in bigger projects, but this is actually a great learning piece to teach folks the beauty of C.
I've also never seen tests written this way in C. Great work.
C was the first programming language I learned when I was still in middle/high school, raising the family PC out of the grave by installing free software - which I learned was mostly built in C. I never had many options for coursework in compsci until I was in college, where we did data structures and algorithms in C++, so I had a leg up as I'd already understood pointers. :-)
Happy to see C appreciated for what it is, a very clean and nice/simple language if you stay away from some of the nuts and bolts. Of course, the accessibility of the underlying nuts and bolts is one of the reasons for using C, so there's a balance.
- It's a symptom of a regression of society as a whole. Twenty years ago things were different. Not just the vibes, but the net hope for humanity and the short-/medium-/long-term forecasts of the health of the United States were all much higher among the average Joe.
The ship is sinking.
- > No-one really likes engineering war stories
This is so wrong. I love reading these kinds of stories.
- Curious how text-aligned tabular formats work for LLMs, considering humans probably find them more readable than other formats.
I'm seeing pretty good success extracting data out of 10-Qs, which are formatted like this by default, using the `edgartools` library's default `filing.text()` method (sketch further below):

                                    Number of Units      System Sales(a)
                                                           (in Millions)
    ────────────────────────────────────────────────────────────────────
    KFC Division                             31,981     $        34,452
    Taco Bell Division                        8,757              17,193
    Pizza Hut Division                       20,225              13,108
    Habit Burger & Grill Division               383                 713
    YUM                                      61,346     $        65,466

- Curious that you say that. I feel like the reason I love to use Claude Code is mostly the orchestration around the model itself. Maybe I've been trained by Claude to write for it in a certain way. But when I try other CLIs like Codex, Gemini, and more recently opencode, they don't seem as well built and polished, or even as capable, despite me liking the Gemini and GPT-5 models themselves and using their APIs more than Anthropic's for work.
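Circling back to the 10-Q extraction two comments up, here's a minimal sketch of the edgartools call. filing.text() is the method mentioned there; the set_identity/Company/get_filings usage around it is my recollection of the library's API, so check it against the docs.

    # Rough sketch: pull YUM's latest 10-Q as plain text with edgartools.
    # filing.text() is the call referenced above; the surrounding API usage
    # (set_identity, Company, get_filings, latest) is assumed from memory.
    from edgar import Company, set_identity

    set_identity("Jane Doe jane@example.com")   # SEC EDGAR asks for a contact string

    filing = Company("YUM").get_filings(form="10-Q").latest()
    text = filing.text()    # text-aligned tables survive as fixed-width columns

    # Feed the text (or the relevant chunk) straight into the LLM prompt;
    # the aligned columns seem to help it keep rows and figures associated.
    print(text[:2000])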
> Everyone’s heard the line: “AI will write all the code; engineering as you know it is finished.”
Software engineering pre-LLMs will never, ever come back. Lots of folks don't understand that. What we're doing at the end of 2025 looks so different from what we were doing at the end of 2024. Engineering as we knew it a year or two ago will never return.