Recent public projects include:
– FileKitty: https://github.com/banagale/FileKitty (a prompt engineering utility)
– Chief of Staff: https://chiefofstaffhq.com (a pre-AI-generation text-to-speech SaaS)
I also very occasionally write at: https://banagale.com
Feel free to say hello, rob @ the domain above.
- I can see GitHub providing this, but it would still be at the git-operation level.
What I've found using this contextify-query CLI to talk to my projects' CLI AI history is substantial detail and context that represents the journey of a feature (or lack thereof).
In high-velocity agentic coding, git practices seem to be almost cast aside by many. The reason I say that is that Claude Code's esc-esc has a file-reversion behavior that doesn't presume "responsible" use of git at all!
What I find interesting is that neither Anthropic nor OpenAI has seized on this; it's somewhat orthogonal to the main job of interpreting requests correctly. That said, insights into what you've done and why can save a ton of unnecessary implementation cycles (and wasted tokens to boot).
Any thoughts on the above?
If you're open to giving the app a try and enabling updates on the DMG, the query service + CC skill should drop there in a few days. It's pretty dope.
Another alternative for update notifications is to watch the public repo where I'm publishing DMG releases: https://github.com/PeterPym/contextify/releases
Anyhow, this is really cool feedback and I appreciate the exchange you provided here. Thank you. If you have any further thoughts you want to share I'll keep an eye on this thread or can be reached at rob@contextify.sh
- That’s an interesting direction. I hadn’t thought of this in a multiplayer sense.
Would you see this as something that is sort of turn-key, where a central database is hosted and secured to your group?
Or would you require something more DIY like a local network storage device?
And similarly would you be open to having the summaries generated by a frontier model? Or would you again need it to be something that you hosted locally?
Thank you for the feedback and interest.
- I was into playing the mods for the original and played some of 2142 on PC.
Has the official multiplayer gameplay held up? I tried a release around the time of RDR2 on Xbox, and it seemed like pay-to-play may have messed with it at some point.
Curious if the mod support seems like a jailbreak from the official multiplayer.
- Building Contextify, a macOS application that consumes Claude Code and Codex transcripts and stores them in a local SQL database.
The main window uses Apple’s local LLM to summarize your conversation in realtime, with some swoopty UI like a QUEUED state for Claude Code.
I’ve just added macOS Sequoia support and a really cool CLI with a Claude Code skill, allowing seamless integration of information from your conversational history into the AI’s responses to questions about your development history.
The CLI interface contract was designed by mutual agreement between Claude Code and Codex, with the goal of satisfying their preferences for RAG.
This new query feature and pre-Tahoe support should be out this week, but you can download the app now on the App Store or as a DMG.
I’m very excited about this app and would love to get any feedback from people here on HN!
My Show HN from this past week has a short demo video and a bit more info:
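For a rough sense of the ingestion side, here's a minimal Python sketch of the concept. To be clear, this is not Contextify's actual implementation (the app is native), and the transcript layout, JSONL field names, and schema below are assumptions for illustration only:

```python
import json
import sqlite3
from pathlib import Path


def ingest(transcript_dir: Path, db_path: str = "transcripts.db") -> int:
    """Load every message found in *.jsonl transcript files into SQLite.

    The layout assumed here (one JSON object per line, with a nested
    "message" dict) is an approximation, not a documented format.
    """
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS messages "
        "(source_file TEXT, role TEXT, content TEXT)"
    )
    count = 0
    for path in Path(transcript_dir).rglob("*.jsonl"):
        for line in path.read_text().splitlines():
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partial or malformed lines
            msg = record.get("message") or {}
            conn.execute(
                "INSERT INTO messages VALUES (?, ?, ?)",
                (str(path), msg.get("role", ""), str(msg.get("content", ""))),
            )
            count += 1
    conn.commit()
    conn.close()
    return count
```

A store along these lines is what a query CLI could then search against; summarization via the on-device LLM would layer on top.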
- I'd be curious to see an estimate on the Google side.
Here are some real rough estimates in Apple's ecosystem:
For macOS alone the install base is something like 110-130 million, and only Apple Silicon Macs can run the new model, so maybe 45 million active Macs are updated to macOS 26 and can run it.
There are a bunch of details, but of the iPhones out there that are new enough to run Apple Intelligence and have iOS 26, something like 220 million qualify.
For iPad, same conditions, but for iPadOS it's something like 60 million.
So, something like 325 million active devices are out there ready to run LLM completion requests.
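A quick sanity check on that arithmetic (the per-platform figures are the rough guesses above, in millions of active devices):

```python
# Rough per-platform estimates (millions of active devices) from above.
estimates = {
    "Apple Silicon Macs on macOS 26": 45,
    "iPhones (Apple Intelligence-capable, iOS 26)": 220,
    "iPads (same conditions, iPadOS 26)": 60,
}

total = sum(estimates.values())
print(f"~{total} million devices")  # ~325 million devices
```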
- Thanks for the follow-on anecdote. I'd be happy to try out your app. Please email me when it is available: rob@contextify.sh.
- Hey Mark, I posted about this in another comment [1] but I also think the LLM is decent, and beyond its quality the scale of distribution is a big deal.
I had pondered practical implementations of the model since it was announced, and have just released today a new native macOS application that uses it to summarize Claude Code and Codex conversations as they occur. [2]
If you use either of these CLI agents and have time to try the app out and provide feedback, I'd appreciate it! I'm at rob@contextify.sh.
- FWIW, AI is not entirely locked down in the Apple ecosystem. Sure, they control it but they've already built the foundation of a major opportunity for developers.
There's an on-device LLM packaged in iOS, iPadOS, and macOS 26 (Tahoe) [1]. They even have a HIG on the use of generative AI [2].
Something like half of all Macs are already running macOS 26 [3], so this could be the most widely distributed on-device LLM on the planet.
I think people are sleeping on this, partly because the model is seen as underpowered. But I think we can presume it won't always be so.
I've just posted a Show HN of an app for macOS 26 I created that uses Apple's local LLM to summarize conversations you've had with Claude Code and Codex. [4]
I've been somewhat surprised at the quality and reliability of Apple's built-in LLM and have only been limited by the logic I've built around it.
I think Apple's packaging of an LLM in its core operating systems is actually a fast move with AI and even has potential to act as an existential threat to Windows.
[1] https://developer.apple.com/videos/play/wwdc2025/286/
[2] https://developer.apple.com/design/human-interface-guideline...
- Thank you, and thanks for all the pre-release feedback. :)
There is definitely value in letting codex take a swing at tasks. I've found multiple times where codex has come up with stronger implementation plans on complex changes.
I actually have a feature for resuming conversations begun on one service in the other. It was working, but I had to prune it for the initial release. If there were strong interest in this I could seek to bring it back.
But maybe you're thinking of more of a /compact and resume on the other service to get the cleanest context to start a convo from?
- Are they planning to include this? It seems like the kind of demarcation point the framework would avoid crossing into.
Hello. I am a senior backend engineer with 10+ years building Django-based systems and data pipelines.
Location: Portland, Oregon
Remote: Yes
Willing to relocate: No
Technologies: Python (Django, FastAPI), PostgreSQL, Redis/Celery, AWS, Docker, Terraform, LLM integration (GPT, Claude), data pipelines/ETL
Resume: https://banagale.com/cv/
Email: rob@banagale.com
Most recently, I designed and implemented an LLM-assisted data pipeline that converted security bulletins into actionable intelligence for an enterprise cybersecurity product.
I enjoy working with Django, previously migrated live auth systems with zero downtime and took SaaS products from prototype to production.
I have founded a startup and grown the business from zero to profitable exit.
I'm seeking a senior backend, data engineering, or founding engineer role at a stable, product-focused company. I'm strong in API design, data modeling, and production AI integrations.
Please reach out if you would like to chat. I look forward to meeting with you.
- FWIW, the org decided against vector embeddings for Claude Code due in part to maintenance. See 41:05 here: https://youtu.be/IDSAMqip6ms
- The original Leisure Suit Larry game had age verification. I did not know there was an escape key sequence from it, so failed it many times: https://youtu.be/RCV-Ka-R_Xg?si=ZXx8W0f8XtL-_p6H&t=30
- My experience is that consumer products, for example winter jackets, have gone down in quality and way, way up in price.
The designs themselves are often just bad. It's almost like consumers who don't spend enough are punished with bad color options. (You can see this in running shoes as well.)
Even if the right aesthetic and function is found, there's no real consideration of body type or shape, you get a few sizes to choose from.
I think big brands have consumers over a barrel today. When the design and materials assembly are solved, I think people will pay more to get the thing they want. And I think they're more likely to keep them longer as a result.
- I see a future where FOSS designs for consumer products compete with commercial releases.
It will take far more sophisticated micro-manufacturing (like 3D printing, but with different tools handling more types of materials).
Get the jacket in your exact size with the best materials. Benefit from designs incrementally improved over the original (for example, an improved underarm vent zipper angle). All of it unbranded or custom branded.
It seems hard to believe annually released mass manufacturing will compete.
What are the prompts you're using?