- That's awesome and I have a few similar conversations with Claude. I wasn't quite an AI luddite a couple months ago, but close. I joined a new company recently that is all in on AI and I have a comically huge token budget so I jumped all the way in myself. I have my choice of tools I can use and once I tried Claude Code it all clicked. The topology they are creating for AI tooling and concepts is the best of all the big LLMs, by far. If they can figure out the remote/cloud agent piece with the level of thoughtfulness they have given to Code, it'd be amazing. Cursor Cloud has that area locked down right now, but I'm looking forward to how Anthropic approaches it.
- Agree. I'd add that an aha moment with skills is that AI agents are pretty good at writing skills. Let's say you have developed an involved prompt that explains how to hit an API (possibly with the complexity of reading credentials from an env var or config file) or run a tool locally to get some output you want the agent to analyze (for example, downloading two versions of a Python package and diffing them to analyze changes). Usually the agent reading the prompt is going to leverage local tools to do it (curl, shell + stdout, git, whatever) every single time. Every time you execute that prompt there is a lot of thinking spent on deciding to run these commands, and you are burning tokens (and time!). As an eng you know that this is a relatively consistent and deterministic process to fetch the data. And if you were consuming it yourself, you'd write a script to automate it.
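To make that concrete, here's roughly the script an agent could write for the package-diff example -- a minimal sketch, not anyone's actual tooling; the package name, versions, and the assumption that tar and diff are on PATH are all illustrative:

```python
#!/usr/bin/env python3
"""Download two versions of a PyPI package and diff them (sketch)."""
import subprocess
import sys
import tempfile
from pathlib import Path

def fetch(pkg: str, version: str, dest: Path) -> Path:
    """Download and unpack the sdist for pkg==version into dest."""
    subprocess.run(
        [sys.executable, "-m", "pip", "download", f"{pkg}=={version}",
         "--no-deps", "--no-binary", ":all:", "-d", str(dest)],
        check=True, capture_output=True,
    )
    tarball = next(dest.glob("*.tar.gz"))
    subprocess.run(["tar", "xzf", tarball.name], cwd=dest, check=True)
    # The only directory in dest is the unpacked source tree.
    return next(p for p in dest.iterdir() if p.is_dir())

if __name__ == "__main__":
    pkg, old, new = sys.argv[1], sys.argv[2], sys.argv[3]
    with tempfile.TemporaryDirectory() as a, tempfile.TemporaryDirectory() as b:
        old_dir = fetch(pkg, old, Path(a))
        new_dir = fetch(pkg, new, Path(b))
        # diff exits nonzero when trees differ, so no check=True here.
        subprocess.run(["diff", "-ru", str(old_dir), str(new_dir)])
```

Once that's a script, every run of the prompt skips the "how do I fetch this?" deliberation entirely.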
So you read about skills (prompt + scripts) to make this more repeatable and reduce time spent thinking. At that point there are two paths you can go down -- write the skill and prompt yourself for the agent to execute -- or better -- just tell the agent to write the skill and prompt and then you lightly edit it and commit it.
This may seem obvious to some, but I've seen engineers create skills from scratch because they have a mental model of skills as something people must build for the agent. IMO skills are just you bridging a productivity gap the agent can't close itself (for now): instructing it to write tools to automate its own day-to-day tedium.
- Skills have a lot of uses, but one in particular I like is replacing one-off MCP server usage. You can use (or write) an MCP server for your CI system and then add instructions to your AGENTS.md to query the CI MCP for build results for the current branch. Then you need to find a way to distribute the MCP server so the rest of the team can use it, or cook it into your dev environment setup. But all you really care about is one tool in the MCP server, the build result. Or...
You can hack together a shell, Python, whatever script that fetches build results from your CI server, dumps them to stdout in a semi-structured format like markdown, then add a 10-15 line SKILL.md and you have the same functionality -- the skill just executes the one-off script and reads the output. You package the skill with the script, usually in a directory in the project you are working on, but you can also distribute them as plugins (bundles) that Claude Code can install from a "repository", which can just be a private git repo.
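As a rough illustration, assuming GitHub Actions as the CI system (the repo name, script path, and reliance on a GITHUB_TOKEN env var are all placeholders), the whole skill could be little more than:

```markdown
---
name: ci-build-status
description: Fetch recent CI build results for a branch
---
Run `python scripts/ci_status.py <branch>` and read the markdown it
prints to stdout. Use this whenever asked about build or CI state.
```

And the script it wraps:

```python
#!/usr/bin/env python3
"""Dump recent CI runs for a branch as markdown (GitHub Actions assumed)."""
import json
import os
import sys
import urllib.request

REPO = "your-org/your-repo"  # placeholder

def runs_for_branch(branch: str) -> list[dict]:
    req = urllib.request.Request(
        f"https://api.github.com/repos/{REPO}/actions/runs"
        f"?branch={branch}&per_page=10",
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
                 "Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["workflow_runs"]

if __name__ == "__main__":
    branch = sys.argv[1] if len(sys.argv) > 1 else "main"
    print(f"# CI runs for `{branch}`\n")
    for run in runs_for_branch(branch):
        print(f"- {run['name']}: {run['status']}/{run['conclusion']} "
              f"({run['html_url']})")
```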
It's a little UNIX-y in a way: little tools that pipe output to another tool, useful in a standalone context or in a chain of tools. Whereas MCP is a full-blown RPC environment (which has its uses, where appropriate).
- Another thing I'd suggest: look into and use non-coding AI tools that improve productivity. For example:
Zoom's meeting transcriptions and summaries, or Granola. A lot of context is lost when you take manual notes in meetings. If you use a tool that turns a meeting into notes automatically, you can use those notes to bootstrap a prompt/plan for agents.
- Unless there's something I'm not seeing, people aren't racing to move to rural New England. Maybe it's retirees, red-to-blue-state migrations, or remote workers, but I haven't seen a ton of evidence of that. People didn't really migrate out here before Covid, and I don't think enough people have since to justify the rise in prices.
Personally I think people who would otherwise be selling are sitting on their homes because of the interest rates, and this is causing a strange feedback loop: low turnover causes low supply, which in turn causes new buyers to accept the prices (probably with a hope that interest rates will come down and they can re-fi in the years to come). I also think a non-trivial number of houses that hit the market because the owners passed away or went into retirement homes are just sitting there, because prices are so high and the only money the family is out is taxes. Or they are being turned into rental units, since rental prices are out of whack in these areas too.
My point, I guess, is that where I live we haven't seen a big influx of population (probably the opposite) or significant job or wage growth to make sense of the increase in housing prices. At the end of the day people are just stretching themselves further and sending more money to the banks in the form of interest to get into homes that were literally half the price in 2019. Strange times.
- There's something fundamentally strange about how prices have spiked and inventory has tightened since Covid. Where I live in rural New England, prices are up 50-100% in five years, and this is on pretty poor quality homes. Yes, low interest rates led to a surge in buying and bidding wars that spiked the baseline, but when people say "the real problem is there isn't enough housing" that feels incomplete to me. Of course supply has been an issue for a while, but home prices nearly doubling in five years doesn't look like a normal supply story -- it's not as if we suddenly created 20-50% more qualified buyers in that time. I guess the lack of churn, with people hanging onto those sweet 3% mortgages much longer than usual, is probably part of it. But I really don't have an answer for the current state of home buying. I make great money, but if I were to buy a house of the quality I bought in 2018, with the same % down payment, I'd be looking at over 40% of my take-home pay going to mortgage, PMI, and taxes.
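The arithmetic behind that squeeze is easy to sanity-check with the standard fixed-rate amortization formula (all numbers below are hypothetical, just to show the shape of it):

```python
def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
    """Standard amortization: M = P * r(1+r)^n / ((1+r)^n - 1)."""
    r = annual_rate / 12
    n = years * 12
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# Hypothetical: a $250k loan at ~4% in 2019 vs. the same house at
# double the price and ~7% today.
print(round(monthly_payment(250_000, 0.04)))  # ~1194 per month
print(round(monthly_payment(500_000, 0.07)))  # ~3327 per month
```

Double the price plus three points of rate nearly triples the payment, before PMI and taxes.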
- I mentioned mean time to decision, and that's one of the rationales for the MCP. A skill could call a script that does the same thing -- but at that point aren't we just splitting hairs? We are both talking about automating repetitive thinking + actions that the agent takes. And if the skill requires authentication, you have to encode passing that auth into the prompt. MCP servers can just read tokens from the filesystem at call time and don't require thinking at all.
- What boundaries does this 8GB etcd limit cut across? We've been using Tekton for years now, but each pipeline exists in its own namespace and that namespace is deleted after each build. Presumably that kind of wholesale cleanup keeps the DB size in check, because we've never had a problem with etcd size...
We have several hundred resources allocated for each build and do hundreds of builds a day. The current cluster has been doing this for a couple of years now.
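For reference, the cleanup step is about as simple as it sounds -- a sketch using the official kubernetes Python client (the namespace naming scheme is hypothetical):

```python
"""Per-build namespace cleanup sketch."""
from kubernetes import client, config

def cleanup_build(build_id: str) -> None:
    config.load_kube_config()  # or load_incluster_config() in-cluster
    # Deleting the namespace cascades to every namespaced resource in it
    # (PipelineRuns, pods, configmaps, ...), so per-build objects never
    # accumulate in etcd.
    client.CoreV1Api().delete_namespace(name=f"build-{build_id}")
```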
- Most guides to wringing productivity out of these higher-level Claude Code abstractions suffer from conceptual and wall-of-text overload. Maybe it's unavoidable, but it's tough to really dig into these things.
One of the things that bugs me about AI-first software development is that it seems to have swung the pendulum from "software engineering is riddled with terrible documentation" to "software engineering is riddled with overly verbose, borderline prolix documentation", and I've found that to be true of blog and reddit posts about using Claude Code. Examples:
https://www.reddit.com/r/ClaudeAI/comments/1oivjvm/claude_co...
and
https://leehanchung.github.io/blogs/2025/10/26/claude-skills...
These are thoughtful posts; they're just too damn long, and I suspect that's _because_ of AI. And I say this as someone who is hungry to learn as much as I can about these Claude Code patterns. There is something weirdly inhumane about the way these wall-of-text posts or READMEs just pummel you with documentation.
- MCP as a thin layer over existing APIs has lost its utility. Custom MCPs for teams that reduce redundant thinking/token consumption, provide more useful context for the agent, and decrease mean time to decision are where MCP shines.
Something as simple as correlating a git SHA to a CI build takes tens of seconds and some number of tokens if Claude is utilizing skills (making API calls to the CI server and GitHub itself). If you have an MCP server that Claude feeds a SHA into and gets back a bespoke, organized payload that adds relevant context to its decision-making process (such as a unified view of CI, diffs, et al.), then MCP is a win.
MCP shines as a bespoke context engine and fails as a thin API translation layer, basically. And the beauty/elegance is you can use AI to build these context engines.
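A minimal sketch of that kind of context engine, using the MCP Python SDK's FastMCP helper -- the CI endpoint, token path, and payload shape are all assumptions, not anyone's real setup:

```python
"""Bespoke context-engine MCP server (sketch)."""
import json
import pathlib
import urllib.request
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("build-context")

@mcp.tool()
def build_context(sha: str) -> str:
    """Return a unified CI summary for a git SHA."""
    # Auth read from disk at call time -- no credential handling in prompts.
    token = pathlib.Path("~/.config/ci/token").expanduser().read_text().strip()
    req = urllib.request.Request(
        f"https://ci.example.internal/api/builds?sha={sha}",  # hypothetical
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        build = json.load(resp)
    # Collapse several lookups into one organized payload for the agent.
    return json.dumps({
        "sha": sha,
        "ci_status": build.get("status"),
        "failed_steps": build.get("failed_steps", []),
        "artifacts": build.get("artifacts", []),
    })

if __name__ == "__main__":
    mcp.run()
```

One tool call, one organized payload, zero deliberation about which curl incantation to run.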
- It's fairly straightforward to build resilient, affordable, and scalable pipelines with DAG orchestrators like Tekton running in Kubernetes. Tekton in particular has the benefit of being low-level enough that it can just be plugged into the CI tool above it (Jenkins, Argo, GitHub Actions, whatever) and is relatively portable.
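To show how small the primitives are, here's a minimal sketch of a Task and the Pipeline that runs it (names and image are placeholders; the CI tool above just creates PipelineRuns):

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: unit-tests
spec:
  steps:
    - name: test
      image: golang:1.22
      script: |
        go test ./...
---
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: ci
spec:
  tasks:
    - name: unit-tests
      taskRef:
        name: unit-tests
```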
- I get the feeling that most people commenting here have only surface-level experience with deploying k8s applications. I don't care for Helm myself, but it's less bad than a lot of other approaches, like hand-rolling manifests with tools like envsubst and sed.
Kustomize also seems like hell when a deployment reaches a certain level of complexity.
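For anyone who hasn't hit it, the pain usually starts once you're stacking overlays and patches -- a minimal sketch of the shape (names and paths are illustrative), which then multiplies per environment and per patched field:

```yaml
# overlays/prod/kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: replica-patch.yaml
    target:
      kind: Deployment
      name: api
```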
- It's a sneaky supply chain threat for docker images. I'm not sure standard container registry tools actively scan for this. Of course you shouldn't be running random untrusted docker images that you find on the internet but it happens all the time in dev envs and in sloppy production environments.
- > It isn't? What is stopping companies from building on GPT-OSS or other local models for cheaper? The AI services have no moat.
Right now there is an efficiency/hardware moat. That's why Stargate in Abilene and the corresponding build-outs in Louisiana and elsewhere are some of the most intense private-sector capex projects ever. Hardware and electricity production are the name of the game right now. This Odd Lots podcast is really fresh and relevant to this conversation: https://www.youtube.com/watch?v=xsqn2XJDcwM
Local models, local agents, local everything, and the commodification of LLMs, at least for software eng, is inevitable IMO, but a lot of the tooling for that commodified experience hasn't been built yet. For companies rapidly looking to pivot to AI force multiplication, the hyperscalers are the answer for now. I think it's a highly inefficient approach for technical orgs, but time will create efficiency. And for your Joe on the street feeding data into an LLM, I don't think the orgs serving him (think your local city hall, or state DMV) are going to run local models. So there is a captured market to some degree for the current hyperscalers.
- OpenAI's (and the other hyperscalers') revenue isn't really up for debate, nor is the long-term value of their mission/product. The issue (somewhat articulated in this article) is that the hyperscalers (both public and private) have generated such a massive, unprecedented amount of speculative investment and capital expenditure -- leaving the rest of the economy in the dust, aka minting a Mag 7 in less than a decade -- that we have a distorted view of the world.
The fear of an AI bubble isn't that AI companies will fail; it's that a downturn in the AI "bubble" will lay bare the underlying bear market that their growth is occluding. What happens then? Nobody knows. Probably nothing good. Personally I think much of the stock market growth in the last few years that seems disconnected from previous trends (see the parallel growth between gold and equities) is based on retail volume and unorthodox retail patterns (Robinhood, WSB, et al.) that conventional historical market analysis is completely unprepared for. At this point everything may go to the moon forever, or it may collapse completely. Either way, we live in a time of little precedent.
- This is such a strange take. Ruby Central, for better or worse, is the steward of Rubygems/Bundler. If Mike Perham wants to withdraw his funding because he thinks DHH is a white supremacist, then that's fine. But DHH didn't do that, Perham did.
Arko is not a completely innocent, non-self-interested character here. Before all this, he announced a project to do an end-run around the existing rubygems, bundler, etc. infrastructure in the name of "better tooling", but that tooling is solely owned by him and a handful of people who really, really don't like DHH. Controlling this aspect of the Ruby toolchain ecosystem is in their own self-interest and overlaps with their deep disdain for the politics and corporate nature of its existing stewards. Maybe their approach and stewardship of this fork of the toolchain is more just, secure, and equitable, but make no mistake -- they are fighting the same war that DHH and Shopify are, which is over who controls the keys to the toolchain. Do you think if Arko, Perham, et al. had control they would somehow be completely neutral, apolitical stewards of the ecosystem? No! They have made it clear with their money and machinations that they do not want to operate in the same ecosystem as DHH, and their politics and ethics are intertwined with their relationship to the Ruby community. They are no different than him.
Meanwhile those of us who just want stability are stuck between two factions who claim righteousness and ownership. I wish they all could be deposed and some more mature non-individual foundation could take over.
- I was listening to a podcast the other day, and one of the commercials was, I think, from Facebook (IIRC; it could have been Google...). It was directed toward students, so I'm pretty sure it was an organic ad, but several aspects of it (bicycling- and outdoor-related) felt personalized enough that I wondered if it had been generated on the fly using an LLM.
FB knows me and what I like, and they have enough data on my searches that they could customize a pretty relevant audio ad that, now with LLMs, can feel really natural, especially with audio gen being so good.
My point, though, is that I wasn't sure whether it was LLM-generated, and that's stuck with me. Random ChatGPT copy-pasta is easy for me to pick out -- most people do not write that well. But a sophisticated application of this tech probably approaches it-could-go-either-way territory.
- Are you saying that a company like Colossal has nothing to offer to the field of genetic biocontrol or are you saying there is nothing of interest in the field?
- > Realizing this, these types will give up on re-introducing the original organism and instead create a bioengineered version that can survive in the changed world. I fear this path will not end well for us.
Developing and injecting genetic resiliency into existing populations isn't the worst thing in the world. Additionally, adding animals that can only produce sterile offspring would be an amazing tool for dealing with invasives. That kind of practical work very easily follows from this R&D.
- As a reformed AI skeptic I see the promise in a tool like this, but this is light years behind other Anthropic products in terms of efficacy. Will be interesting to see how it plays out though.