- Thanks for sharing! I tried a similar content-in-URL approach for a family grocery list app, but I couldn't get the URL that short. (It worked, but it was a bit cumbersome sharing over WhatsApp.) Will see what I can learn from this!
- At least one person. https://tonsky.me/blog/centering/
- I've been thinking about something like this from a UI perspective. I'm a UX designer working on a product with a fairly legacy codebase. We're vibe coding prototypes and moving towards making it easier for devs to bring in new components. We have a hard enough time verifying the UI quality as it is. And having more devs vibing on frontend code is probably going to make it a lot worse. I'm thinking about something like having agents regularly traversing the code to identify non-approved components (and either fixing or flagging them). Maybe with this we won't fall further behind with verification debt than we already are.
- For context, I'm a UX Designer at a low-code company. LLMs are great at cranking out prototypes using well-known React component libraries. But lesser known low-code syntax takes more work. We made an MCP server that helps a lot, but what I'm working on now is a set of steering docs to generate components and prototypes that are "backwards compatible" with our bespoke front end language. This way our vibe prototyping has our default look out of the box and translates more directly to production code. https://github.com/pglevy/sail-zero
- Our low-code expression language is not well-represented in the pre-training data. So as a baseline we get lots of syntax errors and really bad-looking UIs. But we're getting much better results by setting up our design system documentation as an MCP server. Our docs include curated guidance and code samples, so when the LLM uses the server, it's able to more competently search for things and call the relevant tools. With this small but high-quality dataset, it also looks better than some of our experiments with fine-tuning. I imagine this could work for other docs use cases that are more dynamic (i.e., we're actively updating the docs, so having the LLM call APIs for what it needs seems more appropriate than a static RAG setup).
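The search behavior described above can be sketched as a plain function like the one below: a keyword-scoring lookup over curated doc snippets, the kind of handler a docs MCP server tool might wrap. This is a minimal illustration, not the actual server; the snippet data and function names are invented (the `a!` component names echo low-code SAIL syntax but are only examples).

```python
# Hypothetical sketch: rank curated design-system snippets against a query,
# as a docs-search tool on an MCP server might. All names are illustrative.

CURATED_DOCS = [
    {"title": "Cards", "body": "Use a!cardLayout for grouped content with padding and shadow."},
    {"title": "Buttons", "body": "a!buttonWidget supports style PRIMARY, SECONDARY, and LINK."},
    {"title": "Grids", "body": "a!gridField renders read-only tabular data with paging."},
]

def search_docs(query: str, limit: int = 2) -> list[dict]:
    """Score each snippet by how often query terms appear in its title or body."""
    terms = query.lower().split()
    scored = []
    for doc in CURATED_DOCS:
        haystack = (doc["title"] + " " + doc["body"]).lower()
        score = sum(haystack.count(t) for t in terms)
        if score:
            scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:limit]]
```

The point of the curated dataset is that even a simple ranking like this returns high-quality, syntactically correct examples for the model to imitate, which is where the improvement over raw generation comes from.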
- > I answered.
I never answer the phone.
- Not an engineer but I think this is where my mind was going after reading the post. Seems like what will be useful is continuously generated "decision documentation." So the system has access to what has come before in a dynamic way. (Like some mix of RAG with knowledge graph + MCP?) Maybe even pre-outlining "decisions to be made," so if an agent is checking in, it could see there is something that needs to be figured out but hasn't been done yet.
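The "decision documentation" idea above could be as simple as structured records an agent queries on check-in. The schema below is entirely hypothetical, just to illustrate pre-outlining decisions that are flagged but not yet made.

```python
# Invented schema for continuously generated "decision documentation."
# Agents could scan for status == "open" to see what still needs deciding.

DECISIONS = [
    {"id": "D-001", "question": "Which auth provider?", "status": "decided",
     "outcome": "Use the existing SSO gateway"},
    {"id": "D-002", "question": "Component library for new screens?", "status": "open",
     "outcome": None},
]

def open_decisions(records: list[dict]) -> list[dict]:
    """What a checking-in agent would look at first: flagged but unresolved decisions."""
    return [r for r in records if r["status"] == "open"]
```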
- Mine is a much simpler use case but sharing in case it's useful. I wanted to be able to quickly generate and iterate on user flows during design collaboration. So I use some boilerplate HTML/CSS and have the LLM generate an "outline" (basically a config file) and then generate the HTML from that. This way I can make quick adjustments in the outline and just have it refresh the code when needed to avoid too much back and forth with the chat.
Overall, it has been working pretty well. I did make a tweak I haven't pushed yet so it always writes the outline to a file first (instead of just to the terminal). And I've also started adding slash commands to the instructions so I can type things like "/create some flow" and then just "/refresh" (instead of "pardon me, would you mind refreshing that flow now?").
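The outline-to-HTML step could look something like this sketch (not the actual project's code; the outline shape and function name are made up for illustration): the outline stays the editable source of truth, and "/refresh" just re-renders it.

```python
# Illustrative sketch: render a flow "outline" (a small config structure)
# into simple HTML, so outline edits can be re-rendered without chat round-trips.

FLOW_OUTLINE = {
    "title": "Password reset",
    "steps": [
        {"screen": "Login", "action": "Click 'Forgot password'"},
        {"screen": "Reset form", "action": "Enter email, submit"},
        {"screen": "Confirmation", "action": "Check inbox message"},
    ],
}

def render_flow(outline: dict) -> str:
    """Emit a heading plus an ordered list, one item per flow step."""
    parts = [f"<h1>{outline['title']}</h1>", "<ol>"]
    for step in outline["steps"]:
        parts.append(f"  <li><strong>{step['screen']}</strong>: {step['action']}</li>")
    parts.append("</ol>")
    return "\n".join(parts)
```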
- But not Sonnet?
- My use case is a little different (mostly prototyping and building design ops tools) but +1 to this flow.
At this point, I typically do an LLM-readme at the branch level to document both planning and progress. At the project level I've started having it dump (and organize) everything in a work-focused Obsidian vault. This way I end up with cross-project resources in one place, it doesn't bloat my repos, and it can be used by other agents from where it is.
- How does this differ from this project? https://github.com/simonw/llm
- This way of putting it resonates with me: unlocking the value of fuzzy knowledge.
- That's fair. It depends on the goal. I'm not trying to change careers, and I didn't get that sense from the original poster. I'm mostly interested in prototyping or addressing niche productivity issues. But I feel I learn quite a bit from seeing what the LLM does and asking follow-up questions or looking things up. I've been around software dev a lot, so that helps with knowing what to ask sometimes. My main point is that if someone is interested in building software, they should start building as soon as possible. Don't feel you have to learn everything first.
- What I took away from your post was not that you want to learn computer science but that you want to build things with software. If so, now is a really exciting time because it's never been easier for people without a CS background to go from idea to working software.
As a UX designer, I've worked with developers for a long time, so I've picked up knowledge along the way. I've read some books and merged some PRs at work but nothing that would qualify me as a developer.
What I'm having a lot of fun with right now, though, is building with LLMs. If I have an idea, I'll just throw it into Replit or Claude Code to see what it comes up with and then decide if I want to pursue it further.
My 2 cents: learn by building. Start working down your list of ideas and dig deeper into questions and topics that come up. Will probably keep things more interesting than slogging through a course.
- EmailImprov — A realistic email simulation system designed for testing AI agents and agentic workflows. Generate dynamic, contextual email interactions using distinct personas powered by Ollama LLM integration.
Just got this POC up and running the other day. Realistic sample data for prototyping and testing is frequently a pain point. Even more so for anything having to do with email.
So I wanted something that would pretend to be someone and send and respond to fake emails. And it seems like local LLMs are more than capable of this nowadays. Uses Ollama. Vibe-coded with Claude. UX designer here so be gentle.
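The persona-driven part of a system like this can be sketched as prompt assembly for a local model. The persona fields and function below are invented for illustration; the request shape matches Ollama's `/api/chat` endpoint (messages list, `model`, `stream`), which is how a POC like this would typically talk to a local LLM.

```python
# Sketch: build an Ollama /api/chat request that answers an email in character.
# Persona structure and names are hypothetical, not EmailImprov's actual schema.

def build_reply_request(persona: dict, incoming_email: str, model: str = "llama3") -> dict:
    system = (
        f"You are {persona['name']}, {persona['role']}. "
        f"Tone: {persona['tone']}. Reply to emails in character, briefly."
    )
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": incoming_email},
        ],
        "stream": False,
    }

persona = {"name": "Dana", "role": "a busy procurement manager", "tone": "terse but polite"}
payload = build_reply_request(persona, "Hi Dana, any update on the vendor contract?")
# POST payload as JSON to http://localhost:11434/api/chat when Ollama is running
```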
- Thanks for the link! I wonder how this works. Is there just no practical impact of the "book value" being so far off the market price? Surely any exchange is done at the prevailing rates.
> The market value of a gold bar depends on its weight, purity level, and the prevailing market price for gold. Rather than market pricing which fluctuates daily, the New York Fed uses the United States official book value of $42.2222 per troy ounce for gold holdings.
- This is an important point. I hope they'll address this soon. I've just started tinkering with Code and took it for granted that I wouldn't lose the "conversation" when the terminal restarted.
Maybe just ask it to save off the contents of the session as it goes?
- Definitely slower loading and jankier scrolling on my phone.
- Designer and vibe coder here... I also had trouble getting Claude to create an MCP server. What I finally realized was I could just point it at one of the Typescript demo repos from Anthropic. Then it easily cranked out what I was asking for. Maybe not an issue now with Claude 4.
- Much better, thanks! I'd still like to see a more mobile-friendly comments link though. I sometimes scan those first to see if the article is worth reading.