- theturtle32 parentI love it conceptually, but I can't get past the abject failure of the right edges of boxes to be properly aligned. Because of a mishmash of non-fixed-width characters (emoji, etc.), each line has a slightly different length and the right edges of boxes are a jagged mess and I can't see anything else until that's cleaned up.
- Yes, this is my experience as well.
- Today I heard the word "Irredentist" for the first time as I'm about to turn 42.
- That isn’t the flex you think it is lol
- Or a reference to oxidation, the process by which rust is formed…
- For me, the best kind of "moat" (tbh I hate that word, since it specifically implies needing to design and engineer some kind of user lock-in scheme, which is inherently user-hostile) would be staying aggressively on the forefront of DX. More important than feature churn is making it polished and seamless and keeping a smile on my face as I work; that's the best kind of "moat."
It requires constant attention and vigilance, but that's better for everyone than having some kind of "moat" that lets them start coasting or worse— lets them start diverting focus to features that are relevant for their enterprise sales team but not for developers using the software.
Companies really should have to stay competitive on features and developer happiness. A moat by definition is anti-competitive.
- That’s heartbreaking. :-(
- I feel this with every fiber of my being. I used to do a TON of front-end work, some of it quite cutting edge, delivering highly performant user experiences in the browser that had previously been only thought possible in a native app. Back in like 2009-2015. I was deeply connected with the web standards fundamentals and how to leverage them mostly directly.
I detoured into heavier focus on backend work for quite a while, concurrent with the rise of React, and watched its rise with suspicion because it seemed like such an inefficient way to do things. That, and JSX's limitations around everything having to be an expression, made me want to gouge out my eyes.
Still, React pushed and laid the foundation for some really important paradigm shifts in terms of state management. The path from the old mental models around state to a unidirectional flow of immutable data... re-learning a totally new mental model was painful, but important.
Even though it's been chaotic at times, React has delivered a lot of value in terms of innovation and how we conceptualize web application architecture.
But today, when you compare it to something like SolidJS, it's really clear to see how Solid delivers basically all the same benefits, but in an architecture that's both simpler and more performant. And in a way that's much easier to organize and reason about than React. You still get JSX, server components, reactive state management (actually a MUCH better and cleaner foundation for that) and any React dev could move to Solid with fairly little mental re-wiring of the neural pathways. It doesn't require you to really change anything about how you think about application architecture and structure. It just basically does everything React does but better, faster, and with drastically smaller bundle sizes.
Yet I still have to begrudgingly use React in several contexts because of the industry-wide inertia, and I really wish I didn't have to.
- The Mint website is quite lovely! Props for making something so nice and pleasant and clean and easily navigable and informative.
- This is beautiful! I love this so much, as it makes it so simple and intuitive to drop into a sense of curiosity, exploration, serendipity, scanning around, seeing what catches the eye, zooming in and out.
It kind of recaptures part of the intangible sense of flipping through the old physical pages to see what catches the mind's interest. This feels substantively different from the current way that we discover and stumble upon things in the modern web and especially mobile app ecosystems with infinite scroll and algorithmically curated feeds.
- My working theory, which I hold quite confidently, is that anything that doesn't test well with new users in usability testing focus groups or A/B testing eventually gets the axe. But the people conducting that testing are - intentionally or unintentionally - optimizing for the wrong metric: "how quickly and easily can someone who has never seen this app before figure out how to do this action." That's the wrong thing to optimize for at a macro scale. It might make your conversions go up for a while, but at a long term cost of usability, capability, and discoverability that enrages the users that you want to convert into advanced, loyal, word of mouth evangelists for your app because they love it.
When people who are not thinking in that bigger-scale, zoomed-out, societal-level perspective conduct A/B testing or usability testing in a lab or focus group setting, they focus on the wrong metrics (the ones that make an immediate, short-term KPI go up) and then promote the resulting objectively worse UX designs as being evidence-based and data-driven.
It has been destroying software usability for the last 20 years and doing a deep disservice to subsequent generations who are growing up without having been exposed to TRULY thoughtful UX except very rarely.
I will die on this hill.
- I read it perfectly fine on my iPhone. Turning the device to landscape and zooming so the article text was full width made it an almost ideal reading experience.
That said, the site does desperately need a responsive redesign so that you don't need to do what I just described.
- Yes! This is how things should be. And additionally, I want to see all the keyboard shortcuts visible on the menu items they activate. And every tool tip that pops up when you hover over a button should also show whatever keyboard shortcut activates that function. It's the best way for novice users to notice and learn the keyboard shortcuts for the things they care about without having to go elsewhere to look them up.
- I hate everything about this. We've done such a disservice to the next generations by giving them the most dumbed down interfaces to grow up with that they never develop an intuitive sense of how things actually work under the hood. Evidenced by how college students in STEM classes today are often confused when they have to deal with real software that requires them to know where to put files for the first time.
- In practice, "beginner mode" just makes inaccessible all controls deemed by the designer to be outside the realm of basic use cases.
- Integrity.
- > I’d rather play even if moral erosion is required.
Gross.
- This is what always frustrates me: why do companies need to bother with "pathological late stage optimisations" at all, if not for perverse incentives in the fundamental economic and political structure of how companies operate? Why is reaching a growth plateau perceived as stagnation instead of success? Why must a company feel pressured to grow forever, without bound? What's wrong with building a business to sustainability and equilibrium? Why does this almost never happen? Why do we instead see enshittification literally EVERYWHERE?
- "soul"? Oracle never had one in the first place.
- > This amount of scumminess is mind boggling.
It is, unfortunately, the natural result of insufficiently regulated capitalism.
- I've been to different types of (quite excellent!) fusion restaurants in both Peru and Colombia, not to mention multiple cities in Mexico. Good, creative cuisine that draws from multiple cultures is most definitely not limited to primarily a US/Canada thing.
- Correct. RAG is a general term as you describe, but it has become inextricably linked with vector databases in the minds of a lot of people, to the extent that GP and even the blog post we're discussing use RAG specifically to mean the use of a vector database after chunking input to create embeddings.
I agree with you that RAG should be a generic term that is agnostic to the method of retrieval and augmentation. But at the moment, in a lot of people's minds, it specifically means using a vector database.
- Looks like I found what I was looking for: https://github.com/marv1nnnnn/llm-min.txt/blob/main/sample/s...
Edit: not quite.
- I would love to see an example of a full transcript of the generation process for a small-ish library, including all the instructions given to the LLM, its reasoning steps, and its output, for each step in the generation flow.
- Regarding the WebSocket critiques specifically, as the author of https://www.npmjs.com/package/websocket, and having participated in the IETF working group that defined the WebSocket protocol, I completely agree with this blog post's author.
The WebSocket protocol is the ideal choice for a bi-directional streaming communication channel, and the arguments listed in https://github.com/modelcontextprotocol/modelcontextprotocol... for "Why Not WebSockets" are honestly bewildering. They are at best thin, irrelevant, and misleading. It seems as though they were written by people who don't really understand the WebSocket protocol, and have never actually used it.
The comment farther down the PR makes a solid rebuttal. https://github.com/modelcontextprotocol/modelcontextprotocol...
Here are the stated arguments against using the WebSocket protocol, and my responses.
---
Argument 1: Wanting to use MCP in an "RPC-like" way (e.g., a stateless MCP server that just exposes basic tools) would incur a lot of unnecessary operational and network overhead if a WebSocket is required for each call.
Response 1: There are multiple better ways to address this.
Option A.) Define a plain HTTP, non-streaming request/response transport for these basic use cases. That would be both DRAMATICALLY simpler than the "Streaming HTTP" HTTP+SSE transport they did actually define, while not clouding the waters around streaming responses and bi-directional communications.
Option B.) Just leave the WebSocket connection open for the duration of the session instead of tearing it down and re-connecting it for every request. Conceptualizing a WebSocket connection as an ephemeral resource that needs to be torn down and reconstructed for every request is wrong.
---
Argument 2: From a browser, there is no way to attach headers (like Authorization), and unlike SSE, third-party libraries cannot reimplement WebSocket from scratch in the browser.
Response 2: The assertion is true. You cannot attach arbitrary headers to the initial HTTP GET request that initiates a WebSocket connection, not because of the WebSocket protocol's design, but because the design of the browser API doesn't expose the capability. However, such a limitation is totally irrelevant, as there are plenty of other ways that you could decide to convey that information from client to server:
- You can pass arbitrary values via standard HTTP GET query parameters to be interpreted during the WebSocket handshake. Since we're initiating a WebSocket connection and not actually performing a GET operation on an HTTP resource, this does not create issues with caching infrastructure, and does not violate standard HTTP GET semantics. The HTTP GET that initiates a WebSocket connection is HTTP GET in name only, as the response in a successful WebSocket handshake is to switch protocols and no longer speak HTTP for the remainder of the connection's lifetime.
- Cookies are automatically sent just as with any other HTTP request. This is the standard web primitive for correlating session state across connections. I'll grant, however, that it may be a less relevant mechanism if we're talking about cross-origin connections.
- Your subprotocol definition (what messages are sent and received over the WebSocket connection) could simply require that the client sends any such headers, e.g. Authorization, as part of the first message it sends to the server once the underlying WebSocket connection is established. If this is sent pipelined along with the first normal message over the connection, it wouldn't even introduce an additional round-trip and therefore would have no impact on connection setup time or latency.
These are not strange, onerous workarounds.
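As a minimal sketch of the first and third options above (the endpoint URL, query-parameter name, and auth-message shape are all illustrative assumptions, not from any actual MCP or WebSocket spec):

```javascript
// Two browser-compatible ways to convey an Authorization token during a
// WebSocket handshake, given that the browser API exposes no option for
// custom headers. All names here are hypothetical examples.

// Option 1: carry the token as a query parameter on the handshake URL.
// The server reads it during the handshake, before accepting the upgrade.
function handshakeUrlWithToken(baseUrl, token) {
  const url = new URL(baseUrl);
  url.searchParams.set("access_token", token);
  return url.toString();
}

// Option 3: send credentials as the first message once the socket opens,
// pipelined ahead of (or alongside) the first normal protocol message,
// so it adds no extra round-trip.
function firstAuthMessage(token) {
  return JSON.stringify({ type: "auth", authorization: `Bearer ${token}` });
}

// Usage in a browser might look like:
//   const ws = new WebSocket(handshakeUrlWithToken("wss://example.com/mcp", token));
//   ws.onopen = () => ws.send(firstAuthMessage(token));
```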
---
Argument 3: Only GET requests can be transparently upgraded to WebSocket (other HTTP methods are not supported for upgrading), meaning that some kind of two-step upgrade process would be required on a POST endpoint, introducing complexity and latency.
Response 3: Unless I'm missing something, this argument seems totally bewildering, nonsensical, and irrelevant. It suggests a lack of familiarity with what the WebSocket protocol is for. The semantics of a WebSocket connection are orthogonal to the semantics of HTTP GET or HTTP POST. There is no logical concept of upgrading a POST request to a WebSocket connection, nor is there a need for such a concept. MCP is a new protocol that can function however it needs to. There is no benefit to trying to constrain your conceptualization of its theoretical use of WebSockets to fit within the semantics of any other HTTP verbs. In fact, the only relationship between WebSockets and HTTP is that WebSocket uses standard HTTP to bootstrap a connection, after which point it stops speaking HTTP over the wire and starts speaking a totally distinct binary protocol instead. It should be conceptualized as more analogous to a TCP connection than an HTTP connection. If you are thinking of WebSockets in terms of REST semantics, you have not properly understood how WebSocket differs, nor how to utilize it architecturally.
Since the logical semantics of communication over a WebSocket connection in an MCP server are functionally identical to how the MCP protocol would function over STDIN/STDOUT, the assertion that you would need some kind of two-step upgrade process on a POST endpoint is just false, because there would not exist any POST endpoint for you to have interacted with in the first place, and if one did exist, it would serve some other purpose unrelated to the actual WebSocket connection.
---
In my view, the right way to conceptualize WebSocket in MCP is as a drop-in, mostly transparent alternative to STDIO. Once the WebSocket connection is established, the MCP client/server should be able to speak literally EXACTLY the same protocol with each other as they do over STDIO.
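To illustrate that point with a sketch (the transport factory names and the newline-delimited framing are my own illustrative assumptions, not anything mandated by the MCP spec): once the connection exists, the payload a client sends is the same JSON-RPC message either way, and only the framing differs.

```javascript
// Hypothetical sketch: WebSocket as a drop-in alternative to STDIO.
// Both transports reduce to "send a serialized JSON-RPC message";
// the message content is byte-for-byte identical.

function encodeMessage(rpc) {
  // Same JSON-RPC serialization regardless of transport.
  return JSON.stringify(rpc);
}

function makeStdioTransport(stdout) {
  // STDIO framing (assumed here): one JSON message per newline-delimited line.
  return { send: (rpc) => stdout.write(encodeMessage(rpc) + "\n") };
}

function makeWebSocketTransport(ws) {
  // WebSocket framing: one JSON message per text frame. No extra envelope,
  // no HTTP semantics; the payload is exactly the same JSON as over STDIO.
  return { send: (rpc) => ws.send(encodeMessage(rpc)) };
}
```

The design point is that the transport layer is interchangeable: nothing above it (the JSON-RPC messages, the session semantics) needs to change between STDIO and WebSocket.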
- Finally, someone putting into words what I have felt intuitively ever since the concept of an "MVP" and the obsession around A/B testing came into vogue. The quotes that most resonated with me:
---
The MVP concept is often overused to justify low-quality products.
Metrics are treated as precise guides without accounting for interpretation or strategy.
---
When I entered the startup world, I mistakenly followed the MVP playbook. We launched too early, misread feedback, and ended up iterating around noise. What saved the company wasn’t lean methodology. It was building something so good that users couldn’t ignore it. I’ve hired engineers who were obsessed with quality and passed on candidates with vague “passion.”
- This. I actually already internalized fatigue and annoyance about being cold contacted starting at LEAST 15 years ago. If you reach out to me to try to sell me your product, my gut reaction is to block you and never buy.
- From their episode description: "What if we told you that the person who started, runs and owns this establishment has legally ensured that it will never be sold, never go public and never acquire another company?"
We DESPERATELY need more companies to structure themselves like this.
- And the discount was never even big enough for me to consider taking that risk for a moment! $395 new, but then returned and restocked with that sticker and only marked down to $390? Nah. I always wondered who was actually dumb enough to fall for that.
Only time I ever considered it was when the returned one was the only one left.
- This is exactly EXACTLY my experience as well!