mbreese
13,959 karma
username at gmail.

  1. If you follow HN over the course of 24 hours, you’ll see at least 3 major waves. I tend to have an odd sleep schedule, so I’m always amused when I’m online to see which group is currently active.

    There is an Asian/Australian wave, followed by a European wave, and then a North American one. Some of the best times are when the waves start to mix (ex: early morning NYC time, you’ll get the Europe and NA groups interacting).

    You’ll see different types of stories posted. Similar enough that they all make sense on HN, but each wave has a slightly different flavor. The comments are where I start to see more differences. You can get new takes on the same article if you read the comments from different times of the day.

    This doesn’t take into account the total number of people active from each region, which I suspect is skewed towards the US coasts. But it does, I think, speak to how US-centric HN is. I think it serves the needs of each of the waves in a distinct way.

    In many ways, this is just an extension of the weekend HN effect. You can clearly observe differences on the site over the weekends. So, to me, it is unsurprising that you’d find differences across time zones. I’d love to actually see this analyzed more. This is just the take from someone who has been awake at enough hours to observe some anecdotal trends.

  2. If Tailscale is installed on your router, then any client on that router’s network will also be able to reach your Tailscale networks.

    For example, if you have a default route back to your home network on the router, any client’s traffic will also go through that tunnel back through your home network. This assumes you are using your travel router to connect your laptop, as opposed to, say, the hotel wifi. (In this scenario, your travel router is connected to both the hotel wifi as an uplink and Tailscale.)
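
    As a rough sketch, the two ends might look something like this (“home-router” is a placeholder node name, and the exit node still needs to be approved in the admin console):

        # on the home router: offer this node as an exit node
        tailscale up --advertise-exit-node

        # on the travel router: route all traffic back through home,
        # while still allowing access to the local (hotel) LAN
        tailscale up --exit-node=home-router --exit-node-allow-lan-access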

  3. I’m not worried about the LLM getting offended if I don’t write complete sentences. I’m worried about not getting good results back. I haven’t tested this, so I could be wrong, but I think a better-formed, grammatically correct prompt may result in better output. I want to say the LLM will understand what I want better, but it has no understanding per se, just a predictive response. Knowing this, I want to get the best response back. That’s why I try to use complete sentences and good(ish) grammar. When I start writing rushed commands, I feel like I get rushed responses back.

    I also tell the LLM “thank you, this looks great” when the code is working well. I’m not expressing my gratitude… I’m reinforcing to the model that this was a good response, in a way it was trained to see as success. We don’t have good external mechanisms for giving reviews to an LLM that aren’t based on language.

    Like most of the LLM space, these are just vibes, but it makes me feel better. But it has nothing to do with thinking the LLM is a person.

  4. I’ve had some luck with giving the LLM an overview of what I want the final version to do, but then asking it to perform smaller chunks. This is how I’d approach it myself — I know where I’m trying to go, and will implement smaller chunks at a time. I’ll also sometimes ask it to skip certain functionality - leaving a placeholder and saying we’ll get back to it later.
  5. Or it’s a small enough company without an IT department.

    Think of an SMB where you might know you need to do something (like connect a new store location to the server in your main location’s closet), but don’t know how, or can’t afford to hire an IT person full time. This is probably the main market for this. Then, once you get more buy-in, experience, and reputation, this VPN could start to see larger clients. That’s at least how I’d expect to see this grow.

  6. It’s all just loading data into the context/conversation. Sometimes, as part of the chat response, the LLM will request that the client do something - read a file, call a tool, etc. The results of those actions end up back in the context as well.
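
    As a rough sketch of what that loop looks like on the wire against an OpenAI-compatible endpoint (the URL and tool name here are made up), the tool result just gets appended as another message:

        curl -s http://localhost:8000/v1/chat/completions \
          -H 'Content-Type: application/json' \
          -d '{
            "model": "gpt-oss-120b",
            "messages": [
              {"role": "user", "content": "What is in notes.txt?"},
              {"role": "assistant", "tool_calls": [{"id": "call_1", "type": "function",
                "function": {"name": "read_file", "arguments": "{\"path\": \"notes.txt\"}"}}]},
              {"role": "tool", "tool_call_id": "call_1", "content": "...file contents..."}
            ]
          }'
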
  7. I think that's part of the pitch here... swapping out Minio for Garage. Both scale well beyond local development, but local dev certainly seems like a good use case here.
  8. Same. I know I have a couple someplace in a bin. That, and another embedded card from the era, but I think it had something like a DIMM footprint. I thought it was also Dallas Semi, but I can’t find it or remember what it was…

    I remember thinking that some of the tracking features (temperature) of the button would be helpful in some situations. But the ring was the crazy model. Between these and smart cards, authentication was starting to look futuristic. I even remember getting a smart card reader from my credit card company. They thought it would make for more secure web transactions.

    I’ve still seen some iButtons in the wild in odd places. Most recently, I saw them tracking car keys at dealerships. The last car I test drove had a key attached to a fob with an iButton. I was more excited by the iButton tracker than the car.

    But I thought of it as an example of how long-lasting some design decisions can really be. I’m sure someone designed this system 20-25 years ago and it is still in service today. I’m sure today it would be NFC. But now I’m thinking about what the iButton of 2050 will look like.

  9. I used to see this in bash scripts all the time. It’s somewhat gone out of favor (along with long bash scripts in general).

    If you had to prompt a user for a password, you’d read it in, use it, then overwrite the value.

        read -s -p "Password: " PASSWD  # -s keeps the password off the screen
        # do something with "$PASSWD"
        PASSWD="XXXXXXXXXXXXXXXXXX"     # overwrite the value before exiting
    
    It’s not pretty, but a similar concept. (I also don't know how helpful it actually is, but that's another question...)
  10. I wonder how robust the solder joints are for castellated boards. I’d still imagine that to be a weak point vibration-wise. Definitely easier to automate, but would it be that much more robust?

    Thinking about those CM sockets, I think the answer is yes - a castellated solder joint (is that the right term?) would be stronger. But other sockets might be more robust than the CM0’s.

  11. I’ve never heard it described this way: AGI as similar to human flight. I think it’s subtle and clever - my two favorite properties.

    To me, we have both achieved and not achieved human flight. Can humans themselves fly? No. Can people fly in planes across continents? Yes.

    But, does it really matter if it counts as “human flight” if we can get from point A to point B faster? You’re right - this is an argument that will last ages.

    It’s a great turn of phrase to describe AGI.

  12. I like the idea of exposing this as a resource. That’s a good idea so you don’t have to wait for a tool call. Is using a resource faster, though? Doesn’t the LLM still have to make a request to the MCP server in both cases? Is the idea that because it is pinned a priori, you’ve already retrieved and processed the HTML, so the response will be faster?
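
    For what it’s worth, my understanding is that a resource read is still a round trip to the server. Roughly something like this (the endpoint URL and resource URI are made up):

        curl -s http://localhost:3000/mcp \
          -H 'Content-Type: application/json' \
          -d '{"jsonrpc": "2.0", "id": 1, "method": "resources/read",
               "params": {"uri": "site://docs/index"}}'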

    But I do think the lack of a JavaScript loader will be a problem for many sites. In my case, I still run the innerHTML through a Markdown converter to get rid of the extra cruft. You’re right that this helps a lot. Even better if you can choose which #id element to load. Wikipedia has a lot of extra info surrounding the main article that, even with MD conversion, adds extra fluff. But without JS loading, you still won’t be able to process a lot of sites in the wild.

    Now, I would personally argue that’s an issue with those sites. I’m not a big fan of dynamic JS-loaded pages. Sadly, I think that ship has sailed…

  13. Isn’t that for scraping? I think this is for adding (or making it possible to add) an MCP front end to a site.

    Different use cases, I think.

  14. I think this is a good idea in general, but perhaps a bit too simple. It looks like this only works for static sites, right? It performs a JS fetch to pull in the HTML and then converts it (in a quick and dirty manner) to Markdown.

    I know this is pointing to the GH repo, but I’d love to know more about why the author chose to build it this way. I suspect it keeps costs low/free. But why CF Workers? How much processing can you get done for free here?

    I’m not sure how you could do much more in a CF worker, but this might be too simple to be useful on many sites.

    Example: I had to pull in a docs site that was built for a project I’m working on. We wanted an LLM to be able to use the docs in its responses. However, the site was based on VitePress. I didn’t have access to the source markdown files, so I wrote an MCP fetcher that uses a dockerized headless chrome instance to load the page. I then pull the innerHTML directly from the processed DOM. It’s probably overkill, but it’s an example of when this tool might not work.
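
    The core of that fetcher isn’t much more than something like this (a sketch; the image name is just one example, the URL is a placeholder, and the flags may vary by Chrome version):

        # dump the rendered DOM after Chrome has executed the page's JS
        docker run --rm zenika/alpine-chrome \
          --no-sandbox --headless --dump-dom \
          'https://docs.example.com/page' > page.html
        # (something like --virtual-time-budget=5000 can help with slow JS)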

    But — if you have a static site, this tool could be a very simple way to configure MCP access. It’s a nice idea!

  15. I think the idea here is that the web_fetch is restricted to the target site. I might want to include my documentation in an MCP server (from docs.example.com), but that doesn’t mean I want the full web available.
  16. I think deprecation in intra-company code is a completely different beast. You either have a business case for the code or not. And if something is deprecated and a downstream project needs it, it should probably have the budget to support it (or code around the deprecation).

    In many ways, the decision is easier because it should be based on a business use case or budget reason.

  17. > Mickey Mouse Clubhouse's lazy CG animation and unimaginative storytelling

    I think it’s important to remember that you probably aren’t their target audience. Their audience expects to see simple characters with simple stories. The CG doesn’t need to be advanced, so having it fast to produce is the goal. It has to hold the interest of a toddler for 25 min without annoying the parents too much. Shiny and simple rendering is probably what they are going for. You can certainly argue about the educational qualities of the show, but I think entertaining was their primary goal for Mickey Mouse Clubhouse.

    Also, this show hasn’t been made for years, has it? You’re looking at a show that was produced from 2006-2016. The oldest episodes have almost 20-year-old CG; the newest are still nearly 10 years old. At the time it was fresh, the CG was pretty good compared to similar kids’ shows.

    My kids were young right in this window, and we watched a lot of Disney.

    Disney definitely hit a CG valley, though, which you can see in some of their shows that switched from a 2D look to a more 3D rendering. Thankfully we aged out of those shows around 2015, so it has been a while. Disney has always been a content shop where quantity has a quality all its own, so I’m sure I’d have similar opinions to yours if I were looking at the shows now. But at the time, it wasn’t bad.

    I’m not sure how the OpenAI integration will work. I can see all sorts of red flags here.

  18. I have a service that other users access through a web interface. It uses an on-premises open model (gpt-oss-120b) for the LLM and a dozen MCP tools to access a private database. The service is accessible from a web browser, but this isn’t something where the users need the ability to access the MCP tools or model directly. I have a pretty custom system prompt and MCP tools definitions that guide their interactions. Think of a helpdesk chatbot with access to a backend database. This isn’t something that would be accessed with a desktop LLM client like Claude. The only standards I can really count on are MCP and the OpenAI-compatible chat completions.

    I personally don’t think MCP servers add much utility for individuals using a local Claude/ChatGPT/etc client with local resources. If you are only using local resources, then MCP is just extra overhead. If your LLM can call a REST service directly, it’s extra overhead.

    Where I really see the benefit is when building hosted services or agents that users access remotely. Think more remote servers than local clients. Or something a company might use for a production service. For this use case, MCP servers are great. I like having a set protocol that I know my LLMs will be able to call correctly. I’m not able to monitor every chat (nor would I want to) to help users troubleshoot when the model didn’t call an external tool correctly. I’m not a big fan of the protocol itself, but it’s nice to have some kind of standard.

    The short answer: not everyone is using Claude locally. There are different requirements for hosted services.

    (Note: I don’t have anything against Claude, but my $WORK only has agreements with Google and OpenAI for remote access to LLMs. $WORK also hosts a number of open models for strictly on-prem work. That’s what guided my choices…)

  19. For what it's worth, I don't write MCP servers that are shell scripts. Mine are HTTP servers that load data from a database. It's nothing more exciting than a REST API with an MCP front end thrown on top.
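
    To make that concrete, a tool call against one of these servers is just an HTTP request, roughly like this (the hostname, tool name, and arguments are all made up):

        curl -s https://mcp.internal.example.com/mcp \
          -H 'Content-Type: application/json' \
          -d '{"jsonrpc": "2.0", "id": 2, "method": "tools/call",
               "params": {"name": "lookup_order", "arguments": {"order_id": "1234"}}}'
        # behind this, the server just runs a parameterized database query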

    Many people only use local MCP resources, which is fine... it provides access to your specific environment.

    For me however, it's been great to be able to have a remote MCP HTTP server that responds to requests from more than just me. Or to make the entire chat server (with pre-configured remote MCP servers) accessible to a wider (company internal) audience.

  20. Secrets management is hard. And proper secret-sharing setups meant for larger groups are quite unwieldy for smaller groups. Well, they are hard to work with for groups of any size, but they seem like particular overkill for small groups. So I see why you'd want to do this. I also kinda like the idea of just encrypting/decrypting .env files. It's a pretty clean design.
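
    Just to make the idea concrete, this is roughly the shape of it with a tool like age (the file and key names are made up, and this project may well use something different):

        age -r age1examplepublickey... -o .env.enc .env   # encrypt for a recipient key
        age -d -i key.txt .env.enc > .env                 # decrypt with the matching identity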

    But storing secrets in the same git repository just seems off to me. I don't like the idea of keeping the secrets (even in encrypted form) with the code I'm deploying.

    There should be a better balance somewhere, but I'm not sure this is quite it for me. Shared keepass files (not in git) or 1Password vaults are harder to work with, but I think lean more towards the secure side at the expense of a bit of usability. (Depending on the team, OSs, etc...)
