- Kinda get what he’s saying: provide more metadata via structured logging instead of lots of string-only logs. OK, modern logging frameworks steer you toward that anyway. But as a counterpoint: it can often be hard to safely enrich logging like that. In the example they include subscription age, user info, etc. More than once I’ve seen logging code look up metadata, or assume it existed, only to cause perf issues or outright errors because the expected data didn’t exist. Similar with sampling: it can be frustrating when the thing you need gets sampled out. In the end “it depends” on the scenario, but I still find myself either not logging enough or logging too much.
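To make that concrete, here’s a minimal sketch (names are made up, not from any particular logging framework) of enriching a structured log entry defensively, so a missing or failing metadata lookup degrades the entry instead of breaking the log call:

```typescript
// Hypothetical helper: add metadata fields to a structured log entry,
// tolerating lookups that throw or return nothing.
type LogFields = Record<string, unknown>;

function safeEnrich(
  fields: LogFields,
  lookups: Record<string, () => unknown>
): LogFields {
  const enriched: LogFields = { ...fields };
  for (const [key, lookup] of Object.entries(lookups)) {
    try {
      enriched[key] = lookup() ?? "unknown"; // tolerate absent data
    } catch {
      enriched[key] = "enrichment_failed"; // never let enrichment throw
    }
  }
  return enriched;
}

// Usage: the subscription lookup may throw; the plan lookup may return nothing.
const entry = safeEnrich(
  { event: "checkout", userId: 42 },
  {
    subscriptionAge: () => {
      throw new Error("subscription service down");
    },
    plan: () => undefined,
  }
);
console.log(JSON.stringify(entry));
```

The trade-off: you keep the log call cheap and safe, at the cost of occasionally logging `"unknown"` instead of the metadata you wanted.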
- AGUI sounds similar: https://github.com/ag-ui-protocol/ag-ui
- I really want this but for Azure DevOps. If you're not familiar, Microsoft owns both GitHub and Azure DevOps, and both do similar things: git repos and project management. I can use GitHub Copilot, Claude Code CLI, etc. against code on my disk, including via the Azure DevOps MCP. But what I can't easily do is what GitHub Copilot Agent and apparently this Claude Code on Web offer: assign a ticket to @SomeAi and have a PR show up a few minutes later. Can't change to GitHub for _reasons_.
Would love any suggestions if anyone is in a similar situation.
- This could mean that in the Drake equation ne (the number of planets per star capable of supporting life) is very small. A planet has to be hit by a comet big enough to deliver a large amount of water but not so big or fast as to destroy it, and it has to be in the Goldilocks zone of its star. The mass of the planet would also play a part: the gravity of more massive ones would be more likely to capture a comet. But again, too massive and I could see that hampering life.
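For reference, the Drake equation as usually written, with ne being the factor in question:

```latex
N = R_{*} \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L
```

Since the terms multiply, a very small ne drags the whole estimate of communicating civilizations N down with it, no matter how generous the other factors are.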
- Was thinking the same. Not only would shifting the industry to ECMAScript or something else get around the trademark nonsense, but now that I think about it, I still hear non-techy manager types get confused to this day and call it Java. The time also seems right, as less is done in plain JavaScript now: it’s TypeScript, React, framework du jour, WASM. I guess the hard part is convincing an industry to use a different word.
- I did the same at first, but then recognized it as something that has affected family members. It would be nice to find a name that is a little more respectful to the victims. I can tell you victims already feel guilt and shame for falling for these things. I would send them the article, but I don’t want to call them “pigs” on top of what they have already suffered.
- I have a family member falling for these on a regular basis. In their case it’s possibly tied to mental health issues. They are able to drive, converse, and care for themselves, but are sending money to groups that are clearly not real (example: a fundraiser for a celebrity supposedly in the hospital, when the news showed they weren’t; they were convinced they were actually conversing with the celebrity). The rest of the family has taken some steps but feels at a loss. How do you prevent them from being seriously hurt emotionally and financially while respecting their autonomy and dignity? Even when they “come around” to the fact that they have been scammed, that adds insult to injury. The vector is definitely social media and SMS/phone.
Any tips would be appreciated. Locking down the phone hasn’t really helped, and finances are already segregated to hopefully avoid them giving away their total life savings.
- This is interesting to me because somehow I’d had it in my head that if we develop the ability in the next couple of centuries to send probes interstellar, there would be a longer list of possible targets. What this makes me realize is that the list of places we could visit even in the next thousands of years - even with incredible leaps in propulsion - is very finite. Space may be really, really big, but the part physically accessible even on long timescales is limited.
- One gotcha with a roll-your-own task scheduler is running it across multiple machines. If you need 5 machines running different scheduled tasks, you need a locking mechanism to ensure only one machine is processing a given task. In the author’s approach this is handled by the queue, but as I read it, the scheduling itself can only happen on one machine or you get multiples of the same task in the queue. Retry can also get more complicated: depending on the failure you may want exponential backoff, retrying N times and waiting longer between attempts. A nice dashboard to see the status of everything is helpful too.
In the .NET world I use Hangfire for this. In Node (which I assume this is) I tinkered with Bull, but I’m not sure what best in class is there.
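For the retry piece, a rough hand-rolled sketch of exponential backoff (for illustration only; Hangfire and Bull each ship their own retry options):

```typescript
// Retry an async task up to maxAttempts times, doubling the wait
// between attempts: baseDelayMs, 2x, 4x, ...
async function retryWithBackoff<T>(
  task: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 100
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await task();
    } catch (err) {
      if (attempt >= maxAttempts) throw err; // out of retries, surface the error
      const delay = baseDelayMs * 2 ** (attempt - 1); // 100, 200, 400, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

In practice you’d also add jitter so a fleet of workers doesn’t retry in lockstep, and note this does nothing about the multi-machine locking problem - that still needs the queue or a distributed lock.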
- Well, sometimes I will, but for example take a simple list+form on top of a database. Instead of building the UI and the database and then showing the stakeholder, who adds/renames fields, changes relationships, etc., I will intentionally build just the UI, not wired up to a database - sometimes just to an in-memory store, or nothing. Then, _after_ the stakeholder is somewhat happy with the UI, I "bake" things like a service or data layer. This way the changes the stakeholder inevitably has up front have less of an impact.
- This pretty much exactly describes my strategy to ship better code faster. Especially the “top down” approach: I’m actually kind of surprised there isn’t a “UI first” or “UI-Driven Development” manifesto like with TDD or BDD. Putting a non-functional UI in front of stakeholders quickly often results in better requirements gathering and early refinement that would be more costly later in the cycle.
- At my first intranet job in the early 2000s, reporting was done this way. You could query a DB via ASP to get some XML, then transform it using XSLT and get a big HTML report you could print. I got pretty good at XSLT. Nowadays I steer towards a reporting system for reports, but for other scenarios you’re typically doing one of the stacks he mentioned: JSON or md + angular/vue/react/next/nuxt/etc.
I’ve kinda gotten to a point, and I’m curious if others feel the same: it’s all just strings. You get some strings from somewhere, write some more strings to make those strings show other strings in the browser. Sometimes the strings reference non-strings for things like video/audio/images, but even those get sent over the network with strings in the HTTP header. Sometimes people have strong feelings about their favorite strings, and there are pros and cons to various strings. Some ways let you write fewer strings to do more. Some are faster. Some have angle brackets, some have curly brackets, some have none at all! But at the end of the day, it’s just strings.
- I’ve wanted this for a while. I worked on a bank app where the home-rolled solution was atrocious. Line-of-business apps don’t make sense in the Microsoft Store. But where I really land is to greatly prefer web apps deployed to IaaS, because deployment is easier and compatibility is usually a known quantity. Debugging installer or desktop app issues on remote servers and desktops is a hassle I like to avoid if I can.
- Totally understand why it’s not in the post, and it did help me understand MCP more. That said, that’s the issue: most articles I’ve seen are geared toward building a local-use-only MCP. The ones I want to build need to be deployed into an enterprise and know the current user, and I’m not quite clear how yet. The answers on using OAuth help, though. Maybe a future post idea :)
- I’m trying to wrap my head around MCP, but auth and security is still the confusing part to me. In this case, I get that there is an OAuth redirect happening, but where is the token being stored? How would that work in an enterprise or SaaS environment where you want to expose an MCP for users but ensure they can only get “their” data? How does the LLM reliably tell the MCP who the current user is?
- As bad and annoying as this is, I do think “we won’t pay the ransom, but we’ll set up a reward fund in the same amount to find the perps” is an interesting approach. It turns the tables such that any of the criminals or their associates are now incentivized to turn on each other. I could see ways it wouldn’t work (they lie to get the reward; future scammers set up the scam with a patsy so they can collect the reward), and I’m not sure it plays the same if there are actually exposed keys, etc.
- .NET has made great strides on this front in recent years. Newer versions optimize the CPU and RAM usage of lots of fundamentals, and introduced new constructs to reduce allocations and CPU for new code. One might argue they were able to because they were so bad, but it’s worth looking into if you haven’t in a while.
- Yeah, this resonates, and not just with software. I have a “jack of all trades” attitude passed down from my father and grandfather. If I don’t know how, I learn. I do my own large tree work, electrical, plumbing, car repair, and haircuts. In many ways this is good- it’s saved me money and in the case of the zombie apocalypse I think I could last a few months at least. Sometimes I think I should pay someone else, but somehow that always feels like more trouble. So I end up feeling like I should do it all. And then get overwhelmed and in a rut with all the psychic weight of the things I need to do.
Similar with software. I always tell my clients “yes I can do that” because, well, I can. But then I end up juggling too much, working nights and weekends, and not having time for the tree work and haircuts I have to do at home.
- Interesting article. I’m currently trying to justify “tech debt” work to upper management. The way I try to communicate it: we can move faster and better on business requirements if we delete unused code, upgrade legacy dependencies, etc. Both need to happen at the same time, though, and that’s where it gets hard: “this feature request would be so much easier if I upgraded to Angular vlatest” runs the risk of always paying down tech debt and never providing value.
- "It’s easy to blame the user's missing grasp of basic version control, but that misses the deeper point."
Uhh, no, that's pretty much the point. A developer without a basic understanding of version control is like a pilot without a basic understanding of landing. A ton of problems with AI (or any other tool, including your own brain) get fixed by iterating on small commits and branches. Throw away the commit or branch if it really goes sideways. I can't fathom working on something for 4 months without noticing a problem or having any way to roll back.
That said, the one argument I could see is if Cursor (or Copilot, etc.) had something built in to suggest "this project isn't in source control; we should probably fix that before getting too far ahead of ourselves," and then helped the user set up source control, a repo, commits, etc. The topic _is_ tricky, and I do remember not totally grasping git, branching, etc. at first.
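The small-commit loop I mean, sketched in plain git (the branch name and commit messages are just illustrative):

```shell
# Isolate AI-assisted work on a throwaway branch, committing after each
# small, reviewed change, so a misstep is one revert (or branch delete) away.
git init                           # if the project isn't in source control yet
git checkout -b ai-experiment      # isolate the experiment on its own branch

# ...let the tool make one small change, review it...
git add -A
git commit -m "step 1: scaffold the feature"

# ...next small change, review again...
git add -A
git commit -m "step 2: wire up the form"

git revert --no-edit HEAD          # undo just the last step if it goes sideways
# or abandon the whole experiment:
git checkout main                  # (or master, or whatever your main branch is)
git branch -D ai-experiment
```

Nothing fancy, but it's exactly the safety net the person in the article was missing for 4 months.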
- This take has a few problems. First, poor people in the US are capable of using fluoride toothpaste and flossing. At least at the homeless shelters and outreach events I’ve been to, toothpaste and toothbrushes are freely available. Your argument hinges on them being incapable on the whole and needing a Benevolent But Superior Intelligence to provide an alternative for them.
Second, it completely ignores any debate over effectiveness or side effects. It could well be that fluoride in water is great for teeth but bad for brains; the objections to fluoride in water I’ve seen are more along those lines. I’m not clear on the validity of those claims, but notably, anti-fluoride advocates don’t typically object to chlorine in water to kill germs. That seems like the core issue: without bias from stakeholders, is the benefit of fluoride proven and are the risks disproven? It’s hard to answer, because a study needs to span many years and exclude many variables.
And in general, I think that is what needs to happen with these types of debates: take them _out_ of the sphere of charged political opinion, focus on getting to the objective truth of the risks and benefits, then be transparent. People can handle “here are the known pros and cons and what we think that means” better than “there are only pros and no cons, and if you disagree you hate poor people.”
- You can also expose the agents via MCP or AGUI, so they can be tools you integrate with other AI platforms.