https://davepeck.org/about/
- If you’re curious about or playing with t-strings, see https://t-strings.help/
- Hi! Thanks for your deeply non-silly reply; it's nice to (virtually) meet a cofounder.
If you have time, I'd love to hear your thoughts on Mullvad's campaign here in Seattle.
For what it's worth, I suppose my perspective boils down to: the first three issues aren't issues here in town, or can be addressed in more direct ways (we have a wide choice of providers; first-party browsers and services cover the gamut of tracking concerns; etc.). Circumventing geographical restrictions is useful, but -- perhaps understandably! -- doesn't appear to be what Mullvad is advertising on the trains I ride.
- Long ago, in the era of Firesheep and exploding prevalence of coffee-shop Wi-Fi, consumer VPN services were definitely valuable.
But that was long ago. Now, HTTPS is the norm. The only use cases for consumer VPNs today seem to be (1) "pretend I'm in a different geography so I can stream that show I wanted to see" and (2) "torrent with slightly greater impunity".
I live in Seattle and Mullvad VPN seems to have bought approximately all of the ad space on public transit over the past couple months. Their messaging is all about "freeing the internet" and fighting the power. It's deeply silly and, I worry, probably quite good at attracting new customers who have no need for (or understanding of) VPNs whatsoever.
- > Everyone loves the dream of a free for all and open web. But the reality is how can someone small protect their blog or content from AI training bots?
I'm old enough to remember when people asked the same questions of Hotbot, Lycos, Altavista, Ask Jeeves, and -- eventually -- Google.
Then, as now, it never felt like the right way to frame the question. If you want your content freely available, make it freely available... including to the bots. If you want your content restricted, make it restricted... including to the humans.
It's also not clear to me that AI materially changes the equation, since Google has for many years tried to cut out links to the small sites anyway in favor of instant answers.
(FWIW, the big companies typically do honor robots.txt. It's everyone else that does what they please.)
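For those who do want to opt out, the honor-system mechanism is robots.txt. A minimal sketch (GPTBot and CCBot are real crawler user agents, but check each vendor's documentation for current names):

```
# Block common AI training crawlers; allow everything else.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
```

Which, per the above, only restrains the crawlers that choose to honor it.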
- There are many reasons that "I used AI to do it all and now I've got $REAL ARR" strikes me as unlikely. To name just two:
1. I code with LLMs (Copilot, Claude Code). Like anyone who has done so, I know a lot about where these tools are useful and where they're hopeless. They can't do it all, claims to the contrary aside.
2. I've built a couple businesses (and failed tragicomically at building a couple more). Like anyone who has done so, I know the hard parts of startups are rarely the tech itself: sales, marketing, building a team with values, actually listening to customers and responding to their needs, making forward progress in a sea of uncertainty, getting anyone to care at all... sheesh, those are hard! Last I checked, AI doesn't singlehandedly solve any of that.
Which is not to say LLMs are useless; on the contrary, used well and aimed at the right tasks, my experience is that they can be real accelerants. They've undoubtedly changed the way I approach my own new projects. But "LLMs did it all and I've got a profitable startup"... I mean, if that's true, link to it because we should all be celebrating the achievement.
- Thanks, I’ll take a look. Everyone uses these tools differently, so I find AI-generated repos (and AI live-coding streams) to be useful learning material.
FWIW: “Infuriating/wonderful” is exactly how I feel about LLM copilots, too! Like you, I also use them extensively. But nothing I’ve built (yet?) has crossed the threshold into salable web services and every time someone makes the claim that they’ve primarily used AI to launch a new business with paid customers, links are curiously absent from the discussion… too bad, since they’d be great learning material too!
- I’ve never once successfully gotten a usable sprite sheet out of ChatGPT. The concept seems foreign to it, and no matter how hard I try to steer it, it’ll find a way to do something hopeless (inconsistent frame sizes; incoherent animations; no sense of consistent pixel sizes or of what distinguishes, say, 8-bit from 16-bit era sprites; it’ll draw graph paper in the background for some reason; etc.). If anyone has a set of magic prompts for this, I’d love to learn about it. But my suspicion is that it’s just fundamentally the wrong tool for the job — you probably need a purpose-built model.
- > I've literally built the entire MVP of my startup on Claude Code and now have paying customers.
Would you mind linking to your startup? I’m genuinely curious to see it.
(I won’t reply back with opinions about it. I just want to know what people are actually building with these tools!)
- Am I the only one who found Dohmke’s communication style to be… buzzword-forward? For a company whose roots are in pragmatic engineering, I always felt there was a too-heavy component of hype, particularly around AI, in pretty much every recent public announcement. Yet despite all the rhetoric and GitHub’s superior position in the industry, they failed to capture the current AI editor market.
Structurally, it seems to make sense for GitHub to be part of Microsoft proper.
Perhaps this is a change for the better.
(PS: despite their “failure” to win hearts and minds, I do recommend giving Copilot in VS Code another look these days. Its agentic mode is very good and rapidly improving; I find it comparable to Claude Code at this point, particularly when paired with a strong model. Related to structure: I never quite understood the line between which parts of this GitHub made, and which parts the VS Code and related Microsoft teams made.)
- Baseten serves models as a service, at scale. There’s quite a lot of interesting engineering both for inference and infrastructure perf. This is a pretty good deep dive into the tricks they employ: https://www.baseten.co/resources/guide/the-baseten-inference...
- Yeah.
It does feel like we're marching toward a day when "software on tap" is a practical or even mundane fact of life.
But, despite the utility of today's frontier models, it also feels to me like we're very far from that day. Put another way: my first computer was a C64; I don't expect I'll be alive to see the day.
Then again, maybe GPT-5 will make me a believer. My attitude toward AI marketing is that it's 100% hype until proven otherwise -- for instance, proven to be only 87% hype. :-)
- I think Litestar is superb for building API backends. Love it; use it; only good things to say. Their Advanced Alchemy is coming along nicely, too.
Litestar of course supports old-school server-template-rendered sites, too; it even has a plugin for HTMX requests and responses. In practice, I find that the patterns that serve API endpoints so well sometimes get in the way of old-school "validate form and redirect, or re-render with errors" endpoints. In particular, Litestar has no "true" form support of its own; validation is really intended to flag inbound schema mismatch on API calls, not flag multiple individual error fields. Using `@post("/route", exception_handlers={...})` is pretty awkward for these endpoints. I'd be excited to see some better tools/DX in-the-box here.
- > What can be padded is quite inconsistent.
Yes, I agree. “What works” is type dependent and Python’s builtin types don’t always behave the same way.
I know you know this, but for those who may not have run across it before and are curious:
f-strings invoke the format() builtin on the evaluated value.
Under the hood, format() turns around and calls type(value).__format__(format_spec). In the case of None:

```python
>>> format(None)          # as in f"{None}"
'None'
>>> format(None, '<10')   # as in f"{None:<10}"
TypeError: unsupported format string passed to NoneType.__format__
```

That is, NoneType inherits Python's default object formatter, which doesn't support any format spec; if the spec is nonempty, it's a TypeError. Looking at the docstring:

```python
>>> help(type(None).__format__)
Help on built-in function __format__:

__format__(self, format_spec, /)
    Default object formatter.

    Return str(self) if format_spec is empty. Raise TypeError otherwise.
```

On the other hand, `str` and `int` both have deep custom __format__ implementations that can do much more, like padding [0].
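And since __format__ is just a method, any class can define its own spec mini-language. A toy sketch (the Angle class and its "r" spec are invented for illustration):

```python
import math

class Angle:
    """Toy class showing that format-spec handling is entirely type-defined."""

    def __init__(self, degrees: float):
        self.degrees = degrees

    def __format__(self, spec: str) -> str:
        if spec == "":   # f"{a}" calls format(a, "")
            return f"{self.degrees} deg"
        if spec == "r":  # invented spec: render in radians
            return f"{math.radians(self.degrees):.4f} rad"
        raise TypeError("unsupported format string passed to Angle.__format__")

a = Angle(90)
print(f"{a}")    # -> 90 deg
print(f"{a:r}")  # -> 1.5708 rad
```

Any spec the class doesn't recognize raises a TypeError, just like NoneType does for every nonempty spec.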
PS: upcoming t-strings in 3.14 add another twist to format specs: rather than being type dependent, "what works" depends on whatever code processes the Template instance. Unlike with f-strings, format() is not called automatically when t-strings are evaluated; in fact, there's no requirement that processing code does anything with the format specs at all. (Good processing code probably will want to do something!)
[0] https://docs.python.org/3/library/string.html#format-specifi...
- Hah!
T-strings use the exact same syntax as f-strings, but their “format spec” (the part after the optional :, like .2f) can effectively be anything.

That might make creating a quiz tricky? With f-strings, Python immediately calls format() on that spec; with t-strings, it's simply captured in Interpolation.format_spec. It's up to your code to decide whether to call format() or do something entirely different with this value.
- > But in a real production environment, for real applications, you still want to avoid it because it isn't particularly easy to create robust systems for industrial use.
This is silly and seems to discount the massive Python codebases found in "real production environment"s throughout the tech industry and beyond. Some are singlehandedly the codebases behind $1B+ ventures, and many, I'd wager, are "robust" and fit for "industrial use" without needing babysitting just because they're Python.
(I get not liking a given language or its ecosystem, but I suspect I could rewrite the same reply for just about any of the top 10-ish most commonly used languages today.)