[ my public key: https://keybase.io/joshstrange; my proof: https://keybase.io/joshstrange/sigs/9yQAUjB3P9Xt5_LeWFPzRkPZv_m7q7M-EgiD3XOB6eo ]
- I’d be (genuinely) interested to hear from people who think this will help. In my mind, if the JSON isn’t valid I wouldn’t trust a “healed” version of it to be correct either. I mean, I guess you just do schema validation on your end and so maybe fixing a missing comma/brace/etc is actually really helpful. I’ve not done JSON generation at scale to know.
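For what it’s worth, the “heal, then schema-check” flow can be sketched in a few lines. The repair rule (stripping trailing commas) and the required-key “schema” here are made-up stand-ins for whatever a real healer/validator would do:

```python
import json
import re

def heal_and_validate(raw: str, required_keys: set[str]) -> dict:
    """Try to parse JSON; on failure, apply a naive repair
    (strip trailing commas) and then check required keys."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        # Naive "healing": remove trailing commas before } or ]
        healed = re.sub(r",\s*([}\]])", r"\1", raw)
        data = json.loads(healed)  # may still raise if truly broken
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"schema check failed, missing: {missing}")
    return data

# A trailing comma from an LLM is recoverable:
print(heal_and_validate('{"name": "a", "age": 3,}', {"name", "age"}))
# → {'name': 'a', 'age': 3}
```

The schema check is what makes the healing defensible: a repaired document that still carries the expected keys is at least structurally plausible, even if you can’t prove the values are right.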
- Based on what’s been reported, it was almost certainly just a mistake, but I could also 100% see this administration selling ads on their live feed.
- One of the most valuable things about code generation from LLMs is the ability to edit it: you have all the pieces and can tweak them after the fact. Same with normal generated text. Images, on the other hand, are much harder to modify, and the times when you might want text or other “layers” are exactly where they fall apart in my experience. You might get exactly the person/place/thing rendered while the additions to the image aren’t right, and it’s nearly impossible to change just the additions without losing at least some of the rest of the image.
I’ve often thought “I wish I could describe what I want in Pixelmator and have it create a whole document with multiple layers that I can go back in and tweak as needed”.
- > the Bay Area tradition that treated computers not as office appliances but as tools for thought, instruments of liberation
Missed a great chance to use the "bicycle for our minds" Steve Jobs quote (one of my favorites, since it resonates so clearly with me).
- > So I’m actually fine with the proposed change since it also gives me the power as a customer to say “hey, I’m paying for this, fix it.”
I’m paying for GitHub Actions now and there is zero recourse (other than leaving). Giving them money doesn’t change anything.
I’d be more willing to pay if GH Actions weren’t so flaky and frustrating (for hosted or self-hosted runners; I use both). At least self-hosted runners are way cheaper _and_ have better performance.
- This is a hilariously naive take.
If you’ve actually tried this, and actually read the results, you’d know this does not work well. It might write a few decent tests, but get ready for an impressive number of tests and cases with no real coverage.
I did this literally 2 days ago and it churned for a while and spit out hundreds of tests! Great news, right? Well, no, they did stupid things like “create an instance of the class (new MyClass), now make sure it’s the right class type”. It also created multiple tests that created maps, then asserted the values existed and matched… matched the maps it created in the test… without ever touching the underlying code it was supposed to be testing.
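A rough Python translation of the no-op tests I’m describing (class and method names are hypothetical), next to what an actual behavioral test looks like:

```python
import unittest

class MyClass:
    """Hypothetical class under 'test'."""
    def compute(self, x: int) -> int:
        return x * 2

class TestMyClass(unittest.TestCase):
    def test_instance_is_right_type(self):
        # Asserts nothing about behavior: constructing an object
        # and checking its type can essentially never fail.
        self.assertIsInstance(MyClass(), MyClass)

    def test_map_matches_itself(self):
        # Builds a dict in the test, then asserts against the same
        # literal -- the code under test is never even called.
        expected = {"a": 1, "b": 2}
        self.assertEqual(expected, {"a": 1, "b": 2})

    def test_compute_actual_behavior(self):
        # What a real test looks like: exercise the code under
        # test and assert on its output.
        self.assertEqual(MyClass().compute(3), 6)
```

The first two pass forever no matter what you do to `MyClass`, which is exactly why raw test counts are such a misleading metric here.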
I’ve tested this on new codebases, old codebases, and vibe-coded codebases. The results vary slightly, and you absolutely can use LLMs to help with writing tests, no doubt, but “just throw an agent at it” does not work.
- I can assure you WarpBuild has Mac runners that work very well. When I first switched, GH only offered 1 Mac runner and it was horribly slow. I literally cut my build times in half by changing 1 line in my workflow file to use the WB runner.
Nowadays GH has more sizes, but WB continues to beat them in price and performance.
It’s highway robbery what GH charges for the crap they provide. I can highly recommend WarpBuild for Mac (and Linux) runners.
- I find that page incredibly hard to read. I cannot fathom why someone would lecture others about UI/UX and do it using that as the UI/UX.
Are modals/dialogs perfect? Absolutely not but completely eschewing them is also a mistake. In all things, moderation.
- > staggered rollout
It's too bad no OpenAI Engineers (or Marketers?) know that term exists. /s
I do not understand why it's so hard for them to just tell the truth. So many announcements "Available today for Plus/Pro/etc" really means "Sometime this week at best, maybe multiple weeks". I'm not asking for them to roll out faster, just communicate better.
- Ahh, so since GitHub is completely incompetent when it comes to managing a CI they are going to make it worse for everyone to get their cut.
I hate GH Actions runners with a passion. They are slow, overpriced, and clearly held together with duct tape and chewing gum. WarpBuild, on the other hand, was a breeze to set up and provided faster runners at lower prices.
This is a really shitty move.
Hey GitHub, your Microsoft is showing...
- Nothing irks me quite as much as "Did you use ChatGPT/AI on this?" or assumptions that it was used.
Just the other week a client reached out and asked a bunch of questions that resulted in me writing 15+ SQL queries (not small/basic ones) from scratch and then doing some more math/calculations on top of that to get the client the numbers they were looking for. After spending an hour or two on it and writing up my response, they said something to the effect of "Thanks for that! I hope AI made it easy to get that all together!".
I'm sure they were mostly being nice and trying (badly) to say "I hope it wasn't too much trouble", but it took me a few iterations to put together a reply that wasn't confrontational. No, I didn't use AI, mostly because LLMs absolutely suck at that kind of thing. Oh, they might spit out convincing SQL statements, and those SQL statements might even work and return data, but the chance they got the right numbers is very low in my experience (yes, I've tried).
The nuance in a database schema, especially one that's been around for a while and seen its share of additions/migrations/etc, is something LLMs do not handle well. Sure, if you want a count of users an LLM can probably do that, but anything more complicated that I've tried falls over very quickly.
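A toy illustration of the kind of schema nuance I mean (table and column names are hypothetical): if a later migration added soft deletes, the obvious count silently over-reports, and nothing in the table definition alone tells you to filter:

```python
import sqlite3

# Hypothetical schema: a migration added soft deletes via a
# nullable deleted_at column. An LLM looking only at the DDL
# has no reason to know active rows must be filtered.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, deleted_at TEXT);
    INSERT INTO users (id, deleted_at) VALUES
        (1, NULL), (2, NULL), (3, '2024-01-01');
""")

# The "obvious" query counts soft-deleted rows too.
naive = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

# The correct query encodes tribal knowledge about the schema.
correct = conn.execute(
    "SELECT COUNT(*) FROM users WHERE deleted_at IS NULL"
).fetchone()[0]

print(naive, correct)  # → 3 2
```

Both queries run, both return plausible numbers, and only one of them is right, which is exactly why “it returned data” is such a weak signal.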
The whole ordeal frustrated me quite a bit because it trivialized and minimized real work that I did (non-billed work; I was trying to be nice). I wouldn't do this, because I'm a professional, but there was a moment when I thought "Next time I'll just reply with AI slop instead and let them sort it out". It really took the wind out of my sails and made me regret the effort I put into getting them the data they asked for.
- > risk THE FREE INTERNET because of that
Come off it, as if he is the only one who can save us. Spare me.
- This is hard, because on one hand I do love self-hosting (I self-host a number of the services they list in their "App Store") but I don't quite get the target market for this (probably because I'm not in it).
The lack of RAID or similar means that one failed component loses all your data; you've traded the cloud for a single point of failure. Coupled with the lack of any (obvious) backup solution, that's concerning. Do you really want to back up your files/images to a single point of failure? If this is supposed to be turn-key then I think there are opportunities to sell cloud backup as an add-on, but as-is you are handing people a ticking time bomb.
I'm not a fan of the Crypto angle highlighted in the store, it's a red flag.
I'm interested in what the app compatibility story is here. Like how much post-install configuration are they handling?
> Sonarr on umbrelOS will automatically connect to download clients installed from the Umbrel App Store. Choose from Transmission, qBittorrent, and SABnzbd. Simply install your preferred client(s).
Does that mean they have post-install hooks (on both Sonarr's and the download client's end) to configure those? Or is that just marketing speak for "Yeah, you can easily configure XYZ download client that you also installed"?
All in all, it seems overpriced and limited for what it's offering, and that's all assuming they stick around and don't peter out. Maybe this is a good first step for someone interested in this, but I feel like the type of person interested in this either can already figure out how to set it up themselves (Synology, UnRaid, Docker, etc) or will need a lot of handholding when things break/don't work as expected.
It's entirely possible that there are a lot of people that this would be good for, I just don't know who it would be.
Lastly, no mention of anything like SSO or remote access (both things that could be a good value-add IMHO, alongside cloud backup). It's overly nerdy in some ways and not nerdy enough in others, which is why I can't figure out the target audience.
- In my experience, AI/Vibe-coded tools crumble under their own weight given enough iterations and even faster if there is no (real) developer in the loop overseeing/planning/reviewing.
I think that _developers_ might be reaching for more LLM-built tools instead of SaaS in some cases. I can also believe that plenty of people _think_ they are vibe-coding up alternatives to SaaSes they pay for, but those people are going to have a bad time when it eventually collapses (the tool they made; I'm not talking about the AI bubble).
I'm not anti-LLM (not in the slightest) and you can sometimes (it's not a given) get to 80-90% of an existing product/service with vibe-coding or LLM-assisted development but that last 10-20% (and especially that last 1-5%) are where it gets hard. Really hard.
It's the typical "you can already build such a system yourself quite trivially"-mentality IMHO. I feel this myself all the time, even before LLMs, "Oh, I could clone this easily!" and in many cases I could or even did... or at least I cloned the easy/basic/happy-path version that eschewed a whole slew of features I didn't need/care for. But then the complexity started to set in. [0]
I have the same feeling for things I'm not even trying to clone, just build from scratch. I put together a cookbook for friends and family recently and used LLMs to help write essentially a static site generator to read my JSON data I created (some with the help of LLMs) and render it out as HTML (which I then turned into a PDF). My mind started to run with "Hmm, could I create a product out of this? It was relatively easy to get started..." but then reality set in and I remembered all the little tweaks I had to do (shorten a title here, reduce padding there, etc to make everything fit and look good). Sure, I got 80% of the way there in the first or second iteration of it using LLMs but there was plenty of massaging that had to happen to turn it into something usable that I could send to a printer.
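The core of that generator was genuinely small, which is what fed the "could this be a product?" feeling. A toy sketch (with a made-up recipe schema of `title`/`ingredients`/`instructions`) looks something like this; all the fit-and-finish work lives outside it:

```python
import json
from html import escape

def render_recipe(recipe: dict) -> str:
    """Render one recipe dict as an HTML fragment."""
    items = "".join(f"<li>{escape(i)}</li>" for i in recipe["ingredients"])
    return (
        f"<article><h2>{escape(recipe['title'])}</h2>"
        f"<ul>{items}</ul>"
        f"<p>{escape(recipe['instructions'])}</p></article>"
    )

def render_book(recipes: list[dict]) -> str:
    """Concatenate recipe fragments into one printable page."""
    body = "".join(render_recipe(r) for r in recipes)
    return f"<html><body>{body}</body></html>"

# Hypothetical JSON data file contents:
recipes = json.loads(
    '[{"title": "Toast", "ingredients": ["bread", "butter"],'
    ' "instructions": "Toast the bread, add butter."}]'
)
html = render_book(recipes)
print(html)
```

Getting from here to something you'd actually send to a printer is where the shortened titles, padding tweaks, and page-fitting passes came in, and none of that generalizes cleanly.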
- I work with many distributed, often offline, hosts with varied levels of internet speed. Does this do any offline caching? Like, if I load a Litestream VFS database on one of my nodes and it goes offline, can it still query, or will it fall over unless the data was recently fetched?
- Oh, I know you all did and I remember the cross-signing. I worried that you'd get slapped down somehow, that the crappy cert companies would find a way to stop/reverse it, that the project would fizzle out, etc. I thought it was cool as hell but it seemed something so clearly good couldn't stay good but you all have only gotten better over time.
- It's a bad ad and the AI just makes it worse. The song doesn't rhyme well, the lyrics don't make much sense (they feel very forced), the things they portray are mostly unrealistic/exaggerated, and the cherry on top is that McDonald's is somehow a respite from the chaos of Christmas. I've never once in my life thought of McD as somewhere comforting to go. It's just a bad ad, period.
Also, no one wants a bad (probably also AI-generated) song about how terrible Christmas is. I'm not saying it's not terrible but no one wants a song about it.
- > What this really signals is the intention (which might be sincere or not) of getting some sort of OEM deal with some device manufacturer.
I assumed they were talking about their partnership with Jony Ive/IO and an internal hardware product, not an OEM deal with an outside manufacturer (not that they won't do that as well).
- I still remember the original announcement around LE and thinking "Great idea, no idea if they'll be able to get buy-in from browsers/etc". Now I use it on all my self-hosted sites and will probably be transitioning my employer over to it when we switch to automated renewal sometime next year.
LE has been an amazing resource, and every time I set up a new website and get an LE cert I smile. Especially after having lived through the pain that was SSL/TLS before LE.