- As an X11 holdout, my time seems nigh.
- Worth mentioning that Bubblewrap[1] (bwrap) can remove most npm/node attack vectors or, at the very least, limit the damage from running arbitrary code during install/execution. Far from a silver bullet, and you'll want to combine it with a simple wrapper script to avoid dinking around with all its arguments, but it beats dealing with rootless Podman containers.
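For the curious, here's a minimal sketch of the kind of wrapper script I mean, written in Python rather than shell for readability. The flag names are standard bwrap options, but the mount policy is just one illustrative choice (network on, only the project directory writable, a throwaway HOME for npm's cache), not a vetted profile.

```python
#!/usr/bin/env python3
# Illustrative bwrap wrapper for running npm (or any command) in a throwaway
# sandbox: read-only system dirs, network enabled, only $PWD writable.
import os
import sys

def main() -> None:
    project = os.getcwd()
    cmd = sys.argv[1:] or ["npm", "install"]
    args = [
        "bwrap",
        "--unshare-all",                    # drop every namespace bwrap supports...
        "--share-net",                      # ...except the network, which npm needs
        "--die-with-parent",
        "--ro-bind", "/usr", "/usr",        # read-only system
        "--symlink", "usr/bin", "/bin",
        "--symlink", "usr/lib", "/lib",
        "--symlink", "usr/lib64", "/lib64",
        "--ro-bind", "/etc/resolv.conf", "/etc/resolv.conf",
        "--ro-bind", "/etc/ssl", "/etc/ssl",
        "--proc", "/proc",
        "--dev", "/dev",
        "--tmpfs", "/tmp",
        "--tmpfs", "/home",                 # throwaway HOME so ~/.npm can't leak
        "--dir", "/home/sandbox",
        "--setenv", "HOME", "/home/sandbox",
        "--bind", project, project,         # the only writable host path
        "--chdir", project,
    ] + cmd
    os.execvp("bwrap", args)

if __name__ == "__main__":
    main()
```

Stick it in ~/bin, call it something like npm-jail, and the arguments stop being a chore.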
- I always thought a screenshot of code was just an iPeople flex. Like, "look at my code, framed in this glassy macOS window with a $29.99 drop shadow." Kinda like how Nix or Arch users can't resist mentioning they use Nix or Arch. (btw i use arch)
- At long last, a personal robot that can shatter my Waterford glassware and then clean up all the pieces! Sure, it moves like something out of a nightmare, but it makes sense that much of its marketing is aimed at seniors. It's a large and growing market, and the need is undeniable, given that few can afford a full-time caregiver. And from that perspective, the $20,000 price tag almost feels like a steal.
A human-knitted marvel that does it all, from telling cayenne apart from paprika to cleaning your toilet... well, maybe. From what I can tell, it can flush but not wipe, so you'll still want to budget for a bidet.
Technically, it makes Level-5 autonomy look straightforward. At least roads have rules and standards; household bathrooms, not so much. But let's gloss over that, because I want to know more about the legal agreement you'll have to sign. IANAL, but I expect something akin to a carpet-bombing of blanket disclaimers: no liability for direct, indirect, incidental, consequential, punitive, or other damages—including injuries or loss of life—or really anything else that could go wrong, such as losing your mail, opening your door to assist in a robbery, setting your house on fire, flooding it, or sending your banking information to a Nigerian prince. Too bad iRobot never got around to explaining the legal side of things, but there's always hope for iRobot 2.
- The UX only sucks if you're unwilling to put in a minimal amount of time and effort. After that, it has no equal; it is, by definition, the opposite of vanity.
- Not only do you get to deploy your app to 700M users; you also get to provide responsive support for every single one of them!
Per the docs: 'Every app comes from a verified developer who stands behind their work and provides responsive support'
That's thinly veiled corporate-speak for 'Fortune 500 or GTFO.'
- Regarding point 1: when you say "a few minutes," are we talking about the same thing? I spent two solid months with Claude Max, before they imposed limits, running multiple Opus agents, and never once got anywhere close to "weeks of work" from a single prompt. Not even in the same zip code.
So I'm genuinely asking: could you pretty please share one of these prompts? Just one. I'd honestly prefer discovering I can't prompt my way out of a paper bag rather than facing the reality that I wasted a bucketload of time and money chasing such claims.
- Is it too much to ask for an AI that says "you're absolutely wrong," followed by a Stack Overflow-style shakedown?
- If a giant red warning saying 'THIS APP MAY BE MALWARE' doesn't stop someone, then it's either an informed choice to proceed or willful negligence. In other words, users aren't 'trained' to ignore warnings; they're simply being willfully negligent.
- It's such a simple and effective solution that it could be implemented overnight and 'help to cut down on bad actors who hide their identity to distribute malware, commit financial fraud, or steal users' personal data' by tomorrow. Mission accomplished, internet saved, and everyone's happy, just like a fairy tale out of the early 2000s.
- Maybe it's just me, but…
> "The attack successfully guided the new model to produce a step-by-step manual for creating a Molotov cocktail"
hardly qualifies as Bond-villain material.
- Hold the phone. So, Google, with its legions of summa cum laude engineers, can't make this stack work well, but your AI agent is nailing it into next week? Seriously, show me the way, so I too may find AI enlightenment.
- An all-time favorite quip from Emo Philips on How God Works[1]
- If anything, that's an understatement.
- I went down the telemetry rabbit hole a while back because I couldn't find an analytics/event-logging tool that met my needs. Nothing fancy: just a basic API to log certain events, plus some pretty graphs to display the data. I ended up building my own solution, though telemetry.sh looks like it could have fit the bill; the information on the site is just too sparse to tell. Is this an interface for DataFusion, something akin to InfluxDB? It looks like a nice project, but some additional documentation (like what you've detailed here) and a few use-case examples on the site would go a long way.
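For reference, here's a minimal sketch of the kind of "nothing fancy" API I had in mind; the endpoint, auth header, and payload shape are hypothetical placeholders, not telemetry.sh's actual interface.

```python
# Hypothetical event-logging client; the endpoint, token, and payload shape
# are placeholders, not telemetry.sh's real API.
import time
import requests  # pip install requests

def log_event(name: str, **props) -> None:
    """Fire-and-forget: POST one timestamped event with arbitrary properties."""
    try:
        requests.post(
            "https://example.invalid/api/events",        # placeholder endpoint
            headers={"Authorization": "Bearer <token>"},  # placeholder auth
            json={"event": name, "ts": time.time(), "props": props},
            timeout=2,
        )
    except requests.RequestException:
        pass  # telemetry should never take the app down with it

log_event("signup", plan="free", referrer="hn")
```

That's the write path; the "pretty graphs" half is the part I never found a good off-the-shelf answer for, hence the question above.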
- LLMs are undeniably remarkable, but the wheels start to come off once you go beyond the basics and a few party tricks. The cynical part of me thinks this tracks pretty well with our society's tendency to favor appearance over substance. Yet if a human messes up my burger order, I don't fault the restaurant; if an LLM messes up my order, the same can't be said. As a seasoned human, I feel confident saying that we humans love blaming other humans, whether it's justified or not.
So while I initially thought customer-facing roles would be front and center in the "AI revolution," these days I tend to think they'll bring up the rear, with entertainment/smut applications at the forefront, along with a few unexpected uses where LLMs operate behind the scenes.
- I'll throw my hat behind this horse because, honestly, if I were just learning to code, I would probably have quit by now given the frequency of these kinds of comments. LLMs have certainly improved at an impressive rate, and they're fairly decent at producing 'Lego' blocks. But when it comes to actually building with those blocks (the real meat and potatoes of programming), they're bad butchers at best. Building functional systems is hard, and it's simply something LLMs can't do now, and perhaps never will. Or I just don't know how to prompt. 50/50.
- Personally, my primary concern is security and, by extension, trust. My shell environment functions as the gatekeeper to my castle, and installing this binary would be akin to blindly handing over the keys, especially since the source code is not accessible. I'm unsure if it's feasible given Hacksh's requirements, but using Flatpak could largely address your distribution issue as well as my security issue.
- Here's my 2c. It's unlikely that many users, here or elsewhere, would be comfortable downloading and executing this Hacksh binary from your Dropbox, regardless of its benefits.