Email: adrianduyzer@gmail.com
- This is a great idea, and insurance companies as the customer is brilliant. I could see this extending to prescribing as well. There are huge numbers of people who would benefit from more readily prescribed drugs like GLP-1s, and these have large potential to decrease chronic disease.
- Update: I haven’t heard from him about this since. That might count as a success story though.
- > On mobile this is a strong contender for the worst UX I've ever seen.
Pretty hyperbolic. First of all, this is a human and their work you’re talking about. A little respect goes a long way. Secondly, if this is the worst UX you’ve ever seen on mobile, I have to assume you’ve only been using the internet for the past week or so. This experience worked great for me on mobile Safari with no instructions required. You can’t say that for a lot of mobile UX including, I might add, Safari itself.
- > "It" being "that it's harder than it looks"?
Honestly, I'm not sure what to expect. There are clearly things he can't do (e.g. to make it work in prod, it needs to be in our environment, etc. etc.) but I wouldn't be at all surprised if he makes great headway. When he first asked me about it, I started typing out all the reasons it was a bad idea - and then I paused and thought, you know, I'm not here to put barriers in his path.
- There was no machine vision stuff in the app at that point. Claude suggested a couple of different ways of handling this and I went with the easiest way: piggybacking on the Apple Vision Framework (which means that this feature, as currently implemented, will only work on Macs - I'm actually not sure if I will attempt a Windows release of this app, and if I do, it won't be for a while).
Despite this being "easier" than some of the alternatives, it is nonetheless an API I have zero experience with, and the implementation was built with code that I would have no idea how to write, although once written, I can get the gist. Here is the "detectNodWithPitch" function as an example (that's how a "nod" is detected: the pitch of the face is determined, and then the change in pitch is what counts as a nod - which, of course, is not entirely straightforward).
```
- (void)detectNodWithPitch:(float)pitch {
    // Get sensitivity-adjusted threshold
    // At sensitivity 0: threshold = kMaxThreshold degrees (requires strong nod)
    // At sensitivity 1: threshold = kMaxThreshold - kThresholdRange degrees (very sensitive)
    float sens = _cppOwner->getSensitivity();
    float threshold = NodDetectionConstants::kMaxThreshold - (sens * NodDetectionConstants::kThresholdRange);

    // Debounce check
    NSTimeInterval now = [NSDate timeIntervalSinceReferenceDate];
    if (now - _lastNodTime < _debounceSeconds) return;

    // Initialize baseline if needed
    if (!_hasBaseline) {
        _baselinePitch = pitch;
        _hasBaseline = YES;
        return;
    }

    // Calculate delta: positive when head tilts down from baseline
    // (pitch increases when head tilts down, so delta = pitch - baseline)
    float delta = pitch - _baselinePitch;

    // Update nod progress for UI meter
    // Normalize against a fixed max (20 degrees) so the bar shows absolute head movement
    // This allows the threshold line to move with sensitivity
    constexpr float kMaxDisplayDelta = 20.0f;
    float progress = (delta > 0.0f) ? std::min(delta / kMaxDisplayDelta, 1.0f) : 0.0f;
    _cppOwner->setNodProgress(progress);

    if (!_nodStarted) {
        _cppOwner->setNodInProgress(false);
        // Check if nod is starting (head tilting down past nod start threshold)
        if (delta > threshold * NodDetectionConstants::kNodStartFactor) {
            _nodStarted = YES;
            _maxPitchDelta = delta;
            _cppOwner->setNodInProgress(true);
            DBG("HeadNodDetector: Nod started, delta=" << delta);
        } else {
            // Adapt baseline slowly when not nodding
            _baselinePitch = _baselinePitch * (1.0f - _baselineAdaptRate) + pitch * _baselineAdaptRate;
        }
    } else {
        // Track maximum delta during nod
        _maxPitchDelta = std::max(_maxPitchDelta, delta);

        // Check if head has returned (delta decreased below return threshold)
        if (delta < threshold * _returnFactor) {
            // Nod complete - check if it was strong enough
            if (_maxPitchDelta > threshold) {
                DBG("HeadNodDetector: Nod detected! maxDelta=" << _maxPitchDelta << " threshold=" << threshold);
                _lastNodTime = now;
                _cppOwner->handleNodDetected();
            } else {
                DBG("HeadNodDetector: Nod too weak, maxDelta=" << _maxPitchDelta << " < threshold=" << threshold);
            }

            // Reset nod state
            _nodStarted = NO;
            _maxPitchDelta = 0.0f;
            _baselinePitch = pitch; // Reset baseline to current position
            _cppOwner->setNodInProgress(false);
            _cppOwner->setNodProgress(0.0f);
        }
    }
}

@end
```
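The pitch value itself comes from the Vision side of the feature. Purely to illustrate what that piggybacking looks like, here is a simplified sketch (not the actual code Claude wrote for the plugin, and the function name is made up) of pulling a face's pitch out of a camera frame and handing it to the detector; note that VNFaceObservation only exposes pitch on macOS 12 and later:
```
// Illustrative sketch only - not the plugin's real capture pipeline.
// Runs a face-rectangles request on a camera frame and reports the face's
// pitch in degrees so it can be fed into detectNodWithPitch:.
#import <Vision/Vision.h>

static void ReportFacePitch(CVPixelBufferRef frame, void (^onPitch)(float pitchDegrees)) {
    VNDetectFaceRectanglesRequest *request = [[VNDetectFaceRectanglesRequest alloc] init];
    VNImageRequestHandler *handler =
        [[VNImageRequestHandler alloc] initWithCVPixelBuffer:frame options:@{}];

    NSError *error = nil;
    if (![handler performRequests:@[ request ] error:&error]) {
        return; // Vision failed on this frame; skip it
    }

    VNFaceObservation *face = request.results.firstObject;
    if (face == nil || face.pitch == nil) {
        return; // No face found, or pitch unavailable (requires macOS 12+)
    }

    // Vision reports pitch in radians; the detector above works in degrees.
    onPitch(face.pitch.floatValue * 180.0f / (float)M_PI);
}
```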
- The president of a company I work with is a youngish guy who has no technical skills, but is resourceful. He wanted updated analytics dashboards, but there’s no dev capacity for that right now. So he decided he was going to try his hand at building his own dashboard using Lovable, which is one of these AI app-building outfits. I sent him a copy of the dev database and a few markdown files with explanations of certain trickier elements of the data structure, and told him to give them to the AI; it will know what they mean. No updates yet, but I have every confidence he’ll figure it out.
Think about all the cycles this will save. The CEO codes his own dashboards. The OP has a point.
- Here's an example from this morning. At 10:00 am, a colleague created a ticket with an idea for the music plugin I'm working on: wouldn't it be cool if we could use nod detection (head tracking) to trigger recording? That way, musicians who use our app wouldn't need a foot switch (as a musician, you often have your hands occupied).
Yes, that would be cool. An hour later, I shipped a release build with that feature fully functional, including permissions handling and a calibration UI that shows whether your face is detected, lets you adjust sensitivity, and visually indicates when a nod is detected. Most of that work got done while I was in the shower. That is the second feature in this app that got built today.
This morning I also created and deployed a bug fix release for analytics on one platform, and a brand-new report (fairly easy to put together because it followed the pattern of other reports) for a different platform.
I also worked out, argued with random people on HN and walked to work. Not bad for five hours! Do I know how long it would have taken to, for example, integrate face detection and tracking into a C++ audio plugin without assistance from AI? Especially given that I have never done that before? No, I do not. I am bad at estimating. Would it have been longer than 30 minutes? I mean...probably?
- > They are probably wading through a bunch of stuff right now, but given the context you have given us, its probably not "scrum meetings"..
This made me laugh. Fair enough. ;)
In terms of the time estimations: if your point is that I don't have hard data to back up my assertions, you're absolutely correct. I was always terrible at estimating how long something would take. I'm still terrible at it. But I agree with the OP. I think the labour required is down 90%.
It does feel to me that we're getting into religious believer territory. There are those who have firsthand experience and are all-in (the believers), there are those who have firsthand experience and don't get it (the faithless), and there are those who haven't tried it (the atheists). It's hard to communicate across those divides, and each group's view of the others is essentially, "I don't understand you".
- Work that would have taken me 1-2 weeks to complete, I can now get done in 2-3 hours. That's not an exaggeration. I have another friend who is as all-in on this as me and he works in a company (I work for myself, as a solo contractor for clients), and he told me that he moved on to Q1 2026 projects because he'd completed all the work slated for 2025, weeks ahead of schedule. Meanwhile his colleagues are still wading through scrum meetings.
I realize that this all sounds kind of religious: you don't know what you're missing until you actually accept Jesus's love, or something along those lines. But you do have to kinda just go all-in to have this experience. I don't know what else to say about it.
- Currently three main projects. Two are Rails back-ends with React front-ends, so they are all Ruby, TypeScript, Tailwind, etc. The third is more recent: an audio plugin built using the JUCE framework, all C++. This is the one that has been blowing my mind the most, because I am an expert web developer, but the last time I wrote a line of C++ was 20 years ago, and I have zero DSP or math skills. What blows my mind is that it works great: it's thread-safe and performant.
In terms of workflow, I have a bunch of custom commands for tasks that I do frequently (e.g. "perform code review"), but I'm very much in the loop all the time. The whole "agent can code for hours at a time" thing is not something I personally believe. How involved I get depends on the task, however. Sometimes I'm happy to just let it do the work and then review afterwards. Other times, I will watch it code and interrupt it if I am unhappy with the direction. So yes, I am constantly stepping in manually. This is what I meant about "mind meld". The agent is not doing the work, I am not doing the work, WE are doing the work.
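To make "custom commands" concrete: in Claude Code these are just markdown prompt files dropped into .claude/commands/, which then show up as slash commands. A stripped-down, hypothetical version of a "perform code review" command might look like this (not my actual command, which is longer and project-specific):
```
# .claude/commands/code-review.md (hypothetical example)
Perform a code review of the changes on the current branch relative to main.

Focus on:
- Thread safety and real-time audio constraints (no allocations or locks on the audio thread)
- Error handling and edge cases
- Consistency with the existing architecture and naming conventions

Summarize findings by severity and suggest concrete fixes. Additional focus: $ARGUMENTS
```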
- > Like, is there truly an agentic way to go 10x or is there some catch?
Yes. I think it’s practice. I know this sounds ridiculous, but I feel like I have reached a kind of mind meld state with my AI tooling, specifically Claude Code. I am not really consciously aware of having learned anything related to these processes, but I have been all in on this since ChatGPT, and I honestly think my brain has been rewired in a way that I don’t truly perceive except in terms of the rate of software production.
There was a period of several months a while ago where I felt exhausted all the time. I was getting a lot done, but there was something about the experience that was incredibly draining. Now I am past that and I have gone to this new plateau of ridiculous productivity, and a kind of addictive joy in the work. A marvellous pleasure at the orchestration of complex tasks and seeing the results play out. It’s pure magic.
Yes, I know this sounds ridiculous and over-the-top. But I haven’t had this much fun writing software since my 20s.
- I’m super cautious with these messages, like I’m sure we all are, but on Monday I ordered a printer from Amazon. They said it would arrive on Wednesday. On Wednesday I was working from home and I got a text from “Purolator” saying they’d tried to deliver my package and failed. Shit! I’d been listening to beats too loud to hear the knock on the door! I ran outside to see if the delivery guy was still on my street. No one was around…and then I realized, damn, they got me (to dash outside, anyway).
These things can fail 99.99% of the time but when they land on someone at just the right moment, it’s so easy to just go on autopilot and do the dumb thing.
- My point about my experience with this plugin isn’t that it’s a throwaway or meaningless project. My point is that it might be enough in some cases to verify output without verifying code. Another example: I had to import tens of thousands of records of relational data. I got AI to write the code for the import. All I verified was that the data was imported correctly. I didn’t even look at the code.
- I’m starting to believe there are situations where human code review is genuinely not necessary. Here’s a concrete example of something that’s been blowing my mind. I have 25 years of professional coding experience, but it’s almost all web, with a few years of iOS in the Objective-C era. I’m also an amateur electronic musician. A couple of weeks ago I was thinking about this plugin that I used to love until the company that made it went under. I’ve long considered trying to make a replacement, but I don’t know the first thing about DSP or C++.
You know where this is going. I asked Claude if audio plugins were well represented in its training data, it said yes, off I went. I can’t review the code because I lack the expertise. It’s all C++ with a lot of math and the only math I’ve needed since college is addition and calculating percentages. However, I can have intelligent discussions about design and architecture and music UX. That’s been enough to get me a functional plugin that already does more in some respects than the original. I am (we are?) making it steadily more performant. It has only crashed twice and each time I just pasted the dump into Claude and it fixed the root cause.
Long story short: if you can verify the outcome, do you need to review the code? It helps that no one dies or gets underpaid if my audio plugin crashes. But still, you can’t tell me this isn’t remarkable. I think it’s clear there will be a massive proliferation of niche software.
- These stories never fail to astonish me. Why the same deity? It’s so interesting.
The fact that the mind is able to create these powerful visions and patterns and other realities is really incredible. We have this machinery for perceiving the world and moving through it, but that machinery is capable of so many other insane and beautiful and terrifying things - capabilities which are inaccessible except in rare instances.
It’s really quite remarkable. Underneath our prosaic experience of consciousness is something that can generate infinite fractals, awe-inspiring visions of otherworldly creatures, dream landscapes of colour and shape. Why? Where does it all come from? Is this what life would be like all the time without us filtering the information coming into our senses?
- I can’t speak to his true motives, but there are ethical reasons to oppose open weights. Hinton is an example of a non-conflicted advocate for that. If you believe AI is a powerful dual-use technology like nuclear, open weights are a major risk.
- Don't throw away what's working for you just because some other company (temporarily) leapfrogs Anthropic a few percent on a benchmark. There's a lot to be said for what you're good at.
I also really want Anthropic to succeed because they are without question the most ethical of the frontier AI labs.
- My home town of Hamilton, Ontario (population 560k) recently made the news because a guy stole a bus, with passengers onboard, and started driving it through the city. It was newsworthy because he also dropped people off at their stops, and even rejected someone who tried to board with an expired bus pass. But what stood out for me in addition to all that was the police response. They quietly followed the bus, intentionally not using sirens to avoid “spooking” the guy. They waited for the right moment, boarded the bus and arrested him peacefully and without incident.
I recognize my little city is not like LA (which I’ve visited twice) - the types of crimes, the types of criminals and the prevalence of weapons are far different, although we also have our share of gun violence and murder. But we have also not militarized our police, and there’s very much a police culture of service to the community. Here, when a cop uses their weapon, it’s seen as a failure. This was a situation handled properly, and it made me proud.
- > Maybe they were new, or maybe they hadn't slept much because of a newborn baby
Reminds me of House of Dynamite, the movie about nuclear apocalypse that really revolves around these very human factors. This outage is a perfect example of why relying on anything humans have built is risky, which includes the entire nuclear apparatus. “I don’t understand why X wasn’t built in such a way that wouldn’t mean we live in an underground bunker now” is the sentence that comes to mind.