- My main point in the last comment is that inserting himself into their lives at all is disrespectful. He doesn't need a word for them because he has no relationship to them: he was an anonymous donor to enable their actual parents to have kids that they wouldn't have otherwise been able to have.
- That depends a lot on your definition of "children" and "father".
Many people with uninvolved biological fathers would disagree with you that the guy who impregnated their mother counted as their father, especially if they were raised by another man who actually did stick around, and that's true even of dads who impregnated the mother directly. Sperm donation takes this even further: he claims to have done it anonymously [0], meaning he was as uninvolved as he possibly could have been in the process of becoming a father.
Many or most of these kids have real men who were actually there helping to raise them through their childhood who they refer to as "father", and it's pretty disrespectful of Durov towards those men to attempt to usurp that title on the grounds of what was supposed to be an anonymous donation.
I'll grant that Durov is more likely than most sperm donors to have some of these kids actually claim him as their father, but that's in no small part because there's now a substantial amount of money tied to them identifying him as such. Cynically, I wonder if that's a major motivator for him doing this, because he knows that the kids wouldn't otherwise know or care who he is.
- Whether an empty gun and a magazine counts as a loaded gun varies state-by-state, so the distinction is not as clear-cut as you make it sound. New York State penal code defines a loaded gun as follows:
> 15. "Loaded firearm" means any firearm loaded with ammunition or any firearm which is possessed by one who, at the same time, possesses a quantity of ammunition which may be used to discharge such firearm.
So I guess I'm using the New York definition of an always-on camera.
- I responded to that above. If the mic is always on and controls the camera (both of which are demonstrated in the promo video), any reasonable approach to infosec needs to treat the camera as always on as well.
- Right, I'm not claiming Glass was good, but it at least attempted to use the glasses form factor for something.
- In general I agree, but The Verge in particular tends to just say exactly what the press release says with less detail. If we're going to do a non-press-release source it should be because they're offering context and information that the company would not willingly choose to provide themselves.
- > “They are all my children and will all have the same rights! I don’t want them to tear each other apart after my death,” he said, after revealing that he recently wrote his will.
I agree that it's pretty cringe to refer to all of them as his children when he's literally a sperm donor, but he definitely did call them that.
- If you can ask "Hey Meta, ..." while holding a golf club and unable to touch a button (which the promo video [0] shows you can) then the mic is always on. It may not always be beaming data to Meta, but that's a matter of trust, which I don't have much of for Meta given their history.
The camera may or may not be always on, but it can be turned on by software activated by the always-on mic (again, demonstrated by the promo video), so it would be best to treat it as though it is.
[0] https://about.fb.com/news/2025/06/introducing-oakley-meta-gl...
- Right before LLMs broke into the scene we had a few techniques I was aware of:
* Personality Forge uses a rules-based scripting approach [0]. This is basically ELIZA extended to take advantage of modern processing power.
* Rasa [1] used traditional NLP/NLU techniques and small-model ML to match intents and parse user requests. This is the same kind of tooling that Google/Alexa historically used, just without the voice layer and with more effort to keep the context in mind.
Rasa is actually open source [2], so you can poke around the internals to see how it's implemented. It doesn't look like it's changed architecture substantially since the pre-LLM days. Rhasspy [3] (also open source) uses similar techniques but in the voice assistant space rather than as a full chatbot.
[0] https://www.personalityforge.com/developers/how-to-build-cha...
[1] https://web.archive.org/web/20200104080459/https://rasa.com/ (old link because Rasa's marketing today is ambiguous about whether they're adding LLMs now).
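To make the rules-based approach concrete, here's a minimal ELIZA-style engine in Python. This is an illustrative sketch of the general technique, not Personality Forge's actual scripting syntax; the `RULES` table and `respond` function are my own invention:

```python
import random
import re

# Ordered (pattern, responses) pairs, checked top to bottom.
# Capture groups get substituted back into the chosen response,
# which is the core trick ELIZA-style engines rely on.
RULES = [
    (re.compile(r"\bI need (.+)", re.I),
     ["Why do you need {0}?", "Would getting {0} really help you?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r".*"),  # catch-all fallback
     ["Tell me more.", "I see. Please go on."]),
]

def respond(text: str) -> str:
    for pattern, responses in RULES:
        match = pattern.search(text)
        if match:
            reply = random.choice(responses)
            return reply.format(*match.groups())
    return "Please go on."
```

Modern rules engines layer a lot on top of this (memory, topic stacks, weighted rules), but the match-and-substitute loop is still the skeleton.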
- Somehow we've actually managed to regress from 2013's Google Glass.
Always-on microphone and camera sold by one of the world's sketchiest privacy invaders? Check.
Display that actually takes advantage of the glasses form factor? Nope. Sounds like this could just as easily be the Humane pin.
- I've been totally breaking Linux installs trying to get Nvidia to work for 15 years now, and that's on X11. On the other hand I recently did the first OS upgrade that I've ever done successfully without breaking Nvidia and that was running Wayland.
Nvidia is just really really bad on Linux in general, so it's always a coin toss if you'll be able to boot your system after messing with their drivers, regardless of display server.
- The [flagged] indicator on a submission usually indicates user flagging. Moderators and algorithms just quietly downweight submissions without any visible indicator. So this isn't an HN moderator position; the question to resolve is why users would flag it.
In this case, I'd have flagged them too if I saw them. The "long live" post is an aggressive tirade that reflects poorly on the author and led to a poor-quality discussion. The second is a link to a git commit history, which is weird in its own right and comes with no explanation; the context provided in the comments shows that a generally dislikable figure with extreme political views is now leading a fork of X11 that has yet to prove itself viable. So I'd probably have flagged that one too as pointless drama until proven otherwise.
- Given that we're apparently discussing an entire k8s 2.0 based on HCL that hardly seems like a barrier. You'd have needed to write the HCL tooling to get the 2.0 working anyway.
- > And if you restrict it, they'll just fork your code and overwrite whatever they need.
More power to them. They can take responsibility for that code and maintain it and I don't have to worry about breaking them when I release a new version. Everyone's happy.
- What you're missing is the audience.
This talk is different from his others because it's directed at aspiring startup founders. It's about how we conceptualize the place of an LLM in a new business. It's designed to provide a series of analogies, any one of which may or may not help a given startup founder break out of the tired, binary talking points they've absorbed from the internet ("AI all the things" vs "AI is terrible") in favor of a more nuanced perspective of the role of AI in their plans. It's soft and squishy rhetoric because it's not about engineering, it's about business and strategy.
I honestly left impressed that Karpathy has the dynamic range necessary to speak to both engineers and business people, but it also makes sense that a lot of engineers would come out of this very confused at what he's on about.
- Yep, it's obvious that a lot of people are interested because junk articles like this usually get penalized harder. That interest is what this piece is preying on. Unfortunately it's easier to write a piece that tells people what they want to hear and spreads FUD than it is to write a piece that corrects the misinformation.
Much more rewarding too, because "we really don't know very much about this yet" is hard to expand to a full click-worthy essay and less likely to move product.
- Just curious: Are you speaking as an insider or just guessing about the test coverage?
- Did you actually read the study? I assumed you did, and so I read every word so I could engage with you on it, but it really feels like you skimmed it looking for it to prove what you thought it would prove. It's not even all that long, and it's worth reading in full to understand what they're saying.
I started to write out another comment but it ended up just being a repeat of what I wrote above. Since we're going in circles I think I'm going to leave it here. Read the study, or at least read the extracts that I put above. They don't really leave room for ambiguity.
- You can't say "that is just the motivation", because the motivation is what dictated the terms of the experiment. I read the whole study: the contrast between the two types of displays permeates the whole thing.
They repeatedly say that the goal is to measure the effect of flickering in these non-traditional displays, and just as often say that for displays that don't do the display trickery they're concerned about, the traditional measurement methods are sufficient.
You're correct that the study demonstrates that the human eye can identify flickering at high frame rates under certain conditions, but it also explicitly shows that under normal conditions of one frame after another, with blank frames in between for PWM dimming, the flickering is unnoticeable above 65 Hz. They go out of their way to establish that before proceeding with the test of the more complicated display, which they say was meant to emulate something like a 3D display or similar.
So... yes. Potentially other situations could trigger the same visibility (I'd be very concerned about VR glasses after reading this), but that's a presumption, not something demonstrated by the study. The study as performed explicitly shows that regular PWM is not perceptible as flicker above the traditionally established range of frame rates, and the authors repeatedly say that the traditional measurement methods are entirely "appropriate" for traditional displays that render plain-image frames in sequence.
EDIT: Just to put this quote down again, because it makes the authors' point abundantly clear:
> The light output of modern displays may at no point of time actually resemble a natural scene. Instead, the codes rely on the fact that at a high enough frame rate human perception integrates the incoming light, such that an image and its negative in rapid succession are perceived as a grey field. This paper explores these new coded displays, as opposed to the traditional sort which show only a sequence of nearly identical images.
They explicitly call out that the paper does not apply to traditional displays that show a sequence of nearly identical images.
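The "integrates to grey" claim in that quote is just temporal averaging. Here's a toy sketch, assuming perception linearly averages luminance above the flicker-fusion rate (a simplification of the paper's actual model), contrasting a coded frame/negative pair with plain PWM dimming:

```python
# Toy model: above the fusion rate, treat perceived luminance as the
# time-average of the emitted frames (assumed linear integration).
frame = [0.9, 0.1, 0.5, 0.3]          # per-pixel luminance of one coded frame
negative = [1.0 - v for v in frame]   # its photographic negative

# A frame averaged with its negative collapses to a uniform grey field,
# which is how coded displays hide their structure from the eye.
perceived = [(a + b) / 2 for a, b in zip(frame, negative)]
assert all(abs(v - 0.5) < 1e-9 for v in perceived)

# PWM dimming of a static image, by contrast, just scales every pixel's
# brightness by the duty cycle; the image content is unchanged.
duty_cycle = 0.75                     # fraction of each period the panel is lit
pwm_perceived = [v * duty_cycle for v in frame]
```

That's the distinction the authors draw: PWM alternates an image with darkness and averages back to the same image, dimmer, while coded displays alternate structurally different images that only average out to something sensible.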
I agree that the primary issue is that it's a software-controlled microphone with no off switch controlled by software written by Meta. I only emphasized the wake word listening in response to OP's claim that it's not always on when it must be.