Smartphones were a step back in a lot of ways. Typing is slower. No mouse. Fingers are fat and imprecise. The result is that most applications were severely dumbed down to work on a smartphone.
The trade-off was portability. Everyone can carry a smartphone, so it's okay that the human interaction is worse in a lot of ways. Then, when we need that richer interaction, we can reach for a laptop.
The problem with smart glasses is that they go even a step further in how poor the interaction is. Speech is perhaps the worst interface for computers. Yes, it's neat and shows up in sci-fi all the time. But if you think about it, it's a very bad interface: it's slow, it's imprecise, it's wishy-washy, it's context-dependent. Imagine, for example, trying to navigate your emails by speech only. Disaster.
Smart glasses, however, are not meaningfully more portable than phones, and everyone already has a phone. So what do we gain from smart glasses? IMO, not very much. Smart glasses may become popular, but will they replace the smartphone? In my opinion, fat chance.
What I think is more likely, actually, is smartphones replacing smart glasses. They already have cameras, so the capabilities are about the same, except smartphones can do WAY more. For most people, I imagine, the occasional "look at this thing and tell me about it" use case can be satisfied by a smartphone.
Good point, and it could be argued the user soon followed that dumbification, with the youngest generations not even understanding the file/folder analogy.
I think we can go dumber! Why have an analogy at all? It will all be there, up in your face, and you can just talk to it!
There are also touchpads on the side of the smart glasses as another input option. And I could imagine some people liking little trackball-esque handheld controllers (like the one from the Black Mirror episode "The Entire History of You").
And there are also air gestures, using cameras on the smart glasses to watch what your hands are doing.
I don't think any of these has the raw data input bandwidth that a keyboard has, and for a lot of use cases even a touchscreen could be better. But maybe that can be made up for by the hands-free, augmented-reality features of smart glasses.
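For a rough sense of scale, here's a back-of-envelope sketch of "input bandwidth" as events per second times the log of how many distinct symbols each event can pick from. Every number in it is an assumption, just ballpark figures for illustration:

    # Back-of-envelope "input bandwidth": events/sec * log2(choices per event).
    # All figures below are rough assumptions, not measurements.
    import math

    def bits_per_sec(events_per_sec, distinct_symbols):
        # Each input event selects one of N distinct symbols.
        return events_per_sec * math.log2(distinct_symbols)

    # Keyboard: ~6 keystrokes/sec, each one of ~400 key+modifier combinations.
    keyboard = bits_per_sec(6, 400)
    # Speech: ~2.5 words/sec (150 WPM), each from a ~10k-word active vocabulary.
    speech = bits_per_sec(2.5, 10_000)
    # Touchscreen: ~3 taps/sec over ~50 on-screen keys.
    touch = bits_per_sec(3, 50)

    print(f"keyboard ~{keyboard:.0f} b/s, speech ~{speech:.0f} b/s, touch ~{touch:.0f} b/s")
    # -> keyboard ~52 b/s, speech ~33 b/s, touch ~17 b/s

Crude as it is, it captures why the keyboard is hard to beat: not raw symbol rate alone, but the huge space of discrete, unambiguous commands each press can select from.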
I was among the nerds who swore I'd never use a touch keyboard, and I refused to buy a smartphone without a physical keyboard until 2011. Yes, typing on a screen was awful at first. But then text prediction and haptics got better, and we invented swipe keyboards. Today I'm nearly as fast and comfortable on a touch keyboard as I am on a physical one on a "real" computer.
My point is that input devices get better. We know when something can be improved, and we invent better ways of interacting with a computer.
If you think we can't improve voice input to the point where it feels quicker, more natural, and more comfortable to use than a keyboard, you're mistaken. We're still in the very early stages of this wave of XR devices.
In the past couple of years alone, text-to-speech and speech recognition systems have improved drastically. Today it's possible to hold a nearly natural-sounding conversation with an AI. Where do you think we'll be 10 years from now?
> Imagine, for example, trying to navigate your emails by speech only. Disaster.
That's because you're imagining navigating a list on a traditional 2D display with voice input. Why wouldn't we adapt our GUIs to work better with voice, or other types of input?
Many XR devices support eye tracking. This works well for navigation _today_ (see some visionOS demos). Where do you think we'll be 10 years from now?
So I think you're, understandably, holding traditional devices in high regard, and underestimating the possibilities of a new paradigm of computing. It's practically inevitable that XR devices will become the standard computing platform in the near future, even if it seems unlikely today.
AR will always be somewhat awkward until you can physically touch and interact with material things. It's useful, sure, but not a replacement.
Haptic feedback is probably my favorite iPhone user experience improvement on both the hardware and software side.
However, I will never be able to type faster than on my keyboard, and even with the most advanced voice inputs, I will always be able to type longer and with less fatigue than if I were to use my voice—having ten fingers and one set of vocal cords.
All options are going to be valid and useful for a very long time.
There's nothing tactile about a glass pane. It's simply a medium through which we access digital objects, and a very clunky one at that. Yet we got used to it in a very short amount of time.
If anything, XR devices have the possibility to offer a much more natural tactile experience. visionOS is already touch-driven, and there are glove-like devices today that provide more immersive haptics. Being able to feel the roughness or elasticity of a material, that kind of thing. It's obviously ridiculous to think that everyone will enjoy wearing a glove all day, but this technology can only improve.
This won't be a replacement for physical objects, of course. It will always be a simulation. But the one we can get via spatial computing will be much more engaging and intuitive than anything we've used so far.
> I will never be able to type faster than on my keyboard, and even with the most advanced voice inputs, I will always be able to type longer and with less fatigue than if I were to use my voice—having ten fingers and one set of vocal cords.
Sure, me neither—_today_. But this argument ignores the improvements we can make to XR interfaces.
It won't just be about voice input. It will also involve touch input, eye tracking, maybe even motion tracking.
A physical board with keys you press to produce single characters at a time is a very primitive way of inputting data into a machine.
Today we have virtual keyboards in environments like visionOS, which I'm sure are clunky and slow to use. But what if we invent an accurate way of translating the motion of each finger into a press of a virtual key? That seems like an obvious first step. Suddenly you're no longer constrained by a physical board, and can "type" with your hands in any position. What if we take this further and can translate patterns of finger positions into key chords, in a kind of virtual stenotype? What if we also involve eye, motion and voice inputs into this?
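To make the chord idea concrete, here's a minimal sketch, assuming a hand tracker that reports which fingers are "down" in each frame. The finger names and the chord-to-character table are entirely made up for illustration:

    # Hypothetical chord-style text entry from hand-tracking frames.
    # The chord table below is invented for illustration; a real system
    # would tune (or learn) these mappings.
    CHORDS = {
        frozenset({"L_index"}): "e",
        frozenset({"L_middle"}): "t",
        frozenset({"L_index", "L_middle"}): "a",
        frozenset({"R_index"}): "o",
        frozenset({"L_index", "R_index"}): "n",
        frozenset({"R_thumb", "R_index"}): " ",
    }

    def decode(frames):
        """Turn a stream of per-frame finger sets into text.

        A chord is emitted when all fingers are released; the chord is
        the full combination held since the previous release."""
        text, held = [], set()
        for fingers in frames + [set()]:   # trailing empty frame flushes the last chord
            if fingers:
                held |= fingers            # accumulate fingers into the current chord
            elif held:
                text.append(CHORDS.get(frozenset(held), "?"))
                held = set()
        return "".join(text)

    # e.g. press left index, add left middle, release, press right index, release:
    print(decode([{"L_index"}, {"L_index", "L_middle"}, set(), {"R_index"}, set()]))
    # -> "ao"

The same skeleton extends naturally to the other inputs mentioned above: gaze could pick the target field while chords supply the characters.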
These are solvable problems we will address over time. Thinking that just because they're not solved today they never will be is very shortsighted.
Being able to track physical input from several sources in 3D space provides a far richer environment to invent friendly and intuitive interfaces than a 2D glass pane ever could. In that sense, our computing is severely constrained by the current generation of devices.
> It's practically inevitable that XR devices will become the standard computing platform in the near future
Yeah I mean I just really doubt it. I'm not seeing a whole lot of benefit over smartphones, which are already ubiquitous. At best, I'm hearing that it won't suck that much. Which... okay not really high praise.
I'm sure, like the smartphone, it will replace SOME use cases. The difference is that the use cases the smartphone replaced were really important ones that cover 80% of the common stuff people do. So now everyone has a smartphone.
Will that be the case with XR? I doubt it. The use cases it will cover will be, at absolute best, incremental compared to the smartphone. And, I presume, the smartphone will cover those use cases too. Which is why I think it's more likely that smartphones swallow this glasses thingy than the other way around.
I'm not trying to convince anyone. Believe what you want to believe :)
> But I am saying that, as a programmer, if you told me I had to only use an iPhone at work I'd probably set myself on fire.
Sure, me too. But that's a software and ergonomics problem. There's no way you will ever be as productive on a 6" display, tapping on a glass pane, as you would on one or more much larger displays, with a more comfortable physical keyboard with far richer haptics. Not to mention the crippled software environment of iOS.
But like I mentioned in other threads, it would be shortsighted to think that interfaces of XR devices will not be drastically better in the future. Everyone keeps focusing on how voice input is bad, ignoring that touch, eye and motion tracking in a 3D environment can deliver far richer interfaces than 2D displays ever did. Plus voice input will only get better, as it has greatly improved over the last 2 years alone.
> I'm not seeing a whole lot of benefit over smartphones, which are already ubiquitous. At best, I'm hearing that it won't suck that much. Which... okay not really high praise.
Have you seen the user avatars in visionOS 26? Go watch some demos if you haven't.
Being able to have a conversation with someone who feels like they're physically next to you is _revolutionary_. That use case alone will drive adoption of XR devices more than anything else. Video conferences on 2D displays from crappy webcams feel primitive in comparison. And that is _today_. What will that experience be like in 10 years?
I'm frankly surprised that a community of tech nerds can be so dismissive of a technology that offers more immersive digital experiences. I'm pretty sure that most people here own "battlestations" with 2+ screens. Yet they can't imagine what the experience of an infinite number of screens in a 3D environment could be like? Forget the fact that today's generation of XR displays are blurry, have limited FoV, or anything else. Those are minor limitations of today's tech that will improve over time. I'm 100% sure that once all of those issues are ironed out, this community will be the first to adopt XR for "increased productivity". Hell, current gen devices are almost there, and some are already adopting them for productivity work.
So those are just two examples. Once the tech is fully mature, and someone creates a device that brings all these experiences together in a comfortable and accessible package, it will be an iPhone-like event where the market will explode. I suspect we're less than a decade away from that event.
When somebody finally gets a clue and implements that, no typist on Earth will be able to keep up with it.
That's because the communication is going from a person to a person and both are very highly tuned to not only hear the words, but the tone, context, subtext, and undertones. There can be all kinds of information packed in a few words that have nothing to do with the words.
Machines, even LLMs, can't do this. I don't think they ever will. So typing and shortcut commands and the like are far more efficient for interacting with a computer.
Laptops, of course, have the much bigger screen and keyboard, not really replicated by smartphones. They have use cases that smartphones can't cover well for hardware reasons. So they've stuck around (in a notably diminished form).
If good AR glasses become a thing… I dunno, they could easily replace monitors generally, right? Then a laptop just becomes a keyboard. That’s a hardware function that seems necessary.
What niche is left for the smartphone?
I believe that was the entire point of the comparison. Smartphones replaced SOME use cases of laptops, in the same way ubiquitous smart glasses could replace SOME use cases of smartphones.
If you are afraid of technology, Android or iPadOS is light-years ahead of Windows or macOS.
It's more than enough to handle paying bills, applying for jobs, etc. Hell, with a Bluetooth keyboard, a bit of grit, and GitHub Codespaces, you can even develop applications.
You can also cast your screen to a TV or, on a handful of phones, use USB-C to HDMI.
It is hard to say when the peak of laptops in circulation was, right? Because simultaneously the tech has been maturing (longer product lifetimes) and smartphones have taken some laptop niches.
I’m not even clear on what we’re measuring when we say “replace.” Every non-technical person I know has a laptop, but uses it on maybe a weekly basis (instead of daily, for smartphones).
BTW, I have to consciously turn off my cybersecurity mindset when thinking about smart glasses. It's hard not to see all the new attack vectors they introduce.
I wear my Ray-Ban Metas a lot (bought in 2023) and love them, but I can't take selfies with them. I have to pull out my phone. They are complementary to the phone, though I do enjoy being able to take pics and vids and ask for the time without having my phone on me (add 5G and they'll do more, like stream music).
Whatever OpenAI is working on to replace the iPhone, it will need to be able to take selfies! I'm betting it's just an AI phone with the experience of the movie Her, where almost everything is done from the lock screen, and it takes the best selfies of you (gets you to the best lighting) and everything under the sun.
Me (an old millennial), I can't even conceive of getting any real work done just on a smartphone. But I'm a power user: I need to log onto Linux servers and administer them, or crack open Excel files and work in spreadsheets. Not an ordinary user.
You only really need one for doing certain types of work.
Laptops and tablets replaced desktops. Nobody sits down in an office and does work on a smartphone.
Smartphones replaced phones, pagers, music players and cameras.
10 years ago all my non-tech friends and family had laptops. Now they all use their smartphones as primary computing devices. My nephew who just graduated from high school and works in IT doesn’t even own a personal laptop.
Also, mini PCs are a new trend nowadays, so I wouldn't say that this is the direction things are going anymore.
Smart glasses will probably do the same to smartphones.
Things are rarely completely replaced, at least not quickly.
Now that we have USB-C monitors, phones have USB-C, and high-end phones have CPU performance similar to low-end desktop CPUs (A18 vs Intel 14100), we could actually start replacing laptops with phones for some use cases.
Once there is an actual usable in-glasses screen, I will agree.
A few years ago I tried someone's smart glasses with a screen. It basically had similar functionality to my first Fitbit: it would show texts, notifications, caller ID.
I really want one of those and went looking, but couldn't find it.
Shameless plug: We build an open source OS for glasses that works with them. AugmentOS.org
Even Realities G1 are the best HUD glasses on the market right now. They're the first pair (with prescription) that I can wear all day without pain, and without looking like a dork.
My team used to main the Vuzix Z100 glasses, starting with the Vuzix Ultralite reference design that predated them. We won’t touch them these days (and recently stopped selling them on our store).
Others… the Meizu StarV Air 2 and INMO GO2 both lack a public SDK, and the GO2 is too heavy. The Brilliant Labs Frame is a cool prototyping toy, but awful as glasses.
For "AI glasses" that have a camera but no display:
You have the Ray-Ban and a number of companies making these. The only one I can recommend is our upcoming Mentra Live (https://mentra.glass). It has the same camera sensor as the Ray-Ban, but runs open source software & has an SDK.
For more sci-fi glasses that run Android and have display + camera, see the INMO Air 3, TCL RayNeo X3. These are too heavy to be worn as regular glasses, but are fun prototyping tools.
All these companies will exist in 2026. As for a 5 year horizon, I’d place my bets on Even Realities, and Vuzix (as a waveguide supplier, not consumer HW). Meizu and TCL will stick around as Chinese megacorps, though I’m 50/50 they will continue developing consumer smart glasses. Brilliant Labs is cooked unless they turn things around with their next pair of glasses.
Google & Android XR: I don't expect their glasses to be competitive for at least a few HW generations. In terms of public information, we know they're monocular and heavy (>45g), which is an immediate killer for the majority of users.
Meta, Microsoft, Apple, etc. are far more likely to snitch on you to the government you actually live under.
I'm not the gp, but for me, there are several bigger concerns:
First, the possibility that access could be leveraged for intelligence gathering or industrial espionage. The goal might be geopolitics, but I still don't want my data to be fodder, nor do I want to explain to my employer that I'm responsible for their breach.
Second, the possibility of becoming collateral damage during an escalation of hostilities between my country and China. If I've grown dependent on a device, I face significant disruption if they block cloud services or even outright remotely brick it. The war in Ukraine demonstrated this isn't limited to the other country's exports, but they're still at the greatest risk.
So yeah, a company snitching on me to the big brother I live under is just one threat I have to consider when giving access to all my data.
Do you have any experience with their progressives? The ones I'm trying are so lousy that I'm going to try multifocal contacts next week. According to the order form, their progressive lenses seem somewhat decent.
I sure as f* hope not. I already struggle with my cellphone addiction and all of its constant distractions and assaults on my attention span, the last thing I want is something from one of the largest advertising companies on the planet glued to my face.
It's sort of like blaming the obesity epidemic on lack of willpower. Yes, any individual is responsible for himself. At the same time, companies have found ever more ingenious ways to get lots of us addicted using food. When I look back at pictures from the 1950s and see that nearly everyone is skinny or normal weight, am I just supposed to think that they had so much more willpower than today's people?
We'll need to overhaul the concept of limited liability before we do that, though; the thought of someone being left without their eyes because a company goes bankrupt and no one is at fault is pretty horrifying.
Ray Ban does it for their Meta glasses, but Lensology can handle stronger prescription lenses.
Ehh, there is nothing special about the lens; all the magic is in the frame, and the Ray-Ban and Oakley frames look very similar to their standard versions. Getting new lenses for sunglasses is very common.
Have you never had prescription sunglasses?
The frame will probably change slightly over time to make them incompatible.
I just sent an old pair of glasses to eyeglasses.com for new lenses. I never considered this to be a big deal.
This is probably true.
The rest of your comment is probably not true for most people.
It just depends on how strong your prescription is and how willing the shop/website is to do special orders.
If you are almost blind, then your choice of lenses/frames will be much more limited than if you are only slightly blind (most people). Any reputable eyeglass/optician shop should be able to make custom lenses for pretty much any frame. They can't always do the super sleek shades that some people like to wear.
I see this all the time at the beach; lots of people wear sunglasses there.
Because it’s not going to ever be socially acceptable to just start talking to your glasses vs silently typing on a phone in most public places/situations.
With that said, I don’t think these can replace phones until they’re quite a lot smaller and lighter. And to make it worse, you’d need at least two pairs - regular and sun. Possibly three if you’re someone who regularly uses safety glasses.
Meta says they will open it up though.
That said, I'm not sure I'd want smart glasses. Being stuck on a computer for work, I try to take some time every day to be completely free of digital things. It's hard enough to do that with a smartphone in my pocket vying for my attention. I imagine it would be even harder with smart glasses over my eyeballs.
I also don't understand how they're used to locate items around the house. Is there some sort of GPS? Or do you mean it helps by virtue of seeing (e.g. prescription)?
AR glasses will be a hit, no doubt, but I don't see what's so special about glasses with a mic, camera, and speaker on them. Especially for an older person, it seems it would be more useful to get a phone with a screen, point it at things, and see them on a display.
A phone you have to hold in your hand, whereas glasses you don't. Therefore glasses are superior for these use cases.
I’m very curious what this person did before these glasses were released.
The glasses have a camera, and small speakers near your ears. They also have a microphone, so you can give them voice commands. Like Amazon Alexa, but in the glasses.
And mobile phones aren’t going anywhere because mobile computing has peaked: there are no use cases that require a device with a different form factor, it’s just a matter of lifestyle preference.
If we’re abandoning screen based devices, I’d rather have a small 2000s style flip phone with all the latest tech and LLM features built in, than something like glasses, which clash too much with fashion choices. Bonus if the battery life is insane.
It tends to wear on the bridge of the nose after a while. And I'm sure these e-glasses are going to be heavier than normal glasses with a battery and electronics in addition to the normal things glasses have.
I can't fathom whether I'd use these things myself (at least not now, 'cause I'm OK with my smartphone and don't really want to get a Meta account), but this definitely changes my perspective a little.
I returned them 'cause I didn't like forcing a camera into everyone's face unannounced, and the photos it took weren't very good (vertical pics cut off most stuff in my field of view, weird choice of focal length). Maybe with two cameras they could have a wide angle and a telephoto, but the Ray-Bans at least just had the one.
I think they've done it; this is Meta's iPod.
I would love to try these types of devices but there is no way I'd ever give money to Meta or put my personal information into their systems or encourage my friends and family to do so either.
Hopefully Meta puts in a bunch of R&D to see what works in this space and then someone else (Apple?) just copies it.
I've tried the glasses myself, and I'm convinced that wearable eyewear like this will eventually replace the mobile phone. With ongoing advances in miniaturization, it's only a matter of time before AR and VR are fully integrated into everyday wearables.