Smartphones were a step back in a lot of ways. Typing is slower. No mouse. Fingers are fat and imprecise. The result is that most applications were severely dumbed down to work on a smartphone.
The trade-off was portability. Everyone can carry a smartphone, so it's okay that the human interaction is worse in a lot of ways. Then, when we need that richer interaction, we can reach for a laptop.
The problem with smart glasses is that they go even a step further in how poor the interaction is. Speech is perhaps the worst interface for computers. Yes, it's neat and shows up in sci-fi all the time. But if you think about it, it's a very bad interface: it's slow, it's imprecise, it's wishy-washy, it's context-dependent. Imagine, for example, trying to navigate your emails by speech only. Disaster.
Smart glasses, however, are not much more portable than phones, and everyone already has a phone. So what do we gain from smart glasses? IMO, not very much. Smart glasses may become popular, but will they replace the smartphone? In my opinion, fat chance.
What I think is more likely, actually, is smartphones replacing smart glasses. Phones already have cameras, so the capabilities are about the same, except smartphones can do WAY more. For most people, I imagine, the occasional "look at this thing and tell me about it" use case can be satisfied by a smartphone.
Good point, and it could be argued the user soon followed that dumbification, with the youngest generations not even understanding the file/folder analogy.
I think we can go dumber! Why have an analogy at all? It will all be there, up in your face, and you can just talk to it!
There are also touch pads on the side of the smart glasses as another input option. And I could imagine some people liking little trackball-esque handheld controllers (like the one from the Black Mirror episode "The Entire History of You").
And there's also air gestures using cameras on the smart glasses to watch what your hands are doing.
I don't think any of these has the raw input bandwidth that a keyboard has, and for a lot of use cases even a touchscreen would be better. But maybe that can be made up for by the hands-free, augmented-reality features of smart glasses.
I was among the nerds who swore I'd never use a touch keyboard, and I refused to buy a smartphone without a physical keyboard until 2011. Yes, typing on a screen was awful at first. But then text prediction and haptics got better, and we invented swipe keyboards. Today I'm nearly as fast and comfortable on a touch keyboard as I am on a physical one on a "real" computer.
My point is that input devices get better. We know when something can be improved, and we invent better ways of interacting with a computer.
If you think we can't improve voice input to the point where it feels quicker, more natural, and more comfortable to use than a keyboard, you're mistaken. We're still in the very early stages of this wave of XR devices.
In the past couple of years alone, text-to-speech and speech recognition systems have improved drastically. Today it's possible to hold a nearly natural-sounding conversation with AI. Where do you think we'll be 10 years from now?
> Imagine, for example, trying to navigate your emails by speech only. Disaster.
That's because you're imagining navigating a list on a traditional 2D display with voice input. Why wouldn't we adapt our GUIs to work better with voice, or other types of input?
Many XR devices support eye tracking. This works well for navigation _today_ (see some visionOS demos). Where do you think we'll be 10 years from now?
So I think you're, understandably, holding traditional devices in high regard, and underestimating the possibilities of a new paradigm of computing. It's practically inevitable that XR devices will become the standard computing platform in the near future, even if it seems unlikely today.
AR will always be somewhat awkward until you can physically touch and interact with material things. It’s useful, sure, but not a replacement.
Haptic feedback is probably my favorite iPhone user experience improvement on both the hardware and software side.
However, I will never be able to type faster than on my keyboard, and even with the most advanced voice inputs, I will always be able to type longer and with less fatigue than if I were to use my voice—having ten fingers and one set of vocal cords.
All options are going to be valid and useful for a very long time.
There's nothing tactile about a glass pane. It's simply a medium through which we access digital objects, and a very clunky one at that. Yet we got used to it in a very short amount of time.
If anything, XR devices have the possibility to offer a much more natural tactile experience. visionOS is already touch-driven, and there are glove-like devices today that provide more immersive haptics. Being able to feel the roughness or elasticity of a material, that kind of thing. It's obviously ridiculous to think that everyone will enjoy wearing a glove all day, but this technology can only improve.
This won't be a replacement for physical objects, of course. It will always be a simulation. But the one we can get via spatial computing will be much more engaging and intuitive than anything we've used so far.
> I will never be able to type faster than on my keyboard, and even with the most advanced voice inputs, I will always be able to type longer and with less fatigue than if I were to use my voice—having ten fingers and one set of vocal cords.
Sure, me neither—_today_. But this argument ignores the improvements we can make to XR interfaces.
It won't just be about voice input. It will also involve touch input, eye tracking, maybe even motion tracking.
A physical board with keys you press to produce single characters at a time is a very primitive way of inputting data into a machine.
Today we have virtual keyboards in environments like visionOS, which I'm sure are clunky and slow to use. But what if we invent an accurate way of translating the motion of each finger into a press of a virtual key? That seems like an obvious first step. Suddenly you're no longer constrained by a physical board, and can "type" with your hands in any position. What if we take this further and can translate patterns of finger positions into key chords, in a kind of virtual stenotype? What if we also involve eye, motion and voice inputs into this?
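The chord idea is easy to sketch. Everything here is hypothetical (the finger labels and the chord table are invented for illustration), but it shows why chording beats one-key-per-character input: each recognized hand pose can emit a whole digraph or suffix at once, which is how stenotype machines outpace ordinary typing.

```python
# Hypothetical sketch of a "virtual stenotype": a chord decoder is just
# a lookup from the set of fingers currently "pressed" (as reported by
# hand tracking) to a string. Finger labels and chord assignments below
# are made up for illustration.

CHORDS = {
    frozenset(["L_index"]): "e",
    frozenset(["L_middle"]): "t",
    frozenset(["L_index", "L_middle"]): "th",              # two-finger chord -> digraph
    frozenset(["R_index", "R_middle", "R_ring"]): "ing",   # chord -> common suffix
}

def decode(frames):
    """Translate a sequence of finger sets (one per tracked 'press') into text.

    Unknown chords are simply dropped; a real system would need error
    correction and probably a language model on top.
    """
    return "".join(CHORDS.get(frozenset(f), "") for f in frames)

# Two chords produce a three-letter word: "th" + "e" -> "the"
print(decode([{"L_index", "L_middle"}, {"L_index"}]))
```

The hard part, of course, isn't the lookup; it's reliably classifying hand poses into chords from camera or sensor data, which is exactly the tracking-accuracy problem the paragraph above describes.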
These are solvable problems we will address over time. Thinking that just because they're not solved today they never will be is very shortsighted.
Being able to track physical input from several sources in 3D space provides a far richer environment to invent friendly and intuitive interfaces than a 2D glass pane ever could. In that sense, our computing is severely constrained by the current generation of devices.
> It's practically inevitable that XR devices will become the standard computing platform in the near future
Yeah I mean I just really doubt it. I'm not seeing a whole lot of benefit over smartphones, which are already ubiquitous. At best, I'm hearing that it won't suck that much. Which... okay not really high praise.
I'm sure, like the smartphone, it will replace SOME use cases. The difference is that the use cases the smartphone replaced were really important ones that cover 80% of the common stuff people do. So now everyone has a smartphone.
Will that be the case with XR? I doubt it. The use cases it will cover will be, at absolute best, incremental compared to the smartphone. And, I presume, the smartphone will cover those use cases too. Which is why I think it's more likely that smartphones swallow these glasses thingies than the other way around.
I'm not trying to convince anyone. Believe what you want to believe :)
> But I am saying that, as a programmer, if you told me I had to only use an iPhone at work I'd probably set myself on fire.
Sure, me too. But that's a software and ergonomics problem. There's no way you will ever be as productive on a 6" display, tapping on a glass pane, as you would on a much larger display (or several), with a more comfortable physical keyboard with far richer haptics. Not to mention the crippled software environment of iOS.
But like I mentioned in other threads, it would be shortsighted to think that interfaces of XR devices will not be drastically better in the future. Everyone keeps focusing on how voice input is bad, ignoring that touch, eye and motion tracking in a 3D environment can deliver far richer interfaces than 2D displays ever did. Plus voice input will only get better, as it has greatly improved over the last 2 years alone.
> I'm not seeing a whole lot of benefit over smartphones, which are already ubiquitous. At best, I'm hearing that it won't suck that much. Which... okay not really high praise.
Have you seen the user avatars in visionOS 26? Go watch some demos if you haven't.
Being able to have a conversation with someone that feels like they're physically next to you is _revolutionary_. Just that use case alone will drive adoption of XR devices more than anything else. Video conferences on 2D displays from crappy webcams feels primitive in comparison. And that is _today_. What will that experience be like in 10 years?
I'm frankly surprised that a community of tech nerds can be so dismissive of a technology that offers more immersive digital experiences. I'm pretty sure that most people here own "battlestations" with 2+ screens. Yet they can't imagine what the experience of an infinite amount of screens in a 3D environment could be like? Forget the fact that today's generation of XR displays are blurry, have limited FoV, or anything else. Those are minor limitations of today's tech that will improve over time. I'm 100% sure that once all of those issues are ironed out, this community will be the first to adopt XR for "increased productivity". Hell, current gen devices are almost there, and some are already adopting them for productivity work.
So those are just two examples. Once the tech is fully mature, and someone creates a device that brings all these experiences together in a comfortable and accessible package, it will be an iPhone-like event where the market will explode. I suspect we're less than a decade away from that event.
When somebody finally gets a clue and implements that, no typist on Earth will be able to keep up with it.
That's because the communication is going from a person to a person and both are very highly tuned to not only hear the words, but the tone, context, subtext, and undertones. There can be all kinds of information packed in a few words that have nothing to do with the words.
Machines, even LLMs, can't do this. I don't think they ever will. So typing, shortcut commands, and the like are far more efficient for interacting with a computer.
Laptops, of course, have the much bigger screen and keyboard, not really replicated by smartphones. They have use cases that smartphones can’t cover well for hardware reasons. So they’ve stuck around (in a notably diminished form).
If good AR glasses become a thing… I dunno, they could easily replace monitors generally, right? Then a laptop just becomes a keyboard. That’s a hardware function that seems necessary.
What niche is left for the smartphone?
I believe that was the entire point of the comparison. Smartphones replaced SOME use cases of laptops, in the same way ubiquitous smart glasses could replace SOME use cases of smartphones.
If you are afraid of technology, Android or iPadOS is light-years ahead of Windows or macOS.
It's more than enough to handle paying bills, applying for jobs, etc. Hell, with a Bluetooth keyboard, a bit of grit, and GitHub Codespaces, you can develop applications.
You can also cast your screen to a TV, or on a handful of phones use USB-C to HDMI.
The post I was responding to clearly meant “like smartphones replaced […] laptops,” which is to say, they don’t think AR glasses will replace smartphones (because smartphones didn’t completely replace laptops). I get that.
Then I pointed out that smartphones did more-or-less replace a number of other electronic devices. And there are some reasons they didn’t fully replace laptops. Then I went on to think about the niches that could exist should AR glasses become a major thing.
It is hard to say when the peak of laptops in circulation was, right? Because simultaneously the tech has been maturing (longer product lifetimes) and smartphones have taken some laptop niches.
I’m not even clear on what we’re measuring when we say “replace.” Every non-technical person I know has a laptop, but uses it on maybe a weekly basis (instead of daily, for smartphones).
BTW, I have to consciously turn off my cybersecurity mindset when thinking about smart glasses. It's hard not to see all the new attack vectors they introduce.
I wear my Ray-Ban Metas a lot (bought in 2023) and love them, but I can't take selfies with them. I have to pull out my phone. They are complementary to the phone, though I do enjoy not having my phone on me to take pics and vids and ask for the time (add 5G to them and they'll do more, like stream music).
Whatever OpenAI is working on to replace the iPhone, it will need to be able to take selfies! I'm betting it's just an AI phone with the experience of the movie Her, where almost everything is done from the lock screen and it takes the best selfies of you (gets you to the best lighting) and everything under the sun.
Me (an old millennial), I can't even conceive of getting any real work done on just a smartphone. But I'm a power user: I need to log onto Linux servers and administer them, or crack open Excel files and use spreadsheets. Not an ordinary user.
You only really need one for doing some types of work.
Laptops and tablets replaced desktops. Nobody sits down in an office and does work on a smartphone.
Smartphones replaced phones, pagers, music players and cameras.
10 years ago all my non-tech friends and family had laptops. Now they all use their smartphones as primary computing devices. My nephew who just graduated from high school and works in IT doesn’t even own a personal laptop.
Also, mini PCs are a new trend nowadays. I wouldn't say that this is the direction things are going any more.
I'd also say a mini PC is still a "performant desktop" in a smaller form factor, which is probably a reaction to gaming desktops becoming unnecessarily large and unwieldy. Similar to how importing Japanese kei trucks has become popular now that American pickups are sized for vanity and not work practicality.
Mini PCs have the same problem as laptops: trying to squeeze performance into a small form factor poses a heat dissipation issue (even more so because the power adapter is typically inside the case, which adds heat). And you cannot put a high-end gaming GPU in a small form factor, an extra characteristic they share with laptops versus larger desktops.
Also, Macs are fine for gaming, if the game actually runs. A Mac would not be my primary choice or suggestion if I were looking specifically for a gaming machine, but now that I have a MacBook I don't need another machine for gaming, as it runs games well enough at high settings. A mini PC would not have been my suggestion as a gaming machine either.
Smart glasses will probably do the same to smartphones.
Things are rarely completely replaced, at least not quickly.
Now that we have USB-C monitors, phones have USB-C, and high-end phones have CPU performance similar to low-end desktop CPUs (A18 vs Intel 14100), we could actually start replacing laptops with phones for some use cases.