
const_cast
I doubt it, these devices have a serious user input problem. The cornerstone of computers is human-computer interaction. That's what makes these pieces of silicon useful. They're tools for humans, which means it doesn't matter how capable the tool is if it's harder to use.

Smartphones were a step back in a lot of ways. Typing is slower. No mouse. Fingers are fat and imprecise. The result is most applications were severely dumbed down to work on a smartphone.

The trade-off was portability. Everyone can carry a smartphone, so it's okay that the human-computer interaction is worse in a lot of ways. Then, when we need that richer interaction, we can reach for a laptop.

The problem with smart glasses is that they go yet another step further in how poor the interaction is. Speech is perhaps the worst interface for a computer. Yes, it's neat and shows up in sci-fi all the time. But if you think about it, it's a very bad interface: it's slow, it's imprecise, it's wishy-washy, it's context dependent. Imagine, for example, trying to navigate your emails by speech only. Disaster.

Smart glasses, however, are not much more portable than phones, and everyone already has a phone. So what do we gain from smart glasses? IMO, not very much. Smart glasses may become popular, but will they replace the smartphone? In my opinion, fat chance.

What I think is more likely, actually, is smartphones replacing smart glasses. They already have cameras, so the capabilities are about the same, except smartphones can do WAY more. For most people, I imagine, the occasional "look at this thing and tell me about it" use case can be satisfied by a smartphone.


MailleQuiMaille
> The result is most applications were severely dumbed down to work on a smartphone.

Good point, and it could be argued the user soon followed that dumbification, with the youngest generations not even understanding the file/folder analogy.

I think we can go dumber! Why need an analogy at all? It will all be there, up in your face, and you can just talk to it!

goda90
Voice is slow, but it can be sped up with vocal macros: one-syllable or non-word noise commands.
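
Something like this, roughly: a minimal sketch assuming a recognizer that already hands back a short token (the macro vocabulary and command names are invented for illustration):

    # Hypothetical sketch: dispatch one-syllable "vocal macro" utterances to
    # commands. A real speech engine would supply the input strings; the
    # macro table below is made up.
    VOCAL_MACROS = {
        "tsk": "delete_word",
        "pa":  "next_field",
        "shh": "dismiss_notification",
        "mm":  "confirm",
    }

    def dispatch(utterance: str) -> str:
        """Map a short recognized noise to a command, or fall back to dictation."""
        return VOCAL_MACROS.get(utterance.strip().lower(), "dictate: " + utterance)

    if __name__ == "__main__":
        for sound in ["tsk", "pa", "hello world"]:
            print(sound, "->", dispatch(sound))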

There are also touchpads on the sides of smart glasses as another input option. And I could imagine some people liking little trackball-esque handheld controllers (like the one from the Black Mirror episode "The Entire History of You").

And there's also air gestures using cameras on the smart glasses to watch what your hands are doing.

I don't think any of these has the raw data input bandwidth that a keyboard has, and for a lot of use cases even a touchscreen could be better. But maybe that can be made up by the hands-free, augmented reality features of smart glasses.

itsdrewmiller
Eye tracking is an interface still in its infancy, but it should be as fast as manual manipulation. Either form factor could use it, but glasses are more motivated to figure it out. Headwear is also well situated for neural interfaces.
imiric
> Smartphones were a step back in a lot of ways.

I was among the nerds who swore I'd never use a touch keyboard, and I refused to buy a smartphone without a physical keyboard until 2011. Yes, typing on a screen was awful at first. But then text prediction and haptics got better, and we invented swipe keyboards. Today I'm nearly as fast and comfortable on a touch keyboard as I am on a physical one on a "real" computer.

My point is that input devices get better. We know when something can be improved, and we invent better ways of interacting with a computer.

If you think that we can't improve voice input to the point where it feels quicker, more natural, and more comfortable to use than a keyboard, you're mistaken. We're still in the very early stages of this wave of XR devices.

In the past couple of years alone, text-to-speech and speech recognition systems have improved drastically. Today it's possible to hold a nearly natural sounding conversation with AI. Where do you think we'll be 10 years from now?

> Imagine, for example, trying to navigate your emails by speech only. Disaster.

That's because you're imagining navigating a list on a traditional 2D display with voice input. Why wouldn't we adapt our GUIs to work better with voice, or other types of input?

Many XR devices support eye tracking. This works well for navigation _today_ (see some visionOS demos). Where do you think we'll be 10 years from now?

So I think you're, understandably, holding traditional devices in high regard, and underestimating the possibilities of a new paradigm of computing. It's practically inevitable that XR devices will become the standard computing platform in the near future, even if it seems unlikely today.

tyg13
For me, voice input is an immediate no-go because I don't want to have to talk to myself while I'm in line at the grocery store, or waiting for my oil change, or in the dozens of other situations where I typically use my smartphone to do things.
bandoti
Curious to see how this goes. It seems to me it’s hard to match reality—for example, books, book shelves, pencils, drafting tables, gizmos, keyboards, mouse, etc. Things with tactile feedback. Leafing through a book typeset on nice paper will always be a better experience than the best of digital representations.

AR will always be somewhat awkward until you can physically touch and interact with the material things. It’s useful, sure, but not a replacement.

Haptic feedback is probably my favorite iPhone user experience improvement on both the hardware and software side.

However, I will never be able to type faster than on my keyboard, and even with the most advanced voice inputs, I will always be able to type longer and with less fatigue than if I were to use my voice—having ten fingers and one set of vocal cords.

All options are going to be valid and useful for a very long time.

imiric
> It seems to me it’s hard to match reality—for example, books, book shelves, pencils, drafting tables, gizmos, keyboards, mouse, etc. Things with tactile feedback. Leafing through a book typeset on nice paper will always be a better experience than the best of digital representations.

There's nothing tactile about a glass pane. It's simply a medium through which we access digital objects, and a very clunky one at that. Yet we got used to it in a very short amount of time.

If anything, XR devices have the possibility to offer a much more natural tactile experience. visionOS is already touch-driven, and there are glove-like devices today that provide more immersive haptics. Being able to feel the roughness or elasticity of a material, that kind of thing. It's obviously ridiculous to think that everyone will enjoy wearing a glove all day, but this technology can only improve.

This won't be a replacement for physical objects, of course. It will always be a simulation. But the one we can get via spatial computing will be much more engaging and intuitive than anything we've used so far.

> I will never be able to type faster than on my keyboard, and even with the most advanced voice inputs, I will always be able to type longer and with less fatigue than if I were to use my voice—having ten fingers and one set of vocal cords.

Sure, me neither—_today_. But this argument ignores the improvements we can make to XR interfaces.

It won't just be about voice input. It will also involve touch input, eye tracking, maybe even motion tracking.

A physical board with keys you press to produce one character at a time is a very primitive way of inputting data into a machine.

Today we have virtual keyboards in environments like visionOS, which I'm sure are clunky and slow to use. But what if we invent an accurate way of translating the motion of each finger into a press of a virtual key? That seems like an obvious first step. Suddenly you're no longer constrained by a physical board, and can "type" with your hands in any position. What if we take this further and can translate patterns of finger positions into key chords, in a kind of virtual stenotype? What if we also involve eye, motion and voice inputs into this?
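
To make the chording idea concrete, here's a toy sketch, assuming some hand-tracking layer already tells us which fingers are down in each frame (the chord table is invented, not any real visionOS API):

    # Toy "virtual stenotype": the set of fingers held down at one moment maps
    # to a single character. The chord table is invented for illustration; real
    # hand-tracking data would replace the hard-coded frames below.
    CHORDS = {
        frozenset({"L_index"}):             "e",
        frozenset({"L_index", "L_middle"}): "t",
        frozenset({"R_index"}):             "a",
        frozenset({"R_index", "R_middle"}): "o",
        frozenset({"L_index", "R_index"}):  " ",
    }

    def decode(chord_frames):
        """Turn a sequence of finger-down sets into text."""
        return "".join(CHORDS.get(frozenset(frame), "?") for frame in chord_frames)

    if __name__ == "__main__":
        frames = [{"L_index", "L_middle"}, {"R_index", "R_middle"},
                  {"L_index", "R_index"}, {"L_index"}]
        print(decode(frames))  # -> "to e"

The interesting part is that once finger state is tracked in 3D, the "keyboard" is just a lookup table that can be redesigned freely.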

These are solvable problems we will address over time. Thinking that just because they're not solved today they never will be is very shortsighted.

Being able to track physical input from several sources in 3D space provides a far richer environment to invent friendly and intuitive interfaces than a 2D glass pane ever could. In that sense, our computing is severely constrained by the current generation of devices.

const_cast OP
I'm not saying I don't believe you. But I am saying that, as a programmer, if you told me I had to only use an iPhone at work I'd probably set myself on fire.

> It's practically inevitable that XR devices will become the standard computing platform in the near future

Yeah I mean I just really doubt it. I'm not seeing a whole lot of benefit over smartphones, which are already ubiquitous. At best, I'm hearing that it won't suck that much. Which... okay not really high praise.

I'm sure, like the smartphone, it will replace SOME use cases. The difference is that the use cases the smartphone replaced were really important ones that cover 80% of the common stuff people do. So now everyone has a smartphone.

Will that be the case with XR? I doubt it. The use cases it will cover will be, at absolute best, incremental compared to the smartphone. And, I presume, the smartphone will cover those use cases too. Which is why I think it's more likely that smartphones swallow this glasses thingy than the other way around.

imiric
> I'm not saying I don't believe you.

I'm not trying to convince anyone. Believe what you want to believe :)

> But I am saying that, as a programmer, if you told me I had to only use an iPhone at work I'd probably set myself on fire.

Sure, me too. But that's a software and ergonomics problem. There's no way you will ever be as productive on a 6" display, tapping on a glass pane, as you would be on one or more much larger displays, with a more comfortable physical keyboard and far richer haptics. Not to mention the crippled software environment of iOS.

But like I mentioned in other threads, it would be shortsighted to think that interfaces of XR devices will not be drastically better in the future. Everyone keeps focusing on how voice input is bad, ignoring that touch, eye and motion tracking in a 3D environment can deliver far richer interfaces than 2D displays ever did. Plus voice input will only get better, as it has greatly improved over the last 2 years alone.

> I'm not seeing a whole lot of benefit over smartphones, which are already ubiquitous. At best, I'm hearing that it won't suck that much. Which... okay not really high praise.

Have you seen the user avatars in visionOS 26? Go watch some demos if you haven't.

Being able to have a conversation with someone who feels like they're physically next to you is _revolutionary_. That use case alone will drive adoption of XR devices more than anything else. Video conferences on 2D displays from crappy webcams feel primitive in comparison. And that is _today_. What will that experience be like in 10 years?

I'm frankly surprised that a community of tech nerds can be so dismissive of a technology that offers more immersive digital experiences. I'm pretty sure that most people here own "battlestations" with 2+ screens. Yet they can't imagine what the experience of an infinite number of screens in a 3D environment could be like? Forget the fact that today's generation of XR displays are blurry, have limited FoV, or anything else. Those are minor limitations of today's tech that will improve over time. I'm 100% sure that once all of those issues are ironed out, this community will be the first to adopt XR for "increased productivity". Hell, current-gen devices are almost there, and some people are already adopting them for productivity work.

So those are just two examples. Once the tech is fully mature, and someone creates a device that brings all these experiences together in a comfortable and accessible package, it will be an iPhone-like event where the market will explode. I suspect we're less than a decade away from that event.

int_19h
What is your wpm with a touch keyboard (however fancy) vs an actual physical one?
CamperBob2
Something that needs to be considered before answering that question is that current predictive text engines are ridiculously stupid compared to what an LLM (or even an "SLM") with access to all of your previous texts could do.

When somebody finally gets a clue and implements that, no typist on Earth will be able to keep up with it.
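
As a crude stand-in for that idea, here's a bigram model built from a user's own message history instead of a real LLM (the sample texts are invented); it only illustrates why personal history beats a generic dictionary:

    # Crude stand-in for personalized prediction: a bigram model built from the
    # user's own previous texts. A real implementation would use an on-device
    # LLM; this just shows the shape of "trained on your own messages".
    from collections import Counter, defaultdict

    history = [
        "running late be there in ten",
        "be there in five",
        "running late sorry",
    ]  # invented sample texts

    model = defaultdict(Counter)
    for message in history:
        words = message.split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1

    def predict(prev_word: str) -> str:
        """Suggest the most likely next word given the previous one."""
        options = model.get(prev_word)
        return options.most_common(1)[0][0] if options else ""

    if __name__ == "__main__":
        print(predict("running"))  # -> "late"
        print(predict("in"))       # -> "ten" (ties broken by insertion order)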

kalleboo
Since iOS 17, Apple's keyboard has used a transformer language model that trains on your input.
kgwxd
I'll never wear them, but I'm sure they'll have wireless connectivity for a keyboard, mouse, and other sane inputs, just like phones. For me the worst part of a touchscreen is having to hold the device like a fancy glass egg (on a sane device I'd look up how to spell the word for that), no matter what I'm doing, out of fear that the wrong thing will happen if I don't. At least a plain monitor strapped to my face doesn't have that concern.
naveen99
To play devil's advocate, speech is how humans delegate to other humans. Usually faster and clearer to communicate with an employee via voice in person or over the phone than on email.
Eddy_Viscosity2
> Usually faster and clearer to communicate with an employee via voice in person

That's because the communication is going from a person to a person and both are very highly tuned to not only hear the words, but the tone, context, subtext, and undertones. There can be all kinds of information packed in a few words that have nothing to do with the words.

Machines, even LLMs, can't do this. I don't think they ever will. So typing, shortcut commands, and the like are far more efficient for interacting with a computer.

naveen99
That’s my point. It’s not the interface that’s the bottleneck. AI needs to get a lot better and faster…
A lot of people spend hours consuming auto-playing short-form video content; I would guess that's the majority of young people in the West.
