Preferences

> It seems to me it’s hard to match reality—for example, books, book shelves, pencils, drafting tables, gizmos, keyboards, mouse, etc. Things with tactile feedback. Leafing through a book typeset on nice paper will always be a better experience than the best of digital representations.

There's nothing tactile about a glass pane. It's simply a medium through which we access digital objects, and a very clunky one at that. Yet we got used to it in a very short amount of time.

If anything, XR devices have the potential to offer a much more natural tactile experience. visionOS is already driven by hand gestures, and there are glove-like devices today that provide more immersive haptics: being able to feel the roughness or elasticity of a material, that kind of thing. It's obviously ridiculous to think that everyone will enjoy wearing a glove all day, but this technology can only improve.

This won't be a replacement for physical objects, of course. It will always be a simulation. But the one we can get via spatial computing will be much more engaging and intuitive than anything we've used so far.

> I will never be able to type faster than on my keyboard, and even with the most advanced voice inputs, I will always be able to type longer and with less fatigue than if I were to use my voice—having ten fingers and one set of vocal cords.

Sure, me neither—_today_. But this argument ignores the improvements we can make to XR interfaces.

It won't just be about voice input. It will also involve touch input, eye tracking, maybe even motion tracking.

A physical board with keys you press to produce one character at a time is a very primitive way of inputting data into a machine.

Today we have virtual keyboards in environments like visionOS, which I'm sure are clunky and slow to use. But what if we invent an accurate way of translating the motion of each finger into the press of a virtual key? That seems like an obvious first step. Suddenly you're no longer constrained by a physical board and can "type" with your hands in any position. What if we take this further and translate patterns of finger positions into key chords, in a kind of virtual stenotype? What if we also bring eye, motion, and voice input into this?
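
To make the chord idea concrete, here's a minimal sketch of what the mapping layer could look like, in Swift. Everything in it is hypothetical: the finger model, the flexion threshold, and the tiny chord table are all made up for illustration. A real system would read per-finger flexion from a hand-tracking API rather than hard-coded values, and would need a full, ergonomically tuned layout.

```swift
// Toy sketch of chord-based "virtual stenotype" input.
// All names and values here are hypothetical, for illustration only.

enum Finger: Hashable {
    case thumb, index, middle, ring, little
}

/// A chord is simply the set of fingers currently flexed past a threshold.
typealias Chord = Set<Finger>

/// A tiny, made-up chord table. A practical layout would cover the
/// full alphabet and be tuned for comfort and recognition accuracy.
let chordTable: [Chord: Character] = [
    [.index]: "e",
    [.middle]: "t",
    [.index, .middle]: "a",
    [.thumb, .index]: " ",
]

/// Map one frame of per-finger flexion values (0 = fully extended,
/// 1 = fully curled) to a character, if the resulting chord is known.
func decode(flexion: [Finger: Double], threshold: Double = 0.6) -> Character? {
    let chord = Chord(flexion.filter { $0.value > threshold }.keys)
    return chordTable[chord]
}

// Example frame: index and middle fingers curled together.
let frame: [Finger: Double] = [.thumb: 0.1, .index: 0.8, .middle: 0.7, .ring: 0.2, .little: 0.1]
if let ch = decode(flexion: frame) {
    print(ch)          // prints "a"
} else {
    print("no chord")
}
```

The interesting property is that nothing in this mapping depends on a physical board: the same chord decodes the same way whether your hands are resting on a desk, in your lap, or at your sides.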

These are solvable problems that we will address over time. Thinking they will never be solved just because they aren't solved today is very shortsighted.

Being able to track physical input from several sources in 3D space provides a far richer environment to invent friendly and intuitive interfaces than a 2D glass pane ever could. In that sense, our computing is severely constrained by the current generation of devices.

