Vision Pro will turn any surface into a display with touch control
Developers using Apple Vision Pro have learned they will be able to create controls and displays, and have them appear to be on any surface in the user's room.

Vision Pro can make any surface appear to be a control, or an app display
Apple has been working on Vision Pro for a long time, and along the way it has applied for countless patents related to it -- even if it wasn't always obvious what the company's plans were. Now one intriguing patent application from 2020 has been revealed to be part of a Vision Pro capability that will help developers.
Developer Steve Troughton-Smith has discovered that it's possible to pick a surface within the headset's field of view, and then place any app so that it appears to actually be on that surface.
"A natural way for humans to interact with (real) objects is to touch them with their hands," said Apple's 2020 patent application. "[Screens] that allow detecting and localizing touches on their surface are commonly known as touch screens and are nowadays common part of, e.g., smartphones and tablet computers."
Troughton-Smith used an Apple Music app for his experimentation, but it can be any app and seemingly any controls. So while Apple's virtual keyboard for the Vision Pro is not practical for long typing sessions, a user's desk could be turned into a keyboard.
That would still make for a stunted typing experience, as there would be no travel on the keys. But there are already keyboard projectors that use lasers to display a QWERTY layout directly onto a flat surface.
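In current visionOS terms, surface-pinned content like this is typically built with RealityKit plane anchors inside an immersive space. What follows is only a minimal sketch of that general approach, not Troughton-Smith's actual code; the "controls" attachment, the 30 cm minimum bounds, and the Play button are illustrative placeholders.

```swift
import SwiftUI
import RealityKit

// Minimal sketch: pin a SwiftUI control panel to a real, table-like surface.
// Assumes this view is presented inside an ImmersiveSpace.
struct SurfaceAnchoredControlsView: View {
    var body: some View {
        RealityView { content, attachments in
            // Ask RealityKit for a horizontal, table-classified surface that is
            // at least 30 cm x 30 cm. Anything parented to this anchor appears
            // to sit on that real surface.
            let tableAnchor = AnchorEntity(
                .plane(.horizontal,
                       classification: .table,
                       minimumBounds: SIMD2<Float>(0.3, 0.3)))

            // Attach the SwiftUI panel to the detected surface.
            if let panel = attachments.entity(for: "controls") {
                panel.position = [0, 0.01, 0]   // lift it 1 cm off the plane
                tableAnchor.addChild(panel)
            }
            content.add(tableAnchor)
        } attachments: {
            Attachment(id: "controls") {
                // Stand-in for any app UI, such as a music player's controls.
                Button("Play") { /* start playback */ }
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}
```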

Comments
Wondering how dexterous you can be with the hand tracking. Ten-finger typing is one example. Another: can you work a virtual Rubik's cube with your hands?
The trick with a virtual Rubik's cube is doing it with 10 fingers simultaneously. Well, maybe a max of 6? The current SDK says a max of 2 fingers simultaneously, so there's work to do on this front, though I don't know if that is just what visionOS provides by default and developers can add more on their own.
The trick for the software with a Rubik's cube is to know when the back fingers - farthest from the cameras and often behind UI elements and your other fingers - are millimeters away from the UI element. If the tracking is good enough, it will know that your fingers are touching virtual surfaces, know that the user is grabbing the UI element in three dimensions, and the software can respond continuously to the user's action.
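For what it's worth, visionOS's ARKit layer does expose a full joint skeleton per hand, not just pinch events. A rough sketch of the kind of per-fingertip proximity check described above -- assuming an immersive space with hand-tracking permission granted, and with the 5 mm "touch" threshold and the cube-face target point being made-up values -- could look like this:

```swift
import ARKit
import simd

// Rough sketch: check every fingertip of a tracked hand against a point on
// a virtual cube face, and treat anything within ~5 mm as a touch.
func trackFingertips(near cubeFaceCenter: SIMD3<Float>) async throws {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    try await session.run([handTracking])

    for await update in handTracking.anchorUpdates {
        let hand = update.anchor
        guard let skeleton = hand.handSkeleton else { continue }

        // Fingertip joints for all five fingers of this hand.
        let tips: [HandSkeleton.JointName] = [
            .thumbTip, .indexFingerTip, .middleFingerTip,
            .ringFingerTip, .littleFingerTip
        ]

        for name in tips {
            let joint = skeleton.joint(name)
            guard joint.isTracked else { continue }

            // The joint transform is relative to the hand anchor; compose it
            // with the anchor's transform to get a world-space position.
            let worldTransform = hand.originFromAnchorTransform
                               * joint.anchorFromJointTransform
            let tipPosition = SIMD3<Float>(worldTransform.columns.3.x,
                                           worldTransform.columns.3.y,
                                           worldTransform.columns.3.z)

            // Within ~5 mm of the face counts as touching (assumed threshold).
            if distance(tipPosition, cubeFaceCenter) < 0.005 {
                // Register a touch or grab on that face of the virtual cube.
            }
        }
    }
}
```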
What Apple has shown with the hand tracking has been rather Spartan. They have only shown the most fundamental way to navigate the environment: just a pinch and a swipe. Your hands are gigantic control devices with a lot of degrees of freedom. E.g., there are people wondering how you can play games without controllers. The answer is with hand gestures. One hand could be forward-back-right-left, including velocity and acceleration if the developer desires, the other hand could be firing, inventory selection, etc., and your head and eyes are for aiming.
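To make the "one hand as a movement control" idea concrete, here is a toy sketch, with no claim that any shipping game works this way; the dead-zone size, the speed scaling, and the idea of calibrating a neutral palm position are all assumptions:

```swift
import simd

// Toy sketch: the offset of one palm from a calibrated neutral point becomes
// a forward/back/left/right movement vector, like a virtual joystick.
struct HandJoystick {
    var neutralPalmPosition: SIMD3<Float>   // captured when the player "centers" the hand
    let deadZone: Float = 0.02              // ignore drift under 2 cm (assumed)
    let speedPerMeterOfOffset: Float = 4    // 10 cm of offset -> 0.4 m/s (assumed)

    /// Maps the current palm position to a movement vector on the ground plane.
    func movement(for palmPosition: SIMD3<Float>) -> SIMD3<Float> {
        var offset = palmPosition - neutralPalmPosition
        offset.y = 0                        // keep motion on the horizontal plane
        guard length(offset) > deadZone else { return .zero }
        return offset * speedPerMeterOfOffset
    }
}
```

The other hand is then free for discrete gestures, so a pinch could stand in for the fire button while head and eye gaze handle aiming, as the comment suggests.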
Heck, I've heard VR journos ask how people can play Beat Saber without controllers. These are effing VR journalists?! Just go grab a couple of sticks and start swinging. The object tracking will do the rest. With the right colored sticks, the cameras and software will know where the sticks are in 3D space. For this, you probably really need 120 Hz though. On the double-plus side, if the sticks are properly weighted, players will actually know what it is like to swing swords. Probably a bad idea, as it could get dangerous and wouldn't make for an approachable game for the masses.
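The swinging part is mostly arithmetic once something hands you a stick-tip position each frame, wherever that tracking comes from. A back-of-the-envelope sketch, with the 120 Hz frame interval and the 2 m/s slash threshold both being guesses:

```swift
import simd

// Back-of-the-envelope sketch: finite-difference the stick-tip position over
// one frame to get swing velocity, and register a slash above a speed threshold.
struct SwingDetector {
    private var lastTipPosition: SIMD3<Float>?
    let frameInterval: Float = 1.0 / 120.0   // assumes 120 Hz tracking
    let minimumSlashSpeed: Float = 2.0       // meters per second (assumed)

    /// Feed one tracked tip position per frame; returns true when a slash registers.
    mutating func update(tipPosition: SIMD3<Float>) -> Bool {
        defer { lastTipPosition = tipPosition }
        guard let previous = lastTipPosition else { return false }
        let velocity = (tipPosition - previous) / frameInterval
        return length(velocity) >= minimumSlashSpeed
    }
}
```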