Apple Vision Pro still has a way to go
The Apple Vision Pro still has a lot of work ahead before the hardware is released, with many features announced at WWDC set to receive considerable attention in the coming months.

Apple's slick presentation launching the Apple Vision Pro showed off many of the features of the headset, while demonstrations of features to attendees were promising. However, it appears that Apple still has a long way to go to get all of the elements together for the final release version.
According to Mark Gurman in the "Power On" newsletter for Bloomberg on Sunday, the demonstrations showed off the most polished parts of the visionOS operating system. Many elements, such as in-air typing and adaptations of core iPad apps, still need work.
There is still more to do in relation to 3D video content, third-party app support, the handling of prescription lenses, comfort, and the all-important EyeSight feature.
While Apple used a smaller team to develop the headset pre-launch, due to its culture of secrecy, it's now bringing in thousands more employees to improve the project. This includes internal testing of the hardware, though that too is still limited to senior engineers and executives.
For core apps, developers of iOS and iPadOS versions are making new editions for visionOS. Apps for Calendar, Maps, Books, Mail, and others have yet to be finished.
EyeSight, the feature that shows the user's eyes on an external display, wasn't functional on demonstration hardware. Again, Apple is now expanding the number of engineers working on it to make sure it works properly.
In initial expanded testing, it's been found that, while Apple tried to solve the weight issue of other headsets by removing the battery, the metal-framed hardware still apparently feels too heavy after hours of use. As a comfort fix, a second strap that goes over the wearer's head is on the way, though it could be offered as a separate accessory.
For marketing, Apple will be making dedicated areas for demonstrations and for customers to select the right sizes of bands and light seals. Stores will be provided with an iPhone app that scans customer faces to aid in selecting the correct size.
Some features aren't going to make it into the first release, but are still intended for future releases. While the first version will be able to support one desktop view of a nearby Mac screen at a time, future models will be capable of handling multiple displays.
The use of Personas for FaceTime conferences will apparently be limited to one-on-one chats. In the second generation, multiple Vision Pro users will be able to use Personas in a group conversation.
Virtual fitness content and Fitness+ are also anticipated for the next generational release.
Apple is probably a long way away from its next headset launch, but it has already shifted some employees from the original Vision Pro to the newer models. Those future editions include a second-gen high-end model and a lower-end variant.

Comments
I expect to use the VP as a desktop display for a new Studio Pro I will be purchasing at the same time in 2024.
My use cases will be video, photo editing, and 3-D graphics for geologic modeling (a specialized field, but this would be ideal for both oil and gas as well as hard rock exploration), as well as the primary display for the Studio Pro.
By 2026+ I expect to be using a future VP iteration as my primary entertainment (TV replacement) and computer display for all of my devices.
A lot of ink will be spilled over the price. But like all computers, these are tools for living and working more efficiently. VP will replace thousands of dollars of display technology (multiple high-res computer displays and several high-end TVs). In addition, it will allow me to be more productive (i.e., higher work product in a shorter period of time) and thus be the superior tech for my workflow.
Don't people, like, sweat? If you aren't sweating, you are not working out. If I use any headset for fitness, the straps and seals will have to be put in the wash, and there would be a rather high risk of damaging the hardware with sweat too. Those hardware parts would at minimum have to be rinsable.
You really need a light seal that goes over your forehead, so that the weight can be carried there instead of the cheek bones or cheek muscles, and a longitudinal strap from the forehead to the back strap.
I kind of thought Apple would have a fan in the device to drive air in and out of the light seal, or through the device, so that it keeps the volume under the light seal cool and fresh too.
It is curious that Apple did not show EyeSight to the media. Many people are assuming that it will look weird off angle and that it is not an actual image of you. We’ll have to wait and see.
Most likely, Apple did not show it because they are not ready to reveal all the secrets of how it works. Other companies are going to try to copy it, and Apple isn't going to help them get started.
Mike Rockwell revealed that the headset generates a different EyeSight view for each person looking at it. I think a key question is: how many of these different views can it generate at once, and how many can the lenticular lens cope with? My guess is that it will look incredible when 1-3 people are looking at it but might start to look a bit weird for additional people depending on the angle they are at. But maybe they really are able to create enough views that it never looks weird even if there is a crowd of people looking at it. Whatever number of views they decided they needed probably took years of work to determine, and it is the key piece of data that Apple will want to hold back from competitors. So I don't think Apple is keeping this feature under wraps because it looks bad, but rather because they don't want to help others get started in copying it.
The only thing left is to polish the software, and that depends on Apple and the developers. On Apple's side, the base foundation of the software is done, particularly the part where visionOS is compatible with all of the previous operating systems and development tools used by Apple. I don't think this release is any different from the original iPhone or iPad intros, except in one respect it should be better, because Apple is leveraging everything it has done in the past 16 years.
Here are examples of this effect in video displays:
https://www.youtube.com/watch?v=TX9qSaGXFyg
https://www.youtube.com/watch?v=Btf4mN37OsU
Those support dozens of individual view directions (at 10:08 in the second video, it says 48 views). Here's one using a printed image:
It's a standard image that has 10 frames encoded in it, and the lens displays a different part for each viewpoint. The more viewpoints, the lower the resolution per view. For a basic image of eyes/eyebrows, I'd guess Apple Vision Pro has no more than 10 viewing angles with maybe a 4K front display. This would be roughly a 400x300 image per view, which may look like more than that due to some interpolation with adjacent views. It looks quite blurred versus the rest of the face, but won't be much different from someone wearing sunglasses.
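The trade-off described above can be sketched with some simple arithmetic. All numbers here are hypothetical: Apple has not published the front panel's resolution or the number of views, and a standard lenticular sheet divides only the horizontal pixel count across views while leaving vertical resolution unchanged.

```python
# Sketch of the lenticular resolution trade-off: a panel's
# horizontal pixels are divided among the discrete views,
# so more views means fewer pixels per view.
# All panel specs below are hypothetical, not Apple's.

def per_view_resolution(panel_w, panel_h, num_views):
    """Return the (width, height) each view gets when a
    lenticular lens splits the panel's columns across
    num_views horizontal viewing directions."""
    return panel_w // num_views, panel_h

# Assume a 4K-class front panel (3840x2160) and 10 views:
w, h = per_view_resolution(3840, 2160, 10)
print(w, h)  # 384 2160

# With 48 views, as in the second video linked above:
w, h = per_view_resolution(3840, 2160, 48)
print(w, h)  # 80 2160
```

Under those assumptions, each view is quite narrow horizontally, which is consistent with the blurred look the comment describes; interpolation between adjacent views can mask some of the loss.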
Honestly, though Cook makes a big stink about it, that's the lamest feature of the whole device. It drives up cost, power consumption, and unit size dramatically with the outer screen, all for a facsimile of your eyes displayed to try to make people believe you aren't actually distracted by what you're really doing. That makes it a worthless feature. Your wife/girlfriend is still going to force you to remove the thing when you should be spending quality time with her and/or the kids. Your boss isn't going to believe you're not actually browsing AppleInsider while pretending to work. And the color blobs displayed while you have everyone tuned out are annoying. It would be better off just black to save the juice.