Why Apple Vision Pro has a chance at being the future of work


Comments

  • Reply 21 of 30
    mattinoz Posts: 2,340 member
    avon b7 said:
    chutzpah said:
    avon b7 said:
    tmay said:
    avon b7 said:
    danox said:
    avon b7 said:
    zomp said:
    Apple is always holding secrets so that we and, more importantly, the competition can only guess at Apple's roadmap for Vision Pro. Apple is a hardware company and it's up to the software creators to imagine new ways to use the device. So rather than Apple promoting something so that folks can say "How are you going to do that?", they allow us to imagine the possibilities so that we can create new things. Hence "Think Different". Apple will always continue to modify the hardware as needs arise - at the moment Vision Pro is fantastic, but they have no idea what we will dream up and what more needs to be added via hardware and software updates. That's what makes Apple so amazing! They leave it up to us to create the future of their devices.
    The roadmap is the same for everyone. 

    Everyone knows where everyone wants to go. It's how they get there and at what cost that is more important. 

    Moving the screen towards you for a more immersive experience is the most basic goal. Interacting with a 3D-like environment is part of that. Then the audio/visual experience itself (resolution, quality, etc.). The computer experience. Interaction with the external environment. 

    Size, battery life, 'speed' etc. 

    It's not exactly a new field. 


    But it is a new Apple ecosystem, and in time there will be daily announcements from Apple developers highlighting the fact that they have ported their software over to Apple's visionOS, and that drumbeat tsunami will get louder and louder as we get closer to Apple's Vision Pro release date. The R1 SoC is also new; what are its full specs and capabilities?

    The competition, if there is any, won't have anything like that over the next six months: a long, slow, steady drumbeat from a rising army of visionOS developers.

    https://developer.apple.com/news/?id=h3qjwosp

    However, none of that changes the facts.

    Everyone is moving on the same roadmap and with the same end goals.

    That industry roadmap was there years before Apple even announced the Vision Pro. Now Apple is officially on it. 

    None of us may be big Zuckerberg fans but what he said the other day wasn't really off the mark. 

    If you re-watch the presentation, how much was truly 'new'? Or not already planned? 

    If anything, the true upshot was that it tempered people's expectations, which is no bad thing. Perhaps some people were simply getting ahead of themselves. 

    The R1 is a dedicated chip for specific tasks. Those kinds of specific chips are all over the place.

    Here’s one for RF processing which could very well end up in an XR device at some point:

    https://www.gizchina.com/2023/03/06/honor-c1-rf-chipset-launched-sets-a-new-benchmark/

    If there are no general purpose chipsets up to a particular task then companies tend to bake their own. They might be for in-house use like those from Honor, Huawei, Google, Apple etc, or made available on the open market like those from Qualcomm, Broadcom, Mediatek, Sony etc. 

    You mention the full specs of the R1 but what are the full specs of the Vision Pro? 

    With most tech announcements, what is not said (or done) is just as important as what is said and done. 

    We know there is no cellular option, but it was surprising, to me at least, that they kept most of the presentation in the AR realm and not so much the VR realm. No one knows how many of the announced features actually work, because those who had hands-on access were not allowed to use them. 

    Now, for a non-production unit, that is the order of the day, especially as those units would be running early software implementations. But the reality is that no one got to try out some of the tentpole features. 

    Ecosystems are just ecosystems.

    They serve a purpose and there is a lot to say on that subject, but it's not really relevant here. 

    The Vision Pro will just slip into the Apple ecosystem. But then again, why wouldn't it?

    How well? No one knows yet. 

    It is true that developers are an important component of many ecosystems and of course the Vision Pro was announced now precisely to be able to widen and hone the developer support. Marketing was another factor. That was absolutely necessary. 

    My take is that the package as a whole looks great. The finesse. It all comes at a price but that has to be understood. Let those with the disposable income and the will to be early adopters iron out the wrinkles. 

    The roadmap, though. There's not much new there. 

    Oh boy; Honor makes a new RF chipset that is slightly faster than Apple's iPhone RF chipset, and no one cares except avon b7, who is beyond excited.

    Meanwhile, Apple, again, "stuns" competition with custom R1 sensor processor, on top of the M2 processor.

    Competitors: "Everything is nominal", "Look at our user base and marketshare", "Look at our affordable prices", "Look at our roadmap", "Ecosystems are just ecosystems"; state Apple is doing nothing that they haven't already explored, all while they quietly watch any future profits shrink to nothing.
    You miss the point as only you can. 

    Why did Honor even develop the chip if slight gains were the order of the day? 

    You have deliberately ignored the point which is actually worse than just missing it!

    The point was that if you can't get the results you want with off-the-shelf solutions (even slightly modified ones) you bake your own. 

    It's what Honor did. 
    It's what Apple did. 
    It's what Huawei did. 

    It's what lots of companies do!

    Absolutely nothing out of the ordinary.

    Right? 
    I don't believe any other major VR headsets* have used eye tracking as a primary mode of interaction. They've all used hand-operated controllers as the primary. Likewise focusing on being a consumer-level productivity device**. And the approach to passthrough is so far ahead it makes others look like they haven't been trying at all.

    It is very much out of the ordinary.



    * Google Cardboard being a slight exception, though it was so much less ambitious that it barely counts.
    ** HoloLens was always targeted for specialist productivity, not for mass appeal.
    The reason active eye tracking is not commonplace in VR headsets is simply that controllers are cheap and extremely functional.

    Eye trackers are very 'old' technology and have been up for inclusion on headsets for a while. They are simply more expensive to implement actively (as opposed to passively, which is available on some consumer VR headsets) because, once you remove the controllers, you need to add another way (gestures, for example) to resolve the same problems. 

    That isn't technology related per se. It's more of a cost consideration. I've been working in UX for a few years now and eye trackers are essential.

    As mentioned above, gesture recognition is another area brought to the fore by the absence of controllers, but again, gesture interaction brings cost considerations. Gesture recognition is also a well-worn technology. 

    If you reduce the importance of cost in your consumer-focused product, the door opens to other options. That is what Apple has chosen to do with the initial release, although I'm sure an 'SE' version is being planned, IMO sans the bells and whistles. 

    Zuckerberg and many others in the industry have spoken about these aspects for some time. There have been lots of concept devices, prototypes and whatnot. The problem is price and mass consumer appeal for a device that will not get anywhere near the usage time of a phone. 

    We have known where everyone wants to go for a very long while and it's the same place Apple wants to go. 

    I believe Xiaomi announced something at MWC and even used the word 'spatial' in computing terms. Quite logical when you consider that VR is spatial by definition. 
    Controllers are not "extremely functional"; they make me work how they want to work and limit me to what they want me to do. They are limiting in function. Still, they are cheap and easy, and basically just lazy. 


  • Reply 22 of 30
    avon b7 Posts: 7,734 member
    mattinoz said:
    Controllers are not "extremely functional"; they make me work how they want to work and limit me to what they want me to do. They are limiting in function. Still, they are cheap and easy, and basically just lazy. 


    Controllers do exactly the same thing as gestures, and from exactly the same place: placement and action. 

    Your comment doesn't make any sense. 

    Gestures make you work how they want to work and limit you in the same way as controllers do. 


    The pain points of controllers are basically the batteries and sometimes the breakdown in communication with the host device. 

    The pain points of gestures are that you need line of sight between your hands and the sensors doing the gesture interpretation, plus the accuracy of the interpretation itself. 

    I imagine (I haven't really thought about it) that to avoid false positives with accidental hand fidgeting, a mechanism to 'wake' the interpretation system might be needed. 
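    For illustration only, here's a minimal sketch of what such a 'wake' mechanism could look like. The PinchEvent type and the 0.15-second threshold are assumptions made up for the example, not any real visionOS API:

    ```swift
    import Foundation

    // Hypothetical debounce for the 'wake' idea above: ignore stray hand
    // fidgeting by only accepting a pinch that has been held for a moment.
    struct PinchEvent {
        let isPinching: Bool
        let timestamp: TimeInterval
    }

    final class PinchDebouncer {
        private var pinchStart: TimeInterval?
        private let holdThreshold: TimeInterval = 0.15  // illustrative value

        /// Returns true once a pinch has been held past the threshold.
        func accept(_ event: PinchEvent) -> Bool {
            guard event.isPinching else {
                pinchStart = nil              // hand relaxed; reset the timer
                return false
            }
            if let start = pinchStart {
                return event.timestamp - start >= holdThreshold
            }
            pinchStart = event.timestamp      // pinch just began; start timing
            return false
        }
    }
    ```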



    edited June 2023
  • Reply 23 of 30
    mattinoz Posts: 2,340 member
    avon b7 said:
    Controllers do exactly the same thing as gestures, and from exactly the same place: placement and action. 

    Your comment doesn't make any sense. 

    Gestures make you work how they want to work and limit you in the same way as controllers do. 

    The pain points of controllers are basically the batteries and sometimes the breakdown in communication with the host device. 

    The pain points of gestures are that you need line of sight between your hands and the sensors doing the gesture interpretation, plus the accuracy of the interpretation itself. 

    I imagine (I haven't really thought about it) that to avoid false positives with accidental hand fidgeting, a mechanism to 'wake' the interpretation system might be needed. 



    Controllers may have their uses, but if you need to do anything else, the device is limited. 
    Like the iPhone, or more so the iPad: the Pencil is useful, but if it were required in order to use the iPhone or the iPad, both devices would have been duds. 

    Same here: if the gestures are well thought out, you will learn by discovery for the most part. And developers aren't bound by the buttons built into the device; they can extend gestures at will, even add complementary controllers if that would suit. 

    Imagine X-Plane with cardboard cutouts of the cockpit controls beyond the basic ones like the throttle and yoke: now you're leveraging muscle memory without needing to build $1,000 worth of replicas. There are systems that work now with controllers, but it just isn't the same.

    Rinse and repeat across hundreds of uses and the price difference swings to the more capable device.
  • Reply 24 of 30
    avon b7 Posts: 7,734 member
    mattinoz said:
    Controllers may have their uses, but if you need to do anything else, the device is limited. 
    Like the iPhone, or more so the iPad: the Pencil is useful, but if it were required in order to use the iPhone or the iPad, both devices would have been duds. 

    Same here: if the gestures are well thought out, you will learn by discovery for the most part. And developers aren't bound by the buttons built into the device; they can extend gestures at will, even add complementary controllers if that would suit. 

    Imagine X-Plane with cardboard cutouts of the cockpit controls beyond the basic ones like the throttle and yoke: now you're leveraging muscle memory without needing to build $1,000 worth of replicas. There are systems that work now with controllers, but it just isn't the same.

    Rinse and repeat across hundreds of uses and the price difference swings to the more capable device.
    I think you're mixing things up a bit here and getting ahead of yourself. 

    The gestures that Apple demoed were limited and replicated what the current controllers on the market do today. The eye tracker handles placement. 

    That means that, in terms of functionality, the different proposals are identical, and each has its pain points. 

    So, my reading is that the eye tracker places the focus and then you use gestures to click, drag, etc.

    Is there any gesture on the announced product that cannot be achieved using interface elements and a controller? 

    Swipe, pinch, etc.? 
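    As a rough sketch of how that model surfaces to an app: on visionOS, a gaze-targeted pinch is delivered to SwiftUI as an ordinary tap, and pinch-and-move as an ordinary drag, so the same handlers a touch app would use still apply. The view below is illustrative, not Apple's sample code:

    ```swift
    import SwiftUI

    // Gaze places the focus (highlighted via hoverEffect);
    // a pinch arrives as a tap, and pinch-and-move as a drag.
    struct MovableDot: View {
        @State private var offset: CGSize = .zero

        var body: some View {
            Circle()
                .frame(width: 80, height: 80)
                .hoverEffect()      // highlights whatever the eyes are targeting
                .onTapGesture {     // fired by a gaze-targeted pinch
                    print("selected")
                }
                .gesture(
                    DragGesture().onChanged { value in
                        offset = value.translation  // pinch, hold, and move
                    }
                )
                .offset(offset)
        }
    }
    ```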

    If I am reading you correctly, you are suggesting app developers will be able to invent their own app gestures and have the system interpret them. 

    That would mean users learning different gestures for different apps, not a system-wide gesture collection. 

    AFAIK that isn't on the table with what Apple has announced.
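    One caveat: SwiftUI does already let developers compose app-specific gestures out of the system primitives, which may be the middle ground here. A hedged sketch, purely illustrative, of a hold-then-drag gesture built that way:

    ```swift
    import SwiftUI

    // An app-defined composite gesture: hold, then drag. It is built entirely
    // from system primitives, so the system still does the raw interpretation.
    struct HoldThenDragView: View {
        @GestureState private var dragOffset: CGSize = .zero

        var body: some View {
            Rectangle()
                .fill(.blue.opacity(0.3))
                .frame(width: 200, height: 200)
                .offset(dragOffset)
                .gesture(
                    LongPressGesture(minimumDuration: 0.5)
                        .sequenced(before: DragGesture())
                        .updating($dragOffset) { value, state, _ in
                            // Track movement only once the hold has succeeded.
                            if case .second(true, let drag?) = value {
                                state = drag.translation
                            }
                        }
                )
        }
    }
    ```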

    Apologies in advance if I have misread what you are saying. 
  • Reply 25 of 30
    tmay Posts: 6,361 member
    avon b7 said:
    Controllers do exactly the same thing as gestures, and from exactly the same place: placement and action. 

    Your comment doesn't make any sense. 

    Gestures make you work how they want to work and limit you in the same way as controllers do. 

    The pain points of controllers are basically the batteries and sometimes the breakdown in communication with the host device. 

    The pain points of gestures are that you need line of sight between your hands and the sensors doing the gesture interpretation, plus the accuracy of the interpretation itself. 

    I imagine (I haven't really thought about it) that to avoid false positives with accidental hand fidgeting, a mechanism to 'wake' the interpretation system might be needed. 



    Controllers, once in hand, are physically modal: if you want to use a keyboard or a pen, for example, you have to set the controllers down. Then what happens to your UI?

    Gestures and eye tracking are not modal, and I'd guess that 99% of users will have all of their fingers and both hands always available. That's a pretty consistent basis for a UI, and if you want to use a keyboard and mouse, a pen, or a game controller, those are not a difficult transition; the Vision Pro is still eye tracking and capturing gestures.

    As far as your FUD about gesture interpretation and accuracy goes: sure, that is something to consider as a future concern, but today even the simple gestures on the Vision Pro are getting rave reviews, something controllers would not.
  • Reply 26 of 30
    mattinoz Posts: 2,340 member
    avon b7 said:
    I think you're mixing things up a bit here and getting ahead of yourself. 

    The gestures Apple demoed were limited and replicate what current controllers on the market already do. The eye tracker handles placement. 

    That means that, in terms of functionality, the different proposals are identical, and each has its pain points. 

    So, my reading is that the eye tracker places the focus and then you use gestures to click, drag, etc. 

    Is there any gesture on the announced product that cannot be achieved using interface elements and a controller? 

    Swipe, pinch, etc.? 

    If I am reading you correctly, you are suggesting app developers will be able to invent their own app gestures and have the system interpret them. 

    That would mean users learning different gestures for different apps, not a system-wide gesture collection. 

    AFAIK that isn't on the table with what Apple has announced.

    Apologies in advance if I have misread what you are saying. 
    The advice Apple has given developers is: don't go crazy, keep it real, and don't use hands-up gestures unless they build on real-world muscle memory.

    The down-facing cameras are there to avoid fatigue. 

    They showed a pen gesture being used for writing and markup in Notes, Freeform and PDFs. 

    I haven't seen any others. 
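    On the interaction model being discussed - eyes for placement, a pinch for the action - a minimal SwiftUI sketch may help. This is my own hypothetical illustration (the view and names are invented, not from Apple's demos): on visionOS the system maps "look at a control, then pinch" onto the standard tap interaction, so ordinary SwiftUI controls pick up gaze-plus-pinch without any custom gesture code.

        import SwiftUI

        struct GazePinchDemo: View {
            @State private var count = 0

            var body: some View {
                VStack(spacing: 20) {
                    Text("Pinched \(count) times")
                    // Eye tracking supplies the placement (which control has focus);
                    // the pinch supplies the action (the equivalent of a click).
                    Button("Pinch me") {
                        count += 1
                    }
                }
                .padding()
            }
        }

    One consequence is that a small system-wide gesture set goes a long way: the same look-and-pinch pair drives every standard control, and anything beyond that is left to individual apps.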


  • Reply 27 of 30
    avon b7 Posts: 7,734member
    tmay said:
    Controllers, once in hand, are physically modal: if you want to use a keyboard or a pen, for example, you have to set the controllers down. Then what happens to your UI?

    Gestures and eye tracking are not modal, and I'd guess that 99% of users will have all of their fingers and both hands always available. That's a pretty consistent basis for a UI, and if you want to use a keyboard and mouse, a pen, or a game controller, those are not a difficult transition, because the Vision Pro is still eye tracking and capturing gestures.

    As for your FUD about gesture interpretation and accuracy, sure, that is something to consider as a future concern, but today even the simple gestures for the Vision Pro are getting rave reviews, something that controllers would not.
    My instinct is not to see 'typing' as a 'gesture', but I think in Apple's use case it's fair to consider it that way. 

    In terms of functionality, though, a controller can be used - and is used.

    The more you need to type, the more you need better options, and that inevitably leads to non-virtual devices like physical keyboards and pencils. For both, some kind of 'resistance' (and often feedback) is preferred. 

    That's why, even with iPads, the virtual keyboard is not enough and people opt for physical keyboards. 

    Is there anything in the Vision Pro that suggests that aspect will change? 

    I don't think so. If you spend much of your time typing in a virtual space, a physical keyboard might be the preferred option, and in both scenarios such devices can be used and integrated into the scene. 

    As for 'FUD', that's not something I entertain.

    That accusation is just nonsense on your part. And those 'simple' gestures on the Vision Pro are precisely the ones that controllers handle every day. Controllers don't get rave reviews because they simply do the job: cheaply and functionally. 

    However, you admit that gesture interpretation and accuracy are a future concern. 

    Of course they are, because no one will know until devices begin to arrive, yet you are willing to automatically discard that observation even while admitting it as a concern. 

    A controller is your method of interaction with your virtual world. 

    A hand gesture is your method of interaction in your virtual world. 

    Both are used from the hand and for the same tasks. 

    Neither breaks new ground, but one is cheaper to implement than the other. 
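    To make the accuracy concern concrete, here is a hedged sketch - entirely hypothetical, not anything Apple or Meta has described - of one way a headset runtime could suppress false-positive pinches from ordinary hand fidgeting: require the hand-tracking model's pinch confidence to stay above a threshold for a short dwell time before treating it as a click.

        import Foundation

        struct GestureFrame {
            let timestamp: TimeInterval   // seconds since start of session
            let pinchConfidence: Double   // 0.0 ... 1.0 from a hand-tracking model
        }

        final class PinchFilter {
            private let threshold: Double
            private let dwell: TimeInterval
            private var candidateStart: TimeInterval?

            init(threshold: Double = 0.9, dwell: TimeInterval = 0.15) {
                self.threshold = threshold
                self.dwell = dwell
            }

            /// Returns true once a pinch has been held confidently for `dwell` seconds.
            func process(_ frame: GestureFrame) -> Bool {
                guard frame.pinchConfidence >= threshold else {
                    candidateStart = nil   // confidence dropped: reset the candidate
                    return false
                }
                let start = candidateStart ?? frame.timestamp
                candidateStart = start
                return frame.timestamp - start >= dwell
            }
        }

    Tuning the threshold and the dwell time trades responsiveness against accidental activations - exactly the kind of interpretation problem that a physical controller sidesteps with a real button.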







    edited June 2023
  • Reply 28 of 30
    tmay Posts: 6,361member
    You are invested in controllers, as are most, if not all, of the competitors. They are cheap, their modality is a hindrance, and they aren't as effective as Apple's UI - and that comes from the preponderance of Vision Pro reviewers.

    Apple will press its visionOS UI and its integration with a broad ecosystem as an advantage over controllers, and that is an ultimate win for Apple in the premium space. By the time competitors pivot away from controllers, and they will, it will be too late.

    All of this foretells that Apple will end up making most of the profits.
    edited June 2023
  • Reply 29 of 30
    mattinoz Posts: 2,340member
    Case in point, the video below, in which...
    a guy builds a "look and pinch" tech demo on an Oculus headset that has eye tracking, hand tracking and passthrough...

    He then wonders why you need controllers to do anything at all on the device. 
    Also, how will VR become more widely accepted and used while tied to modal controllers? 

