Apple interested in Sony's 3D camera sensor technology, report says

Posted in iPhone, edited December 2018
Apple has reportedly expressed interest in 3D camera sensors produced by existing supplier Sony, hardware that relies on more accurate time-of-flight technology instead of the structured-light solution currently deployed in the TrueDepth camera system on iPhone and iPad.

Dual-sensor rear-facing camera on iPhone X.


Satoshi Yoshihara, general manager in charge of sensors at Sony, said the company plans to start production of its 3D chips next summer to meet demand from "several" smartphone makers.

Yoshihara failed to provide manufacturing figures and did not name potential customers, though he did note Sony's 3D chip business is operating profitably, reports Bloomberg Quint. The publication claims Apple is interested in adopting the technology, but does not make clear whether the information came from Sony or an unnamed source.

Apple's existing 3D hardware, TrueDepth, uses a single vertical-cavity surface-emitting laser (VCSEL) to project structured light -- a grid of dots -- onto a subject. By measuring deviations and distortions in the grid, the system is able to generate a 3D map that is, in this case, used for biometric authentication.
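For illustration, structured-light depth recovery reduces to triangulation: the apparent shift of a projected dot between its expected and observed positions maps to distance. Below is a minimal sketch of that geometry; the parameter values are hypothetical and are not Apple's actual TrueDepth calibration.

    # Hypothetical structured-light triangulation (illustrative values only)
    def depth_from_dot_shift(baseline_m, focal_px, shift_px):
        """Pinhole triangulation: depth z = f * b / d for a dot shifted d pixels."""
        if shift_px <= 0:
            raise ValueError("dot shift must be positive")
        return focal_px * baseline_m / shift_px

    # Example: 2.5 cm emitter-to-sensor baseline, 600 px focal length, and a
    # dot displaced 30 px from its reference position -> 0.5 m to the subject.
    print(depth_from_dot_shift(0.025, 600, 30))  # 0.5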

Sony's technology, on the other hand, is a time of flight (TOF) system that creates depth maps by measuring the time it takes pulses of light to travel to and from a target surface. According to Yoshihara, TOF technology is more accurate than structured light and can operate at longer distances.
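The arithmetic behind TOF is straightforward: depth is the speed of light multiplied by half the measured round-trip time. A minimal sketch (illustrative only; Sony's actual pipeline is not public):

    # Direct time-of-flight depth from a measured round-trip pulse time
    C = 299_792_458.0  # speed of light, m/s

    def tof_depth_m(round_trip_s):
        """Depth = c * t / 2, since the pulse travels out and back."""
        return C * round_trip_s / 2.0

    # Example: a pulse returning after ~6.67 nanoseconds puts the target ~1 m away.
    print(tof_depth_m(6.67e-9))  # ~1.0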

Rumors of Apple's interest in TOF solutions are well documented, though today's report is the first to attach Sony to potential production plans. In June 2017, reports indicated Apple was evaluating TOF for a rear-facing camera that would assist in augmented reality applications and faster, more accurate autofocus operation.

Whether TOF will make it into a shipping Apple product is unclear. In September, noted Apple analyst Ming-Chi Kuo said the company is unlikely to integrate the technology into a next-generation iPhone model. Instead, Apple is expected to continue to rely on the dual-camera system first deployed with iPhone 7 Plus.

"We believe that iPhone's dual-camera can simulate and offer enough distance/depth information necessary for photo-taking; it is therefore unnecessary for the 2H19 new iPhone models to be equipped with a rear-side ToF," Kuo says.

Along with Sony, which supplies camera modules for iPhone and iPad, Infineon, Panasonic and STMicroelectronics are also developing 3D chips based on TOF technology.

Comments

  • Reply 1 of 13
    chasm Posts: 3,294 member
    If Apple is genuinely interested in this technology (and there’s no reason why they wouldn’t be, but we are only getting one side of the story at present), it’s unlikely to show up in the 2019 iPhones because the design of those was probably decided and locked down a while back (Apple works way ahead, but they do have some flexibility for big breakthroughs).

    It would be a candidate for the 2020 iPhones, again if the story is true, because the existing VCSEL technology is more than good enough for the purposes 95 percent of users put it to, i.e. close-range portraits and selfies.
  • Reply 2 of 13
    How about going a step further and adapting this for self-driving car environment sensing?
  • Reply 3 of 13
    MplsP Posts: 3,925 member
    Something doesn’t seem right here - the speed of light is roughly 3e8 m/s, or 3e10 cm/sec. This means that to have a resolution of 0.5 cm, a sensor would need a temporal resolution of at least 1 cm / 3e10 cm/s = 3e-11 seconds (the round trip doubles the distance), or on the order of 30 GHz. In practice it would be higher. That means not only would the sensor need to register that fast, it would also need to be able to sample that fast.
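    Working that arithmetic through as a quick sanity check (same approximate value of c as above):

        # Timing resolution required for a given depth resolution in direct ToF
        C = 3e8  # speed of light, m/s (approximate)

        def timing_resolution_s(depth_res_m):
            """The round trip covers twice the depth step, so dt = 2 * dz / c."""
            return 2.0 * depth_res_m / C

        dt = timing_resolution_s(0.005)  # 0.5 cm depth resolution
        print(dt)        # ~3.3e-11 s
        print(1.0 / dt)  # ~3e10, an equivalent timing rate of ~30 GHz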
  • Reply 4 of 13
    I've always maintained that the "must-have" apps are largely tied to the input capabilities (typically sensors) available to developers -- GPS, camera (and features), Bluetooth, NFC, etc.

    We've been on a "must-have app" plateau for a few years.  When 3D sensing is made available to developers, the next few "must-have" apps will arrive.
    edited December 2018
  • Reply 5 of 13
    MacPro Posts: 19,727 member
    Apple could certainly not go wrong partnering with Sony.  Not directly related to 3D, but about Sony: having just returned from Disney with my new Sony Alpha A7 III and taking hundreds of handheld night shots, I am certainly a convert to Sony technology.  I have never seen such amazing low-light photographs, utterly staggering and no post-processing required. This after decades of using high-end Canon bodies and L lenses.
  • Reply 6 of 13
    We keep seeing cool new technologies that will amaze us in the future. :-)
  • Reply 7 of 13
    radarthekat Posts: 3,842 moderator
    MplsP said:
    Something doesn’t seem right here - the speed of light is roughly 3e8 m/s, or 3e10 cm/sec. This means that to have a resolution of 0.5 cm, a sensor would need a temporal resolution of at least 1 cm / 3e10 cm/s = 3e-11 seconds (the round trip doubles the distance), or on the order of 30 GHz. In practice it would be higher. That means not only would the sensor need to register that fast, it would also need to be able to sample that fast.
    Yes, 30 GHz would provide a resolution of about 5mm.  Not bad if you’re measuring distances of 100 meters, as you would with a sensor being used in an autonomous drive system.  The 3 GHz of current CPUs would yield only about 50mm resolution, which would not work at all for close-in objects, like portrait mode pictures.  However, these sensors aren’t controlled by the clock speed of the CPU.  They are more akin to an analog system, with a timing signal built into the unit that synchronizes the emitter pulse with the sensor array.  Light returning to the array generates a voltage in each pixel, basically operating at the speed of physics on the sensor side.  The timing period itself determines the distance at which the sensor operates.  For example, one type of system modulates the pulse with an RF carrier.  The sensor array then measures the phase shift to determine time of flight.  Such systems are limited to a range of up to about 60m, but can be tuned for shorter ranges (limiting their accuracy to the short range but maximizing resolution at that shorter range).  Such a modulated system is what’s used by Microsoft Kinect.  Other methods yield sub-millimeter accuracy.

    Here’s more on the various technologies employed and their respective resolutions and applications.

    https://en.m.wikipedia.org/wiki/Time-of-flight_camera
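    A rough sketch of the phase-shift arithmetic described above (the modulation frequency is illustrative, not any particular product's value):

        import math

        C = 3e8  # speed of light, m/s (approximate)

        def phase_tof_depth_m(phase_rad, f_mod_hz):
            """Indirect ToF: depth d = c * phi / (4 * pi * f_mod)."""
            return C * phase_rad / (4.0 * math.pi * f_mod_hz)

        def unambiguous_range_m(f_mod_hz):
            """Phase wraps at 2*pi, so the max unambiguous depth is c / (2 * f_mod)."""
            return C / (2.0 * f_mod_hz)

        f = 20e6  # hypothetical 20 MHz carrier
        print(unambiguous_range_m(f))               # 7.5 m unambiguous range
        print(phase_tof_depth_m(math.pi / 2.0, f))  # ~1.875 m at a quarter-cycle shift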


  • Reply 8 of 13
    LordeHawk
    MacPro said:
    Apple could certainly not go wrong partnering with Sony.  Not directly related to 3D, but about Sony: having just returned from Disney with my new Sony Alpha A7 III and taking hundreds of handheld night shots, I am certainly a convert to Sony technology.  I have never seen such amazing low-light photographs, utterly staggering and no post-processing required. This after decades of using high-end Canon bodies and L lenses.
    You said could, as if they’re not already buying camera components from Sony.  I do think Apple should do better with low-light photos; AI can do more on this front.

    “Along with Sony, which supplies camera modules for iPhone and iPad”

    We have some time until iPhones catch up to full-frame mirrorless cameras.
  • Reply 9 of 13
    claire1 Posts: 510 unconfirmed, member
    MacPro said:
    Apple could certainly not go wrong partnering with Sony.  Not directly related to 3D, but about Sony: having just returned from Disney with my new Sony Alpha A7 III and taking hundreds of handheld night shots, I am certainly a convert to Sony technology.  I have never seen such amazing low-light photographs, utterly staggering and no post-processing required. This after decades of using high-end Canon bodies and L lenses.
    Last I checked, iPhone cameras were developed with Sony.
  • Reply 10 of 13
    MacProMacPro Posts: 19,727member
    LordeHawk said:
    MacPro said:
    Apple could certainly not go wrong partnering with Sony.  Not directly related to 3D, but about Sony: having just returned from Disney with my new Sony Alpha A7 III and taking hundreds of handheld night shots, I am certainly a convert to Sony technology.  I have never seen such amazing low-light photographs, utterly staggering and no post-processing required. This after decades of using high-end Canon bodies and L lenses.
    You said could, as if they’re not already buying camera components from Sony.  I do think Apple should do better with low-light photos; AI can do more on this front.

    “Along with Sony, which supplies camera modules for iPhone and iPad”

    We have some time until iPhones catch up to full-frame mirrorless cameras.
    I don't see Sony's full frame cameras ever being passed by smartphones; it's down to the size of the sensor, even with the same technology.  Being mirrorless has nothing to do with the sensor - Sony could make a DSLR with the same sensor and achieve the same results, but there'd obviously be no point.  You see 'partnering' and 'buying' as being the same thing?  I was implying a deeper relationship, but that will never happen, thus tongue in cheek.
  • Reply 11 of 13
    MacProMacPro Posts: 19,727member

    claire1 said:
    MacPro said:
    Apple could certainly not go wrong partnering with Sony.  Not directly related to 3D, but about Sony: having just returned from Disney with my new Sony Alpha A7 III and taking hundreds of handheld night shots, I am certainly a convert to Sony technology.  I have never seen such amazing low-light photographs, utterly staggering and no post-processing required. This after decades of using high-end Canon bodies and L lenses.
    Last I checked, iPhone cameras were developed with Sony.
    Where did I say they weren't?  My comment was more about the lack of sensor development by Canon really.
    edited December 2018
  • Reply 12 of 13
    MplsP Posts: 3,925 member
    MplsP said:
    Something doesn’t seem right here - the speed of light is roughly 3e8 m/s, or 3e10 cm/sec. This means that to have a resolution of 0.5 cm, a sensor would need a temporal resolution of at least 1 cm / 3e10 cm/s = 3e-11 seconds (the round trip doubles the distance), or on the order of 30 GHz. In practice it would be higher. That means not only would the sensor need to register that fast, it would also need to be able to sample that fast.
    Yes, 30 GHz would provide a resolution of about 5mm.  Not bad if you’re measuring distances of 100 meters, as you would with a sensor being used in an autonomous drive system.  The 3 GHz of current CPUs would yield only about 50mm resolution, which would not work at all for close-in objects, like portrait mode pictures.  However, these sensors aren’t controlled by the clock speed of the CPU.  They are more akin to an analog system, with a timing signal built into the unit that synchronizes the emitter pulse with the sensor array.  Light returning to the array generates a voltage in each pixel, basically operating at the speed of physics on the sensor side.  The timing period itself determines the distance at which the sensor operates.  For example, one type of system modulates the pulse with an RF carrier.  The sensor array then measures the phase shift to determine time of flight.  Such systems are limited to a range of up to about 60m, but can be tuned for shorter ranges (limiting their accuracy to the short range but maximizing resolution at that shorter range).  Such a modulated system is what’s used by Microsoft Kinect.  Other methods yield sub-millimeter accuracy.

    Here’s more on the various technologies employed and their respective resolutions and applications.

    https://en.m.wikipedia.org/wiki/Time-of-flight_camera


    Yes, but that's my point. The resolution is on the order of 1 cm. Fine for automotive uses, but completely useless as a replacement for Face ID.
  • Reply 13 of 13
    radarthekat Posts: 3,842 moderator
    MplsP said:
    MplsP said:
    Something doesn’t seem right here - the speed of light is roughly 3e8 m/s, or 3e10 cm/sec. This means that to have a resolution of 0.5 cm, a sensor would need a temporal resolution of at least 1 cm / 3e10 cm/s = 3e-11 seconds (the round trip doubles the distance), or on the order of 30 GHz. In practice it would be higher. That means not only would the sensor need to register that fast, it would also need to be able to sample that fast.
    Yes, 30 GHz would provide a resolution of about 5mm.  Not bad if you’re measuring distances of 100 meters, as you would with a sensor being used in an autonomous drive system.  The 3 GHz of current CPUs would yield only about 50mm resolution, which would not work at all for close-in objects, like portrait mode pictures.  However, these sensors aren’t controlled by the clock speed of the CPU.  They are more akin to an analog system, with a timing signal built into the unit that synchronizes the emitter pulse with the sensor array.  Light returning to the array generates a voltage in each pixel, basically operating at the speed of physics on the sensor side.  The timing period itself determines the distance at which the sensor operates.  For example, one type of system modulates the pulse with an RF carrier.  The sensor array then measures the phase shift to determine time of flight.  Such systems are limited to a range of up to about 60m, but can be tuned for shorter ranges (limiting their accuracy to the short range but maximizing resolution at that shorter range).  Such a modulated system is what’s used by Microsoft Kinect.  Other methods yield sub-millimeter accuracy.

    Here’s more on the various technologies employed and their respective resolutions and applications.

    https://en.m.wikipedia.org/wiki/Time-of-flight_camera


    Yes, but that's my point. The resolution is on the order of 1 cm. Fine for automotive uses, but completely useless as a replacement for Face ID.
    That depends on the system used.  The shorter the range a system has to cover, the finer the resolution it can achieve.  It’s easier to design a system that’s accurate to 1mm, or 0.5mm, when constraining its range to 20” - the approximate range of Face ID on the front-facing array - than it would be to achieve that fine a resolution over a 100 meter range on a rear-facing TOF array.  But you might not need such fine resolution for rear-facing sensor applications.  It might be just fine to place AR objects within a few centimeters when projecting them onto a building across the street, or when range-finding the distance to objects an autonomous vehicle is approaching as it speeds along.  In that latter application, sample rate will matter more than absolute accuracy to a fine degree.
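    To put numbers on that trade-off, here is a small sketch for a phase-based system, where depth uncertainty scales inversely with the modulation frequency; the phase-noise figure is a made-up placeholder:

        import math

        C = 3e8                 # speed of light, m/s (approximate)
        PHASE_NOISE_RAD = 0.01  # hypothetical measurable phase jitter

        def range_and_resolution(f_mod_hz):
            """Unambiguous range c/(2f) and depth uncertainty c*dphi/(4*pi*f)."""
            rng_m = C / (2.0 * f_mod_hz)
            res_m = C * PHASE_NOISE_RAD / (4.0 * math.pi * f_mod_hz)
            return rng_m, res_m

        for f in (10e6, 100e6, 1e9):  # 10 MHz, 100 MHz, 1 GHz carriers
            rng_m, res_m = range_and_resolution(f)
            print(f"{f/1e6:6.0f} MHz: range {rng_m:7.2f} m, resolution {res_m*1e3:6.2f} mm")
        # Raising the carrier shrinks the usable range but sharpens the resolution,
        # which is why a short-range Face ID-style system can be far more precise.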