Future iPhone may pre-process video on the image sensor to cut size & reduce power demand

Posted in General Discussion
Apple is looking at ways to have the image sensor itself handle some of the work of producing high-quality video, so that an iPhone or other device has less processing to do, with the aim of cutting both file size and battery drain.

Future sensors may discard unchanged pixels before passing them on for processing


Currently, the cameras in iPhones, iPads, and other devices have an image sensor that registers all of the light it receives. That video data is then passed to a processor, which creates the image. But that processing on the main device CPU can be intensive and drain the device's battery quickly.

"Generating static images with an event camera," a newly revealed US patent application that proposes only processing the parts of a video image that have changed since the previous frame.

"Traditional cameras use clocked image sensors to acquire visual information from a scene," says the application. "Each frame of data that is recorded is typically post-processed in some manner [and in] many traditional cameras, each frame of data carries information from all pixels."

"Carrying information from all pixels often leads to redundancy, and the redundancy typically increases as the amount of dynamic content in a scene increases," it continues. "As image sensors utilize higher spatial and/or temporal resolution, the amount of data included in a frame will most likely increase."

In other words, as iPhone cameras gain resolution, and as people use them to record longer videos, the sheer volume of data being processed rises.

"Frames with more data typically require additional hardware with increased capacitance, which tends to increase complexity, cost, and/or power consumption," says the application. "When the post-processing is performed at a battery-powered mobile device, post-processing frames with significantly more data usually drains the battery of the mobile device much faster."

The application then describes methods where "an event camera indicates changes in individual pixels." Instead of capturing everything, the "camera outputs data in response to detecting a change in its field of view."

Typically, when footage is compressed into a video format, data for pixels that have not changed since the previous frame is discarded so that the file size can be made smaller. In Apple's proposal, those pixels would never even reach the processor.
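
To get a rough sense of the potential saving, consider how much data a single frame produces when every pixel is read out, versus when only changed pixels are forwarded as small events. The resolution, bit depth, event layout, and 2% change rate in the sketch below are illustrative assumptions, not figures from the application.

```swift
// Rough, illustrative comparison of full-frame readout versus event readout.
// The resolution, bit depth, event layout, and change rate are assumptions.
struct PixelEvent {
    let x: UInt16      // column of the changed pixel
    let y: UInt16      // row of the changed pixel
    let value: UInt8   // new 8-bit intensity
}

let width = 1920, height = 1080
let bytesPerPixel = 1                                  // assume 8-bit monochrome
let fullFrameBytes = width * height * bytesPerPixel    // every pixel, every frame

let changedFraction = 0.02                             // assume 2% of pixels changed
let changedPixels = Int(Double(width * height) * changedFraction)
let eventBytes = changedPixels * MemoryLayout<PixelEvent>.stride

print("Full-frame readout: \(fullFrameBytes) bytes")   // 2,073,600 bytes
print("Event readout:      \(eventBytes) bytes")       // roughly 250,000 bytes
```

The same arithmetic also shows the trade-off: because each event carries its own coordinates, a busy scene in which most pixels change could end up producing more data than a plain frame would.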

Detail from the patent describing the detection of pixel changes


"[Pixel]event sensors that detect changes in individual pixels [can thereby allow] individual pixels to operate autonomously," it continues. "Pixels that do not register changes in their intensity produce no data. As a result, pixels that correspond to static objects (e.g., objects that are not moving) do not trigger an output by the pixel event sensor, whereas pixels that correspond to variable objects (e.g., objects that are moving) trigger an output by the pixel event sensor."

The patent application is credited to Aleksandr M. Movshovich and Arthur Yasheng Zhang. The former has multiple prior patents in television imaging.

Comments

  • Reply 1 of 9
It’s somewhat similar to how the human retina works... Neat bit of technology copying nature, if they manage it.
  • Reply 2 of 9
hmlongco Posts: 539 member
I wonder where the break-even point is, given that eventually you'd have so many pixels changing that passing pixel AND pixel-location data is more than the amount of data you'd have passed had you just sent the entire image.
  • Reply 3 of 9
    Very cool, but I wonder if this has more to do with AR.
  • Reply 4 of 9
Basically that's what video codecs do... record the changes between frames, so this would sorta be a partial hardware codec.
  • Reply 5 of 9
netrox Posts: 1,429 member
Not really innovative. It's obvious. You just discard data that is not necessary.

This is where we need to stop giving patents for obvious concepts.


  • Reply 6 of 9
killroy Posts: 276 member
    netrox said:
Not really innovative. It's obvious. You just discard data that is not necessary.

This is where we need to stop giving patents for obvious concepts.

Well no. In the CPU or GPU you have a point. But in the image-taking chip, that's new.
  • Reply 7 of 9
    killroy said:
    netrox said:
Not really innovative. It's obvious. You just discard data that is not necessary.

This is where we need to stop giving patents for obvious concepts.

Well no. In the CPU or GPU you have a point. But in the image-taking chip, that's new.
This line of logic leads to accepting that applying a particular algorithm in a new environment is an innovation worthy of patent protection. Personally, I find that hard to swallow and would rather that the original invention get the protection (which it deserves) and that employing it in novel situations instead be treated as a competitive advantage for as long as that lasts. Otherwise you get the situation of "but it's on a computer now!", which does not advance the state of the art.
  • Reply 8 of 9
Xed Posts: 2,608 member
    netrox said:
Not really innovative. It's obvious. You just discard data that is not necessary.

This is where we need to stop giving patents for obvious concepts.
    :sigh:
  • Reply 9 of 9
killroy Posts: 276 member
    killroy said:
    netrox said:
Not really innovative. It's obvious. You just discard data that is not necessary.

This is where we need to stop giving patents for obvious concepts.

Well no. In the CPU or GPU you have a point. But in the image-taking chip, that's new.
This line of logic leads to accepting that applying a particular algorithm in a new environment is an innovation worthy of patent protection. Personally, I find that hard to swallow and would rather that the original invention get the protection (which it deserves) and that employing it in novel situations instead be treated as a competitive advantage for as long as that lasts. Otherwise you get the situation of "but it's on a computer now!", which does not advance the state of the art.

You can run x86 code on an Intel or AMD chip to do the work. You can do the same work on an ARM chip, but you can't run the x86 code on it.
So clean algorithm code on the image-taking chip is safe.
