U.S. Patent Application Publication No. 2012/0120227 for "multi-point touch focus" describes a system that lets the user of a camera-equipped device select two or more areas of focus on a touchscreen; when a picture is taken, the raw sensor data is passed through a dedicated image processor that produces optimal sharpness and exposure for every selected region.
The patent's background notes that as the capabilities of automated image capture progress, so do the "possibilities for capturing images not as desired by the photographer." To solve this dilemma, Apple proposes a solution that combines multitouch input, live-preview imaging, advanced autofocus algorithms and subject tracking.
From the patent abstract:
A camera includes a lens arranged to focus an image on an image sensor and a touch sensitive visual display for freely selecting two or more regions of interest on a live preview image by touch input. An image processor is coupled to the image sensor and the touch sensitive visual display. The image processor displays the live preview image according to the image focused on the image sensor by the lens. The image processor further receives the selection of the regions of interest and controls acquisition of the image from the image sensor based on the characteristics of the image in regions that correspond to at least two of the regions of interest on the live preview image. The image processor may optimize sharpness and/or exposure of the image in at least two of the regions of interest. The image processor may track movement of the selected regions of interest.
As seen above, the patent calls for an image processor that continuously tracks the user-defined "regions of interest," combining live image processing with autofocus technology similar to that found in the iPhone 4S. Unlike the current handset, however, focus would not be limited to center-weighting or face detection; in theory the system would choose the mix of camera settings that yields the highest possible clarity across the multiple selected areas.
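The patent does not detail how the tracking works, but the idea of keeping a region of interest locked on a moving subject can be illustrated with a minimal sketch. The function name and the smoothing approach below are hypothetical, not taken from the patent: each frame, the region's center is nudged toward the subject's newly observed position, with a factor that trades responsiveness against jitter.

```python
def track_region(center, observed, alpha=0.5):
    """Move a region-of-interest center toward the subject position
    observed in the latest preview frame.

    `center` and `observed` are (x, y) pairs in normalized screen
    coordinates; `alpha` in (0, 1] controls how quickly the region
    follows the subject (1.0 snaps immediately, small values smooth
    out detection noise)."""
    cx, cy = center
    ox, oy = observed
    return (cx + alpha * (ox - cx), cy + alpha * (oy - cy))
```

Run once per preview frame, this keeps the selected region attached to the subject it was drawn over, so the focus and exposure logic always operates on current sensor data for that subject.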
To facilitate multi-point focus acquisition, the dedicated imaging processor evaluates the two or more user-defined regions and adjusts the camera's focus drive, changing the distance between the image sensor and the rear element of the lens. In addition, a region's size can be changed on the fly by pinching the touchscreen, which changes how heavily the imaging processor weights that region in the photo's overall focus.
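The patent does not publish the weighting math, but one plausible reading of "pinch to change the weight" can be sketched as follows: weight each region by its on-screen area, then pick the lens position that maximizes the area-weighted sum of per-region sharpness. All names here are hypothetical, and `sharpness_at` stands in for a real contrast measurement on live sensor data.

```python
from dataclasses import dataclass

@dataclass
class RegionOfInterest:
    x: float      # normalized center coordinates (0..1)
    y: float
    size: float   # normalized edge length; grows when the user pinches out

def focus_weights(regions):
    """Weight each region by its area, so pinching a region larger
    increases its influence on the overall focus decision."""
    areas = [r.size ** 2 for r in regions]
    total = sum(areas)
    return [a / total for a in areas]

def choose_focus_position(regions, sharpness_at):
    """Scan candidate lens positions and keep the one maximizing the
    area-weighted sum of per-region sharpness scores.
    `sharpness_at(position, region)` is a stand-in for a contrast
    measurement performed on the image sensor data."""
    weights = focus_weights(regions)
    best_pos, best_score = None, float("-inf")
    for pos in [i / 20 for i in range(21)]:  # 21 candidate lens positions
        score = sum(w * sharpness_at(pos, r) for w, r in zip(weights, regions))
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos
```

With two equally sized regions whose subjects peak at different lens positions, this lands on a compromise position between them; doubling one region's edge length quadruples its area, pulling the chosen position toward that subject.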
Also noted was the ability to perform exposure adjustments using the "regions of interest" multi-point method. Currently, the third-party Camera+ app allows users to spot-meter a live image, but Apple's solution would take that functionality a step further by introducing multiple metering points.
Illustration of pinching to change "region of interest" size in Apple's multi-point touch focusing patent.
Because the system requires a significant amount of computing power, the patent suggests that a dedicated chip process the raw sensor data and control the camera's operation. Past iDevices have all used the main SoC to process image data, and a dedicated chip would likely yield quicker operation and higher-quality pictures.
Apple took a giant step forward in mobile camera design with the 8MP unit found in the iPhone 4S. Reworked optics, a backside-illuminated sensor and an inline image signal processor are used to create images many consumers believe to be the best in the smartphone market.
It is unclear whether Apple will implement the newly patented technology in the next-gen iPhone rumored to debut sometime this fall; the imaging sensor, dedicated processing unit and new optics the system requires may simply be too large to fit into the expectedly thin chassis.