How is this revolutionary compared to HTC phones, which store up to 100 images and automatically choose the best one, or let you choose it yourself?
The only difference is that you choose whether you want this function on or not, and you tell the phone when to start storing the images.
Quote:
Only difference is you choose whether you want this function on or not and you tell the phone when to start storing the images.

I use this on my 10-month-old HTC all the time.
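For anyone curious how the "keep the last N frames" part works under the hood, it's essentially a ring buffer: once full, each new frame evicts the oldest. A minimal sketch (the 100-frame figure is just the HTC number mentioned above; this is my own illustration, not any vendor's actual implementation):

```python
from collections import deque

# Ring buffer: once it holds 100 items, appending a new frame
# silently drops the oldest, so the camera always keeps the
# most recent 100 frames leading up to the shutter press.
buffer = deque(maxlen=100)

for frame_id in range(250):  # simulate 250 captured frames
    buffer.append(frame_id)

# Only the last 100 survive: frames 150..249.
assert len(buffer) == 100
assert buffer[0] == 150 and buffer[-1] == 249
```

When the user "tells the phone to start storing," the app just begins appending to a buffer like this; when the shutter fires, the buffered frames are handed to whatever selection step comes next.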
As a professional photographer who likes to use my iPhone when my DSLR is not with me, I would despise a device that picked the “best” photo for me, unless I had access to the buffer to pick the best one myself. What if, in the sharpest shot of a group of people, they all had their eyes closed? What software could distinguish that, or any other content-based difference? I can think of many such examples. Terrible idea. Stabilization, if possible, would be much better.
Quote:
Originally Posted by Fake_William_Shatner
Obviously because they are ALREADY taking the most advantage of the image sensor they have. What this does is improve what the user captures.

I'm wondering if they might also be doing a bit of "HDR" -- changing focus and exposure to get more value on a combined image. I know what we see so far is just "find the most in focus of the samples." It's a bit like a video capture and you use the buffer to make them all photo quality for a brief moment.
It's an improvement -- so what's to complain about?
I don't believe I was complaining. Just pointing out a preferred method.
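For what it's worth, the "find the most in focus of the samples" step can be approximated with a simple focus measure such as the variance of a Laplacian response: a sharp frame keeps more high-frequency detail, so its Laplacian varies more. This is just a toy sketch of that idea, not whatever Apple's patent actually describes (and it only scores sharpness -- it can't catch closed eyes, which is exactly the photographer's point above):

```python
# Toy focus measure: variance of a 4-neighbour Laplacian response.
# A defocused frame is smoother, so its response varies less.

def sharpness(gray):
    """Score a grayscale frame (2-D list of 0-255 ints) for focus."""
    h, w = len(gray), len(gray[0])
    lap = [
        -4 * gray[y][x]
        + gray[y - 1][x] + gray[y + 1][x]
        + gray[y][x - 1] + gray[y][x + 1]
        for y in range(1, h - 1)
        for x in range(1, w - 1)
    ]
    mean = sum(lap) / len(lap)
    return sum((v - mean) ** 2 for v in lap) / len(lap)

def pick_sharpest(frames):
    """Return the buffered frame with the highest focus score."""
    return max(frames, key=sharpness)

# Demo: a high-contrast checkerboard stands in for an in-focus frame,
# a flat gray frame for a badly blurred one.
checker = [[255 * ((x + y) % 2) for x in range(8)] for y in range(8)]
flat = [[128] * 8 for _ in range(8)]
assert pick_sharpest([flat, checker]) is checker
```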
How come Apple can file this patent? OneTapAhead has been using this technique for a long time, going back to Nov 2011.
Check it out: http://itunes.apple.com/us/app/onetapahead/id480167013?l=en&ls=1&mt=8
I think this patent is going to be disputed, as the BlackBerry Z10 has been released with that feature.
__________________________________________________________________________________________
iphonage