hmm wrote: »
How would you align two sensors on one lens?
hmm wrote: »
I'm also unsure what you would gain from the chroma sensor idea.
They would need to use a different type of sensor for the color information than for luminance. Assuming the sensitivity is close enough across wavelengths and IR can be suitably filtered from the recorded spectrum, they could seemingly use unfiltered CMOS or CCD chips for the luminance side. The typical RGBG Bayer array is in fact made from filtered sensors, since the sensors can't otherwise differentiate between colors. I'm mostly skeptical of the idea that a much lower depth will be almost as good when it comes to perceived chroma.

Right now with HDR it's based on a normalized 0 to 1 range, where superbrights are computed based on where they no longer clip. I've noticed the trend toward this in smartphones too, and you'll start to see more interesting color correction applications pop up as a result if these companies provide any access to uncompressed footage for editing purposes. For some time I've wanted uncompressed output from them, or the ability to customize how the video is processed as it's captured. If they want to keep it fairly automated, it will be some attempt to map everything within range prior to compression. With several exposures that should be possible.

I'll look for info on that HTC later, but unfortunately I don't think Apple would be so forthcoming. As of right now you can't get anything truly raw from any of the iPhones.
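As a rough illustration of that "map everything within range prior to compression" idea, here's a minimal sketch of merging two exposures and rolling the superbrights back into a 0 to 1 range. The one-stop spacing, the blend weights, and the Reinhard-style curve are my own assumptions for the example, not anything HTC or Apple has documented:

```python
import numpy as np

def merge_exposures(short_exp, long_exp, stops=1.0):
    # Both inputs are linear-light float arrays in [0, 1]; `stops` is
    # the exposure difference between the two frames.
    gain = 2.0 ** stops
    # Rescale the short exposure into the long exposure's range, so the
    # scene values the long frame clipped land above 1.0 ("superbrights").
    short_scaled = short_exp * gain
    # Where the long exposure nears clipping, trust the short frame
    # instead; ramp the blend over the top of the long frame's range.
    w = np.clip((long_exp - 0.8) / 0.15, 0.0, 1.0)
    return (1.0 - w) * long_exp + w * short_scaled

def tone_map(hdr):
    # Reinhard-style global curve: compresses [0, inf) into [0, 1) so
    # the out-of-range values roll off instead of hard clipping.
    return hdr / (1.0 + hdr)
```

Something like tone_map(merge_exposures(short, long)) then fits the encoder's 0 to 1 range while keeping the highlight detail from the shorter exposure.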
They could of course just forget their obsession with thin and put in a giant sensor with a telescopic lens attached, but that would be too easy.
Some smartphones already do HDR video: they run at twice the FPS, alternate the exposure between even and odd frames, and recombine each pair into a single frame before it's encoded:
"In the normal 1080p video it’s totally saturated, in the HDR video we can see clouds and sky, and the street appears different as well. There are however some recombination artifacts visible during fast panning left and right. There’s also some odd hysteresis at the beginning as you begin shooting video where the brightness of the whole scene changes and adapts for a second. This ostensibly prevents the whole scene from quickly changing exposures, but at the beginning it’s a little weird to see that fade-in effect which comes with all HDR video. HTC enables video HDR by running its video capture at twice the framerate with an interlaced high and low exposure that then get computationally recombined into a single frame which gets sent off to the encoder."
You can see the sky burns out in the normal one, and you get that horrible exposure shift that tells you it's from a phone or a cheap SLR camera. The HDR one has a more film-like quality to it, likely because professional cameras shoot in HDR and don't have the exposure shifts. Ultimately, the aim would be to get the output as close to the expensive cameras as possible using whatever technique gives the best results:
In the first video I noted the sky. They would have needed to compress the range regardless to get that. It's just that if enough information is captured, they may be able to use a feathered range to make adjustments. If you're interested in seeing something like that in practice, Nuke and DaVinci Resolve have light versions; Resolve Lite is especially good. The point was that you can take a certain range, whatever the software calls it, and make a fairly discrete adjustment to that range because you can differentiate it well. Otherwise you could have information for a sky which is still remapped out of range in the target gamut.

I've never seen any evidence that professional cameras specifically shoot in HDR in the sense of multiple exposures. Their dynamic range has simply increased over the years; at this point some sensors for still capture claim as high as 14 stops. What you actually get varies with conditions, and video grading software does a better job of making use of that range. Having values beyond the typical clamp range isn't new, but more professional cameras have functional linear workflows that preserve the original data as far as possible, rasterizing previews while maintaining the original framebuffer values rather than gamma-corrected versions. I'm tired today, so I may have messed up my explanation somewhere.
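For anyone curious what a "feathered range" adjustment amounts to, here's a minimal sketch of a soft luminance key with a gain applied only inside it. The smoothstep falloff, the range values, and the Rec. 709 luma weights are my own choices for illustration, not what Resolve does internally:

```python
import numpy as np

def smoothstep(edge0, edge1, x):
    # Standard smoothstep: 0 below edge0, 1 above edge1, smooth ramp between.
    t = np.clip((x - edge0) / (edge1 - edge0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def feathered_gain(image, lo=0.7, hi=0.9, feather=0.1, gain=0.8):
    # image: linear-light RGB float array, shape (h, w, 3).
    # lo..hi: the target luminance range (e.g. the sky's highlights);
    # feather: width of the soft falloff on each side of that range.
    luma = image @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luma key
    # Feathered mask: ramps up into the range and back down out of it.
    mask = smoothstep(lo - feather, lo, luma) * (1.0 - smoothstep(hi, hi + feather, luma))
    # Pull the selected range down (gain < 1) or up without touching the rest.
    return image * (1.0 + (gain - 1.0) * mask[..., None])
```

The feathering is what keeps the adjustment from leaving a hard edge where the sky meets everything else.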
The second one is my favorite, because it used a song from Ferris Bueller's Day Off. The subtitles are great.
Quote from the video:
The only color correction I did was to even out the color temperature discrepancies between the two cameras, and those adjustments were done on the Alexa footage, since I value my sanity.
The Alexa was a bit more subtle and had less noise. I've seen these differences across still and video assets.
wizard69 wrote: »
I'm still of the mind that if I need a real camera I will reach for one; if Apple produces something in a cell phone that outdoes a dedicated camera, they could change my mind.
There is no such thing according to the iOS end user documentation.
There is a sleep mode in iOS, which turns off the display. The device still receives messages/push notifications, takes calls, plays music, etc.
The only other setting is "Off."
I know; that's why I didn't understand his comment. I thought it was some kind of hidden routine in iOS, and I still don't understand it.