Well, 'tis true that there's more to bokeh than averaging pixels.
Yes, we are all annoying in our own way, goodbyeranch. I would say the new Portrait mode simulates depth of field but not necessarily bokeh, as the author of this article claims. Bokeh is a specific quality of background blur, usually involving circles (like the ones in digitalclips' parrot photo), which can be manipulated using depth of field and different types of lenses, but the two are not the same thing. Anyway, I think Portrait mode is a great new feature that I've been looking forward to. Yes, it's not perfect and prone to glitches, but I'd still rather have the feature on a point-and-shoot camera than not. Here is an article that does a pretty good job of explaining both depth of field and bokeh for those not completely familiar with the terms and how they differ: http://digital-photography-school.com/understanding-depth-field-beginners/
The fact that there is even a discussion like this regarding a camera in A PHONE is amazing in itself. The fact that an average non-hobbyist is being exposed to this technique/feature and will be able to take better portraits where the subject pops is pretty cool.
polymnia said: I don't want to minimize the complexity (and probably engineering around Adobe patents) that would be involved in replicating the Lens Blur from Photoshop into an iPhone, but Apple seems to find ways to implement things.
As you probably know, there is a huge debate about software patents. In this case the software is attempting to replicate a natural phenomenon - the visual effect that is created by a lens. That would be like patenting the laws of physics. Copyright is a different matter, as that would involve copying code or UI. There are so many ways to skin a cat that it would be unlikely to present a problem. Apple would probably write it in Swift, and I'm pretty sure Photoshop is all C++.
I want it to be able to do "Instant Alpha" like Keynote can - make the background transparent so the foreground can be easily composited onto another background.
polymnia said: I don't want to minimize the complexity (and probably engineering around Adobe patents) that would be involved in replicating the Lens Blur from Photoshop into an iPhone, but Apple seems to find ways to implement things.
As you probably know, there is a huge debate about software patents. In this case the software is attempting to replicate a natural phenomenon - the visual effect that is created by a lens. That would be like patenting the laws of physics. Copyright is a different matter, as that would involve copying code or UI. There are so many ways to skin a cat that it would be unlikely to present a problem. Apple would probably write it in Swift, and I'm pretty sure Photoshop is all C++.
Bingo. If only the courts and USPTO understood this simple concept. Code is speech. Speech is protected by copyright.
I want it to be able to do "Instant Alpha" like Keynote can - make the background transparent so the foreground can be easily composited onto another background.
They could probably offer that feature as they have apparently mastered the mask part.
A green screen, so to speak, is usually used to mask people in motion, and it does a pretty good job of knocking out the background for video, but the edges are a little ragged. In this case you want to keep the background, only altered. Anyone who has used Photoshop to create a mask, whether with channels, the pen tool, the magnetic lasso, or the magic wand, knows how difficult it is to cut around frizzy hair. The mask in the photo of the dogs is almost perfect, with only very slight blurring of the flyaway hairs of the dog on the left. As others have mentioned, they now need to work on more realistic circles of confusion in the background, as shown in the parrot image.
Woah... how can they mask the subject in real time?
I presume the two cameras create a depth map and then software applies blur according to a distance algorithm. Remember years ago when Jony explained that the blur in iOS and macOS was non-obvious and actually took a lot of effort to get right? They may be using the same blur technique here.
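If that guess is right, the effect could in principle be as simple as blending a sharp frame with a blurred copy, weighted by each pixel's distance from the focal plane. A toy numpy sketch of the idea (the function names, the box blur, and the linear blend are my own simplifications, not Apple's actual pipeline):

```python
import numpy as np

def box_blur(img, radius):
    """Crude box blur by averaging shifted copies (edges wrap)."""
    acc = np.zeros_like(img, dtype=float)
    count = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            count += 1
    return acc / count

def portrait_blur(image, depth, focus_depth, radius=1):
    """Blend sharp and blurred copies: pixels far from the focal
    plane get more of the blurred image."""
    blurred = box_blur(image, radius)
    weight = np.clip(np.abs(depth - focus_depth), 0.0, 1.0)
    return (1.0 - weight) * image + weight * blurred

# Subject pixel at depth 0, background at depth 1, focus on the subject:
img = np.zeros((5, 5))
img[2, 2] = 1.0
depth = np.full((5, 5), 1.0)
depth[2, 2] = 0.0
out = portrait_blur(img, depth, focus_depth=0.0)
```

An in-focus pixel (weight 0) comes through untouched, while background pixels take the blurred copy; a real implementation would vary the blur radius with depth rather than blend a single blurred copy, which is presumably where the "non-obvious" engineering lives.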
Using Photoshop, masking the subject and tree behind her will take me an hour at least, and that's a rough mask. We're talking about hair and trunk here. The blur is the easiest part.
You could do that picture of the woman in Photoshop in under a minute. If you look closely, the hair is blurred at the edges too and the tree isn't masked at all; it's not as accurate a mask as some of the add-on plug-ins for Photoshop can achieve. Not to mention that in Photoshop, once masked, you could use a tilt or field blur to make the DOF look more realistic. That said, I'm sure it's quite good enough for most folks and a pleasing effect for some images.
This is amazing stuff. Look at some of the pictures in the TechCrunch article. Wow.
I want to see how it performs with (1) a person with a lot of loose hair strands in front of a background to be blurred out, and (2) something semi-transparent in front of a background that is blurred out. Since there's no real way for the software to determine the depth of everything it's looking at, I think it will have trouble with both of these examples.
EDIT: Yes, it does pretty well all things considered. There are still problem areas that trip up the effect, and since there is no actual z-depth information being collected and it's relying on machine intelligence to determine what is in front and in back, it's obviously not going to be perfect. Here's the link to the article cited above: https://techcrunch.com/2016/09/21/hands-on-with-the-iphone-7-plus-crazy-new-portrait-mode/
I want it to be able to do "Instant Alpha" like Keynote can - make the background transparent so the foreground can be easily composited onto another background.
Chroma keying, maybe?
I think the commenter is suggesting Depth Keying.
Another interesting idea of something that could be done with the depth map.
Like I mentioned in an earlier comment, this Depth Map could be used in all kinds of creative ways.
I know, right. I can't see how it's possible for them to ever get accurate artificial bokeh, because of the edge problem of masking. If the edge is not sharp on the part that is meant to be sharp, it just looks like an amateur Photoshop job. This is an effect that *is* worth pursuing, but they should not release it until it's fully formed. They have to engineer fake shutter-blade effects as well as some other work. Having said that, even done badly it's an improvement, but it should not be called bokeh; it should just be called something like a 'portrait enhancement' mode.
I want it to be able to do "Instant Alpha" like Keynote can - make the background transparent so the foreground can be easily composited onto another background.
Chroma keying, maybe?
Yeah, if they can do this, they can chroma key the background and do compositing, even inserting extra fake light sources and having them light the scene realistically (since they have a depth map, you can introduce realistic lighting after the photo is taken).
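One crude way a depth map could drive a fake light after the fact: reconstruct each pixel's distance to a virtual point light and attenuate by inverse-square falloff. Purely illustrative - the function and parameters below are my own, and real relighting would also need surface normals:

```python
import numpy as np

def relight(image, depth, light_xy, light_depth, intensity=1.0):
    """Brighten each pixel by inverse-square falloff from a virtual
    point light at (row, col, depth) = (*light_xy, light_depth)."""
    rows, cols = np.indices(image.shape, dtype=float)
    dist_sq = ((rows - light_xy[0]) ** 2
               + (cols - light_xy[1]) ** 2
               + (depth - light_depth) ** 2)
    falloff = intensity / (1.0 + dist_sq)  # +1 avoids division by zero
    return image * (1.0 + falloff)

# Flat grey image; light placed directly on the centre pixel.
img = np.full((3, 3), 0.5)
depth = np.zeros((3, 3))
lit = relight(img, depth, light_xy=(1, 1), light_depth=0.0)
```

Without the depth term, every pixel at the same screen distance would brighten equally; the depth map is what lets the light "wrap" the scene in 3D.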
In fact, with a depth map, you could do a lot of very very cool effects; many that make current photo filters look sophomoric.
Apple or third parties could even create "Scenes" with their own depth maps; you could then be mapped into the environment, a kind of very interesting augmented reality.
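Sketching that idea: if both the photo and a canned "Scene" carry depth maps, compositing reduces to a per-pixel nearest-wins test, the classic z-buffer. A toy numpy version (everything here is hypothetical, nothing Apple has announced):

```python
import numpy as np

def scene_composite(photo, photo_depth, scene, scene_depth):
    """Keep, per pixel, whichever source is nearer the camera, so the
    subject can pass in front of or behind scene elements."""
    return np.where(photo_depth < scene_depth, photo, scene)

# Subject (value 1) at depth 1 in the centre of the photo; the rest of
# the photo is far away. The backdrop scene (value 0) sits at depth 5.
photo = np.ones((4, 4))
photo_depth = np.full((4, 4), 10.0)
photo_depth[1:3, 1:3] = 1.0
scene = np.zeros((4, 4))
scene_depth = np.full((4, 4), 5.0)
merged = scene_composite(photo, photo_depth, scene, scene_depth)
```

Because the test runs per pixel, scene objects nearer than the subject would correctly occlude it - something a flat green-screen composite cannot do.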
The fact that an average non-hobbyist is being exposed to this technique/feature and will be able to take better portraits where the subject pops is pretty cool.
I assume you don't like to learn.