
Here’s how future iPhones could use camera depth data to create impressive Portrait Mode videos

One of the headline features of the dual camera system in recent iPhones is Portrait Mode, an effect that simulates a DSLR-style shallow depth of field by intelligently blurring the background of your photos. Apple took the feature a step further with the iPhone X, adding Portrait Mode selfies and introducing simulated Portrait Lighting.

While these features are currently limited to still photos, future iPhones could someday use the same technology to bring the depth effect to videos, a stunning look currently only possible with high-end video gear or a considerable amount of work in post-production. This feature could be a game changer for videographers, editors, and consumers alike, who have already embraced the iPhone as a serious filmmaking tool. With some effort, the effect is actually possible with today’s iPhones. Here’s how it can be done.

When you take a photo in Portrait Mode on an iPhone today, the depth information associated with the image is stored as a grayscale depth map. iOS uses this depth map to determine which parts of the photo should be blurred and which should remain in focus. This is the same way that 3D artists fake depth of field in 3D renders – the animation software creates a depth map that’s later interpreted by the renderer.
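That depth map is accessible to developers directly. Below is a minimal sketch, assuming iOS 11 or later and a Portrait Mode photo that was saved with depth, of how the embedded disparity data can be pulled out of an image file using ImageIO and AVFoundation (the function name and file URL are placeholders):

```swift
import AVFoundation
import CoreVideo
import ImageIO

// A minimal sketch: read the grayscale depth (disparity) map embedded in a
// Portrait Mode photo. Assumes iOS 11+ and a photo captured with depth.
func depthMap(from imageURL: URL) -> CVPixelBuffer? {
    guard let source = CGImageSourceCreateWithURL(imageURL as CFURL, nil),
          // Portrait photos store depth as auxiliary "disparity" data.
          let auxInfo = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
              source, 0, kCGImageAuxiliaryDataTypeDisparity) as? [AnyHashable: Any],
          let depthData = try? AVDepthData(fromDictionaryRepresentation: auxInfo)
    else { return nil }

    // Normalize to 32-bit disparity so every photo hands back the same format;
    // this pixel buffer is the map iOS consults when deciding what to blur.
    return depthData.converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32)
                    .depthDataMap
}
```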

If you want to create the same effect in a video today, there are two common options. The first and easiest is to spend the money on expensive cameras and lenses with a wide aperture – that’s what gives you a shallow depth of field.

A depth map generated by 3D rendering software. Darker areas are in focus and lighter areas are blurred.

The second, more labor-intensive option is to use post-production software like Adobe After Effects to build your own depth maps or video masks that specify which parts of a scene should be in focus. This often involves rotoscoping: manually tracing the subject, frame by frame, which quickly becomes a time-consuming task.

iOS 11 includes improved developer frameworks that give more access to depth data captured by the iPhone’s cameras. Apple showed off these new capabilities at WWDC 2017 with a sample app called AVCamPhotoFilter. Essentially, this lets developers capture streaming depth data from the camera at a limited resolution. This sample app is the basis for my solution.
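For a sense of what that looks like in code, here is a rough sketch of the kind of capture setup AVCamPhotoFilter demonstrates. It assumes iOS 11, a dual-camera (or TrueDepth) device, and omits error handling and camera permission checks; the class and queue names are illustrative:

```swift
import AVFoundation

// A minimal sketch of streaming depth capture, in the spirit of Apple's
// AVCamPhotoFilter sample. Assumes iOS 11+ and a depth-capable camera.
final class DepthStreamer: NSObject, AVCaptureDepthDataOutputDelegate {
    let session = AVCaptureSession()
    private let depthOutput = AVCaptureDepthDataOutput()
    private let depthQueue = DispatchQueue(label: "depth.output.queue")

    func configure() throws {
        session.beginConfiguration()

        // The rear dual camera provides depth; on iPhone X the front
        // TrueDepth camera (.builtInTrueDepthCamera) works as well.
        guard let device = AVCaptureDevice.default(.builtInDualCamera,
                                                   for: .video,
                                                   position: .back) else { return }
        let input = try AVCaptureDeviceInput(device: device)
        if session.canAddInput(input) { session.addInput(input) }

        // Depth arrives at a lower resolution and frame rate than video.
        if session.canAddOutput(depthOutput) { session.addOutput(depthOutput) }
        depthOutput.isFilteringEnabled = true // smooth over holes in the map
        depthOutput.setDelegate(self, callbackQueue: depthQueue)

        session.commitConfiguration()
        session.startRunning()
    }

    // Called for every streamed depth frame; depthData.depthDataMap is the
    // grayscale map that can be visualized or recorded.
    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime,
                         connection: AVCaptureConnection) {
        _ = depthData.depthDataMap
    }
}
```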

Using an iPhone X running AVCamPhotoFilter and an iPhone 7 Plus running the stock Camera app, I stacked the devices, keeping the lenses as close together as possible. I recorded the same scene on each, screen capturing the depth data on my iPhone X to make a moving depth map. The video below demonstrates the process involved and the depth effect that results.

Bringing both pieces of footage into Adobe After Effects, I was able to add shallow depth of field quite easily by applying a Camera Lens Blur effect to the video and pointing After Effects at the depth map as the blur source. This is essentially what iOS does with depth data today, just behind the scenes.
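For comparison, the same idea can be sketched on-device with Core Image instead of After Effects, using a built-in masked blur filter and treating the grayscale depth map as the mask. This is only an illustration of the concept, not Apple's actual Portrait Mode pipeline; loading the images and scaling the depth map to match the frame are assumed to happen elsewhere:

```swift
import CoreImage

// A minimal sketch: blur a frame using a grayscale depth map as the mask,
// analogous to a camera lens blur driven by a depth map in After Effects.
func applyDepthBlur(to frame: CIImage, depthMask: CIImage, radius: Double = 12) -> CIImage? {
    guard let filter = CIFilter(name: "CIMaskedVariableBlur") else { return nil }
    filter.setValue(frame, forKey: kCIInputImageKey)
    // White areas of the mask receive the full blur radius; black stays sharp.
    filter.setValue(depthMask, forKey: "inputMask")
    filter.setValue(radius, forKey: kCIInputRadiusKey)
    return filter.outputImage
}
```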

The result isn’t perfect, but it took a fraction of the time that building a depth map by hand would require. The output would be more accurate if both the source video and the depth data came from the same camera lens, but AVCamPhotoFilter doesn’t support capturing both concurrently. While there are significant limitations to this workaround, the end result comes surprisingly close to the polish of the depth effect on still photos.

Unlocking Portrait Mode for video on current iPhone hardware may prove to be challenging. The feature is already computationally intensive for still photos, and would be significantly more taxing in a video. Third-party applications like Fabby have attempted to recreate the effect entirely in software, but the results aren’t convincing. However, Apple’s A-series chips and camera hardware continue to advance by leaps and bounds on a yearly basis, so this feature might not be too far out of reach.

https://www.youtube.com/watch?v=REZl-ANYKKY

The possibilities of Portrait Mode videos extend far beyond simple shallow depth of field effects. The same data could eventually be used to simulate Portrait Lighting in videos – just like in Apple’s own TV ad.

Creative manipulation of depth data could even make possible effects like tilt-shift videos and instant masking of subjects as if they were standing in front of a green screen. Apple has already used this technique to great effect in a recent update to their Clips app on iPhone X, adding “Selfie Scenes” that can place you downtown in a city or even on the set of Star Wars.

Finally, Portrait Mode for video could further establish the iPhone as an essential filmmaking tool. Traditional cameras do not capture depth data at all, giving the iPhone an immediate advantage over even high-end video gear.

Apple has made its dedication to the iPhone’s camera clear, funding a short film shot entirely on iPhone and devoting significant engineering resources to new camera features with every new model. Portrait Mode videos would take the iPhone one step closer to the goal of not only being the best camera you have with you, but the best camera, period.


