BOKEH BY APPLE

The iPhone 7 Plus’ portrait mode still needs some work

No photo tips here.
Image: Quartz/Dave Gershgorn

iPhone 7 Plus users are getting an extra treat when they update their operating systems today: portrait mode.

According to Apple’s literature, the iPhone’s two cameras work in harmony to gauge the depth of a scene, and then machine learning algorithms decide what should be thrown out of focus. This effect, called shallow depth of field, is traditionally achieved by the wide-aperture optics of DSLR and SLR lenses designed for low-light shooting. Portrait mode makes sense as a smartphone feature, since we’re constantly using phones to take photos of people.
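
To make that concrete: a depth-driven blur estimates how far each pixel is from the camera, keeps the subject sharp, and blurs everything behind it. The sketch below is a hypothetical, simplified version of that idea in Python; the function name, depth threshold, and blur radius are illustrative assumptions, not Apple’s actual on-device pipeline.

```python
# A minimal sketch of depth-based background blur. Assumes we already have a
# per-pixel depth map (the iPhone estimates this from its two cameras).
# All names and numbers here are illustrative, not Apple's implementation.
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_portrait_mode(image, depth, subject_depth=1.5, sigma=6.0):
    """Blur everything estimated to be farther away than the subject.

    image: (H, W, 3) float array in [0, 1]
    depth: (H, W) float array of estimated distance, in meters
    """
    # Blur the whole frame once, then composite the sharp foreground back in.
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma=sigma) for c in range(3)],
        axis=-1,
    )
    background = (depth > subject_depth)[..., None]  # True where we want bokeh
    return np.where(background, blurred, image)
```

The hard part is the mask itself. When the depth estimate misjudges where a face or a stray hair ends, the blur bleeds into the subject or leaves patches of background sharp, which is exactly the kind of fringing visible in the photos here.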

Portrait mode blurs the background, but with slight fringing on the right side of the face.
Image: Quartz/Dave Gershgorn

But portrait mode is still in beta, despite being deployed in the latest official update of the iPhone, and it shows. Under any scrutiny, portrait mode falls apart at the edges, seemingly unable to tell where one object ends and another begins. Parts of images are inexplicably blurred, while other pieces that should be blurred are not. This should get better over time—it’s just a matter of tweaking the algorithms—but this kind of error casts doubt on Apple’s proclamation that it leads the pack in machine learning expertise.

In machine learning research, identifying and separating the different objects in an image is called segmentation. It’s a basic part of nearly any effort at visual machine learning, a field often referred to as “computer vision.” As an example, Facebook’s segmentation tools, called DeepMask and SharpMask, are algorithms that scour images looking for differences between pixels and use that information to designate where objects begin and end. It’s how Facebook finds faces within photos so they can be tagged, and the company is now working to extend the tools to be able to identify people in live videos.
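
To show what a segmentation output actually looks like, here is a deliberately toy sketch in Python. It is not how DeepMask or SharpMask work internally (those are learned convolutional networks); it only illustrates the end product: an integer map assigning every pixel either to the background or to a numbered object.

```python
# A toy illustration of segmentation: turning raw pixels into a map of which
# object each pixel belongs to. Here we just group bright pixels into
# connected regions; real systems learn far subtler cues from data.
import numpy as np
from scipy.ndimage import label

def toy_segment(gray_image, threshold=0.5):
    """Return an (H, W) integer map (0 = background, 1..N = object IDs) and N."""
    foreground = gray_image > threshold          # crude "objectness" guess
    object_map, num_objects = label(foreground)  # connected regions get IDs
    return object_map, num_objects

# Example: two bright blobs on a dark background come out as objects 1 and 2.
img = np.zeros((8, 8))
img[1:3, 1:3] = 1.0
img[5:7, 4:7] = 1.0
masks, n = toy_segment(img)
print(n)  # 2
```

Portrait mode needs something along those lines to decide which pixels belong to the person and which belong to the scene behind them.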

The iPhone’s task is harder than image recognition tasks like tagging photos on Facebook, however, because it does this processing in real time—and without the benefit of the data centers Facebook relies on.

The camera unusually blurs the beard and temple.
Image: Quartz/Dave Gershgorn

One problem with portrait mode in its current iteration is that the changes the iPhone makes to people’s faces push them dangerously close to the “uncanny valley.”

That’s the idea that humans are extremely good at recognizing when something meant to look human—like a robot or computer-generated face—isn’t quite human, and that sense of almost-but-not-quite-there creates unease or even revulsion. Portrait mode creates small perturbations around a person’s face that make it seem as if something about the image is just slightly off, just slightly inhuman.

Upon further inspection, some photos are incorrectly blurred around edges, or have odd blurring where there shouldn’t be any. The issues can be seen most clearly when looking at photos taken in portrait mode of ordinary objects, like plants and office scenes.

The camera blurs this plant and radiator at odd, seemingly random intervals.
Image: Quartz/Dave Gershgorn

The feature is still in beta, which means Apple is still working out all the kinks, but even so, it’s easy to see that you shouldn’t throw out your DSLR just yet.