

Apple and Google prove camera software is more important than megapixels

Lots of cheaper smartphones offer 48 and even 64MP cameras, but the leaders are still using 12 megapixels. What gives?
November 3, 2019

Camera prowess has become the defining factor of the modern smartphone, as the new Google Pixel 4 and Apple iPhone 11 series, among a few other new releases, have shown. Come out with the best camera experience available and the praise will flow. This photography phenomenon isn’t reserved for the flagship market — great pictures are selling cheap phones too.

However, these two markets are completely at odds in their approach to cameras. In the more affordable tiers, smartphones offer 48-, 64-, and soon 108-megapixel sensors, applying the old theory that bigger numbers must be better. But ask Apple, Google, and Samsung, and they’ll tell you that just 12 megapixels is all you need — and the results from the flagship tier back them up.

See the proof: Pixel 4 vs the best smartphone cameras around

Beware of the megapixel temptation

While megapixels look great on paper, converting them into good-looking images is another job entirely.

A number of the high-resolution cameras we’ve seen on the market produce blurry pictures lacking in detail. The reason is that there’s more to a good-looking image than pixel count — it also takes a high-quality lens and high-end image processing algorithms. Some phones, such as the HUAWEI Mate 30 Pro, can pull off very detailed images, but more affordable handsets fall short.

Not convinced? Check out the example image below, where I’ve pitted the 48MP HONOR 9X against the 12MP Pixel 3. This isn’t a fair comparison based on price, but it proves the megapixel point. It’s pretty clear which crop captures more detail.

The best phone cameras improved a lot in 2019, but their hardware isn't vastly different.

A key reason for this is that these huge megapixel sensors all use a technology called “pixel binning.” Rather than a traditional Bayer color filter, they use a Quad-Bayer filter pattern, which groups four same-color pixels together. In reality, these cameras have a color resolution closer to one-quarter of their pixel count. So in terms of real resolvable detail, a 48MP pixel-binning camera is more like a 12MP camera, 64MP is closer to 16MP, and 108MP is closer to 27MP. And even that presumes the manufacturer pairs the sensor with a decent lens, which is far from guaranteed on cheaper phones.
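The arithmetic above can be sketched in a few lines of code. This is a simplified illustration only — real Quad-Bayer sensors bin in the analog domain and remosaic the color data — but it shows both the quarter-resolution rule of thumb and what averaging a 2x2 block of pixels actually does:

```python
# Illustrative sketch of pixel binning (simplified; real sensors
# combine same-color pixels before demosaicing).

def effective_megapixels(sensor_mp: float, bin_factor: int = 4) -> float:
    """Approximate resolvable resolution after binning groups of
    bin_factor same-color pixels into one output pixel."""
    return sensor_mp / bin_factor

def bin_2x2(pixels):
    """Average each 2x2 block of a grayscale image (list of rows),
    halving both dimensions — a quarter of the original pixel count."""
    h, w = len(pixels), len(pixels[0])
    return [
        [
            (pixels[y][x] + pixels[y][x + 1]
             + pixels[y + 1][x] + pixels[y + 1][x + 1]) / 4
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

print(effective_megapixels(48))        # 12.0
print(effective_megapixels(108))       # 27.0
print(bin_2x2([[10, 20], [30, 40]]))   # [[25.0]]
```

The payoff of binning is light sensitivity, not detail: four small pixels averaged together behave more like one larger, less noisy pixel, which is why these sensors are marketed for low light as much as for resolution.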

The bottom line is don’t trust the numbers, trust the images. So far, these huge megapixel sensors have mostly been a disappointment.

Read more: Don’t fall for the 100MP camera hype

Computational photography is the future


While the megapixel race has produced more than a few disappointments, the flagship tier of the market has barely changed hardware in several years. Instead, high-end products have improved their imaging capabilities through the use of computational photography.

Improvements in image processing are producing better detail, white balance, and colors in both daylight and low light. Computational photography is also powering many of our favorite camera features, including night modes, bokeh depth-of-field effects, and AI scene detection. For examples of computational photography in action, see the excellent quality of Apple’s low-light pictures, HUAWEI’s 5x hybrid zoom, or the Pixel 4’s astrophotography capabilities.

Image processing capabilities are harder to convey than megapixel counting, but Apple and Google prove this is the way forward.

We’re already seeing a few of these techniques make their way to more affordable handsets. Night mode and software bokeh can be found in nearly all phones just a year or so after being flagship exclusives. However, the cost of advanced image processing and machine learning hardware is keeping the most advanced computational photography algorithms in more expensive phones, at least for now.

Today’s best smartphone shooters aren’t just dependent on great camera hardware; they make use of bleeding-edge image processing and machine learning components too. Apple, HUAWEI, and Samsung have doubled down on the capabilities inside their in-house processors, while Google is in on the trend with its additional Neural Core processor. These chips are essential for running advanced imaging algorithms efficiently without draining all your battery life.

Eventually, these capabilities will make their way to more affordable phones and manufacturers may drop their camera resolutions to help process the image data more efficiently. In the meantime, mid-range smartphones are opting for higher resolution sensors to make themselves appear competitive. But the future of mobile photography lies firmly in smarter, more advanced processing capabilities.

Want to learn more about what computational photography has in store for smartphone cameras? Check out the video above. Now, more than ever, consumers should be wary of using megapixels as a barometer of quality.