Let’s talk about pixels. Specifically, iPhone 14 pixels. More specifically, the pixels of the iPhone 14 Pro. Because while the headline news is that the latest Pro models offer a 48MP sensor instead of a 12MP one, that’s actually not the most important improvement Apple has made to its cameras this year.
In fact, of the four biggest changes this year, the 48MP sensor is, for me, the least important. But bear with me here, as there’s a lot to unpack before I can explain why I think the 48MP sensor matters so much less than:
- The size of the sensor
- Pixel binning
- The photonic engine
One 48MP sensor, two 12MP
Colloquially, we talk about the iPhone camera in the singular, and then refer to its three different lenses: main, wide-angle, and telephoto. We do this because it’s familiar – that’s how DSLR and mirrorless cameras work: one sensor, multiple (interchangeable) lenses – and because that’s the illusion Apple creates in the camera app, for simplicity.
The reality is, of course, different. The iPhone actually has three separate cameras. Each camera module has its own lens and its own sensor. When you tap the 3x button, for example, you’re not just selecting a telephoto lens – you’re switching to a different sensor. When you use the zoom slider, the camera app automatically and invisibly selects the appropriate camera module and then performs any necessary cropping.
Only the main camera module has a 48MP sensor; the other two modules still have 12MP ones.
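To make that switching concrete, here’s a hypothetical sketch of how a camera app might pick a module for a requested zoom level: choose the module with the largest native zoom that doesn’t exceed the request, then crop digitally for the remainder. The 0.5x/1x/3x values mirror the iPhone 14 Pro’s modules, but the logic itself is my illustration, not Apple’s actual implementation.

```python
# Hypothetical sketch of invisible camera-module selection.
# Module zoom factors assumed from the iPhone 14 Pro's three cameras;
# this is an illustration, not Apple's code.
MODULES = {"ultra_wide": 0.5, "main": 1.0, "telephoto": 3.0}

def select_module(zoom: float):
    """Return (module_name, digital_crop_factor) for a requested zoom.

    Picks the module with the largest native zoom not exceeding the
    request, so as much of the zoom as possible is optical.
    """
    name, native = max(
        (item for item in MODULES.items() if item[1] <= zoom),
        key=lambda item: item[1],
    )
    return name, zoom / native

print(select_module(2.0))  # ('main', 2.0) -> main camera plus 2x digital crop
print(select_module(3.0))  # ('telephoto', 1.0) -> fully optical, no crop
```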
Apple was fully upfront about this when it introduced the new models, but it’s an important detail that some may have missed (our emphasis):
For the first time ever, the Pro lineup features a new 48MP Main camera with a quad-pixel sensor that adapts to the photo being captured, and features second-generation sensor-shift optical image stabilization.
The 48MP sensor works part-time
Even when you’re using the main camera, with its 48MP sensor, you’re still only taking 12MP photos by default. Again, quoting Apple:
For most photos, the quad-pixel sensor combines every four pixels into one large quad pixel.
The only time you shoot at 48 megapixels is when:
- You are using the main camera (not telephoto or wide angle)
- You’re shooting in ProRAW (which is off by default)
- You’re shooting in decent light
If you want to do that, here’s how – but mostly, you won’t …
Apple’s approach makes sense
You may ask: why give us a 48MP sensor and then mostly not use it?
Apple’s approach makes sense because, in truth, there are very few occasions when shooting at 48MP is better than shooting at 12MP. And since doing so creates much larger files, consuming storage space with a voracious appetite, it makes sense that this isn’t the default.
I can think of only two scenarios where taking a 48MP image is a worthwhile thing to do:
- You’re going to print the photo, in a large format
- You have to crop the image very heavily
This second reason is a little debatable, too, because if you need to crop that heavily, you may be better off using the 3x camera instead.
Now let’s talk about sensor size
When comparing a smartphone camera to a high-quality DSLR or mirrorless camera, there are two big differences.
One is the quality of the lenses. Standalone cameras can have much better lenses, thanks to both their physical size and their cost. It’s not unusual for a professional or keen amateur photographer to spend a four-figure sum on a single lens. Smartphone cameras obviously can’t compete with that.
The second is the size of the sensor. All other things being equal, the larger the sensor, the better the image quality. Smartphones, by the very nature of their size, and all the other technologies they need to fit in, have much smaller sensors than standalone cameras. (They also have limited depth, which imposes another substantial limitation on the size of the sensor, but there is no need to go into it.)
A smartphone-sized sensor limits image quality and also makes it harder to achieve shallow depth of field, which is why the iPhone does it artificially, with Portrait mode and cinematic video.
Apple’s big-sensor, limited-megapixels approach
While there are obvious and less obvious limits to the sensor size you can use in a smartphone, Apple has historically used larger sensors than other smartphone brands, which is part of why the iPhone has long been considered the benchmark phone for camera quality. (Samsung has since done the same.)
But there is a second reason. If you want the best possible image quality from a smartphone, you also want the pixels to be as big as possible.
This is why Apple has religiously stuck to 12MP while brands like Samsung have crammed up to 108MP into similarly sized sensors. Squeezing lots of pixels into a tiny sensor substantially increases noise, which is especially noticeable in low-light photos.
Okay, it took me a while to get here, but now I can finally explain why I think the larger sensor, pixel binning, and the Photonic Engine are a much bigger deal than the 48MP sensor …
No. 1: The iPhone 14 Pro/Max sensor is 65% larger
This year, the iPhone 14 Pro/Max’s main camera sensor is 65% larger than last year’s model’s. It’s still nothing compared to a standalone camera, of course, but for a smartphone camera, it’s (pun intended) huge!
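To put rough numbers on what that 65% means, here’s a quick back-of-the-envelope sketch (normalized, illustrative values – not real sensor dimensions) of what would happen to each individual photosite if the pixel count were also quadrupled:

```python
# Back-of-the-envelope: quadrupling the pixel count on a sensor that is
# only 65% larger shrinks each individual photosite substantially.
# (Normalized, illustrative numbers - not actual sensor dimensions.)
old_area = 1.0            # last year's sensor area, normalized
new_area = 1.65           # 65% larger
old_pixels = 12_000_000   # 12MP
new_pixels = 48_000_000   # 48MP

ratio = (new_area / new_pixels) / (old_area / old_pixels)
print(round(ratio, 4))  # 0.4125 -> each photosite gets ~41% of the old area
```

Less area per photosite means less light per photosite, and therefore more noise.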
But, as previously mentioned, if Apple squeezed four times as many pixels into a sensor that is only 65% larger, that would actually result in worse quality! This is exactly why you’ll mostly keep shooting 12MP images. And that’s thanks to …
No. 2: Pixel binning
To take 12MP images on the main camera, Apple uses a technique called pixel binning. Data from each group of four pixels is combined into one virtual pixel (by averaging the values), so the 48MP sensor is mostly used as a larger 12MP one.
This illustration is simplified, but gives the basic idea:
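As a toy sketch of the same idea in code (plain Python, not Apple’s actual pipeline), 2×2 binning averages each block of four photosite readouts into a single value:

```python
def bin_2x2(sensor):
    """Average each 2x2 block of photosites into one larger 'quad pixel'.

    A toy illustration of 2x2 pixel binning: a 48MP-style grid becomes a
    12MP-style grid, and averaging four readouts also reduces noise.
    """
    h, w = len(sensor), len(sensor[0])
    return [
        [
            (sensor[r][c] + sensor[r][c + 1]
             + sensor[r + 1][c] + sensor[r + 1][c + 1]) / 4
            for c in range(0, w, 2)
        ]
        for r in range(0, h, 2)
    ]

# A toy 4x4 "sensor" readout:
raw = [
    [10, 12, 20, 22],
    [14, 16, 24, 26],
    [30, 32, 40, 42],
    [34, 36, 44, 46],
]
print(bin_2x2(raw))  # [[13.0, 23.0], [33.0, 43.0]]
```

Averaging four readouts also suppresses random noise, which is part of why the binned image is cleaner, not just composed of bigger pixels.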
What does this mean? Pixel size is measured in microns (one millionth of a meter). Most premium Android smartphones have pixels measuring somewhere in the range of 1.1 to 1.8 microns. The iPhone 14 Pro/Max, when using the sensor in 12MP mode, effectively has pixels measuring 2.44 microns. That’s a truly significant difference.
Without pixel binning, the 48MP sensor would, more often than not, be a downgrade.
No. 3: Photonic Engine
We know that smartphone cameras obviously can’t compete with standalone cameras in terms of optics and physics, but where they can compete is in computational photography.
Computational photography has been used in SLRs for literally decades. Changing the metering mode, for example, instructs the computer inside the DSLR to interpret the raw data from the sensor in a different way. Likewise, in consumer DSLRs and all mirrorless cameras, you can select from a variety of photo modes, which again tell the microprocessor how to adjust the data from the sensor to achieve the desired result.
So computational photography already plays a much bigger role in standalone cameras than many people realize. And Apple is very, very good at computational photography. (Okay, it’s still not great at Cinematic video, but give it a few years …)
The Photonic Engine is the dedicated chip that powers Apple’s Deep Fusion approach to computational photography, and I’m already seeing a huge difference in the dynamic range of photos. (Examples to follow in an iPhone 14 Diary piece next week.) It’s not just the range itself, but the smart decisions made about it: which shadows to bring out, and which highlights to tame.
The result is significantly better photos, which have as much to do with the software as the hardware.
A noticeably larger sensor (in smartphone terms) is really a big deal when it comes to image quality.
Pixel binning means that Apple has actually created a much larger 12MP sensor for most photos, allowing them to realize the benefits of the larger sensor.
The Photonic Engine means a dedicated chip for image processing. I’m already seeing the real-life benefits of this.
More to follow in an iPhone 14 Diary piece, as I put the camera through more thorough testing over the next few days.
FTC: We use income earning auto affiliate links.