
What is computational photography and why does it matter?

Camera hardware doesn't matter as much as cutting-edge software anymore.
Published on February 12, 2023

Have you ever tapped the camera shutter on your smartphone only to find out that the end result looks dramatically different than what you saw in the viewfinder? You can thank computational photography for that, a software processing technique that has become commonplace on nearly every single smartphone now. But why is this step necessary, especially when photographers have lived without it for decades?

For starters, a smartphone has to be more portable than a bulky DSLR or mirrorless camera. To that end, phone makers have been forced to devise ways to improve image quality without increasing the device’s physical footprint. That’s where computational photography comes in. It’s an ensemble of techniques like HDR that allows smartphones to compensate for compact hardware with state-of-the-art software processing.

Let’s take a deeper look at computational photography, some examples of it in the context of modern smartphones, and how different implementations can vary from one another.

What is computational photography?

Samsung Galaxy S22 Ultra and Apple iPhone 14 Pro Max cameras
Robert Triggs / Android Authority

The term computational photography refers to software algorithms that enhance or process images taken with your smartphone’s camera.

You may have heard of computational photography by a different name. Some manufacturers like Xiaomi and HUAWEI call it “AI Camera”. Others, like Google and Apple, boast about their in-house HDR algorithms that kick into action as soon as you open the camera app. Regardless of what it’s called, though, you’re dealing with computational photography. In fact, most smartphones use the same underlying image processing techniques.

Computational photography is a catch-all term for a range of image post-processing techniques.

Still, it’s worth noting that not all computational photography implementations are equal. Different manufacturers often take different approaches to the same scene. From color science to enhancement features like skin smoothing, processing can vary from one brand to the next. Some brands like OnePlus and Xiaomi have even partnered with imaging giants like Hasselblad and Leica to enhance their color science. Ultimately, you’ll find that no two competing smartphones produce the same image.

For an example of this, take a look at Google’s Pixel lineup. The company stuck with the same 12MP primary sensor for four generations spanning the Pixel 2 through 5, while competitors upgraded their camera hardware on a yearly basis. To make up for this gap, Google relied heavily on computational photography to bring new features with each Pixel release. Stick around until the next section for some examples. Of course, computational photography doesn’t completely negate the need for better hardware. The Pixel 6 series brought clear improvements once Google finally updated the camera hardware.

You can no longer judge a smartphone's camera performance based on its hardware alone.

In summary then, the advent of computational photography means that you can no longer judge a smartphone camera based on its on-paper specifications. Even the megapixel count doesn’t matter as much as it once did. We’ve seen devices with 12MP sensors yield better results than some 48 and 108MP shooters.

Techniques and examples of computational photography

With the basic explanation out of the way, here’s how computational photography influences your photos every time you hit the shutter button on your smartphone.

Image stacking or instantaneous HDR

Google Pixel 7 Pro camera app
Ryan Haines / Android Authority

Smartphone camera sensors are quite small compared to those in dedicated full-frame or even many point-and-shoot cameras. This means only a limited amount of light can be gathered by the sensor in the few milliseconds that the shutter is open. Keep the shutter open any longer and you’ll get a blurry mess, since nobody can keep their hands perfectly still.

To counter this problem, modern smartphones capture a burst of photos at various exposure levels and combine them to produce a composite shot with greater dynamic range than any single frame. When done right, this method can prevent blown-out highlights and crushed shadows.

While high dynamic range (HDR) photography is not a new technique by any means, it has become instantaneous and widely available thanks to computational photography on modern smartphones. Many of the best camera phones now begin capturing photos in the background as soon as you open the camera app. Once you tap the shutter button, the app simply retrieves its buffer of images from memory and combines them with the latest one to produce a pleasing, evenly exposed shot with minimal noise. Modern smartphones also use machine learning to select the best shot and detect motion, but more on that in a later section.
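To make the idea concrete, here’s a rough Python sketch of exposure stacking. This isn’t any manufacturer’s actual pipeline (real phones align frames, reject motion, and run the math on dedicated hardware); it only shows the core trick of weighting each frame, pixel by pixel, by how well exposed it is.

```python
# Toy exposure-fusion sketch in the spirit of smartphone HDR stacking.
# Real pipelines align frames, handle moving subjects, and run on the ISP;
# this only shows the core idea: favor well-exposed pixels from each frame.
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """frames: list of float images in [0, 1], all with the same shape."""
    stack = np.stack(frames, axis=0)                  # (N, H, W)
    # "Well-exposedness" weight: pixels near mid-gray count the most.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8
    return (weights * stack).sum(axis=0)              # evenly exposed composite

# Example: merge a simulated under-, normally-, and over-exposed burst.
base = np.random.rand(480, 640)
burst = [np.clip(base * gain, 0, 1) for gain in (0.5, 1.0, 2.0)]
hdr = fuse_exposures(burst)
```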

Portrait mode

Another limitation of the smaller camera sensors on smartphones is their inability to naturally produce a shallow depth of field. The blurry out-of-focus background behind a subject, commonly known as bokeh, is a signature trait of larger camera and lens systems. Thanks to computational photography and some clever software, however, smartphones can now achieve this look by adding a blur effect after you tap the shutter button. On most smartphones, portrait mode will detect the subject of your photo (usually a face) and apply a semi-convincing blur effect to the background. Portrait mode is never perfect, but it often takes a trained eye to spot the imperfections.
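As a rough illustration of how the blur is applied, the Python sketch below blurs the whole frame and then blends the sharp original back in wherever a subject mask says the person is. The hard-coded rectangular mask is a stand-in for the segmentation or depth model a real phone would use.

```python
# Minimal fake-bokeh sketch: blur everything outside a subject mask.
# The mask here is a hypothetical placeholder, not any vendor's actual model.
import cv2
import numpy as np

def fake_bokeh(image, subject_mask, blur_kernel=31):
    """image: HxWx3 uint8; subject_mask: HxW float in [0, 1], 1 = keep sharp."""
    blurred = cv2.GaussianBlur(image, (blur_kernel, blur_kernel), 0)
    mask3 = np.dstack([subject_mask] * 3)
    # Keep the subject from the sharp frame, take the background from the blur.
    return (image * mask3 + blurred * (1 - mask3)).astype(np.uint8)

img = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
mask = np.zeros((480, 640), dtype=np.float32)
mask[120:360, 200:440] = 1.0            # pretend a segmenter found a face here
portrait = fake_bokeh(img, mask)
```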

Newer smartphones can also apply this blur effect to videos. On the Pixel 7 series, this feature is called Cinematic Blur, while Apple rolls it into the iPhone’s Cinematic Mode.

Super resolution zoom / Space zoom

Smartphones have historically struggled with zoom, with older devices simply resorting to a lossy digital crop of the main sensor. But not anymore, thanks to software-enhanced zoom that can be combined with a telephoto or periscope lens to deliver up to 30x or even 100x zoom on some smartphones.

Super-resolution zoom kicks in whenever you pinch to zoom in. It begins by capturing multiple frames with slight shifts between shots to gather as much detail as possible. Even if you hold your phone perfectly still, the app will manipulate the optical image stabilization system to introduce slight jitter. This is enough to simulate multiple shots from different positions and merge them into a higher-resolution composite that looks convincing enough to pass for optical zoom, even if the phone doesn’t have any telephoto hardware.
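A heavily simplified “shift-and-add” sketch of that merging step is shown below, with made-up sub-pixel offsets. Real implementations estimate the shifts from the frames themselves (or from OIS data) and deal with noise and moving objects far more carefully.

```python
# Toy shift-and-add super resolution: upscale each burst frame, undo its
# known sub-pixel shift on the finer grid, and average. Offsets are invented.
import numpy as np

def shift_and_add(frames, offsets, scale=2):
    """frames: list of HxW arrays; offsets: matching (dy, dx) sub-pixel shifts."""
    acc = None
    for frame, (dy, dx) in zip(frames, offsets):
        up = np.kron(frame, np.ones((scale, scale)))          # crude 2x upscale
        shift = (-int(round(dy * scale)), -int(round(dx * scale)))
        up = np.roll(up, shift, axis=(0, 1))                  # undo the jitter
        acc = up if acc is None else acc + up
    return acc / len(frames)

# Four frames, each nudged by half a pixel in a different direction.
lowres = [np.random.rand(240, 320) for _ in range(4)]
shifts = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
zoomed = shift_and_add(lowres, shifts)                        # 480 x 640 result
```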

On smartphones that already have a telephoto lens, like the Galaxy S23 series and Pixel 7 Pro, computational photography lets you go well beyond the lens’ native optical zoom.

Night mode / Night Sight

At night, gathering light becomes even more of a challenge for tiny smartphone camera sensors. In the past, low-light photography was pretty much impossible, unless you were willing to settle for dark and noisy shots. All of that changed with the advent of Night mode, which almost magically brightens up your image and reduces noise compared to a standard shot. As you can see in the comparison above, turning on night mode makes a massive difference.

According to Google, Night Sight on Pixel smartphones doesn’t just capture a burst of shots as in traditional image stacking; it also takes longer exposures over several seconds. The phone checks for motion too, and if it detects a moving subject during the burst, it reduces the exposure time for that particular frame to avoid motion blur. Finally, all of the shots are combined using the same technology as super-resolution zoom, which reduces noise and increases detail. Of course, there’s even more going on behind the scenes: a Google researcher once told us how certain street lights posed a major challenge for automatic white balance.
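That per-frame motion check can be pictured with a toy sketch like the one below: compare each preview frame to the previous one and shorten the exposure whenever the scene moves. The threshold and timings are invented for illustration and have nothing to do with Google’s actual tuning.

```python
# Hypothetical exposure planner: shorten a frame's exposure if motion is seen.
import numpy as np

def plan_exposures(preview_frames, base_exposure_s=1.0, motion_threshold=0.05):
    """preview_frames: list of HxW float images in [0, 1]."""
    exposures = [base_exposure_s]
    for prev, curr in zip(preview_frames, preview_frames[1:]):
        motion = np.mean(np.abs(curr - prev))       # crude frame-difference metric
        # Moving subject detected: use a shorter exposure to limit motion blur.
        moved = motion > motion_threshold
        exposures.append(base_exposure_s / 4 if moved else base_exposure_s)
    return exposures

frames = [np.random.rand(120, 160) for _ in range(6)]
print(plan_exposures(frames))
```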

Replace the whole sky

Here’s a fun application of computational photography. Using the AI Skyscaping tool in Xiaomi’s MIUI Gallery app, you can change the color of the sky after you capture a photo. From a starry night sky to a cloudy overcast day, the feature uses machine learning to automatically detect the sky and replace it with the mood of your choice. Of course, not every option will give you the most natural look (see the third photo above), but the fact that you can achieve such an edit with just a couple of taps is impressive in its own right.

Astrophotography mode

Pixel 6 Pro astrophotography
Rita El Khoury / Android Authority

Just like Night mode, Astrophotography mode takes image stacking one step further. The goal is to capture a starry night sky with pin-sharp detail and minimal noise. Traditionally, this would only be possible with dedicated equipment that moves your camera in sync with the stars as they drift across the sky. Computational photography, however, lets you achieve it with nothing more than a basic tripod.

On Pixel smartphones, the mode works by capturing up to 15 sets of 16-second exposures and combining them, all while accounting for the movement of the stars. Needless to say, it’s a lot more computationally demanding than basic image stacking or HDR, which uses an extremely short burst of 10-15 shots. We’ve also seen a few other smartphone makers like Xiaomi, realme, and vivo offer astrophotography modes of late.
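The payoff from all that stacking is easy to show numerically: averaging N noisy exposures cuts random noise by roughly the square root of N. The snippet below simulates a dim, flat scene with synthetic noise; only the 15-frame count is borrowed from the Pixel figures above.

```python
# Demonstration of noise averaging: 15 synthetic "exposures" of a flat scene.
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((200, 200), 0.1)                    # dim, featureless night sky
exposures = [clean + rng.normal(0, 0.05, clean.shape) for _ in range(15)]

single_noise = np.std(exposures[0] - clean)
stacked_noise = np.std(np.mean(exposures, axis=0) - clean)
print(f"single frame noise:   {single_noise:.4f}")
print(f"15-frame stack noise: {stacked_noise:.4f}  (~{single_noise / stacked_noise:.1f}x lower)")
```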

Face and Photo Unblur

Ever grabbed a quick shot only to later realize that the subject ended up blurry? That’s exactly what Face and Photo Unblur on Pixel smartphones aim to fix. The best part is that you don’t need to enter a special mode to take advantage of them.

On the Pixel 6 and newer, the camera app automatically detects when either the device or the subject is moving too quickly and activates Face Unblur. From that point on, it captures photos from both the ultrawide and primary lenses, with short and long shutter times respectively. When you tap the shutter button, the app intelligently stitches the two shots to give you a bright frame with pin-sharp focus on the subject’s face.
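Conceptually, that final stitch boils down to a masked blend, as in the sketch below: take the face from the sharp, short-exposure frame and everything else from the brighter, long-exposure one. The mask here is a hypothetical placeholder, and the real pipeline also has to align the two cameras’ different fields of view, which this sketch skips.

```python
# Hedged sketch of short/long exposure fusion guided by a face mask.
import numpy as np

def fuse_face(sharp_frame, bright_frame, face_mask):
    """All inputs are HxW float images in [0, 1]; face_mask is 1 over the face."""
    # Sharp pixels where the face is, brighter pixels everywhere else.
    return face_mask * sharp_frame + (1 - face_mask) * bright_frame

sharp = np.random.rand(480, 640)        # short exposure: sharp but dim and noisy
bright = np.random.rand(480, 640)       # long exposure: bright but motion-blurred
mask = np.zeros((480, 640))
mask[150:330, 250:390] = 1.0            # placeholder face region
result = fuse_face(sharp, bright, mask)
```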

In addition to Face Unblur, you can also use Photo Unblur on the Pixel 7 to post-process existing blurry photos.

Action pan and long exposure

With the Pixel 6 series, Google introduced computational photography modes dedicated to moving subjects.

Action Pan tries to mimic the look of tracking a moving subject against a stationary background. With a traditional camera, you’d need to be moving at the same speed as the subject to achieve this look. But the above shot was captured using a Pixel 6 Pro in Action Pan mode, which separates the subject from the background and adds a convincing-looking motion blur. Other manufacturers like vivo have also added similar modes of late.

The second mode, long exposure, is essentially the opposite: it adds a motion effect to a moving subject against a stationary background. Once again, the Pixel simplifies long exposure shots, as long as you prop your phone against a rock or use a simple smartphone photography accessory like a tripod. Either way, it increases the exposure time to capture light trails from moving objects like vehicles, waterfalls, a Ferris wheel, or stars in the sky.
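Averaged over time, a moving bright object smears into a streak while the static background stays put, which is essentially all a light-trail shot is. The toy simulation below shows the effect with a synthetic moving dot; no real camera data or Pixel-specific processing is involved.

```python
# Simulated long exposure: averaging a burst turns a moving dot into a streak.
import numpy as np

frames = []
for x in range(0, 200, 10):                  # a bright dot sweeping left to right
    frame = np.zeros((100, 200))
    frame[48:52, x:x + 4] = 1.0
    frames.append(frame)

light_trail = np.mean(frames, axis=0)        # the dot smears into a light trail
print("trail covers", int((light_trail > 0).sum()), "pixels")
```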

A brief history of computational photography

A comparison between the iPhone XS and Google's Night Sight.

Even though you may have only recently heard about it, computational photography has been around for several decades. However, we’ll only focus on the smartphone aspect of the technology in this article.

In 2013, the Nexus 5 debuted with Google’s now-popular HDR+ feature. At the time, the company explained that the HDR+ mode captured a burst of intentionally over- and under-exposed images and combined them. The result was an image that retained detail in both shadows and highlights, without the blurry results you’d often get from traditional HDR.

Google has pushed the HDR envelope on its smartphones for nearly a decade now.

Fast forward a few years and we were right on the cusp of a computational photography revolution. Improvements to the image signal processors (ISPs) in mainstream SoCs allowed smartphones to leverage on-device machine learning for faster and more intelligent processing.

For the first time ever, smartphones could classify and segment objects in a split second. Put simply, your device could tell if you’re photographing a plate of food, text, or a human being. This enabled features like simulated background blur (bokeh) in portrait mode and super resolution zoom. Google’s HDR+ algorithm also improved in terms of speed and quality with the launch of the Snapdragon 821 found in the first-generation Pixel smartphone.

Machine learning enabled features like night mode, panoramas, and portrait mode.

Apple eventually followed with its own machine learning and computational photography breakthroughs on the iPhone XS and 11 series. With Apple’s Photonic Engine and Deep Fusion, a modern iPhone shoots nine images at once and uses the SoC’s Neural Engine to determine how to best combine the shots for maximum detail and minimum noise.

We also saw computational photography bring new camera features to mainstream smartphones. The impressive low-light capabilities of the HUAWEI P20 Pro and Google Pixel 3, for instance, paved the way for night mode on other smartphones. Pixel binning, another technique, uses a high-resolution sensor and combines data from groups of adjacent pixels into one larger effective pixel for better low-light capabilities. This means you only get an effective 12MP photo from a 48MP sensor, but each output pixel gathers far more light.
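Pixel binning itself is simple arithmetic, as the sketch below shows: group the sensor grid into 2x2 blocks and sum each block, so four small photosites behave like one larger, more light-sensitive pixel. The tiny array here is just a stand-in for a real 8,000 x 6,000 (48MP) readout.

```python
# 2x2 pixel binning: trade resolution for light gathered per output pixel.
import numpy as np

def bin_2x2(sensor):
    """sensor: HxW array with even height and width."""
    h, w = sensor.shape
    # Sum each 2x2 block so four photosites act as one bigger pixel.
    return sensor.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

raw = np.arange(8 * 6, dtype=float).reshape(8, 6)   # stand-in for an 8000x6000 grid
binned = bin_2x2(raw)                               # 4x3 here; 4000x3000 (12MP) at scale
print(raw.shape, "->", binned.shape)
```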

Do all smartphones use computational photography?

Most smartphone makers, including Google, Apple, and Samsung, use computational photography. To understand how various implementations can vary, here’s a quick comparison.

On the left is a photo shot on a OnePlus 7 Pro with its default camera app. This image represents OnePlus’ color science and computational photography strengths. On the right is a photo of the same scene, but shot using an unofficial port of the Google Camera app on the same device. This second image broadly represents the software processing you’d get from a Pixel smartphone (if it had the same hardware as the OnePlus 7 Pro).

Right off the bat, we notice significant differences between the two images. In fact, it’s hard to believe we used the same smartphone for both photos.

Looking at the darker sections of the image, it’s evident that Google’s HDR+ algorithm prefers a more neutral look compared to OnePlus, where the shadows are almost crushed. There’s more dynamic range overall in the GCam image, and you can nearly peer into the shed. As for detail, both do a decent job, but the OnePlus veers a tad into over-sharpened territory. Finally, there’s a marked difference in contrast and saturation between the two images. This is common in the smartphone industry, as some users prefer vivid, punchy images that look more appealing at a glance, even if it comes at the expense of accuracy.

Even with identical hardware, different computational photography methods will yield different results.

This comparison makes it easy to see how computational photography improves smartphone images. Today, this technology is no longer considered optional. Some would even argue that it’s downright essential to compete in a crowded market. From noise reduction to scene-dependent tone mapping, modern smartphones combine a range of software tricks to produce vivid and sharp images that rival much more expensive dedicated cameras. Of course, all this tech helps photos look great, but learning to improve your photography skills can go a long way too. To that end, check out our guide to smartphone photography tips that can instantly improve your results.


FAQs

Is computational photography the same as computer vision?

No. Computational photography is a software-based technique used by smartphones to improve image quality. Computer vision, on the other hand, refers to using machine learning to detect objects and faces in images. Self-driving cars, for example, use computer vision to see the road ahead.

Do iPhones use computational photography?

Yes. Apple embraced computational photography years ago, introducing Smart HDR and Deep Fusion with the iPhone XS and 11 series.
