Google doesn't need state-of-the-art cameras to revolutionize photography
Over the last few years, team Android Authority has been endlessly debating the hardware versus software question when it comes to phone cameras and photography.
Some of us subscribe to Xiaomi, vivo, Sony, and HUAWEI's way of doing things. These companies keep upping their cameras' game by throwing excellent hardware at the problem. Larger sensors; bigger, brighter, and more plentiful pixels; two telephoto lenses instead of one; dedicated macro lenses; collaborations with Leica and ZEISS; and more: there's a new hardware solution to any photography problem you come across.
Other team members admire Google's approach and its results, i.e., applying software enhancements to create features out of thin air. Portrait mode, night mode, astrophotography, hybrid zoom, long exposures and action shots, face unblur, photo unblur, portrait light, and top shot: Google managed to implement all of these with nothing but software.
Then there are those, like me, who change their mind every few months. The Pixel 2, 3, and 4 convinced me that a lot can be done with software alone. Then, when the Pixel 5 launched with the same sensor as the Pixel 3, it was obvious that Google had milked everything it could out of that aging sensor. New hardware was evidently due. The Pixel 6 Pro and 7 Pro brought just that.
The Pixel 8 Pro, however, doesn't change much from the 7 Pro in terms of camera hardware. The sensors bring in more light and the ultrawide lens has a higher resolution; that's it. But if the Pixel 8 Pro lacks impressive hardware on paper, it more than makes up for it with quirky photography software chops that seemingly came out of nowhere.
Purposeful software to augment camera hardware
I didn't know I wanted Best Take and Audio Magic Eraser in my life. But now that they're here, I don't want to use a phone without them anymore. The new Magic Editor is miles ahead of the funky Magic Eraser, specifically when you ask it to remove an object and fill the empty space with something that makes more sense. The manual Pixel camera controls on the 8 Pro are something I've wanted for years too, though Google has taken its sweet time implementing them.
And the fact that the first three work on any photo or video I've ever taken, not just the ones I've recently shot with the Pixel 8 Pro, is an absolute W. Just like Photo Unblur, portrait blur, and portrait light did before them, these new features are there to elevate the pics I took 20 years ago just as much as the ones I took 20 seconds ago.
Google doesn't necessarily need new sensors to change the mobile photography game each year.
Google doesn't really need state-of-the-art hardware and mind-blowing spec sheets to revolutionize photography. We've known this for years; we should stop doubting it. Yes, it can do more and it can do it better with upgraded sensors, as my Pixel 8 Pro vs 7 Pro camera shootout shows, but it doesn't necessarily need new sensors to change the mobile photography game each year. It just needs a better Tensor processor that can handle more (many more) machine learning computations than the last one, which is what the Tensor G3 delivers. And when that isn't enough for local processing, it'll send your media to its servers to be edited there. That's why the new Video Boost mode with video Night Sight isn't live yet. Google will supposedly roll it out by December, but only to the Pixel 8 Pro.
Which brings me to the standout rebuttal to my argument: the base Pixel 8. Google is gatekeeping two important features, manual camera controls and Video Boost, making them 8 Pro exclusives even though both phones have the same processor and the same capabilities. Artificially holding the Pixel 8 back is a missed opportunity for Google to lean into the fact that camera hardware really doesn't matter as much.
Still, Pixel 8 and Pixel 8 Pro owners like me get to delve into a weird photography world where things can magically happen with a few taps here and there.
Best Take puts the power of Photoshop in my noob hands
There was a time when you had to shoot the same scene at multiple exposures on your phone and then spend some time in Photoshop or Lightroom to edit them into one pic. On-device HDR leapfrogged that in simplicity several years ago, and now Best Take does a similar thing for a series of photos of two or more people, merging the best face of each person into one image. I've gone through some of my past pics and found group shots where one or several people didn't have their best face on. Fixing that took less than a minute instead of an hour in Photoshop. (Uh, after having learned the skill to do it, of course.)
Put my husband and his two brothers in one room and mayhem ensues, so it’s almost impossible to take a tidy pic of all of them together. Don’t get me wrong, I love the spontaneity of each photo in the series above, but I also love that I can get a “normal” photo out of all of them.
Best Take isn't foolproof yet; it didn't detect my face in the photo above and somehow decided that my yellow wolf-masked friend wore two different facial expressions across the two pics. Oh, Google… It's a yellow wolf; its “face” and “expression” are identical in both pics.
Best Take isn't perfect, but it helps me get the most presentable group photo out of a bunch of snaps.
Aside from missing certain faces in some pics, it also has trouble merging faces from photos with different white balance settings (it doesn't fix the white balance), and it sometimes fails to blend facial features properly. So clearly, Best Take still has a ways to go to be a foolproof solution in all situations, but in most settings it'll work and work well, and it'll take a fraction of the time, and expertise, required to fix an image with other photo editing tools.
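If you're curious what a merge like this involves under the hood, here's a minimal Python sketch of the general technique using OpenCV. It's my own simplification, not Google's pipeline (which likely leans on far smarter face quality models): detect a face in each burst frame, score the crops by sharpness, and Poisson-blend the best one into the base shot. The file names, the single-subject assumption, and the Laplacian-variance scoring are all illustrative choices.

```python
import cv2
import numpy as np

# Haar cascade face detector that ships with OpenCV
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def face_and_score(img):
    """Return the first detected face rect and a sharpness score for it."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None, -1.0
    x, y, w, h = faces[0]  # simplification: one subject per frame
    crop = gray[y:y + h, x:x + w]
    # Variance of the Laplacian: a crude stand-in for "best face" scoring
    return (x, y, w, h), cv2.Laplacian(crop, cv2.CV_64F).var()

# Hypothetical burst; assumes the framing barely changes between frames
frames = [cv2.imread(p) for p in ("burst_0.jpg", "burst_1.jpg", "burst_2.jpg")]
base = frames[0].copy()

results = [face_and_score(f) for f in frames]
best = int(np.argmax([score for _, score in results]))

if best != 0 and results[best][0] is not None:
    x, y, w, h = results[best][0]
    donor_face = frames[best][y:y + h, x:x + w]
    mask = 255 * np.ones(donor_face.shape[:2], dtype=np.uint8)
    center = (x + w // 2, y + h // 2)
    # Poisson blending hides the seam between the donor face and the base shot
    base = cv2.seamlessClone(donor_face, base, mask, center, cv2.NORMAL_CLONE)

cv2.imwrite("best_take.jpg", base)
```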
Magic Editor is a much better Magic Eraser
When Magic Eraser launched on the Pixel 6, I spent several hours testing it and pushing it to its limits. The results were, let’s face it, quite gimmicky. The new Magic Editor is leaps and bounds ahead of it because it uses generative AI to fill in the blanks.
Magic Editor attempts to guess what's behind the erased element and fills it in as best it can.
Yes, Magic Editor can do more than just erase elements from your photo, but for me, this has been its main use case. Take the three examples below and see the difference. While Eraser just replaces the erased elements with blobs and blotches of adjacent colors, Editor figures out, more or less, what’s behind them and fills that in. It draws the road pathway in the first pic, attempts to complete the column in the second, and, more impressively, manages to create a realistic background in the third, linking the two sides of the fence together.
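That "figuring out what's behind them" step is generative inpainting, and you can play with an open-source equivalent yourself. Here's a rough sketch using Hugging Face's diffusers library and a public Stable Diffusion inpainting model, not the proprietary model Magic Editor actually runs: you supply the photo, a mask painted over the object to remove, and a prompt describing what belongs there instead. The file names and prompt are hypothetical.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Public inpainting checkpoint; Magic Editor's own model is proprietary
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# White pixels in the mask mark the region to erase and regenerate;
# black pixels are kept as-is
photo = Image.open("fence_photo.jpg").convert("RGB").resize((512, 512))
mask = Image.open("erase_mask.png").convert("L").resize((512, 512))

result = pipe(
    prompt="a continuous wooden fence in front of an empty field",
    image=photo,
    mask_image=mask,
).images[0]
result.save("object_removed.jpg")
```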
Audio Magic Eraser helps me focus on the sounds that matter
Better capture hardware is essential for better videos, but so is the post-processing afterward. Many times, I've found myself with a cute video ruined by someone else's speech, heavy winds, crowd noise, or car honks. I'm not a video or audio expert, and I don't know how to fix those, so I usually trim the video as best I can to minimize the nuisance and call it a day.
The new Audio Magic Eraser puts a tool in my hand that I never thought I could have as an audio and video amateur.
The new Audio Magic Eraser puts a tool in my hand that I never thought I could have. It detects and splits different sounds, so I can control them individually. In the falconry video below, you can hear the falconer’s explanations in the original version. Nice, but what if I wanted to focus on the actual sounds that the eagle is making? Tap a button, slide the speech down, and poof! You can barely hear the instructions anymore and all the focus is placed on the eagle. Magic?
There are still artifacts and I’m sure the result isn’t as good as what you’d get from proper audio editing, but the fact that this took less than a minute and I could do it with zero skills is amazing.
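Conceptually, this is audio source separation followed by a per-stem volume mixer. You can approximate the workflow with open-source tools; here's a sketch using Deezer's Spleeter to split a clip's audio into a "vocals" stem (the speech) and an "accompaniment" stem (everything else), then remixing with the speech ducked. Spleeter is trained on music, so it's only a loose stand-in for whatever sound classifier Google built, but the split-then-reweight idea is the same, and the file names here are hypothetical.

```python
import soundfile as sf
from spleeter.separator import Separator

# The 2stems model separates "vocals" from "accompaniment" (everything else)
separator = Separator("spleeter:2stems")
separator.separate_to_file("falconry_clip.wav", "stems/")

# Spleeter writes stems/<clip_name>/vocals.wav and accompaniment.wav
speech, rate = sf.read("stems/falconry_clip/vocals.wav")
ambience, _ = sf.read("stems/falconry_clip/accompaniment.wav")

# The "slider": duck the falconer's speech to 10% and keep the eagle at full
mix = 0.1 * speech + 1.0 * ambience
sf.write("falconry_ducked.wav", mix, rate)
```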
The power of post-processing vs. original captures
Cropping, rotating, and aligning have been possible for decades, yet I still strive to shoot the most visually pleasing, well-aligned photo from the get-go. Photo Unblur has existed for a year, but I still stabilize my hand as much as I can before snapping a photo. Other photo editing tools for brightness, highlights, and shadows are quite old, and yet I will always try to compose a shot so it doesn’t need any of those edits later.
That's why I don't think I'll rely on post-processing to fix all of my photos and videos. I haven't done it yet and I won't start now. I won't stop trying to get a group of people to look their best in the same shot, waiting until people have moved out of the frame (or moving to avoid them), or doing my best to capture a video with as few audio disturbances as possible.
I'd be fine with a biennial tick-tock rhythm of hardware and then software photography leaps.
But the fact that I have these tools at my disposal now, should I ever need them, is quite mind-blowing. And I love that Google is making the effort to propel photo and video editing forward. This year's Pixels aren't pushing the envelope of media capture; they're pushing the envelope of editing. And maybe next year we'll see another hardware leap. I'd be fine with a biennial tick-tock rhythm of hardware and software improvements, as long as we keep moving the needle forward.