Hoping a new GPU means the Pixel 10 will be a gaming beast? Prepare to be disappointed
We’re barely through processing the arrival of the Pixel 9 series, yet our latest leaks are already looking ahead to the Pixel 10 and Google’s next-gen Tensor G5 processor. While the future chip’s CPU performance appears to be taking a sideways step, leaked specs suggest a bigger change in the graphics department with the adoption of Imagination Technologies’ DXT architecture — specifically a two-core DXT-48-1536 clocked at 1.1GHz.
Imagination Technologies might not be a name you’re super familiar with in today’s mobile chipset market. You’ll find its GPUs in the odd mid-range design, like 2022’s MediaTek Dimensity 930, but you’re more likely to remember it from earlier iPhone silicon. Imagination’s PowerVR architecture powered models up to the A10 Fusion before Apple licensed its IP for more bespoke in-house GPUs. A return to flagship silicon with Google’s Tensor G5 is an exciting development.
How does Imagination’s DXT architecture stack up?
To be blunt, Google’s Tensor series underwhelms in the graphics department, languishing at least two generations behind the fastest in the business in terms of performance. It’s also been slow to adopt new GPU designs and continues to dodge ray-tracing support, a niche feature but one that we now expect in a flagship-tier mobile GPU. That looks set to change, at least somewhat, with the Tensor G5 and the DXT-48 GPU.
I’m not going to fixate on specific performance numbers; it’s far too early for that, and the DXT architecture is an unknown quantity when it comes to mobile benchmarks and titles. Still, a two-SPU core “High Configuration” DXT setup boasts 1,536 FP32 FLOPs per clock, which works out to 1.69 TFLOPS at the G5’s reported 1.1GHz clock speed. While comparing TFLOPS across GPU architectures is fraught with caveats, there are benchmark numbers floating about online for a very rough comparison.
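If you want to sanity-check that figure, the math is simple. Here’s a quick back-of-envelope sketch using only the leaked numbers above (it assumes the GPU sustains its peak FP32 rate, which real workloads rarely do):

```python
# Back-of-envelope peak throughput from the leaked DXT-48-1536 figures.
flops_per_clock = 1536   # FP32 operations per cycle across both cores (leaked spec)
clock_hz = 1.1e9         # reported Tensor G5 GPU clock speed

tflops = flops_per_clock * clock_hz / 1e12
print(f"Peak FP32 throughput: {tflops:.2f} TFLOPS")  # ~1.69 TFLOPS
```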
Qualcomm’s 1.7 TFLOP Snapdragon X Plus GPU scores around 3,200 in Wild Life Extreme. Somewhere in that ballpark would make the Tensor G5 about 20% to 25% faster than its predecessor, at least in this test. That would be the most significant leap in the Pixel’s graphics performance in generations, but we’d expect an even larger jump if Tensor adopted Arm’s latest Mali-G925 architecture on 3nm. Regardless, that works out slower than 2023’s Snapdragon 8 Gen 2 and, therefore, well off the pace of the fastest gaming phones you can buy today and upcoming 2025 rivals packing the powerhouse Snapdragon 8 Elite.
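As a rough cross-check on that claim, here’s the same sort of napkin math. The ~3,200 figure is the Snapdragon X Plus score mentioned above; the Tensor G4 baseline of roughly 2,600 is my own ballpark assumption rather than a number from Google or the leak:

```python
# Rough check of the "20% to 25% faster" estimate. The Tensor G4 baseline
# of ~2,600 in Wild Life Extreme is an assumption for illustration only.
g5_ballpark = 3200   # Wild Life Extreme score of the 1.7 TFLOP Snapdragon X Plus GPU
g4_assumed = 2600    # assumed Tensor G4 result (not from the article)

gain_percent = (g5_ballpark / g4_assumed - 1) * 100
print(f"Estimated generational gain: {gain_percent:.0f}%")  # ~23%
```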
Google’s internal figures, as seen by Android Authority, suggest performance could jump a bit higher. The graphs aren’t well labeled but point to a 35% to 60% gain over the G4, depending on the benchmark. That would be more significant, but even Google’s data shows it landing well short of Apple and Qualcomm’s latest, offering performance that’s still not as zippy as leading 2023 silicon.
The Tensor G5's GPU will see the biggest boost in generations, but it won't be enough to catch the leaders.
The Tensor G5’s expected GPU won’t see it contesting the performance crown, then, but sustained performance might still make for an interesting comparison point. Thankfully, the DXT architecture sports some interesting features that should help close the gap on its rivals.
Ray tracing remains optional with DXT, as it is with Arm’s Mali/Immortalis split. Google is opting for the smallest Ray Acceleration Cluster (RAC) unit configuration it can (a DXT-48-1536-0.5RT2), with a half RAC in each core. Again, the G5 is not aiming for beastly performance.
Still, Imagination sports what it calls the industry’s only Level 4 ray tracing implementation, which might see it punch above its weight. The design handles full ALU offloading (freeing up GPU rendering resources), BVH processing (much faster intersection calculations), and Ray Coherency Sort (group processing of nearby rays) in hardware, thereby accelerating ray tracing performance. Neither Arm’s Immortalis nor Qualcomm’s Adreno supports BVH or Ray Coherency in hardware. That said, we’re yet to test Imagination’s long-touted ray-tracing claims, so I won’t set my expectations too high.
Why switch from Mali after all these years?
Imagination’s DXT white paper contains some other interesting tidbits. The architecture supports 2×4 and 4×4 Fragment Shading Rate (aka Variable Rate Shading), which you’ll already find in the current Tensor’s Arm Mali-G715 and other high-end platforms. There’s also industry-standard ASTC texture compression, here with HDR support. The key takeaway is that this is a GPU architecture that’s very competitive from a feature standpoint.
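To give a sense of why Fragment Shading Rate matters, here’s a simplified sketch. It counts fragment shader invocations for a 1080p frame at a uniform shading rate; in practice, games apply coarser rates only to regions where the quality loss won’t be noticed:

```python
# Simplified Fragment Shading Rate illustration: one shader invocation covers an
# N x M block of pixels, so coarser rates slash shading work (at some quality cost).
width, height = 1920, 1080

for rate_x, rate_y in [(1, 1), (2, 4), (4, 4)]:
    invocations = (width // rate_x) * (height // rate_y)
    print(f"{rate_x}x{rate_y} rate: {invocations:,} fragment shader invocations")
```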
We also know the new GPU supports virtualization, which is not found in current Tensor chips. This enables accelerated graphics inside a virtual machine, potentially allowing Google to bring one of its numerous virtualization-based features to the Pixel 10. Perhaps new features are one of the reasons for switching GPU vendors?
One of the more interesting aspects of Imagination’s GPU architecture is its 128-bit Superscalar ALUs, combined with a Decentralised Multi-Core approach to GPU cores. The former means the arithmetic logic units process multiple pieces of 32- or 16-bit data at once, with the added perk that wide registers are highly adaptable for a range of high- and low-precision data types.
Imagination has a very different GPU architecture to Mali and Adreno.
This is a different approach from other mobile GPU architectures, where you’ll typically find dedicated 32- and 16-bit ALUs working concurrently on the most common graphics data sizes, with smaller data sizes optionally supported within those ALUs for machine learning. The traditional setup is good for graphics and not bad for lower bit-depth machine learning workloads either. However, it can’t leverage single instruction, multiple data (SIMD) execution across a wider register, which helps conserve memory bandwidth and cache resources, both of which are always at a premium in mobile GPUs.
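As a software-only analogy (and a big simplification of what the hardware actually does), you can picture a 128-bit register as the same 16 bytes of storage reinterpreted at different lane widths:

```python
import numpy as np

# Illustration only: the same 128 bits (16 bytes) of storage viewed at
# different lane widths, much as a wide ALU can repack FP32, FP16, or
# lower-precision ML data into one register.
register = np.zeros(16, dtype=np.uint8)   # 128 bits of raw storage

fp32_lanes = register.view(np.float32)    # 4 x 32-bit lanes
fp16_lanes = register.view(np.float16)    # 8 x 16-bit lanes
int8_lanes = register.view(np.int8)       # 16 x 8-bit lanes (common for ML)

print(len(fp32_lanes), len(fp16_lanes), len(int8_lanes))  # 4 8 16
```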
Pairing this with two GPU cores that work independently also means potentially higher performance and/or lower power consumption when crunching through graphics and compute workloads, thanks to parallel processing efficiencies. In other words, you can have both cores contribute to a single workload or to separate workloads as fast as possible, or power down a core to save energy.
Additional efficiency savings for graphics and/or machine learning workloads may have caught Google’s eye. That said, these cores can’t share internal resources, which can lead to bottlenecks or underutilization compared to a unified shader architecture (such as Mali), so it’s not without its risks. We’ll just have to wait and see how it performs.
Google may leverage DXT's novel architecture for AI workloads as well.
Speaking of AI, I’ve crunched some numbers seen in Google’s internal documents and estimate that the DXT-48 is roughly 5% faster at FMA operations than the Mali-G715, which isn’t a whole lot. However, it’s possible that a larger 128-bit register means the DXT can still get more done with each operation through SIMD and/or better bandwidth utilization. It’ll be interesting to see if Google leverages the GPU for AI workloads, especially as its TPU is only looking at a 14% gain next generation.
Still, I’m not convinced that compute workloads or gaming performance are the reason for switching — DXT doesn’t look like it’s going to beat the competition here. The true reason for the swap probably lies somewhere in the balance between IP costs, energy efficiency, and the feature set on offer. Either way, Google seems to have decided that Imagination Technologies is the better option going forward.
Is Google making the right choice with the Tensor G5?
Unfortunately, those who hoped that the switch to a new GPU would propel the Tensor G5 and Pixel 10 up the graphics leaderboards will be disappointed. While a gain of 25% to potentially as much as 60% is very welcome, the chip will still lag two years behind the leaders. Worse, its GPU looks set to stagnate again with the Tensor G6, leaving the Pixel 11 even further off the pace.
However, Tensor G5 is gaining a few tools in the transition. Ray-tracing support, a different architecture for GPU-bound tasks, and GPU virtualization mean that the Pixel 10 certainly won’t be short on features and will still be an upgrade for gamers. It might just help Google offer a few more of those interesting Pixel-exclusive capabilities that keep the series in the spotlight.
Ultimately, the Tensor G5's performance looks set to fall further behind the market leaders.
But that’s getting ahead of ourselves; the Pixel 10 remains almost a year away and the competition is already forging ahead. While Google is preparing some CPU and GPU changes with the Tensor G5, the chip is still, unfortunately, shaping up to be some way behind the fastest processors in the business. Will Google’s AI lead be enough to keep its competitors at bay? I’m increasingly concerned that it won’t.