Teraflops are a measure of a computer's raw computing performance. One teraflop equals one trillion floating-point operations per second.
Consoles vs. PCs: The Beasts in the Den
The Xbox Series X GPU is based on AMD’s RDNA 2 architecture and will be capable of 12 teraflops. Meanwhile, Sony’s PlayStation 5 (also based on AMD’s RDNA 2 architecture) will have a GPU with 10.28 teraflops.
That’s a whole lot of flops-ing going on, and it’s comparable to, or better than, what high-end consumer PC graphics cards offer right now.
As of April 2020, the Radeon RX 5700 XT (around $400 at this writing) is one of the top AMD cards, with a 9.75-teraflop GPU. The NVIDIA GeForce RTX 2080 Ti ($1,300 to $1,500 at this writing), meanwhile, is capable of 13.4 teraflops. There’s also NVIDIA’s Titan RTX, with a whopping 16.31 teraflops. But, at well over $2,000 at this writing, this one is out of reach for most gamers.
Let’s not get ahead of ourselves, though. Let’s talk about what teraflops are, and why this specification is important for graphics.
What Are FLOPS?
FLOPS stands for floating-point operations per second. Floating-point arithmetic is the common way to crunch numbers in game development. Without getting too lost in the weeds, floating-point operations make it easier for computers to work more efficiently with a wider range of numbers.
The most common way to express flops is in the single-precision, floating-point format, FP32. This means the computer is using 32 bits to store data in that format. There’s also a half-precision format that takes up 16 bits (FP16) instead of 32. The most common way to express teraflops for GPUs right now is single-precision. However, AMD used FP16 in its Vega GPUs, and RDNA 2 allows for FP16.
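To see what that precision difference actually looks like, here’s a small sketch using Python’s standard `struct` module, which can pack a number into single-precision (`"f"`, 32 bits) or half-precision (`"e"`, 16 bits) and unpack it again:

```python
import struct

def roundtrip(fmt, value):
    """Pack a Python float into the given binary format, then unpack it,
    revealing how much precision that format actually keeps."""
    return struct.unpack(fmt, struct.pack(fmt, value))[0]

pi = 3.14159265358979

fp32 = roundtrip("<f", pi)  # single precision (FP32): 32 bits
fp16 = roundtrip("<e", pi)  # half precision (FP16): 16 bits

print(fp32)  # ~3.1415927 (accurate to about 7 decimal digits)
print(fp16)  # 3.140625   (accurate to only about 3 decimal digits)
```

Half-precision keeps only about three decimal digits of pi, which is why FP16 teraflop figures (like AMD quoted for Vega) aren’t directly comparable to FP32 figures.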
In the real world, floating-point is much easier for game creators to use with 3D graphics. If games relied on fixed-point operations, as the original PlayStation did, many problems would result. Game visuals would look and behave poorly, and the code would, generally, be less efficient.
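For a rough idea of the limitation, here’s a minimal sketch of 16.16 fixed-point math (16 integer bits, 16 fractional bits, similar in spirit to what the original PlayStation’s geometry hardware used; the helper names here are illustrative, not from any real SDK):

```python
SCALE = 1 << 16  # 16.16 fixed point: 16 integer bits, 16 fractional bits

def to_fixed(x: float) -> int:
    return int(round(x * SCALE))

def to_float(a: int) -> float:
    return a / SCALE

def fixed_mul(a: int, b: int) -> int:
    # Multiplying two 16.16 numbers yields extra fractional bits;
    # shift back down to stay in 16.16 format.
    return (a * b) >> 16

# Works fine for modest values...
print(to_float(fixed_mul(to_fixed(1.5), to_fixed(2.0))))  # 3.0

# ...but the smallest representable step is 1/65536, so fine detail is lost:
print(to_float(to_fixed(0.00001)))  # 1.52587890625e-05, not 0.00001
```

A floating-point number, by contrast, keeps roughly the same relative precision whether the value is tiny or huge, which is exactly what 3D transforms need.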
So, hurrah for floating-point operations!
FLOPS Inflation Led to TFLOPS
Games have to process a ton of data, and that’s why flops are an important benchmark. The more flops a GPU can do, the faster the data can be processed, and the more computing power there is for running games.
The original Sega Dreamcast (1999) had 1.4 gigaflops, meaning it could process up to 1.4 billion floating-point operations per second. A few years later, the original Xbox (2001) was rocking 20 gigaflops (20 billion flops). The PlayStation 3 (2006) had close to twelve times that, at 230.4 gigaflops.
Each console got significantly better than its predecessor, due, in large part, to graphics computing power. The flops specification is a quick way to get a sense of how much power is under the hood of a particular console’s graphics processor, or GPU.
The 12 teraflops of computing power in the upcoming Xbox Series X means it’s capable of up to 12 trillion floating-point operations per second. The PlayStation 5, meanwhile, maxes out at 10.28 trillion flops.
If we relied solely on flops as a measure, we’d conclude the Xbox Series X is going to be better than the PlayStation 5—which would be a mistake.
How Important Are TFLOPS?
The flops count matters between console generations, but not as much when that gap is narrower.
Even comparing the teraflops count for modern graphics cards, like the AMD Radeon 5700 XT and the GeForce RTX 2080 Ti, can be misleading. The new consoles will use AMD’s new RDNA 2 architecture. New architecture does usually mean better performance than previous cards, even with similar hardware specifications.
Like anything else in computing, though, it’s all about the implementation. The CPU specs, RAM, and even software, make all the difference. When you put it all together, the consensus is that the new consoles should outperform most PC gaming rigs currently out there.
The Xbox Series X and the PlayStation 5 will have eight-core, sixteen-thread processors. This hits PC gaming levels of awesome, and it’s been a long time in coming to set-top boxes. Both consoles also plan to use NVMe SSDs, which means faster load times for games and all-around improved responsiveness.
The new console GPUs will also have an impressive number of compute units (CUs) at high clock speeds: 52 CUs at 1.825 GHz for the Xbox, and 36 CUs at 2.23 GHz for the PlayStation. For comparison, the Radeon 5700 XT has 40 CUs at a 1.6 GHz base clock.
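Those teraflop figures follow directly from the CU counts and clock speeds. Each RDNA compute unit contains 64 shader processors, and each shader can perform a fused multiply-add (two floating-point operations) per cycle, so a back-of-the-envelope sketch looks like this:

```python
SHADERS_PER_CU = 64  # shader (stream) processors per RDNA compute unit
OPS_PER_CYCLE = 2    # a fused multiply-add counts as two FP32 operations

def teraflops(cus: int, clock_ghz: float) -> float:
    """Peak FP32 throughput in trillions of operations per second."""
    return cus * SHADERS_PER_CU * OPS_PER_CYCLE * clock_ghz / 1000

print(round(teraflops(52, 1.825), 2))  # Xbox Series X -> 12.15
print(round(teraflops(36, 2.23), 2))   # PlayStation 5 -> 10.28
print(round(teraflops(40, 1.905), 2))  # Radeon 5700 XT at boost clock -> 9.75
```

Note that the 9.75-teraflop figure quoted earlier for the 5700 XT comes from its roughly 1.9 GHz boost clock, not the 1.6 GHz base clock; the same card at base clock works out to about 8.19 teraflops.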
Of course, AMD’s RDNA 2 won’t live solely inside the new consoles. Once it hits PC graphics cards (along with NVIDIA’s expected Ampere architecture), any advantage the consoles have over PCs will disappear.
TFLOPS Aren’t the Only Thing That Matters
There’s no doubt the new consoles will be powerful beasts. Microsoft and Sony say their consoles will hit 60 frames per second at 4K resolution in AAA titles (typically the most demanding games for graphics).
Microsoft is also looking to hit 120 frames per second at 4K for esports games, which are usually less demanding in terms of graphics. However, higher refresh rates mean a smoother picture and an easier time understanding what’s going on in the field of play. Given the chaos that ensues in esports, smoother visuals are a big plus.
In addition to improved performance at higher resolutions, the new consoles will also support ray tracing. We first saw this new technology in NVIDIA graphics cards. Ray tracing boosts lighting effects within a game, often with dramatic improvements. It also offers a more dynamic, lifelike gaming environment in which shadows and reflections are more realistic. The computing power (teraflops) available inside the upcoming GPUs will also help power these new features.
Teraflops aren’t the only specification you should pay attention to. However, the figure does give you a general idea of how a console’s graphics power measures up against other hardware—past and present.