Also sourced from Chiphell, the Radeon RX 9070 XT is expected to command a price tag ranging from $479 for AMD’s reference card to roughly $549 for AIB units, depending on the exact model. At that price, the Radeon RX 9070 XT easily undercuts the RTX 5070, which will start at $549, while offering 16 GB of VRAM, albeit of the older GDDR6 spec.
I like the 16 GB of memory vs 12 GB on the 5070, though it is somewhat slower memory. Hard to say how much that will matter in the real world. I’m also dual-booting Linux, so AMD wins there.
For me it will come down to performance comparisons once they are released. Huge note: I don’t really care much about ray tracing. It’s cool tech, but not a big enough difference for me in the vast majority of games.
You’re still seeing ray tracing as a graphics option instead of what it actually is: something that makes game development considerably easier while dramatically improving lighting, provided it replaces rasterized graphics completely. Lighting levels the old-fashioned way is a royal pain in the butt: time- and labor-intensive, slow and error-prone. The rendering pipelines required to pull it off convincingly are a rat’s nest of shortcuts and arcane magic compared to the elegant simplicity of ray tracing.
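To make the "elegant simplicity" point concrete, here's a toy sketch in Python (my own illustration, not code from any engine; the scene, function names and the simple Lambertian shading are just assumptions for the example): a hard shadow costs exactly one extra ray query, with no shadow maps, light baking or screen-space tricks involved.

```python
# Toy example: direct lighting with a shadow ray against a list of spheres.
# Nothing here comes from a real engine; it's only meant to show how little
# machinery "correct" shadows need once you can trace rays.
import math

def ray_sphere(origin, direction, center, radius):
    """Distance t to the nearest hit along a unit-length ray, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-4 else None

def shade(point, normal, light_pos, spheres):
    """Direct lighting at a surface point: one shadow ray toward the light."""
    to_light = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(x * x for x in to_light))
    to_light = [x / dist for x in to_light]
    # If any sphere blocks the shadow ray before the light, the point is in shadow.
    for center, radius in spheres:
        t = ray_sphere(point, to_light, center, radius)
        if t is not None and t < dist:
            return 0.0
    # Otherwise simple Lambertian falloff.
    return max(0.0, sum(n * l for n, l in zip(normal, to_light)))

# A sphere sits between the surface point and the light, so this prints 0.0 (shadowed).
spheres = [([0.0, 3.0, 0.0], 1.0)]
print(shade([0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 5.0, 0.0], spheres))
```

Compare that shadow query with what a rasterizer has to do for the same result (render a depth map from the light, manage its resolution and bias, filter the edges), and the "rat's nest of shortcuts" comment should make sense.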
In other words: It doesn’t matter that you don’t care about it, because in a few short years, the vast majority of 3D games will make use of it. The necessary install base of RT-capable GPUs and consoles is already there if you look at the Steam hardware survey, the PS5, Xbox Series and soon Switch 2. Hell, even phones are already shipping with GPUs that can do it at least a little.
Game developers have been waiting for this tech for decades, as has anyone who has ever gotten a taste of actually working with or otherwise experiencing it since the 1980s.
My personal “this is the future” moment was the groundbreaking real-time ray tracing demo heaven seven from the year 2000:
https://pouet.net/prod.php?which=5
I was expecting it to happen much sooner, though, by the mid-to-late 2000s at the latest, but rasterized graphics and the hardware that runs them were improving at a much faster pace. That demo runs in software, entirely on the CPU, which obviously had its limitations. I got another delicious taste of near-real-time RT with Nvidia’s iRay rendering engine in the early 2010s, which could churn out complex scenes with PBR materials (instead of the simple, barely textured geometric shapes of heaven seven) at just a few seconds per frame on a decent GPU with CUDA, and even in real time on a top-of-the-line card. Even running entirely on the CPU, the engine was about as fast as a conventional CPU rasterizer. I would sometimes preach about how this was a stepping stone towards the tech appearing in games, but people rarely believed me back then.
I agree that it’s the future, and once it becomes commonplace I’ll definitely be interested. I don’t think that will happen this generation, but I could be wrong.
I used to think VRAM wasn’t a big deal, but my 10 GB 3080 is already useless for some newer games, namely Indiana Jones.
Not literally unplayable, but severely hamstrung, which is not OK for a high-end card that’s only 2 generations old (soon to be 3, I guess).