The 5800X3D has the same core architecture as the 5800X but it runs at 11% lower base and 4% lower boost clocks. The lower clocks are in exchange for an extra 64MB of cache (96MB up from 32MB) and around 40% more money. For most real-world tasks performance is comparable to the 5800X. Cache sensitive scenarios such as low res. canned game benchmarks with a 3090-Ti ($2,000 USD) benefit at the cost of everything else. Be wary of sponsored reviews with cherry picked games that showcase the wins, conveniently ignore frame drops and gloss over the losses. Also watch out for AMD’s army of Neanderthal social media accounts on reddit, forums and youtube, they will be singing their own praises as usual. Instead of focusing on real-world performance, AMD’s marketers aim to dupe consumers with bankrolled headlines. The same tactics were used with the Radeon 5000 series GPUs. Zen 4 needs to bring substantial IPC improvements for all workloads, rather than overpriced “3D” marketing gimmicks. New PC builders have little reason to look further than the $260 12600K which, at a fraction of the price, offers better all round performance in gaming, desktop and workstation applications. Users with an existing AM4 build should wait just a few more months for better performance at lower prices with Raptor Lake or even Zen 4. The marketers selling expensive “3D” upgrades today will quickly move onto Zen 4 (3D) leaving unfortunate buyers stuck on an overpriced, 6 year old, dead-end, platform. [Mar '22 CPUPro]
Hopefully this will hurt them to the point where they go out of business. Just look at their review of the 5800X3D, it’s so unreal.
Jesus
What’s scary is that I think the owner of userbenchmark actually believes that statement. Which might explain how he’s so out of touch that he thinks his own crap doesn’t stink and deserves to be locked behind a subscription. I’m just sad that there might be a not insignificant number of people that pay for it.
I’m certain he must’ve lost a lot of money betting against AMD on the stock market right around the time of Zen 1, and he never got over it.
The real Neanderthal social media account is the one writing that review.
Instruction and data caches have a real, tangible benefit. Although there is a point of diminishing returns, more L3 cache is absolutely worth a 10% clock speed trade-off for consumer systems. Fetching memory from the bus is an order of magnitude slower than fetching from cache, and the processor has to perform other work or stall while it’s waiting for that.
But, knowing the bias of the reviewer, they’re probably running DDR4 at 5200 MT/s (2000 over JEDEC specs) on their Intel systems to make up for the lack of cache while thinking, “just buy a more expensive processor and RAM, you brain-dead cretins.”
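To put rough numbers on the cache point above: a back-of-the-envelope average memory access time (AMAT) calculation shows why trading roughly 10% clock speed for a much larger L3 can pay off in memory-bound code. The latencies and hit rates in the sketch below are assumed, illustrative values, not measurements of any particular CPU:

    /* Back-of-envelope average memory access time (AMAT) comparison.
     * All latencies and hit rates are illustrative assumptions, not
     * measured values for any specific CPU. */
    #include <stdio.h>

    int main(void) {
        const double l3_hit_ns = 10.0;  /* assumed L3 hit latency */
        const double dram_ns   = 80.0;  /* assumed DRAM latency, ~an order of magnitude slower */

        /* Assumed L3 hit rates for a cache-sensitive workload. */
        const double hit_small_l3 = 0.80;  /* smaller cache, higher clock */
        const double hit_big_l3   = 0.95;  /* 3x larger cache, ~10% lower clock */

        /* AMAT = hit_rate * hit_latency + miss_rate * memory_latency */
        double amat_small = hit_small_l3 * l3_hit_ns + (1.0 - hit_small_l3) * dram_ns;
        double amat_big   = hit_big_l3   * l3_hit_ns + (1.0 - hit_big_l3)   * dram_ns;

        printf("smaller L3, higher clock:     %.1f ns average access\n", amat_small); /* 24.0 */
        printf("larger L3, ~10%% lower clock: %.1f ns average access\n", amat_big);   /* 13.5 */

        /* A memory-bound loop spends most of its time waiting on these accesses,
         * so the gap here dwarfs a 10% core-clock difference. */
        return 0;
    }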
I mean it’s kinda amazing that there’s someone looking at a 14th gen Intel CPU sucking back 200+ watts, while it gets spanked by a 7800X3D running at 65 watts, and thinking “AMD is hurting consumers”. That’s some next level shit.
Well said. The only thing hurting consumers is the reviewers omitting information or spreading misinformation.
Ok so I am about to build a new rig, and looking at the specs the X3D does seem less powerful and more expensive than the regular 7950X.
While I completely agree that this guy seems extremely biased and that he comes off like an absolute dickbag, I don’t think the essence of his take is too far off base if you strip off the layers of spite.
Really, it seems like the tangible benefit of the X3D that most people will realize is that it offers similar performance with lower energy consumption, and thus lower cooling requirements. Benchmarks from various sources seem to bear this out as well.
It seems like a chip that in general performs on par with the 7950x but with better efficiency, and if you have a specific workload that can benefit from the extra cache it might show a significant improvement. Higher end processors these days already have a fuckton of cache so it isn’t surprising to me that this doesn’t benchmark much better than the cheaper 7950x.
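For anyone wondering what “a specific workload that can benefit from the extra cache” looks like, the classic demonstration is a pointer-chase microbenchmark: latency per load jumps once the working set no longer fits in the last-level cache. The sketch below is a rough illustration only; the buffer sizes and step count are arbitrary assumptions and the exact numbers will vary by machine:

    /* Toy pointer-chase sketch: walks a random cyclic permutation so every
     * load depends on the previous one, exposing raw memory latency.
     * Sizes and step counts are arbitrary; results vary by machine. */
    #define _POSIX_C_SOURCE 200112L
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static double ns_per_load(size_t n_elems, size_t steps) {
        size_t *next = malloc(n_elems * sizeof *next);
        if (!next) { perror("malloc"); exit(1); }

        /* Build a single-cycle random permutation (Sattolo's algorithm). */
        for (size_t i = 0; i < n_elems; i++) next[i] = i;
        for (size_t i = n_elems - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }

        struct timespec t0, t1;
        volatile size_t idx = 0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t s = 0; s < steps; s++) idx = next[idx];  /* dependent loads */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        free(next);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
        return ns / (double)steps;
    }

    int main(void) {
        srand(42);
        /* Working sets from 4 MB to 256 MB: latency per dependent load should
         * jump once the array spills out of the last-level cache. */
        for (size_t mb = 4; mb <= 256; mb *= 2) {
            size_t n = mb * 1024 * 1024 / sizeof(size_t);
            printf("%4zu MB working set: %6.1f ns per load\n", mb, ns_per_load(n, 20000000));
        }
        return 0;
    }

Compiled with something like gcc -O2 chase.c, the per-load time stays near cache latency while the array fits in L3 and climbs toward DRAM latency once it spills over, which is exactly the kind of workload the extra stacked cache helps.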
The only reason I can think for a site to do this is that they were about to go under already. This will absolutely tank them as there are free alternatives.
Wat
Fellow AMD Neanderthal Army soldiers: any idea when I get my cool uniform and …paycheck?
Uhh… Aren’t… Aren’t these two statements kinda contradictory?
Not if you remember that the writers are being paid by Intel. Then, it all comes together.
No no, you see; it performs reasonably consistently under varying real-world conditions, but for a CPU to truly shine it needs to handle all workloads, including unrealistic synthetic ones.
You’re expecting rationality from someone who just made crazy statements because their feelings are hurt.