AMD
I use Linux, so not Nvidia. AMD is great. Good power for the money.
Supposedly Nvidia has become a lot better on Linux lately. They finally dropped their weird framebuffer API or whatever (the one that caused the horrible Wayland compatibility and that heated Linus Torvalds moment), and I think they even made their Linux drivers open source.
They do support their driver, yes, but it will never be as good as long as it’s proprietary. The open Nvidia kernel module isn’t ready yet and still depends on proprietary blobs.
Historically speaking, Nvidia was always the best for Linux. Nvidia’s success story with Linux traces back to around 2004, with state-of-the-art 3D capabilities (albeit for arcade machines). At that time, ATi Radeon 3D capabilities on Linux were sub-par.
The problem with Linux+Nvidia is that it was never “the Linux way”… but always the “Nvidia way”.
The Linux way is… flexibility: it means you can use whatever kind of Linux you want and the drivers work straight out of the box (which basically requires open-source drivers). Nvidia, instead, always pushed a fixed binary blob that required a specific kernel and a rigid environment.
The modern AMD support for Linux is mostly “the Linux way”, and that’s why the Linux community loves AMD more than Nvidia.
Given hardware parity between Nvidia and AMD, the Linux crowd will always prefer AMD, because AMD means you can use any kind of Linux distro and still have an uncompromised gaming experience.
I must be in the minority, but I’ve been digging Intel’s Arc GPUs. For their price point, and given that I don’t play bleeding-edge AAA games, they’ve actually done pretty well. Additionally, I’m tired of Nvidia’s price gouging and AMD following suit; I want to support a disruptive third party. Their driver support gets better with every release, and I can’t wait to see their next generation of cards.
Too many edge-case issues, especially for someone who plays a lot of indie titles and uses Linux. Also, they kinda just went for the low-end market. If they launched something in the upper midrange I’d be more interested (assuming they improved on a lot of fronts, of course).
I’d get an Arc when upgrading, but I have a 6800 XT, so it wouldn’t be an upgrade for me.
Didn’t even know Intel made dedicated GPUs. The integrated ones have always been positively terrible.
On Windows, Nvidia without thinking twice. On Linux, it depends on RDNA 4 and the next release of Nvidia’s drivers, but probably still Nvidia.
Unfortunately, despite how much I would rather buy from someone else, AMD’s products are just inferior, especially the software.
Examples of AMD being worse:
- AMD’s implementation of OpenGL is a joke; the open-source implementation used on Linux is several times faster and was made for free by volunteers, without internal knowledge
- AMD will never run PhysX, which is less relevant every day, but if the AMD of the past had proposed an alternative, we would have a standardized physics extension in DirectX by now, like with DLSS
- AMD’s ray accelerators are “incomplete” compared to Nvidia’s RT cores, which is why ray tracing is better on Nvidia, and why with RDNA 4 they are changing how they work
- GCN was terrible and very different from Nvidia’s architecture, so it was hard to optimize for both. RDNA is more similar, but now AMD has a plethora of old junk to keep compatible with RDNA
- Nvidia has been constantly investing in new software technologies (nowadays it’s mainly AI); AMD didn’t, and now it’s always playing catch-up
AMD also has its wins, for example:
- They often make their stuff open source, mainly because it’s convenient for their underdog position
- They have a pretty good software stack on Linux (much better than on Windows), partly because it’s not entirely done by them
- Nvidia has been a bad-faith actor in the Linux space for many years, even if it’s now in its redemption arc
- Modern AMD GPUs seem to be catching up in compute performance
- AMD is less greedy with VRAM, mainly because they are less at risk of competing with their own enterprise lineup
- Current Nvidia’s prices are stupid
I would still prefer Nvidia right now, but maybe it’s gonna change with the next releases.
P.S. I have used a GTX 1060, an RX 480, and a Vega 56.
Found the Nvidia fan boy
I’m literally using a full AMD PC right now. I dislike Nvidia as much as the next person. I think they use terrible monopolistic practices, and if the competition were on par I would not buy Nvidia. But they aren’t.
The guy asked what’s better for gaming and you went on a rant about Nvidia being better because of AI workloads and other software.
AMD makes the better cards for gaming. Nvidia may have better ray tracing, but most games don’t even use ray tracing, so you’d spend an extra ~30% to get the same gaming performance you’d get from an AMD card that actually has enough VRAM to play games at ultra settings and higher resolutions.
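To put the value argument in concrete terms, here’s a toy cost-per-frame comparison. All prices and FPS figures below are hypothetical placeholders, not real benchmark numbers; the point is only to show how a “30% more money for the same performance” claim cashes out:

```python
# Toy cost-per-frame comparison. All numbers below are hypothetical
# placeholders, NOT real prices or benchmark results.
cards = {
    "hypothetical Nvidia card": {"price_usd": 650, "avg_fps": 100},
    "hypothetical AMD card": {"price_usd": 500, "avg_fps": 100},
}

for name, c in cards.items():
    dollars_per_fps = c["price_usd"] / c["avg_fps"]
    print(f"{name}: ${dollars_per_fps:.2f} per average FPS")

# With identical FPS, the price gap is the whole story:
premium = cards["hypothetical Nvidia card"]["price_usd"] / \
          cards["hypothetical AMD card"]["price_usd"] - 1
print(f"price premium for the same performance: {premium:.0%}")
```

Swap in real prices and averaged benchmark FPS (e.g. from an aggregate chart) and the same arithmetic tells you which card is the better pure-raster value.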
Well, if you are not gonna use Nvidia’s extra stuff, buy an AMD, by all means.
But what you say is disingenuous. “AI and other software” is not entirely unrelated to gaming. Things like HairWorks, PhysX, and most GameWorks features in general run on CUDA. And on the AI side (which I don’t care about that much) there is DLSS, and they are working on AI-enhanced rendering.
Most games don’t use those technologies, but some do, and you will miss out on those.
“but if AMD from the past had proposed an alternative we would have a standardized physics extension in DirectX by now, like with dlss”
Why the fuck put this on AMD when it was Nvidia who did their usual proprietary bullshit? “AMD is worse than Nvidia because they didn’t provide us with a better alternative!” ???
For your points against:
The OpenGL UMD was completely re-engineered. This premiered with the 22.7.1 release, so nearly two years ago. AMD now have the most performant, highest quality OpenGL UMD in the industry, which is particularly relevant for workstation use cases (where OpenGL remains the backbone of WS graphics).
PhysX is proprietary, and I don’t know what can be done about that, but your point is valid here. Though given the rise of other physics engines, I don’t really know if this is a big loss. Do we really want further consolidation in game systems?
AMD’s approach to ray acceleration has always favoured die-area efficiency up until now, though I can totally understand your disappointment with the performance in that area. That said, the moment I really care about RTRT in gaming is when it’s no longer contingent on the raster model. Reflections, shadows, and GI are nice and all, but we’re still not really there yet.
I don’t know how GCN was such a terrible arch, since it was the basis of an entire console generation. An argument could be made that its GPGPU-oriented design may have hindered it at gaming on desktops, but it matured extremely well over time with driver updates, despite its price and performance targets at release. Aside from that (and related to point 1), RDNA UMDs are all PAL-based. I’m not sure what you’re alluding to with this? Could you please elaborate?
Your final remark is untrue (FMF, AL+, gfx feature interop, mic ANS, a plethora of GPUOpen technologies), but I will forgive you for not keeping up with a vendor’s tech if you don’t actively use their products.
They’re all pretty good. Even the Intel cards are pretty good now. I guess it comes down to what’s most important to you. If you want maximum compatibility with games, go for Nvidia. If you want better price-to-performance, go with AMD or Intel. Although, if I were you, I’d wait for AMD’s and Intel’s next gen. Both are coming (relatively) soon, probably before the end of the year, and will probably be a lot better than what’s out now.
One caveat, if you use or plan to use Linux, Nvidia can present some difficulties, so avoid them.
Actually, two caveats: if you plan to use hardware encoding, like streaming on Twitch while you play games, avoid AMD. Their hardware encoding is pretty trash. Both Nvidia and Intel are much better.
My current lineup (I know I have a lot of machines, but my wife and I both play games, and I do AI workloads as well):
- RTX 3090 (mostly for AI)
- Radeon RX 6700 XT (great card)
- Arc A380 (for transcoding, but I’ve gamed on it, and it’s great)
- Radeon RX 6600 (my main card, just because it’s in my living room HTPC, running ChimeraOS)
The amount of self-hosted AI integrations is only going to grow as well. I have a 3090 in a closet PC, and I use it for everything from image generation to VSCode/Neovim code completion and code chat. One of the things I’d really like to see in the next few years is a wide variety of local, AI-driven, self-hosted Alexa replacements.
Oh, I would love that. A self-hosted voice assistant is like the panacea. Mycroft was awesome at first, but it never really panned out.
For the hardware encoding side it used to be true before OBS introduced better AMD encoder support. I have a 6800XT and it works just fine for streaming casually, though I agree that if you stream professionally then Nvidia is the better option.
How much VRAM does your AI card have? The one I have only has 6GB, and I’ve found that quite limiting.
The 3090 has 24GB. Yeah, 6GB is too small for a lot of things. Even 24GB is too small for some of the models I’ve tried.
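The VRAM ceiling is easy to sanity-check with back-of-envelope math: model weights alone take roughly parameter count times bytes per parameter, and everything else (KV cache, activations, framework overhead) comes on top. A minimal sketch of that estimate:

```python
def model_vram_gb(params_billion: float, bytes_per_param: float) -> float:
    """Rough lower bound on VRAM for model weights alone.

    Ignores KV cache, activations, and framework overhead, so real
    usage is always higher. 1e9 params * bytes/param ~= GB (decimal).
    """
    return params_billion * bytes_per_param

# fp16 = 2 bytes per parameter
print(model_vram_gb(7, 2))    # 7B fp16  -> ~14 GB: far too big for a 6 GB card
print(model_vram_gb(13, 2))   # 13B fp16 -> ~26 GB: exceeds even a 24 GB 3090
print(model_vram_gb(13, 0.5)) # 13B at 4-bit (~0.5 B/param) -> ~6.5 GB
```

This is why quantized models are so popular for self-hosting: 4-bit weights bring a model that wouldn’t fit on a 24 GB card down into midrange-GPU territory.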
I’ve been wondering what the smartest choice would be to upgrade my 1660 Super. My CPU is a Ryzen 5 3200 and I’ve got 16GB of RAM. Dunno if just upgrading the GPU would make a huge difference.
I bought an RTX 4070 Ti Super recently because of the superior ray tracing and AI. If you don’t care about those things, just go by price/performance; Tom’s Hardware has a benchmark of all cards on the same chart.
I had a 1060 and upgraded to a 3080 a while ago. For my next upgrade I’ll most likely go AMD, unless Nvidia can convince me to go with them again.
Nothing at the moment; I’d wait for the Nvidia RTX 5000 and AMD RX 8000 cards. They should release later this year.
Brand-wise I’ve had great reliability with Zotac. They’re seen as a budget brand but I’ve been using their GPUs for years without issue.