Yea, I don’t see the point of the desktop, but it sounds more like they are pushing it towards AI servers for the small players.
It’s a great compute box and small form factor gaming machine. Perfect for supplementing a server with some local AI performance or using it as a living room gaming PC. Definitely not something everyone needs, but the friend I watched this with and I are each already planning to get one.
I would definitely have considered the Framework desktop in my younger days, when I didn’t want a laptop but the smallest desktop possible. But given what Framework does and is known for, I think it’s a bit meh.
That’s reasonable, but unfortunately, soldered memory will probably become more and more common. Apple Silicon and now these AMD chips have shown that it enables genuinely much higher memory bandwidth; this performance would not be possible without it.
Framework still ensured the desktop is as repairable as possible in every way they can control. It will undoubtedly be more repairable than other ITX PCs in the same class. I’ve worked on some ITX PCs in the past, and some of them are HELL to repair. Even a lot of full-sized prebuilts are really frustrating to work on. Many Dell and HP desktops use proprietary parts, making repairs expensive.
at least if you want to extract every bit of performance.
Patel noted on LTT that they tried to get modular RAM on there, but it’s just not possible; the signal integrity doesn’t hold up.
I mean, it’s probably “not possible” to convince AMD to engineer it for the few of us who actually care. There are other comparable socketable technologies.
The thing is that companies will want to extract more performance, so customers may eventually lose the option. I hope not, but I do see it as a possibility.
It will be an absurdly expensive gaming machine that’s not repairable or upgradeable. You could get one with equivalent performance that is repairable and upgradeable for half the price.
The $1k version of this system will still be damn capable for gaming. Genuinely, how many prebuilt ITX PCs will compete at that price? Also, it’s only the RAM that can’t be upgraded; they will likely sell replacement mainboards in the future, same as with the laptops. Not being able to carry over RAM is disappointing, but I’ve personally never upgraded my CPU without also upgrading my motherboard.
…all of them? Did you look at the gaming performance? Not that impressive. My old 6800XT build will blow that thing outta the water.
The GPU cannot be upgraded either, which is kind of a big deal…
And I can’t find a new prebuilt 6800XT at this price, much less one in an ITX case.
That is true though, don’t know why I didn’t process that. It’s still a solid option in my eyes for a small, fairly portable gaming PC.
You’re not going to, because this GPU is several years old. But you could easily buy one used and build around it.
I don’t think it’s a good option and I don’t think they’ll sell many of them. But I also don’t understand this whole MiniPC craze so maybe I am just irrelevant.
Even used, I only found this GPU for around $400-500. You’re gonna be skimping on other parts there, and it definitely won’t have the CPU power of this device.
Mini PCs aren’t for everyone, but for people who have a reason to get them, this is great.
How is it not repairable or upgradeable? They said that other than the fact that the CPU/GPU and RAM are integrated on the motherboard, everything is using standardized components. Other than the integrated memory, that is no less repairable/upgradeable than a Framework laptop.
You obviously can’t upgrade any of these, and if any individual component fails you have to pretty much throw away the entire computer.
For inference (running previously-trained models that need lots of RAM), the desktop could be useful, but I would be surprised if training anything bigger than toy examples on this hardware would make sense because I expect compute performance to be limited.
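To make the inference-vs-training distinction concrete, here’s a rough back-of-the-envelope sketch in Python. The model sizes and overhead factors are illustrative assumptions, not benchmarks of this machine:

```python
# Rough memory estimate for *inference*: model weights dominate.
# bytes_per_param: 2.0 for fp16, 0.5 for 4-bit quantization.
def inference_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 70B-parameter model quantized to 4 bits needs ~35 GB just for weights,
# which fits in a large pool of unified memory; at fp16 it needs ~140 GB.
print(inference_memory_gb(70, 0.5))  # 35.0
print(inference_memory_gb(70, 2.0))  # 140.0

# Training is much harder: gradients and optimizer state commonly multiply
# the footprint several times over (the 4x factor here is an assumed
# ballpark), and the bottleneck shifts from memory capacity to raw compute.
def training_memory_gb(params_billions: float, bytes_per_param: float = 2.0,
                       overhead: float = 4.0) -> float:
    return inference_memory_gb(params_billions, bytes_per_param) * overhead

print(training_memory_gb(70))  # 560.0
```

This is why a box with lots of unified RAM but modest compute suits inference far better than training.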
Does anyone here have practical recent experience with ROCm and how it compares with the far-more-dominant CUDA? I would imagine that compatibility is much better now that most models are using PyTorch and that is supported, but what is the performance compared to a dedicated Nvidia GPU?
ROCm is complete garbage. AMD has an event every year claiming “PyTorch works now!” and it never does.
ZLUDA is supposedly a good alternative to ROCm, but I have not tried it.
Thanks for the comment. I have had exposure to similar claims, but wasn’t seeing anyone using AMD GPUs for AI unless they were somehow incentivized by AMD, which made me suspicious.
In principle, more competition in the AI hardware market would be amazing, and Nvidia GPUs do feel overpriced, but I personally don’t want to deal with the struggles of early adoption.