TL;DR: Does the Arc A310 have any important advantage over recent Intel low-power CPUs with integrated graphics (e.g. N100/N150/N350/N355) specifically for use with Jellyfin, in terms of the number of streams it can transcode simultaneously or something like that?

Even if they do differ, is it something I would notice in a household context (e.g. with probably never more than 4 users at a time), or would the discrete GPU just be overkill?

Context, if you need it:

My Jellyfin currently runs in a VM on a Proxmox server with a Ryzen 5 3600 CPU and a Vega 56 discrete GPU, which draws a lot of power unnecessarily and apparently isn’t recommended for Jellyfin transcoding due to poor encoder quality. I’m thinking about either replacing the GPU with an Arc A310 for ~$100, or replacing the whole CPU/mobo/GPU with some kind of low-power Intel ITX board (the kind designed for routers or NASes, with a soldered-on N100 or similar) for ~$200. I’m leaning towards the latter because it would use less power, be simpler to set up (since, as I understand it, integrated GPU functions are always available instead of needing to be passed through and dedicated to a single VM/container), be more versatile in the future (e.g. as a NAS or router), and be a whole additional system, freeing up the AMD hardware for some other use.
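To illustrate the “always available” part: as I understand it, on an N100 box the iGPU could be handed to e.g. a Docker container with a simple device mapping instead of full PCI passthrough. A rough sketch (the container name, volume paths and device path here are illustrative, not my actual setup):

```
# Hypothetical sketch: hand the iGPU's render node to a Jellyfin container.
# /dev/dri/renderD128 is the usual render node; verify with `ls /dev/dri`.
docker run -d \
  --name jellyfin \
  --device /dev/dri/renderD128:/dev/dri/renderD128 \
  -v /path/to/config:/config \
  -v /path/to/media:/media \
  jellyfin/jellyfin
```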

But is the N100 option just strictly equal or better for Jellyfin, or is there some other performance trade-off?

(BTW, I know the Arc uses Intel Quick Sync Video version 9 while the N100 uses version 8, with the main difference being that the newer version adds 8K 10-bit AV1 hardware encoding. I’m not going to be encoding 8K any time in the foreseeable future, so I don’t care about that.)
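(If you want to check what a given Intel GPU actually exposes, vainfo from libva-utils lists the hardware decode/encode profiles. The device path below is the usual one but may differ per system.)

```
# List the VAAPI decode/encode profiles the GPU advertises.
# renderD128 is typical; run `ls /dev/dri` if yours differs.
vainfo --display drm --device /dev/dri/renderD128
```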

  • Maxy · 1 month ago

    If I were to decide I need compute, I could just put my AMD GPU back in.

    Yeah… It was pretty late in my timezone when I replied, which I’ll use as an excuse for not considering that. That would be a good solution.

    which means it isn’t any better for the purpose of transcoding than the discrete Vega I already have (except for using less power).

    I thought reducing power usage was the main goal; that’s why I suggested this. Though once again, there’s a pretty decent chance this is a misunderstanding on my part.

    AMD is NOT recommended.

    I personally use AMD graphics in both a laptop and a desktop, and have never had any problems with decoding or encoding; I don’t understand what the docs mean by “poor driver support”.

    What I will confess (and, once again, forgot to consider yesterday) is that Intel and Nvidia hardware encoders generally provide better quality at the same bitrate than AMD’s*. I do believe software encoders outperform all hardware encoders in this respect, which is why I never cared too much about the differences between HW encoders: if I need good quality for the bitrate, I’ll just use the CPU. That’s less energy-efficient though, so I guess having a good HW encoder could be pretty relevant to you.

    *I happen to have hardware from AMD, Intel and Nvidia, so I might do my own tests to see if this still holds true.
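    Something like the following is what I have in mind, as a rough sketch: encode the same clip at a matched bitrate with each vendor’s H.264 encoder, then score each result against the source with VMAF. This assumes ffmpeg builds with h264_vaapi, h264_qsv, h264_nvenc and libvmaf available (in practice I’d run each encode on the machine with that GPU); file names, the 4M bitrate and the device path are placeholders.

    ```
    # AMD via VAAPI (needs the render node; path may differ).
    ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mkv \
      -vf format=nv12,hwupload -c:v h264_vaapi -b:v 4M -c:a copy amd.mkv

    # Intel via Quick Sync.
    ffmpeg -i input.mkv -c:v h264_qsv -b:v 4M -c:a copy intel.mkv

    # Nvidia via NVENC.
    ffmpeg -i input.mkv -c:v h264_nvenc -b:v 4M -c:a copy nvidia.mkv

    # Score each encode against the source (distorted file first, reference second);
    # a higher mean VMAF score means the encode is closer to the original.
    for f in amd intel nvidia; do
      ffmpeg -i "$f.mkv" -i input.mkv -lavfi libvmaf -f null -
    done
    ```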