
How often do you actually transcode? Most Jellyfin clients are capable of decoding almost all codecs. It might be worth checking whether you actually need to transcode frequently, let alone transcode multiple streams at once, before considering how many streams different hardware can support.
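If you want a quick way to check, something like this polls the server's /Sessions endpoint and shows which active streams are being transcoded versus direct played. The URL and API key are placeholders, and the field names are from memory, so treat it as a rough sketch rather than a drop-in script:

```python
# Minimal sketch: list active Jellyfin sessions and their playback method.
# JELLYFIN_URL and API_KEY are placeholders; create a key under
# Dashboard -> API Keys. Field names (PlayState.PlayMethod, NowPlayingItem)
# are what I remember from the Jellyfin API and may need checking.
import requests

JELLYFIN_URL = "http://localhost:8096"   # adjust to your server
API_KEY = "your-api-key-here"

def list_active_streams():
    resp = requests.get(
        f"{JELLYFIN_URL}/Sessions",
        headers={"X-Emby-Token": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    for session in resp.json():
        item = session.get("NowPlayingItem")
        if not item:
            continue  # idle session, nothing playing
        method = session.get("PlayState", {}).get("PlayMethod", "unknown")
        print(f'{session.get("UserName")}: {item.get("Name")} -> {method}')

if __name__ == "__main__":
    list_active_streams()
```

Run it a few times during normal use (or just keep an eye on the dashboard) and you'll quickly see whether anything is actually transcoding.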
To answer your question: the A310 and the N100 appear to be pretty evenly matched when it comes to the maximum number of streams. Intel claims that all Arc hardware encoders can encode 4 AV1 streams at 4K60, but that actual performance may be limited by the amount of VRAM on the card. Since any iGPU has access to normal system RAM, which is probably a lot more than the A310's 4 GB, these iGPUs might even be capable of running more parallel streams.
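If you ever want to verify the parallel-stream claim on your own hardware, a crude stress test is just launching several hardware encodes at once and checking whether they all still finish faster than real time. This sketch assumes an Intel GPU and an ffmpeg build with av1_qsv (swap in av1_vaapi or av1_nvenc for other vendors); the clip path is a placeholder:

```python
# Rough parallel-encode stress test: launch N ffmpeg AV1 hardware encodes at
# once and time them. If the total wall time stays below the clip's duration,
# the hardware is keeping up with that many simultaneous streams.
import subprocess
import sys
import time

SAMPLE = "sample_4k.mkv"      # any 4K test clip you have lying around
STREAMS = int(sys.argv[1]) if len(sys.argv) > 1 else 4

cmd = [
    "ffmpeg", "-hide_banner", "-y",
    "-hwaccel", "qsv",            # hardware decode
    "-i", SAMPLE,
    "-c:v", "av1_qsv",            # hardware AV1 encode (Intel-specific)
    "-b:v", "8M",
    "-an",                        # skip audio, we only care about video
    "-f", "null", "-",            # discard output, we only want timing
]

start = time.time()
procs = [subprocess.Popen(cmd, stderr=subprocess.DEVNULL) for _ in range(STREAMS)]
codes = [p.wait() for p in procs]
elapsed = time.time() - start

# Compare elapsed against the clip's duration to judge real-time headroom.
print(f"{STREAMS} parallel encodes finished in {elapsed:.1f}s, exit codes: {codes}")
```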
One thing you might want to consider: the A310 has significantly more compute power than the iGPUs in these processors. This matters if you ever decide to run a local ML model. For example, I back up all my pictures to Nextcloud on the same machine that runs Jellyfin, and I use the Recognize app to perform facial and object recognition. You can also run that model in CPU mode though, and the performance is "good enough" on my i5-3470, so a dGPU might be overkill for this purpose. You could also run local LLMs, text-to-speech, speech-to-text, or similar models, should you care about that.
If I may add a third option though: consider upgrading to a 5600G or something similar. It has more CPU power than an N350 (roughly 3× according to PassMark), and the iGPU probably has more than enough hardware acceleration (though software encoding is also very viable with that much CPU power). You wouldn't free up the AMD hardware this way, and the 5600G doesn't support AV1, which could be a dealbreaker I guess.
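To get a feel for whether software encoding would keep up on a CPU like that, I'd just time an encode of a test clip and compare it against the clip's duration. A sketch, with a placeholder path and SVT-AV1 settings picked mostly out of habit (swap in libx264/libx265 if you prefer):

```python
# Sanity-check software encoding speed: encode a test clip on the CPU and
# compare wall time against the clip's duration. Anything comfortably above
# 1x real time per stream means software encoding is viable for that load.
import json
import subprocess
import time

SAMPLE = "sample_1080p.mkv"   # placeholder test clip

def clip_duration(path):
    # ffprobe ships with ffmpeg; this reads the container duration in seconds
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", path],
        capture_output=True, text=True, check=True,
    )
    return float(json.loads(out.stdout)["format"]["duration"])

start = time.time()
subprocess.run(
    ["ffmpeg", "-hide_banner", "-y", "-i", SAMPLE,
     "-c:v", "libsvtav1", "-preset", "8", "-crf", "30",
     "-an", "-f", "null", "-"],
    stderr=subprocess.DEVNULL, check=True,
)
elapsed = time.time() - start
duration = clip_duration(SAMPLE)
print(f"encoded {duration:.0f}s of video in {elapsed:.0f}s "
      f"(~{duration / elapsed:.2f}x real time)")
```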
Yeah… It was pretty late in my timezone when I replied, which I’ll use as an excuse for not considering that. That would be a good solution.
I thought reducing power usage was the main goal, which is why I suggested this. Though once again, there's a pretty decent chance this is a misunderstanding on my part.
I personally use AMD graphics in both a laptop and a desktop, and I have never had any problems with decode or encode; I don't understand what the docs mean by "poor driver support".
What I will confess (and, once again, forgot to consider yesterday) is that Intel and Nvidia hardware encoders generally provide better quality at the same bitrate than AMD's*. I do believe software encoders beat all hardware encoders in this respect, which is why I never cared too much about the differences between hardware encoders: if I need good quality for the bitrate, I'll just use the CPU. That is less energy-efficient though, so I guess having a good hardware encoder could be pretty relevant to you.
*I happen to have hardware from AMD, Intel, and Nvidia, so I might do my own tests to see whether this still holds true.
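For what it's worth, the test I have in mind is roughly: encode the same clip at the same bitrate with each encoder, then score every result against the original with VMAF. A sketch of the scoring half, assuming the encodes already exist (file names are placeholders) and that ffmpeg was built with libvmaf:

```python
# Score each encode against the untouched source clip with VMAF.
# The distorted files would be produced beforehand at the same bitrate,
# e.g. with -c:v hevc_qsv / hevc_vaapi / libx265 and -b:v 6M.
import subprocess

REFERENCE = "reference.mkv"                 # the untouched source clip
CANDIDATES = {
    "Intel QSV": "encoded_qsv.mkv",
    "AMD VAAPI": "encoded_vaapi.mkv",
    "CPU x265": "encoded_x265.mkv",
}

for label, distorted in CANDIDATES.items():
    # libvmaf takes the distorted clip as the first input and the reference
    # second; both need matching resolution and frame rate.
    result = subprocess.run(
        ["ffmpeg", "-hide_banner", "-i", distorted, "-i", REFERENCE,
         "-lavfi", "libvmaf", "-f", "null", "-"],
        capture_output=True, text=True,
    )
    scores = [line for line in result.stderr.splitlines() if "VMAF score" in line]
    print(label, "->", scores[-1] if scores else "no VMAF score found")
```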