• 0 Posts
  • 79 Comments
Joined 2 years ago
Cake day: June 5th, 2023

  • If I were to decide I need compute, I could just put my AMD GPU back in.

    Yeah… It was pretty late in my timezone when I replied, which I’ll use as an excuse for not considering that. That would be a good solution.

    which means it isn’t any better for the purpose of transcoding than the discrete Vega I already have (except for using less power).

    I thought reducing power usage was the main goal; that’s why I suggested this. Though once again, there’s a pretty decent chance this is a misunderstanding on my part.

    AMD is NOT recommended.

    I personally use AMD graphics in both a laptop and a desktop, and have never had any problems with decoding or encoding; I don’t understand what the docs mean by “poor driver support”.

    What I will confess (and, once again, forgot to consider yesterday) is that Intel and Nvidia hardware encoders generally provide better quality at the same bitrate than AMD’s*. I do believe software encoders perform better than all hardware encoders in this respect, which is why I never cared too much about the differences between HW encoders. If I need good quality for the bitrate, I’ll just use the CPU. This is less energy-efficient though, so I guess having a good HW encoder could be pretty relevant to you.

    *I happen to have hardware from AMD, Intel and Nvidia, so I might do my own tests to see if this still holds true.
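
    If I do, the comparison could look roughly like this (just a sketch, assuming an ffmpeg build with VAAPI and libvmaf; filenames and bitrates are placeholders):

    ```bash
    # Encode the same clip at a fixed bitrate with a hardware and a software encoder
    ffmpeg -vaapi_device /dev/dri/renderD128 -i reference.mkv \
      -vf 'format=nv12,hwupload' -c:v hevc_vaapi -b:v 6M -an hw.mkv
    ffmpeg -i reference.mkv -c:v libx265 -b:v 6M -an sw.mkv

    # Score both against the original with VMAF (higher = closer to the source)
    ffmpeg -i hw.mkv -i reference.mkv -lavfi libvmaf -f null -
    ffmpeg -i sw.mkv -i reference.mkv -lavfi libvmaf -f null -
    ```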


  • How often do you actually transcode? Most Jellyfin clients are capable of decoding almost all codecs. It might be worth checking whether you need to encode frequently, let alone encode multiple streams at once, before considering how many streams different hardware might support.

    To answer your question: the A310 and N100 appear to be pretty evenly matched when it comes to the maximum number of streams. Intel claims that all Arc hardware encoders can encode 4 AV1 streams at 4K60, but that actual performance might be limited by the amount of VRAM on the card. Since any iGPU would have access to normal system RAM, which is probably a lot more than 4GB, these iGPUs might even be capable of running more parallel streams.
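
    If you ever want to sanity-check that claim yourself, a rough sketch (assuming an Intel GPU exposed through Quick Sync and ffmpeg’s av1_qsv encoder; the sample file name is made up):

    ```bash
    # Launch four simultaneous AV1 hardware encodes and watch the reported speed
    for i in 1 2 3 4; do
      ffmpeg -hwaccel qsv -i sample_4k60.mkv -c:v av1_qsv -b:v 10M -an "out_$i.mkv" &
    done
    wait
    # If each job still runs at ~1x realtime, the encoder keeps up with 4 streams
    ```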

    One thing you might want to consider: the A310 has significantly more compute power than the iGPUs in these processors. This matters if you ever decide to run a local ML model. For example, I back up all my pictures to Nextcloud on the same machine that runs Jellyfin, and I use the Recognize app to perform facial and object recognition. You can also run this model in CPU mode though, and the performance is “good enough” on my i5 3470, so a dGPU might be overkill for this purpose. You could also run local LLMs, text-to-speech, speech-to-text, or similar models, should you care about that.

    If I may add a third option though: consider upgrading to a 5600G or something similar. It has more CPU power than an N350 (3x according to PassMark), and the iGPU probably has more than enough hardware acceleration (though software encoding is also very viable with that much CPU power). You wouldn’t free up the AMD hardware this way, and the 5600G doesn’t support AV1, which could be a dealbreaker I guess.





  • Maxy to World News@lemmy.world · *Permanently Deleted* · 9 months ago

    I feel like you’re missing the point. My point wasn’t about your income; I wasn’t trying to attack you personally in any way. I was just pointing out that it is unfair to say “their 3.7% is not enough”: they’re clearly doing way more than required by NATO norms. In fact, they’re second only to Poland in terms of relative spending. Don’t attack Estonia; attack Spain, Slovenia, Luxembourg, Belgium, Canada, Italy, Portugal and Croatia, all of which are below the 2% norm (in order from lowest to highest). Or better yet, if you live in a NATO country: vote for politicians that take our defence seriously (unless you’re American of course, because that probably won’t help anyone for the foreseeable future).

    TLDR: sure, 3.7% might not be enough, but 1.3% is even worse (Spain). No offence intended.



  • Maxy to Technology@beehaw.org · 1080p viewing experience · 10 months ago

    I agree that, theoretically speaking, YouTube might be protecting some end users from this type of attack. However, the main reason YouTube re-encodes video is to reduce (their) bandwidth usage. I think it’s very kind towards YouTube to view this as a free service to the general public, when it’s mostly a cost-cutting measure.


  • Maxy to Technology@beehaw.org · 1080p viewing experience · 10 months ago

    Good point, though I believe you have to explicitly enable AV1 in Firefox for it to advertise AV1 support. YouTube on Firefox should fall back to VP9 by default (which is supported by a lot more accelerators), so not being able to decode AV1 shouldn’t be a problem for most Firefox users (and, by extension, most Lemmy users, I assume).


  • Maxy to Technology@beehaw.org · 1080p viewing experience · 10 months ago

    About the “much higher CPU usage”: I’d recommend checking that hardware decoding is working correctly on your device, as that should ensure that even 4K content barely hits your CPU.
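
    On Linux, a few quick ways to verify this (a sketch; which tools apply depends on your GPU vendor and distro):

    ```bash
    vainfo          # does the driver expose a decoder for the codec (VP9/AV1/HEVC)?
    intel_gpu_top   # Intel: the "Video" engine should show load during playback
    radeontop       # AMD: GPU activity should rise while the video plays
    # If the video engine stays idle and one CPU core is pinned, you're decoding in software
    ```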

    About the “less sharp image”: this depends on your downscaler, but a proper downscaler shouldn’t make higher-resolution content any more blurry than the lower-resolution version. I do believe integer scaling (e.g. 4K -> 1080p) is a lot less dependent on having a proper downscaler, so consider bumping the resolution up even further if the video, your internet, and your client allow it.


  • Maxy to Technology@beehaw.org · 1080p viewing experience · 10 months ago

    I believe YouTube always re-encodes the video, so the video will contain (extra) compression artefacts even if you’re watching at the original resolution. However, I also believe YouTube’s exact compression parameters aren’t public, so I don’t believe anyone outside of YouTube itself knows for sure which videos are compressed in which ways.

    What I do know is that different content also compresses in different ways, simply because some video is easier or harder to compress. IIRC, shows like Last Week Tonight (mostly a static camera pointed at a host) are way easier to compress than higher-paced content, which (depending on the previously mentioned unknown parameters) could have a large impact on the amount of artefacts. This makes it harder to compare different videos uploaded at different resolutions.



  • Unless your initial recordings were lossless (they probably weren’t), recompressing the files with a lossless flag will only make them larger, and probably by a lot. Lossless video is HUGE, which is why almost no one actually records/saves it. What you’re probably looking for is visually lossless transcoding, where you do lose some data, but the difference is too small for most people to notice.

    My recommendations:

    1. Go to your recording software and change the settings to better compress your videos the first time around. Compressing once generally gives a better quality-to-size ratio than compressing twice, so it’s best if your recording software gets it right the first time, without you having to keep recompressing your videos.
    2. When tinkering with encoding settings to find what works best for you, it might be useful to install Identity to help you compare the original files with one or more transcoded versions.
    3. Don’t try to recompress the audio; you’ll save very little space, and the losses in quality become perceptible much faster than with video. When using ffmpeg, the “-c:a copy” flag should simply copy the original audio to the new file, without any change in quality or size.
    4. I’d recommend taking some time to read through the ffmpeg encoding guides (there’s a small sketch after this list). H.265 and AV1 are good for personal archiving, with AV1 providing better compression ratios at the cost of much slower encoding. You could also choose VP9, which is similar to H.265 in compression ratio and encoding speed.
    5. You’ll have to choose between hardware and software encoding. Hardware encoding can (depending on your specific hardware and settings) be 10-100x faster than software, but software generally gives better compression ratios at similar quality. You should test this difference for yourself and see if the extra time is worth it for the extra quality. Do keep in mind that AV1 hardware encoding is only supported by some of the most recent GPUs (RX 7000 and RTX 4000 series, off the top of my head). If you don’t have one of those GPUs, you’ll either have to choose software encoding or pick a different codec.
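
    To make point 4 concrete, a minimal sketch (assuming a reasonably recent ffmpeg with libx265 and libsvtav1; the presets and CRF values are just starting points to tune):

    ```bash
    # H.265 software encode in CRF mode, audio copied untouched (see point 3)
    ffmpeg -i input.mkv -c:v libx265 -preset slow -crf 24 -c:a copy output_h265.mkv

    # AV1 software encode via SVT-AV1: better compression, much slower
    ffmpeg -i input.mkv -c:v libsvtav1 -preset 6 -crf 32 -c:a copy output_av1.mkv

    # AV1 hardware encode, only on recent GPUs (e.g. NVENC on an RTX 4000 card)
    ffmpeg -i input.mkv -c:v av1_nvenc -cq 32 -b:v 0 -c:a copy output_av1_hw.mkv
    ```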

  • Source: Gapminder, which the graph above also cites as its source.

    Funny how much the graph changes when you have more than one data point per decade. Almost makes me wonder whether the creator of the above graph was trying to paint a certain picture, instead of presenting the raw data in a way that makes it easier to grasp without bias.

    Notice the inflection point where Mao implemented the “Great Leap Forward”. Also notice other countries’ similar rates of increasing life expectancy in the graph below, just without the same ravine around 1960.

    I’m sorry, but I have to disagree with (what I think to be) your implicit claim that Mao somehow single-handedly raised China’s life expectancy through the power of communism or whatever. Please do correct me if this wasn’t your implicit claim, and you were either 1) yourself misled by the graph you shared, or 2) making some other claim entirely that is somehow supported by said graph.



  • Maxy to Memes@lemmy.ml · Dear iPhone users: · 1 year ago

    Not to be an unfunny nitpicker (I don’t know why I’m denying this, that’s kinda the whole point), but all iPhones do have lossless audio streaming via AirPlay. I’m assuming that you specifically meant Bluetooth streaming, but then you should’ve said so. Furthermore, normal aptX isn’t high-resolution; only aptX HD and aptX Adaptive are. The phone does support aptX HD as well, but once again, you could’ve said so from the start (and while a few extra characters might matter for some memes, this one certainly had room for them).


  • Luxury! My home server has an i5 3470 with 6GB of RAM (yes, it’s a cursed 4+2 setup)! </badMontyPythonReference>

    Interesting; I also run Nextcloud and Pi-hole, plus Vaultwarden, Jellyfin, paperless-ngx, Gitea, vscode-server, and a Minecraft server (every now and then).

    You’re right that such a system really does show its age, but only when doing multiple intensive tasks at the same time. I try not to back up my photos to Nextcloud while running Minecraft, for example, as the image recognition task pins my CPU at 100%. So yes, I agree, you’re probably not doing anything out of the ordinary on your setup.

    The point I was trying to make still stands though, as that Pi 2B could run more than I would’ve expected beforehand. I believe it once even ran Jellyfin, a simple file server, Samba, and a web server with a simple HTML website. Jellyfin worked just fine, as long as the Pi didn’t have to transcode (I never got hardware transcoding to work).

    It is funny that you should run out of memory, seeing as everything fits (albeit just barely) on my machine in 1/5 of the memory. Would the overhead of running VMs account for such a large difference?


  • Coming from someone who started self-hosting on a Pi 2B (similar-ish specs): you’d be surprised. If you don’t need anything fast or fancy, that 1GB will go a long way, and plenty of self-hosted apps require very little CPU. The only real problem I faced was that all HTTPS-related network tasks were limited to ~3MB/s, as that is how fast my Pi could encrypt the data (presumably; I just saw my web server utilising the entire CPU and figured this was the most likely explanation).
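
    If you want to check whether the CPU’s encryption throughput really is the bottleneck, openssl has a built-in benchmark (single-threaded; just a sketch):

    ```bash
    # Throughput of the cipher most TLS connections end up using
    openssl speed -evp aes-256-gcm
    # Low numbers here (common on older ARM cores without AES instructions)
    # would line up with an HTTPS ceiling of a few MB/s
    ```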


  • I’ve had good experiences with whisper.cpp (it should be in the AUR). I used the large model on my GPU (a 3060), and it filled 11.5 of the 12GB of VRAM, so you might have to settle for a lower-tier model. The speed was pretty much real-time on my GPU, so it might be quite a bit slower on your CPU, unless the lower-tier models are also a lot faster (I never tested them due to lack of necessity).

    The large model had pretty much perfect accuracy (only 5 or so mistakes in ~40 pages of transcriptions), and that was with Dutch audio recorded on a smartphone. If it can handle my pretty horrible conditions, your audio should (hopefully) be no problem to transcribe.
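
    For reference, a minimal whisper.cpp invocation could look like this (a sketch; the binary and model file names depend on the version you install, so check the project README):

    ```bash
    # whisper.cpp expects 16 kHz mono WAV, so convert the recording first
    ffmpeg -i recording.m4a -ar 16000 -ac 1 recording.wav

    # Transcribe with the large model; -l sets the spoken language, -otxt writes a .txt
    ./main -m models/ggml-large-v3.bin -l nl -f recording.wav -otxt
    # A smaller model (e.g. models/ggml-medium.bin) needs far less (V)RAM
    ```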


  • Maxy to Selfhosted@lemmy.world · Data HDD with SSD catch drive · 1 year ago

    It depends on what you’re optimising for. If you want a single (relatively small) download to be available on your HDD as fast as possible, then your current setup might be better (optimising for lower latency). However, if you want to max out your internet speeds at all times and increase your HDD speeds by making the copy sequential (optimising for throughput), then the setup with the catch drive will be better. Keep in mind that an HDD’s sequential write performance is significantly higher than its random write performance, so copying a large file in one go will be faster than writing a whole bunch of chunks in random order (like torrents do). You can check the difference for yourself by doing a disk benchmark and comparing the sequential vs random write speeds of your drive.
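
    A minimal sketch of such a benchmark with fio (run it from a directory on the HDD; sizes and file names are just examples):

    ```bash
    # Sequential writes in large blocks (what one big copy looks like)
    fio --name=seq --rw=write --bs=1M --size=1G --direct=1 --filename=fio_seq.tmp

    # Random writes in small blocks (closer to how torrent chunks arrive)
    fio --name=rand --rw=randwrite --bs=4k --size=1G --direct=1 --filename=fio_rand.tmp
    ```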