I’ve noticed that sometimes when a particular VM/ service is having issues, they all seem to hang. For example, I have a VM hosting my DNS (pihole) and another hosting my media server (jellyfin). If Jellyfin crashes for some reason, my internet in the entire house also goes down because it seems DNS is unable to be reached for a minute or so while the Jellyfin VM recovers.

Is this expected, and is there a way to prevent it?

  • @manwichmakesameal@lemmy.world · 5 points · 9 months ago

    Why are you running full VMs for something that can be put in a container? It sounds to me (without any evidence or proof) like you’re running out of memory, you’re swapping, and the swapping is taking forever. That’s what causes the VMs to slow/stop.

    • @root@lemmy.world (OP) · 3 points · 9 months ago

      I typically prefer VMs just because I can change the kernel as I please (containers such as LXC use the host kernel). I know it’s overkill, but I have the storage/memory to spare. Typically I’m at about 80% memory utilization under full load.

  • apigban · 4 points · edited · 9 months ago

    I’d check for high I/O wait, especially if all of your VMs are on HDDs.
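
    Checking for I/O wait doesn’t need any extra tooling; as a minimal sketch (plain /proc/stat reads, so it should work on any Linux host):

    ```shell
    #!/bin/sh
    # Rough iowait sampler: field 5 of the "cpu" line in /proc/stat is
    # cumulative iowait jiffies; sample twice and report the delta as a percent.
    t1=$(awk '/^cpu /{print $6}' /proc/stat)
    a1=$(awk '/^cpu /{s=0; for(i=2;i<=9;i++) s+=$i; print s}' /proc/stat)
    sleep 1
    t2=$(awk '/^cpu /{print $6}' /proc/stat)
    a2=$(awk '/^cpu /{s=0; for(i=2;i<=9;i++) s+=$i; print s}' /proc/stat)
    pct=$(( 100 * (t2 - t1) / (a2 - a1) ))
    echo "iowait: ${pct}%"
    ```

    Anything that sits in the double digits while a VM is struggling points at the storage rather than CPU or RAM.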

    One of the solutions I had for this issue was to have multiple DNS servers. I solved it by buying a Raspberry Pi Zero W and running a second small Pi-hole instance there. I made sure the Pi Zero W is plugged into a separate circuit in my home.
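
    On the DHCP side, advertising both resolvers is a one-line change if the router runs dnsmasq; a sketch with made-up addresses (assume 192.168.1.10 is the Pi-hole VM and 192.168.1.11 is the Pi Zero W):

    ```
    # Hypothetical dnsmasq snippet for the DHCP server: hand clients both
    # Pi-hole instances so DNS keeps resolving if one of them hangs.
    dhcp-option=option:dns-server,192.168.1.10,192.168.1.11
    ```

    Clients fall back to the second address on their own, so a hung Jellyfin host no longer takes the whole house offline.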

    • @root@lemmy.world (OP) · 2 points · 9 months ago

      Good point. I just checked, and streaming something to my TV causes I/O delay to spike to around 70%. I’m also wondering if routing my Jellyfin (and some other things) through NGINX (also hosted on Proxmox) has something to do with it… Maybe I need to allocate more resources to NGINX(?)

      The system running Proxmox has a couple Samsung Evo 980s in it, so I don’t think they would be the issue.

      • apigban · 2 points · 9 months ago

        Lemme know if you need some remote troubleshooting; if schedules permit, we can do screenshares.

        • @root@lemmy.world (OP) · 2 points · 9 months ago

          Very nice of you to offer. I made a few changes (routing my problem Jellyfin client directly to the Jellyfin server to cut out the NGINX hop, and limiting that client’s bandwidth in case the line is getting saturated).

          I’ll try to report back if there’s any updates.

          • apigban · 1 point · 9 months ago

            hey yeah, no stress!

            just lemme know if you’d want someone to brainstorm with.

      • apigban · 1 point · 9 months ago

        I had this issue when I used Kubernetes; SATA SSDs can’t keep up. Not sure what the Evo 980 is or what it’s rated for, but I would suggest shutting down all container I/O and doing a benchmark using fio.
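
        For the fio run, a job file along these lines would exercise the worst case (4 KiB random reads with the page cache bypassed); the test path here is an assumption, so point it at whatever storage actually backs the VMs:

        ```
        ; Hypothetical fio job: 4k random reads, direct I/O.
        ; Run with: fio vm-disk.fio  (with the VMs/containers shut down first)
        [vm-disk]
        filename=/var/lib/vz/fio-testfile
        size=4G
        rw=randread
        bs=4k
        ioengine=libaio
        iodepth=32
        direct=1
        runtime=60
        time_based=1
        ```

        Compare the reported IOPS against what the VMs collectively demand during a stream plus normal background load.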

        My current setup uses Proxmox, spinning rust configured in RAID 5 on a NAS, and a Jellyfin container.

        All JF container transcoding and cache is dumped on a WD750 NVMe, while all media are stored on the NAS (max bandwidth is 150 MB/s).

        You can monitor the I/O using iostat once you’ve done a benchmark.

    • @root@lemmy.world (OP) · 2 points · 9 months ago

      Yeah, I’ve been looking into it for some time. It seems to normally be an issue on the client side (Nvidia Shield): the playback will stop randomly and then restart, and this may happen a couple of times (no one really knows why, it seems). I recently reinstalled that server on a new VM and a new OS (Debian) with nothing else running on it, and the only client that seems to be able to cause the crash is the TV running the Shield. It’s hard to find a good Jellyfin client for the TV, it seems :(

  • @MangoPenguin · 3 points · 9 months ago

    If the VM crashing is because of high CPU usage on all cores, high IO delay on the storage, or an out of memory situation on the host, that would cause all of the other VMs to struggle as well.
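
    A quick way to rule two of those in or out on the host, sketched with plain /proc reads (nothing Proxmox-specific, so this runs on any Linux box):

    ```shell
    #!/bin/sh
    # Host-side sanity checks: available RAM, and 1-minute load vs. core count.
    avail_mib=$(awk '/MemAvailable/{printf "%d", $2/1024}' /proc/meminfo)
    cores=$(nproc)
    load1=$(cut -d' ' -f1 /proc/loadavg)
    echo "available RAM: ${avail_mib} MiB"
    echo "1-min load: ${load1} on ${cores} cores"  # sustained load >> cores means CPU contention
    ```

    I/O delay itself is graphed on the Proxmox node summary page, and past OOM kills show up as “Out of memory” lines in `dmesg` or `journalctl -k` (run as root).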

    • @root@lemmy.world (OP) · 1 point · 9 months ago

      4 currently, with 8 GB RAM and no passthrough for transcoding (only direct play)

  • @Decronym@lemmy.decronym.xyz (bot) · 1 point · edited · 9 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    DNS            Domain Name Service/System
    LXC            Linux Containers
    NAS            Network-Attached Storage
    NUC            Next Unit of Computing brand of Intel small computers
    SSD            Solid State Drive mass storage

    [Thread #205 for this sub, first seen 10th Oct 2023, 06:25]