No hate to Ubuntu LTS (my old OS): as an entry point to Linux, I think it’s about the best there is. I just got used to it, and then started getting more and more annoyed with Snaps.

FFWD a couple of years and I decided to switch to Mint, but I wanted something that was entirely free of Snaps, not just blocked, so LMDE seemed the best fit. I get all the good bits of Mint without the Canonical-enforced stuff. It’s been running a week now and, after plenty of tweaks (installing Gnome, for example), it looks and feels exactly how I wanted, without interference from Canonical.

  • Varyk@sh.itjust.works
    11 months ago

    I’m still using Ubuntu, but am a very casual user and mostly use the GUI for everything.

    I keep hearing about how people don’t like snaps. Why don’t people like snaps?

    • SolidGrue@lemmy.world
      11 months ago

      Slightly more technical answer but…

      *Nix systems (GNU/Linux, GNU/Hurd, BSD, Solaris, etc.) tend to use a common set of system libraries. If you ever watch an upgrade, these are all those packages whose names start with lib and seem to take forever. Windows systems have something similar, except they’re called DLL files. Same idea, really. The libraries are basically sets of pre-canned calls other software uses to do things on the system. Different libraries do different things, from handling low-level stuff like memory and disk access and standard system calls, to calls to specialized hardware, video acceleration, audio -- whatever. There’s a parallel system on Windows that does similarly. Under the shells and launchers there’s a system called a dynamic linker that, when an application launches, “links” the binary to the libraries it needs. It scans the symbol tables in the binary and caches the right libraries for the app in memory. It’s the stuff of 300-level Comp Sci courses, so I’m giving a really rough explanation here.
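
      To make that concrete, here’s a minimal, purely hypothetical C example (my own sketch, not anything shipped by a distro): the binary only calls sqrt(), and the dynamic linker resolves that symbol against the system-wide shared libm at launch, so every program on the machine can share one copy of the math code.

      ```c
      /* Illustrative only: sqrt() lives in the shared math library (libm.so).
       * The compiled program does not carry its own copy of the math code;
       * the dynamic linker (ld.so) resolves the symbol when the program starts. */
      #include <stdio.h>
      #include <math.h>

      int main(void) {
          double x = 2.0;
          printf("sqrt(%f) = %f\n", x, sqrt(x)); /* resolved from libm at load time */
          return 0;
      }
      ```

      Build it with the math library linked in (gcc demo.c -lm) and run ldd on the result to see the list of shared libraries the linker will pull in.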

      *Nix system distributions ship with LOTS of little executable programs. Many of them are support apps that do things behind the scenes, like running the logging system, executing background updates, or driving the widgets in your graphical desktop. All of those programs generally rely on the library files that are installed on the system. It economizes on storage, which used to be expensive. It also benefits most apps because when a library gets updated for security or efficiency reasons, all the programs using that library also get the update. You don’t have to update all of the executable apps, just the core libraries.

      Unfortunately, this introduced some inherent conflicts. Library updates would often introduce new features or usage that wasn’t always backwards-compatible with older code. Major library updates often required all the apps that relied on them to update as well, or you could break your system. Package managers introduced dependency trees to account for this, but it was often the case that major updates broke stuff, and sysadmins could spend days sorting it all out. In enterprises where time is money, that got expensive. Add to this that some distros update software versions infrequently, so the whole ecosystem needs to be maintained holistically, because newer apps might require newer libraries that break other, older packages.

      Snap, Flatpak and other “atomic” distribution formats get around this by leveraging a feature of the Linux ecosystem called Linux Containers (LXC) (Windows has this too). You might have heard of Docker or Vagrant; these are two LXC(-like) implementations that basically create what’s called a kernel namespace that acts like a sandbox or a container for the active app and its libraries. The application ships as a single image that contains a filesystem with the executable binaries, the required libraries, and the linker to make it go. The host system mounts the image as an overlay filesystem on the root filesystem and then runs the application in that filesystem within a private namespace with its own RAM and system calls. There are pros and cons here, but overall this is a popular way to maintain and distribute applications.
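
      For a rough sketch of the kernel primitive underneath all of this (this is not Snap or Flatpak source, just an assumed minimal example), unshare() can move the calling process into its own UTS namespace, so a hostname change is only visible inside that private “sandbox”. Real container runtimes combine several namespace types (mount, PID, network, user) with an overlay filesystem on top.

      ```c
      /* Minimal namespace demo (illustrative, needs root / CAP_SYS_ADMIN):
       * after unshare(CLONE_NEWUTS), the hostname change below is private to
       * this process and does not affect the rest of the system. */
      #define _GNU_SOURCE
      #include <sched.h>   /* unshare, CLONE_NEWUTS */
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>  /* sethostname, gethostname */

      int main(void) {
          if (unshare(CLONE_NEWUTS) != 0) {
              perror("unshare");   /* typically fails without privileges */
              return 1;
          }
          sethostname("sandbox", strlen("sandbox"));
          char name[64];
          gethostname(name, sizeof(name));
          printf("hostname inside the namespace: %s\n", name);
          return 0;
      }
      ```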

      The con that folks largely object to with containerized apps is loading time. LXC containers are meant to be immutable, meaning they don’t store data persistently within the image. When the app loads, and this is generally every time the app loads, the linker has to scan and then cache all the libraries within the namespace, because it can’t take advantage of the caches on the host system. Likewise, because the container is a private namespace, user and desktop access needs to be ported through some middleware, because the container doesn’t necessarily have direct access to the system hardware or environment. It can make loading times longer, and introduce timeouts or delays interacting with the app. Plus, since every app ships with its own libraries, there is a lot of extra storage needed. It’s duplication of effort, but that’s okay because these days disk is cheap and CPUs are fast.

      So at the expense of storage and maybe a longer loading time, we can distribute images for Linux systems similar to how Windows ships its apps: a big, self-contained binary image that can run relatively independently of the core system. This means that as long as your hardware can support the app and the kernel understands how to run the code in the container, you can decouple the user applications on a system from the core system that runs them. It’s a great proposition for code repeatability and cross-platform support. You can run Red Hat-optimized apps on Debian systems, and vice versa, without worrying about library compatibility or base image versions. Developers like it because it avoids dependency conflicts, old repos, and third-party repos, and makes the install repeatable across different distros. System maintainers like it for the same reasons. It makes support much easier for everyone.

      I’m less clear on the nuances between Snap and Flatpak, other than that Canonical seems to like Snap and everyone else seems to use Flatpak. Canonical is leaning hard into Snaps for user software distribution. Not everyone likes that. OP went with a more vanilla distro, but can turn on Flatpak with a button in his software store.

      So this was a wall of text, but I hope someone finds it helpful. Apologies for typos, editing is hard on mobile.

      (Edits: clarity, typos)

    • JTskulk@lemmy.world
      11 months ago

      Here’s my bad experience with snaps back when I ran Kubuntu. I thought I’d give it a try, maybe it’ll be like systemd where lots of people loudly complain but it works and it’s something slightly different to learn.

      My Firefox was automatically moved to snap. First of all, I noticed that there was a slight delay whenever I clicked a link outside of Firefox that wasn’t there before. I think it also switched the file picker back to GTK, which I hate, and it moved my config somewhere funny. Then I got a popup saying that there’s an update for Firefox and that I need to close it. Normally with apt I just do the update and then restart later. So I close Firefox and… nothing. No user feedback, I can’t tell if it’s doing anything. I assume it’s done and I reopen it. Nope, I get the same popup later. I guess I didn’t leave it closed long enough.

      The whole experience left a bad taste in my mouth. Canonical is pushing their homegrown software on me because they want to compete with Flatpak or whatever, and they made my user experience worse as a result. I gained nothing from this except frustration and distrust, which led to me switching distros when I built a new machine. Snaps also spam your df output with all their different crap that gets mounted. I ended up removing snap and using the Mozilla PPA and was happy again.

    • LerajeOP
      11 months ago

      I just found them incredibly slow and I don’t like the fact that Canonical have a good go at forcing you to use them, even going as far as not shipping Flathub.

      I am in no way an expert - I do use the terminal for some things (node apps etc) but other than that, I use the GUI for everything.

      • Montagge@kbin.social
        11 months ago

        I don’t have a ton of snaps installed but haven’t noticed any degradation in speed so far for something like Firefox.

        Also I don’t think Ubuntu shipped with flatpak before snaps. Some of the other flavors of Ubuntu did but chose to go with snaps instead. You can always install flatpak if you want.

        Not that anyone has to be okay with snaps or the direction Canonical is going. I’ve been eyeballing LMDE myself!

        • LerajeOP
          11 months ago

          I should’ve been clearer - the speed issues were to do with installation (and updating) rather than running speed. The whole process seemed very slow and hung quite often. But, as I just said to @Varyk, maybe that was a misconfiguration on my part.

          I’m definitely not an Ubuntu hater at all, it’s just that the culmination of a few things, like the Snaps coupled with Canonical’s slightly weird attitude, was enough to make me want to switch.

          As for LMDE, I definitely recommend it. It’s solid as a rock and once I put Gnome on it, was exactly how I wanted it.

      • Varyk@sh.itjust.works
        11 months ago

        Got it. Thanks, I was curious. I mostly use the terminal for making sure I can use gifs as a desktop background, just customization stuff, haha.

        I don’t even notice the snaps. You mean the installation process via snaps is slower, or the programs run slower?

        • LerajeOP
          11 months ago

          The install process mainly - it used to hang quite a lot for me. But, maybe that was something I had misconfigured.

          • Varyk@sh.itjust.works
            11 months ago

            Got it.

            I feel like whenever I do anything in the terminal, I’m just searching and following walkthroughs made by virtuous Linux giants before me.

            I don’t remember any hung snap installs, but Ubuntu gives me so much information if anything has failed that I probably assumed I did something wrong and found an alternative online instead of solving the snap issue anyway, haha.

            Maybe a bunch of them didn’t work and I just didn’t notice. My setup is pretty constant since it’s my office machine; I try not to change too many things I don’t need to.

            Sorry, off hours, just rambling.

    • Yer Ma@lemm.ee
      11 months ago

      Probably my biggest issue with snaps was that, as a student carrying my laptop all day, I would notice snapd eating a lot of power. Removing the service helped my battery life significantly.

        • Yer Ma@lemm.ee
          11 months ago

          I think it was mostly trying to update or sync every time I sat down and opened the laptop, but on an old machine this was problematic.

    • Godort@lemm.ee
      11 months ago

      Largely because Canonical (the company that controls Ubuntu) has sole control over the format.