I’d like to thank the admins for being so open and direct about the issues that they’re facing.

  • thelastknowngod · 96 points · 11 months ago

    To be fair, with a proper autoscaling scheme in place these services should scale down significantly when not in use.
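
    As a minimal sketch of what that can look like (AWS CDK in TypeScript; the instance size, capacity bounds, and 50% CPU target are illustrative assumptions, not recommendations):

    ```typescript
    import { Stack } from 'aws-cdk-lib';
    import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';
    import * as ec2 from 'aws-cdk-lib/aws-ec2';

    declare const stack: Stack; // an existing CDK stack

    const vpc = new ec2.Vpc(stack, 'Vpc', { maxAzs: 2 });

    // Small baseline, modest ceiling: the group can shrink to a single
    // instance overnight instead of paying for peak capacity 24/7.
    const asg = new autoscaling.AutoScalingGroup(stack, 'Asg', {
      vpc,
      instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.SMALL),
      machineImage: ec2.MachineImage.latestAmazonLinux2023(),
      minCapacity: 1,
      maxCapacity: 4,
    });

    // Target tracking adds instances above 50% average CPU and, just as
    // importantly, removes them again when the load drops off.
    asg.scaleOnCpuUtilization('ScaleToHalfCpu', {
      targetUtilizationPercent: 50,
    });
    ```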

    That being said, a big reason for using AWS/GCP is all the additional services that are available on the platform… If the workload being run isn’t that complicated, the hyperscalers are probably overkill. Even DO or Linode would be a better option under those circumstances.

    • @Overmind@lemmy.sdf.org · 71 points · 11 months ago

      This. AWS architect here. There are a lot of ways to reduce costs in AWS, like horizontal scaling, serverless functions, and reserved instances. Most people aren’t aware of them, and if you’re going to dive head first into something like cloud, you’ll have to bear the consequences and learn eventually.

      • Greyscale · 28 points · 11 months ago

        Even with ASGs, EC2 costs a bomb for the performance you get.

        And “serverless” functions are a trap.

        If you’re gonna commit to reserved instances, just buy hardware for goodness’ sake; it’s a 3-year commitment with a huge upfront spend.

          • @whoisearth@lemmy.ca · 6 points · 11 months ago

            Mark my words, the loop is coming back around. I look forward to when my work migrates the datacenter off AWS and back on-prem because of ballooning costs.

            You work in IT long enough you see it for the joke it is. We get paid obscene amounts of money to do what amounts to nothing.

            • @msage@programming.dev · 2 points · 11 months ago

              That’s just because rotating managers always arrive with the ‘new current thing everyone is doing’.

              Like no, 99% of companies can just do what they’ve always done. No need to rebuild everything from scratch.

            • Greyscale · 1 point · 11 months ago

              I’m already in the middle of that. Everything non-public-facing is going to cheap leased boxes running workloads in Docker. idgaf if the machine underneath lives or dies; it’s 3 lines of config in a Terraform script to replace.

        • @masterspace@lemmy.ca · 2 points · 11 months ago (edited)

          > And “serverless” functions are a trap.

          How are serverless functions a trap? They seem like a great cheap option for simple CRUD / client > server > db apps (what most apps end up being).
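
          For context, the platform-specific surface of one of those endpoints is pretty small; here’s a hypothetical read endpoint in the Azure Functions Node.js v4 model (the route and names are made up):

          ```typescript
          import { app, HttpRequest, HttpResponseInit, InvocationContext } from '@azure/functions';

          // Only the app.http() registration below is Azure-specific;
          // the handler body is ordinary Node.js code.
          app.http('getItem', {
            methods: ['GET'],
            route: 'items/{id}',
            authLevel: 'anonymous',
            handler: async (req: HttpRequest, ctx: InvocationContext): Promise<HttpResponseInit> => {
              const id = req.params.id;
              ctx.log(`fetching item ${id}`);
              // ...fetch the record from whatever database the app already uses...
              return { status: 200, jsonBody: { id, name: 'example' } };
            },
          });
          ```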

          • Greyscale · 1 point · 11 months ago

            Anything that is “cheap” to do on serverless is cheaper to do on a $5 droplet, especially once it starts to grow.

            Serverless gets you to buy into a vendor’s lock-in.

            • @masterspace@lemmy.ca · 1 point · 11 months ago

              Interesting, I’ll check out droplets, but in my experience with Azure Functions there’s not much vendor lock-in. My API was just a normal Node.js/Express server; the only part locked into Azure Functions was the format of the endpoint definitions, and those could be adjusted to anything else in about an hour.
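
              As a rough illustration of how thin that layer is, here’s the same hypothetical handler from above rehomed onto plain Express:

              ```typescript
              import express from 'express';

              const server = express();

              // Same handler body as the Functions version; only the
              // registration line is framework-specific.
              server.get('/items/:id', async (req, res) => {
                const id = req.params.id;
                // ...same database call as before...
                res.json({ id, name: 'example' });
              });

              server.listen(3000);
              ```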

            • @Phoenixbouncing@lemmy.world · 1 point · 11 months ago (edited)

              I’m not sure that’s an apples-to-apples comparison. A droplet looks more akin to AWS Lightsail than Lambda, and Lambda certainly doesn’t start at $5 a month.

      • thelastknowngod · 13 points · 11 months ago

        Yep. And if you want to really save some cash and don’t mind getting a little crazy, use an EKS node orchestrator that supports spot instances. I’m starting a serious dive into Harness at the moment, actually.

        Google recently released a white paper on cost saving in Kubernetes as well.
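
        As a sketch of the spot piece (AWS CDK in TypeScript rather than Harness, assuming an existing eks.Cluster; the instance types are illustrative):

        ```typescript
        import * as ec2 from 'aws-cdk-lib/aws-ec2';
        import * as eks from 'aws-cdk-lib/aws-eks';

        declare const cluster: eks.Cluster; // an existing EKS cluster

        // A managed node group drawing on spot capacity. Listing several
        // similar instance types widens the spot pools available, which
        // lowers the odds of losing all capacity at once.
        cluster.addNodegroupCapacity('spot-workers', {
          capacityType: eks.CapacityType.SPOT,
          instanceTypes: [
            new ec2.InstanceType('m5.large'),
            new ec2.InstanceType('m5a.large'),
            new ec2.InstanceType('m6i.large'),
          ],
          minSize: 0,
          maxSize: 10,
        });
        ```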

        • @Toribor@corndog.social · 3 points · 11 months ago

          If you’ve got a Kubernetes cluster running on 10 different spot instances, isn’t there a risk that all ten could be revoked at the same time? Even if they are built out across regions and availability zones?

          • @Zalack@startrek.website · 1 point · 11 months ago

            Counterargument: I don’t need Lemmy to have 100% uptime. It’s not a corporate service and while – obviously – if it’s down all the time I would eventually move on, I’m not going to fault a not-for-profit entity for periodic failures.

          • thelastknowngod · 1 point · 11 months ago

            Ideally you’d have a baseline node group of traditional instances and use spot instances only for scale up.

            I think that risk is manageable, though. PDBs and affinity rules should cover most cases, as in the sketch below. I’m just starting to dig into this, so I may be mistaken.
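
            For instance, a PDB can be applied straight from the same CDK stack; a sketch where the lemmy-ui names are hypothetical:

            ```typescript
            import * as eks from 'aws-cdk-lib/aws-eks';

            declare const cluster: eks.Cluster; // the cluster defined elsewhere

            // PodDisruptionBudget: when spot reclamation (or any node drain)
            // evicts pods, Kubernetes keeps at least one replica running.
            cluster.addManifest('lemmy-ui-pdb', {
              apiVersion: 'policy/v1',
              kind: 'PodDisruptionBudget',
              metadata: { name: 'lemmy-ui-pdb' },
              spec: {
                minAvailable: 1,
                selector: { matchLabels: { app: 'lemmy-ui' } },
              },
            });
            ```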

          • @Phoenixbouncing@lemmy.world · 1 point · 11 months ago

            Got my AWS architect cert 2 weeks ago.

            What you can do is set up a spot fleet so it fills up with spot instances and only uses on-demand capacity if the spot price goes above the on-demand price.

            You could also have a pure spot fleet and a reserved instance and use a load balancer with health checks to route traffic.

            The one thing you shouldn’t do with cloud providers is lift and shift your existing instances; that’s what leads to the crazy prices some people are seeing.

            Renting an EC2 instance on demand and installing your software on it is almost always the wrong way to do it.
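
            One common way to get that shape (an on-demand baseline with spot on top) is an auto scaling group with a mixed instances policy; a hedged CDK sketch with illustrative instance types, not necessarily the exact fleet setup described above:

            ```typescript
            import { Stack } from 'aws-cdk-lib';
            import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';
            import * as ec2 from 'aws-cdk-lib/aws-ec2';

            declare const stack: Stack;
            declare const vpc: ec2.IVpc;

            const launchTemplate = new ec2.LaunchTemplate(stack, 'Lt', {
              machineImage: ec2.MachineImage.latestAmazonLinux2023(),
              instanceType: new ec2.InstanceType('m5.large'),
            });

            new autoscaling.AutoScalingGroup(stack, 'MixedAsg', {
              vpc,
              minCapacity: 2,
              maxCapacity: 10,
              mixedInstancesPolicy: {
                launchTemplate,
                // Alternative types give more spot pools to draw from.
                launchTemplateOverrides: [
                  { instanceType: new ec2.InstanceType('m5.large') },
                  { instanceType: new ec2.InstanceType('m5a.large') },
                ],
                instancesDistribution: {
                  onDemandBaseCapacity: 1,                // always keep one on-demand instance
                  onDemandPercentageAboveBaseCapacity: 0, // everything above base comes from spot
                  spotAllocationStrategy: autoscaling.SpotAllocationStrategy.CAPACITY_OPTIMIZED,
                },
              },
            });
            ```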

      • @Toribor@corndog.social · 4 points · 11 months ago

        I’m in a similar boat. I’m a sysadmin supporting a legacy application running on AWS EC2 instances as well as a new ‘serverless’ microservice-based platform. It’s really, really hard to scale and optimize anything running on EC2 unless you really know what you’re doing or the application is designed with clustering in mind.

        You tend to end up sizing instances based on peak load and then wasting capacity 90% of the time (and burning through cash like crazy). I can imagine a lot of Lemmy admins are overspending so fast they give up before they figure it out.
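
        As rough illustrative math (not real pricing for any particular instance): a box billed at about $0.10/hour costs roughly $73 a month whether it’s serving peak traffic or idling at 3 a.m., so if the peak-sized capacity is only needed a couple of hours a day, the large majority of that spend is idle headroom.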

        • @Dasnap@lemmy.world · 6 points · 11 months ago

          Nowadays I feel like EC2 is only used for legacy support or testing. Most production workloads should probably be built on some kind of container solution so they’re easier to scale.