I have many services running on my server, and about half of them use postgres. As long as I was installing them manually, I would always create a new database in the same postgres instance for each service, which seems quite logical to me: the least overhead, fast startup, etc.

But since I started using docker, most of the docker-compose files come with their own instance of postgres. Until now I just let them do it and was running a couple of postgres instances. But it's getting kind of ridiculous how many postgres instances I run on one server.

Do you guys run several dockerized instances of postgres, or do you rewrite the docker-compose files to point the services at your one central postgres instance? And are there usually any problems with that, like version incompatibilities, etc.?
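
For context, what I mean by "rewriting" is roughly this: drop the bundled db service from the app's compose file and point its connection settings at the shared instance over a common docker network. A rough sketch of what I have in mind (the service names, network name, and environment variables are just placeholders, since every app expects different ones):

```yaml
# central-postgres/docker-compose.yml -- the one shared instance
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme        # placeholder
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - dbnet

volumes:
  pgdata:

networks:
  dbnet:
    name: dbnet                          # fixed name so other compose projects can join it

# ---

# someapp/docker-compose.yml -- with its bundled "db" service removed
services:
  app:
    image: example/someapp:latest        # placeholder image
    environment:
      DB_HOST: postgres                  # resolves to the shared instance over dbnet
      DB_PORT: "5432"
      DB_NAME: someapp
      DB_USER: someapp
      DB_PASSWORD: changeme              # placeholder; actual variable names depend on the app
    networks:
      - dbnet

networks:
  dbnet:
    external: true                       # created by the central postgres project
```

The remaining work would then be creating a database and user per app on the shared instance, which is basically what I was already doing before docker.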

  • sardaukar@lemmy.world · 8 months ago

    You’ve never had to run migrations that lock tables or rebuild an index in two decades?

    • Shimitar@feddit.it · 8 months ago

      Why would that have blocked all my databases at once? That would only affect the database I was migrating, not the others.

      • sardaukar@lemmy.world · 8 months ago

        Yes, it would cause downtime for the one being migrated - right? Or does that not count as downtime?

        • Shimitar@feddit.it · 8 months ago

          Yes, it counts indeed… but in that case the service itself is down while it's being migrated, so does it really matter that the database is also down?

          I mean, it's a self-hosted home service, not your bank's ATM network…