I have a PC with Portainer installed, running various Docker services (Home Assistant, Jellyfin, etc.), behind an ISP-supplied router that pins each device's IP address and updates a DynDNS entry.

I really want to move everything over to HTTPS by supplying certificates, handling TLS termination, and so on.
The issue I have is that self-signed certificates mean managing certificate deployment to every device in the house.

I figure I need to link a domain to the DynDNS entry and arrange certs for that domain. However, I can't get the link working, and everywhere I've looked wants >£100 to issue a certificate.

How are people solving this issue?

    • ripcord@kbin.social

      My problem - and I’m not alone - is that I really don’t want to expose anything publicly. Is there a way to do this without exposing anything to the Internet?

      • datallboy@lemmy.techhaven.io

        You don’t have to expose Nginx publicly; it can exist privately on your network. I have my own domain and an internal DNS server. For example, nginx.home.datallboy.com and jellyfin.home.datallboy.com both resolve to my Nginx Proxy Manager (NPM) server at 192.168.1.10. Nginx then listens for jellyfin.home.datallboy.com and proxies those connections to my Jellyfin VM at 192.168.1.20.
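
        For illustration, the listen-and-proxy part can be expressed as a plain nginx server block (NPM generates something equivalent from its UI; the cert paths and ports here are assumptions, using the hostnames and IPs from this comment):

        ```nginx
        # Sketch: proxy jellyfin.home.datallboy.com to the Jellyfin VM.
        server {
            listen 443 ssl;
            server_name jellyfin.home.datallboy.com;

            # Illustrative Let's Encrypt paths; NPM manages these for you.
            ssl_certificate     /etc/letsencrypt/live/home.datallboy.com/fullchain.pem;
            ssl_certificate_key /etc/letsencrypt/live/home.datallboy.com/privkey.pem;

            location / {
                proxy_pass http://192.168.1.20:8096;   # Jellyfin's default HTTP port
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }
        ```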

        Since I own my domain (datallboy.com), I let Nginx Proxy Manager do a DNS challenge, which is only used to prove that I own the domain. This inserts a TXT record into the public DNS records for verification, and it can be removed afterwards. Let's Encrypt will then issue a certificate for https://jellyfin.home.datallboy.com, which I can only access locally on my network since the name only resolves to private IP addresses. The only thing “exposed” is the fact that Let's Encrypt issued a certificate for your domain, which isn't accessible from the internet anyway.
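
        For context, the DNS-01 challenge works by publishing a one-time token in public DNS; the record looks roughly like this (the token value is a placeholder) and can be deleted once the certificate is issued:

        ```
        _acme-challenge.home.datallboy.com.  300  IN  TXT  "<base64url-token>"
        ```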

        You do not have to create your own CA server.

      • Croquette@sh.itjust.works

        I am new at this, but from my understanding, if you don't want to expose anything to the internet, you need to run your own CA server to create your own certificates and get the necessary certs for HTTPS on your homelab.
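
        A minimal sketch of that approach with openssl (all names and paths here are illustrative, and a real deployment should guard ca.key carefully):

        ```shell
        # 1. Create the CA key and a self-signed root certificate; ca.crt is what
        #    you distribute to every browser/device in the house.
        openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
          -keyout ca.key -out ca.crt -subj "/CN=Home Lab CA"
        # 2. Create a key and signing request for one service.
        openssl req -newkey rsa:2048 -nodes \
          -keyout jellyfin.key -out jellyfin.csr -subj "/CN=jellyfin.home.lan"
        # 3. Sign it with the CA, adding the SAN that modern browsers require.
        printf 'subjectAltName=DNS:jellyfin.home.lan\n' > san.ext
        openssl x509 -req -in jellyfin.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
          -days 825 -extfile san.ext -out jellyfin.crt
        # 4. Check that the service cert validates against the CA.
        openssl verify -CAfile ca.crt jellyfin.crt
        ```

        The jellyfin.crt/jellyfin.key pair then goes into whatever serves HTTPS for that host.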

        • ripcord@kbin.social

          That’s essentially what I ended up having to do, but I keep hoping that I’ve missed something.

          I also find that people seem to ignore this route, assuming everyone is fine with public DNS pointing at their home IP and HTTP/HTTPS ports open.

          • wagesj45@kbin.social

            Gotta live on the edge, man. Open up your router. All ports. Firewalls are for pansies. Connect your laptop directly to the modem. Enable ssh and rdp. What could go wrong?

          • Croquette@sh.itjust.works

            You can set up a VPS between the internet and your home network to limit your home network's exposure. When a client reaches yourdomain.com, it sees the IP of the VPS and not the IP of your home network.

            Otherwise, a VPN plus a home CA server will make your home network accessible and encrypted as well.

      • julle@kbin.juhlin.network

        I have a public domain that I only use internally on my home network, with a local DNS server that handles all my internal DNS records. I just point those records at my Nginx Proxy Manager's local IP address and let it create certs using a DNS challenge, so I don't need to expose anything externally to make it work.
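
        On a dnsmasq-based resolver (which Pi-hole uses underneath), the "point my records at the proxy" part can be a single wildcard entry; the domain, IP, and file path here are placeholders:

        ```
        # /etc/dnsmasq.d/99-homelab.conf
        # Resolve every *.home.example.com name to the reverse proxy.
        address=/home.example.com/192.168.1.10
        ```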

  • NotSteve_@lemmy.ca

    I use Let's Encrypt for everything. It's mostly simple to set up and you'll get certs for free. If you set it up right, they'll automatically renew forever too.
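
    With the certbot client, the "renew forever" part is just a scheduled job (newer packages install a systemd timer that does the same thing); this crontab line is a sketch, and the times are arbitrary:

    ```
    # Run certbot's renewal check twice a day; it only renews certs near expiry.
    17 3,15 * * *  certbot renew --quiet
    ```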

  • Dave@lemmy.nz

    Pretty much everyone uses Let's Encrypt for their certs. They are free, and support is often built into your reverse proxy.

    Since you have multiple services, I’ll assume you have a reverse proxy set up. So just google Let’s Encrypt and the name of your reverse proxy and you should find a tutorial.

    I’m not sure how using DynDNS affects this. If you have your own domain, use Cloudflare Tunnels: you install the software on your server and it keeps a connection open to Cloudflare. No port forwarding, no problems with IP addresses, and you can use it behind CGNAT. It will also provide SSL for the browser-to-Cloudflare leg, but I highly recommend still setting up Let's Encrypt for the Cloudflare-to-server leg.
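
    A sketch of the tunnel side, assuming cloudflared is installed and a tunnel has already been created; the tunnel ID, hostname, and backend address are placeholders:

    ```yaml
    # ~/.cloudflared/config.yml
    tunnel: <tunnel-id>
    credentials-file: /etc/cloudflared/<tunnel-id>.json
    ingress:
      - hostname: jellyfin.example.com
        service: http://192.168.1.20:8096   # local service behind the tunnel
      - service: http_status:404            # catch-all rule cloudflared requires
    ```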

    • Semi-Hemi-Demigod@kbin.social

      I don’t use DynDNS but I do have two HAProxy servers, one locally and the other on a VPS. The VPS has a cron job that renews the certs every three weeks, and my local server rsyncs them to the right place every so often.

      Then, on my Pi-hole, I send requests for my services to the local IP but on the same domain. Because the certs are tied to the domain name and not the IP, the cert is valid both on my LAN and from the internet.
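
      The "rsyncs them to the right place" step could look like this cron entry on the local box (hosts, paths, and schedule are all hypothetical):

      ```
      # Pull renewed certs from the VPS weekly, then reload HAProxy to pick them up.
      45 4 * * 1  rsync -az vps.example.com:/etc/haproxy/certs/ /etc/haproxy/certs/ && systemctl reload haproxy
      ```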

  • clb92@kbin.social

    Caddy reverse proxy handles that for me. I just set my domains’ DNS to point to my public IP, where port 80 and 443 are forwarded to a server with Caddy listening.
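
    For reference, the Caddy setup really is this small; Caddy obtains and renews the Let's Encrypt certificate itself, provided ports 80/443 reach it (the domain and backend address are placeholders):

    ```
    # Caddyfile
    jellyfin.example.com {
        reverse_proxy 192.168.1.20:8096
    }
    ```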

  • staticlifetime@kbin.social

    I have free wildcard certs provided from my domain registrar. I host with Kubernetes, which allows for TLS secrets using Ingress. For external/public usage, I also have an NGINX reverse proxy in front of it.

  • Bldck@beehaw.org
    1. Cloudflare free tier
    2. Cloudflare wildcard cert (I use one domain with many subdomains)
    3. Docker container for dyndns to cf
    4. Nginx proxy manager
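
    Step 3 might look like this compose service, assuming the oznu/cloudflare-ddns image; the API token, zone, and subdomain are placeholders:

    ```yaml
    services:
      cloudflare-ddns:
        image: oznu/cloudflare-ddns:latest
        restart: always
        environment:
          - API_KEY=<cloudflare-api-token>   # scoped DNS-edit token
          - ZONE=example.com
          - SUBDOMAIN=home                   # keeps home.example.com on your current IP
    ```
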
  • yamdwich@kbin.social

    I use pfsense’s HAProxy integration and a combination of Cloudflare or Lets Encrypt certificates for external stuff. For internal-only stuff I have a root CA I distributed to my computers that I use to sign certificates. My docker box that serves most of my internal stuff has an nginx-proxy-manager container with a wildcard certificate so that I don’t have to sign one for every new subdomain on my docker host, and the various containers with services in it talk to it over a private docker network. Buying a cheap domain and managing it through Cloudflare simplifies a ton of stuff.

  • gabriele97@lemmy.g97.top

    You can use Let's Encrypt, or certificates issued by Cloudflare if your domain is managed through Cloudflare.

  • uglytruck@kbin.social

    As for the cost, you could use DuckDNS; it's free, certificate included. But if you don't want to expose your network, I'm not understanding why you would want HTTPS at all. You could use WireGuard instead.
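
    A sketch of the WireGuard alternative: a roaming client tunnels into the house and then reaches internal services over the encrypted link (all keys, addresses, and the hostname below are placeholders):

    ```ini
    # /etc/wireguard/wg0.conf on the client
    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.8.0.2/32
    DNS = 192.168.1.10               # use the home DNS server through the tunnel

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = myhome.duckdns.org:51820
    AllowedIPs = 10.8.0.0/24, 192.168.1.0/24   # route homelab subnets via the VPN
    PersistentKeepalive = 25
    ```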