I think of myself as technically inclined. I have installed Linux multiple times and have basic command line knowledge, and I’ve programmed in many languages, with the most experience making a static website game using HTML/CSS/JS.
Additionally, I own the superspruce.org domain (my registrar is Dynadot), but I don’t really know how to wield the power of owning a domain. I also have some spare computers to use for hosting: a 2009 laptop running Lubuntu and a 3900X + 32 GB RAM desktop running KDE Neon. But I’m also open to experimenting with cloud hosting (I know, sacrilege here).
However, I don’t know much about the TCP/IP protocol or other networking protocols. I’m happy to learn, but the curve would need to start gently.
I would want to try hosting my websites, and also a personal non-federated Lemmy instance to serve as an archivable forum for my games. Even if it’s not very useful, it’s great experience.
Some things to consider:
- Do you want whatever you’re hosting to be internet accessible, or local network only? That is, do you want to be able to access it from anywhere?
- If it’s something you want to be able to use full time, you’ll eventually want a dedicated machine. I have an *arr stack etc. that I share with a private group; eventually downtime became a real consideration and interrupted normal usage of my main machine. I picked up a secondhand EliteDesk mini PC for £150 and it’s doing really well.
- Potentially make use of tools like ZeroTier for private networking (others will hopefully chime in with alternatives, but I’ve had good success with ZeroTier).
- How do you want to host it, and are you willing to learn? Get a bit of knowledge of Docker or Podman (Podman is hard mode, as most examples are Docker specific). Using containers will make things simpler. The most complicated part IMO is networking, but even then it’s more Docker networking than general TCP/IP (like you mentioned in your post); see the compose sketch after this list.
- OS: you mentioned using Linux? I personally use Ubuntu, just out of defaulting to what I’ve previously used. But I’m currently using Manjaro on my non-host PC, and I’m liking it.
- Keep things secure: the more you expose to the internet, the more risk. Keep exposure as small as possible, and use Let’s Encrypt or alternatives for anything you want to access over the internet.
good luck have fun!
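To make the container point above concrete, here is a minimal sketch of Docker Compose networking, assuming Docker and the compose plugin are already installed. The images (traefik/whoami, nginx:alpine) are just small public examples chosen for illustration, not anything specific to this setup.

```bash
# Minimal sketch: two containers on a compose-managed network, where services
# find each other by name instead of by IP address.
mkdir -p ~/docker/demo && cd ~/docker/demo

cat > compose.yaml <<'EOF'
services:
  app:
    image: traefik/whoami   # tiny demo web service listening on port 80
  proxy:
    image: nginx:alpine
    ports:
      - "8080:80"           # publish host port 8080 -> container port 80
# both services join the default network Compose creates for this project,
# so "proxy" can reach "app" by its service name
EOF

docker compose up -d
docker compose exec proxy wget -qO- http://app   # name-based lookup inside the compose network
```

If that last command prints the whoami container’s details, the two containers are talking over the Compose network and you never had to touch an IP address yourself.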
- I want it to be internet accessible. A friend taught me how to use nginx for the local network, although I’ve forgotten that now.
- My previously mentioned computers are computers I don’t use that much, especially the old laptop. My main computer is a 2021 ROG Strix G15 Advantage Edition. I won’t be hosting from that.
- Private networking is just hosting over a LAN, right?
- Does Docker work well on old hardware and is Docker easy to learn?
- I have the most knowledge on Ubuntu based distros. I don’t think Manjaro, a bleeding edge rolling distro, is good for hosting.
- How do I use Let’s Encrypt or something similar to boost my security?
Not who you’re replying to but here are some answers:
- For internet accessibility there will be things to consider. Do you want non-techy friends or family on it? If so, you may need to port forward or use a VPS. Some ISPs don’t allow port forwarding, but there are a million solutions to this. Otherwise you can use a VPN to connect to your network and have access to your server that way.
- Old laptops are great for hosting. They are typically more power efficient than desktops, although the expansion options aren’t great if you don’t like jank.
- Not sure.
- Yes and kinda. It is super light unless you’re running it on really old stuff. It is easy once you learn. I recommend docker compose to keep everything in a single config file. You can use watchtower to keep things automatically updated.
- I like Debian. I too use a rolling distro on my desktop, and some people do like rolling distros for servers. The idea of stable release distros like Debian is that they are consistent and you don’t have to worry about updates breaking things. The idea of rolling distros on a server is that you will only ever have to deal with small updates rather than giant ones that may break everything (like between Debian versions). I would not recommend Manjaro on a server, or at all really. They have a habit of holding back packages for no real reason and causing dependency issues. If you like rolling I would try Tumbleweed or Arch with archinstall. There is also EndeavourOS, which is basically easier Arch.
- Let’s Encrypt is for HTTPS certificates. You can use self-signed certificates, meaning they are made by your computer rather than Let’s Encrypt’s servers, but you will get warnings in your browser, and app interfaces for your services may not work; they are just as secure for personal use. Don’t do HTTP over the internet. That is bad. You know the VPN ads that say hackers can steal all your bank passwords if you don’t buy their VPN? That is not true, because your bank uses HTTPS. If you access your services over HTTP on the internet, someone will get your password eventually. Do whatever you want on your home network, though. (There’s a short sketch of both certificate options below.)
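As a rough sketch of those two certificate options, assuming a Debian/Ubuntu box that already runs nginx and has a DNS name pointed at it (the domain below is just the one from this thread, and the file paths are arbitrary):

```bash
# Option 1: a free Let's Encrypt certificate via certbot, which also configures renewal
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d lemmy.superspruce.org    # replace with your actual (sub)domain

# Option 2: a self-signed certificate (fine for LAN-only use, browsers will warn)
sudo openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout /etc/ssl/private/selfsigned.key \
  -out /etc/ssl/certs/selfsigned.crt \
  -subj "/CN=myserver.lan"
```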
Maybe I should just cloud host, as it costs about the same as a VPN. But then again, a VPN will also protect my privacy. Do I need to use Let’s Encrypt if I use a VPN or a cloud host, if I want to be secure?
Short answer: VPN no, cloud yes. That is, if the VPN server is on your network and you are not simply forwarding ports to an external VPN. If your choices are VPN vs. cloud, I would recommend cloud, as it is satisfying to be able to type in a URL and go to your site instead of having to go through VPN weirdness. If you keep sensitive data on your server then a VPN is definitely better (again, the local kind). I personally just forward 80 and 443 and use nginx to control what domains everything is on.
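A hedged sketch of that “forward 80/443 and let nginx sort out the domains” approach, using Debian/Ubuntu’s sites-available layout; the upstream port (8536, the default Lemmy’s docs use) and the subdomain are assumptions for illustration:

```bash
# One server block per domain; nginx picks the right one based on the Host header.
sudo tee /etc/nginx/sites-available/lemmy.superspruce.org >/dev/null <<'EOF'
server {
    listen 80;
    server_name lemmy.superspruce.org;

    location / {
        proxy_pass http://127.0.0.1:8536;        # wherever the service listens locally
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/lemmy.superspruce.org /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx     # check the config, then reload
```

Add another server block for each additional domain, and layer certbot on top of this for HTTPS.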
I see a number of comments recommending a virtual server host, but I have not seen any mention of the main reason WHY this is advisable. If you want to host something from your home, people need a way to reach you. There are two options for this: use a DDNS service (generally frowned upon for permanent installations), or get a static IP address from your provider.
DDNS means you have to monitor whenever your public IP address changes, send out updated records, and wait for those changes to propagate across the internet. This generally means several minutes or more of downtime where nobody can reach your server, and it can happen at completely random times.
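For what it’s worth, the monitoring half of that can be as simple as a cron job like the hypothetical sketch below; the actual record update is registrar-specific, so that part is left as a placeholder.

```bash
#!/usr/bin/env bash
# Hypothetical DDNS check: compare the current public IP with what DNS publishes,
# and call a registrar-specific update hook only when they differ.
set -eu

CURRENT=$(curl -s https://ifconfig.me)              # what the world sees right now
PUBLISHED=$(dig +short superspruce.org @1.1.1.1)    # what DNS currently answers

if [ "$CURRENT" != "$PUBLISHED" ]; then
    echo "Public IP changed: $PUBLISHED -> $CURRENT"
    # ./update-dns.sh "$CURRENT"   # placeholder for your registrar's update mechanism
fi
```

Run it from cron every few minutes; the propagation delay described above is the window between pushing the update and resolvers picking up the new record.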
A static IP is reliable, but they cost money, and some providers won’t even give you the option unless you get a business-class connection, which costs even more money. However this cost is usually already rolled into the price of a virtual machine.
Keep in mind also that when hosting at home, simply using a laptop to stay online 24/7 is not enough, you also need a battery backup for your network equipment. You will want to learn about setting up a firewall and some kind of IDS to protect the front end of your services, but for starting out you can host this on the same machine as your other services. And if you really want to be safe, set up a second internal machine that you can perform regular backups to, so when your machine gets hacked you have a way to restore the information.
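On the “second internal machine for backups” point, a minimal take (the hostname, paths and schedule below are made up for illustration) is a nightly rsync push over SSH:

```bash
# /etc/cron.d/backup-docker  (cron.d format: minute hour dom month dow user command)
# Push the docker directory to a second machine every night at 03:00.
0 3 * * * root rsync -a --delete /srv/docker/ backup@192.168.1.50:/backups/docker/
```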
My first server was online for two whole weeks before someone blew it up. Learn security first, everything after that will be easy.
I actually use a dynamic IP and it works pretty well for me, I don’t remember having any issues because of that. Also, what happened after those two weeks to your server and how? I’ve been running my things for over 3 years and I haven’t done anything special in terms of security.
This was back in '99 and I didn’t know much about Linux (or servers) at the time, so I’m not exactly sure what they did… but one morning I woke up and noticed my web service wasn’t working. I had an active login on the terminal but was just getting garbage from it, and I couldn’t log in remotely at all. My guess was that someone hacked in, but hacked the system so badly that they basically trashed it. I was able to recover a little data straight from the drive, but I didn’t know anything about analyzing the damage to figure out what happened, so I finally ended up wiping the drive and starting over.
At that point I did a speed-run of learning how to set up a firewall, and noticed right away all kinds of attempts to hit my IP. It took time to learn more about IDS and to try not to be too reckless in setting up my web pages, but apparently it was enough to thwart however that first attacker got in. Eventually I moved to a dedicated firewall in front of multiple servers.
Since then I’ve had a couple of instances where someone cracked a user password and started sending spam through, but fail2ban stopped that. And boy are there a LOT of attempts at trying to get into the servers. I should probably bump up fail2ban to block IPs faster and for a longer period when they use invalid user names, since attacks these days come from such a wide range of IPs.
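For anyone curious what “bumping up fail2ban” might look like, here’s a rough sketch of a stricter SSH jail. The thresholds are arbitrary examples, and the 1h/1w time suffixes assume a reasonably recent fail2ban.

```bash
# Drop a stricter override into fail2ban's jail.d directory and restart the service.
sudo tee /etc/fail2ban/jail.d/sshd-strict.local >/dev/null <<'EOF'
[sshd]
# ban after 3 failures within one hour, for a week
enabled  = true
maxretry = 3
findtime = 1h
bantime  = 1w
EOF
sudo systemctl restart fail2ban
```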
Interesting. I guess security wasn’t that good by default back then, firewalls are now set up by default on pretty much every server distro.
There was no such thing as a default firewall, but even now when I set up a new Debian machine there are no firewall rules, just the base iptables installed so you CAN add rules. Back then we also had insecure things like telnet installed by default and exposed to the world, so there’s really no telling exactly how they managed to get into my machine. It’s still good to learn about network security up front rather than relying on any default settings if someone is planning on self-hosting.
Don’t make anything accessible via the internet if you’re new and starting out. The last thing you want is to accidentally leave a port open, leave an admin page with a default guessable password, or a piece of vulnerable software running and have someone gain access to your local network.
Start locally and learn the basics following the excellent advice of others here, and slowly build your knowledge until you understand the various moving and connecting pieces.
Seems people are already making great recommendations.
Personally I also use docker-compose, a WireGuard VPN and an nginx server to proxy/SSL-terminate all my services to my VPN or the internet.
Just host whatever you like/need. Something like Nextcloud is probably a solid start.
Btw, I have not seen it mentioned here, but awesome lists are a thing on GitHub where people collect various gems for certain categories. Here is the one for self-hosting, which I have used extensively and really like: https://github.com/awesome-selfhosted/awesome-selfhosted
Also, when you start hosting more than 2-3 services, keeping them up to date might become a hassle which you can forget. For Docker-based hosting I’d recommend you set up Watchtower, which can keep your services up to date for you.
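As a hedged example, here’s a Watchtower entry you could add under the services: block of an existing compose file (assuming the file currently ends inside that block); the image and environment variables are Watchtower’s documented ones, and the interval is just an example.

```bash
# Append a watchtower service to an existing compose file, then recreate the stack.
cat >> compose.yaml <<'EOF'
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # lets watchtower talk to the Docker daemon
    environment:
      - WATCHTOWER_CLEANUP=true            # prune old images after updating
      - WATCHTOWER_POLL_INTERVAL=86400     # check for new images once a day
EOF
docker compose up -d
```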
I would caution against automatic updates! Notifications, yes absolutely. But automatically updating things is a great way to have things break suddenly when you’re not in a good place to troubleshoot.
Probably depends. There are some services which I know are rock solid and have never failed me when auto-updating in 2+ years now.
However you are right that it can cause issues. I totally forgot to mention monitoring, my bad. A service like Uptime Kuma is really worth having for that reason. For a few services, that really saved me a few times, tbh. I set it up to broadcast status changes to a Telegram channel and a dedicated mail inbox, but it can seemingly notify through any and all services under the sun.
If you only have a handful of services, manual updating is good for learning and can prevent problems by letting you see them immediately. However, once you have so many services that you can’t possibly update them all, or don’t have the motivation to, it’s better to let them auto-update and have a service like Uptime Kuma notify you if something goes wrong, imo.
As with all things in the world, it’s a matter of striking the right balance of tradeoffs.
Welcome to the cult!
We all started as beginners, but before you start, take my advice and avoid hosting anything open to the internet until you’ve gained more experience in OS/network hardening and risk assessment.
First off, I think you’re starting on a good footing. Having TCP/IP knowledge is good, but you don’t need it from the beginning; it will become relevant once you get into network segmentation and setting up reverse proxies.
I’d say the first thing is to actually choose a rather simple (but useful) application that you can host on Docker and get some experience with OCI containers and disaster recovery. A Lemmy instance (even non-federated) might be too much to begin with. Have you considered paperless-ngx, FreshRSS or even Syncthing instead (see the sketch below for one of them)? Or begin with formulating what problem you want solved in your daily life.
I’d say, start by watching this video series to gain a better understanding of Docker (I’ve so far assumed that you won’t do bare-metal installs, right?!?). There’s also a pretty good online lab for you to play around in. Remember, you’ll probably realise that your first deployments could be better, so keep yourself mentally prepared to redo and rebuild eventually.
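To pick one of those suggestions, here is a hedged compose sketch for Syncthing using its official image; the ports and data path follow the image’s defaults, and the directories are arbitrary.

```bash
mkdir -p ~/docker/syncthing && cd ~/docker/syncthing

cat > compose.yaml <<'EOF'
services:
  syncthing:
    image: syncthing/syncthing
    ports:
      - "8384:8384"          # web UI
      - "22000:22000"        # sync protocol (TCP)
      - "22000:22000/udp"    # sync protocol (QUIC)
    volumes:
      - ./data:/var/syncthing
    restart: unless-stopped
EOF

docker compose up -d   # then browse to http://<server-ip>:8384
```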
Feel free to message me if you want guidance going forward!
I’d say start with getting Lemmy going inside your home network (not accessible to the outside world). That’ll give you a chance to play around with Docker if you want to go the Docker route. I like to make Portainer the first docker container I install (I install it with Docker Compose), and then I manage all other docker containers/etc. through Portainer. Just a quick heads up on Portainer… what Docker calls “docker compose”, Portainer calls a “Stack”, because it can have a “stack” of different stuff running under it.
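For reference, installing Portainer with Docker Compose as described above can look roughly like this sketch; portainer/portainer-ce and port 9443 are Portainer’s published defaults, and the paths are arbitrary.

```bash
mkdir -p ~/docker/portainer && cd ~/docker/portainer

cat > compose.yaml <<'EOF'
services:
  portainer:
    image: portainer/portainer-ce
    ports:
      - "9443:9443"    # HTTPS web UI
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # so Portainer can manage this Docker host
      - ./data:/data
    restart: unless-stopped
EOF

docker compose up -d   # then browse to https://<server-ip>:9443
```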
Anyway, from there I’d figure out a reverse proxy. I use Nginx Proxy Manager, which is nginx under the hood, with a web interface to manage things. I’ve never tried Caddy, but people like that one, too.
The reverse proxy is what controls security, basically. Someone from outside your network types in lemmy.superspruce.org, and you’ve told Dynadot to point that at your home IP address. You open ports 80 and 443 on your router and forward them to the machine running Nginx Proxy Manager. So NPM gets everything that’s pointed at your house on those ports. It sees the request is for lemmy.superspruce.org, you’ve told NPM where to look for that, and it handles it from there.
Just doing these things will open up all sorts of learning challenges that you’ll have to figure out through Googling.
It took me years to finally decide to figure out a reverse proxy, and once I wrapped my head around it, it made so much sense. I wish I had learned it sooner.
If you want to expose some service to the internet, first of all learn how to install and correctly use a VPN server (I use WireGuard, which I find pretty easy); otherwise keep everything on the LAN until you’re confident enough to expose it.
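For a sense of what that involves, here is a rough sketch of a WireGuard server config in wg-quick format; the addresses and port are common defaults, and the keys are placeholders you generate yourself with `wg genkey`.

```bash
# Minimal WireGuard "server" config; add one [Peer] block per client device.
sudo tee /etc/wireguard/wg0.conf >/dev/null <<'EOF'
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
EOF

sudo systemctl enable --now wg-quick@wg0   # bring the tunnel up now and at boot
```

You’d also forward UDP 51820 on your router to this machine, which is a much smaller exposure than opening up the services themselves.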
I knew nothing about hosting a service to the public. Cloudflare tunnels helped simplify it for me. It looks more complicated than it is.
https://skillsforall.com/course/networking-basics?courseLang=en-US
https://www.coursera.org/learn/computer-networking
Here are some links to online courses to get you started. I think you might have to pay for the second one, but the first one, from Cisco, is free. (You’ll need to create a Cisco account, however.)
It’s absolutely great experience, and I could write a whole screed about what to look up, or I could point you to sources that are better organized than me, so I’m going with the latter.
Here’s a link with many different free Cisco courses:
https://www.cisco.com/c/m/en_sg/partners/cisco-networking-academy/index.html
Everyone else is telling you to stay local, which is great advice as far as it goes. But you said you want your website to be publicly available, so I’d recommend getting a cheap VPS and starting there. It’s not on your network, so if you screw up security, the worst case is you start again from scratch. I’d recommend the cheapest VirMach VM you can get, with Debian, or Ubuntu if you like snaps.
First things first: set up SSH with key-based logins with a passphrase, on a non-standard port (that doesn’t provide security, but it will keep your logs from getting inundated immediately). Install UFW, block all incoming traffic, allow all outgoing traffic, and rate-limit traffic to your SSH port. Install Docker and add your user to the docker group. Start learning how to use Docker and Compose, and as your first container set up Duplicati to back up your docker directory (including all your volumes, which I would store as folders inside your docker directory) somewhere else. I’d set it up to run every evening after you go to bed, and I’d also set a cron script to bring down all your containers before you back up, then bring them back up. Just in case.
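A condensed, hedged sketch of those steps; the port number, file paths and times below are arbitrary examples rather than anything from the comment above.

```bash
# SSH: keys only, non-standard port (edit /etc/ssh/sshd_config, then restart)
#   Port 2222
#   PasswordAuthentication no
sudo systemctl restart ssh

# UFW: deny everything inbound, allow outbound, rate-limit the SSH port
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw limit 2222/tcp
sudo ufw enable

# Docker without sudo
sudo usermod -aG docker "$USER"   # log out and back in for this to take effect

# Nightly cron (crontab -e): stop the stack before the backup window, start it after.
# Adjust if the backup tool itself runs as one of these containers.
#   0 3 * * * docker compose -f /home/you/docker/compose.yaml down
#   0 5 * * * docker compose -f /home/you/docker/compose.yaml up -d
```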