- cross-posted to:
- linuxmemes@lemmy.world
EDIT: I had an RPi, but it died (from ESD, I think).
EDIT2: This is also my work machine, and I sleep to the sound of the fans.
I run a cluster of VMs that host Kubernetes, and I manage those VMs with containers that run Terraform and Ansible, alongside bare-metal RISC-V workflows and ASICs (rough sketch of the containerized Terraform step below).
A tool is a tool and one should pick what works for them.
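For the curious, the "containers that run Terraform" part looks roughly like this. A minimal sketch, not my exact setup: it assumes Docker plus the public hashicorp/terraform image, and the `./terraform` directory holding the VM definitions is hypothetical.

```python
# Minimal sketch: run Terraform from a throwaway container instead of
# installing it on the host. Assumes Docker and the public
# hashicorp/terraform image; the ./terraform directory is hypothetical.
import os
import subprocess


def run_terraform(subcommand: str, workdir: str = "./terraform") -> None:
    """Invoke a Terraform subcommand inside a short-lived container."""
    subprocess.run(
        [
            "docker", "run", "--rm",
            "-v", f"{os.path.abspath(workdir)}:/workspace",  # mount the configs
            "-w", "/workspace",                              # run from the mount
            "hashicorp/terraform:latest",                    # image entrypoint is `terraform`
            subcommand,
        ],
        check=True,
    )


if __name__ == "__main__":
    run_terraform("init")  # download providers
    run_terraform("plan")  # preview the VM changes
```

Ansible works the same way: swap in an Ansible image and mount the playbooks instead.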
Why? Wouldn’t the VMs add extra complexity? Couldn’t you just run the containers on the machine?
I’m one of those people with an overkill setup.
Do you have experience with Kubernetes or kubectl, disaster recovery (DR), or ASICs? Not everything should be a container, and a container can't do what an ASIC is built and designed for.
If I want a three-node cluster for redundancy and speed, I need either three bare-metal machines or one hypervisor hosting three VMs that run my cluster nodes. I think there is a knowledge gap here. Check out these links if you have more questions, and see the sketch below them for what the cluster looks like from the API side.
https://kubernetes.io/docs/concepts/architecture/
https://www.redhat.com/en/blog/kubespray-deploy-kubernetes
https://rudimartinsen.com/2023/12/29/kubernetes-cluster-on-vms-2024/
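To make the "three nodes, bare metal or VMs" point concrete, here's a hedged sketch using the official kubernetes Python client (`pip install kubernetes`). It just lists whatever nodes your kubeconfig points at and whether they report Ready; Kubernetes doesn't care that mine are VMs.

```python
# Minimal sketch, assuming `pip install kubernetes` and a working kubeconfig
# (~/.kube/config) pointing at the cluster. Kubernetes doesn't care whether
# the nodes behind it are bare metal or VMs.
from kubernetes import client, config


def list_nodes() -> None:
    """Print each node in the cluster and whether it reports Ready."""
    config.load_kube_config()  # use the local kubeconfig
    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        ready = next(
            (c.status for c in node.status.conditions if c.type == "Ready"),
            "Unknown",
        )
        print(f"{node.metadata.name}: Ready={ready}")


if __name__ == "__main__":
    list_nodes()  # a healthy three-node cluster prints three Ready=True lines
```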
Also, some things cannot run as containers because of architectural differences; these are chips designed specifically for prototyping and software development. The sketch after the link shows one place that ISA difference is visible.
https://www.ijert.org/asic-design-for-a-32-bit-risc-v-processor
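To show what "architectural differences" means at the binary level (illustrative only, not part of the ASIC flow itself): every ELF binary records its target ISA in its header, so a riscv64 build simply won't execute on an x86-64 host or container without emulation. The path below is a placeholder.

```python
# Illustrative only: read the e_machine field of an ELF header to see which
# ISA a binary targets. A riscv64 binary won't execute on an x86-64 host
# (or in an x86-64 container) without emulation.
import struct

# A few e_machine values from the ELF spec.
MACHINES = {0x3E: "x86-64", 0xB7: "aarch64", 0xF3: "riscv"}


def elf_arch(path: str) -> str:
    """Return the target architecture recorded in an ELF file's header.

    Assumes a little-endian ELF, which covers x86-64 and riscv64.
    """
    with open(path, "rb") as f:
        header = f.read(20)
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    (e_machine,) = struct.unpack_from("<H", header, 18)  # e_machine at offset 18
    return MACHINES.get(e_machine, f"unknown (0x{e_machine:x})")


if __name__ == "__main__":
    print(elf_arch("/bin/ls"))  # placeholder path; prints the host binary's ISA
```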
Lastly, we all have different needs for our home labs. I have to research new tech and processes for my job, and it's a lot of political overhead to get some of that working on company hardware. I'm very lucky to have a good relationship with the systems and storage teams, so I can buy older, retired hardware to run at home. This isn't everyone's use case, and that's fine.