next.js, nginx, reverse-proxy, ssl
$ sudo apt-get update
$ sudo apt-get install nginx letsencrypt
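A minimal reverse-proxy server block looks roughly like this (example.com, port 3000 and the certificate paths are placeholders, adjust them to your setup):

server {
    listen 443 ssl;
    server_name example.com;

    # certificates issued by letsencrypt/certbot, default paths
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # assumes the Next.js server listens on localhost:3000
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    # redirect plain HTTP to HTTPS
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}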
I mostly use this with Hetzner Cloud, but it should work with any public cloud provider.
Using only kubeadm and helm from Fedora's repository. These instructions are for Fedora 37+ only, using the distribution's repository to install Kubernetes.
Note: avoid the kubernetes and kubernetes-master packages, as they are deprecated and their setup is more complex.
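The install boils down to something like this (the exact package names are my assumption and may differ between Fedora releases, so double-check with dnf search kubernetes):

$ sudo dnf install kubernetes-kubeadm kubernetes-node kubernetes-client helm
$ sudo systemctl enable --now kubelet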
Note: I have not yet tinkered with IPv6, but it is on my list.
The result should be a decent Kubernetes cluster, with the Cloud Controller Manager installed and managing the layer 2 private network so pods can talk at the fastest speed possible. Persistent storage will default to Hetzner's cloud volumes.
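For reference, the Cloud Controller Manager and the CSI driver can be installed with Helm once the cluster is up. The chart repository URL, chart names and the hcloud secret layout below are assumptions on my part, verify them against Hetzner's current docs:

# secret consumed by the Hetzner CCM/CSI (token and network ID are placeholders)
$ kubectl -n kube-system create secret generic hcloud \
    --from-literal=token=<hcloud-api-token> \
    --from-literal=network=<private-network-id>

$ helm repo add hcloud https://charts.hetzner.cloud
$ helm repo update
$ helm install hccm hcloud/hcloud-cloud-controller-manager -n kube-system
$ helm install hcloud-csi hcloud/hcloud-csi -n kube-system

Enabling the private-network routing in the CCM needs an extra chart value; check the chart's README for the current name.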
Scaling up the control plane nodes is a bit more complicated. That's because the worker nodes can be easily replaced and/or powered off, but that's not the case for the control plane nodes.
In this case, you can't rely on Terraform to scale up the control plane nodes for you, because Terraform would power off all control plane nodes at the same time, which could make etcd lose quorum (in which case your cluster would be corrupted/lost).
I haven't tested it, but I think you might have success if you do something like this:
Drain one of the control plane nodes (kubectl drain <node-name>)
SSH to that node and power it off (sudo poweroff)
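In shell form, something like this (the node name is a placeholder):

$ kubectl drain cp1 --ignore-daemonsets --delete-emptydir-data
$ ssh root@cp1 'poweroff'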
# hel1, CX51, Ubuntu 20.04, 10 GB EXT4 Volume, 240 GB EXT4 Volume
# local NVME
root@ubuntu-32gb-hel1-1:~# hdparm -Tt /dev/sda

/dev/sda:
<?xml version="1.0"?>
<ruleset name="Sane Laravel ruleset"
         xmlns="http://pmd.sf.net/ruleset/1.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://pmd.sf.net/ruleset/1.0.0
                             http://pmd.sf.net/ruleset_xml_schema.xsd"
         xsi:noNamespaceSchemaLocation="http://pmd.sf.net/ruleset_xml_schema.xsd">
    <description>
        This enables everything and sets some exceptions
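To run PHPMD with this ruleset, the usual invocation is enough (the phpmd.xml file name and the app/ path are just examples):

$ phpmd app/ text phpmd.xml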
#cloud-config
groups:
  - docker
users:
  - default
  - name: ubuntu
    groups:
      - docker
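To sanity-check the file and feed it to a new server, something along these lines should work (the file name, server name/type, image and the hcloud flags are examples, not gospel):

$ cloud-init schema --config-file user-data.yaml
$ hcloud server create --name docker-host --type cx21 --image ubuntu-22.04 \
    --ssh-key my-key --user-data-from-file user-data.yaml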
This is a small write-up about how to migrate your pritunl install between servers. It's not especially detailed because I'm lazy and your migration story will most likely be different. All of this can be avoided by using a remote/hosted mongo instance (compose.io, mongolab, etc.) and simply pointing your pritunl instance at it. If you want more details, ask and I'll do my best to answer and update this write-up accordingly. Also, feel free to criticize my grammar and spelling.
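For the hosted-mongo route, pointing pritunl at the remote database is (if memory serves) a one-liner; the URI below is obviously a placeholder:

$ sudo pritunl set-mongodb "mongodb://user:password@your.mongo.host:27017/pritunl"
$ sudo systemctl restart pritunl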