- A single node proxmox v5.4-3 install
- Only one public IP address
- LXC/KVM for "pet" (development) containers
- Portainer / Docker for less stateful, more automated use-cases
- My server's FQDN is `stardust.gtown.lan.rymcg.tech` - use your own domain name.
- You have a real internet domain name with DNS hosted on DigitalOcean. Example: `rymcg.tech` has primary DNS pointed to `ns1.digitalocean.com`.
- Other DNS hosts are supported as well, but you will have to substitute the `DO_AUTH_TOKEN` variable for another supported provider to configure DNS ACME challenge responses for automatic TLS/SSL certificate creation.
- For public services, you will use this DNS server to create a wildcard domain. Example: `*.gtown.lan.rymcg.tech` would point to my server's public IP address. This allows any subdomain to access the server. Traefik reverse proxies HTTP requests using a subdomain (Example: `echo.gtown.lan.rymcg.tech`) to route the request to the appropriate container. (A quick way to verify the wildcard record is shown after this list.)
- For private services, the public DNS server does not need to resolve any names for proxmox nor any containers. You can still use your domain name for private IP addresses and private DNS servers. This is the nature of the DNS-01 ACME challenge. You don't need any access to the internet at all except for the server to connect to the Let's Encrypt API, and the DigitalOcean API, to create a single proof of ownership of the domain on the public DNS server. You can use the TLS certificates generated this way in a completely firewalled subnet, even with no access to the internet.
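If you opted for the public wildcard record, here is a quick way to confirm it resolves before continuing (a minimal check, assuming `dig` from the dnsutils package, and substituting your own domain and public IP; the subdomains below are arbitrary examples):

```bash
# Any name under the wildcard should return the same public IP address:
dig +short echo.gtown.lan.rymcg.tech
dig +short anything-else.gtown.lan.rymcg.tech
```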
- In case the installer screen resolution is too small, try booting in non-UEFI mode.
- Follow all the default install steps.
- Ensure that you have created an ssh key beforehand (use `ssh-keygen` if not).
- Add your ssh pubkey, logging in initially with the password you set in the installer:
ryan@DESKTOP-O8EO2HB:~$ ssh-copy-id [email protected]
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ryan/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
[email protected]'s password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '[email protected]'"
and check to make sure that only the key(s) you wanted were added.
- Now you can ssh into the proxmox host:
  ryan@DESKTOP-O8EO2HB:~$ ssh [email protected]
- Make sure that no password prompt appeared when logging in. (This would indicate that your ssh key was not setup correctly - A passphrase for your ssh key itself is fine, just not to login to the server directly with a password.)
- Install some tools:
  root@stardust:~# apt update
  root@stardust:~# apt install emacs-nox
- Edit `/etc/ssh/sshd_config`:
  root@stardust:~# emacs /etc/ssh/sshd_config
- Set `PasswordAuthentication no`
- Save the file, and restart sshd:
  root@stardust:~# systemctl restart ssh
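To confirm the change took effect (a minimal check; `sshd -T` prints the effective server configuration):

```bash
root@stardust:~# sshd -T | grep -i passwordauthentication
passwordauthentication no
```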
https://www.jamescoyle.net/how-to/614-remove-the-proxmox-no-subscription-message
root@stardust:~# emacs /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
Search for the line `if (data.status !== 'Active') {` and change it to just `if (false) {` (around line 368).
Save the file. This now disables the annoying message popup on login to the dashboard.
Remove the enterprise package repository:
root@stardust:~# rm /etc/apt/sources.list.d/pve-enterprise.list
Add the non-subscription repository: create `/etc/apt/sources.list.d/pve-no-subscription.list` and paste the following into it:
deb http://download.proxmox.com/debian stretch pve-no-subscription
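Equivalently, you can create that file in one step from the shell (same repository line as above):

```bash
root@stardust:~# echo "deb http://download.proxmox.com/debian stretch pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list
```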
Update apt to reload repositories:
root@stardust:~# apt-get update
root@stardust:~# pveam update
root@stardust:~# pveam available
Download at least one of the templates listed:
root@stardust:~# pveam download local ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz
Now when you create a container via the dashboard, you have some templates to choose from.
Since we only have one public ip address, we want all of our containers to have a unique private ip address.
`192.168.2.10` is the statically assigned "public" ip address that resolves for my proxmox server domain name of `stardust.gtown.lan.rymcg.tech`. My public gateway is `192.168.2.1`. These will differ for other environments.
For the containers we will create a private subnet `10.10.0.0/20`:
- IP range: `10.10.0.1` ---> `10.10.15.254`
- This is roughly 4094 assignable IPs for containers.
- Gateway: `10.10.0.1` (the proxmox `vmbr1` interface created next.)
- Netmask: `255.255.240.0` (`/20` in CIDR format)
- The netmask is sized to encompass all of the IPs, but you can make it smaller if not needed.
- Broadcast address: `10.10.15.255`
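If you want to double check the subnet math for your own network (a minimal sketch; assumes the `ipcalc` tool, installable on the proxmox host with `apt install ipcalc`):

```bash
root@stardust:~# ipcalc 10.10.0.0/20
# Should report: Netmask 255.255.240.0, HostMin 10.10.0.1,
# HostMax 10.10.15.254, Broadcast 10.10.15.255, Hosts/Net 4094
```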
Show the main bridge network the installer created:
root@stardust:~# brctl show
bridge name bridge id STP enabled interface
vmbr0 8000.ecb1d7384f6a no eno1
Identify the name of your physical network interface. My interface is shown as `eno1`.
Create a fresh networking config:
root@stardust:~# emacs /etc/network/interfaces
Remove everything in the file and paste the following:
auto lo
iface lo inet loopback
# eno1 is my physical network adapter name.
# Change 'eno1' *everywhere* in this file for your adapter name.
auto eno1
iface eno1 inet manual
post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp
# br0 is your *public* network interface.
# Change this to DHCP if appropriate for your network.
auto br0
iface br0 inet static
# This is the static *public* ip address and gateway:
address 192.168.2.10
netmask 255.255.255.0
gateway 192.168.2.1
# eno1 is the physical network interface to bridge:
bridge_ports eno1
bridge_stp off
bridge_fd 0
bridge_maxwait 0
post-up echo 1 > /proc/sys/net/ipv4/conf/br0/proxy_arp
# Add additional public ip addresses if available:
### post-up ip addr add 192.168.2.11/24 brd + dev br0
### post-up ip addr add 192.168.2.12/24 brd + dev br0
auto vmbr1
iface vmbr1 inet static
# This is the static *private* subnet for containers
address 10.10.0.1
netmask 255.255.240.0
bridge_ports none
bridge_stp off
bridge_maxwait 0
bridge_fd 0
# On startup run the external firewall script
# to setup IP Masquerading and port forwards:
post-up /etc/firewall.sh
Create a firewall script at /etc/firewall.sh :
#!/bin/bash
set -e
PRIVATE_SUBNET=10.10.0.0/20
PUBLIC_INTERFACE=br0
PRIVATE_INTERFACE=vmbr1
PET_CONTAINER=10.10.0.2
exe() { ( echo "## $*"; $*; ) }
reset() {
exe iptables -P INPUT ACCEPT
exe iptables -P FORWARD ACCEPT
exe iptables -P OUTPUT ACCEPT
exe iptables -t nat -F
exe iptables -t mangle -F
exe iptables -F
exe iptables -X
}
masquerade() {
echo 1 > /proc/sys/net/ipv4/ip_forward
exe iptables -t nat -A POSTROUTING -s $PRIVATE_SUBNET -o $PUBLIC_INTERFACE -j MASQUERADE
}
port_forward() {
if [ "$#" -ne 3 ]; then
echo "Specify arguments: SOURCE_PORT DEST_HOST DEST_PORT"
exit 1
fi
SOURCE_PORT=$1; DEST_HOST=$2; DEST_PORT=$3;
exe iptables -t nat -A PREROUTING -p tcp -i $PUBLIC_INTERFACE \
--dport $SOURCE_PORT -j DNAT --to-destination $DEST_HOST:$DEST_PORT
# The FORWARD rule matches the destination port *after* DNAT:
exe iptables -A FORWARD -p tcp -d $DEST_HOST \
--dport $DEST_PORT -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
}
## Reset all rules:
reset
## IP Masquerading for entire subnet:
masquerade
## pet ssh server inside a container exposed publicly on port 2222 :
port_forward 2222 $PET_CONTAINER 22
Make the script executable:
chmod a+x /etc/firewall.sh
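You can apply and inspect the rules immediately, without waiting for a reboot (a quick check; the grep just filters for the interesting rules):

```bash
root@stardust:~# /etc/firewall.sh
root@stardust:~# iptables -t nat -L -n -v | grep -E 'MASQUERADE|DNAT'
```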
This has set up the following:
- a `br0` bridge interface as the main public network interface; this is bridged to the physical network interface (`eno1` in my case).
- a `vmbr1` bridge interface for the private container network. This is the gateway for all containers to access the internet.
- Firewall and port forwarding rules are applied as a post-up script for `vmbr1`.
- The firewall rules default to `ACCEPT`; you should lock this down more for production.
Reboot the server to reload all the networking config.
root@stardust:~# reboot
Assuming your server is not already exposed directly to the internet, you should
proceed by exposing only port 22 to the internet. Do not forward the dashboard
port (8006) directly. Instead, prefer tunneling through SSH. Since SSH is
configured above to only accept valid ssh keys (no passwords allowed!), it is
more secure to expose only the ssh port to the internet.
Example ssh connection for the proxmox dashboard (port 8006), portainer
dashboard (port 9000), and traefik dashboard (port 8080):
ryan@DESKTOP-O8EO2HB:~$ ssh [email protected] \
-L 8006:localhost:8006 -L 9000:localhost:9000 -L 8080:localhost:8080
You can create a permanent config for this. Append this to `$HOME/.ssh/config`:
Host stardust
Hostname stardust.gtown.lan.rymcg.tech
User root
# localhost:8006 is the proxmox dashboard:
LocalForward 8006 localhost:8006
# localhost:9000 is the portainer dashboard:
LocalForward 9000 localhost:9000
# localhost:8080 is the traefik dashboard:
LocalForward 8080 localhost:8080
Substitute your own Host and Hostname.
Now your port forwarding is applied automatically when connecting:
ryan@DESKTOP-O8EO2HB:~$ ssh stardust
Connect to https://localhost:8006 (through the ssh tunnel created above)
Accept the self-signed certificate for now.
- Click Create CT
- Fill in the General tab:
  - hostname (`pet`)
  - password
  - ssh pubkey (Paste the contents of your local `~/.ssh/id_rsa.pub`)
- Choose a template you downloaded earlier
- On the network tab choose a static IP in the `10.10.0.0/20` subnet (`10.10.0.2` to `10.10.15.254`)
  - Example: `10.10.0.2/20`
- Assign the gateway `10.10.0.1`
- Finish creation
- Find the container by id in the left hand column (`100` by default)
- Click the Start button in the upper right.
- Click on Options in the left menu.
- Double click the Start at boot option
  - Click the checkbox to enable
  - Click OK.
- Click the Console button in the left menu.
- Login with the credentials chosen in setup.
- `ping google.com` to test networking
The firewall script created earlier (`/etc/firewall.sh`) contains port
forwarding rules for SSH to ip `10.10.0.2`.
You should be able to ssh to your container on port 2222 from the public network:
ryan@DESKTOP-O8EO2HB:~$ ssh root@stardust -p 2222
root@pet:~#
You can create a permanent local configuration by adding to your `$HOME/.ssh/config`:
Host pet
Hostname stardust.gtown.lan.rymcg.tech
User root
Port 2222
Now you can ssh to the pet container without specifying the host, user, or port:
ryan@DESKTOP-O8EO2HB:~$ ssh pet
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.18-12-pve x86_64)
Last login: Sat Jun 8 15:39:00 2019 from 192.168.2.89
root@pet:~#
On the proxmox host:
apt-get install -y apt-transport-https ca-certificates \
curl gnupg2 software-properties-common
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
apt-key fingerprint 0EBFCD88
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
apt-get update && apt-get install docker-ce -y
https://www.portainer.io/installation/
docker volume create portainer_data
docker run -d -p 127.0.0.1:9000:9000 --restart always \
--name portainer \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data portainer/portainer
Portainer never works right the first time, so I restart it once:
docker restart portainer
Access portainer at http://localhost:9000 (through the ssh tunnel)
Use the portainer dashboard to deploy apps. If you reboot the server, portainer should start automatically, and then it will restart all of your configured containers again.
If you use real internet sub-domain names for your containers, you can use a free TLS/SSL certificate from Let's Encrypt. Even if your containers are not used on the internet, the certificate is still useful when used behind a firewall.
You must use a DNS provider that is supported by traefik's DNS-01 challenge and that supports wildcard domains. This tutorial assumes you are using DigitalOcean as the primary DNS provider for your domain.
Traefik is an HTTP reverse proxy for our containers. It will serve content on the proxmox server on ports 80 and 443. It determines which container to forward requests to based on the subdomain of the request and the docker labels applied to the containers. It also sets up Let's Encrypt TLS certificates, and automatically forwards non-https requests over to https.
There isn't a portainer template for Traefik, so let's create our own:
- Click `App Templates` in the left menu.
- Click `Add template`
- Use the title `traefik`
- Use the description `traefik http proxy with DigitalOcean DNS ACME challenge`
- Choose the `Container` template type.
- Use the name `traefik`
- Use the logo URL: `http://docs.traefik.io/img/traefik.logo.png`
- Use the Container Image: `traefik:1.7`
- Use the Container Command: `--api --docker`
- Select network: `bridge`
- Click `map additional port`
  - Map the host port `80` to container port `80`
  - Optional: Restrict to only the first public ip address: `192.168.2.10:80`
- Click `map additional port`
  - Map the host port `443` to container port `443`
  - Optional: Restrict to only the first public ip address: `192.168.2.10:443`
- Click `map additional port`
  - Map the host port `127.0.0.1:8080` to container port `8080`
  - This mapping makes the traefik dashboard only viewable through an SSH forward.
- Click `map additional volume`
  - Map the container path `/var/run/docker.sock` as a Bind mount to the host path `/var/run/docker.sock` in Writable mode.
- Click `map additional volume`
  - Map the container path `/etc/traefik` as a Bind mount to the host path `/etc/containers/traefik` in Writable mode.
- Under Environment
  - Click `add variable`
  - Choose `Text - Free text value`
  - Name: `DO_AUTH_TOKEN`
  - Label: `DO_AUTH_TOKEN`
  - Description: `DigitalOcean API Token for DNS ACME challenge`
- Click `Create the template`
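For reference, the container this template produces would look roughly like the following docker CLI command (an illustrative sketch only - the Portainer template above is what this guide actually uses; the `--restart always` policy and the placeholder token value are my own additions here):

```bash
docker run -d --name traefik --restart always \
  -p 192.168.2.10:80:80 \
  -p 192.168.2.10:443:443 \
  -p 127.0.0.1:8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /etc/containers/traefik:/etc/traefik \
  -e DO_AUTH_TOKEN=your-digitalocean-api-token \
  traefik:1.7 --api --docker
```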
Now you can create the traefik container from your template:
- You must create a DigitalOcean API Token in the same DigitalOcean account that hosts your domain name DNS.
- In Portainer:
  - Click `App Templates` in the left menu.
  - Click `traefik` in the list of templates.
  - Turn off `Enable access control`
  - Paste your DigitalOcean API token into the `DO_AUTH_TOKEN` field.
  - Click `Deploy the container`
Now create a traefik config file at `/etc/containers/traefik/traefik.toml`:
logLevel = "DEBUG"
defaultEntryPoints = ["http", "https"]
[web]
address = ":8080"
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[docker]
watch = true
[acme]
### Set your real email address for Let's Encrypt:
email = "[email protected]"
storage = "/etc/traefik/acme.json"
entryPoint = "https"
acmeLogging = true
[acme.dnsChallenge]
provider = "digitalocean"
delayBeforeCheck = 0
[[acme.domains]]
main = "*.gtown.lan.rymcg.tech"
Be sure to change the following:
- `[acme]` email - Set your real email address for certificate requests to Let's Encrypt.
- `[[acme.domains]]` main - Set your wildcard DNS domain name root to issue certificates for.
- Restart the traefik container from portainer.
- Check the logs for the traefik container via the container details page.
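You can also watch the ACME negotiation from the proxmox host shell (a minimal check; this assumes the container was named `traefik` as in the template above):

```bash
root@stardust:~# docker logs -f traefik 2>&1 | grep -i acme
# Once a certificate has been issued, acme.json appears in the bind-mounted config directory:
root@stardust:~# ls -l /etc/containers/traefik/acme.json
```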
In Portainer:
- Click `Containers` in the left menu.
- Click `Add container` in the top button bar
- Use the name `echo-test`
- Use the image `hashicorp/http-echo`
- Disable `Enable access control`
- Under `Advanced container settings`
  - Under `Command & logging`
    - Set the Command: `--text hello-echo-test`
  - Under `Labels`
    - Click `add label`
      - Name: `traefik.frontend.rule`
      - Value: `Host:echo.gtown.lan.rymcg.tech`
        - Don't forget the `Host:` part at the front!
        - This hostname must be a name unique to this service (`echo`)
        - It must resolve to the same IP as your proxmox server public IP.
        - The easiest thing to do is setup a wildcard DNS for `*.gtown.lan.rymcg.tech` or for your own domain name. But this is for public deployments only.
        - For testing, you can just create an entry in your local `/etc/hosts` file for `echo.gtown.lan.rymcg.tech`
    - Click `add label`
      - Name: `traefik.port`
      - Value: `5678` (The default listen port for `hashicorp/http-echo`)
- Click `Deploy the container`
- Load the traefik dashboard at http://localhost:8080
  - You should see the Frontend listed for `echo.gtown.lan.rymcg.tech`
- Load http://echo.gtown.lan.rymcg.tech
  - You should see the response `hello-echo-test`
  - The connection should automatically forward to `https://` and provide a valid TLS certificate from Let's Encrypt.
- If it doesn't work, check the `traefik` container logs via the container details page.
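The same checks can be run from your local machine's shell (substitute your own echo subdomain):

```bash
# Follow the http -> https redirect and fetch the response body:
curl -L http://echo.gtown.lan.rymcg.tech
# Expected output: hello-echo-test

# Inspect the certificate that traefik is serving:
curl -vI https://echo.gtown.lan.rymcg.tech 2>&1 | grep -iE 'subject|issuer'
```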
The proxmox dashboard has a self-signed certificate by default. You can change this by following the official docs ... OR you can piggyback off the traefik certificate. I prefer the latter option.
Traefik does not store the certificates in the format that proxmox needs. What we can do is start a container that will watch for changes on the traefik certificates. When they do change, it will reformat the certificates and copy them to the place that proxmox expects them to be.
From portainer:
- Click on Stacks in the left menu.
- Click on `Add stack`
- Give it a name: `proxmox-traefik-certdumper`
- Use the Web editor and paste the following docker-compose v2 formatted config:
version: '2'
services:
  certdumper:
    image: enigmacurry/proxmox-traefik-certdumper:latest
    volumes:
      - /etc/containers/traefik:/traefik
      - /etc/pve:/output
    restart: always
    privileged: true
    environment:
      PVE_HOST: stardust
      CERTIFICATE: "*.gtown.lan.rymcg.tech"
- Change `PVE_HOST` to the hostname (not the FQDN, just the name) of the proxmox server.
- Change `CERTIFICATE` to the wildcard DNS name you setup for traefik.
- Turn off `Enable access control`
- Click `Deploy the stack`
- Check for the following new files on the proxmox host:
  - `/etc/pve/nodes/stardust/pveproxy-ssl.pem`
  - `/etc/pve/nodes/stardust/pveproxy-ssl.key`
- Restart the proxmox dashboard: `systemctl restart pveproxy`
- Reload https://stardust.gtown.lan.rymcg.tech:8006
  - The certificate loaded should now be the same one as used by traefik.
- When the certificate expires, the new certificate will get replaced, but you will likely need to restart pveproxy again.
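To confirm the dashboard is now serving the traefik certificate (a minimal check, run from the proxmox host; `openssl` is installed by default):

```bash
root@stardust:~# echo | openssl s_client -connect localhost:8006 2>/dev/null \
    | openssl x509 -noout -subject -issuer -dates
# The issuer should now be Let's Encrypt rather than the self-signed proxmox CA.
```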
From portainer:
- Click `Volumes`
- Click `Add volume`
- Name: `gitlab`
- Disable `Enable access control`
- Click `Create the volume`
- Click `App Templates`
- Click `GitLab CE` in the list of templates
- Name it: `gitlab`
- Disable `Enable access control`
- Click `Show advanced options`
- Map host port `4422` to container port `22`
- Change the `/etc/gitlab` volume to a Bind mount to the host path `/etc/containers/gitlab` as Writable.
- Change the `/var/opt/gitlab` volume to a Volume mount, and select the `gitlab` volume created above.
- Click `add label`
  - name: `traefik.frontend.rule`
  - value: `Host:gitlab.gtown.lan.rymcg.tech`
- Click `add label`
  - name: `traefik.port`
  - value: `80`
- Click `Deploy the container`
Edit the gitlab config file on the proxmox host. Add the following to the bottom of `/etc/containers/gitlab/gitlab.rb`:
gitlab_rails['gitlab_shell_ssh_port'] = 4422
external_url 'https://gitlab.gtown.lan.rymcg.tech'
nginx['listen_port'] = 80
nginx['listen_https'] = false
- Restart the gitlab container from portainer.
- Visit https://gitlab.gtown.lan.rymcg.tech
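GitLab can take several minutes to finish starting. Once it is up, you can verify both the traefik route and the remapped SSH port from a machine that can reach the proxmox host (a minimal check; substitute your own domain, and add an ssh key to your gitlab account before the ssh test):

```bash
# Should return an HTTP status line from gitlab via traefik:
curl -sI https://gitlab.gtown.lan.rymcg.tech | head -n 1

# Test git-over-ssh on the remapped port 4422:
ssh -p 4422 -T [email protected]
```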
From the proxmox host, download k3os (Rancher kubernetes distribution) ISO to the KVM template path:
wget https://github.com/rancher/k3os/releases/download/v0.2.0/k3os-amd64.iso \
-O /var/lib/vz/template/iso/k3os-amd64.v0.2.0.iso
From the proxmox dashboard:
- Click `Create VM`
- General tab:
  - Use the name: `k8s-1`
- OS tab:
  - Choose the ISO image you downloaded
- Memory tab:
  - 8192 MB
- Finish creating the VM
- Start the VM and connect to the console
- Login as `rancher`
- Install:
  - Run `sudo os-config`
  - Choose all defaults:
    - No cloud-init (maybe later...)
    - Authorize github users to ssh: `No`
    - Create a password for the `rancher` user
    - Configure Wifi: `No`
    - Run as server or agent? `Server`
    - token: none
  - Finish install and reboot vm
After it has rebooted, login again as `rancher` using your new password.
Find your network device name:
sudo connmanctl services
- Your device name will be called something like `ethernet_5ac610176f17_cable`
Setup a static IP address, netmask, gateway, and DNS for your device:
sudo connmanctl config ethernet_5ac610176f17_cable \
--ipv4 manual 10.10.0.101 255.255.240.0 10.10.0.1 \
--nameservers 1.0.0.1 1.1.1.1
Ping google to check internet access:
ping google.com
K3os disables SSH port forwarding by default. You must turn it on.
From `k8s-1`:
sudo sed -ri 's/^#?AllowTcpForwarding\s+.*/AllowTcpForwarding yes/' /etc/ssh/sshd_config
sudo /etc/init.d/sshd restart
Configure SSH tunnels. My current local ssh config (`~/.ssh/config`) looks like:
Host stardust
Hostname stardust.gtown.lan.rymcg.tech
User root
# localhost:8006 is the proxmox dashboard:
LocalForward 8006 localhost:8006
# localhost:9000 is the portainer dashboard:
LocalForward 9000 localhost:9000
# localhost:8080 is the traefik dashboard:
LocalForward 8080 localhost:8080
# k8s-1 ssh
LocalForward 9922 10.10.0.101:22
Host k8s-1
Hostname localhost
User rancher
Port 9922
# Kubernetes API port:
LocalForward 6445 localhost:6445
`k8s-1` is only accessible through the tunnel, once a connection to `stardust` is established.
From your local machine, login to `k8s-1` to test if it works:
ssh k8s-1
Logout.
Add your ssh key to `k8s-1`:
ssh-copy-id k8s-1
Copy the kubectl config file to your local machine:
mkdir -p $HOME/.kube
ssh k8s-1 -C "sudo cat /var/lib/rancher/k3s/agent/kubeconfig.yaml" > ~/.kube/k8s-1
ln -s $HOME/.kube/k8s-1 $HOME/.kube/config
Login to `k8s-1` again to start the ssh tunnel:
ssh k8s-1
Now in a separate terminal, kubectl should work:
kubectl cluster-info
kubectl get nodes
The default rook operator variable `FLEXVOLUME_DIR_PATH` is not correct for
k3os. The correct value for k3os is: `/var/lib/rancher/k3s/agent/kubelet/plugins`.
Install the PlenusPyramis fork of rook that already has this patched:
ROOK_RELEASE=https://raw.githubusercontent.com/PlenusPyramis/rook/v1.0.2-k3os-patched/
kubectl create -f $ROOK_RELEASE/cluster/examples/kubernetes/ceph/common.yaml
kubectl create -f $ROOK_RELEASE/cluster/examples/kubernetes/ceph/operator.yaml
kubectl create -f $ROOK_RELEASE/cluster/examples/kubernetes/ceph/cluster-test.yaml
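After a few minutes you can check that the operator and the test cluster pods came up (a quick sanity check):

```bash
kubectl -n rook-ceph get pods
# Expect the rook-ceph-operator, mon, mgr, and osd pods to reach the Running state.
```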
Create Ubuntu 18.04 KVM
SSH to the new ubuntu host:
Install docker:
sudo apt-get update
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
Install minikube:
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube
sudo install minikube /usr/local/bin/
rm minikube
sudo chown `whoami` ~/.kube/config
sudo chown `whoami` -R ~/.minikube
Install kubectl:
sudo snap install kubectl --classic
Install helm:
sudo snap install --classic helm
Install jq dependency:
sudo apt install jq
Start minikube:
sudo minikube start --vm-driver=none
Setup load balancer:
sudo ip route add $(cat ~/.minikube/profiles/minikube/config.json | jq -r ".KubernetesConfig.ServiceCIDR") via $(minikube ip)
kubectl run minikube-lb-patch --replicas=1 --image=elsonrodriguez/minikube-lb-patch:0.1 --namespace=kube-system
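A quick way to check the patch took effect (a sketch; the `run=minikube-lb-patch` label is the one `kubectl run` applies by default):

```bash
# The patch pod should be running:
kubectl -n kube-system get pods -l run=minikube-lb-patch
# Services of type LoadBalancer should now show an EXTERNAL-IP instead of <pending>:
kubectl get svc --all-namespaces
```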
Follow the k8s-lbry README. This section is an abbreviated install log that highlights complications and workarounds necessary for our proxmox/k3os environment.
On your local development machine:
mkdir $HOME/k8s-lbry
cd $HOME/k8s-lbry
curl -Lo run.sh https://raw.githubusercontent.com/lbryio/lbry-docker/master/contrib/k8s-lbry/run.sh
chmod a+x run.sh
./run.sh setup-alias
source ~/.bashrc
k8s-lbry setup
k8s-lbry install-nginx-ingress
k8s-lbry install-cert-manager
k8s-lbry kubectl get svc nginx-ingress-controller -o wide
Make sure when editing the lbrycrd `externalip` configuration (in
`values-dev.yaml`) that you use the proxmox public ip address, not the k3os
IP address.
Create a new firewall rule in the proxmox host's `/etc/firewall.sh` to open up
port 9246 publicly:
## k8s-1 ports:
K8S_1=10.10.0.101
port_forward 9246 $K8S_1 9246
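Re-run the firewall script on the proxmox host to apply the new rule without rebooting:

```bash
root@stardust:~# /etc/firewall.sh
root@stardust:~# iptables -t nat -L PREROUTING -n | grep 9246
```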
k8s-lbry install
Hi @kekule, you can read some of the newer things I'm trying here:
https://blog.rymcg.tech/tags/k3s
https://blog.rymcg.tech/tags/proxmox
https://github.com/EnigmaCurry/stardust-k8s
I have not been using LXC lately, but using KVM and K3s instead.
The Traefik example in that first link (k3s) is for Traefik v2.3, but I believe you can just change the variable TRAEFIK_VERSION=v2.5 (not much has changed). The version in stardust-k8s uses the latest version from helm.