Rancher Desktop, K3s and Traefik ingress controller
# https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
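
To use this manifest, apply it and then mint a login token for the dashboard. A minimal sketch, assuming the Kubernetes Dashboard is already installed and kubectl is v1.24+ (which adds kubectl create token); the file name dashboard-admin-user.yaml is arbitrary:

➜ kubectl apply -f dashboard-admin-user.yaml
# Print a bearer token for the admin-user ServiceAccount
➜ kubectl -n kubernetes-dashboard create token admin-user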
Dockerfile:

FROM nginx:alpine
COPY . /usr/share/nginx/html

HTML page:

<h1>Hello World from NGINX!!</h1>
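
To try the Dockerfile locally, a quick sketch (the hello-nginx tag and host port 8081 are arbitrary choices):

# Build from the directory containing the Dockerfile and the HTML page
➜ docker build -t hello-nginx .
# Map host port 8081 to the container's port 80
➜ docker run --rm -d -p 8081:80 hello-nginx
➜ curl http://localhost:8081/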

K3d

Set up a multi-master (HA) Kubernetes cluster: build an HA, multi-master (server) cluster.

High-Availability K3s

To set up a high-availability (HA) Kubernetes cluster using k3d on two Windows machines with WSL2, follow these steps:


  1. Install k3d on both Windows machines (WSL2) by following the instructions in the k3d documentation.

  2. On the first machine (Server 1), create a new k3d cluster with the --cluster-init flag to initialize the first control plane node.

  • Server 1
➜ sudo k3d cluster create my-ha-cluster --servers 1 --k3s-arg '--cluster-init@server:0'

➜ sudo k3d kubeconfig get my-ha-cluster > .kube/config

➜ kubectl get nodes
NAME                         STATUS   ROLES                       AGE   VERSION
k3d-my-ha-cluster-server-0   Ready    control-plane,etcd,master   11m   v1.25.7+k3s1

# Print k3s cluster token
➜ sudo k3d cluster list my-ha-cluster --token
NAME            SERVERS   AGENTS   LOADBALANCER   TOKEN
my-ha-cluster   1/1       0/0      true           mynodetoken
  • Server 2
➜ sudo k3d node create worker1 --cluster https://192.168.1.128:33893 --token mynodetoken --k3s-arg "--node-external-ip=172.17.246.34"

➜ sudo k3d cluster list
NAME                          SERVERS   AGENTS   LOADBALANCER
https://192.168.1.128:33893   0/0       1/1      false
  • Server 1
➜ kubectl get node -o wide
NAME                         STATUS   ROLES                       AGE   VERSION        INTERNAL-IP   EXTERNAL-IP     OS-IMAGE   KERNEL-VERSION                          CONTAINER-RUNTIME
k3d-my-ha-cluster-server-0   Ready    control-plane,etcd,master   69m   v1.25.7+k3s1   172.18.0.3    <none>          K3s dev    6.3.0-oleksis-microsoft-standard-WSL2   containerd://1.6.15-k3s1
k3d-worker1-0                Ready    <none>                      15m   v1.25.7+k3s1   172.18.0.2    172.17.246.34   K3s dev    5.15.90.1-microsoft-standard-WSL2       containerd://1.6.15-k3s1

➜ kubectl create deployment nginx --image=nginx:latest --port=80 --replicas=2

➜ kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP           NODE                         NOMINATED NODE   READINESS GATES
nginx-cd55c47f5-ckcl6   1/1     Running   0          3m51s   10.42.1.3    k3d-worker1-0                <none>           <none>
nginx-cd55c47f5-xmx4w   1/1     Running   0          3m50s   10.42.0.16   k3d-my-ha-cluster-server-0   <none>           <none>

➜ sudo k3d node list
NAME                         ROLE           CLUSTER         STATUS
k3d-my-ha-cluster-server-0   server         my-ha-cluster   running
k3d-my-ha-cluster-serverlb   loadbalancer   my-ha-cluster   running
k3d-my-ha-cluster-tools                     my-ha-cluster   running
  • Server 2 with `k3d`
➜ k3d cluster create my-server2 -p "80:80@loadbalancer"
INFO[0000] portmapping '80:80' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy]
INFO[0000] Prep: Network
INFO[0005] Created network 'k3d-my-server2'
INFO[0006] Created image volume k3d-my-server2-images
INFO[0006] Starting new tools node...
INFO[0008] Creating node 'k3d-my-server2-server-0'
INFO[0008] Starting Node 'k3d-my-server2-tools'
INFO[0011] Creating LoadBalancer 'k3d-my-server2-serverlb'
INFO[0015] Using the k3d-tools node to gather environment information
INFO[0021] HostIP: using network gateway 172.19.0.1 address
INFO[0021] Starting cluster 'my-server2'
INFO[0021] Starting servers...
INFO[0021] Starting Node 'k3d-my-server2-server-0'
INFO[0062] All agents already running.
INFO[0062] Starting helpers...
INFO[0063] Starting Node 'k3d-my-server2-serverlb'
INFO[0077] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0082] Cluster 'my-server2' created successfully!
INFO[0083] You can now use it like this:
kubectl cluster-info

➜ k3d cluster list
NAME                          SERVERS   AGENTS   LOADBALANCER
https://192.168.1.128:33893   0/0       1/1      false
my-server2                    1/1       0/0      true

➜ kubectl get nodes -o wide
NAME                      STATUS   ROLES                  AGE     VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION                      CONTAINER-RUNTIME
k3d-my-server2-server-0   Ready    control-plane,master   3m39s   v1.25.7+k3s1   172.19.0.2    <none>        K3s dev    5.15.90.1-microsoft-standard-WSL2   containerd://1.6.15-k3s1

➜ kubectl get service -o wide
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE     SELECTOR
kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   5m29s   <none>

➜ docker ps -a
CONTAINER ID   IMAGE                            COMMAND                  CREATED             STATUS          PORTS                                                         NAMES
8b1f4800eed7   ghcr.io/k3d-io/k3d-proxy:5.4.9   "/bin/sh -c nginx-pr…"   17 minutes ago      Up 17 minutes   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:33829->6443/tcp   k3d-my-server2-serverlb
f610e4ed2200   rancher/k3s:v1.25.7-k3s1         "/bin/k3s server --t…"   18 minutes ago      Up 17 minutes                                                                k3d-my-server2-server-0
3b204737019d   rancher/k3s:v1.25.7-k3s1         "/bin/k3s agent --no…"   About an hour ago   Up 58 minutes                                                                 k3d-worker1-0

➜ docker exec -it k3d-worker1-0 /bin/sh
  ps -ef | grep containerd
  ip ad
  wget -O - -S 10.42.1.3
  ...
  <title>Welcome to nginx!</title>
  ...


➜  kubectl create deployment nginx --image=nginx:latest --port=80
deployment.apps/nginx created

➜ kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE    IP          NODE                      NOMINATED NODE   READINESS GATES
nginx-cd55c47f5-9zpqx   1/1     Running   0          111s   10.42.0.9   k3d-my-server2-server-0   <none>           <none>

➜ kubectl port-forward --address 0.0.0.0 nginx-cd55c47f5-9zpqx 8080:80

Link the Docker socket

export DOCKER_HOST="unix:///mnt/wsl/docker-desktop/shared-sockets/guest-services/docker.sock"
sudo ln -s /mnt/wsl/docker-desktop/shared-sockets/guest-services/docker.sock /var/run/docker.sock
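
After linking the socket, a quick sanity check that the Docker CLI inside WSL2 reaches the Docker Desktop daemon:

# Both the Client and Server sections should print version info
➜ docker version
➜ docker ps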

K3S on Windows Subsystem for Linux

Servers with k3s on WSL2

High-Availability K3s

Requirements

  • OS: Ubuntu 22.04.2 LTS (Jammy Jellyfish) see Notes

Master and worker node on two PCs using WSL2


  • Server 1
# Master
➜ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--write-kubeconfig-mode 644 server --tls-san 192.168.1.128 --advertise-address 192.168.1.128 --cluster-init" sh -s -
# sudo service k3s status

➜ sudo cp -f /etc/rancher/k3s/k3s.yaml ~/.kube/config

# k3s pre-check
➜ k3s check-config

➜ sudo kubectl get nodes -o wide
NAME      STATUS   ROLES                       AGE    VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION                          CONTAINER-RUNTIME
hp450-2   Ready    control-plane,etcd,master   3m6s   v1.26.4+k3s1   172.30.89.132   <none>        Ubuntu 22.04.2 LTS   6.3.0-oleksis-microsoft-standard-WSL2   containerd://1.6.19-k3s1

➜ sudo kubectl get po -n kube-system
NAME                                      READY   STATUS      RESTARTS   AGE
coredns-59b4f5bbd5-skpgb                  1/1     Running     0          3m52s
helm-install-traefik-crd-58ghl            0/1     Completed   0          3m53s
helm-install-traefik-fd8x4                0/1     Completed   2          3m53s
local-path-provisioner-76d776f6f9-9kk25   1/1     Running     0          3m52s
metrics-server-7b67f64457-vmt9q           1/1     Running     0          3m52s
svclb-traefik-c3085c29-9fxt2              2/2     Running     0          76s
traefik-56b8c5fb5c-pbjvx                  1/1     Running     0          76s

➜ kubectl version --client --output json

➜ sudo cat /var/lib/rancher/k3s/server/token
K10bdbf88b63604bbc6dd8d3978a517f6f126e05600b6a0036a2abb851804fee7ff::server:0eacc63dc1cdc76316ee1acde197b0e2
  • Server 2
# Join a worker node
export K3S_URL="https://192.168.1.128:6443"
export K3S_TOKEN="K10bdbf88b63604bbc6dd8d3978a517f6f126e05600b6a0036a2abb851804fee7ff::server:0eacc63dc1cdc76316ee1acde197b0e2"

➜ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--server ${K3S_URL} --token ${K3S_TOKEN} --node-external-ip 192.168.1.47" sh -

➜ sudo journalctl -u k3s-agent -n 100

➜ sudo systemctl status k3s-agent

# Copy the ~/.kube/config file from the master node to the worker node and update the server (127.0.0.1 -> 192.168.1.128) for the default cluster
  • Server 1
➜ sudo kubectl get nodes -o wide
NAME      STATUS   ROLES                       AGE     VERSION        INTERNAL-IP     EXTERNAL-IP    OS-IMAGE             KERNEL-VERSION                           CONTAINER-RUNTIME
hp450-2   Ready    control-plane,etcd,master   20m     v1.26.4+k3s1   172.30.89.132   <none>         Ubuntu 22.04.2 LTS   6.3.0-oleksis-microsoft-standard-WSL2    containerd://1.6.19-k3s1
luna      Ready    <none>                      6m54s   v1.26.4+k3s1   172.17.246.34   192.168.1.47   Ubuntu 22.04.2 LTS   6.3.0-oleksis-microsoft-standard-WSL2+   containerd://1.6.19-k3s1

➜ kubectl get pods -n kube-system -o wide
NAME                                      READY   STATUS      RESTARTS   AGE   IP          NODE      NOMINATED NODE   READINESS GATES
coredns-59b4f5bbd5-ds75v                  1/1     Running     0          14h   10.42.0.5   hp450-2   <none>           <none>
helm-install-traefik-c4g5h                0/1     Completed   2          14h   10.42.0.4   hp450-2   <none>           <none>
helm-install-traefik-crd-2ws24            0/1     Completed   1          14h   10.42.0.2   hp450-2   <none>           <none>
local-path-provisioner-76d776f6f9-7p76d   1/1     Running     0          14h   10.42.0.3   hp450-2   <none>           <none>
metrics-server-7b67f64457-kshtr           1/1     Running     0          14h   10.42.0.6   hp450-2   <none>           <none>
svclb-traefik-a0e60b86-2h6br              2/2     Running     0          14h   10.42.1.2   luna      <none>           <none>
svclb-traefik-a0e60b86-n6tlz              2/2     Running     0          14h   10.42.0.7   hp450-2   <none>           <none>
traefik-56b8c5fb5c-tcq2q                  1/1     Running     0          14h   10.42.0.8   hp450-2   <none>           <none>

➜ kubectl create deployment nginx --image=nginx:latest --port=80 --replicas=2

➜ kubectl expose deployment nginx --port=80

➜ kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP          NODE      NOMINATED NODE   READINESS GATES
nginx-6b7f675859-27g5g   1/1     Running   0          9m38s   10.42.0.9   hp450-2   <none>           <none>
nginx-6b7f675859-hvfmq   1/1     Running   0          9m38s   10.42.1.3   luna      <none>           <none>

➜ kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes   ClusterIP   10.43.0.1      <none>        443/TCP   17h   <none>
nginx        ClusterIP   10.43.84.195   <none>        80/TCP    5s    app=nginx

➜  kubectl apply -f .\nginx-ingess.yaml
ingress.networking.k8s.io/nginx-ingress created

➜ kubectl get ingress -o wide
NAME            CLASS     HOSTS      ADDRESS        PORTS   AGE
nginx-ingress   traefik   luna.lan   192.168.1.47   80      13m

➜ curl -v http://luna.lan/
*   Trying 192.168.1.47:80...
* Connected to luna.lan (192.168.1.47) port 80 (#0)
> GET / HTTP/1.1
> Host: luna.lan
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Accept-Ranges: bytes
< Content-Length: 615
< Content-Type: text/html
< Date: Sun, 07 May 2023 19:00:59 GMT
< Etag: "64230162-267"
< Last-Modified: Tue, 28 Mar 2023 15:01:54 GMT
< Server: nginx/1.23.4
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Connection #0 to host luna.lan left intact
  • Server 2
# Option 1
➜ kubectl port-forward --address 0.0.0.0 nginx-6b7f675859-hvfmq 8080:80
Forwarding from 0.0.0.0:8080 -> 80
Handling connection for 8080

# Expose the service using nodePort. Option 2
➜  kubectl apply -f .\nginx-worker-nodePort.yaml
service/nginx-nodeport created

➜ kubectl get service -o wide
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE     SELECTOR
kubernetes       ClusterIP   10.43.0.1      <none>        443/TCP          17h     <none>
nginx            ClusterIP   10.43.84.195   <none>        80/TCP           9m42s   app=nginx
nginx-nodeport   NodePort    10.43.71.214   <none>        8080:30080/TCP   5m12s   app=nginx
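
With the NodePort in place, nginx should answer on port 30080 of any node. A quick check against the worker's WSL2 IP (172.17.246.34, from the node list above), assuming it is reachable from where you run curl:

➜ curl -s http://172.17.246.34:30080/ | grep '<title>'
<title>Welcome to nginx!</title>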

Port forwarding Windows to WSL2

To configure port forwarding from the Windows localhost network to the WSL2 network, you can use the netsh command in a Windows Command Prompt or PowerShell terminal. Here's an example of how to forward a port from the Windows host to a WSL2 instance:

  1. Open a Command Prompt or PowerShell terminal as an administrator.

  2. Run the following command to create a new port forwarding rule, replacing <PORT> with the port number you want to forward and <WSL2_IP> with the IP address of your WSL2 instance:

netsh interface portproxy add v4tov4 listenport=<PORT> listenaddress=0.0.0.0 connectport=<PORT> connectaddress=<WSL2_IP>
  3. Verify that the port forwarding rule was created successfully by running:
➜ netsh interface portproxy show all

Listen on ipv4:             Connect to ipv4:

Address         Port        Address         Port
--------------- ----------  --------------- ----------
0.0.0.0         80          172.17.246.34   80

➜ netsh interface portproxy add v4tov4 listenport=8080 listenaddress=0.0.0.0 connectport=8080 connectaddress=172.30.89.132

➜ netsh interface portproxy show all

Listen on ipv4:             Connect to ipv4:

Address         Port        Address         Port
--------------- ----------  --------------- ----------
0.0.0.0         80          172.30.89.132   80
0.0.0.0         8080        172.30.89.132   8080
0.0.0.0         6443        172.30.89.132   6443
0.0.0.0         6444        172.30.89.132   6444
0.0.0.0         10250       172.30.89.132   10250
0.0.0.0         2379        172.30.89.132   2379
0.0.0.0         2380        172.30.89.132   2380

After creating the port forwarding rule, traffic sent to the specified port on the Windows host will be forwarded to the corresponding port on the WSL2 instance. This should allow you to access the k3d cluster running in WSL2 from another machine on your network.

Firewall rules on Windows for k3s


Sudoers for k3d and docker

/etc/sudoers.d/docker_k3d

oleksis ALL=(ALL) NOPASSWD: /usr/bin/docker, /usr/local/bin/k3d
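
To add this rule safely, create and validate the file with visudo, so a syntax error can't lock you out of sudo:

➜ sudo visudo -f /etc/sudoers.d/docker_k3d
# Check the syntax without applying anything
➜ sudo visudo -cf /etc/sudoers.d/docker_k3d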

Troubleshooting

Some useful commands

➜ sudo systemctl status k3s

➜ sudo journalctl -u k3s-agent -n 100

➜ sudo sysctl net.ipv4.conf.all.forwarding=1
net.ipv4.conf.all.forwarding = 1

➜ sudo iptables -P FORWARD ACCEPT

#  iptables rules
➜ sudo ufw status

➜ kubectl exec -it nginx-6b7f675859-hvfmq -- /bin/bash

# Open your Windows hosts file as Administrator
➜ powershell.exe Start-Process notepad.exe 'c:\Windows\System32\Drivers\etc\hosts' -Verb runAs
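
For the host names used in this gist, the entries would look something like this; luna.lan → 192.168.1.47 matches the ingress transcript above, while mapping hp450-2.lan to 192.168.1.128 is an assumption based on Server 1's LAN IP:

# c:\Windows\System32\Drivers\etc\hosts
192.168.1.47    luna.lan
192.168.1.128   hp450-2.lan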

Notes

If you are using an old kernel version with k3s, you may see the following error:

➜ uname -a
Linux luna 5.15.90.1-microsoft-standard-WSL2 #1 SMP Fri Jan 27 02:56:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

➜ systemctl status k3s.service
● k3s.service - Lightweight Kubernetes
     Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
     Active: activating (start) since Thu 2023-05-04 15:54:02 EDT; 4s ago
       Docs: https://k3s.io
    Process: 28292 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SU>
    Process: 28294 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=1/FAILURE)
    Process: 28295 ExecStartPre=/sbin/modprobe overlay (code=exited, status=1/FAILURE)
   Main PID: 28296 (k3s-server)
      Tasks: 7
     Memory: 18.7M
     CGroup: /system.slice/k3s.service
             └─28296 /usr/local/bin/k3s server

May 04 15:54:02 luna systemd[1]: Starting Lightweight Kubernetes...
May 04 15:54:02 luna sh[28292]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
May 04 15:54:02 luna sh[28293]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
May 04 15:54:02 luna modprobe[28294]: modprobe: FATAL: Module br_netfilter not found in directory /lib/modules/5.15.90.1-microsoft-s>
May 04 15:54:02 luna modprobe[28295]: modprobe: FATAL: Module overlay not found in directory /lib/modules/5.15.90.1-microsoft-standa>
May 04 15:54:03 luna k3s[28296]: time="2023-05-04T15:54:03-04:00" level=info msg="Starting k3s v1.26.4+k3s1 (8d0255af)"

To compile a custom Linux kernel for use with WSL2, you can follow these steps: Custom Linux/x86 Kernel Configuration

To enable the br_netfilter and overlay kernel modules when compiling the Linux kernel source for WSL2, you need to enable the following kernel configuration options:

CONFIG_BRIDGE_NETFILTER: This option enables the br_netfilter module, which provides netfilter support for Linux bridges.

CONFIG_OVERLAY_FS: This option enables the overlay filesystem module, which allows multiple directories to be overlaid into a single directory.

In a typical Linux environment, kernel modules are installed to the /lib/modules/$(uname -r) directory and can be loaded and unloaded using the modprobe and rmmod commands, respectively.

After compiling the Linux kernel sources, you can install the kernel modules by running the make modules_install command. This will copy the compiled kernel modules to the /lib/modules/$(uname -r) directory.
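
A minimal sketch of that build, assuming you have cloned the microsoft/WSL2-Linux-Kernel sources and enabled the two options above in the config:

# From the kernel source tree; Microsoft/config-wsl ships with the WSL2 kernel sources
➜ make KCONFIG_CONFIG=Microsoft/config-wsl -j$(nproc)
➜ sudo make modules_install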

Once the kernel modules are installed, you can use the modprobe command to load them. For example, to load the br_netfilter and overlay kernel modules, you can run the following commands:

sudo modprobe br_netfilter
sudo modprobe overlay

You can use the lsmod command to verify that the kernel modules have been loaded successfully.
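
For example, both modules should be listed once loaded:

➜ lsmod | grep -E 'br_netfilter|overlay'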

# nginx-ingess.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
    traefik.ingress.kubernetes.io/router.entrypoints: web
    # traefik.ingress.kubernetes.io/router.entrypoints: websecure
    # traefik.ingress.kubernetes.io/router.tls: "true"
    # cert-manager.io/issuer: "letsencrypt-staging"
  name: nginx-ingress
  # namespace: my-namespace
spec:
  rules:
    - host: luna.lan
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
  # tls:
  #   - hosts:
  #       - luna.lan
  #     secretName: quickstart-tls
# nginx-worker-nodePort.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx-nodeport
spec:
  type: NodePort
  ports:
    - name: 8080-80
      nodePort: 30080
      port: 8080
      targetPort: 80
      protocol: TCP
  selector:
    app: nginx
  externalIPs:
    - 172.30.89.132
    - 172.17.246.34
# Port forwarding Windows to WSL2
netsh interface portproxy add v4tov4 listenport=80 listenaddress=0.0.0.0 connectport=80 connectaddress=172.30.89.132
netsh interface portproxy add v4tov4 listenport=8080 listenaddress=0.0.0.0 connectport=8080 connectaddress=172.30.89.132
netsh interface portproxy add v4tov4 listenport=6443 listenaddress=0.0.0.0 connectport=6443 connectaddress=172.30.89.132
netsh interface portproxy add v4tov4 listenport=6444 listenaddress=0.0.0.0 connectport=6444 connectaddress=172.30.89.132
netsh interface portproxy add v4tov4 listenport=10250 listenaddress=0.0.0.0 connectport=10250 connectaddress=172.30.89.132
netsh interface portproxy add v4tov4 listenport=2379 listenaddress=0.0.0.0 connectport=2379 connectaddress=172.30.89.132
netsh interface portproxy add v4tov4 listenport=2380 listenaddress=0.0.0.0 connectport=2380 connectaddress=172.30.89.132
netsh interface portproxy show all
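
To undo a rule later, netsh has matching delete and reset subcommands:

# Remove a single rule
➜ netsh interface portproxy delete v4tov4 listenport=80 listenaddress=0.0.0.0
# Or clear every portproxy rule at once
➜ netsh interface portproxy reset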

Rancher Desktop

Select the context, create a namespace, and deploy nginx and a service

➜ kubectl config use-context rancher-desktop

➜ kubectl get node -o wide

### Namespaces
➜ kubectl create namespace my-namespace

### Deployments
# kubectl config set-context --current --namespace=my-namespace
➜ kubectl create deployment nginx --image=nginx:latest --port=80 --namespace my-namespace

### Services
➜ kubectl expose deployment nginx --port=80 -n my-namespace
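
Then confirm everything is up in the namespace:

➜ kubectl get deploy,svc,pod -n my-namespace -o wide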

Traefik as the default ingress controller

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
    traefik.ingress.kubernetes.io/router.entrypoints: web
  name: nginx-ingress
  namespace: my-namespace

spec:
  rules:
    - host: hp450-2.lan
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80

Create ingress

➜ kubectl apply -n my-namespace -f .\nginx-ingess.yaml

Port Forwarding

Another option is to use kubectl port-forward:

➜ kubectl get pod -o wide -n my-namespace

# kubectl describe pod nginx-6b7f675859-p7csp -n my-namespace

➜ kubectl port-forward nginx-6b7f675859-p7csp 8080:80

Delete namespace

➜ kubectl delete namespace my-namespace

Links

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-staging
  namespace: my-namespace
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: [email protected]
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    solvers:
      - http01:
          ingress:
            class: traefik
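
A sketch for applying and checking the Issuer, assuming cert-manager is already installed in the cluster (the file name issuer.yaml is arbitrary):

➜ kubectl apply -f issuer.yaml
# The Issuer should report Ready once the ACME staging account is registered
➜ kubectl describe issuer letsencrypt-staging -n my-namespace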
Comments

@oleksis commented May 2, 2023:

OK, thank you. Just reading very quickly, I don't see any forwarding from the Windows localhost network to your "intranet" IP 192.168.1.xxx. If you do a netstat -an from PowerShell, you'll see the WSL2 "auto-forwarded" ports are only on your localhost network, meaning you can't access them from another computer on your network.

This is on the WSL2 side, nothing to do with whichever K8s cluster you choose. And while having K3s deployed directly to a distro instead of k3d might help you reach the node more easily (i.e. Rancher Desktop), please note that the second node should be a worker node (--agent).

HA is nice and all, but it will require another node (two-node control planes are not encouraged, or even possible, depending on the key/value backend you choose), and the config flags on the first node would need --cluster-init to make it "aware" that it will be part of an HA cluster.

@nunix
