@jacobweinstock
Last active November 14, 2024 11:23
Tinkerbell machine provisioning demo

Walk-through demo

Demo of installing Ubuntu 22.04 on an HP EliteDesk.

Install the Tinkerbell stack

  1. Satisfy Stack installation prerequisites.
    • k3d cluster create --network host --no-lb --k3s-arg "--disable=traefik,servicelb" --k3s-arg "--kube-apiserver-arg=feature-gates=MixedProtocolLBService=true" --host-pid-mode
    • Command pulled from the sandbox repo.
  2. Clone the Tinkerbell chart repo.
    • git clone https://github.com/tinkerbell/charts.git
  3. Customize the stack template values.yaml to your environment.
    • cd charts/tinkerbell
    • Follow the guidance in the chart README.md.
  4. Install the Tinkerbell stack.
    • helm dependency build stack/
      trusted_proxies=$(kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}' | tr ' ' ',')
      helm install stack-release stack/ --create-namespace --namespace tink-system --wait --set "boots.trustedProxies=${trusted_proxies}" --set "hegel.trustedProxies=${trusted_proxies}"
  5. Verify the stack is up and running.
    • Verify all pods are running: kubectl get pods -n tink-system
    • Verify the tink-stack service has the IP you specified in the values.yaml under the EXTERNAL-IP column: kubectl get svc -n tink-system
  6. Download, convert, and serve the Ubuntu 22.04 (Jammy) .img file.
    • kubectl apply -n tink-system -f ~/repos/tinkerbell/sandbox/deploy/stack/helm/manifests/ubuntu-download.yaml
    • This makes the Jammy image available via the Tinkerbell stack web server so it can be referenced in your template.yaml file.
    • http://EXTERNAL-IP:8080/jammy-server-cloudimg-amd64.raw.gz
    • You can inspect the ubuntu-download.yaml and/or read this doc for the manual steps of performing this download and convert.
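The trusted_proxies pipeline in step 4 simply joins every node podCIDR into a comma-separated list for the boots and hegel trustedProxies values. On a sample podCIDR list (CIDR values below are stand-ins, not from a real cluster) it behaves like this:

```shell
# kubectl prints the node podCIDRs space-separated; tr turns that into the
# comma-separated list the chart expects for boots/hegel trustedProxies.
pod_cidrs="10.42.0.0/24 10.42.1.0/24"   # stand-in for: kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
trusted_proxies=$(printf '%s' "$pod_cidrs" | tr ' ' ',')
echo "$trusted_proxies"                 # -> 10.42.0.0/24,10.42.1.0/24
```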

Apply the Tinkerbell CRDs

  1. Customize the three CRDs (Hardware, Template, Workflow) to your environment.
    • Example CRDs can be found in the Tink repo.
    • If you use quay.io/tinkerbell-actions/image2disk:v1.0.0, add the IMG_URL: "http://EXTERNAL-IP:8080/jammy-server-cloudimg-amd64.raw.gz" environment variable to your template.yaml file.
    • Be sure your workflow.yaml references the correct templateRef and hardwareRef CRD names and that device_1 matches the MAC address of the target machine.
    • See the hardware.yaml, template.yaml, and workflow.yaml files below.
  2. Apply the Tinkerbell CRDs to the cluster.
    • kubectl apply -f hardware.yaml
      kubectl apply -f template.yaml
      kubectl apply -f workflow.yaml
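The device_1 value appears to be compared against the Hardware MAC as a plain string, so case or formatting differences will prevent the workflow from matching the machine. A quick sanity check before applying (the MAC below is this demo's value; the normalization step is a suggested precaution, not a Tinkerbell requirement):

```shell
# Normalize a MAC to the lowercase colon-separated form used in hardware.yaml
mac="F8:B4:6A:AB:8D:40"             # e.g. copied from BIOS/iLO, possibly uppercase
normalized=$(printf '%s' "$mac" | tr 'A-Z' 'a-z')
echo "$normalized"                  # -> f8:b4:6a:ab:8d:40, should match device_1
```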

Provision the machine

  1. Watch the workflow.
    • kubectl get workflow -n tink-system --watch
  2. Reboot the machine.
  3. Once the workflow is complete (STATE_SUCCESS), reboot the machine again.
  4. Log into the machine. This can be done via the console or via SSH.
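The create-user action in template.yaml sets the password with openssl passwd -1 tink, so the demo credentials are tink/tink. For reference, that command produces an MD5-crypt hash of the form $1$<salt>$<hash> (the salt below is chosen arbitrarily for illustration):

```shell
# Hash as produced by the template's create-user action (salt normally random)
openssl passwd -1 -salt demosalt tink
# Log in after the final reboot, via console or SSH:
#   ssh tink@192.168.2.147          # password: tink
```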
hardware.yaml

apiVersion: "tinkerbell.org/v1alpha1"
kind: Hardware
metadata:
  name: hp-demo
  namespace: tink-system
spec:
  disks:
    - device: /dev/nvme0n1
  metadata:
    facility:
      facility_code: onprem
    manufacturer:
      slug: hp
    instance:
      userdata: ""
      hostname: "hp-demo"
      id: "f8:b4:6a:ab:8d:40"
      operating_system:
        distro: "ubuntu"
        os_slug: "ubuntu_22_04"
        version: "22.04"
  interfaces:
    - dhcp:
        arch: x86_64
        hostname: hp-demo
        ip:
          address: 192.168.2.147
          gateway: 192.168.2.1
          netmask: 255.255.255.0
        lease_time: 86400
        mac: f8:b4:6a:ab:8d:40
        name_servers:
          - 1.1.1.1
          - 8.8.8.8
        uefi: true
      netboot:
        allowPXE: true
        allowWorkflow: true
template.yaml

apiVersion: "tinkerbell.org/v1alpha1"
kind: Template
metadata:
  name: ubuntu-jammy-nvme
  namespace: tink-system
spec:
  data: |
    version: "0.1"
    name: ubuntu_jammy_nvme
    global_timeout: 9800
    tasks:
      - name: "os-installation"
        worker: "{{.device_1}}"
        volumes:
          - /dev:/dev
          - /dev/console:/dev/console
          - /lib/firmware:/lib/firmware:ro
        actions:
          - name: "stream-ubuntu-image"
            image: quay.io/tinkerbell-actions/image2disk:v1.0.0
            timeout: 9600
            environment:
              DEST_DISK: {{ index .Hardware.Disks 0 }}
              IMG_URL: "http://192.168.2.111:8080/jammy-server-cloudimg-amd64.raw.gz"
              COMPRESSED: true
          - name: "grow-partition"
            image: quay.io/tinkerbell-actions/cexec:v1.0.0
            timeout: 90
            environment:
              BLOCK_DEVICE: {{ index .Hardware.Disks 0 }}p1
              FS_TYPE: ext4
              CHROOT: y
              DEFAULT_INTERPRETER: "/bin/sh -c"
              CMD_LINE: "growpart {{ index .Hardware.Disks 0 }} 1 && resize2fs {{ index .Hardware.Disks 0 }}p1"
          - name: "install-openssl"
            image: quay.io/tinkerbell-actions/cexec:v1.0.0
            timeout: 90
            environment:
              BLOCK_DEVICE: {{ index .Hardware.Disks 0 }}p1
              FS_TYPE: ext4
              CHROOT: y
              DEFAULT_INTERPRETER: "/bin/sh -c"
              CMD_LINE: "apt -y update && apt -y install openssl"
          - name: "create-user"
            image: quay.io/tinkerbell-actions/cexec:v1.0.0
            timeout: 90
            environment:
              BLOCK_DEVICE: {{ index .Hardware.Disks 0 }}p1
              FS_TYPE: ext4
              CHROOT: y
              DEFAULT_INTERPRETER: "/bin/sh -c"
              CMD_LINE: "useradd -p $(openssl passwd -1 tink) -s /bin/bash -d /home/tink/ -m -G sudo tink"
          - name: "enable-ssh"
            image: quay.io/tinkerbell-actions/cexec:v1.0.0
            timeout: 90
            environment:
              BLOCK_DEVICE: {{ index .Hardware.Disks 0 }}p1
              FS_TYPE: ext4
              CHROOT: y
              DEFAULT_INTERPRETER: "/bin/sh -c"
              CMD_LINE: "ssh-keygen -A; systemctl enable ssh.service; sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config"
          - name: "disable-apparmor"
            image: quay.io/tinkerbell-actions/cexec:v1.0.0
            timeout: 90
            environment:
              BLOCK_DEVICE: {{ index .Hardware.Disks 0 }}p1
              FS_TYPE: ext4
              CHROOT: y
              DEFAULT_INTERPRETER: "/bin/sh -c"
              CMD_LINE: "systemctl disable apparmor; systemctl disable snapd"
          - name: "write-netplan"
            image: quay.io/tinkerbell-actions/writefile:v1.0.0
            timeout: 90
            environment:
              DEST_DISK: {{ index .Hardware.Disks 0 }}p1
              FS_TYPE: ext4
              DEST_PATH: /etc/netplan/config.yaml
              CONTENTS: |
                network:
                  version: 2
                  renderer: networkd
                  ethernets:
                    id0:
                      match:
                        name: en*
                      dhcp4: true
              UID: 0
              GID: 0
              MODE: 0644
              DIRMODE: 0755
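Tink renders the Go-template placeholders in the template's data before handing actions to the machine: {{.device_1}} comes from the workflow's hardwareMap and {{ index .Hardware.Disks 0 }} from the referenced Hardware object. Against the hp-demo hardware, the first action expands roughly like this (a sketch of the rendered result, not literal controller output):

```yaml
# "stream-ubuntu-image" after rendering; worker: becomes f8:b4:6a:ab:8d:40
- name: "stream-ubuntu-image"
  image: quay.io/tinkerbell-actions/image2disk:v1.0.0
  timeout: 9600
  environment:
    DEST_DISK: /dev/nvme0n1          # from {{ index .Hardware.Disks 0 }}
    IMG_URL: "http://192.168.2.111:8080/jammy-server-cloudimg-amd64.raw.gz"
    COMPRESSED: true
```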
workflow.yaml

apiVersion: "tinkerbell.org/v1alpha1"
kind: Workflow
metadata:
  name: demo-wf
  namespace: tink-system
spec:
  templateRef: ubuntu-jammy-nvme
  hardwareRef: hp-demo
  hardwareMap:
    device_1: f8:b4:6a:ab:8d:40