@neoaggelos
Last active January 19, 2025 22:58
lxd provider cluster-api
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: test
  namespace: default
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: MicroK8sControlPlane
    name: test-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
    kind: LXDCluster
    name: test
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: LXDCluster
metadata:
  name: test
  namespace: default
spec: {}
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: MicroK8sControlPlane
metadata:
  name: test-control-plane
  namespace: default
spec:
  controlPlaneConfig:
    clusterConfiguration:
      portCompatibilityRemap: true
    initConfiguration:
      IPinIP: true
      addons:
      - dns
      - ingress
      joinTokenTTLInSecs: 9000
  machineTemplate:
    infrastructureTemplate:
      apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
      kind: LXDMachineTemplate
      name: test-control-plane
  replicas: 1
  version: v1.25.0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: LXDMachineTemplate
metadata:
  name: test-control-plane
  namespace: default
spec:
  template:
    spec:
      imageAlias: u22
      instanceType: container
      profiles:
      - default
      - microk8s
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: test-md-0
  namespace: default
spec:
  clusterName: test
  replicas: 0
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: MicroK8sConfigTemplate
          name: test-md-0
      clusterName: test
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
        kind: LXDMachineTemplate
        name: test-md-0
      version: 1.25.0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: LXDMachineTemplate
metadata:
  name: test-md-0
  namespace: default
spec:
  template:
    spec:
      imageAlias: u22
      instanceType: container
      profiles:
      - default
      - microk8s
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: MicroK8sConfigTemplate
metadata:
  name: test-md-0
  namespace: default
spec:
  template:
    spec: {}
# configure LXD, and allow access over HTTPS.
# note the IP address (in this case, 10.0.3.181) and the trust password.
$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this node? [default=10.0.3.181]:
Are you joining an existing cluster? (yes/no) [default=no]:
What name should be used to identify this node in the cluster? [default=test-ovn]:
Setup password authentication on the cluster? (yes/no) [default=no]: yes
Trust password for new clients:
Again:
....
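# (optional check, a sketch assuming the defaults above) verify that the LXD API
# is reachable over HTTPS before continuing; an unauthenticated GET /1.0 should
# return basic server info:
$ curl -k https://10.0.3.181:8443/1.0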
# create image alias for ubuntu 22.04
$ sudo lxc launch ubuntu:22.04 t1
$ sudo lxc image alias create u22 ubuntu:22.04
$ sudo lxc rm t1 --force
# create profile
$ lxc profile create microk8s
$ curl https://raw.githubusercontent.com/ubuntu/microk8s/master/tests/lxc/microk8s.profile | lxc profile edit microk8s
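# (optional) double-check that the profile was imported correctly
$ lxc profile show microk8s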
# deploy LXD provider and configure access to server (replace IP and password)
$ microk8s kubectl apply -f provider.yaml
$ microk8s kubectl create configmap -n capl-system lxd-socket --from-literal=LXD_SERVER=https://10.0.3.181:8443 --from-literal=LXD_PASSWORD=password
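# the manager pod consumes this configmap via 'envFrom' (see the Deployment in
# provider.yaml); if you change the values later, restart the controller:
$ microk8s kubectl rollout restart deployment -n capl-system capl-controller-manager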
# deploy cluster.yaml. initially, it has 1 control plane node and 0 workers
$ microk8s kubectl apply -f cluster.yaml
# wait for init node to come up, check with 'lxc list' and note its IP address
$ lxc list
+-------------------------------+---------+-----------------------------+------+-----------+-----------+
|             NAME              |  STATE  |            IPV4             | IPV6 |   TYPE    | SNAPSHOTS |
+-------------------------------+---------+-----------------------------+------+-----------+-----------+
| test-test-control-plane-pkstp | RUNNING | 10.0.0.187 (eth0)           |      | CONTAINER | 0         |
+-------------------------------+---------+-----------------------------+------+-----------+-----------+
# after deployment, fix `test-kubeconfig` secret to unblock the control plane provider
$ microk8s kubectl edit cluster test # set spec.controlPlaneEndpoint to 10.0.0.187:6443
$ microk8s kubectl edit lxdcluster test # set spec.controlPlaneEndpoint to 10.0.0.187:6443
$ clusterctl get kubeconfig test > kubeconfig
$ vim kubeconfig # change 'https://TODO:12345' to 'https://10.0.0.187:6443'
$ cat kubeconfig | base64 -w0
$ microk8s kubectl edit secret test-kubeconfig # change value to the new base64 string
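# alternatively (a sketch), patch the secret in one step instead of the manual
# edits above; the data key in cluster-api kubeconfig secrets is 'value':
$ microk8s kubectl patch secret test-kubeconfig --type merge -p "{\"data\":{\"value\":\"$(base64 -w0 < kubeconfig)\"}}"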
# cluster is now ready to scale. edit 'cluster.yaml', bump 'replicas: 0' to the desired worker count and re-apply to deploy worker nodes
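# for example, scale to 1 worker without editing the file (equivalent result;
# assumes the MachineDeployment scale subresource, available in recent cluster-api releases):
$ microk8s kubectl scale machinedeployment test-md-0 --replicas=1
$ lxc list    # a new test-md-0 instance should appear shortly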
apiVersion: v1
kind: Namespace
metadata:
  labels:
    control-plane: controller-manager
  name: capl-system
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.9.0
  creationTimestamp: null
  labels:
    cluster.x-k8s.io/provider: infrastructure-lxd
    cluster.x-k8s.io/v1beta1: v1alpha1
  name: lxdclusters.infrastructure.cluster.x-k8s.io
spec:
  group: infrastructure.cluster.x-k8s.io
  names:
    categories:
    - cluster-api
    kind: LXDCluster
    listKind: LXDClusterList
    plural: lxdclusters
    shortNames:
    - lc
    singular: lxdcluster
  scope: Namespaced
  versions:
  - additionalPrinterColumns:
    - description: Cluster to which this LXDCluster belongs
      jsonPath: .metadata.labels.cluster\.x-k8s\.io/cluster-name
      name: Cluster
      type: string
    - description: Cluster infrastructure is ready for LXD instances
      jsonPath: .status.ready
      name: Ready
      type: string
    - description: Time duration since creation of LXDCluster
      jsonPath: .metadata.creationTimestamp
      name: Age
      type: date
    name: v1alpha1
    schema:
      openAPIV3Schema:
        description: LXDCluster is the Schema for the lxdclusters API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: LXDClusterSpec defines the desired state of LXDCluster
            properties:
              controlPlaneEndpoint:
                description: ControlPlaneEndpoint represents the endpoint to communicate with the control plane.
                properties:
                  host:
                    description: The hostname on which the API server is serving.
                    type: string
                  port:
                    description: The port on which the API server is serving.
                    format: int32
                    type: integer
                required:
                - host
                - port
                type: object
            type: object
          status:
            description: LXDClusterStatus defines the observed state of LXDCluster
            properties:
              ready:
                description: Ready denotes that the LXD cluster (infrastructure) is ready.
                type: boolean
            required:
            - ready
            type: object
        type: object
    served: true
    storage: true
    subresources:
      status: {}
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.9.0
  creationTimestamp: null
  labels:
    cluster.x-k8s.io/provider: infrastructure-lxd
    cluster.x-k8s.io/v1beta1: v1alpha1
  name: lxdclustertemplates.infrastructure.cluster.x-k8s.io
spec:
  group: infrastructure.cluster.x-k8s.io
  names:
    kind: LXDClusterTemplate
    listKind: LXDClusterTemplateList
    plural: lxdclustertemplates
    singular: lxdclustertemplate
  scope: Namespaced
  versions:
  - name: v1alpha1
    schema:
      openAPIV3Schema:
        description: LXDClusterTemplate is the Schema for the lxdclustertemplates API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: LXDClusterTemplateSpec defines the desired state of LXDClusterTemplate
            properties:
              template:
                properties:
                  metadata:
                    description: 'Standard object''s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata'
                    properties:
                      annotations:
                        additionalProperties:
                          type: string
                        description: 'Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations'
                        type: object
                      labels:
                        additionalProperties:
                          type: string
                        description: 'Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels'
                        type: object
                    type: object
                  spec:
                    description: Spec is the specification of the desired behaviour of the cluster.
                    properties:
                      controlPlaneEndpoint:
                        description: ControlPlaneEndpoint represents the endpoint to communicate with the control plane.
                        properties:
                          host:
                            description: The hostname on which the API server is serving.
                            type: string
                          port:
                            description: The port on which the API server is serving.
                            format: int32
                            type: integer
                        required:
                        - host
                        - port
                        type: object
                    type: object
                required:
                - spec
                type: object
            required:
            - template
            type: object
        type: object
    served: true
    storage: true
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.9.0
  creationTimestamp: null
  labels:
    cluster.x-k8s.io/provider: infrastructure-lxd
    cluster.x-k8s.io/v1beta1: v1alpha1
  name: lxdmachines.infrastructure.cluster.x-k8s.io
spec:
  group: infrastructure.cluster.x-k8s.io
  names:
    categories:
    - cluster-api
    kind: LXDMachine
    listKind: LXDMachineList
    plural: lxdmachines
    shortNames:
    - lm
    singular: lxdmachine
  scope: Namespaced
  versions:
  - additionalPrinterColumns:
    - description: Cluster to which this LXDMachine belongs
      jsonPath: .metadata.labels.cluster\.x-k8s\.io/cluster-name
      name: Cluster
      type: string
    - description: LXD instance state
      jsonPath: .status.state
      name: State
      type: string
    - description: LXD instance ID
      jsonPath: .spec.providerID
      name: ProviderID
      type: string
    - description: Machine ready status
      jsonPath: .status.ready
      name: Ready
      type: string
    - description: Machine object which owns this LXDMachine
      jsonPath: .metadata.ownerReferences[?(@.kind=="Machine")].name
      name: Machine
      type: string
    - description: Time duration since creation of LXDMachine
      jsonPath: .metadata.creationTimestamp
      name: Age
      type: date
    name: v1alpha1
    schema:
      openAPIV3Schema:
        description: LXDMachine is the Schema for the lxdmachines API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: LXDMachineSpec defines the desired state of LXDMachine
            properties:
              imageAlias:
                description: Image is the image alias name to use.
                type: string
              instanceType:
                description: InstanceType is the instance type to create.
                enum:
                - container
                - virtual-machine
                type: string
              profiles:
                description: Profiles is a list of profiles to attach to the instance.
                items:
                  type: string
                type: array
              providerID:
                description: ProviderID is the container name in ProviderID format (lxd:///<containername>)
                type: string
            type: object
          status:
            description: LXDMachineStatus defines the observed state of LXDMachine
            properties:
              addresses:
                items:
                  description: NodeAddress contains information for the node's address.
                  properties:
                    address:
                      description: The node address.
                      type: string
                    type:
                      description: Node address type, one of Hostname, ExternalIP or InternalIP.
                      type: string
                  required:
                  - address
                  - type
                  type: object
                type: array
              ready:
                type: boolean
              state:
                type: string
            required:
            - addresses
            type: object
        type: object
    served: true
    storage: true
    subresources:
      status: {}
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.9.0
  creationTimestamp: null
  labels:
    cluster.x-k8s.io/provider: infrastructure-lxd
    cluster.x-k8s.io/v1beta1: v1alpha1
  name: lxdmachinetemplates.infrastructure.cluster.x-k8s.io
spec:
  group: infrastructure.cluster.x-k8s.io
  names:
    kind: LXDMachineTemplate
    listKind: LXDMachineTemplateList
    plural: lxdmachinetemplates
    singular: lxdmachinetemplate
  scope: Namespaced
  versions:
  - name: v1alpha1
    schema:
      openAPIV3Schema:
        description: LXDMachineTemplate is the Schema for the lxdmachinetemplates API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: LXDMachineTemplateSpec defines the desired state of LXDMachineTemplate
            properties:
              template:
                properties:
                  metadata:
                    description: 'Standard object''s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata'
                    properties:
                      annotations:
                        additionalProperties:
                          type: string
                        description: 'Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations'
                        type: object
                      labels:
                        additionalProperties:
                          type: string
                        description: 'Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels'
                        type: object
                    type: object
                  spec:
                    description: Spec is the specification of the desired behaviour of the machine.
                    properties:
                      imageAlias:
                        description: Image is the image alias name to use.
                        type: string
                      instanceType:
                        description: InstanceType is the instance type to create.
                        enum:
                        - container
                        - virtual-machine
                        type: string
                      profiles:
                        description: Profiles is a list of profiles to attach to the instance.
                        items:
                          type: string
                        type: array
                      providerID:
                        description: ProviderID is the container name in ProviderID format (lxd:///<containername>)
                        type: string
                    type: object
                required:
                - spec
                type: object
            required:
            - template
            type: object
        type: object
    served: true
    storage: true
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: capl-controller-manager
  namespace: capl-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: capl-leader-election-role
  namespace: capl-system
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: capl-manager-role
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - cluster.x-k8s.io
  resources:
  - clusters
  - clusters/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - cluster.x-k8s.io
  resources:
  - machines
  - machines/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - infrastructure.cluster.x-k8s.io
  resources:
  - lxdclusters
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - infrastructure.cluster.x-k8s.io
  resources:
  - lxdclusters/finalizers
  verbs:
  - update
- apiGroups:
  - infrastructure.cluster.x-k8s.io
  resources:
  - lxdclusters/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - infrastructure.cluster.x-k8s.io
  resources:
  - lxdmachines
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - infrastructure.cluster.x-k8s.io
  resources:
  - lxdmachines/finalizers
  verbs:
  - update
- apiGroups:
  - infrastructure.cluster.x-k8s.io
  resources:
  - lxdmachines/status
  verbs:
  - get
  - patch
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: capl-metrics-reader
rules:
- nonResourceURLs:
  - /metrics
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: capl-proxy-role
rules:
- apiGroups:
  - authentication.k8s.io
  resources:
  - tokenreviews
  verbs:
  - create
- apiGroups:
  - authorization.k8s.io
  resources:
  - subjectaccessreviews
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: capl-leader-election-rolebinding
  namespace: capl-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: capl-leader-election-role
subjects:
- kind: ServiceAccount
  name: capl-controller-manager
  namespace: capl-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: capl-manager-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: capl-manager-role
subjects:
- kind: ServiceAccount
  name: capl-controller-manager
  namespace: capl-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: capl-proxy-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: capl-proxy-role
subjects:
- kind: ServiceAccount
  name: capl-controller-manager
  namespace: capl-system
---
apiVersion: v1
data:
  controller_manager_config.yaml: |
    apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
    kind: ControllerManagerConfig
    health:
      healthProbeBindAddress: :8081
    metrics:
      bindAddress: 127.0.0.1:8080
    webhook:
      port: 9443
    leaderElection:
      leaderElect: true
      resourceName: 349154e5.cluster.x-k8s.io
    # leaderElectionReleaseOnCancel defines if the leader should voluntarily step
    # down when the Manager ends. This requires the binary to immediately end when
    # the Manager is stopped, otherwise this setting is unsafe. Setting this
    # significantly speeds up voluntary leader transitions as the new leader
    # doesn't have to wait LeaseDuration time first.
    # In the default scaffold provided, the program ends immediately after the
    # manager stops, so it would be fine to enable this option. However, if you
    # are doing or intend to do any operation such as performing cleanups after
    # the manager stops, then its usage might be unsafe.
    # leaderElectionReleaseOnCancel: true
kind: ConfigMap
metadata:
  name: capl-manager-config
  namespace: capl-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    control-plane: controller-manager
  name: capl-controller-manager-metrics-service
  namespace: capl-system
spec:
  ports:
  - name: https
    port: 8443
    protocol: TCP
    targetPort: https
  selector:
    control-plane: controller-manager
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    control-plane: controller-manager
  name: capl-controller-manager
  namespace: capl-system
spec:
  replicas: 1
  selector:
    matchLabels:
      control-plane: controller-manager
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/default-container: manager
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - args:
        - --secure-listen-address=0.0.0.0:8443
        - --upstream=http://127.0.0.1:8080/
        - --logtostderr=true
        - --v=0
        image: gcr.io/kubebuilder/kube-rbac-proxy:v0.11.0
        name: kube-rbac-proxy
        ports:
        - containerPort: 8443
          name: https
          protocol: TCP
        resources:
          limits:
            cpu: 500m
            memory: 128Mi
          requests:
            cpu: 5m
            memory: 64Mi
        securityContext:
          allowPrivilegeEscalation: false
      - args:
        - --health-probe-bind-address=:8081
        - --metrics-bind-address=127.0.0.1:8080
        - --leader-elect
        command:
        - /manager
        envFrom:
        - configMapRef:
            name: lxd-socket
        image: neoaggelos/capi-lxd:dev1
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8081
          initialDelaySeconds: 15
          periodSeconds: 20
        name: manager
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8081
          initialDelaySeconds: 5
          periodSeconds: 10
        resources:
          limits:
            cpu: 500m
            memory: 128Mi
          requests:
            cpu: 10m
            memory: 64Mi
        securityContext:
          allowPrivilegeEscalation: false
      securityContext:
        runAsNonRoot: true
      serviceAccountName: capl-controller-manager
      terminationGracePeriodSeconds: 10
@wirwolf commented Jul 9, 2024

hey @neoaggelos, can you publish the source code of the Docker image neoaggelos/capi-lxd:dev1?

@neoaggelos commented Jul 11, 2024

hi @wirwolf, this was hacked together as a weekend project, and contains lots of hard-coded things and assumptions. I have been thinking of returning to it at some point to clean it up and make it shareable.

But I'm excited to see more interest in something like this

@aliazlan-t

hey @neoaggelos, any update on when you will be able to publish it? I would love to contribute to it instead of starting a new project.

@neoaggelos

hello @aliazlan-t, lovely to see the interest in this!

I have been slowly working on this on the side, and I expect to have some time to tidy things up during the holiday break. I can't promise any estimates, but I do want to spend time on the code and get it up and running!

@wirwolf commented Dec 11, 2024

I hope your code will be compatible with Incus

@neoaggelos

of course ;)

@neoaggelos commented Dec 26, 2024

Super excited to share some progress made during the break! This is a 3 control-plane node / 1 worker kubeadm cluster running on Incus:

ubuntu@damocles ~ $ kubectl get cluster,lxccluster,machine,lxcmachine,kubeadmcontrolplane
NAME                          CLUSTERCLASS   PHASE         AGE   VERSION
cluster.cluster.x-k8s.io/c1                  Provisioned   17m   

NAME                                            CLUSTER   LOAD BALANCER   READY   AGE
lxccluster.infrastructure.cluster.x-k8s.io/c1   c1        10.0.0.39       true    17m

NAME                                                  CLUSTER   NODENAME                             PROVIDERID                                  PHASE     AGE    VERSION
machine.cluster.x-k8s.io/c1-control-plane-kmjjv       c1        default-c1-control-plane-kmjjv       lxc:///default-c1-control-plane-kmjjv       Running   17m    v1.32.0
machine.cluster.x-k8s.io/c1-control-plane-ppbck       c1        default-c1-control-plane-ppbck       lxc:///default-c1-control-plane-ppbck       Running   6m2s   v1.32.0
machine.cluster.x-k8s.io/c1-control-plane-psjb2       c1        default-c1-control-plane-psjb2       lxc:///default-c1-control-plane-psjb2       Running   3m1s   v1.32.0
machine.cluster.x-k8s.io/c1-worker-md-0-bzqvk-vf5gk   c1        default-c1-worker-md-0-bzqvk-vf5gk   lxc:///default-c1-worker-md-0-bzqvk-vf5gk   Running   17m    v1.32.0

NAME                                                                    CLUSTER   MACHINE                      PROVIDERID                                  READY   AGE
lxcmachine.infrastructure.cluster.x-k8s.io/c1-control-plane-kmjjv       c1        c1-control-plane-kmjjv       lxc:///default-c1-control-plane-kmjjv       true    17m
lxcmachine.infrastructure.cluster.x-k8s.io/c1-control-plane-ppbck       c1        c1-control-plane-ppbck       lxc:///default-c1-control-plane-ppbck       true    6m2s
lxcmachine.infrastructure.cluster.x-k8s.io/c1-control-plane-psjb2       c1        c1-control-plane-psjb2       lxc:///default-c1-control-plane-psjb2       true    3m1s
lxcmachine.infrastructure.cluster.x-k8s.io/c1-worker-md-0-bzqvk-vf5gk   c1        c1-worker-md-0-bzqvk-vf5gk   lxc:///default-c1-worker-md-0-bzqvk-vf5gk   true    17m

NAME                                                                 CLUSTER   INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE   VERSION
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/c1-control-plane   c1        true          true                   3          3       3         0             17m   v1.32.0
ubuntu@damocles ~ $ kubectl --kubeconfig=<(clusterctl get kubeconfig c1) get node,pod -A
NAME                                      STATUS   ROLES           AGE     VERSION
node/default-c1-control-plane-kmjjv       Ready    control-plane   14m     v1.32.0
node/default-c1-control-plane-ppbck       Ready    control-plane   3m46s   v1.32.0
node/default-c1-control-plane-psjb2       Ready    control-plane   93s     v1.32.0
node/default-c1-worker-md-0-bzqvk-vf5gk   Ready    <none>          13m     v1.32.0

NAMESPACE      NAME                                                         READY   STATUS    RESTARTS   AGE
kube-flannel   pod/kube-flannel-ds-5xgdj                                    1/1     Running   0          5m25s
kube-flannel   pod/kube-flannel-ds-6pc7t                                    1/1     Running   0          5m25s
kube-flannel   pod/kube-flannel-ds-9z7tp                                    1/1     Running   0          3m46s
kube-flannel   pod/kube-flannel-ds-tkrrq                                    1/1     Running   0          93s
kube-system    pod/coredns-668d6bf9bc-tr267                                 1/1     Running   0          14m
kube-system    pod/coredns-668d6bf9bc-vmvqm                                 1/1     Running   0          14m
kube-system    pod/etcd-default-c1-control-plane-kmjjv                      1/1     Running   0          14m
kube-system    pod/etcd-default-c1-control-plane-ppbck                      1/1     Running   0          3m39s
kube-system    pod/etcd-default-c1-control-plane-psjb2                      1/1     Running   0          83s
kube-system    pod/kube-apiserver-default-c1-control-plane-kmjjv            1/1     Running   0          14m
kube-system    pod/kube-apiserver-default-c1-control-plane-ppbck            1/1     Running   0          3m39s
kube-system    pod/kube-apiserver-default-c1-control-plane-psjb2            1/1     Running   0          83s
kube-system    pod/kube-controller-manager-default-c1-control-plane-kmjjv   1/1     Running   0          14m
kube-system    pod/kube-controller-manager-default-c1-control-plane-ppbck   1/1     Running   0          3m40s
kube-system    pod/kube-controller-manager-default-c1-control-plane-psjb2   1/1     Running   0          83s
kube-system    pod/kube-proxy-8c4dw                                         1/1     Running   0          93s
kube-system    pod/kube-proxy-8tgpq                                         1/1     Running   0          13m
kube-system    pod/kube-proxy-m8ffp                                         1/1     Running   0          3m46s
kube-system    pod/kube-proxy-w55kc                                         1/1     Running   0          14m
kube-system    pod/kube-scheduler-default-c1-control-plane-kmjjv            1/1     Running   0          14m
kube-system    pod/kube-scheduler-default-c1-control-plane-ppbck            1/1     Running   0          3m39s
kube-system    pod/kube-scheduler-default-c1-control-plane-psjb2            1/1     Running   0          83s
ubuntu@damocles ~ $ sudo incus list user.cluster-name=c1
+------------------------------------+---------+------------------------+------+-----------------+-----------+----------+
|                NAME                |  STATE  |          IPV4          | IPV6 |      TYPE       | SNAPSHOTS | LOCATION |
+------------------------------------+---------+------------------------+------+-----------------+-----------+----------+
| default-c1-control-plane-kmjjv     | RUNNING | 10.244.0.1 (cni0)      |      | VIRTUAL-MACHINE | 0         | damocles |
|                                    |         | 10.244.0.0 (flannel.1) |      |                 |           |          |
|                                    |         | 10.0.0.37 (enp5s0)     |      |                 |           |          |
+------------------------------------+---------+------------------------+------+-----------------+-----------+----------+
| default-c1-control-plane-ppbck     | RUNNING | 10.244.2.0 (flannel.1) |      | VIRTUAL-MACHINE | 0         | damocles |
|                                    |         | 10.0.0.63 (enp5s0)     |      |                 |           |          |
+------------------------------------+---------+------------------------+------+-----------------+-----------+----------+
| default-c1-control-plane-psjb2     | RUNNING | 10.244.3.0 (flannel.1) |      | VIRTUAL-MACHINE | 0         | damocles |
|                                    |         | 10.0.0.94 (enp5s0)     |      |                 |           |          |
+------------------------------------+---------+------------------------+------+-----------------+-----------+----------+
| default-c1-lb                      | RUNNING | 10.0.0.39 (eth0)       |      | CONTAINER (APP) | 0         | damocles |
+------------------------------------+---------+------------------------+------+-----------------+-----------+----------+
| default-c1-worker-md-0-bzqvk-vf5gk | RUNNING | 10.244.1.0 (flannel.1) |      | VIRTUAL-MACHINE | 0         | damocles |
|                                    |         | 10.0.0.22 (enp5s0)     |      |                 |           |          |
+------------------------------------+---------+------------------------+------+-----------------+-----------+----------+

Rough TODO to get a v0.1 out; currently feeling on track to get this done soon-ish:

  • adjust the cloud provider node patch (maps machines in the management cluster to nodes in the workload cluster)
  • support kube-vip for clusters (e.g. in case OCI containers are not supported)
  • sketch out some basic CI, so there is stable testing in place and I can iterate without breaking things
  • improve the machine bootstrapping conditions (currently, machines are assumed ready without checking cloud-init progress)
  • currently, stock Ubuntu images are used, with pre-kubeadm commands installing containerd, kubeadm, etc. on the nodes; add some tooling for constructing a base image
  • docs, docs, docs about considerations to keep in mind while using Incus (or LXD), and what could constitute "production-level" setups -- currently, this is on par with capd for development purposes. However, my plan is to have some docs in place discussing an Incus (or LXD) cluster with OVN networking and stable storage, which is much more trustworthy than a development-level setup.

I cannot share the cluster template or images yet, since I'm heavily iterating on them at the moment.

@wirwolf commented Dec 26, 2024

So good news. Thanks!!!

@neoaggelos

I've been using the comments on this gist as a sort of progress log, so here's where things currently stand since the last update:

  • the cloud provider node patch is implemented
  • added support for an LXC instance, an OCI instance, kube-vip, or a network load balancer (with OVN networks) as the cluster load balancer
  • added automation for constructing a base image with all the necessary utilities
  • implemented cluster and machine conditions

Below are 2 clusters running on a 3-node Incus cluster, using OVN networking and OVN network load balancers for the control plane endpoint:

ubuntu@damocles ~ $ incus cluster list
+------+-----------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
| NAME |          URL          |      ROLES      | ARCHITECTURE | FAILURE DOMAIN | DESCRIPTION | STATUS |      MESSAGE      |
+------+-----------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
| w01  | https://10.0.1.1:8443 | database        | x86_64       | default        |             | ONLINE | Fully operational |
+------+-----------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
| w02  | https://10.0.1.2:8443 | database        | x86_64       | default        |             | ONLINE | Fully operational |
+------+-----------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
| w03  | https://10.0.1.3:8443 | database-leader | x86_64       | default        |             | ONLINE | Fully operational |
|      |                       | database        |              |                |             |        |                   |
+------+-----------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
ubuntu@damocles ~ $ incus list
+------------------------+---------+------------------------+-----------------------------------------------+-----------+-----------+----------+
|          NAME          |  STATE  |          IPV4          |                     IPV6                      |   TYPE    | SNAPSHOTS | LOCATION |
+------------------------+---------+------------------------+-----------------------------------------------+-----------+-----------+----------+
| c1-control-plane-9xq4x | RUNNING | 192.168.1.3 (eth0)     | fd42:9ec5:1f2b:ab4d:216:3eff:fe22:21a4 (eth0) | CONTAINER | 0         | w03      |
|                        |         | 10.244.0.0 (flannel.1) |                                               |           |           |          |
+------------------------+---------+------------------------+-----------------------------------------------+-----------+-----------+----------+
| c1-md-0-vsrgg-t699j    | RUNNING | 192.168.1.2 (eth0)     | fd42:9ec5:1f2b:ab4d:216:3eff:fe17:8689 (eth0) | CONTAINER | 0         | w01      |
|                        |         | 10.244.1.0 (flannel.1) |                                               |           |           |          |
+------------------------+---------+------------------------+-----------------------------------------------+-----------+-----------+----------+
| c2-control-plane-bdfgz | RUNNING | 192.168.1.4 (eth0)     | fd42:9ec5:1f2b:ab4d:216:3eff:fe99:644b (eth0) | CONTAINER | 0         | w02      |
|                        |         | 10.244.0.1 (cni0)      |                                               |           |           |          |
|                        |         | 10.244.0.0 (flannel.1) |                                               |           |           |          |
+------------------------+---------+------------------------+-----------------------------------------------+-----------+-----------+----------+
| c2-md-0-kv69n-bsntc    | RUNNING | 192.168.1.5 (eth0)     | fd42:9ec5:1f2b:ab4d:216:3eff:fe15:6c63 (eth0) | CONTAINER | 0         | w01      |
|                        |         | 10.244.1.0 (flannel.1) |                                               |           |           |          |
+------------------------+---------+------------------------+-----------------------------------------------+-----------+-----------+----------+
ubuntu@damocles ~ $ incus network load-balancer list default
+----------------+-------------+-------+----------+
| LISTEN ADDRESS | DESCRIPTION | PORTS | LOCATION |
+----------------+-------------+-------+----------+
| 10.100.42.1    |             | 1     |          |
+----------------+-------------+-------+----------+
| 10.100.42.2    |             | 1     |          |
+----------------+-------------+-------+----------+
ubuntu@damocles ~ $ kubectl get cluster,lxccluster,machine,lxcmachine
NAME                          CLUSTERCLASS   PHASE         AGE   VERSION
cluster.cluster.x-k8s.io/c1                  Provisioned   14m   
cluster.cluster.x-k8s.io/c2                  Provisioned   10m   

NAME                                            CLUSTER   LOAD BALANCER   READY   AGE
lxccluster.infrastructure.cluster.x-k8s.io/c1   c1        10.100.42.1     true    14m
lxccluster.infrastructure.cluster.x-k8s.io/c2   c2        10.100.42.2     true    10m

NAME                                              CLUSTER   NODENAME                 PROVIDERID                      PHASE     AGE   VERSION
machine.cluster.x-k8s.io/c1-control-plane-9xq4x   c1        c1-control-plane-9xq4x   lxc:///c1-control-plane-9xq4x   Running   14m   v1.32.0
machine.cluster.x-k8s.io/c1-md-0-vsrgg-t699j      c1        c1-md-0-vsrgg-t699j      lxc:///c1-md-0-vsrgg-t699j      Running   14m   v1.32.0
machine.cluster.x-k8s.io/c2-control-plane-bdfgz   c2        c2-control-plane-bdfgz   lxc:///c2-control-plane-bdfgz   Running   10m   v1.32.0
machine.cluster.x-k8s.io/c2-md-0-kv69n-bsntc      c2        c2-md-0-kv69n-bsntc      lxc:///c2-md-0-kv69n-bsntc      Running   10m   v1.32.0

NAME                                                                CLUSTER   MACHINE                  PROVIDERID                      READY   AGE
lxcmachine.infrastructure.cluster.x-k8s.io/c1-control-plane-9xq4x   c1        c1-control-plane-9xq4x   lxc:///c1-control-plane-9xq4x   true    14m
lxcmachine.infrastructure.cluster.x-k8s.io/c1-md-0-vsrgg-t699j      c1        c1-md-0-vsrgg-t699j      lxc:///c1-md-0-vsrgg-t699j      true    14m
lxcmachine.infrastructure.cluster.x-k8s.io/c2-control-plane-bdfgz   c2        c2-control-plane-bdfgz   lxc:///c2-control-plane-bdfgz   true    10m
lxcmachine.infrastructure.cluster.x-k8s.io/c2-md-0-kv69n-bsntc      c2        c2-md-0-kv69n-bsntc      lxc:///c2-md-0-kv69n-bsntc      true    10m
ubuntu@damocles ~ $ kubectl --kubeconfig=<(clusterctl get kubeconfig c1) get node -o wide 
NAME                     STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
c1-control-plane-9xq4x   Ready    control-plane   12m   v1.32.0   192.168.1.3   <none>        Ubuntu 24.04.1 LTS   6.8.0-50-generic   containerd://1.7.12
c1-md-0-vsrgg-t699j      Ready    <none>          11m   v1.32.0   192.168.1.2   <none>        Ubuntu 24.04.1 LTS   6.8.0-50-generic   containerd://1.7.12
ubuntu@damocles ~ $ kubectl --kubeconfig=<(clusterctl get kubeconfig c2) get node -o wide 
NAME                     STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
c2-control-plane-bdfgz   Ready    control-plane   8m41s   v1.32.0   192.168.1.4   <none>        Ubuntu 24.04.1 LTS   6.8.0-50-generic   containerd://1.7.12
c2-md-0-kv69n-bsntc      Ready    <none>          7m53s   v1.32.0   192.168.1.5   <none>        Ubuntu 24.04.1 LTS   6.8.0-50-generic   containerd://1.7.12

@wirwolf commented Dec 31, 2024

Machine type CONTAINER: does it work normally with Kubernetes? I have had bad experience with resource accounting (CPU/RAM): if you limit CPU/RAM in the container, Kubernetes still detects all cores of the host (VDS). For example, with one VDS with 16 CPU / 32 GB RAM and 3 containers each limited to 3 CPU / 4 GB RAM, Kubernetes detects 48 CPU (16 CPU × 3 containers) and 96 GB of RAM.

@neoaggelos

@wirwolf indeed, this would happen unless resource limits are set. The CRD supports LXCMachineTemplates like the one below:

---                                                  
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1 
kind: LXCMachineTemplate                             
metadata:                                            
  name: c1-md-0                                      
  namespace: default                                 
spec:                                                
  template:                                          
    spec:                                            
      flavor: c2-m4                                  
      image:                                         
        name: k8s-u24-v1.32.0-lxc                    
        protocol: null                               
        server: null                                 
      profiles:                                      
      - default                                      
      type: container                                

which is the equivalent of launching instances with incus launch k8s-u24-v1.32.0-lxc t1 -t c2-m4. With this, Kubernetes nodes properly report 2 CPUs and 4 GB of RAM each.

ubuntu@damocles ~ $ kubectl get node -o custom-columns=NAME:metadata.name,CPU:status.allocatable.cpu,RAM:status.allocatable.memory
NAME                     CPU   RAM
c1-control-plane-bflvn   2     4Gi
c1-control-plane-lhkw8   2     4Gi
c1-control-plane-sg854   2     4Gi
c1-md-0-67xcp-njz7n      2     4Gi
c1-md-0-67xcp-pwjsd      2     4Gi
c1-md-0-67xcp-vj4ps      2     4Gi
ubuntu@damocles ~ $ incus list
+------------------------+---------+--------------------+------+-----------+-----------+----------+
|          NAME          |  STATE  |        IPV4        | IPV6 |   TYPE    | SNAPSHOTS | LOCATION |
+------------------------+---------+--------------------+------+-----------+-----------+----------+
| c1-control-plane-bflvn | RUNNING | 192.168.1.6 (eth0) |      | CONTAINER | 0         | w01      |
+------------------------+---------+--------------------+------+-----------+-----------+----------+
| c1-control-plane-lhkw8 | RUNNING | 192.168.1.2 (eth0) |      | CONTAINER | 0         | w02      |
+------------------------+---------+--------------------+------+-----------+-----------+----------+
| c1-control-plane-sg854 | RUNNING | 192.168.1.7 (eth0) |      | CONTAINER | 0         | w03      |
+------------------------+---------+--------------------+------+-----------+-----------+----------+
| c1-md-0-67xcp-njz7n    | RUNNING | 192.168.1.4 (eth0) |      | CONTAINER | 0         | w01      |
+------------------------+---------+--------------------+------+-----------+-----------+----------+
| c1-md-0-67xcp-pwjsd    | RUNNING | 192.168.1.5 (eth0) |      | CONTAINER | 0         | w02      |
+------------------------+---------+--------------------+------+-----------+-----------+----------+
| c1-md-0-67xcp-vj4ps    | RUNNING | 192.168.1.3 (eth0) |      | CONTAINER | 0         | w03      |
+------------------------+---------+--------------------+------+-----------+-----------+----------+
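For reference, a rough sketch of what the c2-m4 flavor presumably expands to, assuming the usual instance-type mapping of 2 CPUs / 4 GiB RAM (the exact mapping is up to the provider):

incus launch k8s-u24-v1.32.0-lxc t1 --config limits.cpu=2 --config limits.memory=4GiB   # assumption: c2-m4 == 2 CPUs / 4GiB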

But I still believe the actual load would be shared between containers on the same hypervisor. For that reason, virtual-machine instances can be used just as well (and it is up to the cluster template to define the machine spec).

@wirwolf commented Jan 3, 2025

Okay. Can you try to check the calculation with Prometheus?

@neoaggelos commented Jan 14, 2025

@wirwolf here's a v0.0.1-prealpha.1 to experiment with while I'm putting together docs and getting ready to cut a first release. I would appreciate any comments/feedback:

https://gist.github.com/neoaggelos/f6bdef9e092219293dd1cdea4dab2151

@wirwolf commented Jan 14, 2025

Thanks. I'll try this on the weekend.

@wirwolf commented Jan 18, 2025

I tested it and everything works, but I cannot install the Kubernetes cluster on an Incus setup with 2 nodes.

+------------------------+---------+-----------------------+------+-----------+-----------+----------+
|          NAME          |  STATE  |         IPV4          | IPV6 |   TYPE    | SNAPSHOTS | LOCATION |
+------------------------+---------+-----------------------+------+-----------+-----------+----------+
| c1-control-plane-9f59v | RUNNING | 10.117.223.163 (eth0) |      | CONTAINER | 0         | incus1   |
+------------------------+---------+-----------------------+------+-----------+-----------+----------+
| default-c1-lb          | RUNNING | 10.117.223.227 (eth0) |      | CONTAINER | 0         | incus0   |
+------------------------+---------+-----------------------+------+-----------+-----------+----------+
NAME   CLUSTERCLASS   PHASE         AGE     VERSION
c1                    Provisioned   5m21s
NAME   CLUSTER   LOAD BALANCER    READY   AGE
c1     c1        10.117.223.227   true    5m21s
NAME                     CLUSTER   NODENAME   PROVIDERID   PHASE          AGE     VERSION
c1-control-plane-9f59v   c1                                Provisioning   5m12s   v1.32.0
c1-md-0-7fb9p-s2wtj      c1                                Pending        5m5s    v1.32.0
c1-md-0-7fb9p-xrzxw      c1                                Pending        5m5s    v1.32.0
NAME                     CLUSTER   MACHINE                  PROVIDERID   READY   AGE
c1-control-plane-9f59v   c1        c1-control-plane-9f59v                        5m12s
c1-md-0-7fb9p-s2wtj      c1        c1-md-0-7fb9p-s2wtj                           5m6s
c1-md-0-7fb9p-xrzxw      c1        c1-md-0-7fb9p-xrzxw                           5m5s


But this could be a problem in my network configuration.

@wirwolf commented Jan 19, 2025

Also, I tried to set up a Kubernetes cluster on my local network. The operator cannot recognize the local 192.168.x.x subnet:

I0119 16:47:27.665838       1 lxc_util.go:45] "Waiting for instance address" controller="lxccluster" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="LXCCluster" LXCCluster="default/c1" namespace="default" name="c1" reconcileID="f6169598-9a4e-4f22-9f2e-acf507aa0658" Cluster="default/c1" profileName="cluster-api-default-c1" instance="default-c1-lb" image={"name":"haproxy","fingerprint":"","server":"https://d14dnvi2l3tc5t.cloudfront.net","protocol":"simplestreams"}
[... the same "Waiting for instance address" message repeats every second ...]
I0119 16:47:43.800077       1 lxc_util.go:45] "Waiting for instance address" controller="lxccluster" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="LXCCluster" LXCCluster="default/c1" namespace="default" name="c1" reconcileID="f6169598-9a4e-4f22-9f2e-acf507aa0658" Cluster="default/c1" profileName="cluster-api-default-c1" instance="default-c1-lb" image={"name":"haproxy","fingerprint":"","server":"https://d14dnvi2l3tc5t.cloudfront.net","protocol":"simplestreams"}

But the container starts and works normally

+---------------+---------+----------------------+------+-----------+-----------+----------+
|     NAME      |  STATE  |         IPV4         | IPV6 |   TYPE    | SNAPSHOTS | LOCATION |
+---------------+---------+----------------------+------+-----------+-----------+----------+
| default-c1-lb | RUNNING | 192.168.0.161 (eth0) |      | CONTAINER | 0         | incus0   |
+---------------+---------+----------------------+------+-----------+-----------+----------+
Name: default-c1-lb
Status: RUNNING
Type: container
Architecture: x86_64
Location: incus0
PID: 12459
Created: 2025/01/19 16:43 UTC
Last Used: 2025/01/19 16:43 UTC
Started: 2025/01/19 16:43 UTC

Resources:
  Processes: 19
  CPU usage:
    CPU usage (in seconds): 0
  Memory usage:
    Memory (current): 94.92MiB
  Network usage:
    eth0:
      Type: broadcast
      State: UP
      MAC address: bc:24:11:6b:06:06
      MTU: 1500
      Bytes received: 166.21kB
      Bytes sent: 8.25kB
      Packets received: 1378
      Packets sent: 83
      IP addresses:
        inet:  192.168.0.161/22 (global)
        inet6: fe80::be24:11ff:fe6b:606/64 (link)
    lo:
      Type: loopback
      State: UP
      MTU: 65536
      Bytes received: 0B
      Bytes sent: 0B
      Packets received: 0
      Packets sent: 0
      IP addresses:
        inet:  127.0.0.1/8 (local)
        inet6: ::1/128 (local)

root@default-c1-lb:~# ps -ax
    PID TTY      STAT   TIME COMMAND
      1 ?        Ss     0:00 /sbin/init
    123 ?        Ss     0:00 /usr/lib/systemd/systemd-journald
    176 ?        Ss     0:00 /usr/lib/systemd/systemd-udevd
    186 ?        Ss     0:00 /usr/lib/systemd/systemd-networkd
    188 ?        Ss     0:00 /usr/lib/systemd/systemd-resolved
    195 ?        Ss     0:00 /usr/sbin/cron -f -P
    196 ?        Ss     0:00 @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
    200 ?        Ss     0:00 /usr/lib/systemd/systemd-logind
    212 pts/0    Ss+    0:00 /sbin/agetty -o -p -- \u --noclear --keep-baud - 115200,38400,9600 vt220
    238 ?        Ssl    0:00 /usr/sbin/rsyslogd -n -iNONE
    273 ?        Ss     0:00 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
    275 ?        Sl     0:00 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
    308 pts/1    Ss     0:00 su -l
    311 ?        Ss     0:00 /usr/lib/systemd/systemd --user
    312 ?        S      0:00 (sd-pam)
    319 pts/1    S      0:00 -bash
    328 pts/1    R+     0:00 ps -ax
root@default-c1-lb:~# cat /etc/haproxy/haproxy.cfg
global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http
root@default-c1-lb:~# 

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: LXCCluster
status:
  conditions:
    - lastTransitionTime: '2025-01-19T16:39:09Z'
      message: 1 of 2 completed
      reason: LoadBalancerProvisioningFailed
      severity: Warning
      status: 'False'
      type: Ready
    - lastTransitionTime: '2025-01-19T16:38:09Z'
      status: 'True'
      type: KubeadmProfileAvailable
    - lastTransitionTime: '2025-01-19T16:39:09Z'
      message: >-
        failed to get loadbalancer instance address: timed out waiting for
        instance address: context deadline exceeded
      reason: LoadBalancerProvisioningFailed
      severity: Warning
      status: 'False'
      type: LoadBalancerAvailable
  v1beta2:
    conditions:
      - lastTransitionTime: '2025-01-19T16:38:09Z'
        message: ''
        observedGeneration: 1
        reason: NotPaused
        status: 'False'
        type: Paused
spec:
  loadBalancer:
    instanceSpec:
      flavor: ''
      profiles:
        - default
    type: lxc
  secretRef:
    name: lxc-secret

@neoaggelos

@wirwolf if you don't mind, let's take this to https://github.com/neoaggelos/cluster-api-provider-lxc! Super excited to have a v0.1.0 out. I would appreciate it if you could create bug reports for these. But in general:

  • re: instances on incus0 and incus1 not communicating, is 10.117.223.1/24 a local bridge on both nodes? If so, then each node has a separate local bridge and cross-node traffic will not work. You either need OVN for cross-node traffic, or to configure bridges/macvlan (a quick check is sketched below). I am working on adding documentation on this subject, which will point to the upstream Incus docs for more.

  • interesting that there is no hostname on the eth0 interface. What type of network are you using?

If you can create a GitHub issue for each one in https://github.com/neoaggelos/cluster-api-provider-lxc/issues, that would be ideal.
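A quick way to check the first point (a sketch using the stock incus CLI; incusbr0 is the default bridge name, substitute whatever network your instances are attached to):

incus network list             # overview of all networks and their types
incus network show incusbr0    # 'type: bridge' is node-local; 'type: ovn' spans the cluster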
