@neoaggelos
Last active January 19, 2025 22:58
lxd provider cluster-api
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: test
  namespace: default
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: MicroK8sControlPlane
    name: test-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
    kind: LXDCluster
    name: test
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: LXDCluster
metadata:
  name: test
  namespace: default
spec: {}
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: MicroK8sControlPlane
metadata:
  name: test-control-plane
  namespace: default
spec:
  controlPlaneConfig:
    clusterConfiguration:
      portCompatibilityRemap: true
    initConfiguration:
      IPinIP: true
      addons:
      - dns
      - ingress
      joinTokenTTLInSecs: 9000
  machineTemplate:
    infrastructureTemplate:
      apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
      kind: LXDMachineTemplate
      name: test-control-plane
  replicas: 1
  version: v1.25.0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: LXDMachineTemplate
metadata:
  name: test-control-plane
  namespace: default
spec:
  template:
    spec:
      imageAlias: u22
      instanceType: container
      profiles:
      - default
      - microk8s
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: test-md-0
  namespace: default
spec:
  clusterName: test
  replicas: 0
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: MicroK8sConfigTemplate
          name: test-md-0
      clusterName: test
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
        kind: LXDMachineTemplate
        name: test-md-0
      version: v1.25.0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: LXDMachineTemplate
metadata:
  name: test-md-0
  namespace: default
spec:
  template:
    spec:
      imageAlias: u22
      instanceType: container
      profiles:
      - default
      - microk8s
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: MicroK8sConfigTemplate
metadata:
  name: test-md-0
  namespace: default
spec:
  template:
    spec: {}
# configure LXD, and allow access over HTTPS.
# note the IP address (in this case, 10.0.3.181) and the trust password.
$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this node? [default=10.0.3.181]:
Are you joining an existing cluster? (yes/no) [default=no]:
What name should be used to identify this node in the cluster? [default=test-ovn]:
Setup password authentication on the cluster? (yes/no) [default=no]: yes
Trust password for new clients:
Again:
....
# copy the ubuntu 22.04 image locally and create the 'u22' alias for it
# (lxc image alias create expects a local fingerprint, so copy with --alias instead)
$ sudo lxc image copy ubuntu:22.04 local: --alias u22
# create profile
$ lxc profile create microk8s
$ curl https://raw.githubusercontent.com/ubuntu/microk8s/master/tests/lxc/microk8s.profile | lxc profile edit microk8s
# deploy LXD provider and configure access to server (replace IP and password)
$ microk8s kubectl apply -f provider.yaml
$ microk8s kubectl create configmap -n capl-system lxd-socket --from-literal=LXD_SERVER=https://10.0.3.181:8443 --from-literal=LXD_PASSWORD=password
# deploy cluster.yaml. initially, it has 1 control plane node and 0 workers
$ microk8s kubectl apply -f cluster.yaml
# wait for init node to come up, check with 'lxc list' and note its IP address
$ lxc list
+-------------------------------+---------+-------------------+------+-----------+-----------+
|             NAME              |  STATE  |       IPV4        | IPV6 |   TYPE    | SNAPSHOTS |
+-------------------------------+---------+-------------------+------+-----------+-----------+
| test-test-control-plane-pkstp | RUNNING | 10.0.0.187 (eth0) |      | CONTAINER | 0         |
+-------------------------------+---------+-------------------+------+-----------+-----------+
# after deployment, fix `test-kubeconfig` secret to unblock the control plane provider
$ microk8s kubectl edit cluster test # change control api endpoint to 10.0.0.187:6443
$ microk8s kubectl edit lxdcluster test # change control api endpoint to 10.0.0.187:6443
$ clusterctl get kubeconfig test > kubeconfig
$ vim kubeconfig # change 'https://TODO:12345' to 'https://10.0.0.187:6443'
$ cat kubeconfig | base64 -w0
$ microk8s kubectl edit secret test-kubeconfig # change value to the new base64 string
# cluster is now ready to scale. edit 'cluster.yaml', change 'replicas: 0' and re-apply to deploy worker nodes
apiVersion: v1
kind: Namespace
metadata:
  labels:
    control-plane: controller-manager
  name: capl-system
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.9.0
  creationTimestamp: null
  labels:
    cluster.x-k8s.io/provider: infrastructure-lxd
    cluster.x-k8s.io/v1beta1: v1alpha1
  name: lxdclusters.infrastructure.cluster.x-k8s.io
spec:
  group: infrastructure.cluster.x-k8s.io
  names:
    categories:
    - cluster-api
    kind: LXDCluster
    listKind: LXDClusterList
    plural: lxdclusters
    shortNames:
    - lc
    singular: lxdcluster
  scope: Namespaced
  versions:
  - additionalPrinterColumns:
    - description: Cluster to which this LXDCluster belongs
      jsonPath: .metadata.labels.cluster\.x-k8s\.io/cluster-name
      name: Cluster
      type: string
    - description: Cluster infrastructure is ready for LXD instances
      jsonPath: .status.ready
      name: Ready
      type: string
    - description: Time duration since creation of LXDCluster
      jsonPath: .metadata.creationTimestamp
      name: Age
      type: date
    name: v1alpha1
    schema:
      openAPIV3Schema:
        description: LXDCluster is the Schema for the lxdclusters API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: LXDClusterSpec defines the desired state of LXDCluster
            properties:
              controlPlaneEndpoint:
                description: ControlPlaneEndpoint represents the endpoint to communicate with the control plane.
                properties:
                  host:
                    description: The hostname on which the API server is serving.
                    type: string
                  port:
                    description: The port on which the API server is serving.
                    format: int32
                    type: integer
                required:
                - host
                - port
                type: object
            type: object
          status:
            description: LXDClusterStatus defines the observed state of LXDCluster
            properties:
              ready:
                description: Ready denotes that the LXD cluster (infrastructure) is ready.
                type: boolean
            required:
            - ready
            type: object
        type: object
    served: true
    storage: true
    subresources:
      status: {}
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.9.0
  creationTimestamp: null
  labels:
    cluster.x-k8s.io/provider: infrastructure-lxd
    cluster.x-k8s.io/v1beta1: v1alpha1
  name: lxdclustertemplates.infrastructure.cluster.x-k8s.io
spec:
  group: infrastructure.cluster.x-k8s.io
  names:
    kind: LXDClusterTemplate
    listKind: LXDClusterTemplateList
    plural: lxdclustertemplates
    singular: lxdclustertemplate
  scope: Namespaced
  versions:
  - name: v1alpha1
    schema:
      openAPIV3Schema:
        description: LXDClusterTemplate is the Schema for the lxdclustertemplates API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: LXDClusterTemplateSpec defines the desired state of LXDClusterTemplate
            properties:
              template:
                properties:
                  metadata:
                    description: 'Standard object''s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata'
                    properties:
                      annotations:
                        additionalProperties:
                          type: string
                        description: 'Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations'
                        type: object
                      labels:
                        additionalProperties:
                          type: string
                        description: 'Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels'
                        type: object
                    type: object
                  spec:
                    description: Spec is the specification of the desired behaviour of the cluster.
                    properties:
                      controlPlaneEndpoint:
                        description: ControlPlaneEndpoint represents the endpoint to communicate with the control plane.
                        properties:
                          host:
                            description: The hostname on which the API server is serving.
                            type: string
                          port:
                            description: The port on which the API server is serving.
                            format: int32
                            type: integer
                        required:
                        - host
                        - port
                        type: object
                    type: object
                required:
                - spec
                type: object
            required:
            - template
            type: object
        type: object
    served: true
    storage: true
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.9.0
  creationTimestamp: null
  labels:
    cluster.x-k8s.io/provider: infrastructure-lxd
    cluster.x-k8s.io/v1beta1: v1alpha1
  name: lxdmachines.infrastructure.cluster.x-k8s.io
spec:
  group: infrastructure.cluster.x-k8s.io
  names:
    categories:
    - cluster-api
    kind: LXDMachine
    listKind: LXDMachineList
    plural: lxdmachines
    shortNames:
    - lm
    singular: lxdmachine
  scope: Namespaced
  versions:
  - additionalPrinterColumns:
    - description: Cluster to which this LXDMachine belongs
      jsonPath: .metadata.labels.cluster\.x-k8s\.io/cluster-name
      name: Cluster
      type: string
    - description: LXD instance state
      jsonPath: .status.state
      name: State
      type: string
    - description: LXD instance ID
      jsonPath: .spec.providerID
      name: ProviderID
      type: string
    - description: Machine ready status
      jsonPath: .status.ready
      name: Ready
      type: string
    - description: Machine object which owns this LXDMachine
      jsonPath: .metadata.ownerReferences[?(@.kind=="Machine")].name
      name: Machine
      type: string
    - description: Time duration since creation of LXDMachine
      jsonPath: .metadata.creationTimestamp
      name: Age
      type: date
    name: v1alpha1
    schema:
      openAPIV3Schema:
        description: LXDMachine is the Schema for the lxdmachines API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: LXDMachineSpec defines the desired state of LXDMachine
            properties:
              imageAlias:
                description: Image is the image alias name to use.
                type: string
              instanceType:
                description: InstanceType is the instance type to create.
                enum:
                - container
                - virtual-machine
                type: string
              profiles:
                description: Profiles is a list of profiles to attach to the instance.
                items:
                  type: string
                type: array
              providerID:
                description: ProviderID is the container name in ProviderID format (lxd:///<containername>)
                type: string
            type: object
          status:
            description: LXDMachineStatus defines the observed state of LXDMachine
            properties:
              addresses:
                items:
                  description: NodeAddress contains information for the node's address.
                  properties:
                    address:
                      description: The node address.
                      type: string
                    type:
                      description: Node address type, one of Hostname, ExternalIP or InternalIP.
                      type: string
                  required:
                  - address
                  - type
                  type: object
                type: array
              ready:
                type: boolean
              state:
                type: string
            required:
            - addresses
            type: object
        type: object
    served: true
    storage: true
    subresources:
      status: {}
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.9.0
  creationTimestamp: null
  labels:
    cluster.x-k8s.io/provider: infrastructure-lxd
    cluster.x-k8s.io/v1beta1: v1alpha1
  name: lxdmachinetemplates.infrastructure.cluster.x-k8s.io
spec:
  group: infrastructure.cluster.x-k8s.io
  names:
    kind: LXDMachineTemplate
    listKind: LXDMachineTemplateList
    plural: lxdmachinetemplates
    singular: lxdmachinetemplate
  scope: Namespaced
  versions:
  - name: v1alpha1
    schema:
      openAPIV3Schema:
        description: LXDMachineTemplate is the Schema for the lxdmachinetemplates API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: LXDMachineTemplateSpec defines the desired state of LXDMachineTemplate
            properties:
              template:
                properties:
                  metadata:
                    description: 'Standard object''s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata'
                    properties:
                      annotations:
                        additionalProperties:
                          type: string
                        description: 'Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations'
                        type: object
                      labels:
                        additionalProperties:
                          type: string
                        description: 'Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels'
                        type: object
                    type: object
                  spec:
                    description: Spec is the specification of the desired behaviour of the machine.
                    properties:
                      imageAlias:
                        description: Image is the image alias name to use.
                        type: string
                      instanceType:
                        description: InstanceType is the instance type to create.
                        enum:
                        - container
                        - virtual-machine
                        type: string
                      profiles:
                        description: Profiles is a list of profiles to attach to the instance.
                        items:
                          type: string
                        type: array
                      providerID:
                        description: ProviderID is the container name in ProviderID format (lxd:///<containername>)
                        type: string
                    type: object
                required:
                - spec
                type: object
            required:
            - template
            type: object
        type: object
    served: true
    storage: true
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: capl-controller-manager
  namespace: capl-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: capl-leader-election-role
  namespace: capl-system
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: capl-manager-role
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - cluster.x-k8s.io
  resources:
  - clusters
  - clusters/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - cluster.x-k8s.io
  resources:
  - machines
  - machines/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - infrastructure.cluster.x-k8s.io
  resources:
  - lxdclusters
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - infrastructure.cluster.x-k8s.io
  resources:
  - lxdclusters/finalizers
  verbs:
  - update
- apiGroups:
  - infrastructure.cluster.x-k8s.io
  resources:
  - lxdclusters/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - infrastructure.cluster.x-k8s.io
  resources:
  - lxdmachines
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - infrastructure.cluster.x-k8s.io
  resources:
  - lxdmachines/finalizers
  verbs:
  - update
- apiGroups:
  - infrastructure.cluster.x-k8s.io
  resources:
  - lxdmachines/status
  verbs:
  - get
  - patch
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: capl-metrics-reader
rules:
- nonResourceURLs:
  - /metrics
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: capl-proxy-role
rules:
- apiGroups:
  - authentication.k8s.io
  resources:
  - tokenreviews
  verbs:
  - create
- apiGroups:
  - authorization.k8s.io
  resources:
  - subjectaccessreviews
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: capl-leader-election-rolebinding
  namespace: capl-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: capl-leader-election-role
subjects:
- kind: ServiceAccount
  name: capl-controller-manager
  namespace: capl-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: capl-manager-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: capl-manager-role
subjects:
- kind: ServiceAccount
  name: capl-controller-manager
  namespace: capl-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: capl-proxy-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: capl-proxy-role
subjects:
- kind: ServiceAccount
  name: capl-controller-manager
  namespace: capl-system
---
apiVersion: v1
data:
  controller_manager_config.yaml: "apiVersion: controller-runtime.sigs.k8s.io/v1alpha1\nkind: ControllerManagerConfig\nhealth:\n healthProbeBindAddress: :8081\nmetrics:\n bindAddress: 127.0.0.1:8080\nwebhook:\n port: 9443\nleaderElection:\n leaderElect: true\n resourceName: 349154e5.cluster.x-k8s.io\n# leaderElectionReleaseOnCancel defines if the leader should step down volume \n# when the Manager ends. This requires the binary to immediately end when the\n# Manager is stopped, otherwise, this setting is unsafe. Setting this significantly\n# speeds up voluntary leader transitions as the new leader don't have to wait\n# LeaseDuration time first.\n# In the default scaffold provided, the program ends immediately after \n# the manager stops, so would be fine to enable this option. However, \n# if you are doing or is intended to do any operation such as perform cleanups \n# after the manager stops then its usage might be unsafe.\n# leaderElectionReleaseOnCancel: true\n"
kind: ConfigMap
metadata:
  name: capl-manager-config
  namespace: capl-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    control-plane: controller-manager
  name: capl-controller-manager-metrics-service
  namespace: capl-system
spec:
  ports:
  - name: https
    port: 8443
    protocol: TCP
    targetPort: https
  selector:
    control-plane: controller-manager
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    control-plane: controller-manager
  name: capl-controller-manager
  namespace: capl-system
spec:
  replicas: 1
  selector:
    matchLabels:
      control-plane: controller-manager
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/default-container: manager
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - args:
        - --secure-listen-address=0.0.0.0:8443
        - --upstream=http://127.0.0.1:8080/
        - --logtostderr=true
        - --v=0
        image: gcr.io/kubebuilder/kube-rbac-proxy:v0.11.0
        name: kube-rbac-proxy
        ports:
        - containerPort: 8443
          name: https
          protocol: TCP
        resources:
          limits:
            cpu: 500m
            memory: 128Mi
          requests:
            cpu: 5m
            memory: 64Mi
        securityContext:
          allowPrivilegeEscalation: false
      - args:
        - --health-probe-bind-address=:8081
        - --metrics-bind-address=127.0.0.1:8080
        - --leader-elect
        command:
        - /manager
        envFrom:
        - configMapRef:
            name: lxd-socket
        image: neoaggelos/capi-lxd:dev1
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8081
          initialDelaySeconds: 15
          periodSeconds: 20
        name: manager
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8081
          initialDelaySeconds: 5
          periodSeconds: 10
        resources:
          limits:
            cpu: 500m
            memory: 128Mi
          requests:
            cpu: 10m
            memory: 64Mi
        securityContext:
          allowPrivilegeEscalation: false
      securityContext:
        runAsNonRoot: true
      serviceAccountName: capl-controller-manager
      terminationGracePeriodSeconds: 10

neoaggelos commented Jan 14, 2025

@wirwolf here's a v0.0.1-prealpha.1 to experiment with while I'm putting together docs and getting ready to cut a first release. I would appreciate any comments/feedback.

https://gist.github.com/neoaggelos/f6bdef9e092219293dd1cdea4dab2151


wirwolf commented Jan 14, 2025

Thanks. I'll try this over the weekend.


wirwolf commented Jan 18, 2025

I tested it and everything works, but I cannot install the kube cluster on Incus with 2 nodes.

+------------------------+---------+-----------------------+------+-----------+-----------+----------+
|          NAME          |  STATE  |         IPV4          | IPV6 |   TYPE    | SNAPSHOTS | LOCATION |
+------------------------+---------+-----------------------+------+-----------+-----------+----------+
| c1-control-plane-9f59v | RUNNING | 10.117.223.163 (eth0) |      | CONTAINER | 0         | incus1   |
+------------------------+---------+-----------------------+------+-----------+-----------+----------+
| default-c1-lb          | RUNNING | 10.117.223.227 (eth0) |      | CONTAINER | 0         | incus0   |
+------------------------+---------+-----------------------+------+-----------+-----------+----------+
NAME   CLUSTERCLASS   PHASE         AGE     VERSION
c1                    Provisioned   5m21s
NAME   CLUSTER   LOAD BALANCER    READY   AGE
c1     c1        10.117.223.227   true    5m21s
NAME                     CLUSTER   NODENAME   PROVIDERID   PHASE          AGE     VERSION
c1-control-plane-9f59v   c1                                Provisioning   5m12s   v1.32.0
c1-md-0-7fb9p-s2wtj      c1                                Pending        5m5s    v1.32.0
c1-md-0-7fb9p-xrzxw      c1                                Pending        5m5s    v1.32.0
NAME                     CLUSTER   MACHINE                  PROVIDERID   READY   AGE
c1-control-plane-9f59v   c1        c1-control-plane-9f59v                        5m12s
c1-md-0-7fb9p-s2wtj      c1        c1-md-0-7fb9p-s2wtj                           5m6s
c1-md-0-7fb9p-xrzxw      c1        c1-md-0-7fb9p-xrzxw                           5m5s


But this could be a problem in my network configuration.


wirwolf commented Jan 19, 2025

Also, I tried to set up a kube cluster on my local network. The operator cannot recognize the local 192.168 subnet.

I0119 16:47:27.665838       1 lxc_util.go:45] "Waiting for instance address" controller="lxccluster" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="LXCCluster" LXCCluster="default/c1" namespace="default" name="c1" reconcileID="f6169598-9a4e-4f22-9f2e-acf507aa0658" Cluster="default/c1" profileName="cluster-api-default-c1" instance="default-c1-lb" image={"name":"haproxy","fingerprint":"","server":"https://d14dnvi2l3tc5t.cloudfront.net","protocol":"simplestreams"}
(the same "Waiting for instance address" message repeats every second)
I0119 16:47:43.800077       1 lxc_util.go:45] "Waiting for instance address" controller="lxccluster" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="LXCCluster" LXCCluster="default/c1" namespace="default" name="c1" reconcileID="f6169598-9a4e-4f22-9f2e-acf507aa0658" Cluster="default/c1" profileName="cluster-api-default-c1" instance="default-c1-lb" image={"name":"haproxy","fingerprint":"","server":"https://d14dnvi2l3tc5t.cloudfront.net","protocol":"simplestreams"}

But the container starts and works normally:

+---------------+---------+----------------------+------+-----------+-----------+----------+
|     NAME      |  STATE  |         IPV4         | IPV6 |   TYPE    | SNAPSHOTS | LOCATION |
+---------------+---------+----------------------+------+-----------+-----------+----------+
| default-c1-lb | RUNNING | 192.168.0.161 (eth0) |      | CONTAINER | 0         | incus0   |
+---------------+---------+----------------------+------+-----------+-----------+----------+
Name: default-c1-lb
Status: RUNNING
Type: container
Architecture: x86_64
Location: incus0
PID: 12459
Created: 2025/01/19 16:43 UTC
Last Used: 2025/01/19 16:43 UTC
Started: 2025/01/19 16:43 UTC

Resources:
  Processes: 19
  CPU usage:
    CPU usage (in seconds): 0
  Memory usage:
    Memory (current): 94.92MiB
  Network usage:
    eth0:
      Type: broadcast
      State: UP
      MAC address: bc:24:11:6b:06:06
      MTU: 1500
      Bytes received: 166.21kB
      Bytes sent: 8.25kB
      Packets received: 1378
      Packets sent: 83
      IP addresses:
        inet:  192.168.0.161/22 (global)
        inet6: fe80::be24:11ff:fe6b:606/64 (link)
    lo:
      Type: loopback
      State: UP
      MTU: 65536
      Bytes received: 0B
      Bytes sent: 0B
      Packets received: 0
      Packets sent: 0
      IP addresses:
        inet:  127.0.0.1/8 (local)
        inet6: ::1/128 (local)

root@default-c1-lb:~# ps -ax
    PID TTY      STAT   TIME COMMAND
      1 ?        Ss     0:00 /sbin/init
    123 ?        Ss     0:00 /usr/lib/systemd/systemd-journald
    176 ?        Ss     0:00 /usr/lib/systemd/systemd-udevd
    186 ?        Ss     0:00 /usr/lib/systemd/systemd-networkd
    188 ?        Ss     0:00 /usr/lib/systemd/systemd-resolved
    195 ?        Ss     0:00 /usr/sbin/cron -f -P
    196 ?        Ss     0:00 @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
    200 ?        Ss     0:00 /usr/lib/systemd/systemd-logind
    212 pts/0    Ss+    0:00 /sbin/agetty -o -p -- \u --noclear --keep-baud - 115200,38400,9600 vt220
    238 ?        Ssl    0:00 /usr/sbin/rsyslogd -n -iNONE
    273 ?        Ss     0:00 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
    275 ?        Sl     0:00 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
    308 pts/1    Ss     0:00 su -l
    311 ?        Ss     0:00 /usr/lib/systemd/systemd --user
    312 ?        S      0:00 (sd-pam)
    319 pts/1    S      0:00 -bash
    328 pts/1    R+     0:00 ps -ax
root@default-c1-lb:~# cat /etc/haproxy/haproxy.cfg
global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http
root@default-c1-lb:~# 

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: LXCCluster
status:
  conditions:
    - lastTransitionTime: '2025-01-19T16:39:09Z'
      message: 1 of 2 completed
      reason: LoadBalancerProvisioningFailed
      severity: Warning
      status: 'False'
      type: Ready
    - lastTransitionTime: '2025-01-19T16:38:09Z'
      status: 'True'
      type: KubeadmProfileAvailable
    - lastTransitionTime: '2025-01-19T16:39:09Z'
      message: >-
        failed to get loadbalancer instance address: timed out waiting for
        instance address: context deadline exceeded
      reason: LoadBalancerProvisioningFailed
      severity: Warning
      status: 'False'
      type: LoadBalancerAvailable
  v1beta2:
    conditions:
      - lastTransitionTime: '2025-01-19T16:38:09Z'
        message: ''
        observedGeneration: 1
        reason: NotPaused
        status: 'False'
        type: Paused
spec:
  loadBalancer:
    instanceSpec:
      flavor: ''
      profiles:
        - default
    type: lxc
  secretRef:
    name: lxc-secret

@neoaggelos (Author)

@wirwolf if you don't mind, let's take this to https://github.com/neoaggelos/cluster-api-provider-lxc! Super excited to have a v0.1.0 out, and I would appreciate it if you could create bug reports for these. But in general:

  • Re: instances on incus0 and incus1 not communicating — is 10.117.223.1/24 a local bridge on both nodes? If so, each node has its own separate local bridge, and cross-node traffic will not work. You either need OVN for cross-node traffic, or you need to configure bridged/macvlan networking instead. I'm working on adding documentation for this subject, which will also point to the upstream Incus docs for more detail.

  • Interesting that there is no hostname on the eth0 interface. What type of network are you using?

If you can create a GitHub issue for each in https://github.com/neoaggelos/cluster-api-provider-lxc/issues, that would be ideal.
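
For the cross-node case above, a minimal sketch of what a macvlan-based profile could look like (the device and parent interface names here are assumptions, not taken from this setup; adjust `parent` to each host's actual uplink NIC):

```yaml
# Hypothetical Incus profile fragment: attach instances directly to the
# host's physical network via macvlan, so containers on incus0 and incus1
# end up on the same L2 segment instead of separate per-node bridges.
# "enp5s0" is an assumed uplink interface name; replace it per node.
devices:
  eth0:
    name: eth0
    type: nic
    nictype: macvlan
    parent: enp5s0
```

A profile like this would then be referenced from the machine template's `profiles` list, the same way `default` and `microk8s` are above. With OVN instead, a single `--type=ovn` network spans all cluster members, which avoids the per-node bridge problem entirely.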
