The idea is to use a Placement and a Policy to distribute a resource to managed cluster namespaces on the hub cluster, not to the actual managed clusters. This can be accomplished by using two `range` loops in `object-templates-raw` in the Policy: one over all the PlacementDecisions, and one over all the clusters in each decision. For example, the Policy would be:
```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: jkuli-policy-test-slack
  namespace: open-cluster-management-global-set
spec:
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-jk-cm-slack
        spec:
          object-templates-raw: |
            {{ range $placedec := (lookup "cluster.open-cluster-management.io/v1beta1" "PlacementDecision" "open-cluster-management-global-set" "" "cluster.open-cluster-management.io/placement=configmap-placement").items }}
            {{ range $clustdec := $placedec.status.decisions }}
            - complianceType: musthave
              objectDefinition:
                kind: ConfigMap
                apiVersion: v1
                metadata:
                  name: "jk-slack"
                  namespace: {{ $clustdec.clusterName }}
                data:
                  foo: bar
                  fizz: buzz
            {{ end }}
            {{ end }}
          pruneObjectBehavior: None
          remediationAction: enforce
          severity: low
```
All that would need to be added is the separate Placement `configmap-placement`, and a Placement and PlacementBinding to distribute the policy to the hub cluster.
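For illustration, those supporting resources might look something like the following. All names, the cluster set, and the label selectors here are assumptions; the lookup in the policy above only requires that the first Placement be named `configmap-placement` in `open-cluster-management-global-set`:

```yaml
# Placement selecting the managed clusters whose hub namespaces should
# receive the ConfigMap (the selector is an assumption -- adjust as needed).
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: configmap-placement
  namespace: open-cluster-management-global-set
spec:
  clusterSets:
    - global
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchLabels:
            environment: dev
---
# Placement selecting only the hub itself, so the Policy is delivered there.
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: hub-placement
  namespace: open-cluster-management-global-set
spec:
  clusterSets:
    - global
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchLabels:
            local-cluster: "true"
---
# Binds the Policy to the hub-only Placement.
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: jkuli-policy-test-slack-pb
  namespace: open-cluster-management-global-set
placementRef:
  name: hub-placement
  apiGroup: cluster.open-cluster-management.io
  kind: Placement
subjects:
  - name: jkuli-policy-test-slack
    apiGroup: policy.open-cluster-management.io
    kind: Policy
```

The `local-cluster: "true"` selector relies on the label that is typically set on the hub's own ManagedCluster; verify it exists in your environment.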
The indentation of the resource inside the policy can become difficult to manage. We can work around that using yq and a simple script to wrap everything together.
With a Kubernetes YAML file to wrap called `configmap.yaml`:

```shell
./wrap.sh configmap.yaml > cm-wrapped-policy.yaml
```
The generated policy can then be applied to the hub cluster, along with the Placement describing which managed cluster namespaces should receive the ConfigMap.
The script can be configured through a variety of environment variables; see below.
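The wrap script itself is not reproduced here, but a minimal sketch of what it could contain follows. Everything in it, including the environment variable names and their defaults, is an assumption; for simplicity it uses only `sed` to indent the wrapped manifest rather than `yq`:

```shell
#!/usr/bin/env bash
# wrap.sh (hypothetical sketch): wraps the manifest given as $1 into a
# Policy whose object-templates-raw ranges over the PlacementDecisions
# of a Placement, as in the policy shown above.
#
# Assumed environment variables (not a documented interface):
#   POLICY_NAME      name of the generated Policy      (default: wrapped-policy)
#   POLICY_NAMESPACE namespace of the generated Policy (default: open-cluster-management-global-set)
#   PLACEMENT_NAME   Placement whose decisions to use  (default: configmap-placement)

wrap() {
  local manifest="$1"
  local name="${POLICY_NAME:-wrapped-policy}"
  local ns="${POLICY_NAMESPACE:-open-cluster-management-global-set}"
  local placement="${PLACEMENT_NAME:-configmap-placement}"

  # Indent the wrapped manifest by 16 spaces so it nests under
  # objectDefinition inside the object-templates-raw block scalar.
  local indented
  indented="$(sed 's/^/                /' "$manifest")"

  cat <<EOF
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: ${name}
  namespace: ${ns}
spec:
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: ${name}-config
        spec:
          object-templates-raw: |
            {{ range \$placedec := (lookup "cluster.open-cluster-management.io/v1beta1" "PlacementDecision" "${ns}" "" "cluster.open-cluster-management.io/placement=${placement}").items }}
            {{ range \$clustdec := \$placedec.status.decisions }}
            - complianceType: musthave
              objectDefinition:
${indented}
            {{ end }}
            {{ end }}
          pruneObjectBehavior: None
          remediationAction: enforce
          severity: low
EOF
}

# As a standalone wrap.sh, the script would end with:  wrap "$@"
```

Note that in this sketch the input manifest must itself carry the per-cluster namespace, e.g. `namespace: '{{ $clustdec.clusterName }}'` in its metadata; injecting that automatically is where a tool like `yq` would come in.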