```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: auto-vpa-creation
  annotations:
    policies.kyverno.io/title: Add default VPA
    policies.kyverno.io/category: Cost Optimization
    policies.kyverno.io/subject: Vertical Pod Autoscaler
    policies.kyverno.io/description: >-
      This policy creates a Vertical Pod Autoscaler for each new workload
      unless it already has one or is using a Horizontal Pod Autoscaler.
spec:
  validationFailureAction: Enforce
  background: true
  generateExistingOnPolicyUpdate: true
  rules:
  - name: create-default-vpa-one-container
    match:
      any:
      - resources:
          kinds:
          - DaemonSet
          - Deployment
          - StatefulSet
    context:
    - name: existingHPA
      apiCall:
        urlPath: '/apis/autoscaling/v2/namespaces/{{request.namespace}}/horizontalpodautoscalers'
        jmesPath: "items[].spec.scaleTargetRef.name"
    - name: existingVPA
      apiCall:
        urlPath: "/apis/autoscaling.k8s.io/v1/namespaces/{{request.namespace}}/verticalpodautoscalers"
        jmesPath: "items[].spec.targetRef.name"
    - name: autoVPACount
      apiCall:
        urlPath: '/apis/autoscaling.k8s.io/v1/namespaces/{{request.namespace}}/verticalpodautoscalers'
        jmesPath: items[?metadata.labels."auto-vpa"] | [?spec.targetRef.name=='{{request.object.metadata.name}}'] | length(@)
    - name: totalContainers
      variable:
        value: '{{ request.object.spec.template.spec.containers }}'
        jmesPath: 'length(@)'
    preconditions:
      all:
      - key: '{{request.operation}}'
        operator: NotEquals
        value: DELETE
      - key: '{{request.object.metadata.name}}'
        operator: AllNotIn
        value: '{{existingHPA}}'
      - key: '{{totalContainers}}'
        operator: Equals
        value: "1"
      # Make sure there are no existing VPAs for this object
      # UNLESS there is an auto VPA (then it's ok to update it).
      any:
      - key: '{{request.object.metadata.name}}'
        operator: AllNotIn
        value: '{{existingVPA}}'
      - key: '{{ autoVPACount }}'
        operator: Equals
        value: 1
    exclude:
      any:
      - resources:
          selector:
            matchLabels:
              auto-vpa/create: "false"
      - resources:
          namespaces:
          - kube-system
      - resources:
          namespaceSelector:
            matchExpressions:
            - key: "auto-vpa/create"
              operator: In
              values:
              - "false"
    generate:
      synchronize: true
      apiVersion: autoscaling.k8s.io/v1
      kind: VerticalPodAutoscaler
      name: '{{request.object.metadata.name}}-auto-vpa'
      namespace: '{{request.object.metadata.namespace}}'
      data:
        metadata:
          labels:
            auto-vpa: "true"
          ownerReferences:
          - apiVersion: apps/v1
            kind: '{{request.object.kind}}'
            name: '{{request.object.metadata.name}}'
            uid: '{{request.object.metadata.uid}}'
        spec:
          targetRef:
            apiVersion: "apps/v1"
            kind: '{{request.object.kind}}'
            name: '{{request.object.metadata.name}}'
          updatePolicy:
            updateMode: "Auto"
          resourcePolicy:
            containerPolicies:
            - containerName: "*"
              minAllowed:
                cpu: 10m
                memory: 32Mi
              maxAllowed:
                cpu: '{{request.object.spec.template.spec.containers[0].resources.requests.cpu}}'
                memory: '{{request.object.spec.template.spec.containers[0].resources.requests.memory}}'
              controlledResources: ["cpu", "memory"]
              controlledValues: "RequestsOnly"
  - name: create-default-vpa-multiple-containers
    match:
      any:
      - resources:
          kinds:
          - DaemonSet
          - Deployment
          - StatefulSet
    context:
    - name: existingHPA
      apiCall:
        urlPath: "/apis/autoscaling/v2/namespaces/{{request.namespace}}/horizontalpodautoscalers"
        jmesPath: "items[].spec.scaleTargetRef.name"
    - name: existingVPA
      apiCall:
        urlPath: "/apis/autoscaling.k8s.io/v1/namespaces/{{request.namespace}}/verticalpodautoscalers"
        jmesPath: "items[].spec.targetRef.name"
    - name: autoVPACount
      apiCall:
        urlPath: '/apis/autoscaling.k8s.io/v1/namespaces/{{request.namespace}}/verticalpodautoscalers'
        jmesPath: items[?metadata.labels."auto-vpa"] | [?spec.targetRef.name=='{{request.object.metadata.name}}'] | length(@)
    - name: totalContainers
      variable:
        value: '{{request.object.spec.template.spec.containers}}'
        jmesPath: 'length(@)'
    preconditions:
      all:
      - key: '{{request.operation}}'
        operator: NotEquals
        value: DELETE
      - key: '{{request.object.metadata.name}}'
        operator: AllNotIn
        value: '{{existingHPA}}'
      - key: '{{totalContainers}}'
        operator: NotEquals
        value: "1"
      # Make sure there are no existing VPAs for this object
      # UNLESS there is an auto VPA (then it's ok to update it).
      any:
      - key: '{{request.object.metadata.name}}'
        operator: AllNotIn
        value: '{{existingVPA}}'
      - key: '{{ autoVPACount }}'
        operator: Equals
        value: 1
    exclude:
      any:
      - resources:
          selector:
            matchLabels:
              auto-vpa/create: "false"
      - resources:
          namespaces:
          - kube-system
      - resources:
          namespaceSelector:
            matchExpressions:
            - key: "auto-vpa/create"
              operator: In
              values:
              - "false"
    generate:
      synchronize: true
      apiVersion: autoscaling.k8s.io/v1
      kind: VerticalPodAutoscaler
      name: '{{request.object.metadata.name}}-auto-vpa'
      namespace: '{{request.object.metadata.namespace}}'
      data:
        metadata:
          labels:
            auto-vpa: "true"
          ownerReferences:
          - apiVersion: apps/v1
            kind: '{{request.object.kind}}'
            name: '{{request.object.metadata.name}}'
            uid: '{{request.object.metadata.uid}}'
        spec:
          targetRef:
            apiVersion: "apps/v1"
            kind: '{{request.object.kind}}'
            name: '{{request.object.metadata.name}}'
          updatePolicy:
            updateMode: "Auto"
          resourcePolicy:
            containerPolicies:
            - containerName: "*"
              minAllowed:
                cpu: 10m
                memory: 32Mi
              controlledResources: ["cpu", "memory"]
              controlledValues: "RequestsOnly"
```
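For illustration, with this policy installed, a single-container Deployment named `my-app` (a hypothetical example) requesting 200m CPU and 256Mi memory would get a generated VPA roughly like the one below (ownerReferences omitted for brevity):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-auto-vpa
  labels:
    auto-vpa: "true"
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
    - containerName: "*"
      minAllowed:
        cpu: 10m
        memory: 32Mi
      maxAllowed:
        cpu: 200m       # copied from the workload's own request by the policy
        memory: 256Mi
      controlledResources: ["cpu", "memory"]
      controlledValues: "RequestsOnly"
```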
Hi @kingdonb! Good point about requests. We have another policy in our clusters that rejects workloads without requests and limits (something similar to this: https://kyverno.io/policies/best-practices/require-pod-requests-limits/require-pod-requests-limits/), I guess this is why I didn't run into this issue.
I think this policy fails for pods without requests because of this part:
```yaml
maxAllowed:
  cpu: '{{request.object.spec.template.spec.containers[0].resources.requests.cpu}}'
  memory: '{{request.object.spec.template.spec.containers[0].resources.requests.memory}}'
```
I can think of 2 possible solutions:
- Remove this block completely and not cap the maximum allowed resources at all.
- Replace the values with constants (a rough sketch of this option is below).
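A minimal sketch of the second option, with the ceilings chosen purely as placeholders rather than values taken from the policy above:

```yaml
resourcePolicy:
  containerPolicies:
  - containerName: "*"
    minAllowed:
      cpu: 10m
      memory: 32Mi
    maxAllowed:
      cpu: "1"      # placeholder ceiling, tune per cluster
      memory: 1Gi   # placeholder ceiling, tune per cluster
    controlledResources: ["cpu", "memory"]
    controlledValues: "RequestsOnly"
```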
Thanks! I found that some operators just don't let you set requests and limits on the StatefulSets they create (maybe I should file it as a bug on prometheus-operator!). Anyway, I don't know if that background problem is a real issue. I have `background: true` now and the default request policy also installed, and for a first-time user learning how a VPA in auto mode works, I think it's working fine. Cheers!
I plugged your policy into my cluster and found it errored out on every resource that was missing a request. I also found that VPAs balk at recommending for StatefulSets that are managed by operators.
So I skipped StatefulSet, since all of my STS are managed by the Prometheus Stack chart, which uses Prometheus Operator... Then I wrote a separate policy to impose requests as a patch if they were missing. I used `background: true` to first impose resource requests on every pod in the cluster, then backed it off to `background: false` because I wasn't sure if setting the request via Kyverno was interfering with the VPA. It seemed to be working after that!

Did you find a good way to guarantee every pod resource has a request? Or are you still using this policy?
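A rough sketch of that kind of patch policy, with the name and default values purely illustrative (not the exact policy described above):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-requests   # illustrative name
spec:
  rules:
  - name: add-default-requests
    match:
      any:
      - resources:
          kinds:
          - Pod
    mutate:
      patchStrategicMerge:
        spec:
          containers:
          # match every container; the +() anchor adds a field only if it is absent
          - (name): "*"
            resources:
              requests:
                +(cpu): 100m       # illustrative default
                +(memory): 128Mi   # illustrative default
```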
(Thanks for sharing what you did as a gist!)