Supported formats:

- `Group/Version/Kind`
- `Version/Kind`
- `Kind`

Subresources can be specified with a `.` or a `/` after the Kind:

- `Group/Version/Kind/Subresource`
- `Group/Version/Kind.Subresource`
- `Version/Kind/Subresource`
- `Version/Kind.Subresource`
- `Kind/Subresource`
- `Kind.Subresource`
Wildcard-supported formats for subresources are:

- `Group/*/Kind/Subresource` (a `.` can also be used for the subresource)
- `*/Kind/Subresource` (a `.` can also be used for the subresource)
- `*`
A few examples of specifying subresources are:

- `apps/v1/Deployment/scale`
- `Pod/exec`
- `v1/Pod.eviction`
- `v1/Pod/status`
Note: Some subresources can be specified with their own kind too, for example `PodExecOptions`, `NodeProxyOptions`, et cetera. But some subresource kinds are shared by multiple API resources; for example, the `autoscaling/v1/Scale` API resource is present in several API resource lists, and the same GVK is used for `deployments/scale`, `replicationcontrollers/scale`, and many more resources. So defining the kind in a policy as just `Scale` can match any of the above-mentioned resources. Specifying these kinds of subresources together with the parent resource is required to get an exact match, for example `Deployment/scale` or `ReplicationController/scale`.
Kind can be specified as `Kind`, `Kind/Subresource`, or `Kind.Subresource`.
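As an illustration, the different notations can be mixed in a policy's `match` block; a fragment sketch (not a complete policy), using the example formats above:

```yaml
match:
  any:
    - resources:
        kinds:
          - apps/v1/Deployment/scale   # Group/Version/Kind/Subresource
          - Pod/exec                   # Kind/Subresource
          - v1/Pod.eviction            # Version/Kind.Subresource
```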
Background scanning is not allowed for validating policies that match subresources in the policy definition; this is by design. The user has to explicitly turn off background scanning for policies matching on subresources by setting `background: false`, otherwise the policy won't be created.
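For instance, a minimal sketch of a validating policy matching the `Pod/exec` subresource with background scanning turned off (the policy name and message are illustrative):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: deny-pod-exec        # illustrative name
spec:
  background: false          # required when matching on subresources
  validationFailureAction: Enforce
  rules:
    - name: deny-pod-exec
      match:
        any:
          - resources:
              kinds:
                - Pod/exec
      validate:
        message: "Exec into Pods is not allowed."
        deny: {}             # a deny with no conditions blocks every matched request
```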
Running policies in `Audit` mode creates policy reports. Policy reports are created for some subresources but not for others: some subresources, like `PodExecOptions`, do not contain the `ObjectMeta` that is required for creating policy reports.
```go
type PodExecOptions struct {
	metav1.TypeMeta `json:",inline"`

	// Redirect the standard input stream of the pod for this call.
	// Defaults to false.
	// +optional
	Stdin bool `json:"stdin,omitempty" protobuf:"varint,1,opt,name=stdin"`

	// Redirect the standard output stream of the pod for this call.
	// +optional
	Stdout bool `json:"stdout,omitempty" protobuf:"varint,2,opt,name=stdout"`

	// Redirect the standard error stream of the pod for this call.
	// +optional
	Stderr bool `json:"stderr,omitempty" protobuf:"varint,3,opt,name=stderr"`

	// TTY if true indicates that a tty will be allocated for the exec call.
	// Defaults to false.
	// +optional
	TTY bool `json:"tty,omitempty" protobuf:"varint,4,opt,name=tty"`

	// Container in which to execute the command.
	// Defaults to only container if there is only one container in the pod.
	// +optional
	Container string `json:"container,omitempty" protobuf:"bytes,5,opt,name=container"`

	// Command is the remote command to execute. argv array. Not executed within a shell.
	Command []string `json:"command" protobuf:"bytes,6,rep,name=command"`
}
```
`metav1.ObjectMeta` is not present, so a policy report won't be created.
For testing subresources, the user has to provide Kyverno with the API resource definitions for both the subresource and the parent resource. This is only required when the `Kind/Subresource` notation is used in the policy. The user can modify the `values.yaml` file as follows:
```yaml
policies:
  - name: <policy1 name>
    resources:
      - name: <resource1 name>
        values:
          <variable1 in policy1>: <value>
          <variable2 in policy1>: <value>
      - name: <resource2 name>
        values:
          <variable1 in policy1>: <value>
          <variable2 in policy1>: <value>
  - name: <policy2 name>
    resources:
      - name: <resource1 name>
        values:
          <variable1 in policy2>: <value>
          <variable2 in policy2>: <value>
      - name: <resource2 name>
        values:
          <variable1 in policy2>: <value>
          <variable2 in policy2>: <value>
globalValues:
  <global variable1>: <value>
  <global variable2>: <value>
namespaceSelector:
  - name: <namespace1 name>
    labels:
      <label key>: <label value>
  - name: <namespace2 name>
    labels:
      <label key>: <label value>
subresources:
  - subresource:
      name: <name of subresource>
      kind: <kind of subresource>
      version: <version of subresource>
    parentResource:
      name: <name of parent resource>
      kind: <kind of parent resource>
      version: <version of parent resource>
```
This is only required when the subresource is specified in `Kind/Subresource` notation: specifying `PodExecOptions` doesn't require supplying the GVR of the subresource, but specifying it as `Pod/exec` does.
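As an illustration, a filled-in `subresources` entry for a policy using the `Pod/exec` notation could look like this sketch (values follow the core `v1` API, where the `pods/exec` subresource has kind `PodExecOptions`):

```yaml
subresources:
  - subresource:
      name: "pods/exec"
      kind: "PodExecOptions"
      version: "v1"
    parentResource:
      name: "pods"
      kind: "Pod"
      version: "v1"
```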
Previously, whenever `Pod` or `Service` was specified in a policy, the webhook was updated to also match `pods/ephemeralcontainers` and `services/status` respectively. This behavior can produce unexpected results and, now that subresources are supported, is no longer necessary. If a policy needs to match these resources, they have to be specified explicitly. Kyverno can also detect when you are matching on a subresource but have not included it in the `kinds` list. For example, the following policy mutates the `status` subresource but hasn't included it in the kinds:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: advertise-resource
spec:
  background: false
  rules:
    - name: advertise-resource
      match:
        any:
          - resources:
              kinds:
                - Node
      mutate:
        patchesJson6902: |-
          - op: add
            path: "/status/capacity/example.com~1dongle"
            value: "41"
```
Kyverno will create the policy but will emit the warning:

```
Warning: You are matching on status but not including the status subresource in the policy.
```

This detection is only supported for the `scale`, `status`, and `ephemeralcontainers` subresources.
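To clear the warning in the example above, the subresource can be listed explicitly in the rule's `kinds`; a sketch of the amended fragment:

```yaml
kinds:
  - Node
  - Node/status   # match the status subresource explicitly
```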