= Configuring the audit log policy
You can control the amount of information that is logged to the API server audit logs by choosing the audit log policy profile to use.
= Viewing audit logs
{product-title} auditing provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system.
- Forwarding logs to third party systems
= Installing the {cert-manager-operator}
The {cert-manager-operator} is not installed in {product-title} by default. You can install the {cert-manager-operator} by using the web console.
- Adding Operators to a cluster
= Managing certificates with an ACME issuer
The {cert-manager-operator} supports using ACME CA servers, such as Let’s Encrypt, to issue certificates.
= Configuring the egress proxy for the {cert-manager-operator}
If a cluster-wide egress proxy is configured in {product-title}, Operator Lifecycle Manager (OLM) automatically configures Operators that it manages with the cluster-wide proxy. OLM automatically updates all of the Operator's deployments with the `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY` environment variables.
You can inject any CA certificates that are required for proxying HTTPS connections into the {cert-manager-operator}.
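As a hedged illustration only (the config map name and the namespace are assumptions, not taken from this document), the usual first step is to create an empty config map in the Operator's namespace that is labeled for trust-bundle injection, so that the cluster-wide CA bundle is written into it and can then be mounted into the Operator's deployment:
apiVersion: v1
kind: ConfigMap
metadata:
  name: trusted-ca                                         # assumed config map name
  namespace: cert-manager-operator                         # assumed Operator namespace
  labels:
    config.openshift.io/inject-trusted-cabundle: "true"    # requests injection of the cluster-wide trusted CA bundle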
- Configuring proxy support in Operator Lifecycle Manager
= {cert-manager-operator} release notes
The {cert-manager-operator} is a cluster-wide service that provides application certificate lifecycle management.
These release notes track the development of {cert-manager-operator}.
For more information, see About the {cert-manager-operator}.
Issued: 2023-03-23
The following advisory is available for the {cert-manager-operator} 1.10.2:
For more information, see the cert-manager project release notes for v1.10.
[IMPORTANT]
====
If you used the Technology Preview version of the {cert-manager-operator}, you must uninstall it and remove all related resources for the Technology Preview version before installing this version of the {cert-manager-operator}. For more information, see Uninstalling the {cert-manager-operator}.
====
This is the general availability (GA) release of the {cert-manager-operator}.
- The following issuer types are supported:
  - Automated Certificate Management Environment (ACME)
  - Certificate authority (CA)
  - Self-signed
- The following ACME challenge types are supported:
  - DNS-01
  - HTTP-01
- The following DNS-01 providers for ACME issuers are supported:
  - Amazon Route 53
  - Azure DNS
  - Google Cloud DNS
- The {cert-manager-operator} now supports injecting custom CA certificates and propagating cluster-wide egress proxy environment variables.
- Previously, the `unsupportedConfigOverrides` field replaced user-provided arguments instead of appending them. Now, the `unsupportedConfigOverrides` field properly appends user-provided arguments. (CM-23)
[WARNING]
====
Using the `unsupportedConfigOverrides` section to modify the configuration of an Operator is unsupported and might block cluster upgrades.
====
- Previously, the {cert-manager-operator} was installed as a cluster Operator. With this release, the {cert-manager-operator} is now properly installed as an OLM Operator. (CM-35)
- Using `Route` objects is not fully supported. Currently, to use the {cert-manager-operator} with `Route` objects, users must create `Ingress` objects, which are translated to `Route` objects by the Ingress-to-Route Controller. (CM-16)
- The {cert-manager-operator} does not support using Azure Active Directory (Azure AD) pod identities to assign a managed identity to a pod. As a workaround, you can use a service principal to assign a managed identity. (OCPBUGS-8665)
- The {cert-manager-operator} does not support using Google workload identity federation. (OCPBUGS-9998)
- When uninstalling the {cert-manager-operator}, if you select the *Delete all operand instances for this operator* checkbox in the {product-title} web console, the Operator is not uninstalled properly. As a workaround, do not select this checkbox when uninstalling the {cert-manager-operator}. (OCPBUGS-9960)
= Uninstalling the {cert-manager-operator}
You can remove the {cert-manager-operator} from {product-title} by uninstalling the Operator and removing its related resources.
= About the {cert-manager-operator}
The {cert-manager-operator} is a cluster-wide service that provides application certificate lifecycle management. The {cert-manager-operator} allows you to integrate with external certificate authorities and provides certificate provisioning, renewal, and retirement.
- cert-manager project documentation
= Aggregated API client certificates
Aggregated API client certificates are used to authenticate the KubeAPIServer when connecting to the Aggregated API Servers.
This CA is valid for 30 days.
The managed client certificates are valid for 30 days.
CA and client certificates are rotated automatically through the use of controllers.
You cannot customize the aggregated API server certificates.
= Bootstrap certificates
The kubelet, in {product-title} 4 and later, uses the bootstrap certificate located in `/etc/kubernetes/kubeconfig` to initially bootstrap. This is followed by the bootstrap initialization process and the authorization of the kubelet to create a CSR.
In that process, the kubelet generates a CSR while communicating over the bootstrap channel. The controller manager signs the CSR, resulting in a certificate that the kubelet manages.
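As an illustrative aside (not a step you normally perform, because the controller manager approves and signs these requests automatically), the CSRs created by this bootstrap flow can be inspected and, if necessary, approved manually:
$ oc get csr
$ oc adm certificate approve <csr_name>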
This bootstrap CA is valid for 10 years.
The kubelet-managed certificate is valid for one year and rotates automatically at around the 80 percent mark of that one year.
[NOTE]
====
Operator Lifecycle Manager (OLM) does not update the certificates of Operators that it manages in proxy environments. These certificates must be managed by the user using the subscription config.
====
You cannot customize the bootstrap certificates.
= Certificate types and descriptions
{product-title} monitors the validity of the cluster certificates that it issues and manages. The {product-title} alerting framework has rules to help identify when a certificate issue is about to occur. These rules consist of the following checks:
- API server client certificate expiration is less than five minutes.
= Control plane certificates
Control plane certificates are included in these namespaces:
- `openshift-config-managed`
- `openshift-kube-apiserver`
- `openshift-kube-apiserver-operator`
- `openshift-kube-controller-manager`
- `openshift-kube-controller-manager-operator`
- `openshift-kube-scheduler`
Control plane certificates are managed by the system and rotated automatically.
In the rare case that your control plane certificates have expired, see Recovering from expired control plane certificates.
= etcd certificates
etcd certificates are signed by the etcd-signer; they come from a certificate authority (CA) that is generated by the bootstrap process.
The CA certificates are valid for 10 years. The peer, client, and server certificates are valid for three years.
etcd certificates are used for encrypted communication between etcd member peers, as well as encrypted client traffic. The following certificates are generated and used by etcd and other processes that communicate with etcd:
- Peer certificates: Used for communication between etcd members.
- Client certificates: Used for encrypted server-client communication. Client certificates are currently used by the API server only, and no other service should connect to etcd directly except for the proxy. Client secrets (`etcd-client`, `etcd-metric-client`, `etcd-metric-signer`, and `etcd-signer`) are added to the `openshift-config`, `openshift-monitoring`, and `openshift-kube-apiserver` namespaces.
- Server certificates: Used by the etcd server for authenticating client requests.
- Metric certificates: All metric consumers connect to the proxy with metric-client certificates.
- Restoring to a previous cluster state
= Ingress certificates
The Ingress Operator uses certificates for:
- Securing access to metrics for Prometheus.
- Securing access to routes.
To secure access to Ingress Operator and Ingress Controller metrics, the Ingress Operator uses service serving certificates. The Operator requests a certificate from the `service-ca` controller for its own metrics, and the `service-ca` controller puts the certificate in a secret named `metrics-tls` in the `openshift-ingress-operator` namespace. Additionally, the Ingress Operator requests a certificate for each Ingress Controller, and the `service-ca` controller puts the certificate in a secret named `router-metrics-certs-<name>`, where `<name>` is the name of the Ingress Controller, in the `openshift-ingress` namespace.
Each Ingress Controller has a default certificate that it uses for secured routes that do not specify their own certificates. Unless you specify a custom certificate, the Operator uses a self-signed certificate by default. The Operator uses its own self-signed signing certificate to sign any default certificate that it generates. The Operator generates this signing certificate and puts it in a secret named `router-ca` in the `openshift-ingress-operator` namespace. When the Operator generates a default certificate, it puts the default certificate in a secret named `router-certs-<name>` (where `<name>` is the name of the Ingress Controller) in the `openshift-ingress` namespace.
[WARNING]
====
The Ingress Operator generates a default certificate for an Ingress Controller to serve as a placeholder until you configure a custom default certificate. Do not use Operator-generated default certificates in production clusters.
====
An empty `defaultCertificate` field causes the Ingress Operator to use its self-signed CA to generate a serving certificate for the specified domain.
The default CA certificate and key generated by the Ingress Operator. Used to sign Operator-generated default serving certificates.
In the default workflow, the wildcard default serving certificate, created by the Ingress Operator and signed using the generated default CA certificate. In the custom workflow, this is the user-provided certificate.
The router deployment. Uses the certificate in `secrets/router-certs-default` as its default front-end server certificate.
In the default workflow, the contents of the wildcard default serving certificate (public and private parts) are copied here to enable OAuth integration. In the custom workflow, this is the user-provided certificate.
The public (certificate) part of the default serving certificate. Replaces the `configmaps/router-ca` resource.
The user updates the cluster proxy configuration with the CA certificate that signed the `ingresscontroller` serving certificate. This enables components like `auth`, `console`, and the registry to trust the serving certificate.
The cluster-wide trusted CA bundle containing the combined {op-system-first} and user-provided CA bundles or an {op-system}-only bundle if a user bundle is not provided.
The expiration terms for the Ingress Operator’s certificates are as follows:
- The expiration date for metrics certificates that the `service-ca` controller creates is two years after the date of creation.
- The expiration date for the Operator's signing certificate is two years after the date of creation.
- The expiration date for default certificates that the Operator generates is two years after the date of creation.
You cannot specify custom expiration terms on certificates that the Ingress Operator or `service-ca` controller creates.
You cannot specify expiration terms when installing {product-title} for certificates that the Ingress Operator or `service-ca` controller creates.
Prometheus uses the certificates that secure metrics.
The Ingress Operator uses its signing certificate to sign default certificates that it generates for Ingress Controllers for which you do not set custom default certificates.
Cluster components that use secured routes may use the default Ingress Controller’s default certificate.
Ingress to the cluster via a secured route uses the default certificate of the Ingress Controller by which the route is accessed unless the route specifies its own certificate.
Ingress certificates are managed by the user. See Replacing the default ingress certificate for more information.
The `service-ca` controller automatically rotates the certificates that it issues. However, it is possible to use `oc delete secret <secret>` to manually rotate service serving certificates.
The Ingress Operator does not rotate its own signing certificate or the default certificates that it generates. Operator-generated default certificates are intended as placeholders for custom default certificates that you configure.
= Machine Config Operator certificates
Machine Config Operator certificates are used to secure connections between the Red Hat Enterprise Linux CoreOS (RHCOS) nodes and the Machine Config Server.
You cannot customize the Machine Config Operator certificates.
= Monitoring and cluster logging Operator component certificates
Monitoring components secure their traffic with service CA certificates. These certificates are valid for 2 years and are replaced automatically on rotation of the service CA, which is every 13 months.
If the certificate lives in the `openshift-monitoring` or `openshift-logging` namespace, it is system managed and rotated automatically.
These certificates are managed by the system and not the user.
= Node certificates
Node certificates are signed by the cluster; they come from a certificate authority (CA) that is generated by the bootstrap process. After the cluster is installed, the node certificates are auto-rotated.
These certificates are managed by the system and not the user.
- Working with nodes
= OLM certificates
All certificates for Operator Lifecycle Manager (OLM) components (`olm-operator`, `catalog-operator`, `packageserver`, and `marketplace-operator`) are managed by the system.
When installing Operators that include webhooks or API services in their `ClusterServiceVersion` (CSV) object, OLM creates and rotates the certificates for these resources. Certificates for resources in the `openshift-operator-lifecycle-manager` namespace are managed by OLM.
OLM will not update the certificates of Operators that it manages in proxy environments. These certificates must be managed by the user using the subscription config.
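As a hedged sketch of such a subscription config (the Operator name, namespace, and proxy values are placeholders, not taken from this document), the `Subscription` object accepts a `config.env` section whose values the user maintains:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator              # placeholder Operator name
  namespace: openshift-operators      # placeholder namespace
spec:
  channel: stable
  name: example-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    env:                              # user-managed proxy settings for the Operator deployment
    - name: HTTP_PROXY
      value: http://<proxy_host>:<port>
    - name: HTTPS_PROXY
      value: https://<proxy_host>:<port>
    - name: NO_PROXY
      value: .cluster.local,.svc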
= Proxy certificates
Proxy certificates allow users to specify one or more custom certificate authority (CA) certificates used by platform components when making egress connections.
The `trustedCA` field of the Proxy object is a reference to a config map that contains a user-provided trusted certificate authority (CA) bundle. This bundle is merged with the {op-system-first} trust bundle and injected into the trust store of platform components that make egress HTTPS calls. For example, `image-registry-operator` calls an external image registry to download images. If `trustedCA` is not specified, only the {op-system} trust bundle is used for proxied HTTPS connections. Provide custom CA certificates to the {op-system} trust bundle if you want to use your own certificate infrastructure.
The `trustedCA` field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from the required key `ca-bundle.crt` and copying it to a config map named `trusted-ca-bundle` in the `openshift-config-managed` namespace. The namespace for the config map referenced by `trustedCA` is `openshift-config`:
apiVersion: v1
kind: ConfigMap
metadata:
name: user-ca-bundle
namespace: openshift-config
data:
ca-bundle.crt: |
-----BEGIN CERTIFICATE-----
Custom CA certificate bundle.
-----END CERTIFICATE-----
The `additionalTrustBundle` value of the installer configuration is used to specify any proxy-trusted CA certificates during installation. For example:
$ cat install-config.yaml
...
proxy:
  httpProxy: http://<username>:<password>@<proxy_host>:<port>/
  httpsProxy: https://<username>:<password>@<proxy_host>:<port>/
  noProxy: <123.example.com,10.88.0.0/16>
additionalTrustBundle: |
-----BEGIN CERTIFICATE-----
<MY_HTTPS_PROXY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
...
The user-provided trust bundle is represented as a config map. The config map is mounted into the file system of platform components that make egress HTTPS calls. Typically, Operators mount the config map to `/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem`, but this is not required by the proxy. A proxy can modify or inspect the HTTPS connection. In either case, the proxy must generate and sign a new certificate for the connection.
Complete proxy support means connecting to the specified proxy and trusting any signatures it has generated. Therefore, it is necessary to let the user specify a trusted root, such that any certificate chain connected to that trusted root is also trusted.
If using the RHCOS trust bundle, place CA certificates in `/etc/pki/ca-trust/source/anchors`.
See Using shared system certificates in the Red Hat Enterprise Linux documentation for more information.
The user sets the expiration term of the user-provided trust bundle.
The default expiration term is defined by the CA certificate itself. It is up to the CA administrator to configure this for the certificate before it can be used by {product-title} or {op-system}.
[NOTE]
====
Red Hat does not monitor for when CAs expire. However, due to the long life of CAs, this is generally not an issue. You might need to periodically update the trust bundle.
====
By default, all platform components that make egress HTTPS calls use the {op-system} trust bundle. If `trustedCA` is defined, it is also used.
Any service that is running on the {op-system} node is able to use the trust bundle of the node.
Updating the user-provided trust bundle consists of either:
- updating the PEM-encoded certificates in the config map referenced by `trustedCA`, or
- creating a config map in the `openshift-config` namespace that contains the new trust bundle and updating `trustedCA` to reference the name of the new config map.
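A minimal sketch of the second option, assuming a hypothetical bundle file and config map name:
$ oc create configmap new-ca-bundle \
    --from-file=ca-bundle.crt=</path/to/ca-bundle.crt> \
    -n openshift-config
$ oc patch proxy/cluster --type=merge \
    --patch '{"spec":{"trustedCA":{"name":"new-ca-bundle"}}}'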
The mechanism for writing CA certificates to the {op-system} trust bundle is exactly the same as writing any other file to {op-system}, which is done through the use of machine configs. When the Machine Config Operator (MCO) applies the new machine config that contains the new CA certificates, the node is rebooted. During the next boot, the service `coreos-update-ca-trust.service` runs on the {op-system} nodes, which automatically updates the trust bundle with the new CA certificates. For example:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
labels:
machineconfiguration.openshift.io/role: worker
name: 50-examplecorp-ca-cert
spec:
config:
ignition:
version: 3.1.0
storage:
files:
- contents:
source: data:text/plain;charset=utf-8;base64,LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVORENDQXh5Z0F3SUJBZ0lKQU51bkkwRDY2MmNuTUEwR0NTcUdTSWIzRFFFQkN3VUFNSUdsTVFzd0NRWUQKV1FRR0V3SlZVekVYTUJVR0ExVUVDQXdPVG05eWRHZ2dRMkZ5YjJ4cGJtRXhFREFPQmdOVkJBY01CMUpoYkdWcApBMmd4RmpBVUJnTlZCQW9NRFZKbFpDQklZWFFzSUVsdVl5NHhFekFSQmdOVkJBc01DbEpsWkNCSVlYUWdTVlF4Ckh6QVpCZ05WQkFNTUVsSmxaQ0JJWVhRZ1NWUWdVbTl2ZENCRFFURWhNQjhHQ1NxR1NJYjNEUUVKQVJZU2FXNW0KWGpDQnBURUxNQWtHQTFVRUJoTUNWVk14RnpBVkJnTlZCQWdNRGs1dmNuUm9JRU5oY205c2FXNWhNUkF3RGdZRApXUVFIREFkU1lXeGxhV2RvTVJZd0ZBWURWUVFLREExU1pXUWdTR0YwTENCSmJtTXVNUk13RVFZRFZRUUxEQXBTCkFXUWdTR0YwSUVsVU1Sc3dHUVlEVlFRRERCSlNaV1FnU0dGMElFbFVJRkp2YjNRZ1EwRXhJVEFmQmdrcWhraUcKMHcwQkNRRVdFbWx1Wm05elpXTkFjbVZrYUdGMExtTnZiVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUApCRENDQVFvQ2dnRUJBTFF0OU9KUWg2R0M1TFQxZzgwcU5oMHU1MEJRNHNaL3laOGFFVHh0KzVsblBWWDZNSEt6CmQvaTdsRHFUZlRjZkxMMm55VUJkMmZRRGsxQjBmeHJza2hHSUlaM2lmUDFQczRsdFRrdjhoUlNvYjNWdE5xU28KSHhrS2Z2RDJQS2pUUHhEUFdZeXJ1eTlpckxaaW9NZmZpM2kvZ0N1dDBaV3RBeU8zTVZINXFXRi9lbkt3Z1BFUwpZOXBvK1RkQ3ZSQi9SVU9iQmFNNzYxRWNyTFNNMUdxSE51ZVNmcW5obzNBakxRNmRCblBXbG82MzhabTFWZWJLCkNFTHloa0xXTVNGa0t3RG1uZTBqUTAyWTRnMDc1dkNLdkNzQ0F3RUFBYU5qTUdFd0hRWURWUjBPQkJZRUZIN1IKNXlDK1VlaElJUGV1TDhacXczUHpiZ2NaTUI4R0ExVWRJd1FZTUJhQUZIN1I0eUMrVWVoSUlQZXVMOFpxdzNQegpjZ2NaTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RGdZRFZSMFBBUUgvQkFRREFnR0dNQTBHQ1NxR1NJYjNEUUVCCkR3VUFBNElCQVFCRE52RDJWbTlzQTVBOUFsT0pSOCtlbjVYejloWGN4SkI1cGh4Y1pROGpGb0cwNFZzaHZkMGUKTUVuVXJNY2ZGZ0laNG5qTUtUUUNNNFpGVVBBaWV5THg0ZjUySHVEb3BwM2U1SnlJTWZXK0tGY05JcEt3Q3NhawpwU29LdElVT3NVSks3cUJWWnhjckl5ZVFWMnFjWU9lWmh0UzV3QnFJd09BaEZ3bENFVDdaZTU4UUhtUzQ4c2xqCjVlVGtSaml2QWxFeHJGektjbGpDNGF4S1Fsbk92VkF6eitHbTMyVTB4UEJGNEJ5ZVBWeENKVUh3MVRzeVRtZWwKU3hORXA3eUhvWGN3bitmWG5hK3Q1SldoMWd4VVp0eTMKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
mode: 0644
overwrite: true
path: /etc/pki/ca-trust/source/anchors/examplecorp-ca.crt
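The base64 payload in the `source` field is simply the PEM bundle encoded on a single line. For example, assuming a local file named examplecorp-ca.crt, the encoded string can be produced with:
$ base64 -w0 examplecorp-ca.crt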
The trust store of machines must also support updating the trust store of nodes.
There are no Operators that can auto-renew certificates on the {op-system} nodes.
[NOTE]
====
Red Hat does not monitor for when CAs expire. However, due to the long life of CAs, this is generally not an issue. You might need to periodically update the trust bundle.
====
= Service CA certificates
`service-ca` is an Operator that creates a self-signed CA when an {product-title} cluster is deployed.
A custom expiration term is not supported. The self-signed CA is stored in a secret with the qualified name `service-ca/signing-key` in the fields `tls.crt` (certificate(s)), `tls.key` (private key), and `ca-bundle.crt` (CA bundle).
Other services can request a service serving certificate by annotating a service resource with `service.beta.openshift.io/serving-cert-secret-name: <secret_name>`. In response, the Operator generates a new certificate, as `tls.crt`, and a private key, as `tls.key`, to the named secret. The certificate is valid for two years.
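For illustration only (the service name, namespace, port, and secret name are hypothetical), a service requesting a serving certificate looks like this:
apiVersion: v1
kind: Service
metadata:
  name: example-service                 # hypothetical service name
  namespace: example-namespace          # hypothetical namespace
  annotations:
    service.beta.openshift.io/serving-cert-secret-name: example-service-tls   # secret that receives tls.crt and tls.key
spec:
  selector:
    app: example
  ports:
  - port: 8443
    targetPort: 8443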
Other services can request that the CA bundle for the service CA be injected into API service or config map resources by annotating with `service.beta.openshift.io/inject-cabundle: true` to support validating certificates generated from the service CA. In response, the Operator writes its current CA bundle to the `CABundle` field of an API service or as `service-ca.crt` to a config map.
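A minimal sketch with a hypothetical config map name; after the annotation is set, the Operator populates the `service-ca.crt` key:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-ca-bundle               # hypothetical config map name
  namespace: example-namespace          # hypothetical namespace
  annotations:
    service.beta.openshift.io/inject-cabundle: "true"    # requests injection of the service CA bundle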
As of {product-title} 4.3.5, automated rotation is supported and is backported to some 4.2.z and 4.3.z releases. For any release supporting automated rotation, the service CA is valid for 26 months and is automatically refreshed when there is less than 13 months validity left. If necessary, you can manually refresh the service CA.
The service CA expiration of 26 months is longer than the expected upgrade interval for a supported {product-title} cluster, such that non-control plane consumers of service CA certificates will be refreshed after CA rotation and prior to the expiration of the pre-rotation CA.
[WARNING]
====
A manually-rotated service CA does not maintain trust with the previous service CA. You might experience a temporary service disruption until the pods in the cluster are restarted, which ensures that pods are using service serving certificates issued by the new service CA.
====
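If a manual refresh is needed, the usual approach, shown here as a hedged sketch, is to delete the signing secret so that the Operator regenerates the CA, and then restart pods so that they pick up certificates issued by the new CA:
$ oc delete secret/signing-key -n openshift-service-ca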
Services that use service CA certificates include:
- `cluster-autoscaler-operator`
- `cluster-monitoring-operator`
- `cluster-authentication-operator`
- `cluster-image-registry-operator`
- `cluster-ingress-operator`
- `cluster-kube-apiserver-operator`
- `cluster-kube-controller-manager-operator`
- `cluster-kube-scheduler-operator`
- `cluster-networking-operator`
- `cluster-openshift-apiserver-operator`
- `cluster-openshift-controller-manager-operator`
- `cluster-samples-operator`
- `machine-config-operator`
- `console-operator`
- `insights-operator`
- `machine-api-operator`
- `operator-lifecycle-manager`
This is not a comprehensive list.
- Securing service traffic using service serving certificate secrets
= User-provided certificates for the API server
The API server is accessible by clients external to the cluster at `api.<cluster_name>.<base_domain>`. You might want clients to access the API server at a different hostname or without the need to distribute the cluster-managed certificate authority (CA) certificates to the clients. The administrator must set a custom default certificate to be used by the API server when serving content.
The user-provided certificates must be provided in a `kubernetes.io/tls` type `Secret` in the `openshift-config` namespace. Update the API server cluster configuration, the `apiserver/cluster` resource, to enable the use of the user-provided certificate.
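As a hedged sketch of those two steps (the secret name, certificate paths, and hostname are placeholders):
$ oc create secret tls custom-api-cert \
    --cert=</path/to/tls.crt> --key=</path/to/tls.key> \
    -n openshift-config
$ oc patch apiserver cluster --type=merge --patch \
    '{"spec":{"servingCerts":{"namedCertificates":[{"names":["<api_fqdn>"],"servingCertificate":{"name":"custom-api-cert"}}]}}}'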
API server client certificate expiration is less than five minutes.
User-provided certificates are managed by the user.
Update the secret containing the user-managed certificate as needed.
- Adding API server certificates
= User-provided certificates for default ingress
Applications are usually exposed at `<route_name>.apps.<cluster_name>.<base_domain>`. The `<cluster_name>` and `<base_domain>` come from the installation config file. `<route_name>` is the host field of the route, if specified, or the route name. For example, `hello-openshift-default.apps.username.devcluster.openshift.com`. `hello-openshift` is the name of the route and the route is in the `default` namespace. You might want clients to access the applications without the need to distribute the cluster-managed CA certificates to the clients. The administrator must set a custom default certificate when serving application content.
[WARNING]
====
The Ingress Operator generates a default certificate for an Ingress Controller to serve as a placeholder until you configure a custom default certificate. Do not use Operator-generated default certificates in production clusters.
====
The user-provided certificates must be provided in a `tls` type `Secret` resource in the `openshift-ingress` namespace. Update the `IngressController` CR in the `openshift-ingress-operator` namespace to enable the use of the user-provided certificate. For more information on this process, see Setting a custom default certificate.
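A minimal, hedged sketch of those two steps (the secret name and certificate paths are placeholders):
$ oc create secret tls custom-ingress-cert \
    --cert=</path/to/tls.crt> --key=</path/to/tls.key> \
    -n openshift-ingress
$ oc patch ingresscontrollers/default -n openshift-ingress-operator --type=merge \
    --patch '{"spec":{"defaultCertificate":{"name":"custom-ingress-cert"}}}'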
Update the secret containing the user-managed certificate as needed.
- Replacing the default ingress certificate
= Adding API server certificates
The default API server certificate is issued by an internal {product-title} cluster CA. Clients outside of the cluster will not be able to verify the API server's certificate by default. This certificate can be replaced by one that is issued by a CA that clients trust.
= Replacing the default ingress certificate
- Proxy certificate customization
= Securing service traffic using service serving certificate secrets
- You can use a service certificate to configure a secure route using reencrypt TLS termination. For more information, see Creating a re-encrypt route with a custom certificate.
= Updating the CA bundle
- Proxy certificate customization
= Performing advanced Compliance Operator tasks
The Compliance Operator includes options for advanced users for the purpose of debugging or integration with existing tooling.
- Managing security context constraints
= Understanding the Custom Resource Definitions
The Compliance Operator in {product-title} provides you with several Custom Resource Definitions (CRDs) to accomplish the compliance scans. To run a compliance scan, it leverages the predefined security policies, which are derived from the ComplianceAsCode community project. The Compliance Operator converts these security policies into CRDs, which you can use to run compliance scans and get remediations for the issues found.
By default, the Compliance Operator CRDs include `ProfileBundle` and `Profile` objects, in which you can define and set the rules for your compliance scan requirements. You can also customize the default profiles by using a `TailoredProfile` object.
After you have defined the requirements of the compliance scan, you can configure it by specifying the type of the scan, the occurrence of the scan, and the location of the scan. To do so, the Compliance Operator provides you with a `ScanSetting` object.
When you have defined the compliance scan requirements and configured the settings to run the scans, the Compliance Operator processes them by using the `ScanSettingBinding` object.
After the compliance suite is created, you can monitor the status of the deployed scans by using the `ComplianceSuite` object.
When the compliance suite reaches the `DONE` phase, you can view the scan results and possible remediations.
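As an illustrative sketch (the binding name and the selected profile are assumptions, not prescribed by this document), a `ScanSettingBinding` ties one or more profiles to a `ScanSetting`:
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: cis-scan                        # hypothetical binding name
  namespace: openshift-compliance
profiles:
- apiGroup: compliance.openshift.io/v1alpha1
  kind: Profile
  name: ocp4-cis                        # example profile to scan against
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default                         # the ScanSetting that defines how and where scans run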
= Installing the Compliance Operator
Before you can use the Compliance Operator, you must ensure it is deployed in the cluster.
[IMPORTANT]
====
The Compliance Operator might report incorrect results on managed platforms, such as OpenShift Dedicated, Red Hat OpenShift Service on AWS, and Microsoft Azure Red Hat OpenShift. For more information, see the Red Hat Knowledgebase Solution #6983418.
====
[IMPORTANT]
====
You can create a custom SCC for the Compliance Operator scanner pod service account. For more information, see Creating a custom SCC for the Compliance Operator.
====
- The Compliance Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks.
= Managing the Compliance Operator
This section describes the lifecycle of security content, including how to use an updated version of compliance content and how to create a custom `ProfileBundle` object.
- The Compliance Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks.
= Retrieving Compliance Operator raw results
When proving compliance for your {product-title} cluster, you might need to provide the scan results for auditing purposes.
= Compliance Operator release notes
The Compliance Operator lets {product-title} administrators describe the required compliance state of a cluster and provides them with an overview of gaps and ways to remediate them.
These release notes track the development of the Compliance Operator in {product-title}.
For an overview of the Compliance Operator, see Understanding the Compliance Operator.
To access the latest release, see Updating the Compliance Operator.
The following advisory is available for the OpenShift Compliance Operator 1.0.0:
- The Compliance Operator is now stable and the release channel is upgraded to `stable`. Future releases will follow Semantic Versioning. To access the latest release, see Updating the Compliance Operator.
- Before this update, the `compliance_operator_compliance_scan_error_total` metric had an ERROR label with a different value for each error message. With this update, the `compliance_operator_compliance_scan_error_total` metric does not increase in values. (OCPBUGS-1803)
- Before this update, the `ocp4-api-server-audit-log-maxsize` rule would result in a `FAIL` state. With this update, the error message has been removed from the metric, decreasing the cardinality of the metric in line with best practices. (OCPBUGS-7520)
- Before this update, the `rhcos4-enable-fips-mode` rule description misleadingly suggested that FIPS could be enabled after installation. With this update, the `rhcos4-enable-fips-mode` rule description clarifies that FIPS must be enabled at install time. (OCPBUGS-8358)
The following advisory is available for the OpenShift Compliance Operator 0.1.61:
- The Compliance Operator now supports timeout configuration for scanner pods. The timeout is specified in the `ScanSetting` object. If the scan is not completed within the timeout, the scan retries until the maximum number of retries is reached. See Configuring ScanSetting timeout for more information, and the sketch that follows this list.
- Before this update, Compliance Operator remediations required variables as inputs. Remediations without variables set were applied cluster-wide and resulted in stuck nodes, even though it appeared the remediation applied correctly. With this update, the Compliance Operator validates whether a variable needs to be supplied by using a `TailoredProfile` for a remediation. (OCPBUGS-3864)
- Before this update, the instructions for `ocp4-kubelet-configure-tls-cipher-suites` were incomplete, requiring users to refine the query manually. With this update, the query provided in `ocp4-kubelet-configure-tls-cipher-suites` returns the actual results to perform the audit steps. (OCPBUGS-3017)
- Before this update, `ScanSettingBinding` objects created without a `settingRef` variable did not use an appropriate default value. With this update, `ScanSettingBinding` objects without a `settingRef` variable use the `default` value. (OCPBUGS-3420)
- Before this update, system reserved parameters were not generated in kubelet configuration files, causing the Compliance Operator to fail to unpause the machine config pool. With this update, the Compliance Operator omits system reserved parameters during machine configuration pool evaluation. (OCPBUGS-4445)
- Before this update, `ComplianceCheckResult` objects did not have correct descriptions. With this update, the Compliance Operator sources the `ComplianceCheckResult` information from the rule description. (OCPBUGS-4615)
- Before this update, the Compliance Operator did not check for empty kubelet configuration files when parsing machine configurations. As a result, the Compliance Operator would panic and crash. With this update, the Compliance Operator implements improved checking of the kubelet configuration data structure and only continues if it is fully rendered. (OCPBUGS-4621)
- Before this update, the Compliance Operator generated remediations for kubelet evictions based on machine config pool name and a grace period, resulting in multiple remediations for a single eviction rule. With this update, the Compliance Operator applies all remediations for a single rule. (OCPBUGS-4338)
- Before this update, remediations that were previously `Applied` might have been marked as `Outdated` after rescans were performed, despite no changes in the remediation content. The comparison of scans did not account for remediation metadata correctly. With this update, remediations retain the previously generated `Applied` status. (OCPBUGS-6710)
- Before this update, a regression occurred when attempting to create a `ScanSettingBinding` that was using a `TailoredProfile` with a non-default `MachineConfigPool`, which marked the `ScanSettingBinding` as `Failed`. With this update, functionality is restored and custom `ScanSettingBinding` objects using a `TailoredProfile` perform correctly. (OCPBUGS-6827)
- Before this update, some kubelet configuration parameters did not have default values. With this update, the following parameters contain default values (OCPBUGS-6708):
  - `ocp4-cis-kubelet-enable-streaming-connections`
  - `ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-available`
  - `ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-inodesfree`
  - `ocp4-cis-kubelet-eviction-thresholds-set-hard-memory-available`
  - `ocp4-cis-kubelet-eviction-thresholds-set-hard-nodefs-available`
- Before this update, the `selinux_confinement_of_daemons` rule failed running on the kubelet because of the permissions necessary for the kubelet to run. With this update, the `selinux_confinement_of_daemons` rule is disabled. (OCPBUGS-6968)
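As an illustrative sketch of the timeout configuration mentioned in the first item of this list, the following `ScanSetting` shows the shape only; the `timeout` and `maxRetryOnTimeout` field names and values are assumptions, not authoritative:
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default
  namespace: openshift-compliance
roles:
- worker
- master
timeout: 30m              # assumed: fail a scan that runs longer than this
maxRetryOnTimeout: 3      # assumed: retry a timed-out scan up to this many times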
The following advisory is available for the OpenShift Compliance Operator 0.1.59:
- The Compliance Operator now supports the Payment Card Industry Data Security Standard (PCI DSS) `ocp4-pci-dss` and `ocp4-pci-dss-node` profiles on the `ppc64le` architecture.
- Previously, the Compliance Operator did not support the Payment Card Industry Data Security Standard (PCI DSS) `ocp4-pci-dss` and `ocp4-pci-dss-node` profiles on different architectures such as `ppc64le`. Now, the Compliance Operator supports the `ocp4-pci-dss` and `ocp4-pci-dss-node` profiles on the `ppc64le` architecture. (OCPBUGS-3252)
- Previously, after the recent update to version 0.1.57, the `rerunner` service account (SA) was no longer owned by the cluster service version (CSV), which caused the SA to be removed during the Operator upgrade. Now, the CSV owns the `rerunner` SA in 0.1.59, and upgrades from any previous version will not result in a missing SA. (OCPBUGS-3452)
- In 0.1.57, the Operator started the controller metrics endpoint listening on port `8080`. This resulted in `TargetDown` alerts because cluster monitoring expects port `8383`. With 0.1.59, the Operator starts the endpoint listening on port `8383` as expected. (OCPBUGS-3097)
The following advisory is available for the OpenShift Compliance Operator 0.1.57:
- `KubeletConfig` checks changed from `Node` to `Platform` type. `KubeletConfig` checks the default configuration of the `KubeletConfig`. The configuration files are aggregated from all nodes into a single location per node pool. See Evaluating `KubeletConfig` rules against default configuration values.
- The `ScanSetting` Custom Resource now allows users to override the default CPU and memory limits of scanner pods through the `scanLimits` attribute. For more information, see Increasing Compliance Operator resource limits.
- A `PriorityClass` object can now be set through `ScanSetting`. This ensures the Compliance Operator is prioritized and minimizes the chance that the cluster falls out of compliance. For more information, see Setting `PriorityClass` for `ScanSetting` scans.
- Previously, the Compliance Operator hard-coded notifications to the default `openshift-compliance` namespace. If the Operator were installed in a non-default namespace, the notifications would not work as expected. Now, notifications work in non-default `openshift-compliance` namespaces. (BZ#2060726)
- Previously, the Compliance Operator was unable to evaluate default configurations used by kubelet objects, resulting in inaccurate results and false positives. This new feature evaluates the kubelet configuration and now reports accurately. (BZ#2075041)
- Previously, the Compliance Operator reported the `ocp4-kubelet-configure-event-creation` rule in a `FAIL` state after applying an automatic remediation because the `eventRecordQPS` value was set higher than the default value. Now, the `ocp4-kubelet-configure-event-creation` rule remediation sets the default value, and the rule applies correctly. (BZ#2082416)
- The `ocp4-configure-network-policies` rule requires manual intervention to perform effectively. New descriptive instructions and rule updates increase the applicability of the `ocp4-configure-network-policies` rule for clusters using Calico CNIs. (BZ#2091794)
- Previously, the Compliance Operator would not clean up pods used to scan infrastructure when using the `debug=true` option in the scan settings. This caused pods to be left on the cluster even after deleting the `ScanSettingBinding`. Now, pods are always deleted when a `ScanSettingBinding` is deleted. (BZ#2092913)
- Previously, the Compliance Operator used an older version of the `operator-sdk` command that caused alerts about deprecated functionality. Now, an updated version of the `operator-sdk` command is included and there are no more alerts for deprecated functionality. (BZ#2098581)
- Previously, the Compliance Operator would fail to apply remediations if it could not determine the relationship between kubelet and machine configurations. Now, the Compliance Operator has improved handling of the machine configurations and is able to determine if a kubelet configuration is a subset of a machine configuration. (BZ#2102511)
- Previously, the rule for `ocp4-cis-node-master-kubelet-enable-cert-rotation` did not properly describe success criteria. As a result, the requirements for `RotateKubeletClientCertificate` were unclear. Now, the rule for `ocp4-cis-node-master-kubelet-enable-cert-rotation` reports accurately regardless of the configuration present in the kubelet configuration file. (BZ#2105153)
- Previously, the rule for checking idle streaming timeouts did not consider default values, resulting in inaccurate rule reporting. Now, more robust checks ensure increased accuracy in results based on default configuration values. (BZ#2105878)
- Previously, the Compliance Operator would fail to fetch API resources when parsing machine configurations without Ignition specifications, which caused the `api-check-pods` processes to crash loop. Now, the Compliance Operator correctly handles machine config pools that do not have Ignition specifications. (BZ#2117268)
- Previously, rules evaluating the `modprobe` configuration would fail even after applying remediations due to a mismatch in values for the `modprobe` configuration. Now, the same values are used for the `modprobe` configuration in checks and remediations, ensuring consistent results. (BZ#2117747)
- Specifying *Install into all namespaces in the cluster* or setting the `WATCH_NAMESPACES` environment variable to `""` no longer affects all namespaces. Any API resources installed in namespaces that were not specified at the time of Compliance Operator installation are no longer operational. API resources might require creation in the selected namespace, or the `openshift-compliance` namespace by default. This change improves the Compliance Operator's memory usage.
The following advisory is available for the OpenShift Compliance Operator 0.1.53:
- Previously, the `ocp4-kubelet-enable-streaming-connections` rule contained an incorrect variable comparison, resulting in false positive scan results. Now, the Compliance Operator provides accurate scan results when setting `streamingConnectionIdleTimeout`. (BZ#2069891)
- Previously, group ownership for `/etc/openvswitch/conf.db` was incorrect on IBM Z architectures, resulting in `ocp4-cis-node-worker-file-groupowner-ovs-conf-db` check failures. Now, the check is marked `NOT-APPLICABLE` on IBM Z architecture systems. (BZ#2072597)
- Previously, the `ocp4-cis-scc-limit-container-allowed-capabilities` rule reported in a `FAIL` state due to incomplete data regarding the security context constraints (SCC) rules in the deployment. Now, the result is `MANUAL`, which is consistent with other checks that require human intervention. (BZ#2077916)
- Previously, the following rules failed to account for additional configuration paths for API servers and TLS certificates and keys, resulting in reported failures even if the certificates and keys were set properly:
  - `ocp4-cis-api-server-kubelet-client-cert`
  - `ocp4-cis-api-server-kubelet-client-key`
  - `ocp4-cis-kubelet-configure-tls-cert`
  - `ocp4-cis-kubelet-configure-tls-key`
Now, the rules report accurately and observe legacy file paths specified in the kubelet configuration file. (BZ#2079813)
- Previously, the `content_rule_oauth_or_oauthclient_inactivity_timeout` rule did not account for a configurable timeout set by the deployment when assessing compliance for timeouts. This resulted in the rule failing even if the timeout was valid. Now, the Compliance Operator uses the `var_oauth_inactivity_timeout` variable to set a valid timeout length. (BZ#2081952)
- Previously, the Compliance Operator used administrative permissions on namespaces not labeled appropriately for privileged use, resulting in warning messages regarding pod security-level violations. Now, the Compliance Operator has appropriate namespace labels and permission adjustments to access results without violating permissions. (BZ#2088202)
- Previously, applying auto remediations for `rhcos4-high-master-sysctl-kernel-yama-ptrace-scope` and `rhcos4-sysctl-kernel-core-pattern` resulted in subsequent failures of those rules in scan results, even though they were remediated. Now, the rules report `PASS` accurately, even after remediations are applied. (BZ#2094382)
- Previously, the Compliance Operator would fail in a `CrashLoopBackoff` state because of out-of-memory exceptions. Now, the Compliance Operator is improved to handle large machine configuration data sets in memory and function correctly. (BZ#2094854)
- When `"debug":true` is set within the `ScanSettingBinding` object, the pods generated by the `ScanSettingBinding` object are not removed when that binding is deleted. As a workaround, run the following command to delete the remaining pods:

$ oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis
The following advisory is available for the OpenShift Compliance Operator 0.1.52:
- The FedRAMP high SCAP profile is now available for use in {product-title} environments. For more information, see Supported compliance profiles.
- Previously, the `OpenScap` container would crash due to a mount permission issue in a security environment where the `DAC_OVERRIDE` capability is dropped. Now, executable mount permissions are applied to all users. (BZ#2082151)
- Previously, the compliance rule `ocp4-configure-network-policies` could be configured as `MANUAL`. Now, the compliance rule `ocp4-configure-network-policies` is set to `AUTOMATIC`. (BZ#2072431)
- Previously, the Cluster Autoscaler would fail to scale down because the Compliance Operator scan pods were never removed after a scan. Now, the pods are removed from each node by default unless explicitly saved for debugging purposes. (BZ#2075029)
- Previously, applying the Compliance Operator to the `KubeletConfig` would result in the node going into a `NotReady` state due to unpausing the machine config pools too early. Now, the machine config pools are unpaused appropriately and the node operates correctly. (BZ#2071854)
- Previously, the Machine Config Operator used `base64` instead of `url-encoded` code in the latest release, causing Compliance Operator remediation to fail. Now, the Compliance Operator checks encoding to handle both `base64` and `url-encoded` machine config code and the remediation applies correctly. (BZ#2082431)
- When `"debug":true` is set within the `ScanSettingBinding` object, the pods generated by the `ScanSettingBinding` object are not removed when that binding is deleted. As a workaround, run the following command to delete the remaining pods:

$ oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis
The following advisory is available for the OpenShift Compliance Operator 0.1.49:
- The Compliance Operator is now supported on the following architectures:
  - IBM Power
  - IBM Z
  - IBM LinuxONE
- Previously, the `openshift-compliance` content did not include platform-specific checks for network types. As a result, OVN- and SDN-specific checks would show as `failed` instead of `not-applicable` based on the network configuration. Now, new rules contain platform checks for networking rules, resulting in a more accurate assessment of network-specific checks. (BZ#1994609)
- Previously, the `ocp4-moderate-routes-protected-by-tls` rule incorrectly checked TLS settings, resulting in the rule failing the check even if the connection used a secure SSL/TLS protocol. Now, the check properly evaluates TLS settings that are consistent with the networking guidance and profile recommendations. (BZ#2002695)
- Previously, `ocp-cis-configure-network-policies-namespace` used pagination when requesting namespaces. This caused the rule to fail because the deployments truncated lists of more than 500 namespaces. Now, the entire namespace list is requested, and the rule for checking configured network policies works for deployments with more than 500 namespaces. (BZ#2038909)
- Previously, remediations using the `sshd jinja` macros were hard-coded to specific sshd configurations. As a result, the configurations were inconsistent with the content the rules were checking for and the check would fail. Now, the sshd configuration is parameterized and the rules apply successfully. (BZ#2049141)
- Previously, the `ocp4-cluster-version-operator-verify-integrity` rule always checked the first entry in the Cluster Version Operator (CVO) history. As a result, the upgrade would fail in situations where subsequent versions of {product-title} would be verified. Now, the compliance check result for `ocp4-cluster-version-operator-verify-integrity` is able to detect verified versions and is accurate with the CVO history. (BZ#2053602)
- Previously, the `ocp4-api-server-no-adm-ctrl-plugins-disabled` rule did not check for an empty list of admission controller plugins. As a result, the rule would always fail, even if all admission plugins were enabled. Now, more robust checking of the `ocp4-api-server-no-adm-ctrl-plugins-disabled` rule accurately passes with all admission controller plugins enabled. (BZ#2058631)
- Previously, scans did not contain platform checks for running against Linux worker nodes. As a result, running scans against worker nodes that were not Linux-based resulted in a never-ending scan loop. Now, scans are scheduled appropriately based on platform type and labels, and complete successfully. (BZ#2056911)
The following advisory is available for the OpenShift Compliance Operator 0.1.48:
- Previously, some rules associated with extended Open Vulnerability and Assessment Language (OVAL) definitions had a `checkType` of `None`. This was because the Compliance Operator was not processing extended OVAL definitions when parsing rules. With this update, content from extended OVAL definitions is parsed so that these rules now have a `checkType` of either `Node` or `Platform`. (BZ#2040282)
- Previously, a manually created `MachineConfig` object for `KubeletConfig` prevented a `KubeletConfig` object from being generated for remediation, leaving the remediation in the `Pending` state. With this release, a `KubeletConfig` object is created by the remediation, regardless of whether there is a manually created `MachineConfig` object for `KubeletConfig`. As a result, `KubeletConfig` remediations now work as expected. (BZ#2040401)
The following advisory is available for the OpenShift Compliance Operator 0.1.47:
- The Compliance Operator now supports the following compliance benchmarks for the Payment Card Industry Data Security Standard (PCI DSS):
  - `ocp4-pci-dss`
  - `ocp4-pci-dss-node`
- Additional rules and remediations for the FedRAMP moderate impact level are added to the `ocp4-moderate`, `ocp4-moderate-node`, and `rhcos4-moderate` profiles.
- Remediations for `KubeletConfig` are now available in node-level profiles.
- Previously, if your cluster was running {product-title} 4.6 or earlier, remediations for USBGuard-related rules would fail for the moderate profile. This is because the remediations created by the Compliance Operator were based on an older version of USBGuard that did not support drop-in directories. Now, invalid remediations for USBGuard-related rules are not created for clusters running {product-title} 4.6. If your cluster is using {product-title} 4.6, you must manually create remediations for USBGuard-related rules.
Additionally, remediations are created only for rules that satisfy minimum version requirements. (BZ#1965511)
- Previously, when rendering remediations, the Compliance Operator would check that the remediation was well-formed by using a regular expression that was too strict. As a result, some remediations, such as those that render `sshd_config`, would not pass the regular expression check and, therefore, were not created. The regular expression was found to be unnecessary and removed. Remediations now render correctly. (BZ#2033009)
The following advisory is available for the OpenShift Compliance Operator 0.1.44:
-
In this release, the
strictNodeScan
option is now added to theComplianceScan
,ComplianceSuite
andScanSetting
CRs. This option defaults totrue
which matches the previous behavior, where an error occurred if a scan was not able to be scheduled on a node. Setting the option tofalse
allows the Compliance Operator to be more permissive about scheduling scans. Environments with ephemeral nodes can set thestrictNodeScan
value to false, which allows a compliance scan to proceed, even if some of the nodes in the cluster are not available for scheduling. -
You can now customize the node that is used to schedule the result server workload by configuring the
nodeSelector
andtolerations
attributes of theScanSetting
object. These attributes are used to place theResultServer
pod, the pod that is used to mount a PV storage volume and store the raw Asset Reporting Format (ARF) results. Previously, thenodeSelector
and thetolerations
parameters defaulted to selecting one of the control plane nodes and tolerating thenode-role.kubernetes.io/master taint
. This did not work in environments where control plane nodes are not permitted to mount PVs. This feature provides a way for you to select the node and tolerate a different taint in those environments. -
The Compliance Operator can now remediate
KubeletConfig
objects. -
A comment containing an error message is now added to help content developers differentiate between objects that do not exist in the cluster compared to objects that cannot be fetched.
-
Rule objects now contain two new attributes, checkType and description. These attributes allow you to determine whether the rule pertains to a node check or a platform check, and also allow you to review what the rule does. -
This enhancement removes the requirement to extend an existing profile to create a tailored profile. The extends field in the TailoredProfile CRD is no longer mandatory. You can now select a list of rule objects to create a tailored profile. Note that you must specify whether your profile applies to nodes or to the platform by setting the compliance.openshift.io/product-type: annotation or by setting the -node suffix for the TailoredProfile CR. -
In this release, the Compliance Operator can schedule scans on all nodes irrespective of their taints. Previously, the scan pods only tolerated the node-role.kubernetes.io/master taint, meaning that they ran either on nodes with no taints or only on nodes with the node-role.kubernetes.io/master taint. In deployments that use custom taints for their nodes, this resulted in the scans not being scheduled on those nodes. Now, the scan pods tolerate all node taints. -
In this release, the Compliance Operator supports the following North American Electric Reliability Corporation (NERC) security profiles:
-
ocp4-nerc-cip
-
ocp4-nerc-cip-node
-
rhcos4-nerc-cip
-
-
In this release, the Compliance Operator supports the NIST 800-53 Moderate-Impact Baseline for the Red Hat OpenShift - Node level, ocp4-moderate-node, security profile.
-
In this release, the remediation template now allows multi-value variables.
-
With this update, the Compliance Operator can change remediations based on variables that are set in the compliance profile. This is useful for remediations that include deployment-specific values such as timeouts, NTP server host names, or similar. Additionally, ComplianceCheckResult objects now use the label compliance.openshift.io/check-has-value, which lists the variables a check has used.
-
Previously, while performing a scan, an unexpected termination occurred in one of the scanner containers of the pods. In this release, the Compliance Operator uses the latest OpenSCAP version 1.3.5 to avoid a crash.
-
Previously, using autoApplyRemediations to apply remediations triggered an update of the cluster nodes. This was disruptive if some of the remediations did not include all of the required input variables. Now, if a remediation is missing one or more required input variables, it is assigned a state of NeedsReview. If one or more remediations are in a NeedsReview state, the machine config pool remains paused and the remediations are not applied until all of the required variables are set. This helps minimize disruption to the nodes. -
The RBAC Role and RoleBinding used for Prometheus metrics were changed to ClusterRole and ClusterRoleBinding to ensure that monitoring works without customization. -
-
Previously, if an error occurred while parsing a profile, rules or variables objects were removed and deleted from the profile. Now, if an error occurs during parsing, the
profileparser
annotates the object with a temporary annotation that prevents the object from being deleted until after parsing completes. (BZ#1988259) -
Previously, an error occurred if titles or descriptions were missing from a tailored profile. Because the XCCDF standard requires titles and descriptions for tailored profiles, titles and descriptions are now required to be set in
TailoredProfile
CRs. -
Previously, when using tailored profiles, TailoredProfile variable values could be set using only a specific selection set. This restriction is now removed, and TailoredProfile variables can be set to any value.
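The following is a minimal ScanSetting sketch illustrating the scheduling options described in these notes; the object name, storage size, schedule, and toleration are illustrative placeholders, not prescribed values:
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: relaxed-scansetting              # illustrative name
  namespace: openshift-compliance
strictNodeScan: false                    # let scans proceed even if some nodes cannot be scheduled
rawResultStorage:
  size: 1Gi                              # illustrative PV size for raw ARF results
  rotation: 3
  nodeSelector:
    node-role.kubernetes.io/worker: ""   # place the ResultServer pod on worker nodes
  tolerations:
  - operator: Exists                     # illustrative; tolerate any taint on the selected nodes
roles:
- worker
- master
schedule: '0 1 * * *'                    # illustrative cron schedule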
The following advisory is available for the OpenShift Compliance Operator 0.1.39:
-
Previously, the Compliance Operator was unable to parse Payment Card Industry Data Security Standard (PCI DSS) references. Now, the Operator can parse compliance content that ships with PCI DSS profiles.
-
Previously, the Compliance Operator was unable to execute rules for the AU-5 control in the moderate profile. Now, permission is added to the Operator so that it can read prometheusrules.monitoring.coreos.com objects and run the rules that cover the AU-5 control in the moderate profile.
-
Understanding the Compliance Operator ./compliance_operator/compliance-operator-remediation.adoc :_content-type: ASSEMBLY
= Managing Compliance Operator result and remediation _attributes/common-attributes.adoc :context: compliance-remediation
Each ComplianceCheckResult represents the result of one compliance rule check. If the rule can be remediated automatically, a ComplianceRemediation object with the same name, owned by the ComplianceCheckResult, is created. Unless requested, the remediations are not applied automatically, which gives an {product-title} administrator the opportunity to review what the remediation does and apply a remediation only after it has been verified.
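As a hedged sketch (the remediation name is hypothetical), an administrator requests that a reviewed remediation be applied by setting spec.apply to true on the ComplianceRemediation object, for example by patching or editing the object so that it contains:
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceRemediation
metadata:
  name: workers-scan-no-empty-passwords   # hypothetical remediation name
  namespace: openshift-compliance
spec:
  apply: true                             # request that the Operator applies this remediation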
-
Modifying nodes. ./compliance_operator/compliance-operator-supported-profiles.adoc :_content-type: ASSEMBLY
= Supported compliance profiles _attributes/common-attributes.adoc :context: compliance-operator-supported-profiles
There are several profiles available as part of the Compliance Operator (CO) installation.
Important
|
The Compliance Operator might report incorrect results on managed platforms, such as OpenShift Dedicated, Red Hat OpenShift Service on AWS, and Azure Red Hat OpenShift. For more information, see the Red Hat Knowledgebase Solution #6983418. |
-
Compliance Operator profile types./compliance_operator/compliance-operator-tailor.adoc :_content-type: ASSEMBLY
= Tailoring the Compliance Operator _attributes/common-attributes.adoc :context: compliance-tailor
While the Compliance Operator comes with ready-to-use profiles, they must be modified to fit the organization's needs and requirements. The process of modifying a profile is called tailoring.
The Compliance Operator provides the TailoredProfile
object to help tailor profiles.
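The following is a minimal TailoredProfile sketch, assuming the rhcos4-moderate profile is installed; the object name, rule name, variable name, and rationales are illustrative only:
apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
  name: rhcos4-moderate-modified          # illustrative name
  namespace: openshift-compliance
spec:
  extends: rhcos4-moderate                # optional; omit it and select rules directly instead
  title: Moderate profile with site-specific exclusions
  description: Example tailoring for documentation purposes
  disableRules:
  - name: rhcos4-grub2-password           # illustrative rule name
    rationale: Bootloader passwords are managed out of band
  setValues:
  - name: rhcos4-var-selinux-state        # illustrative variable name
    rationale: Keep SELinux enforcing
    value: enforcing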
modules/compliance-tailored-profiles.adoc ./compliance_operator/compliance-operator-troubleshooting.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: compliance-troubleshooting
This section describes how to troubleshoot the Compliance Operator. The information can be useful either to diagnose a problem or provide information in a bug report. Some general tips:
-
The Compliance Operator emits Kubernetes events when something important happens. You can either view all events in the cluster using the command:
$ oc get events -n openshift-compliance
Or view events for an object like a scan using the command:
$ oc describe -n openshift-compliance compliancescan/cis-compliance
-
The Compliance Operator consists of several controllers, approximately one per API object. It could be useful to filter only those controllers that correspond to the API object having issues. If a ComplianceRemediation cannot be applied, view the messages from the remediationctrl controller. You can filter the messages from a single controller by parsing with jq, for example to view only the profilebundlectrl controller messages:
$ oc -n openshift-compliance logs compliance-operator-775d7bddbd-gj58f \ | jq -c 'select(.logger == "profilebundlectrl")'
-
The timestamps are logged as seconds since the UNIX epoch in UTC. To convert them to a human-readable date, use date -d @timestamp --utc, for example:
$ date -d @1596184628.955853 --utc
-
Many custom resources, most importantly ComplianceSuite and ScanSetting, allow the debug option to be set. Enabling this option increases verbosity of the OpenSCAP scanner pods, as well as some other helper pods. -
If a single rule is passing or failing unexpectedly, it could be helpful to run a single scan or a suite with only that rule. Find the rule ID from the corresponding ComplianceCheckResult object and use it as the rule attribute value in a Scan CR. Then, with the debug option enabled, the scanner container logs in the scanner pod show the raw OpenSCAP logs. A minimal example follows this list.
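As a hedged sketch of the single-rule debugging approach (the profile, rule, content file, and content image are illustrative placeholders), a raw ComplianceScan limited to one rule with debug enabled might look like this:
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceScan
metadata:
  name: single-rule-debug-scan                               # illustrative name
  namespace: openshift-compliance
spec:
  scanType: Node
  profile: xccdf_org.ssgproject.content_profile_moderate     # illustrative profile ID
  rule: xccdf_org.ssgproject.content_rule_no_netrc_files     # rule ID taken from the ComplianceCheckResult
  content: ssg-rhcos4-ds.xml                                 # illustrative data stream file
  contentImage: quay.io/complianceascode/ocp4:latest         # illustrative content image
  debug: true                                                # surfaces raw OpenSCAP logs in the scanner container
  nodeSelector:
    node-role.kubernetes.io/worker: ""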
include::modules/support.adoc[leveloffset=+1]./compliance_operator/compliance-operator-understanding.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: understanding-compliance
The Compliance Operator lets {product-title} administrators describe the required compliance state of a cluster and provides them with an overview of gaps and ways to remediate them. The Compliance Operator assesses compliance of both the Kubernetes API resources of {product-title} and the nodes running the cluster. The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies provided by the content.
Important
|
The Compliance Operator is available for {op-system-first} deployments only. |
-
Supported compliance profiles ./compliance_operator/compliance-operator-uninstallation.adoc :_content-type: ASSEMBLY
= Uninstalling the Compliance Operator _attributes/common-attributes.adoc :context: compliance-operator-uninstallation
You can remove the OpenShift Compliance Operator from your cluster by using the {product-title} web console or the CLI.
include::modules/compliance-operator-cli-uninstall.adoc[leveloffset=+1]./compliance_operator/compliance-operator-updating.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: compliance-operator-updating
As a cluster administrator, you can update the Compliance Operator on your {product-title} cluster.
modules/olm-preparing-upgrade.adoc modules/olm-changing-update-channel.adoc modules/olm-approving-pending-upgrade.adoc
_attributes/common-attributes.adoc :context: compliance-operator-scans
The ScanSetting and ScanSettingBinding APIs are recommended for running compliance scans with the Compliance Operator. For more information on these API objects, run:
$ oc explain scansettings
or
$ oc explain scansettingbindings
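As a minimal sketch, assuming the default ScanSetting created at installation and the ocp4-cis profile shipped with the Operator, a ScanSettingBinding that ties a profile to scan settings might look like this:
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: cis-compliance                   # illustrative name
  namespace: openshift-compliance
profiles:
- name: ocp4-cis                         # profile provided by the Compliance Operator
  kind: Profile
  apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default                          # ScanSetting created when the Operator is installed
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1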
include::modules/compliance-scheduling-pods-with-resource-requests.adoc[leveloffset=+1]./compliance_operator/oc-compliance-plug-in-using.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: oc-compliance-plug-in-understanding
Although the Compliance Operator automates many of the checks and remediations for the cluster, the full process of bringing a cluster into compliance often requires administrator interaction with the Compliance Operator API and other components. The oc-compliance
plugin makes the process easier.
modules/oc-compliance-viewing-compliance-check-result-details.adoc ./container_security/security-build.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: security-build
In a container environment, the software build process is the stage in the life cycle where application code is integrated with the required runtime libraries. Managing this build process is key to securing the software stack.
-
Viewing application composition using the Topology view ./container_security/security-compliance.adoc :_content-type: ASSEMBLY
= Understanding compliance _attributes/common-attributes.adoc :context: security-compliance
For many {product-title} customers, some level of regulatory readiness, or compliance, is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards, or the organization's corporate governance framework.
-
Installing a cluster in FIPS mode ./container_security/security-container-content.adoc :_content-type: ASSEMBLY
= Securing container content _attributes/common-attributes.adoc :context: security-container-content
To ensure the security of the content inside your containers, you need to start with trusted base images, such as Red Hat Universal Base Images, and add trusted software. To check the ongoing security of your container images, there are both Red Hat and third-party tools for scanning images.
-
Image stream objects ./container_security/security-container-signature.adoc :_content-type: ASSEMBLY
= Container image signatures _attributes/common-attributes.adoc :context: security-container-signature
Red Hat delivers signatures for the images in the Red Hat Container Registries. Those signatures can be automatically verified when the images are pulled to {product-title} 4 clusters by using the Machine Config Operator (MCO).
Quay.io serves most of the images that make up {product-title}, and only the release image is signed. Release images refer to the approved {product-title} images, offering a degree of protection against supply chain attacks. However, some extensions to {product-title}, such as logging, monitoring, and service mesh, are shipped as Operators from the Operator Lifecycle Manager (OLM). Those images ship from the Red Hat Ecosystem Catalog Container images registry.
To verify the integrity of those images between Red Hat registries and your infrastructure, enable signature verification.
-
Machine Config Overview ./container_security/security-deploy.adoc :_content-type: ASSEMBLY
= Deploying containers _attributes/common-attributes.adoc :context: security-deploy
You can use a variety of techniques to make sure that the containers you deploy hold the latest production-quality content and that they have not been tampered with. These techniques include setting up build triggers to incorporate the latest code and using signatures to ensure that the container comes from a trusted source and has not been modified.
-
Input secrets and config maps ./container_security/security-hardening.adoc :_content-type: ASSEMBLY
= Hardening {op-system} _attributes/common-attributes.adoc :context: security-hardening
{op-system} was created and tuned to be deployed in {product-title} with few, if any, changes needed to {op-system} nodes. Every organization adopting {product-title} has its own requirements for system hardening. As a {op-system-base} system with OpenShift-specific modifications and features added (such as Ignition, ostree, and a read-only /usr to provide limited immutability), {op-system} can be hardened just as you would any {op-system-base} system. Differences lie in the ways you manage the hardening.
A key feature of {product-title} and its Kubernetes engine is the ability to quickly scale applications and infrastructure up and down as needed. Unless it is unavoidable, you do not want to make direct changes to {op-system} by logging into a host and adding software or changing settings. You want to have the {product-title} installer and control plane manage changes to {op-system} so new nodes can be spun up without manual intervention.
So, if you are setting out to harden {op-system} nodes in {product-title} to meet your security needs, you should consider both what to harden and how to go about doing that hardening.
-
Installation configuration parameters - see
fips
-
{op-system-base} core crypto components ./container_security/security-hosts-vms.adoc :_content-type: ASSEMBLY
= Understanding host and VM security _attributes/common-attributes.adoc :context: security-hosts-vms
Both containers and virtual machines provide ways of separating applications running on a host from the operating system itself. Understanding {op-system}, which is the operating system used by {product-title}, will help you see how the host systems protect containers and hosts from each other.
modules/security-hosts-vms-openshift.adoc ./container_security/security-monitoring.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: security-monitoring
The ability to monitor and audit an {product-title} cluster is an important part of safeguarding the cluster and its users against inappropriate usage.
There are two main sources of cluster-level information that are useful for this purpose: events and logging.
-
Viewing audit logs ./container_security/security-network.adoc :_content-type: ASSEMBLY
= Securing networks _attributes/common-attributes.adoc :context: security-network
Network security can be managed at several levels. At the pod level, network namespaces can prevent containers from seeing other pods or the host system by restricting network access. Network policies give you control over allowing and rejecting connections. You can manage ingress and egress traffic to and from your containerized applications.
-
Configuring an egress firewall to control access to external IP addresses
-
Configuring egress IPs for a project ./container_security/security-platform.adoc :_content-type: ASSEMBLY
= Securing the container platform _attributes/common-attributes.adoc :context: security-platform
{product-title} and Kubernetes APIs are key to automating container management at scale. APIs are used to:
-
Validate and configure the data for pods, services, and replication controllers.
-
Perform project validation on incoming requests and invoke triggers on other major system components.
Security-related features in {product-title} that are based on Kubernetes include:
-
Multitenancy, which combines Role-Based Access Controls and network policies to isolate containers at multiple levels.
-
Admission plugins, which form boundaries between an API and those making requests to the API.
{product-title} uses Operators to automate and simplify the management of Kubernetes-level security features.
-
Proxy certificates ./container_security/security-registries.adoc :_content-type: ASSEMBLY
= Using container registries securely _attributes/common-attributes.adoc :context: security-registries
Container registries store container images to:
-
Make images accessible to others
-
Organize images into repositories that can include multiple versions of an image
-
Optionally limit access to images, based on different authentication methods, or make them publicly available
There are public container registries, such as Quay.io and Docker Hub, where many people and organizations share their images. The Red Hat Registry offers supported Red Hat and partner images, while the Red Hat Ecosystem Catalog offers detailed descriptions and health checks for those images. To manage your own registry, you could purchase a container registry such as Red Hat Quay.
From a security standpoint, some registries provide special features to check and improve the health of your containers. For example, Red Hat Quay offers container vulnerability scanning with Clair security scanner, build triggers to automatically rebuild images when source code changes in GitHub and other locations, and the ability to use role-based access control (RBAC) to secure access to images.
modules/security-registries-quay.adoc ./container_security/security-storage.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: security-storage
{product-title} supports multiple types of storage, both for on-premise and cloud providers. In particular, {product-title} can use storage types that support the Container Storage Interface.
-
Persistent storage using GCE Persistent Disk ./container_security/security-understanding.adoc :_content-type: ASSEMBLY
= Understanding container security _attributes/common-attributes.adoc :context: security-understanding
Securing a containerized application relies on multiple levels of security:
-
Container security begins with a trusted base container image and continues through the container build process as it moves through your CI/CD pipeline.
Important: Image streams by default do not automatically update. This default behavior might create a security issue because security updates to images referenced by an image stream do not automatically occur. For information about how to override this default behavior, see Configuring periodic importing of imagestreamtags.
-
When a container is deployed, its security depends on it running on secure operating systems and networks, and establishing firm boundaries between the container itself and the users and hosts that interact with it.
-
Continued security relies on being able to scan container images for vulnerabilities and having an efficient way to correct and replace vulnerable images.
Beyond what a platform such as {product-title} offers out of the box, your organization will likely have its own security demands. Some level of compliance verification might be needed before you can even bring {product-title} into your data center.
Likewise, you may need to add your own agents, specialized hardware drivers, or encryption features to {product-title}, before it can meet your organization’s security standards.
This guide provides a high-level walkthrough of the container security measures available in {product-title}, including solutions for the host layer, the container and orchestration layer, and the build and application layer. It then points you to specific {product-title} documentation to help you achieve those security measures.
This guide contains the following information:
-
Why container security is important and how it compares with existing security standards.
-
Which container security measures are provided by the host ({op-system} and {op-system-base}) layer and which are provided by {product-title}.
-
How to evaluate your container content and sources for vulnerabilities.
-
How to design your build and deployment process to proactively check container content.
-
How to control access to containers through authentication and authorization.
-
How networking and attached storage are secured in {product-title}.
-
Containerized solutions for API management and SSO.
The goal of this guide is to help you understand the security benefits of using {product-title} for your containerized workloads and how the entire Red Hat ecosystem plays a part in making and keeping containers secure. It will also help you understand how you can engage with {product-title} to achieve your organization's security goals.
-
OpenShift Security Guide ./encrypting-etcd.adoc :_content-type: ASSEMBLY
= Encrypting etcd data _attributes/common-attributes.adoc :context: encrypting-etcd
modules/disabling-etcd-encryption.adoc ./file_integrity_operator/file-integrity-operator-advanced-usage.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: file-integrity-operator
modules/file-integrity-operator-exploring-daemon-sets.adoc ./file_integrity_operator/file-integrity-operator-configuring.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: file-integrity-operator
modules/file-integrity-operator-changing-custom-config.adoc ./file_integrity_operator/file-integrity-operator-installation.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: file-integrity-operator-installation
-
The File Integrity Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks. ./file_integrity_operator/file-integrity-operator-release-notes.adoc :_content-type: ASSEMBLY
= File Integrity Operator release notes :context: file-integrity-operator-release-notes-v0 _attributes/common-attributes.adoc
The File Integrity Operator for {product-title} continually runs file integrity checks on {op-system} nodes.
These release notes track the development of the File Integrity Operator in {product-title}.
For an overview of the File Integrity Operator, see Understanding the File Integrity Operator.
To access the latest release, see Updating the File Integrity Operator.
The following advisory is available for the OpenShift File Integrity Operator 1.2.1:
-
RHBA-2023:1684 OpenShift File Integrity Operator Bug Fix Update
-
This release includes updated container dependencies.
The following advisory is available for the OpenShift File Integrity Operator 1.2.0:
-
The File Integrity Operator Custom Resource (CR) now contains an
initialDelay
feature that specifies the number of seconds to wait before starting the first AIDE integrity check. For more information, see Creating the FileIntegrity custom resource. -
The File Integrity Operator is now stable and the release channel is upgraded to
stable
. Future releases will follow Semantic Versioning. To access the latest release, see Updating the File Integrity Operator.
The following advisory is available for the OpenShift File Integrity Operator 1.0.0:
The following advisory is available for the OpenShift File Integrity Operator 0.1.32:
-
Previously, alerts issued by the File Integrity Operator did not set a namespace, making it difficult to understand from which namespace the alert originated. Now, the Operator sets the appropriate namespace, providing more information about the alert. (BZ#2112394)
-
Previously, the File Integrity Operator did not update the metrics service on Operator startup, causing the metrics targets to be unreachable. With this release, the File Integrity Operator now ensures the metrics service is updated on Operator startup. (BZ#2115821)
The following advisory is available for the OpenShift File Integrity Operator 0.1.30:
-
The File Integrity Operator is now supported on the following architectures:
-
IBM Power
-
IBM Z and LinuxONE
-
-
Previously, alerts issued by the File Integrity Operator did not set a namespace, making it difficult to understand where the alert originated. Now, the Operator sets the appropriate namespace, increasing understanding of the alert. (BZ#2101393)
The following advisory is available for the OpenShift File Integrity Operator 0.1.24:
-
You can now configure the maximum number of stored backups by using the config.maxBackups attribute in the FileIntegrity Custom Resource (CR). This attribute specifies the number of AIDE database and log backups from the re-init process to keep on the node. Older backups beyond the configured number are automatically pruned. The default is set to five backups.
-
Previously, upgrading the Operator from versions older than 0.1.21 to 0.1.22 could cause the re-init feature to fail. This was a result of the Operator failing to update configMap resource labels. Now, upgrading to the latest version fixes the resource labels. (BZ#2049206) -
Previously, when enforcing the default configMap script contents, the wrong data keys were compared. This resulted in the aide-reinit script not being updated properly after an Operator upgrade, and caused the re-init process to fail. Now, daemonSets run to completion and the AIDE database re-init process executes successfully. (BZ#2072058)
The following advisory is available for the OpenShift File Integrity Operator 0.1.22:
-
Previously, a system with the File Integrity Operator installed might interrupt the {product-title} update due to the /etc/kubernetes/aide.reinit file. This occurred if the /etc/kubernetes/aide.reinit file was present but was later removed prior to the ostree validation. With this update, /etc/kubernetes/aide.reinit is moved to the /run directory so that it does not conflict with the {product-title} update. (BZ#2033311)
The following advisory is available for the OpenShift File Integrity Operator 0.1.21:
-
The metrics related to FileIntegrity scan results and processing are displayed on the monitoring dashboard on the web console. The results are labeled with the prefix file_integrity_operator_. -
If a node has an integrity failure for more than 1 second, the default
PrometheusRule
provided in the operator namespace alerts with a warning. -
The following dynamic Machine Config Operator and Cluster Version Operator related file paths are excluded from the default AIDE policy to help prevent false positives during node updates:
-
/etc/machine-config-daemon/currentconfig
-
/etc/pki/ca-trust/extracted/java/cacerts
-
/etc/cvo/updatepayloads
-
/root/.kube
-
-
The AIDE daemon process has stability improvements over v0.1.16, and is more resilient to errors that might occur when the AIDE database is initialized.
-
Understanding the File Integrity Operator ./file_integrity_operator/file-integrity-operator-troubleshooting.adoc :_content-type: ASSEMBLY
= Troubleshooting the File Integrity Operator _attributes/common-attributes.adoc :context: file-integrity-operator
- Issue
-
You want to generally troubleshoot issues with the File Integrity Operator.
- Resolution
-
Enable the debug flag in the FileIntegrity object. The debug flag increases the verbosity of the daemons that run in the DaemonSet pods and run the AIDE checks.
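As a minimal sketch (the object name and node selector are illustrative), the flag is set in the spec of the FileIntegrity object:
apiVersion: fileintegrity.openshift.io/v1alpha1
kind: FileIntegrity
metadata:
  name: worker-fileintegrity             # illustrative name
  namespace: openshift-file-integrity
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  debug: true                            # increases verbosity of the AIDE daemon set pods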
- Issue
-
You want to check the AIDE configuration.
- Resolution
-
The AIDE configuration is stored in a config map with the same name as the FileIntegrity object. All AIDE configuration config maps are labeled with file-integrity.openshift.io/aide-conf.
- Issue
-
You want to determine if the
FileIntegrity
object exists and see its current status. - Resolution
-
To see the
FileIntegrity
object’s current status, run:$ oc get fileintegrities/worker-fileintegrity -o jsonpath="{ .status }"
Once the
FileIntegrity
object and the backing daemon set are created, the status should switch toActive
. If it does not, check the Operator pod logs.
- Issue
-
You want to confirm that the daemon set exists and that its pods are running on the nodes you expect them to run on.
- Resolution
-
Run:
$ oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity
Note: Adding -owide includes the IP address of the node that the pod is running on.
To check the logs of the daemon pods, run oc logs.
Check the return value of the AIDE command to see if the check passed or failed. ./file_integrity_operator/file-integrity-operator-understanding.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: file-integrity-operator
The File Integrity Operator is an {product-title} Operator that continually runs file integrity checks on the cluster nodes. It deploys a daemon set that initializes and runs privileged advanced intrusion detection environment (AIDE) containers on each node, providing a status object with a log of files that are modified during the initial run of the daemon set pods.
Important
|
Currently, only {op-system-first} nodes are supported. |
modules/file-integrity-understanding-file-integrity-cr.adoc modules/checking-file-intergrity-cr-status.adoc modules/file-integrity-CR-phases.adoc modules/file-integrity-understanding-file-integrity-node-statuses-object.adoc modules/file-integrity-node-status.adoc modules/file-integrity-node-status-success.adoc modules/file-integrity-node-status-failure.adoc modules/file-integrity-events.adoc ./file_integrity_operator/file-integrity-operator-updating.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: file-integrity-operator-updating
As a cluster administrator, you can update the File Integrity Operator on your {product-title} cluster.
modules/olm-preparing-upgrade.adoc modules/olm-changing-update-channel.adoc modules/olm-approving-pending-upgrade.adoc
_attributes/common-attributes.adoc :context: security-compliance-overview
It is important to understand how to properly secure various aspects of your {product-title} cluster.
A good starting point to understanding {product-title} security is to review the concepts in Understanding container security. This and subsequent sections provide a high-level walkthrough of the container security measures available in {product-title}, including solutions for the host layer, the container and orchestration layer, and the build and application layer. These sections also include information on the following topics:
-
Why container security is important and how it compares with existing security standards.
-
Which container security measures are provided by the host ({op-system} and {op-system-base}) layer and which are provided by {product-title}.
-
How to evaluate your container content and sources for vulnerabilities.
-
How to design your build and deployment process to proactively check container content.
-
How to control access to containers through authentication and authorization.
-
How networking and attached storage are secured in {product-title}.
-
Containerized solutions for API management and SSO.
{product-title} auditing provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. Administrators can configure the audit log policy and view audit logs.
Certificates are used by various components to validate access to the cluster. Administrators can replace the default ingress certificate, add API server certificates, or add a service certificate.
You can also review more details about the types of certificates used by the cluster:
You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect against the loss of sensitive data if an etcd backup is exposed to the incorrect parties.
Administrators can use the {rhq-cso} to run vulnerability scans and review information about detected vulnerabilities.
For many {product-title} customers, some level of regulatory readiness, or compliance, is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards, or the organization's corporate governance framework.
Administrators can use the Compliance Operator to run compliance scans and recommend remediations for any issues found. The oc-compliance
plugin is an OpenShift CLI (oc
) plugin that provides a set of utilities to easily interact with the Compliance Operator.
Administrators can use the File Integrity Operator to continually run file integrity checks on cluster nodes and provide a log of files that have been modified.
-
Managing security context constraints ./network_bound_disk_encryption/nbde-about-disk-encryption-technology.adoc :_content-type: ASSEMBLY
= About disk encryption technology _attributes/common-attributes.adoc :context: nbde-implementation
Network-Bound Disk Encryption (NBDE) allows you to encrypt root volumes of hard drives on physical and virtual machines without having to manually enter a password when restarting machines.
modules/nbde-logging-considerations.adoc ./network_bound_disk_encryption/nbde-disaster-recovery-considerations.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: nbde-implementation
This section describes several potential disaster situations and the procedures to respond to each of them. Additional situations will be added here as they are discovered or presumed likely to be possible.
modules/nbde-compromise-of-key-material.adoc ./network_bound_disk_encryption/nbde-managing-encryption-keys.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: nbde-implementation
The cryptographic mechanism to recreate the encryption key is based on the blinded key stored on the node and the private key of the involved Tang servers. To protect against the possibility of an attacker who has obtained both the Tang server private key and the node’s encrypted disk, periodic rekeying is advisable.
You must perform the rekeying operation for every node before you can delete the old key from the Tang server. The following sections provide procedures for rekeying and deleting old keys.
modules/nbde-deleting-old-tang-server-keys.adoc ./network_bound_disk_encryption/nbde-tang-server-installation-considerations.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: nbde-implementation
-
Configuring automated unlocking of encrypted volumes using policy-based decryption
-
Encrypting and mirroring disks during installation ./pod-vulnerability-scan.adoc :_content-type: ASSEMBLY
= Scanning pods for vulnerabilities _attributes/common-attributes.adoc :context: pod-vulnerability-scan
Using the {rhq-cso}, you can access vulnerability scan results from the {product-title} web console for container images used in active pods on the cluster. The {rhq-cso}:
-
Watches containers associated with pods on all or specified namespaces
-
Queries the container registry where the containers came from for vulnerability information, provided an image’s registry is running image scanning (such as Quay.io or a Red Hat Quay registry with Clair scanning)
-
Exposes vulnerabilities via the
ImageManifestVuln
object in the Kubernetes API
Using the instructions here, the {rhq-cso} is installed in the openshift-operators
namespace, so it is available to all namespaces on your {product-title} cluster.
modules/security-pod-scan-query-cli.adoc ./seccomp-profiles.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: configuring-seccomp-profiles
An {product-title} container or a pod runs a single application that performs one or more well-defined tasks. The application usually requires only a small subset of the underlying operating system kernel APIs. Secure computing mode, seccomp, is a Linux kernel feature that can be used to limit the process running in a container to only using a subset of the available system calls.
The restricted-v2
SCC applies to all newly created pods in {product-version}. The default seccomp profile runtime/default
is applied to these pods.
Seccomp profiles are stored as JSON files on the disk.
Important
|
Seccomp profiles cannot be applied to privileged containers. |
You can configure a custom seccomp profile, which allows you to update the filters based on the application requirements. This allows cluster administrators to have greater control over the security of workloads running in {product-title}.
Seccomp security profiles list the system calls (syscalls) that a process can make. Permissions are broader than SELinux, which restricts operations, such as write, system-wide.
modules/creating-custom-seccomp-profile.adoc modules/setting-custom-seccomp-profile.adoc modules/applying-custom-seccomp-profile.adoc
During deployment, the admission controller validates the following:
-
The annotations against the current SCCs allowed by the user role.
-
That the SCC, which includes the seccomp profile, is allowed for the pod.
If the SCC is allowed for the pod, the kubelet runs the pod with the specified seccomp profile.
Important
|
Ensure that the seccomp profile is deployed to all worker nodes. |
Note
|
The custom SCC must have the appropriate priority to be automatically assigned to the pod or meet other conditions required by the pod, such as allowing CAP_NET_ADMIN. |
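As a hedged sketch (the profile file name is hypothetical and must already exist under the kubelet seccomp directory on each node that can run the pod), a pod that requests a custom localhost seccomp profile might specify:
apiVersion: v1
kind: Pod
metadata:
  name: custom-seccomp-pod                                   # illustrative name
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: my-custom-profile.json               # hypothetical file relative to the kubelet seccomp root
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal       # illustrative image
    command: ["sleep", "infinity"]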
-
Post-installation machine configuration tasks ./security_profiles_operator/spo-advanced.adoc :_content-type: ASSEMBLY
= Advanced Security Profiles Operator tasks _attributes/common-attributes.adoc :context: spo-advanced
Use advanced tasks to enable metrics, configure webhooks, or restrict syscalls.
include::modules/spo-configuring-webhooks.adoc[leveloffset=+1]./security_profiles_operator/spo-enabling.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: spo-enabling
Before you can use the Security Profiles Operator, you must ensure the Operator is deployed in the cluster.
include::modules/spo-logging-verbosity.adoc[leveloffset=+1]./security_profiles_operator/spo-overview.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: spo-overview
{product-title} Security Profiles Operator (SPO) provides a way to define secure computing (seccomp) profiles and SELinux profiles as custom resources, synchronizing profiles to every node in a given namespace. For the latest updates, see the release notes.
The SPO can distribute custom resources to each node while a reconciliation loop ensures that the profiles stay up-to-date. See Understanding the Security Profiles Operator.
The SPO manages SELinux policies and seccomp profiles for namespaced workloads. For more information, see Enabling the Security Profiles Operator.
You can create seccomp and SELinux profiles, bind policies to pods, record workloads, and synchronize all worker nodes in a namespace.
Use advanced Security Profiles Operator tasks to enable the log enricher, configure webhooks and metrics, or restrict profiles to a single namespace.
Troubleshoot the Security Profiles Operator as needed, or engage Red Hat support.
You can Uninstall the Security Profiles Operator by removing the profiles before removing the Operator. ./security_profiles_operator/spo-release-notes.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: spo-release-notes
The Security Profiles Operator provides a way to define secure computing (seccomp) and SELinux profiles as custom resources, synchronizing profiles to every node in a given namespace.
These release notes track the development of the Security Profiles Operator in {product-title}.
For an overview of the Security Profiles Operator, see Security Profiles Operator Overview.
The following advisory is available for the Security Profiles Operator 0.7.1:
-
Security Profiles Operator (SPO) now automatically selects the appropriate selinuxd image for RHEL 8- and 9-based RHCOS systems.
Important: Users that mirror images for disconnected environments must mirror both selinuxd images provided by the Security Profiles Operator. -
You can now enable memory optimization inside of an spod daemon. For more information, see Enabling memory optimization in the spod daemon.
Note: SPO memory optimization is not enabled by default.
-
The daemon resource requirements are now configurable. For more information, see Customizing daemon resource requirements.
-
The priority class name is now configurable in the
spod
configuration. For more information, see Setting a custom priority class name for the spod daemon pod.
-
The default
nginx-1.19.1
seccomp profile is now removed from the Security Profiles Operator deployment.
-
Previously, a Security Profiles Operator (SPO) SELinux policy did not inherit low-level policy definitions from the container template. If you selected another template, such as net_container, the policy would not work because it required low-level policy definitions that only existed in the container template. This issue occurred when the SPO SELinux policy attempted to translate SELinux policies from the SPO custom format to the Common Intermediate Language (CIL) format. With this update, the container template appends to any SELinux policies that require translation from SPO to CIL. Additionally, the SPO SELinux policy can inherit low-level policy definitions from any supported policy template. (OCPBUGS-12879)
-
When uninstalling the Security Profiles Operator, the MutatingWebhookConfiguration object is not deleted and must be manually removed. As a workaround, delete the MutatingWebhookConfiguration object after uninstalling the Security Profiles Operator. These steps are defined in Uninstalling the Security Profiles Operator. (OCPBUGS-4687)
The following advisory is available for the Security Profiles Operator 0.5.2:
This update addresses a CVE in an underlying dependency.
-
When uninstalling the Security Profiles Operator, the MutatingWebhookConfiguration object is not deleted and must be manually removed. As a workaround, delete the MutatingWebhookConfiguration object after uninstalling the Security Profiles Operator. These steps are defined in Uninstalling the Security Profiles Operator. (OCPBUGS-4687)
The following advisory is available for the Security Profiles Operator 0.5.0:
-
When uninstalling the Security Profiles Operator, the MutatingWebhookConfiguration object is not deleted and must be manually removed. As a workaround, delete the MutatingWebhookConfiguration object after uninstalling the Security Profiles Operator. These steps are defined in Uninstalling the Security Profiles Operator. (OCPBUGS-4687) ./security_profiles_operator/spo-seccomp.adoc :_content-type: ASSEMBLY
= Managing seccomp profiles _attributes/common-attributes.adoc :context: spo-seccomp
Create and manage seccomp profiles and bind them to workloads.
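As a hedged sketch (the API version and fields follow the upstream security-profiles-operator project; the profile name and namespace are illustrative), a SeccompProfile custom resource that only logs syscalls might look like this:
apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
  name: profile-log-all                  # illustrative name
  namespace: my-namespace                # hypothetical namespace where the workload runs
spec:
  defaultAction: SCMP_ACT_LOG            # log every syscall instead of blocking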
-
About security profiles./security_profiles_operator/spo-selinux.adoc :_content-type: ASSEMBLY
= Managing SELinux profiles _attributes/common-attributes.adoc :context: spo-selinux
Create and manage SELinux profiles and bind them to workloads.
-
About security profiles./security_profiles_operator/spo-troubleshooting.adoc :_content-type: ASSEMBLY
= Troubleshooting the Security Profiles Operator _attributes/common-attributes.adoc :context: spo-troubleshooting
Troubleshoot the Security Profiles Operator to diagnose a problem or provide information in a bug report.
_attributes/common-attributes.adoc :context: spo-understanding
{product-title} administrators can use the Security Profiles Operator to define increased security measures in clusters.
include::modules/spo-about.adoc[leveloffset=+1]./security_profiles_operator/spo-uninstalling.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: spo-uninstalling
You can remove the Security Profiles Operator from your cluster by using the {product-title} web console.
include::modules/spo-uninstall-console.adoc[leveloffset=+1]./tls-security-profiles.adoc :_content-type: ASSEMBLY
_attributes/common-attributes.adoc :context: tls-security-profiles
TLS security profiles provide a way for servers to regulate which ciphers a client can use when connecting to the server. This ensures that {product-title} components use cryptographic libraries that do not allow known insecure protocols, ciphers, or algorithms.
Cluster administrators can choose which TLS security profile to use for each of the following components:
-
the Ingress Controller
-
the control plane
This includes the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, OpenShift API server, OpenShift OAuth API server, OpenShift OAuth server, and etcd.
-
the kubelet, when it acts as an HTTP server for the Kubernetes API server
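As a hedged illustration (assuming the default IngressController in the openshift-ingress-operator namespace), choosing the Intermediate TLS security profile for the Ingress Controller might look like this:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  tlsSecurityProfile:
    type: Intermediate                   # one of Old, Intermediate, Modern, or Custom
    intermediate: {}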