- CKS
- Common paths:
# Certificate path
/etc/kubernetes/pki/
# kubelet certificate path
/var/lib/kubelet/pki/
# kubernetes scheduler
/etc/kubernetes/scheduler.conf
# kubernetes controller manager
/etc/kubernetes/controller-manager.conf
# kubernetes api server manifest
/etc/kubernetes/manifests/kube-apiserver.yaml
# kubelet. Can use as kubeconfig as well
/etc/kubernetes/kubelet.conf
/etc/default/kubelet
/var/lib/kubelet/config.yaml
/etc/systemd/system/kubelet.service.d/
# certificate mountpoint inside the pod
/run/secrets/kubernetes.io/serviceaccount
# etcd secret path
/registry/secrets/<namespace>/<secret-name>
# pod logs path
/var/log/pods
# admission controller path
/etc/kubernetes/admission/
- To generate a self-signed TLS certificate
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
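The resulting certificate can also be inspected with openssl, e.g. to check its subject and expiry (a quick sketch; the CN and filenames are arbitrary placeholders):

```shell
# Generate a throwaway self-signed certificate non-interactively,
# then print its subject and expiry date.
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 365 -nodes -subj "/CN=example"
openssl x509 -in cert.pem -noout -subject -enddate
```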
- To copy a container's filesystem to a local folder
docker cp <container-name>:/ <folder>
- To create role & rolebinding
kubectl create role <role-name> --verb=get --resource=secrets
kubectl create rolebinding <rolebinding-name> --role=<role-name> --user=<user>
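The same role and binding can be expressed declaratively (a sketch; names, namespace, and user are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-reader-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: jane
```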
- To test user permission
kubectl auth can-i <verb> <obj> --as <user>
kubectl auth can-i <verb> <obj> --as system:serviceaccount:<namespace>:<service-account-name>
- Approve certificate
kubectl certificate approve <certificate-signing-request-name>
# certificate in .status.certificate
- Create kubeconfig with certificate info
kubectl config set-credentials <user> --client-key=<key-name> --client-certificate=<cert-name>
# add --embed-certs for in-line certificate
kubectl config view
- To read secret from etcd
# Check api-server manifest to get the certs
export cert=/etc/kubernetes/pki/apiserver-etcd-client.crt
export key=/etc/kubernetes/pki/apiserver-etcd-client.key
export ca=/etc/kubernetes/pki/etcd/ca.crt
ETCDCTL_API=3 etcdctl --cert $cert --key $key --cacert $ca get /registry/secrets/<namespace>/<secret-name>
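When encryption at rest is enabled, the value stored in etcd carries a `k8s:enc:` prefix instead of plaintext. Encryption is configured by passing an EncryptionConfiguration file to the API server via `--encryption-provider-config` (a sketch; the key name and value are placeholders):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  # New writes are encrypted with aescbc; identity allows
  # reading any still-unencrypted values.
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded-32-byte-key>
  - identity: {}
```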
- To generate yaml template
kubectl run <name> --image=nginx -o yaml --dry-run=client > file.yaml
Tools:
- pstree -p
- strace -cw
- docker run with AppArmor:
docker run --security-opt apparmor=<profile-name> <image-name>
- Call kubernetes api with service account token
curl https://kubernetes.default/api/v1/namespaces/restricted/secrets -H "Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)" -k
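The mounted token is a JWT, so its claims can be inspected locally without calling the API (a sketch; the token path is the standard in-pod mount):

```shell
# Print the JSON payload of a service account token (a JWT).
# The payload is the second dot-separated, base64url-encoded segment.
TOKEN_FILE=/run/secrets/kubernetes.io/serviceaccount/token
payload=$(cut -d. -f2 "$TOKEN_FILE" | tr '_-' '/+')
# Re-add the base64 padding that JWT encoding strips
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
printf '%s' "$payload" | base64 -d
```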
- To find syscalls
strace -p <PID>
Just a place to write down notes for the CKS journey. Official document: https://kubernetes.io/docs/concepts/security/
- Network access to API Server (Control plane)
- Network access to Nodes (nodes)
- Kubernetes access to Cloud Provider API
- Access to etcd
- etcd Encryption
- RBAC Authorization (Access to the Kubernetes API)
- Authentication
- Application secrets management (and encrypting them in etcd at rest)
- Pod Security Policies
- Quality of Service (and Cluster resource management)
- Network Policies
- TLS For Kubernetes Ingress
- Container Vulnerability Scanning and OS Dependency Security
- Image Signing and Enforcement
- Disallow privileged users
- Use container runtime with stronger isolation
- Access over TLS only
- Limiting port ranges of communication
- 3rd Party Dependency Security
- Static Code Analysis
- Dynamic probing attacks
Policies:
- Privileged
- Baseline
- Restricted
Security Contexts configure Pods and Containers at runtime. Security contexts are defined as part of the Pod and container specifications in the Pod manifest, and represent parameters to the container runtime.
Security policies are control plane mechanisms to enforce specific settings in the Security Context, as well as other parameters outside the Security Context. As of February 2020, the current native solution for enforcing these security policies is Pod Security Policy - a mechanism for centrally enforcing security policy on Pods across a cluster. Other alternatives for enforcing security policy are being developed in the Kubernetes ecosystem, such as OPA Gatekeeper.
The entities that a Pod can communicate with are identified through a combination of the following 3 identifiers:
- Other pods that are allowed (exception: a pod cannot block access to itself)
- Namespaces that are allowed
- IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)
- Multiple NetworkPolicies selecting the same pod are additive and will be merged
- Default deny
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-np
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
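On top of a default-deny policy, traffic can then be selectively re-allowed with a more specific policy (a sketch; labels, names, and port are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: default
spec:
  # Applies to backend pods; allows ingress only from frontend pods
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - port: 8080
      protocol: TCP
```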
Anonymous access is used for the kube-apiserver health check, so be careful when disabling it. Toggle with
--anonymous-auth=false
Insecure port:
--insecure-port=8080
Manual access to the kube API:
curl <ENDPOINT> --cacert <ca> --cert <cert> --key <key>
(info taken from kubeconfig)
- API serves on localhost:8080, no TLS, bypasses authentication & authorization.
- API serves on port 6443 (proxied from public port 443), protected by TLS.
- If your cluster uses a private certificate authority, you need a copy of that CA certificate configured into your ~/.kube/config on the client, so that you can trust the connection and be confident it was not intercepted.
- HTTP requests need to go through authenticator modules. See more on authentication
  - Failed --> 401
  - Successful --> username mapped --> reusable for subsequent steps
- Kubernetes does not have a User object or store user information
- Two categories of users:
  - Normal users not managed by Kubernetes
    - Determines the username from the common name field in the subject of the cert (e.g., "/CN=bob")
  - Service accounts managed by Kubernetes
    - Are bound to specific namespaces, and created automatically by the API server or manually through API calls
    - Are tied to a set of credentials stored as Secrets, which are mounted into pods allowing in-cluster processes to talk to the Kubernetes API.
- API requests are tied to either a normal user or a service account, or are treated as anonymous requests
- Kubernetes uses client certificates, bearer tokens, an authenticating proxy, or HTTP basic auth to authenticate API requests through authentication plugins. Plugins attempt to associate the following attributes with the request: username, uid, groups, extra fields
  - X509 client certs: enabled by passing the --client-ca-file=SOMEFILE option to the API server
  - Static token file: given the --token-auth-file=SOMEFILE option
  - Bearer token: the API server expects an Authorization header with a value of Bearer THETOKEN
  - Bootstrap token: See link
  - Service account token: perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API.
  - OpenID connect token
  - Webhook token:
    - When a client attempts to authenticate with the API server using a bearer token as discussed above, the authentication webhook POSTs a JSON-serialized TokenReview object containing the token to the remote service.
    - The remote service must return a response using the same TokenReview API version that it received
- The API server does not guarantee the order authenticators run in.
- A user can act as another user through impersonation headers:
kubectl drain mynode --as=superman --as-group=system:masters
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: impersonator
rules:
- apiGroups: [""]
  resources: ["users", "groups", "serviceaccounts"]
  verbs: ["impersonate"]
- A request must include the username of the requester, the requested action, and the object affected by the action. The request is authorized if an existing policy declares that the user has permissions to complete the requested action. See more on authorization
- Sample policy:
{
  "apiVersion": "abac.authorization.kubernetes.io/v1beta1",
  "kind": "Policy",
  "spec": {
    "user": "bob",
    "namespace": "projectCaribou",
    "resource": "pods",
    "readonly": true
  }
}
- Sample request review:
{
  "apiVersion": "authorization.k8s.io/v1beta1",
  "kind": "SubjectAccessReview",
  "spec": {
    "resourceAttributes": {
      "namespace": "projectCaribou",
      "verb": "get",
      "group": "unicorn.example.org",
      "resource": "pods"
    }
  }
}
- bob will be allowed to get pod resources within the projectCaribou namespace
- Denied --> 403
- Non-resource requests: requests to endpoints other than /api/v1/... or /apis/<group>/<version>/... are considered "non-resource requests", and use the lower-cased HTTP method of the request as the verb
- A user granted permission to create pods (or controllers that create pods) in the namespace can: read all secrets in the namespace; read all config maps in the namespace; and impersonate any service account in the namespace and take any action the account could take.
- Authorization modules:
  - ABAC mode
  - RBAC mode
  - Webhook mode
- After you create a binding, you cannot change the Role or ClusterRole that it refers to. If you try to change a binding's roleRef, you get a validation error. If you do want to change the roleRef for a binding, you need to remove the binding object and create a replacement.
- You can aggregate several ClusterRoles into one combined ClusterRole. A controller, running as part of the cluster control plane, watches for ClusterRole objects with an aggregationRule set. The aggregationRule defines a label selector that the controller uses to match other ClusterRole objects that should be combined into the rules field of this one. See link
- At each start-up, the API server updates default cluster roles with any missing permissions, and updates default cluster role bindings with any missing subjects. This allows the cluster to repair accidental modifications, and helps to keep roles and role bindings up-to-date as permissions and subjects change in new Kubernetes releases.
- Admission Control modules can:
  - Modify or reject requests
  - Access the contents of the object that is being created or modified
- Admission controllers do not act on requests that merely read objects
- 1 module failed --> immediately rejected
- A CertificateSigningRequest (CSR) resource is used to request that a certificate be signed by a denoted signer
- The signing controller then updates the CertificateSigningRequest, storing the new certificate into the status.certificate field of the existing CertificateSigningRequest object
- kubernetes.io/kube-apiserver-client: signs certificates that will be honored as client certificates by the API server. Never auto-approved by kube-controller-manager.
- kubernetes.io/kube-apiserver-client-kubelet: signs client certificates that will be honored as client certificates by the API server. May be auto-approved by kube-controller-manager.
- kubernetes.io/kubelet-serving: signs serving certificates that are honored as a valid kubelet serving certificate by the API server, but has no other guarantees. Never auto-approved by kube-controller-manager.
- kubernetes.io/legacy-unknown: has no guarantees for trust at all. Some third-party distributions of Kubernetes may honor client certificates signed by it. The stable CertificateSigningRequest API (version certificates.k8s.io/v1 and later) does not allow setting the signerName as kubernetes.io/legacy-unknown. Never auto-approved by kube-controller-manager.
For TLS certificates. See link
- ServiceAccount admission controller:
  - If the pod does not have a ServiceAccount set, it sets the ServiceAccount to default.
  - It ensures that the ServiceAccount referenced by the pod exists, and otherwise rejects it.
  - If the pod does not contain any ImagePullSecrets, then ImagePullSecrets of the ServiceAccount are added to the pod.
  - It adds a volume to the pod which contains a token for API access.
  - It adds a volumeSource to each container of the pod mounted at /var/run/secrets/kubernetes.io/serviceaccount.
- Token controller:
  - watches ServiceAccount creation and creates a corresponding ServiceAccount token Secret to allow API access.
  - watches ServiceAccount deletion and deletes all corresponding ServiceAccount token Secrets.
  - watches ServiceAccount token Secret addition, and ensures the referenced ServiceAccount exists, and adds a token to the Secret if needed.
  - watches Secret deletion and removes a reference from the corresponding ServiceAccount if needed.
- ServiceAccount controller:
  - manages the ServiceAccounts inside namespaces
  - ensures a ServiceAccount named "default" exists in every active namespace
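When a pod does not need to talk to the API, the automatic token mount can be turned off, either on the ServiceAccount or per pod (a sketch; the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod
spec:
  # Prevents the service account token from being mounted into containers
  automountServiceAccountToken: false
  containers:
  - name: app
    image: nginx
```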
- Necessary role to use psp
...
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - example
...
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example
spec:
  privileged: false # Don't allow privileged pods!
  # The rest fills in some required fields.
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
- PodSecurityPolicies which allow the pod as-is, without changing defaults or mutating the pod, are preferred. The order of these non-mutating PodSecurityPolicies doesn't matter.
- If the pod must be defaulted or mutated, the first PodSecurityPolicy (ordered by name) to allow the pod is selected.
See link
See link
Each request can be recorded with an associated stage. The defined stages are:
- RequestReceived - the stage for events generated as soon as the audit handler receives the request, and before it is delegated down the handler chain.
- ResponseStarted - once the response headers are sent, but before the response body is sent. This stage is only generated for long-running requests (e.g. watch).
- ResponseComplete - the response body has been completed and no more bytes will be sent.
- Panic - events generated when a panic occurred.
The first matching rule sets the audit level of the event. The defined audit levels are:
- None - don't log events that match this rule.
- Metadata - log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body.
- Request - log event metadata and request body but not response body. This does not apply for non-resource requests.
- RequestResponse - log event metadata, request and response bodies. This does not apply for non-resource requests.
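Since the first matching rule wins, specific rules must come before the catch-all. A sketch of an assumed multi-rule policy:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
# Don't generate events for the RequestReceived stage
omitStages:
- RequestReceived
rules:
# Secrets are sensitive: log only metadata, never bodies
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
# Log pod changes with full request and response bodies
- level: RequestResponse
  resources:
  - group: ""
    resources: ["pods"]
# Catch-all: don't log anything else
- level: None
```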
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
Enable in kube-apiserver
- --audit-policy-file=/etc/kubernetes/audit/policy.yaml # add
- --audit-log-path=/etc/kubernetes/audit/logs/audit.log # add
- --audit-log-maxsize=500 # add
- --audit-log-maxbackup=5 # add
volumeMounts:
- mountPath: /etc/kubernetes/audit # add
  name: audit # add
volumes:
- hostPath: # add
    path: /etc/kubernetes/audit # add
    type: DirectoryOrCreate # add
  name: audit # add
- Enabled per container via annotations
container.apparmor.security.beta.kubernetes.io/<container_name>: <profile_ref>
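For example (a sketch; the pod name, container name, and profile are placeholders, and the profile must already be loaded on the node):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
  annotations:
    # localhost/<profile> refers to a profile loaded on the node;
    # runtime/default would use the container runtime's default profile
    container.apparmor.security.beta.kubernetes.io/app: localhost/my-profile
spec:
  containers:
  - name: app
    image: nginx
```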
You can set a different RuntimeClass between different Pods to provide a balance of performance versus security. For example, if part of your workload deserves a high level of information security assurance, you might choose to schedule those Pods so that they run in a container runtime that uses hardware virtualization. You'd then benefit from the extra isolation of the alternative runtime, at the expense of some additional overhead.
- Configure the CRI implementation on nodes (runtime dependent)
- Create the corresponding RuntimeClass resources
apiVersion: node.k8s.io/v1 # RuntimeClass is defined in the node.k8s.io API group
kind: RuntimeClass
metadata:
  name: myclass # The name the RuntimeClass will be referenced by
  # RuntimeClass is a non-namespaced resource
handler: myconfiguration # The name of the corresponding CRI configuration
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  runtimeClassName: myclass
- Constraint Template: templates.gatekeeper.sh/v1beta1
- Constraint: constraints.gatekeeper.sh/v1beta1
- Audit: the audit functionality enables periodic evaluations of replicated resources against the Constraints enforced in the cluster to detect pre-existing misconfigurations. Gatekeeper stores audit results as violations listed in the status field of the relevant Constraint.
- Config: config.gatekeeper.sh/v1alpha1