This guide provides all the YAML files needed to deploy a complete application on Kubernetes, with detailed explanations of each component.
apiVersion: v1
kind: Namespace
metadata:
  name: myapp-namespace
Explanation:
- apiVersion: v1 - Uses the core API version, which has been stable since Kubernetes was first released
- kind: Namespace - Creates an isolated virtual cluster within your Kubernetes cluster
- metadata.name - Specifies the name of the namespace (myapp-namespace)
Namespaces help organize resources and provide a scope for names. They allow multiple teams to share a cluster while preventing naming conflicts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  namespace: myapp-namespace
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-container
        image: myapp:1.0
        ports:
        - containerPort: 8080
        env:
        - name: DB_HOST
          valueFrom:
            configMapKeyRef:
              name: myapp-config
              key: db.host
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: myapp-secrets
              key: db.password
        resources:
          limits:
            cpu: "500m"
            memory: "512Mi"
          requests:
            cpu: "200m"
            memory: "256Mi"
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
      imagePullSecrets:
      - name: registry-secret
Explanation:
- apiVersion: apps/v1 - Uses the apps API group, which contains the stable Deployment resource
- kind: Deployment - Manages a replicated application on your cluster
- metadata:
- name - The name of the deployment
- namespace - The namespace where this deployment exists
- labels - Key-value pairs that can be used to organize and select subsets of objects
- spec:
- replicas: 3 - Maintains 3 copies of your application for high availability
- selector.matchLabels - Defines how the Deployment finds which Pods to manage
- strategy - Defines the strategy to replace old Pods with new ones:
- type: RollingUpdate - Replaces Pods gradually rather than all at once
- maxSurge - Maximum number of Pods that can be created over the desired number
- maxUnavailable - Maximum number of Pods that can be unavailable during the update
- template - Defines the Pod template used to create new Pods:
- metadata.labels - Labels attached to the Pods
- spec.containers - List of containers within the Pod:
- name - Name of the container
- image - Docker image to use
- ports - List of ports to expose from the container
- env - Environment variables for the container
- resources - Resource limits and requests for the container
- livenessProbe - Checks if the container is alive and running
- readinessProbe - Checks if the container is ready to receive traffic
- imagePullSecrets - References to secrets used for pulling the image
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  namespace: myapp-namespace
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  type: ClusterIP
Explanation:
- apiVersion: v1 - Uses the core API version
- kind: Service - Creates a service to expose pods to network traffic
- metadata:
- name - Name of the service
- namespace - Namespace where this service exists
- spec:
- selector - Defines which Pods the Service routes traffic to (all Pods with label app=myapp)
- ports - List of ports that this service exposes:
- port - Port exposed by the service (80)
- targetPort - Port on the Pod to redirect traffic to (8080)
- protocol - Network protocol to use (TCP)
- type - The type of service:
- ClusterIP - Exposes the service on a cluster-internal IP (default)
- Other options include NodePort, LoadBalancer, and ExternalName
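For comparison, here is a sketch of the same Service exposed as a NodePort instead; the name and the nodePort value 30080 are illustrative choices, not taken from the original manifests:

```yaml
# Variant of myapp-service exposed on every node's IP at a static port.
apiVersion: v1
kind: Service
metadata:
  name: myapp-service-nodeport   # hypothetical name for illustration
  namespace: myapp-namespace
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - port: 80          # cluster-internal port
    targetPort: 8080  # container port
    nodePort: 30080   # must fall within the default 30000-32767 range
```

Clients outside the cluster can then reach the app at any node's IP on port 30080; a LoadBalancer Service builds on NodePort by also provisioning an external load balancer.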
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: myapp-namespace
data:
  db.host: "postgres-service"
  db.port: "5432"
  app.log.level: "INFO"
  app.config.json: |
    {
      "cache": {
        "enabled": true,
        "ttl": 300
      },
      "features": {
        "experimental": false
      }
    }
Explanation:
- apiVersion: v1 - Uses the core API version
- kind: ConfigMap - Creates a ConfigMap to store non-confidential configuration data
- metadata:
- name - Name of the ConfigMap
- namespace - Namespace where this ConfigMap exists
- data - Contains the configuration data as key-value pairs:
- Simple key-value pairs for single values
- Multi-line values for more complex configurations like JSON
ConfigMaps decouple configuration from Pod specifications, allowing you to change configuration without rebuilding container images.
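Besides the configMapKeyRef environment variables used in the Deployment above, a ConfigMap can also be mounted as files. A sketch of the relevant Pod-spec fragment, assuming the myapp-config ConfigMap defined above (the mount path is an illustrative choice):

```yaml
# Pod-spec fragment: expose each key of myapp-config as a file under /etc/myapp
spec:
  containers:
  - name: myapp-container
    image: myapp:1.0
    volumeMounts:
    - name: config-volume
      mountPath: /etc/myapp   # e.g. /etc/myapp/app.config.json
      readOnly: true
  volumes:
  - name: config-volume
    configMap:
      name: myapp-config
```

Mounted ConfigMaps are updated in place when the ConfigMap changes, whereas environment variables are only set at container start.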
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secrets
  namespace: myapp-namespace
type: Opaque
data:
  db.password: cGFzc3dvcmQxMjM= # Base64 encoded "password123"
  api.key: VGhpc0lzQVNlY3JldEFQSUtleQ== # Base64 encoded
stringData:
  credentials.json: |
    {
      "username": "admin",
      "password": "secure_password"
    }
Explanation:
- apiVersion: v1 - Uses the core API version
- kind: Secret - Creates a Secret to store sensitive information
- metadata:
- name - Name of the Secret
- namespace - Namespace where this Secret exists
- type: Opaque - Generic Secret type for arbitrary user-defined data
- data - Contains key-value pairs that are Base64 encoded
- stringData - Contains key-value pairs that will be encoded for you
Secrets are similar to ConfigMaps but are intended for confidential data. Kubernetes does not encrypt Secrets by default, so additional encryption should be configured for production environments.
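Instead of referencing keys one at a time with secretKeyRef, a container can import every key of a Secret at once. A sketch of the container fragment, assuming the myapp-secrets Secret defined above:

```yaml
# Container fragment: import all keys of myapp-secrets as environment variables
spec:
  containers:
  - name: myapp-container
    image: myapp:1.0
    envFrom:
    - secretRef:
        name: myapp-secrets
```

One caveat: keys like db.password are not valid environment variable names; the kubelet skips such keys and records the invalid names in an event, so dotted keys are better consumed via secretKeyRef or a volume mount.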
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  hostPath:
    path: /data/myapp
    type: DirectoryOrCreate
Explanation:
- apiVersion: v1 - Uses the core API version
- kind: PersistentVolume - Creates a PersistentVolume (PV) for storage
- metadata:
- name - Name of the PersistentVolume
- spec:
- capacity.storage - Defines how much storage this PV offers
- volumeMode - Filesystem or Block
- accessModes - How the volume can be mounted:
- ReadWriteOnce - Can be mounted as read-write by a single node
- Other options include ReadOnlyMany and ReadWriteMany
- persistentVolumeReclaimPolicy - What happens when the PVC is deleted:
- Retain - Manual reclamation
- Other options include Delete and Recycle (Recycle is deprecated)
- storageClassName - Storage class for the PV
- hostPath - Uses the host's filesystem for storage (for testing only, not for production)
In a cloud environment, you would typically use cloud-specific volume types instead of hostPath.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-pvc
  namespace: myapp-namespace
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 5Gi
Explanation:
- apiVersion: v1 - Uses the core API version
- kind: PersistentVolumeClaim - Creates a PersistentVolumeClaim (PVC)
- metadata:
- name - Name of the PVC
- namespace - Namespace where this PVC exists
- spec:
- accessModes - How the volume can be mounted
- storageClassName - Storage class for the PVC to use
- resources.requests.storage - Amount of storage requested
PVCs are a request for storage by a user that can be fulfilled by a PV. They allow Pods to use storage without knowing the details of the underlying storage infrastructure.
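A Pod consumes the claim by naming it in a volume. A sketch of the Pod-spec fragment, assuming the myapp-pvc defined above (the mount path is an illustrative choice):

```yaml
# Pod-spec fragment: mount the storage bound to myapp-pvc
spec:
  containers:
  - name: myapp-container
    image: myapp:1.0
    volumeMounts:
    - name: data
      mountPath: /var/lib/myapp   # hypothetical path the app writes to
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: myapp-pvc
```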
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  namespace: myapp-namespace
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              number: 80
  tls:
  - hosts:
    - myapp.example.com
    secretName: myapp-tls-secret
Explanation:
- apiVersion: networking.k8s.io/v1 - Uses the networking API group
- kind: Ingress - Creates an Ingress resource for HTTP/HTTPS routing
- metadata:
- name - Name of the Ingress
- namespace - Namespace where this Ingress exists
- annotations - Additional configuration specific to the Ingress controller
- spec:
- ingressClassName - Specifies which Ingress controller to use
- rules - List of host rules:
- host - Domain name for this rule
- http.paths - List of paths:
- path - URL path
- pathType - How the path should be matched (Prefix, Exact, or ImplementationSpecific)
- backend - Service to route traffic to
- tls - TLS configuration for secure connections
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. It requires an Ingress controller to be deployed in your cluster.
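Ingress rules can also fan out by path. A sketch of a spec.rules fragment routing /api to a second backend; api-service is a hypothetical Service, not defined in this guide:

```yaml
# spec.rules fragment: path-based fan-out to two backends
rules:
- host: myapp.example.com
  http:
    paths:
    - path: /api
      pathType: Prefix
      backend:
        service:
          name: api-service   # hypothetical second Service
          port:
            number: 80
    - path: /
      pathType: Prefix
      backend:
        service:
          name: myapp-service
          port:
            number: 80
```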
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
  namespace: myapp-namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
Explanation:
- apiVersion: autoscaling/v2 - Uses the autoscaling API group, version 2
- kind: HorizontalPodAutoscaler - Creates an HPA to automatically scale Pods
- metadata:
- name - Name of the HPA
- namespace - Namespace where this HPA exists
- spec:
- scaleTargetRef - Reference to the resource to scale (Deployment, StatefulSet, etc.)
- minReplicas - Minimum number of replicas
- maxReplicas - Maximum number of replicas
- metrics - List of metrics to use for scaling:
- type: Resource - Uses resource utilization
- resource.name - Resource to monitor (CPU or memory)
- target.type - Type of target value (Utilization or Value)
- target.averageUtilization - Target utilization percentage
HPA automatically scales the number of Pods in a deployment based on observed metrics to handle varying loads.
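The autoscaling/v2 API also supports an optional behavior stanza to dampen scaling; a sketch with illustrative values:

```yaml
# Optional spec.behavior fragment: slow down scale-down to avoid flapping
behavior:
  scaleDown:
    stabilizationWindowSeconds: 300   # require 5 minutes of low load before shrinking
    policies:
    - type: Pods
      value: 1          # remove at most one Pod...
      periodSeconds: 60 # ...per minute
```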
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp-network-policy
  namespace: myapp-namespace
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    - namespaceSelector:
        matchLabels:
          name: monitoring
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432
Explanation:
- apiVersion: networking.k8s.io/v1 - Uses the networking API group
- kind: NetworkPolicy - Creates a NetworkPolicy for Pod network security
- metadata:
- name - Name of the NetworkPolicy
- namespace - Namespace where this NetworkPolicy exists
- spec:
- podSelector - Selects the Pods to which this policy applies
- policyTypes - Types of policy (Ingress, Egress, or both)
- ingress - Rules for incoming traffic:
- from - Sources allowed to access the Pods
- ports - Ports that may be accessed
- egress - Rules for outgoing traffic:
- to - Destinations the Pods can access
- ports - Ports that may be accessed
NetworkPolicies allow you to control the network traffic flow to and from Pods, providing security at the network level.
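A common companion pattern is a default-deny policy, which blocks all ingress in the namespace so that only explicitly allowed traffic (like the policy above) gets through:

```yaml
# Deny all ingress to every Pod in the namespace; allow-policies then open specific paths
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: myapp-namespace
spec:
  podSelector: {}   # empty selector matches all Pods in the namespace
  policyTypes:
  - Ingress
```

Note that NetworkPolicies only take effect if the cluster's CNI plugin (e.g. Calico or Cilium) enforces them.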
apiVersion: v1
kind: ResourceQuota
metadata:
  name: myapp-quota
  namespace: myapp-namespace
spec:
  hard:
    pods: "20"
    requests.cpu: "2"
    requests.memory: 2Gi
    limits.cpu: "4"
    limits.memory: 4Gi
    persistentvolumeclaims: "10"
Explanation:
- apiVersion: v1 - Uses the core API version
- kind: ResourceQuota - Creates a ResourceQuota to limit resource consumption
- metadata:
- name - Name of the ResourceQuota
- namespace - Namespace where this ResourceQuota applies
- spec:
- hard - Hard limits for various resources:
- pods - Maximum number of Pods
- requests.cpu - Total CPU requests allowed
- requests.memory - Total memory requests allowed
- limits.cpu - Total CPU limits allowed
- limits.memory - Total memory limits allowed
- persistentvolumeclaims - Maximum number of PVCs
ResourceQuotas help prevent one team or application from consuming all the resources in a shared cluster, ensuring fair resource allocation.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myapp-role
  namespace: myapp-namespace
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myapp-rolebinding
  namespace: myapp-namespace
subjects:
- kind: ServiceAccount
  name: myapp-sa
  namespace: myapp-namespace
roleRef:
  kind: Role
  name: myapp-role
  apiGroup: rbac.authorization.k8s.io
Explanation:
- Role:
- apiVersion: rbac.authorization.k8s.io/v1 - Uses the RBAC API group
- kind: Role - Creates a Role for namespace-scoped permissions
- metadata - Metadata for the Role
- rules - List of permissions:
- apiGroups - API groups containing the resources
- resources - Resources to which this rule applies
- verbs - Actions allowed on these resources
- RoleBinding:
- apiVersion: rbac.authorization.k8s.io/v1 - Uses the RBAC API group
- kind: RoleBinding - Binds a Role to subjects
- metadata - Metadata for the RoleBinding
- subjects - List of subjects to which this binding applies:
- kind - Type of subject (ServiceAccount, User, or Group)
- name - Name of the subject
- namespace - Namespace of the subject
- roleRef - Reference to the Role being bound:
- kind - Kind of role (Role or ClusterRole)
- name - Name of the role
- apiGroup - API group for the role
RBAC allows you to control who can access which resources and perform which actions in your cluster.
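For cluster-scoped resources a Role is not enough; a sketch of the ClusterRole/ClusterRoleBinding equivalent, granting the same ServiceAccount read access to nodes (the node-reader name is hypothetical):

```yaml
# Cluster-scoped variant: nodes are a cluster-level resource a namespaced Role cannot grant
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader   # hypothetical name
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: myapp-node-reader
subjects:
- kind: ServiceAccount
  name: myapp-sa
  namespace: myapp-namespace
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io
```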
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp-sa
  namespace: myapp-namespace
Explanation:
- apiVersion: v1 - Uses the core API version
- kind: ServiceAccount - Creates a ServiceAccount for Pod identity
- metadata:
- name - Name of the ServiceAccount
- namespace - Namespace where this ServiceAccount exists
ServiceAccounts provide an identity for processes running in a Pod, allowing them to interact with the Kubernetes API or other services securely.
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-env-config
  namespace: myapp-namespace
data:
  .env: |
    NODE_ENV=production
    PORT=8080
    LOGGING_LEVEL=info
    API_TIMEOUT=5000
    CACHE_TTL=300
Explanation:
- apiVersion: v1 - Uses the core API version
- kind: ConfigMap - Creates a ConfigMap for application configuration
- metadata:
- name - Name of the ConfigMap
- namespace - Namespace where this ConfigMap exists
- data - Contains configuration files:
- .env - Environment file for the application
This ConfigMap can be mounted as a file in your Pod, allowing your application to read its configuration from a standard location.
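A sketch of how the mount could look, using subPath so only the .env key appears at the expected location (the /app path is an illustrative assumption about where the app reads its env file):

```yaml
# Pod-spec fragment: surface only the .env key as a single file at /app/.env
spec:
  containers:
  - name: myapp-container
    image: myapp:1.0
    volumeMounts:
    - name: env-file
      mountPath: /app/.env   # hypothetical location the app expects
      subPath: .env          # mount one key as a file, not a directory
  volumes:
  - name: env-file
    configMap:
      name: myapp-env-config
```

One trade-off: subPath mounts are not updated when the ConfigMap changes, unlike full-directory ConfigMap mounts.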
apiVersion: batch/v1
kind: Job
metadata:
  name: myapp-database-migration
  namespace: myapp-namespace
spec:
  template:
    spec:
      containers:
      - name: migration
        image: myapp-migrations:1.0
        command: ["npm", "run", "migrate"]
        env:
        - name: DB_HOST
          valueFrom:
            configMapKeyRef:
              name: myapp-config
              key: db.host
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: myapp-secrets
              key: db.password
      restartPolicy: Never
  backoffLimit: 4
Explanation:
- apiVersion: batch/v1 - Uses the batch API group
- kind: Job - Creates a Job for one-time tasks
- metadata:
- name - Name of the Job
- namespace - Namespace where this Job exists
- spec:
- template - Pod template for the Job:
- spec.containers - List of containers to run:
- name - Name of the container
- image - Image to use
- command - Command to run
- env - Environment variables for the container
- restartPolicy - What to do if the Pod fails (Never, OnFailure)
- backoffLimit - Number of retries before considering the Job failed
Jobs are useful for one-time tasks like database migrations, data processing, or batch jobs.
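Jobs can also run work in parallel. A hedged sketch of a parallel batch Job; the name, image, and counts are illustrative, not part of the original manifests:

```yaml
# Hypothetical parallel Job: 10 successful completions, at most 3 Pods at a time
apiVersion: batch/v1
kind: Job
metadata:
  name: myapp-batch-process   # hypothetical name
  namespace: myapp-namespace
spec:
  completions: 10    # Job succeeds after 10 successful Pod runs
  parallelism: 3     # run up to 3 Pods concurrently
  backoffLimit: 4
  template:
    spec:
      containers:
      - name: worker
        image: myapp-worker:1.0   # hypothetical image
        command: ["npm", "run", "process-batch"]
      restartPolicy: OnFailure
```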
apiVersion: batch/v1
kind: CronJob
metadata:
  name: myapp-backup
  namespace: myapp-namespace
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: myapp-backup:1.0
            command: ["/scripts/backup.sh"]
            volumeMounts:
            - name: backup-volume
              mountPath: /backup
          volumes:
          - name: backup-volume
            persistentVolumeClaim:
              claimName: backup-pvc
          restartPolicy: OnFailure
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
Explanation:
- apiVersion: batch/v1 - Uses the batch API group
- kind: CronJob - Creates a CronJob for scheduled tasks
- metadata:
- name - Name of the CronJob
- namespace - Namespace where this CronJob exists
- spec:
- schedule - When to run the job (cron format)
- jobTemplate - Template for the Job to run:
- spec.template - Pod template for the Job
- spec.template.spec - Pod specification:
- containers - List of containers to run
- volumes - List of volumes to mount
- restartPolicy - What to do if the Pod fails
- successfulJobsHistoryLimit - How many successful jobs to keep
- failedJobsHistoryLimit - How many failed jobs to keep
CronJobs are useful for scheduled tasks like backups, reports, or maintenance operations.
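Two optional CronJob spec fields are worth knowing for long-running jobs like backups; a sketch with illustrative values:

```yaml
# Optional spec fields controlling overlap and missed runs
spec:
  schedule: "0 2 * * *"
  concurrencyPolicy: Forbid        # skip a run if the previous one is still going
  startingDeadlineSeconds: 300     # count a run as missed if it can't start within 5 min
```

concurrencyPolicy also accepts Allow (the default) and Replace, which cancels the running Job in favor of the new one.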
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp-database
  namespace: myapp-namespace
spec:
  selector:
    matchLabels:
      app: database
  serviceName: database-headless
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
      - name: database
        image: postgres:14
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: myapp-secrets
              key: db.password
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 10Gi
Explanation:
- apiVersion: apps/v1 - Uses the apps API group
- kind: StatefulSet - Creates a StatefulSet for stateful applications
- metadata:
- name - Name of the StatefulSet
- namespace - Namespace where this StatefulSet exists
- spec:
- selector - Labels to select which Pods to manage
- serviceName - Name of the headless service that controls the network domain
- replicas - Number of desired replicas
- updateStrategy - How to update the Pods (RollingUpdate or OnDelete)
- podManagementPolicy - How to create and terminate Pods (OrderedReady or Parallel)
- template - Pod template:
- metadata.labels - Labels attached to the Pods
- spec.containers - List of containers:
- Standard container configuration
- volumeMounts - Where to mount the volumes in the container
- volumeClaimTemplates - Templates for dynamically created PVCs:
- metadata.name - Name of the volume claim
- spec - PVC specification
StatefulSets are used for applications that require stable network identifiers, stable persistent storage, and ordered deployment and scaling.
apiVersion: v1
kind: Service
metadata:
  name: database-headless
  namespace: myapp-namespace
  labels:
    app: database
spec:
  ports:
  - port: 5432
    name: postgres
  clusterIP: None
  selector:
    app: database
Explanation:
- apiVersion: v1 - Uses the core API version
- kind: Service - Creates a Service
- metadata:
- name - Name of the Service
- namespace - Namespace where this Service exists
- labels - Labels for the Service
- spec:
- ports - List of ports:
- port - Port exposed by the service
- name - Name for the port
- clusterIP: None - Makes this a headless service (no cluster IP)
- selector - Labels to select which Pods to route traffic to
A headless service is used with StatefulSets to provide stable network identities to each Pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  namespace: myapp-namespace
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-container
        image: myapp:1.0
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
          timeoutSeconds: 3
          successThreshold: 1
          failureThreshold: 3
        startupProbe:
          httpGet:
            path: /startup
            port: 8080
          periodSeconds: 5
          failureThreshold: 30
Explanation:
- livenessProbe - Determines if the container is running:
- httpGet - Uses HTTP GET to check health:
- path - URL path to check
- port - Port to query
- initialDelaySeconds - How long to wait before the first probe
- periodSeconds - How often to perform the probe
- timeoutSeconds - How long before the probe times out
- failureThreshold - How many consecutive failures before restarting the container
- readinessProbe - Determines if the container is ready to serve traffic:
- Similar settings to livenessProbe
- successThreshold - How many consecutive successes to mark the container ready after a failure
- startupProbe - Checks if the container has started (beta since Kubernetes 1.18, stable since 1.20):
- Disables liveness and readiness checks until it succeeds
- Useful for slow-starting containers
Probes help Kubernetes understand the health of your application, improving reliability and availability.
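Probes are not limited to httpGet; the other two handler types are tcpSocket and exec. A sketch of a container fragment using each (the /tmp/ready marker file is a hypothetical convention):

```yaml
# Container fragment: the two non-HTTP probe handler types
livenessProbe:
  tcpSocket:          # succeeds if a TCP connection to the port can be opened
    port: 8080
  periodSeconds: 10
readinessProbe:
  exec:               # succeeds if the command exits with status 0
    command: ["cat", "/tmp/ready"]   # hypothetical readiness marker file
  periodSeconds: 5
```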
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  namespace: myapp-namespace
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      initContainers:
      - name: wait-for-db
        image: busybox:1.34
        command: ['sh', '-c', 'until nslookup database-service; do echo waiting for database; sleep 2; done;']
      - name: init-schema
        image: myapp-migrations:1.0
        command: ['sh', '-c', '/scripts/init-schema.sh']
        env:
        - name: DB_HOST
          value: database-service
      containers:
      - name: myapp-container
        image: myapp:1.0
        ports:
        - containerPort: 8080
Explanation:
- spec.initContainers - List of initialization containers:
- name - Name of the init container
- image - Image to use
- command - Command to run
- env - Environment variables
Init Containers run before the app containers in a Pod. They run sequentially and must complete successfully before the next one starts. They're useful for setup tasks like:
- Waiting for a service to be ready
- Running database migrations
- Downloading configuration files
- Setting up volumes
Here are the essential commands for deploying and managing your application:
- Create resources from files:
  kubectl apply -f <file.yaml>
  kubectl apply -f <directory>/
- List resources:
  kubectl get pods -n myapp-namespace
  kubectl get services -n myapp-namespace
  kubectl get deployments -n myapp-namespace
- View detailed information:
  kubectl describe pod <pod-name> -n myapp-namespace
  kubectl describe deployment <deployment-name> -n myapp-namespace
- View logs:
  kubectl logs <pod-name> -n myapp-namespace
  kubectl logs <pod-name> -n myapp-namespace -c <container-name>
  kubectl logs -f <pod-name> -n myapp-namespace  # Follow logs
- Execute commands in a Pod:
  kubectl exec -it <pod-name> -n myapp-namespace -- /bin/bash
  kubectl exec <pod-name> -n myapp-namespace -- <command>
- Scale applications:
  kubectl scale deployment <deployment-name> -n myapp-namespace --replicas=5
- Update images:
  kubectl set image deployment/<deployment-name> <container-name>=<new-image> -n myapp-namespace
- Rollout management:
  kubectl rollout status deployment/<deployment-name> -n myapp-namespace
  kubectl rollout history deployment/<deployment-name> -n myapp-namespace
  kubectl rollout undo deployment/<deployment-name> -n myapp-namespace
  kubectl rollout restart deployment/<deployment-name> -n myapp-namespace
- Port forwarding for testing:
  kubectl port-forward <pod-name> 8080:8080 -n myapp-namespace
  kubectl port-forward service/<service-name> 8080:80 -n myapp-namespace
- View resource usage:
  kubectl top pods -n myapp-namespace
  kubectl top nodes
- Create a new namespace:
  kubectl create namespace myapp-namespace
- Apply a label:
  kubectl label pods <pod-name> environment=production -n myapp-namespace
- Creating secrets:
  kubectl create secret generic <secret-name> --from-literal=key1=value1 --from-literal=key2=value2 -n myapp-namespace
  kubectl create secret generic <secret-name> --from-file=./secret-file.txt -n myapp-namespace
- Creating ConfigMaps:
  kubectl create configmap <config-name> --from-literal=key1=value1 --from-literal=key2=value2 -n myapp-namespace
  kubectl create configmap <config-name> --from-file=./config-file.txt -n myapp-namespace
- Delete resources:
  kubectl delete pod <pod-name> -n myapp-namespace
  kubectl delete deployment <deployment-name> -n myapp-namespace
  kubectl delete service <service-name> -n myapp-namespace
- Get resource definitions in YAML format:
  kubectl get deployment <deployment-name> -n myapp-namespace -o yaml
- Edit resources:
  kubectl edit deployment <deployment-name> -n myapp-namespace
- View events in the namespace:
  kubectl get events -n myapp-namespace
- View all resources in a namespace:
  kubectl get all -n myapp-namespace
- Delete a namespace and all its resources:
  kubectl delete namespace myapp-namespace
Here's a typical workflow to deploy a complete application to Kubernetes:
- Create the namespace:
  kubectl create namespace myapp-namespace
- Create ConfigMaps and Secrets:
  kubectl apply -f configmap.yaml -n myapp-namespace
  kubectl apply -f secret.yaml -n myapp-namespace
- Create storage resources (if needed):
  kubectl apply -f pv.yaml
  kubectl apply -f pvc.yaml -n myapp-namespace
- Create StatefulSets and their Services (if needed):
  kubectl apply -f database-headless-service.yaml -n myapp-namespace
  kubectl apply -f database-statefulset.yaml -n myapp-namespace
- Run any initialization Jobs:
  kubectl apply -f init-job.yaml -n myapp-namespace
- Deploy the application:
  kubectl apply -f deployment.yaml -n myapp-namespace
  kubectl apply -f service.yaml -n myapp-namespace
- Create Ingress or other network resources:
  kubectl apply -f ingress.yaml -n myapp-namespace
- Set up autoscaling (if needed):
  kubectl apply -f hpa.yaml -n myapp-namespace
- Apply RBAC resources:
  kubectl apply -f serviceaccount.yaml -n myapp-namespace
  kubectl apply -f role.yaml -n myapp-namespace
  kubectl apply -f rolebinding.yaml -n myapp-namespace
- Set up monitoring and resource quotas:
  kubectl apply -f resourcequota.yaml -n myapp-namespace
- Apply any network policies:
  kubectl apply -f networkpolicy.yaml -n myapp-namespace
- Verify the deployment:
  kubectl get all -n myapp-namespace
The exact order may vary depending on your application's requirements, but this provides a general guideline for deploying a complete application with all its components.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
  namespace: myapp-namespace
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: myapp
Explanation:
- apiVersion: policy/v1 - Uses the policy API group
- kind: PodDisruptionBudget - Creates a PDB to limit voluntary disruptions
- metadata:
- name - Name of the PDB
- namespace - Namespace where this PDB exists
- spec:
- minAvailable - Minimum number of Pods that must be available
- selector - Selects which Pods the PDB applies to
PDBs help ensure high availability during voluntary disruptions like node drains or cluster upgrades.
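The budget can equivalently be expressed as a maximum number (or percentage) of disrupted Pods; the name below is a hypothetical illustration:

```yaml
# PDB expressed as maxUnavailable instead of minAvailable
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb-maxunavailable   # hypothetical name
  namespace: myapp-namespace
spec:
  maxUnavailable: 1   # absolute count; percentages like "33%" are also accepted
  selector:
    matchLabels:
      app: myapp
```

A single PDB may set minAvailable or maxUnavailable, but not both.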
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: myapps.example.com
spec:
  group: example.com
  names:
    kind: MyApp
    plural: myapps
    singular: myapp
    shortNames:
    - ma
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
              image:
                type: string
            required:
            - replicas
            - image
---
apiVersion: example.com/v1
kind: MyApp
metadata:
  name: my-custom-app
  namespace: myapp-namespace
spec:
  replicas: 3
  image: myapp:1.0
Explanation:
- CustomResourceDefinition (CRD):
- apiVersion: apiextensions.k8s.io/v1 - Uses the apiextensions API group
- kind: CustomResourceDefinition - Defines a new resource type
- spec:
- group - API group for the new resource
- names - Names for the new resource
- scope - Whether the resource is namespaced or cluster-wide
- versions - Versions of the API with schemas
- Custom Resource:
- apiVersion: example.com/v1 - Uses the custom API group
- kind: MyApp - Uses the custom resource kind
- spec - Contains the custom resource's specification
CRDs allow you to extend the Kubernetes API with your own resource types, enabling custom operators and controllers.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: myapp-validating-webhook
webhooks:
- name: validate.example.com
  clientConfig:
    service:
      name: webhook-service
      namespace: myapp-namespace
      path: "/validate"
    caBundle: <base64-encoded-ca-bundle>
  rules:
  - apiGroups: ["apps"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["deployments"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
  timeoutSeconds: 5
Explanation:
- apiVersion: admissionregistration.k8s.io/v1 - Uses the admissionregistration API group
- kind: ValidatingWebhookConfiguration - Defines a validation webhook
- webhooks - List of webhook configurations:
- name - Name of the webhook
- clientConfig - How to call the webhook:
- service - Kubernetes service to call
- caBundle - CA certificate for TLS
- rules - When to call the webhook:
- apiGroups - API groups to intercept
- apiVersions - API versions to intercept
- operations - Operations to intercept
- resources - Resources to intercept
- admissionReviewVersions - Versions of the AdmissionReview object
- sideEffects - Whether the webhook has side effects
- timeoutSeconds - Timeout for the webhook call
Admission webhooks allow you to intercept and validate or modify API requests before they are processed by Kubernetes, enabling custom policies and automation.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: myapp-vpa
  namespace: myapp-namespace
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-deployment
  updatePolicy:
    updateMode: Auto
  resourcePolicy:
    containerPolicies:
    - containerName: '*'
      minAllowed:
        cpu: 100m
        memory: 200Mi
      maxAllowed:
        cpu: 1
        memory: 1Gi
      controlledResources: ["cpu", "memory"]
Explanation:
- apiVersion: autoscaling.k8s.io/v1 - Uses the autoscaling API group
- kind: VerticalPodAutoscaler - Creates a VPA
- spec:
- targetRef - Reference to the resource to autoscale
- updatePolicy.updateMode - How to apply recommendations:
- Auto - Automatically applies recommendations
- Other options include Off and Initial
- resourcePolicy.containerPolicies - Policies for containers:
- containerName - Which container the policy applies to
- minAllowed - Minimum resource limits
- maxAllowed - Maximum resource limits
- controlledResources - Which resources to control
VPA automatically adjusts the CPU and memory resource requests of Pod containers to better match their actual usage.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp-monitor
  namespace: myapp-namespace
  labels:
    release: prometheus
spec:
  selector:
    matchLabels:
      app: myapp
  endpoints:
  - port: metrics
    path: /metrics
    interval: 15s
  namespaceSelector:
    matchNames:
    - myapp-namespace
Explanation:
- apiVersion: monitoring.coreos.com/v1 - Uses the monitoring API group (Prometheus Operator)
- kind: ServiceMonitor - Creates a ServiceMonitor for Prometheus
- metadata.labels - Labels used by Prometheus to select ServiceMonitors
- spec:
- selector - Selects which Services to monitor
- endpoints - Endpoints to scrape:
- port - Port name to scrape
- path - Path to scrape metrics from
- interval - How often to scrape
- namespaceSelector - Which namespaces to select Services from
ServiceMonitors are used by the Prometheus Operator to configure Prometheus to scrape metrics from your applications.
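Note that endpoints.port refers to a named port on the target Service, and the myapp-service defined earlier exposes only an unnamed port 80. A sketch of the Service-spec fragment that would satisfy this ServiceMonitor; port 9090 is an illustrative metrics port, not from the original manifests:

```yaml
# Service-spec fragment: a named port the ServiceMonitor's endpoints.port can match
ports:
- name: metrics     # matched by endpoints.port: metrics
  port: 9090        # hypothetical metrics port
  targetPort: 9090
```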
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
- configmap.yaml
namePrefix: dev-
namespace: myapp-namespace
commonLabels:
  environment: development
patchesStrategicMerge:
- deployment-patch.yaml
configMapGenerator:
- name: myapp-config
  files:
  - config.properties
secretGenerator:
- name: myapp-secrets
  files:
  - secret.properties
Explanation:
- apiVersion: kustomize.config.k8s.io/v1beta1 - Uses the kustomize API
- kind: Kustomization - Creates a Kustomization configuration
- resources - List of YAML files to include
- namePrefix - Prefix to add to all resource names
- namespace - Namespace to set for all resources
- commonLabels - Labels to add to all resources
- patchesStrategicMerge - Files containing patches to apply (newer Kustomize versions prefer the patches field)
- configMapGenerator - Generates ConfigMaps from files
- secretGenerator - Generates Secrets from files
Kustomize allows you to customize Kubernetes resources without modifying the original files, making it easier to manage multiple environments.
This completes the comprehensive guide for deploying applications to Kubernetes.