Network policies in Kubernetes are like traffic rules for your application's components. They control which parts of your application can talk to each other and how. Here's a simple breakdown of how the policies in this guide work:
- Zero-Trust Starting Point: We begin by assuming no communication is allowed. It's like having walls between all parts of your application.
- Allowing Necessary Communication: We then create "doors" in these walls, but only where needed. For example:
- We allow components in the same environment (like production) to talk to each other.
- We let the backend talk to the frontend; the reverse direction (frontend to backend) is already permitted by the same-environment rule.
- We don't allow the database to directly talk to the frontend for security reasons.
- Namespace Isolation: We enforce strict isolation between different namespaces, ensuring that applications cannot communicate across namespaces unless explicitly allowed.
- Labeling for Identification: We use labels (like name tags) on each component. These labels help the policies know which rules to apply to which parts of the application.
- Layered Approach: We apply these rules in layers, starting with the strictest (no communication) and then gradually allowing necessary traffic. This ensures we maintain a secure setup while still letting the application function.
- Environment Separation: We create separate rules for production and non-production environments. This keeps testing activities from interfering with live systems.
The end result is a secure network where:
- Components can only communicate in ways that are necessary and explicitly allowed.
- Unauthorized access attempts are automatically blocked.
- The application can function normally while maintaining strong security practices.
This approach helps prevent potential security breaches and ensures that if one part of the application is compromised, the damage is contained.
- Production Deployment Example: LSR Application
- Deployment Structure and Network Policies
- Understanding Label Placement in Deployment Manifests
- Deployment Manifests
- Network Policy Deployment Steps
- Final State
Production Cluster
├── Namespace: lsr-prod
│   ├── Deployment: lsr-frontend
│   │   └── Labels:
│   │       ├── environment: prod
│   │       ├── role: frontend
│   │       ├── application: lsr
│   │       ├── intent: public
│   │       └── zone: us-east
│   ├── Deployment: lsr-backend
│   │   └── Labels:
│   │       ├── environment: prod
│   │       ├── role: backend
│   │       ├── application: lsr
│   │       ├── intent: internal
│   │       └── zone: us-east
│   └── Deployment: lsr-db
│       └── Labels:
│           ├── environment: prod
│           ├── role: db
│           ├── application: lsr
│           ├── intent: internal
│           └── zone: us-east
└── Network Policies
    ├── enforce-zero-trust-prod
    │   └── Effect: Blocks all traffic for pods with intent: zero-trust
    ├── allow-same-environment-prod
    │   └── Effect: Allows traffic between all pods in prod environment
    ├── allow-frontend-to-backend-prod
    │   └── Effects:
    │       ├── Allows traffic from backend to frontend
    │       └── Blocks traffic from db to frontend
    └── strict-namespace-isolation
        └── Effect: Denies traffic between different namespaces
Traffic Flow:
- Internet → lsr-frontend ✓ (external clients are not matched by any Drop rule, and the frontend pods are not labeled intent: zero-trust)
- lsr-frontend → lsr-backend ✓ (allowed by allow-same-environment-prod)
- lsr-backend → lsr-frontend ✓ (allowed by allow-frontend-to-backend-prod)
- lsr-backend → lsr-db ✓ (allowed by allow-same-environment-prod)
- lsr-db → lsr-backend ✓ (allowed by allow-same-environment-prod)
- lsr-db → lsr-frontend ✗ (blocked by allow-frontend-to-backend-prod)
- lsr-frontend → pods in other namespaces ✗ (blocked by strict-namespace-isolation)
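You can sanity-check these flows with Antrea's Traceflow CRD, which simulates a packet and reports where it is allowed or dropped. A sketch (pod names are hypothetical placeholders; substitute real pod names, and use the Traceflow API version served by your Antrea release):

```yaml
# Sketch: trace a db -> frontend packet, which should be dropped
# by allow-frontend-to-backend-prod.
apiVersion: crd.antrea.io/v1beta1
kind: Traceflow
metadata:
  name: check-db-to-frontend
spec:
  source:
    namespace: lsr-prod
    pod: lsr-db-0              # hypothetical pod name
  destination:
    namespace: lsr-prod
    pod: lsr-frontend-abc123   # hypothetical pod name
  packet:
    ipHeader:
      protocol: 6              # TCP
    transportHeader:
      tcp:
        dstPort: 80
```

Inspecting the Traceflow status afterwards shows which policy rule handled the packet at each observation point.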
A note about labels: In the deployment manifests, you'll notice that labels are placed under spec.template.metadata.labels. This placement is crucial for the correct application of network policies and the overall functioning of the deployments. Here's why:
- Label Inheritance: Labels defined here are automatically applied to all pods created by the deployment, ensuring consistency across all managed pods.
- Pod Selection: The deployment identifies which pods it manages through spec.selector.matchLabels; the selector must match a subset of the pod template's labels, or the API server will reject the deployment.
- Network Policy Application: Network policies use label selectors to determine which pods they apply to. Placing labels here ensures all created pods are subject to the correct network policies.
- Separation of Concerns: metadata.labels at the top level of the manifest describe the Deployment object itself; spec.template.metadata.labels describe the pods it creates.
- Dynamic Updates: If you update labels in spec.template.metadata.labels, the deployment gradually replaces old pods with new ones carrying the updated labels (spec.selector itself is immutable).
For example, in the LSR frontend deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lsr-frontend
  namespace: lsr-prod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: lsr-frontend
  template:
    metadata:
      labels:
        app: lsr-frontend   # must include the selector's labels
        environment: prod
        role: frontend
        application: lsr
        intent: public
        zone: us-east
    spec:
      containers:
        - name: lsr-frontend
          image: lsr-frontend:latest
          ports:
            - containerPort: 80
This structure ensures that all pods created by this deployment will have these labels, making them correctly identifiable for network policy application, service discovery, and other Kubernetes features that rely on labels.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lsr-frontend
  namespace: lsr-prod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: lsr-frontend
  template:
    metadata:
      labels:
        app: lsr-frontend
        environment: prod
        role: frontend
        application: lsr
        intent: public
        zone: us-east
    spec:
      containers:
        - name: lsr-frontend
          image: lsr-frontend:latest
          ports:
            - containerPort: 80
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lsr-backend
  namespace: lsr-prod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: lsr-backend
  template:
    metadata:
      labels:
        app: lsr-backend
        environment: prod
        role: backend
        application: lsr
        intent: internal
        zone: us-east
    spec:
      containers:
        - name: lsr-backend
          image: lsr-backend:latest
          ports:
            - containerPort: 8080
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lsr-db
  namespace: lsr-prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lsr-db
  template:
    metadata:
      labels:
        app: lsr-db
        environment: prod
        role: db
        application: lsr
        intent: internal
        zone: us-east
    spec:
      containers:
        - name: lsr-db
          image: lsr-db:latest
          ports:
            - containerPort: 5432
These deployment manifests create the three components of the LSR application in the production environment, each with the appropriate labels to match the network policies. The network policies then control the traffic flow between these components as described in the tree structure above.
If you deploy with Helm, you may be able to set pod labels at install time, provided your chart exposes them as values:
helm install lsr-release yourchart/ \
--namespace lsr-prod \
--set labels.environment=prod \
--set labels.role=frontend \
--set labels.application=lsr \
--set labels.intent=public \
--set labels.zone=us-east \
--set labels.additionalLabels.team=devops \
--set labels.additionalLabels.version=v1
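The --set flags above only work if the chart actually wires a labels value into spec.template.metadata.labels in its templates. A hypothetical values.yaml shape the command assumes:

```yaml
# values.yaml (sketch) -- assumes the chart's deployment template renders
# everything under `labels` into spec.template.metadata.labels
labels:
  environment: prod
  role: frontend
  application: lsr
  intent: public
  zone: us-east
  additionalLabels:
    team: devops
    version: v1
```

If your chart does not expose such a value block, you would need to add the templating yourself or patch the labels after deployment.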
Zero-trust policies block all traffic by default. Only explicitly allowed traffic will be permitted.
apiVersion: crd.antrea.io/v1beta1
kind: ClusterNetworkPolicy
metadata:
  name: enforce-zero-trust-prod
spec:
  priority: 10
  tier: baseline
  appliedTo:
    - podSelector:
        matchLabels:
          intent: zero-trust
      namespaceSelector:
        matchLabels:
          environment: prod
  ingress:
    - name: drop-all-ingress-prod
      action: Drop
  egress:
    - name: drop-all-egress-prod
      action: Drop
Effect: This policy enforces a zero-trust model for all pods in the production environment labeled with intent: zero-trust. It blocks all incoming and outgoing traffic for these pods.
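Note that the LSR pods in this guide carry intent: public or intent: internal, so the zero-trust baseline never matches them. To opt a workload in (for example, to quarantine a suspect deployment), its pod template would carry the label; a hypothetical fragment:

```yaml
# Fragment (sketch): pod template labels for a quarantined workload
template:
  metadata:
    labels:
      intent: zero-trust   # matched by enforce-zero-trust-prod
```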
apiVersion: crd.antrea.io/v1beta1
kind: ClusterNetworkPolicy
metadata:
  name: enforce-zero-trust-non-prod
spec:
  priority: 10
  tier: baseline
  appliedTo:
    - podSelector:
        matchLabels:
          intent: zero-trust
      namespaceSelector:
        matchLabels:
          environment: non-prod
  ingress:
    - name: drop-all-ingress-non-prod
      action: Drop
  egress:
    - name: drop-all-egress-non-prod
      action: Drop
Effect: Similar to the production policy, this enforces a zero-trust model for all pods in the non-production environment labeled with intent: zero-trust.
These policies allow communication between pods within the same environment (production or non-production), overriding the zero-trust policies for intra-environment traffic.
apiVersion: crd.antrea.io/v1beta1
kind: ClusterNetworkPolicy
metadata:
  name: allow-same-environment-prod
spec:
  priority: 100
  tier: securityops
  appliedTo:
    - namespaceSelector:
        matchLabels:
          environment: prod
  ingress:
    - name: allow-ingress-prod
      action: Allow
      from:
        - namespaceSelector:
            matchLabels:
              environment: prod
  egress:
    - name: allow-egress-prod
      action: Allow
      to:
        - namespaceSelector:
            matchLabels:
              environment: prod
Effect: This policy allows communication between pods in namespaces labeled environment: prod, overriding the zero-trust baseline for traffic within the production environment.
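Note that the namespaceSelector rules match labels on the Namespace objects themselves, so the lsr-prod namespace must carry the environment: prod label for this policy to apply; for example:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: lsr-prod
  labels:
    environment: prod   # matched by the namespaceSelector rules above
```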
apiVersion: crd.antrea.io/v1beta1
kind: ClusterNetworkPolicy
metadata:
  name: allow-same-environment-non-prod
spec:
  priority: 100
  tier: securityops
  appliedTo:
    - namespaceSelector:
        matchLabels:
          environment: non-prod
  ingress:
    - name: allow-ingress-non-prod
      action: Allow
      from:
        - namespaceSelector:
            matchLabels:
              environment: non-prod
  egress:
    - name: allow-egress-non-prod
      action: Allow
      to:
        - namespaceSelector:
            matchLabels:
              environment: non-prod
Effect: This policy allows communication between pods in namespaces labeled environment: non-prod, overriding the zero-trust baseline for traffic within the non-production environment.
These policies allow specific communication paths between different roles within the same environment.
apiVersion: crd.antrea.io/v1beta1
kind: ClusterNetworkPolicy
metadata:
  name: allow-frontend-to-backend-prod
spec:
  priority: 200
  tier: application
  appliedTo:
    - podSelector:
        matchLabels:
          role: frontend
      namespaceSelector:
        matchLabels:
          environment: prod
  ingress:
    - name: allow-backend-traffic-prod
      action: Allow
      from:
        - podSelector:
            matchLabels:
              role: backend
    - name: drop-db-traffic-prod
      action: Drop
      from:
        - podSelector:
            matchLabels:
              role: db
Effect: This policy allows backend pods to communicate with frontend pods in the production environment. It also explicitly blocks any direct communication from database pods to frontend pods.
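The Allow rule above admits backend traffic on every port. Antrea rules also accept a ports list, so for least privilege you could restrict the rule to the port the frontend actually serves; a sketch, assuming the frontend listens on TCP 80:

```yaml
# Fragment (sketch): tighter ingress rule allowing backend -> frontend
# traffic only on TCP 80
ingress:
  - name: allow-backend-traffic-prod
    action: Allow
    from:
      - podSelector:
          matchLabels:
            role: backend
    ports:
      - protocol: TCP
        port: 80
```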
apiVersion: crd.antrea.io/v1beta1
kind: ClusterNetworkPolicy
metadata:
  name: allow-frontend-to-backend-non-prod
spec:
  priority: 200
  tier: application
  appliedTo:
    - podSelector:
        matchLabels:
          role: frontend
      namespaceSelector:
        matchLabels:
          environment: non-prod
  ingress:
    - name: allow-backend-traffic-non-prod
      action: Allow
      from:
        - podSelector:
            matchLabels:
              role: backend
    - name: drop-db-traffic-non-prod
      action: Drop
      from:
        - podSelector:
            matchLabels:
              role: db
Effect: This policy allows backend pods to communicate with frontend pods in the non-production environment. It also explicitly blocks any direct communication from database pods to frontend pods.
To ensure that applications cannot communicate across different namespaces, we enforce strict namespace isolation. This means that pods can only communicate within their own namespace unless explicitly allowed otherwise.
apiVersion: crd.antrea.io/v1beta1
kind: ClusterNetworkPolicy
metadata:
  name: strict-namespace-isolation
spec:
  priority: 50
  tier: securityops
  appliedTo:
    - namespaceSelector:
        matchExpressions:
          - { key: "kubernetes.io/metadata.name", operator: NotIn, values: ["kube-system"] }
  ingress:
    - name: allow-same-namespace-ingress
      action: Pass
      from:
        - namespaces:
            match: Self
    - name: deny-other-namespace-ingress
      action: Drop
      from:
        - namespaceSelector: {}
  egress:
    - name: allow-same-namespace-egress
      action: Pass
      to:
        - namespaces:
            match: Self
    - name: deny-other-namespace-egress
      action: Drop
      to:
        - namespaceSelector: {}
Explanation:
- Priority: Set to 50 so this policy is evaluated before the same-environment policies (priority 100) within the securityops tier; in Antrea, a lower priority number means higher precedence.
- Tier: The securityops tier is enforced before the application and baseline tiers, so namespace isolation is applied ahead of the role-based rules and the zero-trust baseline.
- AppliedTo: Targets all namespaces except kube-system (adjust the NotIn values as needed).
- Ingress Rules:
  - allow-same-namespace-ingress: Uses the Pass action for traffic from the same namespace (Self); Pass skips the remaining Antrea-native rules and defers the decision to any Kubernetes NetworkPolicies, effectively allowing the traffic if none apply.
  - deny-other-namespace-ingress: Drops all other ingress traffic from different namespaces.
- Egress Rules:
  - allow-same-namespace-egress: Passes traffic to the same namespace (Self) in the same way.
  - deny-other-namespace-egress: Drops all other egress traffic to different namespaces.
Effect: This policy enforces that pods can only communicate within their own namespace. Any attempt to communicate across namespaces is denied unless a policy with higher precedence (a lower priority number, or an earlier tier) explicitly allows it.
Note: Ensure that this policy is compatible with your existing policies. Because it is evaluated before the environment-based policies, cross-namespace traffic is dropped even within an environment; same-namespace traffic is passed through, so intra-namespace communication continues to work as your other policies define.
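To permit a specific cross-namespace path despite the isolation policy, add a rule that is evaluated earlier. A hypothetical example admitting scrapes from a monitoring namespace (the namespace name and policy name are assumptions; priority 40 beats the isolation policy's 50 within the same securityops tier):

```yaml
apiVersion: crd.antrea.io/v1beta1
kind: ClusterNetworkPolicy
metadata:
  name: allow-monitoring-scrape   # hypothetical exception policy
spec:
  priority: 40                    # evaluated before strict-namespace-isolation (50)
  tier: securityops
  appliedTo:
    - namespaceSelector:
        matchLabels:
          environment: prod
  ingress:
    - name: allow-from-monitoring
      action: Allow
      from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
```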
After deploying all these policies, the network will be configured as follows:
- Zero-Trust Enforcement: All pods labeled with intent: zero-trust have all traffic blocked by default.
- Same-Environment Communication: Pods within the same environment (prod or non-prod) can communicate with each other.
- Frontend-to-Backend Communication: Backend pods can communicate with frontend pods within their respective environments, while database pods are explicitly prevented from communicating directly with frontend pods.
- Namespace Isolation: Pods can only communicate within their own namespace; traffic attempting to cross namespaces is denied unless explicitly allowed by a higher-precedence policy.
- Overall Security Posture: The layered approach ensures a secure baseline with the zero-trust policy, necessary communication channels are explicitly allowed to keep the application functional, and unauthorized access attempts are automatically blocked, containing potential breaches.
This comprehensive network policy setup ensures that your Kubernetes cluster maintains a strong security posture by enforcing strict communication rules, preventing unauthorized access, and containing potential breaches effectively.
- Policy Ordering: Antrea processes policies by tier and then by priority (lower numbers first). Ensure that the strict-namespace-isolation policy has a priority that correctly positions it within the enforcement order.
- Testing Policies: Before deploying to production, thoroughly test these policies in a staging environment to ensure that legitimate traffic is not inadvertently blocked.
- Logging and Monitoring: Enable logging for your network policies to monitor traffic flows and detect unauthorized access attempts, by setting the enableLogging field to true in your policy rules.
- Policy Maintenance: Regularly review and update your network policies to accommodate changes in your application architecture or security requirements.
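For example, enabling logging on the namespace-isolation drop rule would look like this (a fragment of the strict-namespace-isolation policy above):

```yaml
# Fragment (sketch): per-rule logging in an Antrea ClusterNetworkPolicy
ingress:
  - name: deny-other-namespace-ingress
    action: Drop
    enableLogging: true   # dropped connections are logged by the Antrea agent
    from:
      - namespaceSelector: {}
```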