An Introduction to Kubernetes Network Policies for Security People

Reuven Harrison
Feb 23, 2019

Many organizations are currently adopting Kubernetes to run their applications. This is happening to such an extent that some people are referring to Kubernetes as the new data-center operating system. Consequently, organizations are starting to treat Kubernetes (often abbreviated as k8s) as a mission-critical platform that requires mature business processes including network security.

Network security teams which are tasked with securing this new platform may find it surprisingly different. For example, the default Kubernetes policy is any-any-any-allow!

This guide provides some insights into the workings of Kubernetes network policies, how they compare to traditional firewall policies, and some pitfalls and best practices that will help you secure your Kubernetes applications.

About Me

I am the CTO and Co-Founder of Tufin, the Security Policy Company. For the past three years I’ve been working on our solution for Kubernetes security, Tufin SecureCloud. I wrote this article as a result of the insights gained throughout the development process.

I welcome you to try SecureCloud on your own cluster and share your feedback.

Kubernetes Network Policies

Kubernetes provides a mechanism called Network Policies that can be used to enforce layer 3/layer 4 segmentation for applications that are deployed on the platform. Network policies lack the advanced features of modern firewalls, like layer-7 control and threat detection, but they do provide a basic level of network security which is a good starting point.

Network Policies Control Pod Communications

Kubernetes workloads are run in pods which consist of one or more containers that are deployed together. Kubernetes assigns each pod an IP address which is routable from all other pods, even across the underlying servers. Kubernetes network policies specify the access permissions for groups of pods, much like security groups in the cloud are used to control access to VM instances.

Writing Network Policies

Like other Kubernetes resources, network policies are defined in a language called YAML. Here’s a simple example which allows access from balance to postgres:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default.postgres
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: postgres
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: balance
  policyTypes:
  - Ingress

In order to write your own network policies, you will need a basic understanding of YAML. YAML is based on indentation (with spaces, not tabs!). An indented item belongs to the closest less-indented item above it. A hyphen (dash) starts a new list item; all other items are map entries. You can find plenty of information about YAML on the Internet.
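
For instance, in the policy above, ingress and from hold lists (each item starts with a hyphen), while matchLabels holds a map of key-value pairs:

ingress:              # a map entry whose value is a list
- from:               # the first (and only) item of the ingress list
  - podSelector:      # the first item of the from list
      matchLabels:    # a map entry whose value is a map
        app: balance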

Once you’ve written the policy yaml, use kubectl to create the policy:

kubectl create -f policy.yaml

The Network Policy Spec

A network policy specification consists of four elements:

  1. podSelector: the pods that will be subject to this policy (the policy target) - mandatory
  2. policyTypes: specifies which types of policies are included in this policy, ingress and/or egress - this is optional, but I recommend always specifying it explicitly.
  3. ingress: allowed inbound traffic to the target pods - optional
  4. egress: allowed outbound traffic from the target pods - optional

This example, adapted (I changed “role” to “app”) from the Kubernetes web site, specifies all four elements:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978

Note that you don’t have to include all four elements. The main podSelector element is mandatory, the other three are optional.

If you omit policyTypes, it will be inferred as follows:

  • The policy is always assumed to include an ingress definition. If you don’t explicitly specify one, it will be treated as “no traffic allowed”.
  • Egress will be inferred from the existence or non-existence of the egress element.

To avoid mistakes, I recommend always specifying policyTypes explicitly.

If a policy type is in effect according to the logic above but the corresponding ingress or egress element is not provided, the policy treats it as “no traffic allowed” (see Cleanup Rule below).

The Default Policy is Allow

When no policies are defined, Kubernetes allows all communications. All pods can talk to each other freely. This may sound counter-intuitive from a security perspective, but keep in mind that Kubernetes was designed by developers who want applications to communicate. Network policies were added as a later enhancement.

Namespaces

Namespaces are Kubernetes’ multi-tenancy mechanism. They are intended to isolate namespace environments from each other; however, communications between namespaces are still allowed by default.

Like most Kubernetes entities, network policies also live in a specific namespace. The metadata header tells Kubernetes which namespace the policy belongs to:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: my-namespace
spec:
  ...

If you don’t explicitly specify the metadata namespace, the policy will be applied in the namespace that is provided to kubectl (the default is namespace=default):

kubectl apply -n my-namespace -f namespace.yaml

I recommend explicitly specifying the namespace, unless you are writing a policy that needs to be applied uniformly across multiple namespaces.

The main podSelector element in a policy selects pods from the same namespace the policy belongs to (it cannot select pods from another namespace).

podSelectors in the ingress and egress elements also select pods in the same namespace unless you combine them with a namespaceSelector as described in “Filter on Namespaces AND Pods” below.

Policy Naming Conventions

Policy names are unique within a namespace. You can’t have two policies with the same name in a namespace, but you can have policies with the same name in different namespaces. This is handy when you want to repeatably apply a certain policy in multiple namespaces.

One naming option which I like is to combine the namespace with the target pods. For example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default.postgres
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: postgres
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: admin
  policyTypes:
  - Ingress

Labels

Kubernetes objects, like pods and namespaces, can have user-defined labels attached to them. Labels are the equivalent of tags in the cloud. Kubernetes network policies rely on labels to select the pods they apply to:

podSelector:
  matchLabels:
    role: db

Or, the namespaces they apply to. This example selects all pods in namespaces with matching labels:

namespaceSelector:
  matchLabels:
    project: myproject

One caveat to watch out for: if you use a namespaceSelector, make sure that the namespace(s) you are selecting really have the label you are using. Keep in mind that the built-in namespaces like default and kube-system don’t have labels out of the box.

You can add a label to a namespace like this:

kubectl label namespace default namespace=default

Or, you can rely on automatic namespace labeling, available in Kubernetes 1.21 onwards.
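
From 1.21, Kubernetes automatically adds the label kubernetes.io/metadata.name to every namespace, with the namespace’s name as its value, so you can select a namespace by name without labeling it yourself. For example:

namespaceSelector:
  matchLabels:
    kubernetes.io/metadata.name: kube-system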

Conversely, the namespace in the metadata section is the actual name of the namespace, not a label:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  ...

Source and Destination

Firewall policies consist of rules with source and destination. Kubernetes network policies are defined for a target — a set of pods that the policy applies to, and then specify ingress and/or egress traffic for the target. Using the same example again, you can see the policy target — all pods in the default namespace which have a label with key “app” and value “db”:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978

The ingress item in this policy allows inbound traffic to the target pods. So ingress is interpreted as the “source” and the target is the respective “destination”.
Likewise, egress is interpreted as the “destination” with the target being its respective “source”.

This is equivalent to two firewall rules: Ingress -> Target; Target -> Egress

Egress and DNS

When enforcing egress you must be careful not to block DNS which Kubernetes uses to resolve service names to their IP addresses. For example, this policy will not work because you haven’t allowed balance to perform DNS lookups:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default.balance
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: balance
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: postgres
  policyTypes:
  - Egress

To fix it, you must allow access to the DNS service:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default.balance
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: balance
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: postgres
  - to:
    ports:
    - protocol: UDP
      port: 53
  policyTypes:
  - Egress

The second ‘to’ element is empty and therefore implicitly selects all destinations, allowing balance to perform DNS lookups against the Kubernetes DNS service, which normally runs in the kube-system namespace.

While this works, it is overly permissive and insecure — it also allows DNS lookups outside the cluster!

You can lock it down in a phased approach:

  1. Allow DNS only inside the cluster by adding a namespaceSelector:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default.balance
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: balance
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: postgres
  - to:
    - namespaceSelector: {}
    ports:
    - protocol: UDP
      port: 53
  policyTypes:
  - Egress

2. Allow DNS only in the kube-system namespace.
To do that, you need to add a label to the kube-system namespace: kubectl label namespace kube-system namespace=kube-system and specify it in the policy with a namespaceSelector:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default.balance
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: balance
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: postgres
  - to:
    - namespaceSelector:
        matchLabels:
          namespace: kube-system
    ports:
    - protocol: UDP
      port: 53
  policyTypes:
  - Egress

3. The paranoid may also want to go further and restrict DNS to the specific DNS pods in kube-system, as sketched below. See “Filter on Namespaces AND Pods” below for how this combination works.
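
A minimal sketch of such a rule, assuming the cluster DNS pods carry the conventional k8s-app: kube-dns label (true for most kube-dns and CoreDNS deployments) and that kube-system was labeled as in step 2; it would replace the second “to” item in the policy above:

  - to:
    - namespaceSelector:
        matchLabels:
          namespace: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53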

Another option is to allow DNS at the namespace level, so you don’t need to specify it per service:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default.dns
  namespace: default
spec:
  podSelector: {}
  egress:
  - to:
    - namespaceSelector: {}
    ports:
    - protocol: UDP
      port: 53
  policyTypes:
  - Egress

The empty podSelector selects all pods in the namespace.

Egress and the Instance Metadata Server (IMDS)

Kubernetes workloads may need to read instance metadata from the cloud platform. When enforcing egress policies, you should be careful not to block this endpoint.

The metadata server (IMDS) endpoint is usually 169.254.169.254:80. You can ping the metadata server from a pod to find its actual IP:

kubectl -n kube-system exec --stdin --tty my-pod -- ping metadata

And then add it to your egress policy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: instance-metadata
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 169.254.169.254/32
    ports:
    - protocol: TCP
      port: 80

In some cases, Kubernetes is configured to access the IMDS through a proxy, for example when using the GKE metadata server. In such cases, you should allow egress to the proxy rather than to the IMDS itself:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: instance-metadata
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 127.0.0.1/32
    ports:
    - protocol: TCP
      port: 988

First Match And Rule Order

Firewall administrators know that the action (Allow or Deny) taken for a packet is determined by the first rule that matches it.

In Kubernetes, the order of the policies has no significance.

The default behavior, when no policies are defined, is to allow all communications, so all pods can talk to each other freely.

As soon as you start defining policies, every pod that is selected by at least one policy becomes isolated and may only communicate according to the union (logical OR) of the policies that select it.

Pods that are not selected by any policy continue to be open. You can change this behavior by defining a cleanup rule.

Cleanup Rule (Deny)

Firewall policies usually have an any-any-any-deny rule to drop all non-explicitly allowed traffic.

Kubernetes doesn’t have a “deny” action but you can achieve the same effect with a regular (allow) policy that specifies policyTypes=Ingress but omits the actual ingress definition. This is interpreted as “no ingress allowed”:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress

This policy selects all pods in the namespace as the policy target and leaves ingress undefined, which means no inbound traffic is allowed.

Similarly, you can deny all outbound traffic from a namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress

Keep in mind that any additional policy allowing traffic to pods in the namespace will take precedence over this deny rule! This is the equivalent of adding allow rules above the deny rule in a firewall.

Any-Any-Any-Allow

An allow-all policy can be created by amending the deny-all policy above with an empty ingress element:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
  namespace: default
spec:
  podSelector: {}
  ingress:
  - {}
  policyTypes:
  - Ingress

This allows communications from all pods in all namespaces (and all IPs) to any pod in the default namespace. This is the default behavior, so you wouldn’t normally need to define it. It can be useful, however, as a temporary override of more specific allow rules when diagnosing a problem.

You can narrow this down to allow access only to a specific set of pods (app:balance) in the default namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-to-balance
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: balance
  ingress:
  - {}
  policyTypes:
  - Ingress

The following policy permits all ingress AND egress traffic (including access to any IP outside of the cluster):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
spec:
  podSelector: {}
  ingress:
  - {}
  egress:
  - {}
  policyTypes:
  - Ingress
  - Egress

Allowing All Internal Communications

You can allow all internal communications (excluding external IPs) by selecting all pods in all namespaces.

For example, this policy allows all pods in the default namespace to connect to all pods in all namespaces, but not to external IP addresses:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress-internally
  namespace: default
spec:
  podSelector: {}
  egress:
  - to:
    - namespaceSelector: {}
  policyTypes:
  - Egress

And similarly for ingress. Allow all pods in default to receive traffic from pods in all namespaces, but not from external sources:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress-internally
  namespace: default
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector: {}
  policyTypes:
  - Ingress

Combining Multiple Policies

Policies are combined with a logical OR allowing each selected pod to communicate according to the union of all policies that are applied to it.

The policies are combined in three different levels:

  1. Under “from” and “to”
  • namespaceSelector — selects an entire namespace
  • podSelector — selects pods
  • ipBlock — selects a subnet

You can define as many of these as you like under from/to (even multiple instances of the same kind) — they will all be combined with a logical OR.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default.postgres
  namespace: default
spec:
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: indexer
    - podSelector:
        matchLabels:
          app: admin
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
  - Ingress

2. Under “ingress” or “egress”

Within a policy, the ingress element can have multiple “from” items, which are combined with a logical OR. Likewise, the egress element can have multiple “to” items, which are also combined with a logical OR.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default.postgres
  namespace: default
spec:
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: indexer
  - from:
    - podSelector:
        matchLabels:
          app: admin
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
  - Ingress

3. Multiple policies are also combined with a logical OR

You can write multiple network policies that refer to the same pods. Traffic is allowed according to the union (logical OR) of all policies that select a pod.
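
For example, the single policy above could just as well be split into two separate policies; the postgres pods would then accept traffic from indexer OR admin, exactly as before (the policy names here are illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default.postgres-from-indexer
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: postgres
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: indexer
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default.postgres-from-admin
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: postgres
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: admin
  policyTypes:
  - Ingress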

Inter-Namespace Communication

By default, inter-namespace communications are allowed. You can change this with a deny-all policy to prevent communication from and/or to your namespace (see Cleanup Rule above).

If you blocked access to your namespace (see Cleanup Rule above), you can make exceptions to the deny policy by allowing connections from a specific namespace using the namespaceSelector:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database.postgres
  namespace: database
spec:
  podSelector:
    matchLabels:
      app: postgres
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          namespace: default
  policyTypes:
  - Ingress

This allows all pods in the default namespace to access the postgres pods in the database namespace. But what if you only wanted to allow specific pods in the default namespace to access postgres?

Filter on Namespaces AND Pods

Kubernetes 1.11 and above allows you to combine the namespaceSelector and the podSelector with a logical AND as follows:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database.postgres
  namespace: database
spec:
  podSelector:
    matchLabels:
      app: postgres
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          namespace: default
      podSelector:
        matchLabels:
          app: admin
  policyTypes:
  - Ingress

Why is this interpreted as an AND as opposed to the usual OR?

Note that the podSelector doesn’t begin with a dash (hyphen), which means, in YAML, that the podSelector and the preceding namespaceSelector both belong to the same list item and are therefore logically combined with AND.

If you added a dash in front of the podSelector, it would create a new list item which would be combined with the preceding namespaceSelector with a logical OR.
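
For illustration, this hypothetical variation adds the dash. It would allow traffic from any pod in namespaces labeled namespace=default OR from admin pods in the policy’s own namespace (database), which is probably not what you intended:

  ingress:
  - from:
    - namespaceSelector:       # item 1: any pod in the selected namespaces
        matchLabels:
          namespace: default
    - podSelector:             # item 2 (new list item): admin pods in the policy's own namespace
        matchLabels:
          app: admin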

To select pods with a specific label across all namespaces, specify an empty namespaceSelector:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database.postgres
  namespace: database
spec:
  podSelector:
    matchLabels:
      app: postgres
  ingress:
  - from:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          app: admin
  policyTypes:
  - Ingress

Multiple Labels Are Combined With AND

Firewall rules with multiple objects (hosts, networks, groups…) are interpreted as a logical OR. For example, a rule whose source contains Host_1 and Host_2 is applied if the packet source matches Host_1 OR Host_2.

Conversely, in Kubernetes, multiple labels in a podSelector or namespaceSelector are combined with a logical AND. For example, this will select pods that have both labels, role=db AND version=v2:

podSelector:
  matchLabels:
    role: db
    version: v2

The same logic applies to all types of selectors: selectors for the policy target, selectors for pods and selectors for namespaces.

Subnets and IP Addresses (IPBlocks)

Firewalls use VLANs, IP addresses and subnets to segment the network.

In Kubernetes, the pod IPs are automatically assigned and can change frequently, so, instead, network policies use labels to select pods and namespaces.

Subnets (ipBlocks) are used for ingress or egress connectivity (North-South). For example, this policy allows all pods in the default namespace to access Google’s DNS service:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-dns
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 8.8.8.8/32
    ports:
    - protocol: UDP
      port: 53

The empty pod selector in this example means “select all pods in the namespace”.

This policy allows access to 8.8.8.8 only which means that it denies access to any other IP. So, in effect, you have blocked access to the internal Kubernetes DNS service. If you still want to allow it, you’ll need to specify it explicitly.
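
For example, a sketch that keeps the 8.8.8.8 rule and also explicitly allows DNS to pods in all namespaces (covering the cluster DNS service), along the lines of the earlier DNS examples:

  egress:
  - to:
    - ipBlock:
        cidr: 8.8.8.8/32       # Google's DNS service
    - namespaceSelector: {}    # any pod in any namespace, including the cluster DNS pods
    ports:
    - protocol: UDP
      port: 53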

Normally ipBlocks and podSelectors are mutually exclusive because you don’t use internal pod IPs in ipBlocks. If you do specify ipBlocks with internal pod IPs it will actually allow communications to/from pods with these IPs, although in practice you won’t know which IPs to use, which is why you shouldn’t use IPs to select pods.

As a counter-example, this policy includes all IPs and consequently allows access to all other pods:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-any
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0

You can allow access to external IPs only by excluding internal pod IPs. For example, if your pod subnet is 10.16.0.0/14:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-any
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.16.0.0/14

Ports and Protocols

Pods should normally listen on a single port, which means you could simply omit ports from your policies and allow the default “any port”. However, it is still good practice to make your policies as restrictive as possible, so you may want to specify ports anyway:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default.postgres
  namespace: default
spec:
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: indexer
    - podSelector:
        matchLabels:
          app: admin
    ports:
    - port: 443
      protocol: TCP
    - port: 80
      protocol: TCP
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
  - Ingress

Note that the ports apply to all items in the same “to” or “from” clause they appear in. If you want to specify different ports for different sets of items, break up the ingress or egress into multiple “to” or “from” items, each with its own ports:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default.postgres
  namespace: default
spec:
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: indexer
    ports:
    - port: 443
      protocol: TCP
  - from:
    - podSelector:
        matchLabels:
          app: admin
    ports:
    - port: 80
      protocol: TCP
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
  - Ingress

Port defaults:

  • If you omit the ports definition altogether, it means all protocols (TCP, UDP and all other protocols) and all ports
  • If you omit the protocol definition it defaults to TCP
  • If you omit the port definition it defaults to all ports

Best practice: don’t rely on defaults, be explicit.

Port ranges are supported since Kubernetes 1.21.
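
These use the endPort field. A hedged sketch, assuming your cluster version and CNI support it:

  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 32000     # start of the range
      endPort: 32768  # end of the range (inclusive)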

Note that if you want to allow a protocol that is neither TCP nor UDP, for example ICMP (ping), you have to omit the ports definition. This will allow all protocols and ports; there is no way to allow only ICMP.

Note that you must use pod ports, not service ports (more about this in the next paragraph).

Are Policies Defined for Pods or Services?

When a pod accesses another pod in Kubernetes it usually does so through a service — a virtual load-balancer which forwards traffic to the pods that implement the service. You may be tempted to think that network policies control access to services but this is not the case. Kubernetes network policies are applied to the pod ports, not to the service ports.

For example, if a service is listening on port 80 but forwarding traffic to its pods on port 8080, you need to specify 8080 in the network policy.

This design is sub-optimal because it means that you need to update network policies when someone changes the internal workings of a service (which ports the pods are listening on).
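
To illustrate, consider a hypothetical Service that exposes port 80 but forwards traffic to its pods on port 8080; a network policy protecting those pods must allow 8080, not 80:

apiVersion: v1
kind: Service
metadata:
  name: balance
  namespace: default
spec:
  selector:
    app: balance
  ports:
  - port: 80          # the port clients connect to (the service port)
    targetPort: 8080  # the port the pods actually listen on - this is the port to use in the network policy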

There is a solution for this: using named ports instead of hard-coded port numbers:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default.allow-hello
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: hello
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: curl
    ports:
    - port: my-http

You need to specify this port name in the pod definition (or its deployment):

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello
  name: hello
spec:
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - image: gcr.io/hello-minikube-zero-install/hello-node
        imagePullPolicy: Always
        name: hello-node
        ports:
        - containerPort: 8080
          name: my-http

This decouples network policies and pods.

Ingress

The term “ingress” has two meanings in the context of Kubernetes:

  1. Ingress network policies allow you to control inbound access to pods, from other pods and from external IPs.
  2. Kubernetes ingress is a way to configure external load-balancers to route traffic into the cluster.

You can write a k8s network policy to restrict access from the Kubernetes ingress, but in most cases it isn’t very useful because it will only control the internal IP of the load-balancer. In order to control the external subnets that have access to the cluster, you will need to configure access controls on an external enforcement point such as the load-balancer itself or a firewall in front of it.

Do I Need To Define Both Ingress And Egress?

The short answer is Yes — in order to allow pod A to talk to pod B you need to allow pod A to create an outbound connection through an egress policy and pod B to accept an inbound connection through an ingress policy.

In reality, however, you may rely on the default allow policy for one or both of the directions.

If the source pod is selected by one or more egress policies, it will be restricted according to the union of those policies, and in that case you will need to explicitly allow it to connect to the destination pod. If the pod is not selected by any policy, it is allowed all egress traffic by default.

Similarly, a destination pod which is selected by one or more ingress policies will be restricted according to the union of those policies, and in that case you will need to explicitly allow it to receive traffic from the source pod. If the pod is not selected by any policy, it is allowed all ingress traffic by default.

See also “Stateful or Stateless” below.

hostNetwork Gotcha

Kubernetes usually runs your pods in their own isolated network, however, you can instruct Kubernetes to run your pod on the host network by specifying:

hostNetwork: true

Doing this bypasses network policies altogether. Your pod will be able to communicate just like any other process that is running on the host itself.
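
For reference, this is where the setting lives in a pod spec (an illustrative fragment):

apiVersion: v1
kind: Pod
metadata:
  name: debug-pod
spec:
  hostNetwork: true    # the pod shares the node's network namespace; network policies won't apply to it
  containers:
  - name: debug
    image: busybox
    command: ["sleep", "3600"]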

Traffic Logs

Kubernetes network policies cannot generate traffic logs. This makes it difficult to know whether a policy is working as expected, and it is a major limitation with regards to security analysis.

Tufin SecureCloud can generate traffic logs for Kubernetes clusters.

Controlling Traffic to External Services

Kubernetes network policies don’t allow you to specify a fully qualified domain name (FQDN) for egress. This is a major limitation when trying to restrict traffic to external endpoints that aren’t associated with a fixed IP address (like aws.com).

Tufin SecureCloud can control access to external services with dynamic IP addresses.

Controlling Traffic to the Kubernetes API Server

Please see my dedicated post addressing this use-case.

Policy Validation

Firewalls will warn you or even refuse to accept an invalid policy. Kubernetes does some validation too. When defining a network policy using kubectl, Kubernetes may tell you that the policy definition is invalid and refuse to accept it. In other cases, Kubernetes will accept the policy and amend it with missing details which you can see with:

kubectl get networkpolicy <policy-name> -o yaml

Be aware that Kubernetes validation is not bulletproof and it can allow certain types of errors in the policies.

Enforcement

Kubernetes doesn’t enforce network policies itself; it is just an API gateway which passes the tough job of enforcement to an underlying system called the Container Network Interface (CNI). Defining policies on a Kubernetes cluster without a suitable CNI is like defining policies on a firewall management server without installing them on the firewalls. You must ensure that you have a security-capable CNI or, for hosted Kubernetes platforms, explicitly enable network policies, which will install the CNI for you.

Note that Kubernetes won’t warn you if you define a network policy without a supporting CNI.

Stateful or Stateless?

All Kubernetes CNIs that I have seen are stateful (Calico uses Linux conntrack for example). This enables a pod to receive replies on a TCP connection it initiated without having to open high-ports for the reply. I am not aware of a Kubernetes standard that guarantees statefulness.

Advanced Security Policy Management

Here are some ways to get more advanced network policy enforcement for Kubernetes:

  1. The servicemesh design pattern uses sidecars to provide advanced telemetry and traffic control at the service level. See Istio for example.
  2. Some of the CNI providers have extended their tools to go beyond Kubernetes network policies.
  3. Monitor real-time traffic, and automate Kubernetes network policies with Tufin SecureCloud.


Summary

Kubernetes network policies provide a good means for segmenting your clusters, but they are non-intuitive and have many caveats. As a result, many clusters are left exposed with minimal or no network segmentation. Possible solutions could be automating the policy definitions or using other means of segmentation.

Meanwhile, I hope you find this guide useful to clarify and resolve some of the questions you may stumble upon.

Reuven

p.s., I am the CTO and Co-Founder of Tufin which makes security policy management solutions.
