This section is a refresher that provides an overview of the main concepts of security in Kubernetes. At the end of this section, please complete the exercises to put these concepts into practice.
Kubernetes offers multiple methods to authenticate users against the API Server, such as X.509 client certificates, bearer tokens, and OpenID Connect (OIDC) tokens.
To authenticate an application running in a Pod, Kubernetes relies on ServiceAccount resources.
When we create a cluster, an admin kubeconfig file is generated, similar to the following one.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CR...0tLS0tCg==
    server: https://10.55.133.216:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRU...0tLS0tCg==
    client-key-data: LS0tLS1CRU...0tLS0tCg==
This file contains a public/private key pair used for authentication against the API Server. We can use OpenSSL commands to get details about the public key (x509 certificate).
The following screenshot shows the Subject used in the certificate:
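If you want to reproduce this, the commands below are one way to display that Subject (a quick sketch, assuming the admin kubeconfig is the active one and that openssl is installed):

# Extract the embedded client certificate and print its Subject
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 -d | openssl x509 -noout -subject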
Using this certificate to communicate with the API Server authenticates us as the kubernetes-admin user, a member of the system:masters group. This is a special case, as the system:masters group grants full access to the cluster.
The admin kubeconfig file is not the only kubeconfig file generated during the cluster creation step. As we’ll see in the next section, each component that needs to communicate with the API Server has its own kubeconfig file (and associated access rights).
The following picture illustrates how the control plane components communicate with each other.
The /etc/kubernetes folder contains the following files to ensure this communication is secured.
$ sudo tree /etc/kubernetes
/etc/kubernetes
├── admin.conf
├── controller-manager.conf
├── kubelet.conf
├── manifests
│   ├── etcd.yaml
│   ├── kube-apiserver.yaml
│   ├── kube-controller-manager.yaml
│   └── kube-scheduler.yaml
├── pki
│   ├── apiserver-etcd-client.crt
│   ├── apiserver-etcd-client.key
│   ├── apiserver-kubelet-client.crt
│   ├── apiserver-kubelet-client.key
│   ├── apiserver.crt
│   ├── apiserver.key
│   ├── ca.crt
│   ├── ca.key
│   ├── etcd
│   │   ├── ca.crt
│   │   ├── ca.key
│   │   ├── healthcheck-client.crt
│   │   ├── healthcheck-client.key
│   │   ├── peer.crt
│   │   ├── peer.key
│   │   ├── server.crt
│   │   └── server.key
│   ├── front-proxy-ca.crt
│   ├── front-proxy-ca.key
│   ├── front-proxy-client.crt
│   ├── front-proxy-client.key
│   ├── sa.key
│   └── sa.pub
├── scheduler.conf
└── super-admin.conf
For information purposes, the following table gives the subject of the certificates embedded in each kubeconfig file.
| file | subject |
|---|---|
| admin.conf | O = system:masters, CN = kubernetes-admin |
| super-admin.conf | O = system:masters, CN = kubernetes-super-admin |
| controller-manager.conf | CN = system:kube-controller-manager |
| kubelet.conf | O = system:nodes, CN = system:node:NODE_NAME |
| scheduler.conf | CN = system:kube-scheduler |
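To verify these subjects yourself on a control plane node, a small loop like the one below should do. It is only a sketch: it assumes each file embeds its certificate as client-certificate-data; on some setups kubelet.conf references an external certificate file instead.

# Print the Subject of the client certificate embedded in each kubeconfig file
for f in /etc/kubernetes/*.conf; do
  echo -n "$f: "
  sudo grep client-certificate-data "$f" | awk '{print $2}' \
    | base64 -d | openssl x509 -noout -subject
done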
When a Pod needs to access the API Server, it must use a resource of type ServiceAccount. The following YAML specification defines a ServiceAccount named viewer.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: viewer
We can manually create a token for this ServiceAccount.
kubectl create token viewer
This command returns a token similar to the following one:
eyJhbGciOiJSUzI1NiIsImtpZCI6IlRwSU85ZXdWUFp0SlpjaDBjekl6ZTNaNGRuUTZSVDFiV2dyWVhqbGwyRDAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ1NDk5OTUyLCJpYXQiOjE3NDU0OTYzNTIsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMTE1OTgzZjYtOWE3Ny00ZmY1LWE4OGQtMTc2ODg3N2YxYmE3Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6InZpZXdlciIsInVpZCI6IjY2NmE3NWNkLWRkZGUtNDAzYi1iZmE0LWM0MjIxNWI1OTA1YiJ9fSwibmJmIjoxNzQ1NDk2MzUyLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDp2aWV3ZXIifQ.CGYbqWDj3KaEGPgU_pV6sL1wRf3IU56AlpljLxUO6tvpbkK7Z6le8FI5zdwp_04LgcWnHLo5-hsZiyJxmeKYXhsb3CASkI0Vvumfsb8kahIiJxVXIE-PfzKNlxampuubc3mG4q9h1s0M_Y-PubMdl4TkBoLMjujxbsTtPqpD2joxyZ2YB7ys7DiGp-BjQwXwwaxOniSwd0l_tyEAlX0UTy0qjmjjuMBJKQTLDzwPJXWCAXbeAMULsnsosS21sWyimmVMz6HQ8S4MttkMSg8eZ1IW-LPPn3Hfs0lBLRYeVRBn6qe4l7qxgCfgj57GfYgEWGy5BO9uaAAGcHVBdTacAQ
From jwt.io we can decode this JWT and see that it authenticates the ServiceAccount named viewer within the default Namespace. We could use this token manually to call the API Server, but when a Pod uses a ServiceAccount, a dedicated token is automatically created and mounted into its containers’ filesystem.
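As an illustration of that manual usage, the following sketch calls the API Server directly with the token. The API Server address is the one from the kubeconfig above, and the call returns a 403 Forbidden unless the viewer ServiceAccount has been granted the corresponding permissions.

# Request a short-lived token for the viewer ServiceAccount
TOKEN=$(kubectl create token viewer)

# Call the API Server directly, authenticating as the ServiceAccount
# (-k skips TLS verification for brevity; use the cluster CA in real life)
curl -sk https://10.55.133.216:6443/api/v1/namespaces/default/pods \
  -H "Authorization: Bearer $TOKEN"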
In the previous section, we covered the authentication mechanisms that allow the API Server to verify a user’s or an application’s identity. Now, we’ll look at the resources used to grant permissions.
A Role resource defines permissions in a Namespace. A RoleBinding associates this Role with an identity which can be a user, a group, or a ServiceAccount.
The following example defines a Role that grants read-only access to Pods in the development Namespace. The RoleBinding associates this Role with a user named bob: a user presenting a certificate whose subject CN is bob will be granted permission to get and list Pods in the development Namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: development
  name: dev-pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-pod-reader
  namespace: development
subjects:
- kind: User
  name: bob
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-pod-reader
  apiGroup: rbac.authorization.k8s.io
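We can check the effect of this binding with kubectl auth can-i, impersonating bob (assuming the current context is allowed to impersonate users):

# Expected to return "yes": the Role grants get/list on Pods in development
kubectl auth can-i list pods --namespace development --as bob

# Expected to return "no": the Role does not grant delete
kubectl auth can-i delete pods --namespace development --as bob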
The ClusterRole and ClusterRoleBinding resources are similar to the Role and RoleBinding ones, except that they are global to the cluster instead of being limited to a Namespace.
The following specifications define a ClusterRole which grants read access to resources of type Secret in the entire cluster and a ClusterRoleBinding that associates this ClusterRole to the ServiceAccount named viewer. If a Pod uses the viewer ServiceAccount, then its containers will have the right to read the cluster’s Secrets.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: secret-reader
subjects:
- kind: ServiceAccount
  name: viewer
  namespace: default
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
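As before, kubectl auth can-i can confirm what the viewer ServiceAccount (located in the default Namespace, as seen in the token above) is now allowed to do:

# Expected to return "yes": the ClusterRoleBinding grants read access to Secrets cluster-wide
kubectl auth can-i get secrets --as system:serviceaccount:default:viewer --all-namespaces

# Expected to return "no": nothing grants write access to Secrets
kubectl auth can-i create secrets --as system:serviceaccount:default:viewer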
To use a ServiceAccount, we can define the serviceAccountName property in the Pod’s specification as follows.
apiVersion: v1
kind: Pod
metadata:
  name: monitoring
spec:
  containers:
  - image: lucj/mon:1.2
    name: mon
  serviceAccountName: viewer
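Once this Pod is running, the token mounted by Kubernetes can be seen at the well-known path inside the container:

# The ServiceAccount token is projected into every container of the Pod
kubectl exec monitoring -- cat /var/run/secrets/kubernetes.io/serviceaccount/token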
The SecurityContext property defines privileges and access controls at the Pod or container level.
When defined at the Pod level, we can use properties such as runAsUser, runAsGroup, fsGroup, supplementalGroups, and seccompProfile; they apply to all the containers of the Pod.
When defined at the container level, we can use properties such as runAsUser, runAsNonRoot, allowPrivilegeEscalation, readOnlyRootFilesystem, capabilities, and seccompProfile; they apply to that container only and override the Pod-level settings where both are set.
The following Pod specification defines a securityContext property both at the Pod and at the container level.
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
  - name: api
    image: registry.gitlab.com/web-hook/api:v1.0.39
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsUser: 10000
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
      capabilities:
        drop:
        - ALL
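To see these settings in action, we can inspect the identity the container actually runs with (the exact group list depends on the image, but the UID and GID come from the securityContext):

# Expected to show uid 10000 (the container-level runAsUser overrides the Pod-level value),
# gid 3000 (runAsGroup) and 2000 (fsGroup) among the supplementary groups
kubectl exec demo -c api -- id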
By default, Pods can communicate with each other, even when they are created in separate Namespaces. To control Pod-to-Pod communications, Kubernetes provides the NetworkPolicy resource. Based on Pod labels, it can restrict ingress and egress traffic for the selected Pods.
The example below defines a NetworkPolicy that allows the database Pod to receive traffic from the backend Pod only.
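A minimal version of such a policy could look like the following (the app: db and app: backend labels and the default Namespace are assumptions, not values taken from the cluster above):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-backend
  namespace: default
spec:
  # Select the database Pod(s)
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  ingress:
  # Only Pods labelled app=backend in the same Namespace can reach the database
  - from:
    - podSelector:
        matchLabels:
          app: backend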
The example below (from Kubernetes documentation) is more complex. It illustrates the full capabilities of NetworkPolicies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
It defines a NetworkPolicy for Pods with the label role: db in the default Namespace, managing both incoming and outgoing traffic for those Pods.
It authorizes incoming traffic from (logical OR):
- IP addresses in the 172.17.0.0/16 range, except those in 172.17.1.0/24
- Pods located in any Namespace labelled project: myproject
- Pods labelled role: frontend in the policy's own Namespace (default)
It also authorizes outgoing traffic to:
- IP addresses in the 10.0.0.0/24 range
Incoming traffic is restricted to TCP port 6379, and outgoing traffic to TCP port 5978.
You can now jump to the Exercises part to learn and practice the concepts above.