Hello, readers! In this article, we will be focusing on Kubernetes Affinity and Anti-Affinity in detail.
So, let us begin!! 🙂
Let us first understand Affinity and Anti-Affinity.
In the Kubernetes world, we may come across scenarios where we need to assign a Pod to a specific Kubernetes resource (a Node/VM). Say we want all the pods of a web application to be scheduled onto a node that has a persistent disk attached to it.
For this, in our previous article, we already looked at the nodeSelector method. With nodeSelector, we apply labels to nodes and the Pod specification lists the exact labels a node must carry. This turns out to be a limitation: it supports only exact label matches and hard requirements, and it can only constrain where a pod goes based on node labels, not based on other Pods.
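As a quick refresher, a minimal nodeSelector sketch might look like the following (the disktype label and its value are illustrative placeholders, not from a real cluster):

```yaml
# Pod that can only be scheduled onto nodes carrying disktype=ssd.
# The label key/value here are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    disktype: ssd        # the node must carry exactly this label
  containers:
  - name: web
    image: nginx
```

For this to schedule, a node would first need the label, e.g. `kubectl label nodes <node-name> disktype=ssd`. If no node carries the exact label, the pod stays Pending.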
This is where Affinity and Anti-Affinity come into the picture.
With affinity and anti-affinity, we get the below advantages that the nodeSelector label method does not provide-
- We can write more expressive matching rules using operators such as In, NotIn, Exists, DoesNotExist, Gt, and Lt, rather than only exact matches; multiple expressions in one rule are ANDed together.
- Instead of implementing only hard rules, we can express soft/preference rules that allow a pod to be scheduled even when no node fully satisfies the preferred labels.
- Apart from nodes, we can also apply constraints at the Pod level within a topology domain (via topologyKey). Yes, affinity and anti-affinity enable us to match labels on Nodes as well as on other Pods.
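To give a sense of how expressive these rules can be, here is a hedged sketch of a nodeSelectorTerms fragment combining two operators (the label keys and values are made up for illustration):

```yaml
# Fragment of spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution
# All matchExpressions inside one nodeSelectorTerm must match (logical AND).
nodeSelectorTerms:
- matchExpressions:
  - key: disktype                              # illustrative label key
    operator: In                               # value must be one of the listed values
    values: ["ssd", "nvme"]
  - key: node-role.kubernetes.io/control-plane
    operator: DoesNotExist                     # the key must be absent on the node
```

Listing several entries under nodeSelectorTerms (instead of several matchExpressions in one entry) would OR the terms: a node satisfying any one term qualifies.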
In the context of this topic, we will have a look at the below methods of applying these rules-
- Node affinity
- Pod affinity and anti-affinity
1. Node Affinity
Node Affinity resembles the nodeSelector label in that it applies rules at the Node level. Plus, it comes with the below advantages that the nodeSelector method does not offer-
- It offers more flexible matching rules than exact matches, using operators such as In, NotIn, and Exists.
- Instead of applying only a hard rule, node affinity provides us with soft/preference rules that allow the pod to be scheduled even when no node fully matches the preference.
Let us have a look at the below Pod specification that demonstrates node affinity-
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/demo-key
            operator: In
            values:
            - b*123
            - d*123
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: label-key
            operator: In
            values:
            - 123@*abc
  containers:
  - name: demo-init
    image: nginx
```
- The above YAML’s first portion includes the requiredDuringSchedulingIgnoredDuringExecution nodeAffinity rule, which is a hard rule i.e. a node must satisfy it for the pod to be scheduled there. In the above example, it says that the pod can only be bound to a node carrying the key kubernetes.io/demo-key with the value b*123 or d*123, expressed through the In operator constraint.
- On the other hand, the last portion includes the preferredDuringSchedulingIgnoredDuringExecution nodeAffinity rule, which lists preferences that the scheduler tries to honour but does not guarantee. So, it is known as a soft or preference rule.
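Several preferred terms can also be ranked against each other with different weights; the scheduler adds up the weights of all terms a node satisfies and favours the node with the highest total. A hedged sketch, with illustrative label keys and values:

```yaml
# Fragment: two soft preferences with different weights.
# A node matching both scores 100; matching only the first scores 80.
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 80
  preference:
    matchExpressions:
    - key: disktype            # illustrative key
      operator: In
      values: ["ssd"]
- weight: 20
  preference:
    matchExpressions:
    - key: instance-tier       # illustrative key
      operator: In
      values: ["gold"]
```

Weights can range from 1 to 100 per term, so the relative sizes express how strongly each preference should count.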
2. Pod Affinity and Anti-Affinity
In Pod Affinity, the rules or preferences apply at the Pod level instead of the Node level. That is, a pod's eligibility for a node is decided by the labels of the pods already running on that node (or in its topology domain), rather than by the labels of the node itself.
Pod affinity attracts pods towards nodes that already run matching pods, while pod anti-affinity repels them i.e. it prevents a pod from being scheduled where matching pods are already running. Note that the IgnoredDuringExecution suffix means pods that are already running are not evicted if labels change later.
Let us have a look at the below YAML schema-
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: disk
            operator: In
            values:
            - P1
        topologyKey: topology.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: disk
              operator: In
              values:
              - P2
          topologyKey: topology.kubernetes.io/zone
  containers:
  - name: demo
    image: nginx
```
- At first, we have applied a podAffinity rule with requiredDuringSchedulingIgnoredDuringExecution (a hard rule).
- It says that the pod will be scheduled only onto a node in the same zone (topologyKey: topology.kubernetes.io/zone) as at least one running pod carrying the label disk=P1.
- In the next section, we have applied a podAntiAffinity rule with preferredDuringSchedulingIgnoredDuringExecution, which says that the pod should preferably not be scheduled onto a node that lies in the same zone as a pod carrying the label disk=P2.
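A common practical use of pod anti-affinity is spreading replicas of one Deployment across failure domains so they do not all land on the same node. A hedged sketch (the app label and names are illustrative; kubernetes.io/hostname is the standard per-node topology key):

```yaml
# Deployment whose replicas avoid sharing a node: each pod
# anti-affines with pods carrying its own app=web label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values: ["web"]
            topologyKey: kubernetes.io/hostname   # at most one replica per node
      containers:
      - name: web
        image: nginx
```

With the hard rule shown here, a cluster with fewer nodes than replicas would leave the extra replicas Pending; switching to preferredDuringSchedulingIgnoredDuringExecution would make the spread best-effort instead.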
By this, we have come to the end of this topic. Feel free to comment below in case you come across any questions. For more such posts related to Kubernetes, stay tuned with us!
Till then, Happy Learning! 🙂