Scheduling (Taints and Labels)
Motivation
In K8s, labels on nodes are used to influence pod scheduling (e.g. to confine pods to certain nodes).
Taints are used to prevent pods from being scheduled on nodes unless they explicitly tolerate the corresponding taint.
We want to be able to keep certain services we provide (e.g. Rook and Monitoring) off the worker nodes.
Concept
Kubernetes labels and taints are key-value pairs.
Per key and type (label/taint), there can be only one value on a node.
In addition to the key and value, taints also have an effect, which defines what the taint does. Typically, the effect is NoSchedule, which prevents pods from being scheduled on the node unless they tolerate that specific taint (or the NoSchedule effect in general).
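For illustration, a node that carries both a label and a matching NoSchedule taint could look as follows; the key example.com/storage, its value and the node name worker-0 are placeholders for this sketch, not keys used by YAOOK/k8s:

    apiVersion: v1
    kind: Node
    metadata:
      name: worker-0
      labels:
        # The label lets workloads select this node ...
        example.com/storage: "true"
    spec:
      taints:
        # ... while the taint keeps away all pods that do not
        # explicitly tolerate it.
        - key: example.com/storage
          value: "true"
          effect: NoSchedule

Note that the label by itself does not repel any pods; it merely allows workloads to select the node. Only the taint keeps non-tolerating pods off it.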
Assigning labels and taints
Note
Some node labels are managed by YAOOK/k8s. Expect your changes to such labels to be overwritten. Please refer to the respective documentation for details.
Labels and taints of a node are parsed, processed and assigned during an LCM rollout after the node has joined the cluster.
The LCM does not support removing labels/taints from nodes that have already joined the cluster. Changing node labels/taints can lead to disruption if the workload is not immediately reconfigured as well. A more detailed explanation can be found in the respective commit which reworked this behavior.
For details on how to configure labels and taints for nodes, please refer to Node-Scheduling: Labels and Taints Configuration.
Defining a common Scheduling-Key-Prefix
It is often desirable to use a common prefix for self-defined labels and taints for consistency. YAOOK/K8s allows you to define such a scheduling-key-prefix and then use it in the label and taint definitions.
Please refer to Node-Scheduling: Labels and Taints Configuration for details on how to label and taint nodes with a common scheduling-key-prefix.
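As an illustration of the idea, the same prefix is reused across all self-defined scheduling keys; the prefix scheduling.example.com and the role names storage and monitoring are made up for this sketch:

    # Hypothetical storage node: label and taint share one key,
    # built from the common prefix plus a role name.
    apiVersion: v1
    kind: Node
    metadata:
      name: storage-0
      labels:
        scheduling.example.com/storage: "true"
    spec:
      taints:
        - key: scheduling.example.com/storage
          value: "true"
          effect: NoSchedule
    ---
    # Hypothetical monitoring node: same prefix, different role.
    apiVersion: v1
    kind: Node
    metadata:
      name: monitoring-0
      labels:
        scheduling.example.com/monitoring: "true"
    spec:
      taints:
        - key: scheduling.example.com/monitoring
          value: "true"
          effect: NoSchedule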
Use scheduling keys for Services
Scheduling keys control where services may run. A scheduling key corresponds to both a node label and a taint. It is often desirable to configure a service such that its workload is spawned on specific nodes. In particular, it often makes sense to use dedicated monitoring and storage nodes, as sketched below.
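To confine a service to such nodes, its pods need both a node selector for the label and a toleration for the taint. A minimal sketch, again with a made-up scheduling key (Rook and the monitoring stack expose their own configuration options for this, see the references below):

    apiVersion: v1
    kind: Pod
    metadata:
      name: storage-daemon
    spec:
      # Only consider nodes that carry the scheduling key as a label ...
      nodeSelector:
        scheduling.example.com/storage: "true"
      # ... and tolerate the matching NoSchedule taint on those nodes.
      tolerations:
        - key: scheduling.example.com/storage
          operator: Equal
          value: "true"
          effect: NoSchedule
      containers:
        - name: main
          image: busybox
          command: ["sleep", "infinity"]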
For details on how to use scheduling keys for our supported storage solution Rook, please refer to the Rook Configuration.
For details on how to use scheduling keys for our supported monitoring solution, an extended Prometheus stack, please refer to the Prometheus-based Monitoring Configuration.