
@inductor
Created November 24, 2020 12:30
git diff upstream/release-1.17 upstream/release-1.18 <file>
diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md
index 97188ee9a..b1e1f0dc2 100644
--- a/content/en/docs/concepts/architecture/nodes.md
+++ b/content/en/docs/concepts/architecture/nodes.md
@@ -3,38 +3,136 @@ reviewers:
- caesarxuchao
- dchen1107
title: Nodes
-content_template: templates/concept
+content_type: concept
weight: 10
---
-{{% capture overview %}}
+<!-- overview -->
-A node is a worker machine in Kubernetes, previously known as a `minion`. A node
-may be a VM or physical machine, depending on the cluster. Each node contains
-the services necessary to run [pods](/docs/concepts/workloads/pods/pod/) and is managed by the master
-components. The services on a node include the [container runtime](/docs/concepts/overview/components/#container-runtime), kubelet and kube-proxy. See
-[The Kubernetes Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) section in the
-architecture design doc for more details.
+Kubernetes runs your workload by placing containers into Pods to run on _Nodes_.
+A node may be a virtual or physical machine, depending on the cluster. Each node
+contains the services necessary to run
+{{< glossary_tooltip text="Pods" term_id="pod" >}}, managed by the
+{{< glossary_tooltip text="control plane" term_id="control-plane" >}}.
-{{% /capture %}}
+Typically you have several nodes in a cluster; in a learning or resource-limited
+environment, you might have just one.
+The [components](/docs/concepts/overview/components/#node-components) on a node include the
+{{< glossary_tooltip text="kubelet" term_id="kubelet" >}}, a
+{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}, and the
+{{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}.
-{{% capture body %}}
+<!-- body -->
-## Node Status
+## Management
+
+There are two main ways to have Nodes added to the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}:
+
+1. The kubelet on a node self-registers to the control plane
+2. You, or another human user, manually add a Node object
+
+After you create a Node object, or the kubelet on a node self-registers, the
+control plane checks whether the new Node object is valid. For example, if you
+try to create a Node from the following JSON manifest:
+
+```json
+{
+ "kind": "Node",
+ "apiVersion": "v1",
+ "metadata": {
+ "name": "10.240.79.157",
+ "labels": {
+ "name": "my-first-k8s-node"
+ }
+ }
+}
+```
+
+Kubernetes creates a Node object internally (the representation). Kubernetes checks
+that a kubelet has registered to the API server that matches the `metadata.name`
+field of the Node. If the node is healthy (if all necessary services are running),
+it is eligible to run a Pod. Otherwise, that node is ignored for any cluster activity
+until it becomes healthy.
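+
+For example, assuming the manifest above is saved as `node.json` (a hypothetical
+filename), you could create the object and then check whether a kubelet registers
+and the node becomes `Ready`:

```shell
# Create the Node object from the manifest (filename is illustrative)
kubectl apply -f node.json

# List nodes and their status; a kubelet matching metadata.name must
# register before the node becomes Ready
kubectl get nodes
```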
+
+{{< note >}}
+Kubernetes keeps the object for the invalid Node and continues checking to see whether
+it becomes healthy.
+
+You, or a {{< glossary_tooltip term_id="controller" text="controller">}}, must explicitly
+delete the Node object to stop that health checking.
+{{< /note >}}
+
+The name of a Node object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
+
+### Self-registration of Nodes
+
+When the kubelet flag `--register-node` is true (the default), the kubelet will attempt to
+register itself with the API server. This is the preferred pattern, used by most distros.
-A node's status contains the following information:
+For self-registration, the kubelet is started with the following options:
+
+ - `--kubeconfig` - Path to credentials to authenticate itself to the API server.
+ - `--cloud-provider` - How to talk to a {{< glossary_tooltip text="cloud provider" term_id="cloud-provider" >}} to read metadata about itself.
+ - `--register-node` - Automatically register with the API server.
+ - `--register-with-taints` - Register the node with the given list of {{< glossary_tooltip text="taints" term_id="taint" >}} (comma separated `<key>=<value>:<effect>`).
+
+ No-op if `register-node` is false.
+ - `--node-ip` - IP address of the node.
+ - `--node-labels` - {{< glossary_tooltip text="Labels" term_id="label" >}} to add when registering the node in the cluster (see label restrictions enforced by the [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)).
+ - `--node-status-update-frequency` - Specifies how often kubelet posts node status to master.
+
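As an illustrative sketch only (the paths and values below are assumptions, not
defaults), a self-registering kubelet might combine these flags like this:

```shell
# Hypothetical kubelet invocation using the self-registration flags above
kubelet \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --register-node=true \
  --register-with-taints=dedicated=experimental:NoSchedule \
  --node-ip=10.240.79.157 \
  --node-labels=name=my-first-k8s-node \
  --node-status-update-frequency=10s
```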
+When the [Node authorization mode](/docs/reference/access-authn-authz/node/) and
+[NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) are enabled,
+kubelets are only authorized to create/modify their own Node resource.
+
+### Manual Node administration
+
+You can create and modify Node objects using
+{{< glossary_tooltip text="kubectl" term_id="kubectl" >}}.
+
+When you want to create Node objects manually, set the kubelet flag `--register-node=false`.
+
+You can modify Node objects regardless of the setting of `--register-node`.
+For example, you can set labels on an existing Node, or mark it unschedulable.
+
+You can use labels on Nodes in conjunction with node selectors on Pods to control
+scheduling. For example, you can constrain a Pod to only be eligible to run on
+a subset of the available nodes.
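+
+For example, you might label a node (the label key and value here are
+illustrative) and then set a matching `nodeSelector` in a Pod spec:

```shell
# Add a label to an existing Node; Pods with nodeSelector disktype=ssd
# are then only eligible to run on nodes carrying this label
kubectl label nodes $NODENAME disktype=ssd
```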
+
+Marking a node as unschedulable prevents the scheduler from placing new pods onto
+that Node, but does not affect existing Pods on the Node. This is useful as a
+preparatory step before a node reboot or other maintenance.
+
+To mark a Node unschedulable, run:
+
+```shell
+kubectl cordon $NODENAME
+```
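+
+To make the Node schedulable again after maintenance, run:

```shell
kubectl uncordon $NODENAME
```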
+
+{{< note >}}
+Pods that are part of a {{< glossary_tooltip term_id="daemonset" >}} tolerate
+being run on an unschedulable Node. DaemonSets typically provide node-local services
+that should run on the Node even if it is being drained of workload applications.
+{{< /note >}}
+
+## Node status
+
+A Node's status contains the following information:
* [Addresses](#addresses)
* [Conditions](#condition)
* [Capacity and Allocatable](#capacity)
* [Info](#info)
-Node status and other details about a node can be displayed using the following command:
+You can use `kubectl` to view a Node's status and other details:
+
```shell
kubectl describe node <insert-node-name-here>
```
-Each section is described in detail below.
+
+Each section of the output is described below.
### Addresses
@@ -49,15 +147,23 @@ The usage of these fields varies depending on your cloud provider or bare metal
The `conditions` field describes the status of all `Running` nodes. Examples of conditions include:
-| Node Condition | Description |
-|----------------|-------------|
-| `Ready` | `True` if the node is healthy and ready to accept pods, `False` if the node is not healthy and is not accepting pods, and `Unknown` if the node controller has not heard from the node in the last `node-monitor-grace-period` (default is 40 seconds) |
-| `MemoryPressure` | `True` if pressure exists on the node memory -- that is, if the node memory is low; otherwise `False` |
-| `PIDPressure` | `True` if pressure exists on the processes -- that is, if there are too many processes on the node; otherwise `False` |
-| `DiskPressure` | `True` if pressure exists on the disk size -- that is, if the disk capacity is low; otherwise `False` |
-| `NetworkUnavailable` | `True` if the network for the node is not correctly configured, otherwise `False` |
+{{< table caption = "Node conditions, and a description of when each condition applies." >}}
+| Node Condition | Description |
+|----------------------|-------------|
+| `Ready` | `True` if the node is healthy and ready to accept pods, `False` if the node is not healthy and is not accepting pods, and `Unknown` if the node controller has not heard from the node in the last `node-monitor-grace-period` (default is 40 seconds) |
+| `DiskPressure` | `True` if pressure exists on the disk size--that is, if the disk capacity is low; otherwise `False` |
+| `MemoryPressure` | `True` if pressure exists on the node memory--that is, if the node memory is low; otherwise `False` |
+| `PIDPressure`        | `True` if pressure exists on the processes--that is, if there are too many processes on the node; otherwise `False` |
+| `NetworkUnavailable` | `True` if the network for the node is not correctly configured, otherwise `False` |
+{{< /table >}}
+
+{{< note >}}
+If you use command-line tools to print details of a cordoned Node, the Condition includes
+`SchedulingDisabled`. `SchedulingDisabled` is not a Condition in the Kubernetes API; instead,
+cordoned nodes are marked Unschedulable in their spec.
+{{< /note >}}
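+
+For example, you can check the `spec` of a cordoned Node directly:

```shell
# Prints "true" for a cordoned node
kubectl get node $NODENAME -o jsonpath='{.spec.unschedulable}'
```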
-The node condition is represented as a JSON object. For example, the following response describes a healthy node.
+The node condition is represented as a JSON object. For example, the following structure describes a healthy node:
```json
"conditions": [
@@ -72,19 +178,23 @@ The node condition is represented as a JSON object. For example, the following r
]
```
-If the Status of the Ready condition remains `Unknown` or `False` for longer than the `pod-eviction-timeout`, an argument is passed to the [kube-controller-manager](/docs/admin/kube-controller-manager/) and all the Pods on the node are scheduled for deletion by the Node Controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the apiserver is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the apiserver is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node.
+If the Status of the Ready condition remains `Unknown` or `False` for longer than the `pod-eviction-timeout` (an argument passed to the {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}), all the Pods on the node are scheduled for deletion by the node controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the API server is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the API server is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node.
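+
+As a sketch, the timeout is an argument to the controller manager; the value
+shown below is the documented default:

```shell
# Pods on a node whose Ready condition stays Unknown or False longer
# than this are scheduled for deletion
kube-controller-manager --pod-eviction-timeout=5m0s
```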
-In versions of Kubernetes prior to 1.5, the node controller would [force delete](/docs/concepts/workloads/pods/pod/#force-deletion-of-pods)
-these unreachable pods from the apiserver. However, in 1.5 and higher, the node controller does not force delete pods until it is
-confirmed that they have stopped running in the cluster. You can see the pods that might be running on an unreachable node as being in
-the `Terminating` or `Unknown` state. In cases where Kubernetes cannot deduce from the underlying infrastructure if a node has
-permanently left a cluster, the cluster administrator may need to delete the node object by hand. Deleting the node object from
-Kubernetes causes all the Pod objects running on the node to be deleted from the apiserver, and frees up their names.
+The node controller does not force delete pods until it is confirmed that they have stopped
+running in the cluster. You can see the pods that might be running on an unreachable node as
+being in the `Terminating` or `Unknown` state. In cases where Kubernetes cannot deduce from the
+underlying infrastructure if a node has permanently left a cluster, the cluster administrator
+may need to delete the node object by hand. Deleting the node object from Kubernetes causes
+all the Pod objects running on the node to be deleted from the API server, and frees up their
+names.
The node lifecycle controller automatically creates
-[taints](/docs/concepts/configuration/taint-and-toleration/) that represent conditions.
-When the scheduler is assigning a Pod to a Node, the scheduler takes the Node's taints
-into account, except for any taints that the Pod tolerates.
+[taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) that represent conditions.
+The scheduler takes the Node's taints into consideration when assigning a Pod to a Node.
+Pods can also have tolerations which let them tolerate a Node's taints.
+
+See [Taint Nodes by Condition](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition)
+for more details.
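+
+To inspect the taints currently set on a Node, whether added by the node
+lifecycle controller or by hand, you can query the Node's spec:

```shell
# Lists any taints on the node; empty output means no taints
kubectl get node $NODENAME -o jsonpath='{.spec.taints}'
```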
### Capacity and Allocatable {#capacity}
@@ -104,48 +214,10 @@ on a Node.
Describes general information about the node, such as kernel version, Kubernetes version (kubelet and kube-proxy version), Docker version (if used), and OS name.
This information is gathered by Kubelet from the node.
-## Management
+### Node controller
-Unlike [pods](/docs/concepts/workloads/pods/pod/) and [services](/docs/concepts/services-networking/service/),
-a node is not inherently created by Kubernetes: it is created externally by cloud
-providers like Google Compute Engine, or it exists in your pool of physical or virtual
-machines. So when Kubernetes creates a node, it creates
-an object that represents the node. After creation, Kubernetes
-checks whether the node is valid or not. For example, if you try to create
-a node from the following content:
-
-```json
-{
- "kind": "Node",
- "apiVersion": "v1",
- "metadata": {
- "name": "10.240.79.157",
- "labels": {
- "name": "my-first-k8s-node"
- }
- }
-}
-```
-
-Kubernetes creates a node object internally (the representation), and
-validates the node by health checking based on the `metadata.name` field. If the node is valid -- that is, if all necessary
-services are running -- it is eligible to run a pod. Otherwise, it is
-ignored for any cluster activity until it becomes valid.
-The name of a Node object must be a valid
-[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
-
-{{< note >}}
-Kubernetes keeps the object for the invalid node and keeps checking to see whether it becomes valid.
-You must explicitly delete the Node object to stop this process.
-{{< /note >}}
-
-Currently, there are three components that interact with the Kubernetes node
-interface: node controller, kubelet, and kubectl.
-
-### Node Controller
-
-The node controller is a Kubernetes master component which manages various
-aspects of nodes.
+The node {{< glossary_tooltip text="controller" term_id="controller" >}} is a
+Kubernetes control plane component that manages various aspects of nodes.
The node controller has multiple roles in a node's life. The first is assigning a
CIDR block to the node when it is registered (if CIDR assignment is turned on).
@@ -168,6 +240,7 @@ checks the state of each node every `--node-monitor-period` seconds.
#### Heartbeats
Heartbeats, sent by Kubernetes nodes, help determine the availability of a node.
+
There are two forms of heartbeats: updates of `NodeStatus` and the
[Lease object](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/#lease-v1-coordination-k8s-io).
Each Node has an associated Lease object in the `kube-node-lease`
@@ -188,13 +261,7 @@ a Lease object.
#### Reliability
-In Kubernetes 1.4, we updated the logic of the node controller to better handle
-cases when a large number of nodes have problems with reaching the master
-(e.g. because the master has networking problems). Starting with 1.4, the node
-controller looks at the state of all nodes in the cluster when making a
-decision about pod eviction.
-
-In most cases, node controller limits the eviction rate to
+In most cases, the node controller limits the eviction rate to
`--node-eviction-rate` (default 0.1) per second, meaning it won't evict pods
from more than 1 node per 10 seconds.
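
A sketch of the related kube-controller-manager flags, with their documented
defaults:

```shell
# Eviction rate limiting is configured on the kube-controller-manager
kube-controller-manager \
  --node-eviction-rate=0.1 \
  --secondary-node-eviction-rate=0.01 \
  --large-cluster-size-threshold=50 \
  --unhealthy-zone-threshold=0.55
```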
@@ -220,62 +287,12 @@ completely unhealthy (i.e. there are no healthy nodes in the cluster). In such a
case, the node controller assumes that there's some problem with master
connectivity and stops all evictions until some connectivity is restored.
-Starting in Kubernetes 1.6, the NodeController is also responsible for evicting
-pods that are running on nodes with `NoExecute` taints, when the pods do not tolerate
-the taints. Additionally, as an alpha feature that is disabled by default, the
-NodeController is responsible for adding taints corresponding to node problems like
-node unreachable or not ready. See [this documentation](/docs/concepts/configuration/taint-and-toleration/)
-for details about `NoExecute` taints and the alpha feature.
-
-Starting in version 1.8, the node controller can be made responsible for creating taints that represent
-Node conditions. This is an alpha feature of version 1.8.
+The node controller is also responsible for evicting pods running on nodes with
+`NoExecute` taints, unless those pods tolerate that taint.
+The node controller also adds {{< glossary_tooltip text="taints" term_id="taint" >}}
+corresponding to node problems like node unreachable or not ready. This means
+that the scheduler won't place Pods onto unhealthy nodes.
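+
+For comparison, you can also place a `NoExecute` taint on a node by hand;
+Pods that do not tolerate it are then evicted:

```shell
# key1=value1 is an illustrative taint; pods without a matching
# toleration are evicted from the node
kubectl taint nodes $NODENAME key1=value1:NoExecute
```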
-### Self-Registration of Nodes
-
-When the kubelet flag `--register-node` is true (the default), the kubelet will attempt to
-register itself with the API server. This is the preferred pattern, used by most distros.
-
-For self-registration, the kubelet is started with the following options:
-
- - `--kubeconfig` - Path to credentials to authenticate itself to the apiserver.
- - `--cloud-provider` - How to talk to a cloud provider to read metadata about itself.
- - `--register-node` - Automatically register with the API server.
- - `--register-with-taints` - Register the node with the given list of taints (comma separated `<key>=<value>:<effect>`). No-op if `register-node` is false.
- - `--node-ip` - IP address of the node.
- - `--node-labels` - Labels to add when registering the node in the cluster (see label restrictions enforced by the [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) in 1.13+).
- - `--node-status-update-frequency` - Specifies how often kubelet posts node status to master.
-
-When the [Node authorization mode](/docs/reference/access-authn-authz/node/) and
-[NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) are enabled,
-kubelets are only authorized to create/modify their own Node resource.
-
-#### Manual Node Administration
-
-A cluster administrator can create and modify node objects.
-
-If the administrator wishes to create node objects manually, set the kubelet flag
-`--register-node=false`.
-
-The administrator can modify node resources (regardless of the setting of `--register-node`).
-Modifications include setting labels on the node and marking it unschedulable.
-
-Labels on nodes can be used in conjunction with node selectors on pods to control scheduling,
-e.g. to constrain a pod to only be eligible to run on a subset of the nodes.
-
-Marking a node as unschedulable prevents new pods from being scheduled to that
-node, but does not affect any existing pods on the node. This is useful as a
-preparatory step before a node reboot, etc. For example, to mark a node
-unschedulable, run this command:
-
-```shell
-kubectl cordon $NODENAME
-```
-
-{{< note >}}
-Pods created by a DaemonSet controller bypass the Kubernetes scheduler
-and do not respect the unschedulable attribute on a node. This assumes that daemons belong on
-the machine even if it is being drained of applications while it prepares for a reboot.
-{{< /note >}}
{{< caution >}}
`kubectl cordon` marks a node as 'unschedulable', which has the side effect of the service
@@ -285,34 +302,41 @@ eligible for, effectively removing incoming load balancer traffic from the cordo
### Node capacity
-The capacity of the node (number of cpus and amount of memory) is part of the node object.
-Normally, nodes register themselves and report their capacity when creating the node object. If
-you are doing [manual node administration](#manual-node-administration), then you need to set node
-capacity when adding a node.
+Node objects track information about the Node's resource capacity (for example: the amount
+of memory available, and the number of CPUs).
+Nodes that [self register](#self-registration-of-nodes) report their capacity during
+registration. If you [manually](#manual-node-administration) add a Node, then
+you need to set the node's capacity information when you add it.
-The Kubernetes scheduler ensures that there are enough resources for all the pods on a node. It
-checks that the sum of the requests of containers on the node is no greater than the node capacity. It
-includes all containers started by the kubelet, but not containers started directly by the [container runtime](/docs/concepts/overview/components/#container-runtime) nor any process running outside of the containers.
+The Kubernetes {{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} ensures that
+there are enough resources for all the Pods on a Node. The scheduler checks that the sum
+of the requests of containers on the node is no greater than the node's capacity.
+That sum of requests includes all containers managed by the kubelet, but excludes any
+containers started directly by the container runtime, and also excludes any
+processes running outside of the kubelet's control.
-If you want to explicitly reserve resources for non-Pod processes, follow this tutorial to
+{{< note >}}
+If you want to explicitly reserve resources for non-Pod processes, see
[reserve resources for system daemons](/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved).
+{{< /note >}}
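+
+A sketch of the kubelet flags involved; the resource amounts are illustrative,
+not recommendations:

```shell
# Reserve compute resources for OS system daemons and Kubernetes daemons
kubelet \
  --system-reserved=cpu=500m,memory=1Gi \
  --kube-reserved=cpu=500m,memory=1Gi
```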
## Node topology
-{{< feature-state state="alpha" >}}
+{{< feature-state state="alpha" for_k8s_version="v1.16" >}}
If you have enabled the `TopologyManager`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/), then
the kubelet can use topology hints when making resource assignment decisions.
+See [Control Topology Management Policies on a Node](/docs/tasks/administer-cluster/topology-manager/)
+for more information.
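+
+As a sketch, enabling this on a node means starting the kubelet with the
+feature gate and choosing a policy (the policy shown is one of the documented
+options):

```shell
# TopologyManager is alpha; best-effort is one of its policies
kubelet \
  --feature-gates=TopologyManager=true \
  --topology-manager-policy=best-effort
```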
+
-## API Object
+## {{% heading "whatsnext" %}}
-Node is a top-level resource in the Kubernetes REST API. More details about the
-API object can be found at:
-[Node API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core).
+* Learn about the [components](/docs/concepts/overview/components/#node-components) that make up a node.
+* Read the [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core).
+* Read the [Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)
+ section of the architecture design document.
+* Read about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
+* Read about [cluster autoscaling](/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling).
-{{% /capture %}}
-{{% capture whatsnext %}}
-* Read about [node components](/docs/concepts/overview/components/#node-components)
-* Read about node-level topology: [Control Topology Management Policies on a node](/docs/tasks/administer-cluster/topology-manager/)
-{{% /capture %}}