
Nodes


A node is a worker machine in Kubernetes; it may be a VM or a physical machine, depending on the cluster. Each node contains the services necessary to run pods and is managed by the master components. Node management in KubeSphere is designed to meet an enterprise's requirements for cluster operation and maintenance. It supports real-time monitoring of CPU, memory, storage and Pod usage, of node status, and of the running status of the Pods on any node. In addition, it provides rich monitoring metrics such as CPU and memory utilization, CPU load average, IOPS, disk throughput and utilization, and network bandwidth.

Node Management

First, sign in with the cluster admin account and select Platform → Infrastructure to enter the node management page. As a cluster-admin, you can view all nodes and their monitoring details.

Node Management

View the Node Details

Click a node in the list to enter its detail page, where you can see the node's resources and status, Pod status, annotations, monitoring graphs and events.

Node Status

KubeSphere reports the following five node conditions. The cluster administrator can use them to determine the load and capacity of the current node and to manage host resources more effectively; a sketch of how these conditions appear on the Node object follows the list.

  • OutOfDisk: If there is insufficient free space on the node for adding new pods.
  • MemoryPressure: If pressure exists on the node memory – that is, if the node memory is low.
  • DiskPressure: If pressure exists on the disk size – that is, if the disk capacity is low.
  • PIDPressure: If pressure exists on the processes – that is, if there are too many processes on the node.
  • Ready: If the node is healthy and ready to accept pods.
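
These conditions are also reported on the Node object itself. Below is a trimmed, illustrative snippet of what they might look like; the status, reason and message values are examples of the standard kubelet-reported values, not output from a real cluster.

# excerpt of a Node object's status.conditions (illustrative values)
status:
  conditions:
  - type: MemoryPressure
    status: "False"
    reason: KubeletHasSufficientMemory
    message: kubelet has sufficient memory available
  - type: DiskPressure
    status: "False"
    reason: KubeletHasNoDiskPressure
    message: kubelet has no disk pressure
  - type: PIDPressure
    status: "False"
    reason: KubeletHasSufficientPID
    message: kubelet has sufficient PID available
  - type: Ready
    status: "True"
    reason: KubeletReady
    message: kubelet is posting ready status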

Node Status

View the Monitoring Graph

It is worth mentioning that node management supports fine-grained resource monitoring: you can filter the monitoring data by a specified time range to view changes, and you can also watch the monitoring metrics dynamically.

Taints Management

Node affinity is a property of pods that attracts them to a set of nodes. Taints are the opposite – they allow a node to repel a set of pods. Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints.

For example, if the memory usage of a node reaches 90% as shown below, it is not recommended to schedule new pods onto this node, so you can add a taint to it.

Taint Management

  1. Click Taint Management to open the taint management pop-up window.

  2. Add a taint row with key1:value1 and the effect NoSchedule. Then no pod will be able to schedule onto this node unless it has a matching toleration such as the following:

tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"

Taint list

Generally, this creates a taint that marks the node as unschedulable by any pods that do not have a toleration for the taint with key key1, value value1, and effect NoSchedule. (The other taint effects are PreferNoSchedule, which is the preferred version of NoSchedule, and NoExecute, which means any pods that are running on the node when the taint is applied will be evicted unless they tolerate the taint.)
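
On the Node object itself, a taint like this is recorded under spec.taints. A minimal sketch, reusing the key1:value1 NoSchedule taint from the step above:

# excerpt of a Node object with one taint applied
spec:
  taints:
  - key: "key1"
    value: "value1"
    effect: "NoSchedule"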

There are three kinds of effects:

  • NoSchedule: No pod will be able to schedule onto the node unless it has a matching toleration. Pods that are already running on the node when the taint is added will be able to continue running.
  • PreferNoSchedule: This is a “preference” or “soft” version of NoSchedule – the system will try to avoid placing a pod that does not tolerate the taint on the node, but it is not required.
  • NoExecute: The pod will be evicted from the node (if it is already running on the node), and will not be scheduled onto the node (if it is not yet running on the node). Only pods that tolerate the taint will not be evicted.

For example, imagine you taint a node as in the following scenario:

Example

And a pod has two tolerations:

tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"
- key: "key2"
  operator: "Equal"
  value: "value2"
  effect: "NoExecute"

In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. But it will be able to continue running if it is already running on the node when the taint is added, because the third taint is the only one of the three that is not tolerated by the pod.
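
For reference, here is a minimal, hypothetical Pod manifest carrying the two tolerations from the example above (the Pod name and container image are placeholders, not taken from this guide):

# hypothetical Pod that tolerates key1:NoSchedule and key2:NoExecute
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo
spec:
  containers:
  - name: demo
    image: nginx:stable
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
  - key: "key2"
    operator: "Equal"
    value: "value2"
    effect: "NoExecute"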

Cordon or Uncordon

Clicking the Cordon button marks a node as unschedulable, which prevents new pods from being scheduled to that node but does not affect any existing pods on it. This is useful as a preparatory step before a node reboot, etc.

Cordon or Uncordon

Then you will see that the status of this node has changed to Unschedulable. Click the button again if you need to uncordon the node.

Unschedulable
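
Under the hood, cordoning simply marks the Node spec as unschedulable; a minimal sketch of the relevant field (uncordoning sets it back to false):

# excerpt of a cordoned Node object
spec:
  unschedulable: true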

Edit Label

Labels on nodes can be used in conjunction with node selectors on pods to control scheduling, e.g. to constrain a pod to only be eligible to run on a subset of the nodes.

For example, if we add the label role=ssd_node to node1 and set the nodeSelector role: ssd_node on a Pod at the same time, then the Pod will only be eligible to run on node1, as in the sketch below.
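
A minimal, hypothetical Pod manifest using that node selector (the Pod name and container image are placeholders; the role=ssd_node label is assumed to have been added to node1 as described above):

# hypothetical Pod constrained to nodes labeled role=ssd_node
apiVersion: v1
kind: Pod
metadata:
  name: ssd-app
spec:
  containers:
  - name: app
    image: nginx:stable
  nodeSelector:
    role: ssd_node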

If you need to edit the labels of a node, you can click More → Edit Label to update them.

Edit Label

Modify the key-value labels in the pop-up window.

Edit Label list
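
After saving, the new key-value pairs appear under metadata.labels on the Node object, for example (role=ssd_node is the label from the example above; the hostname label is one that Kubernetes sets automatically):

# excerpt of a Node object's labels after editing
metadata:
  labels:
    kubernetes.io/hostname: node1
    role: ssd_node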