
StorageClass Configuration

Currently, the Installer supports the following storage classes, which provide persistent storage services for KubeSphere (more storage classes will be supported soon).

  • NFS
  • Ceph RBD
  • GlusterFS
  • QingCloud Block Storage
  • QingStor NeonSAN
  • Local Volume (for development and test only)

The storage system and CSI plugin versions listed in the table below have been well tested.

| Name | Version | Reference |
| --- | --- | --- |
| Ceph RBD Server | v0.94.10 | For development and testing, refer to Install Ceph Storage Server for details. For production, please refer to the Ceph Documentation. |
| Ceph RBD Client | v12.2.5 | Before installing KubeSphere, you need to configure the corresponding parameters in common.yaml. Please refer to Ceph RBD. |
| GlusterFS Server | v3.7.6 | For development and testing, refer to Deploying GlusterFS Storage Server for details. For production, please refer to the Gluster Documentation. Note that you need to install Heketi Manager (v3.0.0). |
| GlusterFS Client | v3.12.10 | Before installing KubeSphere, you need to configure the corresponding parameters in common.yaml. Please refer to GlusterFS. |
| NFS Client | v3.1.0 | Before installing KubeSphere, you need to configure the corresponding parameters in common.yaml. Make sure you have prepared an NFS storage server. Please see NFS Client. |
| QingCloud-CSI | v0.2.0.1 | Before installing KubeSphere, you need to configure the corresponding parameters in common.yaml. Please refer to QingCloud-CSI for details. |
| NeonSAN-CSI | v0.3.0 | Before installing KubeSphere, you need to configure the corresponding parameters in common.yaml. Make sure you have prepared a QingStor NeonSAN storage server. Please see NeonSAN-CSI. |

Note: Only ONE default storage class is allowed in the cluster. Before specifying a default storage class, make sure no default storage class already exists in the cluster.
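
For example, you can check for an existing default storage class and unset it with the standard Kubernetes commands below (the storage class name "local" is only an illustration):

kubectl get sc
# Unset an existing default storage class before designating a new one
kubectl patch storageclass local -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'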

Storage Configuration

After preparing the storage server, refer to the parameter descriptions in the following tables, then modify the corresponding configuration in conf/common.yaml accordingly. The sections below describe the storage configuration in common.yaml.

Note: Local Volume is configured as the default storage class in common.yaml by default. If you are going to set another storage class as the default, disable Local Volume and modify the configuration for that storage class.

Local Volume (for development and testing only)

A local volume represents a mounted local storage device such as a disk, partition, or directory. Local volumes can only be used as statically created PersistentVolumes. We recommend using Local Volume for development and testing only, since it makes installing KubeSphere quick and easy without the effort of setting up a persistent storage server. Refer to the following table for the definitions in conf/common.yaml.

| Local Volume | Description |
| --- | --- |
| local_volume_provisioner_enabled | Whether to use Local Volume as the persistent storage. Defaults to true |
| local_volume_provisioner_storage_class | Storage class name. Default value: local |
| local_volume_is_default_class | Whether to set Local Volume as the default storage class. Defaults to true |
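
For reference, a minimal sketch of the Local Volume block in conf/common.yaml, using the key names from the table above (values are illustrative):

# conf/common.yaml (illustrative values)
local_volume_provisioner_enabled: true
local_volume_provisioner_storage_class: local
local_volume_is_default_class: true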

NFS

An NFS volume allows an existing NFS (Network File System) share to be mounted into your Pod. NFS can be configured in conf/common.yaml. Note that you need to prepare the NFS server in advance.

| NFS | Description |
| --- | --- |
| nfs_client_enable | Whether to use NFS as the persistent storage. Defaults to false |
| nfs_client_is_default_class | Whether to set NFS as the default storage class. Defaults to false |
| nfs_server | The NFS server address, either IP or hostname |
| nfs_path | The NFS shared directory, i.e. the file directory shared on the server; see the Kubernetes Documentation |
| nfs_vers3_enabled | Specifies which version of the NFS protocol to use. Defaults to false, which means v4; true means v3 |
| nfs_archiveOnDelete | Whether to archive the PVC on deletion. When set to false, data is automatically removed from oldPath |
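
For reference, a sketch of the NFS block in conf/common.yaml, using the key names from the table above (the server address and path are placeholders for your own environment):

# conf/common.yaml (illustrative values)
nfs_client_enable: true
nfs_client_is_default_class: false
nfs_server: 192.168.0.27      # example NFS server IP or hostname
nfs_path: /mnt/kubesphere     # example shared directory on the server
nfs_vers3_enabled: false      # false means NFS v4
nfs_archiveOnDelete: true     # keep archived data on PVC deletion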

Ceph RBD

The open-source Ceph RBD distributed storage system can be configured for use in conf/common.yaml. You need to prepare a Ceph storage server in advance. Please refer to the Kubernetes Documentation for more details.

| Ceph RBD | Description |
| --- | --- |
| ceph_rbd_enabled | Whether to use Ceph RBD as the persistent storage. Defaults to false |
| ceph_rbd_storage_class | Storage class name |
| ceph_rbd_is_default_class | Whether to set Ceph RBD as the default storage class. Defaults to false |
| ceph_rbd_monitors | Ceph monitors, comma-delimited. This parameter is required and depends on the Ceph RBD server parameters |
| ceph_rbd_admin_id | Ceph client ID that is capable of creating images in the pool. Defaults to "admin" |
| ceph_rbd_admin_secret | The secret for adminId. This parameter is required. The provided secret must have type "kubernetes.io/rbd" |
| ceph_rbd_pool | The Ceph RBD pool. Defaults to "rbd" |
| ceph_rbd_user_id | Ceph client ID used to map the RBD image. Defaults to the same value as adminId |
| ceph_rbd_user_secret | The secret for userId. This secret must exist in any namespace that uses the RBD image |
| ceph_rbd_fsType | The fsType supported by Kubernetes. Defaults to "ext4" |
| ceph_rbd_imageFormat | Ceph RBD image format, "1" or "2". Defaults to "1" |
| ceph_rbd_imageFeatures | Optional; should only be used if imageFormat is set to "2". Currently, layering is the only supported feature. Defaults to "", and no features are turned on |
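
For reference, a sketch of the Ceph RBD block in conf/common.yaml, using the key names from the table above (the monitor address and secret values are placeholders; see the note below for retrieving the secret):

# conf/common.yaml (illustrative values)
ceph_rbd_enabled: true
ceph_rbd_storage_class: rbd
ceph_rbd_is_default_class: false
ceph_rbd_monitors: 192.168.0.31:6789             # example; comma-delimit multiple monitors
ceph_rbd_admin_id: admin
ceph_rbd_admin_secret: "<secret of client.admin>"   # see the note below
ceph_rbd_pool: rbd
ceph_rbd_user_id: admin
ceph_rbd_user_secret: "<secret of the user id>"     # see the note below
ceph_rbd_fsType: ext4
ceph_rbd_imageFormat: "2"
ceph_rbd_imageFeatures: layering                 # only valid when imageFormat is "2"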

Note:

The Ceph secrets used in the storage class, such as "ceph_rbd_admin_secret" and "ceph_rbd_user_secret", are retrieved by running the following command on the Ceph storage server.

ceph auth get-key client.admin
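
If you manage the secret yourself, the retrieved key is typically stored as a Kubernetes secret of type kubernetes.io/rbd, for example (the secret name and namespace below are illustrative):

kubectl create secret generic ceph-admin-secret --type="kubernetes.io/rbd" \
  --from-literal=key='<output of "ceph auth get-key client.admin">' \
  --namespace=kube-system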

GlusterFS

GlusterFS is a scalable network filesystem suitable for data-intensive tasks such as cloud storage and media streaming. You need to prepare the GlusterFS storage server in advance. Please refer to the Kubernetes Documentation for further information.

| GlusterFS (requires a GlusterFS cluster managed by Heketi) | Description |
| --- | --- |
| glusterfs_provisioner_enabled | Whether to use GlusterFS as the persistent storage. Defaults to false |
| glusterfs_provisioner_storage_class | Storage class name |
| glusterfs_is_default_class | Whether to set GlusterFS as the default storage class. Defaults to false |
| glusterfs_provisioner_restauthenabled | Boolean that enables authentication to the Gluster REST service |
| glusterfs_provisioner_resturl | The Gluster REST service/Heketi service URL that provisions Gluster volumes on demand. The general format is "IP address:port". This is a mandatory parameter for the GlusterFS dynamic provisioner |
| glusterfs_provisioner_clusterid | Optional. For example, 630372ccdc720a92c681fb928f27b53f is the ID of the cluster that Heketi will use when provisioning the volume. It can also be a list of cluster IDs |
| glusterfs_provisioner_restuser | The Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool |
| glusterfs_provisioner_secretName | Optional. Identifies the Secret instance that contains the user password to use when talking to the Gluster REST service. The Installer automatically creates this secret in kube-system |
| glusterfs_provisioner_gidMin | The minimum value of the GID range for the storage class |
| glusterfs_provisioner_gidMax | The maximum value of the GID range for the storage class |
| glusterfs_provisioner_volumetype | Optional. Configures the volume type and its parameters, for example, a replica volume: volumetype: replicate:3 |
| jwt_admin_key | The "jwt.admin.key" field from /etc/heketi/heketi.json on the Heketi server |
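
For reference, a sketch of the GlusterFS block in conf/common.yaml, using the key names from the table above (addresses, IDs, and credentials are placeholders):

# conf/common.yaml (illustrative values)
glusterfs_provisioner_enabled: true
glusterfs_provisioner_storage_class: glusterfs
glusterfs_is_default_class: false
glusterfs_provisioner_restauthenabled: true
glusterfs_provisioner_resturl: http://192.168.0.14:8080             # example Heketi URL
glusterfs_provisioner_clusterid: 630372ccdc720a92c681fb928f27b53f   # from "heketi-cli cluster list"
glusterfs_provisioner_restuser: admin
glusterfs_provisioner_secretName: heketi-secret                     # created automatically in kube-system
glusterfs_provisioner_gidMin: 40000
glusterfs_provisioner_gidMax: 50000
glusterfs_provisioner_volumetype: replicate:3
jwt_admin_key: "<jwt.admin.key from /etc/heketi/heketi.json>"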

Attention:

The value of "glusterfs_provisioner_clusterid" can be retrieved from the GlusterFS server by running the following commands:

export HEKETI_CLI_SERVER=http://localhost:8080
heketi-cli cluster list

QingCloud Block Storage

QingCloud Block Storage is supported in KubeSphere as a persistent storage service. If you would like dynamic provisioning when creating volumes, we recommend using it as your persistent storage solution. KubeSphere integrates QingCloud-CSI, which allows you to use the various block storage services of QingCloud. With simple configuration, you can quickly expand and clone PVCs, view the topology of PVCs, create and delete snapshots, and restore volumes from snapshots.

The QingCloud-CSI plugin implements the standard CSI, so you can easily create and manage the different types of volumes provided by QingCloud in KubeSphere. The corresponding PVCs will be created with the ReadWriteOnce access mode and mounted to running Pods.

QingCloud-CSI supports creating the following five types of volumes in QingCloud:

  • High capacity
  • Standard
  • SSD Enterprise
  • Super high performance
  • High performance

| QingCloud-CSI | Description |
| --- | --- |
| qingcloud_csi_enabled | Whether to use QingCloud-CSI as the persistent storage. Defaults to false |
| qingcloud_csi_is_default_class | Whether to set QingCloud-CSI as the default storage class. Defaults to false |
| qingcloud_access_key_id, qingcloud_secret_access_key | Please obtain them from the QingCloud Console |
| qingcloud_zone | The zone should be the same as the zone where the Kubernetes cluster is installed; the CSI plugin operates on the storage volumes in this zone. For example: sh1a (Shanghai 1-A), sh1b (Shanghai 1-B), pek2 (Beijing 2), pek3a (Beijing 3-A), pek3b (Beijing 3-B), pek3c (Beijing 3-C), gd1 (Guangdong 1), gd2a (Guangdong 2-A), ap1 (Asia Pacific 1), ap2a (Asia Pacific 2-A) |
| type | The type of volume on the QingCloud platform: 0 represents a high performance volume, 3 a super high performance volume, and 1 or 2 a high capacity volume depending on the cluster's zone; see the QingCloud Documentation |
| maxSize, minSize | Limit the range of the volume size in GiB |
| stepSize | Set the increment of the volume size in GiB |
| fsType | The file system of the storage volume; supports ext3, ext4, and xfs. The default is ext4 |
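
For reference, a sketch of the QingCloud-CSI block in conf/common.yaml, using the key names from the table above (the keys and zone are placeholders for your own account):

# conf/common.yaml (illustrative values)
qingcloud_csi_enabled: true
qingcloud_csi_is_default_class: false
qingcloud_access_key_id: "<access key id from QingCloud Console>"
qingcloud_secret_access_key: "<secret access key from QingCloud Console>"
qingcloud_zone: pek3a      # must match the zone of the Kubernetes cluster
type: 0                    # 0 = high performance volume
maxSize: 500
minSize: 10
stepSize: 10
fsType: ext4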

QingStor NeonSAN

The NeonSAN-CSI plugin supports using the enterprise-level distributed storage QingStor NeonSAN as the persistent storage solution. You need to prepare the NeonSAN server first, then configure the NeonSAN-CSI plugin to connect to its storage server in conf/common.yaml. Please refer to the NeonSAN-CSI Reference for further information.

| NeonSAN | Description |
| --- | --- |
| neonsan_csi_enabled | Whether to use NeonSAN as the persistent storage. Defaults to false |
| neonsan_csi_is_default_class | Whether to set NeonSAN-CSI as the default storage class. Defaults to false |
| neonsan_csi_protocol | The transport protocol, such as TCP or RDMA. This option must be set |
| neonsan_server_address | The NeonSAN server address |
| neonsan_cluster_name | The NeonSAN server cluster name |
| neonsan_server_pool | A comma-separated list of pools that the plugin will manage. This option must be set; the default value is kube |
| neonsan_server_replicas | The NeonSAN image replica count. Default: 1 |
| neonsan_server_stepSize | Set the increment of the volume size in GiB. Default: 1 |
| neonsan_server_fsType | The file system to use for the volume. Default: ext4 |
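
For reference, a sketch of the NeonSAN-CSI block in conf/common.yaml, using the key names from the table above (the server address and cluster name are placeholders):

# conf/common.yaml (illustrative values)
neonsan_csi_enabled: true
neonsan_csi_is_default_class: false
neonsan_csi_protocol: TCP              # or RDMA
neonsan_server_address: 192.168.0.66   # example NeonSAN server address
neonsan_cluster_name: csi_cluster      # example cluster name
neonsan_server_pool: kube
neonsan_server_replicas: 1
neonsan_server_stepSize: 1
neonsan_server_fsType: ext4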