Getting started with cctl
A single command is used for all cluster operations. All logging is done in the UI, so there are no logs on disk, other than the Kubernetes-specific logs found on any Kubernetes cluster.
To invoke the CLI, run:
/opt/bin/cctl
Example
node1 ~ # /opt/bin/cctl
Platform9 tool for Kubernetes cluster management.
This tool lets you create, scale, backup and restore your on-premise Kubernetes cluster.

Usage:
  cctl [command]

Available Commands:
  backup      Create an archive with the current cctl state and an etcd snapshot from the cluster.
  bundle      Used to create cctl bundle
  create      Used to create resources
  delete      Used to delete resources
  deploy      Used to deploy app to the cluster
  get         Display one or more resources
  help        Help about any command
  migrate     Migrate the state file to the current version
  recover     Used to recover the cluster
  restore     Restore the cctl state and etcd snapshot from an archive.
  snapshot    Used to get a snapshot
  status      Used to get status of the cluster
  upgrade     Used to upgrade the cluster
  version     Print version information

Flags:
  -h, --help               help for cctl
  -l, --log-level string   set log level for output, permitted values debug, info, warn, error, fatal and panic (default "info")
      --state string       state file (default "/etc/cctl-state.yaml")

Use "cctl [command] --help" for more information about a command.
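The --log-level and --state flags are global, so they can be combined with any subcommand. As an illustration (using version, which prints version information):

node1 ~ # /opt/bin/cctl --log-level debug --state /etc/cctl-state.yaml version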
The CLI can be executed from any node in the cluster. The only requirement for this is that the cluster configuration file, located in /etc/cctl-state.yaml, must be synchronized between the nodes.
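cctl does not ship a built-in sync mechanism that this guide describes, so one way to keep the state file synchronized (a sketch, assuming SSH access as the core user used elsewhere in this guide; the node2 address is illustrative) is to copy it manually after each change:

node1 ~ $ scp /etc/cctl-state.yaml core@10.0.0.52:/tmp/cctl-state.yaml
node2 ~ $ sudo mv /tmp/cctl-state.yaml /etc/cctl-state.yaml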
Choose a cluster node to use for initial cluster configuration. We'll call this the "management node". From this node, execute the following steps to bring up a multi-master Kubernetes cluster:
Use cctl create credential to configure the username and private key to use for accessing all nodes in the cluster:
node1 ~ $ sudo /opt/bin/cctl create credential --user core --private-key /home/core/.ssh/id_rsa
2018/07/16 12:50:36 Credentials created with user:core and private-key file:/home/core/.ssh/id_rsa
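If the private key does not exist yet, it can be generated and distributed with standard OpenSSH tooling beforehand (nothing cctl-specific; the paths and IP address here mirror the examples in this guide):

node1 ~ $ ssh-keygen -t rsa -f /home/core/.ssh/id_rsa -N ""
node1 ~ $ ssh-copy-id -i /home/core/.ssh/id_rsa.pub core@10.0.0.51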
Use cctl create cluster to create a cluster. This step includes the CIDR configuration for both the services and pod networks, as well as the VIP (optional) and Router ID (optional) for the VRRP-based Virtual IP address in front of the master node(s):
node1 ~ $ sudo /opt/bin/cctl create cluster --pod-network 192.168.0.0/16 --service-network 192.169.0.0/24 --vip 10.105.16.38 --router-id 201
2018/07/16 12:51:08 Cluster created successfully.
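The status subcommand from the help output above can be used to confirm the result; its output format varies by version, so it is not reproduced here:

node1 ~ $ sudo /opt/bin/cctl status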
The cctl create cluster command also accepts a YAML config file that supplies values for configurable parameters:
node1 ~ $ sudo /opt/bin/cctl create cluster --pod-network 192.168.0.0/16 --service-network 192.169.0.0/24 --vip 10.105.16.38 --router-id 201 --cluster-config ./clusterconfig.yaml
Sample YAML config file: the following shows the default values for the currently configurable parameters.
kubeAPIServer:
  service-node-port-range: "80-32767"
  allow-privileged: "true"
  secure-port: 6443
kubeControllerManager:
  pod-eviction-timeout: "20s"
kubeScheduler:
kubeProxy:
  mode: iptables
kubelet:
  kubeAPIQPS: 20
  kubeAPIBurst: 40
  maxPods: 500
  failSwapOn: "false"
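A config file may also specify just a subset of these values. Assuming unspecified parameters keep the defaults above (an assumption, not confirmed by the help text), a minimal override might look like:

kubeControllerManager:
  pod-eviction-timeout: "60s"
kubelet:
  maxPods: 200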
References:
- https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver
- https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy
- https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler
- https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager
- https://github.com/kubernetes/kubernetes/blob/v1.10.11/pkg/kubelet/apis/kubeletconfig/v1beta1/types.go
To add one or more master nodes, run cctl create machine with --role master:
node1 ~ $ sudo /opt/bin/cctl create machine --ip 10.0.0.51 --role master
To create additional master nodes, run the same command for each additional node, changing only the IP address.
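For example, to end up with the three masters shown in the kubectl output at the end of this guide (IP addresses here are illustrative):

node1 ~ $ sudo /opt/bin/cctl create machine --ip 10.0.0.52 --role master
node1 ~ $ sudo /opt/bin/cctl create machine --ip 10.0.0.53 --role master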
To add one or more worker nodes, run cctl create machine with --role node:
node1 ~ $ sudo /opt/bin/cctl create machine --ip 10.0.0.54 --role node
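The get subcommand ("Display one or more resources") can then list what cctl knows about. The resource name machine below is an assumption inferred from the create machine syntax, not something the help text confirms:

# Note: "machine" as a resource name is an assumption; run "cctl get --help"
# to list the resource names your version actually supports.
node1 ~ $ sudo /opt/bin/cctl get machine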
Once the master and worker nodes have been set up, the cluster is ready and can be managed with kubectl. As a non-root user, use sudo as follows; if you are root, export KUBECONFIG instead:
node1 ~ $ sudo KUBECONFIG=/etc/kubernetes/admin.conf /opt/bin/kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node1     Ready     master    13m       v1.11.9
node2     Ready     master    3m        v1.11.9
node3     Ready     master    2m        v1.11.9
node4     Ready     <none>    1m        v1.11.9
node1 ~ # export KUBECONFIG=/etc/kubernetes/admin.conf
node1 ~ # source <(kubectl completion bash)
node1 ~ # kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node1     Ready     master    13m       v1.11.9
node2     Ready     master    3m        v1.11.9
node3     Ready     master    2m        v1.11.9
node4     Ready     <none>    1m        v1.11.9
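From here, day-2 operations use the other subcommands from the help output above (backup, snapshot, upgrade, recover, and so on). Their flags are not covered in this guide; each subcommand documents its own:

node1 ~ $ sudo /opt/bin/cctl backup --help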