KubeRunner is a Go-based tool for automating Kubernetes cluster installation and setup on Linux systems. It provides a simple, interactive way to bootstrap both control plane and worker nodes.
- Automatic detection of the Linux distribution (see the sketch after this feature list)
- Support for Debian- and RedHat-based distributions
- Containerd runtime installation and configuration
- Kubernetes control plane initialization
- Calico network plugin installation
- Optional Kubernetes Dashboard installation
- Worker node join command generation
- High-availability cluster setup
- Node joining (both worker and control plane nodes)
- Node labeling and tainting
- Cluster upgrade capabilities
- Cluster status checking
- Robust configuration options
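KubeRunner's actual detection code may differ, but as a rough illustration, distribution detection on Linux typically comes down to parsing /etc/os-release. A minimal Go sketch (the detectDistro helper is hypothetical, not part of KubeRunner's API):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// detectDistro is a hypothetical helper: it reads /etc/os-release and
// classifies the host as a Debian- or RedHat-based distribution.
func detectDistro() (string, error) {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		return "", err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := scanner.Text()
		if !strings.HasPrefix(line, "ID=") && !strings.HasPrefix(line, "ID_LIKE=") {
			continue
		}
		value := strings.Trim(strings.SplitN(line, "=", 2)[1], `"`)
		switch {
		case strings.Contains(value, "debian"), strings.Contains(value, "ubuntu"):
			return "debian", nil
		case strings.Contains(value, "rhel"), strings.Contains(value, "centos"), strings.Contains(value, "fedora"):
			return "redhat", nil
		}
	}
	return "", fmt.Errorf("unsupported distribution")
}

func main() {
	distro, err := detectDistro()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("detected distribution family:", distro)
}
```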
When setting up a control plane node, KubeRunner will:
- Install and configure containerd
- Install Kubernetes components (kubeadm, kubelet, kubectl)
- Initialize the Kubernetes control plane
- Install the Calico network plugin
- Generate a join command for worker nodes
- Optionally install the Kubernetes Dashboard
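Under the hood, this flow amounts to shelling out to kubeadm and kubectl. The Go sketch below is illustrative only; the pod CIDR, Calico manifest URL, and initControlPlane helper are assumptions rather than KubeRunner's actual values or functions:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// initControlPlane is an illustrative sketch, not KubeRunner's actual code.
// It runs kubeadm init, applies Calico, and prints a worker join command.
func initControlPlane() error {
	steps := [][]string{
		// Initialize the control plane; the pod CIDR is an assumed example value.
		{"kubeadm", "init", "--pod-network-cidr=192.168.0.0/16"},
		// Install the Calico network plugin (the manifest URL shown is an assumption).
		{"kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "apply", "-f",
			"https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml"},
		// Generate the join command for worker nodes.
		{"kubeadm", "token", "create", "--print-join-command"},
	}
	for _, args := range steps {
		cmd := exec.Command(args[0], args[1:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("%s failed: %w", args[0], err)
		}
	}
	return nil
}

func main() {
	if err := initControlPlane(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```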
When setting up a worker node, KubeRunner will:
- Install and configure containerd
- Install Kubernetes components
- Prompt for the join command from the control plane
- Join the node to the cluster
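The only interactive part of the worker flow is supplying the join command. A hedged sketch of how that prompt-and-join step could look in Go (joinWorker is a hypothetical helper, not KubeRunner's actual function):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// joinWorker is an illustrative sketch of the worker flow: it prompts for the
// kubeadm join command generated on the control plane and executes it.
func joinWorker() error {
	fmt.Print("Paste the kubeadm join command from the control plane: ")
	line, err := bufio.NewReader(os.Stdin).ReadString('\n')
	if err != nil {
		return err
	}
	fields := strings.Fields(line)
	if len(fields) == 0 || fields[0] != "kubeadm" {
		return fmt.Errorf("expected a command starting with 'kubeadm'")
	}
	cmd := exec.Command(fields[0], fields[1:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := joinWorker(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```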
KubeRunner supports high availability setups with multiple control plane nodes. When configuring a high availability cluster:
- Set up a load balancer in front of the API servers
- Configure the first control plane node with the load balancer endpoint
- Join additional control plane nodes using the certificate key (see the sketch below)
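In kubeadm terms, the HA case differs from a single control plane mainly in the --control-plane-endpoint/--upload-certs flags at init time and the --control-plane/--certificate-key flags at join time. The Go sketch below simply assembles those commands; the endpoint, token, and certificate key are placeholders:

```go
package main

import "fmt"

// Illustrative only: assemble the kubeadm commands used for an HA setup.
// The endpoint and certificate key below are placeholders, not real values.
func main() {
	lbEndpoint := "k8s-api.example.com:6443" // load balancer in front of the API servers
	certKey := "<certificate-key>"           // printed by 'kubeadm init --upload-certs'

	// First control plane node: point kubeadm at the load balancer and upload certs.
	initCmd := fmt.Sprintf(
		"kubeadm init --control-plane-endpoint %s --upload-certs --pod-network-cidr=192.168.0.0/16",
		lbEndpoint)

	// Additional control plane nodes: join with --control-plane and the certificate key.
	joinCmd := fmt.Sprintf(
		"kubeadm join %s --token <token> --discovery-token-ca-cert-hash <hash> --control-plane --certificate-key %s",
		lbEndpoint, certKey)

	fmt.Println(initCmd)
	fmt.Println(joinCmd)
}
```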
git clone https://github.com/ochestra-tech/KubeRunner.git
cd KubeRunner
make build
sudo make install
KubeRunner also provides a containerized build environment that uses Docker only as a build step, producing standalone binaries that run directly on the host system. This is a more practical approach: it avoids running privileged containers at runtime while still leveraging Docker for consistent builds.
- Secure: No need for privileged containers at runtime
- Portable: Builds binaries for multiple architectures
- Consistent: Same build environment regardless of host OS
- Simple: Easy installation process using the generated script
- Flexible: Runs directly on the host with full access to system resources
- CI/CD friendly: Easy to integrate into build pipelines
a. Build the binaries:
./scripts/docker-build.sh
b. Install KubeRunner on the host system:
./scripts/install-host.sh
c. Run KubeRunner:
KubeRunner
If you prefer to build and install KubeRunner manually:
- Build the binary:
go build -o KubeRunner cmd/KubeRunner/main.go
- Install it:
sudo cp KubeRunner /usr/local/bin/
sudo mkdir -p /usr/local/lib/KubeRunner/assets
sudo cp -r assets/* /usr/local/lib/KubeRunner/assets/
sudo chmod +x /usr/local/bin/KubeRunner
# Build the binaries
./build.sh
# Install KubeRunner on the host system
./install-host.sh
# Run KubeRunner
KubeRunner
scripts/k8s-cluster-setup is the shell script version of this Go cluster creation tool.
- Save the script to a file (e.g., setup-kubernetes.sh)
- Make it executable: chmod +x setup-kubernetes.sh
- Run it as root: sudo ./setup-kubernetes.sh
- First run the script on the machine you want to be the master node
- When prompted, indicate it's a master node
- Save the join command that is generated
- Run the script on each worker node
- When prompted, indicate it's not a master node
- Run the join command you saved earlier on each worker node
After completing these steps, you'll have a functional Kubernetes cluster with networking configured and ready to deploy applications.
A separate binary, kubeopera-deploy, implements the full kubeopera infrastructure deployment pipeline in Go, replacing the bash script deploy-kubeopera.sh with an 11-step orchestration.
go build -o kubeopera-deploy ./cmd/kubeopera-deploy/
# Full pipeline (pass path to kubeopera infrastructure/scripts)
./kubeopera-deploy --environment staging --script-dir /path/to/kubeopera/infrastructure/scripts
# Skip confirmation (e.g. CI)
./kubeopera-deploy -e production -s /path/to/scripts -y
# Run only steps 3–5
./kubeopera-deploy -e staging -s /path/to/scripts --start-step 3 --stop-step 5

| Option | Description |
|---|---|
| -e, --environment | development, staging, or production (default: production) |
| -s, --script-dir | Path to the kubeopera infrastructure/scripts directory |
| -y, --skip-confirmation | Skip confirmation prompts |
| --start-step | Start at step N (1–11) |
| --stop-step | Stop at step N (1–11) |
1. Prerequisites – Check terraform, kubectl, helm, aws, jq, ssh, and AWS credentials
2. Terraform backend – Create S3 bucket and DynamoDB table (AWS SDK)
3. Infrastructure – Terraform init/plan/apply; copy SSH key and outputs to bastion
4. Configure nodes – Run configure-nodes.sh on each node via bastion
5. Install Kubernetes – Run install-kubernetes.sh on bastion
6. Networking – Install Calico CNI
7. Monitoring – Install Prometheus and Grafana
8. Kubeconfig – Copy kubeconfig from bastion to local ~/.kube/config-{env}
9. Verify – kubectl get nodes/pods, cluster-info
10. Backups – Schedule etcd backup cron on control plane nodes
11. Add-ons – CCM/CSI/Metrics/Ingress via install-kubernetes.sh RUN_MODE=addons
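One natural way to structure such a pipeline, including the --start-step/--stop-step windowing, is an ordered list of named steps. The sketch below is illustrative; the step and runPipeline types are assumptions, not kubeopera-deploy's actual code:

```go
package main

import (
	"fmt"
	"os"
)

// step is an illustrative pairing of a pipeline step name with its action.
type step struct {
	name string
	run  func() error
}

// runPipeline executes steps[startStep-1 .. stopStep-1] in order, mirroring
// the --start-step/--stop-step windowing described above.
func runPipeline(steps []step, startStep, stopStep int) error {
	for i, s := range steps {
		n := i + 1
		if n < startStep || n > stopStep {
			continue
		}
		fmt.Printf("Step %d/%d: %s\n", n, len(steps), s.name)
		if err := s.run(); err != nil {
			return fmt.Errorf("step %d (%s) failed: %w", n, s.name, err)
		}
	}
	return nil
}

func main() {
	noop := func() error { return nil } // placeholder actions for the sketch
	steps := []step{
		{"Prerequisites", noop}, {"Terraform backend", noop}, {"Infrastructure", noop},
		{"Configure nodes", noop}, {"Install Kubernetes", noop}, {"Networking", noop},
		{"Monitoring", noop}, {"Kubeconfig", noop}, {"Verify", noop},
		{"Backups", noop}, {"Add-ons", noop},
	}
	if err := runPipeline(steps, 3, 5); err != nil { // e.g. --start-step 3 --stop-step 5
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```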
Bootstrap scripts (e.g. configure-nodes.sh, install-kubernetes.sh) remain as-is; the Go binary copies and runs them over SSH. Step 2 (Terraform backend) is implemented entirely in Go using the AWS SDK.
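For step 2, the conventional Terraform backend pair is an S3 bucket for state plus a DynamoDB table keyed on LockID for locking. A hedged sketch of that step using aws-sdk-go-v2 (bucket name, table name, and region are placeholders; kubeopera-deploy's actual code may differ):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"
	ddbtypes "github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	s3types "github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// S3 bucket for Terraform state (name and region are placeholders).
	_, err = s3.NewFromConfig(cfg).CreateBucket(ctx, &s3.CreateBucketInput{
		Bucket: aws.String("kubeopera-terraform-state"),
		CreateBucketConfiguration: &s3types.CreateBucketConfiguration{
			LocationConstraint: s3types.BucketLocationConstraint("eu-west-1"),
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// DynamoDB table for Terraform state locking, keyed on LockID.
	_, err = dynamodb.NewFromConfig(cfg).CreateTable(ctx, &dynamodb.CreateTableInput{
		TableName:   aws.String("kubeopera-terraform-locks"),
		BillingMode: ddbtypes.BillingModePayPerRequest,
		AttributeDefinitions: []ddbtypes.AttributeDefinition{
			{AttributeName: aws.String("LockID"), AttributeType: ddbtypes.ScalarAttributeTypeS},
		},
		KeySchema: []ddbtypes.KeySchemaElement{
			{AttributeName: aws.String("LockID"), KeyType: ddbtypes.KeyTypeHash},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```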