Create a Kubernetes cluster
This guide provides an example of how to create a Kubernetes cluster using the Rumble Cloud console and the Kubernetes service.
In this guide you’ll learn how to:
- Use the Kubernetes service to automate the setup of the network, load balancers, and other components of your cluster
- Create a Kubernetes cluster based on a cluster template
- Monitor and understand the different cloud components associated with the cluster
- Modify the cluster
This guide assumes that:
- You’re logged into Rumble Cloud
- You’ve generated an SSH key pair in Rumble Cloud or uploaded your key
If you haven't created a key pair, follow the instructions in Add an SSH Key Pair.
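If you'd rather create a key locally and upload it, a typical approach with an OpenSSH client looks like this (the file name `rumble-cloud` is arbitrary):

```shell
# Create the target directory if it doesn't exist yet.
mkdir -p ~/.ssh

# Generate an ed25519 key pair with no passphrase. The public half
# (~/.ssh/rumble-cloud.pub) is the part you upload to the console.
ssh-keygen -t ed25519 -f ~/.ssh/rumble-cloud -N "" -C "rumble-cloud"
```

You would then paste the contents of the `.pub` file into the console when adding the key pair.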
About containerization on Rumble Cloud
Rumble Cloud provides a platform where you can implement different cloud-based solutions, including container orchestration. The Kubernetes service is just one way to run containerized workloads, and the tools provided still require you to plan for and manage your Kubernetes project according to your own requirements. While Rumble Cloud supports the underlying container orchestration engine for your project, it’s up to you to manage your specific Kubernetes implementation.
Create the cluster
Required vCPUs and VMs
Make sure that your vCPU quota supports your cluster deployments. A Kubernetes deployment requires a minimum of two virtual machines. Real-world environments typically need more robust deployments (for example, three master VMs with 8 vCPUs and 16 GB of RAM each, plus an additional two worker VMs).
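As a quick sanity check on quota, you can total up the vCPUs for the production-style example above. The worker flavor isn't specified in this guide, so 8 vCPUs per worker is an assumption here:

```shell
# Three masters at 8 vCPUs each, plus two workers (assumed 8 vCPUs each).
masters=3
workers=2
vcpus_per_node=8
total_vcpus=$(( (masters + workers) * vcpus_per_node ))
echo "vCPU quota needed: at least $total_vcpus"
# prints: vCPU quota needed: at least 40
```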
- In the Rumble Cloud console, go to Kubernetes > Clusters.
- Select Create Cluster.
- Enter a name for the cluster, for example, ‘howtok8s’.
- The cluster templates available include the latest Kubernetes version offered by the service, ‘Standard-v2.0-k8s-calico-fc38_v1.24.16’. The Kubernetes version is shown in the last part of the template name (‘_v1.24.16’). For this guide, you’ll use this latest template.
- Select the last option, ‘Standard-v2.0-k8s-calico-fc38_v1.24.16’.
- Select Next: Node Spec.
- Create or add an existing key pair.
- Specify the number of master nodes. Three is typical for production-ready environments, and the master node count can’t be increased after the cluster is created. For this example, you’ll create one master node.
- Select the flavor of the master node(s) based on your requirements. For this example, you’ll select ‘c2a.2xlarge’ (8 vCPUs and 16 GiB of memory).
- Specify the number of worker nodes. For this example, you’ll start by specifying 1 (and learn how to scale up later in this guide).
- Select Next: Network Setting.
- Load balancing is helpful when working with multiple master nodes. For this example, you’ll enable load balancing. The load balancer will help provide access to the cluster from the public internet.
- Select Create New Network. This will provision a new network.
- The Kubernetes service will create virtual machines during the setup of your cluster. You can optionally assign a floating IP address to the VMs. Skip enabling floating IP addresses for now.
- Select Next: Management.
- Auto-healing enables the automated management of worker nodes. Auto-scaling enables your Kubernetes cluster to scale worker nodes per demand. Skip enabling both for now.
- Leave the timeout value as ‘60’.
- Select Next: Additional Labels.
- You’ll see additional labels as a list of key-value pairs. These labels are required for correctly provisioning the cluster. Leave the default values in place.
- Select Confirm to create the cluster.
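The console steps above can also be expressed as a single CLI call. Rumble Cloud is built on OpenStack, so if it exposes the standard Magnum client (an assumption; check the provider's CLI documentation), the equivalent command looks roughly like:

```shell
# Sketch only: create a one-master, one-worker cluster from the template
# used in this guide. The key pair name 'my-keypair' is a placeholder.
openstack coe cluster create howtok8s \
  --cluster-template Standard-v2.0-k8s-calico-fc38_v1.24.16 \
  --keypair my-keypair \
  --master-count 1 \
  --node-count 1 \
  --timeout 60
```

This requires an authenticated OpenStack client session; the console workflow does not.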
Inspect the cluster and components
You’ll see the cluster in the Clusters dashboard, along with its status. In the example below, the status is ‘CREATE IN PROGRESS’.
You’ll also see the health status for the cluster. Health status is based on continuous monitoring of the cluster during its lifetime. Both status values may change as the cluster is created or modified.
Select the linked cluster ID to view the cluster details.
Select Network > Networks to view the new network created for the cluster.
Select Network > Load Balancers to view the two new load balancers created for the cluster. One load balancer is created for the cluster API, and the other is created for the ‘etcd’ service used by the cluster.
You’ll see that both load balancers have an assigned IP address. These are internal addresses that are not accessible or valid outside of the network. The API load balancer has an assigned floating IP address which enables access to the cluster from the internet.
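Once the API load balancer has its floating IP address, you can check reachability from your own machine. Kubernetes API servers conventionally listen on port 6443 (an assumption; this guide doesn't state the port):

```shell
# Replace <floating-ip> with the API load balancer's floating IP address.
# -k skips certificate verification, since the cluster's CA is not in
# your local trust store.
curl -k https://<floating-ip>:6443/healthz
```

Expect ‘ok’ if the endpoint allows anonymous health checks; a 401 or 403 response still confirms the API is reachable.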
Select Network > Routers to see the new router created for the cluster. The external IP address allows outbound traffic from the private network to external networks.
Select the router details, and then select Ports. You’ll see a port (network interface) was created and attached to the new network created for the cluster.
Select Compute > Instances. You’ll see the two new virtual machines created for the master and worker nodes in the cluster.
Earlier, when creating the master node, you had the option to add a floating IP address so you can log in to the virtual machine. You can also add a floating IP address to the master node from the Instances dashboard.
- Look for the new master node instance. You’ll see the word ‘master’ in the name.
- Select More > Related Resources > Add Floating IP.
- Select the instance IP. This is the internal IP address associated with the internal network created for the cluster.
- Select a floating IP address from the pool of addresses listed.
- Select OK to assign the floating IP address.
Once assigned, you’ll see the floating IP address listed in the Instances dashboard.
You can now access the virtual machine hosting the master node over SSH using the floating IP address. Since the master node uses a Fedora CoreOS image, you’ll use ‘core’ as the username.
- Copy the floating IP address for the master node to your clipboard (or make a note of the address).
- Open the Terminal application on your local machine.
- Type the following command, using the floating IP address as your SSH target: `ssh core@<floating IP address>`
- Type ‘yes’ to accept the host’s fingerprint and continue connecting.
- Fedora CoreOS uses an application called Podman (similar to Docker) to work with containers. Type `sudo podman ps` to view the running containers.
You’ll see a list of containers associated with your cluster.
```
CONTAINER ID  IMAGE                                                            COMMAND               CREATED       STATUS       PORTS  NAMES
0f0b719fddde  docker.io/openstackmagnum/heat-container-agent:wallaby-stable-1  /usr/bin/start-he...  44 hours ago  Up 44 hours         heat-container-agent
5709e6252e82  docker.io/rancher/hyperkube:v1.24.16-rancher1                    kube-apiserver --...  44 hours ago  Up 44 hours         kube-apiserver
21227e414400  docker.io/rancher/hyperkube:v1.24.16-rancher1                    kube-controller-m...  44 hours ago  Up 44 hours         kube-controller-manager
ca6c6c55fd4e  docker.io/rancher/hyperkube:v1.24.16-rancher1                    kube-scheduler --...  44 hours ago  Up 44 hours         kube-scheduler
fbe457c6880b  docker.io/rancher/hyperkube:v1.24.16-rancher1                    kubelet --logtost...  44 hours ago  Up 44 hours         kubelet
4ca1a819f399  docker.io/rancher/hyperkube:v1.24.16-rancher1                    kube-proxy --logt...  44 hours ago  Up 44 hours         kube-proxy
b0f5f0ca76aa  quay.io/coreos/etcd:v3.4.6                                       /usr/local/bin/et...  44 hours ago  Up 44 hours         etcd
```
Type 'logout' to end your SSH session.
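While you're still logged in to the master node, you can also inspect an individual control-plane container. The container names come from the listing shown above:

```shell
# Tail recent log output from the API server container.
sudo podman logs --tail 20 kube-apiserver
```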
Inspect orchestration
The Kubernetes service uses the Automation service to automatically provision the cloud resources required for clusters. Stacks are the templates the Automation service uses to control this provisioning. When you create a cluster, an Automation service stack is created.
- Select Automation > Stacks. You’ll see a new stack created for the cluster.
- Select the stack ID link to view the stack details.
You won’t need to modify the stack, but you can inspect the various elements to get an understanding of how the service coordinates the various cloud components.
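The container listing earlier included a `heat-container-agent`, which suggests the Automation service is OpenStack Heat under the hood. If the standard Heat CLI is available (an assumption, as with the Magnum client above), you can inspect the same stack from the command line:

```shell
# List stacks, then drill into the resources of the cluster's stack.
# <stack-id> is the ID shown in the Stacks dashboard.
openstack stack list
openstack stack resource list <stack-id>
```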
Modify the cluster
You can use the cloud console to modify the cluster.
- Go to Kubernetes > Clusters.
- Find the cluster you created.
- Select Resize Cluster.
- Add an additional worker node to the cluster.
- Select OK.
The Clusters dashboard shows that the cluster status is ‘UPDATE IN PROGRESS’. Wait for the status to change to ‘UPDATE COMPLETE’.
Once complete, you can select the cluster details to verify that the additional worker node was added.
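If the Magnum CLI is available (again, an assumption about the provider's tooling), the same resize can be done from the command line:

```shell
# Grow the cluster to two worker nodes; the master count is unchanged.
openstack coe cluster resize howtok8s 2
```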
Next steps
Learn how to manage a cluster using command-line tools and the cluster you built from this example.
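A common first step for command-line management (assuming the Magnum CLI, as in the earlier sketches) is to download the cluster's kubeconfig and point `kubectl` at it:

```shell
# Write the kubeconfig to the current directory and use it for kubectl.
openstack coe cluster config howtok8s --dir .
export KUBECONFIG=$PWD/config
kubectl get nodes
```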