
Clusters console

Overview

Clusters are used to run containerized applications in a cloud environment, providing a scalable and flexible platform that leverages the underlying infrastructure and services of Rumble Cloud. Clusters are useful for deploying microservices-based applications, big data workloads, and other applications that benefit from containerization and orchestration.

Select Kubernetes > Clusters to view the Clusters console. The Clusters console shows a list of all clusters in your project.

From the console, you can view usage statistics, view cluster details, and delete clusters.

Cluster details

Here's a table summarizing the properties and attributes you should understand in order to work with clusters in the Kubernetes service:

| Property/Attribute | Description |
|---|---|
| Name | A human-readable name for the cluster, used for identification purposes. |
| ID | A unique identifier automatically assigned to the cluster by OpenStack. |
| Status | Indicates the current state of the cluster (e.g., CREATE_IN_PROGRESS, ACTIVE, DELETE_FAILED). |
| Cluster Template ID | The ID of the cluster template used to create the cluster, defining its configuration. |
| Node Count | The number of worker nodes (instances) in the cluster. |
| Master Count | The number of master nodes in the cluster (for orchestration services like Kubernetes). |
| Creation Time | The timestamp indicating when the cluster was created. |
| Update Time | The timestamp indicating when the cluster was last updated. |
| API Address | The IP address or DNS name for accessing the cluster's API (e.g., the Kubernetes API). |
| Discovery URL | A URL used for cluster discovery during initial setup (specific to certain orchestration tools). |
| Status Reason | Additional information about the current status of the cluster. |
| Labels | A set of key-value pairs associated with the cluster, providing additional metadata or configuration options. |
| Faults | Information about any faults or errors encountered by the cluster. |
| Key Pair | The name of the SSH key pair used for accessing nodes in the cluster. |
| Master Flavor ID | The ID of the flavor used for master nodes (defines CPU, memory, and storage resources). |
| Node Flavor ID | The ID of the flavor used for worker nodes. |
| Project ID | The ID of the project (tenant) to which the cluster belongs. |
| User ID | The ID of the user who created the cluster. |

Understanding these properties and attributes helps you create, manage, and interact with clusters effectively, so you can deploy and operate container orchestration platforms like Kubernetes, Docker Swarm, or Apache Mesos.
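As an illustration, the key attributes above can be modeled as a simple record. The class below is a sketch for reasoning about the table, not an actual SDK or API type; all field names simply mirror the table:

```python
from dataclasses import dataclass, field

# Illustrative only: field names mirror the attribute table above,
# not any real SDK schema.
@dataclass
class Cluster:
    name: str                 # human-readable name
    id: str                   # assigned automatically on creation
    status: str               # e.g. CREATE_IN_PROGRESS, ACTIVE
    cluster_template_id: str  # template that defines the configuration
    node_count: int           # worker nodes
    master_count: int         # master nodes
    labels: dict = field(default_factory=dict)

c = Cluster(
    name="demo",
    id="3f1c0a7b",  # hypothetical ID for illustration
    status="CREATE_IN_PROGRESS",
    cluster_template_id="tmpl-1",
    node_count=3,
    master_count=1,
    labels={"kube_dashboard_enabled": "true"},
)
print(c.status)
```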

Status details

Clusters can have various status reasons that provide more detailed information about the current state of the cluster. Here's a table summarizing some common status reasons for clusters:

| Status | Status Reason | Description |
|---|---|---|
| CREATE_IN_PROGRESS | ClusterCreateInProgress | The cluster is being created. |
| CREATE_FAILED | ClusterCreateFailed | Cluster creation has failed; the specific reason for the failure is provided. |
| CREATE_COMPLETE | ClusterCreateComplete | The cluster has been successfully created. |
| UPDATE_IN_PROGRESS | ClusterUpdateInProgress | The cluster is being updated. |
| UPDATE_FAILED | ClusterUpdateFailed | The cluster update has failed; the specific reason for the failure is provided. |
| UPDATE_COMPLETE | ClusterUpdateComplete | The cluster has been successfully updated. |
| DELETE_IN_PROGRESS | ClusterDeleteInProgress | The cluster is being deleted. |
| DELETE_FAILED | ClusterDeleteFailed | Cluster deletion has failed; the specific reason for the failure is provided. |
| DELETE_COMPLETE | ClusterDeleteComplete | The cluster has been successfully deleted. |
| RESUME_IN_PROGRESS | ClusterResumeInProgress | The cluster is being resumed from a suspended state. |
| RESUME_FAILED | ClusterResumeFailed | Resuming the cluster has failed; the specific reason for the failure is provided. |
| RESUME_COMPLETE | ClusterResumeComplete | The cluster has been successfully resumed. |
| SUSPEND_IN_PROGRESS | ClusterSuspendInProgress | The cluster is being suspended. |
| SUSPEND_FAILED | ClusterSuspendFailed | Suspending the cluster has failed; the specific reason for the failure is provided. |
| SUSPEND_COMPLETE | ClusterSuspendComplete | The cluster has been successfully suspended. |
| SCALE_IN_PROGRESS | ClusterScaleInProgress | The cluster is being scaled (either up or down). |
| SCALE_FAILED | ClusterScaleFailed | Scaling the cluster has failed; the specific reason for the failure is provided. |
| SCALE_COMPLETE | ClusterScaleComplete | The cluster has been successfully scaled. |

These status reasons provide insight into the operations being performed on a cluster and can help in diagnosing issues or understanding the current state of the cluster in the Kubernetes service.
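Because every status string in the table follows the same OPERATION_PHASE pattern, a small helper can split a status into its operation and phase, and flag whether the cluster has reached a terminal state. This is an illustrative sketch, not part of the service API:

```python
def parse_status(status: str):
    """Split a cluster status such as CREATE_IN_PROGRESS into
    (operation, phase), e.g. ("CREATE", "IN_PROGRESS")."""
    operation, _, phase = status.partition("_")
    return operation, phase

def is_terminal(status: str) -> bool:
    """A cluster in a *_COMPLETE or *_FAILED state is no longer
    transitioning; *_IN_PROGRESS states are transient."""
    return status.endswith(("_COMPLETE", "_FAILED"))

print(parse_status("CREATE_IN_PROGRESS"))  # ('CREATE', 'IN_PROGRESS')
print(is_terminal("DELETE_FAILED"))        # True
print(is_terminal("SCALE_IN_PROGRESS"))    # False
```

A monitoring script might poll the cluster status and stop once `is_terminal` returns True, then inspect the status reason if the phase is FAILED.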

Cluster labels

Cluster labels are key-value pairs that let you specify configuration options and metadata for your container orchestration clusters, such as Kubernetes or Docker Swarm clusters. Labels provide a flexible way to customize the behavior, features, and settings of the cluster and its components.

When creating a cluster, you can specify labels as part of the cluster creation request. These labels are then used by the Kubernetes service and the underlying container orchestration engine to configure the cluster according to the specified options. Labels can influence various aspects of the cluster, including networking settings, storage options, security parameters, and feature toggles.

Tip

Consult the documentation for the specific container orchestration platform (for example, Kubernetes or Docker Swarm) to understand the available options and their expected values.

Here are some use cases for cluster labels.

  • Configuring networking: Labels can be used to configure network-related settings, such as the network driver, overlay network options, or network policies.

  • Enabling features: Certain features of the container orchestration platform can be enabled or disabled using labels. For example, you might use labels to enable Kubernetes alpha features or to configure logging and monitoring options.

  • Setting resource limits: Labels can specify resource limits and quotas for the cluster, such as the maximum number of pods per node or CPU and memory limits for containers.

  • Customizing storage: You can use labels to configure storage options, such as the type of persistent storage to use or storage class parameters.

  • Security and compliance: Labels can be used to enforce security settings and compliance requirements, such as enabling network policies or configuring role-based access control (RBAC).

Here are some examples of cluster labels.

  • Enable the Kubernetes Dashboard: kube_dashboard_enabled=true enables the Kubernetes Dashboard for the cluster.

  • Set the Flannel network backend: flannel_backend=vxlan configures the Flannel network plugin to use the VXLAN backend for overlay networking in a Kubernetes cluster.

  • Configure Calico network policy: calico_ipv4pool_ipip=Always enables IP-in-IP encapsulation for the Calico network plugin, which is used for implementing network policies in Kubernetes.

  • Specify the Docker storage driver: docker_storage_driver=overlay2 sets the storage driver for Docker containers to overlay2, a modern and efficient storage driver.

  • Enable the Kubernetes autoscaler: autoscaler_enabled=true enables the Kubernetes Cluster Autoscaler, which automatically adjusts the number of nodes in the cluster based on workload demand.
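Labels like those above are commonly supplied at creation time as a comma-separated key=value string (for example via a --labels flag in CLI tooling). The helpers below are a hedged sketch of converting between that textual form and a dict; they are illustrative, not part of any official client:

```python
def labels_to_string(labels: dict) -> str:
    """Render a labels dict as key=value pairs joined by commas,
    the comma-separated form commonly accepted at cluster creation."""
    return ",".join(f"{k}={v}" for k, v in labels.items())

def string_to_labels(raw: str) -> dict:
    """Parse a key=value,key=value string back into a dict.
    Splits on the first '=' only, so values may contain '='."""
    return dict(pair.split("=", 1) for pair in raw.split(",") if pair)

labels = {
    "kube_dashboard_enabled": "true",
    "flannel_backend": "vxlan",
}
rendered = labels_to_string(labels)
print(rendered)  # kube_dashboard_enabled=true,flannel_backend=vxlan
print(string_to_labels(rendered) == labels)  # True
```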

Cluster faults

Here's a table summarizing some of the most common and important faults in clusters managed by the Kubernetes service, along with troubleshooting use cases.

| Fault Type | Common Faults | Troubleshooting Use Case |
|---|---|---|
| Creation Faults | Insufficient resources; image not found; network configuration errors | Check resource quotas and availability; verify the image exists in Glance; review network settings |
| Update Faults | Incompatible changes; resource constraints; update script errors | Review change logs and documentation; ensure adequate resources; check update scripts and logs |
| Operational Faults | Networking issues; orchestration engine failures; application errors | Diagnose network connectivity; check the health of the orchestration engine; debug application logs |
| Deletion Faults | Resource cleanup errors; dependency issues; orphaned resources | Manually remove remaining resources; resolve dependencies; identify and delete orphaned resources |

When troubleshooting faults, start by examining the logs and error messages provided by the Kubernetes service, the orchestration engine (for example, Kubernetes), and the underlying cloud services (for example, the Compute service or Network service). Additionally, checking the health and status of the cluster nodes and the deployed applications can provide insights into the root cause of the fault.
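The fault table above can be read as a lookup from fault type to a first-pass troubleshooting checklist. The sketch below is purely illustrative and simply encodes the table's contents:

```python
# Illustrative mapping from fault type to a first-pass troubleshooting
# checklist, taken directly from the fault table above.
CHECKLISTS = {
    "creation": [
        "Check resource quotas and availability",
        "Verify the image exists in Glance",
        "Review network settings",
    ],
    "update": [
        "Review change logs and documentation",
        "Ensure adequate resources",
        "Check update scripts and logs",
    ],
    "operational": [
        "Diagnose network connectivity",
        "Check the health of the orchestration engine",
        "Debug application logs",
    ],
    "deletion": [
        "Manually remove remaining resources",
        "Resolve dependencies",
        "Identify and delete orphaned resources",
    ],
}

def checklist_for(fault_type: str) -> list:
    """Return the first-pass checklist for a fault type, falling back
    to generic log inspection for unknown types."""
    return CHECKLISTS.get(
        fault_type.lower(),
        ["Examine service logs and error messages"],
    )

for step in checklist_for("creation"):
    print("-", step)
```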

See also

  • Kubernetes service

  • Kubernetes service CLI reference

  • Kubernetes service API reference