Simple Terraform Example¶
The Terraform code in this example deploys a scalable virtual machine infrastructure on Rumble Cloud. The code creates a configurable number of VMs with associated storage, networking, and security configurations.
Download the code¶
Codebase structure¶
The infrastructure is defined across several Terraform files, each responsible for specific components:
- main.tf - Core provider configuration and initialization
- variables.tf - Variable definitions for customizing the infrastructure
- vms_app.tf - VM instance configurations
- volumes_app.tf - Storage volume definitions
- securitygroup_app.tf - Security group rules and configurations
- keypair.tf - SSH key pair configuration
- ports_app.tf - Network port configurations
- servergroup_app.tf - Server group definitions for VM placement
- data.tf - Data source definitions
Component overview¶
Provider Configuration (main.tf)¶
The infrastructure uses the OpenStack provider (version 2.0.0). Authentication can be configured either through environment variables or by directly specifying credentials in the provider block.
Variables and Customization (variables.tf)¶
The infrastructure is highly configurable through variables including:
- system_name - Base name for resource naming
- app_vm_count - Number of VMs to deploy (default: 3)
- flavor_app - VM instance size (default: m2a.xlarge)
- image_app - OS image (default: Ubuntu-22.04)
- volume_app_os - OS volume size in GB (default: 10)
- volume_app - Additional volume size in GB (default: 10)
- app_subnet - Subnet configuration (default: 192.168.1)
- cloud_network - Network name (default: PublicEphemeral)
Warning
PublicEphemeral is a pre-built, default public network provided by Rumble Cloud. Placing a virtual machine directly onto a public network can be unsafe and is not recommended for most situations. You can use this example as a quick way to create a test server, but you typically won't want to use this method in any kind of working environment.
Virtual machines and storage¶
- Each VM is created with two volumes:
  - An OS volume for the system
  - An additional volume for data storage
- VMs are configured with network ports and security group rules
- Server groups ensure proper VM placement and distribution
Networking and security¶
- Security groups define inbound and outbound traffic rules
- Network ports connect VMs to the specified network
- The infrastructure uses a pre-existing network (PublicEphemeral)
Access management¶
- SSH access is configured through keypairs
- Security groups control network access to the VMs
How it works¶
- When applied, Terraform first initializes the OpenStack provider and validates the configuration.
- It then creates the necessary security groups and rules.
- Storage volumes are provisioned for each VM.
- Network ports are created and configured.
- VMs are launched with the specified image and connected to their volumes and network ports.
- The server group ensures proper VM distribution across the infrastructure.
Infrastructure diagram¶
     +------------------------+
     |     Public Network     |
     +------------------------+
                 |
     +------------------------+
     |    Security Groups     |
     +------------------------+
                 |
           +----------+
           |   VMs    |
           |  (1-N)   |
           +----------+
             |      |
       +-----+      +-----+
       |                  |
  +--------+        +---------+
  | OS Vol |        |Data Vol |
  +--------+        +---------+
Usage notes¶
- Ensure your OpenStack credentials are properly configured.
- Update the keypair variable with your public SSH key.
- Adjust the VM count and specifications as needed in variables.tf.
- Use standard Terraform commands to manage the infrastructure:
terraform init
terraform plan
terraform apply
terraform destroy
Detailed file descriptions¶
main.tf¶
This file serves as the foundation of the infrastructure configuration:
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "2.0.0"
    }
  }
}

provider "openstack" {
  # Configuration via environment variables or direct credentials
}
Key aspects:
- Provider Block: Specifies OpenStack as the infrastructure provider with version 2.0.0
- Authentication: Supports two methods:
  - Environment variables (recommended) using an OpenStack RC file
  - Direct credential configuration in the provider block (sketched below)
- Version Pinning: Explicitly pins the OpenStack provider version to ensure consistency
- Provider Source: Uses the official terraform-provider-openstack/openstack source
Best practices implemented:
- Version constraint to prevent unexpected provider updates
- Commented credential placeholders for easy configuration
- Flexibility in authentication methods
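If you choose direct configuration, a minimal sketch of the provider block follows; every value shown is a placeholder, not a credential from the example:
provider "openstack" {
  auth_url    = "https://keystone.example.com:5000/v3" # placeholder identity endpoint
  user_name   = "your-username"                        # placeholder
  password    = "your-password"                        # placeholder; avoid committing secrets
  tenant_name = "your-project"                         # placeholder project name
  region      = "your-region"                          # placeholder region
}
With the environment-variable method, the provider reads the same settings from the OS_* variables that sourcing an OpenStack RC file exports, and the provider block can stay empty as shown above.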
variables.tf¶
This file defines all configurable parameters for the infrastructure. Variables are organized into logical groups:
# System Identification
variable "system_name" {
  type    = string
  default = "simplevms"
}

# VM Configuration
variable "app_vm_count" {
  type    = string
  default = "3"
}

# ... more variables ...
Variable categories:
- System Identification
  - system_name: Base name for resource identification (default: "simplevms")
- VM Configuration
  - app_vm_count: Number of VMs to deploy (default: 3)
  - flavor_app: VM size/flavor (default: m2a.xlarge)
  - image_app: OS image selection (default: Ubuntu-22.04)
- Storage Configuration
  - volume_app_os: Size of OS volume in GB (default: 10)
  - volume_app: Size of additional data volume in GB (default: 10)
- Network Configuration
  - app_subnet: Subnet CIDR base (default: 192.168.1)
  - cloud_network: Network name (default: PublicEphemeral)
- Access Configuration
  - keypair: SSH public key for VM access
Best practices implemented:
- All variables have explicit types defined
- Sensible defaults provided where appropriate
- Clear grouping and documentation of variables
- Placeholder for sensitive data (SSH key)
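To override these defaults without editing variables.tf, you can drop a terraform.tfvars file next to the configuration. A minimal sketch with hypothetical values:
# terraform.tfvars (hypothetical values)
system_name  = "demo"
app_vm_count = "2"
keypair      = "ssh-ed25519 AAAAC3Nza... user@example" # placeholder public key
Terraform loads terraform.tfvars automatically during plan and apply.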
vms_app.tf¶
This file defines the core VM instances and their configurations using the OpenStack Compute service:
resource "openstack_compute_instance_v2" "server_app" {
count = var.app_vm_count
name = "${var.system_name}-app-${format("%02d", count.index + 1)}"
flavor_name = var.flavor_app
key_pair = openstack_compute_keypair_v2.key.name
# ... configuration continues ...
}
Key components:
- Instance Configuration
  - Dynamic instance count based on app_vm_count
  - Standardized naming with zero-padded indices (e.g., app-01, app-02)
  - VM size defined by the flavor_app variable
  - SSH key integration for secure access
- Network Integration
  - Security group association for network rules
  - Port assignment from pre-configured network ports
- Storage Configuration
  - Boot volume configuration using the specified image
  - Volume size defined by volume_app_os
  - Automatic volume cleanup on instance termination
- Availability Management
  - Server group integration for anti-affinity (see the sketch below)
  - Ensures VMs are distributed across different compute nodes
Best practices implemented:
- Zero-padded instance numbering for consistent sorting
- Boot from volume configuration for persistence
- Anti-affinity rules for high availability
- Integration with security groups and network ports
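The "# ... configuration continues ..." portion of the resource covers the storage, network, and placement behavior described above. A plausible sketch of those blocks, based on this description rather than the example's exact source:
resource "openstack_compute_instance_v2" "server_app" {
  # ... arguments shown above ...

  # Boot-from-volume: build the OS volume from the selected image
  block_device {
    uuid                  = data.openstack_images_image_v2.image_app.id
    source_type           = "image"
    destination_type      = "volume"
    volume_size           = var.volume_app_os
    boot_index            = 0
    delete_on_termination = true # clean up the OS volume with the instance
  }

  # Attach the pre-created network port for this instance
  network {
    port = openstack_networking_port_v2.app_ports.*.id[count.index]
  }

  # Ask the scheduler to spread instances via the server group
  scheduler_hints {
    group = openstack_compute_servergroup_v2.app_server_group_anti_affinity.id
  }
}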
volumes_app.tf¶
This file manages the additional data volumes for the VMs and their attachments:
resource "openstack_blockstorage_volume_v3" "volume_app" {
count = length(openstack_compute_instance_v2.server_app)
name = "${var.system_name}-volumes-app-${format("%02d", count.index + 1)}"
size = var.volume_app
}
resource "openstack_compute_volume_attach_v2" "volume_attach_app" {
count = length(openstack_compute_instance_v2.server_app)
instance_id = openstack_compute_instance_v2.server_app.*.id[count.index]
volume_id = openstack_blockstorage_volume_v3.volume_app.*.id[count.index]
}
Key components:
- Volume Creation
  - Creates additional volumes for data storage
  - Volume count matches the number of VMs
  - Consistent naming scheme with the rest of the infrastructure
  - Size defined by the volume_app variable
- Volume Attachment
  - Automatically attaches volumes to corresponding VMs
  - Uses instance and volume IDs for proper mapping
  - Maintains a one-to-one relationship between VMs and volumes
Best practices implemented:
- Dynamic volume count based on VM instances
- Consistent naming convention with zero-padded indices
- Automatic volume attachment handling
- Clear separation between volume creation and attachment
Note
This is separate from the boot volumes defined in vms_app.tf, providing dedicated data storage for each VM.
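The attachment resource also exports the device path each volume receives inside the guest (commonly /dev/vdb). If you want that surfaced after an apply, a small hypothetical output works:
# Hypothetical output: report where each data volume is attached (e.g. /dev/vdb)
output "app_volume_devices" {
  value = openstack_compute_volume_attach_v2.volume_attach_app.*.device
}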
securitygroup_app.tf¶
This file defines the network security rules for the VM instances:
resource "openstack_networking_secgroup_v2" "secgroup_app" {
name = "${var.system_name}-secgrp_app"
}
resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_app_ssh_from_all" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 22
port_range_max = 22
remote_ip_prefix = "0.0.0.0/0"
security_group_id = openstack_networking_secgroup_v2.secgroup_app.id
}
# ... additional rules ...
Key components:
- Security Group Definition
  - Creates a named security group for the application
  - Uses a consistent naming convention with the system name prefix
- Ingress Rules
  - SSH Access (Port 22): Allows remote SSH connections from any IP
  - HTTP Access (Port 80): Enables web traffic on the standard HTTP port
  - HTTPS Access (Port 443): Supports secure web traffic
  - ICMP (Ping): Allows basic network connectivity testing
- Rule Configuration
  - All rules are ingress (incoming traffic)
  - IPv4 protocol support
  - Specific port ranges for each service
  - Global access (0.0.0.0/0) for all services
Best practices implemented:
- Clear separation of rules by service
- Standard ports for common services
- Basic network connectivity testing enabled
- Consistent rule structure and naming
- Explicit direction and protocol definitions
Note
While this configuration allows access from any IP (0.0.0.0/0), in production environments you might want to restrict access to specific IP ranges for better security.
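As an illustration, a tightened SSH rule swaps the open prefix for a trusted CIDR; the resource name and address range below are hypothetical:
resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_app_ssh_trusted" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "203.0.113.0/24" # placeholder: your trusted range
  security_group_id = openstack_networking_secgroup_v2.secgroup_app.id
}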
keypair.tf¶
This file manages the SSH key pair used for secure access to the VMs:
resource "openstack_compute_keypair_v2" "key" {
name = "${var.system_name}-keypair"
public_key = var.keypair
}
Key components:
- Keypair Resource
  - Creates a named keypair in OpenStack
  - Uses consistent naming with the system name prefix
  - Imports the public key specified in variables
- Integration Points
  - Referenced by VM instances for SSH access
  - Uses the public key defined in variables.tf
  - Enables secure remote access to instances
Best practices implemented:
- Consistent resource naming
- Separation of key material from configuration
- Integration with VM provisioning
- Uses OpenStack's key management system
Note
The actual SSH public key value should be provided through the keypair variable in variables.tf or through Terraform variables at runtime.
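To keep the key out of version-controlled files, the terraform.tfvars approach shown earlier works here too; a hypothetical one-liner:
# terraform.tfvars (hypothetical)
keypair = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5... user@example" # placeholder public key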
ports_app.tf¶
This file manages the network ports for the VM instances:
resource "openstack_networking_port_v2" "app_ports" {
count = var.app_vm_count
name = "${var.system_name}-app_ports-${format("%02d", count.index + 1)}"
network_id = data.openstack_networking_network_v2.cloud_network.id
security_group_ids = [openstack_networking_secgroup_v2.secgroup_app.id]
}
Key components:
- Port Creation
  - Creates network ports for each VM instance
  - Dynamic port count based on app_vm_count
  - Consistent naming scheme with zero-padded indices
- Network Integration
  - Associates ports with the specified cloud network (a fixed-IP variant is sketched below)
  - References the network ID from a data source
  - Links security groups to ports
- Security Integration
  - Applies security group rules at the port level
  - Direct integration with the application security group
Best practices implemented:
- Dynamic port creation matching VM count
- Consistent resource naming convention
- Security group integration at network level
- Clean separation of networking concerns
Note
These ports are referenced in the VM configuration to provide network connectivity to each instance.
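The example attaches each port to the network and lets the subnet assign addresses. If you wanted predictable addresses derived from the app_subnet variable, a hypothetical extension (the cloud_subnet data source shown here is not part of the example's data.tf) could pin each port's IP:
# Hypothetical: pin each port to a predictable address in the app subnet
data "openstack_networking_subnet_v2" "cloud_subnet" {
  network_id = data.openstack_networking_network_v2.cloud_network.id
}

resource "openstack_networking_port_v2" "app_ports" {
  count              = var.app_vm_count
  name               = "${var.system_name}-app_ports-${format("%02d", count.index + 1)}"
  network_id         = data.openstack_networking_network_v2.cloud_network.id
  security_group_ids = [openstack_networking_secgroup_v2.secgroup_app.id]

  fixed_ip {
    subnet_id  = data.openstack_networking_subnet_v2.cloud_subnet.id
    ip_address = "${var.app_subnet}.${count.index + 11}" # e.g. 192.168.1.11, .12, ...
  }
}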
servergroup_app.tf¶
This file defines the server group policy for VM placement:
resource "openstack_compute_servergroup_v2" "app_server_group_anti_affinity" {
name = "${var.system_name}-app_server_group_anti_affinity"
policies = ["soft-anti-affinity"]
}
Key components:
- Server Group Definition
  - Creates a named server group for VM placement
  - Uses consistent naming with the system name prefix
  - Implements a soft anti-affinity policy
- Anti-Affinity Policy
  - Uses "soft-anti-affinity" for flexible VM distribution
  - Encourages VMs to run on different compute nodes
  - Allows fallback if strict distribution isn't possible
- Integration Points
  - Referenced by VM instances in their scheduler hints
  - Helps OpenStack make intelligent placement decisions
  - Supports high availability goals
Best practices implemented:
- High availability through VM distribution
- Flexible placement with soft anti-affinity
- Consistent resource naming
- Integration with VM scheduling
Note
Soft anti-affinity is preferred over strict anti-affinity because it allows the infrastructure to keep functioning even if perfect distribution isn't possible.
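For comparison, a strict variant (hypothetical resource name below) would use the "anti-affinity" policy, which refuses to schedule instances that can't be placed on separate hosts:
# Hypothetical strict variant: scheduling fails if hosts can't be kept separate
resource "openstack_compute_servergroup_v2" "app_server_group_strict" {
  name     = "${var.system_name}-app_server_group_strict"
  policies = ["anti-affinity"]
}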
data.tf¶
This file defines the data sources used to reference existing OpenStack resources:
data "openstack_networking_network_v2" "cloud_network" {
name = var.cloud_network
}
data "openstack_images_image_v2" "image_app" {
name = var.image_app
}
Key components:
- Network Data Source
  - References the existing network by name
  - Uses the network name from the cloud_network variable
  - Provides the network ID for port creation
- Image Data Source
  - References the VM image by name (see the refinement sketched below)
  - Uses the image name from the image_app variable
  - Provides the image ID for VM creation
- Integration Points
  - Network data used in port configuration
  - Image data used in VM boot volume configuration
  - Enables reuse of existing OpenStack resources
Best practices implemented:
- Separation of data sources from resource creation
- Reuse of existing infrastructure components
- Dynamic resource referencing
- Clean integration with variables
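Name-based lookups fail if more than one image shares the same name. If that's a possibility in your project, the provider's most_recent flag (shown here as a hypothetical refinement, not part of the example) disambiguates by picking the newest match:
# Hypothetical refinement: select the newest image matching the name
data "openstack_images_image_v2" "image_app" {
  name        = var.image_app
  most_recent = true
}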