Terraform Full-Stack Example¶
The Terraform configuration in this example creates a multi-tier infrastructure in OpenStack with:
- Application servers behind a load balancer
- Database servers in a private network
- Jump host for secure access
  - A jump host (also known as a jump server or bastion host) is a dedicated server that acts as a gateway to servers in a private network from an external network (typically the internet). It provides a controlled, secure entry point to internal resources.
- All components connected through an internal router
Download the example code¶
Codebase structure¶
The infrastructure is defined across several focused Terraform files:
.
├── main.tf # OpenStack provider configuration
├── variables.tf # Variable definitions
├── data.tf # Data source definitions
├── keypair.tf # SSH key pair
├── router.tf # Internal network router
├── networks_internal_app.tf # Application network
├── networks_internal_db.tf # Database network
├── ports_app.tf # Application server ports
├── ports_db.tf # Database server ports and VIP
├── ports_jump.tf # Jump host ports
├── securitygroup_app.tf # Application security rules
├── securitygroup_db.tf # Database security rules
├── securitygroup_jump.tf # Jump host security rules
├── servergroup_app.tf # Application server group
├── servergroup_db.tf # Database server group
├── servergroup_jump.tf # Jump host server group
├── vms_app.tf # Application servers
├── vms_db.tf # Database servers
├── vms_jump.tf # Jump host
├── volumes_app.tf # Application storage
├── volumes_db.tf # Database storage
├── loadbalancer_app.tf # Application load balancer
├── floating_ip_jump.tf # Public IP for jump host
└── floating_ip_loadbalancer_app.tf # Public IP for load balancer
Component overview¶
The infrastructure consists of three main layers:
- Access Layer
  - Jump host for secure SSH access to internal resources
  - Load balancer exposing application services
  - Floating IPs for external connectivity
- Application Layer
  - Application servers in a private network
  - Server group for VM placement control
  - Attached volumes for application storage
  - Internal network isolated from public access
- Database Layer
  - Database servers in a separate private network
  - Dedicated security group controlling access
  - Storage volumes for database files
  - Network isolated from public access
How it works¶
Network flow¶
- External Access
  - Users connect to applications through the load balancer's floating IP
  - Administrators access the infrastructure through the jump host's floating IP
  - All other resources are in private networks without direct external access
- Internal Routing
  - A central router connects all internal networks
  - Application servers can reach database servers through this router
  - Jump host can access all internal resources for management
- Security Controls
  - Jump host allows inbound SSH only
  - Load balancer accepts application traffic only
  - Application servers accept traffic only from the load balancer
  - Database servers accept connections only from application servers
  - All other traffic is denied by default
Infrastructure diagram¶
Internet
│
┌─────────────┴────────────┐
│ │
Floating IP Floating IP
│ │
Jump Host Load Balancer
│ │
│ │
└──────────┐ ┌────────┘
│ │
▼ ▼
Internal Router
│
┌──────────┴───────────┐
│ │
Application Network Database Network
│ │
┌──────┴──────┐ ┌─────┴──────┐
│ │ │ │
App Server App Server DB DB
+ Volume + Volume + Volume + Volume
Core configuration files¶
main.tf¶
The core configuration file that sets up the OpenStack provider:
terraform {
required_providers {
openstack = {
source = "terraform-provider-openstack/openstack"
version = "2.0.0"
}
}
}
provider "openstack" {
# Authentication via OpenStack RC file or direct credentials
}
Key aspects:
- Uses OpenStack provider version 2.0.0
- Authentication handled through environment variables or direct credentials
- Supports OpenStack RC file for easy credential management
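For reference, a provider block with inline credentials might look like the following (a minimal sketch with placeholder values; the endpoint, region, and variable name are illustrative, and secrets should normally come from the RC file or environment variables instead):
provider "openstack" {
  auth_url    = "https://keystone.example.com:5000/v3" # placeholder endpoint
  region      = "RegionOne"                            # placeholder region
  tenant_name = "my-project"                           # project to deploy into
  user_name   = "terraform"
  password    = var.openstack_password # hypothetical variable; avoid hard-coding secrets
}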
Best practices implemented:
- Version pinning prevents unexpected provider updates
- Credentials managed outside of code for security
- Provider configuration centralized in one location
- Clear separation of authentication concerns
variables.tf¶
Defines all configurable parameters for the infrastructure:
# System Identification
variable "system_name" {
  description = "Base name applied to all resources"
  type        = string
  default     = "example"
}
# Jump Server Configuration
variable "jump_vm_count" {
  description = "Number of jump hosts"
  type        = number
  default     = 1
}
variable "flavor_jump" {
  description = "Flavor for the jump host"
  type        = string
  default     = "m2a.xlarge"
}
# Application Server Configuration
variable "app_vm_count" {
  description = "Number of application servers"
  type        = number
  default     = 3
}
variable "app_subnet" {
  description = "First three octets of the application subnet"
  type        = string
  default     = "192.168.1"
}
# Database Configuration
variable "db_vm_count" {
  description = "Number of database servers"
  type        = number
  default     = 3
}
variable "db_subnet" {
  description = "First three octets of the database subnet"
  type        = string
  default     = "192.168.2"
}
Variable categories:
- System Settings
  - Base name for resource identification
  - Network and DNS configurations
- Jump Host Configuration
  - Instance count (default: 1)
  - VM flavor and image selection
  - Volume sizes for OS and data
- Application Settings
  - Instance count (default: 3)
  - Network subnet (192.168.1.0/24)
  - VM specifications and storage
- Database Settings
  - Instance count (default: 3)
  - Network subnet (192.168.2.0/24)
  - VM specifications and storage
All components use Ubuntu 22.04 by default with m2a.xlarge flavors and 10GB volumes.
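To customize a deployment without editing the files, defaults can be overridden in a terraform.tfvars file; a small illustrative sketch (values are examples only, and the subnet variables should stay aligned with the hard-coded CIDR blocks in the network files):
# terraform.tfvars -- illustrative overrides
system_name  = "myapp"
app_vm_count = 5
db_vm_count  = 3
flavor_jump  = "m2a.large"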
Best practices implemented:
- All variables have explicit type constraints
- Logical grouping by infrastructure component
- Descriptive variable names for clarity
- Sensible defaults provided where appropriate
- Consistent naming conventions throughout
- Documentation strings for all variables
data.tf¶
Defines data sources for existing OpenStack resources:
data "openstack_networking_network_v2" "cloud_network" {
name = var.cloud_network
}
data "openstack_images_image_v2" "image_app" {
name = var.image_app
}
data "openstack_images_image_v2" "image_db" {
name = var.image_db
}
data "openstack_images_image_v2" "image_jump" {
name = var.image_jump
}
Key components:
- Network Reference
  - References existing cloud network
  - Used for external connectivity
  - Specified through variables
- VM Images
  - Application server image
  - Database server image
  - Jump host image
  - All referenced by name
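If multiple images share a name, the lookup can be narrowed further; for example, the provider's most_recent flag resolves duplicates (an optional refinement sketched below, not part of the example files):
data "openstack_images_image_v2" "image_app" {
  name        = var.image_app
  most_recent = true     # pick the newest image when names collide
  visibility  = "public" # optionally restrict the search scope
}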
Best practices implemented:
- Reuse of existing resources
- Clear image separation by role
- Variable-driven configuration
- Consistent naming scheme
- Resource discovery pattern
- Infrastructure reusability
keypair.tf¶
Manages SSH key pairs for secure instance access:
resource "openstack_compute_keypair_v2" "key" {
name = "${var.system_name}-keypair"
public_key = var.keypair
}
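The keypair variable carries the public key material; a sketch of how it might be declared and supplied (the description and CLI invocation are illustrative):
variable "keypair" {
  description = "SSH public key used for all instances"
  type        = string
}
# Supply the key at plan/apply time, for example:
#   terraform apply -var "keypair=$(cat ~/.ssh/id_ed25519.pub)"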
Key components:
- SSH Key Management
  - Single key pair for the infrastructure
  - Public key provided through variables
  - Consistent naming with the system
- Access Control
  - Used across all instance types
  - Secure SSH authentication
  - Centralized key management
Best practices implemented:
- Secure SSH key-based access
- Variable-driven key configuration
- Consistent key naming scheme
- Infrastructure-wide key management
- No private key storage
- Centralized access control
Network Infrastructure¶
router.tf¶
Creates the central router that connects all internal networks to the external network:
resource "openstack_networking_router_v2" "router_cloud" {
name = "${var.system_name}-router"
admin_state_up = true
external_network_id = data.openstack_networking_network_v2.cloud_network.id
}
resource "openstack_networking_router_interface_v2" "router_interface_internal_app" {
router_id = openstack_networking_router_v2.router_cloud.id
subnet_id = openstack_networking_subnet_v2.network_subnet_internal_app.id
}
resource "openstack_networking_router_interface_v2" "router_interface_internal_db" {
router_id = openstack_networking_router_v2.router_cloud.id
subnet_id = openstack_networking_subnet_v2.network_subnet_internal_db.id
}
Key components:
- Main Router
  - Connected to external network for internet access
  - Named using the system_name variable for consistency
  - Administrative state enabled
- Network Interfaces
  - Interface connecting to the application network
  - Interface connecting to the database network
  - Enables inter-network routing
Best practices implemented:
- Consistent resource naming using variables
- Explicit interface definitions for each network
- Clear separation of external and internal networking
- Administrative state explicitly defined
- Dependencies properly managed through references
networks_internal_app.tf¶
Creates the private network for application servers:
resource "openstack_networking_network_v2" "network_internal_app" {
name = "${var.system_name}-app-network"
description = "Internal network for fake app VM Networking"
admin_state_up = "true"
tags = [
"app=openstack",
"role=network",
]
}
resource "openstack_networking_subnet_v2" "network_subnet_internal_app" {
name = "${var.system_name}-app-subnet"
network_id = openstack_networking_network_v2.network_internal_app.id
cidr = "192.168.1.0/24"
no_gateway = false
dns_nameservers = [
var.dns1,
var.dns2,
var.dns3,
]
}
Key components:
- Network Definition
  - Private network for the application tier
  - Administrative state enabled
  - Tagged for easy resource identification
- Subnet Configuration
  - CIDR block: 192.168.1.0/24
  - Gateway enabled for routing
  - DNS servers configured for name resolution
  - Consistent tagging with parent network
Best practices implemented:
- Network isolation through private addressing
- Proper resource tagging for management
- Multiple DNS servers for redundancy
- Explicit gateway configuration
- Descriptive network and subnet naming
- Consistent CIDR block allocation
networks_internal_db.tf¶
Creates the private network for database servers:
resource "openstack_networking_network_v2" "network_internal_db" {
name = "${var.system_name}-db-network"
description = "Internal network for fake VM Networking"
admin_state_up = "true"
tags = [
"app=openstack",
"role=network",
]
}
resource "openstack_networking_subnet_v2" "network_subnet_internal_db" {
name = "${var.system_name}-db-subnet"
network_id = openstack_networking_network_v2.network_internal_db.id
cidr = "192.168.2.0/24"
no_gateway = false
dns_nameservers = [
var.dns1,
var.dns2,
var.dns3,
]
}
Key components:
- Network Definition
  - Private network for the database tier
  - Isolated from application network
  - Tagged for resource management
- Subnet Configuration
  - CIDR block: 192.168.2.0/24
  - Gateway enabled for routing
  - Same DNS servers as application network
  - Separate address space from application subnet
Best practices implemented:
- Complete network isolation for database tier
- Consistent tagging strategy with app network
- Redundant DNS configuration
- Non-overlapping CIDR allocation
- Explicit gateway configuration
- Clear naming convention for database resources
ports_app.tf¶
Creates network ports for application server instances:
resource "openstack_networking_port_v2" "app_ports" {
count = var.app_vm_count
name = "${var.system_name}-app_ports-${format("%02d", count.index + 1)}"
network_id = openstack_networking_network_v2.network_internal_app.id
security_group_ids = [openstack_networking_secgroup_v2.secgroup_app.id]
fixed_ip {
subnet_id = openstack_networking_subnet_v2.network_subnet_internal_app.id
ip_address = "${var.app_subnet}.${format("%02d", count.index + 51)}"
}
}
Key components:
- Port Creation
  - One port per application instance
  - Fixed IP address assignment
  - Security group association
- Network Integration
  - Connected to application network
  - Subnet-specific configuration
  - Predictable IP addressing
Best practices implemented:
- Deterministic IP address allocation
- Consistent port naming scheme
- Security group integration
- Automated port provisioning
- Scalable with instance count
- Clear network organization
ports_db.tf¶
Creates network ports for database servers, including a virtual IP for high availability:
resource "openstack_networking_port_v2" "db_vip_port" {
name = "${var.system_name}-db_vip_port"
network_id = openstack_networking_network_v2.network_internal_db.id
security_group_ids = [openstack_networking_secgroup_v2.secgroup_db.id]
fixed_ip {
subnet_id = openstack_networking_subnet_v2.network_subnet_internal_db.id
ip_address = "${var.db_subnet}.${format("%02d", 10)}"
}
}
resource "openstack_networking_port_v2" "db_ports" {
count = var.db_vm_count
name = "${var.system_name}-db_ports-${format("%02d", count.index + 1)}"
network_id = openstack_networking_network_v2.network_internal_db.id
security_group_ids = [openstack_networking_secgroup_v2.secgroup_db.id]
allowed_address_pairs {
ip_address = openstack_networking_port_v2.db_vip_port.all_fixed_ips[0]
}
fixed_ip {
subnet_id = openstack_networking_subnet_v2.network_subnet_internal_db.id
ip_address = "${var.db_subnet}.${format("%02d", count.index + 11)}"
}
}
Key components:
- Virtual IP Port
  - Dedicated port for the database VIP
  - Fixed IP for cluster access
  - Security group association
- Database Node Ports
  - Individual ports for each database instance
  - Allowed address pairs for VIP failover
  - Predictable IP addressing scheme
Best practices implemented:
- High availability through VIP configuration
- Deterministic IP address allocation
- Consistent port naming convention
- Security group integration
- Failover support via address pairs
- Clear network organization
ports_jump.tf¶
Creates network ports for jump host instances:
resource "openstack_networking_port_v2" "jump_ports" {
count = var.jump_vm_count
name = "${var.system_name}-jump_ports-${format("%02d", count.index + 1)}"
network_id = openstack_networking_network_v2.network_internal_app.id
security_group_ids = [openstack_networking_secgroup_v2.secgroup_jump.id]
fixed_ip {
subnet_id = openstack_networking_subnet_v2.network_subnet_internal_app.id
ip_address = "${var.app_subnet}.${format("%02d", count.index + 100)}"
}
}
Key components:
- Port Creation
  - One port per jump host
  - Fixed IP address assignment
  - Connected to the application network
- Network Integration
  - Security group association
  - Predictable IP addressing
  - Subnet-specific configuration
Best practices implemented:
- Deterministic IP address allocation
- Consistent port naming scheme
- Security group integration
- Automated port provisioning
- Clear network organization
- Dedicated management entry point
Security Configuration¶
securitygroup_app.tf¶
Defines security rules for the application servers:
resource "openstack_networking_secgroup_v2" "secgroup_app" {
name = "${var.system_name}-secgrp_app"
}
resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_app_ssh_from_all" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 22
port_range_max = 22
remote_ip_prefix = "0.0.0.0/0"
security_group_id = openstack_networking_secgroup_v2.secgroup_app.id
}
resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_app_http_from_all" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 80
port_range_max = 80
remote_ip_prefix = "0.0.0.0/0"
security_group_id = openstack_networking_secgroup_v2.secgroup_app.id
}
Key components:
- Security Group
  - Named group for application servers
  - Manages inbound and outbound traffic rules
- Inbound Rules
  - SSH access (port 22)
  - HTTP access (port 80)
  - HTTPS access (port 443)
  - ICMP for network diagnostics
- Access Control
  - All rules are IPv4
  - Currently allows access from any source (0.0.0.0/0)
  - Each service has specific port ranges
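To tighten the open SSH rule, the source could be scoped to the jump host's security group instead of 0.0.0.0/0; a hardening sketch (the resource name is an assumption, not part of the shipped example):
resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_app_ssh_from_jump" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  # allow SSH only from instances in the jump security group
  remote_group_id   = openstack_networking_secgroup_v2.secgroup_jump.id
  security_group_id = openstack_networking_secgroup_v2.secgroup_app.id
}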
Best practices implemented:
- Least-privilege intent; the 0.0.0.0/0 source ranges should be tightened for production
- Explicit port ranges for each service
- Clear rule descriptions and naming
- Consistent security group naming
- Protocol-specific rules for better control
- Service-based security group organization
securitygroup_db.tf¶
Defines security rules for the database servers, with specific focus on database cluster operations:
resource "openstack_networking_secgroup_v2" "secgroup_db" {
name = "${var.system_name}-secgrp_db"
}
# MySQL Cluster Communication
resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_db_mysql_from_local" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 3306
port_range_max = 3306
remote_group_id = openstack_networking_secgroup_v2.secgroup_db.id
security_group_id = openstack_networking_secgroup_v2.secgroup_db.id
}
# Galera Cluster Replication
resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_db_galera_from_local" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 4567
port_range_max = 4568
remote_group_id = openstack_networking_secgroup_v2.secgroup_db.id
security_group_id = openstack_networking_secgroup_v2.secgroup_db.id
}
Key components:
- Database Access
  - MySQL port (3306) for database connections
  - Limited to internal cluster communication
- Cluster Operations
  - Galera replication ports (4567-4568)
  - State Snapshot Transfer port (4444), as sketched below
  - Both TCP and UDP protocols for Galera
- Management Access
  - SSH access (port 22) for administration
  - ICMP for network diagnostics
  - Monitoring port (9200) for PMM
- High Availability
  - VRRP (IP protocol 112) for Keepalived failover
  - All cluster ports restricted to the database security group
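The State Snapshot Transfer rule follows the same internal-only pattern as the rules shown above; a sketch (the actual file may name the resource differently):
# Galera State Snapshot Transfer (SST)
resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_db_sst_from_local" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 4444
  port_range_max    = 4444
  remote_group_id   = openstack_networking_secgroup_v2.secgroup_db.id
  security_group_id = openstack_networking_secgroup_v2.secgroup_db.id
}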
Best practices implemented:
- Strict access control using security groups
- Port-specific rules for each database function
- Internal-only communication for database cluster
- Separate rules for different protocols (TCP/UDP)
- Clear rule naming for each database service
- Minimal required ports exposed
securitygroup_jump.tf¶
Defines security rules for the jump host, which serves as the entry point for infrastructure management:
resource "openstack_networking_secgroup_v2" "secgroup_jump" {
name = "${var.system_name}-secgrp_jump"
}
resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_jump_ssh_from_all" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 22
port_range_max = 22
remote_ip_prefix = "0.0.0.0/0"
security_group_id = openstack_networking_secgroup_v2.secgroup_jump.id
}
resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_jump_icmp" {
direction = "ingress"
ethertype = "IPv4"
protocol = "icmp"
remote_ip_prefix = "0.0.0.0/0"
security_group_id = openstack_networking_secgroup_v2.secgroup_jump.id
}
Key components:
- Security Group
  - Named group for the jump host
  - Minimal rule set for secure access
- Access Rules
  - SSH access (port 22) from any source
  - ICMP for network diagnostics
  - All other ports closed by default
- Security Focus
  - Only essential services enabled
  - Acts as secure gateway to internal resources
  - Restricted to management functions
Best practices implemented:
- Minimal attack surface with limited ports
- Single entry point for SSH access
- ICMP enabled for network troubleshooting
- Clear security group naming convention
- Default deny for undefined traffic
- Explicit rule documentation
Server Groups¶
servergroup_app.tf¶
Configures server group policies for application server placement:
resource "openstack_compute_servergroup_v2" "app_server_group_anti_affinity" {
name = "${var.system_name}-app_server_group_anti_affinity"
policies = ["soft-anti-affinity"]
}
Key components:
- Server Group Policy
  - Uses soft anti-affinity for high availability
  - Encourages spreading instances across different hosts
  - Allows fallback if strict separation isn't possible
- Placement Strategy
  - Improves fault tolerance
  - Helps maintain service availability
  - Balances resource utilization
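If instances must never share a hypervisor, the strict policy can be used instead; scheduling then fails rather than co-locating (an alternative sketch, not used by this example):
resource "openstack_compute_servergroup_v2" "app_server_group_strict" {
  name     = "${var.system_name}-app_server_group_strict"
  policies = ["anti-affinity"] # hard rule: fail scheduling instead of co-locating
}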
Best practices implemented:
- Soft anti-affinity for flexible placement
- Consistent naming with infrastructure
- High availability by default
- Graceful fallback options
- Resource distribution strategy
- Clear policy documentation
servergroup_db.tf¶
Configures server group policies for database server placement:
resource "openstack_compute_servergroup_v2" "db_server_group_anti_affinity" {
name = "${var.system_name}-db_server_group_anti_affinity"
policies = ["soft-anti-affinity"]
}
Key components:
- Server Group Policy
  - Soft anti-affinity for database instances
  - Promotes high availability for the database cluster
  - Prevents single point of hardware failure
- Cluster Resilience
  - Distributes database nodes across hosts
  - Maintains cluster availability during host failures
  - Supports Galera cluster requirements
Best practices implemented:
- Database-specific placement strategy
- Consistent with cluster requirements
- Fault tolerance by design
- Aligned with Galera cluster needs
- Predictable node distribution
- Infrastructure resilience focus
servergroup_jump.tf¶
Configures server group policies for jump host placement:
resource "openstack_compute_servergroup_v2" "jump_server_group_anti_affinity" {
name = "${var.system_name}-jump_server_group_anti_affinity"
policies = ["soft-anti-affinity"]
}
Key components:
- Server Group Policy
  - Soft anti-affinity for the jump host
  - Ensures separation from other infrastructure components
  - Maintains management access during host failures
- Management Access
  - Isolates the administrative access point
  - Supports infrastructure resilience
  - Consistent with security best practices
Best practices implemented:
- Separation of management infrastructure
- Consistent naming with other server groups
- Resilient access point configuration
- Flexible placement with soft anti-affinity
- Clear management role designation
- Infrastructure isolation strategy
Compute Resources¶
vms_app.tf¶
Creates the application server instances:
resource "openstack_compute_instance_v2" "server_app" {
count = var.app_vm_count
name = "${var.system_name}-app-${format("%02d", count.index + 1)}"
flavor_name = var.flavor_app
key_pair = openstack_compute_keypair_v2.key.name
security_groups = [openstack_networking_secgroup_v2.secgroup_app.name]
network {
port = openstack_networking_port_v2.app_ports.*.id[count.index]
}
block_device {
uuid = data.openstack_images_image_v2.image_app.id
source_type = "image"
destination_type = "volume"
volume_size = var.volume_app_os
boot_index = 0
delete_on_termination = true
}
scheduler_hints {
group = openstack_compute_servergroup_v2.app_server_group_anti_affinity.id
}
}
Key components:
- Instance Configuration
  - Multiple instances based on app_vm_count
  - Consistent naming with sequential numbering
  - Uses the specified flavor for sizing
- Network Setup
  - Connected to the application network via ports
  - Security group for traffic control
  - SSH key for secure access
- Storage Configuration
  - Boot volume created from image
  - OS volume size from variables
  - Automatic volume cleanup on termination
- Placement Control
  - Uses anti-affinity group
  - Distributes across compute nodes
  - Enhances availability
Best practices implemented:
- Dynamic instance count through variables
- Standardized naming convention
- Secure SSH key authentication
- Automated volume lifecycle management
- Network isolation through security groups
- Anti-affinity for high availability
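To see where the servers landed after terraform apply, an output can expose their fixed IPs; a sketch (the example does not ship an outputs.tf):
output "app_server_ips" {
  description = "Fixed IPs of the application servers"
  value       = openstack_compute_instance_v2.server_app[*].network[0].fixed_ip_v4
}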
vms_db.tf¶
Creates the database server instances:
resource "openstack_compute_instance_v2" "server_db" {
count = var.db_vm_count
name = "${var.system_name}-db-${format("%02d", count.index + 1)}"
flavor_name = var.flavor_db
key_pair = openstack_compute_keypair_v2.key.name
security_groups = [openstack_networking_secgroup_v2.secgroup_db.name]
network {
port = openstack_networking_port_v2.db_ports.*.id[count.index]
}
block_device {
uuid = data.openstack_images_image_v2.image_db.id
source_type = "image"
destination_type = "volume"
volume_size = var.volume_db_os
boot_index = 0
delete_on_termination = true
}
scheduler_hints {
group = openstack_compute_servergroup_v2.db_server_group_anti_affinity.id
}
}
Key components:
- Instance Configuration
  - Multiple database nodes based on db_vm_count
  - Sequential naming for cluster nodes
  - Database-optimized flavor selection
- Network Setup
  - Connected to the private database network
  - Database security group applied
  - Isolated from public access
- Storage Configuration
  - Boot volume from database image
  - OS volume sized for database needs
  - Managed volume lifecycle
- High Availability
  - Anti-affinity placement for redundancy
  - Supports database cluster operations
  - Distributed across compute nodes
Best practices implemented:
- Variable-driven instance deployment
- Consistent database cluster naming
- Private network isolation
- Secure volume management
- Cluster-aware placement strategy
- Automated lifecycle handling
vms_jump.tf¶
Creates the jump host instance that serves as the secure entry point:
resource "openstack_compute_instance_v2" "server_jump" {
count = var.jump_vm_count
name = "${var.system_name}-jump-${format("%02d", count.index + 1)}"
flavor_name = var.flavor_jump
key_pair = openstack_compute_keypair_v2.key.name
security_groups = [openstack_networking_secgroup_v2.secgroup_jump.name]
network {
port = openstack_networking_port_v2.jump_ports.*.id[count.index]
}
block_device {
uuid = data.openstack_images_image_v2.image_jump.id
source_type = "image"
destination_type = "volume"
volume_size = var.volume_jump_os
boot_index = 0
delete_on_termination = true
}
scheduler_hints {
group = openstack_compute_servergroup_v2.jump_server_group_anti_affinity.id
}
}
Key components:
- Instance Configuration
  - Single jump host (typically)
  - Standardized naming convention
  - Sized appropriately for management tasks
- Network Setup
  - Connected to the application network via a dedicated port
  - Restricted security group
  - SSH key access required
- Storage Configuration
  - Boot volume from jump host image
  - Basic OS volume size
  - Cleanup on instance termination
- Access Control
  - Anti-affinity placement for reliability
  - Serves as SSH gateway
  - Central point for infrastructure access
Best practices implemented:
- Single point of administrative access
- Minimal resource allocation
- Secure SSH key-based access
- Automated volume management
- Clear security group assignment
- Infrastructure gateway pattern
volumes_app.tf¶
Creates and attaches additional storage volumes for application servers:
resource "openstack_blockstorage_volume_v3" "volume_app" {
count = length(openstack_compute_instance_v2.server_app)
name = "${var.system_name}-volumes-app-${format("%02d", count.index + 1)}"
size = var.volume_app
}
resource "openstack_compute_volume_attach_v2" "volume_attach_app" {
count = length(openstack_compute_instance_v2.server_app)
instance_id = openstack_compute_instance_v2.server_app.*.id[count.index]
volume_id = openstack_blockstorage_volume_v3.volume_app.*.id[count.index]
}
Key components:
- Volume Creation
  - One volume per application instance
  - Consistent naming with parent instance
  - Size defined through variables
- Volume Attachment
  - Automatic attachment to instances
  - Matches volume count to instance count
  - Direct instance-to-volume mapping
Best practices implemented:
- Dynamic volume count based on instances
- Consistent volume naming convention
- Automated volume attachment
- Size management through variables
- Clear volume-to-instance mapping
- Scalable with infrastructure
volumes_db.tf¶
Creates and attaches storage volumes for database servers:
resource "openstack_blockstorage_volume_v3" "volume_db" {
count = length(openstack_compute_instance_v2.server_db)
name = "${var.system_name}-volumes-db-${format("%02d", count.index + 1)}"
size = var.volume_db
}
resource "openstack_compute_volume_attach_v2" "volume_attach_db" {
count = length(openstack_compute_instance_v2.server_db)
instance_id = openstack_compute_instance_v2.server_db.*.id[count.index]
volume_id = openstack_blockstorage_volume_v3.volume_db.*.id[count.index]
depends_on = [
openstack_compute_instance_v2.server_db
]
}
Key components:
- Volume Creation
  - Dedicated volume for each database instance
  - Consistent naming with database nodes
  - Size specified through variables
- Volume Attachment
  - Automatic attachment to database instances
  - Explicit dependency management
  - One-to-one instance mapping
Best practices implemented:
- Database-optimized volume configuration
- Explicit dependency declaration
- Consistent volume naming scheme
- Automated attachment process
- Volume count matches instance count
- Clear resource relationships
Load Balancing and Access¶
loadbalancer_app.tf¶
Creates the load balancer for distributing traffic to application servers:
resource "openstack_lb_loadbalancer_v2" "loadbalancer_app" {
name = "${var.system_name}-loadbalancer-app"
vip_network_id = openstack_networking_network_v2.network_internal_app.id
}
resource "openstack_lb_listener_v2" "listener_http_app" {
name = "${var.system_name}-app_listener_http"
protocol = "TCP"
protocol_port = 80
loadbalancer_id = openstack_lb_loadbalancer_v2.loadbalancer_app.id
default_pool_id = openstack_lb_pool_v2.pool_http_app.id
}
resource "openstack_lb_pool_v2" "pool_http_app" {
name = "${var.system_name}-pool_http_app"
protocol = "TCP"
lb_method = "SOURCE_IP_PORT"
loadbalancer_id = openstack_lb_loadbalancer_v2.loadbalancer_app.id
}
resource "openstack_lb_monitor_v2" "monitor_http_app" {
name = "${var.system_name}-monitor_http_app"
pool_id = openstack_lb_pool_v2.pool_http_app.id
type = "TCP"
delay = 10
timeout = 5
max_retries = 3
}
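The pool is populated by one member per application server; the member resources are omitted from the excerpt above, but would look roughly like this (a sketch; the resource name is an assumption):
resource "openstack_lb_member_v2" "member_http_app" {
  count         = var.app_vm_count
  pool_id       = openstack_lb_pool_v2.pool_http_app.id
  address       = openstack_networking_port_v2.app_ports[count.index].all_fixed_ips[0]
  protocol_port = 80
  subnet_id     = openstack_networking_subnet_v2.network_subnet_internal_app.id
}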
Key components:
- Load Balancer
  - Placed in the application network
  - Handles both HTTP and HTTPS traffic
  - Named consistently with the infrastructure
- Listeners
  - TCP listeners for ports 80 and 443 (the HTTPS listener is omitted from the excerpt above)
  - Connected to respective backend pools
  - Protocol-specific configurations
- Backend Pools
  - Source IP port load-balancing method
  - One member registered per application server (see the sketch above)
  - Health monitoring enabled
- Health Monitoring
  - TCP health checks
  - Configurable retry parameters
  - Automatic removal of failed nodes
Best practices implemented:
- TCP load balancing for maximum performance
- Separate pools for HTTP and HTTPS
- Consistent health monitoring
- Automatic backend registration
- Configurable monitoring parameters
- Clear naming conventions
floating_ip_jump.tf¶
Creates and associates public IP addresses for jump host access:
resource "openstack_networking_floatingip_v2" "float_ip_jump" {
count = var.jump_vm_count
pool = var.cloud_network
}
resource "openstack_networking_floatingip_associate_v2" "floating_ip_jump_attach" {
count = var.jump_vm_count
floating_ip = openstack_networking_floatingip_v2.float_ip_jump.*.address[count.index]
port_id = openstack_networking_port_v2.jump_ports.*.id[count.index]
}
Key components:
- Floating IP Allocation
  - Public IP from the specified pool
  - One IP per jump host
  - Dynamic count based on jump host count
- IP Association
  - Attached to jump host network ports
  - Automatic mapping to instances
  - Consistent with instance count
Best practices implemented:
- Dynamic IP allocation matching instance count
- Clear association with network ports
- Consistent naming convention
- Automated IP management
- Flexible scaling with infrastructure
- Direct mapping to jump host instances
floating_ip_loadbalancer_app.tf¶
Creates and associates public IP address for load balancer access:
resource "openstack_networking_floatingip_v2" "float_ip_loadbalancer_app" {
pool = var.cloud_network
}
resource "openstack_networking_floatingip_associate_v2" "floating_ip_loadbalancer_app_attach" {
floating_ip = openstack_networking_floatingip_v2.float_ip_loadbalancer_app.address
port_id = openstack_lb_loadbalancer_v2.loadbalancer_app.vip_port_id
depends_on = [
openstack_networking_router_interface_v2.router_interface_internal_app
]
}
Key components:
- Floating IP Allocation
  - Single public IP for the load balancer
  - Allocated from the specified network pool
  - Used as the service entry point
- IP Association
  - Connected to the load balancer VIP port
  - Depends on network interface setup
  - Ensures proper routing path
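An output makes the public endpoint easy to retrieve after apply; a sketch (not part of the example files):
output "app_public_ip" {
  description = "Public IP of the application load balancer"
  value       = openstack_networking_floatingip_v2.float_ip_loadbalancer_app.address
}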
Best practices implemented:
- Single entry point for application access
- Explicit dependency management
- Proper resource ordering
- Clear association with load balancer
- Network routing prerequisites
- Infrastructure accessibility control