- Configuration Management Tools: Ansible, Puppet, SaltStack
- Server Templating: Docker, Packer, Vagrant
- Provisioning Tools: Terraform, CloudFormation
Procedural approach (configuration management tools) vs declarative approach (provisioning tools).
Declarative approach: a blueprint that defines a desired state.
HCL: HashiCorp Configuration Language
Ex:
resource "local_file" "pet" {
  filename = "/root/pets.txt"
  content  = "We love pets!"
}
- resource = block name
- "local_file" = resource type <provider>_<resource>, with provider "local" and resource "file" separated by an underscore "_"
- "pet" = resource name
- filename and content are block arguments specific to this resource type
- main.tf = main configuration file containing resource definitions
- variables.tf = contains variable declarations
- outputs.tf = contains outputs from resources
- provider.tf = contains provider definitions
- terraform.tf = configures Terraform behavior
Terraform registry at registry.terraform.io
- Official providers: AWS, GCP, Azure
- Verified providers: developed by third-party companies but reviewed and tested by HashiCorp
- Community providers: published and maintained by individual contributors
Provider plugins are downloaded into the .terraform/plugins directory.
Plugin name = <hostname>/<organization_namespace>/<type>
- Hostname (optional): hostname where the plugin is located. If not specified, defaults to registry.terraform.io
- Organization_namespace: defaults to hashicorp since we are using the Terraform registry
- Type: name of the provider plugin
Ex: registry.terraform.io/hashicorp/local or hashicorp/local
You must execute terraform init again whenever a configuration file is updated with a new resource using a provider that was not used before.
terraform version lists the Terraform version along with the provider plugins' versions.
The terraform providers command shows information about the provider requirements of the configuration in the current working directory.
You can specify the desired version of the Terraform CLI with the required_version argument in the terraform block.
You can specify the version of a provider in the required_providers block within the terraform block.
terraform {
  required_version = "1.2.6"
  required_providers {
    mycloud = {
      source  = "mycorp/mycloud"
      version = "~> 1.0"
    }
  }
}
Version argument operators: <, >, <=, >=, =, !=, ~> (== is not a valid operator).
The ~> operator allows for incremental versions only.
Ex:
- "~> 1.0" allows 1.0, 1.1, 1.2, … up to, but not including, 2.0
- "~> 1.2.0" allows 1.2.0, 1.2.1, 1.2.2, … up to, but not including, 1.3.0
You can also make use of version constraints within modules.
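For instance (a minimal sketch; the module name and source are illustrative):
module "consul" {
  source  = "hashicorp/consul/aws"
  version = ">= 0.5.0, < 1.0.0"
}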
Note: When you initialize a Terraform configuration for the first time with terraform init, Terraform generates a new .terraform.lock.hcl file in the current working directory, listing the actual provider versions that conform to the version constraints. You should include the lock file in your version control repository to ensure that Terraform uses the same provider versions across your team and thus ensure consistent runs.
Tip: terraform init -upgrade will upgrade all providers to the latest versions consistent with the version constraints specified in your configuration.
You can override the default provider configuration or define multiple configurations of the same provider through aliases. See When to specify providers.
Ex: provider.tf
# Override the default configuration
provider "aws" {
  region = "us-west-1"
}

# Define another configuration
provider "aws" {
  alias  = "usw2"
  region = "us-west-2"
}
Specify the provider argument to make use of an aliased provider configuration:
resource "aws_key_pair" "beta" {
  key_name   = "beta"
  public_key = "ssh-rsa 0123456798ABCDEF@server"
  provider   = aws.usw2
}
where provider is <provider_name>.<alias_name>.
variable "filename" { default = "/root/pets.txt" } variable "content" { default = "We love pets!" }
resource "local_file" "pet" { filename = var.filename content = var.content }
You do not have to specify a default value; you can just declare a variable like this:
variable "filename" { }
When applying your configuration files with terraform apply, you will be prompted for a value, or you can specify a value directly on the command line with the -var option.
terraform apply -var="filename=/home/gruik/grok.txt"
Alternatively, you can specify a value with an environment variable named TF_VAR_<variable_name>:
export TF_VAR_filename=/home/gruik/grok.txt
You can also declare your variables' values in bulk in a .tfvars or .tfvars.json file:
filename = "/home/gruik/grok.txt"
content  = "We really love pets!"
Caution: This still requires you to declare your variables with variable blocks; tfvars files only consist of variable assignments!
The file is automatically loaded by Terraform if it is called terraform.tfvars or terraform.tfvars.json, or if its name ends with .auto.tfvars or .auto.tfvars.json. Otherwise, you have to pass it on the command line with the -var-file option:
terraform apply -var-file=variables.tfvars
If you use multiple ways to assign values to the same variable, Terraform follows this variable definition precedence to determine which value to use (a quick illustration follows the list):
- Environment variables TF_VAR_* (lowest priority)
- terraform.tfvars
- *.auto.tfvars, in alphabetical order
- -var or -var-file command-line flags (highest priority)
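For instance (a hedged sketch with hypothetical file names):
# Assume terraform.tfvars contains: filename = "/root/a.txt"
export TF_VAR_filename=/root/c.txt            # lowest priority
terraform apply -var="filename=/root/b.txt"   # highest priority wins: Terraform uses /root/b.txt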
Caution: Variable values must be literal values; they cannot use computed values like resource attributes, expressions, or other variables.
Optional arguments when defining a variable block (see the sensitive example below):
- default
- description
- type
- sensitive (false by default)
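For instance, a variable marked as sensitive (the variable name is illustrative):
variable "db_password" {
  type        = string
  description = "Database admin password"
  # Terraform redacts this value in plan and apply output
  sensitive   = true
}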
You can add a validation block inside the variable block:
variable "ami" {
  type        = string
  description = "The id of the machine image (AMI) to use for the server"

  validation {
    # Built-in function substr
    condition     = substr(var.ami, 0, 4) == "ami-"
    error_message = "The AMI should start with \"ami-\"."
  }
}
Basic variable types are string, number and bool.
Terraform supports type conversion whenever possible, such as "true" for a boolean variable or "2" for a number variable. If a type conversion is not possible, like 1 for a boolean variable, Terraform will produce an error.
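For instance (a minimal sketch; both defaults are converted automatically to the declared type):
variable "enabled" {
  type    = bool
  default = "true" # converted to the boolean true
}

variable "replicas" {
  type    = number
  default = "2" # converted to the number 2
}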
Terraform also supports additional types such as list, map, set, object and tuple.
Ex: list
variable "servers" {
  type    = list
  default = ["web1", "web2", "web3"]
}

resource "aws_instance" "web" {
  ami           = var.ami
  instance_type = var.instance_type
  tags = {
    # Indices start at 0
    name = var.servers[0]
  }
}
Ex: map
variable "instance_type" {
  type = map
  default = {
    "production"  = "m5.large"
    "development" = "t2.micro"
  }
}

resource "aws_instance" "web" {
  ami           = var.ami
  instance_type = var.instance_type["development"]
  tags = {
    # Indices start at 0
    name = var.servers[0]
  }
}
You can also combine type constraints:
variable "servers" {
  type    = list(string)
  default = ["web1", "web2", "web3"]
}

variable "server_count" {
  type = map(number)
  default = {
    "web"   = 3
    "db"    = 1
    "agent" = 2
  }
}
Ex: set
A set cannot have duplicate elements.
variable "servers" {
  type    = set(string)
  default = ["web1", "web2", "web3"]
}
But this is not valid:
variable "servers" {
  type    = set(string)
  default = ["web1", "web2", "web2"]
}
Ex: object
With objects, you can create complex data structures:
variable "bella" {
  type = object({
    name         = string
    color        = string
    age          = number
    food         = list(string)
    favorite_pet = bool
  })
  default = {
    name         = "bella"
    color        = "brown"
    age          = 7
    food         = ["fish", "chicken", "turkey"]
    favorite_pet = true
  }
}
Ex: tuple
With tuples, you can use different variable types.
variable "web" {
  type    = tuple([string, number, bool])
  default = ["web1", 3, true]
}
The values passed must match the number of elements and their types in the tuple definition:
default = ["web1", 3, true, "web2"] will produce an error.
Output variables store the value of an expression in Terraform.
resource "aws_instance" "cerberus" {
  ami           = var.ami
  instance_type = var.instance_type
}

output "pub_ip" {
  # value is the only required argument
  value       = aws_instance.cerberus.public_ip
  description = "Print the public IPv4 address"
}
Output variables are used to display details about the provisioned resources in the Terraform output, or to feed variables to external tools (bash scripts, Ansible playbooks, other Terraform modules).
# After terraform apply
terraform output
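You can also query a single output by name, or get machine-readable output:
# Print a single output by name
terraform output pub_ip
# Machine-readable form
terraform output -json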
An implicit dependency exists between two resources when a resource refers to an attribute of the other resource. An attribute reference expression is of the form <RESOURCE_TYPE>.<RESOURCE_NAME>.<ATTRIBUTE_NAME>.
resource "aws_key_pair" "alpha" {
  key_name   = "alpha"
  public_key = "ssh-rsa 0123456798ABCDEF@server"
}

resource "aws_instance" "cerberus" {
  ami           = var.ami
  instance_type = var.instance_type
  key_name      = aws_key_pair.alpha.key_name
}
Dependencies determine in which order resources are created by Terraform.
You can embed a resource attribute (or variable) reference within a string using ${}.
resource "example" {
  name_prefix = "app-${terraform.workspace}"
  workspace   = terraform.workspace
}
You can also create an explicit dependency between two resources by adding the depends_on meta-argument.
resource "aws_instance" "db" {
  ami           = var.db_ami
  instance_type = var.db_instance_type
}

resource "aws_instance" "web" {
  ami           = var.web_ami
  instance_type = var.web_instance_type

  # Explicit dependency
  depends_on = [
    aws_instance.db
  ]
}
resource "random_string" "server_suffix" { # After terraform apply, we change the value to 6 length = 5 upper = false special = false } resource "aws_instance" "web" { ami = "ami-0123456798ABCDEF" instance_type = "m5.large" # Explicit dependency tags = { Name = "web-${random_string.server_suffix.id}" } }
Changing a resource that another resource depends on will result in the destruction and recreation of both resources. If you want to apply the modification to the primary resource only, you must target that resource like this:
terraform apply -target random_string.server_suffix
Resource targeting should be used with caution: the changes are considered incomplete, as the provisioned resources do not match the state described in the configuration files.
To make use of a resource that was created externally (manually, with other tools such as Ansible or Puppet, or within another Terraform configuration), you can use data sources.
# data block
data "aws_key_pair" "cerberus_key" {
  # Unique identifier
  key_name = "alpha"
}

resource "aws_instance" "cerberus" {
  ami           = var.ami
  instance_type = var.instance_type
  # A reference to a data source attribute must start with "data."
  key_name      = data.aws_key_pair.cerberus_key.key_name
}
Like a resource, a data source is identified by a type and a name. Inside the data block, we need arguments that uniquely identify the data source.
You can also make use of other ways to identify the data source, such as key ID or filters.
# data block
data "aws_key_pair" "cerberus_key" {
  filter {
    name   = "tag:project"
    values = ["cerberus"]
  }
}

resource "aws_instance" "cerberus" {
  ami           = var.ami
  instance_type = var.instance_type
  # A reference to a data source attribute must start with "data."
  key_name      = data.aws_key_pair.cerberus_key.key_name
}
resource blocks define resources managed by Terraform, which creates, updates and destroys infrastructure.
data blocks define data sources, which are read-only infrastructure.
terraform.tfstate is the state file, created in the same directory as the Terraform configuration files.
Terraform also creates a backup of this file called terraform.tfstate.backup.
The state file is the blueprint of the resources that are actually provisioned.
When executing terraform plan (and thereafter terraform apply), Terraform first checks that the state file exists, refreshes it, and compares the state with the configuration files to determine whether changes must be applied.
When refreshing the state file, Terraform keeps the contents of the state file in sync with the real resources, as resources may be changed externally by other means than Terraform (ex: a VM is deleted manually).
It is possible to skip the state refresh with terraform apply -refresh=false, for instance when the refresh takes a long time to complete. This should only be used when the resources are sure to be in sync with the state file; otherwise it can introduce inconsistencies.
While the state refresh can be disabled, the state file itself cannot be: it is mandatory for Terraform to work.
The state file keeps track of the dependencies between resources, allowing Terraform to decide in which order the resources should be provisioned or destroyed.
The state file is a plain-text JSON file stored locally that contains sensitive information, as it contains all the details related to the provisioned infrastructure (including SSH keys and passwords). For this reason, the state file should not be stored in a Version Control System and, if you are working as a team, the state file should be stored in a secure remote backend instead (AWS S3, Google Cloud Storage).
The state file is for Terraform internal use only, and should never be edited by hand.
Terraform allows multiple users to work together with state locking, ensuring that the state file does not get corrupted by multiple parallel operations.
When a remote backend is configured, Terraform will automatically load the state file from the shared storage every time it is required by a Terraform operation. With state locking, the integrity of the state file is always maintained. The shared storage may be encrypted at rest and/or in transit to ensure that sensitive information is secured.
Example of configuration (terraform.tf):
terraform {
  backend "s3" {
    bucket         = "project-terraform-state-bucket01"
    key            = "finance/terraform.tfstate"
    region         = "us-west-1"
    dynamodb_table = "state-locking"
  }
}
Be sure to execute terraform init to initialize the state file in the remote backend.
Tip: To remove a resource from the management of Terraform, use the terraform state rm command (see below).
# Validate the configuration files in the current directory
terraform validate

# Reformat all Terraform configuration files
terraform fmt

# Read and output the Terraform state in a human-readable form
terraform show
# In machine-readable form
terraform show -json

# Show the providers required by the configuration
terraform providers

# Read output variables from the Terraform state file and print their values
terraform output

# Sync the state file with the provisioned infrastructure
# terraform plan -refresh-only is preferred as it gives you the option to review the modifications first
terraform refresh

# Visual representation of resource dependencies
terraform graph

# List the resources tracked in the state file
terraform state list <resource_type>.<resource_name> ...

# Print all attributes of a specific resource
# Note that the terraform show command shows the entire state
terraform state show <resource_type>.<resource_name>

# Move items from one state file to another
terraform state mv <source> <destination>
# Ex: renaming a resource
terraform state mv aws_dynamodb_table.state_locking aws_dynamodb_table.state_locking-db
# Thereafter, if you manually rename the resource in the configuration file, no change will be applied

# Print the state file from its local/remote location
terraform state pull
terraform state pull -json | jq ...

# Remove a resource from the management of Terraform
terraform state rm <resource_type>.<resource_name>
# Thereafter, manually remove the corresponding resources from the configuration files

# Push a local state file to a remote state. Use with caution
terraform state push
Caution: Terraform will redact (hide) sensitive outputs when planning, applying, or destroying your configuration, or when you query all of your outputs. Terraform will not redact sensitive outputs in other cases, such as when you query a specific output by name with terraform output <name>.
Using lifecycle rules, you can control how Terraform creates and destroys resources.
resource "aws_instance" "cerberus" { ami = "ami-0123456789" instance_type = "m5.large" tags = { Name = "Cerberus-Webserver" } lifecycle { create_before_destroy = true } }
After terraform apply, if we change the ami value and apply again, Terraform will destroy the cerberus instance before recreating it with the new AMI. This is the default behavior. With the added lifecycle rule, we make sure to create the instance with the new AMI before deleting the old one.
resource "aws_instance" "cerberus" { ami = "ami-0123456789" instance_type = "m5.large" tags = { Name = "Cerberus-Webserver" } lifecycle { prevent_destroy = true } }
With prevent_destroy set to true, Terraform will reject any change that would result in the resource getting destroyed and will output an error message.
We can also ignore some changes.
resource "aws_instance" "cerberus" { ami = "ami-0123456789" instance_type = "m5.large" tags = { Name = "Cerberus-Webserver-1" } lifecycle { ignore_changes = [ tags ] } }
With this rule, any change to the tags will be ignored during an apply.
Finally, you can ignore changes from all attributes as follows:
resource "aws_instance" "cerberus" { ami = "ami-0123456789" instance_type = "m5.large" tags = { Name = "Cerberus-Webserver-1" } lifecycle { ignore_changes = all } }
Terraform uses the term "tainted" to describe a resource instance that may not be fully functional, either because its creation partially failed or because you have manually marked it as such with the terraform taint command.
Tainting does not modify your infrastructure directly, but subsequent Terraform plans will include actions to destroy the remote object and create a new object to replace it.
You can remove the "taint" state from a resource instance using the terraform untaint command.
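For instance (using the webserver resource from the earlier examples):
# Mark the instance as tainted; the next apply destroys and recreates it
terraform taint aws_instance.webserver
# Remove the taint without touching the infrastructure
terraform untaint aws_instance.webserver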
It is recommended to use the -replace option with terraform apply to force Terraform to replace an object even though there are no configuration changes that would require it.
$ terraform apply -replace="aws_instance.example[0]"
The -replace option is recommended because the change is reflected in the Terraform plan, letting you understand how it will affect your infrastructure before you take any externally-visible action. When you use terraform taint, other users could create a new plan against your tainted object before you can review the effects.
You can use the TF_LOG environment variable to set the logging level of the Terraform output. Log levels are INFO, WARN, ERROR, DEBUG and TRACE.
# export TF_LOG=<log_level>
export TF_LOG=TRACE
To persist the logs into a file, use the TF_LOG_PATH environment variable.
export TF_LOG_PATH=/tmp/terraform.log
Caution: Even when TF_LOG_PATH is set, TF_LOG must be set for any logging to be enabled.
The terraform import command allows bringing existing resources, created by other means, under the management of Terraform. This is different from data sources, which define read-only infrastructure. For instance, you can have resources created with the management console of your cloud provider, or by using another IaC tool such as Ansible.
As an example, let us take an existing EC2 instance that we want to import into Terraform. First, you need an attribute that uniquely identifies the resource, such as the EC2 instance ID.
The terraform import command only imports resources into the state file. Prior to running the command, you must manually add the resource blocks to the configuration files, to which the imported objects will be attached.
resource "aws_instance" "webserver-2" {
  # Resource arguments to fill in after the resource is imported
}

# terraform import <resource_type>.<resource_name> <unique_attribute>
# The instance ID is collected from the management console
terraform import aws_instance.webserver-2 i-0123456798ABCDEF
We can then inspect the state file and look for attributes. Once you have all the details, you can fill in the arguments of the resource block for "webserver-2".
terraform plan will refresh the state and understand that the EC2 instance already exists. The resource is now under the control of Terraform.
You can make use of the same configuration directory (configuration files + state file) to create multiple infrastructure environments, such as development and production environments.
A default workspace is automatically created within every Terraform configuration.
terraform workspace list
terraform workspace new development
terraform workspace new production
When creating a new workspace, Terraform immediately switches to it. When listing the workspaces, the * indicates the workspace currently in use.
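To switch between existing workspaces:
# Switch to an existing workspace
terraform workspace select development
# Print the name of the current workspace
terraform workspace show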
variable "instance_type" { type = map default = { "development" = "t2.micro" "production" = "m5.large" } }
resource "aws_instance" "webserver" { ami = var.ami # Lookup function instance_type = lookup(var.instance_type, terraform.workspace) tags = { Environment = terraform.workspace } }
terraform.workspace returns the workspace we are currently in.
Instead of creating a single terraform.tfstate state file, Terraform creates a terraform.tfstate.d directory, with one sub-directory per workspace for which we have completed at least one terraform apply.
$ tree terraform.tfstate.d/
terraform.tfstate.d/
├── development
│   └── terraform.tfstate
└── production
    └── terraform.tfstate
Ex: count
resource "aws_instance" "web" {
  ami           = var.ami
  instance_type = var.instance_type
  # count meta-argument
  count = 3
}
Ex: length
variable "webservers" {
  type    = list
  default = ["web1", "web2", "web3"]
}

resource "aws_instance" "web" {
  ami           = var.ami
  instance_type = var.instance_type
  count         = length(var.webservers)
}
Ex: count index
resource "aws_instance" "web" {
  ami           = var.ami
  instance_type = var.instance_type
  count         = length(var.webservers)
  tags = {
    Name = var.webservers[count.index]
  }
}
With count, resources are created as a list:
$ terraform state list
aws_instance.web[0]
aws_instance.web[1]
aws_instance.web[2]
Ex: for_each
resource "aws_instance" "web" {
  ami           = var.ami
  instance_type = var.instance_type
  for_each      = var.webservers
  tags = {
    Name = each.value
  }
}
When using for_each, the variable used needs to be a map or a set of strings.
variable "webservers" {
  type    = set
  default = ["web1", "web2", "web3"]
}
With for_each, resources are created as a map:
$ terraform state list
aws_instance.web["web1"]
aws_instance.web["web2"]
aws_instance.web["web3"]
resource "aws_instance" "webserver" { ami = "ami-0123456789ABCDEF" instance_type = "t2.micro" # Task to be executed on the resource being created provisioner "remote_exec" { inline = [ "sudo apt update", "sudo apt install -y nginx", "sudo systemctl enable nginx", "sudo systemctl start nginx", ] } connection { type = "ssh" host = self.public_ip user = "ubuntu" private_key = file("/root/.ssh/web") } key_name = aws_key_pair.web.id vpc_security_group_ids = [aws_security_group.ssh-access.id] } # The provisioner above can only work if SSH connectivity is enabled between the local machine and the AWS instance resource "aws_security_group" "ssh-access" { ... } resource "aws_key_pair" "web" { ... }
Local exec
resource "aws_instance" "webserver" {
  ami           = "ami-0123456789ABCDEF"
  instance_type = "t2.micro"

  # Task to be executed on the local machine where Terraform is running
  # (self refers to the resource being provisioned)
  provisioner "local-exec" {
    command = "echo ${self.public_ip} >> /tmp/ips.txt"
  }
}
By default, provisioners are run after the resources are created. They are called "create-time provisioners". We can also make a provisioner run before a resource is destroyed with the when argument:
resource "aws_instance" "webserver" {
  ami           = "ami-0123456789ABCDEF"
  instance_type = "t2.micro"

  # Tasks to be executed on the local machine where Terraform is running
  provisioner "local-exec" {
    command = "echo Instance ${self.public_ip} Created! > /tmp/instance_state.txt"
  }

  provisioner "local-exec" {
    when    = destroy
    command = "echo Instance ${self.public_ip} Destroyed! > /tmp/instance_state.txt"
  }
}
By default, the terraform apply command will fail and error out if the provisioner command fails. For the operation not to fail and the resource to be created successfully (not "tainted") even if the provisioned command or script fails, we can set the on_failure argument to continue:
resource "aws_instance" "webserver" {
  ami           = "ami-0123456789ABCDEF"
  instance_type = "t2.micro"

  # Tasks to be executed on the local machine where Terraform is running
  provisioner "local-exec" {
    # By default, on_failure = fail
    on_failure = continue
    command    = "echo Instance ${self.public_ip} Created! > /tmp/instance_state.txt"
  }

  provisioner "local-exec" {
    when    = destroy
    command = "echo Instance ${self.public_ip} Destroyed! > /tmp/instance_state.txt"
  }
}
Terraform recommends using provisioners as a last resort. Make use of natively available options when possible. For example, use user_data while creating an AWS EC2 instance to run an initialization script.
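A minimal sketch of the user_data alternative (the AMI ID and script contents are illustrative):
resource "aws_instance" "webserver" {
  ami           = "ami-0123456789ABCDEF"
  instance_type = "t2.micro"

  # Executed by cloud-init at first boot; no SSH connectivity or provisioner required
  user_data = <<-EOF
    #!/bin/bash
    apt update
    apt install -y nginx
  EOF
}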
resource "aws_iam_policy" "adminUser" { name = "AdminUsers" # file function policy = file("admin-policy.json") } resource "local_file" "pet" { filename = var.filename # length function count = length(var.filename) }
resource "local_file" "pet" { filename = var.filename # for_each function with toset for_each = toset(var.region) } variable "region" { type = list default = ["us-east-1", "us-east-1", "ca-central-1"] description = "A list of AWS regions" }
terraform console loads the state associated with the configuration directory by default, and loads any values that are currently stored in it.
Numeric functions
variable "num" {
  type        = set(number)
  default     = [250, 10, 11, 5]
  description = "A set of numbers"
}

$ terraform console
> max(-1, 2, -10, 200, -250)
200
> min(-1, 2, -10, 200, -250)
-250
> var.num
toset([
  5,
  10,
  11,
  250,
])
> max(var.num...) # values inside the set are expanded into separate arguments using the expansion symbol "..."
250
> ceil(10.1)
11
> ceil(10.9)
11
> floor(10.1)
10
> floor(10.9)
10
String functions
$ terraform console
> split(",", "abc,def,ghi")
tolist([
  "abc",
  "def",
  "ghi",
])
> lower("ABCDEFGHI")
"abcdefghi"
> upper("abcdefghi")
"ABCDEFGHI"
> title("abc-def,ghi-jkl")
"Abc-Def,Ghi-Jkl"
> substr("abc-def,ghi-jkl,mno-pqr", 0, 7) # offset, length
"abc-def"
> substr("abc-def,ghi-jkl,mno-pqr", 8, 7)
"ghi-jkl"
> substr("abc-def,ghi-jkl,mno-pqr", 16, 7)
"mno-pqr"
> join(",", ["abc-def", "ghi-jkl", "mno-pqr"])
"abc-def,ghi-jkl,mno-pqr"
Collection functions
$ terraform console
> length(var.num)
4
> slice(["a", "b", "c", "d", "e", "f"], 0, 3)
[
  "a",
  "b",
  "c",
]
> index(["abc-def", "ghi-jkl", "mno-pqr"], "ghi-jkl")
1
> index(["abc-def", "ghi-jkl", "mno-pqr"], "mno-pqr")
2
> index(["abc-def", "ghi-jkl", "mno-pqr"], "gruik")
╷
│ Error: Error in function call
│
│   on <console-input> line 1:
│   (source code not available)
│
│ Call to function "index" failed: item not found.
╵
> element(["abc-def", "ghi-jkl", "mno-pqr"], 0)
"abc-def"
> element(["abc-def", "ghi-jkl", "mno-pqr"], 1)
"ghi-jkl"
> element(["abc-def", "ghi-jkl", "mno-pqr"], 2)
"mno-pqr"
> element(["abc-def", "ghi-jkl", "mno-pqr"], 3) # wraps around past the end of the list
"abc-def"
> element(["abc-def", "ghi-jkl", "mno-pqr"], 4)
"ghi-jkl"
> contains(["abc-def", "ghi-jkl", "mno-pqr"], "mno-pqr")
true
> contains(["abc-def", "ghi-jkl", "mno-pqr"], "grok")
false
variable "ami" { type = map default = { "us-east-1" = "ami-xyz", "ca-central-1" = "ami-efg", "ap-south-1" = "ami-ABC" } description = "A map of AMI ID's for specific regions" }
$ terraform console > keys(var.ami) tolist([ "ap-south-1", "ca-central-1", "us-east-1", ]) > values(var.ami) tolist([ "ami-ABC", "ami-efg", "ami-xyz", ]) > lookup(var.ami, "ca-central-1") "ami-efg" > lookup(var.ami, "us-west-2") ╷ │ Error: Error in function call │ │ on <console-input> line 1: │ (source code not available) │ │ Call to function "lookup" failed: lookup failed to find key "us-west-2". ╵ > lookup(var.ami, "us-west-2", "ami-pqr") # default value provided "ami-pqr"
Type conversion functions
(The original notes for this section went missing.)
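As a placeholder, here is a minimal console sketch of the standard conversion functions, based on the Terraform documentation:
$ terraform console
> tonumber("42")
42
> tostring(42)
"42"
> tobool("true")
true
> toset(["web1", "web2", "web2"]) # duplicates are removed
toset([
  "web1",
  "web2",
])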
$ terraform console
> 1 + 2
3
> 5 - 3
2
> 2 * 2
4
> 8 / 2
4
> 8 == 8
true
> 8 == 7
false
> 8 != "8" # no implicit type conversion
true
> 5 > 7
false
> 5 > 4
true
> 5 > 5
false
> 4 <= 5
true
> 8 > 7 && 8 < 10
true
> 8 > 10 && 8 < 10
false
> 8 > 9 || 8 < 10
true
> ! (8 > 10)
true
Example of use:
resource "random_password" "password-generator" {
  # condition ? true_val : false_val
  length = var.length < 8 ? 8 : var.length
}

variable "length" {
  type        = number
  description = "The length of the password"
}

output "password" {
  value = random_password.password-generator.result
}

terraform apply -var=length=5 -auto-approve
Local values allow you to factor out repeated expressions within configuration files. Above all, you can make use of variables within the definition of your local values.
resource "aws_instance" "web" { ami = "ami-0123456789abc" instance_type = "t2.medium" tags = local.common_tags } resource "aws_instance" "db" { ami = "ami-0123456789def" instance_type = "t2.m5-large" tags = local.common_tags } locals { common_tags = { Department = "finance" Project = "cerberus" } }
resource "aws_s3_bucket" "finance_bucket" { acl = "private" bucket = local.bucket-prefix } resource "random_string" "random_suffix" { length = 6 special = false upper = false } variable "project" { default = "cerberus" } locals { bucket-prefix = "${var.project}-${random_string.random_suffix.id}-bucket" }
resource "aws_vpc" "backend-vpc" { cidr_block = "10.0.0.0/16" tags = { Name = "backend-vpc" } } resource "aws_subnet" "private-subnet" { vpc_id = aws_vpc.backend-vpc.id cidr_block = "10.0.2.0/24" tags = { Name = "private-subnet" } } resource "aws_security_group" "backend-sg" { name = "backend-sg" vpc_id = aws_vpc.backend-vpc.id ingress { from_port = 22 to_port = 22 protocol = "tcp" cidr_block = ["0.0.0.0/0"] } ingress { from_port = 8080 to_port = 8080 protocol = "tcp" cidr_block = ["0.0.0.0/0"] } # More ingress nested blocks # ... }
variable "ingress_ports" { type = list default = [22, 8080] }
Then, we can make use of a dynamic block in the backend-sg resource to declare all the ingress rules:
resource "aws_security_group" "backend-sg" {
  name   = "backend-sg"
  vpc_id = aws_vpc.backend-vpc.id

  dynamic "ingress" {
    for_each = var.ingress_ports
    content {
      # We use the name of the dynamic block, "ingress", to loop over the ingress ports to be created
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}
We can make use of an alternative name to loop through the list of ingress ports:
resource "aws_security_group" "backend-sg" {
  name   = "backend-sg"
  vpc_id = aws_vpc.backend-vpc.id

  dynamic "ingress" {
    for_each = var.ingress_ports
    # Alternative name for the iteration variable
    iterator = port
    content {
      from_port   = port.value
      to_port     = port.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}
Finally, let’s define an output variable called "to_ports" using a splat expression that will display all the to_port values of the ingress rules defined within our security group:
output "to_ports" {
  # Since the ingress rules are created as a list, we can iterate over the elements
  # using a splat expression with the * symbol
  value = aws_security_group.backend-sg.ingress[*].to_port
}
Any directory containing Terraform configuration files is a module. Modules can call other modules, which allows resource configurations to be packaged and reused.
$ tree root/
root/
└── terraform-projects
    ├── aws-instance
    │   └── main.tf
    └── development
        └── main.tf
# Name given to the child module from this root module
module "dev-webserver" {
  # Relative path to the child module
  source = "../aws-instance"
}
The "root" module is the module that we currently operate. The "child" module is the module called from the root module.
Beside the provider plugins, the Terraform registry also provides modules to easily share them. Instead of relying on local modules, we can then reuse modules that have already been developed and stored within the registry.
module "security-group_ssh" { source = "terraform-aws-modules/security-group/aws/modules/ssh" # If version is not specified, Terraform will download the latest version of the module from the registry version = "3.16.0" # Insert the 2 required variables here (variables that do not have a default value) vpc_id = "vpc-7d8d215" ingress_cidr_blocks = ["10.10.0.0/16"] name = "ssh-access" }
As for providers, you should execute terraform init to download the modules. Alternatively, you can call terraform get.
Note: Output values are necessary to share data from a child module with your root module (see the sketch below).
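A minimal sketch (the output names are illustrative):
# Child module: terraform-projects/aws-instance/outputs.tf
output "instance_ip" {
  value = aws_instance.web.public_ip
}

# Root module: reference the child module's output as module.<name>.<output>
output "dev_instance_ip" {
  value = module.dev-webserver.instance_ip
}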
Terraform Cloud = SaaS with shared state out of the box.
With Terraform Cloud, operations such as terraform init, plan and apply no longer run locally.
- Free plan: You can create an account for free at app.terraform.io. Remote state, remote operations, private module registry, community support, but only 5 active users.
- Team plan: Team management with fine-grained permissions.
- Team & Governance plan: Policy as code (Sentinel), policy enforcement (i.e. verifying that the provisioned architecture follows certain standards), Cloud SLA and support.
- Business Tier plan: Enterprise-level features, advanced security, compliance and governance. SSO (with Okta) and future support for Azure AD and SAML 2.0 IdPs, custom concurrency, self-hosting options (x86 64 Linux agents, Docker), premium support.