Automating Infrastructure on Google Cloud with Terraform
========================================================

- Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing, popular service providers and custom in-house solutions.
- Configuration files describe to Terraform the components needed to run a single application or your entire data center. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform can determine what changed and create incremental execution plans that can be applied.
- The infrastructure Terraform can manage includes both low-level components such as compute instances, storage, and networking, and high-level components such as DNS entries and SaaS features.
- Key features:
  * Infrastructure as code
  * Execution plans
  * Resource graph
  * Change automation
- Terraform comes pre-installed in Cloud Shell.
- Terraform recognizes files ending in `.tf` or `.tf.json` as configuration files and will load them when it runs.
- A destructive change is a change that requires the provider to replace the existing resource rather than update it in place, usually because the cloud provider doesn't support updating the resource in the way described by your configuration (see the plan excerpt after this list).
- Terraform uses implicit dependency information to determine the correct order in which to create and update different resources.
- Sometimes there are dependencies between resources that are not visible to Terraform. The `depends_on` argument can be added to any resource and accepts a list of resources to create explicit dependencies for.
- Just as with `terraform apply`, Terraform determines the order in which resources must be destroyed when you run `terraform destroy`.
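- For example, changing an existing instance's boot disk image is a destructive change; `terraform plan` marks the resource for replacement rather than an in-place update, roughly like this (abridged):

```bash
# Abridged illustration: after editing the boot disk image in instance.tf,
# the plan shows a destroy-and-recreate (-/+) rather than an in-place update
terraform plan
#   # google_compute_instance.terraform must be replaced
# -/+ resource "google_compute_instance" "terraform" { ... }
# Plan: 1 to add, 0 to change, 1 to destroy.
```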

```bash
# The "resource" block in the instance.tf file defines a resource that exists within the infrastructure
cat <<EOF > instance.tf
resource "google_compute_instance" "terraform" {
  project      = "qwiklabs-gcp-00-11e2bd4a53d4"
  name         = "terraform"
  machine_type = "n1-standard-1"
  zone         = "us-west1-c"
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }
  network_interface {
    network = "default"
    access_config {
    }
  }
}
EOF

# Initialize Terraform
terraform init

# Create an execution plan
terraform plan

# In the same directory as the instance.tf file you created, run this command
terraform apply

# Inspect the current state
terraform show

# The terraform {} block is required so Terraform knows which provider to download from the Terraform Registry.
cat <<EOF > main.tf
terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
    }
  }
}
provider "google" {
  version = "3.5.0"
  project = "qwiklabs-gcp-02-12656959b010"
  region  = "us-central1"
  zone    = "us-central1-c"
}
resource "google_compute_network" "vpc_network" {
  name = "terraform-network"
}
EOF

# Initialize, apply & verify
terraform init
terraform apply
terraform show

# Add more resources
echo 'resource "google_compute_instance" "vm_instance" {
  name         = "terraform-instance"
  machine_type = "f1-micro"
  tags         = ["web", "dev"]
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }
  network_interface {
    network = google_compute_network.vpc_network.name
    access_config {
    }
  }
}' >> main.tf

# Apply again
terraform apply

# Destroy the infrastructure
terraform destroy

# Add static IP
echo 'resource "google_compute_address" "vm_static_ip" {
  name = "terraform-static-ip"
}' >> main.tf

# Edit the vm_instance resource in main.tf so its network interface uses the static IP:
#   network_interface {
#     network = google_compute_network.vpc_network.self_link
#     access_config {
#       nat_ip = google_compute_address.vm_static_ip.address
#     }
#   }

# Save plan to a file
terraform plan -out static_ip

# Apply the plan
terraform apply "static_ip"
# Terraform is able to infer a dependency, and knows it must create the static IP before updating the instance

# Add more resources, like Storage bucket
echo '# New resource for the storage bucket our application will use.
resource "google_storage_bucket" "example_bucket" {
  name     = "qwiklabs-gcp-02-12656959b010"
  location = "US"
  website {
    main_page_suffix = "index.html"
    not_found_page   = "404.html"
  }
}
# Create a new instance that uses the bucket
resource "google_compute_instance" "another_instance" {
  # Tells Terraform that this VM instance must be created only after the
  # storage bucket has been created.
  depends_on = [google_storage_bucket.example_bucket]
  name         = "terraform-instance-2"
  machine_type = "f1-micro"
  boot_disk {
    initialize_params {
      image = "cos-cloud/cos-stable"
    }
  }
  network_interface {
    network = google_compute_network.vpc_network.self_link
    access_config {
    }
  }
}' >> main.tf

# To define a provisioner, modify the resource block defining the first vm_instance in your configuration to look like the following
resource "google_compute_instance" "vm_instance" {
  name         = "terraform-instance"
  machine_type = "f1-micro"
  tags         = ["web", "dev"]
  provisioner "local-exec" {
    command = "echo ${google_compute_instance.vm_instance.name}:  ${google_compute_instance.vm_instance.network_interface[0].access_config[0].nat_ip} >> ip_address.txt"
  }
  # ...
}
terraform apply

# Use terraform taint to tell Terraform to recreate the instance:
terraform taint google_compute_instance.vm_instance

terraform apply
```

- If a resource is successfully created but fails a provisioning step, Terraform will error and mark the resource as tainted. A resource that is tainted still exists, but shouldn't be considered safe to use, since provisioning failed.
- When you generate your next execution plan, Terraform will remove any tainted resources and create new resources, attempting to provision them again after creation.
- Provisioners can also be defined that run only during a destroy operation (see the sketch after this list). These are useful for performing system cleanup, extracting data, etc.
- For many resources, using built-in cleanup mechanisms is recommended if possible (such as init scripts), but provisioners can be used if necessary.
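- A minimal sketch of a destroy-time provisioner (the `when = destroy` argument is what restricts it to destroy operations; the logged message is only illustrative):

```bash
# Sketch only: a destroy-time provisioner added inside a resource block such as vm_instance
#   provisioner "local-exec" {
#     when    = destroy
#     command = "echo 'Instance is being destroyed' >> destroy_log.txt"
#   }
```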

## Terraform Modules

- A Terraform module is a set of Terraform configuration files in a single directory.
- As you manage your infrastructure with Terraform, increasingly complex configurations will be created. There is no intrinsic limit to the complexity of a single Terraform configuration file or directory, so it is possible to continue writing and updating your configuration files in a single directory. However, if you do, you may encounter one or more of the following problems:
  * Understanding and navigating the configuration files will become increasingly difficult.
  * Updating the configuration will become more risky, because an update to one block may cause unintended consequences to other blocks of your configuration.
  * Duplication of similar blocks of configuration may increase, for example, when you configure separate dev/staging/production environments, which will cause an increasing burden when updating those parts of your configuration.
  * If you want to share parts of your configuration between projects and teams, cutting and pasting blocks of configuration between projects could be error-prone and hard to maintain.
- What are modules for?
  * Organize configuration
  * Encapsulate configuration
  * Re-use configuration
  * Provide consistency and ensure best practices
- Modules can be loaded from either the local filesystem or a remote source, such as the Terraform Registry (see the example after this list).
- It is recommended that every Terraform practitioner use modules by following these best practices:
  * Start writing your configuration with a plan for modules. Even for slightly complex Terraform configurations managed by a single person, the benefits of using modules outweigh the time it takes to use them properly.
  * Use local modules to organize and encapsulate your code. Even if you aren't using or publishing remote modules, organizing your configuration in terms of modules from the beginning will significantly reduce the burden of maintaining and updating your configuration as your infrastructure grows in complexity.
  * Use the public Terraform Registry to find useful modules. This way you can quickly and confidently implement your configuration by relying on the work of others.
  * Publish and share modules with your team. Most infrastructure is managed by a team of people, and modules are an important tool that teams can use to create and maintain infrastructure. As mentioned earlier, you can publish modules either publicly or privately.
- When using a new module for the first time, you must run either terraform init or terraform get to install the module. When either of these commands is run, Terraform will install any new modules in the .terraform/modules directory within your configuration's working directory. For local modules, Terraform will create a symlink to the module's directory. Because of this, any changes to local modules will be effective immediately, without your having to re-run terraform get.
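- A module block looks the same whether it points at a local directory or a remote source such as the Terraform Registry; a minimal sketch (paths, names, and version are placeholders):

```bash
# Sketch: module sources. A local module uses a filesystem path; a registry module
# uses <NAMESPACE>/<NAME>/<PROVIDER> and can be pinned to a version constraint.
module "local_example" {
  source = "./modules/example"
}
module "registry_example" {
  source  = "terraform-google-modules/network/google"
  version = "~> 6.0"
}
```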

```bash
git clone https://github.com/terraform-google-modules/terraform-google-network
cd terraform-google-network
git checkout tags/v6.0.1 -b v6.0.1

# Work in the example configuration that ships with the module
cd ~/terraform-google-network/examples/simple_project

echo 'module "test-vpc-module" {
  source       = "terraform-google-modules/network/google"
  version      = "~> 6.0"
  project_id   = var.project_id   # defined in variables.tf below
  network_name = var.network_name
  mtu          = 1460
  subnets = [
    {
      subnet_name   = "subnet-01"
      subnet_ip     = "10.10.10.0/24"
      subnet_region = "us-west1"
    },
    {
      subnet_name           = "subnet-02"
      subnet_ip             = "10.10.20.0/24"
      subnet_region         = "us-west1"
      subnet_private_access = "true"
      subnet_flow_logs      = "true"
    },
    {
      subnet_name               = "subnet-03"
      subnet_ip                 = "10.10.30.0/24"
      subnet_region             = "us-west1"
      subnet_flow_logs          = "true"
      subnet_flow_logs_interval = "INTERVAL_10_MIN"
      subnet_flow_logs_sampling = 0.7
      subnet_flow_logs_metadata = "INCLUDE_ALL_METADATA"
      subnet_flow_logs_filter   = "false"
    }
  ]
}' > main.tf

# Retrieve your project ID (used as the default for var.project_id below)
gcloud config list --format 'value(core.project)'

echo 'variable "project_id" {
  description = "The project ID to host the network in"
  default     = "qwiklabs-gcp-03-58c084b52f45"
}
variable "network_name" {
  description = "The name of the VPC network being created"
  default     = "example-vpc"
}' > variables.tf

echo 'output "network_name" {
  value       = module.test-vpc-module.network_name
  description = "The name of the VPC being created"
}
output "network_self_link" {
  value       = module.test-vpc-module.network_self_link
  description = "The URI of the VPC being created"
}
output "project_id" {
  value       = module.test-vpc-module.project_id
  description = "VPC project id"
}
output "subnets_names" {
  value       = module.test-vpc-module.subnets_names
  description = "The names of the subnets being created"
}
output "subnets_ips" {
  value       = module.test-vpc-module.subnets_ips
  description = "The IP and cidrs of the subnets being created"
}
output "subnets_regions" {
  value       = module.test-vpc-module.subnets_regions
  description = "The region where subnets will be created"
}
output "subnets_private_access" {
  value       = module.test-vpc-module.subnets_private_access
  description = "Whether the subnets will have access to Google APIs without a public IP"
}
output "subnets_flow_logs" {
  value       = module.test-vpc-module.subnets_flow_logs
  description = "Whether the subnets will have VPC flow logs enabled"
}
output "subnets_secondary_ranges" {
  value       = module.test-vpc-module.subnets_secondary_ranges
  description = "The secondary ranges associated with these subnets"
}
output "route_names" {
  value       = module.test-vpc-module.route_names
  description = "The routes associated with this VPC"
}' > outputs.tf

cd ~/terraform-google-network/examples/simple_project
terraform init
terraform apply
terraform destroy
cd ~
rm -rf terraform-google-network
```

- Each of these files serves a purpose (the layout used in the next block is sketched after this list):
  * `LICENSE` contains the license under which your module will be distributed. When you share your module, the LICENSE file will let people using it know the terms under which it has been made available. Terraform itself does not use this file.
  * `README.md` contains documentation in markdown format that describes how to use your module. Terraform does not use this file, but services like the Terraform Registry and GitHub will display the contents of this file to visitors to your module's Terraform Registry or GitHub page.
  * `main.tf` contains the main set of configurations for your module. You can also create other configuration files and organize them in a way that makes sense for your project.
  * `variables.tf` contains the variable definitions for your module. When your module is used by others, the variables will be configured as arguments in the module block. Because all Terraform values must be defined, any variables that don't have a default value will become required arguments. A variable with a default value can also be provided as a module argument, thus overriding the default value.
  * `outputs.tf` contains the output definitions for your module. Module outputs are made available to the configuration using the module, so they are often used to pass information about the parts of your infrastructure defined by the module to other parts of your configuration.
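- The module built in the next block follows that layout; roughly:

```bash
# Expected directory layout after the commands below (LICENSE and README.md are
# optional, but recommended when sharing the module)
# .
# ├── main.tf                # root configuration that calls the module
# ├── variables.tf
# ├── outputs.tf
# └── modules/
#     └── gcs-static-website-bucket/
#         ├── website.tf     # the module's main configuration
#         ├── variables.tf
#         ├── outputs.tf
#         ├── README.md
#         └── LICENSE
```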

```bash
# Create module
cd ~
touch main.tf
mkdir -p modules/gcs-static-website-bucket
cd modules/gcs-static-website-bucket
touch website.tf variables.tf outputs.tf
tee -a README.md <<EOF
# GCS static website bucket
This module provisions Cloud Storage buckets configured for static website hosting.
EOF
tee -a LICENSE <<EOF
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
    http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
EOF

echo 'resource "google_storage_bucket" "bucket" {
  name               = var.name
  project            = var.project_id
  location           = var.location
  storage_class      = var.storage_class
  labels             = var.labels
  force_destroy      = var.force_destroy
  uniform_bucket_level_access = true
  versioning {
    enabled = var.versioning
  }
  dynamic "retention_policy" {
    for_each = var.retention_policy == null ? [] : [var.retention_policy]
    content {
      is_locked        = var.retention_policy.is_locked
      retention_period = var.retention_policy.retention_period
    }
  }
  dynamic "encryption" {
    for_each = var.encryption == null ? [] : [var.encryption]
    content {
      default_kms_key_name = var.encryption.default_kms_key_name
    }
  }
  dynamic "lifecycle_rule" {
    for_each = var.lifecycle_rules
    content {
      action {
        type          = lifecycle_rule.value.action.type
        storage_class = lookup(lifecycle_rule.value.action, "storage_class", null)
      }
      condition {
        age                   = lookup(lifecycle_rule.value.condition, "age", null)
        created_before        = lookup(lifecycle_rule.value.condition, "created_before", null)
        with_state            = lookup(lifecycle_rule.value.condition, "with_state", null)
        matches_storage_class = lookup(lifecycle_rule.value.condition, "matches_storage_class", null)
        num_newer_versions    = lookup(lifecycle_rule.value.condition, "num_newer_versions", null)
      }
    }
  }
}' > website.tf

echo 'variable "name" {
  description = "The name of the bucket."
  type        = string
}
variable "project_id" {
  description = "The ID of the project to create the bucket in."
  type        = string
}
variable "location" {
  description = "The location of the bucket."
  type        = string
}
variable "storage_class" {
  description = "The Storage Class of the new bucket."
  type        = string
  default     = null
}
variable "labels" {
  description = "A set of key/value label pairs to assign to the bucket."
  type        = map(string)
  default     = null
}
variable "bucket_policy_only" {
  description = "Enables Bucket Policy Only access to a bucket."
  type        = bool
  default     = true
}
variable "versioning" {
  description = "While set to true, versioning is fully enabled for this bucket."
  type        = bool
  default     = true
}
variable "force_destroy" {
  description = "When deleting a bucket, this boolean option will delete all contained objects. If false, Terraform will fail to delete buckets which contain objects."
  type        = bool
  default     = true
}
variable "iam_members" {
  description = "The list of IAM members to grant permissions on the bucket."
  type = list(object({
    role   = string
    member = string
  }))
  default = []
}
variable "retention_policy" {
  description = "Configuration of the buckets data retention policy for how long objects in the bucket should be retained."
  type = object({
    is_locked        = bool
    retention_period = number
  })
  default = null
}
variable "encryption" {
  description = "A Cloud KMS key that will be used to encrypt objects inserted into this bucket"
  type = object({
    default_kms_key_name = string
  })
  default = null
}
variable "lifecycle_rules" {
  description = "The buckets Lifecycle Rules configuration."
  type = list(object({
    # Object with keys:
    # - type - The type of the action of this Lifecycle Rule. Supported values: Delete and SetStorageClass.
    # - storage_class - (Required if action type is SetStorageClass) The target Storage Class of objects affected by this Lifecycle Rule.
    action = any
    # Object with keys:
    # - age - (Optional) Minimum age of an object in days to satisfy this condition.
    # - created_before - (Optional) Creation date of an object in RFC 3339 (e.g. 2017-06-13) to satisfy this condition.
    # - with_state - (Optional) Match to live and/or archived objects. Supported values include: "LIVE", "ARCHIVED", "ANY".
    # - matches_storage_class - (Optional) Storage Class of objects to satisfy this condition. Supported values include: MULTI_REGIONAL, REGIONAL, NEARLINE, COLDLINE, STANDARD, DURABLE_REDUCED_AVAILABILITY.
    # - num_newer_versions - (Optional) Relevant only for versioned objects. The number of newer versions of an object to satisfy this condition.
    condition = any
  }))
  default = []
}' > variables.tf

echo 'output "bucket" {
  description = "The created storage bucket"
  value       = google_storage_bucket.bucket
}' > outputs.tf

cd ../..
echo 'module "gcs-static-website-bucket" {
  source = "./modules/gcs-static-website-bucket"
  name       = var.name
  project_id = var.project_id
  location   = "us-east1"
  lifecycle_rules = [{
    action = {
      type = "Delete"
    }
    condition = {
      age        = 365
      with_state = "ANY"
    }
  }]
}' > main.tf
echo 'output "bucket-name" {
  description = "Bucket names."
  value       = "module.gcs-static-website-bucket.bucket"
}' > outputs.tf
echo 'variable "project_id" {
  description = "The ID of the project in which to provision resources."
  type        = string
  default     = "qwiklabs-gcp-03-58c084b52f45"
}
variable "name" {
  description = "Name of the buckets to create."
  type        = string
  default     = "qwiklabs-gcp-03-58c084b52f45"
}' > variables.tf

terraform init
terraform apply
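
# Optionally, inspect the module outputs defined in outputs.tf
terraform output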

# Upload files to a bucket
cd ~
curl https://raw.githubusercontent.com/hashicorp/learn-terraform-modules/master/modules/aws-s3-static-website-bucket/www/index.html > index.html
curl https://raw.githubusercontent.com/hashicorp/learn-terraform-modules/master/modules/aws-s3-static-website-bucket/www/error.html > error.html
gsutil cp *.html gs://qwiklabs-gcp-03-58c084b52f45

terraform destroy
```

## Terraform state

- State is a necessary requirement for Terraform to function. People sometimes ask whether Terraform can work without state or not use state and just inspect cloud resources on every run. In the scenarios where Terraform may be able to get away without state, doing so would require shifting massive amounts of complexity from one place (state) to another place (the replacement concept).
- Terraform requires some sort of database to map Terraform config to the real world.
- In addition to tracking the mappings between resources and remote objects, Terraform must also track metadata such as resource dependencies.
- To ensure correct operation, Terraform retains a copy of the most recent set of dependencies within the state.
- In addition to basic mapping, Terraform stores a cache of the attribute values for all resources in the state. This is an optional feature of Terraform state and is used only as a performance improvement.
- In the default configuration, Terraform stores the state in a file in the current working directory where Terraform was run.
- Remote state is the recommended solution.
- State locking = If supported by your backend, Terraform will lock your state for all operations that could write state. This prevents others from acquiring the lock and potentially corrupting your state. State locking happens automatically on all operations that could write state.
- Workspace = The persistent data stored in the backend belongs to a workspace. Initially the backend has only one workspace, called `default`, and thus only one Terraform state is associated with that configuration. Certain backends support multiple named workspaces, which allows multiple states to be associated with a single configuration. The configuration still has only one backend, but multiple distinct instances of that configuration can be deployed without configuring a new backend or changing authentication credentials (see the workspace commands after this list).
- A backend in Terraform determines how state is loaded and how an operation such as apply is executed. This abstraction enables non-local file state storage, remote execution, etc.
- Here are some of the benefits of backends:
  * Working in a team
  * Keeping sensitive information off disk
  * Remote operations
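- If the backend supports them, named workspaces are managed with the `terraform workspace` subcommands; a minimal sketch:

```bash
terraform workspace list            # "default" is the only workspace initially
terraform workspace new dev         # create and switch to a workspace with its own state
terraform workspace select default  # switch back
```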

```bash
# Add a local backend
touch main.tf
# Retrieve your project ID to use in the provider block and bucket name below
gcloud config list --format 'value(core.project)'
echo 'provider "google" {
  project     = "qwiklabs-gcp-03-c3e63b049a3b"
  region      = "us-central-1"
}
resource "google_storage_bucket" "test-bucket-for-state" {
  name        = "qwiklabs-gcp-03-c3e63b049a3b"
  location    = "US"
  uniform_bucket_level_access = true
}' > main.tf
echo 'terraform {
  backend "local" {
    path = "terraform/state/terraform.tfstate"
  }
}' >> main.tf
terraform init
terraform apply
terraform show
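
# Optionally, list and inspect the individual resources tracked in the state
terraform state list
terraform state show google_storage_bucket.test-bucket-for-state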

# Add a Cloud Storage backend
echo 'provider "google" {
  project     = "qwiklabs-gcp-03-c3e63b049a3b"
  region      = "us-central-1"
}
resource "google_storage_bucket" "test-bucket-for-state" {
  name        = "qwiklabs-gcp-03-c3e63b049a3b"
  location    = "US"
  uniform_bucket_level_access = true
}
terraform {
  backend "gcs" {
    bucket  = "qwiklabs-gcp-03-c3e63b049a3b"
    prefix  = "terraform/state"
  }
}' > main.tf
terraform init -migrate-state

# After changing the bucket's key/value labels (e.g., in the Cloud console), refresh the state to pick up the change
terraform refresh

# Clean up your workspace
echo 'provider "google" {
  project     = "qwiklabs-gcp-03-c3e63b049a3b"
  region      = "us-central-1"
}
resource "google_storage_bucket" "test-bucket-for-state" {
  name        = "qwiklabs-gcp-03-c3e63b049a3b"
  location    = "US"
  uniform_bucket_level_access = true
}
terraform {
  backend "local" {
    path = "terraform/state/terraform.tfstate"
  }
}' > main.tf
terraform init -migrate-state

echo 'provider "google" {
  project     = "qwiklabs-gcp-03-c3e63b049a3b"
  region      = "us-central-1"
}
resource "google_storage_bucket" "test-bucket-for-state" {
  name        = "qwiklabs-gcp-03-c3e63b049a3b"
  location    = "US"
  uniform_bucket_level_access = true
  force_destroy = true
}
terraform {
  backend "local" {
    path = "terraform/state/terraform.tfstate"
  }
}' > main.tf
terraform apply
terraform destroy
```

## Import Terraform configuration

- Bringing existing infrastructure under Terraform’s control involves five main steps (a sketch for a GCP resource follows this list; the lab below uses Docker):
  1. Identify the existing infrastructure to be imported.
  2. Import the infrastructure into your Terraform state.
  3. Write a Terraform configuration that matches that infrastructure.
  4. Review the Terraform plan to ensure that the configuration matches the expected state and infrastructure.
  5. Apply the configuration to update your Terraform state.
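- The lab below walks through these steps with a Docker container, but the same flow applies to GCP resources. As a sketch (resource name, project, zone, and instance are placeholders), an existing Compute Engine VM could be brought under management like this:

```bash
# Hypothetical example: start from an empty matching resource block, import the
# existing VM by its full resource path, then flesh out the block until plan is clean.
echo 'resource "google_compute_instance" "imported_vm" {}' > imported.tf
terraform import google_compute_instance.imported_vm \
  projects/my-project/zones/us-central1-a/instances/my-existing-vm
```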

```bash
docker run --name hashicorp-learn --detach --publish 8080:80 nginx:latest
docker ps
git clone https://github.com/hashicorp/learn-terraform-import.git
cd learn-terraform-import
terraform init
echo 'resource "docker_container" "web" {}' > learn-terraform-import/docker.tf
terraform import docker_container.web $(docker inspect -f {{.ID}} hashicorp-learn)
terraform show
terraform plan
terraform show -no-color > docker.tf
terraform plan
echo 'resource "docker_container" "web" {
    image = "sha256:87a94228f133e2da99cb16d653cd1373c5b4e8689956386c1c12b60a20421a02"
    name  = "hashicorp-learn"
    ports {
        external = 8080
        internal = 80
        ip       = "0.0.0.0"
        protocol = "tcp"
    }
}' > docker.tf
terraform plan
terraform apply
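
# Verify that the container is now tracked in state; from here, terraform destroy would remove it
terraform state list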
```