GSP345 | Automating Infrastructure on Google Cloud with Terraform: Challenge Lab
######################################################################################
## Automating Infrastructure on Google Cloud with Terraform: Challenge Lab # GSP345 ##
######################################################################################
====================== Setup: Create the configuration files ======================
Create the empty files and directories in Cloud Shell or the Cloud Shell Editor.
------------------------------------------------------------------------------------
touch main.tf
touch variables.tf
mkdir modules
cd modules
mkdir instances
cd instances
touch instances.tf
touch outputs.tf
touch variables.tf
cd ..
mkdir storage
cd storage
touch storage.tf
touch outputs.tf
touch variables.tf
cd
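------------------------------------------------------------------------------------
The resulting layout should look like this (a quick sketch of the expected tree):
------------------------------------------------------------------------------------
main.tf
variables.tf
modules/
├── instances
│   ├── instances.tf
│   ├── outputs.tf
│   └── variables.tf
└── storage
    ├── storage.tf
    ├── outputs.tf
    └── variables.tf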
--------------------------------------------------------------------------------
Add the following to each of the three variables.tf files, and fill in the GCP Project ID:
--------------------------------------------------------------------------------
variable "region" {
default = "us-central1"
}
variable "zone" {
default = "us-central1-a"
}
variable "project_id" {
default = "<FILL IN PROJECT ID>"
}
------------------------------------------
Add the following to the main.tf file:
------------------------------------------
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.55.0"
    }
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
  zone    = var.zone
}

module "instances" {
  source = "./modules/instances"
}
---------------------------------------------------------------------------------
Run " terraform init " in Cloud Shell in the root directory to initialize terraform.
---------------------------------------------------------------------------------
====================== TASK 1: Import infrastructure ======================
Navigate to Compute Engine > VM Instances. Click on tf-instance-1 and copy its Instance ID somewhere to use later; do the same for tf-instance-2.
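Alternatively, you can fetch both IDs from Cloud Shell (a sketch assuming the default zone us-central1-a; adjust the --zone flag if your lab uses a different zone):
------------------------------------------------------------------------------------
gcloud compute instances describe tf-instance-1 --zone=us-central1-a --format='value(id)'
gcloud compute instances describe tf-instance-2 --zone=us-central1-a --format='value(id)'
------------------------------------------------------------------------------------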
Next, navigate to modules/instances/instances.tf. Copy the following configuration into the file:
--------------------------------------------------------------
resource "google_compute_instance" "tf-instance-1" {
name = "tf-instance-1"
machine_type = "n1-standard-1"
zone = var.zone
boot_disk {
initialize_params {
image = "debian-cloud/debian-10"
}
}
network_interface {
network = "default"
}
}
resource "google_compute_instance" "tf-instance-2" {
name = "tf-instance-2"
machine_type = "n1-standard-1"
zone = var.zone
boot_disk {
initialize_params {
image = "debian-cloud/debian-10"
}
}
network_interface {
network = "default"
}
}
--------------------------------------------------------------------------------------------
To import the first instance, run the following command, substituting the Instance ID you copied down for tf-instance-1.
------------------------------------------------------------------------------------------
terraform import module.instances.google_compute_instance.tf-instance-1 <Instance ID - 1>
------------------------------------------------------------------------------------------
To import the second instance, run the following command, substituting the Instance ID you copied down for tf-instance-2.
------------------------------------------------------------------------------------------
terraform import module.instances.google_compute_instance.tf-instance-2 <Instance ID - 2>
------------------------------------------------------------------------------------------
The two instances have now been imported into your Terraform configuration. You can now run the following commands to reconcile the configuration with the imported state. Type yes at the prompt after you run the apply command to accept the changes.
----------------
terraform plan
terraform apply
----------------
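Optionally, confirm that both instances are now tracked in state (standard Terraform CLI):
----------------
terraform state list
----------------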
====================== TASK 2: Configure a remote backend ======================
Add the following code to the modules/storage/storage.tf file:
-------------------------------------------------------------------
resource "google_storage_bucket" "storage-bucket" {
name = var.project_id
location = "US"
force_destroy = true
uniform_bucket_level_access = true
}
-------------------------------------------------------------------
Next, add the following to the main.tf file:
------------------------------------------------------------------
module "storage" {
source = "./modules/storage"
}
----------------------------------------------------------------------------
Run the following commands to initialize the module and create the storage bucket resource. Type yes at the prompt after you run the apply command to accept the changes.
------------------------
terraform init
terraform apply
------------------------
Next, update the main.tf file so that the terraform block looks like the following. Fill in your GCP Project ID for the bucket argument definition.
-------------------------------------------
terraform {
  backend "gcs" {
    bucket = "<FILL IN PROJECT ID>"
    prefix = "terraform/state"
  }
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.55.0"
    }
  }
}
--------------------------------------------
Run the following to initialize the remote backend. Type yes at the prompt to copy the existing state to the new backend.
----------------
terraform init
----------------
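Optionally, verify that the state file was migrated into the bucket (assuming the bucket is named after your Project ID, as above):
----------------
gsutil ls gs://<FILL IN PROJECT ID>/terraform/state/
----------------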
====================== TASK 3: Modify and update infrastructure ======================
Navigate to modules/instances/instances.tf. Replace the entire contents of the file with the following:
--------------------------------------------------------
resource "google_compute_instance" "tf-instance-1" {
  name                      = "tf-instance-1"
  machine_type              = "n1-standard-2"
  zone                      = var.zone
  allow_stopping_for_update = true

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
  }
}

resource "google_compute_instance" "tf-instance-2" {
  name                      = "tf-instance-2"
  machine_type              = "n1-standard-2"
  zone                      = var.zone
  allow_stopping_for_update = true

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
  }
}

resource "google_compute_instance" "tf-instance-3" {
  name                      = "tf-instance-3"
  machine_type              = "n1-standard-2"
  zone                      = var.zone
  allow_stopping_for_update = true

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
  }
}
--------------------------------------------------------------------------------------------------
Run the following commands to initialize the module and create/update the instance resources. Type yes at the prompt after you run the apply command to accept the changes.
----------------
terraform init
terraform apply
----------------
====================== TASK 4: Taint and destroy resources ======================
Taint the tf-instance-3 resource by running the following command:
------------------------------------------------------------------------
terraform taint module.instances.google_compute_instance.tf-instance-3
------------------------------------------------------------------------
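On Terraform 0.15.2 and later, the same replacement can be requested without taint via the -replace option (an equivalent alternative, not required by the lab):
------------------------------------------------------------------------
terraform apply -replace=module.instances.google_compute_instance.tf-instance-3
------------------------------------------------------------------------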
Run the following commands to apply the changes:
----------------
terraform init
terraform apply
----------------
Remove the tf-instance-3 resource by deleting the following block from the instances.tf file:
-----------------------------------------------------------
resource "google_compute_instance" "tf-instance-3" {
  name                      = "tf-instance-3"
  machine_type              = "n1-standard-2"
  zone                      = var.zone
  allow_stopping_for_update = true

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
  }
}
--------------------------------------------------------------------
Run the following command to apply the change. Type yes at the prompt.
----------------
terraform apply
----------------
====================== TASK 5: Use a module from the Registry ======================
Copy and paste the following into the main.tf file:
----------------------------------------------------------------
module "vpc" {
source = "terraform-google-modules/network/google"
version = "~> 3.2.2"
project_id = var.project_id
network_name = "terraform-vpc"
routing_mode = "GLOBAL"
subnets = [
{
subnet_name = "subnet-01"
subnet_ip = "10.10.10.0/24"
subnet_region = "us-central1"
},
{
subnet_name = "subnet-02"
subnet_ip = "10.10.20.0/24"
subnet_region = "us-central1"
subnet_private_access = "true"
subnet_flow_logs = "true"
description = "This subnet has a description"
}
]
}
-------------------------------------------------------------------------------
Run the following commands to initialize the module and create the VPC. Type yes at the prompt.
---------------
terraform init
terraform apply
----------------
Navigate to modules/instances/instances.tf. Replace the entire contents of the file with the following:
-------------------------------------------------------
resource "google_compute_instance" "tf-instance-1" {
  name                      = "tf-instance-1"
  machine_type              = "n1-standard-2"
  zone                      = var.zone
  allow_stopping_for_update = true

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network    = "terraform-vpc"
    subnetwork = "subnet-01"
  }
}

resource "google_compute_instance" "tf-instance-2" {
  name                      = "tf-instance-2"
  machine_type              = "n1-standard-2"
  zone                      = var.zone
  allow_stopping_for_update = true

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network    = "terraform-vpc"
    subnetwork = "subnet-02"
  }
}
--------------------------------------------------------------------------------------------
Run the following commands to initialize the module and update the instances. Type yes at the prompt.
---------------
terraform init
terraform apply
----------------
====================== TASK 6: Configure a firewall ======================
Add the following resource to the main.tf file and fill in the GCP Project ID:
------------------------------------------------------------------
resource "google_compute_firewall" "tf-firewall" {
name = "tf-firewall"
network = "projects/<PROJECT_ID>/global/networks/terraform-vpc"
allow {
protocol = "tcp"
ports = ["80"]
}
source_tags = ["web"]
source_ranges = ["0.0.0.0/0"]
}
-------------------------------------------------------------------------
Run the following commands to configure the firewall. Type yes at the prompt.
---------------------
terraform init
terraform apply
----------------------
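Optionally, confirm the rule was created (standard gcloud command):
----------------------
gcloud compute firewall-rules describe tf-firewall
----------------------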
######################################################################################
## Automating Infrastructure on Google Cloud with Terraform: Challenge Lab # GSP345 ##
######################################################################################
@DynamiteC

resource "google_compute_instance" "tf-instance-1" {
  name         = "tf-instance-1"
  machine_type = "n1-standard-1"
  zone         = var.zone

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
  }

  metadata_startup_script = <<-EOT
    #!/bin/bash
  EOT
  allow_stopping_for_update = true
}

resource "google_compute_instance" "tf-instance-2" {
  name         = "tf-instance-2"
  machine_type = "n1-standard-1"
  zone         = var.zone

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
  }

  metadata_startup_script = <<-EOT
    #!/bin/bash
  EOT
  allow_stopping_for_update = true
}


ghost commented Nov 4, 2021

TASK 1: Import infrastructure fails. Attempted several times, milestone never achieved.

@bhanu-prakashl

This lab has some issues.

@adelkhayata76

I just got 100% in this lab!

You just need to pay attention to these steps (a consolidated sketch follows this list):

  1. Add the following to each instance block:
     metadata_startup_script = <<-EOT
       #!/bin/bash
     EOT
     allow_stopping_for_update = true
  2. Replace the "tf-instance-3" text with the instance name assigned to you by the lab (you can find it on the left panel).
  3. Replace the Google Storage bucket name with the one assigned to you by the lab (you can find it on the left panel).
  4. Replace the "terraform-vpc" text with the VPC name assigned to you by the lab (you can find it on the left panel).
  5. I was having issues with the zone variable inside instances.tf, so I replaced it with my project ID.
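Putting it together, each instance block ends up looking roughly like this (a sketch combining the lab's base configuration with the additions above; substitute the names and zone your lab assigns):

```
resource "google_compute_instance" "tf-instance-1" {
  name         = "tf-instance-1"
  machine_type = "n1-standard-1"
  zone         = var.zone

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
  }

  # Additions from step 1 above:
  metadata_startup_script = <<-EOT
    #!/bin/bash
  EOT
  allow_stopping_for_update = true
}
```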

@yogeshbirje

Hi,

How do I overcome this scenario?

metadata_startup_script = <<-EOT
  #!/bin/bash
EOT
allow_stopping_for_update = true

@Aabhusan

I have just completed it. I have added all the code to the repo. When you clone it, delete the .terraform directory and the terraform.tfstate files.
https://github.com/Aabhusan/terraform-labs

@akhyaruu

I was able to complete the challenge lab because of this code, but it only gets you about 55% of the way, so I'll leave a note here:

First, my advice when you want to use this code:

  • DON'T COPY-PASTE the code without understanding what you're doing, and of course follow along with the lab instructions
  • please take a look at what VARIABLES you should pass to the code
  • always RE-READ the instructions so nothing is left out
  • you can always go back to the previous labs to see the code

Second, some additional detail about the code:

  • in every instance block resource "google_compute_instance" "tf-instance-<NUMBER>" {}, add this code before the closing brace:
    metadata_startup_script = <<-EOT
      #!/bin/bash
    EOT
    allow_stopping_for_update = true
  • replace any name like "tf-instance-3", "terraform-vpc", and the Google Storage bucket name with the value given by the lab
  • you don't need to specify the zone argument when creating the instances

@itistech

This lab has some issues. Not all the steps are correct.

@johnnieng

https://gist.github.com/Syed-Hassaan/e41a83345832666846ee6be0f69c1f36?permalink_comment_id=3937718#gistcomment-3937718

resource "google_storage_bucket" "storage-bucket" {
name = "tf-bucket-148026"
location = "US"
force_destroy = true
uniform_bucket_level_access = true
}

terraform {
backend "gcs" {
bucket = "tf-bucket-148026"
prefix = "terraform/state"
}
required_providers {
google = {
source = "hashicorp/google"
version = "3.55.0"
}
}
}
provider "google" {
project = var.project_id
region = var.region
zone = var.zone
}
module "instances" {
source = "./modules/instances"
}
module "storage" {
source = "./modules/storage"
}

resource "google_compute_instance" "tf-instance-1" {
  name         = "tf-instance-1"
  machine_type = "n1-standard-2"
  zone         = var.zone

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
  }

  metadata_startup_script = <<-EOT
    #!/bin/bash
  EOT
  allow_stopping_for_update = true
}

resource "google_compute_instance" "tf-instance-2" {
  name         = "tf-instance-2"
  machine_type = "n1-standard-2"
  zone         = var.zone

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
  }

  metadata_startup_script = <<-EOT
    #!/bin/bash
  EOT
  allow_stopping_for_update = true
}

resource "google_compute_instance" "tf-instance-3" {
  name         = "tf-instance-182056"
  machine_type = "n1-standard-2"
  zone         = var.zone

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
  }

  metadata_startup_script = <<-EOT
    #!/bin/bash
  EOT
  allow_stopping_for_update = true
}

module "vpc" {
source = "terraform-google-modules/network/google"
version = "~> 3.4.0"
project_id = var.project_id
network_name = "tf-vpc-391516"
routing_mode = "GLOBAL"
subnets = [
{
subnet_name = "subnet-01"
subnet_ip = "10.10.10.0/24"
subnet_region = "us-central1"
},
{
subnet_name = "subnet-02"
subnet_ip = "10.10.20.0/24"
subnet_region = "us-central1"
subnet_private_access = "true"
subnet_flow_logs = "true"
description = "This subnet has a description"
}
]
}

resource "google_compute_instance" "tf-instance-1" {
  name         = "tf-instance-1"
  machine_type = "n1-standard-2"
  zone         = var.zone

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network    = "tf-vpc-391516"
    subnetwork = "subnet-01"
  }

  metadata_startup_script = <<-EOT
    #!/bin/bash
  EOT
  allow_stopping_for_update = true
}

resource "google_compute_instance" "tf-instance-2" {
  name         = "tf-instance-2"
  machine_type = "n1-standard-2"
  zone         = var.zone

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network    = "tf-vpc-391516"
    subnetwork = "subnet-02"
  }

  metadata_startup_script = <<-EOT
    #!/bin/bash
  EOT
  allow_stopping_for_update = true
}

resource "google_compute_firewall" "tf-firewall" {
name = "tf-firewall"
network = "projects/qwiklabs-gcp-02-b84ff56656df/global/networks/tf-vpc-391516"
allow {
protocol = "tcp"
ports = ["80"]
}
source_tags = ["web"]
source_ranges = ["0.0.0.0/0"]
}

@aagirre92

aagirre92 commented Aug 24, 2022

Even though I create the infrastructure with Terraform (correctly importing the already existing instances), the checkpoint always states "Please create instances.tf file and import both instances and update them using Terraform.".

My main.tf file looks like this:

```
terraform {
  backend "local" {
    path = "terraform/state/terraform.tfstate"
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
  zone    = var.zone
}

module "instances" {
  source = "./modules/instances"
}
```

@deveshase

deveshase commented Dec 1, 2022

Just completed the lab today (after attempting for the 3rd time). For anyone getting this error: "Please create instances.tf file and import both instances and update them using Terraform."

The instructions are incorrect: if you add the metadata startup script, the instances do not upgrade in place; they are destroyed and recreated, which is not what the lab wants, so you will get the error above. Remove the script from your instances.tf and you'll be good.

modules/
├── instances
│   ├── instances.tf
│   ├── outputs.tf
│   └── variables.tf
└── storage
    ├── storage.tf
    ├── outputs.tf
    └── variables.tf

touch main.tf variables.tf
mkdir -p modules/instances
mkdir -p modules/storage

cd modules/instances
touch instances.tf variables.tf outputs.tf

cd ..
cd storage
touch storage.tf variables.tf outputs.tf

cd ..
cd ..

variables.tf

```

variable "region" {
    default = "us-east1"
}

variable "zone" {
    default = "us-east1-c"
}

variable "project_id" {
    default = "qwiklabs-gcp-02-2e28a7a4085e"
}
```


main.tf
~~~~~~~~~~

//do not use provider version
```
terraform {
    required_providers {
        google =  {
            source = "hashicorp/google"
        }
    }   
}

provider "google" {
    project = var.project_id
    region  = var.region
    zone    = var.zone
}
```


terraform init

Task 2
###############

main.tf
~~~~~
```

module "instances" {
    source = "./modules/instances"
}
```

terraform init


instances.tf
~~~~~

```
resource "google_compute_instance" "tf-instance-1" {
    name         = "tf-instance-1"
    machine_type = "n1-standard-1"
    boot_disk       {
        initialize_params {
            image = "debian-10-buster-v20221102"
        }
    }
    network_interface {
        network = "default"
        access_config {
        }
    } 
    allow_stopping_for_update = true
}

resource "google_compute_instance" "tf-instance-2"{
    name         = "tf-instance-2"
    machine_type = "n1-standard-1"
    boot_disk       {
        initialize_params {
            image = "debian-10-buster-v20221102"
        }
    }
    network_interface {
        network = "default"
        access_config {
        }
    } 
    allow_stopping_for_update = true
}


```


To import the instances, use the following commands:

terraform import module.instances.google_compute_instance.tf-instance-1 tf-instance-1

terraform import module.instances.google_compute_instance.tf-instance-2 tf-instance-2


terraform show

terraform plan
terraform apply




**IMPORTANT: Instructions are incorrect. If you add the metadata startup script, instances do not upgrade in place; they are destroyed and recreated, which is not what the lab wants, so you will get this error: "Please create instances.tf file and import both instances and update them using Terraform." Remove the following from your instances.tf and you'll be good:**

    metadata_startup_script = <<-EOT
        #!/bin/bash
    EOT


Task 3
#############
   

storage.tf
~~~~
```
resource "google_storage_bucket" "storage-bucket"{
    name = "tf-bucket-146584"
    location = "US"
    force_destroy = true
    uniform_bucket_level_access = true
}
```


main.tf
~~~~~~~~
 ```

module "storage" {
    source = "./modules/storage"
} 

```
 

terraform init
terraform apply


main.tf
~~~~~~~~
Add the backend block inside the top-level terraform block (not the provider block):
```

    backend "gcs" {
        prefix = "terraform/state"
        bucket = "tf-bucket-305061"
    }
```
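For placement, the backend block nests inside the terraform block in main.tf (a sketch reusing the required_providers block from earlier; substitute the bucket name your lab assigns):

```
terraform {
    required_providers {
        google = {
            source = "hashicorp/google"
        }
    }

    backend "gcs" {
        prefix = "terraform/state"
        bucket = "tf-bucket-305061"
    }
}
```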


terraform init (type yes at the state-migration prompt)



Task 4
#############

Update the tf-instance-1 machine type (only machine_type changes; keep the rest of the existing block):
```

resource "google_compute_instance" "tf-instance-1" {
  name         = "tf-instance-1"
  machine_type = "n1-standard-2"
  # ...keep the remaining arguments (boot_disk, network_interface, etc.) unchanged
}

```

Update the tf-instance-2 machine type (again, only machine_type changes):
```

resource "google_compute_instance" "tf-instance-2" {
  name         = "tf-instance-2"
  machine_type = "n1-standard-2"
  # ...keep the remaining arguments (boot_disk, network_interface, etc.) unchanged
}

```

Then add the third instance, using the name assigned by your lab:

```

resource "google_compute_instance" "tf-instance-556429"{
    name         = "tf-instance-556429"
    machine_type = "n1-standard-2"
    boot_disk       {
        initialize_params {
            image = "debian-10-buster-v20221102"
        }
    }
    network_interface {
        network = "default"
        access_config {
        }
    } 
    allow_stopping_for_update = true
}


```


terraform init
terraform apply




Task 5
############

terraform taint module.instances.google_compute_instance.tf-instance-987888

terraform plan
terraform apply


Remove the tf-instance-3 resource from the instances.tf file.
terraform init
terraform apply



Task 6
############
main.tf
~~~
```

module "vpc" {
    source  = "terraform-google-modules/network/google"
    version = "3.4.0"

    project_id   = var.project_id
    network_name = "tf-vpc-657550"
    routing_mode = "GLOBAL"

    subnets = [
        {
            subnet_name           = "subnet-01"
            subnet_ip             = "10.10.10.0/24"
            subnet_region         = "us-east1"
        },
        {
            subnet_name           = "subnet-02"
            subnet_ip             = "10.10.20.0/24"
            subnet_region         = "us-east1"
        }]
}


```

terraform init -upgrade
terraform apply




instances.tf file: update the resource configurations to connect tf-instance-1 to subnet-01 and tf-instance-2 to subnet-02 (only the network_interface blocks change; keep the rest of each resource).


```
resource "google_compute_instance" "tf-instance-1" {
    # ...keep the existing arguments; only the network_interface changes
    network_interface {
        network    = "tf-vpc-657550"
        subnetwork = "subnet-01"
    }
}

resource "google_compute_instance" "tf-instance-2" {
    # ...keep the existing arguments; only the network_interface changes
    network_interface {
        network    = "tf-vpc-657550"
        subnetwork = "subnet-02"
    }
}
```


terraform init
terraform apply



Task 7
#############
```

resource "google_compute_firewall" "tf-firewall" {
  name    = "tf-firewall"
  network = "tf-vpc-657550"

  allow {
    protocol = "tcp"
    ports    = ["80"]
  }

  source_tags = ["web"]
  source_ranges = ["0.0.0.0/0"]
}
```


terraform apply

@Kirnesh92

Thank you! :)

@linhnde

linhnde commented Oct 12, 2023

I've just completed the lab today. deveshase is right: the startup script is what makes the task get stuck. We just need to remove the metadata_startup_script and keep allow_stopping_for_update.
Remember to double-check subnet-01 and subnet-02 in instances.tf to pass the corresponding part.

@oche-jay

oche-jay commented May 7, 2024

Thanks for this, you saved me several wasted hours and credits!

@sheetalsingh92

Please help!
For Task 2, when I make changes in the instances.tf file and try to run terraform import, it gives me an error saying "Cannot import non-existent remote object". Can someone please tell me what I am doing wrong?


@WebOfWyrd

WebOfWyrd commented Oct 14, 2024

If someone else happens to struggle with Task 1: the lab instructions are quite misleading.

I completed the lab successfully today (14 October 2024) and struggled with Task 1. In my first attempts I edited the code so that it would make only an in-place upgrade, which is the way anyone would do it in the real world, but this makes the Task 1 check fail.

After quite a bit of Googling and reading the lab reviews, I found out that the VMs must be destroyed by Terraform and then re-created. This is of course completely the wrong way if the intention is to simulate a real-world scenario where you want to bring manually created VMs under Terraform management.

Adding the metadata according to the lab instructions, and allowing the VMs to be destroyed and re-created because of it, made the lab check go through successfully.

This instruction https://gist.github.com/Syed-Hassaan/e41a83345832666846ee6be0f69c1f36?permalink_comment_id=4388061#gistcomment-4388061 helped me through the rest of the lab.

So for comparison, a snippet from deveshase's code without the lab metadata:

""""
resource "google_compute_instance" "tf-instance-2"{
name = "tf-instance-2"
machine_type = "n1-standard-1"
boot_disk {
initialize_params {
image = "debian-10-buster-v20221102"
}
}
network_interface {
network = "default"
access_config {
}
}
allow_stopping_for_update = true
}
""""

and the same code with the metadata as given in the lab instructions, which will cause the VMs to be re-created (make sure both VMs have this lab metadata):

""""
resource "google_compute_instance" "tf-instance-2"{
name = "tf-instance-2"
machine_type = "n1-standard-1"
boot_disk {
initialize_params {
image = "debian-10-buster-v20221102"
}
}
network_interface {
network = "default"
access_config {
}
}
metadata_startup_script = <<-EOT
#!/bin/bash
EOT
allow_stopping_for_update = true
}
"""""

I also had some version issues with the bucket and VPC, but I just removed the "version" line from the VPC module as well, and running terraform init -upgrade fixed the issue. I'm not sure if the key is to leave the versions out of both the main.tf provider block and the VPC module, but anyway that worked for me.
