Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
google_container_cluster.coda_cluster_east: Refreshing state... [id=projects/o1labs-192920/locations/us-east1/clusters/coda-infra-east]
google_container_cluster.buildkite_infra_central1: Refreshing state... [id=projects/o1labs-192920/locations/us-central1/clusters/buildkite-infra-central1]
google_container_cluster.buildkite_infra_east1: Refreshing state... [id=projects/o1labs-192920/locations/us-east1/clusters/buildkite-infra-east1]
google_container_cluster.buildkite_infra_east4: Refreshing state... [id=projects/o1labs-192920/locations/us-east4/clusters/buildkite-infra-east4]
data.google_client_config.current: Refreshing state...
data.aws_secretsmanager_secret.prometheus_remote_write_config: Refreshing state...
google_container_node_pool.central1_compute_nodes: Refreshing state... [id=projects/o1labs-192920/locations/us-central1/clusters/buildkite-infra-central1/nodePools/buildkite-central1-compute]
google_container_node_pool.east1_compute_nodes: Refreshing state... [id=projects/o1labs-192920/locations/us-east1/clusters/buildkite-infra-east1/nodePools/buildkite-east1-compute]
data.aws_secretsmanager_secret_version.current_prometheus_remote_write_config: Refreshing state...
google_container_node_pool.east_primary_nodes: Refreshing state... [id=projects/o1labs-192920/locations/us-east1/clusters/coda-infra-east/nodePools/coda-infra-east]
helm_release.east_prometheus: Refreshing state... [id=east-prometheus]
google_container_node_pool.east4_compute_nodes: Refreshing state... [id=projects/o1labs-192920/locations/us-east4/clusters/buildkite-infra-east4/nodePools/buildkite-east4-compute]
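Before the diff is computed, Terraform re-reads every managed resource and data source listed above so the plan reflects live state; with the in-memory refresh used here, nothing is written back to the state backend. When the state is known to be current, those refresh round-trips can be skipped with terraform plan -refresh=false, a standard flag rather than anything specific to this run.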
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
~ update in-place
Terraform will perform the following actions:
# google_container_cluster.coda_cluster_central1 will be created
+ resource "google_container_cluster" "coda_cluster_central1" {
+ additional_zones = (known after apply)
+ cluster_ipv4_cidr = (known after apply)
+ default_max_pods_per_node = (known after apply)
+ enable_binary_authorization = false
+ enable_intranode_visibility = (known after apply)
+ enable_kubernetes_alpha = false
+ enable_legacy_abac = false
+ enable_shielded_nodes = false
+ enable_tpu = (known after apply)
+ endpoint = (known after apply)
+ id = (known after apply)
+ initial_node_count = 1
+ instance_group_urls = (known after apply)
+ label_fingerprint = (known after apply)
+ location = "us-central1"
+ logging_service = (known after apply)
+ master_version = (known after apply)
+ min_master_version = "1.15"
+ monitoring_service = (known after apply)
+ name = "coda-infra-central1"
+ network = "default"
+ node_locations = (known after apply)
+ node_version = (known after apply)
+ operation = (known after apply)
+ project = (known after apply)
+ region = (known after apply)
+ remove_default_node_pool = true
+ services_ipv4_cidr = (known after apply)
+ subnetwork = (known after apply)
+ zone = (known after apply)
+ addons_config {
+ cloudrun_config {
+ disabled = (known after apply)
}
+ horizontal_pod_autoscaling {
+ disabled = (known after apply)
}
+ http_load_balancing {
+ disabled = (known after apply)
}
+ kubernetes_dashboard {
+ disabled = (known after apply)
}
+ network_policy_config {
+ disabled = (known after apply)
}
}
+ authenticator_groups_config {
+ security_group = (known after apply)
}
+ cluster_autoscaling {
+ enabled = (known after apply)
+ auto_provisioning_defaults {
+ oauth_scopes = (known after apply)
+ service_account = (known after apply)
}
+ resource_limits {
+ maximum = (known after apply)
+ minimum = (known after apply)
+ resource_type = (known after apply)
}
}
+ master_auth {
+ client_certificate = (known after apply)
+ client_key = (sensitive value)
+ cluster_ca_certificate = (known after apply)
+ client_certificate_config {
+ issue_client_certificate = false
}
}
+ network_policy {
+ enabled = (known after apply)
+ provider = (known after apply)
}
+ node_config {
+ disk_size_gb = (known after apply)
+ disk_type = (known after apply)
+ guest_accelerator = (known after apply)
+ image_type = (known after apply)
+ labels = (known after apply)
+ local_ssd_count = (known after apply)
+ machine_type = (known after apply)
+ metadata = (known after apply)
+ min_cpu_platform = (known after apply)
+ oauth_scopes = (known after apply)
+ preemptible = (known after apply)
+ service_account = (known after apply)
+ tags = (known after apply)
+ taint = (known after apply)
+ sandbox_config {
+ sandbox_type = (known after apply)
}
+ shielded_instance_config {
+ enable_integrity_monitoring = (known after apply)
+ enable_secure_boot = (known after apply)
}
+ workload_metadata_config {
+ node_metadata = (known after apply)
}
}
+ node_pool {
+ initial_node_count = (known after apply)
+ instance_group_urls = (known after apply)
+ max_pods_per_node = (known after apply)
+ name = (known after apply)
+ name_prefix = (known after apply)
+ node_count = (known after apply)
+ node_locations = (known after apply)
+ version = (known after apply)
+ autoscaling {
+ max_node_count = (known after apply)
+ min_node_count = (known after apply)
}
+ management {
+ auto_repair = (known after apply)
+ auto_upgrade = (known after apply)
}
+ node_config {
+ disk_size_gb = (known after apply)
+ disk_type = (known after apply)
+ guest_accelerator = (known after apply)
+ image_type = (known after apply)
+ labels = (known after apply)
+ local_ssd_count = (known after apply)
+ machine_type = (known after apply)
+ metadata = (known after apply)
+ min_cpu_platform = (known after apply)
+ oauth_scopes = (known after apply)
+ preemptible = (known after apply)
+ service_account = (known after apply)
+ tags = (known after apply)
+ taint = (known after apply)
+ sandbox_config {
+ sandbox_type = (known after apply)
}
+ shielded_instance_config {
+ enable_integrity_monitoring = (known after apply)
+ enable_secure_boot = (known after apply)
}
+ workload_metadata_config {
+ node_metadata = (known after apply)
}
}
+ upgrade_settings {
+ max_surge = (known after apply)
+ max_unavailable = (known after apply)
}
}
+ pod_security_policy_config {
+ enabled = (known after apply)
}
}
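For orientation, a minimal HCL sketch that would produce a create plan like the one above; the attribute values are read straight off the plan output, while the file layout and the assumption that nothing else is set are guesses, not the actual module source:

    resource "google_container_cluster" "coda_cluster_central1" {
      name               = "coda-infra-central1"
      location           = "us-central1"
      network            = "default"
      min_master_version = "1.15"

      # Start with one throwaway node, then drop the default pool so the
      # separately managed google_container_node_pool resources own capacity.
      initial_node_count       = 1
      remove_default_node_pool = true
    }

The coda_cluster_east4 block that follows is the same shape with name = "coda-infra-east4" and location = "us-east4"; given how closely the two match, they could plausibly be generated from a single resource with for_each, though the plan alone cannot confirm that.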
# google_container_cluster.coda_cluster_east4 will be created
+ resource "google_container_cluster" "coda_cluster_east4" {
+ additional_zones = (known after apply)
+ cluster_ipv4_cidr = (known after apply)
+ default_max_pods_per_node = (known after apply)
+ enable_binary_authorization = false
+ enable_intranode_visibility = (known after apply)
+ enable_kubernetes_alpha = false
+ enable_legacy_abac = false
+ enable_shielded_nodes = false
+ enable_tpu = (known after apply)
+ endpoint = (known after apply)
+ id = (known after apply)
+ initial_node_count = 1
+ instance_group_urls = (known after apply)
+ label_fingerprint = (known after apply)
+ location = "us-east4"
+ logging_service = (known after apply)
+ master_version = (known after apply)
+ min_master_version = "1.15"
+ monitoring_service = (known after apply)
+ name = "coda-infra-east4"
+ network = "default"
+ node_locations = (known after apply)
+ node_version = (known after apply)
+ operation = (known after apply)
+ project = (known after apply)
+ region = (known after apply)
+ remove_default_node_pool = true
+ services_ipv4_cidr = (known after apply)
+ subnetwork = (known after apply)
+ zone = (known after apply)
+ addons_config {
+ cloudrun_config {
+ disabled = (known after apply)
}
+ horizontal_pod_autoscaling {
+ disabled = (known after apply)
}
+ http_load_balancing {
+ disabled = (known after apply)
}
+ kubernetes_dashboard {
+ disabled = (known after apply)
}
+ network_policy_config {
+ disabled = (known after apply)
}
}
+ authenticator_groups_config {
+ security_group = (known after apply)
}
+ cluster_autoscaling {
+ enabled = (known after apply)
+ auto_provisioning_defaults {
+ oauth_scopes = (known after apply)
+ service_account = (known after apply)
}
+ resource_limits {
+ maximum = (known after apply)
+ minimum = (known after apply)
+ resource_type = (known after apply)
}
}
+ master_auth {
+ client_certificate = (known after apply)
+ client_key = (sensitive value)
+ cluster_ca_certificate = (known after apply)
+ client_certificate_config {
+ issue_client_certificate = false
}
}
+ network_policy {
+ enabled = (known after apply)
+ provider = (known after apply)
}
+ node_config {
+ disk_size_gb = (known after apply)
+ disk_type = (known after apply)
+ guest_accelerator = (known after apply)
+ image_type = (known after apply)
+ labels = (known after apply)
+ local_ssd_count = (known after apply)
+ machine_type = (known after apply)
+ metadata = (known after apply)
+ min_cpu_platform = (known after apply)
+ oauth_scopes = (known after apply)
+ preemptible = (known after apply)
+ service_account = (known after apply)
+ tags = (known after apply)
+ taint = (known after apply)
+ sandbox_config {
+ sandbox_type = (known after apply)
}
+ shielded_instance_config {
+ enable_integrity_monitoring = (known after apply)
+ enable_secure_boot = (known after apply)
}
+ workload_metadata_config {
+ node_metadata = (known after apply)
}
}
+ node_pool {
+ initial_node_count = (known after apply)
+ instance_group_urls = (known after apply)
+ max_pods_per_node = (known after apply)
+ name = (known after apply)
+ name_prefix = (known after apply)
+ node_count = (known after apply)
+ node_locations = (known after apply)
+ version = (known after apply)
+ autoscaling {
+ max_node_count = (known after apply)
+ min_node_count = (known after apply)
}
+ management {
+ auto_repair = (known after apply)
+ auto_upgrade = (known after apply)
}
+ node_config {
+ disk_size_gb = (known after apply)
+ disk_type = (known after apply)
+ guest_accelerator = (known after apply)
+ image_type = (known after apply)
+ labels = (known after apply)
+ local_ssd_count = (known after apply)
+ machine_type = (known after apply)
+ metadata = (known after apply)
+ min_cpu_platform = (known after apply)
+ oauth_scopes = (known after apply)
+ preemptible = (known after apply)
+ service_account = (known after apply)
+ tags = (known after apply)
+ taint = (known after apply)
+ sandbox_config {
+ sandbox_type = (known after apply)
}
+ shielded_instance_config {
+ enable_integrity_monitoring = (known after apply)
+ enable_secure_boot = (known after apply)
}
+ workload_metadata_config {
+ node_metadata = (known after apply)
}
}
+ upgrade_settings {
+ max_surge = (known after apply)
+ max_unavailable = (known after apply)
}
}
+ pod_security_policy_config {
+ enabled = (known after apply)
}
}
# google_container_node_pool.central1_experimental_nodes will be created
+ resource "google_container_node_pool" "central1_experimental_nodes" {
+ cluster = "coda-infra-central1"
+ id = (known after apply)
+ initial_node_count = (known after apply)
+ instance_group_urls = (known after apply)
+ location = "us-central1"
+ max_pods_per_node = (known after apply)
+ name = "coda-infra-central1"
+ name_prefix = (known after apply)
+ node_count = 1
+ node_locations = (known after apply)
+ project = (known after apply)
+ region = (known after apply)
+ version = (known after apply)
+ zone = (known after apply)
+ autoscaling {
+ max_node_count = 5
+ min_node_count = 0
}
+ management {
+ auto_repair = (known after apply)
+ auto_upgrade = (known after apply)
}
+ node_config {
+ disk_size_gb = 100
+ disk_type = (known after apply)
+ guest_accelerator = (known after apply)
+ image_type = (known after apply)
+ labels = (known after apply)
+ local_ssd_count = (known after apply)
+ machine_type = "n2d-standard-32"
+ metadata = {
+ "disable-legacy-endpoints" = "true"
}
+ oauth_scopes = [
+ "https://www.googleapis.com/auth/logging.write",
+ "https://www.googleapis.com/auth/monitoring",
]
+ preemptible = false
+ service_account = (known after apply)
+ taint = (known after apply)
+ sandbox_config {
+ sandbox_type = (known after apply)
}
+ shielded_instance_config {
+ enable_integrity_monitoring = (known after apply)
+ enable_secure_boot = (known after apply)
}
+ workload_metadata_config {
+ node_metadata = (known after apply)
}
}
+ upgrade_settings {
+ max_surge = (known after apply)
+ max_unavailable = (known after apply)
}
}
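A hedged sketch of the node-pool HCL behind a create like this one, again reconstructed from the plan values rather than copied from the real configuration; note that min_node_count = 0 lets the autoscaler drain the pool entirely:

    resource "google_container_node_pool" "central1_experimental_nodes" {
      name     = "coda-infra-central1"
      cluster  = google_container_cluster.coda_cluster_central1.name
      location = "us-central1"

      node_count = 1

      autoscaling {
        min_node_count = 0
        max_node_count = 5
      }

      node_config {
        machine_type = "n2d-standard-32"
        disk_size_gb = 100
        preemptible  = false

        metadata = {
          "disable-legacy-endpoints" = "true"
        }

        oauth_scopes = [
          "https://www.googleapis.com/auth/logging.write",
          "https://www.googleapis.com/auth/monitoring",
        ]
      }
    }

The primary pools planned below differ only in machine type (n1-standard-16), node_count, and the autoscaling ceiling. One detail worth flagging: both central1 pools are planned with the same name, coda-infra-central1, which GKE would reject at apply time since node-pool names must be unique within a cluster, so one of them presumably carries a different name in the real configuration.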
# google_container_node_pool.central1_primary_nodes will be created
+ resource "google_container_node_pool" "central1_primary_nodes" {
+ cluster = "coda-infra-central1"
+ id = (known after apply)
+ initial_node_count = (known after apply)
+ instance_group_urls = (known after apply)
+ location = "us-central1"
+ max_pods_per_node = (known after apply)
+ name = "coda-infra-central1"
+ name_prefix = (known after apply)
+ node_count = 4
+ node_locations = (known after apply)
+ project = (known after apply)
+ region = (known after apply)
+ version = (known after apply)
+ zone = (known after apply)
+ autoscaling {
+ max_node_count = 10
+ min_node_count = 0
}
+ management {
+ auto_repair = (known after apply)
+ auto_upgrade = (known after apply)
}
+ node_config {
+ disk_size_gb = 100
+ disk_type = (known after apply)
+ guest_accelerator = (known after apply)
+ image_type = (known after apply)
+ labels = (known after apply)
+ local_ssd_count = (known after apply)
+ machine_type = "n1-standard-16"
+ metadata = {
+ "disable-legacy-endpoints" = "true"
}
+ oauth_scopes = [
+ "https://www.googleapis.com/auth/logging.write",
+ "https://www.googleapis.com/auth/monitoring",
]
+ preemptible = false
+ service_account = (known after apply)
+ taint = (known after apply)
+ sandbox_config {
+ sandbox_type = (known after apply)
}
+ shielded_instance_config {
+ enable_integrity_monitoring = (known after apply)
+ enable_secure_boot = (known after apply)
}
+ workload_metadata_config {
+ node_metadata = (known after apply)
}
}
+ upgrade_settings {
+ max_surge = (known after apply)
+ max_unavailable = (known after apply)
}
}
# google_container_node_pool.east1_compute_nodes will be updated in-place
~ resource "google_container_node_pool" "east1_compute_nodes" {
cluster = "buildkite-infra-east1"
id = "projects/o1labs-192920/locations/us-east1/clusters/buildkite-infra-east1/nodePools/buildkite-east1-compute"
initial_node_count = 5
instance_group_urls = [
"https://www.googleapis.com/compute/v1/projects/o1labs-192920/zones/us-east1-b/instanceGroupManagers/gke-buildkite-infra--buildkite-east1--755607bc-grp",
"https://www.googleapis.com/compute/v1/projects/o1labs-192920/zones/us-east1-c/instanceGroupManagers/gke-buildkite-infra--buildkite-east1--b864bc42-grp",
"https://www.googleapis.com/compute/v1/projects/o1labs-192920/zones/us-east1-d/instanceGroupManagers/gke-buildkite-infra--buildkite-east1--2399be82-grp",
]
location = "us-east1"
name = "buildkite-east1-compute"
~ node_count = 2 -> 5
node_locations = [
"us-east1-b",
"us-east1-c",
"us-east1-d",
]
project = "o1labs-192920"
version = "1.15.12-gke.20"
autoscaling {
max_node_count = 5
min_node_count = 2
}
management {
auto_repair = true
auto_upgrade = true
}
node_config {
disk_size_gb = 500
disk_type = "pd-standard"
guest_accelerator = []
image_type = "COS"
labels = {}
local_ssd_count = 0
machine_type = "c2-standard-16"
metadata = {
"disable-legacy-endpoints" = "true"
}
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
preemptible = true
service_account = "default"
tags = []
taint = []
shielded_instance_config {
enable_integrity_monitoring = true
enable_secure_boot = false
}
}
upgrade_settings {
max_surge = 1
max_unavailable = 0
}
}
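The ~ entries mark an in-place update: for this pool only node_count changes (2 -> 5). Whether that comes from an edited attribute or from the autoscaler having drifted below the configured value, the plan simply sets the attribute back, staying inside the pool's own bounds. As a sketch, the corresponding line in the pool definition would be:

    # inside the buildkite-east1-compute pool definition (sketch)
    node_count = 5   # live state had 2; autoscaling allows 2..5

The buildkite-east4-compute pool below receives the identical change.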
# google_container_node_pool.east4_compute_nodes will be updated in-place
~ resource "google_container_node_pool" "east4_compute_nodes" {
cluster = "buildkite-infra-east4"
id = "projects/o1labs-192920/locations/us-east4/clusters/buildkite-infra-east4/nodePools/buildkite-east4-compute"
initial_node_count = 5
instance_group_urls = [
"https://www.googleapis.com/compute/v1/projects/o1labs-192920/zones/us-east4-a/instanceGroupManagers/gke-buildkite-infra--buildkite-east4--3ca6a41e-grp",
"https://www.googleapis.com/compute/v1/projects/o1labs-192920/zones/us-east4-b/instanceGroupManagers/gke-buildkite-infra--buildkite-east4--751977d8-grp",
"https://www.googleapis.com/compute/v1/projects/o1labs-192920/zones/us-east4-c/instanceGroupManagers/gke-buildkite-infra--buildkite-east4--9f799b55-grp",
]
location = "us-east4"
name = "buildkite-east4-compute"
~ node_count = 2 -> 5
node_locations = [
"us-east4-a",
"us-east4-b",
"us-east4-c",
]
project = "o1labs-192920"
version = "1.15.12-gke.20"
autoscaling {
max_node_count = 5
min_node_count = 2
}
management {
auto_repair = true
auto_upgrade = true
}
node_config {
disk_size_gb = 500
disk_type = "pd-standard"
guest_accelerator = []
image_type = "COS"
labels = {}
local_ssd_count = 0
machine_type = "c2-standard-16"
metadata = {
"disable-legacy-endpoints" = "true"
}
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
preemptible = true
service_account = "default"
tags = []
taint = []
shielded_instance_config {
enable_integrity_monitoring = true
enable_secure_boot = false
}
}
upgrade_settings {
max_surge = 1
max_unavailable = 0
}
}
# google_container_node_pool.east4_primary_nodes will be created
+ resource "google_container_node_pool" "east4_primary_nodes" {
+ cluster = "coda-infra-east4"
+ id = (known after apply)
+ initial_node_count = (known after apply)
+ instance_group_urls = (known after apply)
+ location = "us-east4"
+ max_pods_per_node = (known after apply)
+ name = "coda-infra-east4"
+ name_prefix = (known after apply)
+ node_count = 4
+ node_locations = (known after apply)
+ project = (known after apply)
+ region = (known after apply)
+ version = (known after apply)
+ zone = (known after apply)
+ autoscaling {
+ max_node_count = 15
+ min_node_count = 0
}
+ management {
+ auto_repair = (known after apply)
+ auto_upgrade = (known after apply)
}
+ node_config {
+ disk_size_gb = 100
+ disk_type = (known after apply)
+ guest_accelerator = (known after apply)
+ image_type = (known after apply)
+ labels = (known after apply)
+ local_ssd_count = (known after apply)
+ machine_type = "n1-standard-16"
+ metadata = {
+ "disable-legacy-endpoints" = "true"
}
+ oauth_scopes = [
+ "https://www.googleapis.com/auth/logging.write",
+ "https://www.googleapis.com/auth/monitoring",
]
+ preemptible = false
+ service_account = (known after apply)
+ taint = (known after apply)
+ sandbox_config {
+ sandbox_type = (known after apply)
}
+ shielded_instance_config {
+ enable_integrity_monitoring = (known after apply)
+ enable_secure_boot = (known after apply)
}
+ workload_metadata_config {
+ node_metadata = (known after apply)
}
}
+ upgrade_settings {
+ max_surge = (known after apply)
+ max_unavailable = (known after apply)
}
}
# google_container_node_pool.east_primary_nodes will be updated in-place
~ resource "google_container_node_pool" "east_primary_nodes" {
cluster = "coda-infra-east"
id = "projects/o1labs-192920/locations/us-east1/clusters/coda-infra-east/nodePools/coda-infra-east"
initial_node_count = 12
instance_group_urls = [
"https://www.googleapis.com/compute/v1/projects/o1labs-192920/zones/us-east1-b/instanceGroupManagers/gke-coda-infra-east-coda-infra-east-59129a85-grp",
"https://www.googleapis.com/compute/v1/projects/o1labs-192920/zones/us-east1-d/instanceGroupManagers/gke-coda-infra-east-coda-infra-east-af71a930-grp",
"https://www.googleapis.com/compute/v1/projects/o1labs-192920/zones/us-east1-c/instanceGroupManagers/gke-coda-infra-east-coda-infra-east-de417094-grp",
]
location = "us-east1"
name = "coda-infra-east"
~ node_count = 10 -> 4
node_locations = [
"us-east1-b",
"us-east1-c",
"us-east1-d",
]
project = "o1labs-192920"
version = "1.15.12-gke.20"
~ autoscaling {
~ max_node_count = 30 -> 15
~ min_node_count = 10 -> 0
}
management {
auto_repair = false
auto_upgrade = true
}
node_config {
disk_size_gb = 500
disk_type = "pd-standard"
guest_accelerator = []
image_type = "COS"
labels = {}
local_ssd_count = 0
machine_type = "n1-standard-16"
metadata = {
"disable-legacy-endpoints" = "true"
}
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
preemptible = false
service_account = "default"
tags = []
taint = []
shielded_instance_config {
enable_integrity_monitoring = true
enable_secure_boot = false
}
}
upgrade_settings {
max_surge = 1
max_unavailable = 0
}
}
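This update scales the coda-infra-east pool down: node_count drops from 10 to 4, the autoscaling ceiling halves from 30 to 15, and the floor falls from 10 to 0. As a sketch, the corresponding edit in the pool definition would be:

    node_count = 4          # was 10

    autoscaling {
      min_node_count = 0    # was 10; the pool may now scale to zero
      max_node_count = 15   # was 30
    }

Because min_node_count falls to 0, the autoscaler is free to remove the remaining nodes entirely when the cluster is idle.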
# helm_release.central1_prometheus will be created
+ resource "helm_release" "central1_prometheus" {
+ atomic = false
+ chart = "stable/prometheus"
+ cleanup_on_fail = false
+ create_namespace = false
+ dependency_update = false
+ disable_crd_hooks = false
+ disable_openapi_validation = false
+ disable_webhooks = false
+ force_update = true
+ id = (known after apply)
+ lint = false
+ max_history = 0
+ metadata = (known after apply)
+ name = "central1-prometheus"
+ namespace = "default"
+ recreate_pods = false
+ render_subchart_notes = true
+ replace = false
+ reset_values = false
+ reuse_values = false
+ skip_crds = false
+ status = "deployed"
+ timeout = 300
+ values = [
+ <<~EOT
"server":
"global":
"external_labels":
"origin_prometheus": "central1-prometheus"
"persistentVolume":
"size": "50Gi"
"remoteWrite":
- "basic_auth":
"password": "eyJrIjoiZDEwYzcwMzAwMTJkZGNlODNlMmU2NTRiOTAxZGUwY2JhMDZjYjNlOCIsIm4iOiJjb2RhLXNlcnZpY2VzIiwiaWQiOjI2MDI2Nn0"
"username": "8245"
"url": "https://prometheus-us-central1.grafana.net/api/prom/push"
"write_relabel_configs":
- "action": "keep"
"regex": "(container.*|Coda.*)"
"source_labels":
- "__name__"
EOT,
]
+ verify = false
+ version = "11.12.1"
+ wait = true
}
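A sketch of the helm_release HCL behind this plan entry. The scalar attributes are read off the plan; the wiring of the remote-write credentials is an assumption, since they appear inline in the rendered values above but, given the aws_secretsmanager_secret_version data source refreshed at the top, are presumably interpolated from Secrets Manager rather than hard-coded:

    resource "helm_release" "central1_prometheus" {
      name         = "central1-prometheus"
      chart        = "stable/prometheus"
      version      = "11.12.1"
      namespace    = "default"
      force_update = true

      values = [
        <<-EOT
        "server":
          "global":
            "external_labels":
              "origin_prometheus": "central1-prometheus"
          "persistentVolume":
            "size": "50Gi"
          # remoteWrite url and basic_auth omitted from this sketch; they
          # presumably come from the Secrets Manager entry refreshed above
        EOT
      ]
    }

east4_prometheus below is identical apart from its name and origin_prometheus label. Note that the plan renders the Grafana Cloud basic_auth password in clear text, so output like this should not be shared unredacted.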
# helm_release.east4_prometheus will be created
+ resource "helm_release" "east4_prometheus" {
+ atomic = false
+ chart = "stable/prometheus"
+ cleanup_on_fail = false
+ create_namespace = false
+ dependency_update = false
+ disable_crd_hooks = false
+ disable_openapi_validation = false
+ disable_webhooks = false
+ force_update = true
+ id = (known after apply)
+ lint = false
+ max_history = 0
+ metadata = (known after apply)
+ name = "east4-prometheus"
+ namespace = "default"
+ recreate_pods = false
+ render_subchart_notes = true
+ replace = false
+ reset_values = false
+ reuse_values = false
+ skip_crds = false
+ status = "deployed"
+ timeout = 300
+ values = [
+ <<~EOT
"server":
"global":
"external_labels":
"origin_prometheus": "east4-prometheus"
"persistentVolume":
"size": "50Gi"
"remoteWrite":
- "basic_auth":
"password": "eyJrIjoiZDEwYzcwMzAwMTJkZGNlODNlMmU2NTRiOTAxZGUwY2JhMDZjYjNlOCIsIm4iOiJjb2RhLXNlcnZpY2VzIiwiaWQiOjI2MDI2Nn0"
"username": "8245"
"url": "https://prometheus-us-central1.grafana.net/api/prom/push"
"write_relabel_configs":
- "action": "keep"
"regex": "(container.*|Coda.*)"
"source_labels":
- "__name__"
EOT,
]
+ verify = false
+ version = "11.12.1"
+ wait = true
}
Plan: 7 to add, 3 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.