Terraform

Kubernetes secret with Flux and Terraform

Submitted by 拥有回忆 on 2021-01-29 12:38:14

Question: I am new to Terraform and DevOps in general. First I need to get an SSH key from a URL into known hosts, to later use for Flux.

```hcl
data "helm_repository" "fluxcd" {
  name = "fluxcd"
  url  = "https://charts.fluxcd.io"
}

resource "helm_release" "flux" {
  name       = "flux"
  namespace  = "flux"
  repository = data.helm_repository.fluxcd.metadata[0].name
  chart      = "flux"

  set {
    name  = "git.url"
    value = "git.project"
  }

  set {
    name  = "git.secretName"
    value = "flux-git-deploy"
  }

  set {
    name = "syncGarbageCollection.enabled"
```
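The chart's git.secretName setting points at a Kubernetes secret that holds the deploy key and known hosts. A minimal sketch of creating that secret from Terraform, assuming the http data source for fetching the host key, an already-configured kubernetes provider, and a hypothetical URL (none of these appear in the question):

```hcl
# Hypothetical endpoint serving the SSH host key (assumption, not from the question)
data "http" "known_hosts" {
  url = "https://example.com/known_hosts"
}

# Secret the Flux chart consumes via git.secretName = "flux-git-deploy"
resource "kubernetes_secret" "flux_git_deploy" {
  metadata {
    name      = "flux-git-deploy"
    namespace = "flux"
  }

  data = {
    # known_hosts pins the Git host; an "identity" key would hold the private deploy key
    known_hosts = data.http.known_hosts.body
  }
}
```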

Configure subnets using terraform cidrsubnet

Submitted by 孤者浪人 on 2021-01-29 07:10:34

Question: I am trying to create two subnets using the cidrsubnet function that Terraform supports. The VPC CIDR I have is "10.32.0.0/16", and I am trying to get subnets 10.32.1.0/27 and 10.32.3.0/27. I am having trouble with the cidrsubnet arguments needed to achieve this. What I have so far is cidrsubnet("10.32.0.0/16", 11, ???) — I do not understand what value I need for the netnum in order to get the result I want. Any explanation of this part of the function would be helpful. I've tried
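Working through the question's numbers: a /27 carved out of a /16 needs 27 − 16 = 11 extra bits (hence newbits = 11), and netnum is simply which /27-sized block you want, counting from 10.32.0.0 in steps of 32 addresses. A sketch:

```hcl
locals {
  vpc_cidr = "10.32.0.0/16"

  # 10.32.1.0 is 256 addresses past 10.32.0.0; 256 / 32 = 8
  subnet_a = cidrsubnet(local.vpc_cidr, 11, 8) # yields "10.32.1.0/27"

  # 10.32.3.0 is 768 addresses past 10.32.0.0; 768 / 32 = 24
  subnet_b = cidrsubnet(local.vpc_cidr, 11, 24) # yields "10.32.3.0/27"
}
```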

How to share a terraform script without module dependencies

Submitted by 感情迁移 on 2021-01-29 04:20:29

Question: I want to share a Terraform script that will be used across different projects. I know how to create and share modules, but this setup has a big annoyance: when I reference a module in a script and run terraform apply, the module's resource is created if it does not exist, but if I run terraform destroy that resource is destroyed as well. If two projects depend on the same module and one of them runs terraform destroy, it may lead to an inconsistent state,
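One common way around this (a pattern sketch, not taken from the question) is to give the shared resources their own root configuration and state, and have each project read them through a terraform_remote_state data source; a consumer's terraform destroy then never owns, and never destroys, the shared pieces. Assuming an S3 backend and a hypothetical subnet_id output:

```hcl
# Read-only view of the shared project's state (bucket/key/region are assumptions)
data "terraform_remote_state" "shared" {
  backend = "s3"
  config = {
    bucket = "my-tf-state"
    key    = "shared/terraform.tfstate"
    region = "us-east-1"
  }
}

# Reference the shared output instead of declaring the module locally
resource "aws_instance" "app" {
  subnet_id = data.terraform_remote_state.shared.outputs.subnet_id
  # ... remaining arguments (ami, instance_type, etc.) omitted
}
```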

Terraform - access root module script from child module

Submitted by 限于喜欢 on 2021-01-29 04:04:42

Question: I have a ROOT_MODULE with main.tf:

```hcl
# Root module - just run the script
resource "null_resource" "example" {
  provisioner "local-exec" {
    command = "./script.sh"
  }
}
```

and script.sh:

```sh
echo "Hello world"
```

Now I have another directory elsewhere where I've created a CHILD_MODULE with another main.tf:

```hcl
# Child module
module "ROOT_MODULE" {
  source = "gitlabURL/ROOT_MODULE"
}
```

I've exported my plan file with terraform plan -out="planfile"; however, when I do terraform apply against the planfile, the directory I am
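Since the question is cut off, this is a guess at the failure mode, but relative paths like ./script.sh resolve against the directory terraform is run from, not against the module's own directory. The usual fix is to anchor the path with path.module:

```hcl
resource "null_resource" "example" {
  provisioner "local-exec" {
    # path.module is the directory containing this module's .tf files,
    # so the script is found even when the module is called from elsewhere
    command = "${path.module}/script.sh"
  }
}
```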

Terraform cyclic dependency

Submitted by  ̄綄美尐妖づ on 2021-01-28 20:21:17

Question: I'm trying to instantiate 3 aws_instances that are aware of each other's IP addresses via Terraform. This, of course, results in a cyclic dependency, and I was wondering what the best way to overcome this problem is. I've tried a couple of solutions: instantiate 2 instances together, then 1 instance that depends on those 2; the third instance has a user_data script that lets it SSH into the other 2 instances to set up the necessary configs. It works, but I don't like the fact
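Another way to break the cycle (not among the attempts listed above) is to allocate the network interfaces first as standalone resources; every instance then depends only on the interfaces, whose private IPs exist before any instance does, so no instance references another. A sketch with assumed variables and template file:

```hcl
# Pre-create one ENI per node; the private IPs are known up front
resource "aws_network_interface" "node" {
  count     = 3
  subnet_id = var.subnet_id # assumed variable
}

resource "aws_instance" "node" {
  count         = 3
  ami           = var.ami_id # assumed variable
  instance_type = "t3.micro"

  network_interface {
    network_interface_id = aws_network_interface.node[count.index].id
    device_index         = 0
  }

  # All peer IPs are available without referencing the other instances
  user_data = templatefile("${path.module}/init.sh.tpl", {
    peer_ips = aws_network_interface.node[*].private_ip
  })
}
```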

AWS Terraform tried to destroy and rebuild RDS cluster

Submitted by 六眼飞鱼酱① on 2021-01-28 19:52:54

Question: I have an RDS cluster I built using Terraform, currently running with deletion protection. When I update my Terraform script for something (for example, a security group change) and apply it to the environment, it always tries to tear down and rebuild the RDS cluster. With deletion protection on, the rebuild is stopped, but terraform apply fails because it cannot destroy the cluster. How can I keep the existing RDS cluster without rebuilding it every time I run my script
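The usual mitigations live in a lifecycle block: prevent_destroy makes any plan that would replace the cluster fail loudly instead of destroying it, and ignore_changes suppresses diffs on whichever attribute the plan reports as "forces new resource". A sketch with assumed names; which attribute belongs in ignore_changes depends on what your plan output actually flags:

```hcl
resource "aws_rds_cluster" "main" {
  cluster_identifier  = "my-cluster" # assumed
  engine              = "aurora-mysql"
  deletion_protection = true

  lifecycle {
    # Fail the plan rather than destroy/recreate the cluster
    prevent_destroy = true

    # Example only: availability_zones is a common replacement trigger;
    # substitute whatever attribute your plan reports
    ignore_changes = [availability_zones]
  }
}
```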

Error listing tags for RDS DB Cluster Snapshot

Submitted by 风流意气都作罢 on 2021-01-28 11:37:34

Question: I have a workflow that looks like this:

[Production]
1. Snapshot the cluster
2. Share the snapshot to Staging

[Staging]
3. Create a new cluster from the shared snapshot

I'm using Terraform, so my config looks like this (other attributes and resources excluded for brevity):

```hcl
data "aws_db_cluster_snapshot" "development_final_snapshot" {
  db_cluster_identifier = "arn:prod_id:my_cluster"
  include_shared        = true
  most_recent           = true
  snapshot_type         = "shared"
}

resource "aws_rds_cluster" "aurora" {
  snapshot_identifier = "$
```

Terraform GCP vm instance create - Error 403

Submitted by ▼魔方 西西 on 2021-01-28 09:20:37

Question: This is my first try at creating a VM on GCP through Terraform. Here are the two files I created.

provider.tf:

```hcl
provider "google" {
  credentials = "${file("xxxxxx.json")}"
  project     = "project-1-200623"
  region      = "us-central1"
}
```

compute.tf:

```hcl
# Create a new instance
resource "google_compute_instance" "default" {
  name         = "test"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-8"
    }
  }

  network_interface {
    network = "default"
    access_config {}
```

How do I create N VMs with M disks created and attached per VM?

Submitted by China☆狼群 on 2021-01-28 08:34:08

Question:

```hcl
resource "azurerm_windows_virtual_machine" "virtual_machine" {
  count = var.vm_count
  name  = "${local.vm_name}${count.index + 1}"
  # ...
}

resource "azurerm_virtual_machine_data_disk_attachment" "datadisk01" {
  count              = var.disk_count
  virtual_machine_id = azurerm_windows_virtual_machine.virtual_machine[count.index].id
  managed_disk_id    = element(module.DISK.datadisk_id, count.index)
}
```

Issue: I have 2 different count variables, vm_count and disk_count. I want a generic solution, e.g. if vm count is
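A common way to get N × M attachments (a sketch under assumptions, since the question is cut off: it presumes one managed disk resource instance per attachment) is a single count of vm_count * disk_count with index arithmetic:

```hcl
resource "azurerm_virtual_machine_data_disk_attachment" "attach" {
  count = var.vm_count * var.disk_count

  # Integer division picks the VM; modulo picks the disk slot (LUN) on that VM
  virtual_machine_id = azurerm_windows_virtual_machine.virtual_machine[floor(count.index / var.disk_count)].id
  managed_disk_id    = azurerm_managed_disk.data[count.index].id # assumed disk resource
  lun                = count.index % var.disk_count
  caching            = "ReadWrite"
}
```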

Terraform: Referencing resources created in for_each in another resource

Submitted by 别说谁变了你拦得住时间么 on 2021-01-28 08:12:00

Question: When I had a single hosted zone, it was easy to create the zone and then create the NS records for it in the delegating account by referencing the hosted zone by name. Edit: to avoid confusion, this is what I wanted to achieve, but for multiple hosted zones, where the owner of the domain is a management account: https://dev.to/arswaw/create-a-subdomain-in-amazon-route53-in-2-minutes-3hf0 Now I need to create multiple hosted zones and pass the nameserver records back to the parent
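A sketch of one shape this can take, with for_each over a set of subdomain names and the NS delegation records created in the management account's parent zone through a provider alias (the variable names and the alias are assumptions):

```hcl
resource "aws_route53_zone" "sub" {
  for_each = toset(var.subdomains) # assumed, e.g. ["dev.example.com", "qa.example.com"]
  name     = each.value
}

# Delegation records live in the parent zone owned by the management account
resource "aws_route53_record" "delegation" {
  for_each = aws_route53_zone.sub
  provider = aws.management # assumed provider alias for the management account

  zone_id = var.parent_zone_id # assumed: parent zone in the management account
  name    = each.value.name
  type    = "NS"
  ttl     = 300
  records = each.value.name_servers
}
```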