Question
I want to share a Terraform script that will be used across different projects. I know how to create and share modules, but this setup has a big annoyance: when I reference a module in a script and perform a `terraform apply`, the module's resource is created if it does not exist, but if I perform a `terraform destroy`, that resource is destroyed as well.
If I have two projects dependent on the same module and I call `terraform destroy` in one of them, it may lead to an inconsistent state, since the module is still being used by the other project. The run will either fail because it cannot destroy the resource, or it will destroy the resource and break the other project.
In my scenario, I want to share network scripts between two projects, and I want the network resources to never be destroyed. I cannot create a separate project just for this resource, because I would need to reference it somehow from my projects, and the only way to do that is via its ID, which I cannot know in advance.
`prevent_destroy` is also not an option, since I do need to destroy every resource except the shared one, and that setting makes `terraform destroy` fail.
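(For context, `prevent_destroy` is a lifecycle flag set on a resource; the resource type and names below are just illustrative:)

```
resource "aws_vpc" "shared" {
  cidr_block = "10.0.0.0/16"

  lifecycle {
    # Terraform refuses to plan any destroy of this resource,
    # which aborts the entire "terraform destroy" run.
    prevent_destroy = true
  }
}
```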
Is there any way to reference the resource, like by its name, or is there any other better approach to accomplish what I want?
Answer 1:
If I understand you correctly, you have some resource `R` that is a "singleton". That is, only one instance of `R` can ever exist in your AWS account. For example, you can only ever have one `aws_route53_zone` with the name "foo.com". If you include `R` as a module in two different places, then either one may create it when you run `terraform apply`, and either one may delete it when you run `terraform destroy`. You'd like to avoid that, but you still need some way to get an output attribute from `R` (e.g. the `zone_id` for an `aws_route53_zone` resource is generated by AWS, so you can't guess it).
If that's the case, then instead of using `R` as a module, you should:
- Create `R` by itself in its own set of Terraform templates. Let's say those are under `/terraform/R`.
- Configure `/terraform/R` to use Remote State. For example, here is how you can configure those templates to store their remote state in an S3 bucket (you'll need to fill in the bucket name/region as indicated):

```
terraform remote config \
    -backend=s3 \
    -backend-config="bucket=(YOUR BUCKET NAME)" \
    -backend-config="key=terraform.tfstate" \
    -backend-config="region=(YOUR BUCKET REGION)" \
    -backend-config="encrypt=true"
```
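(If you are on Terraform 0.9 or later, the `terraform remote config` command no longer exists; the same remote state is configured declaratively with a `backend` block inside the templates instead:)

```
terraform {
  backend "s3" {
    bucket  = "(YOUR BUCKET NAME)"
    key     = "terraform.tfstate"
    region  = "(YOUR BUCKET REGION)"
    encrypt = true
  }
}
```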
- Define any output attributes you need from `R` as output variables. For example:

```
output "zone_id" {
  value = "${aws_route53_zone.example.zone_id}"
}
```
- When you run `terraform apply` in `/terraform/R`, it will store its Terraform state, including that output, in an S3 bucket. Now, in all other Terraform templates that need that output attribute from `R`, you can pull it in from the S3 bucket using the `terraform_remote_state` data source. For example, let's say you had some template `/terraform/foo` that needed that `zone_id` parameter to create an `aws_route53_record` (you'll need to fill in the bucket name/region as indicated):

```
data "terraform_remote_state" "r" {
  backend = "s3"
  config {
    bucket = "(YOUR BUCKET NAME)"
    key    = "terraform.tfstate"
    region = "(YOUR BUCKET REGION)"
  }
}

resource "aws_route53_record" "www" {
  zone_id = "${data.terraform_remote_state.r.zone_id}"
  name    = "www.foo.com"
  type    = "A"
  ttl     = "300"
  records = ["${aws_eip.lb.public_ip}"]
}
```
- Note that `terraform_remote_state` is a read-only data source. That means when you run `terraform apply` or `terraform destroy` on any templates that use it, they will not have any effect in `R`.
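(On Terraform 0.12 and later, the syntax above changes slightly: `config` takes an `=`, and remote outputs are read through an `outputs` attribute rather than directly off the data source. The equivalent of the earlier snippet would be:)

```
data "terraform_remote_state" "r" {
  backend = "s3"
  config = {
    bucket = "(YOUR BUCKET NAME)"
    key    = "terraform.tfstate"
    region = "(YOUR BUCKET REGION)"
  }
}

resource "aws_route53_record" "www" {
  zone_id = data.terraform_remote_state.r.outputs.zone_id
  name    = "www.foo.com"
  type    = "A"
  ttl     = 300
  records = [aws_eip.lb.public_ip]
}
```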
For more info, check out How to manage terraform state and Terraform: Up & Running.
Source: https://stackoverflow.com/questions/39900522/how-to-share-a-terraform-script-without-module-dependencies