Terraform (0.12.29) import not working as expected; import succeeded but plan shows destroy & recreate
Posted: 2021-02-21 18:49:50

Some background:
We have Terraform code that creates various AWS resources. Some of these resources are created once per AWS account, and therefore live in the account-scope folder of our project. At the time we had only one AWS region. Our application is now multi-region, so these resources will be created per region for each AWS account.
To do this, we have moved these TF scripts into a region-scope folder, which is run once per region. Since these resources no longer belong to the "account scope", we have removed them from the account-scope Terraform state.
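(For context, removing resources from the account-scope state without destroying them in AWS is typically done with terraform state rm; the addresses below are illustrative examples based on the resources mentioned later in the question, not the exact commands used:)

```shell
# Run from the account-scope directory; removes the resources from the
# state file only -- the real AWS resources are left untouched.
terraform state rm module.buckets.aws_s3_bucket.cloudtrail_logging_bucket
terraform state rm module.buckets.random_id.cloudtrail_bucket_suffix
```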
Now I am trying to import these resources by running the following from the xyz-region-scope directory:
terraform import -var-file=config/us-west-2/default.tfvars -var-file=variables.tfvars -var-file=../globals.tfvars -var profile=xyz-stage -var region=us-west-2 -var tfstate_bucket=ab-xyz-stage-tfstate-5b8873b8 -no-color <RESOURCE_NAME> <RESOURCE_ID>
An example of one such resource:
RESOURCE_NAME=module.buckets.aws_s3_bucket.cloudtrail_logging_bucket
RESOURCE_ID="ab-xyz-stage-cloudtrail-logging-72a2c5cd"
I expected the import to update the resources in the Terraform state file on my local machine, but the state file created under xyz-region-scope/state/xyz-stage/terraform.tfstate was not updated.
I verified the import with:
terraform show
Then I ran terraform plan:
terraform plan -var-file=config/us-west-2/default.tfvars -var-file=variables.tfvars -var-file=../globals.tfvars -var profile=xyz-stage -var region=us-west-2 -var tfstate_bucket=ab-xyz-stage-tfstate-5b8873b8 -no-color
But the terraform plan output shows Plan: 6 to add, 0 to change, 5 to destroy. — in other words, these resources will be destroyed and recreated.
It is not clear to me why this is happening. Am I missing something, or doing something wrong?
Note that we store remote state in an S3 bucket, but I have not yet created a remote TF state file in the S3 bucket for the region scope (I do have one for the account scope). I expected the Import..Plan..Apply process to create one for the region scope as well.
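(For reference, a region-scope S3 backend would typically be declared along these lines; the key and bucket values here are illustrative assumptions, not taken from the question:)

```hcl
terraform {
  backend "s3" {
    bucket = "ab-xyz-stage-tfstate-5b8873b8"               # per-account state bucket
    key    = "region-scope/us-west-2/terraform.tfstate"    # one state object per region
    region = "us-west-2"
  }
}
```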
Edit: After running the import, I do now see a remote TF state file created in S3 for the region scope. One difference I see between this new region-scope tf state file and the old account-scope one: the new file does not have any "depends_on" blocks under resources[] > instances[] for any resource.
Environment:
Local machine: macOS v10.14.6
Terraform v0.12.29
+ provider.aws v3.14.1
+ provider.null v2.1.2
+ provider.random v2.3.1
+ provider.template v2.1.2
Edit 2:
Here are my imports and my terraform plan:
terraform import module.buckets.random_id.cloudtrail_bucket_suffix cqLFzQ
terraform import module.buckets.aws_s3_bucket.cloudtrail_logging_bucket "ab-xyz-stage-cloudtrail-logging-72a2c5cd"
terraform import module.buckets.aws_s3_bucket_policy.cloudtrail_logging_bucket "ab-xyz-stage-cloudtrail-logging-72a2c5cd"
terraform import module.buckets.module.access_logging_bucket.aws_s3_bucket.default "ab-xyz-stage-access-logging-9d8e94ff"
terraform import module.buckets.module.access_logging_bucket.random_id.bucket_suffix nY6U_w
terraform import module.encryption.module.data_key.aws_iam_policy.decrypt "arn:aws:iam::123412341234:policy/ab_data_key_xyz_stage_decrypt"
terraform import module.encryption.module.data_key.aws_iam_policy.encrypt "arn:aws:iam::123412341234:policy/ab_data_key_xyz_stage_encrypt"
mymachine:xyz-region-scope kuldeepjain$ ../scripts/terraform.sh xyz-stage plan -no-color
+ set -o posix
+ IFS='
'
++ blhome
+ BASH_LIB_HOME=/usr/local/lib/mycompany/ab/bash_library/0.0.1-SNAPSHOT
+ source /usr/local/lib/mycompany/ab/bash_library/0.0.1-SNAPSHOT/s3/bucket.sh
+ main xyz-stage plan -no-color
+ '[' 3 -lt 2 ']'
+ local env=xyz-stage
+ shift
+ local command=plan
+ shift
++ get_region xyz-stage
++ local env=xyz-stage
++ shift
+++ aws --profile xyz-stage configure get region
++ local region=us-west-2
++ '[' -z us-west-2 ']'
++ echo us-west-2
+ local region=us-west-2
++ _get_bucket xyz-stage xyz-stage-tfstate
++ local env=xyz-stage
++ shift
++ local name=xyz-stage-tfstate
++ shift
+++ _get_bucket_list xyz-stage xyz-stage-tfstate
+++ local env=xyz-stage
+++ shift
+++ local name=xyz-stage-tfstate
+++ shift
+++ aws --profile xyz-stage --output json s3api list-buckets --query 'Buckets[?contains(Name, `xyz-stage-tfstate`) == `true`].Name'
++ local 'bucket_list=[
"ab-xyz-stage-tfstate-5b8873b8"
]'
+++ _count_buckets_in_json '[
"ab-xyz-stage-tfstate-5b8873b8"
]'
+++ local 'json=[
"ab-xyz-stage-tfstate-5b8873b8"
]'
+++ shift
+++ echo '[
"ab-xyz-stage-tfstate-5b8873b8"
]'
+++ jq '. | length'
++ local number_of_buckets=1
++ '[' 1 == 0 ']'
++ '[' 1 -gt 1 ']'
+++ echo '[
"ab-xyz-stage-tfstate-5b8873b8"
]'
+++ jq -r '.[0]'
++ local bucket_name=ab-xyz-stage-tfstate-5b8873b8
++ echo ab-xyz-stage-tfstate-5b8873b8
+ local tfstate_bucket=ab-xyz-stage-tfstate-5b8873b8
++ get_config_file xyz-stage us-west-2
++ local env=xyz-stage
++ shift
++ local region=us-west-2
++ shift
++ local config_file=config/us-west-2/xyz-stage.tfvars
++ '[' '!' -f config/us-west-2/xyz-stage.tfvars ']'
++ config_file=config/us-west-2/default.tfvars
++ echo config/us-west-2/default.tfvars
+ local config_file=config/us-west-2/default.tfvars
+ export TF_DATA_DIR=state/xyz-stage/
+ TF_DATA_DIR=state/xyz-stage/
+ terraform get
+ terraform plan -var-file=config/us-west-2/default.tfvars -var-file=variables.tfvars -var-file=../globals.tfvars -var profile=xyz-stage -var region=us-west-2 -var tfstate_bucket=ab-xyz-stage-tfstate-5b8873b8 -no-color
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
module.encryption.module.data_key.data.null_data_source.key: Refreshing state...
module.buckets.data.template_file.dependencies: Refreshing state...
module.buckets.module.access_logging_bucket.data.template_file.dependencies: Refreshing state...
module.encryption.module.data_key.data.aws_region.current: Refreshing state...
module.buckets.module.access_logging_bucket.data.aws_caller_identity.current: Refreshing state...
data.aws_caller_identity.current: Refreshing state...
module.buckets.module.access_logging_bucket.data.aws_kms_alias.encryption_key_alias: Refreshing state...
module.buckets.data.aws_caller_identity.current: Refreshing state...
module.encryption.module.data_key.data.aws_caller_identity.current: Refreshing state...
module.encryption.module.data_key.data.aws_kms_alias.default: Refreshing state...
module.buckets.module.access_logging_bucket.data.template_file.encryption_configuration: Refreshing state...
module.encryption.module.data_key.data.aws_iam_policy_document.decrypt: Refreshing state...
module.encryption.module.data_key.data.aws_iam_policy_document.encrypt: Refreshing state...
module.buckets.module.access_logging_bucket.random_id.bucket_suffix: Refreshing state... [id=nY6U_w]
module.encryption.module.data_key.aws_iam_policy.decrypt: Refreshing state... [id=arn:aws:iam::123412341234:policy/ab_data_key_xyz_stage_decrypt]
module.encryption.module.data_key.aws_iam_policy.encrypt: Refreshing state... [id=arn:aws:iam::123412341234:policy/ab_data_key_xyz_stage_encrypt]
module.buckets.module.access_logging_bucket.aws_s3_bucket.default: Refreshing state... [id=ab-xyz-stage-access-logging-9d8e94ff]
module.buckets.random_id.cloudtrail_bucket_suffix: Refreshing state... [id=cqLFzQ]
module.buckets.aws_s3_bucket.cloudtrail_logging_bucket: Refreshing state... [id=ab-xyz-stage-cloudtrail-logging-72a2c5cd]
module.buckets.data.aws_iam_policy_document.restrict_access_cloudtrail: Refreshing state...
module.buckets.aws_s3_bucket_policy.cloudtrail_logging_bucket: Refreshing state... [id=ab-xyz-stage-cloudtrail-logging-72a2c5cd]
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
-/+ destroy and then create replacement
<= read (data resources)
Terraform will perform the following actions:
# module.buckets.data.aws_iam_policy_document.restrict_access_cloudtrail will be read during apply
# (config refers to values not yet known)
<= data "aws_iam_policy_document" "restrict_access_cloudtrail"
+ id = (known after apply)
+ json = (known after apply)
+ statement
+ actions = [
+ "s3:GetBucketAcl",
]
+ effect = "Allow"
+ resources = [
+ (known after apply),
]
+ sid = "AWSCloudTrailAclCheck"
+ principals
+ identifiers = [
+ "cloudtrail.amazonaws.com",
]
+ type = "Service"
+ statement
+ actions = [
+ "s3:PutObject",
]
+ effect = "Allow"
+ resources = [
+ (known after apply),
]
+ sid = "AWSCloudTrailWrite"
+ condition
+ test = "StringEquals"
+ values = [
+ "bucket-owner-full-control",
]
+ variable = "s3:x-amz-acl"
+ principals
+ identifiers = [
+ "cloudtrail.amazonaws.com",
]
+ type = "Service"
# module.buckets.aws_s3_bucket.cloudtrail_logging_bucket must be replaced
-/+ resource "aws_s3_bucket" "cloudtrail_logging_bucket"
+ acceleration_status = (known after apply)
+ acl = "private"
~ arn = "arn:aws:s3:::ab-xyz-stage-cloudtrail-logging-72a2c5cd" -> (known after apply)
~ bucket = "ab-xyz-stage-cloudtrail-logging-72a2c5cd" -> (known after apply) # forces replacement
~ bucket_domain_name = "ab-xyz-stage-cloudtrail-logging-72a2c5cd.s3.amazonaws.com" -> (known after apply)
~ bucket_regional_domain_name = "ab-xyz-stage-cloudtrail-logging-72a2c5cd.s3.us-west-2.amazonaws.com" -> (known after apply)
+ force_destroy = false
~ hosted_zone_id = "Z3BJ6K6RIION7M" -> (known after apply)
~ id = "ab-xyz-stage-cloudtrail-logging-72a2c5cd" -> (known after apply)
~ region = "us-west-2" -> (known after apply)
~ request_payer = "BucketOwner" -> (known after apply)
tags =
"mycompany:finance:accountenvironment" = "xyz-stage"
"mycompany:finance:application" = "ab-platform"
"mycompany:finance:billablebusinessunit" = "my-dev"
"name" = "Cloudtrail logging bucket"
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
~ lifecycle_rule
- abort_incomplete_multipart_upload_days = 0 -> null
enabled = true
~ id = "intu-lifecycle-s3-int-tier" -> (known after apply)
- tags = -> null
transition
days = 32
storage_class = "INTELLIGENT_TIERING"
- logging
- target_bucket = "ab-xyz-stage-access-logging-9d8e94ff" -> null
- target_prefix = "logs/cloudtrail-logging/" -> null
+ logging
+ target_bucket = (known after apply)
+ target_prefix = "logs/cloudtrail-logging/"
~ versioning
~ enabled = false -> (known after apply)
~ mfa_delete = false -> (known after apply)
# module.buckets.aws_s3_bucket_policy.cloudtrail_logging_bucket must be replaced
-/+ resource "aws_s3_bucket_policy" "cloudtrail_logging_bucket"
~ bucket = "ab-xyz-stage-cloudtrail-logging-72a2c5cd" -> (known after apply) # forces replacement
~ id = "ab-xyz-stage-cloudtrail-logging-72a2c5cd" -> (known after apply)
~ policy = jsonencode(
- Statement = [
-
- Action = "s3:GetBucketAcl"
- Effect = "Allow"
- Principal =
- Service = "cloudtrail.amazonaws.com"
- Resource = "arn:aws:s3:::ab-xyz-stage-cloudtrail-logging-72a2c5cd"
- Sid = "AWSCloudTrailAclCheck"
,
-
- Action = "s3:PutObject"
- Condition =
- StringEquals =
- s3:x-amz-acl = "bucket-owner-full-control"
- Effect = "Allow"
- Principal =
- Service = "cloudtrail.amazonaws.com"
- Resource = "arn:aws:s3:::ab-xyz-stage-cloudtrail-logging-72a2c5cd/*"
- Sid = "AWSCloudTrailWrite"
,
]
- Version = "2012-10-17"
) -> (known after apply)
# module.buckets.random_id.cloudtrail_bucket_suffix must be replaced
-/+ resource "random_id" "cloudtrail_bucket_suffix"
~ b64 = "cqLFzQ" -> (known after apply)
~ b64_std = "cqLFzQ==" -> (known after apply)
~ b64_url = "cqLFzQ" -> (known after apply)
byte_length = 4
~ dec = "1923270093" -> (known after apply)
~ hex = "72a2c5cd" -> (known after apply)
~ id = "cqLFzQ" -> (known after apply)
+ keepers =
+ "aws_account_id" = "123412341234"
+ "env" = "xyz-stage"
# forces replacement
# module.buckets.module.access_logging_bucket.aws_s3_bucket.default must be replaced
-/+ resource "aws_s3_bucket" "default"
+ acceleration_status = (known after apply)
+ acl = "log-delivery-write"
~ arn = "arn:aws:s3:::ab-xyz-stage-access-logging-9d8e94ff" -> (known after apply)
~ bucket = "ab-xyz-stage-access-logging-9d8e94ff" -> (known after apply) # forces replacement
~ bucket_domain_name = "ab-xyz-stage-access-logging-9d8e94ff.s3.amazonaws.com" -> (known after apply)
~ bucket_regional_domain_name = "ab-xyz-stage-access-logging-9d8e94ff.s3.us-west-2.amazonaws.com" -> (known after apply)
+ force_destroy = false
~ hosted_zone_id = "Z3BJ6K6RIION7M" -> (known after apply)
~ id = "ab-xyz-stage-access-logging-9d8e94ff" -> (known after apply)
~ region = "us-west-2" -> (known after apply)
~ request_payer = "BucketOwner" -> (known after apply)
tags =
"mycompany:finance:accountenvironment" = "xyz-stage"
"mycompany:finance:application" = "ab-platform"
"mycompany:finance:billablebusinessunit" = "my-dev"
"name" = "Access logging bucket"
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
- grant
- permissions = [
- "READ_ACP",
- "WRITE",
] -> null
- type = "Group" -> null
- uri = "http://acs.amazonaws.com/groups/s3/LogDelivery" -> null
- grant
- id = "0343271a8c2f184152c171b223945b22ceaf5be5c9b78cf167660600747b5ad8" -> null
- permissions = [
- "FULL_CONTROL",
] -> null
- type = "CanonicalUser" -> null
- lifecycle_rule
- abort_incomplete_multipart_upload_days = 0 -> null
- enabled = true -> null
- id = "intu-lifecycle-s3-int-tier" -> null
- tags = -> null
- transition
- days = 32 -> null
- storage_class = "INTELLIGENT_TIERING" -> null
+ logging
+ target_bucket = (known after apply)
+ target_prefix = "logs/access-logging/"
~ versioning
~ enabled = false -> (known after apply)
~ mfa_delete = false -> (known after apply)
# module.buckets.module.access_logging_bucket.random_id.bucket_suffix must be replaced
-/+ resource "random_id" "bucket_suffix"
~ b64 = "nY6U_w" -> (known after apply)
~ b64_std = "nY6U/w==" -> (known after apply)
~ b64_url = "nY6U_w" -> (known after apply)
byte_length = 4
~ dec = "2643367167" -> (known after apply)
~ hex = "9d8e94ff" -> (known after apply)
~ id = "nY6U_w" -> (known after apply)
+ keepers =
+ "aws_account_id" = "123412341234"
+ "env" = "xyz-stage"
# forces replacement
Plan: 6 to add, 0 to change, 5 to destroy.
A snippet of the diff for cloudtrail_bucket_suffix between the current remote TF state (LEFT) and the old account-scope state (RIGHT):
Comments:
- The plan should indicate the reason for the recreation, e.g. which changes force it. Can you post the plan output?
- @mariux, thanks for checking. I have added a snippet of my terraform plan output. Let me know if you need any other details.
- So based on your comment, does that mean what I imported (the actual resources in AWS) differs from what my terraform scripts represent?
- @mariux I also want to mention that I did make some code changes as part of upgrading TF from version 0.11 to 0.12.29.
- Thanks for the addition, I may have a hint... see my answer.
Answer 1:
The plan shows that the bucket name differs (bucket forces replacement).
This triggers recreation of the bucket itself and of the related resources.
You need to make the bucket name stable, and then the rest will be stable too. Since you use a random suffix for the bucket name, I suspect you forgot to import it. The random_id resource allows such an import:
terraform import module.buckets.random_id.cloudtrail_bucket_suffix cqLFzQ
Edit:
However, you will need to remove the keepers, as they trigger replacement of the random_id resource. keepers are used to trigger recreation of a dependent resource when the resources it depends on change.
I don't think that is what you want for the bucket, because the keepers you define appear to be stable/static: neither account_id nor env is likely to change within this deployment. If you really need them, you can try manipulating the state manually.
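(For context, a random_id with keepers typically looks like the sketch below; the expressions are assumptions based on the "aws_account_id" and "env" keys visible in the plan output. Since terraform import does not record keepers for random_id in state, the diff against this block is what forces replacement:)

```hcl
resource "random_id" "cloudtrail_bucket_suffix" {
  byte_length = 4

  # Any change to these values forces a new random id, and hence a new
  # bucket name. An imported random_id has no keepers recorded in state,
  # so this block appears as an addition that forces replacement.
  keepers = {
    aws_account_id = data.aws_caller_identity.current.account_id
    env            = var.env
  }
}
```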
Discussion:
- Thanks Mariux. Yes, we are using random_id and I did import it, like this: terraform import module.buckets.random_id.cloudtrail_bucket_suffix cqLFzQ. I have added the full terraform import and terraform plan output for reference. Please see my updated question.
- Sorry.. yes, I missed that... see my updated version.. the keepers should be removed.. if you really need them, I could try a trick we once used for a customer... never tried it with random_id though...
- I would like to keep the keepers. When you say trick, do you mean updating the current remote TF state file to contain the keepers, etc.? I have added a diff of the current remote state against the state from when this resource was part of the account-scope. Please check.
- Due to the random nature of the resource, import order relative to the other resources makes no difference. So the only way I see here is manually manipulating the state, and opening an issue in the provider's repository to support importing keepers. Sorry there is no better option here.
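(Manually manipulating the state as suggested would look roughly like the sketch below; the exact JSON path and attribute values depend on your state file, and the serial must be bumped before pushing or Terraform will reject the stale state:)

```shell
# Pull the remote state, hand-edit the random_id instance, push it back.
terraform state pull > state.json
# ... edit state.json: under the random_id resource's instances[], add
#     "keepers": { "aws_account_id": "...", "env": "..." } to "attributes",
#     and increment the top-level "serial" field.
terraform state push state.json
# Verify that the plan no longer forces replacement:
terraform plan
```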