Switching Terraform 0.12.6 to 0.13.0 gives me provider["registry.terraform.io/-/null"] is required, but it has been removed

Posted: 2020-12-14 20:27:51

Question:

I manage my state in remote Terraform Cloud.

I downloaded and installed the latest Terraform 0.13 CLI.

Then I removed the .terraform directory.

Then I ran terraform init with no errors.

Then I ran:

➜ terraform apply -var-file env.auto.tfvars

Error: Provider configuration not present

To work with
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0] its
original provider configuration at provider["registry.terraform.io/-/null"] is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0],
after which you can remove the provider configuration again.

Releasing state lock. This may take a few moments...

Here is the content of modules/kubernetes/main.tf:

###################################################################################
# EKS CLUSTER                                                                     #
#                                                                                 #
# This module contains configuration for EKS cluster running various applications #
###################################################################################

module "eks_label" 
  source      = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master"
  namespace   = var.project
  environment = var.environment
  attributes  = [var.component]
  name        = "eks"



#
# Local computed variables
#
locals 
  names = 
    secretmanage_policy = "secretmanager-$var.environment-policy"
  


data "aws_eks_cluster" "cluster" 
  name = module.eks-cluster.cluster_id


data "aws_eks_cluster_auth" "cluster" 
  name = module.eks-cluster.cluster_id


provider "kubernetes" 
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.9"


module "eks-cluster" 
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = module.eks_label.id
  cluster_version = var.cluster_version
  subnets         = var.subnets
  vpc_id          = var.vpc_id

  worker_groups = [
    
      instance_type = var.cluster_node_type
      asg_max_size  = var.cluster_node_count
    
  ]

  tags = var.tags


# Grant secretmanager access to all pods inside kubernetes cluster
# TODO:
# Adjust implementation so that the policy is template based and we only allow
# kubernetes access to a single key based on the environment.
# we should export key from modules/secrets and then grant only specific ARN access
# so that only production cluster is able to read production secrets but not dev or staging
# https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_identity-based-policies.html#permissions_grant-get-secret-value-to-one-secret
resource "aws_iam_policy" "secretmanager-policy" 
  name        = local.names.secretmanage_policy
  description = "allow to read secretmanager secrets $var.environment"
  policy      = file("modules/kubernetes/policies/secretmanager.json")


#
# Attache the policy to k8s worker role
#
resource "aws_iam_role_policy_attachment" "attach" 
  role       = module.eks-cluster.worker_iam_role_name
  policy_arn = aws_iam_policy.secretmanager-policy.arn


#
# Attache the S3 Policy to Workers
# So we can use aws commands inside pods easily if/when needed
#
resource "aws_iam_role_policy_attachment" "attach-s3" 
  role       = module.eks-cluster.worker_iam_role_name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"

Comments:

Answer 1:

All credit for this fix goes to the person who mentioned it on the cloudposse Slack channel:

terraform state replace-provider -auto-approve -- -/null registry.terraform.io/hashicorp/null

This resolved my issue with this error and moved me on to the next error, all part of upgrading the Terraform version.
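Beyond fixing the state, Terraform 0.13 also expects configurations to declare provider source addresses explicitly. The following is only a sketch added for illustration, not taken from the question: the provider names match the ones used above, the version constraints are placeholders, and the terraform 0.13upgrade command can generate an equivalent block for you.

terraform {
  required_version = ">= 0.13"

  required_providers {
    # Explicit source addresses so 0.13 resolves hashicorp/null instead of the legacy -/null
    null = {
      source  = "hashicorp/null"
      version = "~> 2.1" # placeholder constraint, adjust to your setup
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0" # placeholder constraint, adjust to your setup
    }
  }
}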

Comments:

This is to be expected, since Terraform is still on a 0.x.y version. Terraform was created in 2014, and they want money to support this software.

Answer 2:

For us, we updated all of the provider URLs we were using in the code, like so:

terraform state replace-provider 'registry.terraform.io/-/null' 'registry.terraform.io/hashicorp/null'
terraform state replace-provider 'registry.terraform.io/-/archive' 'registry.terraform.io/hashicorp/archive'
terraform state replace-provider 'registry.terraform.io/-/aws' 'registry.terraform.io/hashicorp/aws'

I wanted to be very specific with the replacement, so I used the broken URL when replacing it with the new one.

More specifically, this only applies to Terraform 0.13.

https://www.terraform.io/docs/providers/index.html#providers-in-the-terraform-registry
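If you want to confirm which legacy -/ addresses your state still references before (or after) running the replacements, listing the providers is a quick check. This is a sketch added here for illustration, not part of the original answer:

# Terraform 0.13 reports providers required by the configuration and by the state;
# legacy entries show up as registry.terraform.io/-/null, -/aws, and so on.
terraform providers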

Comments:

Thanks! This helped me during the Terraform upgrade from 0.12.26 to 0.13.

Answer 3:

This error occurs when there is an object in the latest Terraform state that is no longer in the configuration, but Terraform cannot destroy it (as would normally be expected) because the provider configuration needed to do so does not exist either.

Solution:

This only happens if you recently removed the object "data.null_data_source" together with the provider "null" block. To proceed, you'll need to temporarily restore that provider "null" block, run terraform apply to have Terraform destroy the object data "null_data_source", and then you can remove the provider "null" block again, since it is no longer needed.
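A minimal sketch of that sequence (the empty provider block shown here is an assumption about what was removed; add back whatever version constraint your configuration previously pinned):

# Step 1: temporarily restore the provider so Terraform can destroy the orphaned data source
provider "null" {}

# Step 2: run `terraform apply` so the data.null_data_source object is removed from state,
# then delete the provider "null" block again, since it is no longer needed.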

Comments:

Naveen, but I made zero code changes, so how did this provider go missing? That is strange. Maybe you can try adding the empty provider block and test whether it runs fine.
