Create GKE cluster and namespace with Terraform

Submitted by 亡梦爱人 on 2020-12-15 01:57:34

Question


I need to create a GKE cluster, then create a namespace and install a database into that namespace through Helm. Right now I have gke-cluster.tf, which creates the cluster with a node pool, and helm.tf, which has the kubernetes provider and a helm_release resource. It creates the cluster first, but then tries to install the database while the namespace doesn't exist yet, so I have to run terraform apply a second time and then it works. I want to avoid a setup with multiple folders and run terraform apply only once. What's the good practice for a situation like this? Thanks for the answers.


Answer 1:


The solution posted by user adp is correct, but I wanted to give more insight on using Terraform for this particular example with regard to running a single command:

  • $ terraform apply --auto-approve.

Based on the following comments:

Can you tell how are you creating your namespace? Is it with kubernetes provider? - Dawid Kruk

resource "kubernetes_namespace" - Jozef Vrana

This setup needs a specific order of execution: first the cluster, then the resources. By default, Terraform will try to create all of the resources at the same time, so it is crucial to use the depends_on = [VALUE] parameter.

The next issue is that the kubernetes provider will try to fetch the credentials from ~/.kube/config at the start of the process. It will not wait for the cluster provisioning to get the actual credentials. It could:

  • fail when there is no .kube/config
  • fetch credentials for the wrong cluster.

There is an ongoing feature request to resolve this kind of use case (there are also some workarounds; one is sketched after the link below):

  • Github.com: Hashicorp: Terraform: Issue: depends_on for providers
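One such workaround, shown here as a minimal sketch rather than a tested configuration, is to point the kubernetes provider at the cluster resource itself (using the google_client_config data source for a token), so it no longer reads ~/.kube/config:

# Sketch: configure the kubernetes provider from the cluster resource
# created below and a google_client_config data source, instead of
# relying on ~/.kube/config.
data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = "https://${google_container_cluster.gke-terraform.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(google_container_cluster.gke-terraform.master_auth[0].cluster_ca_certificate)
}

Whether this lets a single apply work end to end still depends on the provider versions in use; the feature request linked above tracks the underlying limitation.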

As an example:

# Create cluster
resource "google_container_cluster" "gke-terraform" {
  project            = "PROJECT_ID"
  name               = "gke-terraform"
  location           = var.zone
  initial_node_count = 1
}

# Get the credentials
resource "null_resource" "get-credentials" {

  depends_on = [google_container_cluster.gke-terraform]

  provisioner "local-exec" {
    command = "gcloud container clusters get-credentials ${google_container_cluster.gke-terraform.name} --zone=europe-west3-c"
  }
}

# Create a namespace
resource "kubernetes_namespace" "awesome-namespace" {

  depends_on = [null_resource.get-credentials]

  metadata {
    name = "awesome-namespace"
  }
}

Assuming that you had earlier configured a cluster to work with and didn't delete it:

  • Credentials for the old Kubernetes cluster are fetched from ~/.kube/config.

  • Terraform will create a cluster named gke-terraform.

  • Terraform will run a local command to get the credentials for the gke-terraform cluster.

  • Terraform will create a namespace (using the old information):

    • if you had another cluster configured in .kube/config, it will create the namespace in that (previous) cluster
    • if you deleted your previous cluster, it will try to create the namespace in that cluster and fail
    • if you had no .kube/config, it will fail at the start

Important!

The "helm_release" resource seems to fetch the credentials when provisioning its resources, not at the start!

As said, you can use the helm provider to provision resources on your cluster and avoid the issues described above.
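The full example below leaves out the helm provider block. A minimal sketch of one, reusing the google_client_config data source from the earlier workaround so that Helm also gets its credentials from the cluster resource rather than ~/.kube/config, could look like this:

# Sketch: helm provider credentials taken from the cluster resource
# (an alternative to relying on the kubeconfig written by the local-exec step).
provider "helm" {
  kubernetes {
    host                   = "https://${google_container_cluster.gke-terraform.endpoint}"
    token                  = data.google_client_config.default.access_token
    cluster_ca_certificate = base64decode(google_container_cluster.gke-terraform.master_auth[0].cluster_ca_certificate)
  }
}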

Example of running a single command to create a cluster and provision resources on it:

variable "zone" {
  type    = string
  default = "europe-west3-c"
}

resource "google_container_cluster" "gke-terraform" {
  project            = "PROJECT_ID"
  name               = "gke-terraform"
  location           = var.zone
  initial_node_count = 1
}

data "google_container_cluster" "gke-terraform" {
  project  = "PROJECT_ID"
  name     = "gke-terraform"
  location = var.zone
}

resource "null_resource" "get-credentials" {

  # do not start before resource gke-terraform is provisioned
  depends_on = [google_container_cluster.gke-terraform]

  provisioner "local-exec" {
    command = "gcloud container clusters get-credentials ${google_container_cluster.gke-terraform.name} --zone=${var.zone}"
  }
}


resource "helm_release" "mydatabase" {
  name  = "mydatabase"
  chart = "stable/mariadb"
  
  # do not start before the get-credentials resource is run 
  depends_on = [null_resource.get-credentials] 

  set {
    name  = "mariadbUser"
    value = "foo"
  }

  set {
    name  = "mariadbPassword"
    value = "qux"
  }
}

Using the above configuration will yield:

data.google_container_cluster.gke-terraform: Refreshing state...
google_container_cluster.gke-terraform: Creating...
google_container_cluster.gke-terraform: Still creating... [10s elapsed]
<--OMITTED-->
google_container_cluster.gke-terraform: Still creating... [2m30s elapsed]
google_container_cluster.gke-terraform: Creation complete after 2m38s [id=projects/PROJECT_ID/locations/europe-west3-c/clusters/gke-terraform]
null_resource.get-credentials: Creating...
null_resource.get-credentials: Provisioning with 'local-exec'...
null_resource.get-credentials (local-exec): Executing: ["/bin/sh" "-c" "gcloud container clusters get-credentials gke-terraform --zone=europe-west3-c"]
null_resource.get-credentials (local-exec): Fetching cluster endpoint and auth data.
null_resource.get-credentials (local-exec): kubeconfig entry generated for gke-terraform.
null_resource.get-credentials: Creation complete after 1s [id=4191245626158601026]
helm_release.mydatabase: Creating...
helm_release.mydatabase: Still creating... [10s elapsed]
<--OMITTED-->
helm_release.mydatabase: Still creating... [1m40s elapsed]
helm_release.mydatabase: Creation complete after 1m44s [id=mydatabase]



Answer 2:


The create_namespace argument of the helm_release resource can help you.

create_namespace - (Optional) Create the namespace if it does not yet exist. Defaults to false.

https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release#create_namespace
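
As a minimal sketch, reusing the mariadb release from the first answer (the chart and namespace name are only illustrative):

resource "helm_release" "mydatabase" {
  name      = "mydatabase"
  chart     = "stable/mariadb"
  namespace = "awesome-namespace"

  # Let the release create its target namespace instead of managing it
  # with a separate kubernetes_namespace resource.
  create_namespace = true
}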

Alternatively, you can define a dependency between the namespace resource and helm_release like below:

resource "kubernetes_namespace" "prod" {
  metadata {
    annotations = {
      name = "prod-namespace"
    }

    labels = {
      namespace = "prod"
    }

    name = "prod"
  }
}

resource "helm_release" "arango-crd" {
  name      = "arango-crd"
  chart     = "./kube-arangodb-crd"
  namespace = "prod"

  depends_on = [kubernetes_namespace.prod]
}


Source: https://stackoverflow.com/questions/63782742/create-gke-cluster-and-namespace-with-terraform
