azure-aks

Kubernetes: Expose multiple services internally & externally

北城余情 submitted on 2020-12-15 05:35:32
Question: I am using AKS for my cluster. Scenario: we have multiple APIs (say svc1, svc2 & svc3, accessible on ports 101, 102 and 103). These API endpoints need to be exposed to clients and are also used internally within the application. Question: I want to expose them through both an external and an internal load balancer on the same ports. Also, when I access the services internally, I want them to be reachable by service name (example: svc1:101). Answer 1: In Kubernetes, if you want to expose something internally only, you should use
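A common pattern for this (a sketch only; the service names and ports come from the question, but the selector labels and `targetPort` are assumptions) is to give each API two Services: a plain ClusterIP Service named `svc1`, so in-cluster clients can reach it as `svc1:101` via cluster DNS, and a LoadBalancer Service carrying the Azure internal-load-balancer annotation:

```yaml
# ClusterIP Service: reachable in-cluster as svc1:101
apiVersion: v1
kind: Service
metadata:
  name: svc1
spec:
  type: ClusterIP
  selector:
    app: svc1        # assumed pod label
  ports:
    - port: 101
      targetPort: 8080   # assumed container port
---
# Internal Azure load balancer in front of the same pods
apiVersion: v1
kind: Service
metadata:
  name: svc1-internal-lb
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: svc1
  ports:
    - port: 101
      targetPort: 8080
```

Omitting the annotation (or adding a third Service without it) yields an external Azure load balancer on the same port, which covers the external side of the question.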

Weird error in Kubernetes: starting container process caused "exec: \"/usr/bin/dumb-init\": stat /usr/bin/dumb-init: no such file or directory"

江枫思渺然 submitted on 2020-12-13 03:24:26
Question: I built a customised Docker image of Airflow following this document: "https://github.com/puckel/docker-airflow". I built and ran it in my local VM; everything was successful and Airflow came up. I then pushed the image to ACR (Azure Container Registry) and launched it in AKS via the stable Helm chart; see "https://github.com/helm/charts/tree/master/stable/airflow". Now the pods in Kubernetes do not come up and fail with the error below. Error: failed to start container "airflow
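The `stat /usr/bin/dumb-init: no such file or directory` part of the error says the container's entrypoint references a binary that is not present in the image that actually got pulled. One way to rule that out (a hypothetical Dockerfile fragment, assuming a Debian-based base image like the one puckel/docker-airflow uses) is to install dumb-init explicitly before pushing to ACR:

```dockerfile
# Hypothetical fragment: make sure dumb-init exists at the exact path
# the entrypoint references. On Debian, the package installs it at
# /usr/bin/dumb-init.
RUN apt-get update \
 && apt-get install -y --no-install-recommends dumb-init \
 && rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
```

If the binary is already in the local image, the usual suspect is a tag mismatch: the Helm chart pulling a different (e.g. upstream default) image than the customised one pushed to ACR, so checking the chart's image repository/tag values is worth doing first.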

How to expose an Azure Kubernetes cluster with a public IP address using Terraform

廉价感情. submitted on 2020-12-06 06:47:06
Question: I'm having trouble exposing a Kubernetes cluster deployed on AKS with a public IP address. I'm using GitHub Actions to do the deployment. Below are my .tf and deployment.yml files, followed by the errors I'm facing. main.tf provider "azurerm" { features {} } provider "azuread" { version = "=0.7.0" } terraform { backend "azurerm" { resource_group_name = "tstate-rg" storage_account_name = "tstateidentity11223" container_name = "tstate" access_key = "/qSJCUo..." key = "terraform.tfstate"
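On AKS the public IP is normally allocated by the cloud controller when a Kubernetes Service of type LoadBalancer is created, not by the `azurerm_kubernetes_cluster` resource itself. A minimal sketch using the Terraform `kubernetes` provider (the resource name, labels, and ports here are assumptions, not taken from the question's files):

```hcl
# Hypothetical sketch: ask AKS for a public Azure load balancer by
# declaring a LoadBalancer-type Service from Terraform.
resource "kubernetes_service" "frontend" {
  metadata {
    name = "frontend-lb"
  }
  spec {
    selector = {
      app = "frontend"     # assumed pod label
    }
    port {
      port        = 80     # external port
      target_port = 8010   # assumed container port
    }
    type = "LoadBalancer"  # AKS allocates a public IP for this
  }
}
```

After `terraform apply`, the allocated address shows up in the Service's `status.load_balancer.ingress` and under `kubectl get svc` as EXTERNAL-IP.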

Client communication to RabbitMQ fails using SSL Peer Verification

ぃ、小莉子 submitted on 2020-08-10 19:38:23
Question: I am facing a weird situation when communicating with RabbitMQ from a client; the details are as follows. RabbitMQ runs on an Azure AKS cluster (containerized) and is exposed over the internet; traffic is routed to RabbitMQ using Azure Traffic Manager (custom domain). RabbitMQ is configured to support SSL with peer verification set to true, and an internal (organization) server certificate is configured in the RabbitMQ config file. RabbitMQ version: 3.7.8. The client is deployed on BizTalk - Azure
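With peer verification enabled, the broker requires the client to present a certificate signed by a CA listed in the server's `cacertfile`, and the client in turn must trust the server certificate, whose CN/SAN has to match the hostname the client connects to (here, the Traffic Manager custom domain). A rabbitmq.conf sketch of that setup (file paths are placeholders, not from the question):

```ini
# rabbitmq.conf — TLS listener with client peer verification.
# Paths are placeholders; mount the real certs into the container.
listeners.ssl.default            = 5671
ssl_options.cacertfile           = /path/to/ca_certificate.pem
ssl_options.certfile             = /path/to/server_certificate.pem
ssl_options.keyfile              = /path/to/server_key.pem
ssl_options.verify               = verify_peer
ssl_options.fail_if_no_peer_cert = true
```

Typical failure modes to check: the BizTalk client not presenting any client certificate (rejected by `fail_if_no_peer_cert`), the client cert's CA missing from `cacertfile`, or a hostname mismatch because the internal server certificate was not issued for the Traffic Manager domain.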

DisallowedHost Django deployment in Kubernetes cluster: Invalid HTTP_HOST header

主宰稳场 submitted on 2020-06-29 04:33:54
Question: I have a Django deployment for a frontend service in my Azure Kubernetes cluster with some basic configuration, but note that the same question applies to my local Minikube cluster. I fetch my Django frontend container image from my remote container registry and expose port 8010. My service configuration is quite simple as well. frontend.deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: frontend-v1 labels: app: frontend-v1 spec: replicas: 1 selector: matchLabels: app:
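Django raises DisallowedHost whenever the request's Host header is not matched by `ALLOWED_HOSTS`, which in a cluster commonly happens when probes or other pods address the pod by service name or IP rather than the public hostname. A settings.py sketch (the host names below are hypothetical; substitute the actual Service name, namespace, and domain):

```python
# settings.py (fragment) -- hypothetical host list; adjust to your cluster.
# DisallowedHost means the incoming Host header matched none of these.
ALLOWED_HOSTS = [
    "frontend-v1",                             # assumed in-cluster Service name
    "frontend-v1.default.svc.cluster.local",   # assumed fully qualified cluster DNS
    "example.com",                             # placeholder public hostname
]
```

Wildcards such as `"*"` also silence the error, but they disable Django's Host-header validation and are best kept to local debugging.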

AKS creation through CLI fails: Reconcile standard load balancer failed

梦想的初衷 submitted on 2020-06-15 04:37:32
Question: I have created AKS clusters using the CLI command below on countless occasions and had no problem until today. az aks create --resource-group rg --name ama --generate-ssh-keys --location southeastasia --aad-server-app-id xxxxx-xxxxxx-xxx-xxx-xxxxxxxxx --aad-server-app-secret @xxx?=1[xxx:xxx:xxxx:xxxx --aad-client-app-id xxxxx-xxx-xxx-xxx-xxx --client-secret xxxxx-xxx-xxx-xxxx-xxxxxxxx --aad-tenant-id xxxx-xxxx-xxxx-xxxx-xxxx --service-principal xxxxx-xxxx-xxxx-xxxx-xxxxxxx --node-count 3