azure-aks

Need help troubleshooting Istio IngressGateway HTTP ERROR 503

Submitted by 北慕城南 on 2021-02-05 05:51:41
Question: My test environment cluster has the following configuration. Global mesh policy (installed as part of cluster setup by our org): output of `kubectl describe MeshPolicy default`: Name: default Namespace: Labels: operator.istio.io/component=Pilot operator.istio.io/managed=Reconcile operator.istio.io/version=1.5.6 release=istio Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"authentication.istio.io/v1alpha1","kind":"MeshPolicy","metadata":{"annotations":{},"labels": …
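A frequent cause of 503s behind the Istio IngressGateway when a mesh-wide mTLS policy like the one above is enforced is a client (here, the gateway) sending plaintext to a sidecar that expects mTLS. A minimal sketch of the usual fix, a DestinationRule telling clients to use Istio mTLS for the service; `my-service` and `my-namespace` are hypothetical names, not from the original question:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-service-mtls
  namespace: my-namespace
spec:
  host: my-service.my-namespace.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # originate Istio mTLS, matching the mesh-wide MeshPolicy
```

Whether this resolves a given 503 depends on the actual response flags in the gateway's access logs (e.g. UF/URX vs. UC), so treat it as one candidate fix rather than a guaranteed one.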

How to run Postman test cases via Helm and roll back to the last successful version if any test fails

Submitted by 风流意气都作罢 on 2021-01-29 11:17:11
Question: I am using a Helm Kubernetes deployment and I want to run the Postman test cases before the final deployment; if any test case fails, roll back (or retain the current deployment, as in a blue-green deployment). How can I achieve this? Answer 1: I achieved the expected behavior with Helm chart tests and the postman/newman Docker image. My Helm template for the test execution: apiVersion: v1 kind: Pod metadata: name: API Test annotations: "helm.sh/hook": test-success spec: containers: - name: …
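The template in the answer is cut off (and `name: API Test` would be rejected, since pod names must be DNS-compatible). A possible completed sketch of such a Helm test hook, assuming a hypothetical ConfigMap holding the Postman collection:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-api-test"   # must be a valid DNS name, unlike "API Test"
  annotations:
    "helm.sh/hook": test-success
spec:
  restartPolicy: Never
  containers:
    - name: newman
      image: postman/newman
      # run the Postman collection mounted from a ConfigMap (hypothetical name)
      command: ["newman", "run", "/etc/postman/collection.json"]
      volumeMounts:
        - name: collection
          mountPath: /etc/postman
  volumes:
    - name: collection
      configMap:
        name: "{{ .Release.Name }}-postman-collection"
```

A deployment script can then chain `helm upgrade --install`, `helm test <release>`, and, on failure, `helm rollback <release>` to get the roll-back-on-failed-tests behavior the question asks for.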

Getting nginx-ingress to use UDP in Azure

Submitted by 久未见 on 2021-01-29 05:14:15
Question: This setup worked for TCP, but once I switched to UDP I get the error: The Service "nginx-ingress-controller" is invalid: spec.ports: Invalid value: []core.ServicePort{core.ServicePort{Name:"proxied-udp-30001", Protocol:"UDP", Port:30001, TargetPort:intstr.IntOrString{Type:0, IntVal:30001, StrVal:""}, NodePort:0}, core.ServicePort{Name:"proxied-udp-30002", Protocol:"UDP", Port:30002, TargetPort:intstr.IntOrString{Type:0, IntVal:30002, StrVal:""}, NodePort:0}, core.ServicePort{Name:"http …
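The truncated error suggests a Service mixing TCP and UDP ports, which Kubernetes LoadBalancer Services historically rejected (mixed-protocol LoadBalancers only became possible in much later releases). The standard ingress-nginx route for UDP is its `udp-services` ConfigMap instead; a sketch, with `my-namespace/my-udp-service` as a hypothetical backend:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  # external-port: "namespace/service:service-port"
  "30001": "my-namespace/my-udp-service:30001"
  "30002": "my-namespace/my-udp-service:30002"
```

The controller must be started with `--udp-services-configmap=ingress-nginx/udp-services`, and the UDP ports still need to be exposed on the controller's Service; on Azure that may mean a second, UDP-only LoadBalancer Service.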

How to reschedule my pods after scaling down a node in Azure Kubernetes Service (AKS)?

Submitted by 强颜欢笑 on 2021-01-28 19:25:42
Question: I will start with an example. Say I have an AKS cluster with three nodes, and each node runs a set of pods, say 5 pods: 15 pods in total, 5 pods per node, 3 nodes. Now suppose my nodes are not fully utilized and I decide to scale down from 3 nodes to 2. When I do this in Azure and change my node count from 3 to 2, Azure shuts down the third node. However, it also deletes all pods that were running on the …
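Pods are only recreated on the remaining nodes if a controller (Deployment, ReplicaSet, StatefulSet) owns them; bare pods deleted with a node are simply gone. The usual safe scale-down is `kubectl cordon` plus `kubectl drain` on the node before removing it, optionally guarded by a PodDisruptionBudget so the drain cannot evict too many replicas at once. A minimal sketch, assuming a hypothetical Deployment labeled `app: my-app`:

```yaml
apiVersion: policy/v1beta1   # policy/v1 on Kubernetes 1.21+
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2            # keep at least 2 replicas running during the drain
  selector:
    matchLabels:
      app: my-app
```

With controller-managed pods and a drain-first workflow, the scheduler places the evicted pods on the surviving nodes rather than losing them with the deleted node.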

Azure Kubernetes - Istio Egress not working

Submitted by 霸气de小男生 on 2021-01-28 13:30:08
Question: I have used the following configuration to set up Istio: cat << EOF | kubectl apply -f - apiVersion: install.istio.io/v1alpha1 kind: IstioOperator metadata: namespace: istio-system name: istio-control-plane spec: # Use the default profile as the base # More details at: https://istio.io/docs/setup/additional-setup/config-profiles/ profile: default # Enable the addons that we will want to use addonComponents: grafana: enabled: true prometheus: enabled: true tracing: enabled: true kiali: …
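If the mesh's `outboundTrafficPolicy` is `REGISTRY_ONLY`, egress to any host not in Istio's service registry is blocked until it is declared with a ServiceEntry. A sketch of such a declaration; `api.example.com` is a hypothetical external host standing in for whatever the cluster needs to reach:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
    - api.example.com        # hypothetical external destination
  ports:
    - number: 443
      name: tls
      protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL    # the host lives outside the mesh
```

Under the default profile's `ALLOW_ANY` outbound policy this entry is not strictly required, so checking the effective `meshConfig.outboundTrafficPolicy` is a sensible first debugging step.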

Unable to pull new image with AKS and ACR

Submitted by 吃可爱长大的小学妹 on 2021-01-27 19:10:27
Question: I'm suddenly having issues pulling the latest image from Azure Container Registry with AKS (which previously worked fine). If I run kubectl describe pod <podid> I get: Failed to pull image <image>: rpc error: code = Unknown desc = Error response from daemon: Get <image>: unauthorized: authentication required. I've tried logging into the ACR manually and it all works correctly: the new images have pushed correctly and I can pull them manually. I've also tried: az aks update -g …
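When the built-in AKS-to-ACR integration (`az aks update --attach-acr`) misbehaves, a workaround is an explicit image pull secret referenced from the pod spec. A sketch, assuming a docker-registry Secret named `acr-secret` has been created from ACR credentials (e.g. a service principal or admin user); the registry and image names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: myregistry.azurecr.io/my-app:latest   # hypothetical ACR image
  imagePullSecrets:
    - name: acr-secret   # kubernetes.io/dockerconfigjson Secret with ACR credentials
```

This bypasses the kubelet's managed-identity token flow entirely, which also makes it a useful diagnostic: if the pull succeeds with the secret, the problem lies in the AKS/ACR role assignment rather than in the registry itself.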

Configuring an AKS load balancer for HTTPS access

Submitted by 久未见 on 2021-01-27 04:45:09
Question: I'm porting an application originally developed for the AWS Fargate container service to AKS on Azure. In the AWS implementation, an application load balancer is created and placed in front of the UI microservice. This load balancer is configured with a signed certificate, allowing HTTPS access to our back end. I've searched for how something similar could be configured in AKS and found many different answers for a variety of similar …
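The closest AKS analogue to an ALB with a certificate is an ingress controller (commonly ingress-nginx) terminating TLS from a Kubernetes Secret. A sketch of such an Ingress; the hostname, secret, and service names are hypothetical stand-ins, and the Secret is assumed to hold the signed certificate and key:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ui-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - app.example.com        # hypothetical public hostname
      secretName: app-tls-cert   # kubernetes.io/tls Secret with cert and key
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ui-service # hypothetical UI microservice Service
                port:
                  number: 80
```

The Azure load balancer in front of the controller then only forwards TCP 443; certificate handling stays inside the cluster, much like the ALB listener did on AWS. Pairing this with cert-manager automates renewal if a public CA is acceptable.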

Failure in working Kubernetes deployment files after cluster upgrade from 1.11 to 1.14.6, deployed via CircleCI

Submitted by 纵然是瞬间 on 2021-01-06 13:15:46
Question: I am using CircleCI for deployments with AKS version 1.11. The pipelines were working fine, but after the AKS upgrade to 1.14.6 a failure occurs while applying the deployment and service object files. When I deployed manually to the Kubernetes cluster there was no error, but when deploying through CircleCI (version 2) I get errors like: error: SchemaError(io.k8s.api.extensions.v1beta1.DeploymentRollback): invalid object doesn't have additional …
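A `SchemaError` mentioning `extensions/v1beta1` after a 1.14 upgrade usually points to a stale `kubectl` baked into the CI image (client-side schema validation choking on the newer API surface), so updating `kubectl` in the CircleCI job is the first fix to try. Independently, manifests are best moved off the deprecated `extensions/v1beta1` Deployment to `apps/v1`; a minimal sketch with hypothetical names:

```yaml
apiVersion: apps/v1          # extensions/v1beta1 Deployments are deprecated, removed in 1.16
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:                  # required field in apps/v1, optional in the old API
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest   # hypothetical image
```

Since the manual deploy worked, the manifests themselves were still accepted by the 1.14.6 API server; the client-version mismatch in CI is the more likely culprit of the two.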
