amazon-eks

Pods in EKS: can't resolve DNS (but can ping IP)

Submitted by 假装没事ソ on 2020-08-10 05:02:25
Question: I have 2 EKS clusters, in 2 different AWS accounts and, I assume, with different firewalls (which I don't have access to). The first one (Dev) is fine; however, with the same configuration, the UAT cluster's pods are struggling to resolve DNS. The nodes can resolve names and seem to be all right.

1) ping 8.8.8.8 works:

    --- 8.8.8.8 ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3003ms

2) I can ping the IP of Google (and others), but not the actual DNS names. Our …
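A quick way to narrow this kind of problem down is to test resolution from inside a pod and to check CoreDNS itself; on EKS, a worker-node security group that blocks TCP/UDP 53 between nodes is a common culprit. A minimal debugging sketch (pod name is illustrative; busybox:1.28 is used because newer busybox builds have nslookup quirks):

    # Test cluster DNS from inside a throwaway pod
    kubectl run dnstest --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default

    # Check CoreDNS health and recent logs
    kubectl -n kube-system get pods -l k8s-app=kube-dns
    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50

If nslookup times out but CoreDNS looks healthy, the next place to look is the security groups and NACLs between the pod's node and the nodes running CoreDNS.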

AWS EKS - Authenticate Kubernetes python lib from inside a pod

Submitted by 瘦欲@ on 2020-07-19 06:18:23
Question: Objective: I want to connect to and call the Kubernetes REST APIs from inside a running pod; the cluster in question is an AWS EKS cluster using IAM authentication. All of this using the Kubernetes Python library. What I have tried: from inside my Python file:

    from kubernetes import client, config

    config.load_incluster_config()
    v1 = client.CoreV1Api()
    ret = v1.list_pod_for_all_namespaces(watch=False)

The above call throws a 403 error. This, I believe, is due to the different auth mechanism that AWS …
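For what it's worth, a 403 from inside the cluster usually points at RBAC rather than AWS IAM: load_incluster_config() authenticates as the pod's ServiceAccount, which has no API permissions by default. A hedged sketch of RBAC that would allow listing pods cluster-wide (object names are illustrative):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: pod-reader              # illustrative name
    rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: pod-reader-binding      # illustrative name
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: pod-reader
    subjects:
    - kind: ServiceAccount
      name: default                 # the ServiceAccount the pod runs under
      namespace: default

With such a binding in place, the Python snippet above should work unchanged, since load_incluster_config() simply reads the ServiceAccount token mounted into the pod.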

EKS: Unable to pull logs from pods

Submitted by 大兔子大兔子 on 2020-06-26 06:47:33
Question: The kubectl logs command intermittently fails with a "getsockopt: no route to host" error.

    # kubectl logs -f mypod-5c46d5c75d-2Cbtj
    Error from server: Get https://X.X.X.X:10250/containerLogs/default/mypod-5c46d5c75d-2Cbtj/metaservichart?follow=true: dial tcp X.X.X.X:10250: getsockopt: no route to host

If I run the same command 5-6 times it works. I am not sure why this is happening. Any help will be really appreciated.

Answer 1: Just FYI, I just tried using another VPC (172.18.X.X) for EKS, and all …
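For background, kubectl logs is proxied by the API server to the kubelet on port 10250 of the node running the pod, so "no route to host" there usually means the control plane cannot reach that node. One reasonable first step is to verify the worker-node security group allows ingress on 10250 from the control-plane security group; a hedged sketch (all IDs are placeholders):

    # Allow the control plane to reach the kubelet port on the workers;
    # the sg-... values are placeholders for the node and control-plane SGs.
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp \
      --port 10250 \
      --source-group sg-0fedcba9876543210

The intermittent pattern (it works on some retries) can also indicate that only some nodes are reachable, or, as the answer hints, an overlapping VPC CIDR.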

Enabling AWS Group to access AWS EKS cluster

Submitted by て烟熏妆下的殇ゞ on 2020-06-16 18:07:02
Question: This question is essentially a duplicate of "Adding IAM Group to aws-auth configmap in AWS EKS". However, that question does not have an accepted answer, and I would like to provide more context. I know that the aws-auth ConfigMap object does not allow mapping an AWS Group directly. A workaround would be to map an AWS Role instead. I tried that but was unable to get it working. Mapping an AWS User works without issues. I set up an AWS Role arn:aws:iam::027755483893:role/development-readwrite with …
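Since aws-auth cannot reference IAM groups, the standard pattern is to map a role in mapRoles and have group members assume that role. A hedged sketch using the role ARN from the question (the Kubernetes group name is illustrative and must match a RoleBinding/ClusterRoleBinding subject):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapRoles: |
        - rolearn: arn:aws:iam::027755483893:role/development-readwrite
          username: development-readwrite
          groups:
            - development-readwrite   # illustrative Kubernetes group

Users in the AWS group would then assume the role when talking to the cluster, e.g. with aws eks update-kubeconfig --name <cluster> --role-arn arn:aws:iam::027755483893:role/development-readwrite, so that kubectl authenticates as the mapped role rather than as their individual user.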

Adding name to EC2 instances when deploying AWS::EKS::Nodegroup in CloudFormation

Submitted by 不羁的心 on 2020-05-30 10:15:09
Question: I'm creating a CloudFormation template to deploy an EKS node group using the AWS::EKS::Nodegroup CloudFormation resource. It looks like you can create tags for the node group resource specifically, but cannot change the name of the EC2 instances that are deployed as part of the node group. From the AWS documentation, it looks like tags are not propagated to other resources the node group deploys (such as EC2 instances). Does anyone know of a way to update the name of the EC2 …
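A workaround that is often used is to attach a launch template to the managed node group and set the Name tag through EC2 TagSpecifications, which EC2 applies to the instances it launches. A hedged CloudFormation sketch (logical IDs and values are illustrative):

    NodeLaunchTemplate:
      Type: AWS::EC2::LaunchTemplate
      Properties:
        LaunchTemplateData:
          TagSpecifications:
            - ResourceType: instance
              Tags:
                - Key: Name
                  Value: my-eks-worker          # illustrative instance name

    NodeGroup:
      Type: AWS::EKS::Nodegroup
      Properties:
        ClusterName: my-cluster                  # illustrative
        NodeRole: arn:aws:iam::111122223333:role/NodeInstanceRole  # illustrative
        Subnets:
          - subnet-0123456789abcdef0             # illustrative
        LaunchTemplate:
          Id: !Ref NodeLaunchTemplate
          Version: !GetAtt NodeLaunchTemplate.LatestVersionNumber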

Terraform: Deploying a Docker Compose app on EKS/ECS

Submitted by ℡╲_俬逩灬. on 2020-05-14 19:46:50
Question: TL;DR: I use an open-source server application that runs on Docker Compose. It has a few services, including a PostgreSQL DB and Redis. How can I best deploy this application to AWS in full IaC with Terraform?

Solutions so far:

1. AWS ecs-cli: ecs-cli now supports sending Docker Compose configs to Amazon ECS. However, I do not think it can be integrated into a Terraform workflow (which is maybe not a big fuss). What I know for sure is that ecs-cli is not supported in CloudFormation, as per …
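If the goal is pure Terraform, the Compose file usually has to be translated by hand: each Compose service becomes an ECS task definition plus service (or a Kubernetes Deployment on EKS). A hedged, minimal sketch of one such service in Terraform HCL (every name and value is illustrative, and it assumes an aws_ecs_cluster.main resource already exists):

    resource "aws_ecs_task_definition" "redis" {
      family                   = "redis"
      requires_compatibilities = ["FARGATE"]
      network_mode             = "awsvpc"
      cpu                      = 256
      memory                   = 512
      container_definitions = jsonencode([{
        name         = "redis"
        image        = "redis:6"
        portMappings = [{ containerPort = 6379 }]
      }])
    }

    resource "aws_ecs_service" "redis" {
      name            = "redis"
      cluster         = aws_ecs_cluster.main.id
      task_definition = aws_ecs_task_definition.redis.arn
      desired_count   = 1
      launch_type     = "FARGATE"
      network_configuration {
        subnets = var.private_subnet_ids   # illustrative variable
      }
    }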

EKS - Node labels

Submitted by ﹥>﹥吖頭↗ on 2020-02-15 07:13:24
Question: Is there a way to add node labels when deploying worker nodes in EKS? I do not see an option for this in the CloudFormation template available for worker nodes (EKS-CF-Workers). The only option I see right now is to use the kubectl label command to add labels after cluster setup. However, I need complete automation, meaning applications are deployed automatically after cluster deployment, and labels help achieve that segregation.

Answer 1: With the new EKS-optimized AMIs (amazon-eks-node-vXX) and …
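For context on that answer: the EKS-optimized AMIs include /etc/eks/bootstrap.sh, which forwards extra flags to the kubelet, so labels can be applied at node boot rather than with kubectl afterwards. A hedged sketch of the node user data (in the CloudFormation worker template this goes into the BootstrapArguments parameter; the label keys and values are illustrative):

    #!/bin/bash
    # bootstrap.sh ships with the amazon-eks-node-* AMIs;
    # --node-labels is a standard kubelet flag.
    /etc/eks/bootstrap.sh my-cluster \
      --kubelet-extra-args '--node-labels=role=workers,environment=uat'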

Docker service won't run inside Container - ERROR: ulimit: error setting limit (Operation not permitted)

Submitted by 雨燕双飞 on 2020-02-03 09:02:43
Question: I'm running a cluster on AWS EKS. The container (a StatefulSet pod) that is currently running has a Docker installation inside it. I ran this image as a Kubernetes StatefulSet in my cluster. Here is my YAML file:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: jenkins
      labels:
        run: jenkins
    spec:
      serviceName: jenkins
      replicas: 1
      selector:
        matchLabels:
          run: jenkins
      template:
        metadata:
          labels:
            run: jenkins
        spec:
          securityContext:
            fsGroup: 1000
          containers:
          - name: jenkins
            image: 99*****.dkr.ecr.<region> …
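For context, this ulimit error is typical of running the Docker daemon inside an unprivileged container: dockerd tries to raise resource limits, which the kernel refuses without elevated privileges. The change commonly suggested for Docker-in-Docker setups is a privileged securityContext on the container; a hedged sketch (the image is illustrative, since the question uses a private ECR image, and privileged mode has real security implications):

    containers:
    - name: jenkins
      image: jenkins/jenkins:lts    # illustrative stand-in for the private ECR image
      securityContext:
        privileged: true            # allows dockerd in the container to set ulimits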

问题 I'm running a cluster on AWS EKS. Container(StatefulSet POD) that currently running has Docker installation inside of it. I ran this image as Kubernetes StatefulSet in my cluster. Here is my yaml file, apiVersion: apps/v1 kind: StatefulSet metadata: name: jenkins labels: run: jenkins spec: serviceName: jenkins replicas: 1 selector: matchLabels: run: jenkins template: metadata: labels: run: jenkins spec: securityContext: fsGroup: 1000 containers: - name: jenkins image: 99*****.dkr.ecr.<region>