docker-container

Missing log lines when writing to CloudWatch from ECS Docker containers

A Docker container on AWS ECS exits before all of its logs reach CloudWatch Logs. Why are some streams of a CloudWatch Logs group incomplete (i.e., the Fargate Docker container exits successfully but the log stream stops being updated abruptly)? I see this intermittently, in almost all log groups, but not on every log stream/task run. I'm running on version 1.3.0.

Description: a Dockerfile runs Node.js or Python scripts using the CMD instruction. These are not servers/long-running processes, and my use case requires the containers to exit when the task completes. Sample Dockerfile:

FROM node:6
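One workaround that is often suggested for this symptom (an assumption on my part, not something confirmed in the truncated question) is to delay the container's exit briefly so the awslogs driver has time to flush buffered output before the task stops. A minimal sketch, with app.js standing in for the real script:

FROM node:6
COPY app.js /app.js
# Hypothetical wrapper: run the script, then sleep a few seconds so
# buffered log lines can be shipped to CloudWatch before the exit.
CMD ["sh", "-c", "node /app.js; sleep 10"]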

Web server running inside a Docker container inside an EC2 instance responds very slowly

I have a web server running inside a Docker container on an AWS EC2 Ubuntu instance. When I send requests to the web server, the responses arrive very slowly (20+ seconds most of the time, although the response time varies). It does not time out, though. The web server is very lightweight; it exists only for testing, so it does almost nothing.

docker version 17.03.0-ce
docker-compose version 1.12.0-rc1

How I debugged so far: when sending requests to the web server in the Docker container from within the EC2 instance itself (url = http://localhost:xxxx/api), it is still very slow, so the problem should not be the network between the client and the instance.
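To narrow down where the latency is introduced, one step worth trying (a sketch; the port, container name, and internal port are placeholders) is to time the request at each hop:

# From the EC2 host, through Docker's published port:
curl -o /dev/null -s -w 'total: %{time_total}s\n' http://localhost:xxxx/api

# From inside the container itself, bypassing the port mapping
# (assumes curl is available in the image):
docker exec mycontainer curl -o /dev/null -s -w 'total: %{time_total}s\n' http://localhost:80/api

If only the first request is slow, the suspect is Docker's userland proxy or NAT setup rather than the application itself.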

Huge files in Docker containers

I need to create a Docker image (and, consequently, containers from that image) that uses large files (containing genomic data, thus reaching ~10 GB in size). How am I supposed to optimize their usage? Am I supposed to include them in the image (such as COPY large_folder large_folder_in_container)? Is there a better way of referencing such files? The point is that it sounds strange to me to push such an image (which would be >10 GB) to my private registry. I wonder if there is a way of attaching a sort of volume to the container, without packing all those gigabytes into the image. Thank you.
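The usual answer is to keep the data out of the image entirely and attach it at run time. A minimal sketch, assuming the data lives at /data/genomics on the host (paths and names are placeholders):

# Bind-mount a host directory into the container:
docker run -v /data/genomics:/data/genomics myimage ...

# Or create a named volume once, populate it, and share it:
docker volume create genomics-data
docker run -v genomics-data:/data/genomics myimage ...

This keeps the image small and lets several containers share a single copy of the data.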

docker swarm - how to balance already running containers in a swarm cluster?

Question: I have a Docker swarm cluster with 2 nodes on AWS. I stopped both instances, then started the swarm manager first and the worker afterwards. Before stopping the instances I had a service running with 4 replicas distributed between the manager and the worker. When I started the swarm manager node first, all replica containers started on the manager itself and never moved to the worker at all. Please tell me how to balance the load. Isn't the swarm manager responsible for doing this once the worker has started?

Answer 1: Swarm currently (18.03) does not move or replace containers when new nodes are started, if services are in the default "replicated mode".
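To trigger a rebalance manually, forcing an update of the service makes the scheduler re-place its tasks across the nodes that are now available (the service name is a placeholder):

# No configuration change, just a forced reschedule:
docker service update --force my_service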

How to rebuild docker container in docker-compose.yml?

There is a set of services defined in docker-compose.yml, and these services have been started. I need to rebuild only one of them and start it without bringing up the other services. I ran the following commands:

docker-compose up -d # run all services
docker-compose stop nginx # stop only one... but it is still running!
docker-compose build --no-cache nginx
docker-compose up -d --no-deps # link nginx to other services

In the end I still got the old nginx container. By the way, docker-compose does not kill all running containers!

Answer 1:

docker-compose up -d --no-deps --build <service_name>

--no-deps: don't start linked services.
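Spelled out as separate steps, the rebuild-one-service flow looks like this (nginx is the service from the question):

# Rebuild the image for a single service, ignoring the build cache:
docker-compose build --no-cache nginx
# Recreate only that container; --no-deps keeps linked services untouched:
docker-compose up -d --no-deps nginx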

Windows container failed to start with error, “failed to create endpoint on network nat: HNS failed with error : Failed to create endpoint.”

I have been trying Windows containers on Windows Server 2016 TP5. Suddenly I started getting an error while running a container with the port mapping option -p 80:80:

c:\>docker run -it -p 80:80 microsoft/iis cmd
docker: Error response from daemon: failed to create endpoint sharp_brahmagupta on network nat: HNS failed with error : Failed to create endpoint.

I made sure that no other container is running and that port 80 on the host machine is not being used by any other service. Did anyone face the same issue?

Answer 1: After searching around I stumbled upon this issue on GitHub. This seemed to be a known issue with Windows Server 2016 TP5.
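A workaround commonly reported for stale HNS endpoint state (my assumption here; the original answer is truncated) is to restart the Host Network Service, and the Docker service along with it, from an elevated prompt:

c:\> net stop hns
c:\> net start hns
c:\> net stop docker
c:\> net start docker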

Docker container with entrypoint variable expansion and CMD parameters

I want to create a Docker image that acts as an executable, to which the user passes a token as an environment variable. The executable has sub-commands that the user should pass via Docker's CMD (think of git with authentication via an environment variable). However, Docker does not append the CMD to the ENTRYPOINT. The relevant part of my Dockerfile looks like this:

ENTRYPOINT ["/bin/sh", "-c", "/usr/bin/mycmd --token=$MY_TOKEN"]
CMD ["pull", "stuff"]

So if this container is executed without any CMD overrides and with secret as the MY_TOKEN variable, I would expect mycmd --token=secret pull stuff to be executed.
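The reason this fails is that with sh -c, anything after the command string becomes the shell's positional parameters instead of being appended to the command. A sketch of the usual fix, assuming mycmd accepts its sub-commands as trailing arguments: pass CMD through as "$@", with a dummy "--" consuming $0:

# "$@" expands to the CMD arguments ("pull", "stuff" by default):
ENTRYPOINT ["/bin/sh", "-c", "exec /usr/bin/mycmd --token=\"$MY_TOKEN\" \"$@\"", "--"]
CMD ["pull", "stuff"]

Running the container with MY_TOKEN=secret then executes mycmd --token=secret pull stuff, and overriding CMD replaces just the sub-command.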

Is it possible to start a stopped container from another container

There are two containers, A and B. Once container A starts, one process is executed, and then the container stops. Container B is just a web application (say, Express.js). Is it possible to kick-start A from container B?

Answer 1: It is possible to grant a container access to Docker so that it can spawn other containers on your host. You do this by exposing the Docker socket inside the container, e.g.:

docker run -v /var/run/docker.sock:/var/run/docker.sock --name containerB myimage ...

Now, if you have the docker client available inside the container, you will be able to control the Docker daemon on the host.
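From inside container B this then looks like the following (containerA is a placeholder for container A's actual name; the first variant assumes the docker CLI is installed in B's image):

# With the docker client installed in the image:
docker start containerA

# Without the client, the same call via the Engine API over the socket:
curl --unix-socket /var/run/docker.sock -X POST http://localhost/containers/containerA/start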