I'm trying to deploy a Docker container image to AWS using ECS, but the EC2 instance is not being created. I have scoured the internet.
I realize this is an older thread, but I stumbled on it after seeing the error the OP mentioned while following this tutorial.
Changing to an ECS-optimized AMI did not help. My VPC's route table already had a 0.0.0.0/0 route for the subnet. My instances were added to the correct cluster, and they had the proper permissions.
Thanks to @sanath_p's link to this thread, I found a solution: under the Advanced settings, I changed the IP address type to "Assign a public IP address to every instance".
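If you are setting this up from code rather than the console wizard, the closest equivalent I know of is the subnet's MapPublicIpOnLaunch attribute (a subnet-level default, not exactly the wizard's setting). A minimal boto3 sketch, with a placeholder subnet ID:

import boto3

ec2 = boto3.client("ec2")

# Have instances launched into this subnet receive a public IP by default
ec2.modify_subnet_attribute(
    SubnetId="subnet-0123456789abcdef0",  # placeholder; use the subnet your cluster launches into
    MapPublicIpOnLaunch={"Value": True},
)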
Just in case someone else is blocked with this problem as I was... I've tried everything here and nothing worked for me.
Besides what was said here about the EC2 Instance Role, as commented here, in my case it only worked once I also gave the EC2 instance some basic configuration, using an initial User Data script like this:
#!/bin/bash
# Point the ECS agent at the right cluster; if ECS_CLUSTER is unset,
# the agent tries to register with a cluster named "default"
cat <<'EOF' >> /etc/ecs/ecs.config
ECS_CLUSTER=quarkus-ec2
EOF
Specifying the name of the ECS cluster in this ecs.config file resolved my problem. Without this configuration, the ECS agent log on the EC2 instance showed an error saying it could not connect to ECS; with it, the EC2 instance became visible to the ECS cluster.
After doing this, the EC2 instance showed up as available in my ECS cluster.
The AWS documentation says this part is optional, but in my case it didn't work without this "optional" configuration.
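To check the registration without the console, a minimal boto3 sketch (the cluster name quarkus-ec2 is the one from the script above):

import boto3

ecs = boto3.client("ecs")

# An empty list here means the agent has not registered the instance yet
response = ecs.list_container_instances(cluster="quarkus-ec2")
print(response["containerInstanceArns"])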
I ran into this issue when using Fargate. I fixed it by explicitly setting launchType="FARGATE" when calling run_task.
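For context, a minimal boto3 sketch of that call; the cluster, task definition, subnet, and security group values are all placeholders:

import boto3

ecs = boto3.client("ecs")

ecs.run_task(
    cluster="my-cluster",            # placeholder cluster name
    taskDefinition="my-task-def:1",  # placeholder family:revision
    launchType="FARGATE",            # the part that fixed it for me
    networkConfiguration={           # awsvpc networking is required for Fargate
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],     # placeholder subnet
            "securityGroups": ["sg-0123456789abcdef0"],  # placeholder security group
            "assignPublicIp": "ENABLED",
        }
    },
)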
The real issue is lack of permissions. As long as you create and assign an IAM role with the AmazonEC2ContainerServiceforEC2Role managed policy attached, the problem goes away.
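For anyone doing this from code, a minimal boto3 sketch; the role name ecsInstanceRole is an assumption (use whatever instance role your launch configuration references):

import boto3

iam = boto3.client("iam")

# Attach the AWS-managed policy the ECS agent needs to register the instance
iam.attach_role_policy(
    RoleName="ecsInstanceRole",  # assumed name; substitute your EC2 instance role
    PolicyArn="arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role",
)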