I use a compute environment of 0–256 m3.medium on-demand instances. My job definition requires 1 vCPU and 3 GB of RAM, which an m3.medium provides.
What are possible reasons why AWS Batch jobs are stuck in the RUNNABLE state?
AWS says:
A job that resides in the queue, has no outstanding dependencies, and is therefore ready to be scheduled to a host. Jobs in this state are started as soon as sufficient resources are available in one of the compute environments that are mapped to the job’s queue. However, jobs can remain in this state indefinitely when sufficient resources are unavailable.
but that does not answer my question
There are other reasons why a job can get stuck in RUNNABLE:
- Insufficient permissions for the role associated with the Compute Environment.
- No internet access from the Compute Environment's instances. You will need to associate a NAT gateway or an Internet Gateway with the Compute Environment's subnet.
- Make sure the "Enable auto-assign public IPv4 address" setting is checked on your Compute Environment's subnet. (Pointed out by @thisisbrians in the comments.)
- Problems with your image. You need to use an ECS-optimized AMI or make sure the ECS container agent is running. More info in the AWS docs.
- You're trying to launch instance types for which your account is limited to 0 instances (EC2 console > Limits, in the left menu). (Read more in gergely-danyi's comment.)
- And, as mentioned, insufficient resources in the compute environment.
Also, make sure to read the AWS Batch troubleshooting guide. A quick diagnostic sketch for some of these checks follows below.
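Not part of the original answer: a minimal diagnostic sketch using boto3 (the compute environment name my-batch-env is a placeholder) that prints the compute environment's status/statusReason and whether its subnets auto-assign public IPv4 addresses, covering a couple of the causes listed above.

import boto3

COMPUTE_ENV = "my-batch-env"  # placeholder -- use your compute environment's name

batch = boto3.client("batch")
ec2 = boto3.client("ec2")

# statusReason often names the actual problem (bad role, missing subnet, limit reached).
ce = batch.describe_compute_environments(
    computeEnvironments=[COMPUTE_ENV]
)["computeEnvironments"][0]
print("status:", ce["status"], "-", ce.get("statusReason", ""))

# For managed environments, check whether the subnets auto-assign public IPv4
# addresses (needed for internet access unless you route through a NAT gateway).
subnet_ids = ce["computeResources"]["subnets"]
for subnet in ec2.describe_subnets(SubnetIds=subnet_ids)["Subnets"]:
    print(subnet["SubnetId"], "MapPublicIpOnLaunch =", subnet["MapPublicIpOnLaunch"])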
The roles should be defined with at least the following policies and trust relationships. Otherwise, jobs will get stuck in RUNNABLE because the compute environment doesn't have enough privileges to start instances (a quick verification sketch follows the two role definitions):
AWSBatchServiceRole
- Attached policies: AWSBatchServiceRole
- Trusted relationship: batch.amazonaws.com

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "batch.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
ecsInstanceRole
- Attached policies: AmazonEC2ContainerServiceforEC2Role
- Trusted relationship: ec2.amazonaws.com

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
I just fought with this for a while, and found the answer.
One possible reason jobs can get stuck in RUNNABLE is that there are no instances to run the job on. If this is the case, looking at the Auto Scaling group, as mentioned in the answer above, can show you the actual error that's preventing instances from being started, guiding you to the exact problem rather than leaving you to try any number of solutions to problems you don't have. Error messages are our friends.
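Not from the original answer: a minimal sketch, assuming boto3 and a compute environment backed by an Auto Scaling group whose name contains the environment name (my-batch-env is a placeholder), that surfaces the scaling activity errors mentioned above.

import boto3

ENV_NAME_FRAGMENT = "my-batch-env"  # placeholder -- part of your compute environment's name

autoscaling = boto3.client("autoscaling")

for group in autoscaling.describe_auto_scaling_groups()["AutoScalingGroups"]:
    if ENV_NAME_FRAGMENT not in group["AutoScalingGroupName"]:
        continue
    # Recent scaling activities usually contain the error that blocks instance
    # launches (e.g. an instance limit or capacity problem).
    activities = autoscaling.describe_scaling_activities(
        AutoScalingGroupName=group["AutoScalingGroupName"]
    )["Activities"]
    for activity in activities[:5]:
        print(activity["StatusCode"], "-", activity.get("StatusMessage", activity["Description"]))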
Source: https://stackoverflow.com/questions/48151332/why-are-aws-batch-jobs-stuck-in-runnable