Why are no Amazon S3 authentication handlers ready?

Asked by 广开言路 on 2020-12-13 08:21 · 12 answers · 1992 views

I have my $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY environment variables set properly, and I run this code:

import boto
conn = boto.connect_s3()
12 Answers
  • 2020-12-13 09:09

    On macOS, exported keys need to be in the form key=value. So exporting, say, the AWS_ACCESS_KEY_ID environment variable should look like AWS_ACCESS_KEY_ID=yourkey. If you have any quotation marks around your values, as mentioned in the answers above, boto will throw the above-mentioned error.
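    A quick way to check for stray quotes is to inspect the variables from Python before calling boto. This is just a sketch using only the standard library; it does not talk to AWS:

```python
import os

# Check the two variables boto reads; values wrapped in quote characters
# are passed to AWS verbatim and will fail authentication.
for name in ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"):
    value = os.environ.get(name)
    if value is None:
        print(name, "is not set")
    elif value != value.strip("'\""):
        print(name, "appears to be wrapped in quotes")
    else:
        print(name, "looks fine")
```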

  • 2020-12-13 09:10

    In my case the problem was that in IAM, "users by default have no permissions". It took me all day to track that down, since I was used to the original (pre-IAM) AWS authentication model, in which what are now called "root" credentials were the only option.

    There are lots of AWS documents on creating users, but only a few places note that you have to grant them permissions before they can do anything. One is Working with Amazon S3 Buckets - Amazon Simple Storage Service, but even that page doesn't simply tell you to go to the Policies tab, suggest a good starting policy, and explain how to apply it.

    The wizard-of-sorts simply encourages you to "Get started with IAM users" without clarifying that there is much more to do. Even if you poke around a bit, you just see, e.g., "Managed Policies: There are no managed policies attached to this user.", which doesn't suggest that you need a policy before the user can do anything.

    To establish a root-like user, see: Creating an Administrators Group Using the Console - AWS Identity and Access Management

    I don't see a specific policy that simply allows read-only access to all of S3 (my own buckets as well as public ones owned by others).
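    For what it's worth, a minimal read-only S3 policy of the kind described above might look like the following. This is only a sketch (the action list is illustrative); AWS also ships a managed AmazonS3ReadOnlyAccess policy covering similar ground:

```python
import json

# A minimal IAM policy granting read-only access to all of S3,
# including public buckets owned by other accounts.
read_only_s3 = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:Get*", "s3:List*"],
            "Resource": "*",
        }
    ],
}

print(json.dumps(read_only_s3, indent=2))
```

    You would attach this JSON to your user from the IAM console's Policies tab (or via the IAM API).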

  • 2020-12-13 09:16

    Following up on nealmcb's answer on IAM roles: while deploying EMR clusters using an IAM role, I had a similar issue where at times (not every time) this error would come up while connecting boto to S3.

    boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler']
    

    The Metadata Service can time out while retrieving credentials. Thus, as the docs suggest, I added a Boto section to the config and increased the number of retries for retrieving the credentials. Note that the default is a single attempt.

    import boto
    try:
        import configparser  # Python 3
    except ImportError:
        import ConfigParser as configparser  # Python 2

    # add_section raises DuplicateSectionError if the section already
    # exists (e.g. when this runs twice in one process), so ignore that.
    try:
        boto.config.add_section("Boto")
    except configparser.DuplicateSectionError:
        pass
    boto.config.set("Boto", "metadata_service_num_attempts", "20")
    

    http://boto.readthedocs.org/en/latest/boto_config_tut.html?highlight=retries#boto

    Scroll down to: "You can control the timeouts and number of retries used when retrieving information from the Metadata Service (this is used for retrieving credentials for IAM roles on EC2 instances)."
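    As an alternative to setting it in code, the same option can live in a boto config file (for example ~/.boto or /etc/boto.cfg), which boto reads on import. A minimal fragment, assuming the default config search path:

```ini
# Increase retries against the EC2 instance metadata service
[Boto]
metadata_service_num_attempts = 20
```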

  • 2020-12-13 09:16

    You can now set these as arguments in the connect function call.

    s3 = boto.connect_s3(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
    

    Just thought I'd add that in case anyone else searches like I did.

  • 2020-12-13 09:19

    I just ran into this problem while using Linux and SES; I hope it may help others with a similar issue. I had installed awscli and configured my keys by doing:

    sudo apt-get install awscli
    aws configure
    

    This sets up your credentials in ~/.aws/config, just like @huythang said. But boto looks for your credentials in ~/.aws/credentials, so copy them over:

    cp ~/.aws/config ~/.aws/credentials
    

    Assuming an appropriate policy is set up for your user with those credentials, you shouldn't need to set any environment variables.
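    To see which of the two files is actually present on your machine, a small standard-library check (nothing boto-specific) can help:

```python
import os

# boto 2 reads ~/.aws/credentials; `aws configure` may have written
# only ~/.aws/config, which boto does not consult for keys.
for name in ("credentials", "config"):
    path = os.path.join(os.path.expanduser("~"), ".aws", name)
    print(path, "exists" if os.path.exists(path) else "missing")
```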

  • 2020-12-13 09:24

    Boto will take your credentials from the environment variables. I've tested this with v2.0b3 and it works fine. It gives precedence to credentials specified explicitly in the constructor, but it will pick up credentials from the environment variables too.

    The simplest way to do this is to put your credentials into a text file, and specify the location of that file in the environment.

    For example, on Windows (I expect it will work just the same on Linux, but I have not personally tried it):

    Create a file called "mycred.txt" and put it in C:\temp. The file contains two lines:

    AWSAccessKeyId=<your access id>
    AWSSecretKey=<your secret key>
    

    Define the environment variable AWS_CREDENTIAL_FILE to point at C:\temp\mycred.txt

    C:\>SET AWS_CREDENTIAL_FILE=C:\temp\mycred.txt
    

    Now your code fragment above:

    import boto
    conn = boto.connect_s3()
    

    will work fine.
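    The credential file is just plain key=value pairs; a tiny sketch of how such a file is interpreted (the sample values are the placeholders from above, not real keys):

```python
# Parse the two-line AWS_CREDENTIAL_FILE format shown above.
sample = "AWSAccessKeyId=<your access id>\nAWSSecretKey=<your secret key>\n"
creds = {}
for line in sample.splitlines():
    key, _, value = line.partition("=")
    if key:
        creds[key.strip()] = value.strip()
print(creds)
```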
