Running EMR Spark With Multiple S3 Accounts

2020-12-29 11:12

I have an EMR Spark job that needs to read data from S3 in one account and write to S3 in another.
I split my job into two steps.

  1. read data from the S3 (no

4 Answers
  • 2020-12-29 11:20

    For controlling access to resources, managing IAM roles is the standard practice; AssumeRole is used when you want to access resources in a different account. If you or your organisation follow that practice, then see https://aws.amazon.com/blogs/big-data/securely-analyze-data-from-another-aws-account-with-emrfs/. The basic idea is to plug a custom credentials provider into EMRFS, which uses it to obtain access to objects in the other account's S3 buckets. You can go one step further and parameterize the STS and bucket ARNs for the JAR built in that blog post.
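
    As a rough illustration of that idea, here is a minimal sketch of such a credentials provider in Scala, assuming the AWS Java SDK v1 is on the classpath; the class name, session name and role ARN are placeholders, and the class would be registered with EMRFS (the blog uses the fs.s3.customAWSCredentialsProvider setting):

    import com.amazonaws.auth.{AWSCredentials, AWSCredentialsProvider, STSAssumeRoleSessionCredentialsProvider}

    // Hypothetical provider that EMRFS loads to assume a role in the bucket-owning account
    class CrossAccountCredentialsProvider extends AWSCredentialsProvider {

      // Placeholder ARN of the role in the other account that grants access to its buckets
      private val roleArn = "arn:aws:iam::OTHER-ACCOUNT-NUMBER:role/emrfs-cross-account-role"

      // Delegate to the SDK's STS-backed provider, which refreshes the temporary credentials itself
      private val delegate =
        new STSAssumeRoleSessionCredentialsProvider.Builder(roleArn, "emrfs-cross-account").build()

      override def getCredentials: AWSCredentials = delegate.getCredentials

      override def refresh(): Unit = delegate.refresh()
    }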

  • 2020-12-29 11:23

    I believe you need to assign an IAM role to your compute nodes (you have probably already done this), then grant cross-account access to that role via IAM in the "Remote" account. See http://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html for the details.
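
    If you go that route, one way to consume the cross-account role from the Spark job itself is to assume it with the AWS SDK and hand the temporary credentials to the s3a connector. A minimal sketch in Scala, assuming the AWS Java SDK v1 and hadoop-aws are available; the role ARN, session name and bucket path are placeholders:

    import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder
    import com.amazonaws.services.securitytoken.model.AssumeRoleRequest
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("cross-account-read").getOrCreate()

    // Assume the role in the "Remote" account using the node's own instance-role credentials
    val sts = AWSSecurityTokenServiceClientBuilder.defaultClient()
    val temp = sts.assumeRole(
      new AssumeRoleRequest()
        .withRoleArn("arn:aws:iam::REMOTE-ACCOUNT-NUMBER:role/cross-account-s3-role")
        .withRoleSessionName("emr-spark-cross-account"))
      .getCredentials

    // Hand the temporary credentials to s3a for this job
    val hc = spark.sparkContext.hadoopConfiguration
    hc.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider")
    hc.set("fs.s3a.access.key", temp.getAccessKeyId)
    hc.set("fs.s3a.secret.key", temp.getSecretAccessKey)
    hc.set("fs.s3a.session.token", temp.getSessionToken)

    val df = spark.read.parquet("s3a://other-account-bucket/input/")

    Note that this swaps the s3a credentials for the whole job; hadoop-aws also supports per-bucket settings (fs.s3a.bucket.<bucket-name>.*) if you only want the remote bucket to use them.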

  • 2020-12-29 11:29

    The solution is actually quite simple.

    Firstly, EMR clusters have two roles:

    • A service role (EMR_DefaultRole) that grants permissions to the EMR service (e.g., for launching Amazon EC2 instances)
    • An EC2 role (EMR_EC2_DefaultRole) that is attached to EC2 instances launched in the cluster, giving them access to AWS credentials (see Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances)

    These roles are explained in: Default IAM Roles for Amazon EMR

    Therefore, each EC2 instance launched in the cluster is assigned the EMR_EC2_DefaultRole role, which makes temporary credentials available via the Instance Metadata service. (For an explanation of how this works, see: IAM Roles for Amazon EC2.) Amazon EMR nodes use these credentials to access AWS services such as S3, SNS, SQS, CloudWatch and DynamoDB.

    Secondly, you will need to add permissions to the Amazon S3 bucket in the other account to permit access via the EMR_EC2_DefaultRole role. This can be done by adding a bucket policy to the S3 bucket (here named other-account-bucket) like this:

    {
        "Id": "Policy1",
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "Stmt1",
                "Action": "s3:*",
                "Effect": "Allow",
                "Resource": [
                    "arn:aws:s3:::other-account-bucket",
                    "arn:aws:s3:::other-account-bucket/*"
                ],
                "Principal": {
                    "AWS": [
                        "arn:aws:iam::ACCOUNT-NUMBER:role/EMR_EC2_DefaultRole"
                    ]
                }
            }
        ]
    }
    

    This policy grants all S3 permissions (s3:*) to the EMR_EC2_DefaultRole role belonging to the account matching the ACCOUNT-NUMBER in the policy, which should be the account in which the EMR cluster was launched. Be careful when granting such broad permissions -- you might want to grant only s3:GetObject rather than all S3 permissions.

    That's all! The bucket in the other account will now accept requests from the EMR nodes because they are using the EMR_EC2_DefaultRole role.
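
    On the Spark side nothing extra is needed once that policy is in place, because EMRFS picks up the EMR_EC2_DefaultRole credentials from the instance metadata automatically. A minimal sketch in Scala, with placeholder bucket paths:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("cross-account-copy").getOrCreate()

    // Read from the other account's bucket (allowed by the bucket policy above)
    val df = spark.read.parquet("s3://other-account-bucket/input/")

    // Write to a bucket in the cluster's own account (placeholder name)
    df.write.parquet("s3://my-account-bucket/output/")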

    Disclaimer: I tested the above by creating a bucket in Account-A and assigning permissions (as shown above) to a role in Account-B. An EC2 instance was launched in Account-B with that role. I was able to access the bucket from the EC2 instance via the AWS Command-Line Interface (CLI). I did not test it within EMR, however it should work the same way.

  • 2020-12-29 11:35

    With Spark you can also use AssumeRole to access an S3 bucket in another account, using an IAM role defined in that other account. This makes it easier for the other account's owner to manage the permissions granted to the Spark job. Managing access via S3 bucket policies can be a pain, because access rights end up distributed across multiple locations rather than contained within a single IAM role.

    Here is the hadoopConfiguration:

    "fs.s3a.credentialsType" -> "AssumeRole",
    "fs.s3a.stsAssumeRole.arn" -> "arn:aws:iam::<<AWSAccount>>:role/<<crossaccount-role>>",
    "fs.s3a.impl" -> "com.databricks.s3a.S3AFileSystem",
    "spark.hadoop.fs.s3a.server-side-encryption-algorithm" -> "aws:kms",
    "spark.hadoop.fs.s3a.server-side-encryption-kms-master-key-id" -> "arn:aws:kms:ap-southeast-2:<<AWSAccount>>:key/<<KMS Key ID>>"
    

    An external ID can also be used as a passphrase:

    "spark.hadoop.fs.s3a.stsAssumeRole.externalId" -> "GUID created by other account owner"
    

    We were using Databricks for the above; we have not tried it on EMR yet.
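
    For what it's worth, stock hadoop-aws (Hadoop 3.1+) has a similar built-in assume-role option that should also work on EMR with s3a paths. A minimal sketch, assuming an existing SparkSession named spark and a placeholder role ARN:

    val hc = spark.sparkContext.hadoopConfiguration

    // Use the s3a connector's own assume-role credential provider instead of the Databricks one
    hc.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider")
    hc.set("fs.s3a.assumed.role.arn", "arn:aws:iam::<<AWSAccount>>:role/<<crossaccount-role>>")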
