AWS Lambda Function can import a module when run locally, but not when deployed

Submitted by 最后都变了 on 2019-12-08 05:23:14

Question


I am attempting to expand on this guide by building a CodePipeline to pick up changes from GitHub, build them, and deploy the changes to my Lambda. Running sam build --use-container; sam local start-api lets me call the function successfully on my local machine, but when I deploy the function to AWS, the code fails to import a dependency.

My code depends on requests. I have duly included that in my requirements.txt file:

requests==2.20.0

My buildspec.yml includes directions to install the dependencies

version: 0.1
phases:
  install:
    commands:
      - pip install -r hello_world/requirements.txt -t .
      - pip install -U pytest
  pre_build:
    commands:
      - python -m pytest tests/
  build:
    commands:
      - aws cloudformation package --template-file template.yaml --s3-bucket <my_bucket>
                                   --output-template-file outputTemplate.yml
artifacts:
  type: zip
  files:
    - '**/*'

When CodeBuild builds my package, the installation of requests is acknowledged in the logs:

[Container] 2018/12/27 23:16:44 Waiting for agent ping 
[Container] 2018/12/27 23:16:46 Waiting for DOWNLOAD_SOURCE 
[Container] 2018/12/27 23:16:46 Phase is DOWNLOAD_SOURCE 
[Container] 2018/12/27 23:16:46 CODEBUILD_SRC_DIR=/codebuild/output/src775882062/src 
[Container] 2018/12/27 23:16:46 YAML location is /codebuild/output/src775882062/src/buildspec.yml 
[Container] 2018/12/27 23:16:46 Processing environment variables 
[Container] 2018/12/27 23:16:46 Moving to directory /codebuild/output/src775882062/src 
[Container] 2018/12/27 23:16:46 Registering with agent 
[Container] 2018/12/27 23:16:46 Phases found in YAML: 3 
[Container] 2018/12/27 23:16:46  PRE_BUILD: 1 commands 
[Container] 2018/12/27 23:16:46  BUILD: 1 commands 
[Container] 2018/12/27 23:16:46  INSTALL: 2 commands 
[Container] 2018/12/27 23:16:46 Phase complete: DOWNLOAD_SOURCE Success: true 
[Container] 2018/12/27 23:16:46 Phase context status code:  Message:  
[Container] 2018/12/27 23:16:46 Entering phase INSTALL 
[Container] 2018/12/27 23:16:46 Running command pip install -r hello_world/requirements.txt -t . 
Collecting requests==2.20.0 (from -r hello_world/requirements.txt (line 1)) 
  Downloading https://files.pythonhosted.org/packages/f1/ca/10332a30cb25b627192b4ea272c351bce3ca1091e541245cccbace6051d8/requests-2.20.0-py2.py3-none-any.whl (60kB)
...

But when I call the deployed function, I get an error:

Unable to import module 'app': No module named 'requests'

This seems very similar to this question, but I'm not using PYTHONPATH when building my Lambda.


EDIT: I added some debugging code to files in this package, to try to get a sense of their runtime environment. I also added similar debugging to another package that I deploy to Lambda via CodePipeline (though this one doesn't use SAM). Debugging code is below:

import os
import sys

# Print each entry on sys.path, list its contents, and note whether the
# requests package appears there.
print('Inside ' + __file__)
for path in sys.path:
    print(path)
    if os.path.exists(path):
        print(os.listdir(path))
        for f in os.listdir(path):
            if f.startswith('requests'):
                print('Found requests!')
    print()

This code attempts to determine whether the requests module is present anywhere on sys.path in the Lambda's runtime environment and, if so, where.

For this (SAM-enabled) package, requests was not found anywhere. In the non-SAM-enabled package, requests (along with all the package's other dependencies declared in requirements.txt) was found in /var/task.

It looks like either CodeBuild isn't bundling the function's dependencies alongside the source, or CloudFormation isn't deploying those dependencies. I suspect this has something to do with the fact that this is a SAM-defined function rather than a "vanilla" CloudFormation one.

This page says that "You can also use other AWS services that integrate with AWS SAM to automate your deployments", but I can't see how to get CodePipeline to run sam deploy instead of aws cloudformation deploy (although this page claims that they are synonyms).


EDIT 2: I believe I've found the problem. For context, recall that I have two packages deploying Lambdas via CodePipeline (or attempting to): the one referred to in this question, which defines the Lambda as an AWS::Serverless::Function, and a second one, which uses AWS::Lambda::Function. The first function's code is defined as a relative location (i.e., a reference to a directory in my package: CodeUri: main/), whereas the second function's Code is a reference to an S3 location (fetched, in CodePipeline, with {"Fn::GetArtifactAtt": ["built", "ObjectKey"]} and {"Fn::GetArtifactAtt": ["built", "BucketName"]}).
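To make the difference concrete, here is a minimal sketch of the two styles (the resource names, handler, runtime, role, and S3 values are illustrative assumptions, not copied from my actual templates):

  # First package: SAM function whose code is a local directory
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: main/
      Handler: app.lambda_handler   # assumed handler name
      Runtime: python3.6            # assumed runtime

  # Second package: plain Lambda function whose code is a pre-built S3 object
  OtherFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        S3Bucket: my-artifact-bucket   # in practice supplied via {"Fn::GetArtifactAtt": ["built", "BucketName"]}
        S3Key: built-package.zip       # in practice supplied via {"Fn::GetArtifactAtt": ["built", "ObjectKey"]}
      Handler: app.lambda_handler
      Runtime: python3.6
      Role: !GetAtt LambdaExecutionRole.Arn   # assumes a role defined elsewhere in the template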

The following is a sample of the first package's CodeBuild output:

[Container] 2018/12/30 19:19:48 Running command aws cloudformation package --template-file template.yaml --s3-bucket pop-culture-serverless-bucket --output-template-file outputTemplate.yml 

Uploading to 669099ba3d2258eeb7391ad772bf870d  4222 / 4222.0  (100.00%) 
Successfully packaged artifacts and wrote output template to file outputTemplate.yml. 
Execute the following command to deploy the packaged template 
aws cloudformation deploy --template-file /codebuild/output/src110881899/src/outputTemplate.yml --stack-name <YOUR STACK NAME> 

Compare with the same output from the second package's CodeBuild output:

....
[Container] 2018/12/30 16:42:27 Running command aws cloudformation package --template-file template.json --s3-bucket {BUCKET_NAME} --output-template-file outputTemplate.yml 

Successfully packaged artifacts and wrote output template to file outputTemplate.yml. 
Execute the following command to deploy the packaged template 
aws cloudformation deploy --template-file /codebuild/output/src282566886/src/outputTemplate.yml --stack-name <YOUR STACK NAME>

This suggests that the first package's aws cloudformation package call uploads a small file (669099ba3d2258eeb7391ad772bf870d, 4222 bytes) to S3 that is based only on what template.yaml references, whereas the "output" of the Build stage of the second package's CodePipeline is a zip of the entire directory that CodeBuild has been running in, which includes the dependencies (because of the pip install calls).
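In other words, after aws cloudformation package runs, the CodeUri in the output template points at that uploaded object, roughly like the following sketch (reconstructed from the log above; the handler and runtime are assumptions):

  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: s3://pop-culture-serverless-bucket/669099ba3d2258eeb7391ad772bf870d
      Handler: app.lambda_handler
      Runtime: python3.6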

I could get around this by simply changing the template.yaml of my SAM-template Function to reference an S3 location - but this would mean that I would be unable to test updates to the function locally (with, e.g., sam local start-api) without editing the template, since it would reference the S3 location and so wouldn't be affected by local changes.

Ideally, I want to find a way to include the dependencies of the code in the packaged-and-uploaded S3 file. From local testing, it appears that running sam package/aws cloudformation package without having first run sam build results in just the source code (no dependencies) being included. However, I can't run sam build in CodeBuild, since SAM isn't installed there.
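(One way around that might be to install the SAM CLI in the build container myself; a sketch of what the buildspec could look like, assuming the managed image allows this, is below. I haven't verified it in my pipeline.)

  phases:
    install:
      commands:
        - pip install aws-sam-cli   # assumption: makes the sam CLI available in the build container
    build:
      commands:
        - sam build                 # would write the code plus installed dependencies to .aws-sam/build/
        - aws cloudformation package --template-file .aws-sam/build/template.yaml --s3-bucket <my_bucket> --output-template-file outputTemplate.yml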

(This also suggests that I have been unintentionally deploying the test dependencies of my second package, since they had to be installed in CodeBuild in order to run the tests.)


Answer 1:


I found a "solution" to this by installing my code's dependencies into the main directory rather than the root directory. However, I believe a superior option would be to use layers to hold the dependencies.
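A minimal sketch of the layer approach (the resource names, paths, and runtime are assumptions, not something given in this thread):

  DependenciesLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      ContentUri: dependencies/     # assumed layout: dependencies/python/ contains the pip-installed packages
      CompatibleRuntimes:
        - python3.6

  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: main/
      Handler: app.lambda_handler   # assumed handler
      Runtime: python3.6
      Layers:
        - !Ref DependenciesLayer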




Answer 2:


CodeBuild build environments (specifically when using managed images) are based on Ubuntu images, so dependencies built there might not be compatible when run on Lambda, because Lambda execution environments are based on Amazon Linux: https://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html

You can try fixing this by packaging the dependency in your source bundle and omitting it from your 'requirements.txt' file.

If I'm not mistaken, a similar issue is addressed at Using moviepy, scipy and numpy in amazon lambda.




Answer 3:


The reason your lambda says "Unable to import module" when running in the actual AWS Lambda execution environment is that your lambda deployment package (the one uploaded to S3 by the aws cloudformation package command) is missing the required dependencies specified in your requirements.txt.

A command like aws cloudformation package or sam package works with an AWS::Serverless::Function resource in your CloudFormation template by zipping everything (source code, dependencies, and anything else) in the directory specified by the CodeUri property, uploading the resulting zip file to the S3 bucket, and producing a transformed CloudFormation template in which the S3 path of your deployment package replaces the local path specified in CodeUri.

Looking at your buildspec.yml, I think the problem comes from the -t . option in the pip install -r hello_world/requirements.txt -t . command in the install phase. This installs the dependencies into the current directory (usually the root directory of your project), not into the directory where the source code of the hello_world lambda function resides. As a result, the dependencies do not get zipped together with the source code in the later aws cloudformation package step.

In general, when you create a lambda function deployment package (whether it's SAM-enabled or plain-old Lambda), you need to bundle everything (source code, dependencies, resources, etc.) that your app uses. You usually do that in one of two ways:

  1. Use the sam build command if it's a SAM-enabled CloudFormation template. This command automatically finds your requirements.txt and installs the specified dependencies into the .aws-sam directory in preparation for uploading to S3.

  2. Manually run pip install -r requirements.txt against the directory whose contents get zipped as the lambda deployment package, as sketched below. This works for both SAM-enabled and plain-old Lambda CloudFormation templates.
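For option 2, a minimal buildspec sketch with the -t target corrected (assuming the function code lives in hello_world/, as in the question):

  phases:
    install:
      commands:
        - pip install -r hello_world/requirements.txt -t hello_world/   # install next to the function code, not into the project root
        - pip install -U pytest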




Answer 4:


If your CodeUri is pointing to main/, the contents of this folder will be zipped and uploaded to S3 when running aws cloudformation package, but without the dependencies.

The difference when running sam build (before packaging) is that it installs the dependencies from requirements.txt for you and outputs them to the .aws-sam/build/<functionname> folder.

So, in order to package the dependencies, you need to go into the function folder and install the dependencies there, e.g.:

  • pip install -r requirements.txt -t .
  • then run aws cloudformation package --s3-bucket <YOUR_BUCKET> --template-file <YOUR TEMPLATE YAML> --output-template-file <OUTPUT TEMPLATE NAME YAML>.


Source: https://stackoverflow.com/questions/53952204/aws-lambda-function-can-import-a-module-when-run-locally-but-not-when-deployed
