How to import referenced files in ETL scripts?

Submitted by 被刻印的时光 ゝ on 2021-02-05 07:11:32

Question


I have a script which I'd like to pass a configuration file into. On the Glue jobs page, I see that there is a "Referenced files path" which points to my configuration file. How do I then use that file within my ETL script?

I've tried from configuration import *, where the referenced file name is configuration.py, but no luck (ImportError: No module named configuration).


Answer 1:


I noticed the same issue. I believe there is already a ticket to address it, but here is what AWS support suggests in the meantime.

If you use the referenced files path in a Python shell job, the referenced file is placed in /tmp, which the Python shell job does not have access to by default. The same setup works in a Spark job, because there the file is placed in the default working directory.
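Given that the file is staged in /tmp for Python shell jobs, one workaround is to read it from there directly. A minimal sketch (the file name `sample_config.json` matches the example below; the helper and its default directory are assumptions, not part of the Glue API):

```python
import json
import os

def load_referenced_config(file_name, base_dir="/tmp"):
    """Load a JSON file that Glue staged for a Python shell job.

    Glue copies referenced files into /tmp for Python shell jobs,
    so /tmp is used as the default search directory here.
    """
    path = os.path.join(base_dir, file_name)
    if not os.path.isfile(path):
        raise FileNotFoundError("Referenced file not staged: " + path)
    with open(path) as f:
        return json.load(f)
```

A job would then call `load_referenced_config('sample_config.json')` instead of relying on the import machinery.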

The code below finds the absolute path of sample_config.json, the file referenced in the Glue job configuration, and prints its contents.

import json
import os
import sys

def get_referenced_filepath(file_name, matchFunc=os.path.isfile):
    # Referenced files end up in a directory on sys.path, so search each entry.
    for dir_name in sys.path:
        candidate = os.path.join(dir_name, file_name)
        if matchFunc(candidate):
            return candidate
    raise Exception("Can't find file: {}".format(file_name))

with open(get_referenced_filepath('sample_config.json'), "r") as f:
    data = json.load(f)
    print(data)
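To address the original question about `configuration.py`, the same sys.path lookup can be combined with `importlib` to load a referenced Python file as a module even when a plain `import` fails. A sketch (the module name comes from the question; the helper itself is an assumption):

```python
import importlib.util
import os
import sys

def import_referenced_module(module_name):
    """Locate <module_name>.py on sys.path and import it explicitly.

    Works even when the directory holding the referenced file is not
    picked up by a plain `import` statement.
    """
    file_name = module_name + ".py"
    for dir_name in sys.path:
        candidate = os.path.join(dir_name, file_name)
        if os.path.isfile(candidate):
            spec = importlib.util.spec_from_file_location(module_name, candidate)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
            return module
    raise ImportError("Can't find file: {}".format(file_name))

# config = import_referenced_module("configuration")
```

This returns a normal module object, so attributes defined in the referenced file are available as `config.<name>`.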

The Boto3 API can also be used to read the referenced file directly from S3:

import boto3

s3 = boto3.resource('s3')
obj = s3.Object('sample_bucket', 'sample_config.json')
# iter_lines() is the public streaming interface; avoid the private _raw_stream.
for line in obj.get()['Body'].iter_lines():
    print(line)
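The referenced-files path in the job configuration is a full s3:// URI, while Boto3 wants a separate bucket and key. A small helper can do the split (a sketch; the helper name and the example URI are assumptions):

```python
def split_s3_uri(uri):
    """Split 's3://bucket/key/parts' into (bucket, key) for Boto3 calls."""
    if not uri.startswith("s3://"):
        raise ValueError("Not an s3:// URI: " + uri)
    bucket, _, key = uri[len("s3://"):].partition("/")
    return bucket, key

# Usage with Boto3 (needs AWS credentials, so shown commented out):
# import boto3
# bucket, key = split_s3_uri("s3://sample_bucket/sample_config.json")
# body = boto3.resource("s3").Object(bucket, key).get()["Body"].read()
```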


Source: https://stackoverflow.com/questions/48596627/how-to-import-referenced-files-in-etl-scripts
