AWS CodePipeline adding artifacts to S3 in less useful format than running steps individually

Submitted by 跟風遠走 on 2019-12-19 05:00:14

Question


I've set up a CodePipeline with the end goal of having a core service reside on S3 as a private Maven repo for other pipelines to rely on. When the core service is updated and pushed to AWS CodeCommit, the pipeline should run, test it, build a jar using a Maven Docker image, then push the resulting jar to S3, where it can be accessed by other applications as needed.

Unfortunately, while the CodeBuild project works exactly how I want it to when run standalone, uploading XYZCore.jar to /release on the bucket, the automated pipeline does not. Instead, it uploads to an "XYZCorePipeline" folder, which contains the input and output artifacts of the build, and the output artifact itself is a zip file whose name is a random string of characters. I checked the pipeline and it's invoking the CodeBuild project correctly, but pipeline-triggered builds always output there, while standalone builds of the same project output where I'd like them, letting me take advantage of things like versioning. What's the best way to change the settings so the two builds match?


Answer 1:


Unfortunately CodePipeline does not support this use case.

As a workaround, you could upload the artifact to S3 yourself by invoking the AWS CLI (aws s3 cp ...) from the post_build phase of your buildspec.yml.
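A minimal sketch of such a buildspec.yml, assuming the jar lands in Maven's default target/ directory; the bucket name and file names are placeholders, not taken from the question:

```yaml
version: 0.2

phases:
  build:
    commands:
      - mvn clean package
  post_build:
    commands:
      # Hypothetical bucket and key; replace with your own bucket/layout.
      # This copies the jar to the fixed /release path directly, bypassing
      # CodePipeline's zipped, randomly named output artifact.
      - aws s3 cp target/XYZCore.jar s3://your-maven-bucket/release/XYZCore.jar
```

Note that the CodeBuild project's service role would need s3:PutObject permission on that bucket for the copy to succeed.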



Source: https://stackoverflow.com/questions/43131439/aws-codepipeline-adding-artifacts-to-s3-in-less-useful-format-than-running-steps
