amazon-s3

Angular 4 app on S3 denied access after redirect

懵懂的女人 posted on 2021-02-10 20:18:34

Question: I've built a simple Angular 4 app that uses Firebase for authentication. I use loginWithRedirect because loginWithPopup didn't really work on my phone. The problem I'm running into is that the redirect leaves the page to authenticate, obviously, and then comes back to mysite.com/login — but because it's a SPA, /login doesn't exist in the bucket, I'm guessing. I've added this redirection rule, but it doesn't seem to be doing anything: <RoutingRules> <RoutingRule> <Condition>
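A sketch of one common fix (not necessarily the asker's eventual solution): for an S3 static website hosting a SPA, pointing the error document at index.html makes S3 serve the app shell for any unknown path such as /login, letting the Angular router take over. The bucket name "mysite.com" is assumed from the question; the dict below is the payload you would pass to boto3's `put_bucket_website`.

```python
# Website configuration for SPA routing on S3 static hosting.
# Any missing key (e.g. /login) returns index.html instead of an S3 error
# page, so the client-side router can resolve the route.
website_configuration = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "index.html"},  # serve the SPA shell on 404
}

# s3 = boto3.client("s3")
# s3.put_bucket_website(Bucket="mysite.com",
#                       WebsiteConfiguration=website_configuration)
```

This avoids RoutingRules entirely; RoutingRules redirect with a 301, which changes the visible URL, while the error-document approach keeps /login in the address bar.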

aws cli s3 sync: how to exclude multiple files

為{幸葍}努か posted on 2021-02-10 18:48:55

Question: I have a bash script that deletes all contents from an AWS S3 bucket and then uploads the contents of a local folder to that same bucket. #!/bin/bash # deploy to s3 function deploy(){ aws s3 rm s3://bucketname --profile Administrator --recursive aws s3 sync ./ s3://bucketname --profile Administrator --exclude='node_modules/*' --exclude='.git/*' --exclude='clickCounter.py' --exclude='package-lock.json' --exclude='bundle.js.map' --exclude='package.json' --exclude='webpack_dev_server.js' -
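Repeating `--exclude` once per pattern, as the script already does, is the supported way to exclude multiple files with `aws s3 sync`. A small sketch of how those patterns match, using Python's `fnmatch` (the same shell-style globbing the CLI filters use, where `*` also crosses `/`), can help verify a pattern list locally before running the sync:

```python
import fnmatch

# Patterns copied from the question's sync command.
EXCLUDES = [
    "node_modules/*", ".git/*", "clickCounter.py", "package-lock.json",
    "bundle.js.map", "package.json", "webpack_dev_server.js",
]

def is_excluded(path, patterns=EXCLUDES):
    # True if any --exclude pattern would filter this relative path out.
    return any(fnmatch.fnmatch(path, p) for p in patterns)
```

Note the quoting in the script matters: `--exclude='node_modules/*'` keeps the shell from expanding the glob before the CLI sees it.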

React Docusign Clickwrap Credentials

。_饼干妹妹 posted on 2021-02-10 18:00:56

Question: In my React application, when I'm rendering the DocuSign clickwrap, I need to supply the accountId and the clickwrapId. Is there a secure way to reference the accountId/clickwrapId without actually putting those values in? I don't want to expose those credentials in my React application. function App() { React.useLayoutEffect(() => { docuSignClick.Clickwrap.render({ environment: 'https://demo.docusign.net', accountId: '...', clickwrapId: '...', onMustAgree(agreement) { // Called when no users
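One common pattern (a sketch, not DocuSign's prescribed approach): keep the IDs out of the JavaScript bundle by serving them from a backend endpoint that reads environment variables, and have the React app fetch the config at runtime. The endpoint shape and the environment variable names below are assumptions for illustration.

```python
import os

def clickwrap_config():
    """Hypothetical server-side handler body: returns the clickwrap
    settings so the React bundle never hard-codes the IDs.
    DS_ACCOUNT_ID / DS_CLICKWRAP_ID are assumed env var names."""
    return {
        "environment": "https://demo.docusign.net",
        "accountId": os.environ.get("DS_ACCOUNT_ID", ""),
        "clickwrapId": os.environ.get("DS_CLICKWRAP_ID", ""),
    }
```

Worth noting: whatever the server returns still reaches the browser, so this hides the values from the source bundle and from version control, but anyone who can load the page can read them from the network response.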

How to speed up processing time of AWS Transcribe?

拈花ヽ惹草 posted on 2021-02-10 16:17:27

Question: I have a 6-second audio recording (ar-01.wav) in WAV format. I want to transcribe the audio file to text using Amazon's service. For that purpose I created a bucket named test-voip and uploaded the audio file to the bucket. When I try to convert the speech to text, the 6-second audio takes 13.12 seconds. Here is my code snippet: session = boto3.Session(aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key) transcribe = session.client('transcribe', region_name='us-east
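Batch Transcribe jobs carry fixed queueing and startup overhead, so a 6-second clip taking ~13 seconds is dominated by that overhead rather than by the audio length; for consistently low latency, Amazon Transcribe streaming is the usual alternative. A sketch of the batch job payload assumed from the question (the job name and language code are hypothetical):

```python
# Parameters for transcribe.start_transcription_job(**job_params).
# Completion is then detected by polling get_transcription_job; the
# wall-clock time includes job-startup overhead that does not shrink
# with shorter audio.
job_params = {
    "TranscriptionJobName": "ar-01-job",                 # hypothetical name
    "Media": {"MediaFileUri": "s3://test-voip/ar-01.wav"},
    "MediaFormat": "wav",
    "LanguageCode": "en-US",                             # assumed language
}

# transcribe.start_transcription_job(**job_params)
```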

Access AWS S3 from Lambda within Default VPC

你。 posted on 2021-02-10 15:43:16

Question: I have a Lambda function which needs to access EC2 through SSH, load files, and save them to S3. For that, I have kept both EC2 and Lambda in the default VPC and the same subnet. Now the problem is that I am able to connect the function to EC2 but not to S3. It's been killing me since this morning: when I remove the VPC settings it uploads the files to S3, but then the connection to EC2 is lost. I tried to add a NAT gateway to the default VPC (although I am not sure whether I did it correctly, because I am new to
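A Lambda attached to a VPC loses its direct internet path, which is why S3 stops working while EC2 (inside the VPC) stays reachable. A Gateway VPC endpoint for S3 gives the subnet a private route to S3 without needing a NAT gateway at all. A sketch of the `create_vpc_endpoint` payload — the VPC and route-table IDs are placeholders for the asker's default VPC, and the region in the service name is assumed:

```python
# Payload for ec2.create_vpc_endpoint(**endpoint_params): a Gateway
# endpoint adds an S3 route to the chosen route tables, so Lambdas in
# those subnets can reach S3 privately.
endpoint_params = {
    "VpcEndpointType": "Gateway",
    "VpcId": "vpc-00000000",                      # placeholder default VPC
    "ServiceName": "com.amazonaws.us-east-1.s3",  # assumed region
    "RouteTableIds": ["rtb-00000000"],            # subnet's route table
}

# ec2 = boto3.client("ec2")
# ec2.create_vpc_endpoint(**endpoint_params)
```

The Lambda's execution role still needs s3:PutObject permission on the bucket; the endpoint only fixes the network path.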

AWS: Permanently delete S3 objects in less than 30 days using a 'Lifecycle Rule'

感情迁移 posted on 2021-02-10 14:30:58

Question: Is there a way to configure an S3 Lifecycle rule to delete objects after less than 30 days (say, delete them permanently in 5 days) without moving them to any other storage class like Glacier? Or should I go with an alternative like Lambda? I believe S3 Lifecycle Rules only allow storage-class transitions after more than 30 days. Answer 1: You can use an expiration action: you define when objects expire, and Amazon S3 deletes the expired objects on your behalf. You can set the expiration time to 5 days or 1 day, or whatever suits you. For example
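A sketch of the expiration rule the answer describes, as a `put_bucket_lifecycle_configuration` payload: unlike storage-class transitions, expiration has no 30-day minimum, so `Days` can be 5 (or 1). The rule ID and bucket name are hypothetical.

```python
# Lifecycle configuration that permanently deletes objects 5 days after
# creation, with no intermediate storage-class transition.
lifecycle_configuration = {
    "Rules": [{
        "ID": "expire-after-5-days",   # hypothetical rule name
        "Filter": {"Prefix": ""},      # apply to the whole bucket
        "Status": "Enabled",
        "Expiration": {"Days": 5},
    }]
}

# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-bucket",
#     LifecycleConfiguration=lifecycle_configuration)
```

Note that expiration runs asynchronously: S3 queues the deletes once the objects pass the age threshold, so removal may lag the 5-day mark slightly.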

How to set metadata in S3 using boto?

谁说胖子不能爱 posted on 2021-02-10 12:17:48

Question: I am trying to set metadata while pushing a file to S3. This is how it looks: def pushFileToBucket(fileName, bucket, key_name, metadata): full_key_name = os.path.join(fileName, key_name) k = bucket.new_key(full_key_name) k.set_metadata('my_key', 'value') k.set_contents_from_filename(fileName) For some reason this throws an error at set_metadata saying: boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch<
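SignatureDoesNotMatch on an upload that includes metadata is often caused by metadata keys or values containing characters that break request signing (stray whitespace, newlines, or non-ASCII text), since user-defined metadata is sent as HTTP headers. A hedged sketch of sanitizing metadata before setting it — the helper is hypothetical, not part of boto:

```python
def clean_metadata(metadata):
    """Hypothetical helper: normalize metadata for S3 user-defined
    headers. Keys are lowercased (S3 stores them lowercased anyway)
    and surrounding whitespace is stripped, which removes the usual
    causes of signing mismatches."""
    return {k.lower().strip(): str(v).strip() for k, v in metadata.items()}

# for key, value in clean_metadata(metadata).items():
#     k.set_metadata(key, value)   # must be called before set_contents_*
```

The ordering in the question's code is already correct: `set_metadata` must run before `set_contents_from_filename`, since the metadata is sent with the PUT request.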

S3 object URL is not secure (SSL) when opened in browser

五迷三道 posted on 2021-02-10 09:28:55

Question: I am building a small REST API service to store and retrieve photos. For that, I am using S3 as follows: public String upload(InputStream uploadedInputStream, Map<String, String> metadata, String group, String filename) { TransferManager tm = TransferManagerBuilder.standard() .withS3Client(amazonS3) .build(); ObjectMetadata objectMetadata = new ObjectMetadata(); objectMetadata.setContentType(metadata.get(Configuration.CONTENT_TYPE_METADATA_KEY)); // TODO: 26/06/20 Add content-type to
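A likely cause worth checking (a sketch, since the question is truncated): S3 serves objects over HTTPS at virtual-hosted-style URLs covered by a wildcard TLS certificate, so a bucket name containing dots makes browsers flag the connection as not secure. A small helper illustrating the HTTPS URL shape, assuming a dot-free bucket name:

```python
def https_object_url(bucket, region, key):
    # Virtual-hosted-style HTTPS URL. The certificate is a wildcard for
    # *.s3.<region>.amazonaws.com, so "my.bucket" would not match it and
    # the browser would warn; dot-free bucket names avoid this.
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"
```

If the URLs being handed out start with http://, switching to this https:// form (or to presigned HTTPS URLs for private objects) is the usual fix.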