amazon-ec2

How to serve an AWS EC2 instance from an S3 subdirectory

馋奶兔 submitted on 2021-01-29 20:30:55
Question: I have a website hosted on AWS S3 and served over CloudFront at www.mysite.com. I am hosting a blog on an EC2 instance and would like it served from www.mysite.com/blog; for SEO purposes I do not want it at www.blog.mysite.com. Is it possible to achieve this with only S3 and CloudFront? I have played around with S3 redirects and Lambda@Edge, but the documentation on these is not great, and in the case of Lambda@Edge I want to avoid further complexity if I can. S3 redirects work but the
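The usual way to do this needs neither Lambda@Edge nor S3 redirects: add the EC2 blog as a second (custom) origin on the CloudFront distribution and attach a cache behavior whose path pattern matches /blog*. A minimal sketch of the relevant distribution-config fragment, assuming an origin ID of "blog-ec2-origin" (illustrative name) that points at the EC2 instance:

```json
{
  "CacheBehaviors": {
    "Quantity": 1,
    "Items": [
      {
        "PathPattern": "/blog*",
        "TargetOriginId": "blog-ec2-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        "AllowedMethods": { "Quantity": 2, "Items": ["GET", "HEAD"] }
      }
    ]
  }
}
```

The blog software on the instance must also be configured to serve under the /blog base path, otherwise its internal links will point at the site root.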

Cannot copy large (5 GB) files with awscli 1.5.4

て烟熏妆下的殇ゞ submitted on 2021-01-29 17:45:45
Question: I have a problem with aws-cli. I did a yum update, which updated awscli (among other things), and now awscli fails on large files (e.g. 5.1 GB) with SignatureDoesNotMatch. The exact same command (to the same bucket) works with smaller files, and the big file still works if I use boto from Python. It looks like it copies all parts but two (i.e. it counted up to 743 of 745 parts), and then the error message comes. Looks like a bug in awscli? I could not find anything about it when I googled around, though.
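Since smaller files and boto both work, the multipart-upload path in that awscli version is the likely suspect. Two things worth trying, sketched below (the 64MB value is an arbitrary example, not a recommendation from the question):

```shell
# Upgrade awscli -- 1.5.4 is old, and multipart signing bugs have been fixed since
pip install --upgrade awscli

# Or raise the multipart chunk size so the upload uses fewer, larger parts
aws configure set default.s3.multipart_chunksize 64MB
```

If the error disappears after the upgrade, it was indeed a bug in that release rather than a credentials problem.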

Lost access to EC2 instance

99封情书 submitted on 2021-01-29 15:31:07
Question: I reformatted my MacBook and completely forgot to copy my ~/.ssh directory. I tried ssh'ing into my EC2 instance: $ ssh ec2-user@xx.xxx.xxx.xx -i xxx.pem -v OpenSSH_8.1p1, LibreSSL 2.7.3 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 47: Applying options for * debug1: Connecting to xx.xxx.xxx.xx [xx.xxx.xxx.xx] port 22. debug1: connect to address xx.xxx.xxx.xx port 22: Operation timed out ssh: connect to host xx.xxx.xxx.xx port 22: Operation timed out
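Note that "Operation timed out" is a network-level failure: the instance never answered on port 22 at all, which points at a security group rule or a changed public IP rather than the lost key (a rejected key fails with "Permission denied (publickey)" instead). A sketch of the first check, with a placeholder group ID:

```shell
# Allow SSH from your current public IP (sg-xxxx is a placeholder)
aws ec2 authorize-security-group-ingress \
    --group-id sg-xxxx --protocol tcp --port 22 \
    --cidr "$(curl -s https://checkip.amazonaws.com)/32"
```

If the connection then fails with "Permission denied", the key really is gone, and the standard recovery is to stop the instance, attach its root volume to a rescue instance, and append a new public key to ~/.ssh/authorized_keys on that volume.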

Does ECS task definition support volume mapping syntax?

馋奶兔 submitted on 2021-01-29 13:14:32
Question: The docker-compose spec supports volume mapping syntax under services, for example: version: '2' volumes: jenkins_home: external: true services: jenkins: build: context: . args: DOCKER_GID: ${DOCKER_GID} DOCKER_VERSION: ${DOCKER_VERSION} DOCKER_COMPOSE: ${DOCKER_COMPOSE} volumes: - jenkins_home:/var/jenkins_home - /var/run/docker.sock:/var/run/docker.sock ports: - "8080:8080" Following "AWSTemplateFormatVersion": "2010-09-09", the corresponding ECS task definition has volume syntax that is un-readable
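In a CloudFormation ECS task definition, the compose volumes: shorthand splits into two pieces: a Volumes list on the task definition and MountPoints inside each container definition. A sketch of the equivalent fragment (names and paths mirror the compose file above; Image and other required properties are omitted):

```json
{
  "TaskDefinition": {
    "Type": "AWS::ECS::TaskDefinition",
    "Properties": {
      "Volumes": [
        { "Name": "jenkins_home", "Host": { "SourcePath": "/var/jenkins_home" } },
        { "Name": "docker_sock", "Host": { "SourcePath": "/var/run/docker.sock" } }
      ],
      "ContainerDefinitions": [
        {
          "Name": "jenkins",
          "MountPoints": [
            { "SourceVolume": "jenkins_home", "ContainerPath": "/var/jenkins_home" },
            { "SourceVolume": "docker_sock", "ContainerPath": "/var/run/docker.sock" }
          ],
          "PortMappings": [ { "ContainerPort": 8080, "HostPort": 8080 } ]
        }
      ]
    }
  }
}
```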

Jenkins log getting huge and filling up entire disk space

自作多情 submitted on 2021-01-29 11:34:42
Question: Every week I am surprised by my Jenkins server hitting 100% disk usage, taken up by the Jenkins log. So I remove the file, and my disk gets lots of free space again. [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ df -h Filesystem Size Used Avail Use% Mounted on devtmpfs 987M 60K 987M 1% /dev tmpfs 997M 0 997M 0% /dev/shm /dev/xvda1 32G 32G 0 100% / <============================= DISK FULL [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo rm /var/log/jenkins/jenkins.log <======= REMOVE LOG FILE [ec2-user@ip-xxx-xxx-xxx-xxx ~]$
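Deleting the file by hand works once, but the durable fix is rotation (and, separately, finding out what Jenkins is logging so verbosely). A sketch of an /etc/logrotate.d/jenkins entry; the retention and size values are illustrative, and copytruncate lets Jenkins keep writing to its open file handle:

```
/var/log/jenkins/jenkins.log {
    weekly
    rotate 4
    maxsize 1G
    compress
    missingok
    copytruncate
}
```

One caveat on the manual approach: removing a file that a process still holds open does not free the space until that process closes the handle, which is another reason copytruncate-style rotation beats rm here.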

ssh into AWS Batch jobs

只谈情不闲聊 submitted on 2021-01-29 11:31:36
Question: I would like to communicate with AWS Batch jobs from a local R process, in the same way that Davis Vaughn demonstrated for EC2 at https://gist.github.com/DavisVaughan/865d95cf0101c24df27b37f4047dd2e5. The AWS Batch documentation describes how to set up a key pair and security group for Batch jobs, but I could not find detailed instructions on how to find the IP address of a job's instance or what user name I need. The IP address in particular is not available in the console when I run
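A Batch job's IP can be found by walking from the job to its ECS container instance to the underlying EC2 instance. A sketch of that chain (all IDs and ARNs are placeholders); the default user on the ECS-optimized Amazon Linux AMI that Batch compute environments use is ec2-user:

```shell
# 1. The job's container instance ARN
aws batch describe-jobs --jobs <job-id> \
    --query 'jobs[0].container.containerInstanceArn'

# 2. Map it to an EC2 instance ID
aws ecs describe-container-instances --cluster <ecs-cluster> \
    --container-instances <container-instance-arn> \
    --query 'containerInstances[0].ec2InstanceId'

# 3. Look up the public IP and connect
aws ec2 describe-instances --instance-ids <instance-id> \
    --query 'Reservations[0].Instances[0].PublicIpAddress'
ssh -i key.pem ec2-user@<public-ip>
```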

Can The AWS CLI Copy From S3 To EC2?

雨燕双飞 submitted on 2021-01-29 10:58:37
Question: I'm familiar with running the AWS CLI command to copy from a local folder to S3, or from one S3 bucket to another: aws s3 cp ./someFile.txt s3://bucket/someFile.txt aws s3 cp s3://bucketSource/someFile.txt s3://bucketDestination/someFile.txt But is it possible to copy files from S3 to an EC2 instance when you're not on the EC2 instance? Something like: aws s3 cp s3://bucket/folder/ ec2-user@1.2.3.4:8080/some/folder/ I'm trying to run this from Jenkins, which is why I can't simply run the
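aws s3 cp has no ssh-style remote destination; the copy has to execute on the instance itself. Two common patterns for triggering that from a Jenkins box, sketched with the question's example values (the instance needs an IAM role with read access to the bucket):

```shell
# Run the copy on the instance over SSH
ssh -i key.pem ec2-user@1.2.3.4 \
    'aws s3 cp s3://bucket/folder/ /some/folder/ --recursive'

# Or, without SSH, via SSM Run Command (requires the SSM agent and instance role)
aws ssm send-command \
    --instance-ids <instance-id> \
    --document-name "AWS-RunShellScript" \
    --parameters 'commands=["aws s3 cp s3://bucket/folder/ /some/folder/ --recursive"]'
```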

Cannot connect to an open port on an EC2 instance

三世轮回 submitted on 2021-01-29 10:25:56
Question: I am trying to connect to a Redis server hosted on AWS. I used my private key to ssh into the instance, then installed and ran the server. Now I want to access the server using the public DNS of the instance and port 6379 (on which the server is running). I have added port 6379 to the security group with 0.0.0.0/0 and ::/0, but when I telnet on this port I get: Trying [PUBLIC-DNS]... telnet: connect to address [PUBLIC-DNS]: Connection refused telnet: Unable to connect to remote host Any help
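"Connection refused" (as opposed to a timeout) means the packet reached the host but nothing was listening on that address: by default Redis binds only to 127.0.0.1 and runs in protected mode. A sketch of the redis.conf changes that expose it externally; note that opening Redis to 0.0.0.0/0 without a password is dangerous, so at minimum set requirepass (placeholder below) and narrow the security group to known IPs:

```
# /etc/redis/redis.conf
bind 0.0.0.0
protected-mode no
requirepass <strong-password>
```

Restart the redis service after editing so the new bind address takes effect.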

“Connect timeout on endpoint URL” when running cron job

假如想象 submitted on 2021-01-29 08:54:26
Question: I have set up a crontab file to run a Python script that creates a JSON file and writes it to an S3 bucket. It runs as expected when executed from the command line, but when I run it as a cron job I get the following error: botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL This results from the following lines of code in the script: import boto3 def main(): # Create EC2 client and get EC2 response ec2 = boto3.client('ec2') response = ec2.describe_instances() My guess is
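cron runs with a nearly empty environment, so settings that only exist in the login shell (proxy variables, AWS_DEFAULT_REGION, AWS_PROFILE) are missing when the job fires, and a missing proxy in particular produces exactly this connect timeout. One fix is declaring the environment in the crontab itself; the values below are illustrative assumptions, not from the question:

```
# crontab entry -- region, proxy, and script path are placeholder assumptions
AWS_DEFAULT_REGION=us-east-1
HTTPS_PROXY=http://proxy.example.com:3128
*/15 * * * * /usr/bin/python3 /home/ec2-user/make_json.py
```

Alternatively, make the script self-contained by passing region_name (and any proxy config) explicitly to boto3.client so it does not depend on the caller's environment.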

How to create a custom construct library for aws-cdk in Python

℡╲_俬逩灬. submitted on 2021-01-29 06:06:27
Question: Recently I have been using aws-cdk to create EC2, VPC, and S3 services. But I want to create my own custom EC2 library in Python (not using JSII), which will use aws_cdk's aws_ec2 library to actually create the EC2 instance and a VPC. The custom library will accept arguments like Instance Name (string), InstanceType (string), MachineImage (string), and Subnet Type (string). These arguments will then be used as below. Disclaimer: the code below might not be correct dummy_ec2 = ec2.Instance(self,
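A plain-Python construct (no JSII needed, as long as it is only consumed from Python) is just a Construct subclass that maps the string arguments onto aws_ec2 objects. A sketch, assuming CDK v1-style imports; the class name, region key, and argument handling are illustrative:

```python
# Sketch of a custom construct wrapping aws_ec2 (CDK v1-style imports);
# names and the region key are assumptions, not from the question.
from aws_cdk import core
from aws_cdk import aws_ec2 as ec2

class SimpleEc2(core.Construct):
    def __init__(self, scope, construct_id, *, instance_name: str,
                 instance_type: str, machine_image: str, subnet_type: str):
        super().__init__(scope, construct_id)
        vpc = ec2.Vpc(self, "Vpc")
        self.instance = ec2.Instance(
            self, instance_name,
            # map the plain strings onto CDK objects
            instance_type=ec2.InstanceType(instance_type),
            machine_image=ec2.MachineImage.generic_linux(
                {"us-east-1": machine_image}),  # region key is assumed
            vpc=vpc,
            vpc_subnets=ec2.SubnetSelection(
                subnet_type=ec2.SubnetType[subnet_type.upper()]),
        )
```

A stack would then instantiate it like any built-in construct, e.g. SimpleEc2(self, "Blog", instance_name="blog", instance_type="t3.micro", machine_image="ami-...", subnet_type="public").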