How can I store uploaded images to AWS S3 in PHP?

Submitted by 旧巷老猫 on 2019-12-10 13:48:33

Question


I'm on an EC2 instance and I wish to connect my PHP website with my Amazon S3 bucket, I already saw the API for PHP here: http://aws.amazon.com/sdkforphp/ but it's not clear.

This is the code line I need to edit in my controller:

$thisFu['original_img']='/uploads/fufu/'.$_POST['cat'].'/original_'.uniqid('fu_').'.jpg';

I need to connect to Amazon S3 and be able to change the code like this:

$thisFu['original_img']='my_s3_bucket/uploads/fufu/'.$_POST['cat'].'/original_'.uniqid('fu_').'.jpg';

I have already configured an IAM user for the purpose, but I don't know all the steps needed to accomplish the job.

How could I connect and interact with Amazon S3 to upload and retrieve public images?

UPDATE

I decided to try s3fs as suggested, so I installed it as described here (my OS is Ubuntu 14.04).

I ran this from the console:

sudo apt-get install build-essential git libfuse-dev libcurl4-openssl-dev libxml2-dev mime-support automake libtool
sudo apt-get install pkg-config libssl-dev
git clone https://github.com/s3fs-fuse/s3fs-fuse
cd s3fs-fuse/
./autogen.sh
./configure --prefix=/usr --with-openssl
make
sudo make install

Everything installed properly, but what's next? Where should I declare my credentials, and how can I use this integration in my project?

2nd UPDATE

I created a file called .passwd-s3fs containing a single line with my IAM credentials, in the format accessKeyId:secretAccessKey.

I placed it in my /home/ubuntu directory and gave it 600 permissions with chmod 600 ~/.passwd-s3fs
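The two steps above can be sketched as shell commands (the key pair shown is AWS's documented example placeholder, not a real credential):

```shell
# Create the s3fs credential file in the format accessKeyId:secretAccessKey
# (both values below are AWS's documentation placeholders, not real keys)
echo 'AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' > ~/.passwd-s3fs

# s3fs refuses to use the file unless it is readable only by its owner
chmod 600 ~/.passwd-s3fs
```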

Next, from the console, I ran /usr/bin/s3fs My_S3bucket /uploads/fufu

All my bucket folders now appear inside /uploads/fufu. However, when I try this command:

s3fs -o nonempty allow_other My_S3bucket /uploads/fufu

I get this error message:

s3fs: unable to access MOUNTPOINT My_S3bucket : No such file or directory

3rd UPDATE

As suggested, I ran fusermount -u /uploads/fufu; afterwards I checked the fufu folder and it was empty, as expected. Then I tried this command again (with a second -o):

s3fs -o nonempty -o allow_other My_S3bucket /uploads/fufu

and got this error message:

fusermount: failed to open /etc/fuse.conf: Permission denied
fusermount: option allow_other only allowed if 'user_allow_other' is set in /etc/fuse.conf

Any other suggestion?

4th UPDATE 18/04/15

As suggested, from the console I ran sudo usermod -a -G fuse ubuntu and sudo vim /etc/fuse.conf, where I uncommented mount_max = 1000 and user_allow_other
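The /etc/fuse.conf edit described above can also be done non-interactively; a sketch with sed, assuming the stock Ubuntu file where both options are commented out with a leading #:

```shell
# Uncomment user_allow_other and mount_max in /etc/fuse.conf
sudo sed -i \
    -e 's/^#user_allow_other/user_allow_other/' \
    -e 's/^#mount_max = 1000/mount_max = 1000/' \
    /etc/fuse.conf
```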

Then I ran s3fs -o nonempty -o allow_other My_S3bucket /uploads/fufu

At first sight there were no errors, so I thought everything was fine, but it was exactly the opposite.

I'm a bit frustrated now, because I don't know what happened, but my folder /uploads/fufu is inaccessible; with ls -Al I see only this:

d????????? ? ?        ?              ?            ? fufu

I cannot sudo rm -r (or -rf) or mv it; it says that /uploads/fufu is a directory.

I tried rebooting, exiting, and mount -a, but nothing changed.

I tried to unmount using fusermount, and the error message was fusermount: entry for /uploads/fufu not found in /etc/mtab

But when I opened /etc/mtab with sudo vim, I found this line: s3fs /uploads/fufu fuse.s3fs rw,nosuid,nodev,allow_other 0 0

Could someone tell me how I can unmount and finally remove this folder /uploads/fufu?


Answer 1:


Despite the claim that "S3fs is very reliable in recent builds", I can share my own experience: after periodic, seemingly random system crashes, we moved write operations from direct access to the s3fs-mounted folder to the AWS CLI (the SDK API is another possible way).

You may well have no problems with small files like images, but it certainly caused problems when we tried to write mp4 files. The last log message before a system crash was:

kernel: [ 9180.212990] s3fs[29994]: segfault at 0 ip 000000000042b503 sp 00007f09b4abf530 error 4 in s3fs[400000+52000]

These were rare, random cases, but they made the system unstable.

So we decided to keep s3fs mounted, but to use it only for read access.

Below I show how to mount s3fs with IAM credentials, without a password file:

#!/bin/bash -x
# Set this to the bucket you want to mount (it was left undefined in the original script)
S3_BUCKET=your_bucket_name
S3_MOUNT_DIR=/media/s3
CACHE_DIR=/var/cache/s3cache

# Build and install s3fs 1.74 (note: this Google Code URL is defunct;
# current releases live at https://github.com/s3fs-fuse/s3fs-fuse)
wget http://s3fs.googlecode.com/files/s3fs-1.74.tar.gz
tar xvfz s3fs-1.74.tar.gz
cd s3fs-1.74
./configure
make
make install

mkdir -p $S3_MOUNT_DIR
mkdir -p $CACHE_DIR

chmod 0755 $S3_MOUNT_DIR
chmod 0755 $CACHE_DIR

# The instance metadata service returns the name of the IAM role attached to this instance
export IAMROLE=`curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/`

/usr/local/bin/s3fs $S3_BUCKET $S3_MOUNT_DIR -o iam_role=$IAMROLE,rw,allow_other,use_cache=$CACHE_DIR,uid=222,gid=500

You will also need to create an IAM role assigned to the instance, with a policy like this attached:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3",
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": "*"
    }
  ]
}

In your case it seems reasonable to use the PHP SDK (another answer already has a usage example), but you can also write images to S3 with the AWS CLI:

aws s3 cp /path_to_image/image.jpg s3://your_bucket/path

If you have an IAM role created and assigned to your instance, you won't need to provide any additional credentials.

Update - answer to your question:

  • Don't I need to include the factory method to declare my IAM credentials?

Yes; if you have an IAM role assigned to the EC2 instance, then in your code you just need to create the client like this:

$s3Client = S3Client::factory();
$bucket = 'my_s3_bucket';
$keyname = $_POST['cat'].'/original_'.uniqid('fu_').'.jpg';
$localFilePath = '/local_path/some_image.jpg';

$result = $s3Client->putObject(array(
    'Bucket'      => $bucket,
    'Key'         => $keyname,
    'SourceFile'  => $localFilePath, // was $filePath, which was never defined
    'ACL'         => 'public-read',
    'ContentType' => 'image/jpeg',
));
unlink($localFilePath);

Option 2: if you do not need a local storage stage, but will put the file directly from the upload form:

$s3Client = S3Client::factory();
$bucket = 'my_s3_bucket';
$keyname = $_POST['cat'].'/original_'.uniqid('fu_').'.jpg';
$dataFromFile = file_get_contents($_FILES['uploadedfile']['tmp_name']);

$result = $s3Client->putObject(array(
    'Bucket' => $bucket,
    'Key'    => $keyname,
    'Body'   => $dataFromFile,
    'ACL'    => 'public-read',
));

And to get the S3 link, if the object has public access:

$publicUrl = $s3Client->getObjectUrl($bucket, $keyname);

Or generate a signed URL for private content:

$validTime = '+10 minutes';
$signedUrl = $s3Client->getObjectUrl($bucket, $keyname, $validTime);



Answer 2:


I agree the documentation at that link is a bit hard to dig through and leaves a lot of dots unconnected.

However, I found something a lot better here: http://docs.aws.amazon.com/aws-sdk-php/v2/guide/service-s3.html

It has sample code and instructions for almost all the S3 operations.




Answer 3:


To give you a little more clarity, since you are a beginner: download the AWS SDK via this Installation Guide

Then set up your AWS client on your PHP web server with this bit:

use Aws\S3\S3Client;

$client = S3Client::factory(array(
    'profile' => '<profile in your aws credentials file>',
));

If you would like more information on how to use AWS credentials files, head here.
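For reference, the shared credentials file lives at ~/.aws/credentials; a minimal sketch of creating one from the shell (the profile name is a placeholder, and the key values are AWS's documentation examples):

```shell
# Write a named profile to the shared AWS credentials file
# (my_profile and both key values are placeholders)
mkdir -p ~/.aws
cat > ~/.aws/credentials <<'EOF'
[my_profile]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF
```

The profile name in the square brackets is what you pass to S3Client::factory.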

Then to upload a file that you have on your own PHP server:

$result = $client->putObject(array(
    'Bucket'     => $bucket,
    'Key'        => 'data_from_file.txt',
    'SourceFile' => $pathToFile,
    'Metadata'   => array(
        'Foo' => 'abc',
        'Baz' => '123',
    ),
));

If you are interested in learning how to upload images to a PHP script, I would recommend looking at this W3Schools tutorial. It can help you get off the ground by saving the file in a temporary directory on your own server before it is uploaded to your S3 bucket.




Answer 4:


A much easier setup, and one transparent to your application, is simply to mount the S3 bucket with s3fs:

https://github.com/s3fs-fuse/s3fs-fuse

(Use the allow_other option.) The s3fs mount then behaves like a normal folder: just move the file into that folder, and s3fs uploads it.
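For example, assuming the bucket is mounted at /uploads/fufu (the path from the question) and PHP has saved the upload to a temporary file, storing the image is a plain move (both paths here are hypothetical):

```shell
# With the bucket mounted via s3fs, an ordinary mv is all it takes;
# s3fs turns the write into an S3 upload behind the scenes
mv /tmp/php_upload_tmp.jpg /uploads/fufu/original_fu_12345.jpg
```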

S3fs is very reliable in recent builds

You can read images this way too, but you lose any benefit of the AWS CDN; though the last time I tried, it wasn't a huge difference.

You need an s3fs password file in the format accessKeyId:secretAccessKey.

It can be in any of these places:

  • a file given with the passwd_file command-line option
  • the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables
  • a .passwd-s3fs file in your home directory
  • the system-wide /etc/passwd-s3fs file

The file needs 600 permissions

https://github.com/s3fs-fuse/s3fs-fuse/wiki/Fuse-Over-Amazon

Has some info.

When that is complete, the command is s3fs bucket_name mnt_dir

You can find the keys here:

https://console.aws.amazon.com/iam/home?#security_credential

From the example above I would assume your mnt_dir would be /uploads/fufu, so:

s3fs bucket /uploads/fufu

On to your second problem:

s3fs -o nonempty allow_other My_S3bucket /uploads/fufu

is wrong; you need to specify -o again:

s3fs -o nonempty -o allow_other My_S3bucket /uploads/fufu

The user you are mounting as needs to be in the fuse group:

sudo usermod -a -G fuse your_user

or sudo addgroup your_user fuse



Source: https://stackoverflow.com/questions/29457382/how-could-i-store-uploaded-images-to-aws-s3-on-php
