Having trouble displaying an image uploaded to Amazon S3 by Fine Uploader

Submitted on 2019-12-23 20:21:34

Question


I am now trying to set up fineuploader-s3 to show an image of the file successfully uploaded on the aws server, as was done on the example page here: http://fineuploader.com/#s3-demo

I am (still) using the code at https://github.com/Widen/fine-uploader-server/blob/master/php/s3/s3demo.php, and I’ve added

uploadSuccess: {
        endpoint: "s3demo.php?success"
    }

to the fine-uploader instance in my javascript file, so that the temporary link should be generated by the function in the s3demo.php file.

I realized that I had to install the AWS SDK to get this to work. The zip method of installation really didn't work, so I am using the phar. I changed that section of the s3demo.php file to:

require 'aws.phar';
use Aws\S3\S3Client;

I also uncommented these two lines:

$serverPublicKey = $_SERVER['PARAM1'];
$serverPrivateKey = $_SERVER['PARAM2'];

I am having two problems getting this to work. The first is that something is going wrong with my success response from AWS, from which I think I'm supposed to be getting the link to the file.

The file uploads perfectly, but I get an error in the Console:

[FineUploader 3.8.0] Sending POST request for 0 s3.jquery.fineuploader-3.8.0.js:164
[FineUploader 3.8.0] Received the following response body to an AWS upload success request for id 0: <br />
<b>Fatal error</b>:  Uncaught exception 'Guzzle\Http\Exception\CurlException' with message '[curl] 28: Connection timed out after 1001 milliseconds [url] http://169.254.169.254/latest/meta-data/iam/security-credentials/' in phar:///MYSITE/aws.phar/Guzzle/Http/Curl/CurlMulti.php:339
Stack trace:
#0 phar:///MYSITE//aws.phar/Guzzle/Http/Curl/CurlMulti.php(280): Guzzle\Http\Curl\CurlMulti->isCurlException(Object(Guzzle\Http\Message\Request), Object(Guzzle\Http\Curl\CurlHandle), Array)
#1 phar:///MYSITE//aws.phar/Guzzle/Http/Curl/CurlMulti.php(245): Guzzle\Http\Curl\CurlMulti->processResponse(Object(Guzzle\Http\Message\Request), Object(Guzzle\Http\Curl\CurlHandle), Array)
#2 phar:///MYSITE//aws.phar/Guzzle/Http/Curl/CurlMulti.php(228): Guzzle\Http\Curl\CurlMulti->processMessages()
#3 phar:///MYSITE//aws.phar/Guzzle/Http/Curl/CurlMulti.php(212): Guzzle\Http\Curl\CurlMulti->executeHandles()
#4 phar:///MYSITE/z/aw in <b>phar:///home/nextq2/public_html/lenz/aws.phar/Aws/Common/InstanceMetadata/InstanceMetadataClient.php</b> on line <b>82</b><br />
 s3.jquery.fineuploader-3.8.0.js:164
[FineUploader 3.8.0] Upload success was acknowledged by the server. s3.jquery.fineuploader-3.8.0.js:164

Does this mean there is something wrong with my AWS SDK installation, or with my permissions settings on Amazon, i.e. the CORS and IAM settings? Those are still as follows:

<CORSRule>
    <AllowedOrigin>MY WEBSITE</AllowedOrigin>
    <AllowedMethod>POST</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
</CORSRule>

My group policy on IAM:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::MY_BUCKET/*"
  }]
}

The second issue, which I'm sure I should be able to figure out but can't, is how to access the JSON array generated by s3demo.php in my JavaScript so I can display the uploaded image. I guess it's not $templink[0]. I was wondering if it would be possible to see the example code that gives the view button on http://fineuploader.com/#s3-demo its function. If I should make that a second question here, I'm happy to do so.

Thanks very much for your time.

EDIT to add my full code as requested:

PHP:

<?php
/**
 * PHP Server-Side Example for Fine Uploader S3.
 * Maintained by Widen Enterprises.
 *
 * Note: This is the exact server-side code used by the S3 example
 * on fineuploader.com.
 *
 * This example:
 *  - handles both CORS and non-CORS environments
 *  - handles delete file requests for both DELETE and POST methods
 *  - Performs basic inspections on the policy documents and REST headers before signing them
 *  - Ensures again the file size does not exceed the max (after file is in S3)
 *  - signs policy documents (simple uploads) and REST requests
 *    (chunked/multipart uploads)
 *
 * Requirements:
 *  - PHP 5.3 or newer
 *  - Amazon PHP SDK (only if utilizing the AWS SDK for deleting files or otherwise examining them)
 *
 * If you need to install the AWS SDK, see http://docs.aws.amazon.com/aws-sdk-php-2/guide/latest/installation.html.
 */

// You can remove these two lines if you are not using Fine Uploader's
// delete file feature

require 'aws/aws-autoloader.php';
use Aws\S3\S3Client;


// These assume you have the associated AWS keys stored in
// the associated system environment variables
$clientPrivateKey = 'I put my private key here';
// These two keys are only needed if the delete file feature is enabled
// or if you are, for example, confirming the file size in a successEndpoint
// handler via S3's SDK, as we are doing in this example.
$serverPublicKey = $_SERVER['PARAM1'];
$serverPrivateKey = $_SERVER['PARAM2'];

$expectedMaxSize = 15000000; 
$expectedBucket = 'my bucket name here';

$method = getRequestMethod();

// This first conditional will only ever evaluate to true in a
// CORS environment
if ($method == 'OPTIONS') {
    handlePreflight();
}
// This second conditional will only ever evaluate to true if
// the delete file feature is enabled
else if ($method == "DELETE") {
  //  handlePreflightedRequest(); // only needed in a CORS environment
    deleteObject();
}
// This is all you really need if not using the delete file feature
// and not working in a CORS environment
else if ($method == 'POST') {
    handlePreflightedRequest();

    // Assumes the successEndpoint has a parameter of "success" associated with it,
    // to allow the server to differentiate between a successEndpoint request
    // and other POST requests (all requests are sent to the same endpoint in this example).
    // This condition is not needed if you don't require a callback on upload success.
    if (isset($_REQUEST["success"])) {
        verifyFileInS3();
    }
    else {
        signRequest();
    }
}

// This will retrieve the "intended" request method.  Normally, this is the
// actual method of the request.  Sometimes, though, the intended request method
// must be hidden in the parameters of the request.  For example, when attempting to
// send a DELETE request in a cross-origin environment in IE9 or older, it is not
// possible to send a DELETE request.  So, we send a POST with the intended method,
// DELETE, in a "_method" parameter.
function getRequestMethod() {
    global $HTTP_RAW_POST_DATA;

    // This should only evaluate to true if the Content-Type is undefined
    // or unrecognized, such as when XDomainRequest has been used to
    // send the request.
    if(isset($HTTP_RAW_POST_DATA)) {
        parse_str($HTTP_RAW_POST_DATA, $_POST);
    }

    if (isset($_POST['_method'])) {
        return $_POST['_method'];
    }

    return $_SERVER['REQUEST_METHOD'];
}

// Only needed in cross-origin setups
function handlePreflightedRequest() {
    // If you are relying on CORS, you will need to adjust the allowed domain here.
    //header('Access-Control-Allow-Origin: http://nextquestion.org');
}

// Only needed in cross-origin setups
function handlePreflight() {
    handlePreflightedRequest();
    header('Access-Control-Allow-Methods: POST');
    header('Access-Control-Allow-Headers: Content-Type');
}

function getS3Client() {
    global $serverPublicKey, $serverPrivateKey;

    return S3Client::factory(array(
        'key' => $serverPublicKey,
        'secret' => $serverPrivateKey
    ));
}

// Only needed if the delete file feature is enabled
function deleteObject() {
    getS3Client()->deleteObject(array(
        'Bucket' => $_POST['bucket'],
        'Key' => $_POST['key']
    ));
}

function signRequest() {
    header('Content-Type: application/json');

    $responseBody = file_get_contents('php://input');
    $contentAsObject = json_decode($responseBody, true);
    $jsonContent = json_encode($contentAsObject);

    $headersStr = $contentAsObject["headers"];
    if ($headersStr) {
        signRestRequest($headersStr);
    }
    else {
        signPolicy($jsonContent);
    }
}

function signRestRequest($headersStr) {
    if (isValidRestRequest($headersStr)) {
        $response = array('signature' => sign($headersStr));
        echo json_encode($response);
    }
    else {
        echo json_encode(array("invalid" => true));
    }
}

function isValidRestRequest($headersStr) {
    global $expectedBucket;

    $pattern = "/\/$expectedBucket\/.+$/";
    preg_match($pattern, $headersStr, $matches);

    return count($matches) > 0;
}

function signPolicy($policyStr) {
    $policyObj = json_decode($policyStr, true);

    if (isPolicyValid($policyObj)) {
        $encodedPolicy = base64_encode($policyStr);
        $response = array('policy' => $encodedPolicy, 'signature' => sign($encodedPolicy));
        echo json_encode($response);
    }
    else {
        echo json_encode(array("invalid" => true));
    }
}

function isPolicyValid($policy) {
    global $expectedMaxSize, $expectedBucket;

    $conditions = $policy["conditions"];
    $bucket = null;
    $parsedMaxSize = null;

    for ($i = 0; $i < count($conditions); ++$i) {
        $condition = $conditions[$i];

        if (isset($condition["bucket"])) {
            $bucket = $condition["bucket"];
        }
        else if (isset($condition[0]) && $condition[0] == "content-length-range") {
            $parsedMaxSize = $condition[2];
        }
    }

    return $bucket == $expectedBucket && $parsedMaxSize == (string)$expectedMaxSize;
}

function sign($stringToSign) {
    global $clientPrivateKey;

    return base64_encode(hash_hmac(
            'sha1',
            $stringToSign,
            $clientPrivateKey,
            true
        ));
}

// This is not needed if you don't require a callback on upload success.
function verifyFileInS3() {
    global $expectedMaxSize;

    $bucket = $_POST["bucket"];
    $key = $_POST["key"];

    // If utilizing CORS, we return a 200 response with the error message in the body
    // to ensure Fine Uploader can parse the error message in IE9 and IE8,
    // since XDomainRequest is used on those browsers for CORS requests.  XDomainRequest
    // does not allow access to the response body for non-success responses.
    if (getObjectSize($bucket, $key) > $expectedMaxSize) {
        // You can safely uncomment this next line if you are not depending on CORS
        //header("HTTP/1.0 500 Internal Server Error");
        deleteObject();
        echo json_encode(array("error" => "File is too big!"));
    }
    else {
        echo json_encode(array("tempLink" => getTempLink($bucket, $key)));
    }
}
function testfunction(){
    error_log('whatever'); // alert() is JavaScript; error_log() is the PHP analogue
}
// Provide a time-bombed public link to the file.
function getTempLink($bucket, $key) {
    $client = getS3Client();
    $url = "{$bucket}/{$key}";
    $request = $client->get($url);

    return $client->createPresignedUrl($request, '+15 minutes');
}

function getObjectSize($bucket, $key) {
    $objInfo = getS3Client()->headObject(array(
            'Bucket' => $bucket,
            'Key' => $key
        ));
    return $objInfo['ContentLength'];
}
?>

My HTML. I used another example that Mark had on Stack Overflow for this test, because eventually I want to submit some other data simultaneously:

<!DOCTYPE html>
<html>
<head>

  <title>test of fine uploader</title>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1">


  <link href="fineuploader-3.8.0.css" rel="stylesheet">
  <style>
  .button {
      display: block;
      height: 30px;
      width: 100px;
      border: 1px solid #000;
  }
  </style>
 <script src="http://code.jquery.com/jquery-1.9.1.min.js"></script>
<script src="s3.jquery.fineuploader-3.8.0.js"></script>
<script type="text/javascript" src="lenz_javascript4.js"></script>

</head> 
<body> 

<!-- Generated Image Thumbnail -->
<a href="#" id="thumbnail">view image</a>

<form action="fineuploadertest.php" method="post" id="uploader">
<input type="text" name="textbox" value="Test data">
    <div id="manual-fine-uploader"></div>
    <div id="triggerUpload" class="button" style="margin-top: 10px;">click here
    </div>
</form>

</body>
</html>

My javascript:

$(document).ready(function() {

    $("#triggerUpload").click(function () {
        $("#manual-fine-uploader").fineUploaderS3('uploadStoredFiles'); 
    });

    function submitForm () { 
        if ($(this).fineUploader('getInProgress') == 0) {
            var failedUploads = $(this).fineUploaderS3('getUploads', 
                { status: qq.status.UPLOAD_FAILED });
            if (failedUploads.length == 0) {    
                // do any other form processing here
                $("#uploader").submit();
            }
        }
    };


    $("#manual-fine-uploader").fineUploaderS3({
        autoUpload: false,
        debug: true,
        request: {
            endpoint: "http://my bucket name.s3.amazonaws.com",
            accessKey: "I put my access key here"
        },
        validation: {
            allowedExtensions: ['jpeg', 'jpg', 'gif', 'png'],
            sizeLimit: 15000000,
            itemLimit: 3
        },
        signature: {
            endpoint: "s3demo.php"
        },
        camera: {
            ios: true
        },
        iframeSupport: {
            localBlankPagePath: "/success.html"
        },
        uploadSuccess: {
            endpoint: "s3demo.php?success"
        }
    });
});

Answer 1:


It sounds like you simply want to mirror the behavior of the S3 demo on FineUploader.com. So, the piece you are apparently having trouble with is the part of the demo that allows you to view/download the file you have uploaded. My guess is that you are not setting the PARAM1 and PARAM2 environment variables. You should really have a look at how the $_SERVER super global works in the PHP documentation. This code, as it stands, expects you to have a system environment variable named PARAM1 that holds the public AWS key associated with the IAM user you should have created for your server (not your client) to use. The PARAM2 system environment variable should be set to the secret key for this same user. You can either set these environment variables, or set the associated $serverPublicKey and $serverPrivateKey PHP globals to the server-side IAM user's public and secret keys, respectively.
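For instance, a hypothetical sketch with placeholder values: export the two variables in the environment your web server inherits, so that `$_SERVER['PARAM1']` and `$_SERVER['PARAM2']` resolve. (This would also explain the timeout to `http://169.254.169.254/...` in your stack trace: with no keys supplied, the AWS SDK falls back to querying the EC2 instance-metadata service for credentials, which only works on an actual EC2 instance.)

```shell
# Placeholder values -- substitute your server-side IAM user's keys.
# Note: $_SERVER only picks up environment variables if PHP's
# variables_order setting includes "E"; alternatively, use Apache's
# SetEnv directive or simply hard-code $serverPublicKey/$serverPrivateKey.
export PARAM1="SERVER_PUBLIC_KEY_PLACEHOLDER"
export PARAM2="SERVER_SECRET_KEY_PLACEHOLDER"
```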

Note that the poor name choice for the system environment variables associated with the server's AWS public and secret key (PARAM1 and PARAM2) is due to the fact that the server for the fineuploader.com S3 demo is running on an AWS EC2 instance created by Amazon's Elastic Beanstalk service. Elastic Beanstalk does not provide (at least it doesn't obviously provide) a way to name system environment variables via the Elastic Beanstalk UI for PHP apps. It names them PARAM1, PARAM2, etc.

The $serverPublicKey and $serverPrivateKey variables should NOT be the same keys associated with the IAM user you created for client-side tasks. You should have created a different IAM user with permissions appropriate for server-side tasks. For example, if you want to support the delete file feature, you would want to have an IAM user with the "S3:DeleteObject" permission. This user should be restricted to server-side tasks only for security reasons.
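A server-side IAM policy along these lines is one possible sketch (MY_BUCKET is a placeholder; trim any action you don't use — s3:GetObject covers the signed-download and size-check paths, s3:DeleteObject covers the delete feature):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:DeleteObject"],
    "Resource": "arn:aws:s3:::MY_BUCKET/*"
  }]
}
```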

In your case, your server-side IAM user must have "S3:GetObject" permission on your bucket; this permission is required in order to get objects from your bucket. The safest approach is to give this permission only to your server-side IAM user. You're probably asking: "If my client-side user can't read objects from my bucket, how do I allow the file to be downloaded client-side?" Well, one option is to set the acl option in Fine Uploader to "public-read" and then construct a URL client-side using this convention: "http://mybucket.s3.amazonaws.com/objectkey". This is NOT the way the S3 demo on fineuploader.com works. Read on for details...
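If you did go the public-read route, building the link client-side is plain string assembly. A sketch (the bucket name and key are placeholders):

```javascript
// Build a public object URL for a bucket whose objects were uploaded
// with acl "public-read". Each path segment of the key is URI-encoded,
// so keys containing "/" keep their path structure.
function publicS3Url(bucket, key) {
    var encodedKey = key.split("/").map(encodeURIComponent).join("/");
    return "http://" + bucket + ".s3.amazonaws.com/" + encodedKey;
}

// publicS3Url("mybucket", "photos/my cat.png")
// → "http://mybucket.s3.amazonaws.com/photos/my%20cat.png"
```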

I didn't want to give users unlimited access to files they uploaded into Fine Uploader's S3 bucket, so I left the acl at "private" (the default value), gave my server-side IAM user only "S3:GetObject" permission on the Fine Uploader S3 bucket, and have the server return a "time-bombed" signed URL to the associated object in the bucket. The URL returned by the server contains an expiration parameter that only allows it to be used for 15 minutes. Any attempt to change that expiration parameter in the query string will invalidate the signature, and the request will fail. The getTempLink function in the PHP example will generate a time-bombed signed URL that is returned in the response to Fine Uploader's uploadSuccess.endpoint POST request. You can access this value by contributing a complete event handler. The responseJSON object parameter passed into your callback will contain a tempLink property holding the signed URL. You can then generate an anchor with its href attribute set to the value of this property.
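Against the question's own markup, the wiring could look like the sketch below (assumptions: the #thumbnail and #manual-fine-uploader IDs from the HTML above, and a success endpoint that returns {"tempLink": ...} as s3demo.php does):

```javascript
// Pull the signed URL out of the uploadSuccess response, or null if absent.
function extractTempLink(responseJSON) {
    return (responseJSON && responseJSON.tempLink) ? responseJSON.tempLink : null;
}

// jQuery wiring, placed alongside the existing fineUploaderS3 options:
// $("#manual-fine-uploader").fineUploaderS3({
//     /* ...existing options... */
// }).on("complete", function (event, id, name, responseJSON) {
//     var link = extractTempLink(responseJSON);
//     if (link) {
//         $("#thumbnail").attr("href", link); // the "view image" anchor
//     }
// });
```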



Source: https://stackoverflow.com/questions/18369235/having-trouble-displaying-an-image-uploaded-to-amazon-s3-by-fine-uploader
