I generated a pre-signed URL similar to this for my users to retrieve image files from my AWS S3 bucket:
Sometimes a user may refresh the page, and the URL to the same resource gets a new set of values for Expires and Signature.
The browser will treat these two URLs as two different objects and will try to download the resource from the S3 bucket again.
This causes a performance issue. Is it possible to make the browser aware that, despite the difference in the query-string part of the URL, the user is requesting the same resource, so it can serve it from its local cache?
I had the same issue and solved it by caching the images' pre-signed URLs.
When you request a pre-signed URL from AWS, it returns a link with an expiry and a signature that points to the source file in your bucket. Each time the page is refreshed, the server sends a request to AWS and gets back a new expiry and signature that the browser knows nothing about, so it fetches the same image again.
The way to tell the browser not to fetch the image again is to send back the same signature when the page is refreshed. It sounds easy, but it can get messy, so bear with me.
The solution I built uses a Redis cache layer that maps the images loaded by a browser to a serial.
When userA loads the page, check whether it has been visited before by the same user, browser, and MAC address, using a serial that you store in the browser's local storage.
Serial structure = (browser type & name + userID + MAC address). This ensures it still works if userA logs in from different devices and browsers at the same time, and that you generate the same serial every time userA loads the page/images from that specific browser and device.
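The serial above could be built like this. A minimal sketch: the function name, the separator, and the hashing step are my own choices, and since a browser cannot normally read the MAC address, the server would have to supply it or substitute another device identifier.

```python
import hashlib

def make_serial(browser, user_id, mac_address):
    """Combine browser type/name, user ID, and MAC address into one
    deterministic key, then hash it so the serial is a fixed-length,
    opaque value for local storage and Redis."""
    raw = f"{browser}|{user_id}|{mac_address}"
    return hashlib.sha256(raw.encode()).hexdigest()

# Same user + browser + device always yields the same serial;
# changing any part yields a different one.
serial_chrome = make_serial("Chrome", "userA", "00:1B:44:11:3A:B7")
serial_firefox = make_serial("Firefox", "userA", "00:1B:44:11:3A:B7")
```

Hashing is optional; the point is only that the serial is deterministic per (user, browser, device).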
If userA has no serial saved in local storage, add it to local storage, then check whether the serial exists in Redis (the user may have logged in before and cleared the browser cache). If it does, remove the attached object, which holds the pre-signed links for your images as parameters:
"serial" :
{
imageID_1 : "https://s3.bucket.xxx/imageID_1.jpg?.......xyz1",
imageID_2 : "https://s3.bucket.xxx/imageID_2.jpg?.......xyz2",
imageID_3 : "https://s3.bucket.xxx/imageID_3.jpg?.......xyz3"
}
Then request a new pre-signed URL for each imageID/key that has to be loaded on the page and attach them to the serial object's parameters in Redis. If the serial does not exist in Redis, simply add both the serial and the object parameters:
"serial" :
{
imageID_1 : "https://s3.bucket.xxx/imageID_1.jpg?.......abc1",
imageID_2 : "https://s3.bucket.xxx/imageID_2.jpg?.......abc2",
imageID_3 : "https://s3.bucket.xxx/imageID_3.jpg?.......abc3"
}
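The no-serial-in-local-storage path above can be sketched as follows. A plain dict stands in for Redis, and the helper names and URL format are illustrative, not the exact production code; the real signing call would be something like boto3's generate_presigned_url.

```python
# Dict as a stand-in for Redis: serial -> {imageID: presigned URL}.
redis_store = {}

def new_presigned_url(image_id):
    # Placeholder for the real AWS pre-signed URL call; the query
    # string here is made up.
    return f"https://s3.bucket.xxx/{image_id}.jpg?sig={image_id}-fresh"

def urls_for_new_browser(serial, image_ids):
    """The browser had no serial in local storage. If the serial is
    still in Redis, the browser must have cleared its cache, so the
    old URLs are useless: drop them and sign every image again."""
    if serial in redis_store:
        del redis_store[serial]  # stale mapping from a cleared browser cache
    redis_store[serial] = {i: new_presigned_url(i) for i in image_ids}
    return redis_store[serial]
```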
If the browser does have a serial cached in local storage, you also have to check, on each request, whether each imageID exists in the object parameters linked to the serial. If it does not, create a pre-signed URL for it and add it as a new parameter to the object:
"serial" :
{
imageID_1 : "https://s3.bucket.xxx/imageID_1.jpg?.......abc1",
imageID_2 : "https://s3.bucket.xxx/imageID_2.jpg?.......abc2",
imageID_3 : "https://s3.bucket.xxx/imageID_3.jpg?.......abc3",
imageID_4 : "https://s3.bucket.xxx/imageID_4.jpg?.......abc4"
}
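The known-serial path can be sketched the same way. Again a dict stands in for Redis and the names are illustrative; the key point is that cached imageIDs keep their original signed URL (so the browser's cache stays valid) and only unseen imageIDs get a fresh signature.

```python
# Dict as a stand-in for Redis, pre-seeded with one cached URL.
redis_store = {
    "serial-1": {"imageID_1": "https://s3.bucket.xxx/imageID_1.jpg?sig=abc1"}
}

def new_presigned_url(image_id):
    # Placeholder for the real AWS pre-signed URL call.
    return f"https://s3.bucket.xxx/{image_id}.jpg?sig={image_id}-fresh"

def urls_for_known_serial(serial, image_ids):
    """The browser sent a serial it already had: reuse what is cached
    (identical signature -> browser cache hit) and only sign imageIDs
    that are not yet in the object."""
    cached = redis_store.setdefault(serial, {})
    for image_id in image_ids:
        if image_id not in cached:
            cached[image_id] = new_presigned_url(image_id)
    return cached
```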
This covers the case where the user already has a serial saved in the browser's local storage, meaning some images were loaded before, so we have to check whether userA is requesting the same images as before or new ones.
Finally, you have to handle images that fail to load or are forbidden from being accessed by the browser for any reason (front-end code should detect this and report back to the server). In that case, remove the object attached to the serial, attach the new pre-signed URLs, and send them back to the front-end (browser).
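That failure path can be sketched as a full invalidation of the serial's object, assuming the same dict-for-Redis stand-in and illustrative helper names as before:

```python
# Dict as a stand-in for Redis, holding a URL the front-end reported
# as failed (e.g. expired signature or 403 Forbidden).
redis_store = {
    "serial-1": {"imageID_1": "https://s3.bucket.xxx/imageID_1.jpg?sig=expired"}
}

def new_presigned_url(image_id):
    # Placeholder for the real AWS pre-signed URL call.
    return f"https://s3.bucket.xxx/{image_id}.jpg?sig={image_id}-fresh"

def handle_load_failure(serial, image_ids):
    """Front-end reported failed images: drop the serial's cached
    object, re-sign every requested image, and return the new URLs."""
    redis_store.pop(serial, None)
    redis_store[serial] = {i: new_presigned_url(i) for i in image_ids}
    return redis_store[serial]
```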
Source: https://stackoverflow.com/questions/53205295/how-to-make-browser-cache-identical-image-with-different-aws-s3-presigned-url