The Facebook Crawler is hitting my servers multiple times every second and it seems to be ignoring both the Expires header and the og:ttl property.
In some cases, it is
Facebook's documentation specifically states: "Images are cached based on the URL and won't be updated unless the URL changes." In other words, no matter which headers or meta tags you add to your page, the crawler is supposed to cache the image anyway.
This made me think:
I'd monitor the server logs and see exactly what is being requested: if the page URL or the image URL differs even slightly between requests (for example, a changing query string), the caching mechanism won't kick in. Fortunately, this doesn't look like a headers/tags type of issue.
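To check this from the logs, you can filter for the crawler's user agent (`facebookexternalhit`) and count how many *distinct* URL variants it requested. This is a minimal sketch assuming a combined-format access log; the sample lines below are hypothetical, and in practice you'd read them from your real log file:

```python
import re
from collections import Counter

# Hypothetical sample of combined-format access log lines;
# replace with the contents of your actual access log.
log_lines = [
    '66.220.149.1 - - [10/May/2024:12:00:01 +0000] "GET /img/photo.jpg HTTP/1.1" 200 5120 "-" "facebookexternalhit/1.1"',
    '66.220.149.2 - - [10/May/2024:12:00:02 +0000] "GET /img/photo.jpg?v=2 HTTP/1.1" 200 5120 "-" "facebookexternalhit/1.1"',
    '66.220.149.1 - - [10/May/2024:12:00:03 +0000] "GET /img/photo.jpg HTTP/1.1" 200 5120 "-" "facebookexternalhit/1.1"',
    '10.0.0.5 - - [10/May/2024:12:00:04 +0000] "GET /img/photo.jpg HTTP/1.1" 200 5120 "-" "Mozilla/5.0"',
]

request_re = re.compile(r'"GET (\S+) HTTP')

# Count each distinct URL (query string included) fetched by the crawler.
crawler_hits = Counter(
    request_re.search(line).group(1)
    for line in log_lines
    if "facebookexternalhit" in line
)

for url, count in crawler_hits.items():
    print(url, count)
```

If the same base path shows up under several query-string variants, each variant is a separate cache entry from Facebook's point of view, which would explain the repeated fetches.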