As an intern in an economic research team, I was given the task to find a way to automatically collect specific data on a real estate ad website, using R.
That's quite a big question, so you need to break it down into smaller ones, and see which bits you get stuck on.
Is the problem with retrieving the web page? (Watch out for proxy-server issues.) Or is the tricky bit extracting the useful data from it? (You'll probably need to use XPath for that.)
Take a look at the web-scraping example on Rosetta Code and browse these SO questions for more information.
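To make the two steps concrete, here is a minimal sketch in R using the `rvest` package (one common choice; the older `XML` package works too). The URL and the XPath expression are placeholders — you will need to inspect the real ad pages in your browser to find the right node names and classes.

```r
# Minimal web-scraping sketch with rvest.
# The URL and the XPath below are hypothetical placeholders.
library(rvest)

url  <- "https://example.com/listings"   # hypothetical listing page
page <- read_html(url)                   # step 1: retrieve and parse the page

# Step 2: extract the text of every node matching an XPath expression,
# e.g. the price shown in each ad (the class name is made up here)
prices <- page |>
  html_elements(xpath = "//span[@class='price']") |>
  html_text(trim = TRUE)

head(prices)
```

If you are behind a corporate proxy, you may need to configure it before `read_html()` will work — `httr::set_config(httr::use_proxy("proxy.host", 8080))` is one way, with your actual proxy host and port substituted in.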