I would like to crawl a website. The problem is that it is full of JavaScript elements, such as buttons, which do not change the URL when pressed; instead the page content is updated dynamically. How can I get at the content that only appears after those scripts run?
WWW::Mechanize::Firefox might be of use. That way you can have Firefox handle the complex JavaScript and then extract the resulting HTML.
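A minimal sketch of that approach, assuming a running Firefox with the MozRepl extension enabled; the URL and the button's XPath are hypothetical placeholders:

```perl
use strict;
use warnings;
use WWW::Mechanize::Firefox;

# Connects to a running Firefox instance via MozRepl.
my $mech = WWW::Mechanize::Firefox->new();

# Hypothetical URL for illustration.
$mech->get('http://example.com/app');

# Click a JavaScript-driven button; the URL stays the same,
# but Firefox executes the page's scripts for us.
$mech->click({ xpath => '//button[@id="load-more"]' });

# Extract the HTML after the scripts have run.
my $html = $mech->content;
print $html;
```

The module mirrors the plain WWW::Mechanize API, so existing scraping code ports over with few changes.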
I would suggest HtmlUnit and its Perl wrapper, WWW::HtmlUnit.
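Roughly how that looks, assuming a Java runtime is installed (HtmlUnit is a Java headless browser); the URL and element id are made up for the example:

```perl
use strict;
use warnings;
use WWW::HtmlUnit;  # wraps the Java HtmlUnit headless browser

my $webclient = WWW::HtmlUnit->new;

# Hypothetical URL for illustration.
my $page = $webclient->getPage('http://example.com/app');

# Clicking an element returns the page as it looks after the
# JavaScript click handlers have run.
my $button = $page->getElementById('load-more');
$page = $button->click if $button;

# Dump the resulting DOM.
print $page->asXml;
```

The upside over driving a real Firefox is that everything runs headlessly, which suits crawlers on servers without a display.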
Another option might be Selenium with the WWW::Selenium module.
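A sketch of the Selenium route, assuming a Selenium RC server is already listening on localhost:4444; the URLs and locator are hypothetical:

```perl
use strict;
use warnings;
use WWW::Selenium;

my $sel = WWW::Selenium->new(
    host        => 'localhost',
    port        => 4444,
    browser     => '*firefox',
    browser_url => 'http://example.com/',  # hypothetical base URL
);

$sel->start;
$sel->open('/app');

# Click a JavaScript-driven button in the real browser.
$sel->click('id=load-more');

# Grab the rendered HTML after the scripts have run.
my $html = $sel->get_html_source;
print $html;

$sel->stop;
```

Heavier to set up than the other options, but you get a real browser's JavaScript engine, which handles pages the pure-Perl approaches choke on.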
The WWW::Scripter module has a JavaScript plugin that may be useful. I can't say I've used it myself, however.
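For completeness, a sketch of what that would look like per the module's documentation; untested, and it assumes WWW::Scripter::Plugin::JavaScript plus a JS back end (such as JE) are installed, with a made-up URL:

```perl
use strict;
use warnings;
use WWW::Scripter;

my $w = WWW::Scripter->new;

# Enable in-process JavaScript execution (requires the
# WWW::Scripter::Plugin::JavaScript distribution).
$w->use_plugin('JavaScript');

# Hypothetical URL for illustration.
$w->get('http://example.com/app');

# Scripts on the page run in-process; inspect the resulting DOM.
print $w->content;
```

This keeps everything inside Perl with no external browser, though its JavaScript support is more limited than a real browser's.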