Here is a Python web crawler, which should make a good starting point. Your general strategy is this:
- you need to take care that outbound links are never followed, including links on the same domain but higher up than your starting point.
- as you spider the site, collect a hash of page URLs mapped to a list of all the internal URLs found on each page (a rough crawler sketch follows this list).
- take a pass over this hash, assigning a short token to each unique URL.
- use your hash of {token => [tokens]} to generate a Graphviz file that will lay out the graph for you (see the DOT-generation sketch below).
- convert the Graphviz output into an imagemap where each node links to its corresponding webpage (the last sketch below shows one way to do this).
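
A minimal crawler sketch for the first two steps, using only the standard library. The `START` URL, the `LinkParser` helper, and the `crawl`/`in_scope` names are all placeholders of mine, not part of any existing crawler; the scope check simply requires every link to sit at or below the starting path:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urldefrag, urljoin
from urllib.request import urlopen

START = "http://example.com/docs/"   # placeholder starting point

class LinkParser(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def in_scope(url):
    # outbound links, and links above the starting point, are never followed
    return url.startswith(START)

def crawl(start):
    pages = {}                 # {page_url: [internal_urls_on_that_page]}
    queue = deque([start])
    while queue:
        url = queue.popleft()
        if url in pages:
            continue
        try:
            html = urlopen(url).read().decode("utf-8", "replace")
        except OSError:
            continue
        parser = LinkParser()
        parser.feed(html)
        internal = []
        for href in parser.links:
            absolute, _ = urldefrag(urljoin(url, href))   # resolve, drop #fragment
            if in_scope(absolute):
                internal.append(absolute)
                queue.append(absolute)
        pages[url] = internal
    return pages
```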
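
And a sketch of the token and Graphviz steps: each unique URL gets a short node id, the node carries a `URL` attribute (which Graphviz propagates into the imagemap), and the edges come straight from the {page => [links]} hash. The `to_dot` name, graph name, and label choices are just one way to do it:

```python
def to_dot(pages):
    # assign a short token (n0, n1, ...) to every unique URL seen anywhere
    tokens = {}
    for url, links in pages.items():
        for u in [url] + links:
            if u not in tokens:
                tokens[u] = "n%d" % len(tokens)

    lines = ["digraph sitemap {", "  node [shape=box, fontsize=10];"]
    for u, token in tokens.items():
        # the URL attribute is what ends up in the imagemap areas
        lines.append('  %s [label="%s", URL="%s"];' % (token, u, u))
    for url, links in pages.items():
        for link in sorted(set(links)):
            lines.append("  %s -> %s;" % (tokens[url], tokens[link]))
    lines.append("}")
    return "\n".join(lines)

# usage, together with the crawl() sketch above:
#   open("sitemap.dot", "w").write(to_dot(crawl(START)))
```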
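
For the imagemap step, one option is to run dot twice, once for the bitmap and once for the client-side map (`-Tcmapx`), then paste both into a page; the filenames here are arbitrary and this assumes the DOT file from the previous sketch:

```python
import subprocess

# assumes the DOT source from the previous sketch was written to sitemap.dot
subprocess.run(["dot", "-Tpng", "-o", "sitemap.png", "sitemap.dot"], check=True)
subprocess.run(["dot", "-Tcmapx", "-o", "sitemap.map", "sitemap.dot"], check=True)

# sitemap.map now holds a <map name="sitemap"> whose <area> tags carry each
# node's URL; embed it in a page next to <img src="sitemap.png" usemap="#sitemap">
```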
The reason you need to do all this is, as leonm noted, that websites are graphs, not trees, and laying out graphs is a harder problem than you can solve with a simple piece of JavaScript and CSS. Graphviz is good at what it does.