You're going to need to measure the times for your situation, because the answer will depend upon:
Server-rendered HTML:
- The amount of time necessary on the server to format the data as HTML, under low and high loads.
- The amount of time necessary to move formatted HTML to the client, under low and high loads.
- The amount of time necessary to redraw your page with the formatted HTML on the client, for slow and fast clients and browsers.
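For the first of those server-side numbers, nothing fancier than a timer around the formatting step is needed. Here's a rough sketch assuming a Node.js backend; `renderRows` and the fake dataset are stand-ins for whatever templating code and data you actually have (on another stack, wrap the equivalent timer around your formatter):

```
// Timing sketch for the server-side formatting step (Node.js assumed).
// `renderRows` is a hypothetical stand-in for your real templating code.
import { performance } from "node:perf_hooks";

interface Row { id: number; name: string; price: number; }

function renderRows(rows: Row[]): string {
  // Stand-in formatter: one table row per record.
  const body = rows
    .map(r => `<tr><td>${r.id}</td><td>${r.name}</td><td>${r.price}</td></tr>`)
    .join("");
  return `<table>${body}</table>`;
}

// Fake dataset; replace with real data pulled from your data store.
const rows: Row[] = Array.from({ length: 10_000 }, (_, i) => ({
  id: i,
  name: `item-${i}`,
  price: i * 1.25,
}));

const t0 = performance.now();
const html = renderRows(rows);
const t1 = performance.now();

console.log(
  `Formatted ${rows.length} rows into ${Buffer.byteLength(html)} bytes of HTML ` +
  `in ${(t1 - t0).toFixed(2)} ms`
);
```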
Client-rendered HTML:
- The amount of time necessary on the server to format the data as JSON, under low and high loads.
- The amount of time necessary to move the JSON data to the client, under low and high loads.
- The amount of time necessary on the client to render HTML from the JSON data, for slow and fast clients and browsers.
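On the client side, `performance.now()` in the browser covers both the transfer and the rendering steps in one go. A rough sketch follows; the `/api/rows` endpoint and the row shape are assumptions, so substitute whatever your server actually returns:

```
// Timing sketch for the client-side path; run this in the browser.
// The /api/rows endpoint and the Row shape are assumptions.
interface Row { id: number; name: string; price: number; }

async function measureClientRender(): Promise<void> {
  const tFetchStart = performance.now();
  const response = await fetch("/api/rows");
  const rows: Row[] = await response.json();
  const tFetchEnd = performance.now();

  // Build the table off-screen, then attach it in one operation.
  const table = document.createElement("table");
  for (const r of rows) {
    const tr = document.createElement("tr");
    tr.innerHTML = `<td>${r.id}</td><td>${r.name}</td><td>${r.price}</td>`;
    table.appendChild(tr);
  }
  document.body.appendChild(table);
  const tRenderEnd = performance.now();

  console.log(
    `fetch + JSON parse: ${(tFetchEnd - tFetchStart).toFixed(1)} ms, ` +
    `DOM build: ${(tRenderEnd - tFetchEnd).toFixed(1)} ms`
  );
}

measureClientRender();
```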
This is a case where an hour in the lab running tests before coding could save you from having to redo everything later.
[Added]
Each of the three measurements in each list (server time, transfer time, client time) is going to require a different set of tools to capture the data. I would pick three representative datasets (smallest, average, largest) and, for each one, take every measurement listed above. Note that you don't need to (and in fact shouldn't) use your full application -- you really just want the smallest chunk of code that exercises the step you're timing. Then I'd look at the differences between server-rendered and client-rendered, and decide which (if either) matters more for my application.
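As a concrete example of that "smallest chunk of code" idea, a throwaway harness along these lines is all you need; the dataset sizes and the formatting step are placeholders for your real data and the real code under test:

```
// Tiny harness that times one formatting step over three dataset sizes.
// Sizes and the body of the timed callback are placeholders.
import { performance } from "node:perf_hooks";

function timeIt(label: string, runs: number, fn: () => void): void {
  const samples: number[] = [];
  for (let i = 0; i < runs; i++) {
    const t0 = performance.now();
    fn();
    samples.push(performance.now() - t0);
  }
  samples.sort((a, b) => a - b);
  const median = samples[Math.floor(samples.length / 2)];
  console.log(`${label}: median ${median.toFixed(2)} ms over ${runs} runs`);
}

const sizes = { smallest: 100, average: 10_000, largest: 250_000 };

for (const [name, count] of Object.entries(sizes)) {
  const data = Array.from({ length: count }, (_, i) => ({ id: i, value: Math.random() }));
  timeIt(`format ${name} (${count} rows)`, 20, () => {
    // Replace this with the real formatting step you want to compare.
    JSON.stringify(data);
  });
}
```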
You're NOT going to be able to measure every possible combination, but if you choose the slowest browser on the slowest PC you can lay your hands on (e.g., a cheap netbook) and use the slowest possible internet connection (you've still got an AOL dial-up account for testing, right?), that will tend to show you the worst case, which is what you really care about.
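If a genuine dial-up line isn't available, you can approximate the thin-pipe half of that worst case with a local proxy that drips responses out at roughly 56 kbit/s. It's only an approximation (no modem latency, no packet loss), and the port numbers below are assumptions:

```
// Crude bandwidth throttle: proxies localhost:3000 (your dev server, assumed)
// on localhost:8080 and writes the response out at roughly dial-up speed.
import * as http from "node:http";

const BYTES_PER_SECOND = 56_000 / 8; // ~7 KB/s, roughly a 56k modem
const CHUNK = 512;

http.createServer((req, res) => {
  http.get({ host: "localhost", port: 3000, path: req.url }, upstream => {
    res.writeHead(upstream.statusCode ?? 200, {
      "content-type": upstream.headers["content-type"] ?? "text/html",
    });
    const chunks: Buffer[] = [];
    upstream.on("data", c => chunks.push(c));
    upstream.on("end", () => {
      const body = Buffer.concat(chunks);
      let offset = 0;
      // Write one small chunk per tick so the total rate stays near the target.
      const interval = setInterval(() => {
        res.write(body.subarray(offset, offset + CHUNK));
        offset += CHUNK;
        if (offset >= body.length) {
          clearInterval(interval);
          res.end();
        }
      }, (CHUNK / BYTES_PER_SECOND) * 1000);
    });
  });
}).listen(8080, () => console.log("Throttled proxy on http://localhost:8080"));
```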