At my company, we have a legacy database with various tables and therefore many, many fields.
A lot of the fields seem to have large limits (e.g., NVARCHAR(MAX)).
There are two parts to this question:
Does using NVARCHAR over VARCHAR hurt performance? Yes. Storing data in Unicode fields doubles the storage requirements: the data in those fields is 2x the size it needs to be (at least until SQL Server 2008 R2, which introduced Unicode compression). Your table scans will take twice as long, and only half as much data can be held in memory in the buffer cache.
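You can see the doubling for yourself with DATALENGTH. This is just an illustrative T-SQL snippet (the variable names are made up); the same string costs twice the bytes in an NVARCHAR:

```sql
-- Same 5-character string, declared both ways.
DECLARE @v VARCHAR(100)  = 'hello';
DECLARE @n NVARCHAR(100) = N'hello';

SELECT DATALENGTH(@v) AS varchar_bytes,   -- 5 bytes (1 byte per character)
       DATALENGTH(@n) AS nvarchar_bytes;  -- 10 bytes (2 bytes per character)
```

Multiply that across millions of rows and wide columns and the buffer cache impact adds up quickly.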
Does using MAX hurt performance? Not directly, but when a table has VARCHAR(MAX), NVARCHAR(MAX), or similar fields and you need to index it, you won't be able to rebuild those indexes online in SQL Server 2005/2008/R2. (Denali brings some improvements for tables with MAX fields, so some of those indexes can be rebuilt online.)
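As a sketch of what that restriction looks like in practice (the table name here is hypothetical), an online rebuild of an index whose underlying table contains a LOB column fails on 2005/2008/R2, forcing you into an offline rebuild:

```sql
-- Hypothetical table: the NVARCHAR(MAX) column is what blocks ONLINE = ON
-- on SQL Server 2005/2008/R2.
CREATE TABLE dbo.Demo
(
    Id    INT IDENTITY PRIMARY KEY,
    Notes NVARCHAR(MAX) NULL
);

-- Fails on those versions with an error saying the online operation cannot
-- be performed because the index contains a large-object column; you have
-- to fall back to ONLINE = OFF, which takes locks for the duration.
ALTER INDEX ALL ON dbo.Demo REBUILD WITH (ONLINE = ON);
```

On a busy table, losing the online-rebuild option is the real cost of those MAX fields.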
There's one more indirect cost: the query optimizer guesses how many rows fit on a page based on the declared column widths. If you have a lot of varchar fields declared much larger than necessary, SQL Server can internally guess the wrong number of rows and size its plans and memory accordingly.
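A quick way to picture this (both table names here are made up for illustration): two tables holding identical data, differing only in declared width. For operations like sorts, SQL Server sizes its memory grant from the declared width, not from the data actually stored, so the over-wide table reserves far more memory for the same rows:

```sql
-- Two hypothetical tables holding identical data; only declared widths differ.
CREATE TABLE dbo.Tight (Name VARCHAR(50)   NOT NULL);
CREATE TABLE dbo.Wide  (Name VARCHAR(8000) NOT NULL);

-- The sort's memory grant is estimated from the declared column width,
-- so the second query requests a much larger grant for the same data,
-- starving other queries or spilling unnecessarily.
SELECT Name FROM dbo.Tight ORDER BY Name;
SELECT Name FROM dbo.Wide  ORDER BY Name;
```

That's why "just make everything MAX to be safe" hurts even when the stored strings are short.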