As part of a bulk load of data from an external source, the staging table is defined with varchar(max) columns. The idea is that each column will be able to hold whatever is thrown at it.
The storage overhead is the same for varchar(n) and varchar(max): the storage size is the actual length of the data entered plus 2 bytes.
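To see this for yourself, you can compare the stored byte counts with DATALENGTH. This is just a sketch — the table and column names here are made up for the demo:

```sql
-- Demo table with an n-limited column and a max column (names are made up).
CREATE TABLE dbo.StagingDemo (
    ColN   varchar(100),
    ColMax varchar(max)
);

INSERT INTO dbo.StagingDemo (ColN, ColMax)
VALUES ('hello', 'hello');

-- DATALENGTH returns the number of bytes actually stored.
-- Both columns report 5 bytes, regardless of the declared maximum.
SELECT DATALENGTH(ColN)   AS BytesN,
       DATALENGTH(ColMax) AS BytesMax
FROM dbo.StagingDemo;
```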
MSDN Reference
Check out this similar SO question:
https://stackoverflow.com/questions/166371/varcharmax-versus-varcharn-in-ms-sql-server (Are there any disadvantages to always using nvarchar(MAX)?)
As far as I know, the overhead you are probably thinking of (storing the data out of row, the way a TEXT or IMAGE value is stored in SQL Server) only applies once the data exceeds 8000 bytes. So there shouldn't be a problem using varchar(max) with smaller columns for ETL processes.
VARCHAR(MAX) column values will be stored IN the table row, space permitting. So if you have a single VARCHAR(MAX) field and its value is 200 or 300 bytes, chances are it'll be stored inline with the rest of your data. No problem or additional overhead here.
Only when the entire data for a single row no longer fits on a single SQL Server page (8K) will SQL Server move the VARCHAR(MAX) data into overflow pages.
So all in all, I think you get the best of both worlds - inline storage when possible, overflow storage when necessary.
Marc
PS: As Mitch points out, this default behaviour can be turned off - I don't see any compelling reasons to do so, however....
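For completeness, the switch Mitch is referring to is the `large value types out of row` table option, which forces (max)-typed values off-row even when they would fit in the page. A sketch, reusing a hypothetical table name:

```sql
-- Force varchar(max)/varbinary(max)/nvarchar(max) values to be stored
-- off-row for this table (the table name is a made-up example).
EXEC sp_tableoption
    @TableNamePattern = 'dbo.StagingDemo',
    @OptionName       = 'large value types out of row',
    @OptionValue      = 1;   -- 1 = always off-row; 0 = in-row when they fit (default)
```

Setting it back to 0 restores the default in-row-when-possible behaviour described above.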
Well, I want to say there shouldn't be that big an overhead, because I don't think SQL Server pre-allocates a fixed amount of space for an nvarchar column; it only allocates what is needed for what is inserted. But I don't have anything to prove or back up that idea.
If you use a varchar(max) or varbinary(max) column in MSSQL2005, SSIS creates a temporary file for each such column in your record. This can hurt your performance and become a big problem. MS claims they solved this issue in MSSQL2008.