My fellow programmer has a strange requirement from his team leader: he insists on creating varchar columns with a length of 16*2^n (i.e. 16, 32, 64, and so on).

What is the reason for this?
It is a completely pointless restriction as far as I can see. Assuming the standard FixedVar format (as opposed to the formats used with row/page compression or sparse columns), and assuming you are talking about varchar(1-8000) columns:
All varchar data is stored at the end of the row in a variable-length section (or in off-row pages if it can't fit in row). The amount of space it consumes in that section (and whether or not it ends up off row) is entirely dependent on the length of the actual data, not the column declaration.
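A quick way to see this for yourself (a sketch using a table variable; `DATALENGTH` reports the bytes actually stored for a value):

```sql
-- Two columns with very different declared lengths, holding the same data.
DECLARE @t TABLE (
    a varchar(32)   NOT NULL,
    b varchar(8000) NOT NULL
);

INSERT INTO @t (a, b) VALUES ('hello', 'hello');

-- Both report 5 bytes: storage follows the data, not the declaration.
SELECT DATALENGTH(a) AS bytes_a,
       DATALENGTH(b) AS bytes_b
FROM @t;
```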
SQL Server will, however, use the length declared in the column definition when allocating memory (e.g. for sort operations). The assumption it makes in that instance is that varchar columns will be filled to 50% of their declared size on average, so this might be a better thing to consider when choosing a size.
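To illustrate the memory-grant effect (a hypothetical example; `dbo.SomeTable` and its columns are made up, and you would compare the "Memory Grant" property in the actual execution plans):

```sql
-- Assume both columns contain identical, short strings.
-- Sorting the varchar(8000) column requests a much larger memory grant,
-- because the optimizer budgets ~4000 bytes per value (50% of the
-- declared length), regardless of the real data size.
SELECT wide_col   FROM dbo.SomeTable ORDER BY wide_col;    -- wide_col   varchar(8000)
SELECT narrow_col FROM dbo.SomeTable ORDER BY narrow_col;  -- narrow_col varchar(100)
```

An oversized declaration can therefore waste workspace memory (or cause spills under memory pressure) even though it costs nothing extra on disk.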