Is there any advantage to using int vs. varbinary for storing bit masks, in terms of performance or flexibility?
For my purposes, I will always be doing reads on these bit masks.
I usually agree with @hainstech's answer of using bit fields, because you can explicitly name each bit field to indicate what it should store. However, I haven't seen a practical approach to doing bitmask comparisons with bit fields. With SQL Server's bitwise operators (&, |, etc.) it's easy to find out whether a set of flags is set in an int mask; doing the same with equality operators against a large number of bit columns is a lot more work, especially when the set of flags to test is built dynamically.
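For example, here's a minimal sketch of that kind of comparison, assuming a hypothetical dbo.Permissions table with an int column named Flags (the table, column, and flag values below are illustrative, not from the question):

```sql
-- Illustrative flag values, each a distinct power of two:
--   1 = CanRead, 2 = CanWrite, 4 = CanDelete
DECLARE @CanRead  int = 1;
DECLARE @CanWrite int = 2;

-- Rows where BOTH the read and write flags are set:
SELECT UserId, Flags
FROM   dbo.Permissions
WHERE  Flags & (@CanRead | @CanWrite) = (@CanRead | @CanWrite);

-- Rows where ANY of the two flags is set:
SELECT UserId, Flags
FROM   dbo.Permissions
WHERE  Flags & (@CanRead | @CanWrite) <> 0;
```

The mask to test can be assembled at runtime by OR-ing flag values together, whereas the bit-column equivalent would require generating a different WHERE clause (e.g. `CanRead = 1 AND CanWrite = 1`) for each combination.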