Question
I have a decision to make for a database with up to 20,000+ student profiles. Each student profile has more than 120 properties.
In many enterprise projects, such as the DotNetNuke CMS, the approach used is two tables. One is UserProfile:
[ProfileID],[UserID],[PropertyDefinitionID],[PropertyValue]
and the other is ProfilePropertyDefinition:
[PropertyDefinitionID],[DataType] ,[PropertyCategory],[PropertyName]
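For illustration, here is a minimal T-SQL sketch of that two-table (entity-attribute-value) design; the column types, sizes, and constraints are my own assumptions for the example, not copied from the actual DotNetNuke schema:

-- One row per property definition (on the order of 120 rows in total).
CREATE TABLE ProfilePropertyDefinition (
    PropertyDefinitionID INT IDENTITY PRIMARY KEY,
    DataType             NVARCHAR(50)  NOT NULL,  -- e.g. 'Text', 'Date', 'Integer'
    PropertyCategory     NVARCHAR(50)  NOT NULL,
    PropertyName         NVARCHAR(100) NOT NULL
);

-- One row per (user, property) pair that actually has a value.
CREATE TABLE UserProfile (
    ProfileID            INT IDENTITY PRIMARY KEY,
    UserID               INT NOT NULL,
    PropertyDefinitionID INT NOT NULL
        REFERENCES ProfilePropertyDefinition (PropertyDefinitionID),
    PropertyValue        NVARCHAR(3750) NULL
);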
If I use this method I will have 20,000 x 120 = 2.4 million rows!
I know this is the correct way to have a normalized database, and I know database servers were created to handle millions of records. But I am wondering: why don't we create one profile table with 120 columns plus enough backup columns for extensibility? If performance is important, does that do more harm than good?
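For comparison, a sketch of the single wide-table alternative being asked about (the property column names are made-up examples; a real table would spell out all 120 properties plus the spare columns):

-- One row per student (about 20,000 rows), every property stored as its own column.
CREATE TABLE StudentProfile (
    UserID     INT PRIMARY KEY,
    FirstName  NVARCHAR(100) NULL,
    LastName   NVARCHAR(100) NULL,
    BirthDate  DATE          NULL,
    -- ... roughly 116 more named property columns ...
    Backup1    NVARCHAR(255) NULL,  -- "backup" columns reserved for future properties
    Backup2    NVARCHAR(255) NULL,
    Backup3    NVARCHAR(255) NULL
);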
Answer 1:
why don't we create one profile table with 120 columns plus enough backup columns for extensibility?
Your other (two-table) design is more extensible: you're not limited to 120 properties.
As far as performance goes, you'd have to test the two database designs, but you're talking about a UserProfile table with potentially 2.4 million 100-byte rows, as opposed to 20 thousand 50-kilobyte rows.
Unless you're reading all the rows all the time, shorter rows generally perform better.
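To make the access-pattern point concrete, here is roughly how a single-property lookup differs between the two designs (the query shapes are illustrative and assume the example tables sketched in the question):

-- EAV design: fetch one property for one student; with an index on
-- (UserID, PropertyDefinitionID) this touches one short UserProfile row.
SELECT up.PropertyValue
FROM   UserProfile up
JOIN   ProfilePropertyDefinition pd
       ON pd.PropertyDefinitionID = up.PropertyDefinitionID
WHERE  up.UserID = 42
  AND  pd.PropertyName = 'BirthDate';

-- Wide-table design: the engine still reads the page holding the whole wide row,
-- even though only one of the 120 columns is needed.
SELECT BirthDate
FROM   StudentProfile
WHERE  UserID = 42;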
Source: https://stackoverflow.com/questions/12937829/database-engineering-many-rows-or-extra-backup-columns-for-a-scenario-like-exte