What are the [dis]advantages of using a key/value table over nullable columns or separate tables?

Submitted by 笑着哭i on 2019-12-17 17:29:00

Question


I'm upgrading a payment management system I created a while ago. It currently has one table for each payment type it can accept. It is limited to only being able to pay for one thing, which this upgrade is to alleviate. I've been asking for suggestions as to how I should design it, and I have these basic ideas to work from:

  1. Have one table for each payment type, with a few common columns on each. (current design)
  2. Coordinate all payments with a central table that takes on the common columns (unifying payment IDs regardless of type), and identifies another table and row ID that has columns specialized to that payment type.
  3. Have one table for all payment types, and null the columns which are not used for any given type.
  4. Use the central table idea, but store specialized columns in a key/value table.

My goals for this are: not ridiculously slow, self-documenting as much as possible, and maximizing flexibility while maintaining the other goals.

I don't like 1 very much because of the duplicate columns in each table. It reflects the payment type classes inheriting a base class that provides functionality for all payment types... ORM in reverse?

I'm leaning toward 2 the most, because it's just as "type safe" and self-documenting as the current design. But, as with 1, to add a new payment type, I need to add a new table.

I don't like 3 because of its "wasted space", and it's not immediately clear which columns are used for which payment types. Documentation can alleviate the pain of this somewhat, but my company's internal tools do not have an effective method for storing/finding technical documentation.

The argument I was given for 4 was that it would alleviate needing to change the database when adding a new payment method, but it suffers even worse than 3 does from the lack of explicitness. Currently, changing the database isn't a problem, but it could become a logistical nightmare if we decide to start letting customers keep their own database down the road.

So, of course I have my biases. Does anyone have any better ideas? Which design do you think fits best? What criteria should I base my decision on?


Answer 1:


Perhaps you should look at this question.

The accepted answer from Bill Karwin goes into specific arguments against the key/value table, usually known as Entity-Attribute-Value (EAV):

.. Although many people seem to favor EAV, I don't. It seems like the most flexible solution, and therefore the best. However, keep in mind the adage TANSTAAFL. Here are some of the disadvantages of EAV:

  • No way to make a column mandatory (equivalent of NOT NULL).
  • No way to use SQL data types to validate entries.
  • No way to ensure that attribute names are spelled consistently.
  • No way to put a foreign key on the values of any given attribute, e.g. for a lookup table.
  • Fetching results in a conventional tabular layout is complex and expensive, because to get attributes from multiple rows you need to do a JOIN for each attribute (see the sketch after this list).
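
To make the key/value shape and that join-per-attribute cost concrete, here is a minimal sketch in generic SQL. The table and column names (PaymentAttribute, CardNumber, ExpiryMonth) are assumptions for illustration only, and a central Payment table holding the common columns is assumed to exist:

    -- The key/value (EAV) table: no per-attribute data types, no per-attribute
    -- NOT NULL, and no foreign key on the values themselves.
    CREATE TABLE PaymentAttribute (
        PaymentId      INT          NOT NULL,
        AttributeName  VARCHAR(50)  NOT NULL,     -- e.g. 'CardNumber', 'ExpiryMonth'
        AttributeValue VARCHAR(255) NULL,         -- every value is an untyped string
        PRIMARY KEY (PaymentId, AttributeName)
    );

    -- Rebuilding one conventional row: one join per attribute.
    SELECT  p.PaymentId,
            card.AttributeValue AS CardNumber,
            expm.AttributeValue AS ExpiryMonth
    FROM    Payment p
    LEFT JOIN PaymentAttribute card
           ON card.PaymentId = p.PaymentId AND card.AttributeName = 'CardNumber'
    LEFT JOIN PaymentAttribute expm
           ON expm.PaymentId = p.PaymentId AND expm.AttributeName = 'ExpiryMonth';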

The degree of flexibility EAV gives you requires sacrifices in other areas, probably making your code as complex as (or worse than) it would have been to solve the original problem in a more conventional way.

And in most cases, it's unnecessary to have that degree of flexibility. In the OP's question about product types, it's much simpler to create a table per product type for product-specific attributes, so you have some consistent structure enforced at least for entries of the same product type.

I'd use EAV only if every row must be permitted to potentially have a distinct set of attributes. When you have a finite set of product types, EAV is overkill. Class Table Inheritance would be my first choice.




Answer 2:


Note
This subject is being discussed, and this thread is referenced in other threads, so I have given it a reasonable treatment; please bear with me. My intention is to provide understanding, so that you can make informed decisions rather than simplistic ones based merely on labels. If you find it intense, read it in chunks, at your leisure; come back when you are hungry, and not before.

What, Exactly, about EAV, is "Bad" ?

1 Introduction

There is a difference between EAV done properly, and done badly, just as there is a difference between 3NF done properly and done badly. In our technical work, we need to be precise about exactly what works, and what does not; about what performs well, and what doesn't. Blanket statements are dangerous, misinform people, and thus hinder progress and universal understanding of the issues concerned.

I am not for or against anything, except poor implementations by unskilled workers, and misrepresenting the level of compliance to standards. And where I see misunderstanding, as here, I will attempt to address it.

Normalisation is also often misunderstood, so a word on that. Wiki and other free sources actually post completely nonsensical "definitions" that have no academic basis and that carry vendor biases, so as to validate their non-standard-compliant products. Codd published his Twelve Rules. I implement a minimum of 5NF, which is more than enough for most requirements, so I will use that as a baseline. Simply put, assuming Third Normal Form is understood by the reader (at least that definition is not confused) ...

2 Fifth Normal Form

2.1 Definition

Fifth Normal Form is defined as:

  • every column has a 1::1 relation with the Primary Key, only
  • and to no other column, in the table, or in any other table
  • the result is no duplicated columns, anywhere; No Update Anomalies (no need for triggers or complex code to ensure that, when a column is updated, its duplicates are updated correctly).
  • it improves performance because (a) it affects fewer rows and (b) it improves concurrency due to reduced locking (a small example follows this list)
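
A tiny illustration, in terms of the definition above (hypothetical Customer/Invoice tables, not from the question): the duplicated column relates to something other than its own table's Primary Key, and removing it eliminates the Update Anomaly:

    -- Violates the definition: CustomerName repeats on every Invoice row, so it
    -- relates to CustomerId rather than to the Invoice PK alone; renaming a
    -- customer means updating many Invoice rows (an Update Anomaly).
    CREATE TABLE Invoice_Unnormalised (
        InvoiceId    INT           NOT NULL PRIMARY KEY,
        CustomerId   INT           NOT NULL,
        CustomerName VARCHAR(60)   NOT NULL,   -- duplicated fact
        Amount       DECIMAL(12,2) NOT NULL
    );

    -- Satisfies it: each non-key column depends on its own table's PK, only.
    CREATE TABLE Customer (
        CustomerId   INT         NOT NULL PRIMARY KEY,
        CustomerName VARCHAR(60) NOT NULL
    );
    CREATE TABLE Invoice (
        InvoiceId  INT           NOT NULL PRIMARY KEY,
        CustomerId INT           NOT NULL REFERENCES Customer (CustomerId),
        Amount     DECIMAL(12,2) NOT NULL
    );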

I make the distinction that, it is not that a database is Normalised to a particular NF or not; the database is simply Normalised. It is that each table is Normalised to a particular NF: some tables may only require 1NF, others 3NF, and yet others require 5NF.

2.2 Performance

There was a time when people thought that Normalisation did not provide performance, and they had to "denormalise for performance". Thank God that myth has been debunked, and most IT professionals today realise that Normalised databases perform better. The database vendors optimise for Normalised databases, not for denormalised file systems. The truth about "denormalised" databases is: the database was NOT Normalised in the first place (and it performed badly); it was Unnormalised, and they did some further scrambling to improve performance. In order to be Denormalised, it has to be faithfully Normalised first, and that never took place. I have rewritten scores of such "denormalised for performance" databases, providing faithful Normalisation and nothing else, and they ran at least ten, and as much as a hundred, times faster. In addition, they required only a fraction of the disk space. It is so pedestrian that I guarantee the exercise, in writing.

2.3 Limitation

The limitations, or rather the full extent, of 5NF are:

  • it does not handle optional values, and Nulls have to be used (many designers disallow Nulls and use substitutes, but this has limitations if it is not implemented properly and consistently)
  • you still need to change DDL in order to add or change columns (and there are more and more requirements to add columns that were not initially identified after implementation; change control is onerous)
  • although providing the highest level of performance due to Normalisation (read: elimination of duplicates and confused relations), complex queries such as pivoting (producing a report of rows, or summaries of rows, expressed as columns) and "columnar access" as required for data warehouse operations, are difficult, and those operations alone do not perform well. Note that this is due only to the SQL skill level available, and not to the engine.

3 Sixth Normal Form

3.1 Definition

Sixth Normal Form is defined as:

  • the Relation (row) is the Primary Key plus at most one attribute (column)

It is known as the Irreducible Normal Form, the ultimate NF, because there is no further Normalisation that can be performed. Although it was discussed in academic circles in the mid nineties, it was formally declared only in 2003. For those who like denigrating the formality of the Relational Model, by confusing relations, relvars, "relationships", and the like: all that nonsense can be put to bed because formally, the above definition identifies the Irreducible Relation, sometimes called the Atomic Relation.
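
As a minimal sketch (hypothetical names, generic SQL; dialect details such as TIMESTAMP vs DATETIME vary), a Payment row decomposed to 6NF becomes one table per attribute, keyed on the Payment Primary Key; an optional attribute is represented by the absence of a row rather than by a Null:

    CREATE TABLE Payment          (PaymentId INT NOT NULL PRIMARY KEY);

    CREATE TABLE PaymentAmount    (PaymentId INT NOT NULL PRIMARY KEY
                                       REFERENCES Payment (PaymentId),
                                   Amount    DECIMAL(12,2) NOT NULL);

    CREATE TABLE PaymentPaidAt    (PaymentId INT NOT NULL PRIMARY KEY
                                       REFERENCES Payment (PaymentId),
                                   PaidAt    TIMESTAMP NOT NULL);

    -- Optional attribute: payments without a reference simply have no row here.
    CREATE TABLE PaymentReference (PaymentId INT NOT NULL PRIMARY KEY
                                       REFERENCES Payment (PaymentId),
                                   Reference VARCHAR(30) NOT NULL);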

3.2 Progression

The increment that 6NF provides (that 5NF does not) is:

  • formal support for optional values, and thus, elimination of The Null Problem
    • a side effect is, columns can be added without DDL changes (more later)
  • effortless pivoting
  • simple and direct columnar access
    • it allows for (not in its vanilla form) an even greater level of performance in this department

Let me say that I (and others) were supplying enhanced 5NF tables 20 years ago, explicitly for pivoting, with no problem at all, and thus allowing (a) simple SQL to be used and (b) providing very high performance; it was nice to know that the academic giants of the industry had formally defined what we were doing. Overnight, my 5NF tables were renamed 6NF, without me lifting a finger. Second, we only did this where we needed it; again, it was the table, not the database, that was Normalised to 6NF.

3.3 SQL Limitation

It is a cumbersome language, particularly re joins, and doing anything moderately complex makes it very cumbersome. (It is a separate issue that most coders do not understand or use subqueries.) It supports the structures required for 5NF, but only just. For robust and stable implementations, one must implement additional standards, which may consist, in part, of additional catalogue tables. The "use by" date for SQL had well and truly elapsed by the early nineties; it is totally devoid of any support for 6NF tables, and desperately in need of replacement. But that is all we have, so we need to just Deal With It.

For those of us who had been implementing standards and additional catalogue tables, it was not a serious effort to extend our catalogues to provide the capability required to support 6NF structures to standard: which columns belong to which tables, and in what order; mandatory/optional; display format; etc. Essentially a full MetaData catalogue, married to the SQL catalogue.

Note that each NF contains each previous NF within it, so 6NF contains 5NF. We did not break 5NF in order to provide 6NF; we provided a progression from 5NF, and where SQL fell short we provided the catalogue. What this means is, basic constraints such as Foreign Keys, and Value Domains which were provided via SQL Declarative Referential Integrity, Datatypes, CHECKs, and RULEs at the 5NF level, remained intact, and these constraints were not subverted. The high quality and high performance of standard-compliant 5NF databases was not reduced in any way by introducing 6NF.

3.4 Catalogue

It is important to shield the users (any report tool) and the developers from having to deal with the jump from 5NF to 6NF (it is their job to be app coding geeks; it is my job to be the database geek). Even at 5NF, that was always a design goal for me: a properly Normalised database, with a minimal Data Directory, is in fact quite easy to use, and there was no way I was going to give that up. Keep in mind that due to normal maintenance and expansion, the 6NF structures change over time, and new versions of the database are published at regular intervals. Without doubt, the SQL (already cumbersome at 5NF) required to construct a 5NF row from the 6NF tables is even more cumbersome. Gratefully, that is completely unnecessary.

Since we already had our catalogue, which identified the full 6NF-DDL-that-SQL-does-not-provide, if you will, I wrote a small utility to read the catalogue and:

  • generate the 6NF table DDL.
  • generate 5NF VIEWS of the 6NF tables. This allowed the users to remain blissfully unaware of them, and gave them the same capability and performance as they had at 5NF (a sketch of such a view follows this list).
  • generate the full SQL (not a template, we have those separately) required to operate against the 6NF structures, which coders then use. They are released from the tedium and repetition which is otherwise demanded, and free to concentrate on the app logic.
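
A hand-written sketch of the kind of VIEW such a utility would generate, reusing the hypothetical 6NF tables sketched under 3.1 (the real thing is driven by the catalogue, not written by hand):

    -- Present the 6NF tables back to users and report tools as one 5NF row.
    CREATE VIEW Payment_5NF AS
    SELECT    p.PaymentId,
              a.Amount,
              t.PaidAt,
              r.Reference    -- a result-set Null when the optional row is absent, not a stored Null
    FROM      Payment          p
    JOIN      PaymentAmount    a ON a.PaymentId = p.PaymentId
    JOIN      PaymentPaidAt    t ON t.PaymentId = p.PaymentId
    LEFT JOIN PaymentReference r ON r.PaymentId = p.PaymentId;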

I did not write a utility for Pivoting because the complexity present at 5NF is eliminated, and pivot queries are now dead simple to write, as with the 5NF-enhanced-for-pivoting. Besides, most report tools provide pivoting, so I only need to provide functions which comprise heavy churning of stats, which needs to be performed on the server before shipment to the client.

3.5 Performance

Everyone has their "disease" to suffer, their cross to bear; I happen to be obsessed with Performance. My 5NF databases performed well, so let me assure you that I ran far more benchmarks than were necessary, before placing anything in production. The 6NF database performed exactly the same as the 5NF database, no better, no worse. This is no surprise, because the only thing the "complex" 6NF SQL does, that the 5NF SQL doesn't, is perform many more joins and subqueries.

You have to examine the myths.

  • Anyone who has benchmarked the issue (i.e. examined the execution plans of queries) will know that Joins Cost Nothing; it is a compile-time resolution, and they have no effect at execution time (one way to inspect the plans yourself is sketched after this list).
  • Yes, of course, the number of tables joined; the size of the tables being joined; whether indices can be used; the distribution of the keys being joined; etc, all cost something.
  • But the join itself costs nothing.
  • A query on five (larger) tables in an Unnormalised database is much slower than the equivalent query on ten (smaller) tables in the same database if it were Normalised. The point is, neither the four nor the nine Joins cost anything; they do not figure in the performance problem; the selected set on each Join does figure in it.
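
One way to do that kind of check yourself. The statement below is generic; the exact command varies by engine (EXPLAIN in MySQL and PostgreSQL, SET SHOWPLAN_ALL ON in SQL Server, set showplan on in Sybase ASE), and the tables are the hypothetical 6NF ones sketched earlier:

    -- Ask the optimiser for the plan rather than (or as well as) running the query.
    EXPLAIN
    SELECT  p.PaymentId, a.Amount, t.PaidAt
    FROM    Payment       p
    JOIN    PaymentAmount a ON a.PaymentId = p.PaymentId
    JOIN    PaymentPaidAt t ON t.PaymentId = p.PaymentId;
    -- The plan shows index choices, join order, and rows examined per step;
    -- the cost lies in the rows touched at each step, not in the JOIN keyword itself.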

3.6 Benefit

  1. Unrestricted columnar access. This is where 6NF really stands out. The straight columnar access was so fast that there was no need to export the data to a data warehouse in order to obtain speed from specialised DW structures.

    My research into a few DWs, by no means complete, shows that they consistently store data by columns, as opposed to rows, which is exactly what 6NF does. I am conservative, so I am not about to make any declarations that 6NF will displace DWs, but in my case it eliminated the need for one.

  2. It would not be fair to compare functions available in 6NF that were unavailable in 5NF (eg. Pivoting), which obviously ran much faster.

That was our first true 6NF database (with a full catalogue, etc; as opposed to the always 5NF with enhancements only as necessary; which later turned out to be 6NF), and the customer is very happy. Of course I was monitoring performance for some time after delivery, and I identified an even faster columnar access method for my next 6NF project. That, when I do it, might present a bit of competition for the DW market. The customer is not ready, and we do not fix that which is not broken.

3.7 What, Exactly, about 6NF, is "Bad" ?

Note that not everyone would approach the job with as much formality, structure, and adherence to standards. So it would be silly to conclude from our project that all 6NF databases perform well and are easy to maintain. It would be just as silly to conclude (from looking at the implementations of others) that all 6NF databases perform badly and are hard to maintain; disasters. As always, with any technical endeavour, the resulting performance and ease of maintenance are strictly dependent on formality, structure, and adherence to standards, in addition to the relevant skill set.

3.8 Availability

Please don't expose yourself by asking for anything beyond the boundaries of standard commercial practice, such as "published references": the customer is an Australian bank, and the whole implementation is confidential; but I am free to take prospects on visits. You are also welcome to view (but not copy) the documentation at our offices in Sydney. The methodology (structures and standards beyond the publicly available 6NF education) and the utilities are our proprietary Intellectual Property, and they are available for assignments. At this stage I am selling it only as part of an assignment, because (a) I need to reasonably ensure the success of the project (in order not to hurt our reputation), and (b) one successful project under our belts is not enough maturity to classify it as 'ready for market'.

I am happy to continue answering questions, and providing helpful information re the 6NF catalogue, advice re what works and what doesn't, etc, without actually publishing our IP (documentation). I am also happy to run qualified benchmarks for you.

4 Entity Attribute Value

Disclosure: Experience. I have inspected a few of these, mostly hospital and medical systems. I have performed corrective assignments on two of them. The initial delivery by the overseas provider was quite adequate, although not great, but the extensions implemented by the local provider were a mess. But not nearly the disaster that people have posted about re EAV on this site. A few months intense work fixed them up nicely.

4.1 What It Is

It was obvious to me that the EAV implementations I have worked on are merely subsets of Sixth Normal Form. Those who implement EAV do so because they want some of the features of 6NF (eg. the ability to add columns without DDL changes), but they do not have the academic knowledge to implement true 6NF, or the standards and structures to implement and administer it securely. Even the original provider did not know about 6NF, or that EAV was a subset of 6NF, but they readily agreed when I pointed it out to them. Because the structures required to provide EAV, and indeed 6NF, efficiently and effectively (catalogue; Views; automated code generation) are not formally identified in the EAV community, and are missing from most implementations, I classify EAV as the bastard son of Sixth Normal Form.

4.2 What, Exactly, about EAV, is "Bad" ?

Going by the comments in this and other threads, yes, EAV done badly is a disaster. More important, (a) they are so bad that the performance provided at 5NF (forget 6NF) is lost, and (b) the ordinary isolation from the complexity has not been implemented (coders and users are "forced" to use cumbersome navigation). And if they did not implement a catalogue, all sorts of preventable errors will not have been prevented. All that may well be true for bad (EAV or other) implementations, but it has nothing to do with 6NF or EAV. The two projects I worked on had quite adequate performance (sure, it could be improved; but there was no bad performance due to EAV), and good isolation of complexity. Of course, they were nowhere near the quality or performance of my 5NF databases or my true 6NF database, but they were fair enough, given the level of understanding of the posted issues within the EAV community. They were not the disasters and sub-standard nonsense alleged to be EAV in these pages.

5 Nulls

There is a well-known and documented issue called The Null Problem. It is worthy of an essay by itself. For this post, suffice to say:

  • the problem is really the optional or missing value; here the consideration is table design such that there are no Nulls vs Nullable columns
  • actually it does not matter because, regardless of whether you use Nulls/No Nulls/6NF to exclude missing values, you will have to code for that; the problem then, precisely, is handling missing values, which cannot be circumvented
    • except of course for pure 6NF, which eliminates the Null Problem
    • the coding to handle missing values remains
      • except, with automated generation of SQL code, heh heh
  • Nulls are bad news for performance, and many of us decided decades ago not to allow Nulls in the database (Nulls in passed parameters and result sets, to indicate missing values, are fine)
    • which means a set of Null Substitutes and boolean columns to indicate missing values (sketched below)
  • Nulls cause otherwise fixed-length columns to be variable-length; variable-length columns should never be used in indices, because a little 'unpacking' has to be performed on every access of every index entry, during traversal or dive.
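
A small sketch of that convention (the column names, substitute value, and 'Y'/'N' indicator are illustrative, not a standard; some engines offer a native BOOLEAN or BIT type instead):

    -- No Nulls stored: fixed-length NOT NULL columns, a documented substitute value,
    -- and an explicit indicator column for "value is missing".
    CREATE TABLE CustomerContact (
        CustomerId   INT      NOT NULL PRIMARY KEY,
        PhoneNumber  CHAR(10) NOT NULL DEFAULT '0000000000',   -- substitute for "missing"
        PhoneIsKnown CHAR(1)  NOT NULL DEFAULT 'N'
            CHECK (PhoneIsKnown IN ('Y', 'N'))
    );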

6 Position

I am not a proponent of EAV or 6NF, I am a proponent of quality and standards. My position is:

  1. Always, in all ways, do whatever you are doing to the highest standard that you are aware of.

  2. Normalising to Third Normal Form is minimal for a Relational Database (5NF for me). Datatypes, Declarative Referential Integrity, Transactions, and Normalisation are all essential requirements of a database; if they are missing, it is not a database.

    • if you have to "denormalise for performance", you have made serious Normalisation errors; your design is not normalised. Period. Do not "denormalise"; on the contrary, learn Normalisation and Normalise.
  3. There is no need to do extra work. If your requirement can be fulfilled with 5NF, do not implement more. If you need Optional Values or the ability to add columns without DDL changes or the complete elimination of the Null Problem, implement 6NF, only in those tables that need them.

  4. If you do that, due only to the fact that SQL does not provide proper support for 6NF, you will need to implement:

    • a simple and effective catalogue (column mix-ups and data integrity loss are simply not acceptable)
    • 5NF access for the 6NF tables, via VIEWS, to isolate the users (and developers) from the encumbered (not "complex") SQL
    • write or buy utilities, so that you can generate the cumbersome SQL to construct the 5NF rows from the 6NF tables, and avoid writing same
    • measure, monitor, diagnose, and improve. If you have a performance problem, you have made either (a) a Normalisation error or (b) a coding error. Period. Back up a few steps and fix it.
  5. If you decide to go with EAV, recognise it for what it is, 6NF, and implement it properly, as above. If you do, you will have a successful project, guaranteed. If you do not, you will have a dog's breakfast, guaranteed.

6.1 There Ain't No Such Thing As A Free Lunch

That adage has been referred to, but actually it has been misused. The way it actually, deeply applies is as above: if you want the benefits of 6NF/EAV, you had better be willing to do the work required to obtain it (catalogue, standards). Of course, the corollary is, if you don't do the work, you won't get the benefit. There is no "loss" of Datatypes; Value Domains; Foreign Keys; Checks; Rules. Regarding performance, there is no performance penalty for 6NF/EAV, but there is always a substantial performance penalty for slip-shod, sub-standard work.

7 Specific Question

Finally. With due consideration to the context above, and that it is a small project with a small team, there is no question:

  • Do not use EAV (or 6NF for that matter)
  • Do not use Nulls or Nullable columns (unless you wish to subvert performance)
  • Do use a single Payment table for the common payment columns
  • and a child table for each PaymentType, each with its specific columns
  • All fully typecast and constrained.

  • What's this "another row_id" business? Why do some of you stick an ID on everything that moves, without checking whether it is a deer or an eagle? No. The child is a dependent child. The Relation is 1::1. The PK of the child is the PK of the parent, the common Payment table. This is an ordinary Supertype-Subtype cluster; the Differentiator is PaymentTypeCode. Subtypes and supertypes are an ordinary part of the Relational Model, and fully catered for in the database, as well as in any good modelling tool (a DDL sketch follows below).

    Sure, people who have no knowledge of Relational databases think they invented it 30 years later, and give it funny new names. Or worse, they knowingly re-label it and claim it as their own. Until some poor sod, with a bit of education and professional pride, exposes the ignorance or the fraud. I do not know which one it is, but it is one of them; I am just stating facts, which are easy to confirm.
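
A minimal DDL sketch of that Supertype-Subtype cluster. PaymentTypeCode is the Differentiator named above; every other column, and the specific payment types, are assumptions for illustration:

    -- Supertype: the common payment columns.
    CREATE TABLE Payment (
        PaymentId       INT           NOT NULL PRIMARY KEY,
        PaymentTypeCode CHAR(2)       NOT NULL                  -- Differentiator
            CHECK (PaymentTypeCode IN ('CC', 'CH', 'WT')),      -- card/cheque/wire (illustrative)
        Amount          DECIMAL(12,2) NOT NULL,
        PaidAt          TIMESTAMP     NOT NULL
    );

    -- Subtypes: 1::1, dependent children; the child PK is the parent PK.
    CREATE TABLE PaymentCreditCard (
        PaymentId   INT      NOT NULL PRIMARY KEY REFERENCES Payment (PaymentId),
        CardNumber  CHAR(16) NOT NULL,
        ExpiryMonth SMALLINT NOT NULL CHECK (ExpiryMonth BETWEEN 1 AND 12)
    );

    CREATE TABLE PaymentCheque (
        PaymentId    INT         NOT NULL PRIMARY KEY REFERENCES Payment (PaymentId),
        ChequeNumber VARCHAR(20) NOT NULL,
        BankCode     CHAR(6)     NOT NULL
    );
    -- Adding a new payment type means adding one new subtype table;
    -- all columns stay fully typed, and all constraints stay declarative.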

Thanks for staying with me to the end.

A. Responses to Comments

A.1 Attribution

In stating that "I am faithful to the RM", and in referring to the "Giants of the Industry", I assumed that IT professionals would understand what that meant. Humble apologies.

  1. I do not have personal or private or special definitions. All statements regarding the definition (such as imperatives) of:
    • Normalisation,
    • Normal Forms, and
    • the Relational Model
    refer to the many original texts by EF Codd and CJ Date (not available free on the web), the latest being Temporal Data and The Relational Model by CJ Date, Hugh Darwen, and Nikos A Lorentzos, and nothing but those texts.
    "I stand on the shoulders of giants."
  2. The essence, the body, all statements regarding the implementation (eg. subjective, and first person) of the above are based on experience; implementing the above principles and concepts, as a commercial organisation (salaried consultant or running a consultancy), in large financial institutions in America and Australia, over 32 years.
    • This includes scores of large assignments correcting or replacing sub-standard or non-relational implementations.
  3. The Null Problem vis-a-vis Sixth Normal Form
    A freely available White Paper relating to the title (it does not define The Null Problem alone) can be found at:
    http://www.dcs.warwick.ac.uk/~hugh/TTM/Missing-info-without-nulls.pdf
    A 'nutshell' definition of 6NF (meaningful to those experienced with the other NFs) can be found on p. 6.

A.2 Supporting Evidence

  1. As stated at the outset, the purpose of this post is to counter the misinformation that is rife in this community, as a service to the community.
  2. Evidence supporting statements made re the implementation of the above principles can be provided, if and when specific statements are identified, and to the same degree that the incorrect statements posted by others, to which this post is a response, are likewise evidenced. If there is going to be a bun fight, let's make sure the playing field is level.
  3. Here are a few docs that I can lay my hands on immediately, to get started (I am on assignment in NZ, will provide more in a couple of days, the customer names have to be obfuscated).

    a. Large Bank
    This is the best example, as it was undertaken for explicitly the reasons in this post, and goals were realised. They had a budget for Sybase IQ (DW product) but the reports were so fast when we finished the project, they did not need it. The trade analytical stats were my 5NF plus pivoting extensions which turned out to be 6NF, described above. I think all the questions asked in the comments have been answered in the doc, except:
    - number of rows:
      - old database: unknown, but it can be extrapolated from the other stats
      - new database: 20 tables over 100M rows, 4 tables over 10B rows.

    b. Small Financial Institute Part A
    Part B - The meat
    Part C - Referenced Diagrams
    Part D - Appendix, Audit of Indices Before/After (1 line per Index)
    Note: four docs; the fourth is only for those who wish to inspect detailed Index changes. They were running a 3rd party app that could not be changed because the local supplier was out of business, plus 120% extensions which they could, but did not want to, change. We were called in because they upgraded to a new version of Sybase, which was much faster, which shifted the various performance thresholds, which caused a large number of deadlocks. Here we Normalised absolutely everything in the server except the db model, with the goal (guaranteed beforehand) of eliminating deadlocks (sorry, I am not going to explain that here: people who argue about the "denormalisation" issue will be in a pink fit about this one). It included a reversal of "splitting tables into an archive db for performance", which is the subject of another post (yes, the new single table performed faster than the two split ones). This exercise applies to MS SQL Server [insert rewrite version] as well.

    c. Yale New Haven Hospital
    That's Yale School of Medicine, their teaching hospital. Been supporting them for years. Third party app on top of Sybase. The problem with the stats is, 80% of the time they were collecting snapshots at nominated test times only, with no consistent history, so there is no "before image" to compare our new consistent stats with. I do not know of any other company that can get Unix and Sybase internal stats on the same graphs, in an automated manner. Now the network is the threshold (I trust the reader appreciates that that is a Good Thing).

Just something to start with, that has been cleared for publication. More later. Ok, let's have some evidence supporting the notion that "'denormalisation' improves performance", etc. Your turn.

A.3 Length

  1. I do not apologise for the length or for the condensed nature. People with short attention spans (no offence, it is a reality these days), or who are unfamiliar with Relational technology or terminology, should refer to source texts, or to proponents of said technology.
  2. By definition, that excludes Wiki, and anyone denigrating said technology, such as the posters to which this post is a response. It is impossible for an elephant to define a gazelle; and if they do postulate about the life of gazelles, we should not listen to them.



Answer 3:


My #1 principle is not to redesign something for no reason. So I would go with option 1 because that's your current design and it has a proven track record of working.

Spend the redesign time on new features instead.




Answer 4:


If I were designing from scratch I would go with number two. It gives you the flexibility you need. However, with number 1 already in place and working, and this being something rather central to your whole app, I would probably be wary of making a major design change without a good idea of exactly what queries, stored procs, views, UDFs, reports, imports, etc. you would have to change. If it were something I could do with relatively low risk (and good testing already in place), I might go for the change to solution 2; otherwise you might be introducing new, worse bugs.

Under no circumstances would I use an EAV table for something like this. They are horrible for querying and performance, and the flexibility is way overrated (ask users if they would prefer to be able to add new types 3-4 times a year without a program change, at the cost of everyday performance).




Answer 5:


At first sight, I would go for option 2 (or 3): when possible, generalize. Option 4 is not very Relational, I think, and will make your queries complex. When confronted with these questions, I generally weigh the options against "use cases": how does design 2/3 behave when doing this or that operation?



Source: https://stackoverflow.com/questions/4056093/what-are-the-disadvantages-of-using-a-key-value-table-over-nullable-columns-or
