Use case for hstore against multiple columns

2020-12-18 05:33

I'm having some trouble deciding which approach to use.

I have several entity \"types\", let\'s call them A,B and C, who share a certain number of attributes (a

2 Answers
  • 2020-12-18 06:27

    (4) Inheritance

    The cleanest style from a database-design point of view would probably be inheritance, as @yieldsfalsehood suggested in his comment. Here is an example with more information, code and links:
    Select (retrieve) all records from multiple schemas using Postgres

    The current implementation of inheritance in Postgres has a number of limitations, though. Among others, you cannot define common foreign key constraints for all inheriting tables. Read the last chapter about caveats carefully.
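    A minimal sketch of what that could look like, with hypothetical table and column names (child tables inherit the parent's columns, but indexes and most constraints on the parent do not propagate to children):

        CREATE TABLE entity (
            entity_id serial PRIMARY KEY,  -- PK does not propagate to children
            name      text NOT NULL        -- attributes common to all types
        );

        CREATE TABLE entity_a (
            attr_a1 boolean,               -- attributes specific to type A
            attr_a2 integer
        ) INHERITS (entity);

        -- a query on the parent also sees the rows of all children
        SELECT * FROM entity;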

    (3) hstore, json (pg 9.2+) / jsonb (pg 9.4+)

    A good alternative for lots of different attributes, or a changing set of attributes, especially since you can even have functional indices on attributes inside the column (see the sketch after these links):

    • unique index or constraint on hstore key
    • Index for finding an element in a JSON array
    • jsonb indexing in Postgres 9.4
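
    A minimal sketch of the hstore variant, assuming a single table with a hypothetical attrs column:

        CREATE EXTENSION IF NOT EXISTS hstore;

        CREATE TABLE entity (
            entity_id   serial PRIMARY KEY,
            entity_type text NOT NULL,
            attrs       hstore
        );

        -- expression ("functional") index on a single hstore key
        CREATE INDEX entity_attrs_color_idx ON entity ((attrs -> 'color'));

        -- or a GIN index covering all keys, supporting @>, ?, ?& and ?|
        CREATE INDEX entity_attrs_gin_idx ON entity USING gin (attrs);

        -- each of these can use one of the indexes above
        SELECT * FROM entity WHERE attrs -> 'color' = 'red';
        SELECT * FROM entity WHERE attrs @> 'color => red';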

    (2) EAV

    EAV (entity-attribute-value) storage has its own set of advantages and disadvantages. This question on dba.SE provides a very good overview.
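    For reference, a minimal EAV layout might look like this (hypothetical names; note that all values end up in one untyped text column, which is one of the main drawbacks):

        CREATE TABLE entity (
            entity_id   serial PRIMARY KEY,
            entity_type text NOT NULL
        );

        CREATE TABLE entity_attribute (
            entity_id integer NOT NULL REFERENCES entity,
            attr_name text NOT NULL,
            attr_val  text,            -- everything is stored as text
            PRIMARY KEY (entity_id, attr_name)
        );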

    (1) One table with lots of columns

    It's the simple, kind of brute-force alternative. Judging from your description, you would end up with around 100 columns, most of them boolean and most of them NULL most of the time. Add a column entity_type to mark the type. Enforcing constraints per type is a bit awkward with lots of columns; I wouldn't bother with too many constraints that might not be needed.

    The maximum number of columns allowed is 1600, and with most of the columns being small and NULL you can actually approach that upper limit. As long as you keep it down to 100 - 200 columns, I wouldn't worry. NULL storage is very cheap in Postgres: basically 1 bit per column in the row's null bitmap (though it's more complex than that), which is only around 10 - 20 bytes extra per row here. Contrary to what one might assume (!), that is most probably much smaller on disk than the hstore solution.

    While such a table looks monstrous to the human eye, it is no problem for Postgres to handle; RDBMSs specialize in brute force. You might define a set of views (one per entity type) on top of the base table, each with just the columns of interest, and work with those where applicable (see the sketch below). That's like the reverse of the inheritance approach, but this way you can have common indexes, foreign keys, etc. Not that bad. I might do that.
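
    A minimal sketch of the wide table with per-type views, using hypothetical column names:

        CREATE TABLE entity (
            entity_id   serial PRIMARY KEY,
            entity_type text NOT NULL,     -- 'A', 'B' or 'C'
            name        text NOT NULL,     -- common attributes
            attr_a1     boolean,           -- type A only
            attr_b1     boolean,           -- type B only
            attr_c1     integer            -- type C only
            -- ... up to ~100 mostly-NULL columns
        );

        -- one view per entity type, exposing only the relevant columns
        CREATE VIEW entity_a AS
        SELECT entity_id, name, attr_a1
        FROM   entity
        WHERE  entity_type = 'A';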

    All that said, the decision is still yours. It all depends on the details of your requirements.

  • 2020-12-18 06:36

    In my line of work, we have rapidly changing requirements, and we rarely get downtime for proper schema upgrades. Having done both the big record with lots of NULLs and the highly normalized (name, value) approach, I've been thinking that it might be nice to have all the common attributes in proper columns and put the different / less common ones in an hstore bucket for the rest.
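
    A minimal sketch of that hybrid layout, with hypothetical names (requires the hstore extension):

        CREATE EXTENSION IF NOT EXISTS hstore;

        CREATE TABLE entity (
            entity_id serial PRIMARY KEY,
            name      text NOT NULL,                      -- common, well-known attributes
            created   timestamptz NOT NULL DEFAULT now(),
            extras    hstore                              -- rare / volatile attributes
        );

        -- read a rarely used attribute out of the bucket
        SELECT name, extras -> 'legacy_flag' AS legacy_flag
        FROM   entity
        WHERE  extras ? 'legacy_flag';                    -- ? tests key existence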
