Can PostgreSQL array be optimized for join?

死守一世寂寞 2021-01-26 08:34

I see that Postgres arrays are good for performance if the array's elements are the data themselves, e.g., tags:

http://shon.github.io/2015/12/21/postgres_array_performance.html

1 Answer
  • 2021-01-26 08:55

    No, storing FKs in an array is never a good idea for general-purpose tables. First and foremost, there is the fact you mentioned in passing: foreign key constraints for array elements are not implemented (yet). This alone should void the idea.

    There was an attempt to implement the feature for Postgres 9.3 that was stopped by serious performance issues. See this thread on pgsql-hackers.

    Also, while read performance can be improved with arrays for certain use cases, write performance plummets. Think of it: to insert, update or delete a single element of a long array, you now have to write a new row version containing the whole array for every changed element. And I see serious lock contention ahead, too.
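    A minimal sketch of the write-amplification problem (table and column names are hypothetical): removing one id from the array rewrites the entire array value, and with it a whole new row version under MVCC:

    ```sql
    -- Hypothetical schema: posts reference tags via an int[] column.
    CREATE TABLE post (
      post_id int PRIMARY KEY,
      title   text,
      tag_ids int[]  -- no FK constraint possible on the elements
    );

    -- Removing a single tag id (42) from matching posts rewrites
    -- the whole array - and thus the whole row - in each case:
    UPDATE post
    SET    tag_ids = array_remove(tag_ids, 42)
    WHERE  tag_ids @> ARRAY[42];
    ```

    With a normalized junction table, the same operation would only delete narrow junction rows instead of rewriting wide ones.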

    If your table is read only, the idea starts to make more sense. But then I would consider a materialized view with denormalized arrays on top of a normalized many-to-many implementation:

    • How to implement a many-to-many relationship in PostgreSQL?

    While we're at it, the MV can include all join tables and produce one flat table for even better read performance (for typical use cases). This way you get referential integrity and good read (and write) performance - at the cost of the overhead and additional storage for managing the MV.
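    A sketch of that layout, with hypothetical table names: a normalized many-to-many schema keeps referential integrity, and a materialized view on top serves the denormalized arrays for fast reads:

    ```sql
    -- Normalized many-to-many implementation (writes go here):
    CREATE TABLE post (post_id int PRIMARY KEY, title text);
    CREATE TABLE tag  (tag_id  int PRIMARY KEY, name  text);
    CREATE TABLE post_tag (
      post_id int REFERENCES post,
      tag_id  int REFERENCES tag,
      PRIMARY KEY (post_id, tag_id)
    );

    -- Denormalized, read-optimized flat table on top:
    CREATE MATERIALIZED VIEW post_flat AS
    SELECT p.post_id, p.title
         , array_agg(t.tag_id) AS tag_ids
         , array_agg(t.name)   AS tag_names
    FROM   post     p
    JOIN   post_tag pt USING (post_id)
    JOIN   tag      t  USING (tag_id)
    GROUP  BY p.post_id;

    -- Refresh after the base tables change:
    REFRESH MATERIALIZED VIEW post_flat;
    ```

    The MV is stale until refreshed, so this fits tables that are mostly read-only or refreshed on a schedule.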
