How to design a database for User Defined Fields?

花落未央 2020-11-27 08:37

My requirements are:

  • Need to be able to dynamically add User-Defined fields of any data type
  • Need to be able to query UDFs quickly
  • Need to be
14 Answers
  • 2020-11-27 09:17

    This is a problematic situation, and none of the solutions appears "right". However, option 1 is probably the best, both in terms of simplicity and in terms of performance.

    This is also the solution used in some commercial enterprise applications.

    EDIT

    Another option that is available now, but didn't exist (or at least wasn't mature) when the question was originally asked, is to use JSON fields in the DB.

    Many relational DBs now support JSON-based fields (which can include a dynamic list of sub-fields) and allow querying on them:

    PostgreSQL

    MySQL
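
    A minimal sketch of the JSON approach using PostgreSQL's jsonb type; the products table, the udf column, and the sample fields are purely illustrative:

    -- Hypothetical table with a JSONB column holding user-defined fields
    CREATE TABLE products (
        id   serial PRIMARY KEY,
        name text NOT NULL,
        udf  jsonb NOT NULL DEFAULT '{}'
    );

    -- A GIN index lets containment queries on the JSON fields use an index
    CREATE INDEX idx_products_udf ON products USING gin (udf);

    -- Each row can carry its own set of user-defined fields
    INSERT INTO products (name, udf)
    VALUES ('Widget', '{"color": "red", "size": "large", "cost": 45.03}');

    -- Query on a user-defined field
    SELECT id, name
    FROM products
    WHERE udf @> '{"color": "red"}';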

  • 2020-11-27 09:18

    I've managed this very successfully in the past using none of these options (option 6? :) ).

    I create a model for the users to play with (stored as XML and exposed via a custom modelling tool), and from the model I generate tables and views to join the base tables with the user-defined data tables. So each type would have a base table with core data and a user table with user-defined fields.

    Take a document as an example: typical fields would be name, type, date, author, etc. These would go in the core table. Then users would define their own special document types with their own fields, such as contract_end_date, renewal_clause, blah blah blah. For that user-defined document there would be the core document table and the xcontract table, joined on a common primary key (so the xcontract table's primary key is also a foreign key to the primary key of the core table). Then I would generate a view to wrap these two tables, as sketched below. Performance when querying was fast. Additional business rules can also be embedded in the views. This worked really well for me.
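
    A sketch of what the generated schema might look like, using the made-up names from the example above (the real tables and views were generated from the model):

    -- Core table shared by all document types
    CREATE TABLE document (
        document_id int PRIMARY KEY,
        name        varchar(200),
        doc_type    varchar(50),
        author      varchar(100),
        created     date
    );

    -- User-defined fields for the "contract" document type live in their own table
    CREATE TABLE xcontract (
        document_id       int PRIMARY KEY REFERENCES document (document_id),
        contract_end_date date,
        renewal_clause    varchar(500)
    );

    -- Generated view wrapping the core and user-defined tables
    CREATE VIEW v_contract AS
    SELECT d.document_id, d.name, d.author, d.created,
           x.contract_end_date, x.renewal_clause
    FROM document d
    JOIN xcontract x ON x.document_id = d.document_id;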

  • 2020-11-27 09:21

    I would recommend #4, since this type of system was used in Magento, which is a highly accredited e-commerce CMS platform. Use a single table to define your custom fields using fieldId and label columns. Then, have separate tables for each data type, and within each of those tables have an index on fieldId and the data-type value column. Then, in your queries, use something like:

    -- Return this user's custom-field text values that match the search term
    SELECT *
    FROM FieldValues_Text
    WHERE fieldId IN (
        SELECT fieldId FROM Fields WHERE userId = @userId
    )
    AND value LIKE '%' + @search + '%'
    

    This will ensure the best possible performance for user-defined types in my opinion.
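
    A rough sketch of the tables that query assumes; column names beyond fieldId, label, userId, and value are assumptions made for illustration:

    CREATE TABLE Fields (
        fieldId  int PRIMARY KEY,
        userId   int NOT NULL,
        label    varchar(100) NOT NULL,
        dataType varchar(20) NOT NULL  -- e.g. 'text', 'int', 'date'
    );

    -- One value table per data type; this is the text variant
    CREATE TABLE FieldValues_Text (
        entityId int NOT NULL,  -- the record the value belongs to
        fieldId  int NOT NULL REFERENCES Fields (fieldId),
        value    varchar(255) NULL,
        PRIMARY KEY (entityId, fieldId)
    );

    -- Index by fieldId and value, as described above
    CREATE INDEX ix_FieldValues_Text ON FieldValues_Text (fieldId, value);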

    In my experience, I've worked on several Magento websites that serve millions of users per month and host thousands of products with custom product attributes, and the database handles the workload easily, even for reporting.

    For reporting, you can use PIVOT to convert the label values from your Fields table into column names, then pivot your query results from each data-type table into those pivoted columns.
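
    A hedged sketch of that reporting approach in SQL Server's PIVOT syntax; the entityId column and the 'Color'/'Size' labels are assumptions, and the pivoted column list must be known in advance or built with dynamic SQL:

    SELECT entityId, [Color], [Size]
    FROM (
        SELECT v.entityId, f.label, v.value
        FROM FieldValues_Text v
        JOIN Fields f ON f.fieldId = v.fieldId
    ) AS src
    PIVOT (
        MAX(value) FOR label IN ([Color], [Size])
    ) AS p;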

  • 2020-11-27 09:24
    1. Create multiple UDF tables, one per data type. So we'd have tables for UDFStrings, UDFDates, etc. Probably would do the same as #2 and auto-generate a View anytime a new field gets added

    According to my research, multiple tables based on data type are not going to help performance. Especially if you have bulk data, like 20K or 25K records with 50+ UDFs, performance was the worst.

    You should go with a single table with multiple columns, like:

    varchar Name
    varchar Type
    decimal NumberValue
    varchar StringValue
    date DateValue
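
    As a rough DDL sketch, that single table might look like the following, with an assumed recordId linking each value back to its parent row (names are illustrative):

    CREATE TABLE UDFValues (
        recordId    int NOT NULL,           -- row the UDF value belongs to
        Name        varchar(100) NOT NULL,  -- UDF name
        Type        varchar(20)  NOT NULL,  -- which typed column is populated
        NumberValue decimal(18, 4) NULL,
        StringValue varchar(255)   NULL,
        DateValue   date           NULL,
        PRIMARY KEY (recordId, Name)
    );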
    
  • 2020-11-27 09:26

    If performance is the primary concern, I would go with #6... a table per UDF (really, this is a variant of #2). This answer is specifically tailored to this situation and to the data distribution and access patterns described; a rough sketch follows.
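
    For illustration only, a table-per-UDF layout might look like this (table and column names are made up):

    CREATE TABLE master_record (
        record_id int PRIMARY KEY,
        name      varchar(100)
    );

    -- One narrow table per UDF, only as large as the rows that actually have a value
    CREATE TABLE udf_contract_end_date (
        record_id         int PRIMARY KEY REFERENCES master_record (record_id),
        contract_end_date date NOT NULL
    );

    -- Filter or aggregate on the UDF alone, then join back for the other attributes
    SELECT m.record_id, m.name, u.contract_end_date
    FROM master_record m
    JOIN udf_contract_end_date u ON u.record_id = m.record_id
    WHERE u.contract_end_date < '2021-01-01';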

    Pros:

    1. Because you indicate that some UDFs have values for a small portion of the overall data set, a separate table would give you the best performance because that table will be only as large as it needs to be to support the UDF. The same holds true for the related indices.

    2. You also get a speed boost by limiting the amount of data that has to be processed for aggregations or other transformations. Splitting the data out into multiple tables lets you perform some of the aggregating and other statistical analysis on the UDF data, then join that result to the master table via foreign key to get the non-aggregated attributes.

    3. You can use table/column names that reflect what the data actually is.

    4. You have complete control to use data types, check constraints, default values, etc. to define the data domains. Don't underestimate the performance hit resulting from on-the-fly data type conversion. Such constraints also help RDBMS query optimizers develop more effective plans.

    5. Should you ever need to use foreign keys, built-in declarative referential integrity is rarely out-performed by trigger-based or application level constraint enforcement.

    Cons:

    1. This could create a lot of tables. Enforcing schema separation and/or a naming convention would alleviate this.

    2. There is more application code needed to operate the UDF definition and management. I expect this is still less code needed than for the original options 1, 3, & 4.

    Other Considerations:

    1. If there is anything about the nature of the data that would make sense for the UDFs to be grouped, that should be encouraged. That way, those data elements can be combined into a single table. For example, let's say you have UDFs for color, size, and cost. The tendency in the data is that most instances of this data look like

       'red', 'large', 45.03 
      

      rather than

       NULL, 'medium', NULL
      

      In such a case, you won't incur a noticeable speed penalty by combining the 3 columns in 1 table, because few values would be NULL, and you avoid making 2 more tables, which means 2 fewer joins when you need to access all 3 columns.

    2. If you hit a performance wall from a UDF that is heavily populated and frequently used, then that should be considered for inclusion in the master table.

    3. Logical table design can take you to a certain point, but when the record counts get truly massive, you also should start looking at what table partitioning options are provided by your RDBMS of choice.

  • 2020-11-27 09:27

    Even if you provide for a user adding custom columns, it will not necessarily be the case that querying on those columns will perform well. There are many aspects that go into query design that allow queries to perform well, the most important of which is the proper specification of what should be stored in the first place. Thus, fundamentally, do you want to allow users to create schema without thought as to specifications and be able to quickly derive information from that schema? If so, then it is unlikely that any such solution will scale well, especially if you want to allow the user to do numerical analysis on the data.

    Option 1

    IMO this approach gives you schema with no knowledge as to what the schema means, which is a recipe for disaster and a nightmare for report designers. I.e., you must have the metadata to know what column stores what data. If that metadata gets messed up, it has the potential to hose your data. Plus, it makes it easy to put the wrong data in the wrong column. ("What? String1 contains the name of convents? I thought it was Charlie Sheen's favorite drugs.")

    Options 3, 4, 5

    IMO, requirements 2, 3, and 4 eliminate any variation of an EAV. If you need to query, sort, or do calculations on this data, then an EAV is Cthulhu's dream and your development team's and DBA's nightmare. EAVs will create a bottleneck in terms of performance and will not give you the data integrity you need to quickly get to the information you want. Queries will quickly turn into crosstab Gordian knots.

    Options 2, 6

    That really leaves one choice: gather specifications and then build out the schema.

    If the client wants the best performance on data they wish to store, then they need to go through the process of working with a developer to understand their needs so that the data is stored as efficiently as possible. It could still be stored in a table separate from the rest of the tables, with code that dynamically builds a form based on the schema of the table. If you have a database that allows for extended properties on columns, you could even use those to help the form builder use nice labels, tooltips, etc., so that all that is necessary is to add the schema.

    Either way, to build and run reports efficiently, the data needs to be stored properly. If the data in question will have lots of nulls, some databases can store that type of information efficiently. For example, SQL Server 2008 has a feature called Sparse Columns specifically for data with lots of nulls.
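
    A minimal SQL Server sketch of sparse columns (the table and column names are invented for the example); SPARSE columns are optimized for storing NULLs cheaply:

    CREATE TABLE dbo.DocumentExtended (
        DocumentId      int PRIMARY KEY,
        ContractEndDate date         SPARSE NULL,
        RenewalClause   varchar(500) SPARSE NULL
    );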

    If this were only a bag of data on which no analysis, filtering, or sorting was to be done, I'd say some variation of an EAV might do the trick. However, given your requirements, the most efficient solution will be to get the proper specifications even if you store these new columns in separate tables and build forms dynamically off those tables.

    Sparse Columns
