What is the best way to store historical data in SQL Server 2005/2008?

Backend · Unresolved · 8 answers · 947 views

刺人心 2021-02-12 10:55

My simplified and contrived example is the following:

Let's say that I want to measure and store the temperature (and other values) of all the world's towns on a daily basis.

8 Answers
  • 2021-02-12 11:08

    I would use a single table with indexed views to provide the latest information. SQL Server 2005 and 2008 are designed with data warehousing in mind, so they should perform well under this kind of load.

    If your data pattern requires writing to the database often, then the best choice would be an active table and an archive table that you batch-update at some interval.
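
    A minimal sketch of the single-table approach with a view over the latest reading per town (table and column names are hypothetical; note that a true indexed view in SQL Server requires SCHEMABINDING and cannot contain subqueries, so this particular view would remain a plain, non-indexed view):

    ```sql
    -- Hypothetical single readings table; the clustered PK keeps each town's
    -- history together and makes the daily insert an append.
    CREATE TABLE dbo.TemperatureReading (
        TownId       INT          NOT NULL,
        ReadingDate  DATETIME     NOT NULL,  -- DATE on SQL Server 2008+
        TemperatureC DECIMAL(5,2) NOT NULL,
        CONSTRAINT PK_TemperatureReading PRIMARY KEY CLUSTERED (TownId, ReadingDate)
    );
    GO

    -- Plain view exposing only the most recent reading per town.
    CREATE VIEW dbo.LatestReading
    AS
    SELECT r.TownId, r.ReadingDate, r.TemperatureC
    FROM dbo.TemperatureReading AS r
    WHERE r.ReadingDate = (SELECT MAX(r2.ReadingDate)
                           FROM dbo.TemperatureReading AS r2
                           WHERE r2.TownId = r.TownId);
    GO
    ```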

  • 2021-02-12 11:14

    Your table is very narrow and would probably perform well as a single, properly indexed table that would never outstrip the capacity of SQL Server in a traditional normalized OLTP model, even for millions and millions of rows. The advantages of a dual-table model can also be matched by using table partitioning in SQL Server, so it doesn't have much to recommend it over the single-table model. This would be an Inmon-style or "Enterprise Data Warehouse" scenario.

    In much bigger scenarios, I would transfer the data to a data warehouse (modeled with a Kimball-style dimensional model) on a regular basis and simply purge the live data - in some simple scenarios like yours, there might effectively be NO live data - it all goes straight into the warehouse. The dimensional model has a lot of advantages when slicing data different ways and storing huge numbers of facts with a variety of dimensions. Even in the data warehouse scenario, often fact tables are partitioned by date.
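
    As a sketch of date partitioning (names hypothetical; table partitioning requires Enterprise Edition, and on SQL Server 2005 you would use DATETIME since DATE arrives in 2008):

    ```sql
    -- Partition function: one partition per year boundary.
    CREATE PARTITION FUNCTION pfReadingDate (DATETIME)
    AS RANGE RIGHT FOR VALUES ('2020-01-01', '2021-01-01', '2022-01-01');

    -- Map every partition to PRIMARY here; real systems often spread filegroups.
    CREATE PARTITION SCHEME psReadingDate
    AS PARTITION pfReadingDate ALL TO ([PRIMARY]);

    -- Fact table created on the partition scheme, keyed by the partitioning column.
    CREATE TABLE dbo.FactTemperature (
        TownId       INT          NOT NULL,
        ReadingDate  DATETIME     NOT NULL,
        TemperatureC DECIMAL(5,2) NOT NULL
    ) ON psReadingDate (ReadingDate);
    ```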

    It might not seem like your data has this (Town and Date are your only explicit dimensions); however, in most data warehouses, dimensions can snowflake or contain redundancy, so other dimensions about the fact would be stored at load time instead of snowflaked out, for efficiency - like State, Zip Code, WasItRaining, IsStationUrban (contrived).

    This might seem silly, but when you start to mine the data for results in data warehouses, this makes asking questions like - on a day with rain in urban environments, what was the average temperature in Maine? - just that little bit easier to get at without joining a whole bunch of tables (i.e. it doesn't require a lot of expertise on your normalized model and performs very quickly). Kind of like useless stats in baseball - but some apparently turn out to be useful.

  • 2021-02-12 11:15

    Instead of trying to optimize a relational database for this, you might want to consider using a time-series database. These are already optimized for dealing with time-based data. Some of their advantages are:

    • Faster at querying time-based keys
    • Large data throughput
      • Since the default operation is just an append, writes can be done very quickly (InfluxDB supports millions of data points per second).
    • Able to compress data more aggressively
    • More user-friendly for time-series data.
      • The APIs tend to reflect typical use cases for time-series data
      • Aggregate metrics can be automatically calculated (e.g. windowed averages)
      • Specific visualization tools are often available.

    Personally, I liked using the open-source database InfluxDB, but other good alternatives are available.
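
    As a hedged sketch of what such a windowed-aggregate query looks like, here is an InfluxQL query (InfluxDB 1.x; the measurement and tag names are hypothetical):

    ```sql
    -- Daily mean temperature per town over the last 30 days (InfluxQL).
    SELECT MEAN("value")
    FROM "temperature"
    WHERE time > now() - 30d
    GROUP BY time(1d), "town"
    ```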

  • 2021-02-12 11:16

    It DEPENDS on the application's usage patterns. If usage patterns indicate that the historical data will be queried more often than the current values, then put them all in one table. But if historical queries are the exception (say, less than 10% of queries), and the performance of the more common current-value query will suffer from putting all the data in one table, then it makes sense to separate that data into its own table.

  • 2021-02-12 11:16

    I would keep the data in one table unless you have a very serious bias for current data (in usage) or history data (in volume). A compound index with DATE + TOWNID (in that order) would remove the performance concern in most cases (although clearly we don't have the data to be sure of this at this time).

    The one thing I would wonder about is whether anyone will want data from both the current and history tables for a town. If so, you have just created at least one new view to worry about, and a possible performance problem in that direction.

    This is unfortunately one of those things where you may need to profile your solutions against real world data. I personally have used compound indexes such as specified above in many cases, and yet there are a few edge cases where I have opted to break the history into another table. Well, actually another data file, because the problem was that the history was so dense that I created a new data file for it alone to avoid bloating the entire primary data file set. Performance issues are rarely solved by theory.

    I would recommend reading up on query hints for index use, and "covering indexes" for more information about performance issues.
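
    A minimal sketch combining both suggestions, using hypothetical names: a compound index leading on the date, extended into a covering index by INCLUDE-ing the measured value so the common query never has to touch the base table:

    ```sql
    -- Compound key (ReadingDate, TownId); INCLUDE makes it covering for
    -- queries that also select the temperature (INCLUDE requires SQL Server 2005+).
    CREATE NONCLUSTERED INDEX IX_Reading_Date_Town
    ON dbo.TemperatureReading (ReadingDate, TownId)
    INCLUDE (TemperatureC);
    ```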

  • 2021-02-12 11:25

    I would suggest keeping it in the same table, since historical data is queried just as often - unless you will be adding many more columns to the table.

    When size becomes an issue, you can partition it out by decade and have a stored procedure union the requested rows.
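
    A sketch of that stored procedure over a hypothetical pair of current and archive tables:

    ```sql
    -- Returns one town's readings from both the active and archive tables.
    CREATE PROCEDURE dbo.GetTownReadings
        @TownId INT
    AS
    BEGIN
        SELECT TownId, ReadingDate, TemperatureC
        FROM dbo.TemperatureCurrent
        WHERE TownId = @TownId
        UNION ALL
        SELECT TownId, ReadingDate, TemperatureC
        FROM dbo.TemperatureArchive
        WHERE TownId = @TownId;
    END;
    ```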
