randomizing large dataset

失恋的感觉 2021-01-15 14:58

I am trying to find a way to get a random selection from a large dataset.

We expect the set to grow to ~500K records, so it is important to find a way that keeps performance from degrading as the data grows. Generating a random number against the primary key works when any row will do, but I also need to pick a random row that satisfies filter conditions only about 5% of the records match.

3 Answers
  • 2021-01-15 15:09

    You could solve this with some denormalization:

    • Build a secondary table that contains the same pkeys and statuses as your data table
    • Add and populate a status-group column (StatusPkey below), a kind of sub-pkey that you number yourself: a 1-based auto-increment that restarts for each status
    Pkey    Status    StatusPkey
    1       A         1
    2       A         2
    3       B         1
    4       B         2
    5       C         1
    ...     C         ...
    n       C         m (where m = # of C statuses)
    

    When you don't need to filter, you can generate random numbers against the pkey as you mentioned above. When you do need to filter, generate random numbers against the StatusPkeys of the particular status you're interested in.

    There are several ways to build this table. You could have a procedure that you run on an interval, or you could maintain it live. The latter would be a performance hit, though, since calculating the StatusPkey could get expensive.
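
    A minimal sketch of that layout, assuming MySQL 8.0+ and a hypothetical index-table name StatusIndex (MyTable, Pkey, Status, and StatusPkey follow the thread; the column types are guesses):

    CREATE TABLE StatusIndex (
        Pkey       INT      NOT NULL,
        Status     CHAR(1)  NOT NULL,
        StatusPkey INT      NOT NULL,   -- 1-based counter that restarts for each status
        PRIMARY KEY (Status, StatusPkey)
    );

    -- Rebuild on an interval: number the rows within each status 1..m.
    INSERT INTO StatusIndex (Pkey, Status, StatusPkey)
    SELECT Pkey, Status,
           ROW_NUMBER() OVER (PARTITION BY Status ORDER BY Pkey)
    FROM MyTable;

    -- Pick a random row with Status = 'C'. Compute the random value once,
    -- outside the WHERE clause, so it is not re-evaluated per row.
    SET @m    := (SELECT COUNT(*) FROM StatusIndex WHERE Status = 'C');
    SET @pick := FLOOR(1 + RAND() * @m);

    SELECT d.*
    FROM StatusIndex AS s
    JOIN MyTable AS d ON d.Pkey = s.Pkey
    WHERE s.Status = 'C'
      AND s.StatusPkey = @pick;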

  • 2021-01-15 15:24

    You can do this efficiently, but you have to do it in two queries.

    First get a random offset scaled by the number of rows that match your 5% conditions:

    SELECT FLOOR(RAND() * (SELECT COUNT(*) FROM MyTable WHERE ...conditions...))
    

    This returns an integer. Next, use the integer as an offset in a LIMIT expression:

    SELECT * FROM MyTable WHERE ...conditions... LIMIT 1 OFFSET ?
    

    Not every problem must be solved in a single SQL query.
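
    A sketch of the two steps run back to back, assuming MySQL and a hypothetical status = 'A' filter standing in for the ...conditions... placeholders above:

    -- Step 1: a random offset among the matching rows, computed once.
    SET @offset := CAST(
        FLOOR(RAND() * (SELECT COUNT(*) FROM MyTable WHERE status = 'A'))
        AS UNSIGNED);

    -- Step 2: OFFSET cannot take an expression directly, so bind the value
    -- with a prepared statement (or pass it in from application code).
    PREPARE pick_row FROM
        'SELECT * FROM MyTable WHERE status = ''A'' LIMIT 1 OFFSET ?';
    EXECUTE pick_row USING @offset;
    DEALLOCATE PREPARE pick_row;

    In application code this is simply two ordinary queries, with the result of the first passed as a bound parameter to the second.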

  • 2021-01-15 15:28

    Check out this article by Jan Kneschke... It does a great job at explaining the pros and cons of different approaches to this problem...
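
    For the unfiltered case, one alternative often weighed in such comparisons is to avoid ORDER BY RAND() entirely and join against a single random id instead; a rough sketch, assuming MySQL and a hypothetical auto-increment id column on MyTable:

    -- Pick one random row without sorting the whole table.
    -- ">=" plus ORDER BY/LIMIT still returns a row when the id sequence has gaps,
    -- at the cost of slightly favouring rows that follow large gaps.
    SELECT t.*
    FROM MyTable AS t
    JOIN (SELECT FLOOR(1 + RAND() * (SELECT MAX(id) FROM MyTable)) AS rnd) AS pick
      ON t.id >= pick.rnd
    ORDER BY t.id
    LIMIT 1;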
