Sorting 1 million 8-decimal-digit numbers with 1 MB of RAM

栀梦 2020-12-22 14:33

I have a computer with 1 MB of RAM and no other local storage. I must use it to accept 1 million 8-digit decimal numbers over a TCP connection, sort them, and then send the sorted list back out over the connection.

30 answers
  • 2020-12-22 14:51

    To represent the sorted array one can just store the first element and the differences between adjacent elements. In this way we are concerned with encoding 10^6 elements that can sum up to at most 10^8. Let's call this sequence of differences D. To encode the elements of D one can use a Huffman code. The dictionary for the Huffman code can be built on the fly and the array updated every time a new item is inserted into the sorted array (insertion sort). Note that when the dictionary changes because of a new item, the whole array has to be re-encoded to match the new code.

    The average number of bits for encoding each element of D is maximized if we have an equal number of each unique element. Say the unique elements d1, d2, ..., dN in D each appear F times. Then (in the worst case we have both 0 and 10^8 in the input sequence) we have

    sum(1<=i<=N) F·di = 10^8

    where

    sum(1<=i<=N) F = N·F = 10^6, so F = 10^6/N, and the normalized frequency of each unique element is p = F/10^6 = 1/N.

    The average number of bits will be -log2(p) = log2(N). Under these circumstances we should look for the case that maximizes N. This happens if the di are consecutive numbers starting from 0, i.e. di = i-1, therefore

    10^8 = sum(1<=i<=N) F·di = sum(1<=i<=N) (10^6/N)(i-1) = (10^6/N)·N(N-1)/2 = 10^6·(N-1)/2

    i.e.

    N <= 201. For this case the average number of bits is log2(201) = 7.6511, which means we need around 1 byte per input element to store the sorted array. Note that this doesn't mean D in general cannot have more than 201 unique elements. It just shows that if the elements of D are uniformly distributed, it cannot have more than 201 unique values.
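
    As a quick sanity check on that "about 1 byte per element" figure, here is a small sketch of my own (not part of the answer above, and a desktop experiment that makes no attempt to stay within 1 MB): it delta-encodes a sorted array of pseudo-random 8-digit values and measures the empirical entropy of the deltas, i.e. the average number of bits an ideal Huffman-style code would need per element.

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000000

    static int cmp_u32(const void *a, const void *b) {
        uint32_t x = *(const uint32_t *)a, y = *(const uint32_t *)b;
        return (x > y) - (x < y);
    }

    int main(void) {
        static uint32_t v[N], d[N];

        // Roughly uniform 8-digit values; two rand() calls are combined
        // because RAND_MAX may be as small as 32767
        srand(1);
        for (int i = 0; i < N; i++)
            v[i] = (((uint32_t)rand() << 15) ^ (uint32_t)rand()) % 100000000u;

        qsort(v, N, sizeof v[0], cmp_u32);

        d[0] = v[0];                        // first element stored as-is
        for (int i = 1; i < N; i++)
            d[i] = v[i] - v[i - 1];         // adjacent differences, sum <= 10^8

        qsort(d, N, sizeof d[0], cmp_u32);  // group equal deltas into runs

        double bits = 0.0;                  // Shannon entropy = ideal average bits/element
        for (int i = 0; i < N; ) {
            int j = i;
            while (j < N && d[j] == d[i]) j++;
            double p = (double)(j - i) / N;
            bits -= p * log2(p);
            i = j;
        }
        printf("ideal average bits per delta: %.3f\n", bits);
        return 0;                           // compile with -lm for log2()
    }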

  • 2020-12-22 14:51

    While receiving the stream, do these steps.

    First, set some reasonable chunk size.

    Pseudo-code idea:

    1. Find all the duplicates, stick each one in a dictionary with its count, and remove the repeats from the data (a sketch of this step is given below).
    2. Place numbers that occur in an arithmetic progression (n, n+d, n+2d, ...) into special dictionaries that store only the first number and the step.
    3. Compress the remaining, less regular numbers in chunks covering some reasonable range, say every 1000 or every 10000 values.
    4. If a new number falls into a compressed range, uncompress that range, add the number, and leave it uncompressed for a while longer.
    5. Otherwise just append the number to a byte[chunkSize] buffer.

    Continue the first four steps while receiving the stream. The final step is either to fail because you ran out of memory, or, once all the data has been collected, to start outputting the result: sort the ranges, uncompress the ones that need it as you reach them, and emit the numbers in order.
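
    A minimal sketch of the duplicate-counting step only, under my own assumptions (a fixed-size open-addressing table, no handling of a full table): repeated values cost one counter instead of one slot each.

    #include <stdint.h>
    #include <stdio.h>

    #define TABLE_SIZE 4096   // assumed per-chunk table size, power of two

    struct entry { uint32_t value; uint32_t count; };
    static struct entry table[TABLE_SIZE];

    // Record one 8-digit number and return how many times it has been seen
    static uint32_t count_value(uint32_t v) {
        uint32_t h = (v * 2654435761u) & (TABLE_SIZE - 1);   // multiplicative hash
        while (table[h].count != 0 && table[h].value != v)
            h = (h + 1) & (TABLE_SIZE - 1);                  // linear probing
        table[h].value = v;
        return ++table[h].count;
    }

    int main(void) {
        uint32_t sample[] = { 12345678, 12345678, 99999999, 12345678 };
        for (unsigned i = 0; i < sizeof sample / sizeof sample[0]; i++)
            printf("%u seen %u times\n", sample[i], count_value(sample[i]));
        return 0;
    }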

  • 2020-12-22 14:52

    Google's (bad) approach, from an HN thread: store RLE-style counts.

    Your initial data structure is '99999999:0' (all zeros, haven't seen any numbers yet). Then, let's say you see the number 3,866,344, so your data structure becomes '3866343:0,1:1,96133654:0'. As you can see, the numbers always alternate between the count of zero bits and the count of one bits, so you can just assume the odd positions represent runs of 0 bits and the even positions runs of 1 bits. This becomes (3866343, 1, 96133654).

    Their problem doesn't seem to cover duplicates, but let's say they use "0:1" for duplicates.

    Big problem #1: insertions for 1M integers would take ages.

    Big problem #2: like all plain delta-encoding solutions, some distributions can't be covered this way. For example, 1M integers with a distance of 99 between consecutive values (i.e. +99 each). Now think of the same thing but with random distances in the range 0..99. (Note: 99999999/1000000 = 99.99, so the average gap is about 100.)

    Google's approach is both unworthy (slow) and incorrect. But in their defense, their problem might have been slightly different.
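
    For concreteness, here is a small sketch of my own reading of that representation (assuming a 0-based value range, so the counts come out one off from the quoted example): it turns a sorted, duplicate-free list into the alternating runs of absent and present values.

    #include <stdio.h>

    #define RANGE 100000000u   // 8-digit decimal values assumed to lie in 0 .. 10^8 - 1

    int main(void) {
        unsigned values[] = { 3866344 };   // the single number from the example above
        unsigned n = sizeof values / sizeof values[0];
        unsigned pos = 0;                  // next value not yet covered by a run

        for (unsigned i = 0; i < n; ) {
            printf("%u zeros, ", values[i] - pos);   // run of absent values
            unsigned j = i;
            while (j + 1 < n && values[j + 1] == values[j] + 1)
                j++;                                 // run of consecutive present values
            printf("%u ones, ", j - i + 1);
            pos = values[j] + 1;
            i = j + 1;
        }
        printf("%u zeros\n", RANGE - pos);           // trailing run of absent values
        return 0;
    }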

  • 2020-12-22 14:53

    What kind of computer are you using? It may not have any other "normal" local storage, but does it have video RAM, for example? 1 megapixel x 32 bits per pixel (say) is pretty close to your required data input size.

    (I largely ask in memory of the old Acorn RISC PC, which could 'borrow' VRAM to expand the available system RAM if you chose a low-resolution or low colour-depth screen mode. That was rather useful on a machine with only a few MB of normal RAM!)

  • 2020-12-22 14:53

    If the input stream could be received a few times, this would be much easier (the question gives no information about that, and re-reading would also be a time/performance problem). Then we could simply count the values: with the counts it is easy to produce the sorted output stream, and counting is itself a form of compression. How well it works depends on what is in the input stream.
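
    If replaying really were allowed, here is a minimal sketch of the counting idea (my own, not the answerer's): each pass counts only one window of the value range so the counters fit in about 1 MB, at the cost of one replay per window (100 passes here), and the 8-bit counters silently cap at 255 duplicates. The toy sample/next_number() pair is a hypothetical stand-in for re-reading the TCP stream.

    #include <stdint.h>
    #include <stdio.h>

    #define WINDOW 1000000u     // values counted per pass
    #define RANGE  100000000u   // 8-digit decimal numbers: 0 .. 10^8 - 1

    static uint8_t counts[WINDOW];   // ~1 MB of counters

    // Toy stand-in for the replayable stream: serves a fixed sample, rewound per pass
    static const uint32_t sample[] = { 7, 99999999, 7, 1234567 };
    static unsigned cursor;

    static int next_number(uint32_t *out) {
        if (cursor == sizeof sample / sizeof sample[0]) return 0;
        *out = sample[cursor++];
        return 1;
    }

    int main(void) {
        for (uint32_t base = 0; base < RANGE; base += WINDOW) {
            for (uint32_t i = 0; i < WINDOW; i++)
                counts[i] = 0;

            cursor = 0;                               // "replay" the stream
            uint32_t v;
            while (next_number(&v))                   // one full pass over the input
                if (v >= base && v < base + WINDOW)
                    counts[v - base]++;               // 8-bit counter: caps at 255 duplicates

            for (uint32_t i = 0; i < WINDOW; i++)     // emit this window in sorted order
                for (uint32_t c = 0; c < counts[i]; c++)
                    printf("%u\n", base + i);
        }
        return 0;
    }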

  • 2020-12-22 14:55

    Gilmanov's answer is very wrong in its assumptions. It starts speculating based on a pointless measurement: a million consecutive integers. That means no gaps. Those random gaps, however small, really make it a poor idea.

    Try it yourself. Get 1 million random 27-bit integers, sort them, and compress them with 7-Zip, xz, whatever LZMA variant you want. The result is over 1.5 MB. The premise of that answer is the compression of sequential numbers. Even delta encoding of the data is over 1.1 MB. And never mind that this uses over 100 MB of RAM for the compression itself. So even the compressed integers don't fit the problem, to say nothing of the run-time RAM usage.

    It saddens me how people just upvote pretty graphics and rationalization.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    
    int32_t ints[1000000]; // Random 27-bit integers
    
    int cmpi32(const void *a, const void *b) {
        return ( *(int32_t *)a - *(int32_t *)b ); // No overflow: values fit in 27 bits
    }
    
    int main() {
        int32_t *pi = ints; // Pointer to input ints (REPLACE W/ read from net)
    
        // Fill pseudo-random integers of 27 bits (assumes RAND_MAX >= 2^27 - 1)
        srand(time(NULL));
        for (int i = 0; i < 1000000; i++)
            ints[i] = rand() & ((1<<27) - 1); // Random bits masked to 27 bits
    
        qsort(ints, 1000000, sizeof (ints[0]), cmpi32); // Sort 1000000 int32s
    
        // Now delta encode, optional, store differences to previous int
        for (int i = 1, prev = ints[0]; i < 1000000; i++) {
            ints[i] -= prev;
            prev    += ints[i];
        }
    
        FILE *f = fopen("ints.bin", "wb"); // Binary mode for portability
        fwrite(ints, 4, 1000000, f);
        fclose(f);
        return 0;
    }
    

    Now compress ints.bin with LZMA...

    $ xz -f --keep ints.bin       # 100 MB RAM
    $ 7z a ints.bin.7z ints.bin   # 130 MB RAM
    $ ls -lh ints.bin*
        3.8M ints.bin
        1.1M ints.bin.7z
        1.2M ints.bin.xz
    