How do I compute the approximate entropy of a bit string?

梦如初夏 2020-12-07 20:46

Is there a standard way to do this?

Googling "approximate entropy" bits uncovers multiple academic papers, but I'd like to just find a chunk of pseudocode defining it for a bit string of arbitrary length.

7 Answers
  • 2020-12-07 20:46

    Shannon's entropy equation is the standard method of calculation. Here is a simple implementation in Python, shamelessly copied from the Revelation codebase, and thus GPL licensed:

    import math


    def entropy(string):
        """Calculate the Shannon entropy of a string."""

        # probability of each distinct character in the string
        prob = [float(string.count(c)) / len(string) for c in set(string)]

        # sum the per-character contributions, converting to bits
        return -sum(p * math.log(p, 2) for p in prob)


    def entropy_ideal(length):
        """Calculate the ideal Shannon entropy of a string of the given
        length, i.e. one in which every character is distinct."""

        prob = 1.0 / length

        return -1.0 * length * prob * math.log(prob, 2)
    

    Note that this implementation assumes that your input bit stream is best represented as bytes. This may or may not be the case for your problem domain. What you really want is your bit stream converted into a string of symbols; just how you decide what those symbols are is domain-specific. If your symbols really are just ones and zeros, then convert your bit stream into a string of '1' and '0' characters. The conversion method you choose will affect the results you get, however.
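    For example, applied directly to a bit string of '0'/'1' characters (a self-contained sketch restating the function above):

```python
import math

def entropy(string):
    """Shannon entropy of a string, in bits per symbol."""
    prob = [string.count(c) / len(string) for c in set(string)]
    return -sum(p * math.log2(p) for p in prob)

# For a bit string the maximum is 1 bit per symbol:
print(entropy("01" * 8))      # 1.0 (balanced 0s and 1s)
print(entropy("00010001"))    # biased toward 0, so below 1.0
print(entropy("0000"))        # a constant string carries 0 bits per symbol
```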

  • 2020-12-07 20:46

    There is no single answer. Entropy is always relative to some model. When someone talks about a password having limited entropy, they mean "relative to the ability of an intelligent attacker to predict", and it's always an upper bound.

    Your problem is, you're trying to measure entropy in order to help you find a model, and that's impossible; what an entropy measurement can tell you is how good a model is.

    Having said that, there are some fairly generic models that you can try; they're called compression algorithms. If gzip can compress your data well, you have found at least one model that can predict it well. And gzip is, for example, mostly insensitive to simple substitution. It can handle "wkh" frequently in the text as easily as it can handle "the".
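    A rough sketch of that idea, using Python's standard zlib module (which implements DEFLATE, the same algorithm gzip uses):

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size over original size: lower means the model
    (DEFLATE) found more structure, i.e. less apparent entropy."""
    return len(zlib.compress(data, 9)) / len(data)

print(compression_ratio(b"the " * 2500))     # highly repetitive: near 0
print(compression_ratio(os.urandom(10000)))  # incompressible: about 1 or slightly above
```

    The ratio is only an upper bound on the entropy rate relative to this one model; a different compressor may find structure that DEFLATE misses.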

  • 2020-12-07 20:50

    The NIST Random Number Generator evaluation toolkit has a way of calculating "Approximate Entropy." Here's the short description:

    Approximate Entropy Test Description: The focus of this test is the frequency of each and every overlapping m-bit pattern. The purpose of the test is to compare the frequency of overlapping blocks of two consecutive/adjacent lengths (m and m+1) against the expected result for a random sequence.

    And a more thorough explanation is available from the PDF on this page:

    http://csrc.nist.gov/groups/ST/toolkit/rng/documentation_software.html
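    A minimal sketch of that test's core statistic, following the SP 800-22 description above (overlapping blocks with wrap-around; the block length `m` is a parameter you choose, commonly 2):

```python
import math

def phi(bits, m):
    """phi^(m) from the NIST SP 800-22 Approximate Entropy test:
    log-frequencies of overlapping m-bit blocks, with wrap-around."""
    n = len(bits)
    extended = bits + bits[:m - 1]  # wrap so every position starts a block
    counts = {}
    for i in range(n):
        block = extended[i:i + m]
        counts[block] = counts.get(block, 0) + 1
    return sum((c / n) * math.log(c / n) for c in counts.values())

def approximate_entropy(bits, m=2):
    """ApEn(m) = phi^(m) - phi^(m+1); near ln(2) for random-looking bits,
    near 0 for highly regular ones."""
    return phi(bits, m) - phi(bits, m + 1)

print(approximate_entropy("01" * 50))  # perfectly periodic input: close to 0
```

    The full NIST test goes on to compute a chi-squared statistic and a p-value from ApEn(m); this sketch only covers the entropy estimate itself.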

  • 2020-12-07 20:55

    I believe the answer is the Kolmogorov complexity of the string. Not only is this not answerable with a chunk of pseudocode; Kolmogorov complexity is not even a computable function!

    One thing you can do in practice is compress the bit string with the best available data compression algorithm. The more it compresses, the lower the entropy.

  • 2020-12-07 21:00

    Use the Shannon entropy of a word, with the formula pictured here: http://imgur.com/a/DpcIH

    Here's an O(n) algorithm that calculates it:

    import math
    from collections import Counter
    
    
    def entropy(s):
        n = float(len(s))
        return -sum((count / n) * math.log2(count / n)
                    for count in Counter(s).values())
    
  • 2020-12-07 21:01

    Entropy is not a property of the string you got, but of the strings you could have obtained instead. In other words, it characterizes the process by which the string was generated.

    In the simple case, you get one string among a set of N possible strings, where each string has the same probability of being chosen as every other, i.e. 1/N. In that situation, the string is said to have an entropy of N. The entropy is often expressed in bits, which is a logarithmic scale: an entropy of "n bits" is an entropy equal to 2^n.

    For instance: I like to generate my passwords as two lowercase letters, then two digits, then two lowercase letters, and finally two digits (e.g. va85mw24). Letters and digits are chosen randomly, uniformly, and independently of each other. This process can produce 26*26*10*10*26*26*10*10 = 4569760000 distinct passwords, and all of these passwords have an equal chance of being selected. The entropy of such a password is then 4569760000, which is about 32.1 bits.
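    A quick check of that arithmetic:

```python
import math

# counts from the example above: 4 lowercase letters and 4 digits
n = 26 * 26 * 10 * 10 * 26 * 26 * 10 * 10
print(n)             # 4569760000
print(math.log2(n))  # about 32.09, i.e. roughly 32.1 bits
```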
