Expectation Maximization coin toss examples

I've been self-studying the Expectation Maximization algorithm lately, and grabbed myself some simple examples in the process:

http://cs.dartmouth.edu/~cs104/CS104_11.04.22.pdf

4 Answers
  • 2021-02-14 08:19

    I wrote the code below in Python; it works through the example given in your second example paper, by Do and Batzoglou.

    I recommend that you read this link first for a clear explanation of how and why the 'weightA' and 'weightB' in the code below are obtained.
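
    For reference, the weight the code computes for each experiment e is simply the posterior probability that coin A produced e, given the current parameter estimates and assuming the two coins are a priori equally likely: weightA = P(e | A) / ( P(e | A) + P(e | B) ), and weightB = 1 - weightA.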

    Disclaimer: the code works, but I am certain that it is not coded optimally. I am not normally a Python coder and only started using it two weeks ago.

    import numpy as np
    import math
    
    #### E-M Coin Toss Example as given in the EM tutorial paper by Do and Batzoglou* #### 
    
    def get_mn_log_likelihood(obs,probs):
        """ Return the (log)likelihood of obs, given the probs"""
        # Multinomial Distribution Log PMF
        # ln (pdf)      =             multinomial coeff            *   product of probabilities
        # ln[f(x|n, p)] = [ln(n!) - (ln(x1!)+ln(x2!)+...+ln(xk!))] + [x1*ln(p1)+x2*ln(p2)+...+xk*ln(pk)]     
    
        multinomial_coeff_denom= 0
        prod_probs = 0
        for x in range(0,len(obs)): # loop through state counts in each observation
            multinomial_coeff_denom = multinomial_coeff_denom + math.log(math.factorial(obs[x]))
            prod_probs = prod_probs + obs[x]*math.log(probs[x])
    
        multinomial_coeff = math.log(math.factorial(sum(obs))) - multinomial_coeff_denom
        likelihood = multinomial_coeff + prod_probs
        return likelihood
    
    # 1st:  Coin B, {HTTTHHTHTH}, 5H,5T
    # 2nd:  Coin A, {HHHHTHHHHH}, 9H,1T
    # 3rd:  Coin A, {HTHHHHHTHH}, 8H,2T
    # 4th:  Coin B, {HTHTTTHHTT}, 4H,6T
    # 5th:  Coin A, {THHHTHHHTH}, 7H,3T
    # so, from MLE: pA(heads) = 0.80 and pB(heads)=0.45
    
    # represent the experiments
    head_counts = np.array([5,9,8,4,7])
    tail_counts = 10-head_counts
    experiments = list(zip(head_counts,tail_counts)) # list() so it can be indexed (Python 3's zip is an iterator)
    
    # initialise the pA(heads) and pB(heads)
    pA_heads = np.zeros(100); pA_heads[0] = 0.60
    pB_heads = np.zeros(100); pB_heads[0] = 0.50
    
    # E-M begins!
    delta = 0.001  
    j = 0 # iteration counter
    improvement = float('inf')
    while (improvement>delta):
        expectation_A = np.zeros((5,2), dtype=float) 
        expectation_B = np.zeros((5,2), dtype=float)
        for i in range(0,len(experiments)):
            e = experiments[i] # i'th experiment
            ll_A = get_mn_log_likelihood(e,np.array([pA_heads[j],1-pA_heads[j]])) # loglikelihood of e given coin A
            ll_B = get_mn_log_likelihood(e,np.array([pB_heads[j],1-pB_heads[j]])) # loglikelihood of e given coin B
    
            weightA = math.exp(ll_A) / ( math.exp(ll_A) + math.exp(ll_B) ) # corresponding weight of A proportional to likelihood of A 
            weightB = math.exp(ll_B) / ( math.exp(ll_A) + math.exp(ll_B) ) # corresponding weight of B proportional to likelihood of B                            
    
            expectation_A[i] = np.dot(weightA, e) 
            expectation_B[i] = np.dot(weightB, e)
    
        # M-step: new estimates are expected heads / expected total tosses for each coin
        pA_heads[j+1] = sum(expectation_A)[0] / sum(sum(expectation_A))
        pB_heads[j+1] = sum(expectation_B)[0] / sum(sum(expectation_B))

        improvement = max( abs(np.array([pA_heads[j+1],pB_heads[j+1]]) - np.array([pA_heads[j],pB_heads[j]]) ))
        j = j+1
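
    One caveat about the weight computation: exponentiating log likelihoods can underflow once the sequences get longer than a handful of tosses. A slightly safer variant of those two lines (the usual log-sum-exp trick; same result, just rescaled before exponentiating) would be:

    ll_max = max(ll_A, ll_B)
    weightA = math.exp(ll_A - ll_max) / ( math.exp(ll_A - ll_max) + math.exp(ll_B - ll_max) )
    weightB = 1 - weightA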
    
  • The second PDF won't download for me, but I also visited the Wikipedia page http://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm which has more information. http://melodi.ee.washington.edu/people/bilmes/mypapers/em.pdf (which claims to be a gentle introduction) might be worth a look too.

    The whole point of the EM algorithm is to find parameters which maximize the likelihood of the observed data. This is the only bullet point on page 8 of the first PDF, the equation for capital Theta subscript ML.

    The EM algorithm comes in handy where there is hidden data which would make the problem easy if you knew it. In the three coins example this is the result of tossing coin 0. If you knew the outcome of that you could (of course) produce an estimate for the probability of coin 0 turning up heads. You would also know whether coin 1 or coin 2 was tossed three times in the next stage, which would allow you to make estimates for the probabilities of coin 1 and coin 2 turning up heads. These estimates would be justified by saying that they maximized the likelihood of the observed data, which would include not only the results that you are given, but also the hidden data that you are not - the results from coin 0. For a coin that gets A heads and B tails you find that the maximum likelihood for the probability of A heads is A/(A+B) - it might be worth you working this out in detail, because it is the building block for the M step.
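
    As a quick sketch of that building block: if the coin's unknown heads probability is p, the log likelihood of A heads and B tails (dropping the binomial coefficient, which does not depend on p) is

    $$\ell(p) = A\ln p + B\ln(1-p), \qquad \frac{d\ell}{dp} = \frac{A}{p} - \frac{B}{1-p} = 0 \;\Rightarrow\; \hat{p} = \frac{A}{A+B}.$$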

    In the EM algorithm you say that although you don't know the hidden data, you come in with probability estimates which allow you to write down a probability distribution for it. For each possible value of the hidden data you could find the parameter values which would optimize the log likelihood of the data including the hidden data, and this almost always turns out to mean calculating some sort of weighted average (if it doesn't the EM step may be too difficult to be practical).

    What the EM algorithm asks you to do is to find the parameters maximizing the weighted sum of log likelihoods given by all the possible hidden data values, where the weights are given by the probability of the associated hidden data given the observations using the parameters at the start of the EM step. This is what almost everybody, including the Wikipedia algorithm, calls the Q-function. The proof behind the EM algorithm, given in the Wikipedia article, says that if you change the parameters so as to increase the Q-function (which is only a means to an end), you will also have changed them so as to increase the likelihood of the observed data (which you do care about). What you tend to find in practice is that you can maximize the Q-function using a variation of what you would do if you know the hidden data, but using the probabilities of the hidden data, given the estimates at the start of the EM-step, to weight the observations in some way.
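
    Concretely, writing X for the observed data, Z for the hidden data and theta_t for the parameter estimates at the start of the step, the Q-function is

    $$Q(\theta \mid \theta_t) = \mathbb{E}_{Z \mid X, \theta_t}\left[\ln p(X, Z \mid \theta)\right],$$

    and the M step chooses the theta that maximizes it.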

    In your example it means totting up the number of heads and tails produced by each coin. In the PDF they work out P(Y=H|X=) = 0.6967. This means that you use weight 0.6967 for the case Y=H, which means that you increment the counts for Y=H by 0.6967 and increment the counts for X=H in coin 1 by 3*0.6967, and you increment the counts for Y=T by 0.3033 and increment the counts for X=H in coin 2 by 3*0.3033. If you have a detailed justification for why A/(A+B) is a maximum likelihood of coin probabilities in the standard case, you should be ready to turn it into a justification for why this weighted updating scheme maximizes the Q-function.
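
    To make that bookkeeping concrete, here is a rough sketch in Python (my own variable names; w stands in for the 0.6967 worked out in the PDF, and the block of three tosses is taken to be all heads, as in the step described above):

    w = 0.6967                       # P(Y=H | the three observed tosses), from the PDF
    heads, tails = 3, 0              # the observed block of three tosses

    counts = {'Y_H': 0.0, 'Y_T': 0.0,    # fractional counts for coin 0
              'X1_H': 0.0, 'X1_T': 0.0,  # for coin 1 (used when coin 0 lands heads)
              'X2_H': 0.0, 'X2_T': 0.0}  # for coin 2 (used when coin 0 lands tails)

    counts['Y_H'] += w
    counts['Y_T'] += 1 - w
    counts['X1_H'] += w * heads          # 3 * 0.6967, as described above
    counts['X1_T'] += w * tails
    counts['X2_H'] += (1 - w) * heads    # 3 * 0.3033
    counts['X2_T'] += (1 - w) * tails

    # The M step then applies the A/(A+B) rule to these weighted counts, e.g.
    # p1 = counts['X1_H'] / (counts['X1_H'] + counts['X1_T']).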

    Finally, the log likelihood of the observed data (the thing you are maximizing) gives you a very useful check. It should increase with every EM step, at least until you get so close to convergence that rounding error comes in, in which case you may have a very small decrease, signalling convergence. If it decreases dramatically, you have a bug in your program or your maths.
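
    For the two-coin version of the problem (the Do and Batzoglou example), a sketch of that check could look like the helper below (my own function; it assumes each coin is chosen with probability 1/2 and drops the binomial coefficient, which is the same at every iteration):

    import math

    def observed_loglik(experiments, pA, pB):
        """Log likelihood of the observed flips with the hidden coin choice summed out."""
        total = 0.0
        for heads, tails in experiments:
            like_A = 0.5 * pA**heads * (1 - pA)**tails
            like_B = 0.5 * pB**heads * (1 - pB)**tails
            total += math.log(like_A + like_B)
        return total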

  • 2021-02-14 08:24

    The key to understanding this is knowing what the auxiliary variables are that make estimation trivial. I will explain the first example quickly, the second follows a similar pattern.

    Augment each sequence of heads/tails with two binary variables, which indicate whether coin 1 or coin 2 was used. Now our data looks like the following:

    c_11 c_12 c_21 c_22 c_31 c_32 ...

    For each i, either c_i1=1 or c_i2=1, with the other being 0. If we knew the values these variables took in our sample, estimating the parameters would be trivial: p1 would be the proportion of heads in samples where c_i1=1, p2 the proportion of heads in samples where c_i2=1, and \lambda would be the mean of the c_i1s.

    However, we don't know the values of these binary variables. So, what we basically do is guess them (in reality, take their expectation), and then update the parameters in our model assuming our guesses were correct. So the E step is to take the expectation of the c_i1s and c_i2s. The M step is to take maximum likelihood estimates of p_1, p_2 and \lambda given these cs.

    Does that make a bit more sense? I can write out the updates for the E and M step if you prefer. EM then just guarantees that by following this procedure, likelihood will never decrease as iterations increase.
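
    For concreteness, a rough sketch of those updates (my own notation: each of the n samples is a block of m tosses, h[i] is the number of heads in block i, and p1, p2, lambda are as above):

    import numpy as np

    def em_step(h, m, p1, p2, lam):
        """One E step and one M step for the two-coin mixture described above (a sketch)."""
        h = np.asarray(h, dtype=float)
        # E step: expected c_i1, i.e. the posterior probability that coin 1 produced block i
        lik1 = lam * p1**h * (1 - p1)**(m - h)
        lik2 = (1 - lam) * p2**h * (1 - p2)**(m - h)
        c1 = lik1 / (lik1 + lik2)
        # M step: maximum likelihood estimates treating the expected c's as fractional counts
        p1 = np.sum(c1 * h) / (m * np.sum(c1))
        p2 = np.sum((1 - c1) * h) / (m * np.sum(1 - c1))
        lam = np.mean(c1)
        return p1, p2, lam

    Iterating em_step until the parameters stop changing is the whole algorithm, and, as noted above, the likelihood never decreases along the way.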

  • 2021-02-14 08:28

    As luck would have it, I have been struggling with this material recently as well. Here is how I have come to think of it:

    Consider a related but distinct algorithm called the classify-maximize algorithm, which we might use as a solution technique for a mixture model problem. A mixture model problem is one where we have a sequence of data that may be produced by any of N different processes, of which we know the general form (e.g., Gaussian) but we do not know the parameters of the processes (e.g., the means and/or variances) and may not even know the relative likelihoods of the processes. (Typically we do at least know the number of processes. Without that, we are into so-called "non-parametric" territory.) In a sense, the process that generates each data point is the "missing" or "hidden" data of the problem.

    Now, what this related classify-maximize algorithm does is start with some arbitrary guesses at the process parameters. Each data point is evaluated under each of those parameterized processes, and a set of probabilities is generated: the probability that the data point was generated by the first process, the second process, and so on, up to the final Nth process. Then each data point is classified according to the most likely process.

    At this point, we have our data separated into N different classes. So, for each class of data, we can, with some relatively simple calculus, optimize the parameters of that cluster with a maximum likelihood technique. (If we tried to do this on the whole data set prior to classifying, it is usually analytically intractable.)

    Then we update our parameter guesses, re-classify, update our parameters, re-classify, etc, until convergence.

    What the expectation-maximization algorithm does is similar, but more general: Instead of a hard classification of data points into class 1, class 2, ... through class N, we are now using a soft classification, where each data point belongs to each process with some probability. (Obviously, the probabilities for each point need to sum to one, so there is some normalization going on.) I think we might also think of this as each process/guess having a certain amount of "explanatory power" for each of the data points.

    So now, instead of optimizing the guesses with respect to points that absolutely belong to each class (ignoring the points that absolutely do not), we re-optimize the guesses in the context of those soft classifications, or those explanatory powers. And it so happens that, if you write the expressions in the correct way, what you're maximizing is a function that is an expectation in its form.
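
    As a toy illustration of the hard-versus-soft distinction (my own example: a 1-D mixture of two Gaussians with known unit variance and equal mixing weights, and clusters separated enough that neither class ends up empty):

    import numpy as np

    def classify_maximize(x, mu1, mu2, iters=20):
        """Hard assignment: classify each point to the nearest mean, then refit each mean."""
        for _ in range(iters):
            in_1 = np.abs(x - mu1) < np.abs(x - mu2)        # classification step
            mu1, mu2 = x[in_1].mean(), x[~in_1].mean()      # maximization step, per class
        return mu1, mu2

    def em(x, mu1, mu2, iters=20):
        """Soft assignment: every point contributes to both means, weighted by responsibility."""
        for _ in range(iters):
            w1 = np.exp(-0.5 * (x - mu1) ** 2)              # unnormalized Gaussian likelihoods
            w2 = np.exp(-0.5 * (x - mu2) ** 2)
            r = w1 / (w1 + w2)                              # "explanatory power" of process 1
            mu1 = np.sum(r * x) / np.sum(r)                 # weighted-average updates
            mu2 = np.sum((1 - r) * x) / np.sum(1 - r)
        return mu1, mu2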

    With that said, there are some caveats:

    1) This sounds easy. It is not, at least to me. The literature is littered with a hodge-podge of special tricks and techniques: using likelihood expressions instead of probability expressions, transforming to log-likelihoods, using indicator variables, putting them in basis vector form and putting them in the exponents, and so on.

    These are probably more helpful once you have the general idea, but they can also obfuscate the core ideas.

    2) Whatever constraints you have on the problem can be tricky to incorporate into the framework. In particular, if you know the probabilities of each of the processes, you're probably in good shape. If not, you're also estimating those, and the sum of the probabilities of the processes must be one; they must live on a probability simplex. It is not always obvious how to keep those constraints intact.

    3) This is a sufficiently general technique that I don't know how I would go about writing code that is general. The applications go far beyond simple clustering and extend to many situations where you are actually missing data, or where the assumption of missing data may help you. There is a fiendish ingenuity at work here, for many applications.

    4) This technique is proven to converge, but the convergence is not necessarily to the global maximum; be wary.

    I found the following link helpful in coming up with the interpretation above: Statistical learning slides

    And the following write-up works through some of the painful mathematical details: Michael Collins' write-up
