Expectation Maximization coin toss examples

死守一世寂寞 · 2021-02-14 08:00

I've been self-studying Expectation Maximization lately, and collected some simple examples along the way:

http://cs.dartmouth.edu/~cs104/CS104_11.04.22.pdf

4 Answers
  •  南方客 (OP) · 2021-02-14 08:24

    The key to understanding this is knowing what the auxiliary variables are that make estimation trivial. I will explain the first example quickly; the second follows a similar pattern.

    Augment each sequence of heads/tails with two binary variables, which indicate whether coin 1 or coin 2 was used. Now our data looks like the following:

    c_11 c_12 c_21 c_22 c_31 c_32 ...

    For each i, either c_i1=1 or c_i2=1, with the other being 0. If we knew the values these variables took in our sample, estimation of the parameters would be trivial: p_1 would be the proportion of heads in the samples where c_i1=1, p_2 likewise in the samples where c_i2=1, and \lambda would be the mean of the c_i1s.
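
    Concretely, writing h_i and t_i for the number of heads and tails in sequence i (my notation, not the slides'), the complete-data estimates would be

        p_1 = \frac{\sum_i c_{i1} h_i}{\sum_i c_{i1} (h_i + t_i)}, \qquad \lambda = \frac{1}{n} \sum_i c_{i1}

    with p_2 defined the same way using the c_i2s.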

    However, we don't know the values of these binary variables. So what we basically do is guess them (in reality, take their expectation), and then update the parameters in our model assuming our guesses were correct. So the E step is to take the expectations of the c_i1s and c_i2s. The M step is to take maximum likelihood estimates of p_1, p_2 and \lambda given these expected c's.
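
    For the E step, the expectation of c_i1 is just the posterior probability that sequence i came from coin 1. Under this two-coin mixture it would be (again with my h_i, t_i notation)

        E[c_{i1}] = \frac{\lambda\, p_1^{h_i} (1-p_1)^{t_i}}{\lambda\, p_1^{h_i} (1-p_1)^{t_i} + (1-\lambda)\, p_2^{h_i} (1-p_2)^{t_i}}

    and E[c_{i2}] = 1 - E[c_{i1}].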

    Does that make a bit more sense? The formulas above are one way to write out the E and M updates. EM then just guarantees that by following this procedure, the likelihood will never decrease as the iterations proceed.
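
    If it helps, here is a minimal sketch of the whole loop in Python (the data and the starting values are made up for illustration):

        import numpy as np

        # Made-up data: each row is (heads, tails) for one observed sequence of tosses.
        data = np.array([[8, 2], [5, 5], [9, 1], [4, 6], [7, 3]], dtype=float)

        def em(data, p1=0.6, p2=0.4, lam=0.5, iters=100):
            heads, tails = data[:, 0], data[:, 1]
            for _ in range(iters):
                # E step: w[i] = E[c_i1], the posterior probability that
                # coin 1 produced sequence i under the current parameters.
                l1 = lam * p1**heads * (1.0 - p1)**tails
                l2 = (1.0 - lam) * p2**heads * (1.0 - p2)**tails
                w = l1 / (l1 + l2)
                # M step: maximum likelihood estimates, treating the
                # expected c's as fractional assignments to each coin.
                p1 = (w * heads).sum() / (w * (heads + tails)).sum()
                p2 = ((1.0 - w) * heads).sum() / ((1.0 - w) * (heads + tails)).sum()
                lam = w.mean()
            return p1, p2, lam

        print(em(data))  # fitted (p_1, p_2, \lambda)

    Note that p1 and p2 have to start at different values; if they start equal, the two coins are symmetric and the updates can never separate them.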
