Directed probability graph - algorithm to reduce cycles?

抹茶落季 2021-02-07 21:28

Consider a directed graph which is traversed from first node 1 to some final nodes (which have no more outgoing edges). Each edge in the graph has a probability associated with it.

4 Answers
  • 2021-02-07 21:55

    I'm not an expert in the area of Markov chains, and although I think it's likely that algorithms are known for the kind of problem you present, I'm having difficulty finding them.

    If no help comes from that direction, then you can consider rolling your own. I see at least two different approaches here:

    1. Simulation.

    Examine how the state of the system evolves over time by starting with the system in state 1 at 100% probability, and performing many iterations in which you apply your transition probabilities to compute the probabilities of the state obtained after taking a step. If at least one final ("absorbing") node can be reached (at non-zero probability) from every node, then over enough steps, the probability that the system is in anything other than a final state will decrease asymptotically toward zero. You can estimate the probability that the system ends in final state S as the probability that it is in state S after n steps, with an upper bound on the error in that estimate given by the probability that the system is in a non-final state after n steps.

    As a practical matter, this is the same as computing Tr^n, where Tr is your transition probability matrix, augmented with self-edges at 100% probability for all the final states.
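
    Here is a minimal sketch of that simulation (my own illustration, not code from the original answer); the 3-node transition matrix is a made-up example in which node 2 is the only final node. It iterates the distribution vector, treating final states as absorbing, and reports the remaining non-final probability mass as the error bound described above.

    #include <cstddef>
    #include <iostream>
    #include <vector>
    
    int main() {
        // Hypothetical 3-node example; tr[i][j] is the probability of
        // stepping from node i to node j.  Node 2 is final, so it gets a
        // 100% self-edge, making it absorbing.
        std::vector<std::vector<double>> tr = {
            {0.0, 0.5, 0.5},
            {0.5, 0.0, 0.5},
            {0.0, 0.0, 1.0}
        };
        std::vector<bool> isFinal = {false, false, true};
    
        std::vector<double> b = {1.0, 0.0, 0.0};   // start in node 0 at 100%
        const int steps = 100;                     // i.e. compute b applied to Tr^n
    
        for (int s = 0; s < steps; ++s) {
            std::vector<double> next(b.size(), 0.0);
            for (std::size_t i = 0; i < b.size(); ++i)
                for (std::size_t j = 0; j < b.size(); ++j)
                    next[j] += b[i] * tr[i][j];    // one application of Tr
            b = next;
        }
    
        double nonFinalMass = 0.0;
        for (std::size_t i = 0; i < b.size(); ++i) {
            if (isFinal[i])
                std::cout << "P(end in node " << i << ") ~= " << b[i] << '\n';
            else
                nonFinalMass += b[i];
        }
        // Upper bound on the error of the estimates printed above.
        std::cout << "non-final mass after " << steps << " steps: "
                  << nonFinalMass << '\n';
        return 0;
    }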

    2. Exact computation.

    Consider a graph, G, such as you describe. Given two vertices i and f, such that there is at least one path from i to f, and f has no outgoing edges other than self-edges, we can partition the paths from i to f into classes characterized by the number of times they revisit i prior to reaching f. There may be an infinite number of such classes, which I will designate C_if(n), where n represents the number of times the paths in C_if(n) revisit node i. In particular, C_ii(0) contains all the simple loops in G that contain i (clarification: as well as other paths).

    The total probability of ending at node f given that the system traverses graph G starting at node i is given by

    Pr(f|i,G) = Pr(C_if(0)|G) + Pr(C_if(1)|G) + Pr(C_if(2)|G) + ...

    Now observe that if n > 0 then each path in C_if(n) has the form of a union of two paths c and t, where c belongs to C_ii(n-1) and t belongs to C_if(0). That is, c is a path that starts at node i and ends at node i, passing through i n-1 times in between, and t is a path from i to f that does not pass through i again. We can use that to rewrite our probability formula:

    Pr(f|i,G) = Pr(C_if(0)|G) + Pr(C_ii(0)|G) * Pr(C_if(0)|G) + Pr(C_ii(1)|G) * Pr(C_if(0)|G) + ...

    But note that every path in C_ii(n) is a composition of n+1 paths belonging to C_ii(0). It follows that Pr(C_ii(n)|G) = Pr(C_ii(0)|G)^(n+1), so we get

    Pr(f|i,G) = Pr(C_if(0)|G) + Pr(C_ii(0)|G) * Pr(C_if(0)|G) + Pr(C_ii(0)|G)^2 * Pr(C_if(0)|G) + ...

    And now, a little algebra gives us

    Pr(f|i,G) - Pr(C_if(0)|G) = Pr(C_ii(0)|G) * Pr(f|i,G)

    which we can solve for Pr(f|i,G) to get

    Pr(f|i,G) = Pr(C_if(0)|G) / (1 - Pr(C_ii(0)|G))
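
    (As a cross-check, not part of the original answer: the same expression falls out of summing the geometric series directly, provided Pr(C_ii(0)|G) < 1.)

    Pr(f|i,G) = Pr(C_if(0)|G) * (1 + Pr(C_ii(0)|G) + Pr(C_ii(0)|G)^2 + ...)
              = Pr(C_if(0)|G) / (1 - Pr(C_ii(0)|G))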

    We've thus reduced the problem to one in terms of paths that do not return to the starting node, except possibly as their end node. These do not preclude paths that have loops that don't include the starting node, but we can nevertheless rewrite this problem in terms of several instances of the original problem, computed on a subgraph of the original graph.

    In particular, let S(i, G) be the set of successors of vertex i in graph G -- that is, the set of vertices s such that there is an edge from i to s in G, and let X(G, i) be the subgraph of G formed by removing all edges that start at i. Furthermore, let p_is be the probability associated with edge (i, s) in G.

    Pr(C_if(0)|G) = Sum over s in S(i, G) of p_is * Pr(f|s, X(G, i))

    In other words, the probability of reaching f from i through G without revisiting i in between is the sum over all successors of i of the product of the probability of reaching s from i in one step with the probability of reaching f from s through G without traversing any edges outbound from i. That applies for all f in G, including i.

    Now observe that S(i, G) and all the pis are knowns, and that the problem of computing Pr(f|s,X(G,i)) is a new, strictly smaller instance of the original problem. Thus, this computation can be performed recursively, and such a recursion is guaranteed to terminate. It may nevertheless take a long time if your graph is complex, and it looks like a naive implementation of this recursive approach would scale exponentially in the number of nodes. There are ways you could speed the computation in exchange for higher memory usage (i.e. memoization).
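
    As an illustration only (this is my own sketch, not code from the answer), the recursion can be written directly against an n x n probability matrix, with a "blocked" set standing in for X(G, i); memoization on (vertex, blocked set) is omitted for brevity, and the division assumes Pr(C_ii(0)) < 1 wherever it is performed.

    #include <iostream>
    #include <set>
    #include <vector>
    
    using Matrix = std::vector<std::vector<double>>;
    
    double prAbsorb(const Matrix& p, int i, int f, std::set<int> blocked);
    
    // Pr(C_if(0) | X): reach f from i without revisiting i, where X is the
    // graph with all outgoing edges of the "blocked" vertices removed.
    double prFirstPassage(const Matrix& p, int i, int f, std::set<int> blocked) {
        blocked.insert(i);                     // form X(G, i): cut edges leaving i
        double total = 0.0;
        for (int s = 0; s < (int)p.size(); ++s) {
            if (p[i][s] == 0.0) continue;      // no edge i -> s
            total += p[i][s] * prAbsorb(p, s, f, blocked);
        }
        return total;
    }
    
    // Pr(f | i, X) = Pr(C_if(0) | X) / (1 - Pr(C_ii(0) | X))
    double prAbsorb(const Matrix& p, int i, int f, std::set<int> blocked) {
        if (i == f) return 1.0;                // already at the target
        if (blocked.count(i)) return 0.0;      // i's outgoing edges were removed
        double toF     = prFirstPassage(p, i, f, blocked);
        double backToI = prFirstPassage(p, i, i, blocked);
        return toF / (1.0 - backToI);          // assumes Pr(C_ii(0)) < 1
    }
    
    int main() {
        // Hypothetical 3-node example; node 2 is the only final node.
        Matrix p = {{0.0, 0.5, 0.5},
                    {0.5, 0.0, 0.5},
                    {0.0, 0.0, 0.0}};
        std::cout << prAbsorb(p, 0, 2, {}) << '\n';   // prints 1
        return 0;
    }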


    There are likely other possibilities as well. For example, I'm suspicious that there may be a bottom-up dynamic programming approach to a solution, but I haven't been able to convince myself that loops in the graph don't present an insurmountable problem there.

  • 2021-02-07 22:00

    Problem Clarification

    The input data is a set of m rows of n columns of probabilities, essentially an m by n matrix, where m = n = number of vertices in the directed graph. Rows are edge origins and columns are edge destinations. We will assume, on the basis of the mention of cycles in the question, that the graph is cyclic, i.e. that at least one cycle exists in it.

    Let's define the starting vertex as s. Let's also define a terminal vertex as a vertex for which there are no exiting edges, and the set of them as set T with size z. Therefore we have z sets of routes from s to a vertex in T, and the set sizes may be infinite due to cycles [1]. In such a scenario, one cannot conclude that a terminal vertex will be reached in an arbitrarily large number of steps.

    In the input data, probabilities for rows that correspond with vertices not in T are normalized to sum to 1.0. We shall assume the Markov property, that the probabilities at each vertex do not vary with time. This precludes the use of probability to prioritize routes in a graph search [2].

    Finite math texts sometimes name example problems similar to this question Drunken Random Walks to underscore the fact that the walker forgets the past, referring to the memory-free nature of Markov chains.

    Applying Probability to Routes

    The probability of arriving at a terminal vertex can be expressed as an infinite series sum of products.

    P_t = lim_{s → ∞} Σ ∏ P_{i,j},

    where s is the step index, t is a terminal vertex index, i ∈ [1 .. m], and j ∈ [1 .. n].

    Reduction

    When two or more cycles intersect (sharing one or more vertices), analysis is complicated by an infinite set of patterns involving them. It appears, after some analysis and review of relevant academic work, that arriving at an accurate set of terminal vertex arrival probabilities with today's mathematical tools may best be accomplished with a converging algorithm.

    A few initial reductions are possible.

    1. The first consideration is to enumerate the terminal vertices, which is easy since their corresponding rows contain only probabilities of zero.

    2. The next consideration is to differentiate any further reductions from what the academic literature calls irreducible sub-graphs. The depth-first algorithm below remembers which vertices have already been visited while constructing a potential route, so it can easily be retrofitted to identify which vertices are involved in cycles. However, it is recommended to use existing, well tested, peer reviewed graph libraries to identify and characterize sub-graphs as irreducible.

    Mathematical reduction of irreducible portions of the graph may or may not be plausible. Consider starting vertex A and sole terminating vertex B in the graph represented as {A->C, C->A, A->D, D->A, C->D, D->C, C->B, D->B}.

    Although one can reduce the graph to probability relations absent of cycles through vertex A, vertex A cannot be removed for further reduction without either modifying the probabilities of edges exiting C and D or allowing the totals of the probabilities of edges exiting C and D to be less than 1.0.

    Convergent Breadth First Traversal

    A breadth-first traversal that ignores revisiting and allows cycles can iterate step index s, not to some fixed s_max but to some sufficiently stable and accurate point in a convergent trend. This approach is especially called for if cycles overlap, creating bifurcations in the simpler periodicity caused by a single cycle.

    Σ_s P_s Δs.

    To establish a reasonable convergence as s increases, one must determine the desired accuracy as a criterion for completing the convergence algorithm, and a metric for measuring accuracy by looking at longer term trends in the results at all terminal vertices. It may be important to require that the sum of terminal vertex probabilities be close to unity in conjunction with the trend convergence metric, as both a sanity check and an accuracy criterion. Practically, four convergence criteria may be necessary [3].

    1. Per terminal vertex probability trend convergence delta
    2. Average probability trend convergence delta
    3. Convergence of total probability on unity
    4. Total number of steps (to cap depth for practical computing reasons)

    Even beyond these four, the program may need to contain a trap for an interrupt that permits writing and examining the output after a long wait, in case not all four of the above criteria have been satisfied.
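
    As a rough illustration (my own sketch; the thresholds are placeholders and criterion 2 is read here as an average of the per-terminal deltas), a stopping test over the four criteria could look like this, where probHistory[s][t] is the accumulated probability of terminal vertex t after step s:

    #include <cmath>
    #include <cstddef>
    #include <vector>
    
    bool converged(const std::vector<std::vector<double>>& probHistory,
                   double deltaMax   = 1e-9,    // criterion 1: per-terminal delta
                   double deltaAve   = 1e-10,   // criterion 2: average delta
                   double unityMax   = 1e-6,    // criterion 3: deviation from unity
                   std::size_t sMax  = 100000)  // criterion 4: step cap
    {
        std::size_t s = probHistory.size();
        if (s >= sMax) return true;              // criterion 4: stop on depth cap
        if (s < 2)     return false;             // need at least two steps of trend
    
        const std::vector<double>& cur  = probHistory[s - 1];
        const std::vector<double>& prev = probHistory[s - 2];
    
        double sum = 0.0, totalDelta = 0.0;
        for (std::size_t t = 0; t < cur.size(); ++t) {
            double delta = std::fabs(cur[t] - prev[t]);
            if (delta > deltaMax) return false;  // criterion 1 fails
            totalDelta += delta;
            sum        += cur[t];
        }
        if (totalDelta / cur.size() > deltaAve)  // criterion 2 fails
            return false;
        return std::fabs(sum - 1.0) <= unityMax; // criterion 3: near unity
    }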

    An Example Cycle Resistant Depth First Algorithm

    There are more efficient algorithms than the following one, but it is fairly comprehensible, it compiles without warnings with c++ -Wall, and it produces the desired output for all finite and legitimate directed graphs and start and destination vertices possible [4]. It is easy to load a matrix in the form given in the question using the addEdge method [5].

    #include <iostream>
    #include <list>
    
    class DirectedGraph {
    
        private:
            int miNodes;
            std::list<int> * mnpEdges;
            bool * mpVisitedFlags;
    
        private:
            void initAlreadyVisited() {
                for (int i = 0; i < miNodes; ++ i)
                    mpVisitedFlags[i] = false;
            }
    
            // Depth-first search that records every route from iCurrent to
            // iDestination, never revisiting a vertex already on the current
            // route; this is what makes the search cycle resistant.
            void recurse(int iCurrent, int iDestination,
                   int route[], int index,
                   std::list<std::list<int> *> * pnai) {
    
                mpVisitedFlags[iCurrent] = true;
                route[index ++] = iCurrent;
    
                if (iCurrent == iDestination) {
                    auto pni = new std::list<int>;
                    for (int i = 0; i < index; ++ i)
                        pni->push_back(route[i]);
                    pnai->push_back(pni);
    
                } else {
                    auto it = mnpEdges[iCurrent].begin();
                    auto itBeyond = mnpEdges[iCurrent].end();
                    while (it != itBeyond) {
                        if (! mpVisitedFlags[* it])
                            recurse(* it, iDestination,
                                    route, index, pnai);
                        ++ it;
                    }
                }
    
                -- index;
                mpVisitedFlags[iCurrent] = false;
            } 
    
        public:
            DirectedGraph(int iNodes) {
                miNodes = iNodes;
                mnpEdges = new std::list<int>[iNodes];
                mpVisitedFlags = new bool[iNodes];
            }
    
            ~DirectedGraph() {
                // Use the array forms of delete to match the array news,
                // and release the edge lists as well as the visited flags.
                delete [] mnpEdges;
                delete [] mpVisitedFlags;
            }
    
            void addEdge(int u, int v) {
                mnpEdges[u].push_back(v);
            }
    
            // Returns every distinct route from iStart to iDestination as a
            // list of vertex-index lists; the caller owns the returned lists.
            std::list<std::list<int> *> * findRoutes(int iStart,
                    int iDestination) {
                initAlreadyVisited();
                auto route = new int[miNodes];
                auto pnpi = new std::list<std::list<int> *>();
                recurse(iStart, iDestination, route, 0, pnpi);
                delete [] route;
                return pnpi;
            }
    };
    
    int main() {
    
        DirectedGraph dg(5);
    
        dg.addEdge(0, 1);
        dg.addEdge(0, 2);
        dg.addEdge(0, 3);
        dg.addEdge(1, 3);
        dg.addEdge(1, 4);
        dg.addEdge(2, 0);
        dg.addEdge(2, 1);
        dg.addEdge(4, 1);
        dg.addEdge(4, 3);
    
        int startingNode = 2;
        int destinationNode = 3;
    
        auto pnai = dg.findRoutes(startingNode, destinationNode);
    
        std::cout
                << "Unique routes from "
                << startingNode
                << " to "
                << destinationNode
                << std::endl
                << std::endl;
    
        bool bFirst;
        std::list<int> * pi;
        auto it = pnai->begin();
        auto itBeyond = pnai->end();
        std::list<int>::iterator itInner;
        std::list<int>::iterator itInnerBeyond;
        while (it != itBeyond) {
            bFirst = true;
            pi = * it ++;
            itInner = pi->begin();
            itInnerBeyond = pi->end();
            while (itInner != itInnerBeyond) {
                if (bFirst)
                    bFirst = false;
                else
                    std::cout << ' ';
                std::cout << (* itInner ++);
            }
            std::cout << std::endl;
            delete pi;
        }
    
        delete pnai;
    
        return 0;
    }
    

    Notes

    [1] Improperly handled cycles in a directed graph algorithm will cause it to hang in an infinite loop. (Note the trivial case where the number of routes from A to B for the directed graph represented as {A->B, B->A} is infinite.)

    [2] Probabilities are sometimes used to reduce the CPU cycle cost of a search. In that strategy, probabilities are input values for meta rules in a priority queue, used to reduce the computational challenge of very tedious searches (even for a computer). The early literature on production systems termed the exponential character of unguided large searches the combinatorial explosion.

    [3] It may be practically necessary to detect the breadth-first probability trend at each terminal vertex and specify satisfactory convergence in terms of four criteria:

    1. Δ(Σ∏P)_t ≤ Δ_max ∀ t
    2. Σ_{t=0..T} Δ(Σ∏P)_t / T ≤ Δ_ave
    3. |Σ Σ∏P − 1| ≤ u_max, where u_max is the maximum allowable deviation from unity for the sum of final probabilities
    4. s < s_max

    [4] Provided there are enough computing resources available to support the data structures and ample time to arrive at an answer for the given computing system speed.

    [5] You can load DirectedGraph dg(7) with the input data using two nested loops that iterate through the rows and columns enumerated in the question. The body of the inner loop would simply be a conditional edge addition.

    if (prob != 0) dg.addEdge(i, j);
    

    Variable prob is P_{m,n}. Route existence is only concerned with zero/nonzero status.
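
    For concreteness, a short sketch of those nested loops (my own illustration; it assumes the matrix arrives as a vector of rows of doubles and reuses the DirectedGraph class above):

    #include <cstddef>
    #include <vector>
    
    // Build the graph from a probability matrix, where prob[i][j] is the
    // probability attached to edge i -> j (0.0 meaning no edge).
    void loadGraph(DirectedGraph& dg, const std::vector<std::vector<double>>& prob) {
        for (std::size_t i = 0; i < prob.size(); ++i)          // rows: edge origins
            for (std::size_t j = 0; j < prob[i].size(); ++j)   // columns: edge destinations
                if (prob[i][j] != 0.0)                         // only existence matters here
                    dg.addEdge((int)i, (int)j);
    }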

  • 2021-02-07 22:04

    I understand this as the following problem:

    We are given an initial distribution over the nodes as a vector b, and a matrix A that stores the probability of jumping from node i to node j in each time step, somewhat resembling an adjacency matrix.

    Then the distribution b_1 after one time step is A x b. The distribution b_2 after two time steps is A x b_1. Likewise, the distribution b_n is A^n x b.

    For an approximation of b_infinite, we can do the following:

    Vector final_probability(Matrix A, Vector b,
        Function Vector x Vector -> Scalar distance, Scalar threshold){
        b_old = b
        b_current = A x b
        while(distance(b_old,b_current) > threshold){
            b_old = b_current
            b_current = A x b_current
        }
        return b_current
    }
    

    (I used mathematical variable names for convenience.)

    In other words, we assume that the sequence of distributions converges nicely to within the given threshold. That might not hold true, but it will usually work.

    You might want to add a maximal amount of iterations to that.

    Euclidean distance should work well as distance.

    (This uses the concept of a Markov chain but is more of a pragmatic solution.)

  • 2021-02-07 22:20

    I found this question while researching directed cyclic graphs. The probability of reaching each of the final nodes can be calculated using absorbing Markov chains.

    The video Markov Chains - Part 7 (+ parts 8 and 9) explains absorbing states in Markov chains and the math behind it.
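
    For reference (this is the standard absorbing-chain machinery the videos cover, summarized here rather than quoted from them): reorder the states so that the transient (non-final) nodes come first and the absorbing (final) nodes last, and write the transition matrix in canonical form

    P = [ Q  R ]
        [ 0  I ]

    where Q holds transient-to-transient probabilities and R holds transient-to-absorbing probabilities. The matrix of absorption probabilities is then

    B = (I - Q)^(-1) * R

    where entry B(i, f) is the probability of eventually ending in final node f when starting from node i.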
