What are some strategies for testing large state machines?

刺人心 2021-02-01 05:51

I inherited a large and fairly complex state machine. It has 31 possible states, all are really needed (big business process). It has the following inputs:

  • Enum: Current State (so 0 -> 30)
  • Enum: source (currently only 2 entries)
  • Boolean: Request
  • Boolean: type
  • Enum: Status (3 states)
  • Enum: Handling (3 states)
  • Boolean: Completed
9 Answers
  • 2021-02-01 06:21

    I can't think of any easy way to test an FSM like this without getting really pedantic and employing proofs, machine learning techniques, or brute force.

    Brute force: Write something that will generate all 4464 test cases (31 × 2 × 2 × 2 × 3 × 3 × 2 input combinations) in some declarative manner, with mostly incorrect (placeholder) expected results. I would recommend putting this in a CSV file and then using something like NUnit's parameterized testing to load all the test cases. Most of these cases will fail at first, so you will have to correct the expected results in the declarative file by hand; take a random sample of the cases and fix just those to start with.
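
    Below is a minimal sketch of that generation step, added for illustration only (it is not from the original answer). It is written in Python rather than C#; the column layout, file name, and the "TODO" expected-result placeholder are assumptions.

    # Illustrative sketch: enumerate every input combination from the question
    # and dump it to a CSV that a parameterized test runner (e.g. NUnit's
    # TestCaseSource on the C# side) could load.
    import csv
    import itertools

    current_states = range(31)                       # Current State: 0 -> 30
    sources        = [1, 2]                          # source
    requests       = [True, False]                   # Request
    types          = [True, False]                   # type
    statuses       = ["State1", "State2", "State3"]  # Status
    handlings      = ["State1", "State2", "State3"]  # Handling
    completed      = [True, False]                   # Completed

    with open("fsm_test_cases.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["CurrentState", "Source", "Request", "Type",
                         "Status", "Handling", "Completed", "ExpectedState"])
        for row in itertools.product(current_states, sources, requests, types,
                                     statuses, handlings, completed):
            # The expected result starts as a placeholder ("mostly incorrect
            # data") and is corrected by hand for the sampled cases.
            writer.writerow(list(row) + ["TODO"])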

    Machine learning technique: You could employ support vector machines or MDA algorithms/heuristics to learn the FSM's behaviour from the hand-corrected sample mentioned above. Then run the learned model against all 4464 inputs and investigate every case where it and the real FSM disagree.
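
    As a rough illustration of that disagreement check (again not from the original answer), the sketch below uses scikit-learn's SVC as the learner. verified_sample (the hand-corrected cases) and fsm_next_state (a wrapper around the real implementation) are hypothetical placeholders.

    # Illustrative sketch: train a classifier on the hand-verified sample and
    # flag every input where its prediction disagrees with the real FSM.
    import itertools
    from sklearn.svm import SVC

    STATUS = {"State1": 0, "State2": 1, "State3": 2}

    def encode(case):
        # Turn one input tuple into a purely numeric feature vector.
        state, source, request, typ, status, handling, done = case
        return [state, source, int(request), int(typ),
                STATUS[status], STATUS[handling], int(done)]

    all_cases = list(itertools.product(
        range(31), [1, 2], [True, False], [True, False],
        ["State1", "State2", "State3"], ["State1", "State2", "State3"],
        [True, False]))

    # verified_sample: list of (case, expected_next_state) pairs fixed by hand.
    X = [encode(case) for case, expected in verified_sample]
    y = [expected for case, expected in verified_sample]
    model = SVC().fit(X, y)

    for case in all_cases:
        predicted = model.predict([encode(case)])[0]
        actual = fsm_next_state(*case)   # the state machine under test
        if predicted != actual:
            print("review:", case, predicted, actual)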

  • 2021-02-01 06:23

    All-Pair Testing

    To constrain the number of combinations to test, while being reasonably assured that the most important combinations are covered, you should take a look at all-pairs testing.

    The reasoning behind all-pairs testing is this: the simplest bugs in a program are generally triggered by a single input parameter. The next simplest category of bugs consists of those dependent on interactions between pairs of parameters, which can be caught with all-pairs testing. Bugs involving interactions between three or more parameters are progressively less common, whilst at the same time being progressively more expensive to find by exhaustive testing, which has as its limit the exhaustive testing of all possible inputs.

    Also take a look at a previous answer of mine here (shameless plug) for additional information and links to both all-pairs testing and PICT as a tool.

    Example PICT model file

    The model below generates 93 test cases, covering all pairs of the input parameter values.

    #
    # This is a PICT model for testing a complex state machine at work
    #
    
    CurrentState  :0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30
    Source        :1,2
    Request       :True, False
    Type          :True, False
    Status        :State1, State2, State3
    Handling      :State1, State2, State3
    Completed     :True,False
    
    #
    # One can add constraints to the model to exclude impossible 
    # combinations if needed.
    #
    # For example:
    # IF [Completed] = "True" THEN [CurrentState] > 15;
    #
    
    #
    # This is the PICT output of "pict ComplexStateMachine.pict /s /r1"
    #
    # Combinations:    515
    # Generated tests: 93
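
    Not part of the original answer: a small Python sketch that double-checks the generated cases really do cover every pair of parameter values. It assumes PICT's tab-separated output was redirected to a file named fsm_pairwise.txt (the file name is an assumption).

    # Illustrative sketch: verify all-pairs coverage of a generated test suite.
    import csv
    import itertools

    with open("fsm_pairwise.txt", newline="") as f:
        rows = list(csv.reader(f, delimiter="\t"))
    header, cases = rows[0], rows[1:]

    # Values actually used per parameter, derived from the cases themselves.
    values = {i: sorted({case[i] for case in cases}) for i in range(len(header))}

    missing = []
    for a, b in itertools.combinations(range(len(header)), 2):
        covered = {(case[a], case[b]) for case in cases}
        for pair in itertools.product(values[a], values[b]):
            if pair not in covered:
                missing.append((header[a], header[b], pair))

    # An empty list means every value pair appears in at least one test case.
    print("uncovered pairs:", missing)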
    
  • 2021-02-01 06:24

    I see the problem, but I'd definitely try splitting the logic out.

    The big problem areas in my eyes are:

    • It has 31 possible states to be in.
    • It has the following inputs:
      • Enum: Current State (so 0 -> 30)
      • Enum: source (currently only 2 entries)
      • Boolean: Request
      • Boolean: type
      • Enum: Status (3 states)
      • Enum: Handling (3 states)
      • Boolean: Completed

    There is just far too much going on. The sheer number of inputs is making the code hard to test. You've said it would be painful to split this up into more manageable areas, but it's equally if not more painful to test this much logic in one go. In your case, each unit test covers far too much ground.

    This question I asked about testing large methods is similar in nature: I found my units were simply too big. If you split the logic out, you'll still end up with many tests, but they'll be smaller and more manageable, each covering less ground. That can only be a good thing.
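
    Purely as an illustration of the kind of split meant above (all names below are invented, not from the original code), the monolithic transition could be broken into small decision functions that are each trivial to test exhaustively:

    # Hypothetical sketch of splitting one huge transition function into
    # small, separately testable pieces.
    from dataclasses import dataclass

    @dataclass
    class Inputs:
        source: int
        request: bool
        type_: bool
        status: str      # "State1" | "State2" | "State3"
        handling: str    # "State1" | "State2" | "State3"
        completed: bool

    def is_eligible(inputs: Inputs) -> bool:
        # One narrow, made-up rule: easy to cover exhaustively on its own.
        return inputs.request and inputs.status != "State3"

    def next_after_completion(state: int, inputs: Inputs) -> int:
        # Another narrow, made-up rule, again testable in isolation.
        return state + 1 if inputs.completed else state

    def transition(state: int, inputs: Inputs) -> int:
        # The top level becomes a thin composition of the small rules, so its
        # own tests only need to cover the wiring, not every combination.
        if not is_eligible(inputs):
            return state
        return next_after_completion(state, inputs)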

    Testing Legacy Code

    Check out Pex. You say you inherited this code, so this is not actually test-driven development; you simply want unit tests to cover each aspect. That is a good thing, as any further work will be validated. I've personally not used Pex properly yet, but I was wowed by the video I saw. Essentially it generates unit tests based on the inputs, which in this case would be the finite state machine itself. It will generate test cases you would not have thought of. Granted this is not TDD, but in this scenario, testing legacy code, it should be ideal.
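
    Pex targets .NET; as a loose analogy only (this is not what the answer recommends), the sketch below shows automatic input generation with Python's Hypothesis library, with fsm_next_state again standing in as a hypothetical wrapper around the machine under test.

    # Loose analogy to Pex-style test generation using Hypothesis: the library
    # generates input combinations (including edge cases) automatically and
    # checks an invariant that should hold for every transition.
    from hypothesis import given, strategies as st

    tristate = st.sampled_from(["State1", "State2", "State3"])

    @given(st.integers(min_value=0, max_value=30), st.sampled_from([1, 2]),
           st.booleans(), st.booleans(), tristate, tristate, st.booleans())
    def test_next_state_is_always_valid(state, source, request, type_,
                                        status, handling, completed):
        nxt = fsm_next_state(state, source, request, type_,
                             status, handling, completed)
        assert 0 <= nxt <= 30   # every transition must land in a known state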

    Once that test coverage is in place, you can begin refactoring or adding new features, with the safety of good tests ensuring you don't break any existing functionality.
