How to Read / Improve the C.R.A.P. Index Calculated by PHPUnit

Asked by 粉色の甜心 on 2021-01-29 19:01

I just started working with PHPUnit and its colorful code coverage reports. I understand all the numbers and percentages save one: the C.R.A.P. index. Can anyone offer me a solid explanation?

2 Answers
  • edorian (2021-01-29 19:36)

    @Toader Mihai offered a solid explanation. (+1 from me)

    How to lower it:

    Write less complex code OR write better tested code. (See the graph below)

    Better tested code?

    In this context it just means higher code coverage, which usually results from writing more tests.
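
    For example, a minimal PHPUnit sketch (the Shipping class and all its names are hypothetical, made up just for illustration): each branch of a method needs at least one test before the method counts as fully covered:

    <?php
    use PHPUnit\Framework\TestCase;

    // Hypothetical class under test.
    class Shipping
    {
        public function cost(int $orderTotal): int
        {
            if ($orderTotal >= 50) {
                return 0;  // free shipping for large orders
            }
            return 5;      // flat fee otherwise
        }
    }

    class ShippingTest extends TestCase
    {
        public function testLargeOrdersShipForFree(): void
        {
            $this->assertSame(0, (new Shipping())->cost(60));
        }

        // Without this second test the "return 5" path stays uncovered,
        // coverage drops below 100% and the method's C.R.A.P. index rises.
        public function testSmallOrdersPayTheFlatFee(): void
        {
            $this->assertSame(5, (new Shipping())->cost(20));
        }
    }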

    Less complex code?

    For example: Refactor your methods into smaller ones:

    // Complex: one method contains every decision path
    function doSomething($a, $b, $c) {
        if ($a) {
            if ($b) {
                // ...
            }
            if ($c) {
                // ...
            }
        } else {
            if ($b) {
                // ...
            }
            if ($c) {
                // ...
            }
        }
    }

    // The same logic split into 3 less complex functions
    function doSomething($a, $b, $c) {
        if ($a) {
            doA($b, $c);
        } else {
            doNotA($b, $c);
        }
    }

    function doA($b, $c) {
        if ($b) {
            // ...
        }
        if ($c) {
            // ...
        }
    }

    function doNotA($b, $c) {
        if ($b) {
            // ...
        }
        if ($c) {
            // ...
        }
    }

    (Just a trivial example; I'm sure you'll find more resources on refactoring.)

    Additional resources:

    The creators' blog post about the C.R.A.P. index

    Just in case: cyclomatic complexity explained. Tools like PHP_CodeSniffer and PHPMD will report that number for you if you want to know it.
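
    To give a rough idea of how that number is counted (a sketch only; the exact counting rules differ slightly between tools): start at one for the method itself and add one for every decision point:

    <?php
    // Cyclomatic complexity of this function:
    //   1 (the method itself)
    // + 1 for the if
    // + 1 for the && operator
    // + 1 for the elseif
    // + 1 for the foreach
    // = 5
    function classify(array $items, bool $strict): string
    {
        if (count($items) === 0) {
            return 'empty';
        } elseif ($strict && count($items) > 100) {
            return 'too large';
        }
        foreach ($items as $item) {
            // ... per-item checks would add further decision points
        }
        return 'ok';
    }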

    And while it is up to you to decide what number is "ok", one often-suggested threshold (which is a little high, imho) is a C.R.A.P. index of 30, resulting in a graph like this:

    [Graph illustrating the C.R.A.P. index threshold of 30.] (You can get the .ods file here: https://www.dropbox.com/s/3bihb9thlp2fyg8/crap.ods?dl=1 )

  • Toader Mihai (2021-01-29 19:48)

    Basically, it aims to predict the risk involved in changing a method.

    It has two factors in it:

    • the code complexity of the method (its cyclomatic complexity), i.e. how many decision paths exist in that method: comp(m).
    • how well tested that method is (via automated tests, as reported by a code coverage tool). Basically this measures how many of the decisions in the code are actually exercised by tests.

    If the method has 100% coverage, then the risk of change is considered to be equivalent to just the complexity of the method: C.R.A.P.(m) = comp(m).

    If the method has 0% coverage, then the risk of change is considered to be a second-degree polynomial in the complexity measure (the reasoning being that if you can't test a code path, changing it increases the risk of breakage): C.R.A.P.(m) = comp(m)^2 + comp(m).
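
    Putting the two extremes together: the full formula (as published on the creators' blog linked in the other answer) interpolates between them using the coverage. A minimal PHP sketch, with cov(m) expressed here as a fraction between 0.0 and 1.0:

    <?php
    // C.R.A.P.(m) = comp(m)^2 * (1 - cov(m))^3 + comp(m)
    // comp(m): cyclomatic complexity of the method
    // cov(m):  code coverage of the method as a fraction (0.0 .. 1.0)
    function crapIndex(int $complexity, float $coverage): float
    {
        return $complexity ** 2 * (1 - $coverage) ** 3 + $complexity;
    }

    // The two special cases described above:
    // crapIndex(5, 1.0) == 5.0   -> fully covered: risk equals the complexity
    // crapIndex(5, 0.0) == 30.0  -> uncovered: comp(m)^2 + comp(m)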

    Hopefully this will help you.

    I just noticed that I only provided half the answer (the "read" part). How to improve it should be pretty clear if you understand the reasoning behind the index, but a much clearer explanation is given in @edorian's answer.

    The short story is: write tests until you have near 100% coverage, and after that refactor the methods to decrease the cyclomatic complexity. You can try to refactor before having tests, but depending on the actual method complexity you risk introducing breakage if you can't reason through (because of the complexity involved) all the consequences of the change you are making.
