How to debug a GLSL shader?

Asked by 青春惊慌失措 on 2020-11-28 17:38

I need to debug a GLSL program, but I don't know how to output intermediate results. Is it possible to make some debug traces (like with printf) in GLSL?

11 Answers
  • 2020-11-28 18:28

    You can try https://github.com/msqrt/shader-printf, an implementation aptly described as "Simple printf functionality for GLSL."

    You might also want to try ShaderToy, and maybe watch a video like this one (https://youtu.be/EBrAdahFtuo) from the "The Art of Code" YouTube channel, where you can see some of the techniques that work well for debugging and visualising. I can strongly recommend his channel: he writes some really good stuff and has a knack for presenting complex ideas in novel, highly engaging and easy-to-digest formats (his Mandelbrot video is a superb example of exactly that: https://youtu.be/6IWXkV82oyY).

    I hope nobody minds this late reply, but the question ranks high on Google searches for GLSL debugging and much has of course changed in 9 years :-)

    PS: Other alternatives are NVIDIA Nsight and AMD GPU ShaderAnalyzer, which offer full stepping debuggers for shaders.

  • 2020-11-28 18:29

    At the bottom of this answer is an example of GLSL code which lets you output the full float value as a color, encoded as IEEE 754 binary32. I use it as follows (this snippet outputs the yy component of the modelview matrix):

    vec4 xAsColor=toColor(gl_ModelViewMatrix[1][1]);
    if(bool(1)) // put 0 here to get lowest byte instead of three highest
        gl_FrontColor=vec4(xAsColor.rgb,1);
    else
        gl_FrontColor=vec4(xAsColor.a,0,0,1);
    

    After you get this on screen, you can just take any color picker, format the color as HTML (appending 00 to the rgb value if you don't need higher precision, and doing a second pass to get the lower byte if you do), and you get the hexadecimal representation of the float as IEEE 754 binary32. For example, if the value being inspected is 1.0 (which is 0x3F800000 as binary32), the picker will show #3F8000, and the second pass will yield 00 for the lowest byte.

    Here's the actual implementation of toColor():

    const int emax=127;
    // Input: x>=0
    // Output: base 2 exponent of x if (x!=0 && !isnan(x) && !isinf(x))
    //         -emax if x==0
    //         emax+1 otherwise
    int floorLog2(float x)
    {
        if(x==0.) return -emax;
        // NOTE: there exist values of x, for which floor(log2(x)) will give wrong
        // (off by one) result as compared to the one calculated with infinite precision.
        // Thus we do it in a brute-force way.
        for(int e=emax;e>=1-emax;--e)
            if(x>=exp2(float(e))) return e;
        // If we are here, x must be infinity or NaN
        return emax+1;
    }
    
    // Input: any x
    // Output: IEEE 754 biased exponent with bias=emax
    int biasedExp(float x) { return emax+floorLog2(abs(x)); }
    
    // Input: any x such that (!isnan(x) && !isinf(x))
    // Output: significand AKA mantissa of x if !isnan(x) && !isinf(x)
    //         undefined otherwise
    float significand(float x)
    {
        // converting int to float so that exp2(genType) gets correctly-typed value
        float expo=float(floorLog2(abs(x)));
        return abs(x)/exp2(expo);
    }
    
    // Input: x\in[0,1)
    //        N>=0
    // Output: Nth byte as counted from the highest byte in the fraction
    int part(float x,int N)
    {
        // All comments about exactness here assume that underflow and overflow don't occur
        const float byteShift=256.;
        // Multiplication is exact since it's just an increase of exponent by 8
        for(int n=0;n<N;++n)
            x*=byteShift;
    
        // Cut higher bits away.
        // $q \in [0,1) \cap \mathbb Q'.$
        float q=fract(x);
    
        // Shift and cut lower bits away. Cutting lower bits prevents potentially unexpected
        // results of rounding by the GPU later in the pipeline when transforming to TrueColor
        // the resulting subpixel value.
        // $c \in [0,255] \cap \mathbb Z.$
        // Multiplication is exact since it's just an increase of exponent by 8
        float c=floor(byteShift*q);
        return int(c);
    }
    
    // Input: any x acceptable to significand()
    // Output: significand of x split to (8,8,8)-bit data vector
    ivec3 significandAsIVec3(float x)
    {
        ivec3 result;
        float sig=significand(x)/2.; // shift all bits to fractional part
        result.x=part(sig,0);
        result.y=part(sig,1);
        result.z=part(sig,2);
        return result;
    }
    
    // Input: any x such that !isnan(x)
    // Output: IEEE 754 defined binary32 number, packed as ivec4(byte3,byte2,byte1,byte0)
    ivec4 packIEEE754binary32(float x)
    {
        int e = biasedExp(x);
        // sign to bit 7
        int s = x<0. ? 128 : 0;
    
        ivec4 binary32;
        binary32.yzw=significandAsIVec3(x);
        // clear the implicit integer bit of significand
        if(binary32.y>=128) binary32.y-=128;
        // put lowest bit of exponent into its position, replacing just cleared integer bit
        binary32.y+=128*int(mod(float(e),2.));
        // prepare high bits of exponent for fitting into their positions
        e/=2;
        // pack highest byte
        binary32.x=e+s;
    
        return binary32;
    }
    
    vec4 toColor(float x)
    {
        ivec4 binary32=packIEEE754binary32(x);
        // Transform color components to [0,1] range.
        // Division is inexact, but works reliably for all integers from 0 to 255 if
        // the transformation to TrueColor by GPU uses rounding to nearest or upwards.
        // The result will be multiplied by 255 back when transformed
        // to TrueColor subpixel value by OpenGL.
        return vec4(binary32)/255.;
    }
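
    As a side note, on GLSL 3.30+ (or older versions with the ARB_shader_bit_encoding extension) you can let the built-in floatBitsToUint do the bit reinterpretation instead of packing manually. A minimal sketch (untested; the name toColorViaBitcast is my own), producing the same (byte3,byte2,byte1,byte0) layout as packIEEE754binary32 above:

    // Requires #version 330 or ARB_shader_bit_encoding
    vec4 toColorViaBitcast(float x)
    {
        uint bits = floatBitsToUint(x); // raw IEEE 754 binary32 bits of x
        return vec4(float((bits >> 24) & 0xFFu),
                    float((bits >> 16) & 0xFFu),
                    float((bits >>  8) & 0xFFu),
                    float( bits        & 0xFFu)) / 255.;
    }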
    
  • 2020-11-28 18:34

    If you want to visualize the variations of a value across the screen, you can use a heatmap function similar to this one (I wrote it in HLSL, but it is easy to adapt to GLSL; a sketch of such a port follows the example below):

    float4 HeatMapColor(float value, float minValue, float maxValue)
    {
        #define HEATMAP_COLORS_COUNT 6
        // Gradient stops from cold (purple, blue) to hot (red)
        float4 colors[HEATMAP_COLORS_COUNT] =
        {
            float4(0.32, 0.00, 0.32, 1.00),
            float4(0.00, 0.00, 1.00, 1.00),
            float4(0.00, 1.00, 0.00, 1.00),
            float4(1.00, 1.00, 0.00, 1.00),
            float4(1.00, 0.60, 0.00, 1.00),
            float4(1.00, 0.00, 0.00, 1.00),
        };
        // Map value into [0, COUNT-1] and blend between the two nearest stops
        float ratio = (HEATMAP_COLORS_COUNT - 1.0) * saturate((value - minValue) / (maxValue - minValue));
        int indexMin = (int)floor(ratio);                           // lower stop
        int indexMax = min(indexMin + 1, HEATMAP_COLORS_COUNT - 1); // upper stop, clamped
        return lerp(colors[indexMin], colors[indexMax], ratio - (float)indexMin);
    }
    

    Then in your pixel shader you just output something like:

    return HeatMapColor(myValue, 0.00, 50.00);
    

    And you can get an idea of how it varies across your pixels:

    (screenshot: the heatmap colors varying across the rendered frame)

    Of course you can use any set of colors you like.
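
    A rough GLSL port of the same function (a sketch, untested; assumes desktop GLSL 1.30+ for integer min() and dynamic indexing of a local array; mix replaces lerp and clamp replaces saturate):

    vec4 heatMapColor(float value, float minValue, float maxValue)
    {
        const int COLORS_COUNT = 6;
        vec4 colors[6];
        colors[0] = vec4(0.32, 0.00, 0.32, 1.00);
        colors[1] = vec4(0.00, 0.00, 1.00, 1.00);
        colors[2] = vec4(0.00, 1.00, 0.00, 1.00);
        colors[3] = vec4(1.00, 1.00, 0.00, 1.00);
        colors[4] = vec4(1.00, 0.60, 0.00, 1.00);
        colors[5] = vec4(1.00, 0.00, 0.00, 1.00);
        // GLSL has no saturate(); clamp to [0,1] instead
        float ratio = float(COLORS_COUNT - 1) * clamp((value - minValue) / (maxValue - minValue), 0.0, 1.0);
        int indexMin = int(floor(ratio));
        int indexMax = min(indexMin + 1, COLORS_COUNT - 1);
        return mix(colors[indexMin], colors[indexMax], ratio - float(indexMin));
    }

    The usage is the same, e.g. gl_FragColor = heatMapColor(myValue, 0.0, 50.0);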

  • 2020-11-28 18:39
    // Tints the output red wherever the condition of interest holds,
    // so you can see at a glance which fragments take that path.
    uniform sampler2D colMap;
    varying vec2 coords;

    void main(){
      float bug=0.0;
      vec3 tile=texture2D(colMap, coords.st).xyz;
      vec4 col=vec4(tile, 1.0);

      if(something) bug=1.0; // 'something' stands for whatever condition you want to inspect

      col.x+=bug;

      gl_FragColor=col;
    }
    
  • 2020-11-28 18:42

    Do offline rendering to a texture and evaluate the texture's data. You can find related code by googling for "render to texture" opengl. Then use glReadPixels to read the output into an array and perform assertions on it (since looking through such a huge array in the debugger is usually not really useful).

    Also, you might want to disable clamping so you can output values that are not between 0 and 1; that is only supported for floating-point textures.

    I personally was bothered by the problem of properly debugging shaders for a while. There does not seem to be a good way. If anyone finds a good (and not outdated/deprecated) debugger, please let me know.
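
    To make the shader side of this concrete, here is a minimal sketch, assuming a floating-point color attachment (e.g. GL_RGBA32F) and a hypothetical intermediate debugValue that you want to inspect:

    varying vec3 debugValue; // hypothetical: any intermediate you want to dump

    void main() {
        // With a float render target and clamping disabled, the values
        // survive exactly and can be read back on the CPU with
        // glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, buffer);
        gl_FragColor = vec4(debugValue, 1.0);
    }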
