Why should we use eye-space coordinates during the fragment stage of the OpenGL pipeline?

有刺的猬 · 2021-02-03 13:20

I'm currently programming a small 3D engine, and I was wondering why I should use eye-space coordinates in the fragment shader. To do that, I have to put my camera matrix in

2 Answers
  •  陌清茗 (OP) · 2021-02-03 13:55

    You don't have to supply the camera matrix to the shader and do the light position and direction transformation there. Actually it is rather inefficient to do it that way, since you're doing the very same operations on the same numbers again and again for each vertex.
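    A minimal sketch of that CPU-side transform. The matrix and vector types here are hand-rolled for illustration; a real engine would typically use a library such as GLM, and the `glUniform3fv` call shown in the comment assumes you already have a GL context and a uniform location.

    ```cpp
    #include <array>
    #include <cstdio>

    // Illustrative minimal math types; a real engine would use GLM or similar.
    using Vec4 = std::array<float, 4>;
    using Mat4 = std::array<std::array<float, 4>, 4>; // m[row][col]

    static Vec4 mul(const Mat4& m, const Vec4& v) {
        Vec4 r{};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                r[i] += m[i][j] * v[j];
        return r;
    }

    int main() {
        // A view matrix that translates the world by (0, 0, -5),
        // i.e. a camera at z = +5 looking down -z.
        Mat4 view = {{{{1,0,0,0}}, {{0,1,0,0}}, {{0,0,1,-5}}, {{0,0,0,1}}}};

        // World-space light position (w = 1 for a point light).
        Vec4 lightWorld = {2.0f, 3.0f, 0.0f, 1.0f};

        // Transform once per frame on the CPU...
        Vec4 lightEye = mul(view, lightWorld);

        // ...and upload the already-transformed value to the shader, e.g.:
        // glUniform3fv(lightPosEyeLoc, 1, lightEye.data());
        std::printf("eye-space light: %.1f %.1f %.1f\n",
                    lightEye[0], lightEye[1], lightEye[2]);
        // prints: eye-space light: 2.0 3.0 -5.0
        return 0;
    }
    ```

    The fragment shader then reads a `uniform vec3 lightPosEye` directly, instead of multiplying the world-space light by the camera matrix per vertex or per fragment.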

    Just transform the light position and direction on the CPU and supply the readily transformed light parameters to the shader. Lighting calculations are still more concise in eye space, especially if normal mapping is involved. And you have to transform everything into eye space anyway, since normals are not transformed by the perspective transform (though the vertex positions could be transformed into clip space directly).
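    Why normals need their own transform (the inverse-transpose of the model-view's upper-left 3×3, GLM's `inverseTranspose`) can be seen with a non-uniform scale. This is a self-contained sketch with hypothetical helper types; for the diagonal matrix used here the inverse-transpose is just the reciprocal of each diagonal entry.

    ```cpp
    #include <cassert>
    #include <cmath>

    // Illustrative minimal types; a real engine would use
    // glm::inverseTranspose(glm::mat3(modelView)).
    struct Vec3 { float x, y, z; };
    struct Mat3 { float m[3][3]; };

    static Vec3 mul(const Mat3& a, const Vec3& v) {
        return { a.m[0][0]*v.x + a.m[0][1]*v.y + a.m[0][2]*v.z,
                 a.m[1][0]*v.x + a.m[1][1]*v.y + a.m[1][2]*v.z,
                 a.m[2][0]*v.x + a.m[2][1]*v.y + a.m[2][2]*v.z };
    }

    int main() {
        // Model-view with a non-uniform scale: x stretched by 2.
        Mat3 mv        = {{{2.0f,0,0}, {0,1,0}, {0,0,1}}};
        // Normal matrix = transpose(inverse(mv)); for a diagonal matrix,
        // that is simply the reciprocal diagonal.
        Mat3 normalMat = {{{0.5f,0,0}, {0,1,0}, {0,0,1}}};

        // Normal of a 45-degree slope in the xy-plane.
        Vec3 n = {0.70710678f, 0.70710678f, 0.0f};

        Vec3 wrong = mul(mv, n);        // naively transformed normal
        Vec3 right = mul(normalMat, n); // properly transformed normal

        // The surface tangent (1,-1,0) maps under mv to (2,-1,0).
        // Only the inverse-transpose result stays perpendicular to it:
        float dotWrong = 2.0f*wrong.x - 1.0f*wrong.y; // != 0: normal is bent
        float dotRight = 2.0f*right.x - 1.0f*right.y; // == 0: still a normal
        assert(std::fabs(dotWrong) > 0.1f);
        assert(std::fabs(dotRight) < 1e-6f);
        return 0;
    }
    ```

    Under a uniform scale or a pure rotation the two transforms agree, which is why the bug often goes unnoticed until an object is scaled non-uniformly.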
