I'm currently programming a small 3D engine, and I was wondering why I should work in eye-space coordinates in the fragment shader. To do that, I have to put my camera matrix in
It's convenient. It's a well-defined space that exists, and one that you compute on the way to transforming positions anyway.
It has the same scale as world space, but doesn't have the problems world space does. Eye space is always (relatively) close to zero (since the eye is at 0), so it's a reasonable space to have an explicit transform matrix for. The scale is important, because you can provide distances (like the light attenuation terms) that were computed in world space. Distances don't change in eye space.
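To see why the distances carry over: the world-to-eye transform is just a rotation plus a translation, i.e. a rigid transform, and rigid transforms preserve distances. A minimal sketch (the view matrix and points below are illustrative, not from any particular engine):

```python
import math

def rotate_y(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, 0.0, s, 0.0],
            [0.0, 1.0, 0.0, 0.0],
            [-s, 0.0, c, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def translate(tx, ty, tz):
    return [[1.0, 0.0, 0.0, tx],
            [0.0, 1.0, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform_point(m, p):  # apply a 4x4 matrix to (x, y, z, 1)
    x, y, z = p
    return tuple(m[i][0] * x + m[i][1] * y + m[i][2] * z + m[i][3]
                 for i in range(3))

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# A hypothetical camera (view) matrix: rotation about Y, then a translation.
view = mat_mul(rotate_y(0.7), translate(-3.0, -1.0, -5.0))

p_world, q_world = (1.0, 2.0, 3.0), (4.0, 0.0, -2.0)
p_eye, q_eye = transform_point(view, p_world), transform_point(view, q_world)

# The distance between the two points is unchanged by the view transform.
assert abs(dist(p_world, q_world) - dist(p_eye, q_eye)) < 1e-9
```

Any attenuation radius or falloff distance you authored in world units therefore works unchanged on eye-space positions.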
You need to transform it into a linear space anyway. Doing lighting, particularly with attenuation, in a non-linear space like post-projection space is... tricky. You would therefore have to provide normals and positions in some kind of linear space anyway, so it may as well be eye space.
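Because eye space is linear and distance-preserving, a standard point-light falloff can be evaluated directly on eye-space positions. A sketch, assuming the common constant/linear/quadratic attenuation model (the coefficient values here are illustrative):

```python
import math

def attenuation(frag_pos_eye, light_pos_eye,
                k_c=1.0, k_l=0.09, k_q=0.032):
    # d is a true metric distance because eye space is linear;
    # in a post-projection space this distance would be meaningless.
    d = math.dist(frag_pos_eye, light_pos_eye)
    return 1.0 / (k_c + k_l * d + k_q * d * d)

# Farther fragments receive less light, exactly as they would in world space.
near = attenuation((0.0, 0.0, -1.0), (0.0, 0.0, 0.0))
far = attenuation((0.0, 0.0, -20.0), (0.0, 0.0, 0.0))
assert far < near
```

The same formula would produce wrong falloff if fed clip-space or NDC positions, since the perspective divide warps distances non-uniformly with depth.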
It requires the fewest transforms. Eye space is the space right before the projection transform. If you have to reverse-transform to a linear space (deferred rendering, for example), eye space is the one that requires the fewest operations.