I was wondering if the quality of texture mipmaps would be better if I used my own algorithm for pre-generating them, instead of the built-in automatic one. I'd probably use a
What is motivating you to try? Are the mipmaps you currently have being poorly generated? (i.e. have you looked?) Bear in mind your results will often still be (tri)linearly interpolated anyway, so between that and motion there are often steeply diminishing returns to improved resampling.
It depends on the kind of assets you display. The Lanczos filter gets closer to an ideal low-pass filter, and the results are noticeable if you compare the mip maps side by side. Most people will mistake aliasing for sharpness; again, it depends on whether your assets tend to contain high frequencies, but I've definitely seen cases where a box filter was not a good option. Since the mip map is then linearly interpolated anyway, though, the gain might not be that noticeable.

There is another thing to mention: most people use a box filter and pass the output of each stage as the input to the next stage, and in this way you lose both precision and visual energy (although gamma correction will help with the latter). If you can come up with code that uses an arbitrary filter (mind you, most of them are separable into two passes), you would typically scale the filter kernel itself and produce each mip level directly from the base texture, which is a good thing.
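To make that last point concrete, here is a minimal sketch of producing one mip level directly from the base texture with a scaled, separable Lanczos-3 kernel. It assumes a single-channel float image and clamp-to-edge addressing; the function names are made up for illustration:

#include <math.h>
#include <stdlib.h>

// Lanczos-3 window: sinc(x) * sinc(x/3) for |x| < 3, else 0.
static float lanczos3(float x)
{
    const float pi = 3.14159265f;
    x = fabsf(x);
    if (x < 1e-6f) return 1.0f;
    if (x >= 3.0f) return 0.0f;
    float px = pi * x;
    return 3.0f * sinf(px) * sinf(px / 3.0f) / (px * px);
}

// Resample one line from srclen to dstlen samples. The strides let the
// same routine handle rows (stride 1) and columns (stride = row width).
// The kernel is scaled by srclen/dstlen, so each mip level can be built
// from the base texture instead of from the previous level.
static void resample_line(const float *src, int srclen, int sstride,
                          float *dst, int dstlen, int dstride)
{
    float scale = (float)srclen / (float)dstlen;   // > 1 when minifying
    float support = 3.0f * scale;
    for (int i = 0; i < dstlen; ++i) {
        float center = (i + 0.5f) * scale - 0.5f;
        float sum = 0.0f, wsum = 0.0f;
        for (int j = (int)floorf(center - support);
                 j <= (int)ceilf(center + support); ++j) {
            int k = j < 0 ? 0 : (j >= srclen ? srclen - 1 : j);  // clamp to edge
            float w = lanczos3((j - center) / scale);
            sum  += w * src[k * sstride];
            wsum += w;
        }
        dst[i * dstride] = sum / wsum;             // normalize the kernel
    }
}

// Build one dw x dh mip level straight from the sw x sh base image
// with two separable passes: horizontal, then vertical.
static void downsample_lanczos(const float *base, int sw, int sh,
                               float *mip, int dw, int dh)
{
    float *tmp = malloc(sizeof(float) * dw * sh);
    for (int y = 0; y < sh; ++y)                   // horizontal pass
        resample_line(base + y * sw, sw, 1, tmp + y * dw, dw, 1);
    for (int x = 0; x < dw; ++x)                   // vertical pass
        resample_line(tmp + x, sh, dw, mip + x, dh, dw);
    free(tmp);
}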
As an addition to this question, I have found that some completely different mipmapping algorithms (rather than those that simply try to achieve the best down-scaling quality, like Lanczos filtering) have good effects on certain textures.
For instance, on some textures that are supposed to represent high-frequency information, I have tried an algorithm that simply takes one random pixel of the four being considered at each step. The results depend a lot on the texture and what it is supposed to convey, but I have found that it works very well on some; not least on ground textures.
Another one I've tried is taking the most deviating of the four pixels, to preserve contrast. It has even fewer uses, but they do exist.
As such, I've implemented the option to choose the mipmapping algorithm per texture.
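A rough sketch of the two reductions described above, applied to one 2×2 block of a single-channel float texture (illustrative only, not the exact code):

#include <stdlib.h>
#include <math.h>

// Keep one of the four texels at random, preserving high-frequency "grain".
static float reduce_random(float a, float b, float c, float d)
{
    float v[4] = { a, b, c, d };
    return v[rand() % 4];
}

// Keep the texel that deviates most from the block average, preserving contrast.
static float reduce_max_deviation(float a, float b, float c, float d)
{
    float v[4] = { a, b, c, d };
    float avg = (a + b + c + d) / 4.0f;
    int best = 0;
    for (int i = 1; i < 4; ++i)
        if (fabsf(v[i] - avg) > fabsf(v[best] - avg))
            best = i;
    return v[best];
}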
EDIT: I thought I might provide some examples of the differences in practice. Here's a piece of grass texture on the ground, the leftmost picture being with standard average mipmapping, and the rightmost being with randomized mipmapping:
I hope the viewer can appreciate how much "apparent detail" is lost in the averaged mipmap, and how much flatter it looks for this kind of texture.
Also for reference, here are the same samples with 4× anisotropic filtering turned on (the above being tri-linear):
Anisotropic filtering makes the difference less pronounced, but it's still there.
There are good reasons to generate your own mipmaps. However, the quality of the downsampling is not one of them.
Game and graphics programmers have experimented with all kinds of downsampling algorithms in the past. In the end it turned out that the very simple "average four pixels" method gives the best results. Although more advanced methods are in theory mathematically more correct, they tend to take a lot of sharpness out of the mipmaps, which gives a flat look (try it!).
For some reason I don't fully understand, the simple average method seems to have the best trade-off between antialiasing and keeping the mipmaps sharp.
However, you may want to calculate your mipmaps with gamma correction. OpenGL does not do this on its own. This can make a real visual difference, especially for darker textures.
Doing so is simple. Instead of averaging four values together like this:
float average(float a, float b, float c, float d)
{
    return (a + b + c + d) / 4.0f;
}
Do this:
float GammaCorrectedAverage(float a, float b, float c, float d)
{
    // Assume a gamma of 2.0. In this case we can just square
    // the components before averaging and take the square root afterwards.
    return sqrtf((a*a + b*b + c*c + d*d) / 4.0f);
}
This code assumes your color components are normalized to be in the range of 0 to 1.
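If your texels are stored as 8-bit values instead, a hedged sketch of applying this per channel could look like the following; it normalizes to the 0 to 1 range first and rounds back afterwards (the 2.0 gamma is the same approximation as above; real sRGB uses a slightly different curve):

#include <math.h>

// Downsample one 2x2 block of 8-bit values with the gamma-2.0 average.
static unsigned char downsample_texel(unsigned char a, unsigned char b,
                                      unsigned char c, unsigned char d)
{
    float fa = a / 255.0f, fb = b / 255.0f, fc = c / 255.0f, fd = d / 255.0f;
    float avg = sqrtf((fa*fa + fb*fb + fc*fc + fd*fd) / 4.0f);
    return (unsigned char)(avg * 255.0f + 0.5f);   // round back to 8 bits
}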