Can normal maps be generated from a texture?

情深已故 2021-01-31 23:56

If I have a texture, is it then possible to generate a normal-map for this texture, so it can be used for bump-mapping?

Or how are normal maps usually made?

4 Answers
  • 2021-02-01 00:22

    Yes. Well, sort of. Normal maps can be accurately made from height-maps, and you can generally put a regular texture through as well and get decent results. Keep in mind there are other methods of making a normal map, such as taking a high-resolution model, making a low-resolution version of it, then ray casting to see what the normal should be at each point for the low-resolution model to simulate the higher one.

    For height-map to normal-map, you can use the Sobel Operator. This operator can be run in the x-direction, telling you the x-component of the normal, and then the y-direction, telling you the y-component. You can calculate z with 1.0 / strength where strength is the emphasis or "deepness" of the normal map. Then, take that x, y, and z, throw them into a vector, normalize it, and you have your normal at that point. Encode it into the pixel and you're done.
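
    As a worked sketch of the steps above (the names, the 3×3-patch representation, and the `strength` parameter handling are my own, not from the original code), here is one pixel's normal computed from its neighborhood of intensities:

    ```cpp
    #include <cassert>
    #include <cmath>

    struct Vec3 { double x, y, z; };

    // One pixel's normal from a 3x3 patch of height intensities in [0, 1].
    // Rows: 0 = top, 2 = bottom; columns: 0 = left, 2 = right.
    Vec3 normal_at(const double (&p)[3][3], double strength)
    {
        // Sobel x-kernel: right column minus left column, center rows weighted 2
        const double dX = (p[0][2] + 2.0 * p[1][2] + p[2][2])
                        - (p[0][0] + 2.0 * p[1][0] + p[2][0]);
        // Sobel y-kernel: bottom row minus top row, center columns weighted 2
        const double dY = (p[2][0] + 2.0 * p[2][1] + p[2][2])
                        - (p[0][0] + 2.0 * p[0][1] + p[0][2]);
        // z from the strength ("deepness") of the map
        const double dZ = 1.0 / strength;

        // normalize to a unit vector
        const double len = std::sqrt(dX * dX + dY * dY + dZ * dZ);
        return { dX / len, dY / len, dZ / len };
    }
    ```

    On a flat patch both derivatives are zero, so the normal points straight out as (0, 0, 1); on a ramp rising to the right, the normal tilts toward +x. Note the sign convention here follows the code below; some tools negate dX/dY so normals tilt away from the slope instead.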

    Here's some older, incomplete code that demonstrates this:

    // pretend types, something like this
    #include <cstdint> // for uint8_t

    struct pixel
    {
        uint8_t red;
        uint8_t green;
        uint8_t blue;
    };

    struct vector3d; // a 3-vector with doubles: vector3d(x, y, z), a normalize() member, fields x/y/z
    struct texture;  // a 2d array of pixels, indexed as texture(row, column)
    
    // determine intensity of pixel, from 0 - 1
    const double intensity(const pixel& pPixel)
    {
        const double r = static_cast<double>(pPixel.red);
        const double g = static_cast<double>(pPixel.green);
        const double b = static_cast<double>(pPixel.blue);
    
        const double average = (r + g + b) / 3.0;
    
        return average / 255.0;
    }
    
    // clamp pX to the inclusive range [0, pMax]
    int clamp(int pX, int pMax)
    {
        if (pX > pMax)
        {
            return pMax;
        }
        else if (pX < 0)
        {
            return 0;
        }
        else
        {
            return pX;
        }
    }
    
    // transform -1 - 1 to 0 - 255
    const uint8_t map_component(double pX)
    {
        return (pX + 1.0) * (255.0 / 2.0);
    }
    
    texture normal_from_height(const texture& pTexture, double pStrength = 2.0)
    {
        // assume square texture, not necessarily true in real code
        texture result(pTexture.size(), pTexture.size());
    
        const int textureSize = static_cast<int>(pTexture.size());
        const int maxIndex = textureSize - 1; // largest valid row/column index
        // use signed indices so row - 1 can go negative and be clamped
        for (int row = 0; row < textureSize; ++row)
        {
            for (int column = 0; column < textureSize; ++column)
            {
                // surrounding pixels, clamped at the edges
                const pixel topLeft = pTexture(clamp(row - 1, maxIndex), clamp(column - 1, maxIndex));
                const pixel top = pTexture(clamp(row - 1, maxIndex), clamp(column, maxIndex));
                const pixel topRight = pTexture(clamp(row - 1, maxIndex), clamp(column + 1, maxIndex));
                const pixel right = pTexture(clamp(row, maxIndex), clamp(column + 1, maxIndex));
                const pixel bottomRight = pTexture(clamp(row + 1, maxIndex), clamp(column + 1, maxIndex));
                const pixel bottom = pTexture(clamp(row + 1, maxIndex), clamp(column, maxIndex));
                const pixel bottomLeft = pTexture(clamp(row + 1, maxIndex), clamp(column - 1, maxIndex));
                const pixel left = pTexture(clamp(row, maxIndex), clamp(column - 1, maxIndex));
    
                // their intensities
                const double tl = intensity(topLeft);
                const double t = intensity(top);
                const double tr = intensity(topRight);
                const double r = intensity(right);
                const double br = intensity(bottomRight);
                const double b = intensity(bottom);
                const double bl = intensity(bottomLeft);
                const double l = intensity(left);
    
                // sobel filter
                const double dX = (tr + 2.0 * r + br) - (tl + 2.0 * l + bl);
                const double dY = (bl + 2.0 * b + br) - (tl + 2.0 * t + tr);
                const double dZ = 1.0 / pStrength;
    
                vector3d v(dX, dY, dZ);
                v.normalize();
    
                // convert to rgb
                result(row, column) = pixel{ map_component(v.x), map_component(v.y), map_component(v.z) };
            }
        }
    
        return result;
    }
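
    On the consuming side (for example in a shader), the encoding is inverted with n = c / 255 * 2 - 1. A small sketch of the round trip; `encode_component` mirrors `map_component` above, and `decode_component` is my own name for the inverse:

    ```cpp
    #include <cassert>
    #include <cmath>
    #include <cstdint>

    // transform -1..1 to 0..255, as map_component does above
    uint8_t encode_component(double x)
    {
        return static_cast<uint8_t>((x + 1.0) * (255.0 / 2.0));
    }

    // inverse: transform 0..255 back to approximately -1..1
    double decode_component(uint8_t c)
    {
        return c / 255.0 * 2.0 - 1.0;
    }
    ```

    The round trip is not exact because of 8-bit quantization: each component comes back within about 2/255 of its original value, which is why high-precision normal maps sometimes use 16-bit channels.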
    
  • 2021-02-01 00:25

    There are probably many ways to generate a normal map, but as others said, you can do it from a height map, and 3D packages such as XSI, 3ds Max, or Blender can output one for you as an image.

    You can then output an RGB image with the NVIDIA plugin for Photoshop, write an algorithm to convert it yourself, or you may be able to output it directly from those 3D packages with third-party plugins.

    Be aware that in some cases you might need to invert channels (R, G, or B) from the generated normal map.

    Here are some resource links with examples and more complete explanations:

    1. http://developer.nvidia.com/object/photoshop_dds_plugins.html
    2. http://en.wikipedia.org/wiki/Normal_mapping
    3. http://www.vrgeo.org/fileadmin/VRGeo/Bilder/VRGeo_Papers/jgt2002normalmaps.pdf
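
    The channel inversion mentioned above typically means flipping one encoded component, most often green when a map was baked with the opposite Y convention (OpenGL-style vs. DirectX-style). A minimal sketch, with an assumed 8-bit RGB pixel type:

    ```cpp
    #include <cassert>
    #include <cstdint>

    // assumed texel type: one 8-bit value per channel
    struct Pixel { std::uint8_t r, g, b; };

    // 255 - g mirrors the encoded Y component around the 127.5 midpoint,
    // i.e. it negates the decoded normal's y without touching x or z
    Pixel invert_green(Pixel p)
    {
        return { p.r, static_cast<std::uint8_t>(255 - p.g), p.b };
    }
    ```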
  • 2021-02-01 00:26

    I don't think normal maps are generated from a texture; they are generated from a model.

    Just as texturing allows you to define complex colour detail with minimal polys (as opposed to using millions of polys and vertex colours alone to define the colour on your mesh),

    a normal map allows you to define complex normal detail with minimal polys.

    I believe normal maps are usually generated from a higher-res mesh and are then used with a low-res mesh.

    I'm sure 3D tools such as 3ds Max or Maya, as well as more specialised tools, will do this for you. Unlike textures, I don't think they are usually made by hand.

    But they are generated from the mesh, not the texture.
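
    The kind of geometric data a baker reads off the mesh can be illustrated with the most basic case: a face normal from a triangle's vertices, via the cross product of two edges. A minimal sketch with my own types; real bakers also interpolate smoothed vertex normals and record them per texel of the low-res mesh's UV layout:

    ```cpp
    #include <cassert>
    #include <cmath>

    struct Vec3 { double x, y, z; };

    // unit normal of the triangle (a, b, c), counter-clockwise winding
    Vec3 face_normal(Vec3 a, Vec3 b, Vec3 c)
    {
        // two edges sharing vertex a
        const Vec3 e1{ b.x - a.x, b.y - a.y, b.z - a.z };
        const Vec3 e2{ c.x - a.x, c.y - a.y, c.z - a.z };
        // cross product is perpendicular to both edges
        Vec3 n{ e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x };
        const double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        return { n.x / len, n.y / len, n.z / len };
    }
    ```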

  • 2021-02-01 00:33

    I suggest starting with OpenCV, due to its richness in algorithms. Here's one I wrote that iteratively blurs the computed normal map and blends the blurred copies into the overall value, essentially producing more of a topographical map.

    #define ROW_PTR(img, y) ((uchar*)((img).data + (img).step * (y)))
    cv::Mat normalMap(const cv::Mat& bwTexture, double pStrength)
    {
        // bwTexture is assumed to be a single-channel (grayscale) image
        const double scale = 1.0;
        const double delta = 127.0; // bias so a zero derivative maps to mid-grey

        cv::Mat sobelZ, sobelX, sobelY;
        // Sobel aperture size must be 1, 3, 5, or 7
        cv::Sobel(bwTexture, sobelX, CV_8U, 1, 0, 7, scale, delta, cv::BORDER_DEFAULT);
        cv::Sobel(bwTexture, sobelY, CV_8U, 0, 1, 7, scale, delta, cv::BORDER_DEFAULT);
        sobelZ = cv::Mat(bwTexture.rows, bwTexture.cols, CV_8UC1);
    
        for(int y=0; y<bwTexture.rows; y++) {
            const uchar *sobelXPtr = ROW_PTR(sobelX, y);
            const uchar *sobelYPtr = ROW_PTR(sobelY, y);
            uchar *sobelZPtr = ROW_PTR(sobelZ, y);
    
            for(int x=0; x<bwTexture.cols; x++) {
                double Gx = double(sobelXPtr[x]) / 255.0;
                double Gy = double(sobelYPtr[x]) / 255.0;
    
                double Gz =  pStrength * sqrt(Gx * Gx + Gy * Gy);
    
                uchar value = uchar(Gz * 255.0);
    
                sobelZPtr[x] = value;
            }
        }
    
        std::vector<cv::Mat>planes;
    
        planes.push_back(sobelX);
        planes.push_back(sobelY);
        planes.push_back(sobelZ);
    
        // "result" rather than "normalMap", to avoid shadowing the function name
        cv::Mat result;
        cv::merge(planes, result);

        cv::Mat originalNormalMap = result.clone();

        cv::Mat normalMapBlurred;

        for (int i = 0; i < 3; i++) {
            cv::GaussianBlur(result, normalMapBlurred, cv::Size(13, 13), 5, 5);
            cv::addWeighted(result, 0.4, normalMapBlurred, 0.6, 0, result);
        }
        cv::addWeighted(originalNormalMap, 0.3, normalMapBlurred, 0.7, 0, result);

        return result;
    }
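
    To see what the blend loop does to the weights, here is the same accumulation on a single scalar, with a constant stand-in for the blurred value (a toy sketch, not OpenCV): each pass keeps 40% of the running value and takes 60% from its blurred version, so after three passes only 0.4^3 = 6.4% of the un-blurred signal remains before the final 30/70 blend with the original.

    ```cpp
    #include <cassert>
    #include <cmath>

    // one pass of the loop above, reduced to scalars:
    // out = 0.4 * current + 0.6 * blurred
    double blend_pass(double current, double blurred)
    {
        return 0.4 * current + 0.6 * blurred;
    }
    ```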
    