Mapping a texture from “1D” to “2D” with OpenGL matrix transformations

隐瞒了意图╮ 2021-01-21 16:01

(With this question I'm trying to investigate an idea I had for solving this other one.)

If I have a standard 2D array of dimensions width and height …

1 Answer
  • 2021-01-21 16:07

    I haven't tried this yet, but I wanted to throw the idea out there already:

    UPDATE: I tried it now and it works beautifully with one minor change (see comment)!

    Let's say my big texture has width size, and the image I want to draw has width width and starts at offset offset inside the big texture, where offset is the 1-D (flattened) index of the image's first texel, i.e. offset = x + y * size.
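
    For concreteness, a minimal sketch of that flattening; the start coordinates x0 = 1 and y0 = 3 are hypothetical values chosen so that the result matches the offset of 25 used in the example below:

        #include <cstdio>

        int main() {
            const int size = 8;                     // width of the big texture
            const int x0 = 1, y0 = 3;               // hypothetical start of the sub-image
            const int offset = x0 + y0 * size;      // 1-D (flattened) offset
            std::printf("offset = %d\n", offset);   // prints: offset = 25
            return 0;
        }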

    Then, the following 4x4 matrix will almost achieve this mapping:

         _                                           _
        |      1        width        offset      0    |
        |                                             |
        |   1/size   width/size   offset/size    0    |
    M = |                                             |
        |      0          0            0         0    |
        |                                             |
        |_     0          0            0         1   _|
    

    So, in the example above, to draw the 4×5 image, the matrix would be

     _                    _
    |   1    4    25    0  |
    |  1/8  1/2  25/8   0  |
    |   0    0     0    0  |
    |_  0    0     0    1 _|
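
    As a rough sketch of how this matrix could be filled in on the CPU (row-major here; whether it needs to be transposed when handed to OpenGL, e.g. via the transpose parameter of glUniformMatrix4fv, depends on your conventions):

        #include <cstdio>

        // Fill the mapping matrix described above, in row-major order.
        //   size   - width of the big texture
        //   width  - width of the sub-image
        //   offset - flattened start index (x + y * size)
        void buildMapping(float size, float width, float offset, float m[16]) {
            const float rows[16] = {
                1.0f,        width,        offset,        0.0f,
                1.0f / size, width / size, offset / size, 0.0f,
                0.0f,        0.0f,         0.0f,          0.0f,
                0.0f,        0.0f,         0.0f,          1.0f
            };
            for (int i = 0; i < 16; ++i) m[i] = rows[i];
        }

        int main() {
            float m[16];
            buildMapping(8.0f, 4.0f, 25.0f, m);  // the example: size = 8, width = 4, offset = 25
            for (int r = 0; r < 4; ++r)
                std::printf("%g %g %g %g\n", m[4*r + 0], m[4*r + 1], m[4*r + 2], m[4*r + 3]);
            return 0;
        }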
    

    The image coordinates will then need to be specified with a 4-vector containing

    ( x, y, 1, 1 )
    

    So, for example, the coordinates of k (i.e. (2,2)) will map to:

    M*( 2, 2, 1, 1 ) => ( 35, 4.375, 0, 1 )
    

    which will be interpreted as texture coordinate (35, 4.375).

    If we now turn on nearest neighbor as the interpolation rule and enable texture wrapping in the x-direction, this should correspond to:

    ( 3, 4 )
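
    This step can be sanity-checked on the CPU under the same integer-texel idealization, emulating GL_REPEAT in x with fmod and GL_NEAREST with floor:

        #include <cmath>
        #include <cstdio>

        int main() {
            const float size = 8.0f;
            // The example matrix from above, row-major.
            const float M[4][4] = {
                { 1.0f,   4.0f, 25.0f,  0.0f },
                { 0.125f, 0.5f, 3.125f, 0.0f },
                { 0.0f,   0.0f, 0.0f,   0.0f },
                { 0.0f,   0.0f, 0.0f,   1.0f }
            };
            const float v[4] = { 2.0f, 2.0f, 1.0f, 1.0f };  // coordinates of k
            float out[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
            for (int r = 0; r < 4; ++r)
                for (int c = 0; c < 4; ++c)
                    out[r] += M[r][c] * v[c];               // out = (35, 4.375, 0, 1)
            // Wrap in x (GL_REPEAT) and snap to the nearest texel (GL_NEAREST).
            const int texelX = (int)std::floor(std::fmod(out[0], size));  // 35 mod 8 = 3
            const int texelY = (int)std::floor(out[1]);                   // floor(4.375) = 4
            std::printf("(%d, %d)\n", texelX, texelY);                    // prints: (3, 4)
            return 0;
        }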
    

    (I was using integer coordinates here, whereas in the final implementation, the final coordinates would need to be floats in the range from 0 to 1. This might be achievable very easily by replacing the 1 in the bottom right corner of the matrix with size, since that will end up in the fourth position of the output vector and thus divide the other three. This, as @chbaker0 pointed out, would only work, though, if the texture coordinates are subject to the usual perspective division. If they are not, the entire matrix M needs to be divided by size instead to achieve the desired result.)
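
    To make that normalization concrete, here is the same example worked through for the first variant (the bottom-right 1 replaced by size, relying on the perspective divide by the fourth component):

        #include <cstdio>

        int main() {
            const float size = 8.0f;
            // Example matrix with the bottom-right 1 replaced by size.
            const float M[4][4] = {
                { 1.0f,   4.0f, 25.0f,  0.0f },
                { 0.125f, 0.5f, 3.125f, 0.0f },
                { 0.0f,   0.0f, 0.0f,   0.0f },
                { 0.0f,   0.0f, 0.0f,   size }
            };
            const float v[4] = { 2.0f, 2.0f, 1.0f, 1.0f };
            float out[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
            for (int r = 0; r < 4; ++r)
                for (int c = 0; c < 4; ++c)
                    out[r] += M[r][c] * v[c];       // out = (35, 4.375, 0, 8)
            // The perspective divide by the fourth component normalizes the result.
            const float s = out[0] / out[3];        // 35 / 8    = 4.375
            const float t = out[1] / out[3];        // 4.375 / 8 = 0.546875
            std::printf("s = %g, t = %g\n", s, t);
            return 0;
        }

    With GL_REPEAT in the s direction, 4.375 wraps to 0.375, i.e. texel column 3 of the 8-texel-wide texture, consistent with the (3, 4) worked out above.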

    Does this sound reasonable at all, or can someone see a problem with it before I go ahead and try to implement it? (Might take me a few days, since I have to do a couple of other things first to get to a testable app...)
