hardware texture filtering

This topic contains 2 replies, has 2 voices, and was last updated by gg sato 7 years, 1 month ago.

Viewing 3 posts - 1 through 3 (of 3 total)
  • #30320

    gg sato
    Member

    Hi,

    In BlurVertShader.vsh of the OGLES2Bloom project, in the Training folder of the SDK, it says:

    // Blur filter kernel shader
    //
    // 0 1 2 3 4
    // x-x-X-x-x    <- original filter kernel
    //   y-X-y      <- filter kernel abusing the hardware texture filtering
    //       |
    //   texel center
    //
    // Using hardware texture filtering, the amount of samples can be
    // reduced to three. To calculate the offset, use this formula:
    // d = w1 / (w1 + w2), whereas w1 and w2 denote the filter kernel weights
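    For context, the formula in the comment can be checked numerically. This is only a sketch: the 5-tap weights 1 4 6 4 1 and the texel values below are illustrative, not taken from the shader.

    ```python
    def bilinear(texels, pos):
        """Simulate a hardware bilinear fetch on a 1-D row at a fractional position."""
        i = int(pos)
        t = pos - i
        return texels[i] * (1 - t) + texels[i + 1] * t

    texels = [10.0, 20.0, 30.0, 40.0, 50.0]   # arbitrary test row, kernel centred on index 2
    w = [1.0, 4.0, 6.0, 4.0, 1.0]             # illustrative 5-tap kernel

    # Direct 5-sample filtering.
    direct = sum(wi * ti for wi, ti in zip(w, texels)) / sum(w)

    # Reduced 3-sample version: each offset sample sits between the two outer
    # texels, at d = w1 / (w1 + w2) from the inner one, carrying their summed weight.
    d = w[4] / (w[4] + w[3])                   # = 0.2
    side_weight = w[3] + w[4]                  # = 5
    reduced = (side_weight * bilinear(texels, 1.0 - d)    # left sample at 0.8
               + w[2] * texels[2]                         # centre sample
               + side_weight * bilinear(texels, 3.0 + d)  # right sample at 3.2
               ) / sum(w)

    print(direct, reduced)   # both print 30.0
    ```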

    What does the “filter kernel abusing the hardware texture filtering” mean?

    Thanks,

    Takenori

    #34436

    marco
    Member

    Hi Takenori,

    “Abusing” might not be the best word to describe what we are actually doing. :)
    If you think of a common 3×3 Gaussian filter kernel:

    1   2   1
    2   4   2
    1   2   1

    The filter kernel is separable, which means it can be split into a vector product:

    1                 1  2  1
    2  x  1  2  1  =  2  4  2
    1                 1  2  1
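    The split above is just an outer product, which is easy to verify (a quick sketch using NumPy):

    ```python
    import numpy as np

    v = np.array([1, 2, 1])
    kernel = np.outer(v, v)   # column vector times row vector
    print(kernel)
    # [[1 2 1]
    #  [2 4 2]
    #  [1 2 1]]
    ```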

    Now we have to process the image twice, applying the separated filter kernel each time.

    In the 3×3 filter kernel case this means we have to sample 3 texels in each pass per pixel.
    This is how it is usually done for the horizontal pass:

    A    B    C    (texels)
    ^    ^    ^    (sampling locations)
    1    2    1    (weights)

    which means that in order to get the filtered pixel value we have to linearly combine the
    sampled texels with their corresponding weights:

    result = (A * 1 + B * 2 + C * 1) * 0.25;  // 0.25 is the normalization value

    Now we can take advantage of bilinear filtering by sampling in between the texels,
    reducing the number of required texture samples to two:

    A   x   B   y   C    (texels)
        ^       ^        (sampling locations)
        a       b        (weights)

    The final equation is reduced by one multiplication and addition:

    result = (x*a + y*b) * norm; // norm is the normalization value for the weights

    Please note that the exact values of the sampling locations and weights have to be
    determined individually for each kernel.
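    For the 1-2-1 kernel above, for instance, the centre weight is shared equally between the two samples, which puts each sample exactly halfway between texels with weight a = b = 2 (and the same 0.25 normalization). A quick sketch to verify, with arbitrary texel values:

    ```python
    def bilinear(a, b, t):
        """Simulate a hardware bilinear fetch between two neighbouring texels."""
        return a * (1 - t) + b * t

    A, B, C = 10.0, 20.0, 30.0          # arbitrary texel values

    # Three-sample version.
    direct = (A * 1 + B * 2 + C * 1) * 0.25

    # Two-sample version: B's weight of 2 is split 1+1 between both samples,
    # so d = 1 / (1 + 1) = 0.5 and each sample carries weight 2.
    x = bilinear(A, B, 0.5)             # sample halfway between A and B
    y = bilinear(B, C, 0.5)             # sample halfway between B and C
    reduced = (x * 2 + y * 2) * 0.25

    print(direct, reduced)              # both print 20.0
    ```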

    You can find more information about this topic in our “Post Processing Effects Development Recommendation” whitepaper,
    which is included in the SDK download.

    Hope this helps,

    Marco  
    marco, 2010-10-21 12:12:58

    #34437

    gg sato
    Member

    Thanks, Marco, for clarifying the details. I should have read the doc.

    This gave me some hints…
