glReadPixels giving garbage


    #29864

    j_jiffer
    Member

    Hello board,

    I’ve been trying to figure out this bug for a couple days now, so any ideas or even inklings are greatly appreciated! It’s a long post, but it’s a pretty stripped down and simple program I’m describing.

    Basically, the data I’m getting from glReadPixels() is not what I’m expecting (so the problem could really be anywhere in my program!).  I’ve tried to simplify the program as much as possible to just isolate this problem.

    My shaders look like this:

    Code:
    // Fragment shader
    // Every pixel is given arbitrary color vector 0x9F109F10
    precision mediump float;   // default float precision is required in a GLSL ES fragment shader

    void main(void)
    {
        gl_FragColor = vec4(0.0625, 0.625, 0.0625, 0.625);
    }

    // Vertex shader
    attribute highp vec4 vPosition;

    void main(void)
    {
        gl_Position = vPosition;
    }
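
    The shaders are compiled and linked before drawing in the usual GL ES 2.0 way; roughly like this (a simplified sketch with error checking omitted; vertexSrc and fragmentSrc just stand in for my actual source strings):

    Code:
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vertexSrc, NULL);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fragmentSrc, NULL);
    glCompileShader(fs);

    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);

    // Attribute bindings only take effect when the program is (re)linked
    glBindAttribLocation(program, 0, "vPosition");
    glLinkProgram(program);
    glUseProgram(program);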

    I have a VBO that contains the vertex positions so that a square is drawn that covers the entire viewport on every render pass.

    Code:
    GLfloat vertices[] = { -1.0f, -1.0f, 0.0f,
                                 1.0f , -1.0f, 0.0f,
                                 1.0f ,  1.0f, 0.0f,
                                 -1.0f,  1.0f, 0.0f,
    };

    // Generate the vertex buffer object (VBO)
    glGenBuffers(1, vboPtr);

    // Bind the VBO so we can fill it with data
    glBindBuffer(GL_ARRAY_BUFFER, *vboPtr);

    // Set the buffer’s data
    unsigned int size = 4 * (sizeof(GLfloat) * 3);
    glBufferData(GL_ARRAY_BUFFER, size, vertices, GL_STATIC_DRAW);

    // Unbind the buffer
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    And drawing the VBO consists of:

    Code:
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);
       
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
       
    // Note: this binding only takes effect the next time the program is linked
    glBindAttribLocation(program, 0, "vPosition");
       
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
    glDisableVertexAttribArray(0);
       
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    I’m also using an FBO and doing render-to-texture.
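
    The render-to-texture setup follows the standard GL ES 2.0 pattern; roughly like this (a simplified sketch rather than my exact code; fbo and tex are just placeholder names):

    Code:
    GLuint fbo, tex;

    // Colour texture that the FBO will render into
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, IN_WIDTH, IN_WIDTH, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    // Framebuffer object with the texture as colour attachment 0
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        printf("FBO incomplete\n");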

    My viewport is set as glViewport(0, 0, IN_WIDTH, IN_WIDTH) where IN_WIDTH is a constant (currently set to 8).

    After doing one render pass, I use glReadPixels to inspect the output.

    Code:
    unsigned int data[IN_WIDTH * IN_WIDTH];
    glReadPixels(0, 0, IN_WIDTH, IN_WIDTH, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Init data to 0xEE so I can tell if data’s been written to
    memset(data, 0xEE, sizeof(unsigned int) * IN_WIDTH * IN_WIDTH);

    for (idx = 0;  idx < IN_WIDTH * IN_WIDTH; idx++)
    {
        printf("output[%d] = %08X\n", idx, data[idx]);
    }

    The expected output would be this:
    output[0] = 9F109F10
    output[1] = 9F109F10
    ...
    output[63] = 9F109F10
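
    (That word follows directly from the shader colour, assuming the render target is RGBA8888 and the CPU is little-endian:)

    Code:
    // 0.625  * 255 = 159.375 -> 159 = 0x9F   (green and alpha)
    // 0.0625 * 255 =  15.94  ->  16 = 0x10   (red and blue)
    // glReadPixels with GL_RGBA / GL_UNSIGNED_BYTE stores the bytes in R,G,B,A order,
    // so one pixel read back as a little-endian unsigned int comes out as 0xAABBGGRR:
    unsigned int expected = (0x9Fu << 24) | (0x10u << 16) | (0x9Fu << 8) | 0x10u;  /* 0x9F109F10 */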

    But the actual output is this:
    output[0] = 9F109F10
    output[1] = 9F109F10
    output[2] = 9F109F10
    output[3] = 9F109F10
    output[4] = 9F109F10
    output[5] = 9F109F10
    output[6] = 9F109F10
    output[7] = 9F109F10

    output[8] = 40BBD000
    output[9] = 00001000
    output[10] = 40BBE000
    output[11] = 00001000
    output[12] = 40BBF000
    output[13] = 00001000
    output[14] = 40BC0000
    output[15] = 00001000
    output[16] = 40BC1000
    output[17] = 00001000
    output[18] = 40BC2000
    output[19] = 00001000
    output[20] = 40BC3000
    output[21] = 00001000
    output[22] = 40BC4000
    output[23] = 00001000
    output[24] = 40BC5000
    output[25] = 00001000
    output[26] = 40BC6000
    output[27] = 00001000
    output[28] = 40BC7000
    output[29] = 00001000
    output[30] = 40BC8000
    output[31] = 00001000
    output[32] = 9F109F10
    output[33] = 9F109F10
    output[34] = 9F109F10
    output[35] = 9F109F10
    output[36] = 9F109F10
    output[37] = 9F109F10
    output[38] = 9F109F10
    output[39] = 9F109F10

    output[40] = 40D4E000
    output[41] = 00021000
    output[42] = 40D6F000
    output[43] = 00001000
    output[44] = 40D70000
    output[45] = 00011000
    output[46] = 40D81000
    output[47] = 00011000
    output[48] = 40D92000
    output[49] = 00001000
    output[50] = 40D93000
    output[51] = 00001000
    output[52] = 40D94000
    output[53] = 00001000
    output[54] = 40D95000
    output[55] = 00001000
    output[56] = 40D96000
    output[57] = 00001000
    output[58] = 40D97000
    output[59] = 00001000
    output[60] = 40D98000
    output[61] = 00001000
    output[62] = 40D99000
    output[63] = 00002000

    The two blocks that read 9F109F10 (output[0] through output[7] and output[32] through output[39]) are the sections where the output is actually correct. The rest of it just seems to be garbage.

    Any ideas?

    Thank you!

    #33255

    Xmas
    Member

    What implementation are you running on? If PC emulation, please also specify the graphics card and drivers.

    Surely the memset call is supposed to take place before glReadPixels?

    #33256

    j_jiffer
    Member

    I’m running with an SGX 530 on the OMAP 3530.  If I run the code in PC emulation, everything works fine and I don’t have this problem.

    And whoops, the memset actually does take place before glReadPixels. I just messed up when copying that part over.

    I wonder if this could have something to do with my EGL setup? When I set everything up, there are no EGL errors and everything seems to run fine (the EGL code is virtually copied verbatim from the example code in the Linux emulation SDK).  But sometimes when I check eglGetError() in the middle of all this OpenGLES stuff, I get the error code EGL_BAD_DISPLAY.  I can’t reproduce this at will, and my code currently isn’t doing it, but sometimes if I just change one little thing or the place where I query the error, it appears.  Again, when I’m initializing all the EGL stuff I don’t get any errors — and I explicitly compare the return value from eglGetDisplay() with EGL_NO_DISPLAY, and it never reports any problems.
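
    The display check I’m doing is roughly this pattern (a sketch, not my exact code):

    Code:
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    if (dpy == EGL_NO_DISPLAY)
        printf("eglGetDisplay failed\n");

    EGLint major, minor;
    if (!eglInitialize(dpy, &major, &minor))
        printf("eglInitialize failed: 0x%X\n", (unsigned)eglGetError());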

    Also possibly worth noting: If I disable the FBO, I get these results:
    output[0] = FF109E10
    output[1] = FF109E10
    output[2] = FF109E10
    output[3] = FF109E10
    output[4] = FF109E10
    output[5] = FF109E10
    output[6] = FF109E10
    output[7] = FF109E10
    output[8] = FF109E10
    output[9] = FF10A210
    output[10] = FF109E10
    output[11] = FF10A210
    output[12] = FF109E10
    output[13] = FF10A210
    output[14] = FF109E10
    output[15] = FF10A210
    output[16] = FF109E10
    output[17] = FF109E10
    output[18] = FF109E10
    output[19] = FF109E10
    output[20] = FF109E10
    output[21] = FF109E10
    output[22] = FF109E10
    output[23] = FF109E10
    output[24] = FF109E10
    output[25] = FF10A210
    output[26] = FF109E10
    output[27] = FF10A210
    output[28] = FF109E10
    output[29] = FF10A210
    output[30] = FF109E10
    output[31] = FF10A210
    output[32] = FF109E10
    output[33] = FF109E10
    output[34] = FF109E10
    output[35] = FF109E10
    output[36] = FF109E10
    output[37] = FF109E10
    output[38] = FF109E10
    output[39] = FF109E10
    output[40] = FF109E10
    output[41] = FF10A210
    output[42] = FF109E10
    output[43] = FF10A210
    output[44] = FF109E10
    output[45] = FF10A210
    output[46] = FF109E10
    output[47] = FF10A210
    output[48] = FF109E10
    output[49] = FF109E10
    output[50] = FF109E10
    output[51] = FF109E10
    output[52] = FF109E10
    output[53] = FF109E10
    output[54] = FF109E10
    output[55] = FF109E10
    output[56] = FF109E10
    output[57] = FF10A210
    output[58] = FF109E10
    output[59] = FF10A210
    output[60] = FF109E10
    output[61] = FF10A210
    output[62] = FF109E10
    output[63] = FF10A210

    They feel *kind of* correct in that the red and blue components are always right, but the green is off and not consistent.  (I’m using an RGB UNSIGNED_SHORT_5_6_5 configuration, which may explain some of that.)

    In any case, I’d rather use an FBO anyway.

    Thanks!

    #33257

    Xmas
    Member

    The values are expected since the hardware uses dithering when rendering to an RGB565 surface (9E and A2 are the two closest values representable in 6 bits per channel, expanded to 8 bits).
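
    Roughly, assuming the usual scale-and-round conversion: 0x9F = 159, and 159 * 63 / 255 = 39.28, which falls between the representable 6-bit values 39 and 40. Expanded back to 8 bits, 39 * 255 / 63 is about 158 = 0x9E and 40 * 255 / 63 is about 162 = 0xA2; dithering alternates between the two so the average stays close to 159.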

    The FBO result certainly seems odd. What is the size of the texture you are rendering to?

    #33258

    j_jiffer
    Member

    My texture size is 8 x 8.

    Code:
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                     IN_WIDTH, IN_WIDTH, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);

    I’ve tried changing the value of IN_WIDTH, and the problem seems to be the same: in every case, the first IN_WIDTH pixels returned by glReadPixels() (i.e., the first row) are always correct, but then the bizarre stuff follows.

    I believe the texture is actually being written to correctly because I’ve done tests where each render pass does an addition reduction, and I get the correct result in the end.  So I think it’s just a problem with reading multiple rows correctly with glReadPixels().

    I’m pretty new to this stuff, so I’m not sure how my EGL configuration might affect FBOs. I used the same technique for selecting a config as the sample code in the SDK.  So my on-screen rendering is done using the 5-6-5 format (even though I specify GL_RGBA and GL_UNSIGNED_BYTE in the texture definition and in glReadPixels), but does that apply to off-screen rendering?  Could it be a problem that I’m specifying GL_RGBA/GL_UNSIGNED_BYTE, which conflicts with the EGL config?

    Thanks for all your help, Georg!

    #33259

    Xmas
    Member

    Hi, sorry for the late answer.

    j_jiffer wrote:
    I’ve tried changing the value of IN_WIDTH, and the problem seems to be the same: in every case, the first IN_WIDTH pixels returned by glReadPixels() (i.e., the first row) are always correct, but then the bizarre stuff follows.

    Can you give us more details: what values did you try, and is there always the same amount of garbage between rows?

    Quote:
    I’m pretty new to this stuff, so I’m not sure how my EGL configuration might affect FBOs. I used the same technique for selecting a config as the sample code in the SDK.  So my on-screen rendering is done using the 5-6-5 format (even though I specify GL_RGBA and GL_UNSIGNED_BYTE in the texture definition and in glReadPixels), but does that apply to off-screen rendering?  Could it be a problem that I’m specifying GL_RGBA/GL_UNSIGNED_BYTE, which conflicts with the EGL config?

    The EGL config does not affect FBOs. After all, when you render to the RGBA8888 texture you get the right result, at least for the first few pixels.

    glReadPixels always accepts GL_RGBA/GL_UNSIGNED_BYTE, but upconverting RGB565 data to RGBA8888 does nothing to remove dithering that was applied during rendering.
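
    For illustration, the upconversion is just a per-pixel bit expansion along these lines (a sketch of a common replication scheme, with pixel565 standing in for one packed pixel; the exact rule may differ slightly):

    Code:
    unsigned short p  = pixel565;          /* one packed RGB565 pixel */
    unsigned r5 = (p >> 11) & 0x1F;
    unsigned g6 = (p >>  5) & 0x3F;
    unsigned b5 =  p        & 0x1F;
    unsigned r8 = (r5 << 3) | (r5 >> 2);   /* expand 5 bits to 8 by bit replication */
    unsigned g8 = (g6 << 2) | (g6 >> 4);
    unsigned b8 = (b5 << 3) | (b5 >> 2);
    unsigned a8 = 0xFF;                    /* 565 has no alpha, so alpha reads back as FF */

    Any dithering baked into the 565 values simply gets carried through this expansion.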
