Synchronized output when rendering to an FBO with an eglImage attached

This topic contains 3 replies, has 2 voices, and was last updated by Volker 4 years, 12 months ago.

  • #31068

    Volker
    Member

    I’m currently rendering some graphical content on an OMAP 3530 with an SGX530. The rendering is done using an FBO with an attached texture that is created using eglCreateImageKHR and glEGLImageTargetTexture2DOES.
    I render the scene using OpenGL and finish it by calling eglWait() (I have also tried glFinish, glFlush, and all combinations of those functions).
    I then upload the data to an FPGA by reading through the user-space pointer provided by CMEM.
    Everything works so far, but most of the time the pixel data is incomplete. It looks as if only about 70-90% of the FBO has been rendered by the time I access the pixel data from user space and transfer it to the FPGA.
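    For reference, my setup looks roughly like this (simplified; the eglCreateImageKHR source target and attributes are platform-specific placeholders here, not the exact values for this board, and error checks are omitted):

```c
/* Extension entry points must be fetched via eglGetProcAddress(). */
PFNEGLCREATEIMAGEKHRPROC eglCreateImageKHR_ =
    (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
PFNGLEGLIMAGETARGETTEXTURE2DOESPROC glEGLImageTargetTexture2DOES_ =
    (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)eglGetProcAddress("glEGLImageTargetTexture2DOES");

/* Source buffer: on this platform a native pixmap backed by CMEM
 * memory (native_pixmap is a placeholder). */
EGLImageKHR image = eglCreateImageKHR_(dpy, EGL_NO_CONTEXT,
                                       EGL_NATIVE_PIXMAP_KHR,
                                       (EGLClientBuffer)native_pixmap, NULL);

/* Bind the EGLImage to a texture and attach it to an FBO. */
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glEGLImageTargetTexture2DOES_(GL_TEXTURE_2D, (GLeglImageOES)image);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
/* ... render, then try to synchronize before the CPU reads via CMEM ... */
```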

    I already tried rendering the texture into a dummy OpenGL context that I create for initializing OpenGL anyway. It didn’t make any difference. Unfortunately I can’t check the visual output, because I don’t have a physical display connected to the graphical output of the OMAP3530.

    Are there any bugs in glFinish/glFlush that prevent the graphics from being rendered completely? Is there anything that would ensure the pixel data is complete before I access it?

    I asked a similar question, with some additional info, here: e2e.ti.com/support/dsp/omap_applications_processors/f/447/t/233100.aspx

    #36387

    Volker
    Member

    I may have found a solution, but I think it is a workaround for a bug and shouldn’t be necessary.

    If I bind the FBO and do a glReadPixels(0,0,1,1,…) (which costs me about 30-40 ms), the problem is gone.
    I’m not sure about the OpenGL spec, but I would assume that glFinish or glFlush should be enough to guarantee valid data in memory.

    #36388

    Joe Davis
    Member

    Hi Volker,

    The behaviour of glFinish & glFlush is dependent on the platform’s driver configuration.

    The reason your renders are sometimes incomplete is that the surface is not locked while the CPU is accessing it. Without a locking mechanism (or, alternatively, forced flushes), you cannot guarantee that the image you read back is complete.

    There are two options:
    1. glReadPixels(): this will always force a render in PowerVR drivers but, as you have noticed, is a very expensive operation. It’s worth noting that glFlush/glFinish would also incur a large cost if they forced renders on your platform.
    2. Synchronize the CPU/GPU accesses in your application (described below).

    Synchronizing the CPU/GPU EGLImage accesses in your application
    This can be done by using the EGL_KHR_fence_sync extension and a circular buffer of EGLImages. Here’s an overview of how this approach can be implemented:
    1. Insert a fence into the command stream at the end of your GL render to the surface
    2. On a separate GL thread, poll for the fence to determine when the GPU has finished rendering to the surface
    3. Use a CPU-level locking mechanism to lock the surface and prevent the main GL thread from rendering to it
    4. Use your user-space pointer to read from the EGLImage
    5. Unlock the EGLImage once the CPU has finished reading data from it (thus, freeing up the EGLImage for the main GL thread to render into again)
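
    Steps 1 and 2 look roughly like this with EGL_KHR_fence_sync (entry points fetched via eglGetProcAddress; names with a trailing underscore are those fetched pointers, and error handling is omitted):

```c
/* Step 1: at the end of the GL render, insert a fence into the
 * command stream. */
EGLSyncKHR fence = eglCreateSyncKHR_(dpy, EGL_SYNC_FENCE_KHR, NULL);

/* Step 2: on the reader thread, wait for the GPU to reach the fence.
 * EGL_SYNC_FLUSH_COMMANDS_BIT_KHR ensures the pending commands are
 * actually submitted before waiting. */
EGLint status = eglClientWaitSyncKHR_(dpy, fence,
                                      EGL_SYNC_FLUSH_COMMANDS_BIT_KHR,
                                      EGL_FOREVER_KHR);
if (status == EGL_CONDITION_SATISFIED_KHR) {
    /* The GPU has finished rendering: it is now safe to lock the
     * EGLImage and read its pixels through the CMEM pointer. */
}
eglDestroySyncKHR_(dpy, fence);
```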

    This circular buffer approach will allow your application to read back data from a rendered surface without stalling the GPU (the GPU can render into other EGLImage surfaces while the CPU is reading data from the locked surface).

    #36389

    Volker
    Member

    Thanks very much for your suggestion. I will try to implement your second solution and check whether it gives better performance.
