eglCreatePBufferSurface() bug

This topic contains 4 replies, has 3 voices, and was last updated by TomCooksey 9 years, 5 months ago.

  • #29521

    TomCooksey
    Member

    I’m a Qt/Embedded developer working on integrating OpenGL ES & OpenVG into our own windowing system (QWS).

    We are currently using an OMAP3430 board (with the null window system SGX drivers) and the LinuxPC emulation drivers.

To get client processes to render 3D we are using pbuffer surfaces. Before we issue the QGLWidget::paintGL event (which is where applications make their GL calls), we make the pbuffer context current, so the application’s GL calls are directed to the pbuffer. After the paint event, we grab the pbuffer’s pixels with glReadPixels and copy them into a segment of shared memory. Once the image is in shared memory, an event is sent to the server, which uploads it into a texture (using glTexSubImage) and uses that texture for window composition. I realise this is far from ideal as it involves two extra copies, but hey, that’s all we can do with the null driver. We don’t want to get into serialising the application’s GL calls to the server process if we can help it.
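The client-side readback path described above might look roughly like this (a sketch only: error handling is omitted, and `shm_ptr` is assumed to already point at an attached shared-memory segment; the helper name is hypothetical):

```c
#include <EGL/egl.h>
#include <GLES/gl.h>

/* Sketch of the client-side paint path: render into the pbuffer,
   then make the first extra copy (pbuffer -> shared memory).
   The server later makes the second copy with glTexSubImage2D(). */
void client_paint(EGLDisplay dpy, EGLSurface pbuffer, EGLContext ctx,
                  int width, int height, void *shm_ptr)
{
    /* Direct the widget's GL calls at the pbuffer. */
    eglMakeCurrent(dpy, pbuffer, pbuffer, ctx);

    /* ... the application's QGLWidget::paintGL() calls happen here ... */

    /* Ensure rendering is complete, then read back into shared memory. */
    glFinish();
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, shm_ptr);

    /* An event is then sent to the server to pick up the pixels. */
}
```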

Anyway… I want to use the LinuxPC emulation libraries for testing. We have fancy GL window compositing working. However, it seems eglCreatePbufferSurface() fails with EGL_BAD_ALLOC if there is no current context. In client processes there is no context current when we create the pbuffer surface, so we have a problem. Reading the EGL spec, it doesn’t say anything about needing a current context when you create a pbuffer, so I am assuming this is a bug in the LinuxPC emulation library. It works fine on our OMAP3 development board.
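For reference, the failing case boils down to something like the following (a minimal sketch; `dpy` and `cfg` are assumed to come from the usual eglGetDisplay/eglInitialize/eglChooseConfig sequence):

```c
#include <EGL/egl.h>

/* Per the EGL spec, eglCreatePbufferSurface() does not require any
   context to be current; EGL_BAD_ALLOC here suggests a driver bug. */
EGLSurface create_plain_pbuffer(EGLDisplay dpy, EGLConfig cfg)
{
    const EGLint attribs[] = {
        EGL_WIDTH,  256,
        EGL_HEIGHT, 256,
        EGL_NONE
    };
    /* Note: no eglMakeCurrent() has been called in this process. */
    return eglCreatePbufferSurface(dpy, cfg, attribs);
}
```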

    #32092

    ptj
    Member

Hi, sorry for the late answer. This does indeed look like a bug in the emulation library. We have fixed it, but the fix will only be publicly available from the 2.3 release, which should come out sometime in August.

    #32093

    Xmas
    Member
    TomCooksey wrote:
    To get client processes to render 3D we are using pbuffer surfaces. Before we issue the QGLWidget::paintGL event (which is where applications make their GL calls), we make the pbuffer context current, so the application’s GL calls are directed to the pbuffer. After the paint event, we grab the pbuffer’s pixels with glReadPixels and copy them into a segment of shared memory. Once the image is in shared memory, an event is sent to the server, which uploads it into a texture (using glTexSubImage) and uses that texture for window composition. I realise this is far from ideal as it involves two extra copies, but hey, that’s all we can do with the null driver. We don’t want to get into serialising the application’s GL calls to the server process if we can help it.

    Hi Tom,

    Can’t you use texture-bindable pbuffers or framebuffer objects to render to a texture? I’m not sure I understand your comment about serializing GL calls.
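For context, the texture-bindable pbuffer path Georg mentions looks roughly like this (a sketch, assuming `cfg` was chosen with EGL_BIND_TO_TEXTURE_RGBA set; on many GLES 1.x implementations the pbuffer dimensions must be powers of two for binding to work):

```c
#include <EGL/egl.h>
#include <GLES/gl.h>

/* Create a pbuffer that can later be bound as a GL texture. */
EGLSurface make_bindable_pbuffer(EGLDisplay dpy, EGLConfig cfg, int w, int h)
{
    const EGLint attribs[] = {
        EGL_WIDTH,          w,
        EGL_HEIGHT,         h,
        EGL_TEXTURE_FORMAT, EGL_TEXTURE_RGBA,  /* makes it bindable */
        EGL_TEXTURE_TARGET, EGL_TEXTURE_2D,
        EGL_NONE
    };
    return eglCreatePbufferSurface(dpy, cfg, attribs);
}

/* In the compositor, after rendering to the pbuffer has finished: */
void composite_from_pbuffer(EGLDisplay dpy, EGLSurface pbuffer, GLuint tex)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    eglBindTexImage(dpy, pbuffer, EGL_BACK_BUFFER);   /* no CPU copy */
    /* ... draw a textured quad with tex ... */
    eglReleaseTexImage(dpy, pbuffer, EGL_BACK_BUFFER);
}
```

This avoids both the glReadPixels readback and the glTexSubImage upload, but only within a single process, which is the crux of Tom's problem below.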

    Regards,

    Georg

    #32094

    TomCooksey
    Member

    We can if the application is acting as the server, but not in the clients. QWS is built as a library, so the first Qt/Embedded application which is launched acts as the server and subsequent applications act as clients. There’s no “dedicated” server like there is in X.

    QGLWidgets in the server process do their GL rendering to texture-bindable pbuffers (which it seems aren’t supported on LinuxPC emulation – see other thread). In client processes, QGLWidgets render into regular pbuffers (which I think don’t need power-of-2 sizes). Clients then copy the pixels using glReadPixels() into a shared memory segment. Both the server process and the client process can access the shared memory, so the server can upload the data into a texture. Actually, we use a pbuffer bound to a texture if the widget is updating itself rapidly (i.e. animating), as this seems to be ~600% faster than using a texture alone (updating a texture using glTexSubImage() is even slower)!

    I assume pbuffers & FBOs can’t be shared between processes. So the only alternative we have to copying rendered pixels into shared memory would be to serialize the GL calls. That is, we’d provide our own OpenGL ES “proxy” library which client processes link to. When they make GL calls, our library buffers the calls up in shared memory. Our library’s implementation of eglSwapBuffers() simply signals the server process. When signalled, the server reads the GL commands from shared memory and makes calls to the real OpenGL ES library (having first made a texture-bindable pbuffer the current draw surface). Think GLX, but using shared memory rather than sockets. It’s something we don’t want to do if we can help it.
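The serialization idea Tom describes can be sketched as a simple command buffer. This is purely illustrative (the opcodes, layout, and names are hypothetical, and a real implementation would place the buffer in a shared-memory segment with proper synchronisation):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Client packs each GL call as an opcode plus arguments; the server
   unpacks and replays them against the real GL library. */
enum { CMD_CLEAR_COLOR = 1, CMD_SWAP_BUFFERS = 2 };

typedef struct {
    uint8_t buf[4096];
    size_t  head;            /* client write position */
} cmd_buffer;

static void push(cmd_buffer *cb, const void *p, size_t n)
{
    memcpy(cb->buf + cb->head, p, n);
    cb->head += n;
}

/* Client side: serialize instead of calling GL directly. */
static void proxy_clear_color(cmd_buffer *cb, float r, float g, float b, float a)
{
    uint32_t op = CMD_CLEAR_COLOR;
    float rgba[4] = { r, g, b, a };
    push(cb, &op, sizeof op);
    push(cb, rgba, sizeof rgba);
}

/* Server side: walk the buffer and decode; here we just count the
   commands, where a real server would issue glClearColor() etc. */
static int replay(const cmd_buffer *cb)
{
    size_t pos = 0;
    int count = 0;
    while (pos < cb->head) {
        uint32_t op;
        memcpy(&op, cb->buf + pos, sizeof op);
        pos += sizeof op;
        if (op == CMD_CLEAR_COLOR)
            pos += 4 * sizeof(float);   /* skip the packed arguments */
        count++;
    }
    return count;
}
```

eglSwapBuffers() in the proxy would then just signal the server that the buffer is ready to replay.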

    Ideally we’d like to be able to share pbuffers or FBOs between processes. Client processes render into a texture-bindable pbuffer. The server process then binds the client’s pbuffer as a texture and draws it to the screen surface. As far as I can see, there is nothing in the EGL specification which lets us do this – we’d have to hook into the library/driver itself (Something we’re looking at doing for Gallium3D).

    #32095

    TomCooksey
    Member

    ptj: We’re currently getting round it by having client processes create a window surface but not calling XMapWindow on the underlying X window. Not ideal, but it works for now. We look forward to the 2.3 release!
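The workaround amounts to something like the following (a sketch; the X display, EGL display, and config are assumed to be initialised already, and the helper name is hypothetical):

```c
#include <X11/Xlib.h>
#include <EGL/egl.h>

/* Create a real X window for EGL to wrap, but never map it, so
   nothing ever appears on screen. */
EGLSurface hidden_window_surface(Display *xdpy, EGLDisplay dpy,
                                 EGLConfig cfg, int w, int h)
{
    Window win = XCreateSimpleWindow(xdpy, DefaultRootWindow(xdpy),
                                     0, 0, w, h, 0, 0, 0);
    /* Deliberately no XMapWindow(xdpy, win). */
    return eglCreateWindowSurface(dpy, cfg, (EGLNativeWindowType)win, NULL);
}
```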
