Best Way to Copy a FBO’s Color and Depth Attachments to Textures

This topic contains 3 replies, has 2 voices, and was last updated by Joe Davis 3 years, 3 months ago.

Viewing 4 posts - 1 through 4 (of 4 total)
  • #31760

    Hi All,

    I’ve searched the net and haven’t found a definitive “best practice”, so I’ve come to this forum to seek some guidance.

    I’ve been working on my own engine and have run into a problem. In my engine I have a “Camera” class which is exposed to users of the engine. The camera class can also have any number of “PostProcessors” attached to it (such as turning everything grayscale, tilt shift, some weather effects, etc.). Conceptually, all works well. The PostProcessor API defines that PostProcessors take, as input, a color and/or depth texture. These textures represent the color and/or depth that the camera has rendered.

    My problem comes from the fact that sometimes the cameras are being rendered to FBOs which *ARE NOT GUARANTEED* to have a color texture. An example of this is when the cameras are rendering to the “default FBO” (which is constructed on iOS devices using EAGLContext’s renderbufferStorage:fromDrawable: method, passing in a CAEAGLLayer). The default FBO is, in most cases, rendering to a render buffer for the color attachment instead of to a texture.

    So, for my PostProcessor API to behave properly in all cases (as well as to support other features such as temporal motion blur), I potentially need to do one of two things, as I see it.

    1. I can have all cameras at all times render to a separate “default FBOPrime”, an FBO which has a color and/or depth texture associated with it, and then, at the end of every frame, bind the “default FBO” (the one with the color render buffer) and screen-blit “default FBOPrime”’s color texture. I think this is problematic because of performance. If there are no post processors on any camera, then the cameras will be rendering to textures that don’t need to exist. Since rendering to a texture is slower than rendering to a render buffer (from my understanding), this becomes a performance hit when no post processors are present.

    2. I can copy the contents of the “default FBO” (with the render buffer color attachment) to a “scratch FBO” that has a color and a depth texture when the camera has post processors. I only have to do this copy once per camera, and only if the camera has post processors attached to it. Not an ideal scenario, but I am having difficulty thinking of a different workaround.

    To copy to a texture, I was planning to use glCopyTexSubImage2D. From what I have read, due to the tile-based deferred rendering (TBDR) architecture of the PowerVR chips, using glCopyTexSubImage2D is costly. Are there other, less costly alternatives that get the behavior I want (copying an FBO color attachment to a texture)? Would glCopyTexSubImage2D be any more costly than rendering to a texture (instead of a render buffer) and doing a full-screen blit of the texture?
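    For reference, the copy I have in mind looks roughly like this — a minimal sketch, assuming a current GLES 2.0 context, the source FBO bound as GL_FRAMEBUFFER, and a destination texture already allocated with glTexImage2D at a matching size and format (the function and parameter names are mine):

```c
/* Sketch: copy the currently bound FBO's color attachment into a
 * texture with glCopyTexSubImage2D. Assumes a current GLES 2.0
 * context, the source FBO bound as GL_FRAMEBUFFER, and destTex
 * already allocated (glTexImage2D) at least width x height in a
 * compatible format. Names here are illustrative, not from an API. */
#include <GLES2/gl2.h>

void copyColorToTexture(GLuint destTex, GLsizei width, GLsizei height)
{
    glBindTexture(GL_TEXTURE_2D, destTex);
    /* Reads from the framebuffer currently bound for reading. On a
     * TBDR GPU this forces pending tile rendering for that target to
     * complete before the copy, which is the cost in question. */
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, /* mip level          */
                        0, 0,             /* dest x, y offsets   */
                        0, 0,             /* source x, y origin  */
                        width, height);
    glBindTexture(GL_TEXTURE_2D, 0);
}
```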

    Any help would be greatly appreciated. I’m going to go forward with glCopyTexSubImage2D just to get some numbers, but I’m open to better alternatives.

    -EncodedNybble

    #38883

    Joe Davis
    Member

    Hi,

    Although iOS’s main framebuffer is an FBO, it’s best to consider it a write-only destination, like the main framebuffer on other platforms.

    For best performance, I would recommend rendering directly to the main framebuffer when there are no post-processing passes enabled. If post-processing is required, you should render the game scene to an FBO so its attachments can be passed to your post-processing passes. The final post-process pass can then output to your main framebuffer. This solution will be much more efficient than using glCopyTex*Image2D().
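    A minimal sketch of the kind of off-screen target described above (assuming a GLES 2.0 context; sizes, formats, and names are illustrative): a colour texture attachment so the post-processing passes can sample the scene, plus a depth renderbuffer since the depth never needs to be sampled here.

```c
/* Sketch: create an off-screen FBO with a colour texture attachment
 * (sampleable by post-processing passes) and a depth renderbuffer.
 * Assumes a current GLES 2.0 context. All names are illustrative. */
#include <GLES2/gl2.h>

GLuint createSceneFbo(GLsizei w, GLsizei h, GLuint *outColorTex)
{
    GLuint fbo, colorTex, depthRb;

    /* Colour attachment as a texture so later passes can read it. */
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    /* Depth as a renderbuffer: it is never sampled, only tested. */
    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, w, h);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        /* handle incomplete framebuffer */
    }

    *outColorTex = colorTex;
    return fbo;
}
```

    The final post-process pass would then bind the main framebuffer and draw a full-screen quad sampling `colorTex`.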

    Thanks,
    Joe

    #38884

    Joe,

    Yes, of course writing to an FBO that has texture(s) for color and depth attachments is faster, and it’s what I am doing currently. The problem comes with particular, albeit rare, use cases.

    In some use cases, users of the engine will want some sort of temporal effect that requires the previous frame’s color information (like the motion blur implementation from some parts of Battlefield 3, or was it Far Cry 3… I don’t remember). In this scenario, rendering to an off-screen FBO when a camera has post processors means that the off-screen FBO won’t be guaranteed to have the previous frame’s information and will (at least for one frame) look odd.

    Another use case where writing to a separate FBO only when there are post processors breaks down: imagine there are 3 “Cameras” all writing to the default framebuffer. Camera 1 clears color and depth, and issues some draw calls. Camera 2 clears depth and draws some geometry (let’s say it is rendering a rear-view mirror in the bottom right of the screen). Camera 3 is an ortho camera that draws a piece of UI/HUD over the whole screen but requires alpha blending (and thus depends on the existing contents of the FBO).

    As it is set up, everything works. Using the “render to an off-screen FBO when there are post processors” approach will yield inconsistent results. Let’s say camera 2 has a post processor on it (some crack post processor, because the player’s car has taken too much damage and its rear-view mirror is partially broken), and let’s say camera 3 has a post processor on it (the car has taken even more damage later, so the screen goes grayscale). Pardon the bad example here, I’m just making this up.

    If only cameras 2 and 3 have post processors, and camera 3 is doing a large amount of blending, camera 3 will yield improper results every frame. Camera 1 renders straight to the default FBO. Camera 2 clears the depth of the off-screen FBO and renders to it; the off-screen FBO’s color texture then gets blitted back to the default FBO. Camera 3 renders to the off-screen FBO (assuming it is the same “scratch” FBO as camera 2’s) and blends. Unfortunately, the off-screen FBO would only contain the render of camera 2 (since only camera 2 was rendering to the off-screen buffer). Therefore the UI/HUD would be rendered on top of the rear-view mirror, and when the off-screen FBO is blitted back to the default FBO after all of the post processing, the default FBO would only contain the geometry from camera 2 and camera 3.

    There are ways to make the above example work (by explicitly writing camera 1 and camera 2 to a texture instead of the default FBO and having camera 3 render multiple full-screen quads). I just don’t like the fact that the behavior of my engine would be inconsistent based solely on particular cameras having post processors on them, i.e. if all 3 cameras above have no post processors, everything is OK, but introducing post processors leads to different rendering behavior.

    In a nutshell, I’m currently doing the behavior you describe in your post Joe. I have a “scratch” off screen FBO that cameras with post processors write to (if they have the default FBO as their “target”). Unfortunately, that approach (at least the way I implemented it) doesn’t always work. The only thing I could think of that would work in every scenario would be to copy the default FBO to texture(s) which is why I made my original post.

    I’m not sure if my example made any sense; if clarification is needed, let me know. Thanks so much for your reply.

    Thanks for All of Your Help,

    EncodedNybble

    #38885

    Joe Davis
    Member

    In this scenario, rendering to an off screen FBO when a camera has post processors means that the off screen FBO won’t be guaranteed to have the previous frame’s information

    You’re correct that the FBO will contain undefined data before its first use. You would have the same problem with the glCopyTex*Image2D() approach too. The best solution is to only apply the effect once the texture has been rendered to in a previous frame.

    Let’s say camera 2 has a post processor on it (some crack post processor because the player’s car has taken too much damage and it rear-view mirror is partially broken) and let’s say camera 3 has a post processor on it (car has taken even more damage later, screen goes grayscale

    If both cameras render into textures, then it should be straightforward to include the output of one camera’s render in a following render of another camera, e.g. blitting the cracked rear-view mirror into the bottom-right corner of camera 3’s render. The texture attached to camera 3’s render could then have post-processing effects applied to it.
    In a case where you’re considering post-processing for a cracked glass effect, you may get away with rendering geometry for the cracked glass at the end of camera 2’s render (on top of the rendered scene) as a cheap alternative to post-processing.

    There are ways to make the above example work (by explicitly writing camera 1 and camera 2 to a texture instead of default FBO and have camera 3’s render multiple full screen quads)

    This is the approach I’d go for. Any time you’re considering writing to the main framebuffer and reading back data, you should instead use an FBO with a colour texture attachment.

    Thanks,
    Joe
