Alpha test and alpha blending HW implementation

This topic contains 1 reply, has 2 voices, and was last updated by  Joe Davis 5 years, 7 months ago.

Viewing 2 posts - 1 through 2 (of 2 total)

    Imagine I have a complex scene of N objects, each drawn with alpha blending and alpha test enabled. Normally this means that HSR will not be done: every object will be shaded and textured by the TSP and USSE, and then alpha test and blending will be performed.

    Now imagine that, additionally, I draw an opaque full-screen quad at depth zero as the FIRST object in the scene, so ALL N objects are hidden behind it and not visible.

    Does that mean the ISP unit will perform the depth test for all objects in the scene and send only my quad's fragments to the TSP, ignoring the N hidden objects? Or will they still be processed in some way by the TSP, so that GPU time is wasted on invisible objects?

    Understanding the hardware behaviour in the case of alpha test and blending is very important for my application.

    stanislav.volkov, 2012-03-12 17:55:32


    Joe Davis

    The hardware uses the on-chip depth buffer to recognise when fragments are hidden and will avoid redundant processing.

    In the case you’ve explained, the full-screen quad would update the on-chip depth buffer, and all other objects in the scene would be tested against the values in this buffer. If a given fragment is closer to the camera than the current value in the on-chip depth buffer, it will be sent to the TSP for its fragment colour to be processed. If it is further from the camera than the current value in the depth buffer, the GPU will optimise it out and do no further processing (i.e. it won’t be submitted to the TSP).
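    The behaviour above can be illustrated with a toy software model. This is only a sketch of the ISP-style depth test described in this thread, not PowerVR's actual pipeline: an on-chip depth buffer is cleared to the far plane, a nearer opaque primitive's fragments pass the test and are "sent to the TSP", and a farther primitive's fragments are all culled before any shading. The sizes, depth values, and helper names here are illustrative assumptions.

```python
# Toy model (NOT the real PowerVR pipeline) of ISP-style hidden-surface
# removal using an on-chip depth buffer over a small tile of pixels.
W, H = 4, 4
FAR = 1.0

depth_buf = [[FAR] * H for _ in range(W)]  # cleared to the far plane

def submit_opaque(z):
    """Depth-test a full-screen opaque primitive at depth z.
    Returns how many of its fragments survive and are 'sent to the TSP'."""
    shaded = 0
    for x in range(W):
        for y in range(H):
            if z < depth_buf[x][y]:   # nearer than the stored value: pass
                depth_buf[x][y] = z   # opaque fragment updates the buffer
                shaded += 1           # and is forwarded for shading
    return shaded

quad_fragments = submit_opaque(0.0)    # opaque quad at depth 0: all pass
hidden_fragments = submit_opaque(0.5)  # farther object: every fragment culled
print(quad_fragments, hidden_fragments)
```

    Running this prints `16 0`: the quad shades one fragment per pixel, while the occluded object contributes no shading work at all, which mirrors the optimisation described above.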

    Hope this explanation helps 🙂
