- January 11, 2012 at 10:52 pm #30731
I know that using ‘discard’ in a fragment shader is a big no-no as far as performance is concerned but there are parts in my renderer where I have to use it instead of sorted blending which I do for the majority of cases.
My question is: does the very presence of ‘discard’ in the shader affect performance, or only when it is actually executed? I was considering switching it on and off via a uniform in my main shader rather than writing a separate fragment shader that contains it. Is that a sensible tactic? Sorry if this has been covered before, but I did a quick search and couldn’t find an answer.
I’m coding for iPad 1 & 2 if that is of any relevance.
Thanks in advance,
Mark.
- January 12, 2012 at 9:14 am #35378
Unfortunately, the GPU cannot know ahead of time whether a given fragment will take the ‘discard’ route through the shader, so if any route contains the ‘discard’ keyword, the whole object must be treated as if every fragment may be discarded.
So toggling it via a uniform won’t help; the solution is to write a separate shader.
Another solution, as you’re already aware, is to avoid using ‘discard’ altogether and use alpha blending instead; in almost all cases the effect will be ‘good enough’. I assume you’re already doing this everywhere it’s applicable, however.
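For reference, a minimal GLSL ES sketch of the kind of discard-based alpha test being discussed (the sampler, varying, and threshold are illustrative, not from the posts above):

```glsl
// Fragment shader (GLSL ES 1.00) -- illustrative names only.
precision mediump float;

uniform sampler2D uTexture;
varying vec2 vTexCoord;

void main()
{
    vec4 color = texture2D(uTexture, vTexCoord);

    // The mere presence of this 'discard' means the GPU must assume
    // any fragment of the draw call may be killed, which disables
    // early depth / hidden-surface optimisations for the whole object.
    if (color.a < 0.5)
        discard;

    gl_FragColor = color;
}
```

The blend-friendly alternative is to omit the `discard`, write the colour out with its alpha intact, and render the geometry sorted with blending enabled (e.g. `glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)`).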
Sorry I can’t help more.
Developer Technology
- January 12, 2012 at 9:35 am #35379
Further to Bob’s post, you can use preprocessor macros in your shader code to effectively create two separate shaders, but only maintain and include one shader file.
These work exactly the same way as C preprocessor macros do.
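A sketch of what such a single, dual-purpose shader file might look like (the `ALPHA_TEST` define name and the other identifiers are illustrative assumptions):

```glsl
// One shader file, two compiled variants (GLSL ES 1.00).
precision mediump float;

uniform sampler2D uTexture;
varying vec2 vTexCoord;

void main()
{
    vec4 color = texture2D(uTexture, vTexCoord);

#ifdef ALPHA_TEST
    // Only the variant compiled with ALPHA_TEST defined contains
    // 'discard', so the plain variant keeps full performance.
    if (color.a < 0.5)
        discard;
#endif

    gl_FragColor = color;
}
```

Without the SDK helper, the same effect can be had by prepending a line such as `#define ALPHA_TEST` to the source string before handing it to `glShaderSource`.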
If you’re using our SDK tools it’s very easy to do: just pass PVRTShaderLoadxxx an array of char* – which are your #defines – and this will Just Work.
Arron 2012-01-12 09:46:40