- May 14, 2009 at 10:34 am #29797
I have measured how much time my shader algorithms take on the Beagleboard by measuring the processing time of glDrawArrays. Sometimes it takes about 6 or 7 milliseconds, but sometimes it takes about 33 milliseconds. When I loop it 1000 times it takes about 20 seconds, so the average is 20 milliseconds. Any idea what might cause this? The vertex shader is basically copied from the 'Introducing PVRTools' training course and the fragment shader is a simple feature extraction algorithm. I can post it if needed.
-hnyk

May 15, 2009 at 10:50 am #33053
Simply measuring the execution time of glDrawArrays will not give you a good indication of the shader’s performance because of SGX’s architecture. A more accurate way might be to measure the entire frame time over a number of frames and take the average – once with your rendering happening, and once with nothing happening – and then take the difference of the two averages. Also make sure that you are using a high precision timer with high granularity. There is another thread somewhere here where someone talks about reading back a pixel from the framebuffer to ensure that rendering has actually finished, I can’t seem to find it right now though.
Hope this helps.

May 18, 2009 at 8:55 am #33054
So you are saying that even though glDrawArrays has returned, it might still be rendering, and that causes the variation in processing times? Is the thread in this section or somewhere else in this forum? I couldn't find it either.
I'm using PVRShellGetTime() to get the time. I suppose it has high enough precision and granularity?
Thanks for the tips. I will try to measure again.
Okay, I measured again and I got some weird results. My timer starts on the last line of InitView() and it stops and prints the result on the first line of ReleaseView(). In addition to this, I put some lines in RenderScene() to help debugging, something like this:
printf("Frame! %lu\n", PVRShellGetTime() - k);
The variable k is acquired when the timer starts.
Here are the results when the glDrawArrays call is commented out:
The first four frames are fast, but after that it takes about 9 ms per frame. And here are the results when glDrawArrays isn't commented out:
Again the first four frames are fast, but after that it takes over 1 second to complete a frame! This is way too slow. What is going on?
Okay, it seems that every demo does the same, so either my fragment shader really is that slow or there is something wrong with my setup.
hnyk 2009-05-19 11:10:16

May 27, 2009 at 4:35 pm #33055
When you say every demo, do you mean every demo from the OGLES2 SDK? If so, then I guess there must be something wrong with your setup, as these demos are run regularly on the Beagles and other OMAP3 devices we have here without showing this problem.
Can you show us the fragment shader that you are using? What geometry are you rendering and what vertex data are you passing?

May 29, 2009 at 10:02 am #33056
I mean they are fast, but if you measure the way I have, you can see that the first four frames are fast and then frames start taking longer.
But this is just an illusion: if you close the application after the first frame and measure the execution time with and without glDrawArrays, you see that the first frame takes as long as any other frame. Somehow the RenderScene function just completes very quickly four times and then starts to take longer, but the rendering itself takes the same amount of time on every frame.
I have now optimized my code so it is quite fast, but the first "frames" are still faster and then it takes longer.
Is there a logical explanation for this, or is there just something wrong with my setup?
I prefer not to release my code yet, but if it helps I can tell you that I render one triangle fan, which is actually a square, and I only pass texture coordinates from the vertex shader to the fragment shader.

May 29, 2009 at 12:52 pm #33057
The CPU and GPU work in parallel. In order to avoid starving the GPU while the CPU does something other than submitting draw calls, the driver has to do some amount of command buffering. So what you are seeing is the driver queuing up rendering commands very quickly for the first few frames; only once the queue is full does it start waiting for the GPU to finish rendering a frame before putting more commands in the queue.