- November 14, 2011 at 2:45 pm #30685
I have a complex scene, 1M vertices or more, and I wondered whether putting all of this into one VBO (as per the training course) affects performance, or whether I should split it into multiple VBOs. If so, what would the optimum cutoff point for each VBO's size be?
I have read the guide in the SDK but couldn't see a definitive answer.
- November 14, 2011 at 2:59 pm #35225
It depends on your scene and how you will be using those objects. Generally, you should batch objects as much as possible, e.g. all objects that use the same shaders and render state could be stored in a single VBO so that they can be rendered with a single draw call. Doing so will ensure there are as few redundant GL calls and state changes as possible.
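The batching idea above can be sketched as follows. This is an illustrative helper (the names and layout are my own, not from any SDK): it packs several meshes that share the same shader and render state into one vertex array, recording each mesh's first-vertex offset so everything can live in a single VBO and be drawn with few calls.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical mesh: tightly packed xyz positions (illustrative layout). */
typedef struct {
    const float *vertices;
    size_t       vertex_count;
} Mesh;

/* Copies every mesh into `batched` (caller-allocated, large enough) and
 * writes each mesh's starting vertex index into `first_vertex`.
 * Returns the total number of vertices in the batch. */
size_t batch_meshes(const Mesh *meshes, size_t mesh_count,
                    float *batched, size_t *first_vertex)
{
    size_t total = 0;
    for (size_t i = 0; i < mesh_count; ++i) {
        first_vertex[i] = total;
        memcpy(batched + total * 3, meshes[i].vertices,
               meshes[i].vertex_count * 3 * sizeof(float));
        total += meshes[i].vertex_count;
    }
    return total;
    /* The batch would then be uploaded once, e.g.:
     *   glBufferData(GL_ARRAY_BUFFER, total * 3 * sizeof(float),
     *                batched, GL_STATIC_DRAW);
     * and each mesh drawn from the shared VBO with
     *   glDrawArrays(GL_TRIANGLES, first_vertex[i], meshes[i].vertex_count);
     */
}
```

Because all meshes in the batch share shaders and render state, no state changes are needed between the draws.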
That being said, there are cases where splitting the objects into multiple VBOs (even if they use the same shader and render states) makes sense, e.g. when it’s possible to perform culling on the CPU to remove draw calls for objects that will not be in the scene (off screen, in a different map of a game etc).
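As a sketch of the CPU culling mentioned above (my own illustrative example, not from the post): a common cheap test is a bounding sphere against a frustum plane. If the sphere lies entirely behind the plane, the whole object's draw call can be skipped.

```c
#include <stdbool.h>

/* Plane in the form n.x*x + n.y*y + n.z*z + d = 0, with the normal
 * pointing into the visible half-space (illustrative convention). */
typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 n; float d; } Plane;

/* True if the bounding sphere is fully behind the plane, i.e. the
 * object cannot be visible and its draw call can be culled. */
bool sphere_outside_plane(Vec3 center, float radius, Plane p)
{
    float dist = p.n.x * center.x + p.n.y * center.y
               + p.n.z * center.z + p.d;
    return dist < -radius;
}
```

A full frustum cull repeats this against all six planes; any single "outside" result is enough to skip the object.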
It would be best to follow our performance guidelines document as closely as possible, profile the performance of your application, and then consider splitting the geometry into different VBOs if you think it will improve your batching or allow you to avoid rendering geometry that won't be seen.
- November 14, 2011 at 3:04 pm #35226
Okay, I am doing quite a bit of CPU culling, so I will batch based on that. Does this mean there is no theoretical upper limit to the amount of vertex data in a single VBO before it degrades performance?
- November 14, 2011 at 3:12 pm #35227
The only upper limit is the space that is left in memory for the GPU driver to address. If you refer to the glBufferData reference page, you can see that the function will raise a GL_OUT_OF_MEMORY error if there isn't adequate storage available. You can query for this error with glGetError() after any call that may hit a memory limitation, so that your application can decide how to deal with this scenario.
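One way an application might "decide how to deal with" GL_OUT_OF_MEMORY is to back off to smaller buffers. The sketch below is purely illustrative: the `available` parameter stands in for the driver's memory limit so the logic can run without a GL context; real code would make the call and check glGetError() as shown in the comment.

```c
#include <stddef.h>

/* Tries progressively smaller VBO sizes until one is accepted.
 * Returns the size that succeeded, or 0 if even `minimum` failed,
 * letting the caller fall back to streaming geometry in batches. */
size_t alloc_with_fallback(size_t requested, size_t minimum, size_t available)
{
    size_t bytes = requested;
    while (bytes >= minimum) {
        /* Real code would attempt the allocation here:
         *   glBufferData(GL_ARRAY_BUFFER, bytes, NULL, GL_STATIC_DRAW);
         *   if (glGetError() != GL_OUT_OF_MEMORY) return bytes;
         * The comparison below stands in for that check. */
        if (bytes <= available)
            return bytes;
        bytes /= 2;   /* back off and retry with a smaller VBO */
    }
    return 0;
}
```

Note that glGetError() clears the flag it returns, so checking immediately after the allocation attempt keeps the result unambiguous.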