Virtual Texturing

This topic contains 5 replies, has 3 voices, and was last updated by  AndyM 6 years, 4 months ago.

Viewing 6 posts - 1 through 6 (of 6 total)
  • #30408

    AndyM
    Member

    I am currently doing a university project in which I am investigating Virtual Texturing (or megatexturing, as it is also known) for mobile platforms. Ultimately, I am hoping to implement a working demo on a mobile phone. If you are unfamiliar with this, there is a good explanation here:

    http://www.silverspaceship.com/src/svt/

    I will need a programmable pipeline, so it was suggested to me that I use OpenGL ES 2.0 and the PowerVR SDK. I was wondering if anyone could offer advice on whether this makes sense for my project, and what else I would need to take into consideration. I was hoping to work exclusively in C++, but I do not know if this is possible.

    I currently have access to a Nexus One Android phone, through my department. I don’t think it uses PowerVR hardware, but from what I understand it only needs to support OpenGL ES 2.0. Is this correct?

    Thanks for your time,

    Andy

    #34658
    Martin Kraus

    Another name is “adaptive texture maps”. There is a paper from 2002 with that title; if you google it, you will find the paper and more recent papers that cite it, which should give you a rather good overview of the academic literature on the topic.

    As you said, you need a programmable pipeline, and since you cannot program GPUs in C++, you will have to use GLSL (the OpenGL Shading Language) if you use OpenGL ES 2.0.

    You are right that the Nexus One doesn’t have PowerVR hardware and that most of the PowerVR SDK for Android (if not everything) should still work with it. But, of course, don’t expect to get support from Imagination Technologies for hardware-specific problems on non-PowerVR hardware. (Problem is: when you run into a problem, you often don’t know whether it is hardware-specific.)

    From my experience implementing the (non-mipmapped) technique on desktop GPUs, two common problems are making sure textures are actually stored uncompressed (some systems apply an automatic lossy texture compression) and using nearest-neighbor interpolation correctly. Make sure you understand how OpenGL calculates texture coordinates: 0 is at the left boundary of the leftmost texel and 1 is at the right boundary of the rightmost texel; thus, the center of the leftmost texel is at 1/(2*n), where n is the number of texels.
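    To make that coordinate convention concrete, here is a tiny sketch (the function name is my own, not from any SDK):

    ```cpp
    // OpenGL convention: with n texels along an axis, 0.0 is the left
    // edge of texel 0 and 1.0 the right edge of texel n-1, so texel i
    // is centered at (i + 0.5) / n.
    float texelCenter(int i, int n) {
        return (i + 0.5f) / n;
    }
    ```

    For a 256-texel row this puts the leftmost texel center at 0.5/256 = 1/512, matching the 1/(2*n) rule above; off-by-half errors here are exactly what breaks nearest-neighbor lookups at tile borders.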

    If you find a more efficient way of handling bilinear interpolation than presented in that paper from 2002, please let me know.

    If you use mipmapping, you cannot rely on the hardware’s mipmapping; you have to implement it yourself. That is probably going to be the most difficult part of your fragment shader. I assume you need the derivatives of the texture coordinates to compute the level of detail for your texture access. This feature is optional in OpenGL ES 2.0; you can only be sure it is available if the extension GL_OES_standard_derivatives is present (see http://www.khronos.org/registry/gles/extensions/OES/OES_standard_derivatives.txt ).
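    As a rough illustration of what that level-of-detail computation involves, here is a CPU-side sketch of the standard footprint formula (naming is mine; in the shader the derivatives would come from dFdx/dFdy of the extension):

    ```cpp
    #include <algorithm>
    #include <cmath>

    // Mip level from the per-pixel derivatives of the texture
    // coordinates, scaled into texel space: log2 of the larger of the
    // two footprint axes. The 0.5 factor is because we keep the
    // squared lengths and log2(sqrt(x)) == 0.5 * log2(x).
    float computeLod(float dudx, float dvdx,
                     float dudy, float dvdy, float texSize) {
        float dx2 = (dudx * dudx + dvdx * dvdx) * texSize * texSize;
        float dy2 = (dudy * dudy + dvdy * dvdy) * texSize * texSize;
        return 0.5f * std::log2(std::max(dx2, dy2));
    }
    ```

    At one texel per pixel this yields level 0, at two texels per pixel level 1, and so on; without the derivatives extension you would have to approximate these values some other way.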

    Thus, one of the first things to do is probably to decide whether you need mipmapping. And if you do, you should check whether your phone supports the extension. According to some information here: http://stackoverflow.com/questions/3881197/opengl-es-2-0-extensions-on-android-devices
    whether the extension is supported on the Nexus One depends on the particular version of Android.

    Hope this helps, Martin

    #34659

    AndyM
    Member

    Thanks for your response!

    Martin Kraus wrote:
    Another name is “adaptive texture maps”. There is a paper from 2002 with that title; if you google it, you will find the paper and more recent papers that cite it, which should give you a rather good overview of the academic literature on the topic.

    Thanks, I actually have that paper printed off, along with another paper entitled “Uniform Texture Management for Arbitrary Meshes” from 2004. I’m just reading through both of them properly now.

    Martin Kraus wrote:
    As you said, you need a programmable pipeline, and since you cannot program GPUs in C++ you will have to use GLSL (OpenGL Shading Language) if you use OpenGL ES 2.0.

    Yeah, I am aware of the shading language (although I do not yet know how to use it). I didn’t phrase that very well; I was actually referring to the fact that most Android development seems to be done in Java. I am still struggling to understand how exactly PowerVR is installed on these devices. My confusion stems from the fact that, as a framework, I assumed it would be bundled with each application that uses it, yet normal Android applications seem to use Java. Since I am working on a desktop to begin with, I have been using C++ with PowerVR, and I was hoping to eventually move my code to a mobile device with as little pain as possible.

    Concerning the mipmapping issue, my understanding of the system described on the website that I linked earlier suggests that mipmapping is effectively built into the texture management scheme. It seems as though both the graphics card and OpenGL know nothing about it, other than that some textures are being moved back and forth, and then the shader handles everything from there. I am new to all of this though, so I may have misunderstood.

    Thanks again,

    Andy

    #34660
    Martin Kraus

    I assume that the PowerVR SDK is built on top of the Android NDK (for “native” programming in C or C++: http://developer.android.com/sdk/ndk/index.html ) which itself requires the standard Android SDK, which uses Java.

    With respect to mipmapping: yes, the idea is that only the fragment shader deals with mipmapping (not the OpenGL application, which uses the fragment shader). I looked at the slides and saw that Sean Barrett used a biased texture lookup into the indirection texture. That’s a nice idea, and it saves you from having to compute the level of detail yourself. If you just want to port Sean Barrett’s code to OpenGL ES 2.0, this might well work even if derivatives of the texture coordinates are not available.
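    For what it’s worth, the address translation that the fragment shader performs after the indirection lookup can be modelled on the CPU roughly like this (the names and the page-table layout are my own assumptions, not Sean Barrett’s actual code):

    ```cpp
    #include <cmath>

    struct Vec2 { float x, y; };

    // One entry of the indirection (page) texture: where the page's
    // tile lives in the physical tile pool, in normalized pool UVs.
    struct PageEntry {
        float tileX, tileY;
    };

    // Map a virtual UV to a UV inside the tile pool. pagesAcross is
    // the number of pages along one axis of the virtual texture at
    // this level; tileSizeUV is one pool tile's size in pool UV space.
    Vec2 virtualToPhysical(Vec2 vUV, const PageEntry& e,
                           float pagesAcross, float tileSizeUV) {
        float fx = vUV.x * pagesAcross;
        float fy = vUV.y * pagesAcross;
        fx -= std::floor(fx); // fractional position within the page
        fy -= std::floor(fy);
        return { e.tileX + fx * tileSizeUV, e.tileY + fy * tileSizeUV };
    }
    ```

    The bias trick mentioned above just decides which level of this mapping to use by letting the hardware pick a mip level of the indirection texture itself.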

    #34661

    agalarneau
    Member

    I’m kind of interested in what you’ve been able to accomplish so far. How is the research going? If what you do works well, I might try to do it myself as well! =)

    #34662

    AndyM
    Member

    Thanks for your help Martin,
    it was greatly appreciated 🙂 

    agalarneau wrote:
    I’m kind of interested in what you’ve been able to accomplish so far.. How is the research going? If what you do works well, I might try to do it myself as well! =)

    Well, I got the whole thing running. There hasn’t been much time for the project since it takes place during a very busy academic year, so I have barely done any optimisation yet and haven’t experimented with the basic settings (such as tile size, LOD, etc.). I have it running at 20fps. That’s obviously not great, but I was half expecting it to run at 5fps prior to optimisation :P

    It’s also worth noting that this is running on an Adreno 200, which seems to perform quite poorly compared to the latest hardware from both Imagination and Qualcomm.

    Possibly the biggest performance issue is the fact that the tile pool texture is not in a compressed format. I have been looking into changing this as it should be very simple in theory (it’s discussed by Sean Barrett, amongst others), but at first glance it seems as though OpenGL ES 2.0 does not support updates to compressed textures. If this is true I have no idea why, since my understanding is that pretty much all texture compression schemes work by local compression of small blocks of texels. I can’t see why it’s a problem to blindly overwrite areas of the texture so long as you don’t split any blocks.
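    (If I remember the extension specs correctly, the ETC1 extension, GL_OES_compressed_ETC1_RGB8_texture, explicitly disallows glCompressedTexSubImage2D updates, which may be what you are hitting; and PVRTC is not purely block-local, so the "overwrite whole blocks" argument does not apply to it. The block arithmetic itself is simple; a sketch for a generic 4×4-block format at 8 bytes per block, ETC1's rate, with names of my own invention:)

    ```cpp
    // Bytes occupied by a w x h texel region of a 4x4-block compressed
    // format (8 bytes per block for ETC1). Tile updates that stay on
    // 4-texel boundaries never split a block, which is the property
    // the post relies on.
    int compressedBytes(int w, int h, int blockBytes = 8) {
        int blocksX = (w + 3) / 4; // round up to whole blocks
        int blocksY = (h + 3) / 4;
        return blocksX * blocksY * blockBytes;
    }
    ```

    So a 128×128 tile would need 32×32 blocks, i.e. 8 KiB of compressed data per upload, versus 48 KiB uncompressed RGB at 3 bytes per texel.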

    The system is also limited to 256×256 tiles at the highest level of detail (1 byte per colour channel). I think that should be easy to increase, since I can either make use of the alpha channel or the bits left over in the mip-level’s channel. I haven’t actually done this though, so I don’t want to claim that it is definitely possible yet.
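    The spare-bits idea could be sketched like this (a hypothetical layout, not the actual format used here): with four bits reserved for the mip level, the top four bits of that channel give two extra bits per axis, raising the addressable range from 256×256 to 1024×1024 tiles.

    ```cpp
    #include <cstdint>

    // Pack a tile address plus mip level into three 8-bit channels.
    // R = low 8 bits of x, G = low 8 bits of y,
    // B = [2 high bits of x][2 high bits of y][4-bit mip level].
    void packEntry(uint32_t x, uint32_t y, uint32_t mip,
                   uint8_t& r, uint8_t& g, uint8_t& b) {
        r = x & 0xFF;
        g = y & 0xFF;
        b = (((x >> 8) & 0x3) << 6) | (((y >> 8) & 0x3) << 4) | (mip & 0xF);
    }

    void unpackEntry(uint8_t r, uint8_t g, uint8_t b,
                     uint32_t& x, uint32_t& y, uint32_t& mip) {
        x = r | ((uint32_t(b >> 6) & 0x3) << 8);
        y = g | ((uint32_t(b >> 4) & 0x3) << 8);
        mip = b & 0xF;
    }
    ```

    In the shader the unpacking would be the same arithmetic on the sampled channel values; whether the precision of the fragment shader on a given GPU makes this reliable is exactly the kind of thing that would need testing first.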

    If you have any specific information that you’d like to know about the final implementation, let me know and I’ll see what I can do 🙂

