PVRTexTool 4.2.0: PVRTC1_4bpp quality regression

This topic contains 12 replies, has 6 voices, and was last updated by Markus Henschel 1 year, 5 months ago.

Viewing 13 posts - 1 through 13 (of 13 total)
  • #52751

    I just downloaded the latest SDK with PVRTexTool 4.2.0, and I can confirm that the PVRTC1_4bpp format looks worse than with the version we used three years ago, even with the best-quality parameter. Also, it’s impossible to save a luminance texture (I8) in a .pvr file, although this was possible with an earlier version in 2011.

    #52774

    kevin
    Member

    Hi David,
    I can save a luminance texture (I8) in a .pvr file with 4.2.0 PVRTexTool.
    Open an image and encode it to a luminance (I8) texture with the [Encode] button.
    Can you also provide an image that reproduces the quality issue? :D
    Thanks,
    Kevin

    Attachments:
    #52934

    Pierre
    Member

    I have now switched to the new v4 version and noticed some quality loss.

    Attached is an example of a 32×32 texture, the left side using the old PVRTexLib and the right side using the new one. I am using LIB and DLL in my own tools.

    (the 2nd image is scaled, without interpolation, since 32×32 is pretty small)

    Edit: What is so bad about the new version is that the border drifts into the inner region.
    While the alpha seems better, that doesn’t matter for us, since we use a separate alpha layer.
    If this is due to alpha optimizations, it would be great to have a flag to disable them, since the alpha is often quite bad and we never use the PVR alpha.

    Attachments:
    #52938

    Pierre
    Member

    Update: I actually posted the wrong comparison.
    Attached is a new one.

    In the old library I used BEST for compression. BEST in the new library is really bad, see the attached image.
    My old post on top actually used HIGH for compression, which gave better results (I compared a few).

    Attachments:
    #52952

    I noticed similar issues. With PVRTexTool v3.23 the quality is much better for PVRTC1 4bpp RGBA than with 4.2.0. I will open a support ticket because I cannot share the source images here.

    Edit: My ticket number is 719

    #52956

    Joe Davis
    Member

    I’ve discussed the issue with the PVRTexTool lead. Up until PVRTexTool v3.23 (SDK v2.9), bleeding was enabled by default for textures with alpha channels that were compressed to 4 bits per pixel. This improves the quality of most textures containing transparency, but caused problems for developers that wanted to store data other than transparency in their alpha channels. We decided from that point onwards that the bleed pre-process should be opt-in.

    If you enable the bleed option (-l in the PVRTexTool command-line) when compressing textures with transparency in their alpha channel, it should result in equivalent or better quality than textures compressed with the older versions of PVRTexTool.

    A bug has been filed against our texture compression guide to explain the benefits of bleeding and other pre-processing techniques (BRN58781).
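    For reference, the bleed pre-process described above can be sketched roughly like this. This is a simplified, hedged illustration in Python/NumPy, not PVRTexTool’s actual implementation; the function name is invented, and edge handling here wraps around because `np.roll` is used for the neighbour shifts:

```python
import numpy as np

def alpha_bleed(rgba, iterations=4):
    """Push RGB from visible texels (alpha > 0) outward into fully
    transparent texels, so compressor interpolation near alpha edges
    blends against sensible colours instead of arbitrary (often black)
    data. The alpha channel itself is left untouched."""
    img = rgba.astype(np.float32).copy()
    visible = img[..., 3] > 0
    for _ in range(iterations):
        grown = visible.copy()
        acc = np.zeros_like(img[..., :3])
        count = np.zeros(img.shape[:2], dtype=np.float32)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            shifted_vis = np.roll(visible, (dy, dx), axis=(0, 1))
            shifted_rgb = np.roll(img[..., :3], (dy, dx), axis=(0, 1))
            # Transparent texels with a visible neighbour in this direction
            take = shifted_vis & ~visible
            acc[take] += shifted_rgb[take]
            count[take] += 1
            grown |= shifted_vis
        fill = (count > 0) & ~visible
        # Average the visible neighbours' colours into the transparent texel
        img[..., :3][fill] = acc[fill] / count[fill][:, None]
        visible = grown  # bled texels count as visible next iteration
    out = rgba.copy()
    out[..., :3] = np.clip(img[..., :3].round(), 0, 255).astype(np.uint8)
    return out
```

    With a pre-process like this, the RGB values hidden behind zero alpha are no longer arbitrary, which is exactly why it helps most transparency textures but hurts anyone storing non-transparency data in the alpha channel.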

    #52962

    I had the same idea and already tried the bleeding option. The result looks different but equally bad. Even with bleeding enabled, the output of v3.23 is still much better than 4.2.0’s. Maybe 3.23 did the bleeding differently?

    When I look at the compressed file from v3.23 and disable the alpha channel to see the bled pixels, it looks quite different. When I recompress the v3.23 output with 4.2.0, it also looks much better than what 4.2.0 produces from the raw data. Could it be that the old bleeding was better?

    Edit:
    I also tried using v3.23 from code now. I didn’t use any pre-processing like bleeding on the image and the quality is the same as in the UI/CLI tool. So unless v3.23 is doing the color bleeding automatically directly inside the CompressPVR function it doesn’t seem to be the source of the issue.

    #53046

    kevin
    Member

    Hi Markus,
    I have filed this bug as BRN58898.
    Software Bugs issue BRN58898: PVRTexTool GUI & CLI image quality regression in PVRTexTool

    Thanks,
    Kevin

    #53198

    Simon
    Moderator

    Quoting Pierre:
    “Update: I actually posted the wrong comparison.
    Attached is a new one.

    In the old library I used BEST for compression. BEST in the new library is really bad, see the attached image.
    My old post on top actually used HIGH for compression, which gave better results (I compared a few).”

    Hi Pierre,
    any chance you could post the source image for your test case?
    Thanks
    Simon

    #53258

    Simon
    Moderator

    Quoting Markus:
    “I noticed similar issues. With PVRTexTool v3.23 the quality is much better for PVRTC1 4bpp RGBA than with 4.2.0. I will open a support ticket because I cannot share the source images here.

    Edit: My ticket number is 719”

    Hi Markus,
    I had a look at the image data you uploaded (which Kevin has assigned as BRN58781), trying it with both the earlier 3.23 version and the latest compressor.

    When zoomed in, it looks to me that the 3.23 version is actually making a mess of the alpha channel, but perhaps it is being blended over a ‘benign’ background, making it look better than it actually is.

    In 3.23, some parts of the compression code treated alpha just as another channel, in the sense of trying to minimise the differences between source data and compressed output, but that often resulted in fully opaque or fully transparent texels becoming partially transparent. Neither of these is desirable, so the compressor was changed to try harder to keep fully opaque/fully transparent texels that way. This does mean less data is directed to the RGB channels, so they can become more blurred, but given that the background can be arbitrary, it is more important to get the A=255 and A=0 cases correct.
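
    As a hedged illustration of that weighting change (invented function and weight value, not Imagination’s actual code), the per-texel error metric could look something like this, with extra penalty whenever a texel that is fully opaque or fully transparent in the source would change in the output:

```python
import numpy as np

def texel_error(src, out, edge_alpha_weight=16.0):
    """Per-texel squared error; src/out are float RGBA arrays in [0, 255].
    Texels whose source alpha is exactly 0 or 255 get their alpha error
    weighted much more heavily, so encodings that keep them fully
    transparent/opaque win the best-fit comparison."""
    err = ((src - out) ** 2).sum(axis=-1)
    edge = (src[..., 3] == 0) | (src[..., 3] == 255)
    alpha_err = (src[..., 3] - out[..., 3]) ** 2
    return err + np.where(edge, (edge_alpha_weight - 1.0) * alpha_err, 0.0)
```

    Under a metric like this, an alpha drift of 5 on an A=255 texel costs far more than the same drift on an A=128 texel, which is consistent with the behaviour described above: alpha-edge texels stay put, at the price of less error budget for RGB.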

    Since you didn’t want your source image made public, I’ve tried to create a simple test that is “equivalent” to the detail toward the base of your ‘wild’ image.

    [Image: ThreeVersions]
    From left to right this shows the source image, 3.23 results, and most recent ‘research’ version of the compressor.
    The 3.23 output may look a little sharper, but this comes at the cost of poor-quality alpha.
    Zooming in and showing the alpha channel on the right, the source is:

    [Image: Example-src]

    which gets turned into the following by the older compressor:
    [Image: Example-3.2]
    As you can see, some areas that should be fully opaque are quite transparent.

    Finally the most recent:
    [Image: Example-latest]
    The alpha is considerably better but the colours, unfortunately, in the top left have suffered.

    Attachments:
    #53430

    Hi Simon,
    thanks for taking the time to look into this. It’s definitely true that the alpha preservation in 4.2 is better than it was in 3.2, but at least for my source material the mess in the alpha channel is hardly visible. With a pink background you hardly see any issues, and even when the background varies a lot, so that the unwanted partial transparency could become more visible, it’s still not a problem.

    The loss of detail in the color channels is very visible, though. So the assumption that preserving the alpha channel is more important than the color detail seems to be a problem in this case. Would it be possible to tweak how strongly the alpha channel is weighted? Maybe the compressor could even do this automatically depending on the content of the RGB channels, so that when there is high-frequency data in the RGB channels the alpha channel becomes less important?

    Right now I use 3.2 together with 4.2. So if the quality of 4.2 isn’t right the user of the content pipeline can choose to use the older compressor.

    Regards,

    Markus

    #53449

    Simon
    Moderator

    Markus,
    That might be a little tricky, as it would require changing the interface to the exported tools, so it would need to be discussed with the developer support team.

    Having said that, there appears to be a relatively easy workaround. As I mentioned above, the 4.2 compressor attempts to keep fully opaque texels fully opaque and, similarly, fully transparent texels fully transparent, by weighting those cases more strongly in its decisions. For pixels with alpha in the range [1,254], however, it just uses the original weights, i.e. alpha is treated with the same importance as R, G, and B.

    I thus took your original 256×512 test image (the one logged as BRN58898) and, in GIMP, selected a rectangle around a “decorative” region, e.g., the lower area from [100,328] to [153,404] (I set the selection feather to 3 pixels, but it might not matter). I then used the Curves tool to lower the alpha, i.e.

    [Image: curves-adjust]

    This makes the opaque parts in that region very slightly transparent, so the compressor won’t apply the “fully opaque” weighting in its best-fit algorithm.

    Running this through the 4.2 compressor appears to ‘fix’ the problem you have, or at least reduces the colour blurring considerably. There is no need (in fact it wouldn’t be desirable) to apply this to the large, contiguous, opaque regions.

    Regards
    Simon

    Attachments:
    #53528

    Hi Simon,
    thanks for the workaround. It’s not quite practical to do this by hand, as there are image sequences with 50 or more pictures in them, but maybe I could add an option to apply it to the whole image automatically in our content pipeline. This will slow things down a bit, but at least I don’t have to keep two versions of the texture converter.
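
    Such an automated nudge could be sketched roughly like this. Everything here is an assumption for illustration (the function name, the block-variance heuristic for spotting “decorative” regions, and the threshold); it simply pushes fully opaque alpha from 255 down to 254 in detailed blocks, which per Simon’s description is enough to bypass the “fully opaque” weighting:

```python
import numpy as np

def nudge_opaque_alpha(rgba, window=4, variance_threshold=100.0):
    """In blocks whose RGB content is high-variance ("decorative"),
    change alpha 255 -> 254 so the compressor spends more of its error
    budget on the colour channels. Flat regions are left fully opaque."""
    img = rgba.copy()
    h, w = img.shape[:2]
    rgb = img[..., :3].astype(np.float32)
    for y in range(0, h, window):
        for x in range(0, w, window):
            if rgb[y:y + window, x:x + window].var() > variance_threshold:
                block_a = img[y:y + window, x:x + window, 3]
                block_a[block_a == 255] = 254  # break full opacity
    return img
```

    A pass like this is cheap relative to PVRTC compression itself, so running it over a 50-image sequence before handing the data to the compressor should cost little.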

    Thanks for your help. It’s really appreciated.

    Markus
