Background Image

    #30312

    strange99
    Member

    Hello,

    I’m new to the POWERVR SDK (and a little bit to OpenGL ES).

    Compiling and running on an iPhone 4 (iOS 4.1) works fine (except in the simulator – I don’t know why).

    I’m developing a kind of AR application where I need to display the frames from the camera (iPhone 4) as the background of my model.

    Getting the frames from the camera isn’t a problem (I can get them as BGRA memory or as a UIImage).

    But where and how can I update the background of the scene?

    Thanks in advance!

    #34409

    radek_ne
    Member

    This is for bada, but it might be helpful:

    http://developer.bada.com/blog/?p=373


    #34410

    strange99
    Member

    Thanks – I’ll take a look at that …

    #34411

    Gordon
    Moderator

    What problems are you having with the SDK projects? We have a known issue described here:
    https://www.imgtec.com/forum/forum_posts.asp?TID=999&KW=iphone&PID=3392#3392

    Selecting the appropriate Simulator SDK in the Base SDK setting fixed the problems I was having with building for the simulator – as I say in that post, we’re still looking for a better fix.

    Apple support the iOS devices themselves, so I would advise you to look at their documentation and possibly ask a question in the iOS forums about this issue: http://devforums.apple.com

    I haven’t tried this myself, but I believe one way to achieve what you want is to feed the ‘raw’ YUV data from the camera into GL using Apple’s APPLE_rgb_422 extension. I think it’s also possible to use Core Animation(?) to output the camera images on a different layer to your GL content if you don’t actually need to use the texture data in your 3D scene.
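
    As a rough, untested sketch of the rgb_422 path (pixelData, width and height here stand in for the camera frame, and you should check glGetString(GL_EXTENSIONS) for GL_APPLE_rgb_422 first):

        // Untested sketch: upload a 4:2:2 YCbCr frame with APPLE_rgb_422.
        // GL_RGB_422_APPLE and GL_UNSIGNED_SHORT_8_8_APPLE come from
        // <OpenGLES/ES2/glext.h>; pixelData, width and height stand in
        // for the camera frame.
        glBindTexture(GL_TEXTURE_2D, cameraTexture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
                     width, height, 0,
                     GL_RGB_422_APPLE, GL_UNSIGNED_SHORT_8_8_APPLE,
                     pixelData);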

    The method proposed in the article radek_ne links to is certainly a way of achieving an AR effect, but our preferred method is to take advantage of the GL_IMG_texture_stream2 extension in order to stream video data into OpenGL more efficiently than a normal glTexImage2D upload would allow. Unfortunately, this isn’t available on many platforms yet.

    #34412

    strange99
    Member

    Thank you Gordon –

    I’m thinking about using a different layer (UIView) to display the camera’s image – I don’t need it in my scene. Where is the best place (in code) to add this view?

    Again: Thanks for the very fast reply !

    #34413

    Gordon
    Moderator

    I think you will need to contact Apple for this, as the UIView API and the composition layer in general are very specific to the iPhone. These links may help you, though:

    http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/00_Introduction.html

    http://cmgresearch.blogspot.com/2010/10/augmented-reality-on-iphone-with-ios40.html
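
    If you do go the separate-layer route, one way that should work is AVFoundation’s AVCaptureVideoPreviewLayer behind your GL view. Roughly (untested, with session and glView as placeholders):

        // Untested sketch: put the live camera on a layer behind the GL
        // view so no texture upload is needed. 'session' is an already
        // configured AVCaptureSession; 'glView' is the OpenGL ES view.
        AVCaptureVideoPreviewLayer *previewLayer =
            [AVCaptureVideoPreviewLayer layerWithSession:session];
        previewLayer.frame = glView.frame;
        [glView.superview.layer insertSublayer:previewLayer
                                         below:glView.layer];

        // The GL view must let the camera layer show through (the backing
        // CAEAGLLayer needs opaque = NO as well).
        glView.opaque = NO;
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);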

    #34414

    strange99
    Member

    Thank you, Gordon –

    I managed it by myself.

    #34415

    Hi,

    just a comment on the issue: I loaded the camera data into a texture on an iPod touch 4th generation. To keep things simple, I’m memcpying the data to my own buffer so that I don’t need to worry about glTexImage2D accessing potentially invalid memory. One thing to note is that the code for getting the data recommended by Apple currently compiles only on an actual device, not in the simulator, since (at least) some header files of AVFoundation aren’t available for the simulator.

    After asking the camera for the format kCVPixelFormatType_32BGRA, the data (from the CVImageBuffer returned by CMSampleBufferGetImageBuffer() from a CMSampleBuffer) is converted correctly by glTexImage2D if the internal format is GL_RGBA and the external format is specified as GL_BGRA_EXT (or GL_BGRA) as described in the GL_APPLE_texture_format_BGRA8888 extension. (The GL_APPLE_rgb_422 extension might be more useful for the iPhone 3G.)
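
    In condensed form, the capture callback looks roughly like this (_bgraCopy and _cameraTexture are my own buffer and texture names, and the GL context is assumed to be current on this queue):

        // Delegate callback of an AVCaptureVideoDataOutput configured to
        // deliver kCVPixelFormatType_32BGRA frames.
        - (void)captureOutput:(AVCaptureOutput *)captureOutput
          didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
                 fromConnection:(AVCaptureConnection *)connection
        {
            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            CVPixelBufferLockBaseAddress(imageBuffer, 0);

            size_t width  = CVPixelBufferGetWidth(imageBuffer);
            size_t height = CVPixelBufferGetHeight(imageBuffer);

            // Copy to our own buffer so glTexImage2D never reads memory the
            // capture pipeline may already have reclaimed. This assumes
            // bytesPerRow == width * 4 (true for the presets I used;
            // otherwise copy row by row).
            memcpy(_bgraCopy, CVPixelBufferGetBaseAddress(imageBuffer),
                   CVPixelBufferGetBytesPerRow(imageBuffer) * height);

            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

            glBindTexture(GL_TEXTURE_2D, _cameraTexture);
            // Internal format GL_RGBA, external format GL_BGRA_EXT, as per
            // GL_APPLE_texture_format_BGRA8888.
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                         (GLsizei)width, (GLsizei)height, 0,
                         GL_BGRA_EXT, GL_UNSIGNED_BYTE, _bgraCopy);
        }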

    #34416

    One more comment: it’s possible to get rid of the memcpy if the buffer is correctly retained and released. Thus, on iOS devices with the SGX GPU it boils down to a single call to glTexImage2D with the external format GL_BGRA_EXT (or GL_BGRA).
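
    That is, something along these lines (same placeholder names as in my previous post):

        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferRetain(imageBuffer);            // keep the memory alive
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        glBindTexture(GL_TEXTURE_2D, _cameraTexture);
        // Same row-padding caveat as before: this assumes tightly packed rows.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                     (GLsizei)CVPixelBufferGetWidth(imageBuffer),
                     (GLsizei)CVPixelBufferGetHeight(imageBuffer),
                     0, GL_BGRA_EXT, GL_UNSIGNED_BYTE,
                     CVPixelBufferGetBaseAddress(imageBuffer));

        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        CVPixelBufferRelease(imageBuffer);           // done with this frame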

    #34417

    jgh
    Member

    Just to point out for anyone in the future who reads this thread: The GL_APPLE_rgb_422 extension is for component-based YCbCr; you can only get BiPlanar (Y plane and a CbCr plane) from the iOS camera as of iOS 4.3 (on devices newer than the 3G anyway). From 4.0 until at least 4.3 the only supported colourspaces are BGRA32 and YUV BiPlanar. I believe you can get ‘yuvs’ from AVAssetReader, however.
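
    For anyone consuming the BiPlanar format on ES 2.0, a rough, untested sketch is to upload the two planes as separate textures and convert YCbCr to RGB in the fragment shader (_lumaTexture and _chromaTexture are placeholders):

        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        // Plane 0: full-resolution Y.
        glBindTexture(GL_TEXTURE_2D, _lumaTexture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE,
                     (GLsizei)CVPixelBufferGetWidthOfPlane(imageBuffer, 0),
                     (GLsizei)CVPixelBufferGetHeightOfPlane(imageBuffer, 0),
                     0, GL_LUMINANCE, GL_UNSIGNED_BYTE,
                     CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0));

        // Plane 1: half-resolution interleaved CbCr; a LUMINANCE_ALPHA
        // texture makes Cb and Cr available as .r and .a in the shader.
        glBindTexture(GL_TEXTURE_2D, _chromaTexture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA,
                     (GLsizei)CVPixelBufferGetWidthOfPlane(imageBuffer, 1),
                     (GLsizei)CVPixelBufferGetHeightOfPlane(imageBuffer, 1),
                     0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE,
                     CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1));

        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);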
