Skybox OpenGL ES iPhone and iPad

悲哀的现实 2021-01-03 10:23

I need to create a virtual tour tool for iOS. It's an archaeological application: the user could open it when he's inside an historic building or when he…

3 Answers
  • 2021-01-03 10:33

    I think you can do this. Here are some links to help get you started:

    http://sidvind.com/wiki/Skybox_tutorial

    Common problems:

    (I would post direct links but Stack Overflow won't let me.)

    Look at Stack Overflow questions 2859722 and 2297564.

    Some programs and tips to help make the textures:

    Spacescape

    There are some great OpenGL tutorials here:

    nehe.gamedev.net

    They are not iPhone-specific, but they explain OpenGL pretty well. I think some folks have ported these to the phone as well; I just can't find them now.

  • 2021-01-03 10:53

    It's going to be pretty tough if you're learning OpenGL ES from scratch. I'd use a graphics engine to do most of the heavy lifting. I'm currently playing with Ogre3D, and from what I've seen so far I can recommend it: http://www.ogre3d.org/. It has skybox support (and much more) out of the box, and it should be pretty straightforward to do.

  • 2021-01-03 10:59

    Since I know OpenGL ES quite well, I had a go at a demo project, doing much of what you describe. The specific intention was to do everything in the simplest way available under OpenGL ES as long as the performance was good enough.

    Starting from the OpenGL template that Apple supply, I have written one new class with a heavily commented implementation file 122 lines long that loads PNG images as textures. I've modified the sample view controller to draw a skybox as required and to respond to touches with a version of the normal iPhone inertial scrolling, which has meant writing less than 200 lines of (also commented) code.

    To achieve this I needed to know:

    • the CoreGraphics means for getting pixel data from a PNG
    • how to set up the PROJECTION stack to get a perspective projection with the correct aspect ratio
    • how to manipulate the MODELVIEW stack to ensure two-axis rotation (first-person shooter or Google Street View style) of the scene according to member variables, and to ensure that the cube geometry I defined doesn't visibly intersect the near clip plane (this and the projection setup above are sketched just after this list)
    • how to specify vertex locations and texture coordinates to OpenGL
    • how to specify the triangles OpenGL should construct between vertices
    • how to set the OpenGL texture parameters appropriately to supply only one level of detail for the texture
    • how to track a touch to manipulate the member variables dictating rotation, including a tiny bit of mechanics to give an inertial rotation
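
    Roughly, the first two bullets boil down to something like the following. This is a simplified sketch rather than the project's exact code; the 45° field of view, the clip distances and the rotationX/rotationY names are only illustrative.

        #include <OpenGLES/ES1/gl.h>
        #include <math.h>

        // Rough sketch of the PROJECTION / MODELVIEW setup. rotationX and
        // rotationY are the member variables driven by touch tracking.
        static void setUpCamera(float viewWidth, float viewHeight,
                                float rotationX, float rotationY)
        {
            const float aspect = viewWidth / viewHeight;
            const float nearZ = 0.1f, farZ = 10.0f;
            const float top = nearZ * tanf(45.0f * (float)M_PI / 360.0f); // ~45° vertical FOV

            // Perspective projection with the correct aspect ratio.
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glFrustumf(-top * aspect, top * aspect, -top, top, nearZ, farZ);

            // Two-axis rotation (first-person shooter / Street View style).
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glRotatef(rotationX, 1.0f, 0.0f, 0.0f);  // look up/down
            glRotatef(rotationY, 0.0f, 1.0f, 0.0f);  // look left/right
            // The camera sits at the centre of the unit cube, so with the near
            // plane at 0.1 the cube's faces (at distance 0.5) never clip.
        }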

    Of course, the normal view controller lifecycle instructions are obeyed. Textures are loaded on viewDidLoad and released on viewDidUnload, for example, to ensure that this view controller plays nicely with potential memory warnings.

    The main observations are that, beyond knowing the Objective-C signalling mechanisms, most of this is C stuff. You're primarily using C arrays and references to make C function calls, both for OpenGL and CoreGraphics. So a prerequisite for coding this yourself is being happy in C, not just Objective-C.

    The CoreGraphics stuff is a bit tedious but it's all just reading the docs to figure out how each type of thing relates to the next — none of it is really confusing. Just get into your head that you need a data provider for the PNG data, you can create an image from that data provider and then create a bitmap context with memory that you've allocated yourself, draw the image into the context and then release everything except the memory you allocated yourself to be left with the result. That result can be directly uploaded to OpenGL. It's relatively short boilerplate stuff, but OpenGL has no concept of PNGs and CoreGraphics has no convenient methods of pushing things into OpenGL.
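
    In rough outline, that sequence looks like this (a sketch with no error handling, assuming 32-bit RGBA output; the function name is just for illustration):

        #include <CoreGraphics/CoreGraphics.h>
        #include <stdlib.h>

        // Data provider -> CGImage -> bitmap context backed by memory we
        // allocated ourselves -> draw -> release everything except that memory.
        static void *createPixelsFromPNG(const char *path, size_t *width, size_t *height)
        {
            CGDataProviderRef provider = CGDataProviderCreateWithFilename(path);
            CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, false,
                                                                kCGRenderingIntentDefault);
            CGDataProviderRelease(provider);

            *width = CGImageGetWidth(image);
            *height = CGImageGetHeight(image);

            // The memory we keep; everything else below is temporary.
            void *pixels = calloc(*width * *height, 4);

            CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
            CGContextRef context = CGBitmapContextCreate(pixels, *width, *height,
                                                         8, *width * 4, colourSpace,
                                                         (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
            CGColorSpaceRelease(colourSpace);

            CGContextDrawImage(context, CGRectMake(0, 0, *width, *height), image);
            CGContextRelease(context);
            CGImageRelease(image);

            return pixels; // suitable for passing straight to glTexImage2D
        }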

    I've assumed that textures are a suitable size on disk. For practical purposes, that means assuming they're a power-of-two in size along each edge. Mine are 512x512.

    The OpenGL texture management stuff is easy enough; it's just reading the manual to learn about texture names, name allocation, texture parameters and uploading image data. That's more routine stuff that is about knowing the right functions than about making an intuitive leap.
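
    The corresponding calls are roughly these; a sketch assuming the 512x512 RGBA pixels produced above, not a verbatim extract from the project:

        #include <OpenGLES/ES1/gl.h>

        // One texture name per cube face; no mipmaps, so only the base level
        // of detail is ever supplied.
        static GLuint createFaceTexture(const void *pixels)
        {
            GLuint name;
            glGenTextures(1, &name);
            glBindTexture(GL_TEXTURE_2D, name);

            // GL_LINEAR (rather than GL_LINEAR_MIPMAP_*) means OpenGL only
            // ever needs level 0.
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

            // Clamp so the edges of adjacent faces don't bleed into each other.
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, pixels);
            return name;
        }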

    For supplying the geometry to OpenGL I've just written out the arrays in full. I guess you need a bit of a spatial mind to do it, but sketching out a 3D cube on paper and numbering the corners would be a big help. There are three relevant arrays:

    • the vertex positions
    • the texture coordinates that go with each vertex location
    • a list of indices referring to vertex positions that defines the geometry

    In my code I've used 24 vertices, treating each face of the cube as a logically discrete thing (so, six faces, each with four vertices). I've defined the geometry using triangles only, for simplicity. Supplying this stuff to OpenGL is actually quite annoying when you're starting; making an error generally means your program crashes deep inside the OpenGL driver without giving you a hint as to what you did wrong. It's probably best to build up a bit at a time.
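
    To give a flavour of those arrays, here's a sketch cut down to a single face (the real thing repeats the idea for all six faces, hence the 24 vertices); the particular numbering is only illustrative:

        #include <OpenGLES/ES1/gl.h>

        // Four vertices of the +Z face of a unit cube centred on the origin.
        static const GLfloat faceVertices[] = {
            -0.5f, -0.5f, 0.5f,
             0.5f, -0.5f, 0.5f,
             0.5f,  0.5f, 0.5f,
            -0.5f,  0.5f, 0.5f,
        };

        // One texture coordinate pair per vertex, covering the whole image.
        static const GLfloat faceTexCoords[] = {
            0.0f, 1.0f,
            1.0f, 1.0f,
            1.0f, 0.0f,
            0.0f, 0.0f,
        };

        // Two triangles, as indices into the arrays above.
        static const GLubyte faceIndices[] = { 0, 1, 2,   0, 2, 3 };

        static void drawFace(GLuint textureName)
        {
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, textureName);

            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, faceVertices);
            glTexCoordPointer(2, GL_FLOAT, 0, faceTexCoords);

            glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, faceIndices);
        }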

    In terms of a UIView capable of hosting OpenGL content, I've more or less used the vanilla stuff Apple directly supply in the OpenGL template. The one change I made was explicitly to disable any attempted use of OpenGL ES 2.x. 1.x is more than sufficient for this task, so we gain simplicity firstly by not providing two alternative rendering paths and secondly because the ES 2.x path would be a lot more complicated. ES 2.x is the fully programmable pipeline with pixel and vertex shaders, but in ES 2.x the fixed pipeline is completely removed. So if you go that route you have to supply your own substitutes for the normal matrix stacks, you have to write vertex and fragment shaders to do 'a triangle with a texture', etc.
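
    Concretely, that one change amounts to requesting only a 1.x context where the template normally tries 2.x first (a sketch, not the template's literal code):

        #import <OpenGLES/EAGL.h>

        // Only the fixed-pipeline ES 1.x API is requested; no 2.x fallback path.
        static EAGLContext *createES1Context(void)
        {
            EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
            [EAGLContext setCurrentContext:context];
            return context;
        }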

    The touch tracking isn't particularly complicated, more or less just requiring me to understand how the view frustum works and how touches are delivered in Cocoa Touch. Once you've done everything else, this bit should be quite easy.

    Notably, the maths I had to implement was extremely simple. Just the touch tracking, really. Assuming you wanted a Google Maps-type view meant that I could rely entirely on OpenGL's built-in ability to rotate things, for example. At no point do I explicitly handle a matrix.
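
    The inertial part really is just a couple of angles and a decaying velocity, something along these lines (a sketch; the decay constant is an arbitrary choice):

        // Per-frame update: touches set velocityX/velocityY, rendering applies
        // them to the rotation angles and lets them decay once the finger lifts.
        typedef struct {
            float rotationX, rotationY;   // current view angles, in degrees
            float velocityX, velocityY;   // degrees per frame
        } ViewerState;

        static void applyInertia(ViewerState *state)
        {
            state->rotationX += state->velocityX;
            state->rotationY += state->velocityY;

            // Exponential decay gives the familiar flick-and-coast feel.
            state->velocityX *= 0.95f;
            state->velocityY *= 0.95f;
        }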

    So, how long it would take you to write depends on your own confidence with C and with CoreGraphics, and how happy you are sometimes coding in the dark. Because I know what I'm doing, the whole thing took two or three hours.

    I'll try to find somewhere to upload the project so that you can have a look at it. I think it'd be helpful to leaf through it and see how alien it looks. That'll probably give you a good idea about whether you could implement something that meets all of your needs within the time frame of your project.

    I've left the view controller as having exactly one view, which is the OpenGL view. However, the normal iPhone compositing rules apply and in your project you can easily put normal controls on top. You can grab my little implementation at mediafire. Stack Overflow's post-length limits prevent me from putting big snippets of code here, but please feel free to ask if you have any specific questions.
