How do I support different OpenGL versions?

2021-02-08 07:04

I have two different systems, one with OpenGL 1.4 and one with OpenGL 3. My program uses shaders, which are part of OpenGL 3 and are only supported as ARB extensions in the 1.4 implementation. How can I support both systems without duplicating large parts of my code?

3 Answers
  • 2021-02-08 07:30

    Unless you really have to support 10 year old graphics cards for some reason, I strongly recommend targeting OpenGL 2.0 instead of 1.4 (in fact, I'd even go as far as targeting version 2.1).

    Since using "shaders that are core in 3.0" necessarily means that the graphics card must be capable of at least some version of GLSL, this rules out any hardware that is not capable of providing at least OpenGL 2.0. That means that if someone has OpenGL 1.4 and can run your shaders, they are using 8-10 year old drivers. There is little to gain (apart from a support nightmare) from that.

    Targeting OpenGL 2.1 is reasonable; there are hardly any systems nowadays which don't support that (even assuming a minimum of OpenGL 3.2 may be an entirely reasonable choice).
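
    As a concrete starting point, here is a minimal sketch (the helper name is my own) of detecting the context version at runtime so you can pick a codepath; note that glGetIntegerv(GL_MAJOR_VERSION) only exists from 3.0 onwards, so on older contexts you have to parse the GL_VERSION string:

    #include <GL/gl.h>
    #include <cstdio>

    struct GLVersion { int major, minor; };

    /* Requires a current context. Parses the leading "major.minor" of the
       GL_VERSION string, which works on any version, unlike
       glGetIntegerv(GL_MAJOR_VERSION) (available only from 3.0 on). */
    static GLVersion queryGLVersion()
    {
        GLVersion v = { 1, 0 };
        const char *s = reinterpret_cast<const char *>(glGetString(GL_VERSION));
        if (s)
            std::sscanf(s, "%d.%d", &v.major, &v.minor); /* e.g. "2.1.2 ..." */
        return v;
    }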

    The market price for an entry level OpenGL 3.3 compatible card with roughly 1000x the processing power of a high end OpenGL 1.4 card was around $25 some two years ago. If you ever intend to sell your application, you have to ask yourself whether someone who cannot afford (or does not want to afford) this would be someone you'd reasonably expect to pay for your software.

    Having said that, supporting OpenGL 2.x and OpenGL >3.1 at the same time is a nightmare, because there are non-trivial changes in the shading language which go far beyond #define in varying and which will bite you regularly.

    Therefore, I have personally chosen to never again target anything lower than version 3.2 with instanced arrays and shader objects. This works with all hardware that can reasonably be expected to have the processing power to run a modern application, and it also covers users who were too lazy to upgrade their driver to 3.3, providing the same features in a single code path. OpenGL 4.x features are loadable as extensions if available, which is fine.
    But, of course, everybody has to decide for himself/herself which shoe fits best.

    Enough of my blah blah, back to the actual question:
    About not duplicating code for extensions/core, you can in many cases use the same names, function pointers, and constants. However, be warned: As a blanket statement, this is illegal, undefined, and dangerous.
    In practice, most (not all!) extensions are identical to the respective core functionality, and work just the same. But how to know which ones you can use and which ones will eat your cat? Look at gl.spec -- a function which has an alias entry is identical and indistinguishable from its alias. You can safely use these interchangeably.
    Extensions which are problematic often have an explanatory comment somewhere as well (such as "This is not an alias of PrimitiveRestartIndexNV, since it sets server instead of client state."), but do not rely on these, rely on the alias field.
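
    To illustrate sharing entry points between core and extension: glBindBuffer and glBindBufferARB are listed as aliases in gl.spec, so a single function pointer can serve both. In this sketch, getProcAddress is a placeholder for whatever your platform provides (wglGetProcAddress, glXGetProcAddressARB or SDL_GL_GetProcAddress):

    #include <GL/gl.h>

    #ifndef APIENTRY
    #define APIENTRY /* calling convention; defined by windows.h on Win32 */
    #endif

    typedef void (APIENTRY *PFNBINDBUFFER)(GLenum target, GLuint buffer);
    static PFNBINDBUFFER pglBindBuffer = 0;

    extern void *getProcAddress(const char *name); /* platform loader stub */

    static void loadBufferFunctions()
    {
        /* Try the core 1.5 name first, then fall back to the entry point of
           ARB_vertex_buffer_object -- gl.spec marks them as aliases, so they
           are interchangeable. */
        pglBindBuffer = (PFNBINDBUFFER) getProcAddress("glBindBuffer");
        if (!pglBindBuffer)
            pglBindBuffer = (PFNBINDBUFFER) getProcAddress("glBindBufferARB");
    }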

  • 2021-02-08 07:34

    It depends: do you want to use OpenGL 3.x functionality? Not merely use the API, but use the actual hardware features behind that API.

    If not, then you can just write against GL 1.4 and rely on the compatibility profile. If you do, then you will need separate codepaths for the different levels of hardware you intend to support. This is standard practice for supporting different levels of hardware functionality; see the sketch below.
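
    One common way to organize those codepaths (the class names here are illustrative, not prescribed) is to hide each hardware level behind a small renderer interface and pick the implementation once at startup:

    struct Mesh; /* your geometry type */

    /* The rest of the program only talks to the abstract interface. */
    struct Renderer {
        virtual ~Renderer() {}
        virtual void drawMesh(const Mesh &mesh) = 0;
    };

    /* GL 1.4 path: fixed function plus ARB shader extensions. */
    struct FixedFunctionRenderer : Renderer {
        void drawMesh(const Mesh &mesh) { /* glVertexPointer & friends */ }
    };

    /* GL 3.x core path: VAOs, generic vertex attributes, GLSL 1.50. */
    struct CoreRenderer : Renderer {
        void drawMesh(const Mesh &mesh) { /* glVertexAttribPointer etc. */ }
    };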

  • 2021-02-08 07:38

    As @Nicol Bolas already told you, it's inevitable that you create two codepaths for OpenGL-3 core and OpenGL-2, because OpenGL-3 core deliberately breaks with compatibility. However, the stakes are not as bad as they might seem: most of the time the code will differ only in nuances, and both codepaths can be written in a single source file using conditional compilation.

    For example

    #ifdef OPENGL3_CORE
        glVertexAttribPointer(Attribute::Index[Position], 3, GL_FLOAT, GL_FALSE, attribute.position.stride(), attribute.position.data());
        glVertexAttribPointer(Attribute::Index[Normal], 3, GL_FLOAT, GL_FALSE, attribute.normal.stride(), attribute.normal.data());
    
    #else
        glVertexPointer(3, GL_FLOAT, attribute.position.stride(), attribute.position.data());
        glNormalPointer(GL_FLOAT, attribute.normal.stride(), attribute.normal.data());
    #endif
    

    GLSL shaders can be reused similarly, using macros to rename occurrences of predefined but deprecated identifiers, or to introduce keywords from later versions, e.g.

    #ifdef USE_CORE
    #define gl_Position position
    #else
    #define in varying
    #define out varying
    #define inout varying
    
    vec4 gl_Position;
    #endif
    

    Usually you will have a set of standard headers in your program's shader management code that are prepended to build the final source passed to OpenGL, again depending on the codepath in use.
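
    As a sketch of what that can look like (the helper name and exact header contents are my own, and your GL/extension-loader header is assumed to be included): glShaderSource accepts an array of strings, so a per-codepath header and the shared shader body need not be concatenated by hand:

    /* Per-codepath preambles; #version must come first in the final source. */
    static const char *headerCore =
        "#version 150\n"
        "#define USE_CORE\n";
    static const char *headerLegacy =
        "#version 120\n"
        "#define in varying\n"  /* for vertex outputs / fragment inputs */
        "#define out varying\n";

    static void compileWithHeader(GLuint shader, const char *body, bool core)
    {
        const char *sources[2] = { core ? headerCore : headerLegacy, body };
        glShaderSource(shader, 2, sources, 0); /* 0: strings are NUL-terminated */
        glCompileShader(shader);
    }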
