Question
I'm attempting to contribute to a cross-platform, memory-safe API for creating and using OpenGL contexts in Rust called glutin. There is an effort to redesign the API in a way that will allow users to create a GL context for a pre-existing window.
One of the concerns raised was that this might not be memory safe if a user attempts to create a GL context for a window that has a pre-existing DirectX context.
The documentation for wglCreateContext suggests that it will return NULL upon failure, however it does not go into detail about what conditions might cause this.
Will wglCreateContext fail safely (by returning NULL) if a DirectX context already exists for the given HDC (device context)? Or is the behaviour in this situation undefined?
I do not have access to a Windows machine with OpenGL support and am unable to test this directly myself.
Answer 1:
The real issue I see here is that wglCreateContext may fail for any reason, and you have to be able to deal with that.
That being said, the way you formulated your question reeks of a fundamental misunderstanding of the relationship between OpenGL contexts, device contexts and windows. To put it in three simple words: There is none!
Okay, that begs for clarification. What's the deal here? Why is there an HDC parameter to wglCreateContext if these are not related?
It all boils down to pixel formats (or framebuffer configuration). A window, as you can see it on the screen, is a direct 1:1 representation of a block of memory. This block of memory has a certain pixel format. However, as long as only abstract drawing methods are used (as the GDI does), the precise pixel format being used doesn't matter, and the graphics system may silently switch the pixel format as it sees fit. In times long bygone, when graphics memory was scarce, this could mean huge savings.
However, OpenGL assumes it operates on framebuffers with a specific, unchanging pixel format. So to support that, a new API was added that allows nailing down the internal pixel format of a given window. Since the only part of the graphics system that's actually concerned with the framebuffer format is the part that draws stuff, i.e. the GDI, the framebuffer configuration happens through that part. HWNDs are related to passing around messages, associating input events with applications and all that jazz. But it's HDCs that relate to everything graphics. So that's why you set the pixel format through an HDC.
When creating an OpenGL context, that context has to be configured for the graphics device it's intended to be used on. And this again goes through the GDI and data structures that are addressed through HDC handles. But once an OpenGL context has been created, it can be used with any HDC that refers to a framebuffer whose configured pixel format is compatible with the HDC the OpenGL context was originally created with. It can be a different HDC of the same window, or an HDC of an entirely different window altogether. And ever since OpenGL-3.3 core, an OpenGL context can be made current with no HDC at all, being completely self-contained, operating on self-managed framebuffers. And last but not least, that binding can be changed at any time.
Every time people who have no clear understanding of this implement some OpenGL binding or abstraction wrapper, they tend to get this part wrong and create unnecessarily tight straitjackets, which other people, like me, then have to fight their way out of, because the way the abstraction works is ill-conceived. The Qt guys made that mistake, the GTK+ guys did so, and now, apparently, so do you. There is this code snippet on your GitHub project page:
let events_loop = glutin::EventsLoop::new();
let window = glutin::WindowBuilder::new()
    .with_title("Hello, world!".to_string())
    .with_dimensions(1024, 768)
    .with_vsync()
    .build(&events_loop)
    .unwrap();

unsafe {
    window.make_current()
}.unwrap();

unsafe {
    gl::load_with(|symbol| window.get_proc_address(symbol) as *const _);
    gl::ClearColor(0.0, 1.0, 0.0, 1.0);
}
Arrrrggggh. Why the heck are the methods make_current and get_proc_address associated with the window? Why?! Who came up with this? Don't do this, it makes the lives of the people who have to use it miserable and painful.
Do you want to know where this leads? Horrible, messy unsafe code that does disgusting and dirty things to forcefully and bluntly disable some of the safeguards present, just so that it can get to work. Like this horrible thing I had to do to get Qt4's ill-conceived assumptions about how OpenGL works out of the way:
#ifndef _WIN32
#if OCTPROCESSOR_CREATE_GLWIDGET_IN_THREAD
    QGLFormat glformat(
        QGL::DirectRendering |
        QGL::DoubleBuffer |
        QGL::Rgba |
        QGL::AlphaChannel |
        QGL::DepthBuffer,
        0 );
    glformat.setProfile(QGLFormat::CompatibilityProfile);

    gl_hidden_widget = new QGLWidget(glformat);
    if( !gl_hidden_widget ) {
        qDebug() << "creating worker context failed";
        goto fail_init_glcontext;
    }
    gl_hidden_widget->moveToThread( QThread::currentThread() );
#endif
    if( !gl_hidden_widget->isValid() ) {
        qDebug() << "worker context invalid";
        goto fail_glcontext_valid;
    }
    gl_hidden_widget->makeCurrent();
#else
    if( wglu_create_pbuffer_with_glrc(
            3, 3, WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
            &m_hpbuffer,
            &m_hdc,
            &m_hglrc,
            &m_share_hglrc)
    ){
        qDebug() << "failed to create worker PBuffer and OpenGL context";
        goto fail_init_glcontext;
    }
    qDebug()
        << "m_hdc" << m_hdc
        << "m_hglrc" << m_hglrc;

    if( !wglMakeCurrent(m_hdc, m_hglrc) ){
        qDebug() << "failed making OpenGL context current on PBuffer HDC";
        goto fail_glcontext_valid;
    }
#endif
Source: https://stackoverflow.com/questions/44840267/will-wglcreatecontext-fail-safely-if-a-directx-context-already-exists-for-the