My .NET WinForms application creates three OpenGL rendering contexts in my main window, and then allows the user to pop up other windows, where each window has two more rendering contexts.
As Nathan Kidd correctly said, the limit is implementation-specific, and all you can do is run some tests on common hardware.
I was bored at today's department meeting, so I tried to piece together a bit of code which creates OpenGL contexts and tries some rendering. I tried rendering with and without textures, and with and without a forward-compatible OpenGL context.
It turned out that the limit is pretty high for GeForce cards (maybe even no limit). For a desktop Quadro, there was a limit of 128 contexts that were able to repaint correctly; the program was able to create 128 more contexts with no errors, but the windows contained rubbish.
It was even more interesting on an ATI Radeon 6950: there the redrawing stopped at window #105, and creating rendering context #200 failed.
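For reference, a minimal sketch (C++/Win32, linked against opengl32.lib) of the kind of loop such a test runs: keep creating windows and contexts until wglCreateContext refuses. This is not the actual test program; the window class name is illustrative, and the real program also verifies that each window repaints correctly.

    // Minimal sketch of the test loop: create windows and contexts until
    // the driver refuses. Assumes a window class "GLTest" was registered
    // beforehand; error handling and cleanup are omitted.
    #include <windows.h>
    #include <utility>
    #include <vector>

    int CountContexts(HINSTANCE hInst)
    {
        std::vector<std::pair<HDC, HGLRC>> contexts;
        for (;;)
        {
            HWND wnd = CreateWindowW(L"GLTest", L"", WS_OVERLAPPEDWINDOW,
                                     0, 0, 64, 64, nullptr, nullptr, hInst, nullptr);
            if (!wnd)
                break;
            HDC dc = GetDC(wnd);

            PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd) };
            pfd.nVersion   = 1;
            pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
            pfd.iPixelType = PFD_TYPE_RGBA;
            pfd.cColorBits = 32;
            SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);

            HGLRC rc = wglCreateContext(dc);   // fails once the limit is hit
            if (!rc)
                break;
            contexts.push_back({ dc, rc });
        }
        return (int)contexts.size();           // contexts deliberately leaked: it's a stress test
    }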
If you want to try for yourself, the program can be found here: Max OpenGL Contexts test (there is full source code plus Win32 binaries).
That's the result. One piece of advice: avoid using multiple contexts where possible. Multiple contexts are understandable in an application running on multiple monitors, but applications on a single monitor should resort to a single context. Context switching is slow. And that's not all. Applications where OpenGL windows are overlapped by other windows require hardware clipping regions. There is one hardware clipping region on GeForce and eight or more on Quadro (CAD applications often use windows and menus that overlap the OpenGL window, in contrast with games). When more regions are needed, rendering falls back to software - so again - having lots of OpenGL windows (contexts) is not a very good idea.
The best bet is that there is no real answer to this question. It probably depends on some internal limitation of the driver, the hardware, or even the OS. Something you might want to check is the number of available texture units using glGet(GL_MAX_TEXTURE_UNITS), but that may or may not be indicative.
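(In C that query is spelled glGetIntegerv; a small example, assuming a context is already current - GL_MAX_TEXTURE_UNITS is the legacy fixed-function limit, GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS its shader-era counterpart:)

    // Requires a current OpenGL context.
    GLint maxTextureUnits = 0;
    glGetIntegerv(GL_MAX_TEXTURE_UNITS, &maxTextureUnits);              // legacy fixed-function limit
    GLint maxImageUnits = 0;
    glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxImageUnits); // shader-era equivalent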
A common solution to avoid this is to create multiple viewports within a single context rather than multiple contexts in a single window. It shouldn't be too hard to unite the two contexts that share a window into a single context with two viewports and some kind of UI widget to serve as the splitter, as sketched below. Multiple windows are a different story, and you may want to consider completely rethinking your UI design if there is an actual need for 26 separate OpenGL windows.
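A rough sketch of the single-context, two-viewport idea, using the scissor test so each clear stays inside its half; DrawLeftScene and DrawRightScene are placeholders for your own per-view rendering:

    // One context, two viewports in the same window.
    // width/height are the client-area size.
    void RenderSplit(int width, int height)
    {
        glEnable(GL_SCISSOR_TEST);            // keep each clear inside its half

        glViewport(0, 0, width / 2, height);  // left view
        glScissor (0, 0, width / 2, height);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        DrawLeftScene();

        glViewport(width / 2, 0, width - width / 2, height);  // right view
        glScissor (width / 2, 0, width - width / 2, height);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        DrawRightScene();

        glDisable(GL_SCISSOR_TEST);
    }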
It's hard for me right now to think of a real UI use case that would actually require 26 different OpenGL windows operating simultaneously. Maybe another option is to create a pool of, say, 5-10 contexts and reuse them only in the windows (tabs?) that are currently visible to the user. I didn't try it, but it should be possible to create a context inside a plain window that contains nothing else and then move that window from parent window to parent window, to whichever top-level window it is needed in.
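On the Win32 side, moving such a window is just SetParent; a hypothetical helper (I haven't verified the idea works end-to-end with a live GL context, and the names are illustrative):

    // glChild is a plain child window that owns one pooled GL context;
    // reparent it into whichever top-level window currently needs it.
    void AttachGLWindow(HWND glChild, HWND newParent, int width, int height)
    {
        SetParent(glChild, newParent);                  // move between parent windows
        MoveWindow(glChild, 0, 0, width, height, TRUE); // fit the new host
        ShowWindow(glChild, SW_SHOW);
    }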
EDIT -
Well, actually it's not that hard to think of one. The latest Chrome (9.x.x), which supports WebGL, may want to open many tabs, each with a WebGL context... I wonder if they handle this in any way. Just tried it and ran out of memory after 13 tabs... That would actually be a good check for you as well, to see if it's something you're doing wrong or if Chrome and Firefox (4.0.x-beta) have the same problem.
Given the diverse nature of OpenGL drivers, your best bet is probably to check the behavior of the major drivers (AMD / Intel / NVIDIA / MS software renderer) and, on first startup, run a test. E.g. if you can see that NVIDIA always slows down like you saw, then just run a quick loop until you see where the limit is on that machine (or rather, that card). It's not much fun, but I think it's pretty hard to reliably push the limits otherwise.
In other words, the "best bet" is just as previously answered: you can't know beforehand.
If you go through that much trouble to set OpenGL up in an over-the-top multi-threaded fashion, you could just as well benefit from it and consider switching to Vulkan. See, by design, the OpenGL architecture funnels all the hard-earned context/thread-separated drawing operations into one single driver thread, which then redistributes all these calls across virtual hardware threads that map onto each context. The driver is in essence a huge bottleneck, because it is not itself threaded, despite any GLEW MX sitting around. It is simply not designed to handle this well.
That said, I am curious whether you used an older version of GLEW, or whether you do all the extension handling in some other way, since the latest GLEW libs no longer support MX. One more reason to switch.