I'm using OpenCV and openFrameworks (i.e. OpenGL) to calculate a camera's world transform and projection matrices from an image (and later, several images for triangulation).
The first thing to do is verify that the markers are reprojected correctly onto the image given the estimated intrinsic and extrinsic camera matrices. Then you can find the camera position in the global frame and check whether it agrees with the marker positions. (Use the coordinate system convention of OpenCV: x right, y down, z forward.) Once this is done, there is not much that can go wrong.

Since you want the points to lie on the xz plane, you need just a single coordinate transformation, which, as I see, you do with the gWorldToCalibration matrix. Apply that transformation to both the markers and the camera position, and verify that the markers end up in the right place. The camera position will then also be correct (unless something went wrong with the handedness of the coordinate system, but that is easily corrected).
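A minimal numpy-only sketch of those checks (the intrinsics K, rotation R, translation t, and marker position below are made-up placeholder values, not your calibration; in OpenCV itself you would get R, t from `cv::solvePnP`/`cv::Rodrigues` and reproject with `cv::projectPoints`):

```python
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy) -- placeholder values.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assumed extrinsics: a small rotation about y and a translation,
# in OpenCV's convention where a world point X maps to x ~ K (R X + t).
theta = np.deg2rad(10.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.1, -0.2, 2.0])

def project(K, R, t, X):
    """Reproject a 3-D world point X to pixel coordinates."""
    Xc = R @ X + t              # world frame -> camera frame
    x = K @ Xc
    return x[:2] / x[2]         # perspective divide

# 1) Reprojection check: a marker on the xz plane (y = 0).
marker = np.array([0.5, 0.0, 0.5])
pixel = project(K, R, t, marker)

# 2) Camera position in the global frame: C = -R^T t,
#    i.e. the point that the extrinsics map to the camera origin.
C = -R.T @ t

# 3) Apply the same rigid transform (your gWorldToCalibration) to both
#    the marker and the camera centre, so they stay consistent.
#    Identity here as a placeholder for the sketch.
gWorldToCalibration = np.eye(4)
marker_cal = (gWorldToCalibration @ np.append(marker, 1.0))[:3]
C_cal = (gWorldToCalibration @ np.append(C, 1.0))[:3]

# 4) Handedness sanity check: a proper rotation has det(R) = +1.
print(pixel, C_cal, np.linalg.det(R))
```

The key identity to test is that the recovered camera centre really maps back to the camera origin, i.e. `R @ C + t == 0`; if that holds and the reprojected markers land on the detected image points, the extrinsics are consistent.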