All the major headsets have API documentation on their sites that explains how to integrate VR support into your engine; you should refer to that documentation for details. My experience is mostly with the Oculus SDK, but other SDKs are similar.
You generally don't split the screen in two yourself. You provide left- and right-eye images to the SDK; the SDK warps them to compensate for the lens optics and sends the output to the HMD display(s).
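To make that concrete, here's roughly what setting up those per-eye images looks like with the Oculus PC SDK (LibOVR 1.x) and OpenGL. Treat it as a sketch under that assumption; names and details vary between SDKs and SDK versions.

```cpp
// Sketch only: assumes Oculus PC SDK 1.x (LibOVR) with the OpenGL path.
#include <OVR_CAPI.h>
#include <OVR_CAPI_GL.h>

// One texture swap chain per eye; the SDK owns the textures, and the
// compositor reads from them, applies the lens-distortion warp, and
// drives the HMD display(s).
ovrTextureSwapChain CreateEyeSwapChain(ovrSession session, ovrEyeType eye,
                                       const ovrHmdDesc& hmdDesc)
{
    // Ask the SDK how big the eye buffer should be for this eye's FOV.
    ovrSizei size = ovr_GetFovTextureSize(session, eye,
                                          hmdDesc.DefaultEyeFov[eye], 1.0f);

    ovrTextureSwapChainDesc desc = {};
    desc.Type        = ovrTexture_2D;
    desc.Format      = OVR_FORMAT_R8G8B8A8_UNORM_SRGB;
    desc.ArraySize   = 1;
    desc.Width       = size.w;
    desc.Height      = size.h;
    desc.MipLevels   = 1;
    desc.SampleCount = 1;

    ovrTextureSwapChain chain = nullptr;
    ovr_CreateTextureSwapChainGL(session, &desc, &chain);
    return chain; // Render each frame's eye view into the chain's current texture.
}
```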
The SDK provides APIs to get the camera and viewport parameters you need to render each eye's view; with the Oculus SDK you also obtain your render targets for each eye through API calls. You build view and projection matrices and set viewports for each eye based on the information those APIs report: HMD position, orientation, field of view, target resolution, and so on.
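For example, with LibOVR the per-eye FOV, eye offsets and projection come from ovr_GetRenderDesc plus the utility helpers, and the eye poses come from head tracking. The sketch below assumes SDK 1.x (later versions renamed some of these fields) and leaves converting into your engine's own matrix types to you.

```cpp
// Sketch only: assumes Oculus PC SDK 1.x. Later SDK versions renamed
// HmdToEyeOffset to HmdToEyePose (and changed it to an ovrPosef).
#include <OVR_CAPI.h>
#include <OVR_CAPI_Util.h>   // ovrMatrix4f_Projection, ovr_GetEyePoses

struct EyeRenderInfo
{
    ovrPosef    pose[2];        // Per-eye pose for this frame.
    ovrMatrix4f projection[2];  // Per-eye projection matrix.
    double      sensorSampleTime = 0.0;
};

EyeRenderInfo QueryEyeRenderInfo(ovrSession session, const ovrHmdDesc& hmdDesc,
                                 long long frameIndex)
{
    EyeRenderInfo info;
    ovrVector3f hmdToEyeOffset[2];

    for (int eye = 0; eye < 2; ++eye)
    {
        // The SDK describes each eye's FOV and its offset from the head pose.
        ovrEyeRenderDesc desc = ovr_GetRenderDesc(session, (ovrEyeType)eye,
                                                  hmdDesc.DefaultEyeFov[eye]);
        hmdToEyeOffset[eye] = desc.HmdToEyeOffset;

        // Asymmetric-frustum projection built from the SDK's FOV port.
        info.projection[eye] = ovrMatrix4f_Projection(desc.Fov, 0.1f, 1000.0f,
                                                      ovrProjection_None);
    }

    // Predicted head pose for this frame, split into left/right eye poses.
    ovr_GetEyePoses(session, frameIndex, ovrTrue, hmdToEyeOffset,
                    info.pose, &info.sensorSampleTime);
    return info;
}
```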
Rendering each eye is essentially the same as whatever you are already doing in your engine, except that you render twice (once per eye) using the camera and viewport information provided by the SDK, and you may wish to render a third view for display on the regular monitor. Because the left and right eye views are very similar, you may want to restructure parts of your engine for efficiency rather than naively rendering the entire scene twice, but that is not strictly necessary.
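In practice the frame loop often ends up looking something like this; it continues from the previous sketch, and BindEyeRenderTarget / RenderScene are hypothetical stand-ins for whatever your engine already does, not SDK calls.

```cpp
// Illustrative frame structure, continuing from the earlier sketches.
// Hypothetical engine-side hooks (not part of the Oculus SDK):
void BindEyeRenderTarget(unsigned int glTexId, ovrSizei size);
void RenderScene(const ovrPosef& eyePose, const ovrMatrix4f& projection);

void RenderVRFrame(ovrSession session, const EyeRenderInfo& info,
                   ovrTextureSwapChain eyeChain[2], ovrSizei eyeSize[2])
{
    // View-independent work (animation, shadow maps, culling against a
    // frustum that bounds both eyes) can be done once per frame here.

    for (int eye = 0; eye < 2; ++eye)
    {
        // Get the texture the SDK wants us to render into this frame.
        int index = 0;
        ovr_GetTextureSwapChainCurrentIndex(session, eyeChain[eye], &index);
        unsigned int texId = 0;
        ovr_GetTextureSwapChainBufferGL(session, eyeChain[eye], index, &texId);

        // Bind it as the render target and set the eye's viewport.
        BindEyeRenderTarget(texId, eyeSize[eye]);

        // Same scene rendering you already do, just with this eye's
        // pose and projection.
        RenderScene(info.pose[eye], info.projection[eye]);

        // Tell the SDK this eye buffer is ready for the compositor.
        ovr_CommitTextureSwapChain(session, eyeChain[eye]);
    }

    // Optionally render a third (mono) view to the desktop window here,
    // or just mirror one of the eye buffers to it.
}
```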
There will probably be a call at the end of the frame to tell the SDK you've finished rendering and to submit the completed eye buffers for display. Other than that, there's not much to it: most of the challenge of VR rendering lies in achieving the required performance, not in integrating the SDKs, which are fairly simple on the display side of things.
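With LibOVR that end-of-frame call is ovr_SubmitFrame, fed with a layer describing the eye buffers. Again, this is a sketch continuing from the snippets above, assuming SDK 1.x.

```cpp
// Sketch: hand the completed eye buffers to the Oculus compositor.
// Continues from the earlier sketches (eyeChain, eyeSize, info, hmdDesc).
void SubmitVRFrame(ovrSession session, const ovrHmdDesc& hmdDesc,
                   ovrTextureSwapChain eyeChain[2], ovrSizei eyeSize[2],
                   const EyeRenderInfo& info, long long frameIndex)
{
    ovrLayerEyeFov layer = {};
    layer.Header.Type  = ovrLayerType_EyeFov;
    layer.Header.Flags = ovrLayerFlag_TextureOriginAtBottomLeft; // OpenGL convention

    for (int eye = 0; eye < 2; ++eye)
    {
        layer.ColorTexture[eye] = eyeChain[eye];
        layer.Viewport[eye]     = { {0, 0}, eyeSize[eye] };
        layer.Fov[eye]          = hmdDesc.DefaultEyeFov[eye];
        layer.RenderPose[eye]   = info.pose[eye];
    }
    layer.SensorSampleTime = info.sensorSampleTime;

    // Later SDK versions split frame timing into ovr_WaitToBeginFrame /
    // ovr_BeginFrame / ovr_EndFrame, but the idea is the same: submit the
    // layer and let the compositor warp and present it.
    const ovrLayerHeader* layers[] = { &layer.Header };
    ovr_SubmitFrame(session, frameIndex, nullptr, layers, 1);
}
```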