Lately I have been working on a simple screen-sharing program. The program works over TCP and uses the Desktop Duplication API.
For a screen of 1920 x 1080 with 4-byte color, you are looking at approximately 8 MB per frame. At 20 FPS, that is about 160 MB/s. So getting from 8 MB down to 200 KB per frame (4 MB/s at 20 FPS) is a great improvement.
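To make the arithmetic above concrete, here is a small sketch (the helper names are just for illustration; the text rounds 8,294,400 bytes to "8 MB" and 165,888,000 bytes/s to "160 MB/s"):

```cpp
// Back-of-the-envelope bandwidth for uncompressed screen capture.
long long BytesPerFrame(long long w, long long h, long long bytesPerPixel) {
    return w * h * bytesPerPixel;   // 1920 * 1080 * 4 = 8,294,400 (~8 MB)
}

long long BytesPerSecond(long long bytesPerFrame, long long fps) {
    return bytesPerFrame * fps;     // 8,294,400 * 20 = 165,888,000 (~160 MB/s)
}
```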
I would like to draw your attention to certain aspects that I am not sure you are focusing on; hopefully this helps.

Instead of sending every full frame, send only the diff between `newpixels` and `previouspixels`. Needless to say, the original screen and the diff screen will all be LZ4 compressed/decompressed. Every so often you should send the full array instead of the diff, especially if you use some lossy algorithm to compress the diff.

The ideas above can be applied one on top of the other to get a better user experience. Ultimately, it depends on the specifics of your application and end users.
EDIT:
Color quantization can be used to reduce the number of bits used per color. Below are some links to concrete implementations of color quantization.

Usually the quantized colors are stored in a color palette, and only the index into this palette is given to the decoding logic.
Slashy,
Since you are using high-res frames and you want a good frame rate, you're likely going to be looking at H.264 encoding. I've done some work in HD/SDI broadcast video, which is totally dependent on H.264 and now moving a little toward H.265. Most of the libraries used in broadcast are written in C++ for speed.
I'd suggest looking at something like this: https://msdn.microsoft.com/en-us/library/windows/desktop/dd797816(v=vs.85).aspx