C# Screen streaming program

南笙 2021-02-07 06:12

Lately I have been working on a simple screen sharing program.

The program works over TCP and uses the Desktop Duplication API to capture the screen.

2 Answers
  • 2021-02-07 07:13

    For your screen of 1920 x 1080, with 4-byte color, you are looking at approximately 8 MB per frame. At 20 FPS, that is 160 MB/s. So getting from 8 MB to 200 KB per frame (4 MB/s @ 20 FPS) is a great improvement.
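
    For reference, the arithmetic behind those figures:

        1920 x 1080 pixels x 4 bytes ≈ 8.3 MB per raw frame
        8.3 MB x 20 FPS              ≈ 166 MB/s uncompressed
        200 KB x 20 FPS              ≈ 4 MB/s after compression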

    I would like to draw your attention to a few aspects that I am not sure you are focusing on; hopefully this helps.

    1. The more you compress your screen image, the more processing it might need
    2. You actually need to focus on compression mechanisms designed for series of continuously changing images, similar to video codecs (sans audio though). For example: H.264
    3. Remember, you need to use some kind of real-time protocol for transferring your data. The idea is that if one of your frames arrives at the destination machine late, you might as well drop the next few frames to catch up; otherwise you will be perpetually lagging, which I doubt your users will enjoy (see the frame-dropping sketch after this list).
    4. You can always sacrifice quality for performance. The simplest such mechanism, seen in similar technologies (MS Remote Desktop, VNC, etc.), is to send an 8-bit color (ARGB, 2 bits per channel) instead of the 3-byte color you are using (a small packing sketch appears after the quantization notes below).
    5. Another way to improve your situation would be to focus on a specific rectangle on the screen that you want to stream, instead of streaming the whole desktop. This will reduce the size of the frame itself.
    6. Another way would be to scale your screen image to a smaller image before transmitting and then scale it back to normal before displaying.
    7. After sending the initial screen, you can always send the diff between newpixels and previouspixels. Needless to say, the original screen and the diff screens will all be LZ4 compressed/decompressed. If you use a lossy algorithm to compress the diff, you should send the full array instead of the diff every so often (a minimal diff sketch follows this list).
    8. Does UpdatedRegions have overlapping areas? Can that be optimized so duplicate pixel information is not sent?
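
    To make point 7 concrete, here is a minimal C# sketch of the diff idea. It XORs each new frame against the previous one so unchanged pixels become zeros, then compresses the result; the LZ4 calls assume the K4os.Compression.LZ4 NuGet package, which is my choice for illustration, not something from the question.

        // Frame-diff sketch: assumes both frames have the same resolution and pixel format.
        using K4os.Compression.LZ4;   // assumed LZ4 wrapper, installed via NuGet

        public static class FrameDiff
        {
            // XOR the new frame against the previous one so unchanged pixels become zeros,
            // then LZ4-compress the result; long zero runs compress extremely well.
            public static byte[] EncodeDiff(byte[] previousPixels, byte[] newPixels)
            {
                var diff = new byte[newPixels.Length];
                for (int i = 0; i < newPixels.Length; i++)
                    diff[i] = (byte)(newPixels[i] ^ previousPixels[i]);

                return LZ4Pickler.Pickle(diff);
            }

            // Receiving side: decompress, then XOR back onto the previously reconstructed frame.
            public static void DecodeDiff(byte[] previousPixels, byte[] compressedDiff)
            {
                byte[] diff = LZ4Pickler.Unpickle(compressedDiff);
                for (int i = 0; i < previousPixels.Length; i++)
                    previousPixels[i] ^= diff[i];
            }
        }

    As point 7 says, a full (non-diffed) frame should still be sent every so often so that errors from any lossy step do not accumulate.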

    The ideas above can be applied one on top of the other to get a better user experience. Ultimately, it depends on the specifics of your application and end-users.
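
    Point 3 (drop frames rather than queue them when the link falls behind) can be sketched with a bounded channel. This is only an illustration under my own assumptions: capture and sending run on separate tasks, frames are already encoded to byte arrays, and the FrameSender/Enqueue names are made up.

        // Frame-dropping sketch using System.Threading.Channels (.NET Core 3.0+).
        // If the network cannot keep up, older frames are discarded instead of
        // piling up into an ever-growing backlog (and an ever-growing lag).
        using System.Threading.Channels;
        using System.Threading.Tasks;

        public sealed class FrameSender
        {
            private readonly Channel<byte[]> _frames = Channel.CreateBounded<byte[]>(
                new BoundedChannelOptions(capacity: 2)
                {
                    FullMode = BoundedChannelFullMode.DropOldest  // keep only the freshest frames
                });

            // Called from the capture loop; never blocks, the channel drops stale frames when full.
            public void Enqueue(byte[] encodedFrame) => _frames.Writer.TryWrite(encodedFrame);

            // Runs as its own task, sending whatever the freshest available frame is.
            public async Task SendLoopAsync(System.Net.Sockets.NetworkStream stream)
            {
                await foreach (byte[] frame in _frames.Reader.ReadAllAsync())
                {
                    await stream.WriteAsync(frame, 0, frame.Length);
                }
            }
        }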

    EDIT:

    • Color Quantization can be used to reduce the number of bits used for a color. Below are some links to concrete implementations of Color Quantization

      • Optimizing Color Quantization for Images
      • nQuant library
    • Usually the quantized colors are stored in a Color Palette and only the index into this palette is given to the decoding logic
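
    To make the 2-bits-per-channel idea from point 4 concrete (this is the crudest possible quantizer, not a palette-based one like the libraries above), here is a small sketch that packs a 32-bit ARGB pixel into a single byte and expands it again on the receiving side:

        // ARGB8888 <-> ARGB2222 packing: 4 bytes per pixel down to 1 byte,
        // trading colour fidelity for a 4x reduction in raw frame size.
        public static class Argb2222
        {
            public static byte Pack(uint argb8888)
            {
                // Keep only the top 2 bits of each 8-bit channel.
                uint a = (argb8888 >> 30) & 0x3;
                uint r = (argb8888 >> 22) & 0x3;
                uint g = (argb8888 >> 14) & 0x3;
                uint b = (argb8888 >> 6)  & 0x3;
                return (byte)((a << 6) | (r << 4) | (g << 2) | b);
            }

            public static uint Unpack(byte packed)
            {
                // Spread each 2-bit value back across 8 bits (0, 85, 170, 255).
                uint Expand(uint two) => two * 85;
                uint a = Expand((uint)(packed >> 6) & 0x3);
                uint r = Expand((uint)(packed >> 4) & 0x3);
                uint g = Expand((uint)(packed >> 2) & 0x3);
                uint b = Expand((uint)packed & 0x3);
                return (a << 24) | (r << 16) | (g << 8) | b;
            }
        }

    A palette-based quantizer such as the nQuant library above gives better visual quality for the same one byte per pixel, at the cost of computing and transmitting the palette.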

  • 2021-02-07 07:18

    Slashy,

    Since you are using high-res frames and you want a good frame rate, you are likely going to be looking at H.264 encoding. I've done some work in HD/SDI broadcast video, which is totally dependent on H.264, and a little of it is now moving to H.265. Most of the libraries used in broadcast are written in C++ for speed.

    I'd suggest looking at something like this: https://msdn.microsoft.com/en-us/library/windows/desktop/dd797816(v=vs.85).aspx
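
    As one hedged alternative to wiring up Media Foundation directly (it is not what the link above describes), you could pipe raw captured frames into an external ffmpeg process and let it produce the H.264 stream. This sketch assumes ffmpeg is installed and on the PATH, and that frames are BGRA, as the Desktop Duplication API typically delivers them:

        // Sketch: hand raw BGRA frames to an external ffmpeg process for H.264 encoding.
        using System.Diagnostics;

        public static class H264Pipe
        {
            // Starts ffmpeg reading raw BGRA frames from stdin and writing an H.264
            // elementary stream to out.h264 (replace with a network sink as needed).
            public static Process StartEncoder(int width, int height, int fps)
            {
                var psi = new ProcessStartInfo
                {
                    FileName = "ffmpeg",   // assumed to be available on the PATH
                    Arguments = $"-f rawvideo -pix_fmt bgra -s {width}x{height} -r {fps} -i - " +
                                "-c:v libx264 -preset ultrafast -tune zerolatency -f h264 out.h264",
                    RedirectStandardInput = true,
                    UseShellExecute = false
                };
                return Process.Start(psi);
            }

            // In the capture loop, push each raw frame (width * height * 4 bytes) to stdin:
            //   encoder.StandardInput.BaseStream.Write(frame, 0, frame.Length);
        }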
