Preferred way to implement IPC on Windows

Backend · Unresolved · 5 answers · 1231 views
感动是毒 2021-01-14 22:56

What is the preferred way to implement IPC on Windows?

I know of several options, like named pipes, shared memory, semaphores(?), maybe COM (though I'm not sure how)...

5 Answers
  • 2021-01-14 23:16

    Either RPC / out-of-process COM or DCOM (which will eventually use RPC anyways) are the preferred way to do IPC in Windows unless you're doing something really simple - I've seen so many cases of people going down the named pipes route, and ending up basically reimplementing what DCOM gives you for free. Don't make the same mistake :)

  • 2021-01-14 23:27

    Transport over named pipes for me.

    For the data format, either roll your own or use local RPC (which is what Microsoft uses).

  • 2021-01-14 23:28

    MSDN has a nice summary.

    That being said, I think you should consider using a third-party library. Boost should be nice (as stated in another answer), and your GUI toolkit might have some abstractions, too.

    For pure Win32, anonymous pipes are probably the easiest method: you only have to call CreatePipe and use the two resulting file handles (double everything for full duplex). The drawbacks are that they only work when both processes are running on the same machine, and that you must already have some means of communication between the processes in order to pass the handles.

  • 2021-01-14 23:38

    A few years ago, we studied this particular question for a client/server situation where both client and server were running on the same machine. At the time, we were using sockets (UDP) even when client and server were on the same machine. For us, "best" turned out to be shared memory with named semaphores to synchronize it. At the time, I mainly studied pipes versus a raw shared memory implementation. I tested pipes with overlapped I/O and with I/O completion ports.

    I tested with a large variety of data sizes. At the low end where client and server were echoing 1 byte back and forth, the raw shared memory implementation was the fastest by a factor of 3. When I passed 10,000 bytes back and forth, the pipe implementations and the raw shared memory implementation were all about the same speed. I was using 4K buffers if I recall correctly with the shared memory implementation.

    For all data sizes, the shared memory test ranged between 2 times and 6 times faster than using sockets (compared against TCP).

    Between the pipe implementations, the overlapped I/O version was faster than the I/O completion port version by about 30% when passing small amounts of data. Again, with larger chunks of data, the difference was minimal.

    The pipe implementation was certainly much less complex to code. But we dealt with quite a few small chunks of data being passed back and forth, so it was worth the extra complexity to implement the shared memory version with named semaphores.

    Of course, this was several years ago as mentioned, and you have no idea if I implemented all the different tests correctly. Note too that this was with a single client. The final implementation of our shared memory communication does scale very well for hundreds of "clients" running. But I do not know if it is better at that scale than a pipe implementation.

  • 2021-01-14 23:39

    Take a look at boost::interprocess.

    Shared memory is probably the fastest in general, but somewhat error-prone and limited to local processes.

    COM is fully versioned and automatically supports remote IPC, but obviously it's platform-specific.

    For a large-scale application you might want to consider something like ActiveMQ or OpenMQ.
