Question
I've written a relatively simple communication protocol using shared memory and shared mutexes. Then I wanted to extend it to communicate between two .dll's that use different run-times. It's quite obvious that if you have, say, a std::vector<__int64>
shared between two dll's - one built with vs2010, one with vs2015 - they won't cooperate politely with each other, since the internal layouts differ. So I thought: why can't I serialize the structure in IPC fashion on one side and de-serialize it on the other - then the VS run-times will work smoothly with each other.
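To make that concrete: the idea is that only fixed-layout POD bytes ever cross the DLL boundary. A minimal sketch of such a wire header (hypothetical field names, not my actual IpcCommand):
#include <stdint.h>

// Hypothetical wire header: fixed-size POD only, so both run-times
// agree on the layout regardless of their STL implementations.
#pragma pack(push, 1)
struct WireHeader
{
    uint32_t command;      // message type, e.g. "string payload"
    uint32_t totalSize;    // full payload size in bytes
    uint32_t chunkOffset;  // offset of this chunk within the payload
    uint32_t chunkSize;    // bytes of payload carried by this chunk
    // raw payload bytes follow the header in the shared buffer
};
#pragma pack(pop)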
Long story short - I've created a separate interface for sending the next chunk of data and for requesting the next chunk of data. Both operate while decoding happens - meaning that if you have a vector with 10 entries, each string 1 MB, and the shared memory window is 10 KB, then transferring the whole data takes 10 * 1024 KB / 10 KB = 1024 round trips. Each request is followed by multiple outstanding function calls - either to SendChunk or to GetNextChunk, depending on the transfer direction.
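Conceptually the sending side just loops over the payload in window-sized pieces, something like this sketch (sharedBuf, windowSize and the sendChunk callback are hypothetical stand-ins for my real IMessageSerializer machinery):
#include <string.h>

// Sketch of chunked transfer through a fixed-size shared-memory window.
void sendPayload( const char* data, size_t totalSize,
                  char* sharedBuf, size_t windowSize,
                  bool (*sendChunk)(bool bLast) )
{
    size_t offset = 0;
    while( offset < totalSize )
    {
        size_t n = totalSize - offset;
        if( n > windowSize )
            n = windowSize;
        memcpy( sharedBuf, data + offset, n );  // fill the window
        offset += n;
        sendChunk( offset == totalSize );       // publish; the peer drains it
    }
}
With 10 MB of payload and a 10 KB window this loop runs 1024 times, which matches the count above.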
Now I wanted encoding and decoding to happen simultaneously, but without any threading - so I came up with a solution using setjmp and longjmp. I'm attaching part of the code below, just so you get some understanding of what is happening in the whole machinery.
#include "..."
#include <setjmp.h> //setjmp
class Jumper: public IMessageSerializer
{
public:
char lbuf[ sizeof(IpcCommand) + 10 ];
jmp_buf jbuf1;
jmp_buf jbuf2;
bool bChunkSupplied;
Jumper() :
bChunkSupplied(false)
{
memset( lbuf, 0 , sizeof(lbuf) );
}
virtual bool GetNextChunk( bool bSend, int offset )
{
if( !bChunkSupplied )
{
bChunkSupplied = true;
return true;
}
int r = setjmp(jbuf1);
((_JUMP_BUFFER *)&jbuf1)->Frame = 0;
if( r == 0 )
longjmp(jbuf2, 1);
bChunkSupplied = true;
return true;
}
virtual bool SendChunk( bool bLast )
{
bChunkSupplied = false;
int r = setjmp(jbuf2);
((_JUMP_BUFFER *)&jbuf2)->Frame = 0;
if( r == 0 )
longjmp(jbuf1, 1);
return true;
}
bool FlushReply( bool bLast )
{
return true;
}
IpcCommand* getCmd( void )
{
return (IpcCommand*) lbuf;
}
int bufSize( void )
{
return 10;
}
}; //class Jumper
Jumper jumper;
void main(void)
{
EncDecCtx enc(&jumper, true, true);
EncDecCtx dec(&jumper, false, false);
CString s;
if( setjmp(jumper.jbuf1) == 0 )
{
alloca(16*1024);
enc.encodeString(L"My testing my very very long string.");
enc.FlushBuffer(true);
} else {
dec.decodeString(s);
}
wprintf(L"%s\r\n", s.GetBuffer() );
}
There are a couple of issues here. After the first call to setjmp I'm using alloca(), which allocates memory from the stack and is auto-freed on return. The alloca may only happen on the first pass, before the first jump, because every function call uses the call stack (to save the return address), and without that reservation one "thread" would corrupt the other "thread's" call stack.
There are multiple articles discussing how dangerous setjmp and longjmp are, but this is a somehow-working solution for now. The stack size (16 KB) is a reservation for the function calls to come - decodeString and so on - and it can be made bigger if that is not enough.
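To show the trick in isolation from my serializer, here is a minimal self-contained sketch of the same single-threaded ping-pong (my own toy names; it carries the same formally-undefined-behavior caveats as the code above, and on x64 it would additionally need the Frame reset described below):
#include <setjmp.h>
#include <stdio.h>
#include <malloc.h>  // alloca (MSVC)

static jmp_buf producerCtx, consumerCtx;
static int mailbox;  // stands in for the shared buffer

static void produce( void )
{
    for( volatile int i = 1; i <= 3; i++ )  // volatile: safe across longjmp
    {
        mailbox = i;
        if( setjmp(producerCtx) == 0 )  // save the producer...
            longjmp(consumerCtx, 1);    // ...and wake the consumer
    }
}

int main(void)
{
    if( setjmp(consumerCtx) == 0 )
    {
        alloca(16*1024);  // reserve stack for the consumer side
        produce();        // producer frames live below the reservation
    }
    else
    {
        printf( "got %d\n", mailbox );  // consumer, on the reserved stack
        if( mailbox < 3 )
            longjmp(producerCtx, 1);    // wake the producer again
    }
    return 0;
}
Running it prints got 1, got 2, got 3, with control bouncing between the two sides on a single thread.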
After trying this code out I noticed that the x86 build worked fine, but the 64-bit build did not - I hit a problem similar to the one described here:
An invalid or unaligned stack was encountered during an unwind operation
As that article suggested, I added a kind of reset:
((_JUMP_BUFFER *)&jbuf1)->Frame = 0;
and after that the 64-bit code started to work. Currently the library is not using any exception mechanism and I'm not planning to use any (I will try-catch everything, if needed, inside the encode* / decode* function calls).
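For reference, this is the reset I'm applying, pulled out into a small helper (the cast matches the _JUMP_BUFFER layout in MSVC's setjmp.h on x64; wrapping it like this is just my own tidying, not a library API):
#include <setjmp.h>

// On x64 MSVC, longjmp performs a stack unwind from the current frame up
// to the frame recorded in the jmp_buf. Zeroing the Frame field turns it
// into a plain context restore with no unwinding, which is what this
// two-stacks-on-one-thread trick requires.
static void disableUnwind( jmp_buf& buf )
{
#if defined(_M_X64)
    ((_JUMP_BUFFER *)&buf)->Frame = 0;
#else
    (void)buf;  // x86 longjmp does not unwind this way; nothing to do
#endif
}
It has to be called right after every setjmp whose buffer will later be the target of a cross-stack longjmp, as in GetNextChunk and SendChunk above.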
So, the questions:
1. Is it an acceptable solution to disable unwinding in code via ((_JUMP_BUFFER *)&jbuf1)->Frame = 0;?
2. What does unwinding really mean in the context of setjmp / longjmp?
3. Do you see any potential problems with the given code snippet?
Source: https://stackoverflow.com/questions/39603384/communication-protocol-and-local-loopback-using-setjmp-longjmp