I'm running into a problem with large POST data (>16384 bytes) when using Django 1.2.3, PyISAPIe v1.1.0-rc4, and IIS 7.5.
For example, when submitting approx. 60kB of form data using POST, the following happens:
- The first 16kB block of POST data is correct
- The next 16kB block is a repeat of the first block
- The next 16kB is another repeat of the first block
- The rest (<16kB) is correct again
The interesting part is that when using content-type="multipart/form-data", it works fine.
Using this information I tracked down the likely location of the bug to WSGIRequest._get_raw_post_data in django\core\handlers\wsgi.py, which handles the content-type="multipart/form-data" case separately from the default (no content-type) case.
Both cases read from self.environ['wsgi.input'], which is set to the PyISAPIe object. The difference is that the default case seems to read in chunks of 16kB, whereas the multipart handler seems to read in chunks of just under 2GB.
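For illustration, the two access patterns look roughly like this (a sketch only, not Django's actual code; the 16kB figure comes from the behaviour described above):

CHUNK_SIZE = 16 * 1024  # 16kB, matching the block size seen in the corrupted data

def read_default(wsgi_input, content_length):
    # Default (non-multipart) pattern: many small reads from wsgi.input.
    parts, remaining = [], content_length
    while remaining > 0:
        chunk = wsgi_input.read(min(CHUNK_SIZE, remaining))
        if not chunk:
            break
        parts.append(chunk)
        remaining -= len(chunk)
    return b"".join(parts)

def read_multipart_style(wsgi_input, content_length):
    # Multipart pattern: effectively one large read covering the whole body.
    return wsgi_input.read(content_length)

Only the first pattern makes repeated small reads against the same PyISAPIe input object, which is where the repetition shows up.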
I don't know enough about C and the Python interface for C to dig in further, but I'm guessing the bug is somewhere in PyISAPIe in the ReadClient function in ReadWrite.cpp.
My current workaround is to add content-type="multipart/form-data" to forms that may produce more than 16kB of data.
Has anybody run into this as well, or does anybody know how to determine if the bug is in fact in PyISAPIe?
Thank you!
PyISAPIe author here.
This was fixed in revision 184 in the repository but not in the downloadable release, as discussed on the mailing list.
It addressed a previously documented bug that apparently hasn't received much attention because many users are checking out the source rather than downloading the package. Or, that's my best guess anyway; regardless, I plan to provide a downloadable version of the fixed code.
Thanks for bringing this to my attention and reminding me to keep this project's releases in a functioning state.
I dug a little deeper and I think I found the issue.
In PyISAPIe\ReadWrite.cpp:
PyISAPIe_Func(DWORD) ReadClient( Context &Ctx, DWORD Length, void *const Data )
{
  if ( !Length )
    Length = Ctx.ECB->cbTotalBytes;

  if ( !Data )
    // Return the size of the data that would be read
    return min(Length, Ctx.ECB->cbTotalBytes);

  DWORD Ret, Total = 0;

  if ( Length > Ctx.ECB->cbAvailable )
  {
    [...snip...]
  }
  else
  {
    memcpy(Data, Ctx.ECB->lpbData, Length);
    Ctx.ECB->cbTotalBytes -= Length;
    Ctx.ECB->cbAvailable  -= Length;
    return Length;
  }
If the method is called repeatedly with Length <= Ctx.ECB->cbAvailable, it always seems to copy the beginning of the Ctx.ECB->lpbData buffer into Data, without removing that data from the buffer or advancing a pointer. Only when the data is exhausted (cbAvailable == 0) is new data correctly read into Data later in the code.
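A small, self-contained model of that behaviour (Python here purely for illustration; it mimics the logic of the quoted else branch, not PyISAPIe's actual code, and FixedInput only shows the principle a fix would follow):

class BuggyInput:
    # Mimics the quoted branch: copies from the start of the buffer on every
    # read and only decrements the available count.
    def __init__(self, data):
        self.data = data
        self.available = len(data)   # plays the role of cbAvailable

    def read(self, n):
        chunk = self.data[:n]        # always the beginning of the buffer
        self.available -= n
        return chunk

class FixedInput:
    # What a fix needs in principle: remember how much has already been handed out.
    def __init__(self, data):
        self.data = data
        self.offset = 0

    def read(self, n):
        chunk = self.data[self.offset:self.offset + n]
        self.offset += len(chunk)
        return chunk

body = b"AAAA" + b"BBBB" + b"CCCC" + b"DDDD"   # four blocks standing in for 16kB chunks

buggy = BuggyInput(body)
print([buggy.read(4) for _ in range(4)])   # [b'AAAA', b'AAAA', b'AAAA', b'AAAA']

fixed = FixedInput(body)
print([fixed.read(4) for _ in range(4)])   # [b'AAAA', b'BBBB', b'CCCC', b'DDDD']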
I'm still not sure how to fix it, but at least I can work around it by reading the data in chunks large enough that a single read consumes it all.
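At the WSGI level that boils down to something like this (wsgi.input and CONTENT_LENGTH are standard WSGI keys; the helper itself is just a sketch):

def read_whole_body(environ):
    # Read the entire POST body with a single read() call, so the repeated
    # 16kB code path that triggers the repetition is never exercised.
    try:
        length = int(environ.get("CONTENT_LENGTH") or 0)
    except ValueError:
        length = 0
    return environ["wsgi.input"].read(length)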
Source: https://stackoverflow.com/questions/9891467/large-post-data-is-corrupted-when-using-django-pyisapie-iis