fragmentation

How to prevent packet fragmentation for a HttpWebRequest

大城市里の小女人 submitted on 2019-12-04 00:53:28
Question: I am having a problem using HttpWebRequest against an HTTP daemon on an embedded device. The problem appears to be that there is enough of a delay between the HTTP headers being written to the socket stream and the HTTP payload (a POST) that the socket releases what's in the socket buffer to the server. This results in the HTTP request being split over two packets (fragmentation). This is perfectly valid, of course, but the server at the other end doesn't cope with it if the packets are
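The excerpt describes headers and payload going out in two separate writes. A minimal Python sketch of the underlying idea (this is not the HttpWebRequest API; the function name and socket-level approach are illustrative assumptions): build the complete request, headers and body together, in one buffer and hand it to the kernel in a single call, so there is no pause between the two parts.

```python
import socket

def send_http_post(host, port, path, body):
    """Build the entire HTTP request (headers + payload) in memory and
    hand it to the kernel in a single sendall() call, so the headers
    and body are not written in two separate bursts."""
    headers = (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Length: {len(body)}\r\n"
        f"Connection: close\r\n"
        f"\r\n"
    ).encode("ascii")
    request = headers + body          # one contiguous buffer
    with socket.create_connection((host, port)) as sock:
        # TCP_NODELAY disables Nagle batching so the single write
        # is pushed out as soon as possible.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        sock.sendall(request)
        return sock.recv(4096)
```

Note that even a single sendall() does not guarantee a single TCP segment on the wire; it only removes the application-level pause that was triggering the early flush described above.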

Reduce SQL Server table fragmentation without adding/dropping a clustered index?

时间秒杀一切 submitted on 2019-12-03 17:41:51
Question: I have a large database (90GB data, 70GB indexes) that's been slowly growing for the past year, and the growth/changes have caused a large amount of internal fragmentation not only of the indexes but of the tables themselves. It's easy to resolve the (large number of) very fragmented indexes - a REORGANIZE or REBUILD will take care of that, depending on how fragmented they are - but the only advice I can find on cleaning up actual table fragmentation is to add a clustered index to the table. I'd immediately drop it afterwards, as I don't want a clustered index on the table going forward, but

TCP/UDP and ethernet MTU Fragmentation

跟風遠走 submitted on 2019-12-03 17:26:18
I've read various sites and tutorials online but I am still confused. If the message is bigger than the IP MTU, then send() returns the number of bytes sent. What happens to the rest of the message? Am I to call send() again and attempt to send the rest of the message? Or is that something the IP layer should take care of automatically? If you are using TCP then the interface presented to you is that of a stream of bytes. You don't need to worry about how the stream of bytes gets from one end of the connection to the other. You can ignore the IP layer's MTU. In fact you can ignore the IP layer entirely. When you
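To make the partial-send behaviour concrete, here is a sketch in Python (the point applies to any sockets API): when send() reports fewer bytes than were passed, the application retries with the remainder; a blocking TCP socket's sendall() wraps exactly this loop.

```python
def send_all(sock, data):
    """Loop until every byte has been handed to the kernel.
    send() may accept only part of the buffer; its return value says
    how many bytes were taken, and we retry with the remainder.
    (This is what socket.sendall() does internally.)"""
    total = 0
    view = memoryview(data)
    while total < len(view):
        sent = sock.send(view[total:])
        if sent == 0:
            raise ConnectionError("socket connection broken")
        total += sent
    return total
```

The MTU never appears here: TCP segments the stream itself, and the only thing the application sees is how many bytes each send() call accepted.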

IP Fragmentation and Reassembly

两盒软妹~` submitted on 2019-12-03 04:42:32
Question: I am currently going through my networking slides and was wondering if someone could help me with the concept of fragmentation and reassembly. I understand how it works, namely how datagrams are split into smaller chunks because network links have an MTU. However, the example in the picture is confusing me. The first two sections show a length of 1500, because this is the MTU, but shouldn't this mean that the last one should have 1000 (for a total of 4000 bytes) and not 1040? Where did these extra 40 bytes come from? My guess is that because the previous two fragments both had a header of 20
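The arithmetic behind the 1040 can be checked directly. A small sketch, assuming the classic textbook setup of a 4000-byte datagram (20-byte header + 3980 bytes of data) crossing a 1500-byte-MTU link: every fragment repeats the 20-byte IP header, which is where the "extra" 40 bytes come from.

```python
def fragment_lengths(total_len, mtu, ip_header=20):
    """Split an IP datagram of total_len bytes (including its header)
    into per-fragment total lengths for a link with the given MTU.
    Each fragment repeats the IP header, and every fragment's data
    size except the last must be a multiple of 8."""
    data = total_len - ip_header              # payload carried by the datagram
    max_data = (mtu - ip_header) // 8 * 8     # per-fragment payload, 8-byte aligned
    frags = []
    offset = 0
    while offset < data:
        chunk = min(max_data, data - offset)
        frags.append(ip_header + chunk)       # each fragment re-adds a header
        offset += chunk
    return frags

# 4000-byte datagram (20 header + 3980 data) over a 1500-byte-MTU link:
# 3980 = 1480 + 1480 + 1020, so the fragments are 1500, 1500 and 1040 bytes.
print(fragment_lengths(4000, 1500))  # → [1500, 1500, 1040]
```

The fragment totals sum to 4040, not 4000: the 40 extra bytes are exactly the two additional 20-byte headers carried by the second and third fragments.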

Why and when is it necessary to rebuild indexes in MongoDB?

无人久伴 submitted on 2019-12-03 02:38:11
I've been working with MongoDB for a while, and today I had a doubt while discussing with a colleague. The thing is that when you create an index in MongoDB, the collection is processed and the index is built. The index is then updated on insertion and deletion of documents, so I don't really see the need to run a rebuild-index operation (which drops the index and then rebuilds it). According to the MongoDB documentation: Normally, MongoDB compacts indexes during routine updates. For most users, the reIndex command is unnecessary. However, it may be worth running if the collection size has changed

udp packet fragmentation for raw sockets

假如想象 submitted on 2019-12-01 11:26:14
Follow-up to the question packet fragmentation for raw sockets. If I have a raw socket implemented as such: if ((sip_socket = socket(AF_INET, SOCK_RAW, IPPROTO_RAW)) < 0) { cout << "Unable to create the SIP sockets." << sip_socket << " \n"; return -3; } if (setsockopt(sip_socket, IPPROTO_IP, IP_HDRINCL, &one, sizeof(one)) == -1) { cerr << "Unable to set option to Raw Socket.\n"; return -4; }; how can I set the ipHdr->fragment_offset (a 16-bit field: 3 flag bits plus a 13-bit offset in 8-byte units) if I have a packet of size 1756 (not including the IP header)? Do I need to prepare two packets - one of size 1480 and another of size 276,
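The offset arithmetic from the question can be sketched in Python (the helper name is an illustrative assumption, not part of any sockets API): the fragment offset occupies the low 13 bits of the 16-bit field, measured in 8-byte units, with 0x2000 as the More Fragments flag. A 1756-byte payload over a 1500-byte MTU splits into 1480 bytes (offset 0, MF set) plus 276 bytes (byte offset 1480, field value 185).

```python
MF = 0x2000  # "More Fragments" flag bit in the 16-bit flags/offset field

def frag_field(offset_bytes, more_fragments):
    """Build the 16-bit flags + fragment-offset field.
    The offset is stored in 8-byte units in the low 13 bits;
    bit 0x2000 is the More Fragments flag."""
    assert offset_bytes % 8 == 0, "fragment offsets must be 8-byte aligned"
    field = offset_bytes // 8
    if more_fragments:
        field |= MF
    return field

# First fragment: 1480 data bytes at offset 0, more fragments to come.
first = frag_field(0, True)        # → 0x2000
# Second fragment: remaining 276 bytes at byte offset 1480.
second = frag_field(1480, False)   # → 185 (1480 // 8)
print(hex(first), second)
```

So yes: with IP_HDRINCL, two packets are prepared, each with its own header, and the kernel does not split them for you; the two field values above go into the respective headers (in network byte order).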

Avoiding OutOfMemoryException during large, fast and frequent memory allocations in C#

喜你入骨 submitted on 2019-11-30 08:45:09
Our application continuously allocates arrays for large quantities of data (say tens to hundreds of megabytes) which live for a shortish amount of time before being discarded. Done naively, this can cause large object heap fragmentation, eventually causing the application to crash with an OutOfMemoryException despite the size of the currently live objects not being excessive. One way we have successfully managed this in the past is to chunk up the arrays to ensure they don't end up on the LOH, the idea being to avoid fragmentation by allowing memory to be compacted by the garbage collector. Our
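The chunking idea can be illustrated with a sketch (here in Python, since the original is C#; the class and the exact threshold handling are assumptions, not the poster's actual code): keep every individual allocation under the ~85,000-byte large-object-heap threshold by presenting many small blocks as one logical array.

```python
CHUNK_SIZE = 80_000  # stay under the .NET LOH threshold of ~85,000 bytes

class ChunkedBuffer:
    """Store a large logical byte array as many small fixed-size blocks,
    so no single allocation is big enough to land on the large object
    heap, and the small blocks remain compactable by the GC."""
    def __init__(self, length):
        self.length = length
        self.chunks = [bytearray(min(CHUNK_SIZE, length - i))
                       for i in range(0, length, CHUNK_SIZE)]

    def __getitem__(self, index):
        if not 0 <= index < self.length:
            raise IndexError(index)
        return self.chunks[index // CHUNK_SIZE][index % CHUNK_SIZE]

    def __setitem__(self, index, value):
        if not 0 <= index < self.length:
            raise IndexError(index)
        self.chunks[index // CHUNK_SIZE][index % CHUNK_SIZE] = value
```

The indexing cost is two array lookups instead of one; in exchange, discarding the buffer frees many small, relocatable blocks rather than one large pinned-size region.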

Android Heap Fragmentation Strategy?

£可爱£侵袭症+ submitted on 2019-11-30 04:27:56
Question: I have an OpenGL Android app that uses a considerable amount of memory to set up a complex scene, and this clearly causes significant heap fragmentation. Even though there are no memory leaks, it is impossible to destroy and create the app without it running out of memory due to fragmentation. (Fragmentation is definitely the problem, not leaks.) This causes a major problem since Android has a habit of destroying and creating activities on the same VM/heap, which obviously causes the activity to