Memory management for layered communications stack on embedded system [closed]

Submitted by 一个人想着一个人 on 2019-12-13 03:49:15

Question


This question pertains to programming on embedded systems. I'm working on an experimental communication stack on an embedded device. The stack receives stream data from the underlying channel, detects discrete packets, reassembles fragmented data, etc...

Each function is implemented in a separate layer. Some layers delay the processing of packets (because data arrived in an interrupt handler and further processing is offloaded to the main context). Some layers merge multiple incoming packets into a single packet forwarded to the next upper layer (i.e. reassembly of fragmented data). Conversely, some layers split one incoming packet into multiple packets forwarded to the next lower layer. Of course, any layer may at any point drop a packet without further notice because, for example, a checksum didn't match the data.

My question is about memory allocation for these data packets.

Currently, I'm using malloc on each layer. Specifically, I allocate memory for the packet to be forwarded to the next upper layer, pass the pointer to the handler of the next layer, and free the memory again after the call. It is the next layer's handler's responsibility to copy the required data. Thus, each layer maintains ownership of the data it allocates, and it is hard to forget to free allocated memory. This works very well but leads to a lot of unnecessary copies.
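To make the copy-per-layer scheme concrete, here is a minimal sketch (handler names and the `packets_received` counter are illustrative, not from the actual stack):

```c
#include <stdlib.h>
#include <string.h>
#include <stddef.h>

static size_t packets_received;   /* just for demonstration */

/* Next-layer handler: must copy what it needs, because the caller
 * frees the buffer as soon as this call returns. */
static void upper_layer_handle(const unsigned char *data, size_t len)
{
    unsigned char *copy = malloc(len);   /* this layer's own allocation */
    if (copy == NULL)
        return;                          /* drop the packet */
    memcpy(copy, data, len);
    packets_received++;
    /* ... process `copy`, forward a fresh copy upward, ... */
    free(copy);                          /* this layer frees its own memory */
}

/* Current layer: allocate, call up, free immediately afterwards. */
static void lower_layer_forward(const unsigned char *payload, size_t len)
{
    unsigned char *buf = malloc(len);
    if (buf == NULL)
        return;
    memcpy(buf, payload, len);
    upper_layer_handle(buf, len);        /* callee copies what it needs */
    free(buf);                           /* ownership never leaves this layer */
}
```

Each layer's lifetime rule is local and simple, which is why leaks are hard to introduce; the price is one `malloc`/`memcpy`/`free` per layer per packet.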

Alternatively, I could forward ownership of the buffer to the next layer. Then the next layer can do its work directly on the buffer and forward the same buffer to the next layer, and so on. I suppose this is somewhat trickier to get right, ensuring that no memory is leaked.

Ultimately, because it is an embedded device, I want to find a solution without dynamic memory allocation. If each layer keeps ownership of its own memory then implementation without malloc should be easy enough. But if ownership is passed on then it seems more complicated.

Do you have any input?


Answer 1:


Look into LwIP packet buffers (pbuf); they handle the cases mentioned in your scenarios: http://www.nongnu.org/lwip/2_0_x/group__pbuf.html To make the code executed by your ISR more robust, you can implement memory pools instead of using malloc.
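A fixed-block pool in the spirit of that suggestion can be sketched as follows (a minimal illustration, not LwIP's actual pbuf implementation; sizes are placeholders to tune for your stack):

```c
#include <stddef.h>

#define POOL_BLOCKS 8     /* worst-case number of in-flight packets */
#define BLOCK_SIZE  128   /* maximum packet size */

/* All memory is static: no heap, no fragmentation, deterministic
 * behavior. NOT ISR-safe as written; on a real target, guard
 * pool_alloc/pool_free with interrupt masking or use atomics. */
static unsigned char pool[POOL_BLOCKS][BLOCK_SIZE];
static unsigned char in_use[POOL_BLOCKS];

void *pool_alloc(void)
{
    for (size_t i = 0; i < POOL_BLOCKS; i++) {
        if (!in_use[i]) {
            in_use[i] = 1;
            return pool[i];
        }
    }
    return NULL;   /* pool exhausted: caller drops the packet */
}

void pool_free(void *p)
{
    size_t i = ((unsigned char *)p - &pool[0][0]) / BLOCK_SIZE;
    in_use[i] = 0;
}
```

Because allocation either succeeds in bounded time or fails cleanly, a layer can treat pool exhaustion exactly like a checksum failure: drop the packet and move on.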




Answer 2:


Allocate memory in one place. Since it is an embedded system, you'll have to use a static memory pool, like a classic ADT implemented as an opaque type:

// buffer.h

typedef struct buffer_t buffer_t;

buffer_t* buffer_create (/*params*/);

/* setter & getter functions here */

// buffer.c

#include <stddef.h>   /* size_t */
#include "buffer.h"

#define MEMPOOL_SIZE 32   /* worst-case number of simultaneous buffers */

struct buffer_t
{
  /* private contents */
};


static buffer_t mempool [MEMPOOL_SIZE];
static size_t mempool_size = 0;

buffer_t* buffer_create (/*params*/)
{
  if(mempool_size == MEMPOOL_SIZE)
  { /* out of memory, handle error */ }

  buffer_t* obj = &mempool[mempool_size];
  mempool_size++;

  /* initialize obj here */

  return obj;
}

/* setter & getter functions here */

Now all your various application layers and processes only pass around copies of a pointer. In case you actually need to make a deep copy, implement a buffer_copy function in the above ADT.
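Such a buffer_copy might be sketched as below. This is a self-contained illustration that repeats a minimal version of the pool so the copy compiles on its own; the struct fields (`len`, `data`) and `BUFFER_CAPACITY` are assumptions, since the real layout is private to buffer.c:

```c
#include <string.h>
#include <stddef.h>

#define MEMPOOL_SIZE    16
#define BUFFER_CAPACITY 256

/* Illustrative layout; in the real ADT this is hidden in buffer.c. */
typedef struct buffer_t {
    size_t len;
    unsigned char data[BUFFER_CAPACITY];
} buffer_t;

static buffer_t mempool[MEMPOOL_SIZE];
static size_t mempool_size = 0;

buffer_t *buffer_create(void)
{
    if (mempool_size == MEMPOOL_SIZE)
        return NULL;                  /* out of memory */
    buffer_t *obj = &mempool[mempool_size++];
    obj->len = 0;
    return obj;
}

/* Deep copy: allocates a fresh buffer from the pool and duplicates
 * the contents, so the copy has independent ownership. */
buffer_t *buffer_copy(const buffer_t *src)
{
    buffer_t *dst = buffer_create();
    if (dst == NULL)
        return NULL;                  /* pool exhausted: caller must handle */
    dst->len = src->len;
    memcpy(dst->data, src->data, src->len);
    return dst;
}
```

Note that this simple bump-allocation pool never reclaims buffers; a production version would keep a free list so released buffers can be reused.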

In a multi-process system, you will also have to consider re-entrancy if multiple processes are allowed to allocate buffers at the same time.
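One way to make the slot claim re-entrant is an atomic test-and-set per slot, sketched here with C11 atomics (assuming the target toolchain supports `<stdatomic.h>`; on small MCUs without atomics you would briefly mask interrupts around the claim instead):

```c
#include <stdatomic.h>
#include <stddef.h>

#define POOL_SIZE 8

typedef struct { unsigned char data[64]; } slot_t;

static slot_t slots[POOL_SIZE];
static atomic_bool slot_busy[POOL_SIZE];  /* static storage: all start free */

/* Claim the first free slot with an atomic compare-and-swap, so two
 * contexts (e.g. ISR and main loop) can never claim the same slot. */
slot_t *slot_alloc(void)
{
    for (size_t i = 0; i < POOL_SIZE; i++) {
        _Bool expected = 0;
        if (atomic_compare_exchange_strong(&slot_busy[i], &expected, 1))
            return &slots[i];
    }
    return NULL;   /* pool exhausted */
}

void slot_free(slot_t *s)
{
    atomic_store(&slot_busy[s - slots], 0);
}
```

The compare-and-swap either claims the slot or fails and moves on, so no lock is held while scanning and the allocator stays usable from interrupt context.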



Source: https://stackoverflow.com/questions/54998104/memory-management-for-layered-communications-stack-on-embedded-system
