structs with uint8_t on an MCU without a uint8_t datatype


Question


I am an embedded software developer and I want to interface to an external device. This device sends data via SPI. The structure of that data is predefined by the external device manufacturer and can't be edited. The manufacturer provides header files with many typedefs for all the data sent via SPI. The manufacturer also offers an API to handle the received packets in the correct way (I have access to the source of that API).

Now to my problem: the typedefed structures contain many uint8_t datatypes. Unfortunately, our MCU doesn't support a uint8_t datatype, because its smallest type is 16 bits wide (so even a char has 16 bits).

To use the API correctly, the structures must be filled with the data received via SPI. Since the incoming data is byte-packed, we can't just copy this data into the struct, because our structs use 16 bits for those 8-bit types. As a result, we need to do many bit-shift operations to assign the received data correctly.

EXAMPLE: (manufacturer's typedef struct)

typedef struct NETX_COMMUNICATION_CHANNEL_INFOtag
{
  uint8_t   bChannelType;              //uint16_t in our system
  uint8_t   bChannelId;                //uint16_t in our system
  uint8_t   bSizePositionOfHandshake;  //uint16_t in our system
  uint8_t   bNumberOfBlocks;           //uint16_t in our system
  uint32_t  ulSizeOfChannel;           
  uint16_t  usCommunicationClass;      
  uint16_t  usProtocolClass;           
  uint16_t  usProtocolConformanceClass;
  uint8_t   abReserved[2];             //uint16_t in our system
} NETX_COMMUNICATION_CHANNEL_INFO;
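
For illustration, unpacking just this one struct by hand on our 16-bit-char target would look roughly like the following sketch (the buffer name rxBuffer, the low-byte-first packing of the 16-bit words and the little-endian wire order are only assumptions for the example):

/* Sketch only: rxBuffer holds the raw SPI stream with two 8-bit wire bytes
   packed into each 16-bit word, low byte first; multi-byte fields are assumed
   to arrive little-endian. On our MCU the uint8_t members are 16 bits wide. */
void unpack_channel_info(const uint16_t *rxBuffer,
                         NETX_COMMUNICATION_CHANNEL_INFO *info)
{
    info->bChannelType               =  rxBuffer[0]       & 0xFFu;   /* wire byte 0       */
    info->bChannelId                 = (rxBuffer[0] >> 8) & 0xFFu;   /* wire byte 1       */
    info->bSizePositionOfHandshake   =  rxBuffer[1]       & 0xFFu;   /* wire byte 2       */
    info->bNumberOfBlocks            = (rxBuffer[1] >> 8) & 0xFFu;   /* wire byte 3       */
    info->ulSizeOfChannel            =  (uint32_t)rxBuffer[2]
                                     | ((uint32_t)rxBuffer[3] << 16);/* wire bytes 4..7   */
    info->usCommunicationClass       =  rxBuffer[4];                 /* wire bytes 8..9   */
    info->usProtocolClass            =  rxBuffer[5];                 /* wire bytes 10..11 */
    info->usProtocolConformanceClass =  rxBuffer[6];                 /* wire bytes 12..13 */
    info->abReserved[0]              =  rxBuffer[7]       & 0xFFu;   /* wire byte 14      */
    info->abReserved[1]              = (rxBuffer[7] >> 8) & 0xFFu;   /* wire byte 15      */
}

And something like this would have to be written, field by field, for every packet type the device can send.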

Can anybody think of an easy workaround for this problem? I really don't want to write a separate bit-shift operation for every received packet type (it wastes performance, time, and space).

My idea (using bitfields to pack 2 x uint8_t into a uint16_t or 4 x uint8_t into a uint32_t):

typedef struct NETX_COMMUNICATION_CHANNEL_INFOtag
{
  struct packet_uint8{
    uint32_t  bChannelType              :8;
    uint32_t  bChannelId                :8;
    uint32_t  bSizePositionOfHandshake  :8;
    uint32_t  bNumberOfBlocks           :8;
  }packet_uint8;
  uint32_t  ulSizeOfChannel;               
  uint16_t  usCommunicationClass;          
  uint16_t  usProtocolClass;               
  uint16_t  usProtocolConformanceClass;    
  uint16_t  abReserved;                    
} NETX_COMMUNICATION_CHANNEL_INFO;

Now I am not sure if this solution is going to work, since the order of the bits inside a bitfield is not necessarily the order of declaration in the source file. (Or is it, if all the bitfields have the same size?)

I hope I described the problem well enough for you to understand.

Thanks and Regards.


Answer 1:


Your compiler manual should describe how the bit fields are laid out. Read it carefully. There is also something called __attribute__((byte_peripheral)) that should help with packing bitfields sanely in memory-mapped devices.


If you're unsure about the bitfields, just use uint16_t for these fields and an access macro with bit shifts, for example

#define FIRST(x) ((x) >> 8)
#define SECOND(x) ((x) & 0xFF)

...
    uint16_t channel_type_and_id;
...

int channel_type = FIRST(x->channel_type_and_id);
int channel_id = SECOND(x->channel_type_and_id);

Then you just need to be sure of the byte order of the platform. If you need to change endianness (which the MCU seems to support?), you can just redefine these macros.
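
For example, if the two wire bytes turn out to land in the opposite halves of the 16-bit word, only the macro bodies change (same illustrative names):

/* Swapped variants: first wire byte in the low half, second in the high half */
#define FIRST(x)  ((x) & 0xFF)
#define SECOND(x) ((x) >> 8)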


A bitfield would most probably still be implemented in terms of bit shifts, so there wouldn't be much savings - and if there are byte-access functions for registers, then the compiler would know to optimize x & 0xff to use them.




Answer 2:


According to the linked compiler documentation, byte access is done through intrinsics:

To access data in increments of 8 bits, use the __byte() and __mov_byte() intrinsics described in Section 7.5.6.
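
As a rough sketch only (assuming the __byte() intrinsic indexes 8-bit bytes inside an array of 16-bit ints, as the quote describes, and an invented rxWords buffer), filling the first fields could look like this:

/* Sketch: pull individual 8-bit wire bytes out of the received 16-bit words
   with the __byte() intrinsic, per the quoted documentation. */
void fill_from_bytes(int *rxWords, NETX_COMMUNICATION_CHANNEL_INFO *info)
{
    info->bChannelType             = __byte(rxWords, 0);  /* wire byte 0 */
    info->bChannelId               = __byte(rxWords, 1);  /* wire byte 1 */
    info->bSizePositionOfHandshake = __byte(rxWords, 2);  /* wire byte 2 */
    info->bNumberOfBlocks          = __byte(rxWords, 3);  /* wire byte 3 */
}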

If you wanted to, you could make a new type to encapsulate how bytes should be accessed - something like a pair of bytes or a TwoByte class that has a size of 16 bits.

For inspiration, take a look at how the std::bitset template class is implemented in the STL for an analogous problem. https://en.cppreference.com/w/cpp/utility/bitset

As I posted in my other answer, I still believe your bitfield could work - even though it might be platform-specific. Basically, if it works out, the compiler should put in the correct bit-shift operations.




Answer 3:


The bitfield approach may work in practice, although you do need some way to verify or make sure that it is packed in the correct way for your target platform. The bitfield approach will not be portable since, as you state yourself, the order of bitfields is platform-dependent.
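
One way to gain that confidence is a small self-test against the packet_uint8 bitfield from your question, for example (a sketch only; the expected constant assumes the wire bytes land lowest byte first in the 32-bit word):

/* Sketch: write distinct values into the bitfield and compare the underlying
   32-bit word with what the byte-packed wire data would produce. */
union layout_check
{
    struct packet_uint8 fields;
    uint32_t            raw;
};

static int bitfield_layout_matches_wire(void)
{
    union layout_check u;

    u.fields.bChannelType             = 0x11;
    u.fields.bChannelId               = 0x22;
    u.fields.bSizePositionOfHandshake = 0x33;
    u.fields.bNumberOfBlocks          = 0x44;

    /* Adjust the expected value if your compiler documents a different
       bitfield order. */
    return u.raw == 0x44332211uL;
}

If the check fails on the target, the field order in the bitfield (or the expected constant) has to be flipped accordingly.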



Source: https://stackoverflow.com/questions/53204008/structs-with-uint8-t-on-a-mcu-without-uint8-t-datatype
