I'm using the IAR Workbench compiler with MISRA C:2004 checking turned on.
The fragment is:
#define UNS_32 unsigned int
UNS_32 arg = 3U;
UNS_32 converted_arg = (UNS_32) arg;
UNS_32 irq_source = (UNS_32) (1U << converted_arg);
Second question first:
How do I make the code MISRA C:2004 compliant?
You can write this in a MISRA-compliant manner as follows:
typedef unsigned int UNS_32;

UNS_32 test(void);

UNS_32 test(void)
{
    UNS_32 original_arg = 3U;
    UNS_32 irq_source = 1UL << original_arg;
    return irq_source;
}
Back to the first question:
What is really going on here?
First, Rule 10.3 says that a complex integer expression shall not be cast to a type wider than its underlying type.
One key to understanding the error message is the concept of the underlying type, which is a MISRA-specific concept. In short, the underlying type of a constant is the smallest type that its value fits into. In this case 1U has the underlying type unsigned char, despite having the language type unsigned int.
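To make this concrete, here is roughly how a few constants work out under that model, assuming 8-bit char, 16-bit short and 32-bit int (the values and variable names are only illustrative):

unsigned int a = 1U;      /* language type: unsigned int, underlying type: unsigned char  */
unsigned int b = 300U;    /* language type: unsigned int, underlying type: unsigned short */
unsigned int c = 70000U;  /* needs more than 16 bits, so the underlying type is unsigned int */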
The rationale behind rule 10.3 is to avoid cases where the result of an operation is used in a context that is wider than its operands. The standard example of this is multiplication, where alpha and beta are 16-bit types:
uint32_t res = alpha * beta;
Here, if int is 16 bits, the multiplication will be performed in 16 bits and the result then converted to 32 bits. On the other hand, if int is 32 bits or larger, the multiplication will be performed in that larger precision. Concretely, this makes the result different when multiplying, say, 0x4000 and 0x10: the mathematical product is 0x40000, which wraps to 0 in 16-bit unsigned arithmetic but is preserved in 32 bits.
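If you want to see the difference on a desktop compiler (where int is at least 32 bits), a small sketch along these lines emulates both behaviours; the cast in the first case is there only to mimic a 16-bit int:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t alpha = 0x4000;
    uint16_t beta  = 0x10;

    /* Emulates a 16-bit int: the product is truncated to 16 bits before widening. */
    uint32_t res16 = (uint16_t)(alpha * beta);           /* 0x00000000 */

    /* What a 32-bit (or wider) int gives: the full product is preserved. */
    uint32_t res32 = (uint32_t)alpha * (uint32_t)beta;   /* 0x00040000 */

    printf("0x%lX 0x%lX\n", (unsigned long)res16, (unsigned long)res32);
    return 0;
}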
MISRA rule 10.3 addresses this by requiring that the result be placed in a temporary variable, which can then be converted to the larger type. That way you are forced to write the code one way or the other.
If the intention is to use a 16-bit multiplication:
uint16_t tmp = alpha * beta;
uint32_t res = tmp;
On the other hand, if the intention is a 32-bit multiplication:
uint32_t res = (uint32_t)alpha * (uint32_t)beta;
So, in this case, the expression 1U << converted_arg is the potential problem: if the result is meant to be wider than 16 bits, this could go wrong on a target where int is 16 bits. MISRA does allow you, however, to write 1UL << converted_arg or (UNS_32)((UNS_32)1U << converted_arg). You mentioned that the MISRA checker issued an error in the latter case -- mine does not, so please check again.
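Applied to the fragment in the question, either of these forms should therefore keep the checker happy (a sketch, reusing the question's variable names):

UNS_32 irq_source_a = 1UL << converted_arg;                   /* widen the left operand first      */
UNS_32 irq_source_b = (UNS_32)((UNS_32)1U << converted_arg);  /* or cast both constant and result  */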
So, the way I see it, the MISRA C checker you used correctly identified a violation of rule 10.3.
In C89, which the MISRA rules specify, the type of an integer constant suffixed with U is the first of the list "unsigned int, unsigned long int" in which its value can be represented. This means that the type of 1U must be unsigned int.
The definition of the bitwise shift operators specifies that the integer promotions are performed on each operand (this does not change an unsigned int), and that the type of the result is the type of the promoted left operand. In this case, the type of the result of (1U << converted_arg) is therefore unsigned int.
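This is easy to verify on a modern desktop compiler, outside the MISRA rules themselves. A C11 _Generic probe (purely a desk-check sketch, not something for the target code) reports the type the shift produces:

#include <stdio.h>

#define TYPE_NAME(x) _Generic((x),              \
    unsigned char:  "unsigned char",            \
    unsigned short: "unsigned short",           \
    unsigned int:   "unsigned int",             \
    unsigned long:  "unsigned long",            \
    default:        "something else")

int main(void)
{
    unsigned int converted_arg = 3U;

    /* The integer promotions leave 1U as unsigned int, and the result of <<
       takes the type of the promoted left operand. */
    puts(TYPE_NAME(1U << converted_arg));   /* prints "unsigned int" */
    return 0;
}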
The only explicit conversion here is the cast of this unsigned int value to unsigned int, so this must be what the compiler is warning about - although there is no unsigned char in sight, which means that the checker appears to be buggy.
Technically, though, this cast from unsigned int to unsigned int does appear to violate rule 10.3, which says that the result of a "complex expression" may only be cast to a narrower type - and casting to the same type is clearly not casting to a narrower type.
The cast is unnecessary - I would simply omit it.
In the earliest days of MISRA, programs to which it was applied would sometimes be targeted toward compilers written before the C89 Standard was published, and whose behavior did not comply with it. On the machines for which C was invented, operations on 16-bit values cost the same as operations on 8-bit values. Promoting char values to int, and truncating the results when storing back to char, was actually cheaper and easier than performing arithmetic on char values directly. While the C Standard, once published, would mandate that all C implementations promote narrower integer values either to an int type that can accommodate at least the range -32767..32767, to an unsigned type that can accommodate at least 0..65535, or to some larger type, 1980s compilers that targeted 8-bit machines didn't always do that.

Although it may seem crazy nowadays to try to use a C compiler that can't meet those requirements, programmers in the 1980s often faced the choice between using a "C-ish" compiler or writing everything in assembly language. Some of the rules in MISRA, including the "underlying type" rules, were designed to ensure that programs would work even if run on weird implementations that treat int as an 8-bit type.