What is the best (cleanest, most efficient) way to write saturating addition in C?
The function or macro should add two unsigned inputs (both 16- and 32-bit versions are needed) and return the all-ones value (0xFFFF or 0xFFFFFFFF) if the sum overflows.
Using C++ you could write a more flexible variant of Remo.D's solution:
#include <limits>
#include <type_traits>

template <typename T>
T sadd(T first, T second)
{
    static_assert(std::is_integral<T>::value, "sadd is not defined for non-integral types");
    return first > std::numeric_limits<T>::max() - second
               ? std::numeric_limits<T>::max()
               : first + second;
}
This can be easily translated to C using the limits defined in limits.h. Please also note that the fixed-width integer types might not be available on your system.
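For example, a minimal sketch of such a C translation for 32-bit values, assuming <stdint.h> (and hence uint32_t/UINT32_MAX) is available:

#include <stdint.h>

/* Saturating unsigned 32-bit add; clamps to UINT32_MAX instead of wrapping. */
uint32_t sadd_u32(uint32_t first, uint32_t second)
{
    return first > UINT32_MAX - second ? UINT32_MAX : first + second;
}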
Just in case someone wants to know a branch-free implementation using 2's-complement 32-bit integers.
Warning! This code uses the undefined operation "shift right by -1" and therefore relies on the property of the x86 SAR instruction to mask the count operand to 5 bits. Note that the signed addition itself can also overflow, which is likewise undefined behaviour in C.
#include <stdint.h>

int32_t sadd(int32_t a, int32_t b)
{
    int32_t sum = a + b;
    /* All ones (-1) if the addition overflowed, 0 otherwise. */
    int32_t overflow = ((a ^ sum) & (b ^ sum)) >> 31;
    /* On overflow this yields INT32_MAX or INT32_MIN, matching the sign of the operands. */
    return (overflow << 31) ^ (sum >> overflow);
}
It's the best implementation known to me.
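A quick usage check of the function above (this relies on the platform-specific wrap and shift behaviour described in the warning; it is not portable C):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    printf("%d\n", sadd(INT32_MAX, 1));  /* 2147483647 on typical x86 builds */
    printf("%d\n", sadd(INT32_MIN, -1)); /* -2147483648 */
    printf("%d\n", sadd(100, 23));       /* 123 */
    return 0;
}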
On ARM you may already have saturated arithmetic built in. The ARMv5 DSP extensions can saturate registers to any bit length. Saturation is also usually cheap on ARM because most instructions can be executed conditionally.
ARMv6 even has saturated addition, subtraction and all the other stuff for 32-bit and packed numbers.
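As a sketch, assuming a toolchain that exposes the ACLE DSP intrinsics via <arm_acle.h> (the __qadd intrinsic performs signed 32-bit saturation and maps to the QADD instruction):

#include <stdint.h>
#if defined(__ARM_FEATURE_DSP)
#include <arm_acle.h>

int32_t sadd32_arm(int32_t a, int32_t b)
{
    /* Saturates to INT32_MIN/INT32_MAX instead of wrapping. */
    return __qadd(a, b);
}
#endif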
On the x86 you get saturated arithmetic either via MMX or SSE.
All this needs assembler, so it's not what you've asked for.
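That said, compilers expose these instructions through intrinsics, so a sketch along the following lines avoids hand-written assembly (it assumes an SSE2-capable target and <emmintrin.h>):

#include <stdint.h>
#include <emmintrin.h>

/* Saturating 16-bit unsigned add via PADDUSW; only lane 0 of the vectors is used. */
uint16_t sadd16_sse2(uint16_t a, uint16_t b)
{
    __m128i va = _mm_cvtsi32_si128(a);
    __m128i vb = _mm_cvtsi32_si128(b);
    return (uint16_t)_mm_cvtsi128_si32(_mm_adds_epu16(va, vb));
}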
There are C tricks to do saturated arithmetic as well. This little piece of code does saturated addition on the four bytes of a dword. It is based on the idea of computing 32 half-adders in parallel, i.e. adding the numbers without letting carries propagate between the byte lanes.
This is done first. Then the carries are calculated, added back in, and each lane that would overflow is replaced with a saturation mask.
#include <stdint.h>

uint32_t SatAddUnsigned8(uint32_t x, uint32_t y)
{
    uint32_t signmask = 0x80808080;
    uint32_t t0 = (y ^ x) & signmask; /* bytes where exactly one top bit is set */
    uint32_t t1 = (y & x) & signmask; /* bytes where both top bits are set (certain overflow) */
    x &= ~signmask;                   /* clear the top bits so the byte sums below ... */
    y &= ~signmask;                   /* ... cannot carry into the neighbouring byte */
    x += y;                           /* add the low 7 bits of every byte in parallel */
    t1 |= t0 & x;                     /* overflow also where one top bit was set and the low bits carried into bit 7 */
    t1 = (t1 << 1) - (t1 >> 7);       /* turn each overflow sign bit into a 0xFF byte mask */
    return (x ^ t0) | t1;             /* restore the top bits, then saturate the overflowed bytes */
}
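For example, each byte lane saturates independently:

uint32_t r = SatAddUnsigned8(0xFF804020u, 0x01804020u);
/* 0xFF+0x01 -> 0xFF, 0x80+0x80 -> 0xFF, 0x40+0x40 -> 0x80, 0x20+0x20 -> 0x40,
   so r == 0xFFFF8040 */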
You can get the same for 16 bits (or any kind of bit-field) by changing the signmask constant and the shifts at the bottom like this:
uint32_t SatAddUnsigned16(uint32_t x, uint32_t y)
{
    uint32_t signmask = 0x80008000;
    uint32_t t0 = (y ^ x) & signmask;
    uint32_t t1 = (y & x) & signmask;
    x &= ~signmask;
    y &= ~signmask;
    x += y;
    t1 |= t0 & x;
    t1 = (t1 << 1) - (t1 >> 15);
    return (x ^ t0) | t1;
}
uint32_t SatAddUnsigned32(uint32_t x, uint32_t y)
{
    uint32_t signmask = 0x80000000;
    uint32_t t0 = (y ^ x) & signmask;
    uint32_t t1 = (y & x) & signmask;
    x &= ~signmask;
    y &= ~signmask;
    x += y;
    t1 |= t0 & x;
    t1 = (t1 << 1) - (t1 >> 31);
    return (x ^ t0) | t1;
}
The code above does the same for 16-bit and 32-bit values.
If you don't need the ability to add and saturate multiple values in parallel, just mask out the bits you need, as in the sketch below. On ARM you also want to change the signmask constant, because ARM can't load every possible 32-bit constant in a single cycle.
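For instance, a hypothetical single-value wrapper (the name sadd_u8 is made up here) that uses only the lowest byte lane of SatAddUnsigned8:

#include <stdint.h>

uint8_t sadd_u8(uint8_t a, uint8_t b)
{
    /* The upper three byte lanes are zero, so only the low lane does any work. */
    return (uint8_t)(SatAddUnsigned8(a, b) & 0xFFu);
}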
Edit: The parallel versions are most likely slower than the straightforward methods for a single value, but they are faster if you have to saturate more than one value at a time.
#include <stdint.h>

uint32_t saturate_add32(uint32_t a, uint32_t b)
{
    uint32_t sum = a + b;
    /* Unsigned overflow wraps, so on overflow the sum is smaller than either operand. */
    if ((sum < a) || (sum < b))
        return ~((uint32_t)0);
    else
        return sum;
} /* saturate_add32 */
uint16_t saturate_add16(uint16_t a, uint16_t b)
{
    uint16_t sum = a + b;
    if ((sum < a) || (sum < b))
        return ~((uint16_t)0);
    else
        return sum;
} /* saturate_add16 */
Edit: Now that you've posted your version, I'm not sure mine is any cleaner/better/more efficient/more studly.
The current implementation we are using is:
#define sadd16(a, b) (uint16_t)( ((uint32_t)(a)+(uint32_t)(b)) > 0xffff ? 0xffff : ((a)+(b)))
#define sadd32(a, b) (uint32_t)( ((uint64_t)(a)+(uint64_t)(b)) > 0xffffffff ? 0xffffffff : ((a)+(b)))
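Note that these macros evaluate their arguments twice. If that matters, an equivalent pair of inline functions (the names used here are made up) evaluates each argument only once:

#include <stdint.h>

static inline uint16_t sadd16_fn(uint16_t a, uint16_t b)
{
    uint32_t sum = (uint32_t)a + b;
    return (uint16_t)(sum > 0xffff ? 0xffff : sum);
}

static inline uint32_t sadd32_fn(uint32_t a, uint32_t b)
{
    uint64_t sum = (uint64_t)a + b;
    return (uint32_t)(sum > 0xffffffff ? 0xffffffff : sum);
}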
If you care about performance, you really want to do this sort of stuff in SIMD, where x86 has native saturating arithmetic.
Because of this lack of saturating arithmetic in scalar math, there are cases in which operations done with 4-variable-wide SIMD are more than 4 times faster than the equivalent C (and correspondingly so with 8-variable-wide SIMD):
sub8x8_dct8_c: 1332 clocks
sub8x8_dct8_mmx: 182 clocks
sub8x8_dct8_sse2: 127 clocks