Funny that you mention malloc() specifically in your example.
In every hard-real-time, deeply embedded system that I've worked on, memory allocation is managed specially (usually not the heap, but fixed memory pools or something similar)... and also, whenever possible, all memory allocation is done up-front during initialization. This is easier than most people would believe.
malloc() is vulnerable to fragmentation, is non-deterministic, and doesn't discriminate between memory types. With memory pools, you can have pools backed by super-fast SRAM, fast DRAM, battery-backed RAM (I've seen it), etc...
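To make that concrete, here's a minimal fixed-block pool sketch in C. The names and sizes are illustrative, not from any particular RTOS: all storage is reserved statically, initialization chains the blocks into a free list, and alloc/free are O(1) with nothing to fragment. Pinning the backing array to a specific memory (SRAM vs. DRAM) would be done with a toolchain-specific linker-section attribute.

```c
#include <stddef.h>
#include <stdint.h>

#define POOL_BLOCK_SIZE  64   /* bytes per block (illustrative) */
#define POOL_NUM_BLOCKS  32   /* total blocks (illustrative)    */

typedef union block {
    union block *next;                 /* free-list link while unused */
    uint8_t payload[POOL_BLOCK_SIZE];  /* caller data once allocated  */
} block_t;

/* A toolchain-specific section attribute here (e.g. GCC's
 * __attribute__((section(...)))) is how you'd back this with fast SRAM. */
static block_t pool_storage[POOL_NUM_BLOCKS];
static block_t *free_list;

/* Called once, up-front, during system initialization. */
void pool_init(void)
{
    for (size_t i = 0; i < POOL_NUM_BLOCKS - 1; i++)
        pool_storage[i].next = &pool_storage[i + 1];
    pool_storage[POOL_NUM_BLOCKS - 1].next = NULL;
    free_list = &pool_storage[0];
}

/* O(1) and deterministic: pop the free-list head (NULL if exhausted). */
void *pool_alloc(void)
{
    block_t *b = free_list;
    if (b != NULL)
        free_list = b->next;
    return b;
}

/* O(1) and deterministic: push the block back onto the free list. */
void pool_free(void *p)
{
    block_t *b = (block_t *)p;
    b->next = free_list;
    free_list = b;
}
```

Note this sketch isn't interrupt-safe on its own; if alloc/free can be called from ISR context, you'd wrap the list manipulation in a critical section (more on that below).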
There are a hundred other issues (in answer to your original question), but memory allocation is a big one.
Also:
- Respect for / knowledge of the hardware platform
- Not automatically assuming the hardware is perfect or even functional
- Awareness of certain language aspects & features (e.g., exceptions in C++) that can cause things to go sideways quickly
- Awareness of CPU loading and memory utilization
- Awareness of interrupts, pre-emption, and the implications for shared data (where shared data is absolutely necessary -- the less of it, the better; see the critical-section sketch after this list)
- Most embedded systems are data/event driven, as opposed to polled; there are exceptions of course
- Most embedded developers are pretty comfortable with the concept of state machines and stateful behavior/modeling (a small example follows below)
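On the interrupts/shared-data point: the usual pattern is to guard any main-loop access to ISR-shared data with a short critical section. A sketch, with the interrupt-masking calls stubbed out so it compiles standalone (on a real target they'd be the vendor's intrinsics, e.g. CMSIS __disable_irq()/__enable_irq() on ARM Cortex-M):

```c
#include <stdint.h>

/* Stubs so the sketch builds on a desktop; swap in the real
 * interrupt-masking intrinsics for your target. */
static void irq_lock(void)   { /* __disable_irq(); */ }
static void irq_unlock(void) { /* __enable_irq();  */ }

/* Data shared between an ISR (producer) and the main loop (consumer).
 * 'volatile' stops the compiler from caching it across that boundary. */
static volatile uint32_t rx_count;

/* ISR context: do the minimum and get out. */
void uart_rx_isr(void)
{
    rx_count++;
}

/* Main-loop context: the read-and-clear must be atomic with respect to
 * the ISR, so bracket it in a critical section -- and keep it short. */
uint32_t rx_count_take(void)
{
    irq_lock();
    uint32_t n = rx_count;
    rx_count = 0;
    irq_unlock();
    return n;
}
```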
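And on state machines: the classic embedded idiom is an explicit set of states and events with a single step function, either switch-based or table-driven. A toy example (the door-controller states and events are invented purely for illustration):

```c
#include <stdio.h>

typedef enum { ST_IDLE, ST_OPENING, ST_OPEN, ST_CLOSING } state_t;
typedef enum { EV_OPEN_CMD, EV_LIMIT_HIT, EV_CLOSE_CMD } event_t;

/* One step of the machine: given the current state and an event,
 * return the next state. Unhandled events change nothing. */
static state_t step(state_t s, event_t e)
{
    switch (s) {
    case ST_IDLE:    if (e == EV_OPEN_CMD)  return ST_OPENING; break;
    case ST_OPENING: if (e == EV_LIMIT_HIT) return ST_OPEN;    break;
    case ST_OPEN:    if (e == EV_CLOSE_CMD) return ST_CLOSING; break;
    case ST_CLOSING: if (e == EV_LIMIT_HIT) return ST_IDLE;    break;
    }
    return s;
}

int main(void)
{
    state_t s = ST_IDLE;
    const event_t evs[] = { EV_OPEN_CMD, EV_LIMIT_HIT, EV_CLOSE_CMD, EV_LIMIT_HIT };
    for (size_t i = 0; i < sizeof evs / sizeof evs[0]; i++) {
        s = step(s, evs[i]);
        printf("after event %d -> state %d\n", (int)evs[i], (int)s);
    }
    return 0;
}
```

The appeal is that every state/event pair is accounted for, which is exactly what you want when the "events" are interrupts and the machine has to behave sanely no matter what order they arrive in.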