The memory referred to by a holds a pattern of bits which the processor uses to represent 12.5. How does it represent it: IEEE 754 single precision. What does that bit pattern look like? Put it through an IEEE 754 calculator and you get 0x41480000. What is that when interpreted as an int? 1095237632.
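You can check this yourself. Here's a minimal sketch (the float a and the value 12.5 are from the question; memcpy is used because it's the well-defined way to reinterpret the bytes, rather than pointer casting):

```c
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float a = 12.5f;
    uint32_t bits;

    /* Copy the float's raw bytes into a same-sized integer. */
    memcpy(&bits, &a, sizeof bits);

    printf("hex: 0x%08" PRIX32 "\n", bits);  /* 0x41480000 */
    printf("int: %" PRIu32 "\n", bits);      /* 1095237632 */
    return 0;
}
```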
Why do you get a different value when you don't do the cast? I'm not 100% sure, but I'd guess it's because compilers can use a calling convention that passes floating-point arguments in a different location than integer arguments, so when printf goes looking for the first integer argument after the format string, there's nothing predictable there.
(Or, more likely, as @Lindydancer points out, the float's bits may be passed in the 'right' place for an int, but because the float is first promoted to double, with its significand extended by zeros, there are 0s where printf expects the first int to be.)
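Here's a sketch of what that promotion looks like for 12.5, assuming a 64-bit IEEE 754 double (the variable names are mine):

```c
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float a = 12.5f;

    /* A float passed to a variadic function like printf is promoted to double. */
    double promoted = a;
    uint64_t bits;
    memcpy(&bits, &promoted, sizeof bits);

    printf("double bits: 0x%016" PRIX64 "\n", bits);  /* 0x4029000000000000 */

    /* The low 32 bits are all zero, so a printf that reads an int-sized
       chunk from that position sees 0.  (printf("%d", a) itself is
       undefined behaviour, so what actually happens depends on the ABI.) */
    return 0;
}
```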