Programming languages (e.g. C, C++, and Java) usually provide several types for integer arithmetic: signed and unsigned types in a range of sizes.
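For illustration, here is a minimal C++ sketch that prints the range each of these types can represent on the current platform. It assumes a typical setup where `short` is 16 bits and `int` is 32 bits; the exact limits are implementation-defined.

```cpp
#include <iostream>
#include <limits>

int main() {
    // Ranges are implementation-defined; these calls report what the
    // current platform actually provides.
    std::cout << "short:          " << std::numeric_limits<short>::min()
              << " to " << std::numeric_limits<short>::max() << '\n';
    std::cout << "unsigned short: " << std::numeric_limits<unsigned short>::min()
              << " to " << std::numeric_limits<unsigned short>::max() << '\n';
    std::cout << "int:            " << std::numeric_limits<int>::min()
              << " to " << std::numeric_limits<int>::max() << '\n';
    std::cout << "unsigned int:   " << std::numeric_limits<unsigned int>::min()
              << " to " << std::numeric_limits<unsigned int>::max() << '\n';
}
```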
In general, you should use the type that suits the requirements of your program and best promotes readability and future maintainability.
Having said that, as Chris points out, people do use short rather than int to save memory. Consider the following scenario: you have 1,000,000 (a fairly small number) values to store. An int is typically 32 bits (4 bytes), while a short is typically 16 bits (2 bytes). If you know you'll never need to represent a number larger than 32,767, you could use a short, or an unsigned short if you know you'll never need to represent a number larger than 65,535. This would save (4 - 2) x 1,000,000 = 2,000,000 bytes, roughly 2 MB of memory.
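A rough C++ sketch of that trade-off, assuming the usual 4-byte int and 2-byte short (sizeof varies by platform, so the printed savings may differ):

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    const std::size_t count = 1000000;

    std::vector<int> as_ints(count);              // typically 4 bytes per element
    std::vector<unsigned short> as_shorts(count); // typically 2 bytes per element

    std::cout << "int array:   " << as_ints.size() * sizeof(int) << " bytes\n";
    std::cout << "short array: " << as_shorts.size() * sizeof(unsigned short) << " bytes\n";
    std::cout << "saved:       "
              << as_ints.size() * (sizeof(int) - sizeof(unsigned short)) << " bytes\n";
}
```

The actual savings also depend on allocator overhead and alignment, but for a large array the element size dominates.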