Why is there a memory leak when a program does not release a memory object before it exits?
Well, the OS typically cleans up your mess for you when the process exits. However, what happens when your program runs for an arbitrarily long time and has leaked so much memory that it can't allocate any more? You crash, and that's not good.
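For instance, here is a minimal sketch (a hypothetical program, not taken from anywhere) of how that failure looks: a loop that keeps allocating and never freeing will eventually exhaust memory, at which point `operator new` throws `std::bad_alloc` (or, on some systems, the OS simply kills the process).

```cpp
#include <iostream>
#include <new>

int main() {
    try {
        while (true) {
            // Allocate 1 MB every iteration and "forget" to delete it.
            char* block = new char[1024 * 1024];
            block[0] = 0;  // touch the memory so the allocation is real
        }
    } catch (const std::bad_alloc&) {
        std::cerr << "Allocation failed: the process has leaked itself to death.\n";
        return 1;
    }
}
```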
Isn't a good programming language design supposed to maintain a "foo-table" that takes care of this situation?
No. Some programming languages have automated memory management, some do not. There are benefits and drawbacks to both models. Languages with manual memory management let you say exactly when and where resources are allocated and released, i.e., it is very deterministic. However, a relative beginner will inevitably write code that leaks while they are getting used to dealing with memory management (see the sketch below).
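As a hedged illustration of that kind of beginner leak (the function name and buffer size here are made up): the release is written on the happy path, but an early return skips it.

```cpp
#include <cstring>

// process() is a made-up function for illustration.
bool process(const char* input) {
    char* scratch = new char[4096];    // explicit, deterministic allocation

    if (std::strlen(input) == 0) {
        return false;                  // leak: this path never reaches the delete
    }

    // ... do some work with scratch ...

    delete[] scratch;                  // explicit, deterministic release
    return true;
}

int main() {
    process("");        // takes the early-return path and leaks 4096 bytes
    process("hello");   // takes the happy path and releases correctly
}
```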
Automated schemes are great for the programmer, but you don't get the same level of determinism. If I am writing a hardware driver, that may not be a good model for me. If I am writing a simple GUI, then I probably don't care about some objects persisting a bit longer than they need to, so I will take an automated management scheme every time. That's not to say that GC'd languages are only for 'simple' tasks; some tasks just require tighter control over your resources (not all platforms have 4GB+ of memory for you to play around in).
There are patterns that you can use to help you with memory management. The canonical example would be RAII (Resource Acquisition Is Initialization).
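A minimal RAII sketch, assuming a hand-rolled wrapper class (`Scratch` is a made-up name; in real code `std::vector` or `std::unique_ptr` already give you this): the resource is acquired in the constructor and released in the destructor, so every path out of the scope cleans up automatically.

```cpp
#include <cstddef>
#include <cstring>

// Scratch is a made-up RAII wrapper: acquire in the constructor,
// release in the destructor.
class Scratch {
public:
    explicit Scratch(std::size_t n) : data_(new char[n]) {}  // acquire
    ~Scratch() { delete[] data_; }                           // release
    Scratch(const Scratch&) = delete;                        // no accidental copies
    Scratch& operator=(const Scratch&) = delete;
    char* get() { return data_; }

private:
    char* data_;
};

bool process(const char* input) {
    Scratch scratch(4096);             // acquired here

    if (std::strlen(input) == 0) {
        return false;                  // destructor still runs: no leak
    }

    // ... do some work with scratch.get() ...
    return true;                       // destructor runs on this path too
}

int main() {
    process("");
    process("hello");
}
```

Because the destructor runs on every exit path, including exceptions, the early-return leak from the earlier sketch simply can't happen.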