I agree that Introduction to Algorithms is a good book. For more detail on, e.g., how to solve recurrence relations, see Knuth's Concrete Mathematics. A good book on Computational Complexity itself is the one by Papadimitriou, but that last book might be more thorough than you need if you only want to calculate the complexity of given algorithms.
A short overview of big-O/big-Omega/big-Theta notation:
- If you can give an algorithm that solves a problem in time c*(n log n) (c being a constant), then the time complexity of that problem is O(n log n). The big-O gets rid of the c, that is, of any constant factor that does not depend on the input size n. Big-O gives an upper bound on the running time, because by giving such an algorithm you have shown that the problem can be solved in that amount of time.
- If you can give a proof that any algorithm solving the problem takes at least time c*(n log n), then the problem is in Omega(n log n). Big-Omega gives a lower bound on the problem. In most cases lower bounds are much harder to prove than upper bounds, because you have to show that every possible algorithm for the problem takes at least c*(n log n) steps. Knowing a lower bound does not mean knowing an algorithm that reaches it.
- If you have a lower bound, e.g. Omega(n log n), and an algorithm that solves the problem in that time (an upper bound), then the problem is in Theta(n log n). This means an "optimal" algorithm for this problem is known; comparison-based sorting is the classic example (see the sketch after this list). Of course this notation hides the constant factor c, which can be quite big (that's why I wrote optimal in quotes).
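To make the three notations concrete, here is a minimal merge sort sketch in Python (my own illustration, not taken from any of the books above). Merge sort performs O(n log n) comparisons, which gives the upper bound for comparison-based sorting; together with the well-known Omega(n log n) lower bound for comparison sorts, that puts sorting in Theta(n log n).

```python
# Illustrative sketch only: merge sort as an example of an O(n log n) algorithm.
def merge_sort(items):
    """Sort a list in O(n log n) time: T(n) = 2*T(n/2) + O(n)."""
    if len(items) <= 1:
        return items                   # base case: already sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])     # sort left half:  T(n/2)
    right = merge_sort(items[mid:])    # sort right half: T(n/2)
    return merge(left, right)          # merge halves:    O(n)

def merge(left, right):
    """Merge two sorted lists in linear time."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```

Solving the recurrence T(n) = 2*T(n/2) + O(n), e.g. with the master theorem from Introduction to Algorithms, gives the O(n log n) bound.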
Note that when using these notations you are usually talking about the worst-case running time of an algorithm.
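As a small illustration of that point (again just a sketch of my own, assuming Python), linear search has a best case of O(1) but a worst case of Theta(n), and it is the worst case that the usual O(n) bound refers to:

```python
# Illustrative sketch: best case vs. worst case of the same algorithm.
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent."""
    for index, value in enumerate(items):
        if value == target:
            return index               # best case: target is first, O(1)
    return -1                          # worst case: all n elements checked, Theta(n)

print(linear_search([4, 7, 1, 9], 7))  # 1  (found after two comparisons)
print(linear_search([4, 7, 1, 9], 5))  # -1 (worst case: n comparisons)
```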