If you want to do hardware-accelerated video encoding and decoding of H.264 video on iOS, the only way to go is AVFoundation. Don't use third-party libraries for the encoding or decoding, because they are all currently CPU-bound and much slower than what you get from AVFoundation. The only reason to use a third-party encoder or decoder would be if you are working with a format that iOS doesn't support by default.
For hardware-accelerated decoding, you'll want to use an AVAssetReader instance (or one of the player classes for pure playback). With an AVAssetReader, I regularly see H.264-encoded video read and decoded at 2X real time or faster, so the iOS devices clearly apply some pretty good hardware acceleration for that.
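As a rough illustration of that reading path, here's a minimal Swift sketch (the BGRA output format, function name, and stripped-down error handling are my assumptions for the example, not code from GPUImage):

    import AVFoundation

    // Minimal sketch of the AVAssetReader decoding path.
    // The BGRA output format and the lack of error handling are simplifications.
    func readDecodedFrames(from url: URL) throws {
        let asset = AVAsset(url: url)
        guard let videoTrack = asset.tracks(withMediaType: .video).first else { return }

        let reader = try AVAssetReader(asset: asset)
        let output = AVAssetReaderTrackOutput(
            track: videoTrack,
            outputSettings: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        )
        reader.add(output)
        reader.startReading()

        // copyNextSampleBuffer() returns nil once the track is exhausted (or reading fails).
        while let sampleBuffer = output.copyNextSampleBuffer() {
            if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
                // Hand the decoded BGRA frame to your processing or display code here.
                _ = pixelBuffer
            }
        }
    }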
Similarly, for accelerated encoding, you'll use an AVAssetWriter. There are some tricks to getting AVAssetWriter to encode at the best speed (feeding in BGRA frames, using a pixel buffer pool, using the iOS 5.0 texture caches if reading from OpenGL ES), which I describe in detail within this answer.
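Here's a hedged sketch of that writer setup, showing the BGRA input and the adaptor's pixel buffer pool (the dimensions, file type, function names, and frame timing are placeholder assumptions, and the OpenGL ES texture-cache path is omitted):

    import AVFoundation

    // Minimal sketch of the AVAssetWriter encoding path: H.264 output, BGRA input frames,
    // and pixel buffers drawn from the adaptor's pool.
    func makeH264Writer(outputURL: URL, width: Int, height: Int) throws
        -> (writer: AVAssetWriter, input: AVAssetWriterInput, adaptor: AVAssetWriterInputPixelBufferAdaptor) {

        let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)

        let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: width,
            AVVideoHeightKey: height
        ])
        input.expectsMediaDataInRealTime = true

        // Request BGRA source buffers, per the advice above.
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: input,
            sourcePixelBufferAttributes: [
                kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
                kCVPixelBufferWidthKey as String: width,
                kCVPixelBufferHeightKey as String: height
            ]
        )

        writer.add(input)
        writer.startWriting()
        // In real code, start the session at your first frame's timestamp instead of .zero.
        writer.startSession(atSourceTime: .zero)
        return (writer, input, adaptor)
    }

    // Per frame: reuse a buffer from the adaptor's pool rather than allocating a new one.
    func appendFrame(to adaptor: AVAssetWriterInputPixelBufferAdaptor,
                     input: AVAssetWriterInput,
                     at time: CMTime) {
        guard input.isReadyForMoreMediaData, let pool = adaptor.pixelBufferPool else { return }

        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pixelBuffer)
        guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return }

        // ... fill `buffer` with BGRA pixel data here ...

        if !adaptor.append(buffer, withPresentationTime: time) {
            // Appending failed; check the writer's status and error in real code.
        }
    }

Pulling buffers from the adaptor's pool avoids repeated allocations per frame, which is part of what keeps the encoding path fast.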
If you want to see some code that uses the fastest paths I've found for accelerated encoding and decoding, you can look at my open source GPUImage framework, which, unlike the one linked by Anastasia, is totally free to use.