I realize that LLVM has a long way to go, but theoretically, can the optimizations that are in GCC/ICC/etc. for individual languages be applied to LLVM byte code? If so, does t
I don't know the details of the bytecode format used by LLVM, but I think the answer to your question is no.
Just consider dynamic vs. static typing. A dynamically typed language will usually end up slower than a statically typed one, because most of the type checking has to happen at run time instead of at compile time, and those checks are still there in the bytecode for the back end to chew on.
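To make that concrete, here is a rough sketch in plain C of what an addition looks like in each world. The tagged Value struct and dyn_add are purely illustrative (not any real runtime), but they show the kind of branching a dynamic-language front end has to emit, and which therefore shows up in the IR the optimizer sees:

```c
#include <stdio.h>

/* Hypothetical tagged value, as a dynamic-language runtime might use. */
typedef enum { TAG_INT, TAG_DOUBLE } Tag;

typedef struct {
    Tag tag;
    union { long i; double d; } as;
} Value;

/* Roughly what "a + b" in a dynamically typed language lowers to:
 * check the tags at run time, then pick the right arithmetic. */
Value dyn_add(Value a, Value b) {
    Value r;
    if (a.tag == TAG_INT && b.tag == TAG_INT) {
        r.tag = TAG_INT;
        r.as.i = a.as.i + b.as.i;
    } else {
        double x = (a.tag == TAG_INT) ? (double)a.as.i : a.as.d;
        double y = (b.tag == TAG_INT) ? (double)b.as.i : b.as.d;
        r.tag = TAG_DOUBLE;
        r.as.d = x + y;
    }
    return r;
}

/* What "a + b" in a statically typed language lowers to: a single add. */
long static_add(long a, long b) {
    return a + b;
}

int main(void) {
    Value a = { TAG_INT, { .i = 2 } };
    Value b = { TAG_DOUBLE, { .d = 3.5 } };
    Value r = dyn_add(a, b);
    printf("dynamic: %f, static: %ld\n", r.as.d, static_add(2, 3));
    return 0;
}
```

Unless the optimizer can prove the tags at compile time, that branching survives all the way through, no matter how good the shared back end is.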
There are other differences between languages that affect performance as well, such as garbage collection, array bounds checking, and exception handling, and they likewise leave extra work in the generated code regardless of the back end.