How would Big-O notation help in my day-to-day C# programming? Is it just an academic exercise?
Writing good software is largely about understanding and making informed decisions about trade-offs in your design. For example, sometimes you can tolerate a larger memory footprint in exchange for faster execution time; at other times you can sacrifice execution time for a smaller memory footprint, and so on.
Big-O notation formalizes these trade-offs: it describes how an algorithm's running time or memory use grows with the size of its input, so that software engineers can speak a common language about them. You may never have to formally prove the Big-O characteristics of an algorithm you design, but if you don't understand the concept at an abstract level, chances are you won't be making good trade-offs in the software you develop.
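To make that concrete, here is a minimal C# sketch of the memory-for-speed trade-off described above (the collection sizes, the Stopwatch timing, and the class name BigOTradeoff are illustrative choices of mine, not anything from the question, and this is not a rigorous benchmark): a membership test against a List&lt;T&gt; is O(n) per lookup, while a HashSet&lt;T&gt; built from the same data answers the same question in O(1) on average, paying for that speed with the extra memory the hash table needs.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

class BigOTradeoff
{
    static void Main()
    {
        // Same data, two representations: the list is compact,
        // the hash set spends extra memory to make lookups cheap.
        var numbers = Enumerable.Range(0, 100_000).ToList();   // O(n) per Contains
        var numberSet = new HashSet<int>(numbers);             // O(1) average per Contains

        var sw = Stopwatch.StartNew();
        int hitsInList = 0;
        for (int i = 0; i < 10_000; i++)
            if (numbers.Contains(i * 10)) hitsInList++;        // scans the list each time
        Console.WriteLine($"List<T>:    {sw.ElapsedMilliseconds} ms, {hitsInList} hits");

        sw.Restart();
        int hitsInSet = 0;
        for (int i = 0; i < 10_000; i++)
            if (numberSet.Contains(i * 10)) hitsInSet++;       // hash lookup, no scan
        Console.WriteLine($"HashSet<T>: {sw.ElapsedMilliseconds} ms, {hitsInSet} hits");
    }
}
```

Recognizing which of those two growth curves your code is on, before you ever reach for a profiler, is exactly the kind of day-to-day decision that Big-O vocabulary helps you make and explain to your team.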