I have a gnarly piece of code whose time-efficiency I would like to measure. Since estimating this complexity from the code itself is hard, I want to place it in a loop and time it empirically. How can I estimate its big-O complexity from those measurements?
Use the "ratio method" if you are trying to get a black-box estimate of the complexity. For instance: if you sit in a tight loop doing a fixed length job, like inserting a random record into a database, you record a timestamp at the end of each iteration. The timestamps will start to be farther apart as more data goes in. So, then graph the time difference between contiguous timestamps.
If you divide that graph by lg[n] and it continues to rise, then the cost is worse than lg[n]. Try dividing by lg[n], n, n·lg[n], n², etc. When you divide by a function that is too high an estimate, the plot will trend to zero. When you divide by one that is too low, the plot will continue to climb. When you have a good estimate, there is a point in your data set past which the graph settles between an upper and lower bound and wanders within them for as far out as you care to check.
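A sketch of the division step, continuing from the `deltas` computed above and assuming matplotlib is available. The candidate whose ratio curve stays roughly flat and bounded is your estimate; raw deltas are noisy, so expect to eyeball trends rather than exact values:

```python
import math
import matplotlib.pyplot as plt

# Candidate complexity functions to divide the measured deltas by.
candidates = {
    "lg n":   lambda n: math.log2(n),
    "n":      lambda n: float(n),
    "n lg n": lambda n: n * math.log2(n),
    "n^2":    lambda n: float(n * n),
}

ns = range(2, len(deltas) + 2)  # start at n=2 so lg n is nonzero
for label, f in candidates.items():
    ratios = [d / f(n) for n, d in zip(ns, deltas)]
    plt.plot(list(ns), ratios, label=f"delta / {label}")

plt.xlabel("n (iteration)")
plt.ylabel("per-iteration time / candidate function")
plt.legend()
plt.show()
```

Curves that trend to zero mean the candidate overestimates; curves that keep climbing mean it underestimates.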