Midpoint Formula Overflow Error
Question: I'm learning algorithms/Big O and I was just curious about this. Using mid = (low + high) / 2; to get the midpoint in a binary search is generally discouraged because of the possibility of an overflow error. Why would this cause an overflow error, and how does mid = low + (high - low) / 2; prevent it? Thanks.

Answer 1: In the first case you calculate the intermediate value (low + high), which might be too large to fit into an int if low and high are both large enough (say, if both are above half of INT_MAX, their sum exceeds the largest representable int; with 32-bit ints, low = 2,000,000,000 and high = 2,100,000,000 give low + high = 4,100,000,000, which is greater than INT_MAX = 2,147,483,647). The second form avoids that intermediate sum: since low <= high, the difference (high - low) always fits in an int, and low + (high - low) / 2 never exceeds high, so no step of the computation can overflow.
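As a rough illustration, here is a minimal binary search in C that uses the overflow-safe midpoint; the function name binary_search and the sample values are hypothetical, chosen only to show the technique:

```c
#include <stdio.h>

/* Search a sorted int array for target; return its index, or -1 if absent.
 * The midpoint is computed as low + (high - low) / 2 so that no intermediate
 * value ever exceeds high, even when low and high are both near INT_MAX. */
int binary_search(const int *a, int n, int target) {
    int low = 0;
    int high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  /* safe: (high - low) fits in an int */
        if (a[mid] == target)
            return mid;
        else if (a[mid] < target)
            low = mid + 1;   /* target is in the upper half */
        else
            high = mid - 1;  /* target is in the lower half */
    }
    return -1;  /* not found */
}

int main(void) {
    int a[] = {1, 3, 5, 7, 9, 11};
    printf("index of 7: %d\n", binary_search(a, 6, 7));  /* prints 3 */
    printf("index of 4: %d\n", binary_search(a, 6, 4));  /* prints -1 */
    return 0;
}
```

Writing mid = (low + high) / 2; here would give the same results for small arrays, but would invoke signed overflow (undefined behavior in C) once low + high exceeds INT_MAX.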