Determining max batch size with TensorFlow Object Detection API
The TF Object Detection API grabs all GPU memory by default, so it's difficult to tell how much further I can increase my batch size. Typically I just keep increasing it until I hit a CUDA OOM error. PyTorch, on the other hand, doesn't grab all GPU memory by default, so it's easy to see what percentage I have left to work with, without all the trial and error. Is there a better way to determine the batch size with the TF Object Detection API that I'm missing? Something like an allow-growth flag?
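
For context, this is roughly the kind of setting I'm hoping exists. A minimal sketch, assuming the TF1-style Estimator setup that the API's model_main.py uses (where exactly the API exposes the session config is my assumption, not something I've confirmed in its training script):

```python
import tensorflow as tf

# TF1-style: allocate GPU memory incrementally instead of grabbing it all
# up front, so nvidia-smi reflects actual usage while tuning batch size.
session_config = tf.ConfigProto()
session_config.gpu_options.allow_growth = True

# Assumption: pass this via RunConfig to the Estimator that model_main.py
# builds; I haven't verified whether the API surfaces a hook for this.
run_config = tf.estimator.RunConfig(session_config=session_config)

# TF2-style equivalent: enable memory growth before any GPU is initialized.
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```

With growth enabled, I could watch nvidia-smi during a training step and estimate headroom for a larger batch, instead of bisecting with OOM errors.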