Can we share memory between workers in a Pytorch DataLoader?

醉话见心 2021-01-22 10:54

My dataset depends on a 3 GB tensor. This tensor could be either on the CPU or the GPU. The bottleneck of my code is the data loading and preprocessing. But I can't add more than a
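One common approach (a sketch, not an answer confirmed in this thread) is to move the large CPU tensor into shared memory with `Tensor.share_memory_()` before constructing the `DataLoader`; on Linux, fork-started workers then read the same underlying storage instead of each copying the 3 GB. The `BigTensorDataset` class and the small stand-in tensor below are hypothetical placeholders for the asker's setup:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class BigTensorDataset(Dataset):
    """Hypothetical dataset reading rows from one large preloaded tensor."""
    def __init__(self, data: torch.Tensor):
        self.data = data  # assumed to already live in shared memory

    def __len__(self) -> int:
        return self.data.shape[0]

    def __getitem__(self, idx: int) -> torch.Tensor:
        return self.data[idx]

# Small stand-in for the 3 GB CPU tensor from the question.
big = torch.zeros(8, 4)
big.share_memory_()          # moves the storage into shared memory, in place
print(big.is_shared())       # True

ds = BigTensorDataset(big)
loader = DataLoader(ds, batch_size=4, num_workers=2)
for batch in loader:
    print(batch.shape)       # torch.Size([4, 4])
```

Note that this only addresses the CPU case: CUDA tensors cannot be handed to fork-started workers, since CUDA cannot be re-initialized in a forked subprocess, so the GPU variant of the problem needs a different strategy.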
