How does one train multiple models in a single script in TensorFlow when there are GPUs present?

暗喜 2021-01-30 14:36

Say I have access to a number of GPUs in a single machine (for the sake of argument assume 8 GPUs, each with a maximum of 8 GB of memory, in one single machine with some amount of RAM a
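For context, here is a minimal sketch of the kind of setup the question seems to describe, assuming TensorFlow 2.x with tf.keras; build_model, x_train, and y_train are placeholder names invented for this illustration, and the loop trains the models one after another rather than in parallel (true parallelism would need a separate process or thread per GPU):

    import numpy as np
    import tensorflow as tf

    # Toy data standing in for the asker's real dataset.
    x_train = np.random.rand(256, 32).astype("float32")
    y_train = np.random.rand(256, 1).astype("float32")

    # Hypothetical model builder; the real models would come from the
    # asker's own search space.
    def build_model():
        return tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
            tf.keras.layers.Dense(1),
        ])

    gpus = tf.config.list_physical_devices("GPU")

    # Grow GPU memory on demand instead of grabbing all of each card's
    # memory up front, so several models can share the machine.
    for gpu in gpus:
        tf.config.experimental.set_memory_growth(gpu, True)

    # Pin each model's variables and ops to a different GPU and train in turn.
    for i in range(len(gpus)):
        with tf.device(f"/GPU:{i}"):
            model = build_model()
            model.compile(optimizer="adam", loss="mse")
            model.fit(x_train, y_train, epochs=1, verbose=0)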

4 Answers
  •  北荒 2021-01-30 14:40

    You probably don't want to do this.

    If you run thousands and thousands of models on your data, and pick the one that evaluates best, you are not doing machine learning; instead you are memorizing your data set, and there is no guarantee that the model you pick will perform at all outside that data set.

    In other words, that approach is equivalent to having a single model with thousands of degrees of freedom. A model of such high complexity is problematic because it can fit your data better than is actually warranted; it will happily memorize any noise (outliers, measurement errors, and so on) in your training data, and will then perform poorly as soon as the noise is even slightly different.
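    This selection effect is easy to reproduce with a small, self-contained sketch (an illustration added here, not part of the original answer): the labels below are pure noise, so no classifier can genuinely beat 50% accuracy, yet picking the best of a few thousand random linear models on the training set produces one that looks well above chance there and drops back to roughly 50% on fresh data.

        import numpy as np

        rng = np.random.default_rng(0)
        n, d, n_models = 200, 20, 5000

        # Pure-noise problem: features and labels are unrelated, so no model
        # can genuinely do better than chance (50%) out of sample.
        X_train = rng.standard_normal((n, d))
        y_train = rng.integers(0, 2, n)
        X_test = rng.standard_normal((n, d))
        y_test = rng.integers(0, 2, n)

        def accuracy(w, X, y):
            return np.mean((X @ w > 0).astype(int) == y)

        # "Train" thousands of models by drawing random linear classifiers
        # and keep the one that scores best on the training data.
        best_w, best_acc = None, 0.0
        for _ in range(n_models):
            w = rng.standard_normal(d)
            acc = accuracy(w, X_train, y_train)
            if acc > best_acc:
                best_w, best_acc = w, acc

        print(f"train accuracy of selected model: {best_acc:.2f}")  # well above 0.5
        print(f"test accuracy of selected model:  {accuracy(best_w, X_test, y_test):.2f}")  # ~0.5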

    (Apologies for posting this as an answer; the site wouldn't let me add a comment.)
