Thoughts about train_test_split for machine learning

遇见更好的自我 2021-01-26 14:42

I just noticed that many people tend to call train_test_split even before handling missing data; it seems they split the data at the very beginning.


1 Answer
  • 2021-01-26 15:28

    You should split the data as early as possible.

    To put it simply, your data engineering pipeline builds models too.

    Consider the simple idea of filling in missing values. To do this you need to "train" a mini-model to generate the mean or mode or some other average to use. Then you use this model to "predict" missing values.
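The fit/predict framing above can be sketched in plain Python (no libraries; the helper names are illustrative, not from any particular API): "training" the mini-model means learning the mean of the observed values, and "predicting" means filling the gaps with it.

```python
# Mean imputation as a tiny "model": fit on one set, apply to another.

def fit_imputer(values):
    """'Train' the mini-model: learn the mean of the observed values."""
    observed = [v for v in values if v is not None]
    return sum(observed) / len(observed)

def transform(values, mean):
    """'Predict' the missing entries using the learned mean."""
    return [mean if v is None else v for v in values]

train = [1.0, 3.0, None, 5.0]
test = [None, 2.0]

mean = fit_imputer(train)       # learned from the training data only
print(transform(train, mean))   # [1.0, 3.0, 3.0, 5.0]
print(transform(test, mean))    # [3.0, 2.0]
```

Note that the test rows are transformed with the training mean; they never contribute to it.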

    If you include the test data when training these mini-models, you let the training process peek at that data and cheat a little. When it fills in missing values using statistics computed from the test data, it leaves little hints about what the test set looks like. This is what "data leakage" means in practice. In an ideal world you could ignore it, just use all the data for training, and use the training score to decide which model is best.
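A tiny numeric illustration of that leakage (plain Python, made-up numbers): the fill value the imputer "learns" changes as soon as the test rows are allowed to influence it.

```python
# If the imputer is fit on training data only, the learned mean is one
# value; if the test rows leak into the fit, it becomes another.
train = [1.0, 3.0, None]
test = [9.0, 9.0]

observed_train = [v for v in train if v is not None]
mean_train_only = sum(observed_train) / len(observed_train)   # (1+3)/2 = 2.0

observed_all = observed_train + test                          # test rows leak in
mean_with_leak = sum(observed_all) / len(observed_all)        # (1+3+9+9)/4 = 5.5

print(mean_train_only, mean_with_leak)
```

The leaked fill value (5.5) carries information about the test set's distribution into the training data, which is exactly the peeking described above.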

    But that won't work, because in practice a model is only useful once it can predict new data, not just the data available at training time. Google Translate needs to work on whatever you and I type in today, not just what it was trained on earlier.

    So, in order to ensure that the model will continue to work well when that happens, you should test it on some new data in a more controlled way. Using a test set, which has been split out as early as possible and then hidden away, is the standard way to do that.

    Yes, splitting the data engineering into separate training and testing paths is some extra work. But tools like scikit-learn, which separates the fit and transform stages, make it convenient to build an end-to-end data engineering and modeling pipeline with the correct train/test separation.
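As a sketch of that workflow (real scikit-learn APIs, but the toy data is made up): split first, then fit the whole pipeline, imputer included, on the training rows only. The pipeline transforms the test rows at scoring time without ever fitting on them.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression

# Toy data with missing values, purely for illustration.
X = np.array([[1.0], [2.0], [np.nan], [4.0], [5.0], [np.nan]])
y = np.array([0, 0, 0, 1, 1, 1])

# Split as early as possible, before any imputation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0)

pipe = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),  # fit: learns the train mean only
    ("model", LogisticRegression()),
])
pipe.fit(X_train, y_train)   # only training rows are seen during fitting
score = pipe.score(X_test, y_test)   # test rows are transformed, never fit on
print(score)
```

Because the imputer lives inside the Pipeline, calling `fit` on the training split automatically keeps the test split out of every stage, which is the separation the answer argues for.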
