I have finalized a model and it is performing within acceptable limits. I am using Python and scikit-learn specifically.
Next is to move the model to production.
As the commenter suggested, you should use pickle. Specifically for ML, what you're looking for is model persistence. From the scikit-learn documentation:
After training a scikit-learn model, it is desirable to have a way to persist the model for future use without having to retrain.
And their example:
>>> from sklearn import svm
>>> from sklearn import datasets
>>> clf = svm.SVC()
>>> iris = datasets.load_iris()
>>> X, y = iris.data, iris.target
>>> clf.fit(X, y)
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape='ovr', degree=3, gamma='auto', kernel='rbf',
max_iter=-1, probability=False, random_state=None, shrinking=True,
tol=0.001, verbose=False)
>>> import pickle
>>> s = pickle.dumps(clf)
>>> clf2 = pickle.loads(s)
>>> clf2.predict(X[0:1])
array([0])
>>> y[0]
0
In the specific case of scikit-learn, it may be more interesting to use joblib's replacement of pickle (joblib.dump & joblib.load), which is more efficient on objects that carry large numpy arrays internally, as is often the case for fitted scikit-learn estimators, but it can only pickle to disk and not to a string:
>>> import joblib  # in recent scikit-learn versions, import joblib directly; sklearn.externals.joblib was removed
>>> joblib.dump(clf, 'filename.pkl')
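Later, for example in your production service, the estimator can be loaded back and used for predictions. A minimal sketch, assuming the same 'filename.pkl' path:
>>> clf = joblib.load('filename.pkl')   # reload the fitted estimator
>>> clf.predict(X[0:1])
array([0])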