I'm working with Spark 1.3.0 using PySpark and MLlib and I need to save and load my models. I use code like this (taken from the official documentation):
from p
Use a `Pipeline` from `pyspark.ml` to train the model, then use `MLWriter` and `MLReader` to save the fitted model and read it back:
```python
from pyspark.ml import Pipeline
from pyspark.ml import PipelineModel

pipeTrain.write().overwrite().save(outpath)
model_in = PipelineModel.load(outpath)
```