If a large model is composed of several sub-models trained end-to-end, can I keep only one of those sub-models after training, and freeze or discard the others during inference?
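To make the question concrete, here is a minimal sketch (assuming PyTorch; the `Composite`, `encoder`, and `decoder` names are illustrative, not from any particular codebase) of what "keep only one sub-model" would look like: two modules are trained jointly, then only the encoder's weights are serialized and reloaded into a standalone module for inference.

```python
import io
import torch
import torch.nn as nn

class Composite(nn.Module):
    """Two sub-models wired end-to-end (illustrative example)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(8, 4)   # sub-model we want to keep
        self.decoder = nn.Linear(4, 2)   # sub-model to discard at inference

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Composite()
# ... end-to-end training of the whole composite would happen here ...

# Serialize only the encoder's weights (in-memory buffer here;
# a file path would work the same way).
buf = io.BytesIO()
torch.save(model.encoder.state_dict(), buf)
buf.seek(0)

# Reload into a standalone module with the same architecture.
standalone = nn.Linear(8, 4)
standalone.load_state_dict(torch.load(buf))
standalone.eval()
for p in standalone.parameters():
    p.requires_grad_(False)              # freeze

with torch.no_grad():
    out = standalone(torch.randn(3, 8))
print(out.shape)  # torch.Size([3, 4])
```

Mechanically this works: each sub-model's parameters live in its own `state_dict`, so one can be extracted and run alone. Whether the extracted sub-model is *useful* on its own is a separate question, since its weights were optimized only as part of the joint objective.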