MXNet: Regression and Classification


Main reference: Mu Li et al., Dive into Deep Learning (Berkeley course textbook)



Abstract: Study notes on the MXNet deep learning framework, using an MLP with a fully connected / softmax output layer to implement the regression task "Kaggle house price prediction" and the classification task "Fashion-MNIST recognition".


Regression

Data Preprocessing

Dataset Size

import d2lzh as d2l
import pandas as pd
from mxnet import autograd, gluon, init, nd
from mxnet.gluon import data as gdata, loss as gloss, nn
train_data = pd.read_csv('data/kaggle_house_price_prediction/train.csv')
test_data = pd.read_csv('data/kaggle_house_price_prediction/test.csv')
all_features = pd.concat((train_data.iloc[:, 1:-1], test_data.iloc[:, 1:]))
  • The training set contains 1460 examples, 80 features, and 1 label (SalePrice);
  • The test set contains 1459 examples and 80 features;
  • Feature values include continuous numbers (numerical features), discrete labels (categorical features), and even missing values ("na");
  • All features are concatenated so that both sets can be preprocessed together (a shape check follows this list)
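As a quick sanity check (a minimal sketch; the expected shapes follow from the dataset description above), the combined feature table should have 1460 + 1459 rows and 79 columns once the Id and SalePrice columns are dropped:

print(train_data.shape)    # (1460, 81): Id + 79 features + SalePrice
print(test_data.shape)     # (1459, 80): Id + 79 features
print(all_features.shape)  # (2919, 79): train and test features stacked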

Numerical Features

  • Standardization: rescale each numerical feature to zero mean and unit variance
# Columns whose dtype is not 'object' are the numerical features
index = all_features.dtypes[all_features.dtypes != 'object'].index
all_features[index] = all_features[index].apply(lambda x: (x - x.mean()) / (x.std()))
  • Missing values: after standardization each feature has mean 0, so missing entries can simply be filled with 0 (the mean)
all_features[index] = all_features[index].fillna(0)
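A minimal sketch to verify the standardization (expect small floating-point deviations, and slightly smaller standard deviations for columns whose missing entries were zero-filled):

# Column means should be ~0; stds ~1 except where NaNs were replaced
print(all_features[index].mean().abs().max())
print(all_features[index].std().head())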

Categorical Features

  • One-hot encode with pd.get_dummies
  • dummy_na=True treats missing values as a category of their own. For example, if the feature MSZoning takes the two values RL and RM plus the missing value NaN, it is expanded into three indicator features MSZoning_RL, MSZoning_RM, and MSZoning_nan (a toy demo follows the code below)
  • The number of features grows from 79 to 331
all_features = pd.get_dummies(all_features, dummy_na=True)
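A toy illustration of dummy_na=True on a hypothetical three-row column (illustrative values, not taken from the dataset):

demo = pd.DataFrame({'MSZoning': ['RL', 'RM', None]})
print(pd.get_dummies(demo, dummy_na=True))
#    MSZoning_RL  MSZoning_RM  MSZoning_nan
# 0            1            0             0
# 1            0            1             0
# 2            0            0             1
# (0/1 output shown; recent pandas versions print booleans instead)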

Re-splitting the Dataset

  • The DataFrame.values attribute yields the data as NumPy arrays, which are then converted to NDArray for the subsequent training
n_train = train_data.shape[0]
train_features = nd.array(all_features[:n_train].values)
test_features = nd.array(all_features[n_train:].values)
train_labels = nd.array(train_data.SalePrice.values).reshape((-1, 1))
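A quick shape check (the 331 columns here are what the network below will infer as its input width):

print(train_features.shape, test_features.shape, train_labels.shape)
# (1460, 331) (1459, 331) (1460, 1)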

Model Definition

  • The input width (331) is inferred automatically on the first forward pass (see the shape check after the code)
  • Dropout probabilities for layers close to the input should be kept small
def get_net():
    net = nn.Sequential()
    net.add(nn.Dense(360, activation='relu'),
            nn.Dropout(0.2),
            nn.Dense(64, activation='relu'),
            nn.Dropout(0.5),
            nn.Dense(1))
    net.initialize(init.Normal(sigma=0.01))
    return net
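Thanks to MXNet's deferred initialization, no input dimension is declared in get_net; a minimal sketch to confirm the inferred shapes (the random batch is purely illustrative):

net = get_net()
X = nd.random.normal(shape=(2, 331))  # dummy batch of 2 examples
print(net(X).shape)                   # (2, 1): one predicted price per example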

Cross-Validating Hyperparameters

  • Training function and evaluation metric:
loss = gloss.L2Loss()
def log_rmse(net, features, labels):
    # Set values smaller than 1 to 1 so that taking the logarithm is numerically stable
    clipped_preds = nd.clip(net(features), 1, float('inf'))
    rmse = nd.sqrt(2 * loss(clipped_preds.log(), labels.log()).mean())
    return rmse.asscalar()
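The value computed here is the root-mean-squared error between the logarithms of predicted and true prices; the factor 2 cancels the 1/2 that Gluon's L2Loss builds into $\frac{1}{2}(\hat{y} - y)^2$:

$$\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\log \hat{y}_i - \log y_i\right)^2}$$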

def train(net, train_features, train_labels, test_features, test_labels,
          num_epochs, learning_rate, weight_decay, batch_size):
    train_ls, test_ls = [], []
    train_iter = gdata.DataLoader(gdata.ArrayDataset(
        train_features, train_labels), batch_size, shuffle=True)
    # Adam is relatively insensitive to the choice of learning rate
    trainer = gluon.Trainer(net.collect_params(), 'adam',
                            {'learning_rate': learning_rate, 'wd': weight_decay})
    for epoch in range(num_epochs):
        for X, y in train_iter:
            with autograd.record():
                l = loss(net(X), y)
            l.backward()
            trainer.step(batch_size)  # rescales the summed gradients by 1/batch_size
        train_ls.append(log_rmse(net, train_features, train_labels))
        if test_labels is not None:
            test_ls.append(log_rmse(net, test_features, test_labels))
    return train_ls, test_ls
  • Cross-validation: fold i serves as the validation set and the remaining k−1 folds as the training set (a shape check follows get_k_fold_data below):
def get_k_fold_data(k, i, X, y):
    assert k > 1
    fold_size = X.shape[0] // k
    X_train, y_train = None, None
    for j in range(k):
        idx = slice(j * fold_size, (j + 1) * fold_size)
        X_part, y_part = X[idx, :], y[idx]
        if j == i:
            X_valid, y_valid = X_part, y_part
        elif X_train is None:
            X_train, y_train = X_part, y_part
        else:
            X_train = nd.concat(X_train, X_part, dim=0)
            y_train = nd.concat(y_train, y_part, dim=0)
    return X_train, y_train, X_valid, y_valid
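With 1460 training examples and k = 5, fold_size = 1460 // 5 = 292, so each validation fold holds exactly 292 examples and the remaining 1168 form the training split (the integer division would silently drop trailing examples if k did not divide the sample count evenly). A minimal check:

X_tr, y_tr, X_val, y_val = get_k_fold_data(5, 0, train_features, train_labels)
print(X_tr.shape, X_val.shape)  # (1168, 331) (292, 331)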

def k_fold(k, X_train, y_train, num_epochs, learning_rate, weight_decay, batch_size):
    train_l_sum, valid_l_sum = 0, 0
    for i in range(k):
        data = get_k_fold_data(k, i, X_train, y_train)
        net = get_net()
        train_ls, valid_ls = train(net, *data, num_epochs, learning_rate, weight_decay, batch_size)
        train_l_sum += train_ls[-1]
        valid_l_sum += valid_ls[-1]
        if i == 0:
            d2l.semilogy(range(1, num_epochs + 1), train_ls, 'epochs', 'rmse',
                         range(1, num_epochs + 1), valid_ls,
                         ['train', 'valid'])
        print('fold %d, train rmse %f, valid rmse %f' % (i, train_ls[-1], valid_ls[-1]))
    return train_l_sum / k, valid_l_sum / k
k=5; num_epochs=100; lr=0.01; weight_decay=20; batch_size=64
train_l, valid_l = k_fold(k, train_features, train_labels, num_epochs, lr, weight_decay, batch_size)
print('%d-fold validation: avg train rmse %f, avg valid rmse %f' % (k, train_l, valid_l))

[Figure: semilog plot of train and valid rmse versus epochs for the first fold]

Training

Same as the train function defined above in the cross-validation section.

Prediction

def train_and_pred(train_features, test_features, train_labels, test_data,
                   num_epochs, lr, weight_decay, batch_size):
    net = get_net()
    train_ls, _ = train(net, train_features, train_labels, None, None,
                        num_epochs, lr, weight_decay, batch_size)
    d2l.semilogy(range(1, num_epochs + 1), train_ls, 'epochs', 'rmse')
    print('train rmse %f' % train_ls[-1])
    preds = net(test_features).asnumpy()
    test_data['SalePrice'] = pd.Series(preds.reshape(1, -1)[0])
    submission = pd.concat([test_data['Id'], test_data['SalePrice']], axis=1)
    submission.to_csv('submission.csv', index=False)
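Invoked with the hyperparameters selected by cross-validation above, this retrains on the full training set and writes the Kaggle submission file:

train_and_pred(train_features, test_features, train_labels, test_data,
               num_epochs, lr, weight_decay, batch_size)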