Neural Networks: sklearn Parameters and Applications

Source: Cnblogs (博客园)



1. MLPClassifier & MLPRegressor: Parameters and Methods

Parameter descriptions (the classifier and the regressor share the same parameters):

- hidden_layer_sizes: e.g. hidden_layer_sizes=(50, 50) means two hidden layers, each with 50 neurons.
- activation: activation function, one of {'identity', 'logistic', 'tanh', 'relu'}, default 'relu'.
  - identity: f(x) = x
  - logistic: the sigmoid, f(x) = 1 / (1 + exp(-x))
  - tanh: f(x) = tanh(x)
  - relu: f(x) = max(0, x)
- solver: weight optimizer, one of {'lbfgs', 'sgd', 'adam'}, default 'adam'.
  - lbfgs: a quasi-Newton optimizer
  - sgd: stochastic gradient descent
  - adam: the stochastic-gradient-based optimizer proposed by Kingma and Ba
  Note: the default 'adam' works well on relatively large datasets (thousands of samples or more); on small datasets, 'lbfgs' converges faster and performs better.
- alpha: float, optional, default 0.0001. Regularization term (L2 penalty) parameter.
- batch_size: int, optional, default 'auto'. Size of minibatches for stochastic optimizers; 'auto' means batch_size=min(200, n_samples). When the solver is 'lbfgs', minibatches are not used.
- learning_rate: learning-rate schedule for weight updates, used only when solver='sgd'; one of {'constant', 'invscaling', 'adaptive'}, default 'constant'.
  - 'constant': a constant rate given by learning_rate_init.
  - 'invscaling': gradually decreases the rate over time t using the inverse scaling exponent power_t: effective_learning_rate = learning_rate_init / pow(t, power_t).
  - 'adaptive': keeps the rate at learning_rate_init as long as training loss keeps decreasing; when two consecutive epochs fail to decrease the training loss, or fail to increase the validation score, by at least tol, the current rate is divided by 5.
- power_t: double, optional, default 0.5. Exponent for inverse scaling; used only when solver='sgd', to update the effective learning rate when learning_rate='invscaling'.
- max_iter: int, optional, default 200. Maximum number of iterations.
- random_state: int or RandomState, optional, default None. Seed or state of the random number generator.
- shuffle: bool, optional, default True. Whether to shuffle samples in each iteration; used only when solver='sgd' or 'adam'.
- tol: float, optional, default 1e-4. Tolerance for the optimization.
- learning_rate_init: double, optional, default 0.001. Initial learning rate, controlling the step size of weight updates; used only when solver='sgd' or 'adam'.

Attribute descriptions:

- classes_: the class labels for each output.
- loss_: the current loss computed with the loss function.
- coefs_: a list of weight matrices; the i-th element holds the weights feeding layer i+1 (iterate over it to inspect each layer's weight matrix).
- intercepts_: a list of bias vectors; the i-th element is the bias vector of layer i+1.
- n_iter_: the number of iterations the solver has run.
- n_layers_: the number of layers.
- n_outputs_: the number of outputs.
- out_activation_: the name of the output activation function.
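The shapes of these attributes can be checked directly on a fitted model. A minimal sketch, using hypothetical toy data (20 random samples, 4 features, two hidden layers of 5 and 3 neurons) rather than any dataset from this article:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical toy data: 20 samples, 4 features, binary labels.
X = np.random.RandomState(0).rand(20, 4)
y = np.array([0, 1] * 10)

clf = MLPClassifier(hidden_layer_sizes=(5, 3), max_iter=50, random_state=0)
clf.fit(X, y)

# coefs_[i] is the weight matrix feeding layer i+1;
# intercepts_[i] is the bias vector of layer i+1.
print([w.shape for w in clf.coefs_])       # [(4, 5), (5, 3), (3, 1)]
print([b.shape for b in clf.intercepts_])  # [(5,), (3,), (1,)]
print(clf.n_layers_)                       # 4 = input + 2 hidden + output
```

Note that n_layers_ counts the input and output layers, so two hidden layers give n_layers_ = 4, and binary classification produces a single output unit.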

2. Classification with MLPClassifier

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn import metrics

data = load_iris()
feature = data.data
target = data.target
print(np.unique(target))
xtrain, xtest, ytrain, ytest = train_test_split(feature, target, train_size=0.7, random_state=421)
nn = MLPClassifier(hidden_layer_sizes=(3, 5), activation="tanh", shuffle=False, solver="lbfgs", alpha=0.001)
model = nn.fit(xtrain, ytrain)
pre = model.predict(xtest)
print(pre)
print(ytest)
print(model.coefs_)
print(model.n_layers_)
print(model.n_outputs_)
print(model.predict_proba(xtest))
print(model.score(xtest, ytest))
print(model.classes_)
print(model.loss_)
print(model.activation)
print(model.intercepts_)
print(model.n_iter_)
print(metrics.confusion_matrix(ytest, pre))
print("Classification report:", metrics.classification_report(ytest, pre))
print("Weights W:", model.coefs_[:1])
print("Loss:", model.loss_)
index = 0
for w in model.coefs_:
    index += 1
    print("Layer {}:".format(index))
    print("weight matrix shape:", w.shape)
    print("coefficient matrix:", w)
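Multilayer perceptrons are sensitive to feature scale, so standardizing the inputs usually helps convergence. A sketch of the same classifier wrapped with StandardScaler in a Pipeline (the scaler is not part of the original example; hyperparameters are kept as above, with a random_state added for repeatability):

```python
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_iris()
xtrain, xtest, ytrain, ytest = train_test_split(
    data.data, data.target, train_size=0.7, random_state=421)

# Standardize each feature to zero mean / unit variance before the MLP.
pipe = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(3, 5), activation="tanh",
                  solver="lbfgs", alpha=0.001, random_state=0))
pipe.fit(xtrain, ytrain)
print(pipe.score(xtest, ytest))
```

Because the scaler is fitted inside the pipeline, the same transformation learned on the training split is applied to the test split, avoiding leakage.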

3. Regression with MLPRegressor

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.datasets import load_boston  # note: removed in scikit-learn 1.2
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

data = load_boston()
feature = data.data
target = data.target
xtrain, xtest, ytrain, ytest = train_test_split(feature, target, train_size=0.7, random_state=421)
nn = MLPRegressor(hidden_layer_sizes=(100, 100), activation="identity", shuffle=False, solver="lbfgs", alpha=0.001)
model = nn.fit(xtrain, ytrain)
pre = model.predict(xtest)
print(pre)
print(ytest)
print(model.coefs_)
print(model.n_layers_)
print(model.n_outputs_)
print(model.score(xtest, ytest))
index = 0
for w in model.coefs_:
    index += 1
    print("Layer {}:".format(index))
    print("weight matrix shape:", w.shape)
    print("coefficient matrix:", w)
plt.plot(range(len(pre)), pre, color="red")
plt.plot(range(len(ytest)), ytest, color="blue")
plt.show()
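Since load_boston was removed in scikit-learn 1.2, the example above raises an ImportError on recent versions. A sketch of the same workflow on a synthetic dataset (make_regression stands in for the Boston data here; the 13 features and the MLPRegressor hyperparameters mirror the example above):

```python
from sklearn.neural_network import MLPRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

# Synthetic linear regression data with mild noise, 13 features
# to match the Boston dataset's feature count.
X, y = make_regression(n_samples=500, n_features=13, noise=5.0,
                       random_state=421)
xtrain, xtest, ytrain, ytest = train_test_split(
    X, y, train_size=0.7, random_state=421)

nn = MLPRegressor(hidden_layer_sizes=(100, 100), activation="identity",
                  solver="lbfgs", alpha=0.001, random_state=0)
model = nn.fit(xtrain, ytrain)
print(model.score(xtest, ytest))  # R^2 on the held-out split
```

With activation="identity" the network is effectively a (regularized) linear model, so it fits this linear synthetic data closely; the real Boston data could instead be swapped for fetch_california_housing on newer scikit-learn versions.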
