Let me explain the situation with some data. For example, take this dataset:
GA_ID   PN_ID   PC_ID   MBP_ID  GR_ID  AP_ID  class
0.033    6.652   6.681   0.194  0.874  3.177  0
0.034    9.039   6.224   0.194  1.137  0      0
0.035   10.936  10.304   1.015  0.911  4.9    1
0.022   10.11    9.603   1.374  0.848  4.566  1
0.035    2.963  17.156   0.599  0.823  9.406  1
0.033   10.872  10.244   1.015  0.574  4.871  1
0.035   21.694  22.389   1.015  0.859  9.259  1
0.035   10.936  10.304   1.015  0.911  4.9    1
0.035   10.936  10.304   1.015  0.911  4.9    1
0.035   10.936  10.304   1.015  0.911  4.9    0
0.036    1.373  12.034   0.35   0.259  5.723  0
0.033    9.831   9.338   0.35   0.919  4.44   0
Feature selection step 1 and its result: VarianceThreshold
PN_ID   PC_ID   MBP_ID  GR_ID  AP_ID  class
6.652    6.681   0.194  0.874  3.177  0
9.039    6.224   0.194  1.137  0      0
10.936  10.304   1.015  0.911  4.9    1
10.11    9.603   1.374  0.848  4.566  1
2.963   17.156   0.599  0.823  9.406  1
10.872  10.244   1.015  0.574  4.871  1
21.694  22.389   1.015  0.859  9.259  1
10.936  10.304   1.015  0.911  4.9    1
10.936  10.304   1.015  0.911  4.9    1
10.936  10.304   1.015  0.911  4.9    0
1.373   12.034   0.35   0.259  5.723  0
9.831    9.338   0.35   0.919  4.44   0
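For reference, a minimal sketch of how step 1 could be reproduced while keeping track of column names: load the features into a pandas DataFrame and use the selector's get_support() mask to index back into the column labels. The 0.01 threshold below is an assumption picked for illustration (the default is 0.0), and only the first four rows of the table are used to keep the snippet short.

import pandas as pd
from sklearn.feature_selection import VarianceThreshold

# First four rows of the example dataset above
df = pd.DataFrame({
    'GA_ID':  [0.033, 0.034, 0.035, 0.022],
    'PN_ID':  [6.652, 9.039, 10.936, 10.11],
    'PC_ID':  [6.681, 6.224, 10.304, 9.603],
    'MBP_ID': [0.194, 0.194, 1.015, 1.374],
    'GR_ID':  [0.874, 1.137, 0.911, 0.848],
    'AP_ID':  [3.177, 0.0, 4.9, 4.566],
    'class':  [0, 0, 1, 1],
})

X = df.drop(columns='class')
selector = VarianceThreshold(threshold=0.01).fit(X)  # threshold chosen for illustration

mask = selector.get_support()  # boolean mask: True for columns that are kept
print('kept:   ', list(X.columns[mask]))   # PN_ID, PC_ID, MBP_ID, GR_ID, AP_ID
print('dropped:', list(X.columns[~mask]))  # GA_ID (near-zero variance)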
Feature selection step 2 and its result: tree-based feature selection (e.g., from sklearn.ensemble import ExtraTreesClassifier)
PN_ID   MBP_ID  GR_ID  AP_ID  class
6.652    0.194  0.874  3.177  0
9.039    0.194  1.137  0      0
10.936   1.015  0.911  4.9    1
10.11    1.374  0.848  4.566  1
2.963    0.599  0.823  9.406  1
10.872   1.015  0.574  4.871  1
21.694   1.015  0.859  9.259  1
10.936   1.015  0.911  4.9    1
10.936   1.015  0.911  4.9    1
10.936   1.015  0.911  4.9    0
1.373    0.35   0.259  5.723  0
9.831    0.35   0.919  4.44   0
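Step 2 can be sketched the same way with SelectFromModel wrapping a fitted ExtraTreesClassifier; by default it keeps features whose importance exceeds the mean importance. The synthetic data below is only a stand-in for the real dataset, with the five remaining column names from the table above reused as labels.

import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel

# Synthetic stand-in data with the five column names left after step 1
X_arr, y = make_classification(n_samples=200, n_features=5, n_informative=3,
                               random_state=0)
X = pd.DataFrame(X_arr, columns=['PN_ID', 'PC_ID', 'MBP_ID', 'GR_ID', 'AP_ID'])

# Fit the forest, then let SelectFromModel keep the features whose
# importance exceeds its threshold (the mean importance by default)
forest = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y)
selector = SelectFromModel(forest, prefit=True)

mask = selector.get_support()  # boolean mask over the columns
print('kept:   ', list(X.columns[mask]))
print('dropped:', list(X.columns[~mask]))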
Here we can conclude that we started with 6 columns (features) plus a class label, and the final step reduced this to 4 features plus the class label. The GA_ID and PC_ID columns were removed, and the model is built from the PN_ID, MBP_ID, GR_ID, and AP_ID features.
But unfortunately, when I perform feature selection with the methods available in the scikit-learn library, I find that they only return the data shape and the reduced data itself, without the names of the selected and omitted features.
I have written plenty of clumsy Python code (since I am not a very experienced programmer) trying to find the answer, but with no success. Please suggest a way to get around this. Thanks!
(Note: specifically for this post, I never actually ran any feature selection method on the sample dataset above; I removed the columns at random just to illustrate the situation.)
Perhaps this code and the comments will help (adapted from here).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier

# Build a classification task using 3 informative features
X, y = make_classification(n_samples=1000, n_features=10, n_informative=3,
                           n_redundant=0, n_repeated=0, n_classes=2,
                           random_state=0, shuffle=False)

# Build a forest and compute the feature importances
forest = ExtraTreesClassifier(n_estimators=250, random_state=0)
forest.fit(X, y)
importances = forest.feature_importances_  # array with the importance of each feature

idx = np.arange(0, X.shape[1])  # index array covering every feature column

# Keep only the features whose importance is greater than the mean importance;
# this should leave an array of roughly 3 indices
features_to_keep = idx[importances > np.mean(importances)]
print(features_to_keep.shape)

x_feature_selected = X[:, features_to_keep]  # pull the X columns for the most important features
print(x_feature_selected)
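As a follow-up to the snippet above: features_to_keep holds integer column indices, so if your features carry names you can index those names directly to recover which ones were selected. A small sketch, where the names f0 through f9 are hypothetical placeholders for the 10 synthetic features:

import pandas as pd

# Hypothetical names for the 10 synthetic features generated above
feature_names = pd.Index([f'f{i}' for i in range(10)])

# features_to_keep is the integer index array computed in the snippet above,
# so indexing the names with it recovers the selected feature names
print(feature_names[features_to_keep])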