Recursive Feature Elimination (RFE)

 

Feature Selection Technique

Recursive feature elimination (RFE) is a feature selection method that fits a model and removes the weakest feature (or features) until the specified number of features is reached. In other words, given an external estimator that assigns weights to features (e.g., the coefficients of a linear model), the goal of RFE is to select features by recursively considering smaller and smaller sets of features. First, the estimator is trained on the initial set of features, and the importance of each feature is obtained (and ranked) through either a coef_ attribute or a feature_importances_ attribute. The least important features are then pruned from the current set. This procedure is repeated recursively on the pruned set until the desired number of features is reached.
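As a minimal sketch of this procedure with scikit-learn's RFE class (the synthetic dataset, the LogisticRegression estimator, and the choice of keeping 5 features are illustrative assumptions, not part of the text above):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Synthetic data: 10 features, of which 5 are informative (illustrative setup).
X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=5, random_state=0)

estimator = LogisticRegression(max_iter=1000)
# Keep the 5 strongest features, removing one feature per iteration (step=1).
selector = RFE(estimator, n_features_to_select=5, step=1)
selector.fit(X, y)

print(selector.support_)   # boolean mask over the original features
print(selector.ranking_)   # rank 1 = selected; higher = eliminated earlier
```

The estimator only needs to expose coef_ or feature_importances_ after fitting; any such model can be dropped in place of the logistic regression here.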

 

The following step-by-step guide shows you how RFE works:

  1. Fit your model on the training dataset.

  2. Record the corresponding scoring metric for the model, e.g. accuracy, precision, or recall.

  3. Determine which feature is least important to the fitted model's predictions and drop it.

  4. The model's feature set has now been reduced by one. If more than one feature remains, repeat from step 1; otherwise, continue to step 5.

  5. Select the feature set that gives the best scoring metric (highest or lowest, depending on which metric you use). In this case, we would pick the feature set that gives us the highest accuracy score.
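The loop above can be sketched directly in code. This is a hand-rolled illustration, not scikit-learn's RFE itself; the dataset, the logistic-regression model, and names like `remaining` and `history` are assumptions made for the example:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=8,
                           n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

remaining = list(range(X.shape[1]))   # indices of features still in play
history = []                          # (feature subset, accuracy) pairs

while remaining:
    model = LogisticRegression(max_iter=1000)
    model.fit(X_tr[:, remaining], y_tr)            # step 1: fit the model
    acc = model.score(X_te[:, remaining], y_te)    # step 2: record the metric
    history.append((list(remaining), acc))
    if len(remaining) == 1:                        # step 4: loop until one feature left
        break
    weakest = np.argmin(np.abs(model.coef_[0]))    # step 3: weakest coefficient
    remaining.pop(weakest)                         #         drop that feature

best_subset, best_acc = max(history, key=lambda t: t[1])   # step 5: best set
print(best_subset, best_acc)
```

Here the weakest feature is taken to be the one with the smallest absolute coefficient, matching the coef_-based ranking described earlier; an estimator exposing feature_importances_ would use that attribute instead.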

After following these steps, we can use RFE to generate a candidate feature set for each scoring metric of interest. It must be noted that when dealing with large datasets, it is wise to pre-filter the features down to a smaller subset before running RFE. Skipping this step is not only computationally expensive in terms of runtime, but also leaves the model susceptible to the curse of dimensionality.
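One hedged way to do this pre-filtering is to chain a cheap univariate filter ahead of RFE, so the expensive repeated model fits only see a reduced feature space. The SelectKBest filter, the k=20 cutoff, and the final count of 5 features below are arbitrary choices for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# A wide dataset: 100 features, only 10 of them informative (illustrative).
X, y = make_classification(n_samples=200, n_features=100,
                           n_informative=10, random_state=0)

pipe = Pipeline([
    # Cheap univariate pre-selection trims 100 features down to 20.
    ("filter", SelectKBest(f_classif, k=20)),
    # RFE then runs its iterative elimination on the reduced set.
    ("rfe", RFE(LogisticRegression(max_iter=1000), n_features_to_select=5)),
])
pipe.fit(X, y)
print(pipe.named_steps["rfe"].support_.sum())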

 

To better illustrate how RFE works, please refer to the figure below: