Tuesday Tip #27: Automate your feature selection 🤖


Hi Reader,

You might have noticed that I start each Tuesday Tip with a link of the week and end with a humorous/interesting P.S.

If you ever want to nominate a link for either category, please feel free to share it with me! 💌


🔗 Link of the week

Driverless cars may already be safer than human drivers

This is not only a fascinating read, but also an excellent case study in the challenges of real-world data gathering and data analysis!


👉 Tip #27: Improve your model with automated feature selection

Recently, a reader asked me how to get “un-stuck” with his Data Science project, given that he’s facing the following challenges:

  • Irrelevant features: How to select the right features for analysis and modeling?
  • High-dimensional data: Best practices for dealing with datasets with a large number of features.

Great questions!

What he needs is “feature selection”, which is the process of removing uninformative features from your model. These are features that are NOT helping your model to make better predictions. In other words, uninformative features are adding “noise” to your model, rather than “signal”. 📡

Here’s how your model can benefit from feature selection:

  • Model accuracy is often improved by removing uninformative features.
  • Models are generally easier to interpret when they include fewer features.
  • When you have fewer features, models will take less time to train, and it may cost less to gather and store the data that is required to train them.

Methods for feature selection

There are many valid methods for feature selection, including human intuition, domain knowledge, and data exploration. But for the moment, I want to focus on automated feature selection that can be included in a scikit-learn Pipeline. ⚡

Within the category of automated feature selection, there are subcategories such as intrinsic methods (like L1 regularization) and wrapper methods (like recursive feature elimination), though the most flexible and computationally efficient methods are in the subcategory of filter methods (like SelectPercentile).
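To make those subcategories concrete, here’s a minimal sketch (my own illustration, with parameters chosen just for the example) of one scikit-learn tool from each:

```python
from sklearn.feature_selection import RFE, SelectPercentile
from sklearn.linear_model import LogisticRegression

# Intrinsic: L1 regularization performs feature selection during model
# fitting by shrinking some coefficients all the way to zero
intrinsic = LogisticRegression(penalty='l1', solver='liblinear')

# Wrapper: recursive feature elimination repeatedly fits a model and
# drops the weakest features until the desired number remains
wrapper = RFE(LogisticRegression(), n_features_to_select=10)

# Filter: scores each feature once and keeps only the top scorers
filter_method = SelectPercentile(percentile=50)
```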

As you might guess, automated feature selection is a vast and complex topic! However, I’ll give you a quick introduction to one of these categories so that you can get started today! 🚀


A quick introduction to “filter methods”

A filter method starts by scoring every single feature to quantify its potential relationship with the target column. Then, the features are ranked by their scores, and only the top-scoring features are passed to the model. 🏅

Thus, they’re called filter methods because they filter out what they believe to be the least informative features and then pass on the more informative features to the model.
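To see what that scoring step looks like, here’s a tiny sketch using the ANOVA F-test (f_classif, the default scorer for SelectPercentile) on the built-in iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import f_classif

X, y = load_iris(return_X_y=True)

# f_classif returns one ANOVA F-score (and p-value) per feature;
# a higher score suggests a stronger relationship with the target
scores, p_values = f_classif(X, y)
print(scores)
```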

Filter methods vary in terms of the processes they use to score the features. For example:

  • SelectPercentile scores features using univariate statistical tests
  • SelectFromModel scores features using the coefficients or feature importances of a model

In each case, you have to select how many features are passed to the prediction model by setting a percentile (for SelectPercentile) or a scoring threshold (for SelectFromModel). And of course, these parameters should be tuned using a grid search! 🔎
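For example, here’s a minimal sketch (assuming a classification problem with a feature matrix X and target y that you’ve already loaded) of tuning the percentile for SelectPercentile with a grid search:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectPercentile
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# feature selection happens before the model within the Pipeline
pipe = make_pipeline(SelectPercentile(), LogisticRegression())

# try keeping the top 25%, 50%, or 75% of features, or all of them
params = {'selectpercentile__percentile': [25, 50, 75, 100]}
grid = GridSearchCV(pipe, params, cv=5)
# grid.fit(X, y)  # X and y are your feature matrix and target column
```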


Using feature selection in scikit-learn

Despite the conceptual complexity, it’s surprisingly simple to add automated feature selection to a scikit-learn Pipeline. I’ll show you how:

🔗 Here’s my 2-minute video that walks you through it (YouTube)

🔗 Here’s my code from the video (Jupyter notebook)
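If you can’t watch the video right away, here’s a minimal sketch (my own example, not the code from the notebook) of the same idea using SelectFromModel instead:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# features are scored by the random forest's feature importances, and
# only features scoring above the threshold are passed to the model
selector = SelectFromModel(RandomForestClassifier(), threshold='mean')
pipe = make_pipeline(selector, LogisticRegression())
# pipe.fit(X, y)  # X and y are your feature matrix and target column
```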


Want to learn more about feature selection?

Feature selection is a huge topic, but I cover it in detail in Chapter 13 of my upcoming course:

🔗 Master Machine Learning with scikit-learn (Data School course)

I’ve been working on this course for YEARS, and I’m planning to release the first 16 chapters by the end of 2023! Stay tuned for the launch announcement... 👂

In the meantime, my top recommendation for learning about feature selection is this comprehensive book:

🔗 Feature Engineering and Selection (free online book)


If you enjoyed this week’s tip, please forward it to a friend! Takes only a few seconds, and it really helps me grow the newsletter! 🙌

See you next Tuesday!

- Kevin

P.S. Frequency (calculations)

Did someone awesome forward you this email? Sign up here to receive Data Science tips every week!
