Hi Reader,
You might have noticed that I start each Tuesday Tip with a link of the week and end with a humorous/interesting P.S.
If you ever want to nominate a link for either category, please feel free to share it with me! 💌
Driverless cars may already be safer than human drivers
This is not only a fascinating read, but also an excellent case study in the challenges of real-world data gathering and data analysis!
Recently, a reader asked me how to get “un-stuck” with his Data Science project, given that he’s facing the following challenges:
Great questions!
What he needs is “feature selection”, which is the process of removing uninformative features from your model. These are features that are NOT helping your model to make better predictions. In other words, uninformative features are adding “noise” to your model, rather than “signal”. 📡
Here’s how your model can benefit from feature selection:
There are many valid methods for feature selection, including human intuition, domain knowledge, and data exploration. But for the moment, I want to focus on automated feature selection that can be included in a scikit-learn Pipeline. ⚡
Within the category of automated feature selection, there are subcategories such as intrinsic methods (like L1 regularization) and wrapper methods (like recursive feature elimination), though the most flexible and computationally efficient methods are in the subcategory of filter methods (like SelectPercentile).
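To make those names concrete, here's a minimal sketch (my own example with a synthetic dataset, not code from the newsletter) of one representative from each subcategory in scikit-learn:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectPercentile, f_classif
from sklearn.linear_model import LogisticRegression

# Synthetic data: 20 features, only 5 of which are actually informative
X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=1)

# Intrinsic method: L1 regularization shrinks uninformative coefficients to zero while fitting
intrinsic = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)

# Wrapper method: recursive feature elimination repeatedly refits the model,
# dropping the weakest feature each round
wrapper = RFE(LogisticRegression(), n_features_to_select=5).fit(X, y)

# Filter method: scores each feature independently, then keeps only the top scorers
filt = SelectPercentile(score_func=f_classif, percentile=25).fit(X, y)
```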
As you might guess, automated feature selection is a vast and complex topic! However, I’ll give you a quick introduction to one of these categories so that you can get started today! 🚀
A filter method starts by scoring every single feature to quantify its potential relationship with the target column. Then, the features are ranked by their scores, and only the top scoring features are passed to the model. 🏅
Thus, they’re called filter methods because they filter out what they believe to be the least informative features and then pass on the more informative features to the model.
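To see that score-rank-keep workflow in action, here's a minimal sketch using SelectPercentile and one of scikit-learn's built-in datasets (my own example, not code from the video or course):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectPercentile, f_classif

X, y = load_breast_cancer(return_X_y=True)

# Score every feature with the ANOVA F-test, rank them, and keep only the top 20%
selector = SelectPercentile(score_func=f_classif, percentile=20)
X_filtered = selector.fit_transform(X, y)

print(X.shape, "->", X_filtered.shape)         # (569, 30) -> (569, 6)
print(selector.scores_.round(1))               # one score per feature
print(np.flatnonzero(selector.get_support()))  # indices of the features that were kept
```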
Filter methods vary in terms of the processes they use to score the features. For example:
- SelectPercentile scores each feature using a univariate statistical test (such as the ANOVA F-test) of its relationship with the target.
- SelectFromModel scores each feature using the coefficients (or feature importances) of a model that has been fit to the data.
In each case, you have to select how many features are passed to the prediction model by setting a percentile (for SelectPercentile) or a scoring threshold (for SelectFromModel). And of course, these parameters should be tuned using a grid search! 🔎
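For instance, here's roughly what SelectFromModel looks like on its own (a sketch with an example dataset and threshold of my choosing):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

# SelectFromModel scores each feature using a fitted model's coefficients
# (or feature importances) and keeps only the features above the threshold
selector = SelectFromModel(
    LogisticRegression(solver="liblinear", max_iter=1000), threshold="mean"
)
X_selected = selector.fit_transform(X, y)

print(X.shape, "->", X_selected.shape)  # fewer columns after selection
```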
Despite the conceptual complexity, it’s surprisingly simple to add automated feature selection to a scikit-learn Pipeline. I’ll show you how:
🔗 Here’s my 2-minute video that walks you through it (YouTube)
🔗 Here’s my code from the video (Jupyter notebook)
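If you'd like a text version as well, here's a minimal sketch of what such a Pipeline might look like, with the percentile tuned by a grid search (my own example dataset and parameter values, not necessarily the ones from the video):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectPercentile, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Feature selection is just another step in the Pipeline
pipe = make_pipeline(
    StandardScaler(),
    SelectPercentile(score_func=f_classif),
    LogisticRegression(),
)

# Tune the percentile of features to keep, just like any other Pipeline parameter
params = {"selectpercentile__percentile": [10, 25, 50, 100]}
grid = GridSearchCV(pipe, params, cv=5)
grid.fit(X, y)

print(grid.best_params_)
print(round(grid.best_score_, 3))
```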
Feature selection is a huge topic, but I cover it in detail in Chapter 13 of my upcoming course:
🔗 Master Machine Learning with scikit-learn (Data School course)
I’ve been working on this course for YEARS, and I’m planning to release the first 16 chapters by the end of 2023! Stay tuned for the launch announcement... 👂
In the meantime, my top recommendation for learning about feature selection is this comprehensive book:
🔗 Feature Engineering and Selection (free online book)
If you enjoyed this week’s tip, please forward it to a friend! Takes only a few seconds, and it really helps me grow the newsletter! 🙌
See you next Tuesday!
- Kevin
P.S. Frequency (calculations)
Did someone awesome forward you this email? Sign up here to receive Data Science tips every week!