Tuesday Tip #37: Use scikit-learn's magical model ✨


Hi Reader,

How's your January going? I've been playing a lot of Pokémon (the card game) with my 7-year-old... actually I'm the one who's always bugging him to play 😂

Are you a Pokémon fan? Meowscarada ex, anyone?


🔗 Link of the week

Introduction to Polars (Practical Business Python)

Have you heard of Polars? In short, it's a high-performance, memory-efficient alternative to pandas. If you're new to Polars, this blog post walks through basic Polars code and compares it to pandas.


👉 Tip #37: Simplify data preprocessing with this scikit-learn model

When performing supervised Machine Learning, one of the keys to success is effective data preprocessing, which can require a lot of thought and planning.

However, there's a scikit-learn model that has two magical properties which significantly reduce your preprocessing burden. (And you've probably never even heard of it!)

It's called Histogram-Based Gradient Boosting Trees (HGBT), available in scikit-learn as HistGradientBoostingClassifier and HistGradientBoostingRegressor. Here are its magical properties:

  1. Native support for missing values
  2. Native support for categorical features

What exactly does that mean? I'll explain below! 👇


1️⃣ Native support for missing values

When your training data contains missing values, normally you have to impute all missing values as part of the data preprocessing step. (The only alternative is to drop samples or features with missing values, which can mean losing valuable training data!)

But if you use HGBT, it will handle the missing values natively, without imputing or dropping any data!

Here's a minimal example from the scikit-learn documentation:
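A sketch of that example (toy data; assumes scikit-learn 1.0 or later, where HGBT is stable):

    import numpy as np
    from sklearn.ensemble import HistGradientBoostingClassifier

    # toy data: one feature containing a missing value -- no imputation needed
    X = np.array([0, 1, 2, np.nan]).reshape(-1, 1)
    y = [0, 0, 1, 1]

    # min_samples_leaf is lowered only because this dataset is tiny
    clf = HistGradientBoostingClassifier(min_samples_leaf=1).fit(X, y)
    print(clf.predict(X))  # [0 0 1 1] -- the NaN sample is handled natively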

Here's a longer example from one of my videos.

Note: Decision Trees also support missing values as of scikit-learn 1.3, and Random Forests support missing values as of scikit-learn 1.4.
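For example, here's a minimal sketch (toy data) of fitting a plain decision tree on input containing NaN:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # works as of scikit-learn 1.3 (use RandomForestClassifier as of 1.4)
    X = np.array([[0.0], [1.0], [np.nan], [2.0]])
    y = [0, 0, 1, 1]
    tree = DecisionTreeClassifier().fit(X, y)  # no imputation step required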


2️⃣ Native support for categorical features

When your training data contains unordered categorical features, normally you have to one-hot encode them as part of the data preprocessing step.

But if you use HGBT, it will handle the categorical features without encoding!

Starting in scikit-learn 1.4, HGBT will also infer which features are categorical directly from the data types of a pandas DataFrame!

Here's an example from the scikit-learn documentation:
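A sketch of the idea (toy DataFrame; the "from_dtype" option requires scikit-learn 1.4+):

    import pandas as pd
    from sklearn.ensemble import HistGradientBoostingClassifier

    # toy data: a feature stored with the pandas "category" dtype
    X = pd.DataFrame({
        "color": pd.Categorical(["red", "blue", "green", "blue"]),
        "doors": [2, 4, 4, 2],
    })
    y = [0, 0, 1, 1]

    # "from_dtype" tells HGBT to treat category-dtype columns as categorical,
    # so no one-hot or ordinal encoding is needed
    clf = HistGradientBoostingClassifier(
        categorical_features="from_dtype", min_samples_leaf=1
    ).fit(X, y)
    print(clf.predict(X))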

Here's a longer comparison of native categorical support versus one-hot encoding and ordinal encoding.


Going further

If you want to learn more about HGBT, check out the scikit-learn user guide.

Or if you're new to scikit-learn, check out one of my FREE scikit-learn courses!


👋 Until next time

Did you like this week’s tip? Please send it to a friend or share this link on social. It really helps me out! 🙌

See you next Tuesday!

- Kevin

P.S. Microsoft Excel World Championship (with live commentary)

Did someone AWESOME forward you this email? Sign up here to receive Data Science tips every week!
