Hi Reader,
Last week, I had an hour-long chat with my friend Ken Jee for his podcast. Starting tomorrow, you can watch the conversation on the Ken’s Nearest Neighbors YouTube channel!
When working on a Machine Learning problem, it’s always a good idea to try different types of models to see which one performs best.
However, you can also use a process called “ensembling” to combine multiple models. The goal is to produce a combined model, known as an ensemble, that performs better than any of the individual models.
The process for ensembling is simple: train multiple models on the same data, have each model make predictions, and then combine those predictions (for example, by majority vote for classification or by averaging for regression).
The idea behind ensembling is that if you have a collection of individually imperfect models, the “one-off” errors made by each model are probably not going to be made by the rest of the models. Thus, the errors will be discarded (or at least reduced) when ensembling the models.
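Here's a quick back-of-the-envelope illustration of that idea (the 30% error rate is just an illustrative assumption): if three models are each wrong 30% of the time, and their errors are independent, a majority vote is only wrong when at least two models are wrong at once.

```python
# Why ensembling helps: three models, each with an independent
# 30% error rate (illustrative assumption), combined by majority vote.
p = 0.3  # per-model error rate

# Majority vote errs when exactly 2 models are wrong, or all 3 are wrong
majority_error = 3 * p**2 * (1 - p) + p**3
print(round(majority_error, 3))  # 0.216, down from 0.3
```

In practice model errors are never fully independent, so the real gain is smaller, but the direction of the effect is the same.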
Here’s a simple example in which I ensembled Logistic Regression and Random Forests using scikit-learn’s VotingClassifier:
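The original code isn't shown here, so below is a minimal sketch of the same approach. The synthetic dataset and the specific parameter values are my assumptions, not the original example's:

```python
# Sketch: ensembling Logistic Regression and Random Forests with
# scikit-learn's VotingClassifier. The dataset is synthetic (an
# assumption -- the original example used its own data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=1)

lr = LogisticRegression(max_iter=1000, random_state=1)
rf = RandomForestClassifier(random_state=1)

# "soft" voting averages the predicted class probabilities
ensemble = VotingClassifier(estimators=[("lr", lr), ("rf", rf)],
                            voting="soft")

for name, model in [("lr", lr), ("rf", rf), ("ensemble", ensemble)]:
    score = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {score:.3f}")
```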
Notice that the accuracy of the ensemble (0.725) is significantly better than the accuracy of either individual model. (Check out the full code here.)
➡️ Ensembling is useful any time model accuracy (or another evaluation metric) is your highest priority. Keep in mind that the ensemble will be less interpretable than the individual models.
➡️ It’s ideal to include at least 3 models in the ensemble.
➡️ It’s important that all models you include are performing reasonably well on their own.
➡️ It’s best if the included models generate their predictions using different processes, since they are then likely to make different types of errors. (This is what makes Logistic Regression and Random Forests good candidates for ensembling!)
If you enjoyed this week’s tip, please forward it to a friend! Takes only a few seconds, and it really helps me out! 🙌
See you next Tuesday!
- Kevin
P.S. Gym rats vs data scientists
Did someone awesome forward you this email? Sign up here to receive data science tips every week!