Hi Reader!
Welcome to the first issue of “Tuesday Tips,” a new series in which I’ll share a data science tip with you every Tuesday!
These tips will come from all over the data science spectrum: Machine Learning, Python, data analysis, NLP, Jupyter, and much more!
I hope they will help you to learn something new, work more efficiently, or just motivate and inspire you ✨
In supervised Machine Learning, “hyperparameter tuning” is the process of searching for the model settings (known as “hyperparameters”) that make your model most effective. For example, if you’re trying to improve your model’s accuracy, you want to find the hyperparameter values that maximize its accuracy score.
One common way to tune your model is through a “grid search”, which basically means that you define the sets of hyperparameter values you want to try, and your model evaluation procedure (such as cross-validation) checks every combination of those values to see which one works best.
Sounds great, right?
Well, one big problem with grid search is that if your model is slow to train or you have a lot of parameters you want to try, this process can take a LONG TIME.
So what’s the solution? Actually, I’ve got two for you:
1. If you’re using GridSearchCV in scikit-learn, use the “n_jobs” parameter to turn on parallel processing. Set it to -1 to use all processors, though be careful about using that setting in a shared computing environment!
🔗 2-minute demo of parallel processing
2. Also in scikit-learn, swap out GridSearchCV for RandomizedSearchCV. Whereas grid search checks every combination of hyperparameters, “randomized search” checks random combinations of hyperparameters. You specify how many combinations you want to try (based on how much time you have available), and it often finds the “almost best” set of hyperparameters in far less time than grid search!
🔗 5-minute demo of randomized search
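Here’s a minimal sketch of both solutions side by side. The dataset (iris), estimator (RandomForestClassifier), and parameter grid are just my choices for illustration; any model and grid will work the same way:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=1)

# the hyperparameter values to search over (9 combinations total)
param_grid = {
    "n_estimators": [10, 50, 100],
    "max_depth": [None, 3, 5],
}

# Solution 1: grid search with parallel processing
# n_jobs=-1 uses all available processors
grid = GridSearchCV(model, param_grid, cv=5, n_jobs=-1)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)

# Solution 2: randomized search only tries n_iter random combinations
# (5 of the 9), which saves time on large grids or slow models
rand = RandomizedSearchCV(model, param_grid, n_iter=5, cv=5,
                          n_jobs=-1, random_state=1)
rand.fit(X, y)
print(rand.best_params_, rand.best_score_)
```

Note that randomized search really shines when the grid is large: with 9 combinations the savings are small, but with thousands of combinations, capping the search at (say) 50 random tries makes an otherwise impractical search feasible.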
How helpful was today’s tip?
If you enjoyed this issue, please forward it to a friend! 📬
See you next Tuesday!
- Kevin
P.S. Shout-out to my long-time pal, Ben Collins, who inspired and encouraged me to start this series. He has been sharing weekly Google Sheets tips for almost 5 years! Check out his site if you want to improve your Sheets skills!