Tuesday Tip #1: Speed up your grid search 🔎


Hi Reader!

Welcome to the first issue of “Tuesday Tips,” a new series in which I’ll share a data science tip with you every Tuesday!

These tips will come from all over the data science spectrum: Machine Learning, Python, data analysis, NLP, Jupyter, and much more!

I hope they will help you to learn something new, work more efficiently, or just motivate and inspire you ✨


👉 Tip #1: Speed up your hyperparameter search

In supervised Machine Learning, “hyperparameter tuning” is the process of searching for the hyperparameter values that make your model as effective as possible. For example, if you’re trying to improve your model’s accuracy, you want to find the hyperparameter values that maximize its accuracy score.

One common way to tune your model is through a “grid search”: you define the set of hyperparameter values you want to try out, and your model evaluation procedure (like cross-validation) checks every combination of those values to see which one works best.

Sounds great, right?

Well, one big problem with grid search is that if your model is slow to train or you have a lot of parameters you want to try, this process can take a LONG TIME.

So what’s the solution? Actually, I’ve got two for you:

1. If you’re using GridSearchCV in scikit-learn, use the “n_jobs” parameter to turn on parallel processing. Set it to -1 to use all processors, though be careful about using that setting in a shared computing environment!

🔗 2-minute demo of parallel processing
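Here’s a minimal sketch of tip #1 (the dataset and hyperparameter values are just hypothetical choices for illustration). Setting n_jobs=-1 tells GridSearchCV to fit the candidate models on all available CPU cores:

```python
# Hypothetical example: grid search over an SVM's hyperparameters,
# with n_jobs=-1 running the candidate fits in parallel on all cores.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Every combination of these values will be tried: 4 x 4 = 16 candidates,
# each evaluated with 5-fold cross-validation (80 fits total)
param_grid = {
    "C": [0.1, 1, 10, 100],
    "gamma": ["scale", 0.01, 0.1, 1],
}

grid = GridSearchCV(SVC(), param_grid, cv=5, n_jobs=-1)
grid.fit(X, y)

print(grid.best_params_)
```

Since the cross-validation folds are independent of one another, they parallelize cleanly, which is why this one parameter can give you a big speedup.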

2. Also in scikit-learn, swap out GridSearchCV for RandomizedSearchCV. Whereas grid search checks every combination of hyperparameters, “randomized search” checks random combinations. You specify how many combinations you want to try (based on how much time you have available), and it often finds an “almost best” set of hyperparameters in far less time than grid search!

🔗 5-minute demo of randomized search

How helpful was today’s tip?

🤩🙂😐


If you enjoyed this issue, please forward it to a friend! 📬

See you next Tuesday!

- Kevin

P.S. Shout-out to my long-time pal, Ben Collins, who inspired and encouraged me to start this series. He has been sharing weekly Google Sheets tips for almost 5 years! Check out his site if you want to improve your Sheets skills!

