The missing manual for becoming a successful data scientist: develop the skills to use key tools and the knowledge to thrive in the AI/ML landscape
- Learn from an AI patent-holding engineering manager with deep experience in Anaconda tools and OSS
- Get to grips with critical aspects of data science such as bias in datasets and interpretability of models
- Gain a deeper understanding of the AI/ML landscape through real-world examples and practical analogies
You might already know that there’s a wealth of data science and machine learning resources available on the market, but what you might not know is how much most of these AI resources leave out. This book not only covers the major algorithm families but also builds the must-have skills that other resources skip, from avoiding bias in your data to interpreting your models.
In this book, you’ll learn how Anaconda can serve as your easy button, giving you a complete view of the capabilities of tools such as conda, including how to specify new channels to pull in any package you want and how to discover new open source tools at your disposal. You’ll also get a clear picture of how to evaluate candidate models and how to spot when a trained model has become unusable due to drift. Finally, you’ll learn powerful yet simple techniques for explaining how your model works.
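Specifying channels is typically done in an `environment.yml` file that conda reads when creating an environment. The sketch below is illustrative only; the environment name and package list are not from the book:

```yaml
# environment.yml -- a minimal sketch; the environment name and
# packages listed here are illustrative, not prescribed by the book.
name: ds-sandbox
channels:
  - conda-forge   # an extra channel searched alongside the defaults
  - defaults
dependencies:
  - python=3.11
  - numpy
  - pandas
  - scikit-learn
  - jupyter
```

You would then run `conda env create -f environment.yml` followed by `conda activate ds-sandbox` to start working inside the environment.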
By the end of this book, you’ll feel confident using conda and Anaconda Navigator to manage dependencies and gain a thorough understanding of the end-to-end data science workflow.
What you will learn
- Install packages and create virtual environments using conda
- Understand the landscape of open source software and assess new tools
- Use scikit-learn to train models and evaluate competing approaches
- Detect types of bias in your data and learn how to prevent them
- Grow your skillset with tools such as NumPy, pandas, and Jupyter Notebooks
- Solve common dataset issues, such as imbalanced and missing data
- Use LIME and SHAP to interpret and explain black-box models
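As a taste of the workflow the bullets above describe, here is a minimal scikit-learn sketch that handles two of the common dataset issues at once: it imputes missing values and uses class weighting to counter an imbalanced target. The synthetic data and every parameter choice are illustrative, not taken from the book:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic, imbalanced dataset: the positive class is rare (~10%),
# and ~5% of the entries are knocked out to simulate missing data.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 1.5).astype(int)
X[rng.random(X.shape) < 0.05] = np.nan

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

# Impute missing values, then fit a classifier that reweights
# the rare class so it is not drowned out by the majority class.
model = make_pipeline(
    SimpleImputer(strategy="median"),
    LogisticRegression(class_weight="balanced"),
)
model.fit(X_train, y_train)

score = balanced_accuracy_score(y_test, model.predict(X_test))
print(f"balanced accuracy: {score:.2f}")
```

Balanced accuracy is used instead of plain accuracy because, on imbalanced data, a model that always predicts the majority class can score deceptively well on the latter.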
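LIME and SHAP themselves are covered in the book; as a stand-in that needs only scikit-learn, the sketch below uses permutation importance, a related model-agnostic technique that measures how much shuffling each feature hurts a fitted model. The dataset and model here are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data: 5 features, of which only the first 2 are
# informative (shuffle=False keeps them in the first columns).
X, y = make_classification(
    n_samples=400, n_features=5, n_informative=2, n_redundant=0,
    shuffle=False, random_state=0,
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much the test score
# drops: a large drop means the model leans heavily on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=20, random_state=0
)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:+.3f}")
```

Like LIME and SHAP, this treats the model as a black box: it only probes inputs and outputs, so it works for any fitted estimator.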