Practical Deep Learning for Coders
Part 1:
- 1: Getting started
- 2: Deployment
- 3: Neural net foundations
- 4: Natural Language (NLP)
- 5: From-scratch model
- 6: Random forests
- 7: Collaborative filtering
- 8: Convolutions (CNNs)
- Bonus: Data ethics
Part 2:
- Part 2 overview
- 9: Stable Diffusion
- 10: Diving Deeper
- 11: Matrix multiplication
- 12: Mean shift clustering
- 13: Backpropagation & MLP
- 14: Backpropagation
- 15: Autoencoders
- 16: The Learner framework
- 17: Initialization/normalization
- 18: Accelerated SGD & ResNets
- 19: DDPM and Dropout
- 20: Mixed Precision
- 21: DDIM
- 22: Karras et al (2022)
- 23: Super-resolution
- 24: Attention & transformers
- 25: Latent diffusion
- Bonus: Lesson 9a
- Bonus: Lesson 9b
A free course designed for people with some coding experience, who want to learn how to apply deep learning and machine learning to practical problems.
New!
We just launched a new 30+ hour video course for more experienced students:
Practical Deep Learning for Coders part 2: Deep Learning Foundations to Stable Diffusion
Deep learning can do all kinds of amazing things. For instance, all illustrations throughout this website are made with deep learning, using DALL-E 2.
Practical Deep Learning for Coders 2022 part 1, recorded at the University of Queensland, covers topics such as how to:
- Build and train deep learning models for computer vision, natural language processing, tabular analysis, and collaborative filtering problems
- Create random forests and regression models
- Deploy models
- Use PyTorch, the world’s fastest-growing deep learning library, plus popular libraries like fastai and Hugging Face (the short code sketch after this list gives a taste of the style)
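To give a sense of the code you’ll write, here is a minimal sketch in the spirit of the lesson 1 notebook: fine-tuning a pretrained image classifier with fastai. The dataset, `valid_pct`, image size, and other settings here are illustrative choices, not necessarily the exact ones used in the course.

```python
# A minimal fastai image-classification sketch (illustrative settings, not the exact course code).
from fastai.vision.all import *

# Download and extract the Oxford-IIIT Pets sample dataset that ships with fastai.
path = untar_data(URLs.PETS) / 'images'

def is_cat(filename):
    # In this dataset, cat images have filenames starting with an uppercase letter.
    return filename[0].isupper()

# Build training/validation dataloaders from the image files, holding out 20% for validation.
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

# Fine-tune a pretrained ResNet-34 for one epoch (transfer learning).
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
```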
There are 9 lessons, and each lesson is around 90 minutes long. The course is based on our 5-star rated book, which is freely available online.
You don’t need any special hardware or software — we’ll show you how to use free resources for both building and deploying models. You don’t need any university math either — we’ll teach you the calculus and linear algebra you need during the course.
Get started: Start watching lesson 1 now!
Our videos have been viewed over 6,000,000 times already! Take a look at the dozens of testimonials about our book and course by alumni, top academics, and industry experts.
‘Deep Learning is for everyone’ we see in Chapter 1, Section 1 of this book, and while other books may make similar claims, this book delivers on the claim. The authors have extensive knowledge of the field but are able to describe it in a way that is perfectly suited for a reader with experience in programming but not in machine learning. The book shows examples first, and only covers theory in the context of concrete examples. For most people, this is the best way to learn. The book does an impressive job of covering the key applications of deep learning in computer vision, natural language processing, and tabular data processing, but also covers key topics like data ethics that some other books miss. Altogether, this is one of the best sources for a programmer to become proficient in deep learning.
Peter Norvig
Director of Research, Google
By the end of the second lesson, you will have built and deployed your own deep learning model on data you collect. Many students post their course projects to our forum; you can view them here. For instance, if there’s an unknown dinosaur in your backyard, maybe you need this dinosaur classifier!
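To show how small the deployment step can be, here is a hedged sketch of wrapping a classifier in a web app with Gradio, the library the course uses for demos on Hugging Face Spaces. The `classify` function and its dummy labels are placeholders; in the lesson you would call your trained model instead.

```python
# A toy Gradio app (placeholder model; in the course you'd call your trained learner instead).
import gradio as gr

def classify(img):
    # Placeholder: return fixed probabilities so the sketch stays self-contained.
    # A real app would run the uploaded image through a trained model here.
    return {"dinosaur": 0.9, "not a dinosaur": 0.1}

# An image-in, label-out interface; launch() serves a local web UI
# (on Hugging Face Spaces this becomes a shareable hosted app).
demo = gr.Interface(fn=classify, inputs=gr.Image(), outputs=gr.Label())
demo.launch()
```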
Alumni of our course have gone on to jobs at organizations like Google Brain, OpenAI, Adobe, Amazon, and Tesla, published research at top conferences such as NeurIPS, and created startups using skills they learned here. Pedro Cuenca, lead developer of the widely acclaimed Camera+ app, went on after completing the course to add deep learning features to his product, which was then featured by Apple for its “machine learning magic”.
I am Jeremy Howard, your guide on this journey. I lead the development of fastai, the software that you’ll be using throughout this course. I have been using and teaching machine learning for around 30 years. I was the top-ranked competitor globally in machine learning competitions on Kaggle (the world’s largest machine learning community) two years running. Following this success, I became the President and Chief Scientist of Kaggle. Since first using neural networks 25 years ago, I have led many companies and projects that have machine learning at their core, including founding the first company to focus on deep learning and medicine, Enlitic (chosen by MIT Tech Review as one of the “world’s smartest companies”).
Jeremy Howard
I am the co-founder, along with Dr. Rachel Thomas, of fast.ai, the organization behind this course. At fast.ai we care a lot about teaching. In this course, I start by showing how to use a complete, working, very usable, state-of-the-art deep learning network to solve real-world problems, using simple, expressive tools. And then we gradually dig deeper and deeper into understanding how those tools are made, and how the tools that make those tools are made, and so on… We always teach through examples. We ensure that there is a context and a purpose that you can understand intuitively, rather than starting with algebraic symbol manipulation.
Previous fast.ai courses have been studied by hundreds of thousands of students, from all walks of life, from all parts of the world. Many students have told us how they’ve become multiple gold medal winners of international machine learning competitions, received job offers from top companies, and had research papers published. For instance, Isaac Dimitrovsky told us that he had “been playing around with ML for a couple of years without really grokking it… [then] went through the fast.ai part 1 course late last year, and it clicked for me”. He went on to achieve first place in the prestigious international RA2-DREAM Challenge competition! He developed a multistage deep learning method for scoring radiographic hand and foot joint damage in rheumatoid arthritis, taking advantage of the fastai library.
It doesn’t matter if you don’t come from a technical or a mathematical background (though it’s okay if you do too!); we wrote this course to make deep learning accessible to as many people as possible. The only prerequisite is that you know how to code (a year of experience is enough), preferably in Python, and that you have at least followed a high school math course.
Deep learning is a computer technique that uses multiple layers of neural networks to extract and transform data, with use cases ranging from human speech recognition to animal imagery classification. A lot of people assume that you need all kinds of hard-to-find stuff to get great results with deep learning, but as you’ll see in this course, those people are wrong. Here are a few things you absolutely don’t need to do world-class deep learning:
| Myth (don’t need) | Truth |
|---|---|
| Lots of math | Just high school math is sufficient |
| Lots of data | We’ve seen record-breaking results with <50 items of data |
| Lots of expensive computers | You can get what you need for state-of-the-art work for free |
In this course, you’ll be using PyTorch, fastai, Hugging Face Transformers, and Gradio.
We’ve completed hundreds of machine learning projects using dozens of different packages, and many different programming languages. At fast.ai, we have written courses using most of the main deep learning and machine learning packages used today. We spent over a thousand hours testing PyTorch before deciding that we would use it for future courses, software development, and research. PyTorch is now the world’s fastest-growing deep learning library and is already used for most research papers at top conferences.
PyTorch works best as a low-level foundation library, providing the basic operations for higher-level functionality. The fastai library is one of the most popular libraries for adding this higher-level functionality on top of PyTorch. In this course, as we go deeper and deeper into the foundations of deep learning, we will also go deeper and deeper into the layers of fastai.
Transformers is a popular library focused on natural language processing (NLP) using transformer models. In the course you’ll see how to create a cutting-edge transformer model using this library to detect similar concepts in patent applications.
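As a rough illustration of what the Transformers library feels like to use (the course’s patent notebook fine-tunes its own model rather than using a ready-made pipeline), a pretrained pipeline can be run in a few lines:

```python
# A minimal Hugging Face Transformers sketch (not the course's patent-similarity model).
from transformers import pipeline

# Load a ready-made sentiment-analysis pipeline; this downloads a small pretrained model on first use.
classifier = pipeline("sentiment-analysis")

# Run it on a sentence; the output is a label plus a confidence score,
# e.g. roughly [{'label': 'POSITIVE', 'score': 0.99}].
print(classifier("Deep learning finally clicked for me."))
```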
Deep learning has power, flexibility, and simplicity. That’s why we believe it should be applied across many disciplines. These include the social and physical sciences, the arts, medicine, finance, scientific research, and many more. Here’s a list of some of the thousands of tasks in different areas at which deep learning, or methods heavily using deep learning, is now the best in the world:
- Natural language processing (NLP): Answering questions; speech recognition; summarizing documents; classifying documents; finding names, dates, etc. in documents; searching for articles mentioning a concept
- Computer vision: Satellite and drone imagery interpretation (e.g., for disaster resilience); face recognition; image captioning; reading traffic signs; locating pedestrians and vehicles in autonomous vehicles
- Medicine: Finding anomalies in radiology images, including CT, MRI, and X-ray images; counting features in pathology slides; measuring features in ultrasounds; diagnosing diabetic retinopathy
- Biology: Folding proteins; classifying proteins; many genomics tasks, such as tumor-normal sequencing and classifying clinically actionable genetic mutations; cell classification; analyzing protein/protein interactions
- Image generation: Colorizing images; increasing image resolution; removing noise from images; converting images to art in the style of famous artists
- Recommendation systems: Web search; product recommendations; home page layout
- Playing games: Chess, Go, most Atari video games, and many real-time strategy games
- Robotics: Handling objects that are challenging to locate (e.g., transparent, shiny, lacking texture) or hard to pick up
- Other applications: Financial and logistical forecasting, text to speech, and much more…
After finishing this course you will know:
- How to train models that achieve state-of-the-art results in:
- Computer vision, including image classification (e.g., classifying pet photos by breed)
- Natural language processing (NLP), including document classification (e.g., movie review sentiment analysis) and phrase similarity
- Tabular data with categorical data, continuous data, and mixed data
- Collaborative filtering (e.g., movie recommendation)
- How to turn your models into web applications, and deploy them
- Why and how deep learning models work, and how to use that knowledge to improve the accuracy, speed, and reliability of your models
- The latest deep learning techniques that really matter in practice
- How to implement stochastic gradient descent and a complete training loop from scratch (see the small sketch after this list)
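To preview what “from scratch” means here, the sketch below fits a straight line with a hand-written SGD loop in plain PyTorch. The toy data and learning rate are made up for illustration; the course builds the real thing step by step.

```python
# A tiny hand-written SGD training loop in plain PyTorch (toy data, illustrative only).
import torch

# Fake linear data: y = 3x + 2 plus a little noise.
x = torch.linspace(-1, 1, 100).unsqueeze(1)
y = 3 * x + 2 + 0.1 * torch.randn_like(x)

# Parameters to learn, with gradient tracking enabled.
w = torch.randn(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

lr = 0.1
for epoch in range(100):
    pred = x * w + b                  # forward pass
    loss = ((pred - y) ** 2).mean()   # mean squared error
    loss.backward()                   # backpropagation: compute gradients
    with torch.no_grad():
        w -= lr * w.grad              # gradient descent step
        b -= lr * b.grad
        w.grad.zero_()                # reset gradients for the next iteration
        b.grad.zero_()

print(w.item(), b.item())  # should end up close to 3 and 2
```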
Here are some of the techniques covered (don’t worry if none of these words mean anything to you yet; you’ll learn them all soon):
- Random forests and gradient boosting
- Affine functions and nonlinearities
- Parameters and activations
- Transfer learning
- Stochastic gradient descent (SGD)
- Data augmentation
- Weight decay
- Image classification
- Entity and word embeddings
- And much more
To watch the videos, click on the Lessons section in the navigation sidebar. The videos are all captioned; while watching, click the “CC” button to turn captions on or off. To get a sense of what’s covered in a lesson, you might want to skim through some lesson notes taken by one of our students (thanks Daniel!). Here are his lesson 7 notes and lesson 8 notes. You can also access all the videos through this YouTube playlist.
Each video is designed to go with various chapters from the book. The entirety of every chapter of the book is available as an interactive Jupyter Notebook. Jupyter Notebook is the most popular tool for doing data science in Python, for good reason: it is powerful, flexible, and easy to use. We think you will love it! Since the most important thing for learning deep learning is writing code and experimenting, you’ll want a great platform for doing exactly that.
We’ll mainly use Kaggle Notebooks and Paperspace Gradient because we’ve found they work really well for this course and have good free options. We’ll also do some parts of the course on your own laptop. (If you don’t have a Paperspace account yet, sign up with this link to get $10 credit – and we get a credit too.)
We strongly suggest not using your own computer for training models in this course, unless you’re very experienced with Linux system administration and handling GPU drivers, CUDA, and so forth.
If you need help, there’s a wonderful online community ready to help you at forums.fast.ai. Before asking a question on the forums, search carefully to see if your question has been answered before.