Deep Learning Crash Course

Early Access - Use Code PREORDER for 25% Off
by Benjamin Midtvedt, Jesús Pineda, Henrik Klein Moberg, Harshith Bachimanchi, Joana B. Pereira, Carlo Manzo, Giovanni Volpe
No Starch Press, San Francisco (CA), 2025
ISBN-13: 9781718503922
https://nostarch.com/deep-learning-crash-course


  1. Dense Neural Networks for Classification

  2. Dense Neural Networks for Regression

  3. Convolutional Neural Networks for Image Analysis

  4. Encoders–Decoders for Latent Space Manipulation

  5. U-Nets for Image Transformation

  6. Self-Supervised Learning to Exploit Symmetries

  7. Recurrent Neural Networks for Timeseries Analysis

  8. Attention and Transformers for Sequence Processing

  9. Generative Adversarial Networks for Image Synthesis

  10. Diffusion Models for Data Representation and Exploration

  11. Graph Neural Networks for Relational Data Analysis

  12. Active Learning for Continuous Learning
    Describes techniques to iteratively select the most informative samples to label, improving model performance efficiently.

  • Code 12-1: Training a Binary Classifier with Active Learning
    Demonstrates the implementation of active learning for a simple binary classification task. A logistic regression model is iteratively trained on a dataset with two distinct classes by selecting the most informative samples.
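
    The core loop can be sketched as follows. This is an illustrative sketch assuming scikit-learn, not the book's own listing; the dataset, seed size, and query budget are arbitrary choices:

```python
# Hypothetical sketch of active learning for binary classification.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two classes; a small labeled seed set and a large unlabeled pool.
X, y = make_blobs(n_samples=500, centers=2, cluster_std=2.0, random_state=0)
labeled = list(rng.choice(len(X), size=10, replace=False))
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression()
for _ in range(20):  # 20 query rounds, one sample each
    model.fit(X[labeled], y[labeled])
    # Uncertainty sampling: query the pool point whose predicted
    # class probability is closest to 0.5 (the decision boundary).
    proba = model.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(proba - 0.5)))]
    labeled.append(query)
    pool.remove(query)

print(f"Accuracy with {len(labeled)} labels: {model.score(X, y):.2f}")
```

    The key design choice is the query criterion: instead of labeling points at random, each round labels the point the current model is least sure about, so labeling effort concentrates where it changes the decision boundary most.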

  • Code 12-2: Training a Three-Class Classifier with Active Learning
    Extends active learning to a multi-class classification problem. A dataset with three groups of overlapping data points is used to test the effectiveness of active learning. Random sampling and uncertainty sampling strategies are compared by plotting decision boundaries and observing how queries concentrate near class overlaps.
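
    A hedged sketch of that comparison, again assuming scikit-learn rather than the book's code (the blobs, seed size, and least-confidence criterion are illustrative stand-ins, and the plotting is omitted):

```python
# Random vs. uncertainty sampling on three overlapping Gaussian blobs.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=600, centers=3, cluster_std=3.0, random_state=1)

def run(strategy, n_queries=40, seed=0):
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X), size=9, replace=False))
    pool = [i for i in range(len(X)) if i not in labeled]
    model = LogisticRegression(max_iter=1000)
    for _ in range(n_queries):
        model.fit(X[labeled], y[labeled])
        if strategy == "random":
            query = pool[int(rng.integers(len(pool)))]
        else:
            # Least-confidence uncertainty sampling: query the point
            # whose top predicted class probability is lowest.
            conf = model.predict_proba(X[pool]).max(axis=1)
            query = pool[int(np.argmin(conf))]
        labeled.append(query)
        pool.remove(query)
    return model.score(X, y)

acc_random = run("random")
acc_uncertainty = run("uncertainty")
print(f"random:      {acc_random:.2f}")
print(f"uncertainty: {acc_uncertainty:.2f}")
```

    With overlapping classes, least-confidence queries tend to land in the overlap regions, which is exactly the behavior the listing visualizes with decision-boundary plots.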

  • Code 12-A: Training an MNIST Digits Classifier with Active Learning
    Applies active learning to classify handwritten digits from the MNIST dataset using a convolutional neural network. Key elements include preprocessing the MNIST dataset, training a CNN that achieves 99% accuracy on the full dataset, and comparing three active learning strategies—random, uncertainty, and adversarial sampling—where adversarial sampling reaches benchmark performance using just 3% of the labeled data.
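
    As a lightweight stand-in for that experiment (not the book's listing: scikit-learn's 8×8 digits replace MNIST, logistic regression replaces the CNN, and only random-seeded uncertainty sampling is shown, so the 99%/3% figures will not be reproduced):

```python
# Active learning on scikit-learn's small digits dataset.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Benchmark: train on the full labeled training set.
full = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print(f"full data ({len(X_tr)} labels): {full.score(X_te, y_te):.2f}")

# Active learning: seed with 20 labels, then query 100 more
# by least confidence.
rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X_tr), size=20, replace=False))
pool = [i for i in range(len(X_tr)) if i not in labeled]
model = LogisticRegression(max_iter=2000)
for _ in range(100):
    model.fit(X_tr[labeled], y_tr[labeled])
    conf = model.predict_proba(X_tr[pool]).max(axis=1)
    query = pool[int(np.argmin(conf))]
    labeled.append(query)
    pool.remove(query)
print(f"active ({len(labeled)} labels): {model.score(X_te, y_te):.2f}")
```

    Even in this toy setting, the actively trained model approaches the full-data benchmark with a small fraction of the labels, which is the effect the MNIST experiment demonstrates at scale.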

  13. Reinforcement Learning for Strategy Optimization

  14. Reservoir Computing for Predicting Chaos