Section outline

    • Hero image: a girl fighting AI with kung fu

    • Kung Fu – Machine Learning

      Machine Learning Kung Fu: Learn the Patterns. Master the Black Box. Beat AI at Its Own Game.
      Training teens to understand machine learning so they don’t get outcompeted by it.


      ✅ Start here (free)

      Start at Lesson 1 and train forward in short, clear micro‑presentations. Machine learning looks like “magic” to most people — this course turns it into understandable rules.

      1. Begin Lesson 1: Telling the Computer What We Want
      2. Train the short slides (one idea at a time)
      3. (Paid members) Take the drill quiz and record your score
      4. Repeat daily — belt by belt

      Goal: understand how ML learns from data, where it fails, and how it’s used to win (or cheat) in the real world.

      👨‍👩‍👧 Why this course matters (for parents)

      AI is changing the job market fast. But AI isn’t magic — it’s mostly machine learning: systems that learn patterns from data and then make decisions at scale.

      This course teaches the fundamentals of how ML works, what kinds of models exist, how they can fail (bias, overfitting, false conclusions), and how they are applied in real products.

      Parent benefit: your teen won’t just “consume AI” — they’ll understand the weapon. And you get measurable progress through quizzes + belt exams (paid membership).

      🔥 For teens (learn the weapon)

      AI is going to outcompete average people at average work. Your move is to become the person who understands how AI works — and can build, test, and control it.

      • Short missions (bite‑size training, not long lectures)
      • Belts that prove you’re leveling up
      • Retakes allowed (best score counts — improve without fear)
      • Real power: understand trees, neural nets, Bayes, clustering, deep learning, language models

      Challenge: reach Yellow Belt (your first real models) and show your score to a parent. If you like it, send it to a friend who wants to win too.

      🧠 What your teen will learn (high value targets)

      • What machine learning is (and how it differs from traditional programming)
      • How to use Python notebooks / Google Colab at a basic level (conceptually)
      • Major ML families: decision trees, neural networks, Bayes/Naive Bayes, genetic algorithms, nearest neighbors
      • Overfitting and real-world pitfalls (bias, misinterpretation, spurious patterns)
      • Core applications: clustering, recommendations, reinforcement learning, vision, language, speech
      • Modern concerns: privacy, fraud potential, correlation vs causation
      • Meta-learning: learning how to learn

      🥋 Belt map (Kung Fu ranks)

      • White Belt — Lessons 1–2: Fundamentals of learning + notebooks/Colab
      • Yellow Belt — Lessons 3–6: First models (trees, nets, Bayes)
      • Orange Belt — Lessons 7–8: More algorithms (genetic, neighbors)
      • Green Belt — Lessons 9–10: Overfitting + real-world pitfalls
      • Blue Belt — Lessons 11–12: Clustering + recommenders
      • Brown Belt — Lessons 13–20: RL, deep learning, language, GANs, speech
      • Black Belt — Lessons 21–25: Inverse RL, causality, privacy, meta-learning

      🏷️ Free vs Dojo Membership (paid)

      Free (Guest Training)

      • Access the free lesson content (training slides / micro‑presentations)
      • Read the curriculum and belt map
      • Try sample drills (optional)

      Dojo Membership (Paid)

      • Full access to all drills, quizzes, and belt tests
      • Belt tracking and certificates
      • Parent progress tracking (scores + activity history)

      Founders / Inauguration Price: $5 per course for 30 days of access (about the price of a coffee). This is the launch price while the dojo is expanding — as more belts, exams, and courses are added, the price will rise.

      📚 Curriculum (25 lessons)

      1. Telling the Computer What We Want
      2. Starting with Python Notebooks and Colab
      3. Decision Trees for Logical Rules
      4. Neural Networks for Perceptual Rules
      5. Opening the Black Box of a Neural Network
      6. Bayesian Models for Probability Prediction
      7. Genetic Algorithms for Evolved Rules
      8. Nearest Neighbors for Using Similarity
      9. The Fundamental Pitfall of Overfitting
      10. Pitfalls in Applying Machine Learning
      11. Clustering and Semi-Supervised Learning
      12. Recommendations with Three Types of Learning
      13. Games with Reinforcement Learning
      14. Deep Learning for Computer Vision
      15. Getting a Deep Learner Back on Track
      16. Text Categorization with Words as Vectors
      17. Deep Networks That Output Language
      18. Making Stylistic Images with Deep Networks
      19. Making Photorealistic Images with GANs
      20. Deep Learning for Speech Recognition
      21. Inverse Reinforcement Learning from People
      22. Causal Inference Comes to Machine Learning
      23. The Unexpected Power of Over-Parameterization
      24. Protecting Privacy within Machine Learning
      25. Mastering the Machine Learning Process

      Disclaimer: This course provides education and training and cannot guarantee a specific job outcome.

      ✅ Belt Test Rules (read before testing)

      Passing score: 80%
      Retries: Unlimited
      Score policy: Best score counts

      How belts are earned

      1. Training Drills (Lesson Quizzes) — short quizzes after lessons
      2. Belt Test (Rank Exam) — a larger exam covering the belt section

      Eligibility

      You must complete the drills for the lessons in that belt section to unlock the belt test.

      Integrity (important)

      • This is You vs AI: no AI tools or outside help during belt tests.
      • Drills are for learning; belt tests are for proof.
      • Parents are encouraged to be present or nearby during belt tests.

      Disclaimer: Belts and certificates recognize course progress and assessment performance. They do not guarantee a job outcome.

  • Lesson Goal: Introduce what machine learning is and how it differs from traditional programming, including key concepts and real-world examples (medical diagnosis and a green screen effect) that illustrate how we tell a computer what we want through data and objectives.

    • Micro-Topic 1.1: Programming vs. Learning

      Goal: Understand the difference between explicitly programming a computer and having it learn from data, and why the latter is powerful in an AI-driven world.

    • Micro-Topic 1.2: Key Concepts of Machine Learning

      Goal: Introduce fundamental ML concepts and terminology (algorithm, model, training, dataset, and types of learning) to build a foundation for deeper topics.

    • Micro-Topic 1.3: A Brief History of Machine Learning

      Goal: Highlight key milestones in the development of machine learning, to show how the field evolved and why it’s so influential today.

    • Micro-Topic 1.4: Example – ML for Medical Diagnosis

      Goal: See how machine learning can assist in predicting or diagnosing medical conditions (like diabetes) by finding patterns in patient data that might be hard for humans to program explicitly.

    • Micro-Topic 1.5: Example – Green Screen Effect (Traditional vs ML)

      Goal: Compare a traditional programming solution to a visual effect (green screen background removal) with a machine learning solution, illustrating how ML can simplify complex tasks by learning from data.

  • Lesson Goal: Ensure students can access and run machine learning code without installing software on their own computer, by using Jupyter notebooks and Google Colab. This lesson covers why Python is the go-to language for ML, what notebooks are, how to use Colab in a browser, and how this cloud setup provides more power and zero setup headaches.

    • Micro-Topic 2.1: Why Python for Machine Learning?

      Goal: Explain why Python is the most popular language for machine learning and how its features (libraries, simplicity) help beginners and experts alike.

    • Micro-Topic 2.2: Jupyter Notebooks – Interactive Coding

      Goal: Introduce Jupyter notebooks, explaining what they are and why they are useful for learning and experimenting with code, especially in data science and ML.

    • Micro-Topic 2.3: Using Google Colab – Your Cloud Coding Lab

      Goal: Explain what Google Colab is and how it allows you to run notebooks in the cloud, with zero setup and access to powerful computing resources (for free), directly from a browser.

    • Micro-Topic 2.4: Running Your First Code in Colab

      Goal: Walk through a simple example of opening a Colab notebook and running a basic Python ML snippet, to demonstrate the end-to-end process of using Colab for the first time.
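
      A minimal sketch of the kind of first snippet this lesson might run (the study-hours numbers are made up for illustration; scikit-learn comes preinstalled on Colab):

      ```python
      # A first taste of ML in a notebook: fit a line to a few points.
      from sklearn.linear_model import LinearRegression

      hours_studied = [[1], [2], [3], [4], [5]]   # input feature (made-up data)
      test_scores = [52, 60, 71, 78, 90]          # target values (made-up data)

      model = LinearRegression().fit(hours_studied, test_scores)
      print(model.predict([[6]]))  # predict the score after 6 hours of study
      ```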

  • Lesson Goal: Introduce decision trees as a simple yet powerful machine learning method that learns logical if-then rules from data. Students will see how a decision tree can learn explicit decision rules (like a flowchart) to solve problems, through examples like a spelling rule and a medical prediction. They will also understand how decision trees learn (splitting criteria) and their pros and cons (interpretable but can overfit).

    • Micro-Topic 3.1: What is a Decision Tree?

      Goal: Explain the concept of a decision tree algorithm and how it represents decisions as a flowchart of questions, leading to an outcome. This provides a foundation before diving into examples.

    • Micro-Topic 3.2: Case Study: Learning a Spelling Rule

      Goal: Demonstrate a simple, relatable example of a decision tree learning a logical rule – the English spelling rule “i before e except after c” – showing how the tree can capture such a rule and its exceptions.

    • Micro-Topic 3.3: Case Study: Decision Tree for Diabetes Prediction

      Goal: Follow up on Lesson 1’s medical example in more detail: show how to build a decision tree model to predict diabetes risk from patient data, how the tree splits on health factors, and how we interpret the resulting rules.

    • Micro-Topic 3.4: How Decision Trees Learn (Splitting Criteria)

      Goal: Give a peek under the hood of the decision tree training process – how does the algorithm decide what question to ask, and what are concepts like information gain or Gini impurity (explained intuitively) that guide the tree building?
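
      To make the splitting criterion concrete, here is a small worked sketch of Gini impurity with made-up class counts; a tree-building algorithm prefers the split whose resulting groups have the lowest impurity:

      ```python
      # Gini impurity of a group: 1 - sum(p_k^2) over classes k.
      # 0.0 means the group is pure (one class); 0.5 is a 50/50 mix of two classes.
      def gini(counts):
          total = sum(counts)
          return 1 - sum((c / total) ** 2 for c in counts)

      print(gini([10, 0]))   # 0.0  -> pure group, a perfect split
      print(gini([5, 5]))    # 0.5  -> maximally mixed (two classes)
      print(gini([8, 2]))    # 0.32 -> mostly one class
      ```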

    • Micro-Topic 3.5: Pros and Cons of Decision Trees

      Goal: Summarize the advantages and disadvantages of decision trees, preparing students to understand when to use them and what pitfalls to watch out for (like overfitting), connecting to the theme of always choosing the right tool in the “war” against complex problems.

    • Micro-Topic 4.1: Why Neural Networks for Perception?

      Goal: Motivate why we need neural networks by discussing tasks like vision and audio that involve patterns too complex for manual rules or simple trees, and how neural networks mimic a brain-like approach to handle these.

    • Micro-Topic 4.2: What is a Neural Network?

      Goal: Provide a conceptual understanding of artificial neural networks – nodes (neurons) connected by weights, organized in layers. Keep it simple: focus on the idea of inputs going through hidden layers to produce outputs, and that the network learns by adjusting weights.

    • Micro-Topic 4.3: Learning and Training Neural Networks

      Goal: Dive a bit more into how neural networks learn from data – the concept of training with many examples, the role of a loss function and optimization (gradient descent), and the need for lots of data. Keep it conceptual (no heavy math) but ensure students know it's a trial-and-error weight adjustment process.
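
      As a taste of the “trial-and-error weight adjustment” idea, a minimal one-weight sketch of gradient descent (the loss function and numbers are invented for illustration):

      ```python
      # Gradient descent on a one-parameter "loss": loss(w) = (w - 3)**2.
      # The gradient is 2*(w - 3); stepping against it walks w toward 3.
      w = 0.0            # initial guess for the weight
      lr = 0.1           # learning rate (step size)
      for step in range(50):
          grad = 2 * (w - 3)   # slope of the loss at the current w
          w -= lr * grad       # adjust the weight against the slope
      print(round(w, 4))       # ~3.0, the weight that minimizes the loss
      ```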

    • Micro-Topic 4.4: Example – Recognizing Handwritten Digits (NN vs. Decision Tree)

      Goal: Compare a neural network’s approach to a decision tree’s approach on a concrete problem: identifying handwritten digits (0-9) from images. This highlights how neural nets handle such perceptual tasks better, and what their results look like.
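
      A sketch of how this comparison could look in scikit-learn, using its built-in 8x8 digits dataset (the model sizes and the quoted accuracies are illustrative, not exact):

      ```python
      # Compare a decision tree and a small neural network on handwritten digits.
      from sklearn.datasets import load_digits
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.neural_network import MLPClassifier

      X, y = load_digits(return_X_y=True)  # 8x8 grayscale digit images, flattened
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
      net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                          random_state=0).fit(X_train, y_train)

      print("tree accuracy:", tree.score(X_test, y_test))   # typically around 0.85
      print("net accuracy: ", net.score(X_test, y_test))    # typically around 0.97
      ```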

    • Micro-Topic 4.5: Neural Networks vs. Decision Trees – Key Differences

      Goal: Summarize the differences between neural networks and decision trees in terms of how they work, what they’re good at, interpretability, data requirements, etc., reinforcing why we introduced neural nets after trees for different kinds of problems.

  • Lesson Goal: Demystify what’s happening inside a trained neural network (“open the black box”) by walking through a simple neural network step-by-step. This lesson also revisits the green screen example from Lesson 1, but now using a neural network approach to actually implement the effect, illustrating how the network’s internals operate. The aim is to give students an intuition for how data moves through a network and how we might interpret or debug a network.

    • Micro-Topic 5.1: The “Black Box” Problem

      Goal: Acknowledge and explain why neural networks are often called “black boxes” – it’s hard to know what exactly is happening inside – and why that can be a problem (or a challenge to overcome).

    • Micro-Topic 5.2: Inside a Simple Neural Network

      Goal: Walk through a very small neural network manually to show how data flows and gets processed. Use a simple example with, say, 2 inputs, 2 hidden neurons, and 1 output neuron (a tiny network) to illustrate the calculations of a forward pass.
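
      A minimal sketch of that forward pass in NumPy, with made-up weights, so every number can be traced by hand:

      ```python
      import numpy as np

      def sigmoid(z):
          return 1 / (1 + np.exp(-z))        # squashes any number into (0, 1)

      x = np.array([0.5, 0.8])               # 2 inputs
      W1 = np.array([[0.1, 0.4],             # weights: inputs -> 2 hidden neurons
                     [0.7, 0.2]])
      b1 = np.array([0.0, 0.1])              # hidden-layer biases
      W2 = np.array([0.6, 0.9])              # weights: hidden -> 1 output neuron
      b2 = 0.05

      hidden = sigmoid(W1 @ x + b1)          # each hidden neuron: weighted sum + squash
      output = sigmoid(W2 @ hidden + b2)     # the output neuron does the same
      print(hidden, output)
      ```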

    • Micro-Topic 5.3: Implementing the Green Screen ML Solution

      Goal: Return to the green screen example from Lesson 1, but now actually outline a simple neural network model to perform it. Show how we would set up inputs (pixel color values), outputs (background vs foreground), and how the network would figure out the task, thereby reinforcing understanding of a practical neural net.

    • Micro-Topic 5.4: Debugging a Neural Network (Finding the Bug)

      Goal: Illustrate how one might go about diagnosing issues in a neural network’s performance. This ties into “opening the black box” by not just peeking at weights, but using systematic approaches to find where a network might be going wrong. Use a hypothetical scenario (perhaps a network isn’t learning as expected, or it’s making a silly mistake) and describe debugging steps.

    • Micro-Topic 5.5: Interpreting Neural Networks – Towards Explainable AI

      Goal: Briefly discuss methods or approaches that help interpret neural networks (connecting to the idea of opening the black box). This might include mention of feature importance for networks, visualization of convolutional filters or attention maps, etc., at a high level. Emphasize that while hard, it’s an active area so students appreciate efforts to make AI more transparent.

  • Lesson Goal: Introduce Bayesian reasoning in machine learning, focusing on the Naive Bayes classifier for predicting probabilities (like spam detection). Students will learn Bayes’ theorem conceptually, see how Naive Bayes makes simplifying independence assumptions, and understand how it uses evidence (features) to update probability beliefs. The spam filtering example is used to make it concrete. The lesson emphasizes the “effect to cause” thinking (looking at evidence to infer the cause) that defines Bayesian models.

    • Micro-Topic 6.1: Basics of Bayes’ Theorem

      Goal: Explain Bayes’ theorem in simple terms – how we update probabilities when given new evidence. Provide a straightforward example (not necessarily spam yet; maybe a medical test example or something intuitive) to illustrate prior, likelihood, and posterior.
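
      A worked numeric sketch of Bayes’ theorem using a made-up medical test (all the rates are invented for illustration):

      ```python
      # Bayes' theorem with a made-up medical test:
      #   1% of people have the disease; the test catches 90% of true cases
      #   but also flags 5% of healthy people. Given a positive test,
      #   how likely is the disease?  P(D|+) = P(+|D) * P(D) / P(+)
      p_disease = 0.01
      p_pos_given_disease = 0.90
      p_pos_given_healthy = 0.05

      p_pos = (p_pos_given_disease * p_disease
               + p_pos_given_healthy * (1 - p_disease))
      p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
      print(round(p_disease_given_pos, 3))   # ~0.154, lower than most people guess
      ```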

    • Micro-Topic 6.2: Naive Bayes Classifier – How It Works

      Goal: Explain the Naive Bayes algorithm: how it uses Bayes’ theorem to classify data by computing the probability of each class given the features, assuming features are independent given the class (the “naive” assumption). Introduce the idea of prior probability of classes and likelihood of features.

    • Micro-Topic 6.3: Example – Spam Filtering with Naive Bayes

      Goal: Walk through how a Naive Bayes classifier filters spam emails by looking at words. Use a specific example of an email and show how the presence of certain words influences the probabilities for spam vs not-spam.
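
      A minimal sketch of the spam calculation with made-up word probabilities, showing the “naive” multiplication of per-word evidence:

      ```python
      # Naive Bayes on one made-up email: multiply per-word likelihoods
      # under each class, then compare.
      p_spam, p_ham = 0.4, 0.6                       # prior class probabilities
      p_word_spam = {"free": 0.30, "winner": 0.20, "meeting": 0.01}
      p_word_ham = {"free": 0.02, "winner": 0.01, "meeting": 0.15}

      email = ["free", "winner"]
      score_spam, score_ham = p_spam, p_ham
      for word in email:
          score_spam *= p_word_spam[word]            # "naive": treat words as independent
          score_ham *= p_word_ham[word]

      print("spam" if score_spam > score_ham else "not spam")
      print(score_spam / (score_spam + score_ham))   # posterior probability of spam
      ```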

    • Micro-Topic 6.4: Backward Reasoning – Effects to Causes

      Goal: Emphasize the Bayesian mindset of going from evidence to cause, perhaps contrasting it with forward reasoning. This ties back to the description of going “backwards from effects to causes” in the lesson summary. Possibly give another example or highlight how this appears in ML (like diagnosing why an ML output happened by looking at evidence).

    • Micro-Topic 6.5: Strengths and Weaknesses of Naive Bayes

      Goal: Summarize where Naive Bayes works well and where it might fail, reinforcing the understanding of the assumptions. This prepares students for knowing when to use Bayesian methods.

  • Overview: Genetic Algorithms (GAs) are optimization methods that evolve solutions by mimicking natural selection. This lesson teaches how GAs work and why they’re useful for creating rules or designs that aren’t easily found by conventional programming.

    • Micro-Topic 7.1: What Are Genetic Algorithms? (Goal: Explain the concept of a genetic algorithm as “survival of the fittest” applied to computing.)

    • Micro-Topic 7.2: Chromosomes, Genes, and Fitness (Goal: Show how solutions are encoded and evaluated in a GA.)

    • Micro-Topic 7.3: Selection and Crossover (Goal: Describe how genetic algorithms breed new solutions from existing ones.)

    • Micro-Topic 7.4: Mutation and Maintaining Diversity (Goal: Explain the mutation step and why diversity is important in GAs.)

    • Micro-Topic 7.5: Example – Evolving a Solution (Goal: Walk through a concrete example of a genetic algorithm solving a problem.)
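
      One possible concrete example is “OneMax”: evolving a bit string toward all 1s. A minimal, runnable sketch (population size, generations, and rates are arbitrary choices):

      ```python
      # A tiny genetic algorithm evolving a bit string of all 1s ("OneMax").
      import random

      LENGTH, POP, GENERATIONS = 20, 30, 40

      def fitness(bits):                 # count of 1s: higher is fitter
          return sum(bits)

      pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
      for gen in range(GENERATIONS):
          pop.sort(key=fitness, reverse=True)
          survivors = pop[:POP // 2]                       # selection: keep the fittest half
          children = []
          while len(children) < POP - len(survivors):
              mom, dad = random.sample(survivors, 2)
              cut = random.randrange(1, LENGTH)            # crossover: splice two parents
              child = mom[:cut] + dad[cut:]
              i = random.randrange(LENGTH)                 # mutation: flip one random bit
              child[i] = 1 - child[i]
              children.append(child)
          pop = survivors + children

      print(fitness(max(pop, key=fitness)), "out of", LENGTH)
      ```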

    • Micro-Topic 7.6: Strengths and Limitations of Genetic Algorithms (Goal: Highlight when to use GAs, and their pros/cons.)

  • Overview: This lesson introduces the k-Nearest Neighbors (KNN) algorithm, a simple yet powerful method that makes predictions based on similarity to known examples. Students learn how “show me your neighbors, and I’ll tell you who you are” works in practice, including how to choose the number of neighbors and measure similarity.

    • Micro-Topic 8.1: Introduction to Nearest Neighbors (Goal: Understand the basic idea of using closest examples to make predictions.)

    • Micro-Topic 8.2: Measuring Similarity – Distance Metrics (Goal: Learn how to quantify “nearest” using distance measures.)

    • Micro-Topic 8.3: Choosing K (Number of Neighbors) (Goal: Understand how the choice of K affects predictions and the balance between fitting noise and generalizing.)

    • Micro-Topic 8.4: Applying KNN – A Simple Example (Goal: See KNN in action with a concrete, easy-to-grasp example.)
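
      A small sketch of KNN in scikit-learn on a made-up fruit dataset (features and labels are invented for illustration):

      ```python
      # k-nearest neighbors on a tiny made-up fruit dataset:
      # features are [weight in grams, smoothness 0-10].
      from sklearn.neighbors import KNeighborsClassifier

      X = [[150, 7], [170, 8], [140, 6], [120, 3], [110, 2], [130, 4]]
      y = ["apple", "apple", "apple", "orange", "orange", "orange"]

      knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
      print(knn.predict([[145, 5]]))   # asks: what are the 3 closest fruits?
      ```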

    • Micro-Topic 8.5: Pros and Cons of KNN (Goal: Summarize why KNN can be useful and where it struggles.)

  • Overview: Overfitting is one of the biggest hazards in machine learning. This lesson teaches what overfitting is, how it differs from underfitting, why it happens, and how to detect and prevent it. Students will learn through analogies (like studying for a test by memorization vs understanding) to grasp why a model that performs too well on training data can actually fail in the real world.

    • Micro-Topic 9.1: What is Overfitting? (Goal: Define overfitting in simple terms and illustrate its effect.)

    • Micro-Topic 9.2: Overfitting vs Underfitting (Goal: Contrast these two extremes and highlight the need for balance.)

    • Micro-Topic 9.3: Why Overfitting Happens (Goal: List the common causes and conditions that lead to overfitting.)

    • Micro-Topic 9.4: Detecting Overfitting (Goal: Learn how to tell if your model is overfitting by using validation techniques.)
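
      A minimal sketch of the core validation check: compare accuracy on the training data with accuracy on held-out data (the dataset choice here is illustrative):

      ```python
      # Detecting overfitting: compare accuracy on data the model trained on
      # versus data it has never seen. A large gap is the telltale sign.
      from sklearn.datasets import load_digits
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier

      X, y = load_digits(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)  # unlimited depth
      print("train accuracy:", tree.score(X_train, y_train))  # 1.0 -- memorized
      print("test accuracy: ", tree.score(X_test, y_test))    # noticeably lower
      ```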

    • Micro-Topic 9.5: Preventing & Fixing Overfitting (Goal: Cover strategies to avoid or reduce overfitting.)

  • Overview: Beyond overfitting, many practical pitfalls can trip you up when applying ML in the real world. This lesson covers a variety of common mistakes and issues: using bad data, evaluation errors (like testing incorrectly), distribution changes when deploying, ethical biases, and blindly trusting models. By recognizing these pitfalls, students will learn to avoid them and build more reliable ML systems.

    • Micro-Topic 10.1: “Garbage In, Garbage Out” – Data Pitfalls (Goal: Stress the importance of data quality and quantity.)

    • Micro-Topic 10.2: Validation & Evaluation Mistakes (Goal: Identify common errors in testing models and interpreting results.)

    • Micro-Topic 10.3: Deployment & Data Drift Pitfalls (Goal: Discuss problems that arise when moving a model from the lab to the real world.)

    • Micro-Topic 10.4: Ethical Pitfalls – Bias and Fairness (Goal: Make students aware of bias in ML and the importance of fairness.)

    • Micro-Topic 10.5: Over-Reliance on ML and Lack of Interpretability (Goal: Warn against blindly trusting models and discuss the need for explainability.)

  • Overview: This lesson explores unsupervised learning (finding patterns without labels) through clustering, and then introduces semi-supervised learning, which bridges supervised and unsupervised methods. Students will see how algorithms like K-Means form clusters, why choosing the number of clusters is tricky, and how unlabeled data combined with a bit of labeled data can improve learning.

    • Micro-Topic 11.1: Unsupervised Learning & Clustering Basics (Goal: Explain what unsupervised learning is and introduce clustering as grouping data by similarity.)

    • Micro-Topic 11.2: K-Means Clustering Algorithm (Goal: Teach how the K-Means algorithm partitions data into a chosen number of clusters.)
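
      A minimal K-Means sketch on a handful of made-up 2-D points, using scikit-learn:

      ```python
      # K-Means on a few 2-D points: pick k, and the algorithm alternates between
      # assigning points to the nearest center and moving each center to the
      # mean of its points.
      from sklearn.cluster import KMeans

      points = [[1, 1], [1.5, 2], [2, 1],      # one loose group...
                [8, 8], [8.5, 9], [9, 8]]      # ...and another far away

      km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
      print(km.labels_)           # which cluster each point landed in
      print(km.cluster_centers_)  # the two discovered group centers
      ```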

    • Micro-Topic 11.3: Choosing K and Cluster Evaluation (Goal: Discuss the challenge of picking the right number of clusters and how to evaluate clustering quality.)

    • Micro-Topic 11.4: Applications of Clustering (Goal: Provide concrete examples of how clustering is used in practice.)

    • Micro-Topic 11.5: Introduction to Semi-Supervised Learning (Goal: Define semi-supervised learning and explain why it’s useful.)

    • Micro-Topic 11.6: Semi-Supervised Techniques – Pseudo-Labeling & Label Propagation (Goal: Outline common approaches to perform semi-supervised learning.)

  • Overview: Recommendation systems help suggest products, movies, or content to users. This lesson frames recommenders through the lens of the three main machine learning paradigms: supervised learning (predicting ratings or preferences), unsupervised learning (finding similarities, e.g., collaborative filtering), and reinforcement learning (learning by trial and reward). Students will understand how each approach contributes to making smart recommendations.

    • Micro-Topic 12.1: The Recommendation Problem (Goal: Introduce what recommendation systems do and the challenges involved.)

    • Micro-Topic 12.2: Supervised Learning for Recommendations (Goal: Explain how recommendations can be treated as a prediction (supervised) problem.)

    • Micro-Topic 12.3: Unsupervised Learning for Recommendations (Collaborative Filtering) (Goal: Show how finding patterns without explicit labels can drive recommendations, e.g., via user-user or item-item similarity.)
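
      A tiny item-item collaborative filtering sketch on a made-up ratings matrix (users as rows, movies as columns; all numbers invented):

      ```python
      # Item-item similarity: compare movies by how users rated them.
      import numpy as np

      ratings = np.array([[5, 4, 0, 1],    # rows: users, columns: movies,
                          [4, 5, 1, 0],    # 0 means "not rated"
                          [1, 0, 5, 4],
                          [0, 1, 4, 5]], dtype=float)

      def cosine(a, b):                    # similarity of two rating columns
          return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

      movie0 = ratings[:, 0]
      sims = [cosine(movie0, ratings[:, j]) for j in range(1, 4)]
      print(sims)   # movie 1 looks most similar to movie 0 -- recommend them together
      ```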

    • Micro-Topic 12.4: Reinforcement Learning for Recommendations (Goal: Explain how recommendation can be viewed as a sequential decision problem, optimizing long-term user satisfaction.)

    • Micro-Topic 12.5: Combining Approaches – Building Better Recommenders (Goal: Discuss how real systems often hybridize methods and consider practical aspects.)

    • Micro-Topic 13.1: What Is Reinforcement Learning? (Goal: Understand how reinforcement learning lets an AI agent learn through trial-and-error feedback.)

    • Micro-Topic 13.2: Agents, Environments, and Rewards – RL Building Blocks (Goal: Learn the basic components/terminology of reinforcement learning.)

    • Micro-Topic 13.3: Learning by Playing – A Simple Game Example (Goal: See how an RL agent learns in a concrete scenario, such as a game.)

    • Micro-Topic 13.4: The Exploration–Exploitation Tradeoff (Goal: Understand the balance between trying new things and using known best strategies in RL.)
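
      A minimal sketch of the tradeoff using an epsilon-greedy “two slot machines” bandit (the payout rates are made up):

      ```python
      # Exploration vs. exploitation with an epsilon-greedy bandit:
      # two slot machines with different (hidden) payout rates.
      import random

      payout_rates = [0.3, 0.6]          # machine 1 is secretly better
      estimates, pulls = [0.0, 0.0], [0, 0]
      epsilon = 0.1                      # 10% of the time: explore at random

      for t in range(1000):
          if random.random() < epsilon:
              arm = random.randrange(2)                  # explore
          else:
              arm = estimates.index(max(estimates))      # exploit the best so far
          reward = 1 if random.random() < payout_rates[arm] else 0
          pulls[arm] += 1
          estimates[arm] += (reward - estimates[arm]) / pulls[arm]  # running average

      print(estimates, pulls)   # estimates near [0.3, 0.6]; most pulls on machine 1
      ```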

    • Micro-Topic 13.5: RL in the Real World – Success Stories in Games (Goal: Appreciate how reinforcement learning has achieved major feats, especially in games.)

    • Micro-Topic 14.1: What Is Computer Vision? (Goal: Know what computer vision means and examples of vision tasks.)

    • Micro-Topic 14.2: How Neural Networks Help Vision (Goal: Understand why deep neural networks (especially CNNs) are well-suited for image recognition.)

    • Micro-Topic 14.3: Inside a Convolutional Layer (Goal: Get an intuitive sense of how convolutional filters work to detect image features.)
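
      A minimal sketch of one convolutional filter applied by hand in NumPy (the tiny image and the edge-detecting kernel are made up):

      ```python
      # One convolutional filter by hand: slide a 3x3 "vertical edge" detector
      # over a tiny image and record how strongly each patch matches it.
      import numpy as np

      image = np.array([[0, 0, 9, 9, 9],      # dark on the left, bright on the right
                        [0, 0, 9, 9, 9],
                        [0, 0, 9, 9, 9],
                        [0, 0, 9, 9, 9]], dtype=float)
      kernel = np.array([[-1, 0, 1],          # responds to left-dark / right-bright edges
                         [-1, 0, 1],
                         [-1, 0, 1]], dtype=float)

      out = np.zeros((2, 3))
      for i in range(2):
          for j in range(3):
              patch = image[i:i+3, j:j+3]
              out[i, j] = np.sum(patch * kernel)   # one filter response per position
      print(out)   # large values along the edge, zero on the flat bright region
      ```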

    • Micro-Topic 14.4: Training Deep Vision Models – Data and Breakthroughs (Goal: Emphasize the importance of large datasets and highlight the 2012 deep learning breakthrough in vision.)

    • Micro-Topic 14.5: Applications and Impact of Deep Vision (Goal: Highlight real-world uses of deep learning in vision and how it’s changed the tech landscape.)

    • Micro-Topic 15.1: When Training Goes Wrong – Recognizing Issues (Goal: Acknowledge that training deep neural networks can encounter problems, and identify common signs of trouble.)

    • Micro-Topic 15.2: Overfitting – When Your Model Memorizes (Goal: Understand overfitting and why it’s problematic.)

    • Micro-Topic 15.3: Underfitting – When Your Model is Too Simple or Not Trained Enough (Goal: Understand underfitting and how it differs from overfitting.)

    • Micro-Topic 15.4: Fixing Overfitting – Generalization Techniques (Goal: Learn methods to reduce overfitting and improve generalization.)

    • Micro-Topic 15.5: Improving a “Stuck” or Underperforming Model (Goal: Learn strategies to handle underfitting or stagnant training – how to tune training to get better performance.)

    • Micro-Topic 16.1: From Words to Numbers – How to Represent Text (Goal: Grasp why and how we turn text into numeric form for machine learning.)

      • Micro-Topic 16.2: Word Embeddings – Giving Meaning to Word Vectors (Goal: Understand what word embeddings are and why they are useful.)
      • Micro-Topic 16.3: Learning Word Vectors – Word2Vec and Friends (Goal: Understand at a high level how word embeddings are learned from large corpora.)
      • Micro-Topic 16.4: Using Embeddings for Text Classification (Goal: Learn how word vectors are utilized in a simple text categorization model; see the sketch after this list.)
      • Micro-Topic 16.5: Example – Categorizing Movie Reviews by Sentiment (Goal: Provide a concrete example of text categorization using embeddings, tying everything together.)
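
      A minimal sketch of the idea behind Micro-Topic 16.4, with made-up three-number embeddings standing in for real learned vectors:

      ```python
      # Classifying text with word vectors: average the embeddings of a
      # review's words, then let a simple rule read the average.
      import numpy as np

      # Hypothetical tiny embeddings; real ones (e.g., Word2Vec) have 100+ numbers.
      emb = {"great": [0.9, 0.1, 0.2], "loved": [0.8, 0.2, 0.1],
             "boring": [-0.7, 0.1, 0.3], "awful": [-0.9, 0.2, 0.2]}

      def review_vector(words):
          return np.mean([emb[w] for w in words], axis=0)

      v = review_vector(["great", "loved"])
      print("positive" if v[0] > 0 else "negative")   # first number encodes sentiment here
      ```
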
    • 17.1 Why Is Generating Language Hard? (Goal: Understand why getting computers to generate human-like text is more challenging than simply reading or classifying text.)

      • 17.2 Recurrent Neural Networks (RNNs) – Memory in Sequences (Goal: Learn how RNNs enable deep networks to generate sequences by keeping a “memory” of previous inputs.)
      • 17.3 Translating Languages with Deep Networks (Goal: Discover how deep learning approaches (sequence-to-sequence models) revolutionized machine translation, achieving near human-level translation quality.)
      • 17.4 AI Storytelling – Generating Text Creatively (Goal: See how AI can generate creative text by predicting one piece at a time, illustrated by a game of progressive story expansion.)
      • 17.5 The Rise of Powerful Language Models (Goal: Introduce modern advanced language models (like Transformer-based models) that can generate remarkably human-like text, and discuss their impact.)
    • 18.1 Generator vs. Discriminator – Two Parts of Creative AI (Goal: Grasp the two-stage process in AI creativity: a generator proposes outputs and a discriminator judges them.)

      • 18.2 Approach 1 – Target Image as the Judge (Goal: Learn the first approach to image generation: using a specific target image as the “discriminator” to guide a generator.)
      • 18.3 Approach 2 – Learned Concept as the Judge (Goal: Understand the second approach: using a trained classifier (general concept) as the discriminator, instead of one specific image.)
      • 18.4 Limits of Fixed Judges and a Hint of Adversarial Training (Goal: Recognize the limitations of the above approaches and foreshadow how training a generator and discriminator together (GANs) addresses these issues.)
    • 19.1 Generative Adversarial Networks – The Concept (Goal: Introduce GANs as a solution where a generator and discriminator train together in competition, leading to highly realistic outputs.)

      • 19.2 How GANs Learn – Training Dynamics (Goal: Understand the training process of GANs: how generator and discriminator improve through their adversarial game.)
      • 19.3 Cool Things GANs Can Do (Goal: Showcase applications of GANs – how they can be used for creative and practical tasks like image transformation, enhancement, and more.)
      • 19.4 The Dark Side of GANs – Deepfakes and Misuse (Goal: Discuss the ethical and security concerns related to GANs, such as deepfake images/videos and deception.)
    • 20.1 Early Efforts in Speech Recognition (Goal: Learn about the history and challenges of getting machines to understand spoken language, from the 1950s onward.)

      • 20.2 How Computers Recognize Speech – The Basics (Goal: Explain the speech recognition pipeline – converting audio signals into text using features and models.)
      • 20.3 How Deep Learning Changed the Game (Goal: See how deep neural networks improved speech recognition by learning features and patterns automatically, surpassing older methods.)
      • 20.4 Hands-On: Training a Simple Speech Recognizer (Goal: Illustrate via a simple example how one might train a deep learning model to recognize a few specific words.)
      • 20.5 The Future – Conversing with AI (Goal: Look ahead to conversational AI – what it takes beyond speech recognition to have a full dialogue, and why it’s challenging but forthcoming.)
  • Lesson Goal: Introduce inverse reinforcement learning (IRL) – how AI can learn what to optimize by observing human behavior – and why this is key for aligning AI with human goals.

    • Micro-topic 21.1: Understanding Reinforcement Learning Basics (Goal: Ensure students recall how standard reinforcement learning works and key terms like agent, reward, and policy)

      • Micro-topic 21.2: Why Do We Need Inverse Reinforcement Learning? (Goal: Illustrate why we might not want to hard-code reward functions and instead learn them from human behavior)
      • Micro-topic 21.3: How Inverse Reinforcement Learning Works (Goal: Describe the mechanics of IRL – what the algorithm takes in and produces, and the challenges in inferring the “true” reward)
      • Micro-topic 21.4: Examples and Challenges of Learning from Humans (Goal: Give a concrete example of IRL in action and discuss practical challenges like imperfect data or suboptimal experts)
      • Micro-topic 21.5: Real-World Uses of Inverse RL (Goal: Highlight practical applications of IRL and how learning from people can give you a competitive edge in the job war against AI – by aligning AI tools to human strategies and values)
  • Lesson Goal: Explain the difference between correlation and causation, why traditional ML struggles with causal relationships, and introduce tools of causal inference (experiments and causal models) that are increasingly being integrated with machine learning to create AI that truly understands cause and effect.

    • Micro-topic 22.1: Correlation vs. Causation – Why the Difference Matters (Goal: Ensure students grasp that “correlation does not imply causation,” using relatable examples to show why ML models that only learn correlations can be fooled)

      • Micro-topic 22.2: The Basics of Causal Inference (Goal: Introduce what causal inference means – determining cause-effect relationships – and basic concepts like randomized experiments and causal graphs in simple terms)
      • Micro-topic 22.3: Integrating Causal Thinking into Machine Learning (Goal: Show how causal inference ideas are being used to address ML’s limitations – e.g. improving generalization, fairness, and interpretability by focusing on cause-effect rather than pure correlation)
      • Micro-topic 22.4: Causal Inference in Action – Examples (Goal: Provide tangible examples where causal inference improved ML outcomes or insights, reinforcing the lesson with stories or case studies)
      • Micro-topic 22.5: Looking Ahead – Causal AI’s Role in Your Future (Goal: Conclude the lesson by connecting causal inference to the “win against AI” theme, inspiring students to appreciate that mastering causality gives them a leg up in developing and working with future AI)
  • Lesson Goal: Explore the counterintuitive phenomenon in modern ML where using massively more parameters than data (over-parameterization) can actually improve performance. Students will learn what over-parameterization means, why traditionally it was feared due to overfitting, and how deep learning defied expectations through phenomena like double descent, leading to new understanding of generalization.

    • Micro-topic 23.1: Overfitting Refresher – Why Too Many Parameters Can Be Problematic (Goal: Make sure students understand the classical view: more parameters → higher risk of overfitting, and what overfitting entails)

      • Micro-topic 23.2: Deep Learning Revolution – Big Models, Big Data (Goal: Explain how circa 2012-2015, deep learning started using extremely large networks on large datasets, challenging the conventional wisdom on overfitting)
      • Micro-topic 23.3: The “Double Descent” Phenomenon (Goal: Introduce “double descent” – the discovery that beyond the classical overfitting point, increasing model size can lead to improved test performance again, and discuss why this happens)
      • Micro-topic 23.4: Why Over-Parameterization Can Work – Intuitions and Implications (Goal: Give intuitive reasons and summarize implications of over-parameterization success: e.g., implicit regularization, easier optimization, and how this shapes strategy in building ML solutions)
      • Micro-topic 23.5: Embracing Over-Parameterization – Your Black Belt Advantage (Goal: Tie the concept back to the course theme: how understanding when and why over-parameterization works allows our students to outperform or effectively utilize AI, rather than fear complex models)
  • Lesson Goal: Examine the privacy challenges posed by machine learning (using personal data, risk of leaking sensitive info) and explore key techniques to protect privacy, such as data anonymization pitfalls, differential privacy, and federated learning, empowering students to build and use AI systems that respect user privacy.

    • Micro-topic 24.1: Why Privacy Matters in Machine Learning (Goal: Impress on students the importance of privacy – ethical, legal, and trust reasons – and how ML can intrude on privacy if unchecked)

      • Micro-topic 24.2: Data Anonymization and Its Limitations (Goal: Explain what anonymization is (e.g., removing personally identifiable info) and why it often fails to truly protect privacy in the age of big data)
      • Micro-topic 24.3: Introduction to Differential Privacy (Goal: Present differential privacy (DP) in an accessible way – the idea of adding noise to results to guarantee individuals’ data doesn’t significantly affect outputs, thus protecting their privacy while still allowing aggregate analysis; a small numeric sketch follows this list)
      • Micro-topic 24.4: Federated Learning – Keeping Data on Device (Goal: Explain federated learning (FL) concept: models are trained across many devices without raw data leaving devices, thus improving privacy by design)
      • Micro-topic 24.5: Best Practices and Future of Privacy-Preserving ML (Goal: Summarize actionable practices to protect privacy (combining methods above, data minimization, encryption), and inspire awareness that privacy-preserving ML is an evolving field where our students can contribute or at least be vigilant)
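
      A minimal numeric sketch of the differential-privacy idea from Micro-topic 24.3 (the count, epsilon, and setting are all made up):

      ```python
      # Differential privacy sketch: answer "how many students passed?" with
      # Laplace noise so no single student's record visibly changes the answer.
      import numpy as np

      true_count = 83                  # hypothetical exact count
      epsilon = 0.5                    # privacy budget: smaller = more private, noisier
      sensitivity = 1                  # one person can change a count by at most 1

      noisy = true_count + np.random.laplace(loc=0, scale=sensitivity / epsilon)
      print(round(noisy, 1))           # useful in aggregate, fuzzy about individuals
      ```
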
  • Lesson Goal: Summarize and reinforce the end-to-end process of solving problems with machine learning – from problem definition and data gathering to model deployment and maintenance – giving students a roadmap to follow for their own projects and an appreciation of how all the pieces (from previous lessons) fit into a coherent workflow. This ties together technical skills with project management and strategy, essential for “mastery.”

    • Micro-topic 25.1: Defining the Problem and Success Criteria (Goal: Emphasize that a clear understanding of the problem and what success looks like is the crucial first step in any ML project; a compact code sketch of the whole workflow follows this lesson's topics)

      • Micro-topic 25.2: Data Collection and Preparation – The “80% of the Work” (Goal: Convey that gathering the right data and preparing it (cleaning, labeling, splitting, features) is often the most time-consuming but critical part of ML; teach best practices in data prep)
      • Micro-topic 25.3: Model Selection and Training – Picking the Right Tool for the Job (Goal: Discuss how to choose a suitable model (simple vs complex, interpretable vs black box, etc.), and cover training best practices like using validation sets, avoiding overfitting, and iterating)
      • Micro-topic 25.4: Model Evaluation and Iteration – Refining Your Solution (Goal: Cover how to properly evaluate a model (beyond just overall accuracy; considering confusion matrix, precision/recall, etc. where appropriate), testing on real conditions, and iterating by analyzing errors. Emphasize continuous improvement.)
      • Micro-topic 25.5: Deployment and Maintenance – From Model to Production (Goal: Discuss what happens after modeling: deploying the model into a real environment, considerations like speed, scalability, and the need for ongoing monitoring and updating. Emphasize ML is not done until it’s delivering value in the real world.)
      • Micro-topic 25.6: The Big Picture – Continuous Learning and Improvement (Goal: Conclude the course by putting the process in perspective: encourage a mindset of continuous learning – both the model continuously learning from new data and the practitioner learning from each project; tie back to “winning against AI” theme by stressing that mastering the process is an ongoing journey where adaptability is key)
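
      To close the loop, a compact, hedged sketch of the whole workflow on scikit-learn’s built-in iris dataset (the model and preprocessing choices are illustrative, not prescriptive):

      ```python
      # The whole workflow in miniature: get data, hold out a test set,
      # prepare features, train, evaluate honestly, and only then trust the model.
      from sklearn.datasets import load_iris
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import classification_report

      X, y = load_iris(return_X_y=True)                      # 1. data gathering
      X_tr, X_te, y_tr, y_te = train_test_split(             # 2. hold out a test set
          X, y, test_size=0.25, random_state=0)

      model = make_pipeline(StandardScaler(),                # 3. preparation + model choice
                            LogisticRegression(max_iter=1000))
      model.fit(X_tr, y_tr)                                  # 4. training

      print(classification_report(y_te, model.predict(X_te)))  # 5. honest evaluation
      # 6. deployment and monitoring happen after this, and never really end.
      ```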