Section outline

  • Lesson Goal: Introduce Bayesian reasoning in machine learning, focusing on the Naive Bayes classifier for probabilistic classification tasks such as spam detection. Students will learn Bayes’ theorem conceptually, see how Naive Bayes makes simplifying independence assumptions, and understand how it uses evidence (features) to update probability beliefs. The spam filtering example is used to make this concrete. The lesson emphasizes the “effects to causes” thinking (reasoning from observed evidence back to its likely cause) that defines Bayesian models.

    • Micro-Topic 6.1: Basics of Bayes’ Theorem

      Goal: Explain Bayes’ theorem in simple terms – how we update probabilities when given new evidence. Provide a straightforward example (not necessarily spam yet; perhaps a medical-test example or something equally intuitive) to illustrate prior, likelihood, and posterior.
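A minimal sketch of the kind of worked example this micro-topic calls for, using a hypothetical medical test. All numbers (1% prevalence, 95% sensitivity, 10% false-positive rate) are invented for illustration:

```python
# Bayes' theorem: P(cause | evidence) =
#   P(evidence | cause) * P(cause) / P(evidence)
# Illustrative numbers only -- not real medical statistics.

def posterior(prior, likelihood, likelihood_if_not):
    """Update a prior belief given one piece of evidence."""
    # P(evidence) via the law of total probability
    evidence = likelihood * prior + likelihood_if_not * (1 - prior)
    return likelihood * prior / evidence

prior = 0.01           # P(disease): 1% of people have the disease
sensitivity = 0.95     # P(positive test | disease)
false_positive = 0.10  # P(positive test | no disease)

print(round(posterior(prior, sensitivity, false_positive), 3))  # 0.088
```

The counterintuitive punchline (a positive test still means under a 9% chance of disease) is exactly the prior/likelihood/posterior intuition this micro-topic aims for.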

    • Micro-Topic 6.2: Naive Bayes Classifier – How It Works

      Goal: Explain the Naive Bayes algorithm: how it uses Bayes’ theorem to classify data by computing the probability of each class given the features, assuming features are independent given the class (the “naive” assumption). Introduce the ideas of class prior probabilities and feature likelihoods.
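The decision rule this micro-topic describes can be sketched in a few lines. The priors and per-word likelihoods below are made up for illustration; in practice they would be estimated from training data:

```python
import math

# Assumed, hand-picked numbers: P(class) and P(word | class).
priors = {"spam": 0.4, "ham": 0.6}
likelihoods = {
    "spam": {"offer": 0.7, "meeting": 0.1},
    "ham":  {"offer": 0.2, "meeting": 0.6},
}

def class_scores(features):
    """log P(class) + sum of log P(feature | class) for each class.

    The sum over individual features is valid only under the 'naive'
    assumption that features are independent given the class; logs
    avoid numerical underflow when there are many features.
    """
    scores = {}
    for c in priors:
        s = math.log(priors[c])
        for f in features:
            s += math.log(likelihoods[c][f])
        scores[c] = s
    return scores

scores = class_scores(["offer"])
print(max(scores, key=scores.get))  # prints "spam"
```

The predicted class is simply the one with the highest posterior score.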

    • Micro-Topic 6.3: Example – Spam Filtering with Naive Bayes

      Goal: Walk through how a Naive Bayes classifier filters spam emails by looking at words. Use a specific example of an email and show how the presence of certain words influences the probabilities for spam vs not-spam.
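A toy end-to-end version of the walkthrough this micro-topic plans: training word counts from a tiny invented "corpus" (four made-up emails), then scoring a new email. Laplace smoothing is included because the zero-count problem comes up immediately in any real spam example:

```python
import math
from collections import Counter

# Invented four-email corpus, purely for illustration.
spam_docs = ["win money now", "free money offer"]
ham_docs = ["meeting at noon", "project status update"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_counts = word_counts(spam_docs)
ham_counts = word_counts(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def word_prob(word, counts):
    # Laplace (add-one) smoothing: a word unseen in one class
    # never zeroes out that class's entire probability.
    return (counts[word] + 1) / (sum(counts.values()) + len(vocab))

def is_spam(email):
    prior_spam = len(spam_docs) / (len(spam_docs) + len(ham_docs))
    log_spam = math.log(prior_spam)
    log_ham = math.log(1 - prior_spam)
    for word in email.split():
        log_spam += math.log(word_prob(word, spam_counts))
        log_ham += math.log(word_prob(word, ham_counts))
    return log_spam > log_ham

print(is_spam("free money"))     # True
print(is_spam("status update"))  # False
```

Students can trace by hand how each word's count tilts the two running log-scores, which is the "presence of certain words influences the probabilities" point of this micro-topic.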

    • Micro-Topic 6.4: Backward Reasoning – Effects to Causes

      Goal: Emphasize the Bayesian mindset of going from evidence to cause, perhaps contrasting it with forward reasoning. This ties back to the description of going “backwards from effects to causes” in the lesson summary. Possibly give another example or highlight how this appears in ML (such as diagnosing why a model produced a given output by examining the evidence).
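One way to make the "effects to causes" direction concrete: given a single observed effect, rank several candidate causes by their posterior probability. The candidate causes and all probabilities below are invented for illustration:

```python
# Backward reasoning: observe an effect (a model's wrong prediction),
# infer the most probable cause. Hypothetical causes and numbers.
causes = {
    # cause: (P(cause), P(wrong prediction | cause))
    "bad_features": (0.5, 0.3),
    "overfitting":  (0.3, 0.6),
    "data_leak":    (0.2, 0.1),
}

def posteriors(causes):
    # Bayes' theorem over a set of candidate causes: normalize the
    # joint probabilities P(cause) * P(effect | cause).
    joint = {c: prior * lik for c, (prior, lik) in causes.items()}
    total = sum(joint.values())
    return {c: p / total for c, p in joint.items()}

post = posteriors(causes)
print(max(post, key=post.get))  # prints "overfitting"
```

Forward reasoning asks "given this cause, what effect do I expect?"; the snippet runs the arrow the other way, which is the Bayesian habit this micro-topic highlights.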

    • Micro-Topic 6.5: Strengths and Weaknesses of Naive Bayes

      Goal: Summarize where Naive Bayes works well and where it might fail, reinforcing the understanding of the assumptions. This prepares students for knowing when to use Bayesian methods.
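A short demonstration of the main weakness worth showing students: when two features are perfectly correlated (here, literal duplicates), the independence assumption double-counts their evidence and inflates the classifier's confidence. All probabilities are invented for illustration:

```python
# Failure mode sketch: duplicated (perfectly correlated) evidence.
# Hypothetical numbers only.
prior = {"spam": 0.5, "ham": 0.5}
p_word = {"spam": 0.8, "ham": 0.2}  # P("offer" | class)

def posterior_spam(n_copies):
    # Naive Bayes treats n identical copies of the same word as
    # n independent features, multiplying the likelihood n times.
    s = prior["spam"] * p_word["spam"] ** n_copies
    h = prior["ham"] * p_word["ham"] ** n_copies
    return s / (s + h)

print(round(posterior_spam(1), 3))  # 0.8   -- one copy of the evidence
print(round(posterior_spam(2), 3))  # 0.941 -- duplicate inflates confidence
```

The information content didn't change between the two calls, only the bookkeeping did, which is why Naive Bayes tends to produce overconfident probabilities on correlated features even when its class rankings remain useful.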