Section outline

  • Lesson Goal: Examine the privacy challenges posed by machine learning (its use of personal data and the risk of leaking sensitive information), and explore key privacy-protection ideas: the pitfalls of data anonymization, differential privacy, and federated learning, empowering students to build and use AI systems that respect user privacy.

      • Micro-topic 24.1: Why Privacy Matters in Machine Learning (Goal: Impress on students the importance of privacy – ethical, legal, and trust reasons – and how ML can intrude on privacy if unchecked)
      • Micro-topic 24.2: Data Anonymization and Its Limitations (Goal: Explain what anonymization is (e.g., removing personally identifiable information) and why it often fails to truly protect privacy in the age of big data; a toy linkage-attack sketch follows this outline)
      • Micro-topic 24.3: Introduction to Differential Privacy (Goal: Present differential privacy (DP) in an accessible way – the idea of adding calibrated noise to results so that no single individual's data significantly affects the output, protecting that person's privacy while still allowing aggregate analysis; see the Laplace-mechanism sketch after this outline)
      • Micro-topic 24.4: Federated Learning – Keeping Data on Device (Goal: Explain the federated learning (FL) concept: models are trained across many devices without raw data ever leaving those devices, improving privacy by design; see the federated-averaging sketch after this outline)
      • Micro-topic 24.5: Best Practices and Future of Privacy-Preserving ML (Goal: Summarize actionable practices for protecting privacy (combining the methods above, data minimization, encryption), and raise awareness that privacy-preserving ML is an evolving field where our students can contribute or, at the very least, stay vigilant)
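
For Micro-topic 24.2, a minimal Python sketch of a linkage (re-identification) attack: even with names removed, records can often be re-identified by joining quasi-identifiers such as ZIP code, birth date, and sex against a public auxiliary dataset. All records, names, and field values below are invented for illustration.

```python
# "Anonymized" medical records with direct identifiers (names) removed;
# every value here is made up for this example.
anonymized_records = [
    {"zip": "13053", "birth_date": "1965-07-21", "sex": "F", "diagnosis": "asthma"},
    {"zip": "13068", "birth_date": "1971-02-03", "sex": "M", "diagnosis": "diabetes"},
]

# A public auxiliary dataset (e.g., a voter list) that still carries names.
auxiliary_list = [
    {"name": "Alice Smith", "zip": "13053", "birth_date": "1965-07-21", "sex": "F"},
    {"name": "Bob Jones", "zip": "13068", "birth_date": "1971-02-03", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

def link(records, aux):
    """Re-identify 'anonymous' records by joining on quasi-identifiers."""
    matches = []
    for rec in records:
        for person in aux:
            if all(rec[k] == person[k] for k in QUASI_IDENTIFIERS):
                matches.append({"name": person["name"], **rec})
    return matches

for row in link(anonymized_records, auxiliary_list):
    print(row)  # each "anonymous" record is now tied back to a name
```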
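
For Micro-topic 24.3, a minimal sketch of the Laplace mechanism, one standard way to make a counting query differentially private: noise drawn from a Laplace distribution with scale 1/epsilon is added to the true count. The count, epsilon values, and function name are illustrative choices, not part of any particular library.

```python
import numpy as np

def laplace_count(true_count, epsilon, rng=None):
    """Release a counting-query result with Laplace noise.

    A count changes by at most 1 when one person's record is added or
    removed, so its L1 sensitivity is 1 and the noise scale is 1/epsilon.
    """
    rng = rng or np.random.default_rng()
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

true_count = 1234  # e.g., number of users with some attribute (made up)
for epsilon in (1.0, 0.1):  # smaller epsilon => more noise => stronger privacy
    print(f"epsilon={epsilon}: noisy count = {laplace_count(true_count, epsilon):.1f}")
```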
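
For Micro-topic 24.4, a toy federated-averaging loop in the spirit of FedAvg: each simulated client takes a local gradient step on its own data, and only the updated weights travel back to the server, which averages them. The linear-regression task, client data, and hyperparameters are invented for this sketch.

```python
import numpy as np

def local_update(weights, client_data, lr=0.1):
    """One local gradient step for a toy linear regression (X @ w ~ y).
    The raw (X, y) never leaves the client."""
    X, y = client_data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(global_weights, clients):
    """FedAvg-style round: clients train locally; the server averages the
    returned weights, weighted by each client's data size."""
    sizes = [len(y) for _, y in clients]
    updates = [local_update(global_weights, c) for c in clients]
    return np.average(updates, axis=0, weights=sizes)

# Simulate 5 devices, each holding its own private shard of data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=20)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(50):          # only model weights cross the network
    w = federated_average(w, clients)
print(w)                     # close to true_w, yet the server never saw raw data
```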