We will follow a schedule similar to last year’s; you can refer to last year’s scribe and handwritten notes if you would like to review or read ahead.
Date | Topics | Notes | Additional readings | Homework |
---|---|---|---|---|
Jan 13 | Introduction, motivation, course mechanics | slides | SSBD App B.1 and B.2 | HW1 (calibration) |
Jan 18 | The supervised learning pipeline; The PAC learning framework | Handwritten note 1 | SSBD Chap 2, Sec 3.1 | |
Jan 20 | The consistency algorithm and its analysis; Agnostic PAC learning | Handwritten note 2 | SSBD Chap 2, Sec 3.1 | |
Jan 25 | Agnostic PAC learning; ERM analysis; begin Hoeffding’s Inequality | Handwritten note 3, Scribe note by Renee Zhang | SSBD Chap 4, Sec B.4 | |
Jan 27 | Finish Hoeffding’s Inequality; Bernstein’s Inequality; ERM analysis with finite hypothesis class | Handwritten note 4, Scribe note by Yanan Wang | SSBD Chap 4, Sec B.5 | |
Feb 1 | Finish ERM analysis with finite hypothesis classes; PAC learning with infinite hypothesis classes; learning rectangles | Scribe note by Ryan Murphy | SSBD Sec 6.1, Exercise 2.3 | |
Feb 3 | Begin VC theory: definitions and examples | Handwritten note 5 | SSBD Sec 6.2, 6.3 | HW2 |
Feb 8 | VC dimension of homogeneous linear classifiers; Sauer’s Lemma | Handwritten note 6 | SSBD Sec 6.5 | |
Feb 10 | Finish Sauer’s Lemma; VC dimension of composite hypothesis classes; back to uniform convergence | Scribe note by Jing Wu | SSBD Sec 6.6 | |
Feb 15 | Application: Glivenko–Cantelli Theorem; Proof of uniform convergence for VC classes; Rademacher complexity; Massart’s Lemma | Handwritten note 7, Scribe note by Yuanyuan Sun | SSBD Sec 6.5.2, 26.1, 28.1 | |
Feb 17 | Proof of uniform convergence for VC classes cont’d: McDiarmid’s Inequality, Symmetrization Lemma | Handwritten note 8, Scribe note by Minhang Zhou | SSBD Sec 6.5.2, 26.1, 28.1 | |
Feb 22 | Finish symmetrization lemma; Fundamental Theorem of Statistical Learning; Lower bounds for PAC learning | Handwritten note 9 | SSBD Sec 5.1 | |
Feb 24 | Finish lower bounds for PAC learning; Algorithm / Model selection in supervised learning | Handwritten note 10 | SSBD Chap 7 | HW3 |
Mar 1 | Bias-Complexity Tradeoff; Model selection in supervised learning: validation and SRM | | SSBD Chap 7 | |
Mar 3 | Finish SRM analysis; AdaBoost and its training error analysis | slides, Rob Schapire’s boosting tutorial, Handwritten note 11 | SSBD Chap 10 | |
Mar 8 | Spring Recess | | | |
Mar 10 | Spring Recess | | | |
Mar 15 | Finish boosting; Statistical learning with general losses: Rademacher complexity analysis | slides | SSBD 26.1, 26.2, 26.4 | |
Mar 17 | Finish the proof of contraction inequality; Begin margin-based generalization error bounds | slides, Handwritten note 12, Handwritten note 13 | SSBD 26.3, 15.1 | |
Mar 22 | Finish margin bound; Regularized loss minimization formulations | Handwritten note 14 | | |
Mar 24 | Johnson-Lindenstrauss lemma and margins; Stability and generalization | slides, Handwritten note 15 | SSBD Chap 13, J-L Notes by Sham Kakade and Greg Shakhnarovich | |
Mar 29 | Finish stability; Online learning: definitions and examples | Handwritten note 16 | O Chap 1, Haipeng Luo’s online learning lecture notes 1 | |
Mar 31 | Online to batch conversion; Azuma’s inequality | Handwritten note 17 | O Chap 3 | |
Apr 5 | Finish Azuma’s inequality; Online gradient descent; subgradients | | O Chap 2 | |
Apr 7 | Online gradient descent and its regret analysis | Handwritten note 18 | O Chap 2 | HW4 |
Apr 12 | Online mirror descent; Fenchel conjugate | Handwritten note 19 | O Thm 2.19, 5.2.1, 6.1-6.3 | |
Apr 14 | Online mirror descent examples: p-norm, exponentiated gradient; regret analysis | Handwritten note 20 | O 6.4.1, 6.6, 6.7, 6.4 | |
Apr 19 | Weather forecasting (expert problem) revisited; unknown horizon; lower bounds; Efficient empirical loss minimization | Handwritten note 21 | O 6.8, 5.1, SSBD 14.5.1 | |
Apr 21 | Efficient empirical loss minimization cont’d; OCO with strongly-convex functions; Begin multi-armed bandits | Handwritten note 22 | SSBD 14.4.4, 14.5.3, AJKS Chap 6.1, LS Chap 7 | |
Apr 26 | Multi-armed bandits and the UCB algorithm | Handwritten note 23 | AJKS Chap 6.1, LS Chap 7 | |
Apr 28 | Episodic RL; the optimism approach to RL | Handwritten note 24 | AJKS Chap 7.1-7.3 | |
May 3 | Project presentations: Jing Wu, An empirical evaluation on stochastic optimization algorithms - SGD and SDCA; Minhang Zhou, Pure exploration in multi-armed bandit problems; Ryan Murphy, Model-free online learning in episodic MDPs; Yanan Wang, Fairness accuracy analysis in fair machine learning; Yuanyuan Sun, Compressed sensing with RIP | | | |