Set Cover in Random Order (or - what makes the SC adversary strong?)
Speaker: Max Hopkins
Title: Set Cover in Random Order (or - what makes the SC adversary strong?)
Date: 15 Apr 2022 12:00-13:00 ET
Location: SEC Level 3 NW Terrace
Food: Thai
Abstract: The equivalence of realizable and agnostic learning in Valiant's Probably Approximately Correct (PAC) learning model is one of the most classical results in learning theory, dating all the way back to Vapnik and Chervonenkis' original 1974 book on the theory of pattern recognition, and is perhaps among its most surprising. Roughly speaking, the equivalence states that given a set $X$ and a family of binary classifiers $H$, the ability to learn a classifier $h \in H$ from labeled examples of the form $(x, h(x))$ is in fact sufficient for something much stronger: given samples from any distribution $D$ over $X \times \{0, 1\}$, it is possible to learn the best approximation to $D$ in $H$.
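For concreteness, here is one standard way to formalize the two notions (standard textbook notation, not taken verbatim from the talk). A learner $A$ realizably PAC-learns $H$ if for every $h \in H$, every distribution $D_X$ over $X$, and every $\varepsilon, \delta > 0$, given $m(\varepsilon, \delta)$ samples $(x, h(x))$ with $x \sim D_X$, it outputs $\hat{h}$ satisfying
\[
\Pr_{x \sim D_X}\left[\hat{h}(x) \ne h(x)\right] \le \varepsilon
\]
with probability at least $1 - \delta$. It agnostically PAC-learns $H$ if for every distribution $D$ over $X \times \{0, 1\}$, given $m(\varepsilon, \delta)$ samples from $D$, it outputs $\hat{h}$ satisfying
\[
\mathrm{err}_D(\hat{h}) \le \min_{h \in H} \mathrm{err}_D(h) + \varepsilon,
\qquad \text{where } \mathrm{err}_D(f) = \Pr_{(x, y) \sim D}\left[f(x) \ne y\right],
\]
with probability at least $1 - \delta$.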
Traditionally, the proof of this fact is complicated and brittle, relying on an equivalence with a third, stronger property called uniform convergence (or, equivalently, on a combinatorial parameter called VC dimension). In this talk, we will review the basic definitions of realizable and agnostic PAC learning, and then give an elementary proof of their equivalence via a direct blackbox reduction, completely avoiding reliance on external properties like uniform convergence.
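To make "blackbox reduction" concrete, below is a minimal Python sketch of one natural reduction of this flavor for a finite hypothesis class: relabel an unlabeled sample with each candidate hypothesis, hand every relabeled (and hence realizable) sample to the realizable learner, then validate the returned candidates on fresh data. The toy threshold class, the helper names, and the sampling routine are all illustrative assumptions; the talk's actual construction may differ.

import random

# Illustrative finite setup (an assumption, not from the talk):
# X = {0,...,9}, H = threshold classifiers h_t(x) = 1[x >= t].
DOMAIN = list(range(10))
HYPOTHESES = [lambda x, t=t: int(x >= t) for t in range(11)]

def realizable_learner(labeled_sample):
    # A trivial realizable learner: return any hypothesis consistent
    # with the sample (ERM, which succeeds whenever the sample is
    # realizable by some h in H).
    for h in HYPOTHESES:
        if all(h(x) == y for x, y in labeled_sample):
            return h
    return None

def empirical_error(h, labeled_sample):
    return sum(h(x) != y for x, y in labeled_sample) / len(labeled_sample)

def agnostic_via_realizable(draw_example, m=200):
    # Step 1: draw an unlabeled sample, discarding the (possibly noisy) labels.
    xs = [draw_example()[0] for _ in range(m)]
    # Step 2: relabel the sample by each h in H; each relabeling is
    # realizable by construction, so the blackbox realizable learner applies.
    candidates = []
    for h in HYPOTHESES:
        out = realizable_learner([(x, h(x)) for x in xs])
        if out is not None:
            candidates.append(out)
    # Step 3: select the candidate with smallest empirical error on fresh data.
    validation = [draw_example() for _ in range(m)]
    return min(candidates, key=lambda c: empirical_error(c, validation))

# Example distribution D over X x {0,1}: the best fit in H is x >= 5,
# but 10% of labels are flipped, so the problem is not realizable.
def draw_example():
    x = random.choice(DOMAIN)
    y = int(x >= 5)
    if random.random() < 0.1:
        y = 1 - y
    return (x, y)

best = agnostic_via_realizable(draw_example)
print([best(x) for x in DOMAIN])  # typically recovers the best threshold, x >= 5

Note that for a finite class, enumerating relabelings of the sample amounts to enumerating hypotheses; the point is that the reduction only ever calls the realizable learner as a blackbox and compares its outputs on held-out data.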