Tuesday, Feb 18, 2020. 12:00 PM. NSH 3305
Ben Lengerich -- Interaction Effects: Helpful or Hurtful?
Abstract: The large representational capacity of deep learning models is often viewed as a positive attribute that allows us to learn interactions among many input variables. However, large model classes can also present challenges for estimation. In this talk, we take special interest in learning interaction effects. First, we define interaction effects through the statistical framework of the functional ANOVA. Treating this definition carefully leads to several surprising findings about the nature of interaction effects (e.g., all interaction effects look like XOR). Next, we find that traditional machine learning models (such as tree-based models) gain almost all of their predictive power from low-order interaction effects. Turning to deep models, we find that fully-connected networks tend to estimate a large number of spurious interaction effects. Finally, we present a view of Dropout as a regularizer against interaction effects.
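Background sketch (not drawn from the talk itself): the functional ANOVA referenced above decomposes a prediction function into a constant, main effects, pairwise interactions, and so on,

    f(x_1, \ldots, x_p) = f_0 + \sum_i f_i(x_i) + \sum_{i<j} f_{ij}(x_i, x_j) + \cdots,

where each higher-order term is constrained to be orthogonal to the lower-order terms, so a pure pairwise interaction f_{ij} carries no main-effect component. This convention is what underlies the remark that interaction effects "look like XOR": XOR is the canonical binary function with nonzero pairwise interaction and zero main effects.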
Bio: Ben Lengerich is a Ph.D. student in the CS Department at Carnegie Mellon University, advised by Prof. Eric Xing.