Jeremy Cohen

I'm a PhD student in the Machine Learning Department at CMU, co-advised by Zico Kolter and Ameet Talwalkar. My main research interest is turning the practice of deep learning into a mature engineering discipline.

Google Scholar

Email spam is a big problem these days, so, like other academics, I obfuscate my email address.

My email address is jeremycohen@cmu.edu

Conference publications:

  1. Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability.
    Jeremy Cohen, Simran Kaur, Yuanzhi Li, Zico Kolter, and Ameet Talwalkar. In ICLR 2021.

    We empirically demonstrate that full-batch gradient descent on neural network training objectives typically operates in a regime we call the Edge of Stability. In this regime, the leading eigenvalue of the training loss Hessian (the "sharpness") hovers just above the value 2 / (step size), and the training loss behaves non-monotonically over short timescales, yet consistently decreases over long timescales. Since this behavior is inconsistent with several widespread presumptions in the field of optimization, our findings raise questions as to whether these presumptions are relevant to neural network training. We hope that our findings will inspire future efforts aimed at rigorously understanding optimization at the Edge of Stability. (A sketch of how this sharpness can be tracked during training appears after the publication list.)


  2. Certified Adversarial Robustness via Randomized Smoothing.
    Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. In ICML 2019.
    arXiv, Code, short ICML talk, Zico's Simons Talk

    Extending recent work, we show how to turn any classifier that classifies well under Gaussian noise into a new classifier that is provably robust to perturbations in L2 norm. This method is the only provable adversarial defense that scales to ImageNet. It also outperforms all other provable L2 adversarial defenses on CIFAR-10 by a wide margin. Best of all, the method is extremely simple to implement and to understand. (A sketch of the prediction-and-certification procedure appears after this list.)
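
As referenced in the first abstract, here is a minimal sketch of how the "sharpness" (the leading eigenvalue of the training-loss Hessian) can be tracked during full-batch gradient descent and compared against 2 / (step size). This is an illustration, not the paper's code: the model, data, and hyperparameters are arbitrary placeholders, and the eigenvalue is estimated by power iteration on Hessian-vector products.

```python
import torch

def top_hessian_eigenvalue(loss, params, iters=20):
    """Estimate the leading eigenvalue of the Hessian of `loss` w.r.t.
    `params` by power iteration on Hessian-vector products."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    v = torch.randn_like(flat_grad)
    v = v / v.norm()
    eig = 0.0
    for _ in range(iters):
        # Hessian-vector product: differentiate (grad . v) w.r.t. the parameters.
        hv = torch.autograd.grad(flat_grad @ v, params, retain_graph=True)
        hv = torch.cat([h.reshape(-1) for h in hv])
        eig = torch.dot(v, hv).item()  # Rayleigh quotient estimate
        v = hv / hv.norm()
    return eig

# Toy setup (placeholders, not the paper's experiments): a small MLP,
# random regression data, and plain full-batch gradient descent.
torch.manual_seed(0)
X, y = torch.randn(256, 10), torch.randn(256, 1)
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1))
loss_fn = torch.nn.MSELoss()
lr = 0.05

for step in range(500):
    loss = loss_fn(model(X), y)  # full-batch loss
    if step % 50 == 0:
        sharpness = top_hessian_eigenvalue(loss, list(model.parameters()))
        print(f"step {step:3d}: loss {loss.item():.4f}  "
              f"sharpness {sharpness:6.2f}  2/lr = {2 / lr:.2f}")
    grads = torch.autograd.grad(loss, model.parameters())
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p -= lr * g
```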
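
And here is a minimal sketch of the randomized smoothing procedure from the second paper, following its two-stage structure: guess the top class from a small noisy sample, then lower-bound that class's probability under noise with a larger sample, abstaining if the bound is weak. The official implementation is linked above as "Code"; `base_classifier`, the sample sizes, and the confidence level below are illustrative placeholders.

```python
import numpy as np
from scipy.stats import norm
from statsmodels.stats.proportion import proportion_confint

def certify(base_classifier, x, sigma, n0=100, n=1000, alpha=0.001,
            num_classes=10):
    """Return (predicted class, certified L2 radius), or (None, 0.0) to abstain."""
    def sample_counts(num_samples):
        # Count the base classifier's votes on Gaussian-noised copies of x.
        counts = np.zeros(num_classes, dtype=int)
        for _ in range(num_samples):
            counts[base_classifier(x + sigma * np.random.randn(*x.shape))] += 1
        return counts

    # Stage 1: guess the top class from a small sample.
    c_hat = int(np.argmax(sample_counts(n0)))
    # Stage 2: lower-bound its probability with a larger, independent sample
    # (one-sided Clopper-Pearson interval at confidence level 1 - alpha).
    count = int(sample_counts(n)[c_hat])
    p_lower = proportion_confint(count, n, alpha=2 * alpha, method="beta")[0]
    if p_lower <= 0.5:
        return None, 0.0  # abstain: the top class isn't confidently dominant
    # Certified radius from the paper's theorem: R = sigma * Phi^{-1}(p_lower).
    return c_hat, sigma * norm.ppf(p_lower)

# Toy usage with a placeholder "classifier" that thresholds one coordinate.
f = lambda z: int(z[0] > 0)
x = np.array([1.5, -0.3])
print(certify(f, x, sigma=0.5, num_classes=2))
```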