Amanda Bertsch


I am a PhD student in the Language Technologies Institute at Carnegie Mellon University, advised by Matt Gormley and Graham Neubig. I am a member of NeuLab and an organizer for Queer in AI. I’m fortunate to be funded by an NSF Graduate Research Fellowship and to be a part-time student researcher at Meta GenAI this fall.

I work primarily on conditional generation, particularly long-context modeling and inference-time algorithms; my broader research interests include developing better ways to reason over large quantities of knowledge, modeling large-scale structure in text, and effectively integrating external knowledge into models. Currently, I’m excited about rethinking positional embeddings, evaluation for realistic long-context settings, and understanding how community divergence affects whose work we engage with. I’m also broadly interested in meta-analysis of the NLP community, including critically examining the benchmarks, datasets, and modeling choices we take as defaults.

I’m trying to get to know my academic neighbors! If we work on similar things (or very different things that might be connected in interesting ways), I’d love to chat; please email me :) I’m also looking for internships for Summer 2025.

Before coming to CMU, I received my bachelor’s in math and computer science from the University of Arizona, where I was advised by Steven Bethard. Before coming to NLP, I worked in soil microbiology, built large-scale Rube Goldberg machines, and occasionally published short fiction. In my spare time, I write and read speculative fiction, hike, run, and play tabletop games.

news

May 15, 2024 I’m interning this summer with Mike Lewis at Meta GenAI! Excited to spend the summer thinking about long context & hiking in Seattle :)
Oct 24, 2023 Excited to announce some new work going to EMNLP: a qualitative study of the NLP community (main); a system for distilling a model from a single textual instruction (demo); and an analysis paper about Minimum Bayes Risk decoding (Big Picture workshop)! Looking forward to seeing folks in Singapore.
Jun 06, 2023 Check out our recent preprints: Unlimiformer, a long-range transformer, and a survey on human feedback for generation! (Update, September 2023: Unlimiformer was accepted to NeurIPS, and the survey was accepted to TACL!)

selected publications

  1. Preprint
    In-context learning with long-context models: An in-depth exploration
    Amanda Bertsch, Maor Ivgi, Uri Alon, Jonathan Berant, Matthew R. Gormley, and Graham Neubig
    Under submission, 2024
  2. Preprint
    From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models
    Sean Welleck, Amanda Bertsch, Matthew Finlayson, Hailey Schoelkopf, Alex Xie, Graham Neubig, Ilia Kulikov, and Zaid Harchaoui
    Under submission, 2024
  3. EMNLP
    To Build Our Future, We Must Know Our Past: Contextualizing Paradigm Shifts in Natural Language Processing
    Sireesh Gururaja, Amanda Bertsch, Clara Na, David Gray Widder, and Emma Strubell
    In Empirical Methods in Natural Language Processing, 2023
  4. NeurIPS
    Unlimiformer: Long-Range Transformers with Unlimited Length Input
    Amanda Bertsch, Uri Alon, Graham Neubig, and Matthew R. Gormley
    In Conference on Neural Information Processing Systems, 2023