Amanda Bertsch


I am a PhD student in the Language Technologies Institute at Carnegie Mellon University, advised by Matt Gormley and Graham Neubig. I am a member of NeuLab and an organizer for Queer in AI. I’m fortunate to be funded by an NSF Graduate Research Fellowship.

I work primarily on conditional generation, particularly summarization. My research interests include finding better ways to reason over large quantities of knowledge, modeling large-scale structure in text, and effectively integrating external knowledge into models. Currently, I’m excited about modeling long-range dependencies and long or complex inputs. I’m also broadly interested in meta-analysis of the NLP community, including critically examining the benchmarks, datasets, and modeling choices we take as defaults.

I’m trying to get to know my academic neighbors! If we work on similar things (or very different things that might be connected in interesting ways), I’d love to chat; please email me :) I’m also looking for internships for Summer 2024.

Before coming to CMU, I received my bachelor’s in math and computer science from the University of Arizona, where I was advised by Steven Bethard. Before coming to NLP, I worked in soil microbiology, built large-scale Rube Goldberg machines, and occasionally published short fiction. In my spare time, I write and read speculative fiction, hike, and play tabletop games.

news

Oct 24, 2023 Excited to announce some new work going to EMNLP: a qualitative study of the NLP community (main); a system for distilling a model from a single textual instruction (demo); and an analysis paper on Minimum Bayes Risk decoding (Big Picture workshop)! Looking forward to seeing folks in Singapore.
Jun 6, 2023 Check out our recent preprints: Unlimiformer, a long-range transformer, and a survey on human feedback for generation! (Update, September 2023: Unlimiformer was accepted to NeurIPS, and the survey was accepted to TACL!)
Dec 7, 2022 I’ll be presenting our Findings paper on style transfer for dialogue summarization in the GEM poster session at EMNLP 2022!
Jul 15, 2022 I co-presented work on bias transfer from pretraining datasets at the Gender Bias in NLP workshop at NAACL 2022!
Nov 11, 2021 I presented my undergraduate thesis work on promotional content detection at the 2021 Workshop on Noisy User-generated Text!

selected publications

  1. EMNLP
    To Build Our Future, We Must Know Our Past: Contextualizing Paradigm Shifts in Natural Language Processing
Gururaja, Sireesh, Bertsch, Amanda, Na, Clara, Widder, David Gray, and Strubell, Emma
    In Empirical Methods in Natural Language Processing. 2023
  2. Big Picture
    It’s MBR All the Way Down: Modern Generation Techniques Through the Lens of Minimum Bayes Risk
    Bertsch, Amanda, Xie, Alex, Neubig, Graham, and Gormley, Matthew R.
    In Proceedings of the First Big Picture Workshop. 2023
  3. NeurIPS
    Unlimiformer: Long-Range Transformers with Unlimited Length Input
    Bertsch, Amanda, Alon, Uri, Neubig, Graham, and Gormley, Matthew R.
    In Conference on Neural Information Processing Systems. 2023
  4. TACL
    Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation
Fernandes, Patrick, Madaan, Aman, Liu, Emmy, Farinhas, António, Martins, Pedro Henrique, Bertsch, Amanda, Souza, José G. C., Zhou, Shuyan, Wu, Tongshuang, Neubig, Graham, and Martins, André F. T.
In Transactions of the Association for Computational Linguistics. 2023
  5. EMNLP Demo
    Prompt2Model: Generating Deployable Models from Natural Language Instructions
Viswanathan, Vijay, Zhao, Chenyang, Bertsch, Amanda, Wu, Tongshuang, and Neubig, Graham
    In Empirical Methods in Natural Language Processing: Demo Track. 2023