publications

publications by category in reverse chronological order. generated by jekyll-scholar.

2025

  1. Preprint
    Efficient Many-Shot In-Context Learning with Dynamic Block-Sparse Attention
    Emily Xiao, Chin-Jou Li, Yilin Zhang, Graham Neubig, and Amanda Bertsch
    Under submission, 2025
  2. Preprint
    Not-Just-Scaling Laws: Towards a Better Understanding of the Downstream Impact of Language Model Design Decisions
    Emmy Liu, Amanda Bertsch, Lintang Sutawika, Lindia Tjuatja, Patrick Fernandes, Lara Marinov, and 6 more authors
    Under submission, 2025
  3. ICLR
    Better Instruction-Following Through Minimum Bayes Risk
    Ian Wu, Patrick Fernandes, Amanda Bertsch, Seungone Kim, Sina Pakazad, and Graham Neubig
    In International Conference on Learning Representations (ICLR), 2025
  4. NAACL
    In-context learning with long-context models: An in-depth exploration
    Amanda Bertsch, Maor Ivgi, Uri Alon, Jonathan Berant, Matthew R Gormley, and Graham Neubig
    In 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics, 2025

2024

  1. CONDA
    A Taxonomy for Data Contamination in Large Language Models
    Medha Palavalli, Amanda Bertsch, and Matthew R Gormley
    In The 1st Workshop on Data Contamination (CONDA), 2024
  2. TMLR
    From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models
    Sean Welleck, Amanda Bertsch, Matthew Finlayson, Hailey Schoelkopf, Alex Xie, Graham Neubig, and 2 more authors
    In Transactions on Machine Learning Research, 2024

2023

  1. EMNLP
    To Build Our Future, We Must Know Our Past: Contextualizing Paradigm Shifts in Natural Language Processing
    Sireesh Gururaja, Amanda Bertsch, Clara Na, David Gray Widder, and Emma Strubell
    In Empirical Methods in Natural Language Processing, 2023
  2. Big Picture
    It’s MBR All the Way Down: Modern Generation Techniques Through the Lens of Minimum Bayes Risk
    Amanda Bertsch, Alex Xie, Graham Neubig, and Matthew R. Gormley
    In Proceedings of the First Big Picture Workshop, 2023
  3. NeurIPS
    Unlimiformer: Long-Range Transformers with Unlimited Length Input
    Amanda Bertsch, Uri Alon, Graham Neubig, and Matthew R. Gormley
    In Conference on Neural Information Processing Systems, 2023
  4. TACL
    Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation
    Patrick Fernandes, Aman Madaan, Emmy Liu, António Farinhas, Pedro Henrique Martins, Amanda Bertsch, and 5 more authors
    In Transactions of the Association for Computational Linguistics, 2023
  5. EMNLP Demo
    Prompt2Model: Generating Deployable Models from Natural Language Instructions
    Vijay Viswanathan, Chenyang Zhao, Amanda Bertsch, Tongshuang Wu, and Graham Neubig
    In Empirical Methods in Natural Language Processing: Demo Track, 2023
  6. Preprint
    LLMs as Workers in Human-Computational Algorithms? Replicating Crowdsourcing Pipelines with LLMs
    Tongshuang Wu, Haiyi Zhu, Maya Albayrak, Alexis Axon, Amanda Bertsch, Wenxing Deng, and 18 more authors
    In arXiv, 2023
  7. ClinicalNLP
    SummQA at MEDIQA-Chat 2023: In-Context Learning with GPT-4 for Medical Summarization
    Yash Mathur, Sanketh Rangreji, Raghav Kapoor, Medha Palavalli, Amanda Bertsch, and Matthew Gormley
    In Proceedings of the 5th Clinical Natural Language Processing Workshop, Jul 2023

2022

  1. Findings
    He Said, She Said: Style Transfer for Shifting the Perspective of Dialogues
    Amanda Bertsch, Graham Neubig, and Matthew R. Gormley
    In Findings of the Association for Computational Linguistics: EMNLP 2022, Dec 2022
  2. GeBNLP
    Evaluating Gender Bias Transfer from Film Data
    Amanda Bertsch, Ashley Oh, Sanika Natu, Swetha Gangu, Alan W. Black, and Emma Strubell
    In Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP), Jul 2022

2021

  1. W-NUT
    Detection of Puffery on the English Wikipedia
    Amanda Bertsch and Steven Bethard
    In Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021), Nov 2021