VI. PUBLICATION LIST

a.      Chapters in Books

1.      Rudnicky, A. I. Multi-modal dialog systems. In Minker, W., Bühler, D. and Dybkjær, L. (Eds.) Spoken Multimodal Human-Computer Dialogue in Mobile Environments. Kluwer Academic, 2004.

2.      Bohus, D. and Rudnicky, A. I. LARRI: A Language-based Maintenance and Repair Assistant. In Minker, W., Bühler, D. and Dybkjær, L. (Eds.) Spoken Multimodal Human-Computer Dialogue in Mobile Environments. Kluwer Academic, 2004.

3.      Rudnicky, A.I.  The design of spoken language interfaces. In Syrdal, A., Bennett, R. and Greenspan, S. (Eds.) Applied Speech Technology. Boca Raton: CRC Press, 1995, 403-428.

4.      Rudnicky, A.I. and Hauptmann, A.G.  Multi-modal interaction in speech systems.  In Blattner, M. and Dannenberg, R. (Eds.) Multimedia interface design. New York: ACM, 1992, 147-182.

5.      Cole, R.A., Rudnicky, A.I., Zue, V.W., and Reddy, J.R. Speech as patterns on paper.  In R.A. Cole (Ed.) Perception and Production of Fluent Speech.  Hillsdale, N.J.: Lawrence Erlbaum Associates, 1980.

b.      Refereed Journal Articles - Published

6.      Bohus, D. and Rudnicky, A. The RavenClaw dialog management framework: architecture and systems. Computer Speech and Language, 2008, 23(3), 332-361.

7.      Oh, A. H. and Rudnicky, A. Stochastic natural language generation for spoken dialog. Computer Speech and Language, 2002, 16(3-4), 387-407.

8.      Rudnicky, A.I., Lee, K-F. and Hauptmann, A.G.  Survey of current speech technology. Communications of the ACM, 1994, 37(3), 52-57.

9.      Hauptmann, A.G. and Rudnicky, A.I.  Talking to Computers: An empirical investigation.  International Journal of Man-Machine Studies, 1988, 28, 583-604.  (Also Carnegie-Mellon Computer Science Department Technical Report CMU-CS-87-186.)

10.  Jakimik, J.A., Cole, R.A., and Rudnicky, A.I. Sound and spelling in spoken word recognition. Journal of Verbal Learning and Verbal Behavior, 1985, 24, 165-178.

11.  Rudnicky, A.I. and Kolers, P.A. Size and case of type as stimuli in reading. Journal of Experimental Psychology: Human Perception and Performance, 1984, 10, 231-249.

12.  Cole, R.A. and Rudnicky, A.I.  What's new in speech perception?  The research and ideas of William Chandler Bagley, 1874-1946. Psychological Review, 1983, 90, 94-101.

13.  Rudnicky, A.I. and Cole, R.A. Effect of subsequent context on syllable perception. Journal of Experimental Psychology: Human Perception and Performance, 1978, 4, 638-647.

14.  Rudnicky, A.I. and Cole, R.A. Adaptation produced by connected speech. Journal of Experimental Psychology: Human Perception and Performance, 1977, 3, 51-61.

15.  Bregman, A.S. and Rudnicky, A.I. Auditory segregation: Stream or streams? Journal of Experimental Psychology: Human Perception and Performance, 1975, 1, 263-267.

c.       Refereed Conference/Workshop Papers

16.  Chen, YN, Sun, M., Rudnicky, A.I., and Gershman, A. Leveraging Behavioral Patterns of Mobile Applications for Personalized Spoken Language Understanding. Proceedings of The 17th ACM International Conference on Multimodal Interaction (ICMI 2015), Seattle WA, (to appear).

17.  Chen, YN, Wang, WY and Rudnicky, A.I. Learning Semantic Hierarchy with Distributional Representations for Unsupervised Spoken Language Understanding. Proceedings of The 16th Annual Conference of the International Speech Communication Association (INTERSPEECH 2015), Dresden DE, 2015.

18.  Sun, M., Chen, YN and Rudnicky, A.I. Learning OOV through Semantic Relatedness in Spoken Dialog Systems. Proceedings of The 16th Annual Conference of the International Speech Communication Association (INTERSPEECH 2015), Dresden DE, 2015.

19.  Chiu, J., Miao, Y., Black, A.W. and Rudnicky, A.I. Distributed Representation-based Spoken Word Sense Induction. Proceedings of Interspeech, Dresden DE, 2015.

20.  Marge, M. and Rudnicky, A. Miscommunication Recovery in Physically Situated Dialogue. Proceedings of SIGdial, Prague CZ, 2015.

21.  Chen, YN, Wang, WY, Gershman, A. and Rudnicky, A.I. Matrix Factorization with Knowledge Graph Propagation for Unsupervised Spoken Language Understanding. Proceedings of The 53rd Annual Meeting of the ACL and The 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACL-IJCNLP 2015), Beijing, China, 2015.

22.  Chen, YN, Wang, WY and Rudnicky, A.I. Jointly Modeling Inter-Slot Relations by Random Walk on Knowledge Graphs for Unsupervised Spoken Language Understanding. Proceedings of NAACL, Denver CO, 2015.

23.  Yu, Z., Papangelis, A. & Rudnicky, A. TickTock: Engagement Awareness in a non-Goal-Oriented Multimodal Dialogue System. AAAI Spring Symposium, 2015.

24.  Chen, YN & Rudnicky, A.I. Dynamically Supporting Unexplored Domains in Conversational Interactions by Enriching Semantics with Neural Word Embeddings. Proceedings of SLT, December 2014, Lake Tahoe, NV.

25.  Chen, YN, Wang, WY & Rudnicky, A.I. Leveraging Frame Semantics and Distributional Semantics for Unsupervised Semantic Slot Induction in Spoken Dialogue Systems. Proceedings of SLT, December 2014, Lake Tahoe, NV.

26.  Chiu, J. and Rudnicky, A. LACS System Analysis on Retrieval Models for the MediaEval 2014 Search and Hyperlinking Task, Proceedings of MediaEval, Barcelona, 2014.

27.  Pappu, A. & Rudnicky, A.I. Learning Situated Knowledge Bases through Dialog. Proceedings of Interspeech, September 2014, Singapore.

28.  Chiu, J., Wang, Y., Trmal, J., Povey, D., Chen, G., Rudnicky, A. Combination of FST and CN Search in Spoken Term Detection. Proceedings of Interspeech, September 2014, Singapore.

29.  Qin, L. & Rudnicky, A.I. Building a vocabulary self-learning speech recognition system. Proceedings of Interspeech, September 2014, Singapore.

30.  Pappu, A. & Rudnicky, A.I. Knowledge Acquisition Strategies for Goal-Oriented Dialog Systems. Proceedings of SIGDIAL, June 2014, Philadelphia, PA.

31.  Chen, YN & Rudnicky, AI Two-Stage Stochastic Natural Language Generation for Email Synthesis by Modeling Sender Style and Topic Structure, Proceedings of the 8th Int’l Natural Language Generation Conference (INLG), June 2014, Philadelphia, PA.

32.  Pappu, A., Sun, M., Sridharan, S. & Rudnicky, A.I. Conversational Strategies for Robustly Managing Dialog in Public Spaces. Proceedings of EACL Dialog in Motion Workshop, 2014, Gothenburg, Sweden.

33.  Smailagic, A., D. Siewiorek, A. Rudnicky, S. N. Chakravarthula, A. Kar, N. Jagdale, S. Gautam, R. Vijayaraghavan, S. Jagtap: Emotion Recognition Modulating the Behavior of Intelligent Systems. Int’l Symp on Multimedia, 2013: 378-383, Anaheim, CA.

34.  Gandhe, A., L. Qin, F. Metze, A. Rudnicky, I. Lane, M. Eck Using Web Text to Improve Keyword Spotting in Speech, Proceedings of ASRU, 2013, Olomouc, CZ.

35.  Qin, L. & A. Rudnicky Learning Better Lexical Properties for Recurrent OOV Words, Proceedings of ASRU, 2013, Olomouc, CZ.

36.  Y-N Chen, W. Y. Wang, A. I. Rudnicky Unsupervised Induction and Filling of Semantic Slots for Spoken Dialogue Systems Using Frame-Semantic Parsing, Proceedings of ASRU 2013, Olomouc, CZ.

37.  Pappu, A. & Rudnicky, A.  Predicting Tasks in Goal-Oriented Spoken Dialog Systems using Semantic Knowledge Bases, In Proceedings of the SIGDIAL 2013 Conference, 2013, Metz, France.

38.  Chiu, J. & A.I. Rudnicky Using Conversational Word Bursts in Spoken Term Detection, Proc. of Interspeech, 2013, Lyon, France.

39.  Qin, L. & A Rudnicky Finding Recurrent Out-of-Vocabulary Words. Proc. of Interspeech, 2013, Lyon, France.

40.  Teodoro, G., Martin, N., Keshner, E., Shi, J. Y. and Rudnicky, A. Virtual clinicians for the treatment of aphasia and speech disorders. Proceedings of ICVR, 2013, Philadelphia, PA US, 158-159.

41.  Marge, M. and Rudnicky, A. I. Towards evaluating recovery strategies for situated grounding problems in human-robot dialogue. Proceedings of RO-MAN, 2013, Gyeongju, KR, 340-341.

42.  Pappu, A., M. Sun, S. Sridharan, A. Rudnicky Situated Multiparty Interaction between Humans and Agents. Proceedings of HCII 2013, Las Vegas, US.

43.  Chen, Y-N, Wang, W. Y. and Rudnicky, A.I.  An empirical investigation of sparse log-linear models for improved dialogue act classification. Proceedings of ICASSP, 2013, Vancouver, BC CA, 8317-8321.

44.  Sridharan, S., Y-N Chen, K-M Chang, and A. I. Rudnicky NeuroDialog: An EEG-enabled spoken language interface. Proceedings of ICMI, 2012, Santa Monica, CA US.

45.  Qin, L. & A. Rudnicky OOV Word Detection using Hybrid Models with Mixed Types of Fragments. Proceedings of Interspeech, 2012, Portland, OR US.

46.  Pappu, A. & A. Rudnicky The structure and generality of spoken route instructions. Proceedings of SIGdial (Seoul, Korea), 2012.

47.  Elijah Mayfield, David Adamson, Alexander I. Rudnicky, and Carolyn Penstein Rosé Computational Representations of Discourse Practices across Populations in Task-based Dialogue. International Conference on Intercultural Collaboration (ICIC), 2012.

48.  L. Qin, M. Sun, A. I. Rudnicky, System Combination for Out-of-vocabulary Word Detection, Proceedings of ICASSP, March 2012, Kyoto, Japan.

49.  M. Marge and A. I. Rudnicky, Towards Overcoming Miscommunication in Situated Dialogue by Asking Questions. Proceedings of AAAI Fall Symposium - Building Representations of Common Ground with Intelligent Agents, November 2011, Arlington, VA.

50.  Qin, L., Sun, M. & Rudnicky, A.I. OOV detection and recovery using hybrid models with different fragments, Proceedings of INTERSPEECH, 2011, Florence, IT.

51.  M. Marge and A. I. Rudnicky, The TeamTalk Corpus: Route Instructions in Open Spaces. RSS Workshop on Grounding Human-Robot Dialog for Spatial Tasks, July 2011, Los Angeles, CA.

52.  C. Lee, T. Kawahara, A. Rudnicky Combining Slot-based Vector Space Model for Voice Book Search. Proceedings of Int’l Workshop on Spoken Dialog Systems, 2011.

53.  A. I. Rudnicky, A. Pappu, P. Li, and M. Marge, Instruction Taking in the TeamTalk System. Proceedings of AAAI Fall Symposium - Dialog with Robots, November 2010, Arlington, VA.

54.  M. Marge and A. I. Rudnicky, Comparing Spoken Language Route Instructions for Robots across Environment Representations. Proceedings of SIGdial, September 2010, Tokyo, Japan.

55.  M. Marge, J. Miranda, A. Black, and A. I. Rudnicky, Towards Improving the Naturalness of Social Conversations with Dialogue Systems. Proceedings of SIGdial, September 2010, Tokyo, Japan.

56.  M. Marge, S. Banerjee and A. I. Rudnicky Using the Amazon Mechanical Turk to Transcribe and Annotate Meeting Speech for Extractive Summarization. Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, NAACL HLT 2010, June 2010, Los Angeles, CA.

57.  L. Qin and A. Rudnicky The effect of lattice pruning on MMIE training.  Proceedings of ICASSP, 2010, Dallas, TX.

59.  L. Qin and A. Rudnicky Implementing and improving MMIE training in SphinxTrain. CMU Sphinx User and Developers Workshop, 2010, Dallas, TX.

60.  M. Marge, S. Banerjee and A. I. Rudnicky. Using the Amazon Mechanical Turk for Transcription of Spoken Language. Proceedings of ICASSP, March 2010, Dallas, TX.

61.  M. Marge, A. Pappu, B. Frisch, T. K. Harris and A. I. Rudnicky Exploring Spoken Dialog Interaction in Human-Robot Teams. Robots, Games, and Research: Success stories in USARSim IROS Workshop, October 2009, St. Louis, MO, USA.

62.  S. Banerjee and A. I. Rudnicky Detecting the Noteworthiness of Utterances in Human Meetings. Proceedings of SIGDIAL, September 2009, London, UK.

63.  Kazunori Komatani, Alexander I. Rudnicky  Predicting Barge-in Utterance Errors by using Implicitly-Supervised ASR Accuracy and Barge-in Rate per User. ACL-IJCNLP 09, Short Papers, pp.89–92, 2009.

64.  David Huggins-Daines and Alexander I. Rudnicky Combining Mixture Weight Pruning and Quantization for Small-Footprint Speech Recognition. Proceedings of ICASSP-2009, Taipei, Taiwan, April 2009.

65.  Satanjeev Banerjee and Alexander Rudnicky An Extractive-Summarization Baseline for the Automatic Detection of Noteworthy Utterances in Multi-Party Human-Human Dialog. In the Proceedings of the 2008 IEEE Workshop on Spoken Language Technologies. December 15 – 18, 2008, Goa, India.

66.  Ananlada Chotimongkol and Alexander I. Rudnicky Acquiring Domain-Specific Dialog Information from Task-Oriented Human-Human Interaction through an Unsupervised Learning. Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, October 2008, Honolulu, HI.

67.  David Huggins-Daines and Alexander I. Rudnicky Mixture Pruning and Roughening for Scalable Acoustic Models. Proceedings of ACL-08 Workshop on Mobile Language Processing, Columbus, OH, USA, June 2008.

68.  David Huggins-Daines and Alexander I. Rudnicky Interactive ASR Error Correction for Touchscreen Devices. Demo presented at ACL 2008, Columbus, OH, USA, June 2008.

69.  Mohit Kumar, Dipanjan Das and Alexander I. Rudnicky. A System for Recommending Briefing Drafts from Non-textual Events. Proceedings of the International Workshop on Recommendation and Collaboration (IUI 2008 Workshop). Jan 13, 2008. Canary Islands, Spain.

70.  D. Das, M. Kumar and A. I. Rudnicky. Automatic Extraction of Briefing Templates. Proceedings of the Third International Joint Conference on Natural Language Processing (IJCNLP 2008). Jan 7-12, 2008. Hyderabad, India.

71.  Yi Wu, Rong Zhang and Alexander Rudnicky Data Selection for Speech Recognition. Proceedings of ASRU, Kyoto, Japan, December 2007.

72.  Thomas K. Harris and Alexander I. Rudnicky. TeamTalk: A platform for multi-human-robot dialog research in coherent real and virtual spaces. Association for the Advancement of Artificial Intelligence, Vancouver B.C., Canada, 2007.

73.  D. Bohus and A. I. Rudnicky Implicitly-supervised Learning in Spoken Language Interfaces: an Application to the Confidence Annotation Problem, Proceedings of 8th SIGdial Workshop, 2007, Antwerp, Belgium, pp. 256-264.

74.  M. Kumar, D. Das, A. I. Rudnicky, Summarizing Non-textual Events with a 'Briefing' Focus, Recherche d'Information Assistée par Ordinateur (RIAO), May 30-Jun 1, 2007, Pittsburgh, USA.

75.  M. Kumar, N. Garera, A. I. Rudnicky, Learning from Report-writing Behavior of Individuals, International Joint Conference on Artificial Intelligence (IJCAI), Jan 6-12, 2007, Hyderabad, India.

76.  Banerjee, S. and Rudnicky, A. I. Segmenting meetings into agenda items by extracting implicit supervision from human note-taking. In: Proceedings of the 2007 International Conference on Intelligent User Interfaces 2007. pp. 151-159.

77.  David Huggins-Daines and Alexander I. Rudnicky, Implicitly Supervised Language Model Adaptation for Meeting Transcription, Proceedings of HLT-NAACL 2007, Rochester, NY, USA, May 2007.

78.  Bohus, D., Raux, A., Harris, T., Eskenazi, M., and Rudnicky, A. Olympus: an open-source framework for conversational spoken language interface research, Proceedings of HLT-NAACL 2007, Rochester, NY, USA, May 2007.

79.  Bohus, D., Grau, S., Huggins-Daines, D., Keri, V., Krishna, G., Kumar, R., Raux, A., and Tomko, S. Conquest - an Open-Source Dialog System for Conferences. Proceedings of HLT-NAACL 2007, Rochester, NY, USA, May 2007.

80.  M. Kumar, N. Garera, A. I. Rudnicky, A Briefing Tool that Learns Report-writing Behavior, IEEE International Conference on Tools with Artificial Intelligence (ICTAI), Nov 13-15, 2006, Washington D.C., USA.

81.  S. Banerjee and A. I. Rudnicky, A TextTiling Based Approach to Topic Boundary Detection in Meetings. In Proceedings of the Interspeech – ICSLP 2006 Conference. September 17 – 21, 2006, Pittsburgh, PA.

82.  S. Banerjee and A. I. Rudnicky, SmartNotes: Implicit Labeling of Meeting Data through User Note-Taking and Browsing, In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics – Human Language Technology (NAACL-HLT) – Demonstration Track. June 5th to 7th 2006, New York, NY.

83.  S. Banerjee and A. I. Rudnicky, You Are What You Say: Using Meeting Participants’ Speech to Detect their Roles and Expertise,  In the NAACL-HLT 2006 workshop on Analyzing Conversations in Text and Speech, June 8th 2006, New York, NY.

84.  Bohus, D., Langner, B., Raux, A., Black, A., Eskenazi, M. and Rudnicky A., Online Supervised Learning of Non-understanding Recovery Policies, in SLT-2006, Palm Beach, Aruba.

85.  Bohus, D., and Rudnicky, A.,  A K Hypotheses + Other Belief Updating Model, in AAAI Workshop on Statistical and Empirical Approaches to Spoken Dialogue Systems, 2006, Boston, MA.

86.  D. Huggins-Daines and A. I. Rudnicky, A Constrained Baum-Welch Algorithm for Improved Phoneme Segmentation and Efficient Training, in Proceedings of Interspeech 2006, Pittsburgh, USA, September 2006.

87.  D. Huggins-Daines, M. Kumar, A. Chan, A. W. Black, M. Ravishankar, and A. I. Rudnicky, PocketSphinx: A Free, Real-Time Continuous Speech Recognition System for Hand-Held Devices, in Proceedings of ICASSP 2006, Toulouse, France, May 2006.

88.  R. Zhang and A. I. Rudnicky, Investigations of Issues for Using Multiple Acoustic Models to Improve Continuous Speech Recognition, Proceedings of ICSLP 2006.

89.  R. Zhang and A. I. Rudnicky, A New Data Selection Principle for Semi-Supervised Incremental Learning, in Proceedings of ICPR 2006.

90.  R. Zhang and A. I. Rudnicky, A New Data Selection Approach for Semi-Supervised Acoustic Modeling, in Proceedings of ICASSP 2006.

91.  A. I. Rudnicky, P. Rybski, S. Banerjee, and M. Veloso, Intelligently Integrating Information from Speech and Vision Processing to Perform Light-weight Meeting Understanding, in the International Workshop on Multimodal Multiparty Meeting Processing, 7 October 2005, Trento, Italy.

92.  S. Banerjee, C. Rosé, and A. I. Rudnicky The Necessity of a Meeting Recording and Playback System, and the Benefit of Topic-Level Annotations to Meeting Browsing, In Proceedings of the 10th International Conference on Human-Computer Interaction, 12-16 September 2005, Rome, Italy.

93.  S. Banerjee and A. I. Rudnicky Aspects of the Virtuality Continuum and Multi-Participant Interaction Modeling in the Artificial Agent-Assisted Meeting Scenario, In Workshop on the Virtuality Continuum Revisited, April 2-7, 2005, Portland, OR.

94.  T. Harris, S. Banerjee, and A. I. Rudnicky Heterogeneous Multi-Robot Dialogues for Search Tasks, (a modified version of "A Research Platform for Multi-Agent Dialogue Dynamics"). AAAI Spring Symposium: Dialogical Robots: Verbal Interaction with Embodied Agents and Situated Devices, March 21-23, 2005, Stanford, CA.

95.  A. Chan, R. Mosur and A. I. Rudnicky, On Improvements of CI-based GMM Selection, in Proceedings of Interspeech 2005, Lisbon, Portugal.

96.  R. Zhang, Z. Al Bawab, A. Chan, A. Chotimongkol, D. Huggins-Daines, A. I. Rudnicky, Investigations on Ensemble Based Semi-Supervised Acoustic Model Training, in Proceedings of Interspeech 2005, Lisbon, Portugal.

97.  Bohus, D., and Rudnicky, A., Constructing Accurate Beliefs in Spoken Dialog Systems, in Proceedings of ASRU-2005, San Juan, Puerto Rico.

98.  Bohus, D., and Rudnicky, A., Error Handling in the RavenClaw dialog management architecture, in Proceedings of HLT-EMNLP-2005, Vancouver, CA.

99.  Bohus, D., and Rudnicky, A., Sorry, I Didn't Catch That! - An Investigation of Non-understanding Errors and Recovery Strategies, in Proceedings of SIGdial-2005, Lisbon, Portugal.

100. Bohus, D., and Rudnicky, A., A Principled Approach for Rejection Threshold Optimization in Spoken Dialog Systems, in Proceedings of Interspeech-2005, Lisbon, Portugal.

101. A. Chan, J. Sherwani, R. Mosur and A. I. Rudnicky, Four-Level Categorization Scheme of Fast GMM Computation Techniques in Large Vocabulary Continuous Speech Recognition Systems, International Conference of Speech and Language Processing 2004, Jeju, Korea.

102. S. Banerjee, J. Cohen, T. Quisel, A. Chan, Y. Patodia, Z. Al Bawab, R. Zhang, A. Black, R. Stern, R. Rosenfeld, A. I. Rudnicky, Creating Multi-Modal, User-Centric Records of Meetings with the Carnegie Mellon Meeting Recorder Architecture, NIST Meeting Recognition Workshop at ICASSP 2004, Montréal, Québec.

103. P. E. Rybski, S. Banerjee, F. de la Torre, C. Vallespi, A. I. Rudnicky, and M. Veloso, Segmentation and Classification of Meetings using Multiple Information Streams. In the Proceedings of the Sixth International Conference on Multimodal Interfaces, October 14th-15th, 2004, State College, Pennsylvania.

104. T. Harris, S. Banerjee, A. I. Rudnicky, J. Sison, K. Bodine and A. W. Black A Research Platform for Multi-Agent Dialogue Dynamics, In the 13th International Workshop on Robot and Human Interactive Communication (RO-MAN), September 20-22, 2004, Kurashiki, Japan.

105. S. Banerjee and A. I. Rudnicky Using Simple Speech-Based Features to Detect the State of a Meeting and the Roles of the Meeting Participants, In Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004 - ICSLP), October 4-8, 2004, Jeju Island, Korea.

106. Rong Zhang and Alexander I. Rudnicky, Comparative Study of Boosting and Non-Boosting Training for Constructing Ensembles of Acoustic Models, Proceedings of Eurospeech, 2003 (Geneva, Switzerland), pages 1885-1888.

107. Rong Zhang and Alexander I. Rudnicky, Apply N-Best List Re-Ranking to Acoustic Model Combinations of Boosting Training, in Proceedings of ICSLP, 2004.

108. Rong Zhang and Alexander I. Rudnicky, A Frame Level Boosting Training Scheme for Acoustic Modeling, in Proceedings of ICSLP, 2004.

109. Rong Zhang and Alexander I. Rudnicky, Optimizing Boosting with Discriminative Criteria, in Proceedings of ICSLP, 2004.

110. Bohus, D. and Rudnicky, A.I.  RavenClaw: Dialog Management Using Hierarchical Task Decomposition and an Expectation Agenda, in Proceedings of Eurospeech, 2003 (Geneva, Switzerland), pages 597-600.

111. Rong Zhang and Alexander I. Rudnicky, "Improving the Performance of an LVCSR System through Ensembles of Acoustic Models", Proc. of ICASSP, 2003.

112. Bohus, D. and Rudnicky, A.I. LARRI: A Language-Based Maintenance and Repair Assistant. Proceedings of the ISCA Tutorial and Research Workshop on Multi-Modal Dialogue in Mobile Environments [IDS-2002] (Kloster Irsee, Germany).

113. Bennett, C. and Rudnicky, A.I. The Carnegie Mellon Communicator Corpus. Proceedings of ICSLP, 2002 (Denver, Colorado), pages 341-344.

114. Zhang, R. and A.I. Rudnicky Improve Latent Semantic Analysis based Language Model by Integrating Multiple Level Knowledge. Proceedings of ICSLP, 2002 (Denver, Colorado), pages 893-896.

115. M. Walker, A. Rudnicky, R. Prasad, J. Aberdeen, E. Bratt, J. Garofolo, H. Hastie, A. Le, B. Pellom, A. Potamianos, R. Passonneau, S. Roukos, G. Sanders, S. Seneff, and D. Stallard. DARPA Communicator: Cross-system results for the 2001 evaluation. In Proceedings of ICSLP, 2002.

116. Frederking, R.E., Black, A. W., Brown, R.D., Rudnicky, A., Moody, J. and Steinbrecher, E. Speech translation on a tight budget without enough data. Proceedings of the ACL-02 Workshop on Speech-to-Speech Translation, 2002 (Philadelphia, PA), v. 7, pages 77-84.

117. Chotimongkol, A. and Rudnicky, A.I. Automatic Concept Identification In Goal-Oriented Conversations. Proceedings of ICSLP, 2002 (Denver, Colorado), pages 1153-1156.

118. Bennett, C., Font Llitjós, A., Shriver, S., Rudnicky, A.I. and Black, A.W. Building VoiceXML-based Applications. Proceedings of ICSLP 2002 (Denver, Colorado), pages 2245-2248.

119. Rong Zhang and Alexander I. Rudnicky, "A Large Scale Clustering Scheme for Kernel K-Means", Proc. of ICPR, 2002.

120. M. Walker, J. Aberdeen, J. Boland, E. Bratt, J. Garofolo, L. Hirschman, A. Le, S. Lee, K. Papineni, B. Pellom, J. Polifroni, A. Potamianos, P. Prabhu, A. Rudnicky, S. Seneff, D. Stallard, S. Whittaker DARPA Communicator dialog travel planning systems: The June 2000 data collection. Proceedings of Eurospeech 2001 (Aalborg, Denmark), pages 1371-1374.

122. Chotimongkol, A. and Rudnicky, A.I. N-best Speech Hypotheses Reordering Using Linear Regression. Proceedings of Eurospeech 2001 (Aalborg, Denmark), pages 1829-1832.

123. Zhang, R. and Rudnicky, A.I. Word Level Confidence Annotation using Combinations of Features. Proceedings of Eurospeech 2001 (Aalborg, Denmark), pages 2105-2108.

124. Carpenter, P., Jin, C., Wilson, D., Zhang, R., Bohus, D., Rudnicky, A. Is this Conversation on Track? Proceedings of Eurospeech 2001 (Aalborg, Denmark), pages 2121-2124.

125. Du Bois, T. and Rudnicky, A.I. An Open Concept Metric for Assessing Dialog System Complexity. Proceedings of the 2001 Automatic Speech Recognition and Understanding Workshop (Madonna di Campiglio, Italy) Paper a01td123.

126. Bohus, D. and Rudnicky, A.I. Modeling the Cost of Misunderstanding Errors in the CMU Communicator. Proceedings of the 2001 Automatic Speech Recognition and Understanding Workshop (Madonna di Campiglio, Italy) Paper a01db097.

127. Oh, A. H. and Rudnicky, A. Stochastic language generation for spoken dialogue systems. ANLP/NAACL 2000 Workshop on Conversational Systems, May 2000, pp. 27-32.

128. Xu, W. and Rudnicky, A. Task-based dialog management using an agenda. ANLP/NAACL 2000 Workshop on Conversational Systems, May 2000, pp. 42-47.

129. Rudnicky, A., Bennett, C., Black, A., Chotimongkol, A., Lenzo, K., Oh, A., Singh, R. Task and domain specific modeling in the Carnegie Mellon Communicator system. Proceedings of ICSLP, 2000 (Beijing, China) Paper G4-01.

130. Xu, W. & Rudnicky, A. Can artificial neural networks learn language models? Proceedings of ICSLP 2000 (Beijing, China). Paper M1-13.

131. Xu, W. and Rudnicky, A. Language modeling for dialog system. Proceedings of ICSLP 2000 (Beijing, China). Paper B1-06.

132. Frederking, R., Hogan, C., and Rudnicky, A. A New Approach to the Translating Telephone. In Proceedings of the Machine Translation Summit VII: MT in the Great Translation Era, Singapore, September 1999.

133. Rudnicky, A. and Xu, W. An agenda-based dialog management architecture for spoken language systems. IEEE ASRU Workshop, December 1999, p. I-337.

134. Eskenazi, M., Rudnicky, A., Gregory, K., Constantinides, P., Brennan, R., Bennett, C., Allen, J. Data Collection and Processing in the Carnegie Mellon Communicator. Proceedings of Eurospeech, 1999, 6, 2695-2698.

135. Rudnicky, A., Thayer, E., Constantinides, P., Tchou, C., Shern, R., Lenzo, K., Xu, W., Oh, A. Creating natural dialogs in the Carnegie Mellon Communicator system. Proceedings of Eurospeech, 1999, 4, 1531-1534.

136. Constantinides, P. and Rudnicky, A. Dialog analysis in the Carnegie Mellon Communicator. Proceedings of Eurospeech, 1999, 1, 243-246.

137. Constantinides, P., Hansma, S., Tchou, C. and Rudnicky, A. A schema-based approach to dialog control. Proceedings of ICSLP, 1998, Paper 637.

138. Frederking, R., Rudnicky, A. and Hogan, C. DIPLOMAT: a voice-to-voice translation system. ACL Workshop on Spoken Language Translation, Madrid, 1997, pp. 61-66.

139. Rudnicky, A.I., Reed, S. and Thayer, E.H. SpeechWear: A mobile speech system, Proceedings of ICSLP, 1996.

140. Rudnicky, A.I. Hub 4: Business Broadcast News. Proceedings of the 1996 ARPA Workshop on Speech Recognition Technology, 1996, pp.8-11.

141. Hauptmann, A.G., Witbrock, M.J., Rudnicky, A.I., and Reed, S. Speech for Multimedia Information Retrieval. UIST-95 Proceedings of the User Interface Software Technology Conference, Pittsburgh, November 1995.

142. Rudnicky, A.I. Language modeling with limited domain data. Proceedings of the ARPA Spoken Language Systems Technology Workshop. San Mateo: Morgan Kaufmann, 1995, pp. 66-69.

143. Rudnicky, A.I. Factors affecting choice of speech over keyboard and mouse in a simple data-retrieval task. Proceedings of EUROSPEECH'93, 1993, 2161-2164.

144. L. Hirschman, M. Bates, D. Dahl, W. Fisher, J. Garofolo, D. Pallett, K. Hunicke-Smith, P. Price, A. Rudnicky, E. Tzoukermann. Multi-site data collection and evaluation in spoken language understanding. Proceedings of the ARPA Workshop on Human Language Technology. San Mateo: Morgan Kaufmann, 1993, 19-24.

145. Rudnicky, A.I.  Mode preference in a simple data-retrieval task. Proceedings of the ARPA Workshop on Human Language Technology. San Mateo: Morgan Kaufmann, 1993, 364-369.

146. Teal, S.L. and Rudnicky, A.I.  A performance model of system delay and user strategy. Proceedings of the CHI conference, 1992, 295-306.

147. Rudnicky, A.I., Lunati, J.-M. and Franz, A.M.  Spoken language recognition in an office management domain. Proceedings of the Intl. Conf. on Acoustics, Speech, and Signal Processing, 1991, 829-832.

148. Lunati, J.-M. and Rudnicky, A.I.  Spoken language interfaces: The OM system. Proceedings of the CHI conference, April 1991, 453-454.

149. Rudnicky, A.I. and Hauptmann, A.G.  Models for evaluating interaction protocols in speech recognition. Proceedings of the CHI conference, April 1991, 285-291.

150. Rudnicky, A.I.  System response delay and user strategy selection.  Invited poster, CHI Conference, 1990.

151. Rudnicky, A.I., Sakamoto, M.H. and Polifroni, J.H.  Spoken language interaction in a spreadsheet task.  In D. Diaper et al., Human-Computer Interaction -- INTERACT'90, New York: Elsevier, 1990, 767-772.

152. A.G. Hauptmann and A.I. Rudnicky, Speaking, Typing and Gesturing. In CHI-90 Workshop on Multimedia and Multimodal Interfaces, International Conference on Human Factors in Computing Systems, Seattle WA, April 1990.

153. Rudnicky, A.I., Sakamoto, M.H., and Polifroni, J.H.  Spoken language interaction in a goal-directed task. Proceedings of the Intl. Conf. on Acoustics, Speech, and Signal Processing, 1990, 1, 45-48.

154. Lunati, J.-M. and Rudnicky, A.I. The design of a spoken language interface, Proceedings of the Third DARPA Speech and Natural Language Workshop, San Mateo: Morgan-Kaufmann, 1990, pp. 225-229.

155. Hauptmann, A.G. and Rudnicky, A.I.  A comparison of speech versus typed input. Proceedings of the Third DARPA Speech and Natural Language Workshop, San Mateo: Morgan-Kaufmann, 1990, pp. 219-224.

156. Rudnicky, A.I., Sakamoto, M.H., and Polifroni, J.H. Evaluating spoken language interaction. Proceedings of the DARPA Workshop on Speech and Natural Language, San Mateo: Morgan Kaufmann, October 1989, 150-159.

157. Rudnicky, A.I.  The design of voice-driven interfaces.  Proceedings of the DARPA Workshop on Speech and Natural Language, San Mateo: Morgan Kaufmann, February 1989, 120-124.

158. Rudnicky, A.I., Li, Z., Polifroni, J.H., Thayer, E.H., and Gale, J. An unanchored matching algorithm for lexical access. Proceedings of the Intl. Conf. on Acoustics, Speech, and Signal Processing, 1988, 1, 469-472.

159. Murveit, H., Weintraub, M., Cohen, M., Bernstein, J., and Rudnicky, A.I.  Three approaches for the design of a lexical access module for the DARPA Angel speech recognition system.  Proceedings of the Intl. Conf. on Acoustics, Speech, and Signal Processing, 1987, 837-840.

160. Rudnicky, A.I., Baumeister, L.K., DeGraaf, K.H., and Lehmann, E. The lexical access component of the CMU continuous speech recognition system.  Proceedings of the Intl. Conf. on Acoustics, Speech, and Signal Processing, 1987, 376-379.

d.      Other Refereed Publications

161. Lee, K-F, Hauptmann, A.G. and Rudnicky, A.I.  The spoken word.  Byte, 1990, 15(7), 225-232.

162. Rudnicky, A.I. and Hauptmann, A.G.  Errors, Repetition, and Contrastive Emphasis in Speech Recognition.  AAAI Symposium on Spoken Language Systems, March 1989.

163. Rudnicky, A.I. and Stern, R.M.  Spoken language research at Carnegie Mellon.  Speech Technology, 1989, 4(4), 38-43.

164. Rudnicky, A.I.  Goal-directed speech in a spoken language system.  Journal of the Acoustical Society of America, 1989, 86, S76(A).

165. Polifroni, J.H. and Rudnicky, A. I.  Modeling lexical stress in read and spontaneous speech.  Journal of the Acoustical Society of America, 1989, 86, S77(A).

166. Rudnicky, A.I., Brennan, R.A., Polifroni, J.H., and Thayer, E.H.  Interactive problem solving with speech.  Journal of the Acoustical Society of America, 1988, 84, S213(A).

167. Rudnicky, A.I. and Li, Z. Prosodic information in lexical access. Paper presented at the DARPA Speech Workshop, June 1988.

168. Rudnicky, A.I. Using features to empirically generate pronunciation networks.  Journal of the Acoustical Society of America, 1987, 82, (A).

169. Rudnicky, A.I. Speaker-independent recognition of vocalic segments. Journal of the Acoustical Society of America, 1984, 61, S46(A).

170. Rudnicky, A.I., Waibel, A.H., and Krishnan, N.V.  Using zero crossing counts to provide discriminative information in isolated word recognition. Journal of the Acoustical Society of America, 1981, 70, S60(A).

171. Rudnicky, A.I. Units of perception in phoneme monitoring. Paper presented at the 21st meeting of the Psychonomic Society, St. Louis, November, 1980.

172. Rudnicky, A.I. The role of structure in speech perception: Evidence from phoneme monitoring. Paper presented at the Spring Meeting of the Midwestern Psychological Association, St. Louis, May, 1980.

173. Rudnicky, A.I. The perception of speech in an unfamiliar language. In Speech Communications papers presented at the 97th meeting of the Acoustical Society of America, J. Wolf and D. Klatt (Eds.) New York: Acoustical Society of America, 1979.

174. Rudnicky, A.I. and Cole, R.A. Vowel identification and subsequent context.  Journal of the Acoustical Society of America, 1977, 61, S39(A).

175. Rudnicky, A.I. and Cole, R.A. Selective adaptation produced by ongoing speech.  Journal of the Acoustical Society of America, 1976, 59, S26(A).

e.       Unrefereed Conference/Workshop Papers

176. Rudnicky, A. I. and Oh, A. H. Dialog Annotation for Stochastic Generation. Proceedings of the ISLE Workshop on Dialogue Tagging for Human Computer Interaction, Edinburgh, U.K., December 15-17th, 2002. [invited paper]

177. Lunati, J-M and Rudnicky, A.I.  Human factors in the design of a spoken language system.  Proceedings of Speech Tech'91, April 1991.

f.       Technical Reports

178. Bohus, D. and Rudnicky, A. Integrating multiple knowledge sources for utterance-level confidence annotation in the CMU Communicator spoken dialog system. Carnegie Mellon University School of Computer Science Technical Report CMU-CS-02-190, November 2002.

179. Miller, B.W., Hwang, C.H., Lee, Y., Roberts, J., and Rudnicky, A.I.  The I3S Project: A Mixed, Behavioral and Semantic Approach to Discourse/Dialogue Systems.  MCC Technical Report I3S-107-00, January 2000.

180. Rudnicky, A.I.  The design of spoken language interfaces.  Carnegie Mellon University School of Computer Science Technical Report CMU-CS-90-118, March 1990.

181. Rudnicky, A. I. and Hauptmann, A. G. Conversational interaction with speech systems.  Carnegie Mellon University School of Computer Science.  Technical Report CMU-CS-89-203, December 1989.

182. Rudnicky, A.I. and Sakamoto, M.H. Transcription conventions for spoken language research. Carnegie Mellon University School of Computer Science Technical Report CMU-CS-89-194, November 1989.

183. Rudnicky, A.I., Waibel, A.H., and Krishnan, N.  Adding a zero-crossing count to spectral information in template-based speech recognition.  Carnegie-Mellon Computer Science Department Technical Report CMU-CS-82-140, October 1982.

g.      Unpublished Papers

184. Ayoob, E., Bodine, K., Bohus, D., Rudnicky, A. I., and Siegel, J.  Users’ Performance and Preferences for Online Graphic, Text and Auditory Presentation of Instructions.  Carnegie Mellon University, 2004.

185. Damiba, B. A. and Rudnicky, A. I.  Internationalizing Speech Technology through Language Independent Lexical Acquisition.  Carnegie Mellon University, 1998.

186. Damiba, B. A. and Rudnicky, A. I.  Language-Independent Lexical Acquisition.  Carnegie Mellon University, 1997.

187. Rudnicky, A.I.  Speech Interface Guidelines.  Carnegie Mellon University, May 1996.

h.      Software Artifacts


188. TeamTalk, 2005-2009. A spoken dialog system supporting multiple participants (with Harris, Pappu, Li, Frisch and others).

189. Logios, 2007-2009. A set of tools for creating speech system knowledge bases (with Harris).

190. LMTool, 1996-2009. A set of web-based tools for creating language models and pronunciation dictionaries for the Sphinx ASR system.

191. Carnegie Mellon Communicator, 1998-2002. A telephone-based spoken language travel planning system. (with Xu, Oh, Constantinides, Thayer and others).

192. Speech Workbench, 1997-98. A suite of modules and tools that simplify the creation of speech applications. (with Scott Hansma.)

193. Scheduler, 1997. A dialog system for interacting with a personal calendar, used to develop ideas in stack-based dialog control. (with Paul Constantinides and Scott Hansma).

194. Cruiser, 1996. A dialog system for access to auto registration information. Developed in conjunction with CGI, tested with the Pittsburgh Police Department.

195. SpeechWear, 1995. A PC-based wearable speech system incorporating a speech hypertext browser.

196. OM (Office Manager), 1990. An environment supporting speech interaction with basic office applications; also an early example of a distributed multi-application client-server speech system. The system was used for interface experiments.

197. Voice Spreadsheet, 1989. A spreadsheet system incorporating the Sphinx recognition system, used to conduct experiments on speech interfaces.

i.        Video Productions

198. Carnegie Mellon Team Talk (2009)  TeamTalk is a platform for multi-participant dialog between humans and robots.

199. Communicator (March 2002)  A telephone-based spoken dialog system for air travel reservations.

200. Symphony (July 2001)  A voice-enabled mobile maintenance system.

201. Information access through speech (1997)  An early example of a mobile, browser-based system. It was the first system to incorporate ASR into the browser (Mosaic) and to implement html tags for speech.

202. The SpeechWear system (June 1995)

203. Spoken Language Interfaces: The OM System (1990)  The CMU Office Manager, a multi-application speech interface. [refereed, appears in CHI'91 video program]

204. Examples of Voice Interfaces (1989)  A voice spreadsheet and an early version of the OM PID system.

205. Early CMU Speech Systems (1988)  Voice Calculator and Voice PID.