Seshadri Sridharan

 
RESEARCH

LEXICAL SEMANTICS

I am very much interested in statistical models of natural language, and I am fascinated by the semantics that can be discovered from the company a word keeps. Although word distributions are extremely poor approximations of the mental lexicon, they tend to be very useful! I have worked on quality-quantity trade-offs for distributional semantics models, and I am now working on fusing statistical models with human-compiled semantic dictionaries to yield better models of language.
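
As a minimal sketch of the "company a word keeps" idea, the toy script below builds co-occurrence vectors from a tiny corpus and compares words by cosine similarity. The corpus, window size, and token handling are all illustrative, not from the published work.

# Toy distributional semantics: co-occurrence vectors + cosine similarity.
# The corpus and window size are invented for illustration.
from collections import Counter, defaultdict
import math

corpus = "the cat sat on the mat the dog sat on the rug".split()
window = 2

# Count context words within +/- `window` positions of each target word.
cooc = defaultdict(Counter)
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            cooc[w][corpus[j]] += 1

def cosine(u, v):
    shared = set(u) & set(v)
    dot = sum(u[k] * v[k] for k in shared)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Words with similar contexts ("cat"/"dog") score higher than unrelated pairs.
print(cosine(cooc["cat"], cooc["dog"]))
print(cosine(cooc["cat"], cooc["on"]))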

<PAPER> Sridharan, S., Murphy, B.: "Modeling Word Meaning: Distributional Semantics and the Corpus Quality-Quantity Trade-Off." Proceedings of the CogAlex Workshop, 24th International Conference on Computational Linguistics (2012), to appear.

<REPORT> Sridharan, S.: "Using Google Books Ngrams for Neurolinguistic Decoding."

MULTIMODAL LANGUAGE UNDERSTANDING

I am excited about the utility of multimodal evidence streams for dialog agents; there is much more to communication than speech, and we discard it when we attend to speech alone. I am working on fusing gesture, speech, and spatial cues to enhance human-agent interaction, and I am studying how the mutual information between these streams can be used for robust input understanding. I am also exploring how electroencephalography (EEG) can be used to decode dialog-relevant mental states that the dialog system can react to. In other words, how can we leverage non-verbal cues?
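
As a toy illustration of the mutual-information view of stream fusion, the sketch below estimates I(G; S) between two hand-made discrete cue streams (a gesture label and a speech-act label per time step). The labels and data are invented for the example.

# Estimate mutual information between two discretized cue streams.
# Both streams here are made up; real streams would come from sensors.
from collections import Counter
import math

gesture = ["point", "point", "rest", "nod", "rest", "point", "nod", "nod"]
speech  = ["refer", "refer", "other", "ack", "other", "refer", "ack", "other"]

n = len(gesture)
p_g = Counter(gesture)
p_s = Counter(speech)
p_gs = Counter(zip(gesture, speech))

# I(G;S) = sum over (g,s) of p(g,s) * log2( p(g,s) / (p(g) * p(s)) )
mi = sum((c / n) * math.log2((c / n) / ((p_g[g] / n) * (p_s[s] / n)))
         for (g, s), c in p_gs.items())
print(f"I(gesture; speech) = {mi:.3f} bits")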

<PAPER> Sridharan, S., Chen, Y., Chang, K., Rudnicky, A.: "NeuroDialog: An EEG-Enabled Spoken Dialog Interface." 14th ACM International Conference on Multimodal Interaction (2012).

<VIDEO> Sridharan, S.: "NeuroDialog Classifier Demo: Misunderstanding detection in dialog using an EEG-signal-based classifier."

SITUATED DIALOG

I built embodied conversational agents that use the Microsoft Kinect, a far-field microphone array, and an advanced vision sensor for audio source localization, user awareness, and voice activity detection. I am interested in the challenges that public environments pose for social dialog agents; this is ongoing work with my peers at the LACS lab. I believe that fluid interaction with robots in public spaces is a hard problem that needs to be attacked with multi-sensory perception.
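
Voice activity detection in these systems is far more robust than this, but as a minimal sketch of the idea, the toy detector below thresholds per-frame energy on a synthetic signal. The frame size, threshold rule, and signal are all illustrative.

# Toy energy-based voice activity detection on a synthetic signal.
import numpy as np

rate = 16000
t = np.arange(rate) / rate
signal = np.concatenate([
    0.01 * np.random.randn(rate // 2),               # "silence": low-level noise
    0.5 * np.sin(2 * np.pi * 220 * t[: rate // 2]),  # "speech": a loud tone
])

frame = 400  # 25 ms frames at 16 kHz
energies = [float(np.mean(signal[i:i + frame] ** 2))
            for i in range(0, len(signal) - frame, frame)]
threshold = 0.1 * max(energies)  # crude relative threshold

voiced = [e > threshold for e in energies]
print(f"{sum(voiced)}/{len(voiced)} frames flagged as speech")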

<VIDEO> Sridharan, S., Ming S., Aasish P.: Compuceptionist, a receptionist agent for the Gates Hillman building.

<VIDEO> Ming S., Aasish P., Sridharan, S.: SocBot, an embodied social dialog agent for the Gates Hillman building, in kiosk form.

MACHINE LEARNING (FOR NLP)

I worked on question-quality prediction in online QA communities, using both offline and online models of the posted question. I modeled the abstract measure of quality using multiple concrete metrics that I annotated. I also explored how to better estimate quality in the online scenario using co-training, where each of the two models labels a large number of unlabelled instances for the other during training, thereby sharing knowledge.
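
The loop below is a minimal sketch of that co-training setup on synthetic data: two logistic-regression models, each seeing a disjoint feature view, pseudo-label their most confident unlabelled instances for the shared pool. The feature split, pool sizes, and data are illustrative, not the actual question-quality features from the report.

# Co-training sketch: two views, two models, a shared pseudo-labelled pool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 6))
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # signal present in both views
view_a, view_b = X[:, :3], X[:, 3:]       # disjoint feature views

labels = {i: int(y[i]) for i in range(20)}   # small seed of human labels
unlabelled = set(range(20, 600))

clf_a, clf_b = LogisticRegression(), LogisticRegression()
for _ in range(10):
    idx = sorted(labels)
    clf_a.fit(view_a[idx], [labels[i] for i in idx])
    clf_b.fit(view_b[idx], [labels[i] for i in idx])
    # Each model pseudo-labels its most confident unlabelled instances,
    # sharing that knowledge with the other view through the common pool.
    for clf, view in ((clf_a, view_a), (clf_b, view_b)):
        cand = sorted(unlabelled)
        if not cand:
            break
        probs = clf.predict_proba(view[cand])
        for k in np.argsort(probs.max(axis=1))[-10:]:
            labels[cand[k]] = int(probs[k].argmax())
            unlabelled.discard(cand[k])

print("labelled pool grew from 20 to", len(labels))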

<REPORT> Juan, C., Sridharan, S.: Co-training for Better Question-Quality Prediction: Using Offline Predictive Models.

MACHINE LEARNING (FOR SIGNALS)

I worked on adapting an acoustic model (for speech recognition) to a particular microphone or microphone array. I modeled the microphone's channel by learning a transformation function from a recording of a known frequency chirp signal. This transformation was then used to distort pre-recorded audio data, from which an adapted acoustic model was trained.
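
A minimal sketch of the channel-estimation step, on synthetic data: the recorded chirp is deconvolved against the known chirp by regularized spectral division, and the estimated impulse response is then used to distort clean audio. The toy channel, chirp parameters, and regularizer are all illustrative.

# Estimate a microphone channel from a chirp, then distort clean audio with it.
import numpy as np
from scipy.signal import chirp, fftconvolve

rate = 16000
t = np.arange(rate) / rate
probe = chirp(t, f0=100, f1=8000, t1=1.0)        # known 1 s frequency sweep

true_h = np.zeros(64)
true_h[0], true_h[40] = 1.0, 0.4                 # toy channel: direct path + echo
recorded = fftconvolve(probe, true_h)            # what the microphone "hears"

# Regularized spectral division: H = R * conj(P) / (|P|^2 + eps).
n = len(recorded)
P, R = np.fft.rfft(probe, n), np.fft.rfft(recorded, n)
H = (R * P.conj()) / (np.abs(P) ** 2 + 1e-6)
h_est = np.fft.irfft(H, n)[: len(true_h)]

# Pass clean corpus audio through the estimated channel before training
# the adapted acoustic model on it.
clean = np.random.randn(rate)                    # stand-in for clean corpus audio
adapted_input = fftconvolve(clean, h_est)[: len(clean)]
print("estimated echo tap:", round(float(h_est[40]), 3))  # ~0.4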

<CODE> Sridharan, S.: Microphone Adaptation: Learning adapted acoustic models for a microphone’s audio channel.

<MODEL> Sridharan, S.: An acoustic model trained on the WSJ audio, adapted to the Kinect sensor.

SPEECH

I built an intelligent calendar assistant that lets users manage (add, delete, and query) calendar events using speech, where the input language is designed to be unrestricted. I used the Olympus framework to build the system and the UIUC QA corpus to adapt the dialog. In another study, I explored whether disfluencies can be used to improve word predictability in spontaneous speech: using the Switchboard corpus, I showed that speech disfluencies do carry information that can help decrease language-model perplexity.
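
As a toy sketch of the disfluency question (the study itself used Switchboard), the script below asks whether conditioning on a filler token makes the next word easier to predict than conditioning on the word before it. The corpus and filler behaviour are invented for the example.

# Does a filler token help predict the next word? Add-one smoothed bigrams.
from collections import Counter

corpus = ("i uh dunno . you uh dunno . i think so . you think so . "
          "i uh dunno . you think so .").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
V = len(unigrams) + 1

def p(word, context):
    # Add-one smoothed bigram probability p(word | context).
    return (bigrams[(context, word)] + 1) / (unigrams[context] + V)

# Keeping the filler: predict "dunno" from "uh"; dropping it: predict from "i".
print("p(dunno | uh) =", round(p("dunno", "uh"), 3))
print("p(dunno | i)  =", round(p("dunno", "i"), 3))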

<VIDEO> Sridharan, S., Yun, W.: Personal Calendar Assistant: Managing appointments using speech input.

<REPORT> Sridharan, S., Yun, W.: Personal Calendar Assistant

<REPORT> Sridharan, S.: Do Speech Disfluencies Help Word Predictability?

PAGE UNDER CONSTRUCTION!