Office: Wean Hall - WEH 4125
Phone: 412-268-4885
Email: hhibshi [at] cmu [dot] edu
Software engineers often find that designing a user survey is not as simple as
compiling a list of questions that seem reasonable to the investigator.
Survey design should leverage the wealth of theory that informs whether
a proposed survey measures what it claims to measure. For example,
engineering researchers studying methods often default to measuring
completion time or the number of tasks achieved to evaluate a
solution. However, engineering problems are rich with
human-subject-relevant phenomena that can advance our knowledge in
requirements engineering. Furthermore, researchers in psychology and the
social sciences have developed foundational theories that can serve
as guidelines for creating experiments to access that knowledge.
In this tutorial, we introduce the audience to relevant social science theories and show how they can be applied in survey design. Attendees will practice this application on a sample survey in class. We aim to teach the community about the challenges in user survey design, how to address these challenges, and how to reduce bias. We also explain the different scales and metrics used in surveys, and we discuss theories from the psychometrics field behind choosing scales for the construct of interest in a survey. We base our survey design techniques on well-known methods in the social science community aimed at increasing conclusion reliability. In addition, the tutorial will cover analysis techniques for survey data. We will explain different types of statistical tests, their differences, and how to choose the appropriate test. We will explore topics such as sampling, test conditions, assumptions, and statistical power. We will also explain how to present findings from user surveys in research papers and how to report the statistics.
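As a small illustration of matching the statistic and the test to the scale, the sketch below summarizes 5-point Likert responses with medians (the points are ordered but not equally spaced) and compares two groups with a nonparametric Mann-Whitney U test. The group data are invented for illustration only:

```python
# Sketch: treating 5-point Likert responses as ordinal data.
# The response values below are invented for illustration.
from statistics import median

from scipy.stats import mannwhitneyu  # nonparametric test suited to ordinal scales

# Hypothetical responses (1 = strongly disagree ... 5 = strongly agree)
group_a = [4, 5, 3, 4, 5, 4, 2, 5, 4, 3]
group_b = [2, 3, 1, 3, 2, 4, 2, 3, 1, 2]

# Report medians rather than means for ordinal data.
print("median A:", median(group_a), "median B:", median(group_b))

# Mann-Whitney U compares the two groups without assuming normality
# or an interval scale.
stat, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```

A parametric t-test would assume interval-level measurement; whether Likert items may be treated that way is itself a debated methodological choice, which is one reason the scale should be chosen deliberately.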
For time and location information, please check the RE'16 program here.
We broke our tutorial into four sessions. In each session, we will introduce concepts and definitions supported by examples, exercises, and recommended dos and don'ts.
In this introductory session, we will define terminology related to survey design and online experiments. We will also explain why empirical research is vital to scientific progress. Then, we will explain the different types of claims in experimental research (descriptive, relational, and causal) and the methods for handling each. We will also introduce a survey preparation checklist covering the steps needed before running any survey or experiment.
In this session, we will explain in detail the different types of survey
questions and how to construct questions that relate closely to the
problem of interest.
We will introduce a model of survey response
and explain how to construct surveys while accounting for factors such as working memory, semantic effects, and context effects.
Understanding these details helps in designing surveys with reduced response bias.
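One common mitigation for order and context effects is randomizing item order per respondent. The sketch below shows a deterministic per-respondent shuffle; the question texts and the idea of seeding on a respondent id are illustrative assumptions, not a prescribed procedure:

```python
# Sketch: randomizing question order per respondent to mitigate
# order and context effects. Question texts are placeholders.
import random

questions = [
    "Q1: How often do you review requirements documents?",
    "Q2: How confident are you in the elicited requirements?",
    "Q3: How useful are traceability links in your projects?",
]

def randomized_order(respondent_id, items):
    """Deterministic per-respondent shuffle, so each respondent's
    ordering can be reproduced and logged during analysis."""
    rng = random.Random(respondent_id)  # seed on the respondent id
    shuffled = list(items)
    rng.shuffle(shuffled)
    return shuffled

print(randomized_order(42, questions))
```

Seeding on the respondent id keeps the design auditable: the analyst can later recover exactly which ordering each respondent saw.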
In this session, we will explain how to analyze and report the quantitative data in user surveys and experiments. We will cover the following topics: the meaning and importance of statistical significance, some common types of statistical tests, the effect of between- and within-subjects designs on data analysis, and how to analyze different experiment designs. The session will also include details about randomized sampling, power analysis, and threats to validity.
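To make the power analysis step concrete, here is a minimal sketch of an a priori sample-size calculation using the standard normal approximation for a two-sided, two-sample comparison; the effect size, alpha, and power targets are illustrative defaults, not recommendations for any particular study:

```python
# Sketch: a priori power analysis via the normal approximation
# for a two-sample comparison. Targets below are illustrative.
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate participants per group for a two-sided
    two-sample test, given a standardized effect size (Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    z_beta = NormalDist().inv_cdf(power)           # power quantile
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" standardized effect (Cohen's d = 0.5):
print(sample_size_per_group(0.5))  # -> 63 per group
```

Note how sensitive the result is to the assumed effect size: halving d to 0.25 roughly quadruples the required sample, which is why the effect size assumption should be justified before recruiting.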
This session will discuss the value of collecting qualitative data in surveys. We will also explain how to code and analyze the data using grounded analysis. This session will also include information about inter-rater reliability and reporting qualitative data.
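One widely used inter-rater reliability statistic for two raters is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. The sketch below implements it directly; the code labels assigned by the two hypothetical raters are invented for illustration:

```python
# Sketch: Cohen's kappa for two raters coding the same responses.
# The code labels below are invented examples.
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Agreement between two raters, corrected for chance agreement."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed proportion of items the raters coded identically.
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from each rater's marginal label frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[lab] * c2[lab] for lab in set(c1) | set(c2)) / n**2
    return (observed - expected) / (1 - expected)

r1 = ["usability", "security", "security", "privacy", "usability", "security"]
r2 = ["usability", "security", "privacy",  "privacy", "usability", "security"]
print(round(cohens_kappa(r1, r2), 2))  # -> 0.75
```

Reporting kappa alongside raw percent agreement is common practice, since raw agreement alone can look high even when it is largely attributable to chance.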