Big Design 2017 Presentation on Survey Design

I just got back home from my first official speaking gig at Big Design in Dallas, Texas. I was very nervous and the talk didn't go as smoothly as I would have liked, but I had a really engaged audience who peppered me with thoughtful questions throughout the talk and afterward. I was pleasantly surprised that people were so interested in surveys.

As promised, here are my slides:

As a bonus, the University of Texas User Experience Club did sketchnotes of all of the talks. Here is the sketchnote from mine.

Location, Location, Location: On-site, Remote, or Lab Research

When it comes to qualitative research, your location choice matters. In quantitative research, your only real decision is whether to use snail mail or an online system (and hardly anyone does snail mail these days). With qualitative research, you have lots of options, and choosing the right one depends on your research objectives.

Here is a simplified list of location options...

Option 1: On-site

On-site research has more than a few drawbacks. It biases the research because participants come in knowing who the research is for, which can affect their ability to be honest or give negative feedback. They're not dumb. They know you're watching, and walking into someone else's office space can be intimidating.

You're also only drawing people from the local area to participate. Unless you're only serving your local community, that's a really limited sample. Plus, if you're conducting the research during normal work hours you are limiting yourself to people who either work odd hours, are in a position to be away from work during your session, or have no job at all.

So why do it? A really good reason is that you're testing a prototype or some kind of media or product that hasn't been released to the public and you want to keep it under wraps. Having your company secrets stay at the office is a great way to make sure they stay secret.

Option 2: Remote Research

Honestly, this is my favorite kind of research. It's relatively cheap, there are no associated travel expenses, and it allows for a lot more flexibility for respondents, researchers, and clients. With remote research, the research is being conducted online using either a web-conferencing system or an online research platform.

I recently worked on a study with another researcher where, between the two of us, we interviewed and did user testing on a voice interface with thirty CTOs across the United States. My research partner and I each had the device that was being tested near our computers, and we asked the CTOs to interact with the devices using their webcams. CTOs, and other high-level professionals, are very strapped for time, and conducting the research via webcam allowed us to be available at their convenience. If we had done these interviews in person, it would have taken much longer and required a lot of money in travel expenses, plus extra money to make it worthwhile for the participants (the CTOs).

Webcam research is great for in-depth interviews (IDIs or one-on-ones), but what about focus groups? When speed and convenience are key, such as during a design sprint, I recommend online bulletin board platforms. Invite research participants to answer questions using an online bulletin board system over the course of two days. Anywhere from fifteen to thirty participants can come and go as they please, as long as they contribute two hours of their time over the two days the platform is open. Once participants answer the predetermined questions, they are then allowed to comment on other participants' answers, and the moderator can probe as needed. These bulletin boards can be incredibly helpful for requirements discovery and concept testing.

Option 3: Lab Research

A lab is usually a focus group facility that has been set up to do user testing. The room should be set up with all of the devices you need for your test, picture-in-picture video capturing the interface being tested, a way to pipe that video into the back room for observers to see, and technical support.

This is expensive. You are paying for everyone's travel expenses (researcher and the observers). You are paying more for incentives because you are inconveniencing the participants. You are also paying for someone who knows what they are doing to set everything up, have backups for your videos, and be able to handle the problems which arise without fail. I once had an agency try to save money by doing the lab setup themselves. They sent the facility a laptop PC and a cheap webcam to use and I went cold with terror. I, the moderator with limited technical skill, would be responsible for making this work. Everything that could go wrong with that project went wrong. Videos were lost, devices didn't pair, fingers were pointed. It wasn't fun and it distracted from the work.

Lesson: If you're going to shell out for a lab test, shell out for technical support too. Peace of mind during a research study is priceless.

Lab tests are also limiting. Instead of talking to people from a dispersed geographic area, you're only talking to people from that metropolitan region who are able to spend a few hours away from work or home to be available during your window.

So what are the benefits? First-person experience, for one. Nothing is more convincing to a stakeholder than seeing a user actually have a problem in the moment or say something about the experience that hits home. If your team feels stuck or disconnected from your customers, getting outside the office and hearing from users directly is invaluable.


Regardless of your budget, you have options when it comes to doing good research. Even if your budget is small and you need a fast turnaround, you can still get quality insights. Ask a research professional about methodology options before you assume it can't be done.

Magic Numbers

There are two numbers in research that are seemingly magical: 5 and 400.

According to the Nielsen Norman Group (one of the originators of user research), you only need five participants to understand if your website, app, or software is ready for launch. Why? Because the direct observation experience is universal. If you do enough user tests, you'll find that the problems you discover with a given participant start repeating themselves.

NNG actually did some math on the subject and you can read their post if you are so inclined. If not, it basically says that a study with five users will identify about 85% of usability issues. If you choose to go beyond five, you will start to see repeats of the same issues.
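If you'd rather see the math than read the post, here's a small sketch of the model NNG uses, assuming (as Nielsen reports) that each participant surfaces about 31% of the usability problems in an interface. The function and its names are my own illustration, not NNG's code.

```python
# Expected share of usability problems found after testing n users,
# given that each user independently uncovers about 31% of them.
def problems_found(n_users, per_user_rate=0.31):
    """Fraction of usability problems expected to surface after n_users tests."""
    return 1 - (1 - per_user_rate) ** n_users

for n in (1, 3, 5, 10):
    print(f"{n:2d} users -> {problems_found(n):.0%} of problems found")
```

Run it and you'll see the curve flatten out sharply after five users, which is exactly why additional participants mostly show you repeats.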

I usually recommend recruiting six people for safety's sake, just in case one of your five doesn't show up or ends up being an outlier whose data you can't use.

This can make your usability test deceptively cheap. The test itself is cheap, but as you can see from my process page, you are not meant to test only once. User testing is an iterative process. Design > develop > test > analyze > redesign > redevelop > retest .... over and over again until you solve your usability issues and the product is ready for launch. That can be expensive, but necessary for the product's success.

Another magic number is 400.

When doing quantitative research you want to have enough respondents for the data to have statistical significance. This is your sample size. Below is the equation for determining your sample size and I know it looks scary, but don't worry, there are lots of online sample size calculators available to do the heavy lifting for you.

Sample Size = (z² × p(1−p) / e²) ÷ (1 + z² × p(1−p) / (e² × N))

where z is the z-score for your confidence level, p is the expected proportion (0.5 is the safe default), e is your margin of error, and N is your population size.

The sample size is dependent on a few variables:

  • Confidence Level - This is how confident we are that our sample accurately reflects the population we are studying. 95% is acceptable in most cases.
  • Confidence Interval/Margin of Error - This describes the accuracy of your findings. I usually default to 5%. This means that whatever findings we get back, there has to be at least a 5% difference for it to be statistically significant. Under 5%, there is not enough of a difference to report.
  • Population Size -  This is the number of people in your entire population. If you are doing a survey of your friends and you only have five friends, then your population is 5. If you are surveying everyone in California, then your population size is 38.8 million.

Let's say we're catering a party at work and we need to decide between hamburgers and hot dogs. We work in a large office with 200 people.

  • Confidence Level  - 95%
  • Confidence Interval/Margin of Error - 5%
  • Population Size - 200

Sample Size  = 132

Now let's say that we're organizing a party for everyone in my hometown of Vancouver, BC and we need to make the same hamburger vs. hotdog decision.

  • Confidence Level - 95%
  • Confidence Interval/Margin of Error - 5%
  • Population Size - 2.4 million

Sample Size = 385

If we were to increase that population size to 5 million or even 5 billion, the sample size would still be 385. See, magic. I quote 400 as the magic number for padding.
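To see where those numbers come from, here's a rough sketch of the formula the online calculators use, assuming a 95% confidence level (z = 1.96) and maximum variability (p = 0.5). The function name is my own; any calculator will do the same arithmetic.

```python
import math

def sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Sample size with a finite population correction (Cochran's formula)."""
    # Sample size for an effectively infinite population:
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Correct downward for small populations:
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

print(sample_size(200))            # the 200-person office -> 132
print(sample_size(2_400_000))      # Vancouver, BC -> 385
print(sample_size(5_000_000_000))  # 5 billion people -> still 385
```

Notice that the correction term `(n0 - 1) / population` shrinks toward zero as the population grows, which is why the answer stops moving once the population is large.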

So now you know. If you want a successful user test, you need to recruit at least 5 people. If you want statistically significant data on a large population, then you need to get at least 385 survey respondents.

Research Toolkit

I'm working on a contract with a local company. Most of the time I'm asked to rewrite their surveys and make them less bad*, but I also prepare research project plans for A/B tests, program evaluations, qualitative discussion guides, and employee engagement benchmarking. Another project I've been asked to do is to give the marketing department a basic understanding of research practices with a research toolkit.

The toolkit was written to give the team a basic understanding of how research is used, so that they can properly advocate for it in stakeholder meetings when no researcher is in the room to advocate for it themselves, and so that they can teach the basics of research to junior members of the team. The topics I covered include:

  • The Research Process - What is market research? How to get the most from working with a research department.
  • Formulating a Research Problem - What is a research problem vs. a management problem? How to define a research problem:
    • Specify the research objectives
    • Review the environment and the context of the research problem
    • Explore the nature of the problem
    • Define the variable relationships
    • Explore the consequences of alternative courses of action
  • Writing a Research Objective - What is a research objective and how to establish a research objective
  • General Research - What is market research? Why conduct market research? When to conduct market research
  • Qualitative Research - What is qualitative research? Data analysis, Confirmation, and Methods of qualitative research:
    • Focus groups
    • In-depth interviews
    • Open-ended survey questions
    • Ethnography
    • User Research and Testing
  • Quantitative Research - What is quantitative research? Quantitative methodology, and statistical analysis.
  • Questionnaire Design - What makes a good questionnaire? Survey design elements, questionnaire introductions and conclusions, question wording, testing, and basic best practices:
    • Keep it short
    • Do not require respondents to answer questions
    • Do not abuse grid questions
    • Adhere to available style guides
    • Randomize your answer options
    • Always make your rating scales Likert (odd-numbered) scales
    • Scales should move from lowest to highest
  • Sampling and Sample Sizes - What is a sample? How to calculate sample size, sampling and statistical testing, and types of sampling:
    • Random (probability) sampling
    • Systematic sampling
    • Stratified samples
    • Quota sampling
    • Cluster sampling
    • Area sampling
  • Analysis and Reporting - How to analyze data, qualitative data analysis (coding and analysis), quantitative data analysis (cross-tabulation and filtering, benchmarking, trending, competitive data, longitudinal analyses, statistical analysis), and how to report research findings:
    • Tell a story
    • Don't get fancy
    • Keep your slides simple

Maybe I'll do a deep dive into one of these subjects in a future post. Setting up an internal toolkit for your organization is dependent on the goals and functions that already exist, but there are universal best practices that can be applied to any given situation.

*Surveys are almost always bad. When was the last time you got a survey invitation and squealed with joyous excitement? It just doesn't happen. I've gotten very good at making surveys less bad.