Big Design 2017 Presentation on Survey Design

I just got back home from my first official speaking gig at Big Design in Dallas, Texas. I was very nervous and the talk didn't go as smoothly as I would have liked, but I had a really engaged audience who peppered me with thoughtful questions throughout the talk and afterward. I was pleasantly surprised that people were so interested in surveys.

As promised, here are my slides:

As a bonus, the University of Texas User Experience Club did sketchnotes of all of the talks. Here is the sketchnote from mine.

Location, Location, Location: On-site, Remote, or Lab Research

When it comes to qualitative research, your location choice matters. Unlike quantitative research, where your only decision is whether to use a snail-mail or an online system (and hardly anyone does snail-mail these days), qualitative research gives you lots of options, and choosing the right one depends on your research objectives.

Here is a simplified list of location options...

Option 1: On-site

On-site research has more than a few drawbacks. It biases the research because participants come in knowing who the research is for, which can affect their willingness to be honest or give negative feedback. They're not dumb. They know you're watching, and walking into someone else's office space can be intimidating.

You're also only drawing participants from the local area. Unless you're only serving your local community, that's a really limited sample. Plus, if you're conducting the research during normal work hours, you are limiting yourself to people who work odd hours, are in a position to be away from work during your session, or have no job at all.

So why do it? A really good reason is that you're testing a prototype or some kind of media or product that hasn't been released to the public and you want to keep it under wraps. Having your company secrets stay at the office is a great way to make sure they stay secret.

Option 2: Remote Research

Honestly, this is my favorite kind of research. It's relatively cheap, there are no associated travel expenses, and it allows for a lot more flexibility for respondents, researchers, and clients. With remote research, the research is being conducted online using either a web-conferencing system or an online research platform.

I recently worked on a study with another researcher where, between the two of us, we interviewed and did user testing on a voice interface with thirty CTOs across the United States. My research partner and I each had the device being tested near our computers, and we asked the CTOs to interact with the devices over their webcams. CTOs, and other high-level professionals, are very strapped for time, and conducting the research via webcam allowed us to be available at their convenience. If we had done these interviews in person, it would have taken much longer and required a lot of money in travel expenses, plus extra incentive money to make it worthwhile for the participants (the CTOs).

Webcam research is great for in-depth interviews (IDIs, or one-on-ones), but what about focus groups? When speed and convenience are key, such as during a design sprint, I recommend online bulletin board platforms. Invite research participants to answer questions using an online bulletin board system over the course of two days. Anywhere from fifteen to thirty participants can come and go as they please, as long as they contribute two hours of their time over the two days the platform is open. Once participants answer the predetermined questions, they can comment on other participants' answers, and the moderator can probe as needed. These bulletin boards can be incredibly helpful for requirements discovery and concept testing.

Option 3: Lab Research

A lab is usually a focus group facility that has been set up to do user testing. The room should be set up with all of the devices you need for your test, picture-in-picture video capturing the interface being tested, a way to pipe that video into the back room for observers to see, and technical support.

This is expensive. You are paying for everyone's travel expenses (researcher and observers). You are paying more for incentives because you are inconveniencing the participants. You are also paying for someone who knows what they are doing to set everything up, keep backups of your videos, and handle the problems that inevitably arise. I once had an agency try to save money by doing the lab setup themselves. They sent the facility a laptop PC and a cheap webcam, and I went cold with terror: I, the moderator with limited technical skill, would be responsible for making this work. Everything that could go wrong with that project went wrong. Videos were lost, devices didn't pair, fingers were pointed. It wasn't fun, and it distracted from the work.

Lesson: If you're going to shell out for a lab test, shell out for technical support too. Peace of mind during a research study is priceless.

Lab tests are also limiting. Instead of talking to people from a dispersed geographic area, you're only talking to people from that metropolitan region who are able to spend a few hours away from work or home to be available during your window.

So what are the benefits? First-person experience, for one. Nothing is more convincing to a stakeholder than seeing a user actually have a problem in the moment or say something about the experience that hits home. If your team feels stuck or disconnected from your customers, getting outside the office and hearing from users directly is invaluable.

***

Regardless of your budget, you have options when it comes to doing good research. Even if your budget is small and you need a fast turnaround, you can still get quality insights. Ask a research professional about methodology options before you assume it can't be done.

Facebook and CNET

I just finished two more projects I can't really talk about.

One was a user testing project for CNET (a CBS Interactive property). I conducted six remote interviews over two days and delivered the report a week later with recommendations and next steps.

The other was an onsite user testing project for Facebook. I conducted four (six were scheduled, but only four made it) in-person interviews over two days and delivered the report two weeks later, also with recommendations and next steps. If anyone asks, the snack options at Facebook are INSANE.

I'd like to thank Applause and Motivate Design for the opportunity to work on such interesting projects and for trusting me with their clients.

Now I'm ramping up another research project for Lucky Brand.

Facebook - Super Top Secret Study

It's exciting to land a high-profile project, but the problem with these types of projects is that you're only allowed to talk about them in vague terms.

It was a month-long project involving 12 one-on-one phone and screen-share interviews, analysis, and report writing. The report was delivered to Facebook via web video yesterday.

I was subcontracting on behalf of AnswerLab and was very grateful for the opportunity.

I wish I could say more and make this a proper case study, but I can't.

Magic Numbers

There are two numbers in research that are seemingly magical: 5 and 400.

According to the Nielsen Norman Group (one of the originators of user research), you only need five participants to understand whether your website, app, or software is ready for launch. Why? Because if you do enough user tests, you'll find that the problems you discover with a given participant start repeating themselves.

NNG actually did the math on the subject, and you can read their post if you are so inclined. In short, it says that a study with five users will identify roughly 85% of usability issues. If you choose to go beyond five, you will mostly start to see repeats of the same issues.

I usually recommend recruiting six people for the sake of safety, just in case one of your five doesn't show up or ends up being weird and you can't use their data for some reason.

This can make your usability test deceptively cheap. A single test is cheap, but as you can see from my process page, you are not meant to test only once. User testing is an iterative process: design > develop > test > analyze > redesign > redevelop > retest... over and over again until you solve your usability issues and the product is ready for launch. That can get expensive, but it's necessary for the product's success.

Another magic number is 400.

When doing quantitative research you want to have enough respondents for the data to have statistical significance. This is your sample size. Below is the equation for determining your sample size. I know it looks scary, but don't worry: there are lots of online sample size calculators available to do the heavy lifting for you.

Sample Size Equation:

n = ( z² × p(1−p) / e² ) ÷ ( 1 + z² × p(1−p) / (e² × N) )

where z is the z-score for your confidence level, p is the expected proportion (use 0.5 to be safe), e is your margin of error, and N is your population size.

The sample size is dependent on a few variables:

  • Confidence Level - This is how confident we are that our sample accurately reflects the population we are studying. 95% is acceptable in most cases.
  • Confidence Interval/Margin of Error - This describes the accuracy of your findings. I usually default to 5%. This means that whatever findings we get back, there has to be at least a 5% difference for it to be statistically significant. Under 5%, there is not enough of a difference to report.
  • Population Size -  This is the number of people in your entire population. If you are doing a survey of your friends and you only have five friends, then your population is 5. If you are surveying everyone in California, then your population size is 38.8 million.

Let's say we're catering a party at work and we need to decide between hamburgers and hot dogs. We work in a large office with 200 people.

  • Confidence Level  - 95%
  • Confidence Interval/Margin of Error - 5%
  • Population Size - 200

Sample Size  = 132

Now let's say that we're organizing a party for everyone in my hometown of Vancouver, BC and we need to make the same hamburger vs. hotdog decision.

  • Confidence Level - 95%
  • Confidence Interval/Margin of Error - 5%
  • Population Size - 2.4 million

Sample Size = 385

If we were to increase that population size to 5 million or even 5 billion, the sample size would still be 385. See, magic. I quote 400 as the magic number for padding.
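Those numbers fall straight out of the standard sample-size formula (Cochran's formula with a finite-population correction). A minimal sketch, assuming a 95% confidence level (z = 1.96), a 5% margin of error, and the most conservative 50/50 response split:

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's sample-size formula with finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # size for an infinite population (~384)
    n = n0 / (1 + (n0 - 1) / population)       # shrink for a finite population
    return math.ceil(n)

print(sample_size(200))            # → 132 (office of 200)
print(sample_size(2_400_000))      # → 385 (Vancouver)
print(sample_size(5_000_000_000))  # → 385 (population stops mattering)
```

Past a few tens of thousands of people, the correction term barely moves the result, which is why the answer sticks at 385 no matter how big the population gets.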

So now you know. If you want a successful user test, you need to recruit at least 5 people. If you want statistically significant data on a large population, then you need at least 385 survey respondents.

Research Toolkit

I'm working on a contract with a local company. Most of the time I'm asked to rewrite their surveys and make them less bad*, but I also prepare research project plans for A/B tests, program evaluations, qualitative discussion guides, and employee engagement benchmarking. I've also been asked to give the marketing department a basic understanding of research practices in the form of a research toolkit.

The toolkit was written to give the team a basic understanding of research so that they can advocate for it in stakeholder meetings when no researcher is in the room to advocate for it themselves, and to teach the basics of research to junior members of the team. The topics I covered include:

  • The Research Process - What is market research? How to get the most from working with a research department.
  • Formulating a Research Problem - What is a research problem vs. a management problem? How to define a research problem:
    • Specify the research objectives
    • Review the environment and the context of the research problem
    • Explore the nature of the problem
    • Define the variable relationships
    • Explore the consequences of alternative courses of action
  • Writing a Research Objective - What is a research objective and how to establish a research objective
  • General Research - What is market research? Why conduct market research? When to conduct market research
  • Qualitative Research - What is qualitative research? Data analysis, Confirmation, and Methods of qualitative research:
    • Focus groups
    • In-depth interviews
    • Open-ended survey questions
    • Ethnography
    • User Research and Testing
  • Quantitative Research - What is quantitative research? Quantitative methodology, and statistical analysis.
  • Questionnaire Design - What makes a good questionnaire? Survey design elements, questionnaire introductions and conclusions, question wording, testing, and basic best practices:
    • Keep it short
    • Do not require respondents to answer every question
    • Do not abuse grid questions
    • Adhere to available style guides
    • Randomize your answer options
    • Always make your rating scales Likert (odd numbered) scales
    • Scales should move from lowest to highest
  • Sampling and Sample Sizes - What is a sample? How to calculate sample size, sampling and statistical testing, and types of sampling:
    • Random (probability) sampling
    • Systematic sampling
    • Stratified samples
    • Quota sampling
    • Cluster sampling
    • Area sampling
  • Analysis and Reporting - How to analyze data, qualitative data analysis (coding and analysis), quantitative data analysis (cross-tabulation and filtering, benchmarking, trending, competitive data, longitudinal analyses, statistical analysis), and how to report research findings:
    • Tell a story
    • Don't get fancy
    • Keep your slides simple
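One of the questionnaire best practices above, randomizing your answer options, is easy to get wrong: anchor options like "None of the above" should stay pinned at the end rather than being shuffled with the rest. A minimal sketch of the idea (the function and option names are my own, not from the toolkit):

```python
import random

def randomize_options(options, pinned=("None of the above", "Other")):
    """Shuffle answer options to cancel out order bias, while keeping
    anchor options (e.g. 'None of the above') fixed at the end."""
    body = [o for o in options if o not in pinned]
    tail = [o for o in options if o in pinned]
    random.shuffle(body)  # randomize only the substantive options
    return body + tail

print(randomize_options(["Red", "Green", "Blue", "None of the above"]))
```

Most survey platforms offer this as a built-in setting; the point is to ask for it explicitly when you hand off a questionnaire.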

Maybe I'll do a deep dive into one of these subjects in a future post. Setting up an internal toolkit for your organization is dependent on the goals and functions that already exist, but there are universal best practices that can be applied to any given situation.

*Surveys are almost always bad. When was the last time you got a survey invitation and squealed with joyous excitement? It just doesn't happen. I've gotten very good at making surveys less bad.

Accessible Research Presentation at Style & Class

Last night I gave a presentation at a local web design and developer meet-up, Style & Class. I spoke about being inclusive of disabled people in qualitative research. This is not something I have direct experience with; I'm just as guilty as anyone else on this subject. Most of the advice was adapted from a presentation by Maya Middlemiss, from Saros Research in the UK.

I learned a lot in writing this presentation and I will be encouraging clients to involve disabled people in their research going forward.

Megaphone Magazine Mobile App - Pre-Launch User Test

Client: Denim & Steel and Megaphone Magazine

Product: Mobile Application

Background: Megaphone Magazine is a not-for-profit social enterprise that empowers Vancouver's at-risk population through commerce. Magazines are produced at a fixed cost and sold to the vendor, who turns around and sells it to his or her customer base for cash at a reasonable markup. Vendors keep all of the revenue they make from their sales.

The growing ubiquity of credit and debit cards caused a decrease in sales for the vendors. Megaphone received a grant to develop a mobile application which would allow regular buyers to purchase magazines using their credit card without bypassing the vendor.

Denim & Steel asked the researcher to conduct the first round of user testing on the mobile application prototype with limited financial resources as is typical for non-profits.

Objective: Ensure that the Megaphone mobile application will work and be understood by magazine buyers, magazine vendors, and Megaphone administrators upon first use.

Research Process: A temporary lab was created in the Megaphone Magazine offices using a quiet meeting space, a simple hand-held voice recorder, and Reflector to mirror and record the actions taken on the app. A fake credit card was also used to assist in setting up the app for each participant.

The researcher instructed Megaphone administrators on who to recruit, how many of each, and how to schedule them. Buyers were qualified by having a smart phone and being regular buyers of the magazine. Megaphone was allowed to use its best judgement on who to recruit as vendors and admins could be anyone employed by Megaphone who would be likely to edit information on the backend or deliver payments to vendors.

Each testing session was viewed as a multi-step process:

  1. Buyers - Setting up the app for payments, then buying a magazine, then editing an order, and confirming payment
  2. Vendors - Recording the transaction number and then getting paid by an administrator
  3. Administrators - Paying a vendor and then editing vendor information for the app

Participants were asked to go through each step in the sequence and the researcher and the developers watched as they succeeded and struggled with the process.

Outcome: Most of the problems found with the app were minor, but one interaction stood out as a potential fault at what was considered a critical point in the purchase process - the confirmation screen. One user swept past the confirmation screen without even realizing it. While the researcher suggested an overlay, Denim & Steel went a very large step further. They created a benign disruption by turning half the screen upside down. This created enough cognitive dissonance to catch the user's attention, prevent a reflexive dismissal of the confirmation screen, and help maintain personal space between the vendor and the buyer.

Before and after rendering of the Megaphone Magazine cashless payment mobile application.

Another insight from the user test was the need to train the vendors to act as an ad hoc help desk for the buyers when they use the app. Suggested subjects included:

  • What is an app? What kind of phones does it work on?
  • How do you get the app?
  • Credit cards vs. debit cards
  • Phone etiquette and personal space
  • How is the service fee calculated and who does it go to?
  • What to do if the customer skips the transaction screen?

"Denim & Steel Interactive turned to Lauren to run our user testing sessions for the Megaphone App. Though we often do user testing internally, we wanted an additional degree of certainty that we had properly tested and validated the app’s design. Lauren’s professional manner and ease of collaboration made the process easy. Her final report both gave assurance to Megaphone of the app’s quality and helped us identify areas for optimization that led to truly novel solutions we would not have otherwise explored. On the next project where we need a higher benchmark for research, we’ll be calling Lauren in." 
Todd Sieling, Co-Founder of Denim & Steel