There are two numbers in research that seem magical: 5 and 400.

According to the Nielsen Norman Group (one of the originators of user research), you only need five participants to understand whether your website, app, or software is ready for launch. Why? Because the direct observation experience is universal. If you run enough user tests, you'll find that the problems you discover with a given participant start repeating themselves.

NNG actually did some math on the subject, and you can read their post if you are so inclined. The short version: a study with five users will surface about 85% of usability issues. If you go beyond five, you will mostly start to see repeats of the same issues.

I usually recommend recruiting six people for safety's sake, just in case one of your five doesn't show up or just ends up being weird and you can't use their data for some reason.

This can make your usability test deceptively cheap. The test itself is cheap, but as you can see from my process page, you are not meant to test only once. User testing is an iterative process: design > develop > test > analyze > redesign > redevelop > retest, over and over until you solve your usability issues and the product is ready for launch. That can get expensive, but it's necessary for the product's success.

Another magic number is 400.

When doing quantitative research, you want enough respondents for your results to be statistically meaningful. This is your sample size. The equation for determining your sample size looks scary, but don't worry: there are lots of online sample-size calculators available to do the heavy lifting for you.
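For reference, the formula those calculators typically use is Cochran's sample-size formula with a finite population correction (my assumption about which variant the post had in mind, since it is the standard one). With z-score \(z\) for the chosen confidence level, margin of error \(e\), assumed proportion \(p\) (0.5 is the most conservative choice), and population size \(N\):

$$
n_0 = \frac{z^2 \, p(1-p)}{e^2},
\qquad
n = \frac{n_0}{1 + \dfrac{n_0 - 1}{N}}
$$

For a 95% confidence level, \(z \approx 1.96\), which is where the ~385 figure below comes from.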

The sample size is dependent on a few variables:

- **Confidence Level** - This is how confident we are that our sample accurately reflects the population we are studying. 95% is acceptable in most cases.
- **Confidence Interval/Margin of Error** - This describes the accuracy of your findings. I usually default to 5%. This means that whatever findings we get back, there has to be at least a 5% difference for it to be statistically significant. Under 5%, there is not enough of a difference to report.
- **Population Size** - This is the number of people in your entire population. If you are doing a survey of your friends and you only have five friends, then your population is 5. If you are surveying everyone in California, then your population size is 38.8 million.

Let's say we're catering a party at work and we need to decide between hamburgers and hot dogs. We work in a large office with 200 people.

Confidence Level - 95%

Confidence Interval/Margin of Error - 5%

Population Size - 200

**Sample Size = 132**

Now let's say that we're organizing a party for everyone in my hometown of Vancouver, BC and we need to make the same hamburger vs. hotdog decision.

Confidence Level - 95%

Confidence Interval/Margin of Error - 5%

Population Size - 2.4 million

**Sample Size = 385**

If we were to increase that population size to 5 million or even 5 billion, the sample size would still be 385. See? Magic. I quote 400 as the magic number to give myself some padding.
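Both party examples above can be checked with a few lines of code. This is a minimal sketch assuming Cochran's formula with the finite population correction (the standard approach, though the post doesn't name the exact formula it used); the function name `sample_size` is my own.

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Minimum sample size via Cochran's formula plus a
    finite population correction.

    z      -- z-score for the confidence level (1.96 for 95%)
    margin -- margin of error (0.05 for +/-5%)
    p      -- assumed proportion; 0.5 is the most conservative
    """
    # Infinite-population sample size
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    # Finite population correction, rounded up to whole people
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

print(sample_size(200))            # office of 200 -> 132
print(sample_size(2_400_000))      # Vancouver    -> 385
print(sample_size(5_000_000_000))  # 5 billion    -> still 385
```

Note how the result plateaus: past a few thousand people, population size barely moves the answer, which is exactly why 385 (padded to 400) works for almost any large population.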

So now you know. If you want a successful user test, you need to recruit at least five people. If you want statistically significant data on a large population, then you need at least 385 survey respondents.