The Toilet Paper

Money can’t buy happiness, but it can buy survey respondents

Researchers often use financial incentives to increase response rates for their surveys. How well do these work for online surveys?

A maze in which whoever finds the exit receives a monetary reward.
Money can be an a-maze-ing incentive

I can name a lot of things in life that are more fun than completing online surveys. This poses problems for researchers who conduct surveys, as they often need responses from a large number of respondents. Many people believe that rewarding respondents for their participation can help surveys “compete” against those other things and thus increase the response rate. While true, it’s only part of the story.

More isn’t always better


Generally speaking, there are three types of monetary incentives:

  1. prepaid incentives that are given before a respondent has completed the survey (as a form of reciprocity);
  2. conditional incentives that are only given after completion as rewards;
  3. lotteries, where respondents get a chance of winning a prize in return for their efforts.

The first works very well for mail and in-person surveys, but is frequently infeasible for online surveys. Researchers who conduct online surveys are therefore more likely to choose lotteries or conditional incentives, which are less researched and have yielded mixed results.

But response rates aren’t everything: you don’t just need responses from a large number of respondents, you also need responses from the right respondents in order to obtain a representative sample. Some individuals are more motivated by financial incentives than others. For example, studies have shown that lottery incentives attract female respondents to web surveys disproportionately. This may result in biased responses.

Is more better?


An experiment with a between-subjects design was conducted among 1,000 randomly selected undergraduate students at a large Midwestern American university.

Students were assigned to one of three groups and invited to participate in a survey. Those in the control group were not offered any incentive, while those in the second and third groups were promised a reward in the form of a two-dollar or five-dollar credit respectively, which could be used at the campus bookstore or dining services.
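The random assignment behind a between-subjects design like this one can be sketched in a few lines. The group names, participant labels, and fixed seed below are illustrative assumptions, not details from the study:

```python
import random

def assign_groups(participants, seed=42):
    """Shuffle participants and deal them round-robin into three conditions."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)
    groups = {"control": [], "two_dollar": [], "five_dollar": []}
    names = list(groups)
    for i, p in enumerate(shuffled):
        groups[names[i % 3]].append(p)  # every third participant per group
    return groups

# Hypothetical sampling frame of 1,000 students
students = [f"student_{i}" for i in range(1000)]
groups = assign_groups(students)
print({name: len(members) for name, members in groups.items()})
```

Shuffling before dealing ensures each student is equally likely to land in any condition, which is what lets later differences between groups be attributed to the incentive.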

Demographic information about individuals was provided both by the university and by the respondents themselves; this information could be used to determine whether any response biases were present between groups.

More is better


Students in the five-dollar group were significantly more likely to respond than those in the control or two-dollar groups. This suggests that conditional incentives are seen more as a payment than as a trigger for reciprocity. Interestingly, the differences in response rates largely occurred during the first week in which the survey was distributed, which suggests that incentives have a stronger effect on early respondents.

None of the three groups was radically different from the population. However, there were some differences. The control group had the most significant deviations in the form of higher GPAs, a larger proportion of females, and a larger proportion of on-campus students. The two-dollar group similarly had a larger proportion of females, whereas the five-dollar group was virtually identical to the sampling frame.

Altogether, the results suggest that a five-dollar conditional incentive increases representativeness while also improving response rates. On the other hand, failing to use incentives does not necessarily result in "bad data" due to radical biases.


  1. Conditional rewards can increase response rates for online surveys and improve the representativeness of the sample