Why Online Sample Source Matters: Not All Panels Are Created Equal

March 11th, 2020 | Ed Morawski, President & COO, Angus Reid


In today’s digital world, online panels are a cost-effective and often essential method of gathering data through online surveys for primary research purposes. One defines the characteristics of the population one wishes to understand (the “universe”) and the number of respondents appropriate for the objectives at hand, and then launches a survey targeting that sample. For example, a sample of a few hundred Canadians will give marketers an answer that is representative of the country. Unfortunately, things are not so simple, and the process is increasingly fraught with danger.
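The claim that a few hundred respondents can speak for a whole country rests on the statistics of random sampling. As a rough illustration (and only as an illustration — the standard margin-of-error formula below assumes a truly random sample, which is precisely the assumption this article questions), here is how the 95%-confidence margin of error shrinks as sample size grows:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Normal-approximation margin of error for a proportion.

    n: sample size; p: assumed proportion (0.5 is the worst case);
    z: 1.96 for 95% confidence. Valid only for a simple random sample.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (300, 1000, 2000):
    # e.g. n=1000 gives roughly +/- 3.1 percentage points
    print(f"n={n}: +/-{margin_of_error(n) * 100:.1f} points")
```

The key point for what follows: these tidy error margins only hold when every member of the universe has a known chance of being selected. When a panel is not representative, no sample size rescues the estimate.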

I started my market research career at the Angus Reid Group in the 1990s, when the world wide web and email were in their infancy and almost all quantitative research data collection was conducted via landline telephones. Representativeness (the key to data accuracy) was greatly enhanced with the introduction of Random Digit Dial (RDD) platforms that, in theory, served up random telephone numbers that could potentially reach any household in the country. This allowed researchers to design sample frames that were reasonably well defined (certainly by geography and often by other variables such as income and household composition) and truly random — the basis of good research.

One of the reasons we sold the company (The Angus Reid Group) in 2000 was the realization that an online world was fast approaching, and the telephone was becoming an increasingly inadequate and inaccurate data collection tool. Leaving aside the advent of mobile phones, which made RDD more difficult, it became very clear to anyone walking around a call centre that the vast majority of Canadians were fed up with having their dinners interrupted to give their opinions to strangers. In short, the whole notion of random probability sampling is moot when much of your universe is either unreachable or unwilling to partake in the exercise at hand. However, all was not lost: the internet was coming.

Fast forward to current practices, where the vast majority of quantitative (and increasingly qualitative) data collection now takes place by way of surveys distributed via email to “online panels”. Panels were invented as the solution to replicating, as closely as possible, the scientific rigor of random digit dialing in a world where spam (the online equivalent of telephone RDD) is unworkable and often illegal.

Panels are large groups (often in the hundreds of thousands) of Canadians who have agreed to participate in online market research engagements. When done properly, panels are recruited through multiple channels and are composed of a truly representative group of Canadians, profiled on their individual geographic, demographic, behavioural and psychographic characteristics; a good panel replicates the full diversity of the country, closely matching its actual composition.

Furthermore, a good panel is representative, engaged (i.e. high response rates), maintains panelist tenure, and is sized so that no panelist is sent too many or too few surveys. Finally, and perhaps most importantly, good panels treat their panelists with respect and transparency, so that the panelists and the sponsoring organization know and trust each other.

Unfortunately, similar to what happened with RDD telephone data collection, the world of online research is being transformed by online technology and behaviour. The result: good panels are becoming increasingly difficult to find, and data accuracy is suffering, often with calamitous results.

As is usually the case, the root cause of panel deterioration is money: it is simply too expensive to recruit and maintain a stable, representative panel. The reality is that many “panels” today are often nothing more than portals to massive autonomous online populations or email databases. Individuals are recruited to participate in surveys on a one-time basis through a variety of channels and inducements. This means that one has little to no control over the sample source underlying any particular study. With this loss of control, it is often impossible to ensure that a sample is representative of anything beyond the people who happened to answer.

The end results are increasingly inexpensive surveys with increasingly dubious reliability and accuracy. Not all panels are created equal. Without a proper understanding of panel recruitment techniques, panelist churn, response rates and panel composition, one cannot assess data accuracy. In short, anyone can get a hundred people to answer a question, but the real skill is asking a hundred people a question in a way that accurately represents the views of the whole population. A well-managed panel can be a valuable asset, but as with most things, buyer beware.

So what are we as marketing professionals to do? Before starting any research initiative, one should conduct some due diligence on the research supplier under consideration. Straightforward questions include: where is the sample coming from? Are these known and profiled respondents? What is the average tenure of these panel members, and what does panelist churn look like? Is the sample being supplemented from outside sources and, if so, which ones? What routine steps are taken to ensure the respondents are real people (not bots), and what is done to clean out dirty data? All are reasonable questions to ask, and any reputable research supplier will have ready answers.

As with many things in the digital world, all is not what it seems, and market research is no exception.


Ed Morawski, President & COO, Angus Reid

Before launching Angus Reid with Dr. Angus Reid, Ed launched the research and consulting practice at Vision Critical (2006-2016). Prior to joining Vision Critical, Ed worked for Ipsos North America as Senior Vice President (2000-2006) and started his career with Angus Reid at the Angus Reid Group in Vancouver (1996).

Ed has a BAH in Economics from Queen’s University and an MA in Public Policy from the University of British Columbia. Ed’s research background, combined with his academic experience analyzing public opinion and communications on a variety of public policy and other government initiatives, makes him particularly well suited to drive greater insight on this subject.