When it comes to tracking studies, or really any study where you're comparing results wave to wave, most researchers understand that you use the same sample provider(s) the entire time. Researchers also routinely have to investigate or defend differences in data across multiple points in time.
Sometimes a change in data is simply due to a change in the marketing mix. Perhaps advertising changed over that period of time, or something happened in the marketplace to explain the shift – distribution changes, mergers, negative news, etc.
However, there are times when a difference in data just doesn't make sense at all. Many researchers question the data itself, looking at seemingly minor differences in fielding to explain the shift. Maybe there were more females in one wave. Perhaps one region had more interviews in one wave, putting the geographic representation out of balance. Those are all somewhat legitimate explanations, but differences in data can often be traced to one primary cause that many researchers overlook or are unaware of: sample selection bias.
Online panels are different – it's a fact. One primary reason panels differ is their recruiting methods. With the technology available today, there are many ways to recruit respondents: some panels recruit by phone, some use Facebook, many have proprietary relationships with certain websites, some recruit through in-person methods, and the list goes on. Panels also manage their members differently and have different rules. How often do they send survey invitations? How do they send them? How do they incentivize? At what point can members redeem their incentives? Each of these differences can shape the overall attitudes and behaviors of a panel's members. We have several years' worth of research on the sample industry that demonstrates these differences.
Another explanation for unexplained differences is that the make-up and behaviors of individual panels change over time, for a variety of reasons. Panels adjust their recruitment spend, creating memberships with more or less tenure than others. Panels also recruit based on client needs, which affects their overall composition. Recruiting more unacculturated Hispanics, more Millennials, more smartphone owners, or other hard-to-reach targets sounds great in theory but can produce different results from the same panel. Companies also work to improve their validation methods – perhaps adding a text verification step, red herring trap questions, or connections to other social accounts – which can also slightly affect a panel's overall composition.
Consolidation has also changed panels over time. We have seen a lot of mergers and acquisitions in the past few years, such as Research Now and SSI. How the merged companies decide to combine their panels can change the data you receive.
As you have likely read in our Sample Landscape blog series, we have done multiple waves of research on research, testing partners in order to measure quality and service. This research has helped us see how panels change, and to compare attitudes and behaviors – attributes that are typically not used to balance or set quotas. If you have been keeping up with the series, these findings from our research-on-research will probably seem a bit familiar.
Is there a fix to reduce the inconsistency? Yes: strategic blending. At EMI, we pride ourselves on being incredibly experienced in blending. We have patented our own blending product, IntelliBlend®, which combines sample from traditional and non-traditional sources. We blend in an intentional and controlled way that offers numerous benefits.
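To make the idea of controlled blending concrete, here is a minimal sketch of one piece of it: holding each panel's share of completes constant from wave to wave, so that shifts in any single panel's composition are diluted rather than driving the whole trend. The panel names, blend proportions, and wave size below are purely hypothetical illustrations, not EMI's actual method or IntelliBlend® itself.

```python
from math import floor

def allocate_completes(total, blend):
    """Allocate a wave's completes across panels in fixed blend
    proportions, using largest-remainder rounding so the counts
    sum exactly to the requested total."""
    raw = {panel: total * share for panel, share in blend.items()}
    counts = {panel: floor(x) for panel, x in raw.items()}
    shortfall = total - sum(counts.values())
    # Hand any leftover completes to the panels with the largest remainders.
    for panel in sorted(raw, key=lambda p: raw[p] - counts[p], reverse=True)[:shortfall]:
        counts[panel] += 1
    return counts

# Hypothetical blend, held constant across waves of a tracker
blend = {"Panel A": 0.40, "Panel B": 0.35, "Panel C": 0.25}
print(allocate_completes(1200, blend))  # wave 1
print(allocate_completes(1200, blend))  # wave 2: identical proportions
```

Because the proportions are fixed, a wave-over-wave change in results cannot be explained by a shift in the panel mix – one common source of sample selection bias is removed from the comparison.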
Contact us for your next tracker study to receive a custom sample plan that best fits your project!