The Struggle: Comparing Data Wave-To-Wave


When it comes to tracking studies, or really any study where you’re comparing results wave to wave, I think most researchers understand that you use the same sample provider(s) the entire time. It is also typical to routinely investigate or defend differences in data across multiple points in time.

Sometimes a change in the data is simply due to a change in the marketing mix. Perhaps advertising changed over that period of time. Maybe something happened in the marketplace to explain the shift – distribution changes, mergers, negative news, etc.

However, there are times when a difference in the data just doesn’t make sense at all. Many researchers question the data itself, looking at seemingly minor differences in fielding to explain the shift. Maybe there were more females in one wave. Perhaps one region had more interviews in one wave, putting the geographic representation out of balance. Those are all somewhat legitimate explanations, but differences in data can be due to one primary reason that many researchers overlook or are unaware of: sample selection bias.

 

Panels Are Different

Online panels are different – it’s a fact. One primary reason panels differ is their recruiting methods. With the technology available today, there are a variety of ways to recruit respondents: some panels recruit by phone, some use Facebook, many have proprietary relationships with certain websites, some recruit in person, and the list goes on. Panels also manage their members differently and have different rules. How often do they send survey invitations? How do they send them? How do they incentivize? At what point can members redeem their incentives? Each of these differences can create different overall attitudes and behaviors among a panel’s members. We have several years’ worth of research on the sample industry that demonstrates these differences.

 

Panel Make-Up & Behaviors

Another explanation for seemingly unexplainable differences is that the make-up and behaviors of individual panels change over time. Panels evolve for a variety of reasons. They adjust their recruitment spend, creating member bases with more or less tenure than others. They also recruit based on client needs, which affects the overall composition of the panel. Recruiting more unacculturated Hispanics, more Millennials, more smartphone owners, or other hard-to-reach targets sounds great in theory, but it can produce different results from the same panel. Companies also try to improve their validation methods – perhaps adding a text verification step, adding red herring trap questions, or connecting other social accounts – which can also slightly affect the panel’s overall composition.

In addition, consolidation has affected how panels change over time. We have seen a lot of mergers and acquisitions in the past few years, such as Research Now and SSI. How the combined companies decide to merge their panels could change the data you receive.

As you have likely read in our Sample Landscape blog series, we have done multiple waves of research-on-research, testing partners in order to measure quality and service. This research has helped us see how panels change, as well as compare attitudes and behaviors – measures that are typically not used to balance samples or set quotas. If you have been keeping up with the series, the following findings will probably seem a bit familiar:

  • The device on which respondents take surveys varies by panel. One panel could have nearly no mobile users, while another could have half or more of its survey takers on a mobile device. Our research suggests that those who take surveys on mobile devices differ from those who take surveys on a desktop/laptop, even when the sample is properly balanced. We always recommend keeping a close eye on device distribution across waves, as it can vary greatly. (Read the full blog)
  • Brand awareness, brand ratings, and brand concept ratings vary greatly by panel. Even on big brand names, where you would typically expect little variance in ratings, there are large differences from partner to partner and from wave to wave. Some panels tend to rate items extremely high, while others tend to rate them lower.
  • Validation questions can also vary. We ask a lot of questions that we verify using 3rd-party data – items such as smoking and flu shot incidence. For example, 16.5% of the US population admits to smoking. However, in our research, the share of respondents who admitted to smoking ranged from as low as 10% on some panels to as high as 37% on others. (A simple sketch of this kind of benchmark check follows the list.)
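
As a rough illustration of this type of benchmark check – not EMI’s actual methodology – the sketch below compares each panel’s self-reported smoking incidence against the 16.5% third-party benchmark cited above and flags panels whose deviation exceeds normal sampling error. The panel names and counts are hypothetical.

    import math

    # Third-party benchmark cited above; panel names and counts are hypothetical.
    BENCHMARK_SMOKING = 0.165

    panel_incidence = {
        "Panel A": {"smokers": 52, "completes": 500},    # ~10% incidence
        "Panel B": {"smokers": 178, "completes": 480},   # ~37% incidence
        "Panel C": {"smokers": 85, "completes": 510},    # close to benchmark
    }

    def flag_deviations(observed, benchmark, z_crit=1.96):
        """Flag panels whose incidence differs from the benchmark beyond sampling error."""
        results = {}
        for panel, counts in observed.items():
            n = counts["completes"]
            p = counts["smokers"] / n
            se = math.sqrt(benchmark * (1 - benchmark) / n)  # SE of a proportion at the benchmark rate
            z = (p - benchmark) / se
            results[panel] = {"incidence": round(p, 3), "z": round(z, 2), "flagged": abs(z) > z_crit}
        return results

    for panel, result in flag_deviations(panel_incidence, BENCHMARK_SMOKING).items():
        print(panel, result)

The same pattern works for any behavior with a trusted external benchmark, such as flu shot incidence.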

Is there a fix to reduce the inconsistency? Yes – strategic blending. Some of the benefits of blending include the following (a simple sketch of the idea follows the list):

  • Reduced risk by using multiple partners
  • Ability to replace a supplier, if necessary, with minimal impact on project results
  • Greatly increased feasibility and fewer “top-up” situations
  • Wave-to-wave sample consistency
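
To make the idea of controlled blending concrete, here is a minimal sketch – purely illustrative, and not IntelliBlend itself – of allocating each wave’s completes across suppliers according to fixed blend proportions, so that every wave draws on the same source mix. The supplier names and proportions are hypothetical.

    # Hypothetical fixed blend plan: each wave allocates completes to suppliers
    # in the same proportions, keeping the source mix consistent wave to wave.
    BLEND_PLAN = {"Supplier A": 0.40, "Supplier B": 0.35, "Supplier C": 0.25}

    def allocate_wave(total_completes, plan):
        """Split a wave's target completes across suppliers per the fixed blend plan."""
        assert abs(sum(plan.values()) - 1.0) < 1e-9, "blend proportions must sum to 1"
        allocation = {supplier: int(round(total_completes * share)) for supplier, share in plan.items()}
        # Correct any rounding drift so the allocation sums exactly to the wave target.
        shortfall = total_completes - sum(allocation.values())
        allocation[max(plan, key=plan.get)] += shortfall
        return allocation

    print(allocate_wave(1200, BLEND_PLAN))  # wave 1: {'Supplier A': 480, 'Supplier B': 420, 'Supplier C': 300}
    print(allocate_wave(1200, BLEND_PLAN))  # wave 2 uses the identical mix

Holding the blend constant wave over wave is what keeps source-driven differences in attitudes and behaviors from masquerading as real market changes.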

At EMI, we pride ourselves on being incredibly experienced in blending. We have patented our own blending product, IntelliBlend®, which combines sample from traditional and non-traditional sources. We blend in an intentional, controlled way that delivers numerous benefits:

  • Increased accuracy and more representative demographic, behavioral, and attitudinal data
  • Increased feasibility and ability to deliver on quotas
  • Replicable for wave studies
  • Avoid “top-up” situations

Contact us for your next tracker study to receive a custom sample plan that best fits your project!