The Struggle: Comparing Data Wave-To-Wave

When it comes to tracking studies, or really any study where you compare results wave-to-wave, most researchers understand that you should use the same sample provider(s) the entire time. It is also common to have to investigate, or defend, differences in the data across multiple points in time.

You want to ensure that any changes in your data reflect shifts in the market, not inconsistencies in your sample. Still, there are times when a difference in the data just doesn’t make sense. Many researchers question the data itself, looking at seemingly minor differences in fielding to explain the shift. Maybe there were more females in one wave. Perhaps one region had more interviews in one wave, putting the geographic representation out of balance. Those are all legitimate explanations, and it is easy to understand how demographic differences can skew data, but differences can also stem from a cause that many researchers overlook or are unaware of: sample selection bias.

Sample selection bias occurs when a single panel is used and the data is limited to the attitudes and behaviors of the respondents within that panel. Every panel is different, whether due to recruitment methods, member makeup, panel management, or other factors.
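To make that concrete, here is a minimal sketch in Python (the panel names and awareness rates are invented for illustration, not drawn from our research): two panels fill identical demographic quotas, yet a tracker that switches between them sees a shift that has nothing to do with the market.

```python
import random

random.seed(42)

# Hypothetical panels: both can fill the same demographic quotas, but
# their recruitment differs, so underlying brand awareness differs too.
PANEL_AWARENESS = {
    "panel_a": 0.55,  # e.g., recruited heavily via social media
    "panel_b": 0.40,  # e.g., recruited via loyalty-program partners
}

def field_wave(panel: str, n_completes: int = 1000) -> float:
    """Simulate one wave fielded on a single panel and return the
    observed brand-awareness topline."""
    rate = PANEL_AWARENESS[panel]
    aware = sum(random.random() < rate for _ in range(n_completes))
    return aware / n_completes

# Wave 1 fielded on panel A, wave 2 on panel B: the market did not
# move, but the topline does, purely from the change in sample source.
wave_1 = field_wave("panel_a")
wave_2 = field_wave("panel_b")
print(f"Wave 1 awareness: {wave_1:.1%}")
print(f"Wave 2 awareness: {wave_2:.1%}")
print(f"Apparent shift: {wave_2 - wave_1:+.1%}")
```

Both samples could look identical on age, gender, and region, and the shift would still appear. That is the bias at work.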

Recruitment and Incentives

Online panels are different. It’s a fact. One primary reason is recruiting methods. With the technology available today, there are many ways to recruit respondents: some panels recruit by phone, some through social media, many have proprietary relationships with certain websites, and others recruit in person, through video games, and beyond. Panels also manage their members differently and have different rules. How often do they send survey invitations? What method do they use to send them? How do they incentivize? At what point can members redeem their incentives? Each of these differences can shape the overall attitudes and behaviors of a panel’s members. We have over a decade of research on the sample industry demonstrating these differences.

Makeup and Behavior

Another source of unexplained differences is that the makeup and behavior of individual panels change over time, for a variety of reasons. Panels adjust their recruitment spend, creating memberships with more or less tenure than others. Panels also recruit based on client needs, which affects their overall composition: recruiting more unacculturated Hispanics, more Millennials, more smartphone owners, or other hard-to-reach targets sounds great in theory but can produce different results from the same panel. Companies also refine their validation methods over time, perhaps adding a text-verification step, red-herring trap questions, or connections to social accounts, which can further alter a panel’s composition.

Industry Consolidation

Industry consolidation has also affected how panels change over time. We have seen countless mergers and acquisitions in the past few years, and how those panels combine can change the data you receive. For example, new management can mean changes to how the panels are run, the payment methods used, and so on. Combining panels also changes the overall makeup, which in turn affects the attitudes and behaviors of the membership.

As you have likely read in The Sample Landscape, we have conducted many waves of research-on-research over the last 10+ years, testing partners to measure quality, service, performance, and differences. This research has shown us how panels change and lets us compare attitudes and behaviors, which are typically not questions used to balance or set quotas. If you’ve read the latest edition of The Sample Landscape report, the following findings will probably look familiar:

  • Awareness levels can vary by as much as 25 percentage points based on sample provider selection.
  • Brand ratings can vary by as much as 20 percentage points based on sample provider selection.
  • Concept ratings can vary by as much as 30 percentage points based on sample provider selection.

Is there a way to reduce this inconsistency? Yes: strategic sample blending. Some of the benefits of blending include (see the allocation sketch after this list):

  • Reduced risk from relying on multiple partners
  • Strategically selected, complementary sample sources
  • The ability to replace a supplier, if necessary, with minimal impact on project results
  • Substantially increased feasibility and avoidance of top-up situations
  • Wave-to-wave sample consistency
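As a loose illustration of the mechanics, and not a description of any specific product, a fixed-ratio blend can be allocated like this in Python (the provider names and ratios are hypothetical):

```python
from math import floor

# Hypothetical fixed blend: each provider contributes the same share of
# completes every wave, keeping the sample composition consistent.
BLEND = {"provider_a": 0.50, "provider_b": 0.30, "provider_c": 0.20}

def allocate(n_completes: int, blend: dict[str, float]) -> dict[str, int]:
    """Split a wave's target completes across providers per the blend,
    assigning any rounding remainder by largest fractional share."""
    raw = {p: n_completes * share for p, share in blend.items()}
    alloc = {p: floor(v) for p, v in raw.items()}
    remainder = n_completes - sum(alloc.values())
    for p in sorted(raw, key=lambda p: raw[p] - alloc[p], reverse=True):
        if remainder == 0:
            break
        alloc[p] += 1
        remainder -= 1
    return alloc

# The same ratios apply wave after wave, so results stay comparable.
print(allocate(1200, BLEND))
# -> {'provider_a': 600, 'provider_b': 360, 'provider_c': 240}

# If one source must be swapped out, only its share of the blend
# changes, which limits the impact on project results.
BLEND_V2 = {"provider_a": 0.50, "provider_b": 0.30, "provider_d": 0.20}
print(allocate(1200, BLEND_V2))
```

In practice the blend itself is tuned to the study, based on which sources complement each other and which quotas each can deliver, but the fixed ratio is what drives the wave-to-wave consistency.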

At EMI, we pride ourselves on our deep experience in blending. We have patented our own strategic blending product, Intelliblend®, which combines sample from traditional and non-traditional sources. We blend in an intentional, controlled way that offers numerous benefits:

  • Increased accuracy, with more representative demographic, behavioral, and attitudinal data
  • Increased feasibility and the ability to deliver on quotas
  • Replicability for wave studies
  • Avoidance of top-up situations

Reach out to us for your next tracker study to receive a custom sample plan that best fits your project!