From time to time, I hear industry pundits claim that ensuring research quality is as simple as applying a collection of disparate tools throughout the research supply chain. These folks suggest that panel companies could match their panelists against industry databases to screen out fake panelists, and that research companies could prevent duplicate respondents and speedsters by applying digital fingerprinting and proprietary speed-detection measures within surveys.
In my opinion, piecemeal solutions like these don’t address the most critical aspects of quality that clients have been pleading for: “transparency” and “consistency”. If a buyer has no way to audit or visually examine the overall impact that each quality assurance tool has made on their research, then they have no way to measure the quality of the project or the supplier. This doesn’t seem like a fair trade-off for the clients who have stated outright that they are willing to pay a premium for quality – but only if they can measure it and depend on it.
Submitted by Russ Rubin on August 11, 2010 - 17:18
For years, people have told me I should write a book. I ask them what I should write about, and they say, “You’re always telling stories, why don’t you write a book of all of your stories?” Then I ask, “Well, who would read that, let alone buy it?” And they say, “Well, I would.” I suppose this is a rudimentary form of marketing research, similar to “Do you like my new dress?” You can ask a question and get an answer, but at the end of the day, it is misleading research that will lead to a bad business decision.
Now - back to the point about me writing a book. I’m not sure that I have a book in me, but I sure do have a lot of chapters that I could write. And the first chapter is What Are We Trying To Do?
When I was on the client side, the Marketing teams would come to me and say, “We need an Attitude and Usage Study. What will it cost?” I had two ways of responding to this. The first way was obnoxious – “Do you want fries with that?” The second way was more thoughtful – “What are you trying to learn?”
Submitted by Michael Conklin, Chief Methodologist, on August 4, 2010 - 16:24
Don't read too much into the title above – you wouldn't want to extract meaning that isn't there.
I have been reading How We Decide by Jonah Lehrer, an excellent book that looks at the neuroscience behind the decisions we make. One key insight for market research professionals is that the human mind wants to find patterns and does a great job of finding them even when they are not present. The title of this post refers to an experiment in which Yale college students and a rat competed against each other at "finding the reward" in a maze. There was a single decision to be made: go left or go right at the beginning of the maze. The reward was placed via random assignment, with a probability of 60% for the left branch of the maze and a probability of 40% for the right branch.
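The arithmetic behind this setup is worth spelling out. A guesser who "matches" the probabilities – choosing left 60% of the time – is right only 0.6 × 0.6 + 0.4 × 0.4 = 52% of the time, while simply always picking the more likely branch is right 60% of the time. The quick simulation below (my own sketch, not from the post) illustrates the gap:

```python
import random

random.seed(42)
TRIALS = 100_000
P_LEFT = 0.6  # the reward lands on the left 60% of the time

match_wins = 0     # "probability matching": guess left 60% of the time
maximize_wins = 0  # "maximizing": always guess the more likely branch

for _ in range(TRIALS):
    reward_left = random.random() < P_LEFT
    guess_left = random.random() < P_LEFT  # the matching strategy's guess
    if guess_left == reward_left:
        match_wins += 1
    if reward_left:  # the maximizer always guesses left
        maximize_wins += 1

print(f"probability matching: {match_wins / TRIALS:.3f}")   # ~0.52
print(f"always choose left:   {maximize_wins / TRIALS:.3f}")  # ~0.60
```

Pattern-hunting students tend to play the first strategy; the rat, indifferent to patterns, converges on the second and wins.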
Submitted by Mark Menig on July 22, 2010 - 16:06
This is the second in an occasional series looking at the list of 26 Questions to Help Research Buyers of Online Samples assembled by ESOMAR, the global non-profit market research organization. You can review the first in the series here.
This time around we’ll look at a sample management/sample blending issue that's a critical question to pose to your online survey panel provider: “Do you have a policy regarding multi-panel membership? What efforts do you undertake to ensure that survey results are unbiased given that some individuals belong to multiple panels?” (Question #21 on the list.)
Submitted by Michael Conklin, Chief Methodologist, on July 20, 2010 - 15:28
Marketing researchers are often asked by clients to defend a particular market research methodology, sampling scheme, or result by answering the question “If I do as you propose, will I make the same business decision?” Fortunately, statistics gives us the tools to answer this question, and the clear answer is “I don’t know.” The answer is unknowable: while we may know the specific criteria a client used to make the business decision, we don’t know whether it was the right decision. And if it was the wrong decision, we would hope that we would indeed make a different decision the next time around.
Let’s look at this in a little more detail. Suppose we are deciding whether to develop a product concept to introduce into the market. Our decision criterion is that at least 50% of survey respondents say they would like to buy this product, based on its description. We run a survey and get an answer: 54%. For some reason, one week later we run the same survey and get an answer of 46%. I am shocked and appalled! I would make a completely different business decision based on these two studies. This is exactly the scenario painted by Kim Dedeker (then of Procter & Gamble) a few years ago in her indictment of the online research industry. How could the same study be done a week apart and get significantly different results – results that lead to completely different business decisions?
Submitted by Greg Marek on July 6, 2010 - 11:16
I ran across an interesting piece today on the New York Times Freakonomics blog: Daniel Hamermesh writes about how his local grocery store no longer carries his favorite coconut sorbet because, although it sells well in the chain’s Austin store, it doesn’t sell well in the rest of the chain’s stores, all in Texas. The chain purchases centrally, so they’ve discontinued the sorbet.
This got me thinking about SKU Rationalization – the analytical optimization process used to determine the merits of adding, retaining, or deleting items from a retailer's merchandise assortment. From the retailer’s point of view, optimizing SKUs helps them maintain shopper satisfaction while increasing average shopping basket size and shelf productivity.
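The sorbet anecdote shows the mechanics in miniature: when purchasing is centralized, the keep/delete decision keys off chain-wide performance, so an item that is strong in one store but weak elsewhere can fall below the cutoff. A toy sketch of that logic, with entirely hypothetical sales figures and cutoff:

```python
# Toy SKU-rationalization pass. The store names, unit figures, and the
# cutoff are invented purely to illustrate the centralized-purchasing logic.
weekly_units = {
    "coconut sorbet": {"Austin": 120, "Dallas": 8, "Houston": 11, "El Paso": 6},
    "vanilla ice cream": {"Austin": 90, "Dallas": 95, "Houston": 88, "El Paso": 70},
}
CHAIN_CUTOFF = 40  # keep a SKU only if its chain-wide average clears this

decisions = {}
for sku, by_store in weekly_units.items():
    avg = sum(by_store.values()) / len(by_store)  # chain-wide average
    decisions[sku] = "keep" if avg >= CHAIN_CUTOFF else "delete"
    print(f"{sku}: chain avg {avg:.0f} units/week -> {decisions[sku]}")
```

The sorbet sells 120 units a week in Austin yet still gets deleted, because the rule never looks at stores individually – which is precisely the shopper-satisfaction trade-off a good SKU rationalization model has to weigh.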
Submitted by Greg Marek on June 25, 2010 - 09:19
Forrester has started a research project to benchmark the use of social technologies across the enterprise, and is especially interested in hearing from market research professionals. I say “hallelujah!”
Much like Tamara Barber mentions on her blog, our market research clients run the gamut in their use of (and success with) social media, and those clients are definitely curious to know how their use of social media for online market research compares with that of their peer organizations. We get that question all the time.
The goal of Forrester’s research is to gauge the current reality of social media practitioners across the enterprise, and we’re definitely encouraging our market research practitioners to participate by taking the Forrester survey.
Our view on the drivers of success for social media programs for online market research or Enterprise Feedback Management (EFM):
Copyright © 2013 MarketTools, a MetrixLab Company. All rights reserved.