Day 2 of Qual 360 North America just finished up. Here are the key insights and takeaways told in short soundbites through Tweets...
Big day for our product team here at Dialsmith. We just launched a major upgrade to Perception Analyzer Online, our full-featured survey service offering online dial testing for recorded media. It's a milestone release as we've rebuilt it from the ground up on HTML5 to allow for iPad/tablet compatibility and to provide greater flexibility in using our online dial testing element in conjunction with other survey and community platforms. We've also added an optional "Take Action" button that participants can use during the exercise to indicate a moment where they would take a specific action, such as changing the channel or making a purchase.
Qual 360 North America is slated for next week in Atlanta, and yours truly, Dialsmith, is queued up to co-host a first-of-its-kind workshop with fellow market research tech developer/service provider Invoke Solutions. We're also sponsoring the official after-hours party hosted by The Research Club, so it should be an exciting few days in the A-T-L.
Whether you're attending or just tracking the action from the conference, here are some helpful links you'll want to take note of:
Qual 360 Workshop co-hosts Invoke Solutions’ Wayne Goodreau and Dialsmith’s David Paull preview their interactive, live research event and chat about how technologies are converging to make real-time, online quali-quant studies a reality.
Of course, we'll also be live blogging from the conference so make sure you check back here for news, updates and photos from Qual 360 and The Research Club party.
Yes, we know that at this point, the giant sugar rush we all experienced around this year’s Super Bowl ads has subsided. But when we got the opportunity to "talk shop" with a fellow Super Bowl ad "junkie," especially one with the acumen in brand and marketing strategy that Tim Calkins possesses, we weren't about to pass it up.
Tim is clinical professor of marketing at Northwestern University’s Kellogg School of Management and co-academic director of Kellogg’s branding program. He has studied and analyzed the strategies employed by Super Bowl advertisers for almost two decades and leads the Kellogg Super Bowl Advertising Review. We crossed paths with Tim as both he and Dialsmith CEO David Paull participated in this market research panel which discussed and dissected the spots that ran during this year's Big Game. Tim had some great insights during the panel discussion so we asked (and he graciously agreed) to dive deeper with us on his research approach and what he's learned from his years of studying these ads and the companies that invest in them.
Q: Where did your interest in the Super Bowl ads come from?
Tim: I have been studying Super Bowl ads for almost two decades. Before joining the Kellogg School of Management, I spent 11 years at Kraft Foods. While at Kraft I regularly reviewed the Super Bowl spots as a learning exercise with my team. I began the Kellogg Super Bowl Advertising Review in 2006.
Q: How have brands changed their approach to Super Bowl advertising since you first began studying them?
Tim: Two things have changed significantly for Super Bowl advertisers. First, the stakes have gone up. The price of a Super Bowl ad has increased dramatically. Viewership is also up. This puts enormous pressure on the advertisers. Second, the growth of digital communication has transformed the Super Bowl marketing opportunity. Ten years ago most Super Bowl advertisers would create a Super Bowl spot and run it. Now almost every advertiser uses the spot along with a website, social media effort and PR campaign.
Q: How has your methodology for studying the ads changed from when you first began studying them?
Tim: We have formalized our process over the years. In particular we embraced the ADPLAN framework to use when evaluating the ads. This framework involves six different factors. Each is important: attention, distinction, positioning, linkage, amplification and net equity.
The event has grown in size and scope. This year we had almost 100 people gathered at the Kellogg School of Management to watch the Super Bowl spots.
Q: What do you feel are the most important factors in determining whether a Super Bowl ad is successful or not?
Tim: There isn’t one critical factor. To be successful, a spot has to succeed on a number of different dimensions. Each part of the ADPLAN framework matters.
Linkage is a factor that many companies struggle with; they create captivating spots but there isn’t much linkage to the brand. As a result, people remember the creative but not the brand. This year, for example, Toyota ran a campaign saluting fathers. The advertising was heart-warming but didn’t connect to the brand.
Q: Are there bigger lessons learned for brands and advertisers from the results of your Super Bowl ad studies?
Tim: You can learn an enormous amount about advertising and marketing by studying the Super Bowl. Each advertiser on the game is working exceptionally hard to break through the clutter. As a result, on the Super Bowl you see the latest in marketing techniques.
People sometimes say the Super Bowl is a completely unique event when it comes to marketing. I disagree; the Super Bowl gets a lot of attention, but the fundamentals of good communication apply to the Super Bowl just as they apply to every other event.
Q: The brands that you gave high scores to this year, what did they do right?
Tim: The top advertisers on the Super Bowl this year, according to the Kellogg Super Bowl Advertising Review panel, were McDonald's, Coke, Fiat, Clash of Clans, Always and Bud/Bud Light.
There wasn’t one formula for success. McDonald's, Coke, Bud and Bud Light all benefitted from exceptionally strong branding. It was very clear who the ad was from. Fiat and Budweiser told engaging stories that really drew people in; this helped with attention. Always ran a truly distinctive spot and excelled on distinction; there was nothing else like it on the Super Bowl. All the best spots attracted attention and had solid brand linkage.
Q: The brands that scored low, what did they do wrong?
Tim: There were several advertisers that received low scores this year. The list includes: Squarespace, Lexus and Nissan. Lexus simply didn’t break through. The brand ran two spots featuring cars driving in a dynamic fashion. This just isn’t enough to stand out on the game.
Squarespace developed an ad featuring Jeff Bridges chanting “ommmmm.” This ad scored poorly largely due to a positioning problem. People on our panel weren’t clear what was being advertised or why they should buy it.
Q: What will we be talking about next year at this time in regards to the Super Bowl ads?
Tim: Next year we will be marveling at the new, record high price for a Super Bowl spot. This year the Super Bowl set a record in terms of viewership. In a world of fragmenting audiences, the Super Bowl has unique reach. Networks understand this and will keep increasing the fee to participate.
If you want to revisit our results from this year's Super Bowl ad ratings, click here.
[Thanks to our guest blogger Megan O'Hara for contributing this article.]
In our line of work, we’re constantly reminded that words have a profound impact on how we form our opinions and preferences.
That’s a topic discussed in detail in this article by Mike Seccombe of "PowerHouse." Seccombe cites numerous examples of how “linguistic sleights of hand” are important tactics in England’s political power struggle. Per Seccombe, “One political person’s ‘refugee’ is another’s ‘asylum seeker,’ is another’s ‘illegal,’ is another’s ‘boat person’ or ‘queue jumper.’”
Whether it's politics or policy, litigation, sales or marketing, putting a method behind the words you choose to use can make the difference between winning and losing.
My consulting firm, Presentation Testing, uses Perception Analyzers to help our clients identify the words, phrases, messages and arguments that engage audiences and help them form opinions. The dials are an efficient tool for A/B testing messages and words, and for pinpointing which ones have the most impact with respondents. As a case in point, here’s a real-world project that Presentation Testing did for a client a few years ago:
The project focused on the topic of international trade. In Round One, the client was message testing with the dials and a portion of that messaging stated that, “protectionism is bad.” The respondents’ dial lines plummeted. Presentation Testing found that while most respondents didn’t know exactly what "protectionism" was, they dialed down on the message that it was bad because the word “protectionism” sounded like a good thing that we should be doing.
Based on this feedback, we changed the wording in Round Two from “protectionism” to “economic isolationism.” Most respondents, again, didn’t know what “economic isolationism” meant. Despite this, respondents dialed up the message that “economic isolationism” was bad because it sounded like a bad thing and something they should be against.
As this exercise showed, the words you select can have powerful implications even if your audience doesn’t know their exact meaning.
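The two rounds above amount to a simple A/B wording test: the same claim with two different words, each scored on 0-to-100 dials. A minimal sketch of that comparison might look like the following; the dial scores are invented for illustration, since the project's actual data isn't shown.

```python
# Hypothetical A/B wording comparison. The variant names and dial scores
# are made up to illustrate the method, not taken from the real study.
from statistics import mean

round_one = {"wording": "protectionism is bad", "dials": [20, 25, 30, 15, 35]}
round_two = {"wording": "economic isolationism is bad", "dials": [70, 75, 65, 80, 60]}

def score(variant):
    """Mean dial score (0-100) across respondents for one wording."""
    return mean(variant["dials"])

# The wording with the higher mean dial score "wins" the A/B test.
winner = max((round_one, round_two), key=score)
```

In the real project the comparison ran across full message transcripts rather than single sentences, but the core decision rule is the same: same message, different words, compare the dial means.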
It’s also important to remember that when it comes to words, meaning is in the eye of the beholder, or, to be more specific, your intended audience.
So, when we test client messages, we look at the dial results based on whatever demographics are the most relevant to the project—in the case of political or policy message testing, we typically look at political persuasion. In the “PowerHouse” article, Seccombe notes that, “A growing body of research indicates people of different political persuasions literally hear, see and even smell the world differently from one another.” Seeing how left-leaning versus right-leaning respondents react to a message helps us figure out what words and messages resonate with each specific audience as well as which appeal to both.
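Splitting dial results this way is essentially a group-by over a demographic field. Here's a hedged sketch of that split; the respondent records, field names and values are all invented for illustration.

```python
# Illustrative only: group respondents' mean dial scores by a demographic
# field (here, political lean). Records and numbers are made up.
from collections import defaultdict
from statistics import mean

respondents = [
    {"lean": "left", "dial_mean": 72},
    {"lean": "left", "dial_mean": 68},
    {"lean": "right", "dial_mean": 40},
    {"lean": "right", "dial_mean": 44},
]

by_lean = defaultdict(list)
for r in respondents:
    by_lean[r["lean"]].append(r["dial_mean"])

# A large left/right gap flags a partisan message; similar means
# suggest the wording appeals across both audiences.
split = {lean: mean(vals) for lean, vals in by_lean.items()}
```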
As both my real-world example and Seccombe’s article attest to, the words you choose can make all the difference. So, make sure you have a method in place to test and validate. Your client’s and/or your firm’s success may depend on it. Want to learn more about our method of message and word testing? Download this free eBook on the Essentials of Moment-to-Moment Research.
While everyone was busy digesting the on-field action as well as all those wings, nachos, pizza, etc. during the Super Bowl, our Dialsmith team was busy trying to make sense of the Slidermetrix polling we were seeing on the Super Bowl ads.
As a quick refresher before we jump into the results, we use Slidermetrix to measure "gut reaction" to these ads. Using Slidermetrix, we ask viewers to watch and continuously rate the ads on a scale from 0 (Hate It!) to 100 (Love It!) using an on-screen slider. We collect data on the slider position every second as the viewer watches, which allows us to report on how they are reacting in-the-moment to what they are experiencing. With the data we collect, we can report on an overall mean score for the ad as well as pinpoint specific seconds where the ads peaked or where they hit bottom. We can also split the data by gender.
With the game over and the results tallied, here's what Slidermetrix told us about this year's ads:
Want to dive deeper into our Super Bowl ad ratings? See Dialsmith CEO David Paull discuss our Super Bowl ad results on Portland, Oregon's NBC affiliate and read our final Slidermetrix report on the 2015 Super Bowl ads.
We've been busily working to keep pace with the Super Bowl advertisers who have pre-released their spots. So far, so good. We have 15 ads available for viewing and rating second-by-second on our Slidermetrix VideoLink site as of this posting and more on the way.
Early returns are starting to reveal some interesting trends. The "pull at the heartstrings" themed ads are repeating their success from last year, provided they feel genuine. And the funny ads? Well, they better be really funny or else watch out. Case in point:
With our fingers on the pulse of how viewers are responding to these early bird Super Bowl ads, we've been adding our two cents to the discussions. If you want to see what we've been saying, here are some links:
Stay tuned as we continue to track the good, the not-so-good and the downright ugly, leading up to the Big Game. Be sure to follow us at @Dialsmith for the latest.
For 30 seconds of air during this year's Super Bowl, advertisers are paying as much as $4.5 million. That's a whopping $150K per second and a huge investment to make an impression on viewers as they enjoy the Big Game. It goes to show that in the world of big boy and big girl advertising, every second counts.
With six-figure seconds the norm now for Super Bowl ad buys, our research geeky brains here at Dialsmith want to know more than simply which of the Big Game ads viewers like the most. We want to know which of those "Super Seconds" are capitalizing on the investment and which are not. The only way of getting at this granular level of viewer feedback is to use a tool like Slidermetrix.
So, for the third year in a row, Dialsmith is working with a major media partner (the online video industry news and commentary website VideoInk) to give viewers the chance to score the Super Bowl ads on a continuous, second-by-second basis using Slidermetrix.
Using Slidermetrix, a viewer is asked to continuously rate what they are watching second-by-second. So, in the case of a Super Bowl ad, we get a real-time, second-by-second snapshot of each viewer's opinion, providing a deeper level of data than what you get from other types of ratings (like the star or thumbs up/down ratings you typically see online).
This method is similar to the Moment-to-Moment dial testing that market research consultants do for their media clients who use their data for placement and programming decisions as well as for content direction, including decisions by advertisers on the content for ads that air during the Super Bowl.
Be sure to check back on our blog and on the VideoInk site for news, updates and results of our 2015 Super Bowl ad ratings. For more background, check out these links:
It’s always energizing to see our dials back in action during high-stakes political events like the broadcast coverage of last night’s State of the Union. And while some of the networks have traded their in-person dial groups for new second-screen polling tools, the method employed to score and dissect the President’s address can be traced back to the use of our dials in CNN’s coverage of the 2008 Presidential debates. This is the same continuous, moment-to-moment method of gathering feedback that we, here at Dialsmith, have pioneered and facilitated through the development and support of our Perception Analyzer tools for more than 30 years. Why is this method as relevant today as it was a decade or even two decades ago? Well, the benefits were apparent again last night, as it gave those reporting the ability to pinpoint the specific moments or messages with the highest impact.
For its coverage of the State of the Union, Fox News again leaned heavily on political pollster Frank Luntz, who ran his own in-studio focus group equipped with our Perception Analyzer dials. Here’s a clip.
CNN and MSNBC took a different approach in their SOTU coverage this year, using a second screen polling tool to gather continuous feedback from thousands of viewers across the country. While reporting these results is more infotainment than scientific, the ability to continuously poll a large population of viewers from their homes as they watch opens doors to all new applications for dial testing.
For us here in the dial testing world, this year's coverage of the SOTU is another indicator that the dial testing methodology is alive and well and expanding in new directions. New tools and new applications of the method are being explored and we here at Dialsmith are excited to be right in the middle of it. If this year's SOTU got you thinking about how dial testing can help you in your research, check out the free "Essentials of Moment-to-Moment Research" eBook or our List of "10 Ways Dial Testing Will Improve Your Research."