Assessing What Consumers See

By Peter Gold

Is your package noticeable? Does it convey the right message? Solid answers require a method to measure design impact early on.
Attitudes and perceptions alone won’t help you to evaluate a package design effectively. Consumer reactions are what count when the package sits on the store shelf.
Side-by-side comparisons give you a true measure of how a brand stacks up against its peers, because they let consumers evaluate your brand against competitors.
This is where evaluation goes astray in some package development projects. Brand managers and designers attempt to gauge the impact of packaging without a typical competitive environment. That’s like trying to gauge the meaning of a single word without the context of a sentence. What you want is an approach that does a better job of replicating reality. It should measure your package’s performance on three dimensions:
- Impact, which measures a package’s ability to break through competitive “clutter” on the shelf.
- Findability, which determines how easily consumers can locate your package among competitors.
- Imagery, which assesses the “thoughts” and “feelings” your package communicates.

Let’s examine two of the best assessment methods: eye-tracking and Internet-based research. While both come with strengths, be wary of their potential weaknesses, too.
Method 1: Eye-Tracking
Eye-tracking offers one of the best methods for letting consumers examine package design in a more realistic setting.
Respondents view packages set within the context of a shelf set while researchers track their eye movements using a passive device. Researchers determine where the eye goes and how long it fixates on a particular spot. This enables them to measure a package’s “noticeability.”
Eye-tracking offers four advantages.
First, it uses larger sample sizes than focus groups, allowing brand managers and package designers to develop statistical projections by polling large groups of respondents.
Second, it more realistically represents the act of shopping. Eye-tracking research is absent of discussion or group dynamics that can influence an individual’s behavior.
Third, eye-tracking displays packages within a competitive framework.
Fourth, eye-tracking is more or less passive. It focuses on consumer responses rather than consumer thoughts and opinions.
But eye-tracking research also carries drawbacks. Little definitive evidence exists to suggest that the length of time an eye trains on a particular package correlates with sales or with the overall impression the package makes in terms of affective reactions.
You can probably recall situations when you have stared mindlessly, only to have a stray sound or thought bring you back to consciousness.
When conducting eye-tracking research, you can’t be certain that staring at an item is the same as noticing it and cognitively processing any aspect of it.
In addition, eye-tracking, like in-person focus groups, usually requires a central location for conducting research. These locations pose their own problems.
Central-location methods also tend to introduce demographic and geographical biases because eye-tracking studies concentrate on a small group of locations. Participants may exhibit characteristics that differ markedly from the general population.
Why? Only certain types of people visit malls, the most common central-location setting. Therefore, their reactions may vary significantly from the product’s target consumer.
Consider, too, the influence of the research setting. How strange it must feel to sit in a room inside a mall with a device that measures eye movements.
Eye-tracking shares one concern with the focus-group approach: an interviewer is present who can inadvertently influence respondent behavior.
While eye-tracking is probably preferable to focus groups for downstream packaging evaluation, it has enough limitations to make brand managers and design professionals think carefully when considering it.
Couple these concerns with cost: eye-tracking studies are usually much more expensive than focus groups.
Method 2: The Internet
The Internet has paved the way for new methods of conducting marketing research, and offline methods have migrated online at a meteoric pace.
Whereas phone research took almost 20 years to replace in-person interviews, the Internet has become an accepted method in less than five years.
Five factors are fueling the Internet’s growth as a data-collection tool:
- The relatively low cost per interview, particularly for large samples.
- Faster data collection. Depending upon the study, the Internet can take 25 percent to 75 percent less time than traditional research techniques.
- Online researchers’ development of specialized weighting techniques that overcome potential sampling biases.
- The increasing difficulty associated with obtaining telephone interviews. The rapid rise of call screening and the do-not-call registry have made telephone interviews a major problem.
- Some malls that house interviewing and testing facilities have closed. This trend has made offline research more expensive and less reliable.

When it comes to evaluating package design, the Internet can furnish speed, low cost, geographic dispersion and an environment that’s free of interviewer bias.
Brand managers can now also use the Internet to create a competitive context and to exert tighter control over how long respondents view package designs.
Many marketers using online research ask respondents simply to rate the attributes of an individual package (or a line of packages), as well as their overall purchase intent.
Some Internet-based package testing occurs in a competitive setting, with respondents seeing all the design options at once. Some Internet researchers even use a sequential-monadic technique, in which consumers view design options one at a time, with the package shown first rotated among respondents.
Unfortunately, comparative techniques in general are rather unrealistic. Consumers never get to choose among a variety of package design options when standing in front of a store shelf.
Ultimately, your goal should be to test your package in as realistic a setting as possible. This means determining how a package design might perform on measures such as impact, findability and imagery when placed next to competitors on the store shelf.
You also want minimal interference from interviewers.
The author, Peter Gold, is Vice President of Consumer Packaged Goods Research at Harris Interactive, a market research firm in Rochester, N.Y. Contact Peter at 203.840.4740 or pgold@harrisinteractive.com.

Harnessing The Internet’s Power In Assessing Package Design
When assessing package impact on consumers during the development process, two ingredients guide a successful project. First, a “true” package image. Second, a process for viewing the package as consumers would see it inside a store.
The Internet provides an effective tool for creating a collaborative workflow that enables you to achieve both of these goals. One option, a new patent-pending technology from Harris Interactive, enhances online packaging reviews.
Harris Interactive’s technology, called “Shelf Impact,” uses special image files to make images sharper on a computer screen. It provides high clarity even on small images.
This enables consumers to view more realistic on-screen representations of packages amid shelf sets of competing brands without the time and expense of setting up in-person meetings.
Besides special image files, the second component driving the Shelf Impact technology is a method for controlling the length of time that respondents can view a package image on their screen.
“This ’flash’ of packaging images resembles the method associated with a tachistoscope—a tool that exposes images for a fraction of a second—but at a more economical cost to brand managers,” says Peter Gold, Vice President for Consumer Packaged Goods Research at Harris Interactive.
Here’s how the first phase of the Shelf Impact technology works:
Respondents visit a Web page to view a shelf set of packages for a fraction of a second. Next, the computer screen displays all the packages and prompts respondents to identify as many as they can recall.
This sequence of an image “flash” and a prompt for recall repeats three times. It determines whether respondents properly identify the targeted test and control packages. It also assesses whether recall and identification improve with repeated exposure.
Harris Interactive defines this measurement as Shelf Impact.
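For readers who want to picture the scoring behind this flash-and-recall sequence, here is a minimal sketch in Python. The exposure time, brand names and scoring rules are illustrative assumptions; the article does not disclose Harris Interactive’s actual implementation.

```python
# A minimal sketch of the flash-and-recall scoring logic described above.
# Exposure time, brand names and data structures are assumptions for
# illustration, not Harris Interactive's actual implementation.

EXPOSURE_MS = 300   # assumed "fraction of a second" flash duration
ROUNDS = 3          # the article specifies three flash/recall cycles

SHELF_SET = ["BrandA", "BrandB", "TestPkg", "ControlPkg", "BrandE"]
TARGETS = {"TestPkg", "ControlPkg"}  # test and control packages of interest

def score_impact(recalls_by_round):
    """Given one set of recalled package names per round, return the
    per-round count of correctly recalled targets and whether recall
    improved (never declined) across repeated exposures."""
    assert len(recalls_by_round) == ROUNDS
    hits = [len(TARGETS & recalled) for recalled in recalls_by_round]
    improved = all(a <= b for a, b in zip(hits, hits[1:]))
    return hits, improved

# Example respondent: misses both targets at first, then picks them up.
recalls = [{"BrandA"}, {"BrandA", "TestPkg"}, {"TestPkg", "ControlPkg"}]
hits, improved = score_impact(recalls)
print(f"Target hits per round: {hits}; improved with exposure: {improved}")
```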
The second procedure is called Findability. It determines how easily respondents can locate a package on a shelf of competitors.
Respondents preview an image of a single package. The screen then flashes a shelf set of packages, including those for the target brand.
Next, they view an empty shelf set with a grid for responses. They are asked to pinpoint the target brand’s location in the shelf set.
In both the Findability and Impact sections, target package images rotate randomly in their positions in the shelf sets to reduce potential positional biases.
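As a rough illustration of that random rotation, the sketch below shuffles packages into shelf positions for each respondent and scores whether the grid cell a respondent picks is where the target actually sat. The grid size and brand names are hypothetical, not the vendor’s actual setup.

```python
import random

# Hypothetical sketch of randomized shelf placement and findability scoring.
# Shelf layout, brand names and grid coordinates are illustrative assumptions.

ROWS, COLS = 2, 4   # a 2x4 shelf grid for this example
PACKAGES = ["TestPkg", "BrandA", "BrandB", "BrandC",
            "BrandD", "BrandE", "BrandF", "BrandG"]

def build_shelf(packages, rng):
    """Shuffle packages into random grid positions to reduce positional bias."""
    order = packages[:]
    rng.shuffle(order)
    return {(r, c): order[r * COLS + c] for r in range(ROWS) for c in range(COLS)}

def score_findability(shelf, target, picked_cell):
    """True if the respondent's picked grid cell holds the target package."""
    return shelf[picked_cell] == target

rng = random.Random(42)            # fixed seed for a reproducible example
shelf = build_shelf(PACKAGES, rng)
target_cell = next(cell for cell, pkg in shelf.items() if pkg == "TestPkg")
print("Target sits at", target_cell)
print("Correct pick:", score_findability(shelf, "TestPkg", target_cell))
```

Rotating the target’s position this way means no single shelf slot systematically favors the test package across the sample.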
Imagery, the final aspect of Shelf Impact, measures package attributes, respondent likes and dislikes, and purchase interest.
—Jim George, Senior Editor

So, What About Focus Groups?
In-person focus groups are a common marketing research technique for evaluating package design. This qualitative technique offers a low-cost method of providing immediate feedback.
However, design teams often make the mistake of not allowing focus groups to view their own packaging alongside competitors.
If you want to use the focus-group approach, be clear in your objectives. If the goal is to understand how consumers think about a package design in order to facilitate the creative process, then focus groups can be powerful.
However, when we ask these same respondents to evaluate individual design elements, or to judge which designs are best overall, are we asking too much? Ask yourself these questions:
- What qualifies a small group of respondents to act as package design consultants?
- What experience do they have that justifies listening to their comments?

Having consumer focus groups evaluate design options is a different assessment process from what actually occurs when those same consumers encounter those options on the store shelf.