Bridging the Qual/Quant Divide

Concerns about “big data” have dominated conversations in the past few years. What does “big data” really mean? What constitutes “big” versus “small” data? How does “big data” lead to real insights? The promise of data hasn’t played out as planned, and we are starting to see a rethinking of how it should be used. Data has valuable uses, to be sure, but the belief that data would become The Thing that changes the world simply hasn’t manifested, because data can’t provide meaning to the human condition behind the numbers. Quite simply, questions regarding people and markets cannot be answered by brute-force number crunching.

It’s important to note that the social sciences have a long-standing relationship with analysis of, and interpretation through, quantitative data at all scales and granularities. We know that data is neither good nor bad. It’s what one does with the data that matters. It’s how one understands and works with the benefits and the curses, the strengths and the limitations, of the data that makes the information useful.

Data is comforting because it is fixed, solid, an object. It lends a veneer of scientific legitimacy to the things we create. But with the promise of data-driven creative unfulfilled, we have an opportunity to resist taking data as given, an opportunity to bring a more expansive lens to the collection, management, and curation of data. This applies not just to agencies, but to the companies for whom we work as well. Only by looking for meaning in the data traces, the data “fumes,” will we be able to understand what is of value to people and to create messages that people value. To do this well, to do it better than we currently do, we need better tools for dealing with data at all scales and granularities—from collection to curation to manipulation to analysis to the drawing of defensible insights and conclusions.

I am a strong enthusiast for and advocate of data triangulation, of mingling data from multiple sources at many levels of granularity. I’ve also always balked at the division of data into qualitative and quantitative, believing that behind every quantitative measure is a qualitative judgment imbued with a set of agendas. The distinction between qualitative and quantitative is of limited use and creates needless barriers between input and outcomes. The cornerstone of a good strategic plan, campaign, etc. is the blending of the qualitative and the quantitative, and the embracing and connecting of very different representations from disparate sources at multiple levels of granularity.

That’s because in an industry that has to create ideas not just about how and when people interact with a brand, but also why, a flexible perspective on multi-faceted data is the path to a creative spark.

As part of our evolution, we need to establish and foster deeper relationships with our colleagues, whether they are planners, designers, data scientists, statisticians, engineers, or developers. Data is a material for understanding, not a given from which we deduce that which lies latent within it, waiting to be revealed. Data analyses should be more than incremental refinement of what is already known. We should work with data to challenge what we know, and to actively seek surprise. This is how we develop an understanding of what is meaningful: by understanding people in context.
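
To make the blending concrete, here is a minimal, hypothetical sketch in Python of what triangulation can look like in practice: a quantitative survey table joined to qualitatively coded interview themes, so each kind of evidence can interrogate the other. All field names and values are invented for illustration; this is one small pattern, not a prescription.

```python
# A minimal, hypothetical sketch of qual/quant triangulation: joining
# survey metrics (quantitative) with coded interview themes (qualitative).
# All field names and values are invented for illustration.
import pandas as pd

# Quantitative: per-respondent survey scores.
survey = pd.DataFrame({
    "respondent_id": [101, 102, 103],
    "nps_score": [9, 4, 7],
    "weekly_spend_usd": [32.50, 11.00, 24.75],
})

# Qualitative: themes coded from interview transcripts.
interviews = pd.DataFrame({
    "respondent_id": [101, 102, 103],
    "dominant_theme": ["ritual", "price_anxiety", "identity"],
})

# Triangulate: one table, two kinds of evidence about the same person.
merged = survey.merge(interviews, on="respondent_id")

# Look for surprise: does spend actually track the themes we coded?
print(merged.groupby("dominant_theme")["weekly_spend_usd"].mean())
```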

Coffee: When Simple Things Change Everything

I, like many people, start my day with a steaming cup of coffee. When I was younger, the process of waking up began with a book, the newspaper, or occasionally a pad of paper as I reflected on something that felt meaningful at the time. Today, my coffee is taken with a shot of news via the iPad and a heaping mound of email. The consistent element through time has always been coffee. But even as coffee has remained in some ways the same, coffee culture and my personal practices around it, from the brand I drink to its role as a post-lunch pick-me-up, have changed over time.

Anthropologist William Roseberry wrote in 1996 that coffee drinkers would have had a tough time finding specialty coffee in the 1970s, pointing out that “the roasts were light and bland.” Coffee was uniform, a commodity, not unlike gasoline or saltines. Due to the changing tastes of a younger generation weaned on soda, consumption was in fact on the decline. As the now famous story goes, Kenneth Roman, Jr., the president of Ogilvy and Mather, made a suggestion to the company’s client, Maxwell House: emphasize quality, value, and image by creating segmented products to increase appeal. And to emphasize value, quality, and image, the consumer needed to be made more aware of what made coffee worth the price. Specific blends and origins were advertised, lifestyles were marketed, and roasting types were displayed for consumers to see. And so it was that the specialty coffee market was born.

Coffee was meant to permeate every aspect of life. And while many of the large manufacturers have seen market share decline over time, smaller roasters marketing individual brands have found a niche, even if it has meant a higher cost to consumers. Coffee moved from being a commodity to something we savor, we contemplate, we find meaning in. From the brands and styles we drink to the places in which we drink it, coffee has become personal. We’re identified by the brand we buy, by the coffee shops we frequent, and by the types of coffee we drink (a Cubano, a cup of fair trade dark roast, a bag of organic Blue Mountain, etc.). And we do love our coffee:

  • a third of the country’s population drinks coffee daily
  • half of the population drinks coffee at least weekly
  • two-thirds of the population has coffee at least occasionally

Among those who drink coffee, the average consumption is higher now than it has been in past years. The average person in the U.S. spends around $25 on coffee each week. A fair amount of that is spent out of home, but the coffee we do buy for home brewing isn’t the $1.99 stuff of yesteryear. In other words, we aren’t necessarily drinking more coffee than other generations, but we are spending more money on coffee. Younger generations in particular have a lot of disposable income but they aren’t spending it like their parents did. Instead of cars and homes, they’re spending it on a better food and beverage experience. Indeed, we’re seeing that the focus on quality and experience is finding its way into other generations. The cycle of change has taken root and is beginning to cut across age groups.

And all of this seems to point in a new direction for food and beverages in general. We’ve been taught to pay for coffee: for the artistry, for the geography, for the experience. These factors contribute to a re-valuation of the beverage and its role in defining our identities, personally and culturally. Can the same be done with, say, a hamburger or a yard beer? By re-couching something that once represented modernity, but has come to represent blandness, uniformity, and mass production, as something experiential, can we reinvent a category? I believe we can. Kenneth Roman, Jr., believed we could.

Coffee offers us a way to look at our relationship to the larger world and see that sometimes our choices are not really our own. Brands create us even as we create them. It is not the transaction, but the relationship that matters. 

Fire, Meat, and Spring

Spring is a celebration of life, warmth, and sunlight. It ushers in outdoor dining, drinking, and cooking. It’s time to brush the remaining winter detritus off the barbecue and throw masses of meat on the grill. It is also a time to ponder the notion that cooking over an open fire is an ancient ritual. Traces of ash found in the Wonderwerk cave in South Africa suggest that hominins were controlling fire at least 1 million years ago, the time of our direct ancestor Homo erectus. Burnt bone fragments also found at this site suggest that Homo erectus was cooking meat. Our ancestors largely ate whatever they could; berries, grasses, fruits, and bits of small animals were probably the main fare. We know early proto-humans had an eclectic, mostly vegetarian diet 3 million years ago because of the shape and size of their teeth – small front teeth with short canines, and large, flat molars. They had mouths built for grinding, not for ripping apart flesh. Then, around 2.5 million years ago, meat became a very big deal.

Katherine Milton of the University of California, Berkeley, claims that early humans were forced into this dietary change because the forests of Africa were receding and these hominids simply couldn’t get enough plant matter to stay alive. In support of this claim, archaeologists have found 2.5-million-year-old stone tools clearly used to butcher animals and to smash bones to access the marrow. And for the next few million years, humans apparently stuffed themselves with raw meat. Then, something radical happened. Somewhere, somehow, somebody offered up that meat cooked. Maybe early humans stumbled across the charred remains of an antelope killed in a brush fire and took advantage of the moment. Maybe they lit a fire themselves for light and warmth and, while eating a bison, dropped a leg into the fire by mistake. Whatever the impetus, humans began eating cooked meat at least 700,000 years ago, and they never looked back.

But why bother with cooking meat at all? It takes time and energy to build a fire, create specific cuts, invent the grill, and then clean up. At the most basic level, cooked meat simply tastes better, and our ancestors were apparently instantly turned on to this. From the moment meat met fire, humans became gourmets. But the shift may also have evolutionary reasons. Harvard anthropologist Richard Wrangham speculates that controlled fire and cooked meat were implicated in human brain evolution. He asserts that humans actually may have been cooking their prey as far back as 1.6 million years ago, just when our genus was experiencing major brain expansion. Cooked meat, it turns out, delivers the same protein as raw meat but is far easier to digest, and so natural selection might have opted for smaller guts. All that saved digestive energy may well have then gone into making bigger brains. If that position is right, the big human dietary shift was not so much the move to meat as the move to cooked meat, which made us smarter and more inventive.

And then there is the cultural aspect of cooking. The oldest remains of obvious hearths are just 400,000 years old. Cooking requires cognitive skills that go beyond controlling fire, such as the ability to resist the temptation to scoff the ingredients, patience, memory, and an understanding of the transformation process. With that comes greater organization, richer storytelling, and increasingly stronger social bonds. In other words, without fire and without the grill, even in its most primitive form, culture wouldn’t have developed, at least not in a way we would recognize today.

Criticism of meat centers on modern manufacturing methods, which are often seen as lowering the quality of meat. Another part of the criticism is that meat derives from animals, raising ethical dilemmas (as well as a sense of unease or repulsion for some people). Meat is frequently perceived as unhealthy. Regardless of how we perceive meat, the fact remains that its place in the evolution of human culture and the significance it holds at the table today are deeply rooted in the shaping of the human experience.

Cooking has evolved into one of the most varied and inventive elements of human culture. We cook thousands of different types of animals, plants, and fungi using a dazzling array of techniques. We spend far more hours planning, shopping for, and preparing food than actually eating it. We then sit down to watch programs about it, hosted by people who have become millionaire household names. Meat’s status reflects the myriad cultural contexts in which it is socially constructed in people’s everyday lives, particularly with respect to religious, gender, communal, racial, national, family, and class identity. We barbeque, therefore we are. Something to ponder when gathering around the table.

A Different Approach to Focus Groups

When something becomes a running joke on every sitcom since the 80s, you know it’s been overdone. The traditional focus group is overdone. But I don’t think the focus group, or, more precisely, something akin to it, is dead. It’s an imperfect methodology, but it has its place and it can be done well – if we rethink the process. Instead, there is the “un-focused” group: a gathering of individuals in a workshop or open discussion forum where they have access to a wide range of creative materials to stimulate interaction and creation. The sample is smaller and the setting more intimate, which can demand more effort and resources, but the outputs are closer to what you want to know (namely, why people believe what they do) than what you get from a traditional format.

Ultimately, the structure helps uncover perceptions, emotional ties, values, and shared meaning, as well as activities and processes of use. Placing individuals in a more organic, open setting stimulates interaction and minimizes the biggest flaw of the traditional focus group: the Hawthorne Effect (the tendency to perform or perceive differently when people know they are being observed).

Preparing and Staging. Setting up the location is pivotal to the success of this research format. Rather than relying on a conference table and a two-way mirror, the goal is to produce a more natural setting, one that strikes a balance between a living space and a professional space. One process utilizes two rooms: one where the “pre-discussion” will occur and another that will be used for the majority of the session.

In both rooms, furniture should be soft and encourage collective interaction, meaning a mix of sofas and chairs. Traditionally, sofas are avoided in focus groups on the assumption that they infringe on personal space, making participants uncomfortable. But given that the intention is to disrupt preconceived notions of what takes place in a focus group, participants typically become comfortable quickly. Their psychological frame of what they are “supposed to do” breaks down, and they subconsciously see the session as a chance to open up.

Floor lamps should dominate the room (not overhead lighting) and colors should reflect a home-like atmosphere. The idea is to create the kind of environment that facilitates conversation rather than a corporate or laboratory-like setting.

Of course, this also impacts the size of the sample. The traditional method is to gather anywhere from 8 to 12 participants. Changing the structure to a more conversational dynamic means reducing the sample to between 6 and 8 participants per session. While the larger sample certainly puts more bodies in a room, it doesn’t guarantee an increase in discussion or viewpoints, because the dynamic is not conducive to conversation. The smaller sample, coupled with the change in environment, fosters conversation and, consequently, better information.

The Discussion before the Discussion. Before the primary conversation begins, it is helpful to set the mood and get people relaxed with a brief pre-discussion, preferably around a meal. This is not just courtesy. Human beings are hardwired to respond to the act of sharing a meal. In every society, gathering around food signals trust and intimacy, promoting honest, open interactions. Beginning the focus group around a substantial meal (not simply snacks) makes people more apt to talk freely, priming them for discussion. This is also a good time to start informally discussing the main topic of the evening.

Introductions, personal stories, and an overview of the discussion should be emphasized during this phase. If topics come up that will be revisited during the main discussion, that is fine, but the moderator should redirect the conversation so that not all the information is revealed early on. Allowing the participants to start talking primes them to provide more expansive, clear, and detailed responses during the main discussion. During this initial phase, no camera is used, because the goal is to get participants into a relaxed, conversational state of mind. By eliminating the camera, there is no threat of “performance,” and participants become comfortable with each other and the moderator. Since valuable information will no doubt begin to emerge at this stage, and since no camera is recording the event, it is imperative that the facilitator be a skilled note taker.

The Main Event. In the primary discussion area, changing the setting will alter how information is captured and relayed to the clients.  There are no hidden cameras and no two-way mirrors.  Cameras are set up in unobtrusive locations and addressed openly when the group comes together.  Information is then broadcast to the clients/viewers.  Once again, the reason is to be intentionally disruptive to the mental model people have about focus groups.  The disruption is interpreted as an expression of honesty and the camera is quickly forgotten.  The truth is that participants in traditional focus groups are already aware of and performing for the camera, even if they can’t see it – if nothing else, the mirror is a constant reminder they are being watched.

Facilitation is done using a dual moderator method, where one moderator ensures the session progresses smoothly, while another ensures that all the topics are covered.  In addition to ensuring all the material is covered and questions addressed, the dual moderator process helps maintain the conversational tone by shifting the power dynamic of the group.  Rather than a single person leading and everyone following, the second moderator (seated among the participants) breaks up the dynamic and redirects the exchange of information.   Opening up the information exchange process means having an opportunity for more open and honest disclosure and discussion in a setting where participants are validated.

The Follow Up. The final step is to close the session. Once a typical focus group is over, there is usually a bit of time where some participants linger and offer bits of information they felt weren’t expressed clearly or share stories with others.  In this model, participants are actively encouraged to spend 20 minutes or so talking with the moderators.  The first step is to turn the camera off.  The key point is that the end of a focus group represents an opportunity that is all too frequently overlooked.  Keeping the participants for a post-discussion phase often captures pieces of information that go unspoken or unarticulated during the main discussion.

Changing the structure of the focus group can be uncomfortable for both those moderating and those watching it. It appears much less structured than traditional methods because the focus is on getting the target audience to open up and give real answers, not perform for the camera.

Remember, the goal is to put participants in a state of mind where they feel in control, instead of simply telling the moderators what they want to hear. Changing the format to a more relaxed, expansive session means worrying less about data and more about generating creative thinking and new ideas. Giving yourself license to think broadly is the key to success.

Moderating vs. Learning

Let me state that I am not a moderator. At least, not a traditional one. I am an ethnographer, an anthropologist, and a strategist. And while both moderators and ethnographers speak to people, they are not the same thing. This isn’t just a matter of semantics; it is at the heart of how practitioners execute their work and how they practice their craft.

A moderator is defined as a presenter, a host. A moderator is a person or organization responsible for running an event. A moderator is a person given special powers to enforce the rules of a collective event, be it a focus group, a forum, a blog, etc.  Moderation is the process of eliminating or lessening extremes. It is used to ensure consensus and limit deviation. In other words, moderators assume control and direct. They maintain power and tease out information that is essentially qualitative hypothesis testing. Understand, I have no problem with moderation and moderators – the approach is useful and has its place in the inquiry toolkit. But the practice of moderation is limited not just by its structure but its theoretical underpinnings.

An anthropological approach (ethnography in particular) aims to learn about and understand cultural phenomena that reflect the knowledge and meanings guiding the life of a cultural group. Data collection methods are meant to capture the social meanings and ordinary activities of people in naturally occurring settings. Multiple methods of data collection may be employed to facilitate a relationship that allows for a more personal and in-depth portrait of the informants and their community. These can include participant observation, mind mapping, interviews, etc. In order to accomplish a neutral observation, a great deal of reflexivity on the part of the researcher is required.

Reflexivity asks us to explore the ways in which a researcher’s involvement with a particular study influences, acts upon and informs such research.  The goal is to minimize the power structure and allow people, our participants, to inform and guide the researcher according to what matters most to them, be it spoken or unspoken. In other words, we are not moderating, we are learning and exploring.

Ethnography’s strength comes from the ability to work fluidly with participants, as opposed to moderating a setting or social interaction. Researchers who refer to themselves as moderators of ethnography are indicating, through their choice of words, how they will do fieldwork, how they will interpret findings, and how they subconsciously see their role in the field. Again, moderation is a terrific tool, but it is not ethnographic. Nor is ethnography the same as moderation — both have things to contribute, but they are not methodological equivalents. If you’re going to hire an ethnographer, it isn’t enough to ask what markets they will work in or how big the sample population will be. If you’re intending to conduct the work yourself, it isn’t enough to have people who are comfortable conducting interviews. Ask the questions: “What do you call yourself, and what is your job when interacting with people?” Then get them to articulate not only their methods, but the rationale behind them. Be specific. It’s your money. Be sure you are paying for what you have commissioned.

Reveling in BBQ: Dining Out and Ritual

I am in smoked meat paradise this week. Kansas City is perhaps the focal point of this marvelous cuisine – whatever your allegiance in the near-religious sects that define American BBQ, whether Carolina or Texas style, KC is a defining setting. There are places where it is served by waiters in white jackets and places where the meat is smoked under corrugated metal roofs out back. Road crews and guys in three-piece suits rub shoulders and revel in the scent of smoke, sauce, and dry rub.

If the rituals of eating out have become less grand for the mass of people over time, eating out still retains its aura as an “event.” The grand aspects are retained in expeditions to restaurants both simple and offensively overpriced. We spend not so much for the food as for the entertainment value and the naughty thrill of being (we hope) treated like more than average Joes in the routine of daily life. The family outing to the local BBQ joint still has an air of preparation and difference; it can still be used to coax youngsters to eat, and that air of difference makes it “restorative.” Even the necessary lunch for workers who cannot eat at home has been made into a ritual event by the relatively affluent among them.

“Doing lunch” in the business world is regarded as a kind of sacred operation where, the mythology has it, the most important deals are made. A puritanical campaign against the “three-martini lunch” by then-President Carter had Americans as roused and angry as they had been over the tax on tea that sent their ancestors to their muskets. The business-meal tax deduction was fought for with passion, and the best the government could do was to reduce its value by 20%. There may not be a free lunch, but it sure as hell is deductible. Very little of this has to do with business, of course, and everything to do with status. Just to be having business lunches at all marks one down as a success in the world of business, for only “executives” (the new order of aristocracy) can have them.

At the other end of the scale, reverse snobbery asserts itself in the positive embrace of “junk food,” otherwise condemned as non-nutritious, vulgar, or even dangerous to our health. Junk food can be socially acceptable if indulged in as part of a nostalgia for childhood: the time when we were allowed such indulgences as “treats.” So giant ice cream sundaes with five different scoops of ice cream, maraschino cherries, pecans, chocolate sauce, and whipped cream; sloppy joes with french fries and gravy; milk shakes and root beer floats; hot dogs with mustard, ketchup, and relish – all these are still OK if treated as a kind of eating performance. Hot dogs at football games, or ice cream at the shore, are more or less de rigueur. The settings in which these are eaten vary from the simple outdoors to elaborate ice cream shops with bright plastic furniture and a battery of machines for producing the right combinations of fat, sugar, and starch. Ostensibly these are for children, but adults eat there with no self-consciousness and without the excuse of accompanying children. But for adults, as for children, these places are for “treats,” and so always remain outside the normal rules of nutrition and moderation.

We continue to make eating out special when we can. Romantic dinners, birthday dinners, anniversary dinners, retirement dinners, and all such celebrations are taken out of the home or the workplace and into the arena of public ritual. Only the snootiest restaurants will not provide a cake and singing waiters for the birthday boy. The family outing is specially catered for by special establishments – “Mom’s Friendly Family Restaurant” can be found in every small American town (although the wise saying has it that we should never eat at a place called Mom’s). But even in the hustle and bustle of these family establishments, the individuality of the family is still rigidly maintained. No family will share a table with another. This is very different from the eating out of the still communalistic East. Lionel Tiger, in his fascinating description of Chinese eating, describes how people are crowded together in restaurants – strangers at the same table all eating from communal dishes. And far from having a reservation system, restaurants encourage a free-for-all in which those waiting in line look over the diners to find those close to finishing, then crowd behind their tables and urge them on.

The democratization of eating out is reflected in the incredible burgeoning of fast food joints and their spread beyond the United States. McDonald’s is the fastest-growing franchise in Japan, and has extended its operations to China. When it opened its first franchise in Beijing, it sold so many burgers so fast that the cash registers burned out. Kentucky Fried Chicken has now opened in Beijing, and has become the chic place to eat in Berlin. These are humble foods – a ground meat patty that may or may not have originated in Hamburg; a sausage of dubious content only loosely connected to Frankfurt; deep fried chicken that was a food of the rural American South; a cheese and tomato pie that probably came from Naples. But they have taken the world by storm in one of the greatest eating revolutions since the discovery of the potato. In a curious twist, two indigenous foods of the East are rapidly turning into the fast food specials of the yuppies who would not be seen dead eating the proletarian hamburger: the Japanese raw-fish sushi, and the Chinese dim sum lunch.

The proletariat has evolved its own forms of eating out. The transport café in Britain with its huge portions of bacon and eggs; the French bistro, which was a working-class phenomenon before reverse snobbery turned it into bourgeois chic, with its wonderful casseroles and bifteck pommes frites; the Italian trattoria with its cheap seafood, again gentrified in foreign settings; the incomparable diner in America; the grand fish-and-chip warehouse in the north of England; the beer-and-sausage halls of Germany; the open-air food markets in all the warm countries. If we could do a speeded-up film of social change in the last fifty years, we would see a grand ballet in which eating moved out of the home and into the public arena on a scale that makes rural depopulation look like a trickle. Sociologists, as usual, have yet to figure out that it is happening, much less come up with an explanation.

To be literate in the world of eating out, to be even ahead of the trends (knowing that fantastic little Portuguese bistro that no one has discovered), is to demonstrate that one is on top of the complex cosmopolitan civilization of which eating out has come to be a metaphor. And so, it’s time to start thinking about which BBQ joint to hit for lunch.

Insights in an Age of Emotion

When historians look back on the early years of the 21st century, they will note a paradigm shift from the closing years of the Information Age to the dawning of a new age, the Age of Emotion. Now, there are those who would argue that in a period defined by prolonged economic ennui, ROI is the only thing that really matters and pricing is the only real consideration consumers think about – the rest is fluff. But I disagree. Why? Because we’re not talking about trends here, which are ultimately short-lived, but cultural patterns, which are sustained and signal a shift in worldview.

On a fundamental level, we are more in tune with our emotional needs than at any time in recent history, or at the very least we have more time to reflect on them. We focus increasingly on satisfying our emotional needs, and pop culture both reflects and creates this. It is a cycle. One need look no further than the multi-billion dollar self-help industry for an example. Talk shows abound, focusing on the emotional displays of the masses, with advice given out in front of audiences of millions.

And this growing focus on the emotional has extended into the shopping and retail experience. Increasingly, we will see a subtle yet profound difference in the way people relate to products, services, and the world around them. Retailers increasingly focus on the nature of the in-store experience, converting the space from a place to showcase goods to a location, a destination, a stage on which we perform. And indeed, shopping is as much about performance as it is about consumption. Just as fulfilling emotional needs has become the domain of brand development, it is increasingly becoming a centerpiece of the retail experience, at least for retailers focused on margins rather than volume. Rationality will take a back seat to passion as we move from the sensible to the sensory. While ROI is the obsession today, Return on Insights and Return on Emotional Satisfaction will be the leading factors in the years to come.

For the developed world and the world’s emerging economies, time and money equate to an increased use of brands and shopping as emotional extensions of ourselves. Status, power, love, etc. are wrapped into the subconscious motivations for choosing one location over another. And while we are still bargain hunters, the hunt is less about price than it is about the experience of the hunt. Again, emotion drives the process, even when we say it doesn’t. “Experience” is emotional shorthand.

Successful companies will learn to pay more attention to how their customers react emotionally and how their brands can fulfill emotional needs. In the Emotion Age, brands will either lead the way to customer satisfaction or be left in the dust.

Does Size Really Matter?

Sample size is a fixation in research. We fret about it, argue over it, and generally have very little to back up the relevance we ascribe to it. Rather than thinking about the outcomes and goals, we get hung up on the n, sometimes using it as a weapon to bolster a preconceived belief rather than identifying points of interest. And when thinking about online communities and panels, even when the focus is qualitative research, this fixation becomes almost manic. So does size really matter?

The anthropologist and evolutionary psychologist Robin Dunbar focuses on the number 150. Dunbar proposed that as humans we can only comfortably maintain about 150 stable relationships before intimacy and connection are lost. Whether online or off, 150 is about all we can really handle and still have meaningful exchanges.

Over the years I have managed communities ranging from as small as a dozen members to as large as several thousand. Each has been successful in its own way, because at the center of these samples is what we really want and are striving for – a community, meaning the people involved all have a shared interest or experience. That said, the ranges produce very different results. That isn’t bad, but it does mean you have to think very carefully about what you want from your community.

The primary thing to consider is quality over quantity. After all, you only get out as much as you put in, so we should strive to make our communities more dynamic, interactive, stimulating, and engaging. That simple truth is what leads to the best insights – the kinds of insights that drive change. There is an exception to the quality-over-quantity argument: it’s called doing a quantitative study. If you need to conduct quantitative research with your community, then of course you are going to need a larger sample. Additionally, you may also need to ensure that you have sufficient numbers of sub-groups or segments accounted for. But on the whole, the strength of an online community lies in the qualitative richness that emerges as members become closer, share more, and engage regularly.
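
For that quantitative exception, the size conversation does become arithmetic. The sketch below uses Cochran’s standard sample-size formula for estimating a proportion; it is generic statistics rather than a rule tied to any particular community platform, and the default parameters are only illustrative.

```python
# A minimal sketch of why "quantitative" changes the size conversation:
# Cochran's formula for the sample needed to estimate a proportion at a
# given confidence level and margin of error.
from math import ceil

def required_sample(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Sample size for estimating a proportion.

    z: z-score for the confidence level (1.96 ~ 95%)
    p: expected proportion (0.5 is the most conservative choice)
    e: acceptable margin of error
    """
    return ceil((z ** 2) * p * (1 - p) / (e ** 2))

print(required_sample())        # 385 respondents for +/-5% at 95% confidence
print(required_sample(e=0.10))  # 97 respondents for +/-10%
```

The point of the arithmetic is simply that a statistically defensible survey needs hundreds of respondents, while the qualitative richness described above lives comfortably in groups of a dozen or two.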

Having a larger community doesn’t mean you have to lose the human touch and connection with your members, and it has become the norm for us to invite sub-groups or segments to take part in smaller communities. The intention is to foster discussion, debate, and innovation. It is about looking at the edges of the bell curve rather than at its center.

Whether as a sub-group or as part of the general population, focusing on a smaller sample ensures that we don’t lose sight of the things that drive strong community connections purely for the sake of hitting a number. Regardless of the size, it is a best practice to be appreciative of your members: show them you are listening, encourage and facilitate discussion, and, most importantly, show them that you value them. That’s exceedingly difficult to do with hundreds or thousands of people.

Where The Data Beasts Lie

Since the mid-nineties, the story about IT has been that the “New Information Economy” would usher in vast gains in productivity and creativity. We’ve been told that if we simply implement ERP, CRM, etc., our marketing efforts will be smarter and more efficient. But after 20+ years, that is not exactly the case.

The benefits of massive IT implementation and big data are elusive. We have experienced increased efficiency, but far less than was promised us. We are certainly nowhere near the hyper-targeted, data-fueled industry we were told we would become. Take Amazon, for example. Amazon has had more than two decades to learn our preferences and fine-tune its recommendation engine. Yet it still recommends products that we have just bought and, hence, no longer need, and even suggests items that are already in our shopping carts. We’ve been told that the more data a company amasses, the better it will be at targeting consumers. But is this really the case? Think about the old story of the disruptor, Netflix, and the fallen giant, Blockbuster – wasn’t that really a story of supply and demand and a better distribution model? Or did we all really watch Netflix because of suggestions generated by its AI, big data, and machine learning? I’m guessing not. Amazon may be an excellent distribution company, but from any human consumer perspective, its recommendation algorithms are deeply flawed.
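
What makes the complaint so striking is that the fix it implies is basic hygiene, not advanced machine learning. The sketch below is hypothetical (the function and field names are invented, and real recommenders are vastly more elaborate), but it shows the step in question: excluding owned and carted items before anything is surfaced.

```python
# A hypothetical sketch of the obvious filtering step: however a
# recommender scores its candidates, items the customer already owns or
# has in the cart should be excluded before anything is shown.
# Names here are invented for illustration, not any real system's API.

def recommend(candidates: list[tuple[str, float]],
              purchased: set[str],
              in_cart: set[str],
              k: int = 5) -> list[str]:
    """Return the top-k candidate items, excluding owned/carted ones."""
    excluded = purchased | in_cart
    ranked = sorted(candidates, key=lambda item: item[1], reverse=True)
    return [item_id for item_id, _ in ranked if item_id not in excluded][:k]

# Toy usage: the espresso machine is already in the cart, so it is
# never recommended back to the shopper.
candidates = [("espresso-machine", 0.91), ("grinder", 0.84), ("kettle", 0.40)]
print(recommend(candidates, purchased={"french-press"}, in_cart={"espresso-machine"}))
# -> ['grinder', 'kettle']
```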

I can’t help but wonder if this is because big data analytics aren’t generating the insights needed to target consumers any better than what TV advertising and newspapers have always done. Billions of dollars have moved from offline to online advertising, but, in the offices of big advertisers, there is a sinking feeling that the data-fueled Facebook or Google advertising model isn’t going to help them as much as they had hoped.

Part of the marketing model of information technology is that the next big thing is always around the corner. These days, that means deep learning and AI. We’re being told that machine learning is going to revolutionize and disrupt everything as we know it and spur amazing insights that lead to brilliant creative work. You know, just the way big data was supposed to a few years ago. But before this hype cycle takes off again, maybe it’s time to take a step back and ask ourselves if there’s a better way. What makes these technologies that much different from those of the past? Will they truly stimulate unprecedented insight into what motivates people? Or is this just another round of hype that will distract us from real matters of human importance?

Creating American Wine Culture

“Wine is sunlight, held together by water,” wrote Galileo Galilei. It is a favorite quote of mine, a beautiful turn of phrase, and for me, an absolute poetic truth. Wine factors into my life in subtle and less than subtle ways. As winter approaches, I settle into my nightly routine with a glass of something rich, heavy, and red – something that holds a bit of foreboding, something dark, something refined. Every spring, my attention turns to luscious, soft, pink rosés. They are vibrant, delicate, and sensual. Wine is more than a drink; it is a touch of poetry in a glass that grounds the season and establishes a dining pattern for myself and my family, though they probably don’t really know the connection between my wine habits and what I tend to cook. But as an American, I know that I am an exception to the rule in this regard.

In per capita wine consumption globally, the United States ranks 39th. That’s one spot behind Bulgaria. We even trail Iceland, Canada, and Estonia (34th, 33rd, and 32nd). But while the US is 39th in per capita consumption, we rank 4th in wine production. In addition to sheer scale, American winemakers produce some of the best wine on earth. Wine is better, cheaper, and more available in the US than it has ever been. And yet, 38 countries enjoy more wine per person than we do. To my mind, though, this isn’t discouraging; it’s exciting. Very, very exciting.

People who gravitate to wine imagine that everyone has at least a few bottles on hand, but this isn’t the norm everywhere in the country. And looking at the statistics, it would be easy to assume that wine is not much a part of our culture of food and drink. Yet our country’s winemaking history dates to the arrival of British settlers in New England and the Spanish settlement of the Southwest. When Spanish explorer Ponce de Leon arrived in Florida in 1513, he was followed by Spanish and French Huguenot settlers who began making wine with the native American grape, Muscadine, as early as 1565. So wine is quintessential to our history. Looking back on the timeline of wine production in the US and its loss of ground to other alcoholic beverages, we are in something of a rebound. That said, there are wineries in every state now, from Alaska to Florida. It’s no longer just a matter of Napa.

If we contextualize the actual per capita amount of wine we drink, it comes out to about 3 gallons per year – roughly 15 bottles, or a bit more than a bottle a month. The French drink 15.3 gallons a year. So there is clearly room to grow. But wine is part of French culture – it’s part of the national identity (along with Luxembourg, Argentina, Austria, etc.). The question is, how do we make it part of ours in the US?

A study of 2,000 wine drinkers showed that the average American will only start to fully appreciate a good bottle of vino toward the tail end of their 20s, but how they get into it, and what they prefer, varies greatly. The most common way Americans get into wine is through a friend, with 30% reporting that’s how they originally tried it. One in five discovered it on their own, and 17% were drawn into wine by a partner. So relationships tend to be the dominant entry point.

The question is, can you take those simple introductions to a craft and turn them into something bigger? Can you create a rite of passage, a “wine awakening”? Imagine travel packages specifically targeting the expert and the apprentice, so to speak, that make the process smooth, unintimidating, and transformative. Or fostering a practice (e.g., a holiday) that celebrates a person’s graduation from the days of rum and coke and sloppy Saturday nights into a world of refinement and reflection. Or meal-prep delivery services that include wines and points of discussion. The point is, there is an opportunity to turn wine from a mysterious consumable into a cultural symbol.

The good news is that wine is on the rise. The upward trend for wine consumption in America is expected to continue at a small but steady rate of around 2% to 3% per year. Consumers are finding more reasons to “celebrate” with a bottle of wine, or drinking more wine when a bottle is opened, and in some cases doing both. But the process is slow, and unlike other drinks, wine has yet to become an American cultural standard. It is still mysterious, threatening, and perhaps a touch too sophisticated in the minds of some. But with mystery comes curiosity. With threat comes excitement. With sophistication comes transformation. Those all represent powerful tools for creative problem solvers.