Building Craft Beer Brands When “Craft” Is A Thing Of Mystery

There’s a story that’s often sold to beer drinkers. On the one hand, you’ve got the 800-pound gorillas: the faceless corporate giants who mass-produce tasteless, watery beer by stuffing it with corn and rice and other things that make purists cringe. On the other hand, there’s the artisan: the little guys with an undying commitment to quality and flavor, who brew every batch by hand with a heart full of love, a bucket of rare hop varietals and a pinch of yeast from a strain dating back to the Sumerians. The problem with this story is that it’s at least 20 years out of date, and more importantly, it bears little resemblance to how the most dedicated and active craft beer enthusiasts view the industry.

Cynical advertisers on both sides of the supposed divide find it to their benefit to perpetuate the myth. Large independents like Samuel Adams and Sierra Nevada are straining against the upper limits of what could be considered craft brewing. Or more accurately, they’re actively working to raise those limits so that they can stay in the club. Meanwhile, giants like MillerCoors and Anheuser-Busch are openly courting lovers of simple beer and hoping that the “snobs” won’t notice that they now own beloved craft brewers like Anchor Steam, Goose Island, and Ballast Point.

If brands want to connect with American craft beer enthusiasts, they need to understand the market trends driving drinkers’ choices right now. Surprisingly, “making good beer” no longer appears to be the best way to attract the business of highly invested beer drinkers. That doesn’t mean that they’ve all lost their sense of taste; it just means that the craft beer world offers such variety that quality is no longer the best way to distinguish your brand. So where should brewers turn?

The language of “Craft” matters. Craft beer used to be a nebulous category that conveyed both quality and independence, but increasingly it is defined by size, ownership, and production. The Brewers Association defines “craft” as:

  1. Producing fewer than 6 million barrels of beer annually
  2. Less than 25 percent owned by “a beverage alcohol industry member which is not itself a craft brewer”
  3. Utilizing flavors made from “traditional or innovative brewing ingredients and their fermentation”
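
Stripped of nuance, the definition is a simple three-part conjunction: fail any one test and you’re out of the club. For the programmatically minded, here is a minimal sketch in Python of the criteria above (a hypothetical illustration only; the function and parameter names are invented, and the Brewers Association publishes nothing of the sort):

    def is_craft_brewer(annual_barrels: float,
                        non_craft_ownership_pct: float,
                        traditional_or_innovative_ingredients: bool) -> bool:
        """Hypothetical check of the Brewers Association's three criteria."""
        return (annual_barrels < 6_000_000                   # small
                and non_craft_ownership_pct < 25.0           # independent
                and traditional_or_innovative_ingredients)   # traditional

    # A 2-million-barrel brewery, 10% owned by a non-craft alcohol company,
    # brewing from traditional ingredients, still qualifies:
    print(is_craft_brewer(2_000_000, 10.0, True))  # True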

That’s not to say that the BA’s definitions are stable or that they coincide exactly with what is in the minds of craft beer enthusiasts. For example, its past criteria excluded “adjunct” grains like corn, rice or oats, which are now generally accepted as fair game for many craft brewers. The BA also used to cap production at 2 million barrels; with the success of companies like the Boston Beer Company and Boulevard Brewing, it had to raise the ceiling to accommodate their growing production levels.

The purchase of many iconic craft breweries by giant corporations has led to a crisis within the craft beer community, as taste no longer serves to distinguish the independents from the majors. To inform drinkers, the Brewers Association created the independent craft brewer seal, a certified indicator that the product is an authentic craft beer. As of fall 2018, more than 3,700 craft brewing companies had adopted the seal, representing more than 80% of the volume of craft beer.

Widespread use of the seal should go a long way toward informing beer drinkers about the craft status of the beer they’re drinking, and displaying it looks like an essential move for up-and-coming brewers. It’s too soon to predict whether the growing visibility of the independents will counteract the tendency of successful brewers to sell out to the majors. It also remains to be seen whether enthusiasts will tolerate leaving all the power in the hands of the Brewers Association to decide what is and isn’t craft beer. But it isn’t too soon to say that the shifts in the industry are making marketing challenges more complex.

Craft beer and inclusion. It’s also worth noting that in recent years we’ve seen a minor backlash against the craft beer community, focusing on the belief that enthusiasts are overwhelmingly straight white men with beards. Data does show that white people, professional men in particular, make up somewhere around 75% of the craft brew consumer population. Other demographics, then, constitute a major untapped source of revenue for brewers. And they signify an image problem for brands. If craft brewers can figure out how to authentically connect with women and people of color, they could sell a lot more beer.

Where brewers tend to go wrong is by assuming that it’s possible to bring in the missing demographics by devising new beer recipes. The common wisdom states that men like IPAs while women prefer fruity or spiced beers; why couldn’t we find the beer types that appeal to black or Hispanic consumers as well? But the truth is that we don’t have hard data on these supposed preferences, and there’s no reason to believe that offering different varieties will bring in drinkers who previously have shown little interest in beer. What beer do women like? It’s an asinine question. In fact, pandering to women and minorities by offering beer styles that the brewer wouldn’t otherwise be interested in is a great way to undermine a brand’s reputation for quality and authenticity. If you want to combat craft beer’s image problem and bring in new drinkers at the same time, a better bet is to strive for diversity among the people making the beer.

Where’s the technology? TV advertising remains the traditional domain of the giant beer producers, and it’s rare to see craft brewers other than outlier Samuel Adams trying to beat them at their own game. Where smaller brands should look to connect with devoted customers is through social media and apps that have appeared in recent years. Untappd and Barley give users the ability to log and review beers, as well as to receive special offers and learn what’s available at nearby watering holes. Reflecting what we’ve seen about the politics of the craft beer world, Craft Check offers to verify that a given beer is truly independent instead of a covert major.

Loyalty programs provide an enticing opportunity to court return customers and gather data about what fans of your brand enjoy, but they probably won’t be feasible in the near future. The patchwork of state and local blue laws, which often prohibit giving beer away for free or offering people incentives to drink, keeps such programs from being scalable. While waiting on legal reform, brewers should focus on opening lines of communication with customers and offering them new beer suggestions.

Collaboration builds tribes. Most craft breweries are regional affairs without national distribution networks. Very few of them have the advertising budget to do much. For brands seeking exposure, collaboration tends to be the most low-cost and effective strategy for increasing name recognition. A common approach is collaboration on a particular beer between two breweries, or between a brewer and a chef, which theoretically multiplies each brand’s exposure and fosters a sense of camaraderie over competition. The key point is that by creating a sense of connection and collaboration, a brand also creates a sense of identity. It creates tribes that anyone can join.

So what? As crowded as the craft beer market is, you might expect it to be increasingly competitive, with ruthless breweries buying up the brands that they can and driving the others out of business. But for the most part, this mentality hasn’t taken over the market yet. That atmosphere of benevolence and fair play is a big part of what the most dedicated craft beer drinkers find so appealing. Celebrating smallness, community, and authenticity goes a long way toward building these brands. It also helps drive greater diversity in the consumer base by establishing a sense of shared identity among consumers.

Is the Local Food Movement Over? Hardly

Since the organic food movement took off decades ago, a growing group of conscientious consumers have shown themselves to be interested in the quality and nutritional value of the food they put into their bodies. Over time, this movement has evolved to take in broader questions of environmental impact and local economic development. Local food sales swelled from $5 billion to $12 billion between 2008 and 2014, and they are expected to hit $20 billion this year. Interest in farmers markets has grown more than 370% in that time, and over 20% of households eat local regularly. People care more than ever about where their food comes from, how it is treated, whether it is good for them, and how it impacts the environment.

But the reality is that the movement has its limitations, detractors, and problems, and some people have begun to call it into question. Shifts in economic stratification, sociopolitical identities, unclear labeling, and simple access to these foods are all making the decision to buy harder. So how can the local food movement evolve to make that decision easier for consumers?

Eating Local. For all the holes in the locavore argument, there are many ways in which the movement has succeeded:

  • Local food sourcing, even if some produce is coming from 400 miles away, can help diversify economies by offering opportunities for smaller family farms and growers to network with local businesses and farmers’ markets.
  • Local food networks can spark innovation as farmers try to live up to the advertised environmental benefits of the locavore movement and reduce food waste in the production process.
  • Local food can build communities around farmers markets, restaurants, local groceries and other related businesses that participate.

But there are deeper connections for many of the people who deeply value the “eat local” movement. Quite simply, they want to believe their food is not evil. That it comes from a good place. Ideally a sparkling, clean land free of pesticides and greenhouse gases, and full of frolicking livestock eating as much wild grass as they please before becoming meat. Unfortunately, the reality of farming doesn’t necessarily mesh with the ideal. And it’s at that point where things can break down.

The problem is vague definitions: what constitutes “local,” for one, and the fact that proponents of the movement often cite iffy science, such as reduced “food miles,” as evidence that local food shrinks the carbon footprint of food transportation. The commonly held belief that reducing “food miles” is always good for the environment because it reduces the use of transportation fuel and associated carbon dioxide emissions turns out to be a red herring. Indeed, local food uses about the same amount of energy per pound to transport as long-distance food. Big box chains can ship food more efficiently – even if it travels longer distances – because of the gigantic volumes they work in. Plus, ships, trains and even large trucks driving on interstate highways use less fuel, per pound per mile, than small trucks driving around town (a rough arithmetic sketch of this point follows the list below). Dissenters point out shortcomings of local farming ad nauseam:

  • Local farms can’t feed enough people.
  • Local farms that actually employ organic practices aren’t efficient.
  • Local farms can’t scale without losing either their integrity or their profits.
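
To see why the food-miles intuition fails, it helps to run rough numbers on fuel burned per pound of food moved. The sketch below uses invented but plausible figures (the payloads and fuel economies are illustrative assumptions, not measured data):

    # Back-of-envelope comparison of transport fuel per pound of food.
    # All figures are illustrative assumptions, not measured data.

    def gallons_per_pound(miles: float, mpg: float, payload_lbs: float) -> float:
        """Fuel burned per pound of food delivered over one trip."""
        return (miles / mpg) / payload_lbs

    # A semi hauling 40,000 lbs of produce 1,000 miles at 6 mpg:
    long_haul = gallons_per_pound(1_000, 6, 40_000)   # ~0.0042 gal/lb

    # A farm pickup hauling 1,000 lbs of produce 60 miles at 15 mpg:
    local_run = gallons_per_pound(60, 15, 1_000)      # ~0.0040 gal/lb

    print(f"long haul: {long_haul:.4f} gal/lb; local run: {local_run:.4f} gal/lb")

The exact figures don’t matter; the point is that volume and routing, not distance alone, drive fuel per pound, which is why the two numbers land so close together.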

While these may all be valid points, few of them help consumers in the immediate sense. People want the ideal. Despite the negativity cited above, consumers primarily need to know that their food is “clean” and trust the people who make it.

Easing the concerns. Some local farms, markets, groceries, and restaurants are telling their stories well, but many fall short on specifics. They rely on buzzwords like “locally sourced,” “GMO-free,” “grass-fed” and “organic” that are easy for consumers to question. Moreover, beyond boilerplate descriptions on a home page, the personalities behind these community staples are often muted on social media and barely visible in real life, negating, or at least diminishing, trust. If the “eat local” movement is truly meant to build trust and community between consumer and vendor, then the brands participating should feel more like part of that community. The following strategies can help their narratives along:

  • Talk openly about specific food practices on all media platforms. If you are a family farm that always fully harvests, note it and explain why. Talk about your last harvest. Speak openly if you had to use a pesticide and note why you thought it was a less harmful one than a conventional farm might use. If you are a restaurant, talk about the family farm and what practices they use that you like. Do you freeze the burgers when they come in? Why? Grocery stores, what are you doing to reduce food waste?
  • Work to benefit your community in ways that also engage your community. Do you give day-old bread from Joe’s Family Farms to the homeless? Invite a local high school football team to help hand it out. Have a member of the farming family and an owner of the grocery store present to socialize with the participants. Part of telling a good story is creating a good story in the first place.
  • Collaborate with other local vendors to solve problems your audience cares about. People care about the local economy and about food waste. They eat local, after all. So what if you, the restaurant, team up with a local soap-making business that can turn your organic bacon grease into an all-natural surface cleaner?
  • Get personal. If you and your wife opened a Southern-themed pub to honor your grandmother who supplied moonshine to the Appalachian communities during the Prohibition era, tell everyone how she used to tie you to a chair to help you sit up straight and told the best jokes after knocking back a few. People will be reminded of their own grandmothers.
  • Strive to live up to their ideals. You might have to use pesticides sometimes. You might have to harvest half your crop to meet demands. You might use too much water. But where you can, let buyers see how you are always working to improve. Show them that you are saving up for irrigation sensors. Find a new all-natural pest deterrent that they will be on board with. Form new relationships with environmental/community goals in mind.

The locavore movement as it stands has been extremely effective. Most people still sign on for local simply because they have seen time and again that their peers accept it as “good.” With so much information available, people have learned to distrust advertising and look to influencers and sources they “know.” That’s largely what drove them to the local food movement in the first place. Remind people that “local” means connections and community, not just practices.

Forgotten Audiences: Women, Mobile, and Gaming

You can find any number of articles online that will trumpet the news that nearly half of the world’s gamers are female. But what does that figure really tell us? What the data often miss is that this side of the gaming world skews considerably older than the male equivalent; the average woman playing video games is 37 and financially independent. And in fact, women outnumber male gamers in the 50-64 and 65+ demographics.

With an audience of adult women gamers whose numbers are nearly double those of boys under 18, it is important to question why so many games seem designed with adolescent males as the default audience. But just as critical is the question of why middle-aged and older women don’t seem to be targeted for any gaming news or entertainment content of their own. What sets them apart and makes the industry as a whole feel comfortable ignoring them?

The key to this mystery is the perceived divide between “hardcore” and “casual” gamers (terms that seem suspect from the outset), with middle-aged women relegated to the latter category. And while it’s wrong to assume that casual gamers don’t deserve our attention, it is true that their tastes are very different from those of the (largely young and male) hardcore audience. This older segment of women prefers to play mostly puzzle and strategy games, most often on a mobile device. They gravitate toward the type of “snackable” games that can be picked up and put down at a moment’s notice.

Historically, women have not been particularly well-served by the sedentary nature and limited distribution of traditional games. But casual mobile games have made inroads with women who previously never had the opportunity or inclination to set aside time for lengthy gaming sessions. Prior to the advent of mobile, a person might sit in their basement and play for an hour at a time. Now, the games are always with us.

Many studies that examine the role of women as video game consumers approach them fundamentally as a single monolithic audience, which runs the risk of erasing the distinctive qualities and needs of middle-aged and older women. When you read that 30% of people watching YouTube gaming videos are female or that 21 million people subscribe to the top 10 female gamers on the platform, it might be easy, though debatable, to come away with the impression that women are already reasonably well served and well represented in the world of gaming content. But to what extent do these figures only reflect engagement of younger women? Without studies that break down these numbers across different age demographics, it’s difficult to say.

There is an increasing number of influencers on YouTube and Twitch who focus on mobile gaming, but these rising stars are largely male and almost always young. And the most popular female streamers don’t appear to overlap much, in age range or game preferences, with the women who are devoting the most time and money to video games. This shouldn’t come as a surprise, either, since we know that middle-aged women are playing on their phones and often on the go. How many of them are interested in sitting down for hours to watch never-ending game streams? And how interesting would it really be to watch someone else play Pokémon Go for hours? Games like this are only expected to hold players’ attention for a few minutes at a time.

Middle-aged women who play casual mobile games deserve to have gaming content that speaks to them on their own terms, and it’s not likely that streaming video is going to do the trick. Dedicated websites focused on mobile game news and reviews would likely do well, especially if they were optimized for reading on mobile devices. Smartphone apps that allow users to rate and review their favorite casual games with a social dimension (think Goodreads for games) would also have the potential to be more accessible than YouTube or Twitch. Middle-aged women have shown that they are willing to spend plenty of time and money on their favorite casual games, so why force them to rely on word of mouth to discover new favorites?

The media is paying lots of attention, especially in the wake of Gamergate, to the issue of how to make gaming a more inclusive space for women. And while this impulse is welcome and important, it tends to be concerned with girls and younger women who are actual or potential members of the hardcore gamer audience. Recruiting more female developers and creating games with woman-oriented narratives might revolutionize gaming culture, but it won’t necessarily change the lives of women who enjoy stealing a few spare moments to play Candy Crush. Identifying middle-aged women who like mobile games as “casual” gamers shouldn’t be a reason to write them off or neglect their unique information needs. But just as gaming culture hasn’t been quick to embrace them, they haven’t tended to identify with the culture. That’s why it will be easier to create new gaming content hubs from scratch with female casual gamers as the target audience than to rope in middle-aged women with a new Rooster Teeth series or a special section on Kotaku.

Just as the average middle-aged female gamer isn’t likely to join an Overwatch league, she probably won’t be well served by the type of gaming content that speaks to the people who do. But she represents a huge and well-off market segment whose spending power has yet to be fully tapped. King, the hugely successful developer of Candy Crush Saga and other popular casual games, was acquired by Activision Blizzard for $5.9 billion. But the games themselves are only the tip of the iceberg. Content creators have the opportunity to build an entirely independent media ecosystem for the casual gamer and her friends if they’re brave enough to throw away the blueprint and try some new ideas.

Food is Storytelling: Easter Ham, Culture, and Marketing

On Easter Sunday, lamb is what you’re most likely to see on the menu nearly everywhere in the world that Easter is celebrated. That is, everywhere but North America and Northern Europe. Lamb never experienced the level of popularity in America that it enjoys elsewhere, and so it is that ham is the central fixture of the meal. In the US, Easter ham is ubiquitous. It’s just a sliver of the 50 pounds of pork we eat per capita each year, but it has tremendous significance as the symbolic cornerstone of the holiday. So, how did the U.S. come to change up the traditional Easter meal?

First, a look at lamb in the US. In 2018, American meat companies produced roughly 26 billion pounds of beef compared to 150.2 million pounds of mutton and lamb (the only meat we eat less of is veal, while chicken is at the top of our list). The average American eats less than a pound of lamb a year. Lamb tends to be pricey, tricky for the inexperienced to cook, and an acquired taste for American palates. Those who did grow up eating lamb at home probably associate it with copious amounts of mint jelly, meant to mask the gamey flavor and leathery texture that come from overcooking it (which happens all too often).

But this wasn’t always the case. Lamb used to be more common when wool was in higher demand. As synthetic fabrics began to emerge in the 1940s and wool was no longer needed for uniforms and other material in the war effort, the need for sheep decreased as well. In the past 75 years, the number of sheep in the U.S. has gone from 56 million to just six million. With the popularity of lamb waning, the door was wide open for a new star of the Easter meal. And the timing for ham to step in was perfect.

From a production standpoint, ham also tended to make more sense than lamb. Sheep typically give birth in the spring, so serving lamb at Easter means a farmer has to sacrifice one of the flock, giving up a source of wool later in the year. In such a production-focused society, the loss of that single animal can be a hard sell. Conversely, pigs are traditionally slaughtered in the fall, when the weather cools and the meat can stay fresh in the lower temperatures as it is broken down. Back when refrigeration was rare or nonexistent, farmers would set aside the meat they hadn’t sold to be cured throughout the winter to preserve it. By spring, the cured meat was ready to eat – just in time for Easter.

But there is a social element to it as well. A ham is larger than a leg of lamb and easily serves a crowd. You can buy it fresh or frozen, prepared or ready for your own flourishes. Leftovers are easily preserved and readily adapted once the crowd is gone. In a country where interactions with the extended family are becoming less routine, a ham can accommodate these uncommon gatherings and provide a point of familial intimacy.

From last night’s dinner to feasts of celebration, food has always played a fundamental role in a nation’s culture. Whether eating an Easter ham or steamed fish off a green Brazilian banana leaf, food is always about more than just nutrients. We know how food can trigger memories both good and bad. Food is more than just what we eat every day; it shapes our sense of place and our sense of well-being.

Food is a medium of communication. There are subtle messages in everything food-related: who sits first, who cooks what, when to take the last piece of pizza, when you are comfortable enough to eat leftovers off someone’s plate. Food can be a history lesson. For example, many West African recipes feature tomatoes, a colonial cash crop that only arrived in Africa via the slave trade. The potato is a fixture of Irish cuisine, but it is a latecomer to the island’s history. By paying attention to culinary details such as these, we learn the intricacies of one of life’s most necessary features. Understanding these intricacies means better communication and better marketing strategies.

Bridging the Qual/Quant Divide

Concerns with “big data” have dominated conversations in the past few years. What does “big data” really mean? What constitutes “big” versus “small” data? How does “big data” lead to real insights? The promise of data hasn’t played out as planned, and we are starting to see a rethinking of how it should be used. Data has valuable uses, to be sure, but the belief that data would become The Thing that changes the world simply hasn’t manifested, because data can’t provide meaning to the human condition behind the numbers. Quite simply, questions regarding people and markets cannot be answered by brute-force number crunching.

It’s important to note that the social sciences have a long-standing relationship with analysis of, and interpretation through, quantitative data at all scales and granularities. We know that data is neither good nor bad. It’s what one does with the data that matters. It’s how one understands and works with the benefits and the curses, the strengths and the limitations, of the data that makes the information useful.

Data is comforting because it is fixed, it’s solid, it is an object. It lends a veneer of scientific legitimacy to the things we create. But with the promise of data-driven creative not being fulfilled, we have an opportunity to resist taking data as given, an opportunity to bring a more expansive lens to the collection, management and curation of data. Not just agencies, but the companies for whom we work as well. Only by looking for meaning in the data traces, the data “fumes,” will we be able to understand what is of value to people and to create messages that people value. To do this well, to do this better than we are currently doing it, we need better tools for dealing with data at all scales and granularities—from collection to curation to manipulation to analysis to the drawing of defensible insights and conclusions.

I am a strong enthusiast for and advocate of data triangulation, of mingling data from multiple sources at many levels of granularity. I’ve also always balked at the division of data into qualitative and quantitative, believing that behind every quantitative measure is a qualitative judgment imbued with a set of agendas. The distinction between qualitative and quantitative is of limited use and creates needless barriers between input and outcomes. The cornerstone of a good strategic plan or campaign is the blending of the qualitative and the quantitative, and the embracing and connecting of very different representations from disparate sources at multiple levels of granularity.

That’s because in an industry that has to create ideas related not just to how and when people interact with a brand, but also why, a flexible perspective on multi-faceted data is the path to a creative spark.
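
To make “blending the qualitative and the quantitative” slightly more concrete, here is a toy sketch of the simplest possible triangulation: attaching the qualitative judgment behind each number to the number itself. (The data and column names are invented purely for illustration.)

    import pandas as pd

    # Hypothetical illustration: a quantitative measure (monthly purchases)
    # joined to the qualitative interview code that helps explain it.
    quant = pd.DataFrame({
        "respondent": ["r1", "r2", "r3"],
        "monthly_purchases": [12, 3, 7],
    })
    qual = pd.DataFrame({
        "respondent": ["r1", "r2", "r3"],
        "interview_code": ["ritual/identity", "price-driven", "convenience"],
    })

    # One table, two registers of evidence: the count and its meaning.
    blended = quant.merge(qual, on="respondent")
    print(blended)

Trivial as it is, keeping the two registers in one view is what prevents the number from being taken as a given.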

As part of our evolution, we need to establish and foster deeper relationships with our colleagues, whether they are planners, designers, data scientists, statisticians, engineers, or developers. Data is a material for understanding, not a given from which we deduce that which lies latent within it, waiting to be revealed. Data analyses should be more than incremental refinement of what is already known. We should work with data to challenge what we know and to actively seek surprise. This is how we develop an understanding of what is meaningful: by understanding people in context.

Coffee: When Simple Things Change Everything

I, like many people, start my day with a steaming cup of coffee. When I was younger, the process of waking up began with a book, the newspaper, or occasionally a pad of paper as I reflected on something that felt meaningful at the time. Today, my coffee is taken with a shot of news via the iPad and a heaping mound of email. The consistent element through time has always been coffee. But even as coffee itself has in some ways remained the same, coffee culture and my personal practices around it, from the brand I drink to its role as a post-lunch pick-me-up, have changed over time.

Anthropologist William Roseberry wrote in 1996 that coffee drinkers would have had a tough time finding specialty coffee in the 1970s, pointing out that “the roasts were light and bland.” Coffee was uniform, a commodity, not unlike gasoline or saltines. Due to the changing tastes of a younger generation weaned on soda, consumption was in fact on the decline. As the now famous story goes, Kenneth Roman, Jr., the president of Ogilvy and Mather, made a suggestion to the company’s client, Maxwell House: emphasize quality, value, and image by creating segmented products to increase appeal. And to emphasize value, quality, and image, the consumer needed to be made more aware of what made coffee worth the price. Specific blends and origins were advertised, lifestyles were marketed, and roasting types were displayed for consumers to see. And so it was that the specialty coffee market was born.

Coffee was meant to permeate every aspect of life. And while many of the large manufacturers have seen market share decline over time, smaller roasters marketing individual brands have found a niche, even if it has meant a higher cost to consumers. Coffee moved from being a commodity to something we savor, we contemplate, we find meaning in. From the brands and styles we drink to the places in which we drink it, coffee has become personal. We’re identified by the brand we buy, by the coffee shops we frequent, and by the types of coffee we drink (a Cubano, a cup of fair trade dark roast, a bag of organic Blue Mountain, etc.). And we do love our coffee:

  • a third of the country’s population drinks coffee daily
  • half of the population drinks coffee at least weekly
  • two-thirds of the population has coffee at least occasionally

Among those who drink coffee, the average consumption is higher now than it has been in past years. The average person in the U.S. spends around $25 on coffee each week. A fair amount of that is spent out of home, but the coffee we do buy for home brewing isn’t the $1.99 stuff of yesteryear. In other words, we aren’t necessarily drinking more coffee than other generations, but we are spending more money on coffee. Younger generations in particular have a lot of disposable income but they aren’t spending it like their parents did. Instead of cars and homes, they’re spending it on a better food and beverage experience. Indeed, we’re seeing that the focus on quality and experience is finding its way into other generations. The cycle of change has taken root and is beginning to cut across age groups.

And all of this seems to point in a new direction for food and beverages in general. We’ve been taught to pay for coffee: for artistry, for geography, for experience. These factors contribute to a re-valuation of the beverage and its role in defining our identities, personally and culturally. Can the same be done with, say, a hamburger or a yard beer? By re-couching something that once represented modernity but has come to represent blandness, uniformity and mass production as something experiential, can we reinvent a category? I believe we can. Kenneth Roman, Jr., believed we could.

Coffee offers us a way to look at our relationship to the larger world and see that sometimes our choices are not really our own. Brands create us even as we create them. It is not the transaction, but the relationship that matters. 

Fire, Meat, and Spring

Spring is a celebration of life, warmth, and sunlight. It ushers in outdoor dining, drinking, and cooking. It’s time to brush the remaining winter detritus off the barbecue and throw masses of meat on the grill. It is also a time to ponder the notion that cooking over an open fire is an ancient ritual. Traces of ash found in the Wonderwerk cave in South Africa suggest that hominins were controlling fire at least 1 million years ago, the time of our direct ancestor Homo erectus. Burnt bone fragments found at the same site suggest that Homo erectus was cooking meat. Our ancestors largely ate whatever they could; berries, grasses, fruits, and bits of small animals were probably the main fare. We know early proto-humans had an eclectic, mostly vegetarian, diet 3 million years ago because of the shape and size of their teeth – small front teeth with short canines, and large, flat molars. They had mouths built for grinding, not for ripping apart flesh. Then, around 2.5 million years ago, meat became a very big deal.

Katherine Milton of the University of California, Berkeley, claims that early humans were forced into this dietary change because the forests of Africa were receding and these hominids simply couldn’t get enough plant matter to stay alive. In support of this claim, archaeologists have found 2.5-million-year-old stone tools clearly used to butcher animals and to smash bones to access the marrow. And for the next few million years, humans apparently stuffed themselves with raw meat. Then, something radical happened. Somewhere, somehow, somebody offered up that meat cooked. Maybe early humans stumbled across the charred remains of an antelope killed in a brush fire and took advantage of the moment. Maybe they lit a fire themselves for light and warmth and, while eating a bison, dropped a leg into the fire by mistake. Whatever the impetus, humans began eating cooked meats at least 700,000 years ago, and they never looked back.

But why bother with cooking meat at all? It takes time and energy to build a fire, create specific cuts, invent the grill, and then clean up. At the most basic level, cooked meat simply tastes better, and our ancestors were apparently turned on to this instantly. From the moment meat met fire, humans became gourmets. But the shift may also have evolutionary reasons. Harvard anthropologist Richard Wrangham speculates that controlled fire and cooked meat were implicated in human brain evolution. He asserts that humans actually may have been cooking their prey as far back as 1.6 million years ago, just when our genus was experiencing major brain expansion. Cooked meat, it turns out, is just as full of protein as raw meat but far easier to digest, and so natural selection might have opted for smaller guts. All that saved digestive energy may well have then gone into making bigger brains. If that position is right, the big human dietary shift was not so much the move to meat, but the move to cooked meat, which made us smarter and more inventive.

And then there is the cultural aspect of cooking. The oldest remains of obvious hearths are just 400,000 years old. Cooking requires cognitive skills that go beyond controlling fire, such as the ability to resist the temptation to scoff the ingredients, patience, memory and an understanding of the transformation process. With that comes greater organization, richer storytelling, and increasingly stronger social bonds. In other words, without fire and without the grill, even in its most primitive form, culture wouldn’t have developed, at least not in a way we would recognize today.

To be sure, meat has its critics. Criticism centers around modern manufacturing methods, which are often seen as lowering the quality of meat. Another part of the criticism is that meat derives from animals, raising ethical dilemmas (as well as a sense of unease or repulsion for some people). And meat is frequently perceived as unhealthy.

Regardless of how we perceive meat, the fact remains that its place in the evolution of human culture and the significance it holds at the table today are deeply rooted in the shaping of the human experience. Cooking has evolved into one of the most varied and inventive elements of human culture. We cook thousands of different types of animal, plant, and fungus using a dazzling array of techniques. We spend far more hours planning, shopping for, and preparing food than actually eating it. We then sit down to watch programs about it, hosted by people who have become millionaire household names. Meat’s status reflects the myriad cultural contexts in which it is socially constructed in people’s everyday lives, particularly with respect to religious, gender, communal, racial, national, family, and class identity. We barbeque, therefore we are. Something to ponder when gathering around the table.

A Different Approach to Focus Groups

When something becomes a running joke on every sitcom since the ’80s, you know it’s been overdone. The traditional focus group qualifies. But I don’t think the focus group, or more precisely something akin to it, is dead. It’s an imperfect methodology, but it has its place and it can be done well – if we rethink the process. Instead, there is the “un-focused” group: a gathering of individuals in a workshop or open discussion forum where they have access to a wide range of creative things to stimulate interaction and creation. The sample is smaller and the setting more intimate, which can demand more effort and resources, but the outputs are closer to what you want to know (namely, why people believe what they do) than you get from a traditional format.

Ultimately, the structure helps uncover perceptions, emotional ties, values and shared meaning, as well as activities and processes of use. Placing individuals in a more organic, open setting stimulates interaction and minimizes the biggest flaw of the traditional focus group: the Hawthorne Effect (the tendency of people to perform or perceive differently when they know they are being observed).

Preparing and Staging. Setting up the location is pivotal to the success of this research format. Rather than relying on a conference table and a two-way mirror, the goal is to create a more natural setting, one that strikes a balance between a living space and a professional space. One process utilizes two rooms: one where the “pre-discussion” will occur and another that will be used for the majority of the session.

In both rooms, furniture should be soft and arranged to invite collective interaction, meaning a mix of sofas and chairs. Traditionally, sofas are avoided in focus groups on the assumption that they infringe on personal space and make participants uncomfortable. But since the intention here is to disrupt preconceived notions of what takes place in a focus group, participants typically become comfortable quickly. Their psychological frame of what they are “supposed to do” breaks down, and they subconsciously see it as a chance to open up.

Floor lamps should dominate the room (not overhead lighting) and colors should reflect a home-like atmosphere. The idea is to create the kind of environment that facilitates conversation rather than a corporate or laboratory-like setting.

Of course, this also impacts the size of the sample. The traditional method is to gather anywhere from 8 to 12 participants. Changing the structure to a more conversational dynamic means reducing the sample to between 6 and 8 participants per session. While the larger sample certainly puts more bodies in a room, it doesn’t guarantee an increase in discussion or viewpoints, because the dynamic is not conducive to conversation. The smaller sample, coupled with the change in environment, fosters conversation and, consequently, better information.

The Discussion before the Discussion. Before the primary conversation begins, it is helpful to set the mood and get people relaxed with a brief pre-discussion, preferably around a meal. This is not just courtesy. Human beings are hardwired to respond to the act of sharing a meal. In every society, gathering around food signals trust and intimacy, promoting honest, open interactions. Beginning the focus group around a substantial meal (not simply snacks) makes people more apt to talk freely, priming them for discussion. This is also a good time to start informally discussing the main topic of the evening.

Introductions, personal stories, and an overview of the discussion should be emphasized during this phase.  If topics come up that will be revisited during the main discussion it is fine, but the moderator should redirect the conversation so that not all the information is revealed early on.  Allowing the participants to start talking primes them to provide more expansive, clear, and detailed responses during the main discussion. During this initial phase, no camera is used because the goal is to get participants into a relaxed, conversational state of mind.  By eliminating the camera, there is no threat of “performance” and participants become comfortable with each other and the moderator.  Since valuable information will no doubt begin to emerge at this stage, and since no camera is recording the event, it is imperative that the facilitator be a skilled note taker.

The Main Event. In the primary discussion area, changing the setting will alter how information is captured and relayed to the clients.  There are no hidden cameras and no two-way mirrors.  Cameras are set up in unobtrusive locations and addressed openly when the group comes together.  Information is then broadcast to the clients/viewers.  Once again, the reason is to be intentionally disruptive to the mental model people have about focus groups.  The disruption is interpreted as an expression of honesty and the camera is quickly forgotten.  The truth is that participants in traditional focus groups are already aware of and performing for the camera, even if they can’t see it – if nothing else, the mirror is a constant reminder they are being watched.

Facilitation is done using a dual moderator method, where one moderator ensures the session progresses smoothly, while another ensures that all the topics are covered.  In addition to ensuring all the material is covered and questions addressed, the dual moderator process helps maintain the conversational tone by shifting the power dynamic of the group.  Rather than a single person leading and everyone following, the second moderator (seated among the participants) breaks up the dynamic and redirects the exchange of information.   Opening up the information exchange process means having an opportunity for more open and honest disclosure and discussion in a setting where participants are validated.

The Follow Up. The final step is to close the session. Once a typical focus group is over, there is usually a bit of time where some participants linger and offer bits of information they felt weren’t expressed clearly or share stories with others.  In this model, participants are actively encouraged to spend 20 minutes or so talking with the moderators.  The first step is to turn the camera off.  The key point is that the end of a focus group represents an opportunity that is all too frequently overlooked.  Keeping the participants for a post-discussion phase often captures pieces of information that go unspoken or unarticulated during the main discussion.

Changing the structure of the focus group can be uncomfortable for both those moderating and those watching it.  It appears much less structured than traditional methods because the focus is getting the target audience to open up and give real answers, not perform for the camera.

Remember, the goal is to put participants in a state of mind where they feel in control, instead of simply telling the moderators what they want to hear. Changing the format to a more relaxed, expansive session means worrying less about data and more about generating creative thinking and new ideas. Giving yourself license to think broadly is the key to success.

Moderating vs. Learning

Let me state that I am not a moderator. At least, not a traditional one. I am an ethnographer, an anthropologist, and a strategist. And while both moderators and ethnographers speak to people, they are not the same thing. This isn’t just a matter of semantics; it goes to the heart of how practitioners execute their work and practice their craft.

A moderator is defined as a presenter, a host. A moderator is a person or organization responsible for running an event. A moderator is a person given special powers to enforce the rules of a collective event, be it a focus group, a forum, a blog, etc.  Moderation is the process of eliminating or lessening extremes. It is used to ensure consensus and limit deviation. In other words, moderators assume control and direct. They maintain power and tease out information that is essentially qualitative hypothesis testing. Understand, I have no problem with moderation and moderators – the approach is useful and has its place in the inquiry toolkit. But the practice of moderation is limited not just by its structure but its theoretical underpinnings.

An anthropological approach (ethnography in particular) aims to learn and understand cultural phenomena that reflect the knowledge and meanings guiding the life of a cultural group. Data collection methods are meant to capture the social meanings and ordinary activities of people in naturally occurring settings. Multiple methods of data collection may be employed to facilitate a relationship that allows for a more personal and in-depth portrait of the informants and their community. These can include participant observation, mind mapping, interviews, etc. In order to accomplish anything like neutral observation, a great deal of reflexivity on the part of the researcher is required.

Reflexivity asks us to explore the ways in which a researcher’s involvement with a particular study influences, acts upon and informs such research.  The goal is to minimize the power structure and allow people, our participants, to inform and guide the researcher according to what matters most to them, be it spoken or unspoken. In other words, we are not moderating, we are learning and exploring.

Ethnography’s strength comes from the ability to work fluidly with participants as opposed to moderating a setting or social interaction. A researcher who refers to him- or herself as a moderator of ethnography is indicating, through that choice of words, how they will do fieldwork, how they will interpret findings, and how they subconsciously see their role in the field. Again, moderation is a terrific tool, but it is not ethnographic. Nor is ethnography the same as moderation — they both have things to contribute, but they are not methodological equivalents. If you’re going to hire an ethnographer, it isn’t enough to ask what markets they will work in or how big the sample population will be. If you’re intending to conduct the research yourself, it isn’t enough to have people who are comfortable conducting interviews. Ask the questions: “What do you call yourself, and what’s your job when interacting with people?” Then get them to articulate not only their methods, but the rationale behind them. Be specific. It’s your money. Be sure you are paying for what you have commissioned.

Reveling in BBQ: Dining Out and Ritual

I am in smoked meat paradise this week. Kansas City is perhaps the focal point of this marvelous cuisine – whatever your allegiance among the near-religious sects that define American BBQ, whether Carolina or Texas style, KC is a defining setting. There are places where it is served by waiters in white jackets and places where the meat is smoked under corrugated metal roofs out back. Road crews and guys in three-piece suits rub shoulders and revel in the scent of smoke, sauce, and dry rub.

If the rituals of eating out have become less grand for most people over time, eating out still retains its aura as an “event.” The grand aspects are retained in expeditions to restaurants both simple and offensively overpriced. We spend not so much for the food as for the entertainment value and the naughty thrill of being (we hope) treated like more than average Joes in the routine of daily life. The family outing to the local BBQ joint still has an air of preparation and difference; it can still be used to coax youngsters to eat, and that very difference makes it “restorative.” Even the necessary lunch for workers who cannot eat at home has been made into a ritual event by the relatively affluent among them.

“Doing lunch” in the business world is regarded as a kind of sacred operation where, the mythology has it, the most important deals are made. A puritanical campaign against the “three-martini lunch” by then-President Carter had Americans as roused and angry as they had been over the tax on tea that sent their ancestors to their muskets. The business-meal tax deduction was fought for with passion, and the best the government could do was to reduce its value by 20%. There may not be a free lunch, but it sure as hell is deductible. Very little of this has to do with business, of course, and everything to do with status. Just to be having business lunches at all marks one down as a success in the world of business, for only “executives” (the new order of aristocracy) can have them.

At the other end of the scale, reverse snobbery asserts itself in the positive embrace of “junk food,” otherwise condemned as non-nutritious, vulgar, or even dangerous to our health. Junk food can be socially acceptable if indulged in as part of a nostalgia for childhood: the time when we were allowed such indulgences as “treats.” So giant ice cream sundaes with five different scoops of ice cream, maraschino cherries, pecans, chocolate sauce, and whipped cream; sloppy joes with french fries and gravy; milk shakes and root beer floats; hot dogs with mustard, ketchup, and relish – all these are still OK if treated as a kind of eating performance. Hot dogs at football games, or ice cream at the shore, are more or less de rigueur. The settings in which these are eaten vary from the simple outdoors to elaborate ice cream shops with bright plastic furniture and a battery of machines for producing the right combinations of fat, sugar, and starch. Ostensibly these are for children, but adults eat there with no self-consciousness and without the excuse of accompanying children. But for adults, as for children, these places are for “treats,” and so always remain outside the normal rules of nutrition and moderation.

We continue to make eating out special when we can. Romantic dinners, birthday dinners, anniversary dinners, retirement dinners, and all such celebrations are taken out of the home or the workplace and into the arena of public ritual. Only the snootiest restaurants will not provide a cake and singing waiters for the birthday boy. The family outing is specially catered for by special establishments – “Mom’s Friendly Family Restaurant” can be found in every small American town (although the wise saying has it that we should never eat at a place called Mom’s). But even in the hustle and bustle of these family establishments the individuality of the family is still rigidly maintained. No family will share a table with another. This is very different from the eating out of the still communalistic East. Lionel Tiger, in his fascinating description of Chinese eating, describes how people are crowded together in restaurants – strangers at the same table all eating from communal dishes. And far from having a reservation system, restaurants encourage a free-for-all in which those waiting in line look over the diners to find those close to finishing, then crowd behind their tables and urge them on.

The democratization of eating out is reflected in the incredible burgeoning of fast food joints and their spread beyond the United States. McDonald’s is the fastest-growing franchise in Japan, and has extended its operations to China. When it opened its first franchise in Beijing, it sold so many burgers so fast that the cash registers burned out. Kentucky Fried Chicken has now opened in Beijing, and has become the chic place to eat in Berlin. These are humble foods – a ground meat patty that may or may not have originated in Hamburg; a sausage of dubious content only loosely connected to Frankfurt; deep fried chicken that was a food of the rural American South; a cheese and tomato pie that probably came from Naples. But they have taken the world by storm in one of the greatest eating revolutions since the discovery of the potato. In a curious twist, two indigenous foods of the East are rapidly turning into the fast food specials of the yuppies who would not be seen dead eating the proletarian hamburger: the Japanese raw-fish sushi, and the Chinese dim sum lunch.

The proletariat has evolved its own forms of eating out. The transport café in Britain with its huge portions of bacon and eggs; the French bistro, which was a working-class phenomenon before reverse snobbery turned it into bourgeois chic, with its wonderful casseroles and bifteck-pommes frites; the Italian trattoria with its cheap seafood, again gentrified in foreign settings; the incomparable diner in America; the grand fish-and-chip warehouse in the north of England; the beer-and-sausage halls of Germany; the open-air food markets in all the warm countries. If we could do a speeded-up film of social change in the last fifty years, we would see a grand ballet in which eating moved out of the home and into the public arena on a scale that makes rural depopulation look like a trickle. Sociologists, as usual, have yet to figure out that it is happening, much less come up with an explanation.

To be literate in the world of eating out, to be even ahead of the trends (knowing that fantastic little Portuguese bistro that no one has discovered), is to demonstrate that one is on top of the complex cosmopolitan civilization of which eating out has come to be a metaphor. And so, it’s time to start thinking about which BBQ joint to hit for lunch.