Objectifying Objectivity

“Science is a social phenomenon…It progresses by hunch, vision, and intuition. Much of its change through time is not a closer approach to absolute truth, but the alteration of cultural contexts that influence it. Facts are not pure information; culture also influences what we see and how we see it. Theories are not inexorable deductions from facts; most rely on imagination, which is cultural.” (Stephen Jay Gould, 1981)

Business people often like to think of themselves as scientists of sorts – their science is practical and applied, but first and foremost it is grounded in objectivity and hypothesis testing, the hallmarks of scientific reasoning. Scientists seek concepts and principles, not subjective perspectives. They seek laws, truths and testable, verifiable data. And we as a society, be it the business person or the designer, simply accept objectivity as a fact of life. Thus, we cling to a myth of objectivity: that direct, objective knowledge of the world is obtainable, that our preconceived notions or expectations do not bias this knowledge, and that this knowledge rests on an objective weighing of all relevant data on the balance of critical scientific evaluation. And here is where I will no doubt irritate some and flat out piss off others – objectivity is a myth. So from the outset, let’s be clear. I am not implying that objectivity is a fallacy in and of itself. That would be absolutist. Rather, like all myths, objectivity is an ideal for which we strive. The search for objectivity is an intrinsically worthwhile quest, but it frequently gets in the way of an insight. When an insight is dismissed simply because it cannot be quantified, it loses its worth in our eyes. And that is a terrible, terrible thing.

In most business situations, the fact of the matter is that we choose which events, numbers, etc. we place value on and which we dismiss. This is occasionally conscious, but more often it is the product of our worldview, of what we hope to personally gain from the data we employ (e.g. a promotion), or simply of how tired we are when we sit in on our 300th interview at the end of a long day. Our beliefs and expectations exert a profound control over our perceptions. In other words, we see what we expect to see, and we remember what we want to remember. If we believe that moms are the primary decision makers when it comes to buying groceries, we overlook the roles of other family members in the process, roles that may in fact be more important. So, while people misrepresent themselves in most traditional research (itself a topic of discussion for a later date), we in fact twist reality one turn further. Out of all the occurrences going on in the environment, we select those that have some significance for us from our own egocentric position.

What all this means is that the first problem with obtaining objectivity is that perception reinforces opinion, and perception is biased in favor of expectations. The second is that our involvement, by definition, alters the situation. In 1927, Werner Heisenberg, in examining the implications of quantum mechanics, developed the principle of indeterminacy, more commonly known as “the Heisenberg uncertainty principle.” He showed that indeterminacy is unavoidable, because the process of observation invariably changes the observed object. Whether we run a focus group or ask someone to fill out 20 questions in a survey, we are altering “normal” behavior, and therefore how an idea, a product or a brand would play out in real life. What this means is that probability has replaced determinism, and that scientific certainty is an illusion.

So what are we to do? How can we reconcile the profound success of the scientific method with the conclusion that perception and process make objectivity an unobtainable ideal? Well, we accept a few things and move on. Science depends less on complete objectivity than most of us imagine. Business even less so, especially as it pertains to things like advertising and branding. Admitting that allows us to use a biased balance to weigh and evaluate data, experiences and good old-fashioned gut reactions. If we are aware of the limitations by which we assess and measure our area of study, be it cereal shopping habits or car purchase decisions, we can use those biases effectively. To improve the accuracy of a balance, we must know its sources of error.

Pitfalls of subjectivity abound. Some can be avoided entirely; some can only be reduced. The trick is to know when and how to account for them so we can still get at a real insight. Some of the more common pitfalls are:

  • Ignoring relevant variables: We tend to ignore variables we consider irrelevant, even when others have suggested they are significant. We ignore variables if we know of no way to remove them, because considering them forces us to admit that the experiment has ambiguities. If two variables may be responsible for an effect, we concentrate on the dominant one and ignore the other. The point is, we cherry-pick, and doing so leads to flawed conclusions.
  • Confirmation bias: During the time spent doing our initial research (that stuff we used to call a Lit Review), we may preferentially seek and find evidence that confirms our beliefs or preferred hypothesis. Thus, we select the experiment most likely to support our beliefs. This insidiously frequent pitfall allows us to maintain the illusion of objectivity (for us as well as for others) by carrying out a rigorous experiment, while nevertheless obtaining a result that is comfortably consistent with expectations and desires.
  • Biased sampling: Subjective sampling that unconsciously favors the desired outcome is easily avoided by randomization (see the short sketch after this list). Too often, we fail to consider the relevance of this problem during research design, leading to suspect insights.
  • Missing important background characteristics: Research can be affected by a bias of the human senses, which are more sensitive to detecting change than to noticing constant detail. In the midst of collecting data, however you choose to think of it, it is easy to miss subtle changes in context. That, unfortunately, often leads to overlooking interrelationships between people, events, etc. In other words, you miss important information because you can’t tear yourself away from what you already perceive to be important.
  • Confirmation bias in data interpretation: Data interpretation is subjective, and it can be dominated by prior belief. We should separate the interpretation of new data from the comparison of these data to prior results.
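To make the randomization point concrete, here is a minimal sketch in Python (purely illustrative; the participant pool and sample size are hypothetical) of drawing a random set of interviewees instead of hand-picking the people we expect to confirm our hunch:

```python
import random

# Hypothetical recruiting pool; in practice this would come from your screener.
pool = [f"participant_{i:03d}" for i in range(1, 201)]

rng = random.Random(42)               # fixed seed so the draw can be re-run and audited
interviewees = rng.sample(pool, 20)   # 20 people drawn without replacement, no hand-picking
print(interviewees)
```

The point of the fixed seed is accountability: anyone can reproduce the draw and verify that no one quietly swapped in a friendlier sample.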

Ultimately, there is nothing wrong with embracing our subjective side, our interpretive side, our artistic side. This doesn’t necessarily mean rejecting the search for objectivity (although sometimes that is in fact the best course of action), but it does mean recognizing that when a client starts freaking out about our research results and, more importantly, our insights, we should be prepared to address it head on rather than trying to defend ourselves as “objective observers”. After all, I’ll be the first to say that I love mythology. That said, I don’t believe life sprang from the body of Ymir (look it up), but I do believe we can learn quite a bit from the story about our humanity. Similarly, if we embrace the realities of a subjective, or at least causal, world, we produce better thinking, better insights and better results.

Getting Over Ourselves: Make research meaningful

The other day I was privy to a discussion in which a researcher was decidedly upset about having to “dumb down” the research report he had completed. The client was impressed by the depth of the work, but equally frustrated by the academic language of the report and the use of jargon that was, realistically, more appropriate to anthropological circles than to a business environment. The researcher was upset by the client’s request to strip out discussions of agency, systems design theory, identity formation, etc., and stated something along the lines of “I had to learn this sort of thing in grad school, so they should take the time to do the same.” And while I think it would be lovely (and perhaps beneficial) if clients took such an interest in what we as researchers study, I have to say my views on the matter are very different. Making what we learn useful and meaningful to the client isn’t “dumbing it down”; it’s performing the task for which we were hired. We do not receive grants and write peer-reviewed articles when businesses hire us. Indeed, we may not write at all. What we do is produce insights and information that they can use, from their design team to their CEO. If they aren’t asking us to become experts in supply chain models or accounting, then asking them to embrace often daunting concepts in socio-cultural theory is both unrealistic and, frankly, arrogant.

In general, companies hire ethnographers (anthropologists, sociologists, etc.) for a simple reason: to uncover new ways to achieve competitive advantage and make more money. This translates, most often, into research aimed at understanding new product opportunities, brand positioning, or salient marketing messages. Unfortunately, our clients often have no idea what to do with the research. But more often than not, the fault lies with ethnographers, not the client, and it can be overcome if we apply ourselves just a bit.

Usefulness means being a guide, not a lecturer. So why are we so often disinclined to make what we do useful to business people? Part of it, I believe, stems from an unwillingness to address our own biases openly and honestly. There is a tendency among many of us coming out of what have traditionally been academic disciplines to ridicule or react negatively to people in the business world. To be honest, it’s why we chose, say, an anthropology program over a business program in college. We often, consciously or subconsciously, hold these people in contempt and believe that it is they who should bend, not us, as if we, as keepers of secret knowledge, are indeed of a higher order of life than they. We resent the idea that these lesser minds would have the audacity to ask us to curb our genius. And yet, there’s nothing new in making complex ideas useful, simple, or intelligible to people without advanced training in the social sciences. Look at any Anthro 101 course and you realize we’ve been doing this for a very long time already. The fact of the matter is that in order to be relevant, to get the client excited about what we do, and to have them value the thinking behind our work, we have to remember that not everyone wants to be an expert in social science any more than they want to be physicians or painters – they want us to be the experts and to know what we’re doing, including crafting what we learn into something they can grasp and apply even as they try to balance their own workload. Balancing jargon with meaning is, or should be, the goal.

Another sticking point, I think, stems from how many of us were trained. Traditionally, the researcher is left to work either alone or as part of a very small team. The findings are analyzed, compiled and shared with a small group of like-minded individuals. (We would like to believe that the number of people who care about what we write is larger, but the truth is most of us don’t give the work of our colleagues the attention it deserves, or at least the attention they would like to believe it deserves.) Our careers are built on proving our intelligence, which means making an intellectual case that addresses every possible theoretical angle in great detail. But in the business context, to whom are we proving our intelligence? And do they care? They hire us precisely because we are the experts, not so we can prove how smart we are. This isn’t to say that we can or should forgo the rigor good ethnographic research demands, but it is to say that, whether we like it or not, most of the theoretical models we use should end up in the appendix, not in what the client sees, hears or reads. Leading with theory not only overcomplicates our findings, it often comes across as either arrogant or needy, neither of which the client finds particularly enticing or reassuring.

The fact is that we do ourselves and the discipline a disservice by not learning the language and needs of business people. We complain that untrained people are slowly “taking over” ethnography, but nine times out of ten it’s our own doing. It isn’t enough to have a better grasp of the complexities of the human condition; we have to learn to translate our work and come to terms with the fact that the people hiring us have a very real, practical need for our findings. If our work cannot be translated into something that can be grasped in the first two minutes, then in their way of seeing the world, it is money wasted.

Are we there to educate or to inform? Our work is frequently deemed too academic. So what does it mean when a client says, “It’s too academic”? It means they didn’t hire you to teach a class about anthropological theory and method. It means they don’t want to sit through a 100-page PowerPoint presentation before getting to the heart of the matter. They are in business and have neither the time nor the interest of a scholar or student. Again, this doesn’t mean you skip the work or fail to set up the points you are trying to make, but it does mean being cognizant of the fact that the audience hired you to improve their business and products, not to teach a course on anthropological methods. And indeed, some concepts are simply too complex to turn into a couple of bullet points. But that doesn’t mean we cannot try, particularly if we hope to get more work from the client.

The people with the luxury of sitting through a lengthy presentation or who have the time to discuss the intricacies of social theory rarely have a significant amount of authority in the decision-making process, and they rarely hold the purse strings.  This isn’t to say that those two hours of research findings we present aren’t meaningful, but rather that presentations need to be tailored to the needs of the people buying your service (research) and product (recommendations). For the business community, the product is not knowledge, but intelligence.  In other words, the product is knowledge that is actionable and useful. And to be fair, it’s worth noting that the client is the one who pays for our work. If the idea of providing them with the service and product they need is unpalatable, then I would argue that the ethnographer needs to quit complaining and start exploring a different line of work, plain and simple.

The researcher, research team, creative team, client, and everyone invested in the project need to work toward turning information into something they can act upon. When the time comes to sit down with the client and explain what you learned, be prepared to also explain, simply and clearly, what to do with it next.