Objectifying Objectivity

“Science is a social phenomenon…It progresses by hunch, vision, and intuition. Much of its change through time is not a closer approach to absolute truth, but the alteration of cultural contexts that influence it. Facts are not pure information; culture also influences what we see and how we see it. Theories are not inexorable deductions from facts; most rely on imagination, which is cultural.” Gould, 1981

Business people often like to think of themselves as scientists of sorts – their science is practical and applied, but first and foremost it is grounded in objectivity and hypothesis testing, the hallmarks of scientific reasoning. Scientists seek concepts and principles, not subjective perspectives. They seek laws, truths and testable, verifiable data. And we as a society, be it the business person or the designer, simply accept objectivity as a fact of life. Thus, we cling to a myth of objectivity: that direct, objective knowledge of the world is obtainable, that our preconceived notions or expectations do not bias this knowledge, and that this knowledge rests on an objective weighing of all relevant data on the balance of critical scientific evaluation. And here is where I will no doubt irritate some and flat out piss off others – objectivity is a myth. So from the outset, let’s be clear. I am not saying that objectivity is a fallacy in and of itself. That would be absolutist. Rather, like all myths, objectivity is an ideal for which we strive. The search for objectivity is an intrinsically worthwhile quest, but it frequently gets in the way of an insight: if an insight can’t be quantified, it gets dismissed as worthless. And that is a terrible, terrible thing.

In most business situations, the fact of the matter is that we choose which events, numbers, etc. we want to place value on and which we want to dismiss. This is occasionally conscious, but more often it is the product of our worldview, of what we hope to personally gain from the data we employ (e.g. a promotion), or simply of how tired we are when we sit in on our 300th interview at the end of a long day. Our beliefs and expectations exert profound control over our perceptions. In other words, we see what we expect to see, and we remember what we want to remember. If we believe that moms are the primary decision makers when it comes to buying groceries, we overlook the roles of other family members in the process, roles that may in fact be more important. So, while people misrepresent themselves in most traditional research (itself another topic of discussion for a later date), we in fact twist reality one turn further. Out of all the occurrences going on in the environment, we select those that have some significance for us from our own egocentric position.

What all this means is that the first obstacle to objectivity is that perception reinforces opinion, and perception is biased in favor of expectations. The second is that our involvement, by definition, alters the situation. In 1927, Werner Heisenberg, examining the implications of quantum mechanics, developed the principle of indeterminacy, more commonly known as “the Heisenberg uncertainty principle.” He argued that indeterminacy is unavoidable because the process of observation invariably changes the observed object. Whether we run a focus group or ask someone to fill out 20 questions in a survey, we are altering “normal” behavior and therefore how an idea, a product or a brand would play out in real life. What this means is that probability has replaced determinism, and that scientific certainty is an illusion.

So what are we to do? How can we reconcile the profound success of the scientific method with the conclusion that perception and process make objectivity an unobtainable ideal? Well, we accept a few things and move on. Science depends less on complete objectivity than most of us imagine. Business even less so, especially as it pertains to things like advertising and branding. Admitting that allows us to use a biased balance to weigh and evaluate data, experiences and good old-fashioned gut reactions. If we’re aware of the limitations by which we assess and measure our area of study, be it cereal shopping habits or car purchase decisions, we can use those biases effectively. To improve the accuracy of a balance, we must know its sources of error.

Pitfalls of subjectivity abound. Some can be avoided entirely; others can only be reduced. The trick is to know when and how to account for them so we can get at a real insight. Some of the more common pitfalls are:

  • Ignoring relevant variables: We tend to ignore those variables that we consider irrelevant, even if others have suggested that these variables are significant. We ignore variables if we know of no way to remove them, because considering them forces us to admit that the experiment has ambiguities. If two variables may be responsible for an effect, we concentrate on the dominant one and ignore the other. The point is, we cherry-pick, and doing so leads to flawed conclusions.
  • Confirmation bias: During the time spent doing our initial research (that stuff we used to call a Lit Review), we may preferentially seek and find evidence that confirms our beliefs or preferred hypothesis. Thus, we select the experiment most likely to support our beliefs. This insidiously frequent pitfall allows us to maintain the illusion of objectivity (for us as well as for others) by carrying out a rigorous experiment, while nevertheless obtaining a result that is comfortably consistent with expectations and desires.
  • Biased sampling: Subjective sampling that unconsciously favors the desired outcome is easily avoided by randomization. Too often, we fail to consider the relevance of this problem during research design, leading to suspect insights.
  • Missing important background characteristics: Research can be affected by a bias of the human senses, which are more sensitive to detecting change than to noticing constant detail. In the midst of collecting data, however you choose to think of it, it is easy to miss subtle changes in context. That, unfortunately, often leads to overlooking interrelationships between people, events, etc. In other words, you overlook important information because you can’t tear yourself away from what you perceive to be important.
  • Confirmation bias in data interpretation: Data interpretation is subjective, and it can be dominated by prior belief. We should separate the interpretation of new data from the comparison of these data to prior results.

Ultimately, there is nothing wrong with embracing our subjective side, our interpretative side, our artistic side. This doesn’t necessarily mean rejecting the search for objectivity (although sometimes that is in fact the best course of action), but it does mean that when a client starts freaking out about our research results and, more importantly, our insights, we should be prepared to address it head on rather than trying to defend ourselves as “objective observers”. After all, I’ll be the first to say that I love mythology. That said, I don’t believe life sprang from the body of Ymir (look it up), but I do believe we can learn quite a bit from the story about our humanity. Similarly, if we embrace the realities of a subjective, or at least causal, world, we produce better thinking, better insights and better results.