Embracing the Whiteboard

Navigating nearly any company today means being well acquainted with whiteboards, sharpies, and post-it notes. But how they’re used differs from setting to setting. In most corporate environments, you rarely see them used as tools for innovation. Whiteboards in conference rooms are often devoid of any meaningful content, and those hanging in offices are typically to-do lists. Post-its are reminders to call such and such department, mini to-do lists, or notes to pick up milk on the way home. In contrast, agencies (whether design, advertising, or any other creatively inclined shop) use them as tools for ideation and collaboration. The reason is simple: in any creative firm, we sell time and thinking. Whiteboards and post-its are the tools by which we bring those things to life.

This isn’t to say that corporate environments lack the creative spark, but design and idea generation follow a different pattern there and are just one of many functions. Additionally, most corporate environments are not designed spatially to drive collaboration in what is, ultimately, a very public, very exposed way of creative problem solving. People are spread out over multiple floors and grouped by departmental function rather than by task. This means cross-functional teams are difficult to bring together in a single space where they can have discussions and working sessions with a shared work palette. In an agency setting, because we sell ideas above all else, the shared space becomes the norm out of necessity. Collaborative ideation is the central theme of most interactions, and the public expression of ideas emerges naturally from it. In other words, we must come together and work very collaboratively in order to fulfill our central functions: design and innovation. You have to see thoughts develop in real time, respond, build, break, and build again.

Why does any of this matter? Because what I’m suggesting is that for a corporate setting to become more adaptive, more creative, and more inspired, it needs to embrace an iterative, public work process. It needs to take a design thinking approach to daily problem solving. Design thinking is an approach to considering issues and resolving problems that reaches beyond professional design practice; it has been applied in business and to social issues alike. Design thinking includes “building up” ideas, with few or no limits on breadth during a brainstorming phase. This helps reduce fear of failure among the people involved in the work and encourages input and participation from a wide variety of sources in the ideation phases.

In order to survive in today’s complex world, organizations need to generate, embrace, and execute on new ideas. That takes creativity and a creatively capable workforce. It also means embracing the whiteboard and ideation in an open forum. It’s the secret sauce, or in evolutionary terms, it’s what keeps you fit. Organizations without it can’t compete. So pick up the sharpie, break out the post-its, and step up to the board. The end results will transform your company and your brand.


Simple Steps in Journey Maps

A customer journey map is a very simple idea: a diagram that illustrates the steps your customers go through in engaging with your company, whether through a product, an online experience, a retail experience, a service, or any combination of these. It’s nothing new; we’ve all made them or been involved in their development. But what makes for a good map?

First, complexity is, ultimately, your friend. Yes, this flies in the face of the “keep it simple, stupid” mantra, but there is a solid rationale for it. Journey maps are tools and need to account for as many actions, triggers, and processes as possible to ensure nothing is overlooked. Sometimes customer journey maps are “cradle to grave,” looking at the entire arc of engagement. Other times they may focus on a finite interaction or series of steps. In either case, how people maneuver through the process of making a buying decision is more complex than the channels they navigate: it is wrapped up in cultural and behavioral mechanisms that influence and shape every other action. That includes emotional elements that are often overlooked in designing a journey map. With that in mind, capturing the emotional, cultural, and symbolic elements of the journey is as important as capturing the functional and structural ones.

From a business perspective, a good map helps ensure you get the customer through the process and convert them into a long-term advocate. Brand love is big. A great out-of-box experience, for example, is like a little piece of theater: scripting it well guides the customer through the first steps of using their new purchase and minimizes expensive calls into help lines.

So, what elements make for a good journey map?

  • Actions: What actions are customers taking to move themselves on to the next stage?
  • Motivations: Why is the customer motivated to keep going to the next stage? What emotions are they feeling?
  • Questions: What are the uncertainties, jargon, or other issues preventing the customer from moving to the next stage? What are their pain points? What are the points of breakdown?
  • Barriers: What structural, process, cost, implementation, or other barriers stand in the way of moving on to the next stage?
  • Meaning: What role does the product, service, etc. play in their worldview? What meaning does it carry, and how is it connected to culture?

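To make these elements concrete, here is a minimal sketch of how a single journey stage might be captured as a data structure. The `JourneyStage` class and its field names are illustrative assumptions, not a standard journey-mapping schema; adapt them to whatever format your team already uses.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class JourneyStage:
    """One stage of a customer journey map (illustrative, not a standard schema)."""
    name: str                                             # e.g. "Awareness", "Purchase"
    actions: List[str] = field(default_factory=list)      # what customers do to move on
    motivations: List[str] = field(default_factory=list)  # why they keep going; emotions felt
    questions: List[str] = field(default_factory=list)    # uncertainties, jargon, pain points
    barriers: List[str] = field(default_factory=list)     # structural, process, cost obstacles
    meaning: List[str] = field(default_factory=list)      # cultural and symbolic significance

# A hypothetical stage, filled in from field research:
first_use = JourneyStage(
    name="First use",
    actions=["opens the box", "follows the quick-start card"],
    motivations=["excitement about the new purchase"],
    questions=["Which cable goes where?"],
    barriers=["setup requires creating an account"],
    meaning=["the purchase signals being an early adopter"],
)
```
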
Filling all these out is best done if grounded in customer research, preferably including in-depth ethnographic exploration. Asking customers to create mind maps and to chart their journeys for you while you are visiting them also helps create a richer map, producing a participatory structure that allows for greater clarity.

It’s worth noting that a journey is often non-linear. Depending on the complexity of the product or service, the need, the cost, and so on, people will move through different stages over longer periods of time. Personality also plays a role: someone may jump straight from awareness to purchase if they are not inclined to do research and have a strong recommendation from a friend, for example. But the underlying point remains: the more we can account for people’s thoughts, triggers, processes, and interrelated actions, the better we can tailor the experience to meet their needs.

In the end, there is no single right way to create a customer journey map, and your organization will need to find what works best for your situation, but there are clear elements that help ensure the most relevant outcomes. Covering all your bases makes for a better end result.


Objectifying Objectivity

“Science is a social phenomenon…It progresses by hunch, vision, and intuition. Much of its change through time is not a closer approach to absolute truth, but the alteration of cultural contexts that influence it. Facts are not pure information; culture also influences what we see and how we see it. Theories are not inexorable deductions from facts; most rely on imagination, which is cultural.” (Gould, 1981)

Business people often like to think of themselves as scientists of sorts: their science is practical and applied, but first and foremost it is grounded in objectivity and hypothesis testing, the hallmarks of scientific reasoning. Scientists seek concepts and principles, not subjective perspectives. They seek laws, truths, and testable, verifiable data. And we as a society, be we business people or designers, simply accept objectivity as a fact of life. Thus, we cling to a myth of objectivity: that direct, objective knowledge of the world is obtainable, that our preconceived notions or expectations do not bias this knowledge, and that this knowledge is based on an objective weighing of all relevant data on the balance of critical scientific evaluation. And here is where I will no doubt irritate some and flat out piss off others: objectivity is a myth. So from the outset, let’s be clear. I am not implying that objectivity is a fallacy in and of itself. That would be absolutist. Rather, like all myths, objectivity is an ideal for which we strive. The search for objectivity is an intrinsically worthwhile quest, but it should not get in the way of an insight, which frequently happens: if an insight can’t be quantified, it is treated as worthless. And that is a terrible, terrible thing.

In most business situations, the fact of the matter is that we choose which events, numbers, and so on we want to place value on and which we want to dismiss. This is occasionally conscious, but more often it is the product of our worldview, of what we hope to personally gain from the data we employ (e.g., a promotion), or simply of how tired we are when we sit in on our 300th interview at the end of a long day. Our beliefs and expectations exert a profound control on perception. In other words, we see what we expect to see, and we remember what we want to remember. If we believe that moms are the primary decision makers when it comes to buying groceries, we overlook the roles of other family members in the process, roles that may in fact be more important. So, while people misrepresent themselves in most traditional research (itself a topic for another day), we in fact twist reality one turn further. Out of all the occurrences going on in the environment, we select those that have some significance for us from our own egocentric position.

What all this means is that the first obstacle to objectivity is that perception reinforces opinion, and perception is biased in favor of expectations. The second is that our involvement, by definition, alters the situation. In 1927, Werner Heisenberg, in examining the implications of quantum mechanics, developed the principle of indeterminacy, more commonly known as the Heisenberg uncertainty principle, popularly glossed as the idea that the act of observation invariably changes the observed object. Whether we run a focus group or ask someone to fill out a 20-question survey, we are altering “normal” behavior and therefore how an idea, a product, or a brand would play out in real life. What this means is that probability has replaced determinism, and that scientific certainty is an illusion.

So what are we to do? How can we reconcile the profound success of the scientific method with the conclusion that perception and process make objectivity an unattainable ideal? Well, we accept a few things and move on. Science depends less on complete objectivity than most of us imagine. Business depends on it even less, especially as it pertains to things like advertising and branding. Admitting that allows us to use a biased balance to weigh and evaluate data, experiences, and good old-fashioned gut reactions. If we’re aware of the limitations of the ways we assess and measure our area of study, be it cereal shopping habits or car purchase decisions, we can work with those biases effectively. To improve the accuracy of a balance, we must know its sources of error.

Pitfalls of subjectivity abound. Some can be avoided entirely; some can only be reduced. The trick is knowing when and how to account for them in order to get at a real insight. Some of the more common pitfalls are:

  • Ignoring relevant variables: We tend to ignore variables we consider irrelevant, even when others have suggested they are significant. We ignore variables when we know of no way to remove them, because considering them forces us to admit that the experiment has ambiguities. If two variables may be responsible for an effect, we concentrate on the dominant one and ignore the other. The point is, we cherry-pick, and doing so leads to flaws.
  • Confirmation bias: During the time spent doing our initial research (that stuff we used to call a Lit Review), we may preferentially seek and find evidence that confirms our beliefs or preferred hypothesis. Thus, we select the experiment most likely to support our beliefs. This insidiously frequent pitfall allows us to maintain the illusion of objectivity (for us as well as for others) by carrying out a rigorous experiment, while nevertheless obtaining a result that is comfortably consistent with expectations and desires.
  • Biased sampling: Subjective sampling that unconsciously favors the desired outcome is easily avoided by randomization (see the sketch after this list). Too often, we fail to consider the relevance of this problem during research design, leading to suspect insights.
  • Missing important background characteristics: Research can be affected by a bias of the human senses, which are more sensitive to detecting change than to noticing constant detail. In the midst of collecting data, however you choose to think of it, it is easy to miss subtle changes in context. That, unfortunately, often leads to overlooking interrelationships between people, events, and the like. In other words, you overlook important information because you can’t tear yourself away from what you perceive to be important.
  • Confirmation bias in data interpretation: Data interpretation is subjective, and it can be dominated by prior belief. We should separate the interpretation of new data from the comparison of those data to prior results.

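To make the randomization point concrete, here is a minimal sketch of drawing interview participants at random rather than hand-picking them. The recruitment pool, sample size, and seed are hypothetical; in practice the pool would come from a screener or panel rather than a generated list.

```python
import random

# Hypothetical recruitment pool; in practice this comes from a
# screener, panel, or CRM export rather than a generated list.
participants = [f"respondent_{i:03d}" for i in range(1, 201)]

# Hand-picking "interesting" respondents invites the biased sampling
# described above; drawing the sample at random removes that choice.
random.seed(42)  # fixed seed only so the draw is reproducible and auditable
sample = random.sample(participants, k=12)

print(sample)
```
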
Ultimately, there is nothing wrong with embracing our subjective side, our interpretive side, our artistic side. This doesn’t necessarily mean rejecting the search for objectivity (although sometimes that is in fact the best course of action), but it does mean that when a client starts freaking out about our research results and, more importantly, our insights, we should be prepared to address it head on rather than trying to defend ourselves as “objective observers.” After all, I’ll be the first to say that I love mythology. That said, I don’t believe life sprang from the body of Ymir (look it up), but I do believe we can learn quite a bit from the story about our humanity. Similarly, if we embrace the realities of a subjective, or at least uncertain, world, we produce better thinking, better insights, and better results.


Innovation Is Creative Thinking With Purpose

Innovation is creativity with a purpose. It is the creation and use of knowledge with intent. It is not only creating new ideas but creating with a specific intention and with plans to take those ideas and make something that will find purpose in the world. Innovation is ideas in action, not the ideas themselves. Innovation is also a word that gets thrown about, often without much thought for the reality that it is, in fact, damn hard work. What makes it hard work isn’t the generation of new ideas, but the fact that turning complexities into simple, clear realities can be excruciatingly difficult, and that is precisely what must be done to make innovation useful. Simplicity and clarity are tough to achieve.

Innovation, whether we’re talking about product design or a marketing plan, should be simple, understandable, and open to a wide range of people. Innovation is becoming more of an open process, or it should be. The days of the closed-door R&D session are gone as we incorporate users, customers, stakeholders, subject matter experts, and employees into the process. Most companies are very good at launching, promoting, and selling their products and services, but they often struggle with the front end of the innovation process, the stages concerned with turning research and brainstorming insights into new ideas. The creating, analyzing, and developing side of things is often murky or done in a haphazard way. Articulating a simple system with clearly defined activities is central to bringing innovation to life; it lets a wide variety of stakeholders and collaborators understand and engage in making the beginning stage of the innovation process less confusing. It is as much art as it is science.

Easier said than done; you need a starting point. The simplest and most obvious element is to begin with a system of innovation best practices. You would typically generate multiple ideas and then logically synthesize the relevant ones into a well-developed concept. This is the no-holds-barred side of the idea generation process, and it allows people to begin exploring multiple trajectories. The key is to make sure the ideas don’t remain in a vacuum but are open to everyone. With that in mind, it is extremely important to ensure that ideas are captured and stored in one place, whether electronically or on a wall (literally) dedicated to the task. Truly breakthrough innovations are not solitary work; they are part of a shared experience in which ideas build on each other. They are the result of collaboration. This means the work involves others who help you generate ideas, develop concepts, and communicate those concepts in meaningful and memorable ways. The more open the process, the more likely it is to earn buy-in as people engage directly in the innovation process.
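
As one illustration of keeping ideas “in one place” electronically, here is a minimal sketch of an idea log that appends every captured idea to a single shared file. The file name, fields, and `capture_idea` helper are hypothetical assumptions for illustration; a wall of post-its dedicated to the task serves the same purpose.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical shared store; in practice this might be a wiki,
# a database, or literally a wall dedicated to the task.
IDEA_LOG = Path("idea_log.jsonl")

def capture_idea(author: str, idea: str, builds_on: list[str] | None = None) -> None:
    """Append an idea to the shared log so it doesn't remain in a vacuum."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "author": author,
        "idea": idea,
        "builds_on": builds_on or [],  # breakthrough ideas build on each other
    }
    with IDEA_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

capture_idea("maya", "Ship refills by subscription", builds_on=["reduce packaging waste"])
```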

Next, make sure people have access to all the information available. Research around a problem or a people is often lost once the report is handed over and the presentation of findings is complete. Central to the success of an innovation project is making sure themes and experiences are captured and easily available to the people tasked with generating ideas. So make it visible, make it simple, and make sure people return to the research (and the researchers) again and again. This is about more than posting personas on boards around a room. It involves thinking about and articulating cultural practices in such a way that they are visible, clear, and upfront. As people think and create, they should constantly be reminded of the people and contexts for which they are creating.

Once the stage is set, the problem and the hoped-for outcomes need to be made clear. This is fairly obvious, but it’s easy to drift away from the goals as ideas emerge, and people can simply forget why we’re innovating (or attempting to innovate, at any rate). So make the goals real; crystallize the problems and challenges. Make them visible at every step of the process. In addition to posting the goals, be sure to have space to pose questions that are grounded in the problems or opportunities for innovation. Categorize the types of questions and ask that people revisit them at every step of the way to ensure the process stays on track and remains grounded in the goals of the project. Categories of question types to consider might include:

  • How Will This Impact the Community: How can we help people, build communities, and reflect the cultures and practices for which we are designing?
  • What Is the Opportunity: How can we create something that provides a better life for the intended users?
  • Is It New or Are We Simply Tweaking Something: Will the thing we’re creating change the current situation, or are we simply creating a variation on an established theme?
  • How Will It Be Interpreted: What challenges do we face in getting people to accept the concepts, and what cultural or psychological barriers do we need to overcome?

These are just a few examples, but they represent the kinds of questions that might emerge when thinking about new designs, models, and messaging strategies. They will, of course, vary depending on the goals of the organization. Whether your goal is to build a new delivery system for medications or something as broad as changing the way people eat, the questions will change accordingly. The point is to have a space that opens up the dialog, not just a space to throw out ideas.

The point of all this is that in order to innovate, you need a simple, clear system that all the various contributors can use. Establish a system and stick to it. Identify and write down the areas in which you would like to innovate, get all the contributing parties involved, and make sure they engage in an open environment. Create questions to ask and areas to explore. Do that, and you will move from a complex mess to something that can be acted upon.

Getting Over Ourselves: Make Research Meaningful

The other day I was privy to a discussion with a researcher who was decidedly upset about having to “dumb down” a research report he had completed. The client was impressed by the depth of the work but equally frustrated by the academic register of the report and its use of jargon that was, realistically, more appropriate to anthropological circles than to a business environment. The researcher was upset by the client’s request to strip out discussions of agency, systems design theory, identity formation, and the like, and said something along the lines of “I had to learn this sort of thing in grad school, so they should take the time to do the same.” And while I think it would be lovely (and perhaps beneficial) if clients took such an interest in what we as researchers study, I have to say my views on the matter are very different. Making what we learn useful and meaningful to the client isn’t “dumbing it down”; it’s performing the task for which we were hired. We do not receive grants and write peer-reviewed articles when businesses hire us. Indeed, we may not write at all. What we do is produce insights and information that the client can use, from their design team to their CEO. If they aren’t asking us to become experts in supply chain models or accounting, then asking them to embrace often daunting concepts in socio-cultural theory is both unrealistic and, frankly, arrogant.

In general, companies hire ethnographers (anthropologists, sociologists, etc.) for a simple reason: to uncover new ways to achieve competitive advantage and make more money. This translates, most often, into research aimed at understanding new product opportunities, brand positioning, or salient marketing messages. Unfortunately, our clients often have no idea what to do with the research. But more often than not, the fault lies with the ethnographers, not the client, and it can be overcome if we apply ourselves just a bit.

Usefulness means being a guide, not a lecturer. So why are we so often disinclined to make what we do useful to business people? Part of it, I believe, stems from an unwillingness to address our own biases openly and honestly. There is a tendency among many of us coming out of traditionally academic disciplines to ridicule or react negatively to people in the business world. To be honest, it’s why we chose, say, an anthropology program over a business program in college. We often, consciously or subconsciously, hold these people in contempt and believe that it is they who should bend, not us, as if we are purveyors of secret knowledge and indeed a higher order of life than they are. We resent the idea that these lesser minds would have the audacity to ask us to curb our genius. And yet, there’s nothing new in making complex ideas useful, simple, or intelligible to people without advanced training in the social sciences. Look at any Anthro 101 course and you’ll realize we’ve been doing this for a very long time already. The fact of the matter is that in order to be relevant, to get the client excited about what we do, and to have them value the thinking behind our work, we have to remember that not everyone wants to be an expert in social science, any more than they want to be physicians or painters. They want us to be the experts and to know what we’re doing, including crafting what we learn into something they can grasp and apply even as they juggle their own workload. Balancing jargon with meaning is, or should be, the goal.

Another sticking point, I often think, stems from how many of us were trained. Traditionally, the researcher either works alone or as part of a very small team. The findings are analyzed, compiled, and shared with a small group of like-minded individuals. (We would like to believe that the number of people who care about what we write is larger, but the truth is most of us don’t give the work of our colleagues the attention it deserves, or at least the attention they would like to believe it deserves.) Our careers are built on proving our intelligence, which means making an intellectual case that addresses every possible theoretical angle in great detail. But in the business context, to whom are we proving our intelligence? And do they care? They hire us precisely because we are the experts, not so we can prove how smart we are. This isn’t to say that we can or should forego the rigor good ethnographic research demands, but it is to say that, whether we like it or not, most of the theoretical models we use should end up in the appendix, not in what the client sees, hears, or reads. Not only does theory overcomplicate our findings, it often comes across as either arrogant or needy, neither quality being something the client finds particularly enticing or reassuring.

The fact is that we do ourselves and the discipline a disservice by not learning the language and needs of business people. We complain that untrained people are slowly “taking over” ethnography, but nine times out of ten it’s our own doing. It isn’t enough to have a better grasp of the complexities of the human condition; we have to learn to translate our work and come to terms with the fact that the people hiring us have a very real, practical need for our findings. If our work cannot be translated into something that can be grasped in the first two minutes, then, in their way of seeing the world, it is money wasted.

Are we there to educate or to inform? Our work is frequently deemed too academic. So what does it mean when a client says, “It’s too academic”? It means they didn’t hire you to teach a class about anthropological theory and method. It means they don’t want to sit through a 100-page PowerPoint presentation before getting to the heart of the matter. They are in business and have neither the time nor the interests of a scholar or student. Again, this doesn’t mean you skip the work or fail to set up the points you are trying to make, but it does mean being cognizant of the fact that the audience hired you to improve their business and products, not to teach a course on anthropological methods. And indeed, some concepts are simply too complex to turn into a couple of bullet points. But that doesn’t mean we cannot try, particularly if we hope to get more work from the client.

The people with the luxury of sitting through a lengthy presentation, or who have the time to discuss the intricacies of social theory, rarely have significant authority in the decision-making process, and they rarely hold the purse strings. This isn’t to say that the two hours of research findings we present aren’t meaningful, but rather that presentations need to be tailored to the needs of the people buying your service (research) and product (recommendations). For the business community, the product is not knowledge but intelligence; in other words, knowledge that is actionable and useful. And to be fair, it’s worth remembering that the client is the one who pays for our work. If the idea of providing them with the service and product they need is unpalatable, then I would argue that the ethnographer needs to quit complaining and start exploring a different line of work, plain and simple.

The researcher, the research team, the creative team, the client, and everyone else invested in the project need to work toward turning information into something that can be acted upon. When the time comes to sit down with the client and explain what you learned, the ethnographer must be prepared to also explain, simply and clearly, what to do with it next.


“No Need to Worry About Usability, Cut the Budget.”

Yes, the title is sarcastic. In an economy where the lines between the brick-and-mortar and digital experience are increasingly blurred, usability has become a differentiating factor that shoppers consider, consciously and subconsciously, when making purchase decisions. Now, I will be the first to say that I am not always a fan of usability testing, because it has a capacity for oversimplifying a situation and for testing in a context-free environment. But it IS a central part of any good strategy and can’t be overlooked in a hyper-competitive market. Any business considering cheaping out on usability in the design process is asking to lose at the register.

A web search about a potential new purchase, be it a digital camera, a box of cereal, or a fishing rod, will uncover myriad reviews that include commentary on more than technical specs, nutritional values, and the like. The same searches turn up commentary on the ease of menu navigation, design elements, taxonomy, and usability. In other words, the overall user experience is as important as the products and services provided.

Unfortunately, organizations often use the wrong methods to understand their users, relying on a series of tests that have little relevance in the real world. A site may test well in the lab but fail when put into the hands of people trying to use it under real-life conditions, because many systems are designed with a minimal understanding of the end users or of the motivations and challenges they face when shopping. This is why those of us who do this sort of work for a living aren’t surprised when usability is criticized by reviewers.

Part of the reason good products and brands can’t break through the virtual wall is that, unlike a brick-and-mortar experience, where a consumer buys after handling the product, on the web the consumer experiences usability first, and only then decides to buy or to search other venues.

The web has become ubiquitous: it is accessed everywhere and on any number of devices, and it has become a natural part of the fabric of getting things done in modern life. Consequently, the methods used to understand web users under real-life conditions deserve special attention. We advocate testing and iterating on designs in the field precisely because it allows the design team to develop an interface that speaks both to functional needs and to those deep, human issues that defy quantitative processes. The point is simply this: context is often overlooked in the rush to get the product out the door. Cheap, fast, good may be the mantra in the current economic climate, but it frequently means the user, and the context in which he or she operates, are compromised. We suggest several key elements when designing:

  1. Don’t just think about who the end user may be; go out and meet them. We often design based on assumptions that are rooted in our own biases. Getting into the lives of users means uncovering nuances that we might normally overlook.
  2. Get past the clipboard. Asking questions is pivotal, but knowing the right question to ask is harder than it sounds. The process begins with identifying the various contexts in which a product or UI will be put to use. This may involve taking the product into a participant’s home and having both the participant and other members of the social network use it with all the external stresses going on around them. It may mean performing tasks as bullets fly overhead and sleep deprivation sets in. The point is to define the settings where use will take place, catalog stresses and distractions, and then learn how these stresses impact factors like performance, cognition, and memory.
  3. Design, build, break, and design again. Before investing the time and effort needed to build and code an interface, use paper prototyping and scenario testing to uncover both functional and conceptual bugs. Even if the product is the most amazing thing since the invention of the wheel, it won’t matter if it doesn’t fit into the cognitive scheme of the shopper.

Of course, usability is not the only factor that contributes to the buying decision, but it can be the deciding factor when a shopper is choosing between one company or brand and another. Not only does it shape their decisions functionally, it shapes their perceptions of the brand and of the quality of service they can expect to receive. Getting usability and the user experience right is central to the success of your brand.

Article Source: http://EzineArticles.com/5660314