Playtime and Innovation, Part 2

In human evolution there are many manifestations of the importance of play, because play is what enables the individual to discover new approaches to dealing with the world. In fact, the most creative individuals often exhibit great playfulness. Many theories suggest that the experiences, skills, problem-solving abilities and knowledge needed for serious purposes later in life are actively acquired or enhanced through playful engagement with the environment when we are kids. Versatility, flexibility and creativity in adulthood are causally linked to play earlier in life. The implication is that while we’re all in the serious business of Business, we should perhaps take a step back and explore a playful approach when thinking about innovation.

Many of us will acknowledge the importance of the creative ability to find novel solutions, and that it has had significant impact from our ancestors until today in terms of surviving, reproducing and evolving our world. In our context today, the survival of business organizations is closely related to their emphasis on playfulness, and hence creativity, in the organization, the company culture and the employees. This means being ready to use more flexible and open approaches, which is something that companies traditionally may not be used to. These companies typically feel safer promoting standard tools and rigid methods to avoid risk and reduce uncertainty by leaning on inflexible processes, gates and so on. However, we now know that many companies that use these rigid, inflexible approaches to solving problems develop seemingly good solutions that turn out, in practice, to be of little value.

Today’s organizations need to create new situations and a particular kind of positive mood, because this mindset, culture, and approach can be especially beneficial in fostering creativity and aiding the generation of novel ideas that can be transformed into innovation. Organizations won’t be able to convince their best people to take risks if doing so entails a possible cost to their careers. Similarly, employees are unlikely to expose themselves to being chastised for trying to develop new approaches that are more flexible and open. The “playing mood” is a solution that facilitates the creation of these special environments and behaviors. Successful organizations have recognized that they need to tolerate and support differences among employees, and they are encouraging a company culture that allows time to play, to break established patterns and to combine actions and thoughts in new ways. These companies know that play is an effective mechanism for encouraging creativity and consequently facilitating innovation.

Play involves a certain type of mood and state of mind; a special experience generally outside our normal behaviors and environments, beyond roles and expertise, a kind of “lack of inhibition” situation that opens our minds. The power of play is this special “playful mood,” a powerful driver for making creativity and innovation happen, because play involves breaking normal rules. From play emerges a new perspective, a source for producing new ideas. Play is also a “cognitive tool” that provides flexibility, collaboration, novelty and openness, while creating a sense of inclusion, where people share meaning with one another. It is a strategic tool that can be used at any time to solve a new challenge, to unleash creativity.


Designing for the Aging

People still think that elderly means pathetic, uninventive and unfortunate.  Yes, there is the occasional nod to the statistics showing, for instance, that Boomers have more disposable income than Gen Y folks, and people acknowledge that the elderly market is the largest market there has ever been, making it a group worth pursuing financially.  Early retirement means that a sizeable part of this market is commercially significant and has the money, and the willingness, to pay for design. But that’s often where the story ends.  Before we can really address this population, we have to fundamentally shift our view of the aging and rethink our own perceptions.  We need to ask: what is such a market looking for?

Why is that so important?  Because focusing too intently on designing for this population means, more often than not, that we are in fact designing for ourselves – our preconceived notions of the aged, our subconscious fears of growing old, our cultural biases about what it means to be over 65, and so on.  Consequently, I believe that designing just for the elderly is too narrow an approach and therefore potentially problematic. At a time when people are living longer and longer, many over 65 have the physical and mental capacity of people twenty years younger, engage in demanding professional endeavors and personal activities, and would hate to be called “elderly.” They might have a different time horizon than younger people, but they are not less able. Redesigning a bottle top is one thing; redesigning a dashboard is quite another.

An additional issue is that many of the problems that some elderly face are not unique to them, but also affect a host of other people – the disabled, parents with strollers, young children, people with various health problems, etc. Rather than narrowly focusing on the elderly, a broader approach helps ensure that the design addresses the real underlying problem. Beyond being able to address a wider range of markets, there is also a social advantage: people don’t feel excluded or singled out.

It’s also important to remember that, as with all people, being older doesn’t mean that your world is restricted to interaction with a single, similar population.  People do not exist in vacuums.  Yes, I’ll say it again – CONTEXT IS EVERYTHING. Elderly populations are part of a shared community that spans age cohorts and defies easy classification.  Consequently, you are not designing for the elderly, but rather for a host of interactions and agents.

With these overarching considerations, what, then, are some of the specific things to keep in mind as you design? There are several.

  • Avoid designing “special” products for elderly people. Most elderly people are not disabled. There are exceptions, of course, but typically it is the condition of the disability, not the age of the user, that is the issue. A shoe, a telephone or a saucepan designed for a disabled foot or hand may not suit an elderly foot or hand, and designing for an elderly hand or foot to the exclusion of other populations will certainly not suit the broader population. Provided elderly people are considered at the right life stage, most products should be suitable for young and old. Design for the young and you exclude the old. Design for the old and you include the young.  You also run the risk of leaving your would-be elderly clientele feeling singled out.  While this may make a population feel catered to, it also runs the risk of feeling like pandering.
  • Good body use (what we should do) is far more important than what we can do. Ergonomic data may demonstrate what the body is capable of reaching, but it is not part of design or ergonomic education to know whether such actions are healthy or natural. Elderly people may be able to reach a certain height, but should they?  The same can be asked of children. Peter Laslett demonstrated not only the special potential of people in the “Third Age” but also some of the similarities between older and younger people. Provided certain things are understood, products for elderly people can suit younger ones, too.
  • Remember what, where, when, who and how. With the exception of hermits, the elderly do not live in complete isolation. They are part of the broader social dialog and members of the cultural milieu. Yet designing for the elderly all too often revolves around methods and assumptions that treat this population as if they were lab experiments or living in complete isolation.  Products live shared lives, as do the people who use them. That means designing with multiple users in mind and designing according to the contexts in which products will be used.

Anthropology and Usability: Getting Dirty

There are significant methodological and philosophical differences between ethnographic processes and laboratory-based processes in the product development cycle.  All too frequently, proponents of these data collection methods are set at odds, with members on both sides pointing fingers and declaring the shortcomings of the methods in question.  Methodological purity, ownership and expertise are debated, with both ends of the spectrum becoming so engrossed in justifying themselves that the fundamental issue of product development is compromised: namely, will the product work, in the broadest sense of the term?  One side throws out accusations of a lack of measures and scientific rigor.  The other side levels accusations about the irrelevance of a sterile, contextually detached laboratory environment.  At the end of the day, both sides make valid points, and the truth, such as it is, lies somewhere between the two extremes in the debate.  As such, we suggest that rather than treating usability and exploratory work as separate projects, a mixed approach be used.

So why bridge methodological boundaries? Too frequently, final interface design and product planning begin after testing in a laboratory setting has yielded reliable, measurable data.  The results often prove or disprove the functionality of a product and reveal any errors that may take place during task execution.  Error and success rates are tabulated and tweaks are made to the system in the hopes of increasing performance and/or rooting out major problems that may delay product or site release and diminish user satisfaction.  The problem is that while copious amounts of data are produced and legitimate design changes ensue, they do not necessarily yield data that are valid in a real-life context.  The data are reliable in a controlled situation, but may not be valid when seen in context. It is perfectly possible to obtain perfect reliability with no validity when testing. But perfect validity would assure perfect reliability, because every test observation would yield the complete and exact truth.  Unfortunately, neither perfection nor quantifiable truth exists in the real world, at least as it relates to human performance.  Reliable data must be supported with valid data, which can best be found through field research.
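To make the reliability/validity distinction concrete, here is a toy sketch in Python. The numbers are invented for illustration, not real test data: repeated lab measurements of task completion time cluster tightly around their own mean (high reliability) while systematically underestimating what the same task costs in the field (low validity).

```python
import statistics

# Hypothetical task completion times in seconds (illustrative only).
lab_times = [12.1, 12.3, 11.9, 12.0, 12.2]     # controlled lab sessions
field_times = [19.5, 31.2, 24.8, 45.0, 22.3]   # same task, in context

lab_mean = statistics.mean(lab_times)
lab_spread = statistics.stdev(lab_times)        # small: measurements agree
field_mean = statistics.mean(field_times)

# Reliability: repeated lab runs are highly consistent with one another.
print(f"lab mean {lab_mean:.1f}s, stdev {lab_spread:.2f}s")

# Validity: the consistent lab figure badly underestimates real-world use.
print(f"field mean {field_mean:.1f}s; lab underestimates by "
      f"{field_mean - lab_mean:.1f}s")
```

The point of the sketch is simply that a tight spread in the lab column tells you nothing about the gap between the two columns, and it is that gap that field research exposes.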

Increasingly, people have turned to field observations as an effective way of checking validity.  Often, an anthropologist or someone using the moniker of “ethnographer” enters the field and spends enough time with potential users to understand how environment and culture shape what they do.  Ideally, these observations lead to product innovation and improved design.  At this point, unfortunately, the field expert is dropped from the equation and the product or website moves forward with little cross-functional interaction. The experts in UI take over and the “scientists” take charge of ensuring the product meets measures that are, often, somewhat arbitrary.  The “scientists” and the “humanists” do not work hand in hand to ensure the product works as it should in the hands of users going about their daily lives.

Often the divide stems from the argument that the lack of a controlled environment destroys the “scientific value” of research (a similar argument is made over the often small sample sizes), since by its very nature qualitative research always has a degree of subjectivity.  But to be fair, small performance changes are given statistical relevance when they should not be.  In fact, any and all research involves degrees of subjectivity and personal bias.  We’re not usually taught this epistemological reality by our professors when we learn our respective trades, but it is true nonetheless.  Indeed, the history of science offers countless examples of hypothesis testing and discovery that would, if we apply the rules of scientific method used by most people, be considered less than scientifically ideal. James Lind’s discovery of the cure for scurvy and Henri Becquerel’s discovery of radioactivity serve as two such examples: bad science from the standpoint of sample size and environmental control, brilliant science if you’re one of the millions of people to have benefited from these discoveries.  The underlying problem is the assumption that testing can exist in a pure state and that testing should be pristine.  Unfortunately, if we miss the context we usually overlook the real problem. A product may conform to every aspect of anthropometrics, ergonomics, and established principles of interface design.  It may meet every requirement and have every feature potential consumers asked for or commented on during the various testing phases. You may get an improvement of a second in reaction time in a lab, but what if someone using the interface is chest deep in mud while bullets fly overhead?  Suddenly something that was well designed in a lab becomes useless because no one accounted for shaking hands, decreased computational skills under physical and psychological stress, or the fact that someone is lying on their belly as they work with the interface.
Context, and how it impacts performance with a web application, software application, or any kind of UI, now becomes of supreme importance, and knowing the right question to ask and the right action to measure becomes central to usability.

So what do we do?  We combine elements of ethnography and means-based testing, of course, documenting performance and the independent variables as part of the evaluation process.  This means detaching ourselves from a fixation with controlled environments and the subconscious (sometimes conscious) belief that our job is to yield the same sorts of material that would be used in designing, say, the structural integrity of the Space Shuttle.  The reality is that most of what we design is more dependent on context and environment than on a 1% increase in performance speed.  Consequently, for field usability to work, the first step is being honest about what we can do. A willingness to adapt to new or unfamiliar methodologies is one of the principal requirements for testing in the field, and one of the primary considerations in determining whether a team member should be directly involved.

The process begins with identifying the various contexts in which a product or UI will be put to use.  This may involve taking the product into participants’ homes and having them use it with all the external stresses going on around them.  It may mean performing tasks as bullets fly overhead and sleep deprivation sets in.  The point is to define the settings where use will take place, catalog stresses and distractions, then learn how these stresses impact performance, cognition, memory, etc.  For example, if you’re testing an electronic reading device, such as the Kindle, it would make sense to test it on the subway or when people are lying in bed (and thus at an odd angle), because those are the situations in which most people read – external variables are included in the final analysis and recommendations.  Does the position in bed influence necessary lumens or button size? Do people physically shrink in on themselves when using public transportation, and how does this impact use?  The idea is simply to test the product under the lived conditions in which it will find use.  Years ago I did testing on an interface to be used in combat.  It worked well in the lab, but under combat conditions the interface was essentially useless.  What seemed like minor issues dramatically changed the look, feel, and logic of the site. Is it possible to document every variable and context in which a product or application will see use?  No. However, the bulk of these situations will be uncovered, and those which remain unaddressed frequently produce the same physiological and cognitive responses as the ones that were.  Of course, we do not suggest foregoing measurement of success and failure, time on task, click path or anything else.  These are still fundamental to usability.  We are simply advocating understanding how the situation shapes usability and designing with those variables in mind.

Once the initial test is done, we usually leave the product with the participant for about two weeks, then come back and run a different series of tests.  This allows the testing team to measure learnability as well as providing test participants time to catalog their experience with the product or application.  During this time, participants are asked to document everything they can about not only their interaction with the product, but also what is going on in the environment.  Once the research team returns, participants walk us through behavioral changes that have been the result of the product or interface.  There are times when a client gets everything right in terms of usability, but the user still rejects the product because it is too disruptive to their normal activities (or simply isn’t relevant to their condition).  In that case, you have to rethink what the product does and why.
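As a rough sketch of how the two-visit comparison might be tabulated, consider the following Python example. The task names, timings and the 20% threshold are all hypothetical; the idea is simply that a task whose completion time barely improves after two weeks of unsupervised practice is flagged as a possible design problem rather than a learning-curve problem.

```python
# Hypothetical per-task completion times (seconds) from the two visits.
first_visit = {"setup": 95.0, "send_message": 40.0, "change_settings": 70.0}
second_visit = {"setup": 30.0, "send_message": 18.0, "change_settings": 65.0}

# Illustrative cutoff: below this improvement, practice isn't helping much.
LEARNABILITY_THRESHOLD = 20.0  # percent

for task, before in first_visit.items():
    after = second_visit[task]
    improvement = (before - after) / before * 100
    # Flag tasks that resist learning; these merit a design review.
    flag = "  <- investigate" if improvement < LEARNABILITY_THRESHOLD else ""
    print(f"{task}: {before:.0f}s -> {after:.0f}s "
          f"({improvement:.0f}% faster){flag}")
```

In this invented data set, "change_settings" improves only about 7% between visits and would be flagged, prompting the kind of follow-up walkthrough with the participant described above.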

Finally, there is the issue of delivery of the data.  Nine times out of ten the reader is looking for information that is quite literal and instructional.  Ambiguity and/or involved anecdotal descriptions are usually rejected in favor of what is more concrete. The struggle is how to provide this experience-near information.  It means doing more than providing numbers.  Information should be broken down into a structure such that each “theme” is easily identifiable within the first sentence.  More often than not, specific recommendations are preferred to implications and must be presented to the audience in concrete, usable ways.  Contextual data and its impact on use need the same approach.

A product or UI design’s usability is only relevant when taken outside the lab.  Rather than separating exploratory and testing processes into two activities that have minimal influence on each other, a mixed field method should be used in most testing.  In the final analysis, innovation and great design do not stem from one methodological process, but a combination of the two.

Innovation Is Creative Thinking With Purpose

Innovation is creativity with a purpose. It is the creation and use of knowledge with intent. It is not only creating new ideas but creating with a specific intention, with plans to take those ideas and make something that will find purpose in the world. Innovation is ideas in action, not the ideas themselves. Innovation is also a word that gets thrown about, often without really considering the reality that it is, in fact, damn hard work. What makes it hard work isn’t the generation of new ideas, but the fact that turning complexities into simple, clear realities can be excruciatingly difficult, yet that is precisely what needs to be done to make innovation useful. Simplicity and clarity are tough to achieve.

Innovation, whether we’re talking about product design or a marketing plan, should be simple, understandable, and open to a wide range of people. Innovation is becoming more of an open process, or it should be. The days of the closed-door R&D session are gone as we incorporate users, customers, stakeholders, subject matter experts, and employees into the process. Most companies are very good at launching, promoting and selling their products and services, but they often struggle with the front end of the innovation process, the stages dealing with turning research and brainstorming insights into new ideas.  The creating, analyzing, and developing side of things is often murky or done in a haphazard way. Articulating a simple system with clearly defined activities is central to bringing innovation to life and to involving a wide variety of stakeholders and collaborators who can understand and engage in making the beginning stage of the innovation process less confusing. It is as much art as it is science.

Easier said than done – you need a starting point. The simplest and most obvious way to begin is with a system of innovation best practices. You would typically generate multiple ideas and then synthesize the relevant ones into a well-developed concept. This is the no-holds-barred side of the idea generation process, and it allows people to begin exploring multiple trajectories. The key is to make sure the ideas don’t remain in a vacuum, but are open to everyone. With that in mind, it is extremely important to ensure that ideas are captured and stored in one place, whether electronically or on a wall (literally) dedicated to the task. Truly breakthrough innovations are not solitary work; they are part of a shared experience where ideas build on each other. They are the result of collaboration. This means the work involves others who help you generate ideas, develop concepts, and communicate the concepts in meaningful and memorable ways. The more open the process, the more likely it is to get buy-in as people engage directly in the innovation process.

Next, make sure people have access to all the information available to them. Research around a problem or a people is often lost once the report is handed over and the presentation of findings is complete. Central to the success of an innovation project is making sure themes and experiences are captured and easily available to the people tasked with generating ideas. So make it visible, make it simple and make sure people are returning to the research (and the researchers) again and again. This is about more than posting personas on boards around a room. It involves thinking about and articulating cultural practices in such a way that they are visible, clear and upfront. As people think and create, they should constantly be reminded of the people and contexts for which they are creating.

Once the stage is set, the problem and hoped-for outcomes need to be made clear. This is fairly obvious, but it’s easy to drift away from the goals as ideas emerge, and people have time to simply forget why we’re innovating (or attempting to innovate, at any rate). So make them real; crystallize the problems and challenges. Make them visible at every step of the process.  In addition to posting the goals, be sure to have space to pose questions that are grounded in the problems or opportunities for innovation. Categorize the types of questions and ask that people revisit them at every step of the way to ensure the process stays on track and remains grounded in the goals of the project. Categories of question types to consider might include:

  • How Will This Impact the Community: How can we help people, build communities and reflect the cultures and practices for which we are designing?
  • What is the Opportunity: How can we create something that provides a better life for the intended users?
  • Is It New or Are We Simply Tweaking Something: Will the thing we’re creating change the current situation, or are we simply creating a variation on an established theme?
  • How Will It Be Interpreted: What challenges do we face in getting people to accept the concepts and what cultural or psychological barriers do we need to overcome?

These are just a few examples, but they represent some of the ideas that might emerge when thinking of new designs, models and messaging strategies. They will, of course, vary depending on the goals of the organization. If your goal is to build a new delivery system for medications or if it is to do something as broad as change the way people eat, then the questions will change. The point is to have a space that opens up the dialog, not just a space to throw out ideas.

The point to all this is that in order to innovate, you need to clarify a simple system that all the various contributors can use. Establish a system and stick to it. Identify and write down the areas you would like to innovate in, get all the parties who will contribute involved and make sure they engage in an open environment. Create questions to ask and areas of exploration. Do that and you will move from a complex mess to something that can be acted upon.

Experimenting With Ethnography

Ethnography means many things to many people these days, and heaven knows I’ve spouted off about that topic on more than one occasion, so I won’t go down that path again (at least not today). But there are underlying currents in how people define ethnography that seem to represent a larger degree of consensus. One of the central themes that emerges again and again is the notion of the ethnographer as simple observer.  We document, we learn and we report, but rarely do we experiment. And that is something I think we need to see change.

“Experimental ethnography” emerged as a general movement in anthropology focused on issues of representation in ethnographic writing in the aftermath of the “writing culture” critique of the 1980s. Those critiques were largely informed by poststructuralist, feminist and Marxist assessments of the historical relativism and construction of the Western sciences. Long story short, the way we construct, conduct and think about ethnographic research and representation was challenged. The primary meaning of experimental ethnography was experimentation with the writing of ethnographies and with the representation of cultural worlds, traditions, and things. Interestingly enough, this is also the period when ethnographers began leaving academia for the business and design worlds in noticeable numbers. However, the notion of experimental ethnography remained largely inside academic and/or public sector fields of study.

So, traditionally, what are we talking about when we say “experimental ethnography”? Experimental ethnography is a mode of fieldwork in which given, prior and assumed areas of knowledge are used and recirculated in fieldwork activities, dynamics, and practices. The goal is to produce outcomes that hold direct relevance to and for the communities with which research is conducted. From its inception, experimental ethnography had an affinity with applied anthropology and its goals of effecting “social change” in a community, producing knowledge for use in policy generation, or aiding communities in rediscovering and revitalizing aspects of their cultural traditions. Again, while these are all noble and worthy pursuits, this approach to how we gain and use knowledge has remained in areas other than the private sector. And that needs to change.  Why?

Because it produces better results for our clients, plain and simple. We are here to help the people who hire us build better things. That can certainly spring from a purely observational model, and indeed it frequently does, but such a model also limits our trajectory.  In this emergent paradigm of experimental ethnography, “knowledge” is not “tested” for truth to produce facts through a predetermined structure of fieldwork procedures. Rather, fieldwork practices are recombined to explore their utility through exploratory bricolage. In other words, the experimentation is not about testing but about fluid modes of acquiring knowledge and about considering methods of co-constructing outputs. This exploration for utility is where a different notion of experimentality enters into play. Thinking about ethnographic fieldwork in this way allows us to incorporate techniques from various fields when working with participants in a methodologically sound way, rather than simply pulling in a range of techniques with little or no clear system or rigor.

As this model of ethnography plays out, the idea is that by engaging the participants, the designers and the ethnographer in a dialog in the field, the participant gains both in terms of good product development and in terms of psychological investment. All parties have a direct connection to the process and therefore to the end results. It also means that the parties engaged in the fieldwork and in the creation/translation of the insights that emerge are not tied to the underlying one-for-one trade of information. The roles are stripped bare, and the researcher, designer and participant take on a shared understanding that the intent is to create rather than to engage in a transaction of knowledge.

Of course, this means that the researcher needs to be well versed in a range of methods and nimble enough to change direction quickly. It also means letting go of the notion, a myth in fact, that purely objective observation is possible. A terrifying notion to some, no doubt, but very real nonetheless. Power, politics, environment and the like all factor into how fieldwork unfolds. Tricking ourselves into believing that the more removed we are, the more valid the results, is perhaps the first habit that needs to be discarded. After all, the point of ethnography is exploration and learning, not recreating in a live setting what one gets from a survey. Opening up the possibilities of an experimental approach to ethnography means opening the door to a host of outcomes that might otherwise be overlooked.

Experimental designs offer greater internal validity for learning what the effects of a social program are, and ethnographic methods offer greater insight into why the effects were produced. The prospects for such integration depend on the capacity of parties within social science to work together for the common goal of discovering insights and how to implement them.

Co-Creation and Managing What Matters

Co-creation has become a central theme for brands and innovators over the last decade, and rightfully so. The idea of collaboration in a postmodern world where information and opinions reach millions in the blink of an eye is a necessity. But what do we mean when we talk about co-creation and is it the panacea it’s made out to be?

Co-creation views products, brands and markets as forums for companies and customers to share, combine and renew each other’s resources and capabilities.  This creates value through new forms of interaction, service and learning mechanisms. In other words, it ideally establishes a dialog between all actors involved in the company’s offerings.  Co-creation is about collaboration. It’s about working together to solve problems, uniting a range of perspectives and approaches to an issue. Very often this collaboration involves consumers working directly with professionals from inside and outside a client organization, to define and create a range of outputs, from strategy to communications, from products to experiences.

Value is co-created with customers if and when a customer is able to personalize his or her experience, using a firm’s brand promise and product/service proposition, to a level that is best suited to getting his or her tasks done or needs fulfilled. This, in turn, allows the company to derive greater value from its product-service investment in the form of new knowledge, higher profitability and/or increased brand loyalty.  The interaction established through co-creation produces a sort of community in which the company and the user/buyer engage in an ongoing, continuously evolving relationship, defined by and defining a shared set of actions and beliefs.

A key element in all of this is the notion of personalization on the part of the customer.  But what does personalization mean? Personalization is about the customer becoming a co-creator of the content of his or her experiences.  This doesn’t mean providing products and content that can then be tweaked to meet customer needs, because that is still a largely passive process – the company makes it, the consumer buys it and then reconstructs it in something of a vacuum. There is no feedback loop.  In a true co-creation model, customers and actors inside the company take active roles in developing and sharing new ideas. The competencies of the consumer and of stakeholders within the company interact to harness a range of ideas, functional and symbolic.

This is done along four axes: engage in dialogue with customers, mobilize communities, manage customer diversity and co-create experiences with customers. Ultimately, the goal is to leverage customers for a shared creative experience, going beyond insights and creating a constant interaction that produces brand experiences and better products and services. The increase in the number of collaborators and the numerous interactions among them, across each stage of development, leads to products and services that better meet customer needs.  We see a greater diversity of individuals, functions across organizations and stakeholders across the product/service/brand ecosystem getting involved.

While I am a proponent of co-creation, there are problems with a co-creative model. A customer who believes he or she has the expertise and chooses to co-produce may be more likely to make self-attributions for success and failure than a customer who lacks the expertise. A customer who lacks the expertise but feels forced to co-produce may make more negative attributions about co-production. The dialog can backfire.

The second pitfall is that co-creation assumes customers can readily articulate what they want and need. Customers take on roles, which means what they tell the stakeholders inside the organization may not reflect anything more than a whim. Think of cars with 17 cup holders and fins a mile high. What we can articulate is often a manifestation of something else, something we can’t articulate well, which may lead to creating the absurd. Rather than taking suggestions at face value, ideas need to be analyzed through the lens of detachment and we need to tease out meaning and innovation from the unsaid as well as the said.

Finally, co-creation often assumes a fixed identity for the customer, when in reality the person with whom we’re working and the person for whom we’re building change according to context. If the co-creating customer is in the role of “mom” in one instance, she may be in the role of “artist” later in the day. The dramaturgical shift in identity will shape what she says and does as it relates to a brand, product or service at any given point in time. So even though the idea is well developed and well thought out in the co-creation process, whether that be an ideation session or an online forum, it may have little relevance once that stage is abandoned and the customer moves on with the rest of his or her day.

Co-creation can help break the yo-yo effect of research and development, where clients go back and forth between creative agencies, research agencies and their audience. By working with your consumers, rather than directing stuff at them, companies get a real sense of what works and what doesn’t as the ideation takes place. But it is not without risk. As co-creation becomes a mainstay at companies, we will need to figure out how to keep a diverse set of participants engaged, how to share the risks and value of innovation, and how to manage the complexity of the system without laying out too many constraints. We will need to learn how to tease out what is actually needed and what are simply flights of fancy. We will need to learn to balance the said and the unsaid. But in the end, the payoffs can and will be tremendous.

Doing Microethnography

Microethnography is a powerful method of research for studying practices in dynamic social systems where interactions reproduce unexplored or poorly understood conditions. It is an effective intervention for discovering, making visible, or getting at what is happening as it happens in the interactions. Analyzing moment-to-moment interactions enables a better understanding of practices and expectations in order to create spaces to transform meaning and activities that maintain the status quo. But what is it and how does it differ from traditional ethnography?

First, microethnography is NOT simply a small group of in-depth interviews. While the sample is generally small and the timelines compressed, there is a process behind doing it well and producing something useful for the client. Microethnography is the study of a smaller experience or a slice of everyday reality: the process of data collection, content analysis, and comparative analysis of everyday situations for the purpose of formulating insights. It is tight, focused and targeted.

Like traditional ethnography, microethnographic research attends to big social issues through careful examination of “small” communicative behaviors, tying them back to specific business and design needs. The researcher and/or research team studies the audible and visible details of human interaction and activity, as these occur naturally within specific contexts or institutions. Microanalysis may be coupled with statistical data to form a more complete understanding of the question at hand, but microethnography always employs ethnographic methods such as informant interviews and participant observations, all in an effort to better understand practices and problems.

Microethnographic methods provide qualitative, observational, cross-cultural, and ethnographic data, giving researchers the potential to 1) examine consumers, users, etc. across their community contexts, explicitly addressing class, power, and cultural structures of that community and 2) explain disproportional uses and buying patterns among subgroups.

While it also takes observation and environment into account, microethnography focuses largely on how people use language and other forms of communication to negotiate consent, with attention given to social, cultural, and political processes. Informed by critical discourse analysis, it emphasizes how the uses of language simultaneously shape local social interactions and reproduce patterns of social relations in society. The central difference between microethnography and in-depth interviewing ultimately is the analytical process and the phases that make up the research itself.

Data collection and analysis for microethnography typically takes place in six stages:

  • Stage One: Data Collection for the Primary Record – This consists largely of passive observation in the settings/contexts in which an activity occurs. It is meant to give a grounding in the activities occurring with objects, people and brands to create not only data points, but the right questions.
  • Stage Two: Reconstructive Data Analysis of the Primary Record – This consists of rough, unstructured, brief interviews and information gleaned from intercepted conversations. Initial meaning reconstruction, horizon analysis, and validity reconstruction take place at this stage through the review of transcripts and videotape.
  • Stage Three: Dialogical Data Generation – During this phase the research relies on a mix of in-depth interviews and feedback interviews with participants. A series of hypotheses are in place and pinpointed concepts are addressed with the participants.
  • Stage Four: Reconstructive Data Analysis of the Interviews – Once interviews are conducted, a second round of meaning reconstruction and horizon analysis is conducted to uncover contradictions and patterns of practice and meaning. Out of this process, specific design and business needs are aligned.
  • Stage Five: High-level Coding – At this stage linguistic and behavioral matches are made. Out of this analysis, the multidisciplinary team begins to create new product or branding concepts and build out how they would actually function and gain traction with customers or users.
  • Stage Six: Final Reconstructive Analysis – This is the stage when we put concepts, new and old, to use. During this phase, new design or branding ideas are presented to participants, who work directly with the research and design team to generate co-created ideas and concepts.
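The stages above are qualitative work, but the matching described in Stage Five can be illustrated with a toy example: once segments of transcript or observation have been tagged with analytic codes, tallying how often codes co-occur surfaces candidate linguistic and behavioral matches for the team to discuss. This is a minimal sketch in Python; the segment data, the code names, and the co-occurrence tally are illustrative assumptions, not part of any microethnography toolkit.

```python
from collections import Counter
from itertools import combinations

# Hypothetical coded segments: each represents a slice of transcript or
# observation, tagged with the analytic codes a researcher assigned
# during reconstructive analysis. Speakers and codes are made up.
segments = [
    {"speaker": "P1", "codes": {"convenience", "status"}},
    {"speaker": "P1", "codes": {"status", "family"}},
    {"speaker": "P2", "codes": {"convenience", "family"}},
    {"speaker": "P2", "codes": {"convenience", "status"}},
]

# Tally individual codes and pairwise co-occurrences across segments;
# pairs that recur frequently are candidate "matches" for the team.
code_counts = Counter()
pair_counts = Counter()
for seg in segments:
    code_counts.update(seg["codes"])
    pair_counts.update(combinations(sorted(seg["codes"]), 2))

print(code_counts.most_common())  # e.g. convenience and status appear 3 times each
print(pair_counts.most_common())  # ("convenience", "status") co-occurs twice
```

In practice the tagging itself is the hard, interpretive part; a tally like this only helps the multidisciplinary team see where to look, not what the patterns mean.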