By Greg Guest, Kathleen MacQueen, and Emily Namey
Anthropologists, along with researchers in other disciplines, have spent significant effort over the past decades trying to distinguish between academic and applied forms of research. Discussions typically center on the types of research questions that drive each, how study results are used, or for whom the results are intended. One often overlooked difference between these two forms of research is the piece that falls in the middle — the choice of methods and procedures used in data collection, management, and analysis.
So what exactly is applied research? The three of us currently work in the Behavioral and Social Sciences department at FHI 360, a large nonprofit human development organization headquartered in Durham, NC. FHI 360 operates from 60 offices around the world and translates the results of its work into publications, tools, and training materials. These resources are made available for use or adaptation by policymakers, healthcare providers, community leaders, and others involved in improving lives through human development. As applied anthropologists within this organization, we are responsible for the design, management, and implementation of public health research initiatives across a wide range of countries and research contexts. Our research projects often involve multiple sites and data sources and are always carried out in teams. We have consistently observed in our day-to-day work (at FHI 360, and in other applied research organizations where we have worked) that many of the concepts, methods, and procedures developed for traditional, academically oriented anthropological research are impractical for applied research.
Take, for example, the use of theoretical saturation as a benchmark for establishing non-probability sample sizes. In long-term ethnographic research, which is highly inductive and flexible, determination of sample size via saturation works extremely well. In applied settings, however, inductive sampling is typically not feasible. Shorter timelines and funding constraints often don’t permit the iterative process required to truly assess theoretical saturation. Instead, the applied qualitative researcher needs to consider upfront the likely sources of variability in the area of inquiry and then design a sampling strategy accordingly.
Another example of the difference between academic and applied research implementation is the issue of teamwork. Traditionally, academic anthropological fieldwork features a lone anthropologist heading off to an "exotic" location for an extended period of time. In many cases, ethnographic researchers are intimately familiar with "their" field site(s) and speak the local language fluently. In this lone-ethnographer model, all of the study components – data collection and management, analysis, and write-up – are designed and carried out by one individual.
Contrast this scenario with the types of projects that we and other applied anthropologists often work on. Most of our studies involve multiple field sites and languages, two or more types of data collection methods (often including a quantitative component), and a combined study team of more than a dozen individuals. If we’re working in concert with another study, such as a clinical trial or epidemiologic assessment, this complexity is compounded. The research design and procedures must be communicated to all parties involved and study documents must be translated and back-translated into local languages. Detailed operating manuals and hands-on training of field teams are needed to minimize confusion and errors as numerous physical and electronic documents are translated, transferred, stored, and retrieved across multiple locations.
Similarly, data collection, management, and analysis procedures must be rigorous and consistent across individuals and sites if meaningful syntheses and comparisons are to be made. Traditional qualitative data analysis approaches (e.g., grounded theory or discourse analysis) are often not practical methods for handling the diversity and volume of data collected in applied, multi-site studies. Specific data management and data reduction techniques are often required to help parse, organize, and make sense of the various pieces.
The differences between academic and applied research are perhaps most important, then, in the methods used to get from research questions to results — the essential middle step in the process that encompasses data collection, management, and analysis. These methods, fortunately, are skills-based and teachable. Yet a review of current textbooks on qualitative research methods reveals a decidedly academic presentation of methods: many texts devote a quarter to half of their material to epistemology, reflexivity, and other theoretical matters, and fill the remainder with a grand tour of methods that serves more as a philosophical treatise than a practical handbook. For applied researchers concerned with generating credible results that will be useful for program and policy, traditional qualitative methods textbooks provide little instruction on how to get from research question to usable finding via systematic data collection and analysis.
In the absence of such a text, we and our colleagues have been providing this type of step-by-step instruction to our domestic and international research teams for many years. While discussing our combined lessons learned on a Nigerian highway after one of these trainings, we were inspired to document and impart to other researchers what we felt were useful and practical procedures for larger, team-based qualitative research initiatives. Thus our first applied methods book, Handbook for Team-based Qualitative Research (AltaMira, 2008), was born. The eleven chapters in this edited volume cover the most commonly encountered challenges of working in qualitative research teams: ethics, politics, data preparation and analysis, and quality control and assurance.
At the same time we were putting together the team-based book, we began to receive requests for qualitative methods training from a number of different applied research organizations. In response, we created and implemented (and modified multiple times!) intensive training courses in qualitative data collection and analysis. Student feedback not only improved the content and delivery of our courses over the years, it also made clear to us that many academic research programs were not teaching students how to actually collect or analyze qualitative data, especially in applied contexts. The positive and often enthusiastic response of our students inspired us to transform our trainings into a format that provided broader and deeper coverage than a two-day workshop. The result is a set of two in-depth how-to books that offer researchers procedures, tips, tools, and templates to collect and analyze qualitative data in a rigorous, ethical, and efficient manner.
The first of these books, Applied Thematic Analysis (Sage 2012), provides instructions for conducting inductive thematic analyses on textual data. The contents cover the entire analysis process: planning and preparing analyses, coding, comparing and reducing data, and writing up results. The book also contains dedicated chapters on enhancing validity of results, supplemental techniques (e.g., word searches, deviant case analyses, enhancing focus group data), integrating qualitative and quantitative datasets, and choosing data analysis software.
The prequel to this analysis book is currently in press and expected to be available in June of this year. Collecting Qualitative Data: A Field Manual for Applied Research (Sage 2012) adheres to the same hands-on, practical philosophy as its predecessors. Using diverse real-world examples, step-by-step instructions, and practice exercises, the field manual guides researchers through the three most commonly employed qualitative data collection methods – participant observation, in-depth interviews, and focus groups. The book also includes detailed chapters on sampling, research ethics, qualitative data management, and supplemental data collection methods, such as listing/categorizing, creating timelines, visual techniques, ethnographic decision modeling, and document analysis.
As study managers and scientific directors of applied research, we are responsible for ensuring a study’s scientific and ethical integrity. We can bolster both of these by training researchers who are not only adept at designing epistemologically and theoretically sound research, but who also have a firm grasp of the essential skills and steps necessary to conduct (or supervise) rigorous qualitative data collection and systematic thematic analysis. We’ve made plenty of mistakes in our research projects over the years, and hope that these books, by conveying the practical lessons we’ve learned in the field, will help other researchers avoid making the same mistakes in the critical “middle” of a project. In the process, we hope to begin closing the instructional gap for applied qualitative researchers.