Article

Sample Size in Qualitative Interview Studies: Guided by Information Power

Kirsti Malterud (1,2,3), Volkert Dirk Siersma (1), and Ann Dorrit Guassora (1)

Qualitative Health Research, 1–8
© The Author(s) 2015
DOI: 10.1177/1049732315617444
Abstract
Sample sizes must be ascertained in qualitative studies, just as in quantitative studies, but not by the same means.
The prevailing concept for sample size in qualitative studies is “saturation.” Saturation is closely tied to a specific
methodology, and the term is inconsistently applied. We propose the concept “information power” to guide adequate
sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient
information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality
of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant
dimensions are related to information power. Application of this model in the planning and during data collection of
a qualitative study is discussed.
Keywords
sample size; participants; methodology; saturation; information power; qualitative
Background
Qualitative researchers need tools to evaluate sample size
first while planning a study, then during the research process to appraise sample size continuously, and finally to
ascertain whether the sample size is adequate for analysis
and final publication (Guest, Bunce, & Johnson, 2006;
Morse, 1995; Sandelowski, 1995). In quantitative studies, power calculations determine which sample size (N)
is necessary to demonstrate effects of a certain magnitude
from an intervention. For qualitative interview studies, no
similar standards for assessment of sample size exist.
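For contrast only, here is a standard two-sample power calculation (a textbook formula, not part of this article) that fixes the per-group sample size from the significance level α, the power 1 − β, the standard deviation σ, and the smallest difference Δ worth detecting:

```latex
n \;\approx\; \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^{2}\,\sigma^{2}}{\Delta^{2}},
\qquad \alpha = 0.05,\ \beta = 0.20,\ \sigma = 1,\ \Delta = 0.5
\;\Rightarrow\; n \approx \frac{2\,(1.96 + 0.84)^{2}}{0.25} \approx 63 \text{ per group.}
```

No comparable closed-form rule is available for qualitative interview studies, which is the gap addressed below.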
Reviews indicate that qualitative researchers demonstrate a low level of transparency regarding sample sizes
and the underlying arguments for these (Carlsen &
Glenton, 2011; Mason, 2010). Often, the authors just
claim that saturation was achieved, implying that the addition
of more participants did not add anything to the analysis,
without specifying their understanding of how saturation
has been assessed. The saturation concept was originally
coined by Glaser and Strauss (1999) as a specific element
of constant comparison in Grounded Theory (GT) analysis. Within the GT framework, sample size is appraised as
an element of the ongoing analysis where every new observation is compared with previous analysis to identify similarities and differences. The saturation concept is, however, repeatedly invoked in studies based on other analytic
approaches, without any explanation of how the concept
should be understood in this non-GT context and how it
serves to justify the number of participants.
A commonly stated principle for determining sample
size in a qualitative study is that N should be sufficiently
large and varied to elucidate the aims of the study (Kuzel,
1999; Marshall, 1996; Patton, 2015). However, this principle provides no guidance for planning, although experienced researchers seem to follow their own rules of thumb
about approximate numbers of units that were needed in
previous comparable studies to arrive at a responsible analysis (Mason, 2010).
The authors of the present article have extensive experience from planning, conducting, publishing, and supervising qualitative as well as quantitative studies, and we
share a concern for methodology across research methods. We agree with Mason (2010) that qualitative
researchers should try hard to make our methods as robust
and defensible as possible, aiming for intersubjectivity on
1 University of Copenhagen, Copenhagen, Denmark
2 Uni Research Health, Bergen, Norway
3 University of Bergen, Bergen, Norway
Corresponding Author:
Kirsti Malterud, Research Unit for General Practice, Uni Research
Health, Kalfarveien 31, N-5018 Bergen, Norway.
Email: Kirsti.malterud@gmail.com
why and how decisions regarding design, sampling, and
analysis are taken (Malterud, 2001). We shared the preconception that an approximation of sample size is necessary for planning, while the adequacy of the final sample
size must be continuously evaluated during the research
process. Reviewing principles of sample size in qualitative studies, we shall argue below that sample size cannot
be predicted by formulae or by perceived redundancy.
Tools to guide sample size should not rely on procedures from a specific analysis method, but rest on shared
methodological principles for estimating an adequate
number of units, events, or participants. For this purpose,
we propose the concept information power. The more information power the sample holds, the lower the N needed, and vice versa. In this article, we shall concentrate on information power applied in the context of qualitative interview studies.
The aim of this article is to present and discuss a pragmatic model for assessment of sample size in qualitative
studies, reflecting on how the information power needed
for a specific study can be achieved.
Method
We have developed and elaborated the model inductively.
First, we sketched a case presented as a fictional study. This
study has neither been planned nor conducted but served as
a specific reference for our discussions and elaborations.
Then, we took this case as our point of departure for a
review of conditions we considered to have an impact on
information power and sample size in this specific study.
Finally, we conceptualized the items and their dimensions
as a model, intended to be transferable to interview studies
beyond the particular context of the fictional study.
We conducted the process as a pragmatic focus group
conversation between the authors, taking our shared
experiences as a point of departure for constructing the
case. Our ongoing discussions functioned as analysis,
identifying and gradually prioritizing the most important
items having an impact on sample size from a logical
point of view. A parallel discussion concerned the concept “information power.” The process was supported by
available literature about the current state of the art
regarding sample size in qualitative studies as well as literature discussing weaknesses in these standards. Below,
we shall present and discuss the model.
The Case: Planning an Interview
Study on Diabetic Foot Ulcer
Experiences
We situated the case as the first of three subprojects of a
PhD study where the overall objective was to contribute
to theories of self-care and to describe patients’ practices
for health professionals. The aim of the present subproject was to explore self-care among patients with diabetic
foot ulcers by describing activities performed by patients
to treat the ulcers and their motivation for doing so. The
PhD student was a young MD who already had some
experience with qualitative research from a previous
project where descriptive cross-case analysis had been
conducted. Participants would be recruited among
patients in a diabetes out-patient clinic who had recently
been diagnosed with their first ulcer. Further sampling
strategies and criteria would be informed by stepwise
analysis along with data collection by means of semistructured individual interviews.
When to stop recruitment during this process would
not be a simple decision for the novice researcher. The
grant proposal requested an advance estimate of the
number of interviews to plan how many participants
were needed to elucidate the aim of the study and to
get an idea of how much time the data collection would
require. From previous research, her supervisor had
some ideas about the number of participants needed for
this project. The student would, however, prefer to
plan her study and make her decisions on the basis of
some standards about how many participants she
would need to conduct a responsible analysis. Below,
we present the model we developed as a tool to appraise sample size through five major items that in different ways determine the information power of the sample.
Items Having an Impact on
Information Power
Reviewing alternative choices of design and method for
the fictional interview study, we identified five items that
along different dimensions have an impact on the information power of the sample: (a) study aim, (b) sample
specificity, (c) use of established theory, (d) quality of
dialogue, and (e) analysis strategy. Below, we present
these items and their dimensions separately and systematically. In real life, however, the items are related and
have a mutual impact on each other.
Study Aim—Narrow or Broad?
Information power, guiding adequate sample size, is
related to the study aim. A broad study aim requires a
larger sample than a narrow aim to offer sufficient information power, because the phenomenon under study is
more comprehensive. A study aiming to explore how
patients with their first diabetic foot ulcer manage shift of
bandages would need notably fewer participants than a
study about how patients with foot ulcers generally manage self-care in everyday life.
In our case, the researcher would have to choose
between extending the number of participants by recruiting a larger, purposive sample, or narrowing the aim of the
study to maintain sufficient information power. If, however, the aim of the study concerns a very specific or rare
experience, such as self-care among blind patients with
diabetic foot ulcers, this would in itself limit the number
of eligible participants. An alternative emphasis of the
study could be to explore how individual resources interfere with self-care of diabetic foot ulcers. If so, a study
based on interviews with one single or just a few participants might provide access to exciting hypotheses from a
high level of information power. Defining the aim of the
interview study, the researcher also offers some promises
regarding transferability of the findings. The information
power of the sample will be critical to achieve the aim.
Sample Specificity—Dense or Sparse?
Information power is also related to the specificity of
experiences, knowledge, or properties among the participants included in the sample. To offer sufficient information power, a less extensive sample is needed with
participants holding characteristics that are highly specific for the study aim compared with a sample containing participants of sparse specificity. Specificity concerns
here participants who belong to the specified target group
while also exhibiting some variation within the experiences to be explored.
A sample including individuals from the target group
holding experiences not previously described could also
enhance information power. Knowing that self-care is
limited by patient resources, we could, for example, aim
for an especially specific sample identified by discussions
with the nurses at the diabetic clinic including variations
of both success in handling ulcers and some variation in
age, gender, and type of diabetes. If we do not constrain
recruitment procedures to include only patients with foot
ulcers, a much larger number of participants would be
needed to cover those whose experiences we study.
Still, a purposive sample, established with specific
aspects of variation in mind, is not always feasible. The
strategy of convenience sampling, accepting participants
who are available, without trying to influence the configuration of the sample, implies the risk of more limited specificity, thereby requiring more participants. Following such
a recruitment strategy, we would probably need more interviews and participants to obtain a sufficiently broad scope
of activities performed by patients to treat the ulcers and
their underlying motivations. However, we might be fortunate and happen upon a group of participants with a diversity of experiences. Hence, sample specificity cannot
always be predicted but can be supported by suitable
recruitment.
Established Theory—Applied or Not?
Furthermore, information power, guiding adequate sample size, is related to the level of theoretical background of
the study. A study supported by limited theoretical perspectives would usually require a larger sample to offer
sufficient information power than a study that applies
specific theories for planning and analysis. Theories from
social science about the authority that professionals exercise might, for example, enhance the information power
of our study about self-care experiences with diabetic
foot ulcers. New knowledge, even from a rather small
sample, might be obtained by looking for strategies used
by patients to counter professional authority intended to
make them perform specific self-care. Theory serves to
synthesize existing knowledge as well as extending the
sources of knowledge beyond the empirical interview
data. By contrast, a study starting from scratch
with no theoretical background must establish its own
foundation for grounding the conclusions. If so, a larger
sample size would probably be needed to grant sufficient
information power. Theoretical frameworks offer models
and concepts that may explain relations between different
aspects of the empirical data in a coherent way. Empirical
studies with very small numbers can make a difference if
they address and elucidate something crucial to theory.
Quality of Dialogue—Strong or Weak?
Information power is also related to the quality of the
interview dialogue. A study with strong and clear communication between researcher and participants requires
fewer participants to offer sufficient information power
than a study with ambiguous or unfocused dialogues. In a
qualitative study, empirical data are co-constructed by
complex interaction between researcher and participant,
and a number of issues determine the quality of the communication from which the information power is established. Analytic value of the empirical data depends on
the skills of the interviewer, the articulateness of the participant, and the chemistry between them, and it is difficult to predict the quality of the dialogue in advance.
In our study, the PhD student holds more than average
background knowledge about diabetic foot ulcers,
because she has been a consultant in this field for the home nursing service in her area for the last 2 years. For
her, the interviews would not be her first encounter with
the subject area, and she would easily approach the participants’ self-care practices. However, by nature, she is
rather shy, and it takes her some time to establish trust
and rapport. It might therefore be necessary for her to
obtain some extra interview training in advance, or to
increase sample size. Her more experienced supervisor,
well read in diabetes complications and an experienced
interviewer, used six interviews to establish a sample
with adequate information power for an analysis that
could contribute to existing knowledge in her previous
project. An interview interaction with tensions and conflicting views may reduce the confidence needed to talk
about intimate details. However, a researcher who never
challenges his or her participant runs the risk of developing empirical data holding low information power, which,
during analysis, only reproduces what is known from
before.
Analysis Strategy—Case or Cross-Case?
Finally, information power is related to the strategy chosen for analysis in the specific project. An exploratory
cross-case analysis requires more participants to offer
sufficient information power compared with a project
heading for in-depth analysis of narratives or discourse
details from a few, selected participants. In this project, a
thematic cross-case analysis will be conducted, because
we want to uncover realistic and pragmatic descriptions
of customary self-care practices and their foundations as
a contribution applicable in clinical practice (Malterud,
2012). Referring to the supervisor’s experience, a purposive sample of six to 10 participants with diverse experiences might therefore provide sufficient information
power for descriptions of different self-care practices
teaching health professionals some useful lessons.
Within an exploratory analysis, the ambition is not to
cover the whole range of phenomena, but to present
selected patterns relevant for the study aim. A single,
deliberately chosen and well-articulated participant might
illustrate a typical case but not demonstrate variations in
self-care. Two participants with diametrically opposite
habits might illustrate different aspects of a continuum
but would not be sufficient to embrace discrepancies
deviating from the main line. Fifty participants might provide sufficient variation as well as deviant cases regarding the actual practices. However, the overview of empirical data, needed as the point of departure for an accountable thematic analysis of potentially relevant patterns, would become difficult to grasp, to present with appropriate intersubjectivity, and to organize for further analysis.
Information Power in Qualitative
Interview Studies—The Model
From our reflections above, we have conceptualized the
items and their dimensions as a model intended as a tool
to appraise sample size in qualitative interview studies in
general (Figure 1). The model can be used to reflect systematically on items with an impact on the information
power in the actual study.
Figure 1. Information power—Items and dimensions.
According to the model, considerations about study
aim, sample specificity, theoretical background, quality
of dialogue, and strategy for analysis should determine
whether sufficient information power will be obtained
with fewer or more participants included in the sample. A study will need the fewest participants when the
study aim is narrow, if the combination of participants is
highly specific for the study aim, if it is supported by
established theory, if the interview dialogue is strong, and
if the analysis includes longitudinal in-depth exploration
of narratives or discourse details. A study will need a
larger number of participants when the study aim is
broad, if the combination of participants is less specific
for the research question, if it is not theoretically informed,
if the interview dialogue is weak, and if cross-case analysis is conducted, especially if the aim is to cover the
broadest possible range of variations of the phenomena
studied.
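The authors stress below that the model is not a formula for calculating N. Purely as an illustration of how the five items and their dimensions could be recorded for systematic reflection, the following sketch (our own hypothetical construction, not part of the original article; all names and scores are assumptions) notes where a planned study sits on each dimension and reports only a qualitative direction, fewer versus more participants:

```python
from dataclasses import dataclass

# Hypothetical sketch only: the model is a reflective tool, not a calculator,
# so this code returns a direction, never a number of participants.
@dataclass
class InformationPowerAppraisal:
    # Each item is scored -1 (condition lowering the N needed),
    # +1 (condition raising it), or 0 (in between / unclear).
    study_aim: int           # -1 narrow, +1 broad
    specificity: int         # -1 dense, +1 sparse
    established_theory: int  # -1 applied, +1 not applied
    dialogue_quality: int    # -1 strong, +1 weak
    analysis_strategy: int   # -1 in-depth case analysis, +1 exploratory cross-case

    def direction(self) -> str:
        total = (self.study_aim + self.specificity + self.established_theory
                 + self.dialogue_quality + self.analysis_strategy)
        if total <= -2:
            return "conditions favour a smaller sample"
        if total >= 2:
            return "conditions favour a larger sample"
        return "mixed conditions; revisit after the first interviews"
```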
The dynamic interaction between the different items
included in the model involves a trade-off between conditions that require more versus fewer participants in a
sample. For example, an experienced researcher who
expresses a narrow aim and achieves an excellent interview dialogue may be able to conduct a cross-case analysis with sufficient variation of results even with a small
sample. However, a novice researcher with limited theoretical knowledge may need a larger group of participants
to reveal something new although the aim is well-focused
and the interview dialogues are good.
Our model is not intended as a checklist to calculate N
but is meant as a recommendation of what to consider
systematically about recruitment at different steps of the
research process. An initial appraisal of the number of
informants needed in our case should consider the fact that the researcher is a novice. Her personal
shyness affects her ability to establish a good dialogue
(more participants). Her study is, however, theoretically
founded, and she has thorough experience with the
empirical matters in question (fewer participants). She is
heading for cross-case analysis requiring more participants, and the aim of her study is neither especially broad
nor narrow. Because nurses will help her select participants with characteristics specific to her study, the need
for participants will be smaller. Finally, her experienced
research supervisor conducted a similar study last year,
with thick data from six successful interviews. Based on
these considerations, a provisional number of 10 participants could be an example of a cautious initial appraisal
for our case.
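Continuing the hypothetical sketch above (our own illustration, not the authors' procedure), the fictional case might be recorded as follows; the provisional figure of 10 remains the researcher's own appraisal, not an output of the code:

```python
case = InformationPowerAppraisal(
    study_aim=0,             # neither especially broad nor narrow
    specificity=-1,          # nurses help recruit a highly specific sample
    established_theory=-1,   # theory about professional authority is applied
    dialogue_quality=+1,     # novice, shy interviewer (extra training planned)
    analysis_strategy=+1,    # thematic cross-case analysis
)
print(case.direction())  # mixed conditions; revisit after the first interviews
```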
Appraisal of information power should be repeated
along the process, supported by preliminary analysis.
After the first three interviews, an initial review of the data can be done and preliminary suggestions of relevant theory can
be made. In our case, it appears that some patients do not
want to participate and that it might not be possible to
achieve as much variation of self-care as expected. Due
to some extra interview training and extensive reading,
the researcher manages to establish good rapport and steer the
dialogue well. The interviews conducted so far have a
high relevance for the research question. Initial analytic
ideas have emerged at this point and are helpful in making the aim of the study more accurate, and some information seems promising in terms of adding new
knowledge to the field. At this point, the attained and projected information power appears to be unexpectedly
strong, and the number of participants needed may be
adjusted downward. This assessment will have to be considered again before closing data collection.
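In terms of the hypothetical sketch above, such a reappraisal simply updates the recorded dimensions and reads off the new direction; here, the stronger-than-expected dialogue is the assumed change:

```python
# After three interviews: extra training and reading have strengthened
# the dialogue, so the appraisal is updated and re-read.
case.dialogue_quality = -1
print(case.direction())  # conditions favour a smaller sample
```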
Besides guiding researchers' own projects,
our model may be used to evaluate empirical data from
other researchers, if the five items included in the model
can be derived from study reports. We therefore encourage fellow researchers to present some reflections on
information power in their publications.
Discussion
The Logic of Particularities
Formal power calculations have been proposed as an
alternative to informal, heuristic rules of thumb for the appraisal of sample size in qualitative studies (DePaulo,
2000; Guest et al., 2006). The basic principle behind such
attempts assumes a population in which a set of pieces of information (such as self-care methods for the management of diabetic foot ulcers) is available, each with a different prevalence, and the aim is to identify as much of this information as possible with the fewest participants, selected at random. We do not repudiate the existence of settings where such assumptions might be
adequate. Most often, however, they will be violated in a
qualitative study. Participants are selected purposively so as to provide the most information, and information does not simply exist but is elaborated by the researcher, supported by the theory applied (Kvale, 1996; Patton, 2015;
Sandelowski, 1995).
A straitjacket of untenable assumptions may harm
the research process (Bacchetti, 2010). McWhinney urged
medical researchers to focus more on particularities, not
only universals (McWhinney, 1989), and Sandelowski
argued that the case study (N = 1) is the basic unit of analysis in any qualitative study, independent of the amount of
empirical data (Sandelowski, 1996). In qualitative
research, belonging to the interpretative paradigm, the
logic of exploration is more emphasized than the logic of
justification, and other assumptions for sampling are usually more adequate than what can possibly be predicted or
calculated (Kuhn, 1962; Malterud, 2001; Marshall, 1996;
Sandelowski, 1996).
We have presented a pragmatic model for appraisal of
sample size in qualitative interview studies. Our model
offers a manageable strategy where the principal assumptions have been explicated for implementation and can be
contested for methodological elaboration. Below, we
shall discuss the strengths and limitations of this model
and compare it with current leading standards regarding
sample size in qualitative studies.
The Model—Strengths and Limitations
Information power is the core concept of our model. We
have argued that information power of an interview sample is determined by items such as study aim, sample
specificity, use of established theory, quality of dialogue,
and analysis strategy. For each of these items, we have
proposed dimensions along a continuum where researchers are invited to position themselves and their study to
assess an approximate number of participants needed for
responsible analysis. We argue that such an assessment
should be stepwise revisited along the research process
and not definitely decided in advance. In this way, recruitment can be brought to an end when the sample holds
sufficient information power. Still, the model may offer
support also in the initial planning of a qualitative interview study.
The five items we have included in our model are neither mutually exclusive nor the only conceivable determinants
of information power. A common denominator is that exploration of a comprehensive phenomenon requires data with
appropriate variation regarding some selected qualities.
However, a pragmatic model intended for implementation
calls for prioritization. Following the inductive development path we have described, we therefore decided to
include a limited and feasible number of vital, compatible items whose dimensions with an impact on information power could be easily identified, appraised, and presented.
On a list of potential items to be included in the model,
we have omitted the recruitment issue, which actually
raises a paradox. When recruitment is easy, the researcher
is at liberty to select a relevant and purposive sample and
thereby reduce the number of participants. However, if
only a few among many potential participants volunteer,
the specificity of the sample may be jeopardized, thereby increasing the number of participants necessary. If
so, information power may be enhanced by considering
the reasons why people decline. Simple changes in procedure, such as interviewing at home instead of in the clinic,
may remove these obstacles and contribute to a sample
where fewer participants are needed. The five items do
not have universal weight, and their relative importance may therefore change from project to project and
over the course of a research process.
To make the model simple and readily understood, we
chose to develop it for the context of individual interview
studies, where the question of sample size usually refers
to the number of participants. The sample size concept is
more ambiguous when it comes to other qualitative
research designs, such as focus group studies (number of
groups, number of participants, or number of interviews),
observational studies (number of events to be recorded,
number of people to be included, number of sites to visit),
or studies with data from written sources (pages of text,
number of documents, number of organizations).
Something Old, Something New, Something
Borrowed, Something Blue . . .
The information power concept and the items it comprises share some features with existing concepts and
ideas within qualitative methodology. In our model, specificity covers issues usually discussed as matters of sampling (Patton, 2015). The role of aim in our model with
regard to sample size has also been discussed by Morse
(2000), and it is likewise related to Patton’s discussion of
trade-offs between breadth and depth in a study (Patton,
2015).
The dialogue item in our model shares some features
with Spradley’s notion of “good informants” (Spradley,
1979), which is discussed as an aspect of sampling adequacy (Morse, 1991, 2000, 2015b). Our model differs,
however, in that we emphasize the quality of the dialogue
rather than the nature of the topic, although these dimensions both cover the accessibility of the data. Adequacy,
as discussed by Morse, concerns the sufficiency and quality of data. Unlike the concept of adequacy, our model is
not tied to development of theory or theoretical sampling,
which are specific procedures of GT. The most notable
advantages of our model are therefore perhaps that it adds the relevance of established theory applied in a study and that it considers types of analysis beyond cross-case analysis.
The best qualitative analysis is conducted from empirical data containing abundant and various accounts of
new aspects of the phenomenon we intend to explore
(Morse, 1991, 2015a; Patton, 2015). The sample should
be neither too small nor too large (Kvale, 1996;
Sandelowski, 1995). In our experience, reviewers often
seem to be more concerned with samples being too small
than being too large, instead of appraising the outcome of
analysis from these particular interviews. We would warn
against methodological ideologies or strategies unreflectively leading to samples that are too large (Chamberlain, 2000).
By initial and consecutive assessment of information
power, the researcher may avoid waste of time and
resources for collection of unnecessary data, elaboration
of information that is not relevant for the aim of the study,
and lack of overview needed for a thorough analysis. Our
model indicates that this can be obtained even with a
sample of rather few participants, provided that the information power is sufficient.
Should “Saturation” Be Replaced by
“Information Power”?
Saturation is often mentioned as a criterion for sample
size in qualitative studies (Morse, 1995). The concept has
been presented as an element of the constant comparative
method, which is a central element of GT, intended to
generate theories from empirical data (Glaser & Strauss,
1999). During data collection, the researcher compares
sequentially added events until exhaustive saturation of
properties of categories and of relations among them is
obtained (Charmaz, 2006). Furthermore, theoretical sampling based on preliminary theory developed in the study
is required for a GT analysis to finally arrive at saturation. Saturation occurs when the researcher no
longer receives information that adds to the theory that
has been developed.
These procedures are, however, not part of all qualitative studies, and O’Reilly and Parker (2013) argue that
adopting saturation as a generic quality marker is inappropriate. Although GT has clear guidance about what
constitutes theoretical saturation, the meaning of saturation within other qualitative approaches is not clear.
Authors claiming saturation are not always transparent
about how it has been achieved (Morse, 2015a), and several studies are actually not compatible with the saturation concept of GT. Reviews reveal that the concept is
often poorly specified and clearly does not correspond
with the original meaning of saturation from GT (Carlsen
& Glenton, 2011).
For an exploratory study, we do not aim for a complete description of all aspects of the phenomenon we
study. We are usually satisfied when a study offers new
insights that contribute substantially to or challenge current understandings. Furthermore, the epistemological
assumption of GT that exhaustive sampling of a definite
set of variations can be obtained and covered by saturation is not the theory of science at the heart of most qualitative research (Malterud, 2012). To be sure, Morse
rejects such an understanding of saturation, spelling out
characteristics within categories as the domain to be saturated (Morse, 2015a).
We consider Morse’s accuracy on this point rather unusual among qualitative researchers, who more often refer to having “heard it all” (Morse, 2015a). Research with social constructivist roots, where knowledge is considered partial, intermediate, and dependent on the situated
view of the researcher, does not support an idea that
qualitative studies ideally should comprise a “total”
amount of facts (Alvesson & Sköldberg, 2009; Haraway,
1991). There are differences in how various approaches
frame research questions, sample participants, and collect data to achieve richness and depth of analysis.
DePaulo warns against the risk of missing something
important when the sample of a qualitative study is
inappropriate or too small (DePaulo, 2000). We agree
with his point, but not with his ambition of covering the full
range of the phenomenon in question. Finally, saturation is not as objective and indisputable as it might
appear, at least from a peer reviewer’s perspective. One
researcher may regard the case as closed and get bored
by further interviewing, while another colleague, perhaps with a less thorough knowledge of the field or
with empirical data containing less variation, may
assess further data as new information (Malterud, 2012;
Morse, 1995).
Information power is a concept that differs from
saturation in several respects. Our model is, however,
not based on a very original methodological idea. We
look on information power as an aspect of internal
validity, influencing the potential of the available
empirical data to provide access to new knowledge by
means of analysis and theoretical interpretations
(Cohen & Crabtree, 2008; Kvale, 1996). In this regard,
sample adequacy, data quality, and variability of relevant events are often more important than the number
of participants. Hence, information power of a sample
is not very different from being sufficiently large and
varied to elucidate the aims of the study but can be considered a specification of how to accomplish it (Kuzel,
1999; Marshall, 1996; Morse, 1995; Patton, 2015;
Sandelowski, 1995).
Implications for Research Practice
Qualitative interview studies may benefit from sampling
strategies that shift attention from the number of
participants to the contribution of new knowledge from
the analysis. Information power indicates that the more
information the sample holds, relevant for the actual
study, the fewer participants are needed. An initial approximation of sample size is necessary for planning, while the adequacy of the final sample size must be
evaluated continuously during the research process. The
results presented in the final publication will demonstrate
whether the actual sample held adequate information power
to develop new knowledge, referring to the aim of the
study at hand.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with
respect to the research, authorship, and/or publication of this
article.
Funding
The authors received no financial support for the research,
authorship, and/or publication of this article.
References
Alvesson, M., & Sköldberg, K. (2009). Reflexive methodology:
New vistas for qualitative research (2nd ed.). Los Angeles:
SAGE.
Bacchetti, P. (2010). Current sample size conventions:
Flaws, harms, and alternatives. BMC Medicine, 8, 17.
doi:10.1186/1741-7015-8-17
Carlsen, B., & Glenton, C. (2011). What about N? A methodological study of sample-size reporting in focus group studies. BMC Medical Research Methodology, 11, Article 26.
doi:10.1186/1471-2288-11-26
Chamberlain, K. (2000). Methodolatry and qualitative health
research. Journal of Health Psychology, 5, 285–296.
doi:10.1177/135910530000500306
Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Thousand Oaks,
CA: SAGE.
Cohen, D. J., & Crabtree, B. F. (2008). Evaluative criteria for
qualitative research in health care: Controversies and recommendations. Annals of Family Medicine, 6, 331–339.
DePaulo, P. (2000). Sample size for qualitative research: The
risk of missing something important. Quirk’s Marketing
Research Review. Retrieved from http://www.quirks.com/
articles/a2000/20001202.aspx
Glaser, B., & Strauss, A. (1999). The discovery of grounded
theory: Strategies for qualitative research. New York:
Aldine de Gruyter.
Guest, G., Bunce, A., & Johnson, L. (2006). How many interviews are enough? An experiment with data saturation and variability. Field Methods, 18, 59–82. doi:10.1177/1525822X05279903
Haraway, D. (1991). Situated knowledges: The science question in feminism and the privilege of partial perspective. In
D. Haraway (Ed.), Simians, cyborgs, and women: The reinvention of nature (pp. 183–201). New York: Routledge.
Kuhn, T. S. (1962). The structure of scientific revolutions.
Chicago: University of Chicago Press.
Kuzel, A. (1999). Sampling in qualitative inquiry. In W. Miller
& B. Crabtree (Eds.), Doing qualitative research (2nd ed.,
pp. 33–45). Thousand Oaks, CA: SAGE.
Kvale, S. (1996). InterViews: An introduction to qualitative
research interviewing. Thousand Oaks, CA: SAGE.
Malterud, K. (2001). Qualitative research: Standards, challenges, and guidelines. The Lancet, 358, 483–488.
Retrieved from http://goo.gl/irFdLB
Malterud, K. (2012). Systematic text condensation: A strategy
for qualitative analysis. Scandinavian Journal of Public
Health, 40, 795–805. doi:10.1177/1403494812465030
Marshall, M. N. (1996). Sampling for qualitative research.
Family Practice, 13, 522–525. Retrieved from http://fampra.oxfordjournals.org/content/13/6/522.full.pdf
Mason, M. (2010). Sample size and saturation in PhD studies
using qualitative interviews. Forum: Qualitative Social
Research, 11, Article 8.
McWhinney, I. R. (1989). An acquaintance with particulars.
Family Medicine, 21, 296–298.
Morse, J. M. (1991). Strategies for sampling. In J. M. Morse
(Ed.), Qualitative nursing research—A contemporary dialogue (pp. 127–145). Newbury Park, CA: SAGE.
Morse, J. M. (1995). The significance of saturation. Qualitative
Health Research, 5, 147–149. doi:10.1177/104973239500500201
Morse, J. M. (2000). Determining sample size. Qualitative Health
Research, 10, 3–5. doi:10.1177/104973200129118183
Morse, J. M. (2015a). Data were saturated. Qualitative Health
Research, 25, 587–588. doi:10.1177/1049732315576699
Morse, J. M. (2015b). All data are not equal. Qualitative Health
Research, 25, 1169–1170. doi:10.1177/1049732315597655
O’Reilly, M., & Parker, N. (2013). “Unsatisfactory saturation”:
A critical exploration of the notion of saturated sample
sizes in qualitative research. Qualitative Research, 13,
190–197. doi:10.1177/1468794112446106
Patton, M. Q. (2015). Qualitative research & evaluation methods: Integrating theory and practice (4th ed.). Thousand
Oaks, CA: SAGE.
Sandelowski, M. (1995). Sample size in qualitative research.
Research in Nursing and Health, 18, 179–183. Retrieved
from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=
Retrieve&db=PubMed&dopt=Citation&list_uids=7899572
Sandelowski, M. (1996). One is the liveliest number: The case
orientation of qualitative research. Research in Nursing
and Health, 19, 525–529. doi:10.1002/(SICI)1098-240X(199612)19:6<525::AID-NUR8>3.0.CO;2-Q
Spradley, J. (1979). The ethnographic interview. New York:
Holt, Rinehart, & Winston.
Author Biographies
Kirsti Malterud, MD, PhD, is a senior researcher and professor
of general practice at the Research Unit for General Practice
(Copenhagen/Denmark), the Research Unit for General Practice,
Uni Research Health (Bergen/Norway) and Department of Global
Public Health and Primary Care, University of Bergen/Norway.
Volkert Dirk Siersma, PhD, is a statistician at The Research Unit for General Practice and The Section of General Practice, Department of Public Health, University of Copenhagen.
Ann Dorrit Guassora, MD, PhD, is an associate research professor at The Research Unit for General Practice and assistant
professor at The Section of General Practice, Department of
Public Health, University of Copenhagen.