Guba and Lincoln (1981) stated that while all research must have “truth value”,

“applicability”, “consistency”, and “neutrality” in order to be considered worthwhile, the

Morse, Barret, Mayan, Olson, & Spiers, RELIABILITY AND VALIDITY 5

International Journal of Qualitative Methods 1 (2) Spring 2002

nature of knowledge within the rationalistic (or quantitative) paradigm is different from

the knowledge in the naturalistic (qualitative) paradigm. Consequently, each paradigm

requires paradigm-specific criteria for addressing “rigor” (the term most often used in the

rationalistic paradigm) or “trustworthiness”, their parallel term for qualitative “rigor”.

They noted that, within the rationalistic paradigm, the criteria to reach the goal of rigor

are internal validity, external validity, reliability, and objectivity. On the other hand, they

proposed that the criteria in the qualitative paradigm to ensure “trustworthiness” are

credibility, fittingness, auditability, and confirmability (Guba & Lincoln, 1981). These

criteria were quickly refined to credibility, transferability, dependability, and

confirmability (Lincoln & Guba, 1985). They recommended that specific strategies be used

to attain trustworthiness, such as negative cases, peer debriefing, prolonged engagement

and persistent observation, audit trails, and member checks. Also important were

characteristics of the investigator, who must be responsive and adaptable to changing

circumstances, holistic, and possessed of processual immediacy, sensitivity, and the

ability to clarify and summarize (Guba & Lincoln, 1981).

These authors were rapidly followed by others either using Guba and Lincoln’s criteria

(e.g., Sandelowski, 1986) or suggesting different labels to meet similar goals or criteria

(see Whittemore, Chase, & Mandle, 2001). This resulted in a plethora of terms and

criteria introduced to cover minute variations in the situations to which rigor could be applied.

This situation is now confusing and has eroded the ability to actually

discern rigor. Perhaps as a result of this lack of clarity, standards were introduced in the

1980s for the post hoc evaluation of qualitative inquiry (see Creswell, 1997; Frankel,


1999; Hammersley, 1992; Howe & Eisenhardt, 1990; Lincoln, 1995; Popay, Rogers &

Williams, 1998; Thorne, 1997).


While standards are a comprehensive approach to evaluating the research as a whole,

they remain primarily reliant on procedures or checks by reviewers to be used following

completion of the research. They represent either a minimally accepted level or an

unobtainable gold standard for the researcher in the field. Subsequent clashes between the

“ideal” and the “real” in the attainment of each standard are sometimes unavoidable.

Those who evaluate completed research often forget that decisions that greatly influence

the quality of the finished product may have, of necessity, been made quickly in the field

without the privilege of knowing the overall research outcome or without being able to

see the ramifications of such a decision. Using standards, therefore, is a judgement of the

relative worth of the research applied on completion of the project at a time when it is too

late to correct problems that result in a poor rating.

Problems with post hoc evaluation

The purpose of using standards for post hoc evaluation is to determine the extent to which

reviewers have confidence in the researcher’s competence in conducting research

following established norms. Rigor is supported by tangible evidence using audit trails,

member checks, memos, and so forth. If the evaluation is positive, one assumes that the

study was rigorous. We challenge this assumption and suggest that these processes have

little to do with the actual attainment of reliability and validity. Contrary to current

practices, rigor does not rely on special procedures external to the research process itself.

For example, audit trails may be kept as proof of the decisions made throughout the


project, but they do little to identify the quality of those decisions, the rationale behind

those decisions, or the responsiveness and sensitivity of the investigator to data. Of

importance, an audit trail is of little use for identifying or justifying actual shortcomings

that have impaired reliability and validity. Thus, audit trails can neither be used to guide the

research process nor to ensure an excellent product, but only to document the course of

development of the completed analysis.

