In order for EMS physicians to provide state-of-the-art medical direction and oversight of EMS systems, a basic grasp of the methodology behind clinical research is required. Research concepts and the definitions of common research terms are prerequisites for this understanding.
Efficacy is a description of how well the treatment works in clinical trials (“explanatory”).
Effectiveness refers to how well the treatment works in the practice of medicine (“pragmatic”).
Validity in research is the degree to which a tool measures what it claims to measure. The validity of a study refers to the likelihood that the conclusions drawn from the study are correct or reasonable. A valid study asks the appropriate questions, uses the correct sample (in size and character), collects the correct outcome measures, and utilizes correct statistical methods. This is a complex process that requires strict adherence to the study design, with ongoing reassessment, to produce a truly valid study.
Internal validity: Internal validity considers the direct effect of one variable (the independent variable) on another variable (the dependent variable). It is important in studies designed to show cause-and-effect relationships. At times, even properly designed studies may have confounding variables that interfere with internal validity.
Confounding variables: There are eight types of confounding variables that interfere with internal validity: (1) history—specific events occurring between measurements (in addition to any experimental variables); (2) maturation—participant changes over time (eg, becoming tired, growing older, etc); (3) testing effect—the effect that taking the first test has on any additional testing; (4) instrumentation—changes in measurement tool calibration or changes in observers that may alter measurements; (5) statistical regression—occurs when groups are selected on the basis of their extreme scores; (6) selection bias—results from the differential (nonrandom) selection of respondents for the comparison groups; (7) experimental mortality—the loss of participants from comparison groups; (8) selection-maturation interaction—occurs when participant variables (eg, hair or skin color) and time variables (eg, age, obesity) interact.1
External validity: Relates to the extent to which the results of an (internally valid) study remain true in other cases (ie, different populations, places, or times). Can the study findings be generalized? Are the research participants representative of the general population? Many studies are performed in a single geographic area, with small samples of patients who possess unique characteristics or are representative of a specific population only (ie, volunteers, military cadets, medical students). Studies that are not generalizable have low external validity. Other factors that adversely affect external validity include (1) testing effect; (2) selection bias; (3) experimental arrangements—which are not generalizable to patients in a nonexperimental setting; and (4) multiple-treatment interference—effects of previous treatments that interfere with present testing and are not erasable.
Internal versus external validity: It might appear as if internal and external validity contradict each other. If strict adherence to an experimental design controls as many variables as possible, the study may have high internal validity; yet this highly artificial setting lowers the external validity. Alternatively, in observational research it is difficult to control for interfering variables, which lowers internal validity. However, studying environmental or other measures in a natural setting results in higher external validity. Fortunately, these apparent contradictions are resolvable, as a great many studies primarily wish to deductively test a theory, in which case the major consideration is the rigor (internal validity) of the study.
Blinding refers to procedures undertaken to ensure that neither the study participants nor any member of the study team knows to which group a participant belongs (treatment or nontreatment). Some studies have been classified as single, double, or triple blinded, depending on whether it was the participants, care team, or outcome assessors that were blinded. It is currently accepted that investigators should refrain from this terminology and simply state the type of blinding within the text of the paper.2
Inclusion criteria are conditions that must be met for the appropriate recruitment of subjects into a clinical study.
Exclusion criteria are conditions that, when present, require the rejection of subjects from a clinical study.
There are different paradigms for performing analysis of data from clinical studies. When attempting to limit bias and evaluate the effect of introducing a clinical intervention on a particular population, it is best to utilize a study design that incorporates intention-to-treat analysis. Other poststudy analyses and interim analyses may be appropriate in some circumstances, but the conclusions drawn from them may be less accurate.
Intention-to-treat (ITT) analysis: The objective is to analyze each group exactly as it existed upon randomization. A true ITT analysis is possible only when complete outcome data are available for all randomized subjects. This means that all subjects are included in the analysis of their assigned group, even those who drop out. ITT analyses decrease outcome bias.
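The contrast between an ITT analysis and an analysis restricted to protocol completers can be sketched with a short, self-contained example. The subject records below are entirely hypothetical and exist only to show how the two denominators differ:

```python
# Hypothetical illustration of intention-to-treat (ITT) vs per-protocol
# analysis. Each record is (assigned_group, completed_protocol, good_outcome);
# all values are invented for demonstration only.
subjects = [
    ("treatment", True, True),
    ("treatment", True, False),
    ("treatment", False, False),  # dropped out; ITT still counts this subject
    ("control", True, False),
    ("control", True, True),
    ("control", False, False),
]

def outcome_rate(records):
    """Proportion of records with a good outcome."""
    return sum(1 for (_, _, ok) in records if ok) / len(records)

# ITT: analyze everyone in the group to which they were randomized.
itt_treatment = outcome_rate([s for s in subjects if s[0] == "treatment"])

# Per-protocol: analyze only those who completed the protocol.
pp_treatment = outcome_rate(
    [s for s in subjects if s[0] == "treatment" and s[1]]
)

print(itt_treatment)  # 1 of 3 randomized treatment subjects
print(pp_treatment)   # 1 of 2 protocol completers
```

Dropping the noncompleter inflates the apparent treatment success rate, which is exactly the bias an ITT analysis avoids.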
Subgroup analysis: Analyzing groups within the groups being studied. Subgroup analyses are discouraged because multiple comparisons may lead to false-positive findings that cannot be confirmed.
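The multiple-comparisons problem behind this caution can be illustrated numerically. Assuming independent tests each run at a significance level of 0.05 (an illustrative simplification), the chance of at least one false-positive finding grows quickly with the number of subgroup comparisons:

```python
# Family-wise false-positive probability with k independent tests at
# alpha = 0.05: P(at least one spurious "positive") = 1 - (1 - alpha)**k.
alpha = 0.05
for k in (1, 5, 10, 20):
    familywise = 1 - (1 - alpha) ** k
    print(k, round(familywise, 3))
```

With 10 subgroup comparisons the chance of at least one spurious positive is already about 40%, which is why unplanned subgroup findings are treated as hypothesis-generating rather than confirmatory.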
Interim analysis: A pretrial strategy for stopping a trial early if the results show large outcome differences between groups. It allows periodic assessment of the beneficial or harmful effects of treatment compared with a concurrent placebo or control group while a study is ongoing. It serves as a cost-saving measure and, importantly, meets the ethical obligation that only the minimum number of patients needed to achieve the study's primary objective should be entered into a trial, reducing participants' exposure to the inferior treatment. An interim analysis may occur after the inclusion of a certain number of participants or after a set period of time; however, the way in which it is to be conducted must be expressly stated in the study protocol. Additionally, the results of the interim analysis should be evaluated by an independent data monitoring committee.3 Other reasons to stop a trial early are unacceptable side effects or toxicity, accrual so slow that the trial is no longer feasible, outside information that makes the trial unnecessary or unethical, poor execution that compromises the study's ability to meet its objectives, or disastrous fraud or misconduct.
Bias usually refers to any unintended influence that a particular facet of the study design may have that alters or skews the results. Typically, investigators seek to limit bias as much as possible; however, much of the medical literature is affected by bias in some form.
Selection bias: Usually results from an error in choosing the individuals or groups to participate in a study. It distorts the statistical analyses and may result in drawing incorrect conclusions regarding the study outcome(s). It weakens internal validity.
Sampling bias: A systematic error in a study that occurs because the participants do not represent a random sampling of the population. This occurs in some instances due to participant self-selection or prescreening of trial participants. The result is that some members of the population are less likely to be included than others. It weakens external validity.
Attrition bias: A kind of selection bias caused by the loss of participants. It includes patients who drop out, do not respond to a survey (nonresponders), or withdraw or deviate from the study protocol. It biases results because the intervention or nonintervention groups become unequal or underrepresented in the outcome data.
Publication bias: A bias in what is most likely to be published, positive or negative findings. If negative findings are underreported, the overall published literature becomes misleading. Studies suggest that positive studies are three times more likely to be published than negative studies. Trial registration is now required by many journals to ensure that unfavorable results are not withheld from publication.4
Participants are randomly assigned to a treatment (intervention) or nontreatment (control) group. Randomization eliminates bias in group assignment, aids in blinding the investigators, participants, and other assessors to the grouping of study participants, and allows the use of probability theory to express the likelihood that any outcome differences between groups merely reflect chance.
Cluster randomization: A preexisting group of study participants (schools, poisoning victims, families) is randomly selected to receive (or not receive) an intervention. Cluster randomization is sometimes done because of factors related to the study participants (ie, all family members are placed in the same treatment or nontreatment group).
Factorial randomization: Each participant is randomly assigned to a treatment (or nontreatment group) that receives a particular combination of interventions (or noninterventions).
Randomization procedure: Generation of an unpredictable sequence of allocations that assigns participants to treatment or control groups using an element of chance, following the patient's evaluation for eligibility and recruitment into the study.
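One common way to generate such a sequence is permuted-block randomization, which keeps group sizes balanced while leaving individual assignments unpredictable. The sketch below is illustrative only (function name, block size, and labels are assumptions; real trials use validated randomization software):

```python
import random

def block_randomization_sequence(n_blocks, block_size=4, seed=None):
    """Generate an allocation sequence using permuted blocks.

    Within each block, half the slots are treatment ("T") and half
    control ("C"), shuffled so that individual assignments remain
    unpredictable while overall group sizes stay balanced.
    Illustrative sketch only, not a validated trial tool.
    """
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ["T"] * (block_size // 2) + ["C"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

seq = block_randomization_sequence(3, block_size=4, seed=42)
print(seq)  # 12 assignments, exactly 6 "T" and 6 "C" overall
```

Because each block is internally balanced, the treatment and control arms can never drift more than half a block apart in size, even if the trial stops early.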
Allocation concealment: Very strict protocols to ensure that patient group assignments are not revealed prior to their allocation to a group. Sequentially numbered, opaque, sealed envelopes (SNOSE) are one type of allocation concealment.
As of July 1, 2005, the International Committee of Medical Journal Editors (ICMJE) announced that all RCTs must be registered to be considered for publication in member journals. This registration may not occur late.5
During the design phase of a study, it is important to define groups. Based on the study type, there may be several different types of groups needed for a particular study.
Control group: A patient group that receives no treatment.
Placebo control group: A patient group that receives no active treatment but instead receives a “sham” or “placebo” treatment that mimics what is being performed in the “test group” without physiological or real effect.
Parallel group: Participants are randomly allocated to a group, and study participants either receive or do not receive an intervention.
Crossover group: Participants are randomly allocated to a group, and, over time, all study participants receive and do not receive an intervention in a random sequence.
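For a two-period crossover design, the random sequencing amounts to assigning each participant an order of AB or BA. A minimal sketch, with hypothetical names and only two treatments assumed:

```python
import random

def assign_crossover_sequences(n_participants, seed=None):
    """Randomly assign each participant a treatment order for a
    two-period crossover design: ("A", "B") or ("B", "A").
    Illustrative sketch only; labels and structure are hypothetical."""
    rng = random.Random(seed)
    return [rng.choice([("A", "B"), ("B", "A")])
            for _ in range(n_participants)]

sequences = assign_crossover_sequences(10, seed=7)
print(sequences)  # every participant receives both A and B, in random order
```

Each participant serves as his or her own control, which is the principal appeal of the crossover design.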