

 Reflect on the knowledge that you have gained throughout this course and envision how this information will assist you in the future.

  • Analyze your most important discovery about health care information systems.
  • Highlight how you plan to apply what you have learned to your current or future position(s).

Effectiveness of Computerized Decision Support Systems Linked to Electronic Health Records: A Systematic Review and Meta-Analysis

We systematically reviewed randomized controlled trials (RCTs) assessing the effectiveness of computerized decision support systems (CDSSs) featuring rule- or algorithm-based software integrated with electronic health records (EHRs) and evidence-based knowledge. We searched MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials, and Cochrane Database of Abstracts of Reviews of Effects. Information on system design, capabilities, acquisition, implementation context, and effects on mortality, morbidity, and economic outcomes was extracted.

Twenty-eight RCTs were included. CDSS use did not affect mortality (16 trials, 37 395 patients; 2282 deaths; risk ratio [RR] = 0.96; 95% confidence interval [CI] = 0.85, 1.08; I2 = 41%). A statistically significant effect was evident in the prevention of morbidity, any disease (9 RCTs; 13 868 patients; RR = 0.82; 95% CI = 0.68, 0.99; I2 = 64%), but selective outcome reporting or publication bias cannot be excluded. We observed differences for costs and health service utilization, although these were often small in magnitude.

Across clinical settings, new-generation CDSSs integrated with EHRs do not affect mortality and might moderately improve morbidity outcomes. (Am J Public Health. 2014;104:e12–e22. doi:10.2105/AJPH.2014.302164)

Lorenzo Moja, MD, MSc, PhD, Koren H. Kwag, BSc, MSc, Theodore Lytras, MD, MPH, Lorenzo Bertizzolo, MD, Linn Brandt, MD, Valentina Pecoraro, BSc, Giulio Rigon, MD, MSc, Alberto Vaona, MD, MSc, Francesca Ruggiero, BA, MA, Massimo Mangia, Alfonso Iorio, MD, PhD, Ilkka Kunnamo, MD, PhD, and Stefanos Bonovas, MD, MSc, PhD

The quality of medical care is variable and often suboptimal across health care systems.1 Despite the growing availability of knowledge from randomized controlled trials (RCTs) and systematic reviews to guide clinical practice, there remains a discrepancy in the application of evidence into health care services.2 Current research demonstrates the potential of computerized decision support systems (CDSSs) to assist with problems raised in clinical practice, increase clinician adherence to guideline- or protocol-based care, and, ultimately, improve the overall efficiency and quality of health care delivery systems.1,3,4

CDSSs have been additionally shown to increase the use of preventive care in hospitalized patients, facilitate communication between providers and patients, enable faster and more accurate access to medical record data, improve the quality and safety of medication prescribing, and decrease the rate of prescription errors.5–9 A recent study estimated that the adoption of Computerized Physician Order Entry and Clinical Decision Support could prevent 100 000 inpatient adverse drug events (ADEs) per year, resulting in increased inpatient bed availability by more than 700 000 bed-days and opportunity savings approaching €300 million in the studied European Union member states (i.e., the Czech Republic, France, the Netherlands, Sweden, Spain, and the United Kingdom).10

Electronic Health Records (EHRs) represent another innovation that is gaining momentum in health care systems. In the United States, the use of EHRs is encouraged by the $27 billion allocated in reimbursement incentives by the 2009 Health Information Technology for Economic and Clinical Health (HITECH) Act. Under the Act, clinicians and hospitals must demonstrate “meaningful use” of EHRs by adhering to a set of criteria, which includes the implementation of clinical decision support rules relevant to a specialty or high-priority hospital condition such as diagnostic test ordering.11 The integration of CDSSs with EHRs through the delivery of guidance messages to health care professionals at the point of care may maximize the impact of both innovations.

A primary barrier to successful CDSS evaluation is the broad definition adopted by the research community, which encompasses a diverse range of interventions and functions (see the box of definitions below). The inclusion of studies with variable interventions across diverse health care settings has precluded systematic reviews from reaching a decisive understanding of the impact of CDSSs.9,12–14 To address this issue, we conducted a systematic review to rigorously evaluate the impact of CDSSs linked to EHRs on critical outcomes (mortality, morbidity, and costs) and adopted a narrow definition of the intervention to facilitate its coherent and accurate evaluation.

METHODS

Our study protocol18 is registered on PROSPERO, the international prospective register of systematic reviews (ID: 2014:CRD42014007177). This work was performed in accordance with the PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions.19

Eligibility Criteria

Population. Postgraduate health professionals (medical, nursing, and allied health) in primary, secondary, and tertiary care settings. Only interventions that were implemented in real, nonsimulated, clinical settings were considered.

Types of interventions. We adapted the definition of a CDSS by Haynes et al.20 and Eberhardt et al.21 We defined a CDSS as an information system aimed at supporting clinical decision making, linking patient-specific information in EHRs with evidence-based knowledge to generate case-specific guidance messages through rule- or algorithm-based software. Our inclusion criteria emphasize the implementation of evidence-based medicine, meaning that computer-generated guidance messages had to be based on literature or a priori evidence (e.g., guidelines or point-of-care services) and not on expert opinions. This knowledge then had to be delivered to medical doctors or allied health care professionals through electronic media (e.g., computer, smartphone, or tablet). We did not, however, exclude a CDSS based on the degree of literature it covered in its literature surveillance system: we included a CDSS whether it integrated a single evidence-based guideline or incorporated multiple evidence-based guidelines. We also included CDSSs irrespective of the level of patient information archived in the EHR.

Systems that alter the guidance based on previous experience or average behaviors were excluded.
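To make the definition above concrete, the sketch below applies one hypothetical evidence-based rule to EHR-style patient fields and generates a case-specific reminder, in the spirit of the rule- or algorithm-based guidance described here. The field names, the 180-day threshold, and the message text are invented for illustration and are not taken from any system included in this review.

    # Toy illustration of a rule-based guidance message (all names and the rule are hypothetical)
    ehr <- data.frame(
      patient_id            = c("A-001", "A-002"),
      age                   = c(67, 45),
      has_diabetes          = c(TRUE, TRUE),
      days_since_last_hba1c = c(210, 40)
    )

    # Hypothetical evidence-based rule: adults with diabetes and no HbA1c result
    # in the last 180 days receive a point-of-care reminder
    needs_reminder <- ehr$age >= 18 & ehr$has_diabetes & ehr$days_since_last_hba1c > 180
    paste0("Reminder for ", ehr$patient_id[needs_reminder],
           ": no HbA1c recorded in the last ", ehr$days_since_last_hba1c[needs_reminder],
           " days; guideline suggests retesting.")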

We included software guidance messages irrespective of their form (e.g., recommendations, alerts, prompts, or reminders) and regardless of the target assistance (e.g., diagnostic test ordering and interpretation, treatment planning, therapy recommendations, primary preventive care, therapeutic drug monitoring and dosing, drug prescribing, or chronic disease management). Patient-specific information had to derive from EHRs. Our operational definitions for considering a study “compliant” with the EHR were inclusive: from clinical data repository and health data repository (CDHR), to electronic medical or patient record (EMR and EPR), and EHR.22

Our inclusion criteria match the “6S” Haynes’ model for evidence-based literature products23 and the evolution of online point-of-care services.24 The box below describes, in detail, the characteristics of the CDSSs we evaluated.

Definitions of Computerized Decision Support Systems (CDSSs) Adopted by Authors of Other Systematic Reviews

Bates et al.15(p524) (and later adopted by Ash et al.4(p980)) defined a CDSS as a computer-based system providing “passive and active referential information as well as reminders, alerts, and guidelines.”

Kawamoto et al.16(p1) (and later adopted by Bright et al.9(p29)) identified a CDSS as “any electronic system designed to aid directly in clinical decision making, in which characteristics of individual patients are used to generate patient-specific assessments or recommendations that are then presented to clinicians for consideration.”

Payne17(p47S) classified CDSSs as “computer applications designed to aid clinicians in making diagnostic and therapeutic decisions in patient care.”

Characteristics of Computerized Decision Support Systems (CDSSs)

Implementation strategy
• Channel: Electronic-based
• Sharing: Local application, networked, or Web applications
• Type of device: Local personal computer or handheld device
• Computational architecture: CDSS built into the local EHR, knowledge available from a central repository, entire system housed outside the local site, or cloud-based system

Information
• Nature: Knowledge-based
• Provider: Contents provided by a national/international publisher, professional society, health care organization, or governmental agency
• EBM methodology: General references, specific guidelines for a given clinical condition, suggestions considering a patient’s unique clinical data, list of possible diagnoses, drug interaction alerts, or preventive care reminders
• Format (delivery form): Messages such as reminders, prompts, alerts, algorithms, recommendations, rules, order sets, warnings, data reports, and dashboards

Target
• Targeted setting: Primary, secondary, or tertiary
• Target expertise: Preventive care (e.g., immunization, screening, or disease management guidelines for secondary prevention); diagnosis (e.g., suggestions for possible diagnoses that match a patient’s signs and symptoms); planning or implementing treatment (e.g., guidelines for specific diagnoses, drug dosage recommendations, or warnings for drug interactions); follow-up management (e.g., corollary orders, reminders for ADE monitoring); hospital or provider efficiency (e.g., care plans to minimize length of stay); and cost reductions and improved patient convenience (e.g., duplicate testing alerts or drug formulary guidelines)
• Overall goals: Improved overall efficiency, early disease identification, accurate diagnosis, adherence of treatment to protocols, or prevention of ADEs

Time
• Timing: Immediately at the point of care, before the patient encounter, after the patient encounter, or at any time
• Type of presentation: “Automatic” (key issues: timing, autonomy, and user control over response) or “on demand” (key issues: speed, ease of access, autonomy, and user control over response)

Person: health professional
• Physicians, nurses, or allied health professionals

Note. ADE = adverse drug event; EBM = evidence-based medicine; EHR = electronic health record.


Types of comparison groups. To address our objectives, we considered the following comparisons: access to CDSSs according to our definition compared with (1) standard care with no access to CDSSs, (2) CDSSs that do not generate advice, or (3) CDSSs that are not based on evidence. Trials comparing arms accessing the same CDSS at different intensities (e.g., one arm having guidance messages pushed to the health professional vs another arm having guidance messages statically available in a folder) were not pooled together with the other trials in the quantitative analyses.

Types of outcomes and assessment measures. We identified a priori the following (primary) outcome measures for included studies:

1. Mortality: We selected mortality as it is the most relevant and objective outcome, although there may exist variability across studies with regard to the time frame during which mortality is captured.

2. Morbidity: We selected and grouped objective patient outcomes such as occurrence of illness (e.g., pneumonia, myocardial infarction, stroke), progression of diseases, and hospitalizations.

3. Economic outcomes: Information about health care utilization (e.g., length of stay, emergency department visits, and primary care consultations) and costs.

We did not consider the following outcomes: patient satisfaction, measures of process, and health care professional activity or performance (e.g., adherence to guidelines, rates of screening and other preventive measures, provision of counseling, rates of appropriate drug administration, and identification of at-risk behaviors).

Types of studies. To be eligible, studies had to be randomized controlled trials (RCTs). Randomization was allowed to be either at the individual or at the cluster level.

Data Sources

We systematically searched the English-language literature indexed in MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials, and Cochrane Database of Abstracts of Reviews of Effects. Studies found in the bibliographies of Systematic Reviews on CDSSs, as well as those identified by experts, were also considered. The full search strategies for MEDLINE and EMBASE are included in the Appendix.

Study Selection and Data Extraction

We identified RCTs of the CDSSs fulfilling the aforementioned eligibility criteria. We combined the results into a reference management software program (EndNote X5 for Windows, Thomson Reuters, Philadelphia, PA). The database was filtered for duplications to derive a unique set of records. Investigators (K. H. K., T. L., L. B., L. B., V. P., G. R., A. V., and S. B.) independently examined the search results and screened the titles and abstracts; the full-text reports of all potentially relevant trials were subsequently screened. Investigators (K. H. K., T. L., L. B., L. B., V. P., G. R., A. V., and S. B.) independently abstracted information on CDSS characteristics and effect estimates from all included trials using a modified version of The Cochrane Effective Practice and Organisation of Care Review Group (EPOC) data collection checklist: study setting and methods (design), comparators, computerized CDSS characteristics, patient or provider characteristics, and outcomes. We performed all steps in the study selection and data extraction processes in duplicate.

When necessary, we attempted to contact the study authors to clarify uncertainties in the study design or results.

Risk of Bias Assessment

Two investigators (K. H. K., L. M.) assessed the potential risk for bias in included studies using the criteria outlined in the Cochrane Handbook for Systematic Reviews of Interventions.25 The assessment involved the following key domains: sequence generation, allocation concealment, blinding of outcome assessors, incomplete outcome data, selective outcome reporting, and other sources of bias (e.g., extreme baseline imbalance or failure to disclose the source of funding for the study). We did not assess the blinding of personnel and participants given the nature of the intervention. In fact, the use of masking procedures to prevent personnel and participants from knowing the allocation to the intervention or control arms was impractical. Furthermore, blinding does not affect mortality, an outcome of this review. Our assessment referred only to studies reporting mortality or morbidity outcomes. Any disagreement was resolved by discussion or by the involvement of a third investigator (S. B.).

Data Synthesis

Risk ratios and 95% confidence intervals (CIs) were calculated for each trial by reconstructing contingency tables based on the number of patients randomly assigned and the number of patients with the outcome of interest (analysis in accordance with the intention-to-treat principle). For the cluster-randomized trials, to calculate adjusted (inflated) CIs that account for the clustering, we performed an approximate analysis as recommended in the Cochrane Handbook.25 Our approach was to multiply the standard error of the effect estimate (from the analysis ignoring the clustering) by the square root of the design effect.25 For this, we used an intracluster correlation coefficient (ICC = 0.027) borrowed from an external source.26 Then, each meta-analysis was performed twice, assuming either a fixed-effects27 or a random-effects model.28 In the absence of heterogeneity, the fixed-effects and the random-effects models provide similar results. When heterogeneity is found, the random-effects model is considered to be more appropriate, although both models may be biased.29
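As an illustration of these pooling steps, the sketch below works through a risk ratio per trial, an approximate cluster adjustment (standard error inflated by the square root of the design effect, 1 + (m - 1) * ICC, with ICC = 0.027 as above), and fixed-effects and DerSimonian-Laird random-effects pooling in base R. The event counts and cluster sizes are invented for illustration and are not the trial data analyzed in this review.

    # Illustrative per-trial 2x2 data (events / totals); not the actual study data
    ev_i <- c(30, 12, 45); n_i <- c(500, 300, 900)   # intervention arms
    ev_c <- c(35, 15, 40); n_c <- c(510, 310, 880)   # control arms
    m    <- c(25, 1, 40)    # average cluster size (1 = individually randomized)
    icc  <- 0.027           # intracluster correlation coefficient used in the review

    # Risk ratio and standard error of log(RR) for each trial
    rr     <- (ev_i / n_i) / (ev_c / n_c)
    se_log <- sqrt(1 / ev_i - 1 / n_i + 1 / ev_c - 1 / n_c)

    # Approximate cluster adjustment: inflate the SE by sqrt(design effect)
    deff   <- 1 + (m - 1) * icc
    se_adj <- se_log * sqrt(deff)

    # Fixed-effects (inverse-variance) pooled risk ratio and 95% CI
    w_fix  <- 1 / se_adj^2
    mu_fix <- sum(w_fix * log(rr)) / sum(w_fix)
    exp(mu_fix); exp(mu_fix + c(-1.96, 1.96) * sqrt(1 / sum(w_fix)))

    # Random-effects (DerSimonian-Laird) pooled risk ratio and 95% CI
    k    <- length(rr)
    Q    <- sum(w_fix * (log(rr) - mu_fix)^2)
    tau2 <- max(0, (Q - (k - 1)) / (sum(w_fix) - sum(w_fix^2) / sum(w_fix)))
    w_re  <- 1 / (se_adj^2 + tau2)
    mu_re <- sum(w_re * log(rr)) / sum(w_re)
    exp(mu_re); exp(mu_re + c(-1.96, 1.96) * sqrt(1 / sum(w_re)))

The review itself used the meta package for R; the hand-rolled version above is only meant to make the cluster adjustment and the two pooling models explicit.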

For all statistical analyses we used the R software environment,30 version 3.0.1, and the “meta” package for R,31 version 2.3-0. Selective outcome reporting or publication bias was assessed using the Begg and Mazumdar adjusted rank correlation test32 and the Egger regression asymmetry test.33 To evaluate whether the results of the studies were homogeneous, we used the Cochran Q test with a 0.10 level of significance.34 We also calculated the I2 statistic,35 which describes the percentage of variation across studies that is attributed to heterogeneity rather than chance. We regarded an I2 value less than 40% as indicative of “not important heterogeneity” and a value higher than 75% as indicative of “considerable heterogeneity.”25
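For concreteness, the snippet below computes the Cochran Q statistic, its P value, and I2 from a handful of hypothetical per-study log risk ratios and standard errors, and fits the standard Egger regression (standardized effect regressed on precision, with the intercept testing for asymmetry). The numbers are invented, and this is the textbook form of each test rather than the exact code used by the authors.

    # Hypothetical per-study log risk ratios and standard errors (illustration only)
    log_rr <- c(-0.16, -0.22, 0.10, 0.05)
    se     <- c(0.18, 0.25, 0.15, 0.20)

    # Cochran Q test (0.10 significance level) and the I2 statistic
    w  <- 1 / se^2
    mu <- sum(w * log_rr) / sum(w)
    Q  <- sum(w * (log_rr - mu)^2)
    pQ <- pchisq(Q, df = length(log_rr) - 1, lower.tail = FALSE)
    I2 <- max(0, 100 * (Q - (length(log_rr) - 1)) / Q)
    c(Q = Q, p = pQ, I2 = I2)

    # Egger regression asymmetry test: an intercept far from zero suggests
    # small-study effects or selective reporting
    egger <- lm(I(log_rr / se) ~ I(1 / se))
    summary(egger)$coefficients["(Intercept)", ]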

To evaluate the stability of the results, we also performed a “leave-one-out” sensitivity analysis. The scope of this approach was to evaluate the influence of individual studies by estimating the summary relative risk in the absence of each study.36 All P values are 2-tailed. For all tests (except for heterogeneity), a probability level less than .05 was considered statistically significant.
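A leave-one-out analysis of this kind is a short loop: recompute the pooled estimate with each study omitted in turn and check how far it moves. The sketch below does this for a fixed-effects pool over hypothetical inputs; it is illustrative only.

    # Hypothetical per-study log risk ratios and standard errors (illustration only)
    log_rr <- c(-0.16, -0.22, 0.10, 0.05)
    se     <- c(0.18, 0.25, 0.15, 0.20)

    # Pooled fixed-effects RR with each study removed in turn
    loo <- sapply(seq_along(log_rr), function(i) {
      w <- 1 / se[-i]^2
      exp(sum(w * log_rr[-i]) / sum(w))
    })
    round(loo, 3)   # values clustering tightly indicate a stable pooled estimate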

RESULTS

The results of our search and selection process are presented in Figure 1. We identified 28 RCTs that met the predefined inclusion criteria.37–64 Eighteen studies reported mortality or morbidity data37–54 and were included in the meta-analyses, while 10 more studies reported only economic outcomes.55–64 A description of the RCTs is provided in the Appendix (available as a supplement to this article at http://www.ajph.org).

Risk of Bias in Studies Included in the Meta-Analyses

Overall, the assessment of the 18 studies incorporated in the meta-analyses indicated a high risk of bias for 7 studies (39%) and an unclear risk for 10 studies (56%). Only 1 study44 (5%) was judged to be at low risk of bias. We noticed that the majority of trials did not measure mortality as an outcome, but reported it as additional information, often as a reason for loss to follow-up. Readers should be aware that our risk of bias assessment did not evaluate studies based on their intended outcomes, but according to 2 outcomes of our systematic review: mortality and morbidity. Quality assessment items are summarized in Figure 2.

Meta-Analysis of Mortality Outcomes

Sixteen RCTs contributed to this analysis.37–52 A total of 37 395 individuals participated in these trials: 18 848 in the intervention groups and 18 547 in the control groups. Seven trials37,41,42,44,47,48,50 reported a lower mortality in the intervention group, while 8 trials38–40,42,46,49,51,52 reported a higher mortality. Only 3 were statistically significant.44,46,47 The overall mortality rate across all 16 RCTs was 6.2% in the intervention groups (1171 deaths) and 6.0% in the control groups (1111 deaths). The pooled effect estimate was not statistically significant assuming either a fixed-effects model (RR = 1.00; 95% CI = 0.92, 1.08) or a random-effects model (RR = 0.96; 95% CI = 0.85, 1.08). Figure 3 shows the forest plot of the RR estimates and 95% CIs from the individual trials and the pooled results. The Cochran Q test had a P value of .047 and the corresponding I2 statistic was 41%, both indicating moderate variability between studies. Visual inspection of the funnel plot (Figure 4a) indicated that pooled data did not appear to be heavily influenced by publication bias, although it is also possible that a few studies are “missing” from the area of nonsignificance. The P values for the tests of Begg and Egger were P = .96 and P = .29, respectively, also suggesting a low probability of publication bias. The “leave-one-out” sensitivity analysis, removing one study at a time (Figure 5), confirmed the stability of our results.
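As a quick arithmetic check (not an additional analysis), the crude rates quoted above follow directly from the reported counts; the pooled RRs differ slightly because pooling is done per trial rather than from these crude totals.

    # Crude mortality rates implied by the counts reported in the text
    deaths_int <- 1171; n_int <- 18848
    deaths_ctl <- 1111; n_ctl <- 18547
    round(100 * deaths_int / n_int, 1)   # 6.2 (%), intervention groups
    round(100 * deaths_ctl / n_ctl, 1)   # 6.0 (%), control groups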

Meta-Analysis of Morbidity Outcomes

Nine RCTs contributed to this analysis.40,42,44,45,49,51–54 A total of 13 868 individuals participated in these trials. The analysis revealed a weak inverse association between CDSS use and morbidity from any disease. The difference between the CDSS and control groups in the occurrence of morbidity outcomes was marginally significant assuming a random-effects model (RR = 0.82; 95% CI = 0.68, 0.99), but not significant assuming a fixed-effects model (RR = 0.91; 95% CI = 0.83, 1.00). Figure 3 shows the forest plot of the RR estimates and 95% CIs from the individual trials and the pooled results. The Cochran Q test had a P value of .005 and the corresponding I2 statistic was 64%, both indicating substantial variability between studies. Visual inspection of the funnel plot (Figure 4b) indicated slight asymmetry, with relatively few studies existing midway in the area of nonsignificance. The P values for the Begg and Egger tests were P = .18 and P = .07, respectively, suggesting the possible existence of selective outcome reporting bias or small-study effects. The sensitivity analysis confirmed that the pooled estimates were fairly unstable (Figure 5).

FIGURE 1—Summary of evidence search and selection. [Flow diagram: 10 100 records identified through MEDLINE, EMBASE, CENTRAL, and DARE database searches plus 78 additional records identified through bibliographies of systematic reviews; 9484 records after duplicates removed and screened; 9238 records excluded based on title/abstract; 246 full-text articles assessed for eligibility; 171 studies excluded (not CDSS according to our criteria, n = 68; not RCT, n = 68; no access to full text, n = 10; simulation study, n = 7; no acceptable comparator, n = 7; overlapping studies, n = 7; complex intervention where the effect of the CDSS could not be separated out, n = 6; not health professional, n = 3; not CDSS assessment, n = 3; systematic reviews, n = 2); 75 eligible studies evaluated for outcomes of interest; 28 RCTs included in quantitative synthesis (mortality, n = 16; morbidity, n = 9; economic outcomes, n = 17). Note. CDSS = computerized decision support systems; RCT = randomized controlled trial.]

Qualitative Assessment of Economic Outcomes

Seventeen RCTs reported economic outcomes.41–43,45,46,50,53,55–64 Three of these46,50,59 presented the economic data in separate publications.65–67 Differences were seen for costs and health service utilization (e.g., drug or test orders), but these were often small in magnitude. Across economic outcomes, interventions equipped with CDSSs did not consistently perform better than nonequipped ones. Data regarding the impact of CDSSs on cost and health services utilization are given in Table 1.

DISCUSSION

This systematic review of 28 RCTs revealed little evidence for a difference in mortality when pooling results from comparisons of adoption of a CDSS integrated with an EHR versus health care settings without a CDSS. Our review indicates that differences in mortality outcomes, if they exist, appear small across studies and health care services, and may exist only in particular settings with specific diseases and circumstances.

However, most of the studies were underpowered and too short to prove or exclude an effect on mortality, and effects as large as a 25% increase or reduction could still be possible. We found weak evidence that an active CDSS is associated with a lower risk for morbidity. All morbidity outcomes selected were relevant from a clinical and health services perspective. Again, results on morbidity outcomes were very diverse, limiting quantitative inferences; however, the summary RR morbidity decrease of 10% to 18% places CDSSs linked to EHRs at the top of the spectrum of quality improvement interventions for their potential impact on health outcomes. The beneficial effects of CDSSs might still be greater than that suggested by the current analysis given the limited number of actual studies providing results on hard outcomes. Finally, we observed differences for costs and health service utilization, but these were often small in magnitude.

Several other systematic reviews provided pooled estimates of the RRs for CDSSs. All reviews observed large between-study heterogeneity. This is expected given the variability in interventions, settings, diseases, and study designs. Despite this limitation, they concluded in favor of CDSSs. Our review exhibits several differences. We adopted stricter inclusion criteria, selecting only CDSSs featuring rule- or algorithm-based software integrated with EHRs and evidence-based knowledge. The CDSSs we included can be viewed as a second generation in terms of their technology, information management, and linkage to EHRs. Furthermore, we did not include process and laboratory outcomes such as adherence to guideline recommendations or change in blood values. Analyzing estimates from process outcomes is problematic: their relevance is questionable, and the quality of the data may have been less than optimal, particularly when the data sources were administrative rather than clinical. The overlap between our review and others is limited, at approximately 50% in terms of the studies and less in terms of the rough data. The results of our review complement previous analyses showing that CDSSs are best oriented to directly affect process outcomes (recommendation adherence) and, with decreasing impact, morbidity and mortality.

FIGURE 2—Summary of risk-of-bias assessments of the randomized controlled trials included in the meta-analyses. [Figure: per-study judgments for Hetlevik et al.37; Montgomery et al.38; Hetlevik et al.39; McCowan et al.53; Kucher et al.40; Paul et al.41; McGregor et al.42; Rothschild et al.43; Gurwitz et al.54; Roy et al.44; Graumlich et al.45; MacLean et al.46; Bosworth et al.47; Cleveringa et al.48; Holbrook et al.49; O’Connor et al.50; Fitzgerald et al.51; and Robbins et al.52 across the domains random sequence generation, allocation concealment, incomplete outcome data, freedom from selective outcome reporting, freedom from other sources of bias, and blinding of participants and personnel. Note. Green (+) = low risk of bias; Yellow (?) = unclear risk of bias; Red (–) = high risk of bias.]

Several included studies were cluster-RCTs that did not report whether they accounted for clustering effects. Trials randomizing at the group level should not be analyzed at the individual participant level. If the clustering is ignored, P values will be artificially small. This problem might result in false-positive conclusions that the CDSS has an effect when it does not. Thus, we adjusted estimates of the RRs for our data synthesis using a method that inflates variances.

FIGURE 3—Forest plots from individual studies and meta-analysis for (a) mortality, all follow-up, and (b) morbidity, any disease. [Figure: per-study risk ratios with 95% CIs, fixed-effect and random-effects weights, and pooled estimates. Mortality (16 trials): fixed-effect RR = 1.00 (95% CI = 0.92, 1.08), random-effects RR = 0.96 (95% CI = 0.85, 1.08), heterogeneity I2 = 40.5%. Morbidity (9 trials): fixed-effect RR = 0.91 (95% CI = 0.83, 1.00), random-effects RR = 0.82 (95% CI = 0.68, 0.99), heterogeneity I2 = 63.6%. Note. CI = confidence interval; RR = risk ratio; W = weight. The RR and 95% CI for each study are displayed on a logarithmic scale.]


However, such adjusted results should be interpreted cautiously; if the clustering effect is limited
