Comparison of community-acquired, hospital-acquired, and intensive care unit-acquired acute respiratory distress syndrome: a prospective observational cohort study
Critical Care 2015, 19:384
Kao, Kuo-Chin et al.
Introduction: Acute respiratory distress syndrome (ARDS) is a syndrome characterized by
diffuse pulmonary edema and severe hypoxemia that usually occurs after an
injury such as sepsis, aspiration, or pneumonia. Little is known about the
relationship between the setting in which the syndrome develops and outcomes
in ARDS patients.
Methods: This was a 1-year prospective observational study conducted
at a tertiary referral hospital. ARDS was defined by the Berlin criteria.
Community-acquired ARDS was defined as ARDS occurring within 48 hours of
hospital admission, hospital-acquired ARDS as ARDS occurring more than
48 hours after hospital admission, and intensive care unit (ICU)-acquired
ARDS as ARDS occurring more than 48 hours after ICU admission. The primary
and secondary outcomes were short- and long-term mortality rates and
ventilator-free and ICU-free days.
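As an illustration only, the 48-hour rule reduces to a simple classification; the sketch below uses hypothetical function and field names, and the tie-breaking order between categories is an assumption (the abstract defines the categories in prose):

    from datetime import datetime, timedelta

    def classify_ards_onset(hospital_admit: datetime,
                            icu_admit: datetime,
                            ards_onset: datetime) -> str:
        """Classify ARDS by setting of onset using the 48-hour cutoffs
        described in the abstract (illustrative sketch, not study code)."""
        cutoff = timedelta(hours=48)
        if ards_onset - hospital_admit <= cutoff:
            return "community-acquired"
        if ards_onset - icu_admit > cutoff:
            return "ICU-acquired"
        return "hospital-acquired"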
Results: Of the 3002
patients screened, 296 patients had a diagnosis of ARDS, including 70 (23.7 %)
with community-acquired ARDS, 83 (28 %) with hospital-acquired ARDS, and 143
(48.3 %) with ICU-acquired ARDS. The overall ICU mortality rate was not
significantly different in mild, moderate and severe ARDS (50 %, 50 % and 56 %,
p = 0.25). The baseline characteristics were similar, apart from lower rates of
liver disease and metastatic malignancy in community-acquired ARDS than in
hospital-acquired and ICU-acquired ARDS. A multiple logistic regression
analysis indicated that age, Sequential Organ Failure Assessment (SOFA) score and
community-acquired ARDS were independently associated with hospital mortality.
For community-acquired, hospital-acquired and ICU-acquired ARDS, ICU mortality
rates were 37 %, 61 % and 52 %; hospital mortality rates were 49 %, 74 % and 68
%. The ICU and hospital mortality rates of community-acquired ARDS were
significantly lower than those of hospital-acquired and ICU-acquired ARDS (p = 0.001 and
p = 0.001). The number of ventilator-free days was significantly lower in
ICU-acquired ARDS than in community-acquired and hospital-acquired ARDS
(11 ± 9, 16 ± 9, and 14 ± 10 days, p = 0.001). The number of ICU-free days was
significantly higher in community-acquired ARDS than in hospital-acquired and
ICU-acquired ARDS (8 ± 10, 4 ± 8, and 3 ± 6 days, p = 0.001). Conclusions:
Patients with community-acquired ARDS had lower short- and long-term mortality
rates than those with hospital-acquired or ICU-acquired ARDS.
Extracorporeal decarboxylation in patients with severe traumatic brain injury and ARDS enables effective control of intracranial pressure
Critical Care 2015, 19:381
Munoz-Bendix, C. et al.
Introduction: Acute
respiratory distress syndrome (ARDS) with concomitant impairment of oxygenation
and decarboxylation represents a complex problem in patients with increased
intracranial pressure (ICP). Permissive hypercapnia is not an option to obtain
and maintain lung-protective ventilation in the presence of elevated ICP.
Pumpless extracorporeal lung assist (pECLA) devices (iLA Membrane Ventilator;
Novalung, Heilbronn, Germany) can improve decarboxylation without the harms
associated with intensified invasive ventilation. In this pilot series, we analyzed the
safety and efficacy of pECLA in patients with ARDS and elevated ICP after severe
traumatic brain injury (TBI).
Methods: The medical records of ten patients
(eight male, two female) with severe ARDS and severe TBI concurrently managed
with external ventricular drainage in the neurointensive care unit (NICU) were
retrospectively analyzed. The effect of pECLA on enabling lung-protective
ventilation was evaluated using the difference between plateau pressure and
positive end-expiratory pressure, defined as driving pressure (ΔP), from 3 days
before implantation of the pECLA device through 3 days afterward. The ICP
threshold was set at 20 mmHg. To evaluate effects on ICP, the volume of daily
cerebrospinal fluid (CSF) drainage needed to maintain the set ICP threshold was
compared pre- and postimplant.
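In symbols, the ventilation metric used here is simply ΔP = Pplat − PEEP. For example, at a plateau pressure of 27 cmH2O and a PEEP of 10 cmH2O (hypothetical values, not from the study), ΔP = 17 cmH2O, close to the pre-implant mean reported below.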
Results: The ΔP values after pECLA implantation
decreased from a mean of 17.1 ± 0.7 cmH2O to 11.9 ± 0.5 cmH2O (p = 0.011). In
spite of this improved lung-protective ventilation, carbon dioxide pressure
decreased from 46.6 ± 3.9 mmHg to 39.7 ± 3.5 mmHg (p = 0.005). The volume of
daily CSF drainage needed to maintain ICP at 20 mmHg decreased significantly
from 141.5 ± 103.5 ml to 62.2 ± 68.1 ml (p = 0.037). Conclusions: For selected
patients with concomitant severe TBI and ARDS, the application of pECLA is safe
and effective. pECLA devices improve decarboxylation, thus enabling
lung-protective ventilation. At the same time, potentially detrimental
hypercapnia that may increase ICP is avoided. Larger prospective trials are
warranted to further elucidate application of pECLA devices in NICU patients.
An Environmental Scan for Early Mobilization Practices in U.S. ICUs
Critical Care Medicine: November 2015 - Volume 43 - Issue 11 - p 2360–2369
Bakhru, RN. et al.
Objective:
Early mobilization improves patient outcomes. However, diffusion of this
intervention into standard ICU practice is unknown. Dissemination and
implementation efforts may be guided by an environmental scan to detail
readiness for early mobilization, current practice, and barriers to early
mobilization. Design: A telephone survey. Setting: U.S. ICUs. Subjects: Five
hundred randomly selected U.S. ICUs stratified by regional hospital density and
hospital size. Interventions: None.
Measurements and Main Results: We surveyed
687 ICUs for a 73% response rate (500 ICUs); 99% of respondents were nursing
leadership. Fifty-one percent of hospitals reported an academic affiliation.
Surveyed ICUs were most often mixed medical/surgical (58%) or medical (22%) with
a median of 16 beds (interquartile range, 12–24). Thirty-four percent reported presence of a
dedicated physical and/or occupational therapy team for the ICU. Overall, 45%
of ICUs reported early mobilization practice; two thirds of ICUs with early
mobilization practice reported using a written early mobilization protocol. In
ICUs with early mobilization practice, 52% began the intervention at admission
and 74% enacted early mobilization for both ventilated and nonventilated
patients. Early mobilization was provided a median of 6 days per week, twice
daily. Factors independently associated with early mobilization protocols
included dedicated physical/occupational therapy (odds ratio, 3.34; 95% CI,
2.13–5.22; p < 0.01), American Hospital Association region 2 (odds ratio,
3.33; 95% CI, 1.04–10.64; p = 0.04), written sedation protocol (odds ratio,
2.36; 95% CI, 1.25–4.45; p < 0.01), daily multidisciplinary rounds (odds
ratio, 2.31; 95% CI, 1.29–4.15; p < 0.01), and written daily goals for
patients (odds ratio, 2.17; 95% CI, 1.02–4.64; p = 0.04). Commonly cited
barriers included equipment, staffing, patient and caregiver safety, and
competing priorities. In ICUs without early mobilization adoption, 78% have
considered implementation but cite barriers including competing priorities and
need for further planning. Conclusions: Awareness of the benefits of early
mobilization has diffused, but adoption into practice is lagging. Mandates for
multidisciplinary rounds and formal sedation protocols may be necessary
strategies to increase the likelihood of successful early mobilization
implementation. Methods to accurately assess and compare institutional
performance via practice audit are needed.
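The adjusted odds ratios above come from a multivariable logistic model of protocol presence. A minimal sketch of that kind of analysis using statsmodels on synthetic survey data (all column names and prevalences below are invented for illustration, not survey values):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500  # one row per surveyed ICU (synthetic data, not the survey)
    df = pd.DataFrame({
        "em_protocol":       rng.binomial(1, 0.45, n),  # outcome
        "dedicated_pt_ot":   rng.binomial(1, 0.34, n),
        "sedation_protocol": rng.binomial(1, 0.60, n),
        "daily_rounds":      rng.binomial(1, 0.70, n),
    })

    X = sm.add_constant(df[["dedicated_pt_ot", "sedation_protocol", "daily_rounds"]])
    fit = sm.Logit(df["em_protocol"], X).fit(disp=False)

    # Odds ratios and 95% CIs are the exponentiated coefficients and bounds
    ors = np.exp(fit.params).rename("OR")
    ci = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
    print(pd.concat([ors, ci], axis=1))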
Diagnosis and management of inhalation injury: an updated review
Critical Care 2015, 19:351
Walker, PF. et al.
In
this article we review recent advances made in the pathophysiology, diagnosis,
and treatment of inhalation injury. Historically, the diagnosis of inhalation
injury has relied on nonspecific clinical exam findings and bronchoscopic
evidence. The development of a grading system and the use of modalities such as
chest computed tomography may allow for a more nuanced evaluation of inhalation
injury and enhanced ability to prognosticate. Supportive respiratory care
remains essential in managing inhalation injury. Adjuncts still lacking
definitive evidence of efficacy include bronchodilators, mucolytic agents, inhaled
anticoagulants, nonconventional ventilator modes, prone positioning, and
extracorporeal membrane oxygenation. Recent research focusing on molecular
mechanisms involved in inhalation injury has increased the number of potential
therapies.
Aspirin as a potential treatment in sepsis or acute respiratory distress syndrome
Critical Care 2015, 19:374
Toner, P. et al.
Sepsis
is a common condition that is associated with significant morbidity, mortality
and health-care cost. Pulmonary and non-pulmonary sepsis are common causes of
the acute respiratory distress syndrome (ARDS). The mortality from ARDS remains
high despite protective lung ventilation, and currently there are no specific
pharmacotherapies to treat sepsis or ARDS. Sepsis and ARDS are characterised by
activation of the inflammatory cascade. Although there is much focus on the
study of the dysregulated inflammation and its suppression, the associated
activation of the haemostatic system has been largely ignored until recently.
There has been extensive interest in the role that platelet activation can have
in the inflammatory response through induction, aggregation and activation of
leucocytes and other platelets. Aspirin can modulate multiple pathogenic
mechanisms implicated in the development of multiple organ dysfunction in
sepsis and ARDS. This review will discuss the role of the platelet, the
mechanisms of action of aspirin in sepsis and ARDS, and aspirin as a potential
therapy in treating sepsis and ARDS.
Fluid balance and mortality in critically ill patients with acute kidney injury: a multicenter prospective epidemiological study
Critical Care 2015, 19:371
Wang, N. et al.
Introduction: Early
and aggressive volume resuscitation is fundamental in the treatment of
hemodynamic instability in critically ill patients and improves patient
survival. However, one important consequence of fluid administration is the
risk of developing fluid overload (FO), which is associated with increased
mortality in patients with acute kidney injury (AKI). We evaluated the impact
of fluid balance on mortality in intensive care unit (ICU) patients with AKI.
Methods: The data were extracted from the Beijing Acute Kidney Injury Trial.
This trial was a prospective, observational, multicenter study conducted in 30
ICUs among 28 tertiary hospitals in Beijing, China, from 1 March to 31 August
2012. In total, 3107 patients were admitted consecutively, and 2526 patients
were included in this study. The data from the first 3 sequential days were
analyzed. The AKI severity was classified according to the Kidney Disease:
Improving Global Outcomes guidelines. The daily fluid balance was recorded, and
the cumulative fluid balance was registered at 24, 48, and 72 h. A multivariate
analysis was performed with Cox regression to determine the impact of fluid
balance on mortality in patients with AKI.
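A minimal sketch of the kind of Cox analysis described, using the lifelines library with invented column names and toy values (the study's actual covariate set is larger):

    import pandas as pd
    from lifelines import CoxPHFitter

    # One row per AKI patient: 28-day follow-up time, death indicator, and
    # cumulative 72-h fluid balance in litres (all values invented).
    df = pd.DataFrame({
        "followup_days":       [28, 12, 28, 5, 28, 17],
        "died":                [0, 1, 0, 1, 0, 1],
        "cum_fluid_balance_l": [0.9, 3.2, -0.5, 5.0, 1.1, 2.8],
        "age":                 [61, 72, 55, 80, 67, 74],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="followup_days", event_col="died")
    cph.print_summary()  # hazard ratio per covariate = exp(coef)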
Results: Among the 2526 patients
included, 1172 developed AKI during the first 3 days. The mortality was 25.7 %
in the AKI group and 10.1 % in the non-AKI group (P < 0.001). The daily
fluid balance was higher, and the cumulative fluid balance was significantly
greater, in the AKI group than in the non-AKI group. FO was an independent risk
factor for the incidence of AKI (odds ratio 4.508, 95 % confidence interval
2.900 to 7.008, P < 0.001) and increased the severity of AKI. Non-surviving
patients with AKI had higher cumulative fluid balance during the first 3 days
(2.77 [0.86–5.01] L versus 0.93 [−0.80 to 2.93] L, P < 0.001) than survivors
did. Multivariate analysis revealed that the cumulative fluid balance during
the first 3 days was an independent risk factor for 28-day mortality.
Conclusions: In this multicenter ICU study, the fluid balance was greater in
patients with AKI than in patients without AKI. FO was an independent risk
factor for the incidence of AKI and increased the severity of AKI. A higher
cumulative fluid balance was an important factor associated with 28-day
mortality following AKI.
Ventilator-derived carbon dioxide production to assess energy expenditure in critically ill patients: proof of concept
Critical Care 2015, 19:370
Stapel, SN. et al.
Introduction: Measurement
of energy expenditure (EE) is recommended to guide nutrition in critically ill
patients. Availability of indirect calorimetry, the gold standard, is limited, and
continuous measurement is unfeasible. Equations used to predict EE are
inaccurate. The purpose of this study was to provide proof of concept that EE
can be accurately assessed on the basis of ventilator-derived carbon dioxide
production (VCO2) and to determine whether this method is more accurate than
frequently used predictive equations.
Methods: In 84 mechanically ventilated
critically ill patients, we performed 24-h indirect calorimetry to obtain a
gold standard EE. Simultaneously, we collected 24-h ventilator-derived VCO2,
extracted the respiratory quotient of the administered nutrition, and
calculated EE with a rewritten Weir formula. Bias, precision, and accuracy and
inaccuracy rates were determined and compared with four predictive equations:
the Harris–Benedict, Faisy, and Penn State University equations and the
European Society for Clinical Nutrition and Metabolism (ESPEN) guideline
equation of 25 kcal/kg/day.
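The "rewritten Weir formula" is presumably the classic abbreviated Weir equation with VO2 eliminated by substituting VO2 = VCO2/RQ, the RQ being taken from the administered nutrition; a sketch under that assumption (the paper's exact constants and RQ handling may differ):

    def ee_from_vco2(vco2_ml_min: float, rq: float) -> float:
        """Energy expenditure (kcal/day) from CO2 production alone.

        Classic Weir: EE = 1.44 * (3.941 * VO2 + 1.106 * VCO2), gases in mL/min.
        Substituting VO2 = VCO2 / RQ removes the need to measure VO2; RQ is
        taken here from the composition of the administered nutrition.
        """
        return 1.44 * vco2_ml_min * (3.941 / rq + 1.106)

    # Example: VCO2 of 200 mL/min and a nutrition RQ of 0.86
    print(round(ee_from_vco2(200, 0.86)))  # ~1638 kcal/day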
Results: Mean 24-h indirect calorimetry EE was
1823 ± 408 kcal. EE from ventilator-derived VCO2 was accurate (bias +141 ± 153
kcal/24 h; 7.7 % of the gold standard) and more precise than the predictive
equations (limits of agreement −166 to +447 kcal/24 h). The 10 % and 15 %
accuracy rates were 61 % and 76 %, respectively, which were significantly
higher than those of the Harris–Benedict, Faisy, and ESPEN guideline equations.
Large errors of more than 30 % did not occur with EE derived from
ventilator-derived VCO2. This 30 % inaccuracy rate was significantly lower
than that of the predictive equations.
Conclusions: In critically ill
mechanically ventilated patients, assessment of EE based on ventilator-derived
VCO2 is accurate and more precise than frequently used predictive equations.
It allows for continuous monitoring and is the best alternative to indirect
calorimetry.
Arterial Catheter Use in the ICU: A National Survey of Antiseptic Technique and Perceived Infectious Risk
Critical Care Medicine: November 2015 - Volume 43 - Issue 11 - p 2346–2353
Cohen, David M. et al.
Objectives:
Recent studies have shown that the occurrence rate of bloodstream infections
associated with arterial catheters is 0.9–3.4/1,000 catheter-days, which is
comparable to that of central venous catheters. In 2011, the Centers for
Disease Control and Prevention published new guidelines recommending the use of
limited barrier precautions during arterial catheter insertion, consisting of
sterile gloves, a surgical cap, a surgical mask, and a small sterile drape. The
goal of this study was to assess the attitudes and current infection prevention
practices used by clinicians during insertion of arterial catheters in ICUs in
the United States. Design: An anonymous, 22-question web-based survey of
infection prevention practices during arterial catheter insertion. Setting:
Clinician members of the Society of Critical Care Medicine. Subjects: Eleven
thousand three hundred sixty-one physicians, nurse practitioners, physician
assistants, respiratory therapists, and registered nurses who elect to receive
e-mails from the Society of Critical Care Medicine. Interventions: None.
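The rates quoted per 1,000 catheter-days are an incidence density, i.e. events divided by total device exposure time; a one-function sketch with hypothetical counts:

    def bsi_per_1000_catheter_days(infections: int, catheter_days: int) -> float:
        """Incidence density of bloodstream infections per 1,000 catheter-days."""
        return 1000 * infections / catheter_days

    # Hypothetical unit: 3 infections over 1,500 catheter-days
    print(bsi_per_1000_catheter_days(3, 1500))  # 2.0 per 1,000 catheter-days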
Measurements and Main Results: There were 1,265 responses (11% response rate),
with 1,029 eligible participants after exclusions were applied. Only 44% of
participants reported using the Centers for Disease Control and
Prevention–recommended barrier precautions during arterial catheter insertion,
and only 15% reported using full barrier precautions. The mean and median
estimates of the incidence density of bloodstream infections associated with
arterial catheters were 0.3/1,000 catheter-days and 0.1/1,000 catheter-days,
respectively. Thirty-nine percent of participants reported that they would
support mandatory use of full barrier precautions during arterial catheter
insertion. Conclusions: Barrier precautions are used inconsistently by critical
care clinicians during arterial catheter insertion in the ICU setting. Less
than half of clinicians surveyed were in compliance with current Centers for
Disease Control and Prevention guidelines. Clinicians significantly
underestimated the infectious risk posed by arterial catheters, and support for
mandatory use of full barrier precautions was low. Further studies are
warranted to determine the optimal preventive strategies for reducing
bloodstream infections associated with arterial catheters.
Ten Myths and Misconceptions Regarding Pain Management in the ICU
Critical Care Medicine: November 2015 - Volume 43 - Issue 11 - p 2468–2478
Sigakis, Matthew J. G. et al.
Objectives:
The aim of this article is to expose common myths and misconceptions regarding
pain assessment and management in critically ill patients that interfere with
effective care. We comprehensively review the literature refuting these myths
and misconceptions and describe evidence-based strategies for improving pain
management in the ICU. Data Sources: Current peer-reviewed academic journals,
as well as standards and guidelines from professional societies. Study
Selection: The most current evidence was selected for review based on the highest
degree of supportive evidence. Data Extraction: Data were obtained via medical
search databases, including OvidSP, and the National Library of Medicine’s
MEDLINE database via PubMed. Data Synthesis: After a comprehensive literature
review, conclusions were drawn based on the strength of evidence and the most
current understanding of pain management practices in the ICU. Conclusions: Myths
and misconceptions regarding management of pain in the ICU are prevalent.
Review of current evidence refutes these myths and misconceptions and provides
insights and recommendations to ensure best practices.
Rapid Diagnosis of Infection in the Critically Ill, a Multicenter Study of Molecular Detection in Bloodstream Infections, Pneumonia, and Sterile Site Infections
Critical Care Medicine: November 2015 - Volume 43 - Issue 11 - p 2283–2291
Vincent, Jean-Louis et al.
Objective:
Early identification of causative microorganism(s) in patients with severe
infection is crucial to optimize antimicrobial use and patient survival.
However, current culture-based pathogen identification is slow and unreliable
such that broad-spectrum antibiotics are often used to ensure coverage of all
potential organisms, carrying risks of overtreatment, toxicity, and selection
of multidrug-resistant bacteria. We compared the results obtained using a
novel, culture-independent polymerase chain reaction/electrospray
ionization-mass spectrometry technology with those obtained by standard
microbiological testing and evaluated the potential clinical implications of
this technique.
Design: Observational study. Setting: Nine ICUs in six European
countries. Patients: Patients admitted between October 2013 and June 2014 with
suspected or proven bloodstream infection, pneumonia, or sterile fluid and
tissue infection were considered for inclusion. Interventions: None.
Measurements and Main Results: We tested 616 bloodstream infection, 185
pneumonia, and 110 sterile fluid and tissue specimens from 529 patients. From
the 616 bloodstream infection samples, polymerase chain reaction/electrospray
ionization-mass spectrometry identified a pathogen in 228 cases (37%), whereas
culture identified one in just 68 (11%). Culture was positive and polymerase chain
reaction/electrospray ionization-mass spectrometry negative in 13 cases, and
both were negative in 384 cases, giving polymerase chain reaction/electrospray
ionization-mass spectrometry a sensitivity of 81%, specificity of 69%, and
negative predictive value of 97% at 6 hours from sample acquisition. The
distribution of organisms was similar with both techniques. Similar
observations were made for pneumonia and sterile fluid and tissue specimens.
Independent clinical analysis of results suggested that polymerase chain
reaction/electrospray ionization-mass spectrometry technology could potentially
have resulted in altered treatment in up to 57% of patients.
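The reported operating characteristics follow from a 2x2 table against culture as the reference standard. Reconstructing it from the counts above is approximate: the cells derived this way total 625 rather than 616, so a few samples were presumably indeterminate or excluded.

    # Approximate 2x2 table for the bloodstream infection samples,
    # reconstructed from the abstract's counts (culture = reference standard).
    culture_pos = 68
    pcr_neg_culture_pos = 13
    both_neg = 384
    pcr_pos_total = 228

    tp = culture_pos - pcr_neg_culture_pos      # 55 PCR+ among culture+
    fp = pcr_pos_total - tp                     # 173 PCR+ with negative culture

    sensitivity = tp / culture_pos                     # 55/68  ~ 81%
    specificity = both_neg / (both_neg + fp)           # 384/557 ~ 69%
    npv = both_neg / (both_neg + pcr_neg_culture_pos)  # 384/397 ~ 97%
    print(f"sens={sensitivity:.0%} spec={specificity:.0%} npv={npv:.0%}")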
Conclusions:
Polymerase chain reaction/electrospray ionization-mass spectrometry provides
rapid pathogen identification in critically ill patients. The ability to rule
out infection within 6 hours has potential clinical and economic benefits.
Double-Blind Prospective Randomized Controlled Trial of Dopamine Versus Epinephrine as First-Line Vasoactive Drugs in Pediatric Septic Shock
Critical Care Medicine: November 2015 - Volume 43 - Issue 11 - p 2292–2302
Ventura, Andréa M. C. et al.
Objectives: The primary objective was to compare the effects of dopamine
and epinephrine on 28-day mortality in severe sepsis; secondary outcomes were
the rate of healthcare–associated infection, the need for other vasoactive
drugs, and the multiple organ dysfunction score. Design: Double-blind,
prospective, randomized controlled trial from February 1, 2009, to July 31,
2013. Setting: PICU, Hospital Universitário da Universidade de São Paulo,
Brazil. Patients: Consecutive children who were 1 month to 15 years old and met
the clinical criteria for fluid-refractory septic shock. Exclusions were
receiving vasoactive drug(s) prior to hospital admission, having known cardiac
disease, having already participated in the trial during the same hospital
stay, refusing to participate, or having do-not-resuscitate orders.
Interventions: Patients were randomly assigned to receive either dopamine (5–10
μg/kg/min) or epinephrine (0.1–0.3 μg/kg/min) through a peripheral or
intraosseous line. Patients not reaching predefined stabilization criteria
after the maximum dose were classified as treatment failure, at which point the
attending physician gradually stopped the study drug and started another
catecholamine.
Measurements and Main Results: Physiologic and laboratory data were
recorded. Baseline characteristics were described as proportions and mean (±
SD) and compared using appropriate statistical tests. Multiple regression
analysis was performed, and statistical significance was defined as a p value
of less than 0.05. Baseline characteristics and therapeutic interventions for
the 120 children enrolled (63, dopamine; 57, epinephrine) were similar. There
were 17 deaths (14.2%): 13 (20.6%) in the dopamine group and four (7%) in the
epinephrine group (p = 0.033). Dopamine was associated with death (odds ratio,
6.5; 95% CI, 1.1–37.8; p = 0.037) and healthcare–associated infection (odds
ratio, 67.7; 95% CI, 5.0–910.8; p = 0.001). The use of epinephrine was
associated with a survival odds ratio of 6.49.
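As a check on the direction of these estimates, the unadjusted odds ratio can be computed directly from the reported counts; note that the 6.5 and 6.49 figures in the abstract are adjusted estimates from the multiple regression, so they differ from this raw arithmetic.

    # Raw 2x2 table from the reported counts (illustrative arithmetic only)
    dopamine_deaths, dopamine_n = 13, 63
    epi_deaths, epi_n = 4, 57

    dopamine_surv = dopamine_n - dopamine_deaths  # 50
    epi_surv = epi_n - epi_deaths                 # 53

    # Unadjusted odds ratio of death, dopamine vs epinephrine
    or_death = (dopamine_deaths / dopamine_surv) / (epi_deaths / epi_surv)
    print(round(or_death, 2))  # ~3.45 unadjusted, vs 6.5 after adjustment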
Conclusions: Dopamine was associated
with an increased risk of death and healthcare–associated infection. Early
administration of peripheral or intraosseous epinephrine was associated with
increased survival in this population. These results should be interpreted in
light of the study's limitations.
Rehospitalizations Following Sepsis: Common and Costly
Critical Care Medicine: October 2015 - Volume 43 - Issue 10 - p 2085–2093
Chang, Dong W. et al.
Objective: Although recent studies have shown that 30-day readmissions
following sepsis are common, the overall fiscal impact of these
rehospitalizations and their variability between hospitals relative to other
high-risk conditions, such as congestive heart failure and acute myocardial
infarction, are unknown. The objectives of this study were to characterize the
frequency, cost, patient-level risk factors, and hospital-level variation in
30-day readmissions following sepsis compared with congestive heart failure and
acute myocardial infarction.
Design: A retrospective cohort analysis of
hospitalizations from 2009 to 2011. Setting: All acute care, nonfederal
hospitals in California. Patients: Hospitalizations for sepsis (n = 240,198),
congestive heart failure (n = 193,153), and acute myocardial infarction (n =
105,684) identified by administrative discharge codes. Interventions: None.
Measurements and Main Results: The primary outcomes were the frequency and cost
of all-cause 30-day readmissions following hospitalization for sepsis compared
with congestive heart failure and acute myocardial infarction. Variability in
predicted readmission rates between hospitals was calculated using
mixed-effects logistic regression analysis. The all-cause 30-day readmission
rates were 20.4%, 23.6%, and 17.7% for sepsis, congestive heart failure, and
acute myocardial infarction, respectively. The estimated annual costs of 30-day
readmissions in the state of California during the study period were $500
million/yr for sepsis, $229 million/yr for congestive heart failure, and $142
million/yr for acute myocardial infarction. The risk- and reliability-adjusted
readmission rates across hospitals ranged from 11.0% to 39.8% (median, 19.9%;
interquartile range, 16.1–26.0%) for sepsis, 11.3% to 38.4% (median, 22.9%;
interquartile range, 19.2–26.6%) for congestive heart failure, and 3.6% to
40.8% (median, 17.0%; interquartile range, 12.2–20.0%) for acute myocardial
infarction. Patient-level factors associated with higher odds of 30-day
readmission following sepsis included younger age, male gender, Black or Native
American race, a higher burden of medical comorbidities, urban residence, and
lower income.
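"Risk- and reliability-adjusted" rates shrink noisy estimates from low-volume hospitals toward the overall mean. The study used mixed-effects logistic regression; the sketch below shows only the reliability-weighting idea on synthetic counts, not the paper's model (which additionally adjusts for patient-level risk).

    import numpy as np

    # Synthetic hospital-level counts (not study data)
    n = np.array([50, 400, 1200])        # sepsis discharges per hospital
    readmits = np.array([15, 90, 250])   # 30-day readmissions
    rate = readmits / n
    overall = readmits.sum() / n.sum()

    tau2 = rate.var(ddof=1)               # between-hospital variance
    within = overall * (1 - overall) / n  # sampling variance per hospital
    reliability = tau2 / (tau2 + within)  # weight on the hospital's own rate

    adjusted = reliability * rate + (1 - reliability) * overall
    print(np.round(adjusted, 3))  # small hospitals shrink most toward the mean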
Conclusion: Sepsis is a leading contributor to excess healthcare
costs due to hospital readmissions. Interventions at clinical and policy levels
should prioritize identifying effective strategies to reduce sepsis
readmissions.
Variations in the Operational Process of Withdrawal of Life-Sustaining Therapy
Critical Care Medicine: October 2015 - Volume 43 - Issue 10 - p e450–e457
van Beinum, A. et al.
Objective: The process of withdrawal of life-sustaining therapy remains
poorly described in the current literature despite its importance for patient
comfort and optimal end-of-life care. We conducted a structured review of the
published literature to summarize patterns of withdrawal of life-sustaining
therapy processes in adult ICUs. Data Sources: Electronic journal databases
were searched from date of first issue until April 2014. Study Selection:
Original research articles describing processes of life-support therapy
withdrawal in North American, European, and Australian ICUs were included. Data
Extraction: From each article, we extracted definitions of withdrawal of
life-sustaining therapy, descriptions and order of interventions withdrawn,
drugs administered, and timing from withdrawal of life-sustaining therapy until
death. Data Synthesis: Fifteen articles met inclusion criteria. Definitions of
withdrawal of life-sustaining therapy varied and focused on withdrawal of
mechanical ventilation; two studies did not present operational definitions.
All studies described different aspects of the process of life-support therapy
withdrawal and measured different time periods prior to death. Staggered
patterns of withdrawal of life-support therapy were reported in all studies
describing order of interventions withdrawn, with vasoactive drugs withdrawn
first followed by gradual withdrawal of mechanical ventilation. Processes of
withdrawal of life-sustaining therapy did not seem to influence time to death.
Conclusions: Further, more structured description of the operational processes of
life-sustaining therapy withdrawal, with standardized definitions and regular
inclusion of measures of patient comfort and family satisfaction with care, is
needed to identify which patterns and processes are associated with the
greatest perceived patient comfort and family satisfaction.
Validation and analysis of prognostic scoring systems for critically ill patients with cirrhosis admitted to ICU
Critical Care 2015, 19:364
Campbell, J. et al.
Introduction: The number of patients admitted to ICU who have liver
cirrhosis is rising. Current prognostic scoring tools to predict ICU mortality
have performed poorly in this group. In previous research from a single centre,
a novel scoring tool that modifies the Child-Turcotte-Pugh score by adding
lactate concentration, the CTP + L score, was strongly associated with
mortality. This study aims to validate the CTP + L scoring tool for
predicting ICU mortality in patients admitted to a general ICU with cirrhosis,
and to determine significant predictive factors for mortality in this group
of patients. This study will also explore the use of the Royal Free Hospital
(RFH) score in this cohort.
Methods: A total of 84 patients admitted to the Glasgow Royal Infirmary ICU
between June 2012 and Dec 2013 with cirrhosis were included. An additional
cohort of 115 patients was obtained from two ICUs in London (St George’s and St
Thomas’) collected between October 2007 and July 2009. Liver specific and general
ICU scoring tools were calculated for both cohorts, and compared using area
under the receiver operating characteristic (ROC) curves. Independent
predictors of ICU mortality were identified by univariate analysis.
Multivariate analysis was utilised to determine the most predictive factors
affecting mortality within these patient groups.
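Comparison by area under the ROC curve can be reproduced mechanically; a minimal sketch with scikit-learn on invented score values (higher score = higher predicted risk; these are not cohort data):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Invented data: ICU death indicator and two risk scores per patient
    died  = np.array([1, 0, 1, 0, 0, 1, 0, 1])
    ctp_l = np.array([12, 7, 14, 8, 6, 11, 9, 13])              # hypothetical CTP + L
    rfh   = np.array([5.1, 2.0, 6.3, 2.8, 1.9, 4.4, 3.0, 5.8])  # hypothetical RFH

    # The tool with the larger AUC discriminates non-survivors better
    print("CTP+L AUC:", roc_auc_score(died, ctp_l))
    print("RFH AUC:  ", roc_auc_score(died, rfh))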
Results: Within the Glasgow cohort, independent predictors of ICU mortality
were identified as lactate (p < 0.001), bilirubin (p = 0.0048), PaO2/FiO2
ratio (p = 0.032) and PT ratio (p = 0.012). Within the London cohort,
independent predictors of ICU mortality were lactate (p < 0.001), PT ratio
(p < 0.001), bilirubin (p = 0.027), PaO2/FiO2 ratio (p = 0.0011) and
ascites (p = 0.023). The CTP + L and RFH scoring tools had the largest area
under the ROC curve in both cohorts examined.
Conclusion: The CTP + L and RFH scores are validated prognostic scoring
tools for predicting ICU mortality in patients with cirrhosis admitted to a
general ICU.