The effect of mild induced hypothermia on outcomes of patients after cardiac arrest: a systematic review and meta-analysis of randomised controlled trials
Zhang XW et al.
Critical Care 2015, 19:417
Introduction: Mild
induced hypothermia (MIH) is believed to reduce mortality and neurological
impairment after out-of-hospital cardiac arrest. However, a recently published
trial demonstrated that hypothermia at 33 °C did not confer a benefit compared
with 36 °C. We therefore conducted a systematic review and meta-analysis of randomised
controlled trials (RCTs) to investigate the impact of MIH, compared with
controls, on the outcomes of adult patients after cardiac arrest. Methods: We
searched the following electronic databases: PubMed/MEDLINE, the Cochrane
Library, Embase, the Web of Science, and Elsevier Science (inception to
December 2014). RCTs that compared MIH with controls with temperature >34 °C
in adult patients after cardiac arrest were retrieved. Two investigators independently
selected RCTs and completed an assessment of the quality of the studies. Data
were analysed by the methods recommended by the Cochrane Collaboration. Random
errors were evaluated with trial sequential analysis.
Results: Six RCTs, including one abstract, were included. The meta-analysis of
included trials revealed that MIH did not significantly decrease the mortality
at hospital discharge (risk ratio (RR) = 0.92; 95 % confidence interval (CI),
0.82–1.04; p = 0.17) or at 6 months or 180 days (RR = 0.94; 95 % CI, 0.73–1.21;
p = 0.64), but it did reduce the mortality of patients with shockable rhythms
at hospital discharge (RR = 0.74; 95 % CI, 0.59–0.92; p = 0.008) and at 6
months or 180 days. MIH also improved neurological outcomes
at hospital discharge (RR = 0.80; 95 % CI, 0.64–0.98; p = 0.04), especially in
patients with shockable rhythms, but not at 6 months or 180 days. Moreover,
the incidence of complications in the MIH group was significantly higher than
that in the control group. Finally, trial sequential analysis indicated lack of
firm evidence for a beneficial effect.
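The pooled estimates above are reported as risk ratios with 95 % confidence intervals. As a minimal illustration of how such a figure is obtained for a single 2 × 2 table (a sketch only, not the authors' analysis code; the function name and the input counts are hypothetical, not taken from the trials):

```python
from math import exp, log, sqrt

def risk_ratio(events_tx, n_tx, events_ctrl, n_ctrl):
    """Risk ratio of treatment vs control with a 95% Wald CI on the log scale."""
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    # Standard error of log(RR) for a single 2x2 table
    se = sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctrl - 1 / n_ctrl)
    lower = exp(log(rr) - 1.96 * se)
    upper = exp(log(rr) + 1.96 * se)
    return rr, lower, upper

# Illustrative numbers only: 50/100 events vs 60/100 events
rr, lo, hi = risk_ratio(50, 100, 60, 100)  # rr ~ 0.83, CI roughly 0.65 to 1.07
```

A meta-analysis then pools such log risk ratios across trials with fixed- or random-effects weights, which is where the I² heterogeneity statistics reported in later abstracts come from.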
Conclusion: The available RCTs suggest that MIH does not appear to improve the
mortality of patients with cardiac arrest while it may have a beneficial effect
for patients with shockable rhythms. Although MIH may cause some adverse
events, it was associated with better neurological function at
hospital discharge. Large-scale ongoing trials may provide data better applicable
to clinical practice.
The experiences of nurses implementing the Modified Early Warning Score and a 24-hour on-call Mobile Intensive Care Nurse: An exploratory study
Stafseth KS et al.
Article in Press
Aims
and objectives: To explore experiences of nurses implementing and using the
Modified Early Warning Score (MEWS) and a Mobile Intensive Care Nurse (MICN)
providing 24-hour on-call nursing support.
Background: To secure patient safety in hospital wards, nurses may improve the
quality of care by using a tool that detects failing vital functions.
Possibilities for support can be provided through on-call supervision from a
qualified team or nurse.
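For context, the MEWS aggregates routine vital-sign observations into a single number, with higher scores flagging deterioration. Threshold bands differ between published MEWS versions, so the sketch below uses one widely cited variant purely for illustration; it should not be read as the scoring used on the study wards:

```python
def mews(sbp, hr, resp_rate, temp_c, avpu):
    """Illustrative MEWS calculation. Thresholds follow one published
    variant and may differ from the version used at the study site."""
    score = 0
    # Systolic blood pressure (mmHg)
    if sbp <= 70: score += 3
    elif sbp <= 80: score += 2
    elif sbp <= 100: score += 1
    elif sbp >= 200: score += 2
    # Heart rate (beats/min)
    if hr < 40: score += 2
    elif 41 <= hr <= 50: score += 1
    elif 101 <= hr <= 110: score += 1
    elif 111 <= hr <= 129: score += 2
    elif hr >= 130: score += 3
    # Respiratory rate (breaths/min)
    if resp_rate < 9: score += 2
    elif 15 <= resp_rate <= 20: score += 1
    elif 21 <= resp_rate <= 29: score += 2
    elif resp_rate >= 30: score += 3
    # Temperature (deg C)
    if temp_c < 35 or temp_c >= 38.5: score += 2
    # Level of consciousness (AVPU scale)
    score += {"A": 0, "V": 1, "P": 2, "U": 3}[avpu]
    return score

normal = mews(120, 80, 14, 37.0, "A")       # unremarkable observations
deteriorating = mews(85, 125, 25, 38.6, "V")  # triggers escalation
```

Wards typically define a trigger threshold (often a total score of 4 or 5, or any single parameter scoring 3) above which support such as the MICN described here is called.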
Design: This exploratory qualitative investigation used focus group interviews
with nurses from two wards of a university hospital in Norway.
Methods: A purposive sample of seven registered nurses was interviewed in focus
groups. A semi-structured guide and an inductive thematic analysis were used to
identify interview themes.
Results: Three themes emerged: (1) experiences with the early recognition of
deterioration using the MEWS, (2) supportive collaboration and knowledge
transfer between nurses and (3) a “new” precise language using the score for
communicating with physicians. The use of scores and support were perceived as
improving care for deteriorating patients and for supporting the collaboration
of nurses with other professionals.
Conclusion: In our study, nurses described increased confidence in the
recognition of deteriorating patients and in the management of such situations.
The non-critical attitude, supportive communication and interactive learning
provided by the MICN were essential elements for success.
A Multicenter Evaluation of Prolonged Empiric Antibiotic Therapy in Adult ICUs in the United States
Thomas Z et al.
Critical Care Medicine: December 2015 - Volume 43 - Issue 12 - p 2527–2534
Objective:
The purpose of this study is to determine the rate of prolonged empiric
antibiotic therapy in adult ICUs in the United States. Our secondary objective
is to examine the relationship between the prolonged empiric antibiotic therapy
rate and certain ICU characteristics. Design: Multicenter, prospective,
observational, 72-hour snapshot study. Setting: Sixty-seven ICUs from 32
hospitals in the United States. Patients: Nine hundred ninety-eight patients
admitted to the ICU between midnight on June 20, 2011, and June 21, 2011, were
included in the study. Intervention: None.
Measurements and Main Results: Antibiotic orders were categorized as
prophylactic, definitive, empiric, or prolonged empiric antibiotic therapy.
Prolonged empiric antibiotic therapy was defined as empiric antibiotics that
continued for at least 72 hours in the absence of adjudicated infection. Standard
definitions from the Centers for Disease Control and Prevention were used to
determine infection. The prolonged empiric antibiotic therapy rate was calculated
as the number of empiric antibiotics continued for at least
72 hours divided by the total number of empiric antibiotics. Univariate
analysis of factors associated with the ICU prolonged empiric antibiotic
therapy rate was conducted using the Student t test. A total of 660 unique
antibiotics were prescribed as empiric therapy to 364 patients. Of the empiric
antibiotics, 333 of 660 (50%) were continued for at least 72 hours in instances
where Centers for Disease Control and Prevention infection criteria were not
met. Suspected pneumonia accounted for approximately 60% of empiric antibiotic
use. The most frequently prescribed empiric antibiotics were vancomycin and
piperacillin/tazobactam. ICUs that utilized invasive techniques for the
diagnosis of ventilator-associated pneumonia had lower rates of prolonged
empiric antibiotic therapy than those that did not, 45.1% versus 59.5% (p =
0.03). No other institutional factor was significantly associated with
prolonged empiric antibiotic therapy rate.
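The rate definition used in the study is straightforward arithmetic; as a sketch (the function name is hypothetical), the abstract's own counts reproduce the reported figure of roughly 50 %:

```python
def peat_rate(continued_72h_without_infection, total_empiric):
    """Prolonged empiric antibiotic therapy rate, per the study definition:
    empiric antibiotics continued for at least 72 h without adjudicated
    infection, divided by all empiric antibiotics."""
    return continued_72h_without_infection / total_empiric

# Counts reported in the abstract: 333 of 660 empiric antibiotics
rate = peat_rate(333, 660)  # about 0.50
```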
Conclusions: Half of all empiric
antibiotics ordered in critically ill patients are continued for at least 72
hours in the absence of adjudicated infection. Additional studies are needed to
confirm these findings and determine the risks and benefits of prolonged
empiric therapy in the critically ill.
Nutritional Status and Mortality in the Critically Ill
Mogensen K et al.
Critical Care Medicine: December 2015 - Volume 43 - Issue 12
Objectives:
The association between nutritional status and mortality in critically ill
patients is unclear based on the current literature. To clarify this relation,
we analyzed the association between nutrition and mortality in a large
population of critically ill patients and hypothesized that mortality would be
impacted by nutritional status. Design: Retrospective observational study.
Setting: Single academic medical center. Patients: Six thousand five hundred
eighteen adults treated in medical and surgical ICUs between 2004 and 2011.
Interventions: None. Measurements and Main Results: All cohort patients
received a formal, in-person, standardized evaluation by a registered
dietitian. The exposure of interest, malnutrition, was categorized as nonspecific
malnutrition, protein-energy malnutrition, or well nourished and determined by
data related to anthropometric measurements, biochemical indicators, clinical
signs of malnutrition, malnutrition risk factors, and metabolic stress. The
primary outcome was all-cause 30-day mortality determined by the Social
Security Death Master File. Associations between nutrition groups and mortality
were estimated by bivariable and multivariable logistic regression models.
Adjusted odds ratios were estimated with inclusion of covariate terms thought
to plausibly interact with both nutrition status and mortality. We used
propensity score matching on baseline characteristics to reduce residual
confounding of the nutrition status category assignment. In the cohort, nonspecific
malnutrition was present in 56%, protein-energy malnutrition was present in
12%, and 32% were well nourished. The 30-day and 90-day mortality rates for the
cohort were 19.1% and 26.6%, respectively. Nutritional status was a significant
predictor of 30-day mortality following adjustment for age, gender, race,
medical versus surgical patient type, Deyo-Charlson index, acute organ failure,
vasopressor use, and sepsis: nonspecific malnutrition 30-day mortality odds
ratio, 1.17 (95% CI, 1.01–1.37); protein-energy malnutrition 30-day mortality
odds ratio, 2.10 (95% CI, 1.70–2.59), all relative to patients without
malnutrition. In the matched cohort, the adjusted odds of 30-day mortality among
propensity score-matched patients with protein-energy malnutrition
were twofold greater than those of patients without malnutrition. Conclusion: In
a large population of critically ill adults, an association exists between
nutrition status and mortality.
What is the right temperature to cool post-cardiac arrest patients?
Chandrasekaran PN et al.
Critical Care 2015, 19:406
Background:
Brain ischemia and reperfusion injury leading to tissue degeneration and loss
of neurological function following return of spontaneous circulation after
cardiac arrest (CA) is a well-known entity. Two landmark trials in 2002 showed
improved survival and neurological outcome of comatose survivors of
out-of-hospital cardiac arrest (OHCA) of presumed cardiac origin when the
patients were subjected to therapeutic hypothermia of 32 to 34 °C for 12 to 24
hours. However, the optimal target temperature for these cohorts is yet to be
established and also it is not clear whether strict fever management and
maintaining near normal body temperature are alone sufficient to improve the
outcome.
Methods: Objective: To determine whether targeting a near-normal body
temperature of 36 °C reduces all-cause mortality compared
with moderate hypothermia at 33 °C in unconscious survivors of OHCA of
presumed cardiac origin randomly assigned to one of these target
temperatures.
Design: A multicenter, international, open-label, randomized controlled trial.
Setting: Thirty-six ICUs in Europe and Australia participated in this study.
Participants: Unconscious adults (older than 18 years of age, with a Glasgow
Coma Scale score less than 8) who survived OHCA of presumed cardiac origin with
sustained return of spontaneous circulation (more than 20 minutes
without chest compressions).
Intervention: The above participant cohorts were randomized to targeted body
temperature of either 33 °C or 36 °C for 36 hours after the CA with gradual
rewarming of both groups to 37 °C (hourly increments of 0.5 °C) after the
initial 28 hours. Body temperatures in both the groups were then maintained
below 37.5 °C for 72 hours after the initial 36 hours.
Outcomes: The primary outcome was all-cause mortality in both groups at
the end of the trial; the secondary outcomes were all-cause mortality and
composite neurological function, evaluated with the Cerebral Performance
Category scale and the modified Rankin scale, at the end of 180 days.
Results: Out of the 939 participants, all-cause mortality at the end of the
trial was 50 % in the 33 °C group (235 of 473 patients) compared with 48 % in
the 36 °C group (225 of 466 patients); the hazard ratio with a temperature of
33 °C was 1.06 (95 % confidence interval (CI) 0.89 to 1.28, P = 0.51). At the
end of 180 days, 54 % of patients in the 33 °C group versus 52 % in the 36 °C
group had died or had a poor neurological outcome according to the Cerebral
Performance Category scale (risk ratio 1.02, 95 % CI 0.88 to 1.16, P = 0.78),
and the modified Rankin scale result at the end of 180 days was similar (52 %)
in both groups (risk ratio 1.01, 95 % CI 0.89 to 1.14, P = 0.87).
Conclusions: Maintaining targeted lower normothermia of 36 °C had similar
outcomes compared with induced moderate hypothermia of 33 °C for unconscious
survivors of OHCA of presumed cardiac cause.
Impact of antibacterials on subsequent resistance and clinical outcomes in adult patients with viral pneumonia: an opportunity for stewardship
Crotty MP et al.
Critical Care 2015, 19:404
Introduction:
Respiratory viruses are increasingly recognized as significant etiologies of
pneumonia among hospitalized patients. Advanced technologies using multiplex
molecular assays and polymerase chain reaction increase the ability to identify
viral pathogens and may ultimately impact antibacterial use.
Method: This was a single-center retrospective cohort study to evaluate the
impact of antibacterials in viral pneumonia on clinical outcomes and subsequent
multidrug-resistant organism (MDRO) infections/colonization. Patients admitted
from March 2013 to November 2014 with positive respiratory viral panels (RVP)
and radiographic findings of pneumonia were included. Patients transferred from
an outside hospital or no longer hospitalized 72 hours after the RVP report
date were excluded. Patients were categorized based on exposure to systemic
antibacterials: less than 3 days representing short-course therapy and 3 to 10
days being long-course therapy.
Results: A total of 174 patients (long-course, n = 67; short-course, n = 28;
mixed bacterial-viral infection, n = 79) were included; most were
immunocompromised (56.3 %), with active malignancy the primary cause (69.4
%). Rhinovirus/enterovirus (23 %), influenza (19 %), and parainfluenza (15.5 %)
were the viruses most commonly identified. A total of 13 different systemic
antibacterials were used as empiric therapy in the 95 patients with pure viral
infection for a total of 466 days-of-therapy. Vancomycin (50.7 %), cefepime
(40.3 %), azithromycin (40.3 %), meropenem (23.9 %), and linezolid (20.9 %)
were most frequently used. In-hospital mortality did not differ between patients
with viral pneumonia in the short-course and long-course groups. Subsequent
infection/colonization with a MDRO was more frequent in the long-course group
compared to the short-course group (53.2 vs 21.1 %; P = 0.027).
Conclusion: This study found that long-course antibacterial use in the setting
of viral pneumonia had no impact on clinical outcomes but increased the
incidence of subsequent MDRO infection/colonization.
Safety and Efficacy of Combined Extracorporeal CO2 Removal and Renal Replacement Therapy in Patients With Acute Respiratory Distress Syndrome and Acute Kidney Injury: The Pulmonary and Renal Support in Acute Respiratory Distress Syndrome Study
Allardet-Servent J et al.
Critical Care Medicine: December 2015 - Volume 43 - Issue 12 - p 2570–2581
Objective:
To assess the safety and efficacy of combining extracorporeal CO2 removal with
continuous renal replacement therapy in patients presenting with acute
respiratory distress syndrome and acute kidney injury. Design: Prospective
human observational study.
Settings: Patients received volume-controlled mechanical ventilation according
to the Acute Respiratory Distress Syndrome Network (ARDSNet) protocol. Continuous venovenous
hemofiltration therapy was titrated to maintain maximum blood flow and an
effluent flow of 45 mL/kg/h with 33% predilution.
Patients: Eleven patients presenting with both acute respiratory distress
syndrome and acute kidney injury required renal replacement therapy.
Interventions: A membrane oxygenator (0.65 m2) was inserted within the
hemofiltration circuit, either upstream (n = 7) or downstream (n = 5) of the
hemofilter. Baseline corresponded to tidal volume 6 mL/kg of predicted body
weight without extracorporeal CO2 removal. The primary endpoint was 20%
reduction in PaCO2 at 20 minutes after extracorporeal CO2 removal initiation.
Tidal volume was subsequently reduced to 4 mL/kg for the remaining 72 hours.
Measurements and Main Results: Twelve combined therapies were conducted in the
11 patients. Age was 70 ± 9 years, Simplified Acute Physiology Score II was 69
± 13, Sequential Organ Failure Assessment score was 14 ± 4, lung injury score
was 3 ± 0.5, and PaO2/FIO2 was 135 ± 41. Adding extracorporeal CO2 removal at
tidal volume 6 mL/kg decreased PaCO2 by 21% (95% CI, 17–25%), from 47 ± 11 to
37 ± 8 Torr (p < 0.001). Lowering tidal volume to 4 mL/kg reduced minute
ventilation from 7.8 ± 1.5 to 5.2 ± 1.1 L/min and plateau pressure from 25 ± 4
to 21 ± 3 cm H2O and raised PaCO2 from 37 ± 8 to 48 ± 10 Torr (all p <
0.001). On an average of both positions, the oxygenator’s blood flow was 410 ±
30 mL/min and the CO2 removal rate was 83 ± 20 mL/min. The oxygenator blood
flow (p < 0.001) and CO2 removal rate (p = 0.083) were higher when the membrane
oxygenator was placed upstream of the hemofilter. There were no safety concerns.
Conclusions: Combining extracorporeal CO2 removal and continuous
venovenous hemofiltration in patients with acute respiratory distress syndrome
and acute kidney injury is safe and allows efficient blood purification
together with enhanced lung-protective ventilation.
Risks and benefits of stress ulcer prophylaxis in adult neurocritical care patients: a systematic review and meta-analysis of randomized controlled trials
Liu B et al.
Critical Care 2015, 19:409
Introduction:
Neurocritical care patients are at high risk for stress-related upper
gastrointestinal (UGI) bleeding. The aim of this meta-analysis was to evaluate
the risks and benefits of stress ulcer prophylaxis (SUP) in this patient group.
Methods: A systematic search of major electronic literature databases was
conducted. Eligible studies were randomized controlled trials (RCTs) in which
researchers compared the effects of SUP (with proton pump inhibitors or
histamine 2 receptor antagonists) with placebo or no prophylaxis in neurocritical
care patients. The primary outcome was UGI bleeding, and secondary outcomes
were all-cause mortality and nosocomial pneumonia. Study heterogeneity was
sought and quantified. The results were reported as risk ratios/relative risks
(RRs) with 95 % confidence intervals (CIs).
Results: We included 8 RCTs comprising an aggregate of 829 neurocritical care
patients. One of these trials, conducted in a non–intensive care unit setting,
did not initially appear to meet our inclusion criteria but was included after
further evaluation. All studies were judged as having a high or unclear risk
of bias. SUP was more effective than placebo or no prophylaxis at reducing UGI
bleeding (random effects: RR 0.31; 95 % CI 0.20–0.47; P < 0.00001; I² = 45
%) and all-cause mortality (fixed effects: RR 0.70; 95 % CI 0.50–0.98;
P = 0.04; I² = 0 %). There was no difference between SUP and placebo or no
prophylaxis regarding nosocomial pneumonia (random effects: RR 1.14; 95 % CI
0.67–1.94; P = 0.62; I² = 42 %). The slight asymmetry of the funnel plots
raised the concern of small trial bias, and apparent heterogeneity existed in
participants, interventions, control treatments, and outcome measures.
Conclusions: In neurocritical care patients, SUP seems to be more effective
than placebo or no prophylaxis in preventing UGI bleeding and reducing
all-cause mortality while not increasing the risk of nosocomial pneumonia. The
robustness of this conclusion is limited by a lack of trials with a low risk of
bias, sparse data, heterogeneity among trials, and a concern regarding small
trial bias. Trial registration International Prospective Register of Systematic
Reviews (PROSPERO) identifier: CRD42015015802. Date of registration: 6 Jan
2015.
Pain-related Somatosensory Evoked Potentials: a potential new tool to improve the prognostic prediction of coma after cardiac arrest
Zanatta P et al.
Critical Care 2015, 19:403
Introduction:
Early prediction of a good outcome in comatose patients after cardiac arrest
still remains an unsolved problem. The main aim of the present study was to
examine the accuracy of middle-latency somatosensory evoked potentials (SSEPs)
triggered by painful electrical stimulation of the median nerves in predicting
a favorable outcome.
Methods: No- and low-flow times, pupillary reflex, Glasgow motor score and
biochemical data were evaluated at ICU admission. The following were considered
within 72 h of cardiac arrest: highest creatinine value, hyperthermia
occurrence, EEG, SSEP at low- (10 mA) and high-intensity (50 mA) stimulation,
and blood pressure reactivity to 50 mA. Intensive care treatments were also
considered. Data were compared to survival, consciousness recovery and 6-month
CPC (Cerebral Performance Category).
Results: Pupillary reflex and EEG were statistically significant in predicting
survival; the absence of blood pressure reactivity seems to predict brain death
within 7 days of cardiac arrest. Middle- and short-latency SSEP were
statistically significant in predicting consciousness recovery, and
middle-latency SSEP was statistically significant in predicting 6-month CPC
outcome. The prognostic capability of 50 mA middle-latency-SSEP was
demonstrated to occur earlier than that of EEG reactivity.
Conclusions: Neurophysiological evaluation constitutes the key to early
information about the neurological prognostication of postanoxic coma. In
particular, the presence of 50 mA middle-latency SSEP seems to be an early and
reliable predictor of good neurological outcome, and its absence constitutes a
marker of poor prognosis. Moreover, the absence of 50 mA blood pressure reactivity
seems to identify patients evolving towards brain death.
Comparison of community-acquired, hospital-acquired, and intensive care unit-acquired acute respiratory distress syndrome: a prospective observational cohort study
Kao KC et al.
Critical Care 2015, 19:384
Introduction: Acute respiratory distress syndrome (ARDS) is a syndrome characterized by
diffuse pulmonary edema and severe hypoxemia that usually occurs after an
injury such as sepsis, aspiration and pneumonia. Little is known about the
relation between the setting where the syndrome developed and outcomes in ARDS
patients.
Methods: This is a 1-year prospective observational study conducted
at a tertiary referred hospital. ARDS was defined by the Berlin criteria.
Community-acquired ARDS, hospital-acquired ARDS and intensive care unit
(ICU)-acquired ARDS were defined as ARDS occurring within 48 hours of hospital
admission, more than 48 hours after hospital admission, and more than 48 hours
after ICU admission, respectively. The primary and secondary outcomes were
short- and long-term mortality rates and ventilator-free and ICU-free days.
Results: Of the 3002
patients screened, 296 patients had a diagnosis of ARDS, including 70 (23.7 %)
with community-acquired ARDS, 83 (28 %) with hospital-acquired ARDS, and 143
(48.3 %) with ICU-acquired ARDS. The overall ICU mortality rate was not
significantly different in mild, moderate and severe ARDS (50 %, 50 % and 56 %,
p = 0.25). The baseline characteristics were similar apart from lower rates of
liver disease and metastatic malignancy in community-acquired ARDS than in
hospital-acquired and ICU-acquired ARDS. A multiple logistic regression
analysis indicated that age, Sequential Organ Failure Assessment score and
community-acquired ARDS were independently associated with hospital mortality.
For community-acquired, hospital-acquired and ICU-acquired ARDS, ICU mortality
rates were 37 %, 61 % and 52 %, and hospital mortality rates were 49 %, 74 %
and 68 %, respectively. The ICU and hospital mortality rates of
community-acquired ARDS were significantly lower than those of
hospital-acquired and ICU-acquired ARDS (p = 0.001 and
p = 0.001). The number of ventilator-free days was significantly lower in
ICU-acquired ARDS than in community-acquired and hospital-acquired ARDS
(11 ± 9, 16 ± 9, and 14 ± 10 days, p = 0.001). The number of ICU-free days was
significantly higher in community-acquired ARDS than in hospital-acquired and
ICU-acquired ARDS (8 ± 10, 4 ± 8, and 3 ± 6 days, p = 0.001). Conclusions:
Community-acquired ARDS was associated with lower short- and long-term
mortality rates than hospital-acquired or ICU-acquired ARDS.
Extracorporeal decarboxylation in patients with severe traumatic brain injury and ARDS enables effective control of intracranial pressure
Munoz-Bendix C et al.
Critical Care 2015, 19:381
Introduction: Acute
respiratory distress syndrome (ARDS) with concomitant impairment of oxygenation
and decarboxylation represents a complex problem in patients with increased
intracranial pressure (ICP). Permissive hypercapnia is not an option to obtain
and maintain lung-protective ventilation in the presence of elevated ICP.
Pumpless extracorporeal lung assist (pECLA) devices (iLA Membrane Ventilator;
Novalung, Heilbronn, Germany) can improve decarboxylation without aggravation
associated with invasive ventilation. In this pilot series, we analyzed the
safety and efficacy of pECLA in patients with ARDS and elevated ICP after severe
traumatic brain injury (TBI).
Methods: The medical records of ten patients
(eight male, two female) with severe ARDS and severe TBI concurrently managed
with external ventricular drainage in the neurointensive care unit (NICU) were
retrospectively analyzed. The effect of pECLA on enabling lung-protective
ventilation was evaluated using the difference between plateau pressure and
positive end-expiratory pressure, defined as driving pressure (ΔP), during the
3 days preceding the implant of pECLA devices until 3 days afterward. The ICP
threshold was set at 20 mmHg. To evaluate effects on ICP, the volume of daily
cerebrospinal fluid (CSF) drainage needed to maintain the set ICP threshold was
compared pre- and postimplant.
Results: The ΔP values after pECLA implantation
decreased from a mean of 17.1 ± 0.7 cmH2O to 11.9 ± 0.5 cmH2O (p = 0.011). In
spite of this improved lung-protective ventilation, carbon dioxide pressure
decreased from 46.6 ± 3.9 mmHg to 39.7 ± 3.5 mmHg (p = 0.005). The volume of
daily CSF drainage needed to maintain ICP at 20 mmHg decreased significantly
from 141.5 ± 103.5 ml to 62.2 ± 68.1 ml (p = 0.037). Conclusions: For selected
patients with concomitant severe TBI and ARDS, the application of pECLA is safe
and effective. pECLA devices improve decarboxylation, thus enabling
lung-protective ventilation. At the same time, potentially detrimental
hypercapnia that may increase ICP is avoided. Larger prospective trials are
warranted to further elucidate application of pECLA devices in NICU patients.
An Environmental Scan for Early Mobilization Practices in U.S. ICUs
Bakhru RN et al.
Critical Care Medicine: November 2015 - Volume 43 - Issue 11 - p 2360–2369
Objective:
Early mobilization improves patient outcomes. However, diffusion of this
intervention into standard ICU practice is unknown. Dissemination and
implementation efforts may be guided by an environmental scan to detail
readiness for early mobilization, current practice, and barriers to early
mobilization. Design: A telephone survey. Setting: U.S. ICUs. Subjects: Five
hundred randomly selected U.S. ICUs stratified by regional hospital density and
hospital size. Interventions: None.
Measurements and Main Results: We surveyed
687 ICUs for a 73% response rate (500 ICUs); 99% of respondents were nursing
leadership. Fifty-one percent of hospitals reported an academic affiliation.
Surveyed ICUs were most often mixed medical/surgical (58%) or medical (22%) with
a median of 16 beds (12–24). Thirty-four percent reported presence of a
dedicated physical and/or occupational therapy team for the ICU. Overall, 45%
of ICUs reported early mobilization practice; two thirds of ICUs with early
mobilization practice reported using a written early mobilization protocol. In
ICUs with early mobilization practice, 52% began the intervention at admission
and 74% enacted early mobilization for both ventilated and nonventilated
patients. Early mobilization was provided a median of 6 days per week, twice
daily. Factors independently associated with early mobilization protocols
include dedicated physical/occupational therapy (odds ratio, 3.34; 95% CI,
2.13–5.22; p < 0.01), American Hospital Association region 2 (odds ratio,
3.33; 95% CI, 1.04–10.64; p = 0.04), written sedation protocol (odds ratio,
2.36; 95% CI, 1.25–4.45; p < 0.01), daily multidisciplinary rounds (odds
ratio, 2.31; 95% CI, 1.29–4.15; p < 0.01), and written daily goals for
patients (odds ratio, 2.17; 95% CI, 1.02–4.64; p = 0.04). Commonly cited
barriers included equipment, staffing, patient and caregiver safety, and
competing priorities. In ICUs without early mobilization adoption, 78% have
considered implementation but cite barriers including competing priorities and
need for further planning. Conclusions: Diffusion regarding benefits of early
mobilization has occurred, but adoption into practice is lagging. Mandates for
multidisciplinary rounds and formal sedation protocols may be necessary
strategies to increase the likelihood of successful early mobilization
implementation. Methods to accurately assess and compare institutional
performance via practice audit are needed.
Diagnosis and management of inhalation injury: an updated review
Walker PF et al.
Critical Care 2015, 19:351
In
this article we review recent advances made in the pathophysiology, diagnosis,
and treatment of inhalation injury. Historically, the diagnosis of inhalation
injury has relied on nonspecific clinical exam findings and bronchoscopic
evidence. The development of a grading system and the use of modalities such as
chest computed tomography may allow for a more nuanced evaluation of inhalation
injury and enhanced ability to prognosticate. Supportive respiratory care
remains essential in managing inhalation injury. Adjuncts still lacking
definitive evidence of efficacy include bronchodilators, mucolytic agents, inhaled
anticoagulants, nonconventional ventilator modes, prone positioning, and
extracorporeal membrane oxygenation. Recent research focusing on molecular
mechanisms involved in inhalation injury has increased the number of potential
therapies.
Aspirin as a potential treatment in sepsis or acute respiratory distress syndrome
Toner P et al.
Critical Care 2015, 19:374
Sepsis
is a common condition that is associated with significant morbidity, mortality
and health-care cost. Pulmonary and non-pulmonary sepsis are common causes of
the acute respiratory distress syndrome (ARDS). The mortality from ARDS remains
high despite protective lung ventilation, and currently there are no specific
pharmacotherapies to treat sepsis or ARDS. Sepsis and ARDS are characterised by
activation of the inflammatory cascade. Although there is much focus on the
study of the dysregulated inflammation and its suppression, the associated
activation of the haemostatic system has been largely ignored until recently.
There has been extensive interest in the role that platelet activation can have
in the inflammatory response through induction, aggregation and activation of
leucocytes and other platelets. Aspirin can modulate multiple pathogenic
mechanisms implicated in the development of multiple organ dysfunction in
sepsis and ARDS. This review will discuss the role of the platelet, the
mechanisms of action of aspirin in sepsis and ARDS, and aspirin as a potential
therapy in treating sepsis and ARDS.
Fluid balance and mortality in critically ill patients with acute kidney injury: a multicenter prospective epidemiological study
Wang N et al.
Critical Care 2015, 19:371
Introduction: Early
and aggressive volume resuscitation is fundamental in the treatment of
hemodynamic instability in critically ill patients and improves patient
survival. However, one important consequence of fluid administration is the
risk of developing fluid overload (FO), which is associated with increased
mortality in patients with acute kidney injury (AKI). We evaluated the impact
of fluid balance on mortality in intensive care unit (ICU) patients with AKI.
Methods: The data were extracted from the Beijing Acute Kidney Injury Trial.
This trial was a prospective, observational, multicenter study conducted in 30
ICUs among 28 tertiary hospitals in Beijing, China, from 1 March to 31 August
2012. In total, 3107 patients were admitted consecutively, and 2526 patients
were included in this study. The data from the first 3 sequential days were
analyzed. The AKI severity was classified according to the Kidney Disease:
Improving Global Outcomes guidelines. The daily fluid balance was recorded, and
the cumulative fluid balance was registered at 24, 48, and 72 h. A multivariate
analysis was performed with Cox regression to determine the impact of fluid
balance on mortality in patients with AKI.
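The cumulative fluid balance registered at 24, 48, and 72 h in this study is simply the running sum of the daily balances (intake minus output). A minimal sketch, with hypothetical values rather than study data:

```python
from itertools import accumulate

def cumulative_fluid_balance(daily_balances_l):
    """Running cumulative fluid balance (litres) at the end of each day.

    daily_balances_l: per-day fluid balance (intake minus output) in litres,
    day 1 first.
    """
    return list(accumulate(daily_balances_l))

# Hypothetical patient: +1.2 L, +0.9 L and +0.7 L on days 1-3
print(cumulative_fluid_balance([1.2, 0.9, 0.7]))  # balances at 24, 48 and 72 h
```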
Results: Among the 2526 patients
included, 1172 developed AKI during the first 3 days. The mortality was 25.7 %
in the AKI group and 10.1 % in the non-AKI group (P < 0.001). The daily
fluid balance was higher, and the cumulative fluid balance was significantly
greater, in the AKI group than in the non-AKI group. FO was an independent risk
factor for the incidence of AKI (odds ratio 4.508, 95 % confidence interval
2.900 to 7.008, P < 0.001) and increased the severity of AKI. Non-surviving
patients with AKI had higher cumulative fluid balance during the first 3 days
(2.77 [0.86–5.01] L versus 0.93 [−0.80 to 2.93] L, P < 0.001) than survivors
did. Multivariate analysis revealed that the cumulative fluid balance during
the first 3 days was an independent risk factor for 28-day mortality.
Conclusions: In this multicenter ICU study, the fluid balance was greater in
patients with AKI than in patients without AKI. FO was an independent risk
factor for the incidence of AKI and increased the severity of AKI. A higher
cumulative fluid balance was an important factor associated with 28-day
mortality following AKI.
Ventilator-derived carbon dioxide production to assess energy expenditure in critically ill patients: proof of concept
Critical
Care 2015, 19:370
Stapel SN et al.
Introduction: Measurement
of energy expenditure (EE) is recommended to guide nutrition in critically ill
patients. Availability of the gold standard, indirect calorimetry, is limited, and
continuous measurement is unfeasible. Equations used to predict EE are
inaccurate. The purpose of this study was to provide proof of concept that EE
can be accurately assessed on the basis of ventilator-derived carbon dioxide
production (VCO2) and to determine whether this method is more accurate than
frequently used predictive equations.
Methods: In 84 mechanically ventilated
critically ill patients, we performed 24-h indirect calorimetry to obtain a
gold standard EE. Simultaneously, we collected 24-h ventilator-derived VCO2,
extracted the respiratory quotient of the administered nutrition, and
calculated EE with a rewritten Weir formula. Bias, precision, and accuracy and
inaccuracy rates were determined and compared with four predictive equations:
the Harris–Benedict, Faisy, and Penn State University equations and the
European Society for Clinical Nutrition and Metabolism (ESPEN) guideline
equation of 25 kcal/kg/day.
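A "rewritten Weir formula" of the kind described here substitutes VO2 = VCO2/RQ into the classic Weir equation, EE (kcal/day) = (3.941 × VO2 + 1.106 × VCO2) × 1440, so that energy expenditure can be estimated from ventilator-derived VCO2 and the nutritional RQ alone. A minimal sketch of that substitution (the constants are the standard Weir coefficients; the example numbers are illustrative, not study data):

```python
def ee_from_vco2(vco2_l_min, rq):
    """Energy expenditure (kcal/24 h) from CO2 production alone.

    Weir: EE = (3.941*VO2 + 1.106*VCO2) * 1440 kcal/day.
    Substituting VO2 = VCO2/RQ removes the need to measure oxygen uptake.
    """
    return vco2_l_min * (3.941 / rq + 1.106) * 1440

# Hypothetical patient: VCO2 = 0.20 L/min, nutritional RQ = 0.86
print(round(ee_from_vco2(0.20, 0.86)))  # -> 1638 (kcal/24 h)
```

Note that the estimate is sensitive to the assumed RQ: a lower RQ (more fat oxidation) raises the EE inferred from the same VCO2.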
Results: Mean 24-h indirect calorimetry EE was
1823 ± 408 kcal. EE from ventilator-derived VCO2 was accurate (bias +141 ± 153
kcal/24 h; 7.7 % of gold standard) and more precise than the predictive
equations (limits of agreement −166 to +447 kcal/24 h). The 10 % and 15 %
accuracy rates were 61 % and 76 %, respectively, which were significantly
higher than those of the Harris–Benedict, Faisy, and ESPEN guideline equations.
Large errors of more than 30 % inaccuracy did not occur with EE derived from
ventilator-derived VCO2. This 30 % inaccuracy rate was significantly lower
than that of the predictive equations.
Conclusions: In critically ill
mechanically ventilated patients, assessment of EE based on ventilator-derived
VCO2 is accurate and more precise than frequently used predictive equations.
It allows for continuous monitoring and is the best alternative to indirect
calorimetry.
Arterial Catheter Use in the ICU: A National Survey of Antiseptic Technique and Perceived Infectious Risk
Critical
Care Medicine: November 2015 - Volume 43 - Issue 11 - p 2346–2353
Cohen,
David M. et al
Objectives:
Recent studies have shown that the occurrence rate of bloodstream infections
associated with arterial catheters is 0.9–3.4/1,000 catheter-days, which is
comparable to that of central venous catheters. In 2011, the Centers for
Disease Control and Prevention published new guidelines recommending the use of
limited barrier precautions during arterial catheter insertion, consisting of
sterile gloves, a surgical cap, a surgical mask, and a small sterile drape. The
goal of this study was to assess the attitudes and current infection prevention
practices used by clinicians during insertion of arterial catheters in ICUs in
the United States. Design: An anonymous, 22-question web-based survey of
infection prevention practices during arterial catheter insertion. Setting:
Clinician members of the Society of Critical Care Medicine. Subjects: Eleven
thousand three hundred sixty-one physicians, nurse practitioners, physician
assistants, respiratory therapists, and registered nurses who elect to receive
e-mails from the Society of Critical Care Medicine. Interventions: None.
Measurements and Main Results: There were 1,265 responses (11% response rate),
with 1,029 eligible participants after exclusions were applied. Only 44% of
participants reported using the Centers for Disease Control and
Prevention–recommended barrier precautions during arterial catheter insertion,
and only 15% reported using full barrier precautions. The mean and median
estimates of the incidence density of bloodstream infections associated with
arterial catheters were 0.3/1,000 catheter-days and 0.1/1,000 catheter-days,
respectively. Thirty-nine percent of participants reported that they would
support mandatory use of full barrier precautions during arterial catheter
insertion. Conclusions: Barrier precautions are used inconsistently by critical
care clinicians during arterial catheter insertion in the ICU setting. Less
than half of clinicians surveyed were in compliance with current Centers for
Disease Control and Prevention guidelines. Clinicians significantly
underestimated the infectious risk posed by arterial catheters, and support for
mandatory use of full barrier precautions was low. Further studies are
warranted to determine the optimal preventive strategies for reducing
bloodstream infections associated with arterial catheters.
Ten Myths and Misconceptions Regarding Pain Management in the ICU
Critical
Care Medicine: November 2015 - Volume 43 - Issue 11 - p 2468–2478
Sigakis,
Matthew J. G. et al
Objectives:
The aim of this article is to expose common myths and misconceptions regarding
pain assessment and management in critically ill patients that interfere with
effective care. We comprehensively review the literature refuting these myths
and misconceptions and describe evidence-based strategies for improving pain
management in the ICU. Data Sources: Current peer-reviewed academic journals,
as well as standards and guidelines from professional societies. Study
Selection: The most current evidence was selected for review based on the highest
degree of supportive evidence. Data Extraction: Data were obtained via medical
search databases, including OvidSP, and the National Library of Medicine’s
MEDLINE database via PubMed. Data Synthesis: After a comprehensive literature
review, conclusions were drawn based on the strength of evidence and the most
current understanding of pain management practices in ICU. Conclusions: Myths
and misconceptions regarding management of pain in the ICU are prevalent.
Review of current evidence refutes these myths and misconceptions and provides
insights and recommendations to ensure best practices.
Rapid Diagnosis of Infection in the Critically Ill, a Multicenter Study of Molecular Detection in Bloodstream Infections, Pneumonia, and Sterile Site Infections
Critical
Care Medicine: November 2015 - Volume 43 - Issue 11 - p 2283–2291
Vincent,
Jean-Louis et al.
Objective:
Early identification of causative microorganism(s) in patients with severe
infection is crucial to optimize antimicrobial use and patient survival.
However, current culture-based pathogen identification is slow and unreliable
such that broad-spectrum antibiotics are often used to ensure coverage of all
potential organisms, carrying risks of overtreatment, toxicity, and selection
of multidrug-resistant bacteria. We compared the results obtained using a
novel, culture-independent polymerase chain reaction/electrospray
ionization-mass spectrometry technology with those obtained by standard
microbiological testing and evaluated the potential clinical implications of
this technique.
Design: Observational study. Setting: Nine ICUs in six European
countries. Patients: Patients admitted between October 2013 and June 2014 with
suspected or proven bloodstream infection, pneumonia, or sterile fluid and
tissue infection were considered for inclusion. Interventions: None.
Measurements and Main Results: We tested 616 bloodstream infection, 185
pneumonia, and 110 sterile fluid and tissue specimens from 529 patients. From
the 616 bloodstream infection samples, polymerase chain reaction/electrospray
ionization-mass spectrometry identified a pathogen in 228 cases (37%) and
culture in just 68 (11%). Culture was positive and polymerase chain
reaction/electrospray ionization-mass spectrometry negative in 13 cases, and
both were negative in 384 cases, giving polymerase chain reaction/electrospray
ionization-mass spectrometry a sensitivity of 81%, specificity of 69%, and
negative predictive value of 97% at 6 hours from sample acquisition. The
distribution of organisms was similar with both techniques. Similar
observations were made for pneumonia and sterile fluid and tissue specimens.
Independent clinical analysis of results suggested that polymerase chain
reaction/electrospray ionization-mass spectrometry technology could potentially
have resulted in altered treatment in up to 57% of patients.
Conclusions:
Polymerase chain reaction/electrospray ionization-mass spectrometry provides
rapid pathogen identification in critically ill patients. The ability to rule
out infection within 6 hours has potential clinical and economic benefits.
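The reported performance figures can be approximately checked against a 2 × 2 table reconstructed from the counts in the abstract (68 culture-positive samples, 13 of them PCR/ESI-MS-negative; 384 doubly negative out of 616). The sketch below is illustrative only: the specificity recovered this way (70 %) differs by a point from the reported 69 %, so the published denominators presumably exclude a few samples.

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Sensitivity, specificity and NPV from a 2x2 table (culture as reference)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    npv = tn / (tn + fn)
    return sens, spec, npv

# Counts reconstructed from the abstract's 616 bloodstream samples
tp, fn, tn = 68 - 13, 13, 384          # culture+ split; both-negative count
fp = (616 - 68) - tn                   # remaining culture-negative samples
sens, spec, npv = diagnostic_performance(tp, fp, fn, tn)
print(f"sens {sens:.0%}, spec {spec:.0%}, NPV {npv:.0%}")
```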
Double-Blind Prospective Randomized Controlled Trial of Dopamine Versus Epinephrine as First-Line Vasoactive Drugs in Pediatric Septic Shock
Critical
Care Medicine: November 2015 - Volume 43 - Issue 11 - p 2292–2302
Ventura,
Andréa M. C. et al
Objectives: The primary outcome was to compare the effects of dopamine
or epinephrine in severe sepsis on 28-day mortality; secondary outcomes were
the rate of healthcare–associated infection, the need for other vasoactive
drugs, and the multiple organ dysfunction score. Design: Double-blind,
prospective, randomized controlled trial from February 1, 2009, to July 31,
2013. Setting: PICU, Hospital Universitário da Universidade de São Paulo,
Brazil. Patients: Consecutive children aged 1 month to 15 years who met
the clinical criteria for fluid-refractory septic shock. Exclusions were
receiving vasoactive drug(s) prior to hospital admission, having known cardiac
disease, having already participated in the trial during the same hospital
stay, refusing to participate, or having do-not-resuscitate orders.
Interventions: Patients were randomly assigned to receive either dopamine (5–10
μg/kg/min) or epinephrine (0.1–0.3 μg/kg/min) through a peripheral or
intraosseous line. Patients not reaching predefined stabilization criteria
after the maximum dose were classified as treatment failure, at which point the
attending physician gradually stopped the study drug and started another
catecholamine.
Measurements and Main Results: Physiologic and laboratory data were
recorded. Baseline characteristics were described as proportions and mean (±
SD) and compared using appropriate statistical tests. Multiple regression
analysis was performed, and statistical significance was defined as a p value
of less than 0.05. Baseline characteristics and therapeutic interventions for
the 120 children enrolled (63, dopamine; 57, epinephrine) were similar. There
were 17 deaths (14.2%): 13 (20.6%) in the dopamine group and four (7%) in the
epinephrine group (p = 0.033). Dopamine was associated with death (odds ratio,
6.5; 95% CI, 1.1–37.8; p = 0.037) and healthcare–associated infection (odds
ratio, 67.7; 95% CI, 5.0–910.8; p = 0.001). The use of epinephrine was
associated with a survival odds ratio of 6.49.
Conclusions: Dopamine was associated
with an increased risk of death and healthcare–associated infection. Early
administration of peripheral or intraosseous epinephrine was associated with
increased survival in this population. Limitations should be considered when
interpreting these results.
Rehospitalizations Following Sepsis: Common and Costly
Critical
Care Medicine: October 2015 - Volume 43 - Issue 10 - p 2085–2093
Chang,
Dong W. et al.
Objective: Although recent studies have shown that 30-day readmissions
following sepsis are common, the overall fiscal impact of these
rehospitalizations and their variability between hospitals relative to other
high-risk conditions, such as congestive heart failure and acute myocardial
infarction, are unknown. The objectives of this study were to characterize the
frequency, cost, patient-level risk factors, and hospital-level variation in
30-day readmissions following sepsis compared with congestive heart failure and
acute myocardial infarction.
Design: A retrospective cohort analysis of
hospitalizations from 2009 to 2011. Setting: All acute care, nonfederal
hospitals in California. Patients: Hospitalizations for sepsis (n = 240,198),
congestive heart failure (n = 193,153), and acute myocardial infarction (n =
105,684) identified by administrative discharge codes. Interventions: None.
Measurements and Main Results: The primary outcomes were the frequency and cost
of all-cause 30-day readmissions following hospitalization for sepsis compared
with congestive heart failure and acute myocardial infarction. Variability in
predicted readmission rates between hospitals was calculated using
mixed-effects logistic regression analysis. The all-cause 30-day readmission
rates were 20.4%, 23.6%, and 17.7% for sepsis, congestive heart failure, and
acute myocardial infarction, respectively. The estimated annual costs of 30-day
readmissions in the state of California during the study period were $500
million/yr for sepsis, $229 million/yr for congestive heart failure, and $142
million/yr for acute myocardial infarction. The risk- and reliability-adjusted
readmission rates across hospitals ranged from 11.0% to 39.8% (median, 19.9%;
interquartile range, 16.1–26.0%) for sepsis, 11.3% to 38.4% (median, 22.9%;
interquartile range, 19.2–26.6%) for congestive heart failure, and 3.6% to
40.8% (median, 17.0%; interquartile range, 12.2–20.0%) for acute myocardial
infarction. Patient-level factors associated with higher odds of 30-day
readmission following sepsis included younger age, male gender, Black or Native
American race, a higher burden of medical comorbidities, urban residence, and
lower income.
Conclusion: Sepsis is a leading contributor to excess healthcare
costs due to hospital readmissions. Interventions at clinical and policy levels
should prioritize identifying effective strategies to reduce sepsis
readmissions.
Variations in the Operational Process of Withdrawal of Life-Sustaining Therapy
Critical Care Medicine: October 2015
- Volume 43 - Issue 10 - p e450–e457
van
Beinum, A. et al.
Objective: The process of withdrawal of life-sustaining therapy remains
poorly described in the current literature despite its importance for patient
comfort and optimal end-of-life care. We conducted a structured review of the
published literature to summarize patterns of withdrawal of life-sustaining
therapy processes in adult ICUs. Data Sources: Electronic journal databases
were searched from date of first issue until April 2014. Study Selection:
Original research articles describing processes of life-support therapy
withdrawal in North American, European, and Australian ICUs were included. Data
Extraction: From each article, we extracted definitions of withdrawal of
life-sustaining therapy, descriptions and order of interventions withdrawn,
drugs administered, and timing from withdrawal of life-sustaining therapy until
death. Data Synthesis: Fifteen articles met inclusion criteria. Definitions of
withdrawal of life-sustaining therapy varied and focused on withdrawal of
mechanical ventilation; two studies did not present operational definitions.
All studies described different aspects of process of life-support therapy
withdrawal and measured different time periods prior to death. Staggered
patterns of withdrawal of life-support therapy were reported in all studies
describing order of interventions withdrawn, with vasoactive drugs withdrawn
first followed by gradual withdrawal of mechanical ventilation. Processes of
withdrawal of life-sustaining therapy did not seem to influence time to death.
Conclusions: Further description of the operational processes of life-sustaining
therapy withdrawal in a more structured manner with standardized definitions
and regular inclusion of measures of patient comfort and family satisfaction
with care is needed to identify which patterns and processes are associated
with greatest perceived patient comfort and family satisfaction with care.
Validation and analysis of prognostic scoring systems for critically ill patients with cirrhosis admitted to ICU
Critical Care 2015, 19:364
Campbell J et al.
Introduction: The number of patients admitted to ICU who have liver
cirrhosis is rising. Current prognostic scoring tools to predict ICU mortality
have performed poorly in this group. In previous single-centre research, a
novel scoring tool that modifies the Child-Turcotte-Pugh score by adding
lactate concentration, the CTP + L score, was strongly associated with
mortality. This study aims to validate the use of the CTP + L scoring tool for
predicting ICU mortality in patients admitted to a general ICU with cirrhosis,
and to determine significant predictive factors for mortality within this group
of patients. This study will also explore the use of the Royal Free Hospital
(RFH) score in this cohort.
Methods: A total of 84 patients admitted to the Glasgow Royal Infirmary ICU
between June 2012 and Dec 2013 with cirrhosis were included. An additional
cohort of 115 patients was obtained from two ICUs in London (St George’s and St
Thomas’) collected between October 2007 and July 2009. Liver specific and general
ICU scoring tools were calculated for both cohorts, and compared using area
under the receiver operating characteristic (ROC) curves. Independent
predictors of ICU mortality were identified by univariate analysis.
Multivariate analysis was utilised to determine the most predictive factors
affecting mortality within these patient groups.
Results: Within the Glasgow cohort, independent predictors of ICU mortality
were identified as Lactate (p < 0.001), Bilirubin (p = 0.0048), PaO2/FiO2
Ratio (p = 0.032) and PT ratio (p = 0.012). Within the London cohort,
independent predictors of ICU mortality were Lactate (p < 0.001), PT ratio
(p < 0.001), Bilirubin (p = 0.027), PaO2/FiO2 Ratio (p = 0.0011) and
Ascites (p = 0.023). The CTP + L and RFH scoring tools had the highest ROC
value in both cohorts examined.
Conclusion: The CTP + L and RFH scoring tools are validated prognostic scoring
tools for predicting ICU mortality in patients admitted to a general ICU with
cirrhosis.
Critical
Care Medicine: October 2015 - Volume 43 - Issue 10 - p 2133–2140
Zhang,
D
Objective:
To assess the timing of appropriate antibiotic therapy as a determinant of
postinfection hospital and ICU lengths of stay in patients with sepsis. Design:
Single-center retrospective cohort study (January 2008–December 2012). Setting:
One thousand two hundred fifty-bed academic hospital. Patients: One thousand
fifty-eight consecutive blood culture positive patients. Interventions: We
retrospectively identified adult patients with severe sepsis or septic shock.
Timing of appropriate antibiotic therapy was determined from blood culture
collection time to the administration of the first dose of antibiotic therapy
with documented in vitro susceptibility against the identified pathogen. We
constructed generalized linear models to examine the determinants of
attributable lengths of stay. Measurements and Main Results: The median
(interquartile range) time from blood culture collection to the administration
of appropriate antibiotic therapy was 6.7 hours (0.0–23.3 hr). Linear
regression analysis adjusting for severity of illness and comorbid conditions
identified time to appropriate antibiotic therapy to be an independent
determinant of postinfection ICU length of stay (0.095-d increase per hr of
time to deliver appropriate antibiotic therapy; 95% CI, 0.057–0.132 d; p <
0.001) and postinfection hospital length of stay (0.134-d increase per hr of
time to deliver appropriate antibiotic therapy; 95% CI, 0.074–0.194 d; p <
0.001). Other independent determinants associated with increasing ICU length of
stay and hospital length of stay were mechanical ventilation (both ICU and
hospital lengths of stay) and incremental peak WBC counts (hospital length of
stay only). Incremental changes in severity of illness assessed by Acute
Physiology and Chronic Health Evaluation II scores and comorbidity burden
assessed by the Charlson comorbidity score were independently associated with
decreases in ICU length of stay and hospital length of stay. Conclusions: We
identified time to appropriate antibiotic therapy in patients with sepsis to be
an independent determinant of postinfection ICU and hospital lengths of stay.
Clinicians should implement local strategies aimed at timely delivery of
appropriate antibiotic therapy to improve outcomes and reduce length of stay.
Critical
Care Medicine: October 2015 - Volume 43 - Issue 10 - p 2141–2146
Fawzy,
A et al
Objectives:
Clinical guidelines recommend norepinephrine as initial vasopressor of choice
for septic shock, with dopamine suggested as an alternative vasopressor in
selected patients with low risk of tachyarrhythmias and absolute or relative
bradycardia. We sought to determine practice patterns and outcomes associated
with vasopressor selection in a large, population-based cohort of patients with
septic shock that allows for assessment of outcomes in clinically important
subgroups. Design: We performed a retrospective cohort study to determine
factors associated with choice of dopamine as compared with norepinephrine as
initial vasopressor for patients with septic shock. We used propensity score
matching to compare risk of hospital mortality based on initial vasopressor. We
performed multiple sensitivity analyses using alternative methods to address
confounding and hospital-level clustering. We investigated interaction between
vasopressor selection and mortality in clinical subgroups based on arrhythmia
and cardiovascular risk. Setting: Enhanced administrative data (Premier,
Charlotte, NC) from 502 U.S. hospitals during the years 2010–2013. Subjects: A
total of 61,122 patients admitted with septic shock who received dopamine or
norepinephrine as initial vasopressor during the first 2 days of
hospitalization. Interventions: None. Measurements and Main Results:
Norepinephrine (77.6%) was the most frequently used initial vasopressor during
septic shock. Dopamine was preferentially selected by cardiologists, in the
Southern United States, at nonteaching hospitals, for older patients with more
cardiovascular comorbidities and was used less frequently over time. Patients
receiving dopamine experienced greater hospital mortality (propensity-matched cohort:
n = 38,788; 25% vs 23.7%; odds ratio, 1.08; 95% CI, 1.02–1.14). Sensitivity
analyses showed similar results. Subgroup analyses showed no evidence for
effect modification based on arrhythmia risk or underlying cardiovascular
disease. Conclusions: In a large population-based sample of patients with
septic shock in the United States, use of dopamine as initial vasopressor was
associated with increased mortality among multiple clinical subgroups. Areas
where use of dopamine as initial vasopressor is more common represent potential
targets for quality improvement intervention.
Critical
Care Medicine: October 2015 - Volume 43 - Issue 10 - p 2155–2163
Neto,
AS et al
Objective:
Protective mechanical ventilation with low tidal volumes is standard of care
for patients with acute respiratory distress syndrome. The aim of this
individual patient data analysis was to determine the association between tidal
volume and the occurrence of pulmonary complications in ICU patients without
acute respiratory distress syndrome and the association between occurrence of
pulmonary complications and outcome in these patients. Design: Individual
patient data analysis. Patients: ICU patients not fulfilling the consensus
criteria for acute respiratory distress syndrome at the onset of ventilation.
Interventions: Mechanical ventilation with low tidal volume. Measurements and
Main Results: The primary endpoint was development of a composite of acute
respiratory distress syndrome and pneumonia during hospital stay. Based on the
tertiles of tidal volume size in the first 2 days of ventilation, patients were
assigned to a “low tidal volume group” (tidal volumes ≤ 7 mL/kg predicted body
weight), an “intermediate tidal volume group” (> 7 and < 10 mL/kg
predicted body weight), and a “high tidal volume group” (≥ 10 mL/kg predicted
body weight). Seven investigations (2,184 patients) were included. Acute
respiratory distress syndrome or pneumonia occurred in 23% of patients in the
low tidal volume group, in 28% of patients in the intermediate tidal volume
group, and in 31% of the patients in the high tidal volume group (adjusted odds
ratio [low vs high tidal volume group], 0.72; 95% CI, 0.52–0.98; p = 0.042).
Occurrence of pulmonary complications was associated with fewer ICU-free and
hospital-free days alive at day 28 (10.0 ± 10.9 vs 13.8 ± 11.6
d; p < 0.01 and 6.1 ± 8.1 vs 8.9 ± 9.4 d; p < 0.01) and an increased
hospital mortality (49.5% vs 35.6%; p < 0.01). Conclusions: Ventilation with
low tidal volumes is associated with a lower risk of development of pulmonary
complications in patients without acute respiratory distress syndrome.
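The tertile assignment described above (≤ 7, > 7 and < 10, and ≥ 10 mL/kg predicted body weight) can be sketched as follows. The predicted-body-weight formula used here is the standard ARDSNet one, which the abstract does not specify, and the example values are hypothetical:

```python
def predicted_body_weight_kg(height_cm, male):
    """Standard ARDSNet predicted body weight (assumed; the abstract
    does not state which PBW formula the investigators used)."""
    return (50.0 if male else 45.5) + 0.91 * (height_cm - 152.4)

def tidal_volume_group(vt_ml, height_cm, male):
    """Assign the study's tidal volume group from mL/kg PBW."""
    ml_per_kg = vt_ml / predicted_body_weight_kg(height_cm, male)
    if ml_per_kg <= 7:
        return "low"
    if ml_per_kg < 10:
        return "intermediate"
    return "high"

# Hypothetical 175 cm male ventilated with 450 mL breaths: ~6.4 mL/kg PBW
print(tidal_volume_group(450, 175, male=True))  # -> low
```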
Critical
Care Medicine: October 2015 - Volume 43 - Issue 10 - p 2076–2084
Objective:
Clinical protocols may decrease unnecessary variation in care and improve
compliance with desirable therapies. We evaluated whether highly protocolized
ICUs have superior patient outcomes compared with less highly protocolized
ICUs. Design: Observational study in which participating ICUs completed a
general assessment and enrolled new patients 1 day each week. Patients: A total
of 6,179 critically ill patients. Setting: Fifty-nine ICUs in the United States
Critical Illness and Injury Trials Group Critical Illness Outcomes Study.
Interventions: None. Measurements and Main Results: The primary exposure was
the number of ICU protocols; the primary outcome was hospital mortality. A
total of 5,809 participants were followed prospectively, and 5,454 patients in
57 ICUs had complete outcome data. The median number of protocols per ICU was
19 (interquartile range, 15–21.5). In single-variable analyses, there were no
differences in ICU and hospital mortality, length of stay, use of mechanical
ventilation, vasopressors, or continuous sedation among individuals in ICUs
with a high versus low number of protocols. The lack of association was
confirmed in adjusted multivariable analysis (p = 0.70). Protocol compliance
with two ventilator management protocols was moderate and did not differ
between ICUs with high versus low numbers of protocols for lung protective
ventilation in acute respiratory distress syndrome (47% vs 52%; p = 0.28) and
for spontaneous breathing trials (55% vs 51%; p = 0.27). Conclusions: Clinical
protocols are highly prevalent in U.S. ICUs. The presence of a greater number
of protocols was not associated with protocol compliance or patient mortality.
Critical
Care 2015, 19:340
Garnero
A et al
Introduction: Lung recruitment maneuvers followed by an individually titrated positive
end-expiratory pressure (PEEP) are the key components of the open lung
ventilation strategy in acute respiratory distress syndrome (ARDS). The
staircase recruitment maneuver is a step-by-step increase in PEEP followed by a
decremental PEEP trial. The duration of each step is usually 2 minutes without
physiologic rationale.
Methods: In this prospective study, we measured the
dynamic end-expiratory lung volume changes (ΔEELV) during an increase and
decrease in PEEP to determine the optimal duration for each step. PEEP was
progressively increased from 5 to 40 cmH2O and then decreased from 40 to 5
cmH2O in steps of 5 cmH2O every 2.5 minutes. The dynamics of ΔEELV were
measured by direct spirometry as the difference between inspiratory and
expiratory tidal volumes over 2.5 minutes following each increase and decrease
in PEEP. ΔEELV was separated between the expected increased volume, calculated
as the product of the respiratory system compliance by the change in PEEP, and
the additional volume.
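The volume split described in the Methods, separating the compliance-predicted part of ΔEELV from the additional (e.g. recruitment-related) part, can be sketched as follows, with hypothetical numbers:

```python
def split_delta_eelv(measured_delta_ml, compliance_ml_per_cmh2o, delta_peep_cmh2o):
    """Split a measured end-expiratory lung volume change into the part
    expected from respiratory-system compliance (Crs * dPEEP) and the
    remaining additional volume."""
    expected = compliance_ml_per_cmh2o * delta_peep_cmh2o
    return expected, measured_delta_ml - expected

# Hypothetical step: PEEP +5 cmH2O, Crs = 40 mL/cmH2O, measured dEELV = 260 mL
expected, additional = split_delta_eelv(260, 40, 5)
print(expected, additional)  # -> 200 60 (mL expected, mL additional)
```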
Results: Twenty-six patients with early-onset moderate or severe ARDS were
included. Data are expressed as median [25th-75th quartiles].
During the increase in PEEP, the expected increased volume was achieved within
2[2-2] breaths. During the decrease in PEEP, the expected decreased volume was
achieved within 1 [1–1] breath, and 95 % of the additional decreased volume was
achieved within 8 [2–15] breaths. Completion of volume changes in 99 % of both
increase and decrease in PEEP events required 29 breaths.
Conclusions: In early
ARDS, most of the ΔEELV occurs within the first minute, and change is completed
within 2 minutes, following an increase or decrease in PEEP.
Critical
Care 2015, 19:329
Palacio
de Azevedo R et al
Introduction:
Constipation is a common problem in intensive care units. We assessed the
efficacy and safety of laxative therapy, aimed at promoting daily defecation, in
reducing organ dysfunction in mechanically ventilated patients.
Methods: We
conducted a prospective, randomized, controlled, nonblinded phase II clinical
trial at two general intensive care units. Patients expected to remain
ventilated for over 3 days were randomly assigned to daily defecation or
control groups. The intervention group received lactulose and enemas to produce
1–2 defecations per day. In the control group, absence of defecation was
tolerated up to 5 days. Primary outcome was the change in Sequential Organ
Failure Assessment (SOFA) score between the date of enrollment and intensive
care unit discharge, death or day 14.
Results: We included 88 patients.
Patients in the treatment group had a higher number of defecations per day (1.3
± 0.42 versus 0.7 ± 0.56, p < 0.0001) and lower percentage of days without
defecation (33.1 ± 15.7 % versus 62.3 ± 24.5 %, p < 0.0001). Patients in the
intervention group had a greater reduction in SOFA score (–4.0 (–6.0 to 0)
versus –1.0 (–4.0 to 1.0), p = 0.036) with no difference in mortality rates or
in survival time. Adverse events were more frequent in the treatment group (4.5
(3.0–8.0) versus 3.0 (1.0–5.7), p = 0.016), including more days with diarrhea
(2.0 (1.0–4.0) versus 1.0 (0–2.0) days, p < 0.0001). Serious adverse events
were rare and did not significantly differ between groups.
Conclusions:
Laxative therapy improved daily defecation in ventilated patients and was
associated with a greater reduction in SOFA score.
Critical Care 2015, 19:335
Gattarello S et al
Introduction: We
aimed to compare intensive care unit mortality due to non-pneumococcal severe
community-acquired pneumonia between the periods 2000–2002 and 2008–2014, and
to assess the impact of improved antibiotic strategies on outcomes. Methods:
This was a matched case–control study enrolling 144 patients with
non-pneumococcal severe pneumonia: 72 patients from the 2000–2002 database
(CAPUCI I group) were paired with 72 from the 2008–2014 period (CAPUCI II
group), matched by the following variables: microorganism, shock at admission,
invasive mechanical ventilation, immunocompromise, chronic obstructive
pulmonary disease, and age over 65 years. Results: The most frequent
microorganism was methicillin-susceptible Staphylococcus aureus (22.1 %)
followed by Legionella pneumophila and Haemophilus influenzae (each 20.7 %);
prevalence of shock was 59.7 %, while 73.6 % of patients needed invasive
mechanical ventilation. Intensive care unit mortality was significantly lower
in the CAPUCI II group (34.7 % versus 16.7 %; odds ratio (OR) 0.78, 95 %
confidence interval (CI) 0.64–0.95; p = 0.02). Appropriate therapy according to
microorganism was 91.5 % in CAPUCI I and 92.7 % in CAPUCI II, while combined
therapy and early antibiotic treatment were significantly higher in CAPUCI II (76.4 %
versus 90.3 % and 37.5 % versus 63.9 %; p < 0.05). In the multivariate
analysis, combined antibiotic therapy (OR 0.23, 95 % CI 0.07–0.74) and early
antibiotic treatment (OR 0.07, 95 % CI 0.02–0.22) were independently associated
with decreased intensive care unit mortality. Conclusions: In non-pneumococcal
severe community-acquired pneumonia, early antibiotic administration and use
of combined antibiotic therapy were both associated with increased intensive
care unit survival during the study period.
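The odds ratios above come from 2×2 tables of exposure (e.g. combined antibiotic therapy) against ICU mortality. As a minimal sketch, using hypothetical counts rather than the CAPUCI data, an OR and its Wald 95 % confidence interval can be computed as follows:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95 % CI from a 2x2 table:
    a = deaths with exposure,    b = survivors with exposure,
    c = deaths without exposure, d = survivors without exposure."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only (not the study's data):
# 12/72 deaths in the exposed group versus 25/72 in the unexposed group.
print(odds_ratio_ci(12, 60, 25, 47))
```

An OR below 1 with a confidence interval excluding 1, as in the multivariate results above, indicates an association with lower mortality.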
Critical Care 2015, 19:337
Lorente JA
Kao et al. have reported in Critical Care the histological findings of
101 patients with acute respiratory distress syndrome (ARDS) undergoing open
lung biopsy. Diffuse alveolar damage (DAD), the histological hallmark of ARDS,
was present in only 56.4 % of cases. The presence of DAD was associated with
higher mortality. Evidence from this and other studies indicates that the
clinical criteria for the diagnosis of ARDS identify DAD in only about half of
the cases. Moreover, there is evidence that the clinical course and outcome of
ARDS differ between patients with DAD and patients without DAD. The
discovery of biomarkers for the physiological (increased alveolocapillary
permeability) or histological (DAD) hallmarks of ARDS is thus of paramount
importance.
Critical Care 2015, 19:324
Leteurtre S et al
Introduction: Daily or serial evaluation of multiple organ dysfunction
syndrome (MODS) scores may provide useful information. We aimed to validate the
daily (d) PELOD-2 score using the set of seven days proposed with the previous
version of the score.
Methods: In all consecutive patients admitted to nine
pediatric intensive care units (PICUs) we prospectively measured the dPELOD-2
score on days 1, 2, 5, 8, 12, 16, and 18. PICU mortality was used as the outcome
dependent variable. The discriminant power of the dPELOD-2 scores was estimated
using the area under the ROC curve and the calibration using the
Hosmer-Lemeshow chi-square test. We used a logistic regression to investigate
the relationship between the dPELOD-2 scores and outcome, and between the
change in PELOD-2 score from day 1 and outcome.
Results: We included 3669 patients (median age 15.5 months, mortality rate 6.1
%, median length of PICU stay 3 days). Median dPELOD-2 scores were
significantly higher in nonsurvivors than in survivors (p < 0.0001). The
dPELOD-2 score was available at least on day 2 in 2057 patients: among the 796
patients without MODS on day 1, 186 (23.3 %) acquired the syndrome during their
PICU stay (mortality 4.9 % vs. 0.3 % among the 610 who did not; p < 0.0001).
Among the 1261 patients with MODS on day 1, the syndrome worsened in 157 (12.4 %)
and remained unchanged or improved in 1104 (87.6 %) (mortality 22.9 % vs. 6.6
%; p < 0.0001). The AUC of the dPELOD-2 scores ranged from 0.75 (95 % CI:
0.67-0.83) to 0.89 (95 % CI: 0.86-0.91). The calibration was good with a
chi-square test between 13.5 (p = 0.06) and 0.9 (p = 0.99). The PELOD-2 score
on day 1 was a significant prognostic factor; the serial evaluation of the
change in the dPELOD-2 score from day 1, adjusted for baseline value,
demonstrated a significant odds ratio of death for each of the 7 days.
Conclusion: This study suggests that the progression of the severity of organ
dysfunctions can be evaluated by measuring the dPELOD-2 score during a set of 7
days in PICU, providing useful information on outcome in critically ill
children. Its external validation would be useful.
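The discriminant power reported above is the area under the ROC curve, which equals the Mann-Whitney probability that a randomly chosen nonsurvivor has a higher score than a randomly chosen survivor (ties counting one half). An illustrative sketch, using made-up scores and outcomes rather than the study's data:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (positive, negative) pairs in which the
    positive case scores higher (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up severity scores (higher = sicker) and outcomes (1 = death):
scores = [2, 8, 7, 9, 3, 5, 4, 6]
labels = [0, 0, 1, 1, 0, 1, 0, 1]
print(auc(scores, labels))  # → 0.8125
```

An AUC of 0.5 means no discrimination and 1.0 perfect discrimination; the dPELOD-2 values of 0.75 to 0.89 above therefore indicate good discrimination.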
Critical Care 2015, 19:336
Harrold ME et al
Introduction:
Mobilisation of patients in the intensive care unit (ICU) is an area of growing
research. Currently, there is little data on baseline mobilisation practices
and the barriers to them for patients of all admission diagnoses.
Methods: The
objectives of the study were to (1) quantify and benchmark baseline levels of
mobilisation in Australian and Scottish ICUs, (2) compare mobilisation
practices between Australian and Scottish ICUs and (3) identify barriers to
mobilisation in Australian and Scottish ICUs. We conducted a prospective,
observational, cohort study with a 4-week inception period. Patients were
censored for follow-up upon ICU discharge or after 28 days, whichever occurred
first. Patients were included if they were >18 years of age, admitted to an
ICU and received mechanical ventilation in the ICU.
Results: Ten tertiary ICUs
in Australia and nine in Scotland participated in the study. The Australian
cohort had a large proportion of patients admitted for cardiothoracic surgery
(43.3 %), whereas the Scottish cohort had none. Therefore, comparison analysis
was done after exclusion of patients admitted for cardiothoracic surgery. In
total, 60.2 % of the 347 patients across 10 Australian ICUs and 40.1 % of the
167 patients across 9 Scottish ICUs mobilised during their ICU stay
(p < 0.001). Patients in the Australian cohort were more likely to mobilise
than patients in the Scottish cohort (hazard ratio 1.83, 95 % confidence
interval 1.38–2.42). However, the percentage of episodes of mobilisation where
patients were receiving mechanical ventilation was higher in the Scottish
cohort (41.1 % vs 16.3 %, p < 0.001). Sedation was the most commonly
reported barrier to mobilisation in both the Australian and Scottish cohorts.
Physiological instability and the presence of an endotracheal tube were also
frequently reported barriers.
Conclusions: This is the first study to benchmark
baseline practice of early mobilisation internationally, and it demonstrates
variation in early mobilisation practices between Australia and Scotland.
Critical Care Medicine, September 2015 - Volume 43 - Issue 9 - p 1898–1906
Haneya A et al
Objectives:
Extracorporeal lung support is currently used in the treatment of patients with
severe respiratory failure until organ recovery and as a bridge to further
therapeutic modalities. The aim of our study was to evaluate the impact of
acute kidney injury on outcome in patients with acute respiratory distress
syndrome under venovenous extracorporeal membrane oxygenation support and to
analyze the association between prognosis and the time of occurrence of acute
kidney injury and renal replacement therapy initiation. Design: Retrospective
observational study. Setting: A large European extracorporeal membrane
oxygenation center, University Medical Center Regensburg, Germany. Patients: A total
of 262 consecutive adult patients with acute respiratory distress syndrome were
treated with extracorporeal membrane oxygenation between January 2007 and
May 2012.