Study 2 extends the findings of Study 1 in two important ways. First, we examined whether high-density networks are indeed more likely to provide support that addresses issues particularly relevant to prevention self-regulation, whereas low-density networks are more likely to provide support that addresses issues particularly relevant to promotion self-regulation. Thus, we measured the extent to which participants obtain prevention-serving versus promotion-serving support from their social networks. In light of the findings from Study 1, we further predicted that prevention-serving support would be positively associated with life satisfaction among individuals with high prevention effectiveness, whereas promotion-serving support would be negatively associated with life satisfaction among individuals with low promotion effectiveness. We conducted moderated mediation analyses to test whether prevention-serving support mediates the high-density network effect among prevention effective individuals and whether promotion-serving support mediates the low-density network effect among promotion ineffective individuals.
Second, we extended Study 1 by including a comprehensive set of individual difference variables that have previously been shown to be important in the social network literature. First, we included self-monitoring, which captures the difference between individuals who are attuned and responsive to the situated expectations of others and those who insist on being themselves despite current social expectations (Snyder 1974; Snyder and Gangestad 1986). Past research has argued that network variables should display stronger predictive power among high self-monitors in predicting instrumental outcomes, such as job performance (e.g., Mehra et al. 2001). However, we did not expect self-monitoring to moderate the effect of network density on well-being outcomes.
Another individual difference variable included in this study is need for closure. Some evidence has shown that need for closure affects people’s perception of network structure (Flynn et al. 2010). In that study, individuals with a high need for closure were more likely to assume that their social contacts were connected to each other. Including the need for closure measure addressed the possibility that our network density results were driven not by actual differences in network density but by perceived connectedness stemming from individual differences in need for closure.
Finally, another concern is that our findings relating regulatory focus to life satisfaction through network density might be due to associations between chronic regulatory focus and other general personality traits. For example, past research has shown that Big-Five personality variables are strongly associated with life satisfaction (e.g., DeNeve and Cooper 1998; Diener and Lucas 1999; Schimmack et al. 2004). Moreover, regulatory focus critically mediates the relationship between personality traits and various satisfaction indexes (see Lanaj et al. 2012). Controlling for the Big-Five personality factors can thus further clarify whether the observed effects of self-regulation effectiveness on life satisfaction are unique to regulatory focus and not redundant with personality. We therefore controlled for the Big-Five personality factors: extraversion, emotional stability, openness to experience, agreeableness, and conscientiousness (Goldberg 1992).
Participants and design
Two hundred and fifty-two participants were recruited from a behavioral research lab located in central London, UK (58 % female; Mage = 25.63, SD = 7.54). Of these, 38.9 % were White British, 26.4 % were Asian British (mainly Chinese and Indian), 12.4 % were African British, and the rest were of other ethnicities (mostly European and Middle Eastern). 78 % of the participants were students at universities in central London; the rest held a full-time job in the local area (mainly university staff).
The study consisted of two parts. First, participants completed an electronic survey about their social networks. They were asked to list up to 24 contacts who were important in their social networks and then provide the relevant information following the same format as in Study 1. Second, participants completed a three-section online survey. Section one presented, in random order, the individual difference measures, including regulatory focus orientation, self-monitoring, need for closure, and the Big-Five Inventory. In section two, participants rated the degree to which they obtained both promotion-serving and prevention-serving support from their social networks. Finally, participants rated their general life satisfaction (Diener et al. 1985) on the same three items used in Study 1 and provided demographic information.
Individual difference measures
Big five personality
In measuring the Big-Five Inventory, we used Goldberg’s (1992) terminology. Following the stem “I see myself as …”, participants responded on a 5-point scale ranging from 1 (strongly disagree) to 5 (strongly agree). The scale included 48 items on the following traits: conscientiousness (M = 3.69, SD = 0.53, α = 0.67, 8 items), agreeableness (M = 3.38, SD = 0.57, α = 0.78, 9 items), emotional stability (M = 3.28, SD = 0.81, α = 0.68, 7 items), extraversion (M = 3.20, SD = 0.78, α = 0.81, 6 items), and openness to experience (M = 3.79, SD = 0.62, α = 0.69, 6 items).
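The internal consistencies (α) reported above follow the standard Cronbach's alpha formula, α = k/(k−1) · (1 − Σ var(item)/var(total)). The sketch below is a generic illustration in Python with invented scores, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Two perfectly redundant items yield alpha = 1.0
scores = np.array([[1, 1], [2, 2], [3, 3], [4, 4]], dtype=float)
print(round(cronbach_alpha(scores), 2))  # 1.0
```

In practice, alpha for a 5-point subscale such as those above would be computed on the raw item matrix after reverse-coding any negatively keyed items.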
Need for closure
Participants completed the Need for Closure (NFC) Scale developed and validated by Webster and Kruglanski (1994). Items were rated using a 6-point scale ranging from 1 (Strongly disagree) to 6 (Strongly agree). The NFC Scale consists of 42 items. Two sample items are “I don’t like situations that are uncertain” and “I think it is fun to change my plans at the last moment” (reverse coded). Embedded in the scale are five items assessing social desirability. Following the guidelines outlined by Webster and Kruglanski, we summed a “lie score” for these five items and removed individuals from the sample who received a score of 15 or higher (N = 14).
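The lie-score screening described above amounts to summing the five designated items and removing respondents whose sum reaches the cutoff of 15. A minimal sketch, with invented responses and a hypothetical item layout (the assumption that the lie items occupy the last five columns is purely illustrative):

```python
import numpy as np

# Hypothetical 6-point NFC responses: rows = participants.
# Assume (for illustration only) the last five columns are the lie-scale items.
responses = np.array([
    [4, 5, 3, 2, 3, 2, 2, 2],   # lie score 11 -> retained
    [2, 3, 4, 3, 3, 3, 3, 3],   # lie score 15 -> removed
    [5, 4, 2, 6, 6, 5, 5, 6],   # lie score 28 -> removed
])

lie_scores = responses[:, -5:].sum(axis=1)   # sum the five lie items
retained = responses[lie_scores < 15]        # drop scores of 15 or higher
print(len(retained))  # 1
```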
Self-monitoring
We assessed the participants’ self-monitoring tendencies with the Self-Monitoring Scale (SMS; Snyder 1974). It consists of 25 self-descriptive statements intended to capture several elements of social adroitness, including concern with situational appropriateness, attention to social cues, and ability to control expressive behavior. Each of the items (e.g., “I’m not always the person I appear to be.”) was rated using true or false responses. We summed the responses to create an overall score for self-monitoring (M = 13.17, SD = 3.61).
We controlled for participants’ age, sex, and employment status (1 = fulltime employed, 0 = otherwise). ANOVA revealed that African British participants reported a significantly lower level of life satisfaction than White British, Asian British, and participants of other ethnic categories, F(1, 252) = 4.88, p < .003. Therefore, in the main analysis we included African British as an ethnicity control (1 = yes, 0 = no).4
Predicting life satisfaction
Given that more than three-quarters of the participants in Study 2 were full-time students, job features were no longer the potential confounding factor they had been in Study 1. Breaking network density into within-, between-, and outside-organization categories was thus unnecessary for participants who were not embedded in a work organization. We therefore tested our hypotheses using only the overall network density score. Table 2 summarizes the descriptive statistics and zero-order correlations among the variables. Table 4 (see Appendix) summarizes the regression results.
First, we regressed life satisfaction on network density, controlling for each participant’s sex, age, ethnicity, employment status, and network size (Table 4 in Appendix, Model 1). Consistent with Study 1, higher network density had a significant positive effect (b = .18, p < .04). Among the control variables, younger participants (b = −.05, p < .001) and African British participants (b = −.47, p < .03) reported lower life satisfaction. Next, we added the regulatory focus variables in Model 2. Again consistent with Study 1, both higher promotion effectiveness (b = .55, p < .001) and higher prevention effectiveness (b = .17, p < .02) showed significant main effects on life satisfaction. We also replicated the finding from Study 1 and past research (Ferris et al. 2013) that promotion effectiveness had the larger effect size (Δb = 0.38, p < .05, Cumming 2009).5 In Model 3, we tested the interaction terms between network density and the two regulatory focus measures. Consistent with our hypotheses, we found a positive interaction between network density and prevention effectiveness (b = .18, p < .045) and a negative interaction between network density and promotion effectiveness (b = −.19, p < .022).
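The moderated regression in Model 3 can be sketched as an ordinary least squares fit that includes product terms. The simulation below uses invented data whose generative coefficients loosely echo the reported estimates; it illustrates the model form only and is not a reanalysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated standardized predictors (invented, not the authors' data).
density = rng.standard_normal(n)      # network density
prevention = rng.standard_normal(n)   # prevention effectiveness
promotion = rng.standard_normal(n)    # promotion effectiveness

# Generative coefficients loosely echoing the reported estimates.
y = (0.18 * density + 0.17 * prevention + 0.55 * promotion
     + 0.18 * density * prevention - 0.19 * density * promotion
     + 0.1 * rng.standard_normal(n))

# Design matrix with intercept, main effects, and the two interaction terms.
X = np.column_stack([np.ones(n), density, prevention, promotion,
                     density * prevention, density * promotion])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta[4], 2))  # density x prevention term, close to 0.18
```

In a full analysis the control variables (sex, age, ethnicity, employment status, network size) would be added as further columns of X.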
Figure 3a depicts the interaction effect between prevention effectiveness and network density. It plots the simple slope of network density at two values of prevention effectiveness. Network density showed no effect on life satisfaction among participants with low prevention effectiveness (−1 SD: p = .90), but a significant positive effect among participants with high prevention effectiveness (+1 SD: b = .33, p < .001). In other words, we found a facilitating effect of high network density for individuals with high prevention effectiveness: when embedded in a high-density (vs. low-density) network, these individuals reported significantly higher life satisfaction.
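The simple slopes plotted in Fig. 3a follow from the usual arithmetic, slope(w) = b_focal + b_interaction · w, evaluated at ±1 SD of the standardized moderator. The check below takes the reported interaction of .18 and an assumed mean-level density slope of about .15 (inferred here so that the +1 SD slope matches the reported .33; this value is not stated in the text):

```python
def simple_slope(b_focal: float, b_interaction: float, moderator: float) -> float:
    """Slope of the focal predictor at a chosen (standardized) moderator value."""
    return b_focal + b_interaction * moderator

# b_focal ~ .15 is an inferred value; b_interaction = .18 is as reported.
print(round(simple_slope(0.15, 0.18, +1.0), 2))  # 0.33 at +1 SD
print(round(simple_slope(0.15, 0.18, -1.0), 2))  # -0.03 at -1 SD (near zero)
```

The near-zero slope at −1 SD is consistent with the reported null effect (p = .90) among participants low in prevention effectiveness.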
a The interaction effect between network density and prevention effectiveness on life satisfaction (Study 2). b The interaction effect between network density and promotion effectiveness on life satisfaction (Study 2)
Figure 3b depicts the interaction effect between promotion effectiveness and network density, showing a very different pattern. A test of simple slopes across the two levels of promotion effectiveness revealed a null effect of network density on life satisfaction among participants with high promotion effectiveness (+1 SD: p = .9), but a significant effect of network density on life satisfaction for promotion ineffective participants (−1 SD: b = .34, p < .004). Whereas individuals with high promotion effectiveness seem highly satisfied within both low- and high-density networks, those with low promotion effectiveness suffer in a low-density network. Replicating the pattern of interaction found in Study 1, participants with low promotion effectiveness reported a significantly lower level of life satisfaction (M = 3.82) than participants with high promotion effectiveness (M = 5.26) under low network density (−1 SD), p < .001.
Predicting promotion-serving and prevention-serving support
Next, we tested the effects of network density and regulatory focus on the support functions of social networks. We first regressed perceived prevention-serving support on network density and prevention effectiveness, including the control variables and promotion effectiveness (Table 4 in Appendix, Model 4). Neither prevention effectiveness nor network density had a significant main effect. However, the interaction between the two was significant (Table 4 in Appendix, Model 5, b = .09, p < .043). Network density showed no effect on perceived prevention-serving support among participants low in prevention effectiveness (−1 SD: p = .87), but a significant positive effect among participants high in prevention effectiveness (+1 SD: b = .14, p < .018). Consistent with our hypothesis, individuals with high prevention effectiveness are more likely to report receiving prevention-serving support in a high- (vs. low-) density network (see Fig. 4). In addition, there was a significant main effect of promotion effectiveness (b = .09, p < .014), supporting the earlier conjecture that individuals with high promotion effectiveness would show an optimism bias, seeing the world through rose-colored glasses, and thus generally report a higher level of perceived support.
The interaction effect between prevention effectiveness and network density on prevention-serving support function (Study 2)
We repeated the same steps to test the effects of network density and promotion effectiveness on perceived promotion-serving support, including the control variables and prevention effectiveness (Table 4 in Appendix, Model 6). We found two distinct main effects. Participants with lower network density reported that their social networks provided significantly more promotion-serving support (b = −.19, p < .002). In addition, and independent of network density, participants with high (vs. low) promotion effectiveness reported that their social networks provided significantly more promotion-serving support (b = .21, p < .001), consistent with the notion that individuals high in promotion effectiveness see the world as they want it to be.
Next, we tested the interaction effect between network density and promotion effectiveness. The effect of this interaction on promotion-serving support was only marginally significant (Table 4 in Appendix, Model 7, b = .13, p = .084). This finding has two important implications. First, consistent with the notion of optimism bias, individuals with high promotion effectiveness perceive a high level of promotion-serving support regardless of the density of their networks. Second, given the main effect of network density, it provides the first piece of direct evidence that individuals with low promotion effectiveness also perceive a high level of promotion-serving support in a low-density network. Thus, perceived promotion-serving support is a potential mediator explaining the link between low promotion effectiveness and low life satisfaction in low-density networks. That is, the promotion-serving support found in a low-density network (e.g., “inspiring me to reach my ideal self”, “be creative”) might ironically reduce life satisfaction among individuals with low promotion effectiveness (“I fail in promotion effectiveness even when I am receiving social support to be promotion effective”).
The effect of regulatory focus support functions on life satisfaction
Next, we tested whether the two support functions mediate the interaction effect between network density and regulatory focus effectiveness on life satisfaction. We first established the effect of the support functions on life satisfaction. Model 8 showed a significant main effect of prevention-serving support (b = .31, p < .017), but a non-significant main effect of promotion-serving support (b = .15, p = .102). Model 9 further showed that there was a significant interaction effect between prevention effectiveness and prevention-serving support (b = .16, p < .034), as well as a significant interaction effect between promotion effectiveness and promotion-serving support (b = .23, p < .02). These two interaction terms are the critical mediators in the subsequent analyses.
We conducted further analyses to unpack the nature of the interaction terms. First, prevention-serving support had a significant positive effect on life satisfaction among participants with high prevention effectiveness (+1 SD: b = .28, p < .012), which is what would be expected from a regulatory fit (Higgins 2000). There was no such effect for participants with low prevention effectiveness (−1 SD: p = .63) (see Fig. 5a). On the other hand, Fig. 5b shows that promotion-serving support had a significant positive effect on life satisfaction among participants with high promotion effectiveness (+1 SD: b = .27, p < .024), which is again what would be expected from a regulatory fit (Higgins 2000). In addition, as also shown in Fig. 5b, promotion-serving support had a marginally significant negative effect on life satisfaction among participants with low promotion effectiveness (−1 SD: b = −.19, p = .07). This suggests that promotion-serving support is like a double-edged sword: It is beneficial to individuals with high promotion effectiveness, but detrimental to those with low promotion effectiveness.
a The interaction effect between prevention-serving support function and prevention effectiveness on life satisfaction (Study 2). b The interaction effect between promotion-serving support function and promotion effectiveness on life satisfaction (Study 2)
We first tested whether prevention-serving support mediates the effect of network density on life satisfaction among participants with high prevention effectiveness. We used the bootstrapping method (with 1000 iterations) provided by Preacher et al. (2007) to test this moderated mediation hypothesis. In this analysis, prevention-serving support served as the mediator, prevention effectiveness served as the moderator, and network density served as the independent variable. We controlled for the same list of control variables, promotion-serving support, and promotion effectiveness. We used Model 59 under the PROCESS macro for the mediation analysis, in which prevention effectiveness moderated all three links in the model (i.e., the network density to prevention-serving support effect, the network density to life satisfaction effect, and the prevention-serving support to life satisfaction effect). Figure 6 summarizes this mediation model.
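The conditional (moderated) indirect effect being tested here is (a1 + a3·w)(b1 + b3·w), bootstrapped over resamples of the data. The sketch below mirrors that logic in plain numpy with simulated data and invented effect sizes; it is not the PROCESS implementation, and the a- and b-path coefficients are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulated data with a moderated a-path and b-path (illustrative values only).
w = rng.standard_normal(n)                                  # moderator
x = rng.standard_normal(n)                                  # predictor (e.g., density)
m = 0.3 * x + 0.3 * x * w + 0.5 * rng.standard_normal(n)    # mediator (support)
y = 0.4 * m + 0.3 * m * w + 0.5 * rng.standard_normal(n)    # outcome (satisfaction)

def conditional_indirect(idx: np.ndarray, w_star: float) -> float:
    """(a1 + a3*w)(b1 + b3*w) estimated on the (resampled) rows in idx."""
    xw, mw = x[idx] * w[idx], m[idx] * w[idx]
    Xa = np.column_stack([np.ones(len(idx)), x[idx], w[idx], xw])
    a = np.linalg.lstsq(Xa, m[idx], rcond=None)[0]          # a-path model
    Xb = np.column_stack([np.ones(len(idx)), m[idx], w[idx], mw, x[idx]])
    b = np.linalg.lstsq(Xb, y[idx], rcond=None)[0]          # b-path model
    return (a[1] + a[3] * w_star) * (b[1] + b[3] * w_star)

# 1000 bootstrap resamples of the conditional indirect effect at +1 SD.
boot = [conditional_indirect(rng.integers(0, n, n), +1.0) for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(lo > 0)  # a CI excluding zero indicates mediation at +1 SD
```

A full moderated mediation analysis would also include the control variables in both path models and repeat the CI computation at −1 SD of the moderator.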
Mediation analysis demonstrating that the effect of network density on life satisfaction was mediated by prevention-serving support among high prevention effective participants. Note ***p < .001; **p < .01; *p < .05
The interaction effect between network density and prevention effectiveness became only marginally significant (b = .13, p = .069) after controlling for the mediator (the prevention-serving support × prevention effectiveness interaction), while the effect of the mediator remained significant (b = .17, p < .019). The 95 % bias-corrected confidence interval for the indirect effect of network density excluded zero among participants with high prevention effectiveness (95 % CI [0.0166, 0.1307]), but not among those with low prevention effectiveness (95 % CI [−0.0174, 0.0243]). This analysis thus revealed a significant moderated mediation effect. Consistent with our hypothesis, individuals with high prevention effectiveness showed a significantly higher level of life satisfaction in high-density networks, which are perceived to provide stronger prevention-serving support.
Next, we tested the mediation effect of promotion-serving support. Specifically, we hypothesized that promotion-serving support mediates the effect of network density on life satisfaction among high promotion effective participants. We used the same method as above: promotion-serving support function served as the mediator, promotion effectiveness served as the moderator, and network density served as the independent variable. We controlled for the same list of control variables, prevention-serving support, and prevention effectiveness. Because there was only a main effect of network density on promotion-serving support, we only treated promotion effectiveness as a moderator on the link between promotion-serving support and life satisfaction, and the link between network density and life satisfaction, using Model 15 under the PROCESS macro for the mediation analysis. Figure 7 summarizes the mediation model.
Mediation analysis demonstrating that the effect of network density on life satisfaction was mediated by promotion support among high promotion effective participants. Note ***p < .001; **p < .01; *p < .05...
The interaction effect between network density and promotion effectiveness became only marginally significant (b = −.16, p = .068) after controlling for the mediator (the promotion-serving support × promotion effectiveness interaction), while the effect of the mediator on life satisfaction remained significant (b = .20, p < .02). The 95 % bias-corrected confidence interval for the indirect effect of network density excluded zero among participants with low promotion effectiveness (95 % CI [−0.1312, −0.0028]), but not among those with high promotion effectiveness (95 % CI [−0.0190, 0.0819]). This analysis revealed a significant moderated mediation effect. Consistent with our interpretation of the results of Study 1, individuals with low promotion effectiveness showed a significantly lower level of life satisfaction in low-density networks that are perceived to provide promotion-serving support. That is, despite being in a low-density network that provides support to be promotion effective, these individuals are still promotion ineffective. It is not surprising that this would reduce life satisfaction.
It should also be noted that the hypothesized benefit of the low- (vs. high-) density network for individuals with high promotion effectiveness did not emerge, because these individuals perceived that they were receiving the promotion support they wanted in the high-density network as well. Importantly, this does not mean there was no regulatory fit effect for them. As reported above, there was a significant interaction effect between perceived promotion-serving support and promotion effectiveness: promotion-serving support had a significantly stronger effect on life satisfaction among individuals with high promotion effectiveness.
Other individual difference measures
We repeated the same series of analyses reported above while controlling, in turn, for the additional individual difference variables: the Big-Five personality factors, self-monitoring, and need for closure. The pattern of results remained the same. We did not identify any significant interaction effects involving need for closure or self-monitoring.
We did, however, observe significant main effects of emotional stability (b = .36, p < .001) and conscientiousness (b = .31, p < .001) on life satisfaction, as well as a significant interaction effect between extraversion and network density (b = −.27, p < .009).6 It is worth noting that a large body of research has examined the relationship between the Big-Five personality factors and subjective well-being (e.g., Costa and McCrae 1980; Headey and Wearing 1992; Tellegen 1985; Watson and Clark 1992; see Lucas and Fujita 2000, for a meta-analytic review). Personality variables are usually stronger predictors of affect-based well-being measures than of cognition-based measures such as the Satisfaction with Life Scale (Steel et al. 2008). Because the pattern of network density × regulatory focus interactions remained the same after controlling for the significant personality effects, we are confident that the effects of regulatory focus were not confounded with associated personality factors.
Overall, Study 2 provides further evidence for the interaction effect of regulatory effectiveness and network density on life satisfaction. More importantly, we demonstrated that high- and low-density networks have distinct implications for the two types of self-regulatory effectiveness. High-density networks are better at providing trust and ensuring the fulfillment of obligations, which are essential for the well-being of individuals with high prevention effectiveness. By contrast, low-density networks are better at providing opportunities for creative inspiration and personal development, which can actually reduce well-being among individuals with low promotion effectiveness.
Epic Beaker Clinical Pathology (CP) is a relatively new laboratory information system (LIS) operating within the Epic suite of software applications. To date, there have not been any publications describing implementation of Beaker CP. In this report, we describe our experience in implementing Beaker CP version 2012 at a state academic medical center with a go-live of August 2014 and a subsequent upgrade to Beaker version 2014 in May 2015. The implementation of Beaker CP was concurrent with implementations of Epic modules for revenue cycle, patient scheduling, and patient registration.
Our analysis covers approximately 3 years of time (2 years preimplementation of Beaker CP and roughly 1 year after) using data summarized from pre- and post-implementation meetings, debriefings, and the closure document for the project.
We summarize positive aspects of, and key factors leading to, a successful implementation of Beaker CP. The early inclusion of subject matter experts in the design and validation of Beaker workflows was very helpful. Since Beaker CP does not directly interface with laboratory instrumentation, the clinical laboratories spent extensive preimplementation effort establishing middleware interfaces. Immediate challenges postimplementation included bar code scanning and nursing adaptation to Beaker CP specimen collection. The most substantial changes in laboratory workflow occurred with microbiology orders. This posed a considerable challenge with microbiology orders from the operating rooms and required intensive interventions in the weeks following go-live. In postimplementation surveys, pathology staff, informatics staff, and end-users expressed satisfaction with the new LIS.
Beaker CP can serve as an effective LIS for an academic medical center. Careful planning and preparation aid the transition to this LIS.
Key words: Clinical chemistry, clinical laboratory information system, electronic health records, hematology, medical informatics, microbiology
Laboratory information systems (LISs) receive, store, and manage clinical laboratory data.[1,2,3] Clinical laboratories may use a single LIS or multiple ones depending on scope of laboratory testing and desired functionality. While some clinical laboratories have developed their own LISs, many laboratories utilize commercially available software. Commercial LIS products vary in their capabilities. For example, some LIS vendors offer products for blood bank and transfusion medicine while others do not.
Epic Beaker is a relatively new LIS operating within the Epic suite of software applications (Epic Systems, Inc., Madison, WI, USA). Epic is a commonly used electronic health record (EHR) in the United States, and the use of Epic Beaker has the potential to provide an enterprise-wide solution for the laboratory and hospital information systems. Epic Beaker Clinical Pathology (CP) (referred to as “Beaker CP” hereafter) is available as a separate module for users of the Epic EHR. Epic also markets Beaker anatomic pathology (AP) and software for patient scheduling (Cadence), patient registration (Prelude), and revenue cycle (Resolute) in addition to numerous modules for various clinical specialties. Epic does not currently market an LIS module for blood bank/transfusion medicine.
Unlike some other commercial LIS products, Epic Beaker does not interface directly with laboratory instrumentation. The route for interfacing is via middleware software, principally Instrument Manager and Dawning from Data Innovations (Burlington, VT, USA). The reliance of Beaker CP interfacing on middleware from a single vendor raises a potential challenge with respect to instruments that predominantly use middleware from other vendors. In such cases, Instrument Manager can be used to interface data from the other middleware system, sometimes essentially functioning as a simple “pass-through” to transmit data from the other middleware to Beaker. This approach necessarily results in data navigating a complex pathway through two middleware products and often additional hardware and firewalls. The reliance on middleware software may also create future challenges if company mergers or acquisitions impact the middleware vendor.
The alternative approach is to interface the instrument directly to Instrument Manager; however, this may require development of rules within Instrument Manager that in essence mimic the function of the other middleware. This last option may be difficult or even infeasible in some situations, especially if the other middleware product controls complicated functions such as reflex testing, specimen routing, re-runs, or other instrument actions. Troubleshooting rules and interface issues between two middleware systems and an LIS may be very complicated.
In this report, we describe our experience in implementing Beaker CP at the University of Iowa Hospitals and Clinics (UIHC), a state academic medical center, with a go-live of August 2014. The medical center adopted Epic as its EHR in 2009; however, the laboratory continued using its legacy LIS between 2009 and 2014. This report is written approximately 1 year after Beaker CP implementation to provide perspective on both short- and long-term issues. The implementation of Beaker CP at the medical center was concurrent with implementations of Epic Cadence, Prelude, and Resolute.
The institution of this study (UIHC) is a 734-bed tertiary care state academic medical center that includes an emergency room with level one trauma capability, pediatric and adult inpatient units, and multiple Intensive Care Units (ICUs) (neonatal, pediatric, cardiovascular, medical, and surgical/neurologic), as well as primary and specialty outpatient services. The UIHC is the largest teaching hospital in the state of Iowa. Outpatient services are located at the main medical campus, as well as at a multispecialty outpatient facility located three miles away. Smaller primary care clinics are located throughout the local region. Since May 2009, the EHR throughout the healthcare system has been Epic. The LIS for all clinical laboratories (including CP and AP) since 1996 has been Cerner “Classic” (Kansas City, MO, USA). Both EHR and LIS are managed by a consolidated information technology department for the medical center, Health Care Information Systems (HCIS).
Pathology Laboratories and Their Informatics
With respect to pathology testing, a core clinical laboratory within the Department of Pathology provides clinical chemistry, hematopathology, and flow cytometry testing. There are also separate clinical laboratories for AP, blood center, and microbiology/molecular pathology located within the main medical campus. Two critical care laboratories (one located near the main operating rooms and another embedded within the neonatal ICU) perform blood gas and other fast turnaround time testing. Nine months after the implementation of Beaker CP, the critical care laboratory near the operating rooms was relocated to the core clinical laboratory.
Table 1 summarizes the preimplementation interfaces for the clinical laboratory instruments that were ultimately impacted by the switch of the LIS to Beaker CP. Prior to Beaker, there were instruments that interfaced directly with Cerner, as well as others that utilized Instrument Manager or Dawning. The main hematology analyzers used the Molis WAM middleware product (Sysmex Inc., Lincolnshire, IL, USA) to interface the analyzers to the Cerner LIS. Up to 2013, the clinical laboratories used Instrument Manager mainly for interfacing to clinical chemistry and coagulation analyzers. As previously reported, the core laboratory uses extensive autoverification rules for chemistry testing within Instrument Manager.
Scope of Laboratory Information System Project
During the strategic decision-making process related to the change in the LIS, it was known that Cerner was to discontinue support for the Cerner Classic LIS product at the end of 2015. Therefore, LIS replacement needed to occur before that date. Hospital administration had also decided to implement the Epic Cadence, Prelude, and Resolute products, replacing functionality provided by products from other vendors. Ultimately, the decision was reached to implement Beaker CP along with the scheduling, registration, and revenue cycle modules from Epic. The alternative approach of uncoupling the Beaker CP project from the other Epic changes would have necessitated a large amount of work interfacing Beaker with the older revenue system, only for that functionality to be replaced by Epic Resolute within 6 months. Beaker 2012 was the version at go-live. An upgrade to Epic 2014 occurred on May 2, 2015. This was still the Epic version at the time this manuscript was submitted.
UIHC went live with the Epic EHR in 2009. For Beaker, UIHC did not use the Epic “model system” as a foundation, instead using the existing test build from the previous LIS. Some reports, outstanding list views, result entry reports, and order inquiry reports were used for starting the model build, but they were revised based on input from subject matter experts (SMEs).
Replacements for the LISs for AP and blood bank/transfusion center were deferred to October 2015. The primary reasons for deferral of Beaker AP for UIHC were (1) limited integration between Epic OpTime and Beaker AP in Beaker 2012, (2) inability to separate relative value units (RVUs) for multiple pathologists involved in a single case, (3) lack of conditional logic for orders, and (4) no rich text availability in Beaker reports. Beaker 2014 offered a single specimen navigator for microbiology and AP, the ability to separate professional charges within the same case (with the ability to assign RVUs to a pathologist other than the sign-out pathologist), cascading questions in Epic orders, and rich text formatting for Beaker reports. UIHC has not customized any portion of the Beaker CP system. In general, UIHC does not support highly customized systems. Rather, we use standard Epic functionality to ensure that we can take each subsequent Epic software version as it becomes available without the need for ongoing maintenance of customizations.
Sources of Data
This manuscript covers approximately 3 years (the 2 years preceding implementation of Beaker CP and roughly 1 year after). The main source of information was the closure document for the LIS project, which was compiled by the HCIS project manager and finalized on October 31, 2014. This document summarized preimplementation and immediate postimplementation meetings, debriefings, and other data summaries. Additional data came from surveys and interviews of pathology and HCIS staff and management as part of the process to assess areas of success and need for improvement. This project was reviewed by the university Institutional Review Board and determined not to constitute human subjects research.
Table 2 presents a breakdown of the phases for the project. The project initiation (August 1, 2012), with the assignment of a project manager from HCIS, was approximately 2 years before ultimate go-live on August 2, 2014. In October 2012, an LIS executive committee was formed that comprised leadership from pathology (including medical directors and departmental administration), HCIS, and hospital administration. The executive committee made the strategic decision to defer replacement of the LIS for AP and blood bank/transfusion medicine until late 2015.
Project phases and timelines
UIHC generally followed Epic recommendation guidelines (“Flight Plan”) week-by-week through the project. Epic set buckets at 25%, 50%, 75%, and 100% for the build, with dates and deadlines determined for every portion of the project. UIHC used Epic's project implementation plan with four main differences:
First, UIHC completed most of the build prior to workflow validation sessions. The workflow validation sessions were completed using the UIHC build environment, not the Epic model system build. Second, UIHC engaged more SMEs in hands-on testing. Third, UIHC created spreadsheets for all orderables and resultables, and SMEs completed integrated testing for 100% of all orderable and resultable components. Fourth, UIHC held monthly Beaker milestone readiness assessment meetings for all team members. These status meetings brought together personnel from hardware technical operations, desktop support, the LIS build team, the interface team, pathology medical directors, and SMEs to discuss progress and difficulties.
The original plan was for a go-live of Beaker CP, Epic Cadence, Epic Prelude, and Epic Resolute in May 2014. This was delayed primarily to allow additional time to validate the revenue cycle module (Resolute) and, to a lesser degree, to manage the system-wide impacts of switching to the Cadence and Prelude modules. The Beaker CP project was ready to go live in May 2014 but was delayed to follow the original plan of a coordinated go-live for the four Epic modules. Figure 1 depicts a timeline summarizing some of the key activities involved in the implementation of Beaker CP. A key to project success was the early assignment of SMEs and the institution of monthly readiness/milestone assessments.
Timeline for key events in the project
The Beaker CP project involved 1852 orderable laboratory tests with approximately 5660 resultable components. The project utilized 22 full-time equivalent (FTE) SMEs from the laboratory staff, 4 FTEs from the pathology informatics staff, and 9 FTEs from hospital informatics (HCIS). During the project timeline, the hours per week spent on the project by laboratory or HCIS staff varied widely, from approximately 40 h/week during the heaviest validation periods to approximately 4 h/week during the lightest.
Another key to success was regular engagement of the executive committee in monitoring project progress and deferring some items to post go-live. Examples included postponement of some instrument interfacing, particularly in laboratory areas such as immunopathology (located within the section of AP at UIHC) that had no prior experience with middleware but had some testing (e.g., antinuclear antibody immunofluorescence) migrating to Beaker CP. These decisions used the guiding principle of which issues were go-live critical versus not.
Interfacing of Instruments and Autoverification Rules
As noted in the introduction, Beaker CP does not directly interface with laboratory instrumentation and instead utilizes middleware software from Data Innovations. Table 1 shows the pre-Beaker CP state of the clinical laboratories with respect to instrument interfaces. Depending on instrumentation, there were direct interfaces to the previous LIS (Cerner) or to one of four middleware products (Instrument Manager; Dawning; Molis WAM; or RALS for blood gas analyzers [Alere Informatics, Livermore, CA, USA]). Multiple instruments within the microbiology/molecular pathology laboratory were not interfaced prior to Beaker CP. There were detailed rules in middleware, including for autoverification, for the main chemistry (Roche Diagnostics; interfaced to Cerner via Instrument Manager) and hematology automation lines (Sysmex; interfaced to Cerner via Molis WAM).
In general, the issue of instrument interfacing required extensive resources and discussion between our institution, Epic, Data Innovations, and various instrument vendors, particularly since we were a relatively early adopter of Beaker CP. After deliberation, the LIS executive committee and the medical directors of the clinical chemistry and hematopathology laboratories decided to interface all core laboratory instruments with Instrument Manager, which included the discontinuation of Dawning (urinalysis instruments) and replacement of the interface functionality of Molis WAM (hematology line), as well as the use of Instrument Manager instead of RALS for interfacing of blood gas analyzers (Radiometer; Copenhagen, Denmark; note that interfacing of Radiometer instruments also involves the Radiance software from Radiometer) within the two critical care laboratories. The RALS (Alere) middleware system continued to be used for the Roche Diagnostics Accu-Chek glucose meters.
The most challenging change at UIHC was the development of extensive rules within Instrument Manager to replace the instrument to LIS interface functions of Molis WAM. This switch required significant time investment but ultimately has reduced the number of middleware systems that clinical laboratory personnel need to navigate in managing chemistry and hematology testing. This switch benefited from long-term experience and expertise within the Department of Pathology with Instrument Manager. The alternative approach would have been to use Instrument Manager as a “pass-through” to transmit data from Molis WAM to Beaker. This approach, while certainly viable, carries the challenge of a more complex pathway for data along with additional hardware and firewalls.
In general, autoverification rules were kept within Instrument Manager since these had been extensively developed and utilized for years prior to switching the LIS. Autoverification rules also exist within Beaker, but they simply autoverify what Instrument Manager sends. Autoverification is configured at the test level, meaning there is a Beaker rule for each test specifying when to autoverify. For example, the rule for serum potassium checks for a numeric or text result and then autoverifies. More complexity is introduced with panels of tests such as the basic metabolic panel. Currently, there is no way to indicate in the interface message to Beaker whether a result should be autoverified.
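The test-level gating just described can be sketched in a few lines of Python. This is a hypothetical illustration only: the function name, the accepted text results, and the numeric limits are assumptions, not UIHC's actual Instrument Manager or Beaker rules.

```python
# Hypothetical sketch of a test-level autoverification gate, modeled on
# the serum potassium example above. All names and thresholds are
# illustrative, not actual UIHC rules.

ALLOWED_TEXT = {"Hemolyzed", "Icteric", "Lipemic"}  # assumed text results

def autoverify_potassium(result, flags=()):
    """Return True if the result may post without technologist review."""
    if flags:  # any instrument flag forces manual review
        return False
    try:
        value = float(result)  # numeric result path
    except (TypeError, ValueError):
        # text results are allowed only from a known interference list
        return result in ALLOWED_TEXT
    # numeric results must fall inside an assumed analytically valid range
    return 1.0 <= value <= 9.9  # mEq/L, illustrative limits

print(autoverify_potassium("5.4"))                    # True
print(autoverify_potassium("Hemolyzed"))              # True
print(autoverify_potassium("5.4", flags=("delta",)))  # False
```

A panel such as the basic metabolic panel would require one such rule per component plus panel-level coordination, which is where the added complexity noted above arises.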
For some instruments, interfacing did require use of two middleware products [Table 1]. For example, some of the instruments within the microbiology/molecular pathology laboratory utilize EpiCenter, a middleware product from Becton Dickinson (Franklin Lakes, NJ, USA) that manages microbiology data. Instrument Manager was used to pass data from EpiCenter to Beaker while retaining all of the functionality of EpiCenter.
Table 3 summarizes the major challenges encountered in the project. In the preimplementation phase of the project, there were four major technical challenges. The first was that major workflow changes were needed for ordering of microbiology testing. This had substantial impact on the operating rooms and is discussed in more detail in the next section. The second challenge related to obtaining and validating drivers for Instrument Manager to allow for interfacing of all the instruments listed in Table 1. As Beaker CP was a relatively new LIS, drivers did not exist in some cases and had to be developed. This required navigating multiple vendors but was ultimately successful.
The third major challenge related to laboratory-initiated orders (LIOs) for samples that came to the laboratories and required laboratory staff to generate orders within the LIS. The process for generating LIOs within Beaker CP was more time-consuming relative to the previous LIS, although intuitively more straightforward for staff with less experience. For several years prior to Beaker go-live, the medical center had been steadily converting inpatient units and clinics to "paperless" ordering, in which the clinical area directly places an LIS label readable by the laboratory instrumentation on the specimen. This replaced a previous system in which generic patient identification labels were placed on specimens that were accompanied by paperwork for the laboratory orders. With the transition to Beaker CP, essentially all remaining clinics and inpatient units completed the transition to paperless ordering and collection. An additional challenge with Beaker CP was that the date and time of specimen collection needed to be entered by the person collecting the specimen; if this is not done, processing in the laboratory is delayed.
The fourth major challenge impacted the core laboratory. The previous LIS (Cerner) had the flexibility to allow either numeric or alphanumeric results in the same result field. This was used extensively in the core chemistry laboratory to allow for a result of “Hemolyzed,” “Icteric,” or “Lipemic” when indices for these interference parameters exceeded tolerance thresholds for individual tests. When interferences prevented a numeric result from being reported, billing was also credited for the test. Thus, a given chemistry test could output a numeric result (which could be trended or graphed in the EHR) or one of these text results. Beaker CP only permitted a given laboratory-related result field to be either numeric or text (alphanumeric), but not both. The default process for resulting a quantitative test canceled due to interference would be to display “...” in Epic Results Review and thus require additional mouse clicks to see the specific reason for test cancelation. This option was felt to be a step back in customer service since providers were now used to seeing results such as “Hemolyzed” when scanning laboratory values in Epic Result Review (this was the process since 2009). The compromise solution was to create duplicate laboratory-related result fields for most automated chemistry tests (one numeric and one alphanumeric). This required additional complexity within Instrument Manager to determine which result to send to Beaker CP. For instance, for a plasma potassium result determined to be 5.4 mEq/L and absent any flags preventing verification, the numeric result was sent to Beaker and the alphanumeric result suppressed. Conversely, a test canceled due to hemolysis would have the numeric result suppressed from transmission and an alphanumeric result of “Hemolyzed” sent to the EHR. 
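The field-routing decision described for the duplicated chemistry result fields can be approximated by a small sketch. The function name and return convention are illustrative assumptions; the real logic lives in Instrument Manager rules.

```python
def route_chemistry_result(value=None, interference=None):
    """Return (numeric_field, text_field); exactly one is populated and
    the other is suppressed (None), as in the potassium example above.
    Illustrative sketch only, not the actual Instrument Manager rule."""
    if interference is not None:
        # e.g., test canceled due to hemolysis: suppress the numeric field
        return (None, interference)
    # clean result: send the numeric field, suppress the text field
    return (value, None)

print(route_chemistry_result(value=5.4))                 # (5.4, None)
print(route_chemistry_result(interference="Hemolyzed"))  # (None, 'Hemolyzed')
```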
One side benefit of this complexity was the possibility of resulting both the numeric and alphanumeric fields in cases where the numeric result should post in the chart but the interference result should show as well. This was done postimplementation for a select number of tests with specific ranges of icterus (high bilirubin), an interference that can be very difficult to avoid in patients with ongoing hepatobiliary disorders. Rather than repeatedly canceling tests in patients whose bilirubin cannot be easily lowered, providers could then readily see in Epic Results Review a numeric result along with the "Icteric" flag indicating that icterus was in a range that could possibly affect test results. Rules within Instrument Manager and Beaker can control the precise conditions for this type of arrangement.
One initial challenge for the laboratory was reconstitution of the master "organism list" and construction of a "tree"-based workflow. Beaker currently allows for 2¹⁶ − 1, or about 64,000, potential results. Organism names are concrete concepts in Beaker. Even though the number of organism names was far below 64,000 in our initial design (about 2500 in our legacy Cerner build), the use of the same organism name in multiple discrete test results (e.g., due to different specimen sources or procedures) can combinatorially exceed 64,000. Therefore, a review of the Bruker BioTyper (Bruker, Billerica, MA, USA) database was performed; redundant organisms (e.g., individual species in the Pseudomonas aeruginosa or Bacteroides fragilis groups) were deleted, and the organism list was pared to about 1700 clinically significant organisms. The build process also avoided combinations of organisms and specimen sources or procedures that are irrelevant. These measures in total avoided exceeding the 64,000 test result limit for the microbiology organism build.
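A back-of-the-envelope check illustrates why the paring mattered. The organism counts (about 2500 legacy, about 1700 pared) come from the text; the number of specimen source/procedure contexts per organism is an assumed figure chosen purely for illustration.

```python
LIMIT = 2**16 - 1        # 65,535: the result-record ceiling in Beaker
ORGANISMS_LEGACY = 2500  # approximate legacy Cerner organism list
ORGANISMS_PARED = 1700   # pared, clinically significant list
CONTEXTS = 30            # assumed sources/procedures per organism

print(ORGANISMS_LEGACY * CONTEXTS > LIMIT)  # True: legacy list overflows
print(ORGANISMS_PARED * CONTEXTS > LIMIT)   # False: pared list fits
```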
A second challenge was construction of “trees,” which are the structures that Beaker uses to guide laboratory workflow. We chose to keep trees straightforward and flexible, with most beginning with a morphologic indication and ending with mass spectrometric identification: staphylococcal, streptococcal, enteric Gram-negative, fastidious Gram-negative (does not grow on MacConkey agar), and yeast. Our medical center microbiology laboratory is a sentinel laboratory, and category A agents must not be worked up in a routine “tree” to avoid use of automated instruments. We therefore constructed a separate “Bacillus” tree to rule out Bacillus anthracis (causative agent of anthrax) by hemolysis and motility, and a “Gram stain” tree that captures other category A agents as well as other organisms that do not fit well into other trees. The trees at our medical center are thus unique and operate differently from stock Beaker CP functionality, which is intended to be a stepwise advancement down the trees until a terminal identification is made. Our medical center's Beaker trees are constructed as category lists, allowing technologists to rapidly move through trees, e.g., by performing a matrix-assisted laser desorption/ionization time-of-flight identification and skipping to the end of a tree without clicking through intermediate steps.
Additional issues encountered included creation of a traceable mechanism for banking isolates, resulting Gram stains within cultures rather than as a separate “test,” “floating” key results such as the Gram stain to make sure they follow a culture-in-progress in the Epic EHR, and patching over an inability to reorder isolates by clinical significance by initiating cultures with five placeholder isolates at a time so that “mixed flora” was always placed last.
Beaker was unable to issue quantitative results (e.g., “100,000 colony-forming units [CFU]/mL” or “100,000 CFU/g”). We therefore created “category lists” for urine cultures that result greater than, less than, or between threshold amounts. This allowed the >100,000 CFU/mL category that meets the Centers for Disease Control and Prevention National Healthcare Safety Network catheter-associated urinary tract infection standard to be passed to infection control through an interface through TheraDoc (Premier, Inc., Charlotte, NC, USA). Other quantitative cultures required a custom solution that may not be applicable to other sites; for example, numbers in quantitative cultures were set up as a new result type to which units are added when displayed in the Epic EHR.
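The category-list substitute for quantitative urine culture results reduces to a threshold mapping, sketched below. The 10⁴ and 10⁵ CFU/mL breakpoints are the conventional cutoffs; the actual category list in Beaker is site-specific.

```python
def urine_culture_category(cfu_per_ml):
    """Map a colony count to a discrete reporting category.
    Breakpoints are the conventional cutoffs, used here for illustration."""
    if cfu_per_ml >= 100_000:
        # meets the NHSN catheter-associated urinary tract infection threshold
        return ">100,000 CFU/mL"
    if cfu_per_ml >= 10_000:
        return "10,000-100,000 CFU/mL"
    return "<10,000 CFU/mL"

print(urine_culture_category(150_000))  # >100,000 CFU/mL
print(urine_culture_category(50_000))   # 10,000-100,000 CFU/mL
```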
Automatic workflow optimizations included the ability to enter standard rejection criteria as “growth” values in sputum cultures and application of the appropriate charges. For inducible clindamycin resistance, a generic mechanism was created to remove susceptible results if isolates were D-test positive. Cryptococcal antigen titers (IMMY CrAG; IMMY Inc., Norman, OK, USA) were automatically calculated from dilution series and normalized to a predicate method (CALAS; Meridian Life Science Inc., Memphis, TN, USA) by calculations internal to Beaker.
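Generic titer logic of the kind computed internally by Beaker can be sketched as taking the highest dilution that remains positive. The function below is a hypothetical illustration and omits any normalization factor to the predicate CALAS method.

```python
def titer_from_dilutions(dilution_results):
    """Return the reciprocal titer from a dilution series.

    dilution_results: list of (dilution_factor, positive) tuples in
    order of increasing dilution, e.g., (8, True) for a positive 1:8.
    Hypothetical sketch; not the actual Beaker-internal calculation.
    """
    titer = None
    for factor, positive in dilution_results:
        if not positive:
            break  # series has turned negative; stop at the last positive
        titer = factor
    return titer  # e.g., 8 means a titer of 1:8

print(titer_from_dilutions([(2, True), (4, True), (8, True), (16, False)]))  # 8
```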
Finally, quality control optimizations were made to the result review process. Specimens could be assigned to one or multiple benches for ease of follow-up (e.g., stat-testing bench and bacteriology bench). Further, after final supervisor review, results could be sent back to individual benches for revision; in this case, specimens with review status set to pending/preliminary were highlighted in pink to assign them high priority and for special attention to quality control.
Because the previous LIS was not interfaced directly to the Epic EHR, microbiology reports were not as cleanly formatted as those interfaced from Beaker CP. Microbiology reports became more effective, as illustrated by the common example in Figure 2. Documentation of critical result reporting in Beaker is displayed prominently, with less critical data related to the specimen placed behind a hyperlink.
Examples of some differences in microbiology reporting in the Epic electronic health record between the previous laboratory information system (Cerner Classic) and Beaker Clinical Pathology. (a) A Cerner Classic report of a positive blood culture for...
Issues Encountered Soon After Implementation
The major issues encountered shortly after implementation were difficulty reading barcodes on labels and problems with the workflow for microbiology orders from the operating rooms. Barcodes were heavily tested as part of mapped record testing by the SMEs. However, one of the main challenges with paperless ordering was improper label placement by nonlaboratory staff. At times, a high percentage of labels created scan errors on various core laboratory analyzers, requiring laboratory staff to intervene and manually process orders. Compounding the challenge, the move to paperless ordering also meant that a large amount of hardware (e.g., label printers) was new to many clinical areas. This led to the adoption of "notched" labels that aided in proper alignment of the label on the tubes. In addition, the size of the barcode was reduced to increase readability on laboratory instruments. Intermittent problems with label ink density also resulted in scan errors. Overall, issues with labels required intensive troubleshooting in the short term but were mostly resolved within 2 weeks post go-live. Coupled with ongoing education, the notched labels reduced errors associated with barcode reading failures.
In microbiology, the operating rooms had maintained a paper-order workflow even as the rest of the institution steadily converted to paperless ordering from 2010 to 2014. Paper orders allowed many different order types to be placed very quickly on a single specimen (e.g., with a single handwritten description of the specimen or by checking boxes on a single sheet of paper) and without the need for computing resources in the operating rooms. Under Beaker CP in UIHC's Epic build, commonly available specimen sources are available in OpTime and are matched with commonly ordered tests (e.g., the "wound, intra-operative" source has aerobic, anaerobic, acid-fast bacillus, and fungal cultures available); other, uncommon sources and tests require one to leave OpTime and order tests in the Epic EHR. This resulted in a slower workflow that necessitated the following: (1) a new requirement to understand which source to select for a given specimen, (2) in-room computing resources and temporary disengagement of the scrub nurses from the case to enter orders, and (3) a series of keystrokes and clicks that added time to orders in a way that initially meshed poorly with an ingrained workflow that budgeted less time for this procedure. The surgical staff, especially the nursing staff, needed to be trained to generate orders and labels in the new system. Paper was widely regarded as more rapid, and an initial adjustment period of about a month was necessary before the new system was no longer regarded as critically disruptive to workflow.
Benefits and drawbacks of the collaborative workflow were similar to those generally observed in the transition from paper-based to computerized provider order entry systems. However, because of the closed-ended nature of the ordering system (as opposed to paper sheets, which invited source- and transport-inappropriate ordering and subsequent cancelation of tests in the laboratory), fewer ordering mistakes are now made, and internal audits show operating room microbiology ordering to be approximately 99% accurate.
Two additional minor issues arose postimplementation. One involved "specimen locking," a phenomenon whereby a result cannot file because someone else is using Beaker CP for the same patient. This happens when another user is in an orders activity (new orders, order inquiry, or order review) for the same patient accession number. In microbiology, specimen locking can occur when two people try to enter susceptibilities on the same accession number at the same time. The error message gives the user and workstation where the specimen is locked. The biggest impact on the UIHC laboratory was the creation of interface errors that prevented autoverified results transmitted from Instrument Manager from posting in Beaker. Laboratory staff would not realize results had not posted in Beaker CP unless they accessed overdue (late) reports within Beaker. A less common, but potentially even more clinically significant, downstream effect of this phenomenon interrupted reflex orders that followed from an initial laboratory result (e.g., an HIV Western blot order following a positive HIV screen). The solution was to modify the interface to create batch jobs that resend results or reflex orders that did not initially transmit due to specimen locking.
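The batch-job workaround for specimen locking amounts to a retry loop over interface messages that failed to file. The sketch below is an assumption about the general shape of such a job; `post_result` is a hypothetical stand-in for the actual interface call.

```python
def resend_locked_messages(failed_queue, post_result):
    """Retry interface messages that failed to file (e.g., due to
    specimen locking). `post_result` is a hypothetical stand-in for the
    interface call and returns True on success. Messages that fail
    again are kept for the next batch run."""
    still_failed = []
    for message in failed_queue:
        if not post_result(message):
            still_failed.append(message)
    return still_failed

# Illustrative run: the second message's specimen is still locked.
remaining = resend_locked_messages(
    ["K 5.4 on ACC123", "HIV reflex on ACC456"],
    post_result=lambda msg: "ACC456" not in msg,
)
print(remaining)  # ['HIV reflex on ACC456']
```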
The second relatively minor issue related to calculations: Beaker CP was more sensitive to the order in which tests were resulted. An example was the urine protein/creatinine ratio. In the previous LIS, the calculation occurred regardless of when the two tests were resulted. In Beaker CP, the calculations "expected" a certain order. This necessitated reworking of the calculation parameters and creation of reports to identify instances where the calculations failed because the order of results was not standard.
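An order-insensitive calculation fires only once both components have resulted, regardless of arrival order, as in this illustrative sketch (the result names and units are assumptions):

```python
def protein_creatinine_ratio(results):
    """Return the urine protein/creatinine ratio once both components
    are present, in any arrival order; otherwise return None to wait.
    Illustrative sketch; result names and units are assumptions."""
    protein = results.get("urine_protein")        # mg/dL
    creatinine = results.get("urine_creatinine")  # mg/dL
    if protein is None or creatinine is None or creatinine == 0:
        return None  # a component is missing (or division is undefined)
    return round(protein / creatinine, 2)

print(protein_creatinine_ratio({"urine_creatinine": 100}))  # None
print(protein_creatinine_ratio(
    {"urine_protein": 30, "urine_creatinine": 100}))        # 0.3
```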
Pathologist Sign-out Within Beaker Clinical Pathology
The switch to Beaker CP impacted pathologist sign-out in multiple areas, including hematology (excluding AP-like activities such as bone marrow core biopsies), hemoglobin electrophoresis, and serum/urine protein electrophoresis. As an example of pathologist sign-out, serum and urine protein electrophoresis are performed by capillary electrophoresis. Interpretive reports were generated within the Phoresis software package for the Capillarys 2 analyzer (Sebia, Norcross, NC, USA) and then transmitted via Instrument Manager to Beaker. An improvement over Cerner was that text formatting was transmitted accurately to Beaker (e.g., carriage returns were lost when attempting the same process in Cerner), along with the numeric electrophoretic fractions and the concentration of the monoclonal protein (if present). Beaker allows for different security levels for trainees (e.g., pathology externs, residents, and fellows) with respect to sign-out. For serum/urine electrophoresis, residents participate in the sign-out process but cannot final verify the results; that function is reserved for attending pathologists. For other functions, such as hematology smear review, residents can final verify results. Much of the sign-out process occurs via the outstanding lists, which can be customized to show specific categories of tests. An advantage of Beaker CP was the ability to access clinical information on a given patient in Epic during the same session.
The switch to Beaker CP maintained two-way electronic interfaces to three commercial laboratories that served as the major destinations for send-out testing. There were few problems associated with the interfaces, and very few orders require retransmission of completed results from the external reference laboratories. Compared to interfacing using the previous Cerner LIS, there were a few minor challenges. First, 24-h urine collection information must be entered and saved in specimen update prior to electronically sending the order; this information is not available to the ordering provider until the order is finalized. Second, code in Cloverleaf (Infor, New York, NY, USA; interface to Epic) had to be written for cases in which two different orders from a reference laboratory could share the same test code (e.g., random and 24-h urine tests). This was accomplished, and results from both orders can transmit from reference laboratories to Beaker. Beaker was able to handle reference laboratory tests with many discrete results; examples include certain metabolic panels (e.g., amino acid profiles) and complex toxicology profiles.
Epic Reporting Workbench functionality allowed for a variety of reports to be constructed either for recurrent or ad hoc use. Examples include display of blood culture results for microbiology, monitors of turnaround time for key parameters (e.g., complete blood count, basic metabolic panel, troponin T, coagulation), phlebotomy collect to laboratory receive time, and critical results missing documentation in communication log. One key advantage for staff was that integration within Epic allowed for quick navigation between Beaker and the EHR.
Training Requirements and Impact on Clinical Laboratory Operations and Quality Metrics
Table 4 summarizes the specific training (including number of hours) for the various staff categories that worked with Beaker CP. The training was customized to the specific functionality used by staff. The most extensive training was for phlebotomists, clerks, medical laboratory technicians, and laboratory managers and amounted to slightly more than 12 h. The training was in addition to any time spent by some staff as SMEs or for work on project validation tasks.
Beaker Clinical Pathology training for staff
The Department of Pathology routinely monitors quality metrics from the clinical laboratories, including turnaround time and critical value reporting. In general, these metrics were either unchanged or improved following the transition to Beaker [Figure 3 shows six examples]. In the core laboratory, turnaround times for common photometric and ion-selective electrode chemistry tests (e.g., electrolytes, liver enzymes), immunoassays (e.g., troponin T), and hematology tests (e.g., complete blood count, prothrombin time) were unaffected by the switch in LIS. Figure 3a shows turnaround time metrics for chemistry and hematology testing in the UIHC core laboratory and in a smaller laboratory associated with a multispecialty outpatient facility located three miles from the main UIHC campus. Figure 3b shows critical value reporting metrics for the UIHC core laboratory. Rates of autoverification were similar to those reported in a prior publication with the previous LIS.
Quality metrics before and after switch to Beaker Clinical Pathology. (a) Turnaround time metrics in the University of Iowa Hospitals and Clinics core laboratory and also the smaller clinical laboratory at a multispecialty outpatient facility located...
Post Go-live Optimization
As part of post go-live optimization, pathology, hospital, and vendor staff conducted end-user surveys, tours, and interviews of laboratory staff and management. Two surveys assessing multiple aspects of Beaker CP were conducted at 1–2 months (n = 49 respondents) and 3–4 months (n = 105 respondents) following go-live and focused on the following areas (all scores are mean ± standard deviation): LIS support (1–2 months: 3.4 ± 0.2; 3–4 months: 3.6 ± 0.6; 5-point scale), LIS ease of use (1–2 months: 2.7 ± 0.4; 3–4 months: 2.8 ± 0.4; 4-point scale), LIS training (1–2 months: 2.4 ± 0.1; 3–4 months: 2.6 ± 0.1; 4-point scale), LIS efficiency (1–2 months: 3.2 ± 0.7; 3–4 months: 3.2 ± 0.6; 4-point scale), LIS impact on patient care (1–2 months: 3.1 ± 0.5; 3–4 months: 3.2 ± 0.4; 4-point scale), and overall satisfaction (1–2 months: 5.9 ± 1.3; 3–4 months: 6.2 ± 1.5; 10-point scale). Training was rated the lowest of these areas, with many user comments relating to lack of specificity of training to particular jobs and/or a desire for more hands-on practice to complement classroom learning. Many of the other comments highlighted issues described above, such as challenges with clinical areas not following the desired workflow.
In terms of costs of ownership, instrument interfaces using Instrument Manager are considerably less expensive than instrument interfaces involving the previous LIS. Report writing is now internally funded as opposed to requiring outside vendor costs. UIHC owns an Epic enterprise-wide license which encompasses maintenance and support of all aspects of Epic. Therefore, we are not able to make direct comparison of these costs relative to the previous LIS.
This report describes the successful implementation of Beaker CP at an academic medical center. Beaker is a relatively new LIS that operates within the Epic suite of software, an example of using a single vendor for LIS and EHR software. Given that Epic is a common EHR within the United States, clinical laboratories adopting Beaker will likely do so in two main scenarios: the hospital/medical center already has Epic as the EHR (as in the current study) and is adding Beaker, or it is doing a coordinated switch of the EHR and LIS at the same time. In the current study, the medical center also switched to the Epic modules for revenue cycle (Resolute), scheduling (Cadence), and patient registration (Prelude).
The preimplementation phase for the switch to Beaker encompassed approximately 2 years and involved substantial effort by SMEs, especially for the microbiology test build and expansion of the use of the Data Innovations Instrument Manager middleware product. For laboratories that are planning a switch to Beaker but are not currently using Instrument Manager, or use it only minimally, sufficient time and effort should be allotted to the issue of interfacing. Another key decision point is where to build autoverification rules. Our institution elected to keep most rules within Instrument Manager, but other sites may find it easier to use Beaker for this function.
Some of the major challenges encountered in the short term following go-live were ones that could accompany any LIS switch, such as problems with printer labels and process changes with LIOs. For UIHC, even though Beaker bar codes were heavily tested prior to go-live, the deployment of a large amount of new hardware (particularly label printers) to clinical areas created significant challenges for nonlaboratory staff. The change with the longest-lasting impact related to microbiology orders from the operating rooms. The switch from a paper-based order process with the previous LIS to one that required clinical teams to order within Epic was very challenging and required intensive interventions. Institutions adopting Beaker as their LIS in the future should evaluate this issue carefully with regard to process design and education.
In this project, we found that an LIS executive committee with diverse representation of leadership from pathology, medical center informatics, and hospital administration was very helpful in keeping the overall project on target. Key decisions included postponing the Beaker CP project due to delays in the revenue cycle project and deferring some noncritical issues (e.g., interfacing of certain lower-volume instruments) until after go-live. Open communication among all groups impacted by changes in the LIS helped to continually evaluate what was "go-live critical" and to avoid mission creep that could interfere with key milestones. From end-user surveys, we also identified training of laboratory staff in Beaker CP as the area rated lowest in terms of overall satisfaction; this is an area for future improvement.
We assessed a variety of quality metrics before and after the transition to Beaker CP. Key metrics included turnaround time and critical value reporting in the core laboratory and in a laboratory at a separate outpatient facility. In general, quality metrics stayed the same or improved upon the transition to Beaker CP. Overall, the switch to Beaker CP at our institution was a positive one, and the system continues to function well more than 1 year after go-live.
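A before/after turnaround-time (TAT) comparison of the kind described above can be sketched as follows; the timestamps and the 60-minute target are invented for illustration, and real analyses would draw receive/verify timestamps from the LIS database.

```python
# Hypothetical sketch of computing a turnaround-time (TAT) quality metric
# from specimen receive and result verify timestamps.
from datetime import datetime
from statistics import median

def tat_minutes(received_ts, verified_ts, fmt="%Y-%m-%d %H:%M"):
    """TAT in minutes between receive and verify timestamps."""
    delta = datetime.strptime(verified_ts, fmt) - datetime.strptime(received_ts, fmt)
    return delta.total_seconds() / 60

# Invented example data: (received, verified) pairs for one test.
samples = [
    ("2016-05-01 08:00", "2016-05-01 08:42"),
    ("2016-05-01 09:10", "2016-05-01 09:45"),
    ("2016-05-01 10:05", "2016-05-01 11:02"),
]

tats = [tat_minutes(r, v) for r, v in samples]
pct_within_target = 100 * sum(t <= 60 for t in tats) / len(tats)

print(f"median TAT: {median(tats):.0f} min")
print(f"within 60-min target: {pct_within_target:.0f}%")
```

Computing the same summary statistics for pre- and post-transition periods allows a straightforward comparison of whether the LIS switch affected laboratory performance.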
Financial Support and Sponsorship
Conflicts of Interest
There are no conflicts of interest.
The switch of an LIS is a major undertaking and requires the combined effort of many people from different teams. While not an all-inclusive list, the authors would like to especially thank the Executive Oversight Committee (Lee Carmen, Doug Van Daele, Annette Schlueter, Michael Knudson, Ken Fisher, Amy O’Deen, Eric Edens, Rose Meyer) as well as team members from HCIS (Karmen Dillon, Nick Dreyer, Rick Dyson, Steve Meyer, Jason Smith, Mary Heintz, Patrick Duffy, Kathy Eyres, Peter Kennedy, Bob Stewart, Mary Jo Duffy, Elizabeth Lee, Cass Garrett, Dean Aman, Julie Fahnle, Jeanine Beranek, Christine Hillberry, Sharon Lyle, Shel Greek-Lippe, Kurt Wendel, Brian Hegland, Jeffrey Smith, Tom Alt, Tony Castro, Steve Niemela), the Department of Pathology (Jeff Kulhavy, Sue Lewis, Connie Floerchinger, Lisa Horning, Heidi Nobiling, Bob Rotzoll, Josh Christain), and Epic (Brian Berres, Jenny Neugent, Zak Keir, and Krystal Hsu). We also thank all the other individuals involved in interface development, validation, help support, and other key tasks in this project.