The Postoperative Pain Assessment Skills pilot trial

Michael McGillion RN PhD1, Adam Dubrowski PhD1,2, Robyn Stremler RN PhD1, Judy Watt-Watson RN PhD1, Fiona Campbell BSc MD FRCA1,2, Colin McCartney MBChB FRCA FCARCSI FRCPC1,3, J Charles Victor MSc1, Jeffrey Wiseman MD MScEd FRCPC FACP4, Linda Snell MD MHPE FRCPC FACP4, Judy Costello RN MScN5, Anja Robb MEd1, Sioban Nelson PhD RN1, Jennifer Stinson RN PhD CPNP1,2, Judith Hunter BScPT PhD1, Thuan Dao DMD MSc Dip Prostho PhD FRCDC1, Sara Promislow MEd PhD1, Nancy McNaughton MEd PhD(c)1, Scott White BMUS1, Cindy Shobbrook RN(EC) MN CHPCN(c) CON(c)6, Lianne Jeffs RN PhD7, Kianda Mauch RN MScN/ONP7, Marit Leegaard RN CRNA PhD8, W Scott Beattie MD FRCPC PhD1,5, Martin Schreiber MD MEd FRCPC1, Ivan Silver MD MEd FRCPC(c)1,7

strategy is all that is needed; patients can clearly articulate their pain and ask for help; observable signs are more reliable indicators of pain than patients' self-reports; and patients should be encouraged to endure as much pain as possible before using an opioid (6,10,30-35). Other pain-related misbeliefs, common to HCPs and patients alike, include: patients should expect to endure pain after surgery, and the use of opioids for pain will inevitably lead to addiction (10).
Successful pain curricula, such as the University of Toronto (Toronto, Ontario) Centre for the Study of Pain Interfaculty Pain Curriculum (30,36), have effectively used standardized patients (SPs) and other simulation models to achieve students' rehearsal and integration of the complex affective and cognitive skills required to take a pain-related history, and to address gaps in pain knowledge and pain-related misbeliefs. While the use of simulation methods for pre-licensure pain education is a burgeoning field of study, little has been done in the way of simulation-based methods for practising HCPs. Moreover, although SPs are realistic and ideal for HCP continuing education in patient assessment and interviewing, they may be unavailable to some health care institutions with resource constraints. We could identify no alternative realistic, potentially low-resource simulation methods for improving HCPs' postoperative pain assessment skills. Therefore, the purpose of the present study was to examine the efficacy of an alternate simulation method, deteriorating patient-based simulation (DPS), versus SPs for improving HCPs' pain knowledge and assessment skills. Specific outcomes included HCPs' observed pain assessment skills and knowledge of pain-related misbeliefs (primary outcomes), and satisfaction with and perceived quality of the simulation experience (secondary outcomes).

Study design
The present study was a pilot equivalence trial. According to the Consolidated Standards of Reporting Trials (CONSORT) statement, equivalence randomized controlled trials seek to determine whether new interventions are 'no worse' than a reference intervention (37). The intention of this design is to demonstrate whether new intervention alternatives have at least as much efficacy as an accepted standard or widely used intervention, referred to as the active control (37). In the current study, the active control was SP-based simulation; the new comparison intervention was DPS (38). On completion of demographic and baseline measures, participants were randomly allocated to either an SP or DPS simulation intervention. Postintervention outcomes were evaluated immediately and two months following intervention. A short-term follow-up period was chosen for the present pilot study, which forms the basis of a future larger-scale trial with long-term follow-up. Ethics approval was granted by a university in central Canada and five university-affiliated teaching hospitals.

Study population and procedure
The present study was conducted in central Canada over a 14-month period. The target population was HCPs involved in the direct care of postoperative patients. Members of acute pain management teams were not eligible to participate. Instead, acute pain management team clinicians acted as 'recruitment champions' who facilitated the recruitment strategy, which included: presentations at in-services and clinical rounds; notifications in hospital bulletins and newsletters; and e-mail and hardcopy notices to HCPs working in surgical hospital units. All interested HCPs were initially assessed for eligibility by the trial coordinator (TC) via telephone. Willing participants were then interviewed by the TC to confirm eligibility and obtain informed consent. Demographic and baseline measures were completed on site and participants were randomly assigned to either the SP group or the DPS group. Random assignment was centrally controlled using www.randomize.org, a tamper-proof random assignment service. An external research assistant kept a secure list of random allocations (generated by www.randomize.org) that was matched to participant study numbers. As each participant enrolled, the TC called the external research assistant to receive his/her random allocation. Once randomly assigned, participants were scheduled to participate in the next available SP or DPS simulation intervention.
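The allocation procedure above (a centrally generated list matched to study numbers, revealed one participant at a time by an external research assistant) can be sketched as follows. This is illustrative only: the trial used www.randomize.org, so the generator, seed, study-number format and the assumption of simple (unrestricted) randomization are ours, not the trial's.

```python
import random

def build_allocation_list(study_ids, arms=("SP", "DPS"), seed=20110101):
    """Pre-generate a simple random allocation for each study number.
    Simple randomization is an assumption here; it is consistent with the
    unequal group sizes reported in the trial (34 DPS vs 38 SP)."""
    rng = random.Random(seed)  # fixed seed so the secure list is reproducible
    return {sid: rng.choice(arms) for sid in study_ids}

def reveal_allocation(secure_list, study_id):
    """What the external research assistant does when the TC calls:
    look up a single participant's arm without exposing the whole list."""
    return secure_list[study_id]

# Hypothetical study numbers for the 72 enrolled participants
secure_list = build_allocation_list([f"PPAS-{i:03d}" for i in range(1, 73)])
print(reveal_allocation(secure_list, "PPAS-001"))
```

Keeping the full list with a third party and revealing one assignment per call is what makes the scheme tamper-proof in practice: neither the TC nor the participant can foresee the next allocation.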
Outcome data collection occurred in two phases: immediately postintervention and two months postintervention. Participants completed immediate postintervention questionnaires (on site) on an individual basis. Two months postintervention, participants completed two individual and consecutive objective structured clinical examinations (OSCEs) at their respective hospital sites. Assiduous follow-up procedures were used to maximize participation in these follow-up OSCEs.

Follow-up OSCE procedure
Follow-up OSCEs were designed to evaluate participants' postoperative pain assessment skills. OSCE sessions took place at participants' respective hospital sites and were 45 min in length, including: 10 min of briefing with the TC; two consecutive 10 min OSCEs; and 15 min of debriefing and feedback. On arrival, each participant was oriented to the specifics of the overall procedure and informed that he/she would be evaluated while conducting postoperative pain assessments on two consecutive simulated patients, portrayed by SPs. Before each assessment, participants were given a one-paragraph summary ('spec') of the patient case. The TC kept time, allowing for a maximum of 10 min for each assessment. Participants' performance was scored via a two-way mirror by two independent expert OSCE assessors, using an evaluation tool developed for this trial (see Measures). At one hospital site, where two-way mirror observation rooms were unavailable, participants were observed on television monitors by OSCE assessors located in a separate room via a live feed from a camcorder.
In addition to the assessment provided by OSCE assessors, the SPs also scored the participants' performance. The OSCE session concluded with a structured debriefing, wherein participants received verbal feedback from the SPs. The purpose of this debriefing was to discuss key learning points from the OSCEs and related implications for assessing pain in participants' future clinical practice.

Interventions
The aim of the simulation interventions (SP and DPS) was to improve participants' pain assessment skills and knowledge of common pain-related misbeliefs that interfere with optimal pain assessment and management. These interventions were delivered in small groups (five to eight participants) by an expert facilitator at the study site. Intervention structure was standardized at 2 h in length and consisted of three components:
1. Participants were briefed for 30 min on common pain-related misbeliefs and key components of a comprehensive pain assessment. The use of empathy and affective involvement to address patient and family concerns effectively was emphasized (8,39).
2. Participants worked as a team to conduct a postoperative pain assessment on a patient case, via the SP or DPS method. The simulation lasted 45 min, allowing for engagement of each participant (one at a time) and facilitator-led 'time-outs' for group discussion and problem solving.
3. After the simulation, the facilitator conducted a 45 min structured debriefing, focused on pain-related misbeliefs that arose and key learning points from the team pain assessment.
A facilitator guide specified the protocol in detail to ensure consistent intervention delivery across all sessions; only the specific simulation method varied between study groups, ie, SP or DPS.

Simulation methods: SP and DPS
Active control - SPs: Four SPs, from an established SP program, were selected according to the demographic requirements of the patient case, with attention to their skill in providing feedback on pain assessment. Standardized patient training consisted of three meetings of 2 h duration each. Two SPs were trained to portray the patient and two were trained to portray the patient's sister, who was also part of the case (see Patient cases). The training sessions concluded with a complete 'dry run' of the case at the study site.
All SP intervention sessions were conducted in a simulation laboratory at the study site. A simulated hospital room was used with prerecorded background hospital noise, real telemetry monitoring and authentic postsurgical equipment to ensure that the simulation was as realistic as possible. At the beginning of each SP session, the intervention facilitator provided participants with a brief introduction to the case and the rules for the simulation. Each participant was given approximately 5 min to conduct a part of the assessment. During this time, other participants silently observed. Only the learner or facilitator could call for a 'time-out' to problem solve the next steps, if needed. On entering the patient's room, the facilitator and participants encountered two SPs, portraying the patient and his sister, respectively. The SPs remained in character throughout the simulation.
Comparison intervention - DPS: Wiseman and Snell's (38) DPS method was developed based on the premise that it is difficult to reproduce the need to 'think on one's feet' often required in complex clinical environments and situations. According to Wiseman and Snell, DPS is an inexpensive, portable and rapidly created simulation that reproduces, in real time, the roles, decisions and emotions involved in HCP-patient interactions; the reasoning skills required to manage a 'deteriorating patient' are also made explicit (38). Pilot data show that DPS improves learners' perceptions of readiness, knowledge and prioritization skills by 30% to 45% (1.5 to 2 points on a 5-point Likert scale) (38).
In DPS, no SP or actor is present. The simulation method consists of a trained facilitator who verbally introduces a simulated patient scenario to a small group of learners in a classroom setting. The scenario is predetermined to have a number of plausible outcomes, depending on the response of the learner engaging in the simulation (38). The ideal patient assessment/care must also be predetermined. If a learner's approach to the scenario is less than ideal, the facilitator changes the patient's condition/situation by a decrement specific to the learner's weaknesses (38). If, however, the learner reacts appropriately, the patient's condition/situation will not deteriorate (38).
In the context of the present trial, the concept of 'deterioration' was adapted to reflect typical communication breakdowns that can occur in relation to common pain-related misbeliefs (for clinicians and patients) and ineffective postoperative pain assessment and management. In keeping with Wiseman and Snell's DPS principles (38), this 'deterioration' was executed incrementally and to degrees appropriate to participants' abilities and learning needs. Figure 1 depicts the DPS method, as performed by the intervention facilitator at the study site.

Patient cases
Intervention case: The simulated patient for the intervention was a 52-year-old man suffering from moderate-to-severe postoperative pain following triple coronary artery bypass graft surgery. He had preexisting pain from diabetic neuropathy and neuropathic pain from the site of internal thoracic artery harvesting. The case was set on postoperative day 4, while he was being visited in hospital by his sister. He reported experiencing 8/10 (numerical rating scale) pain on movement and 2/10 pain at rest. Based on a real patient case, details of the simulation included: inadequate pain relief in the early postoperative phase; common pain misbeliefs that blocked effective pain assessment and management; patient and family concerns; and problematic communication about pain between the patient and the HCPs involved in his postoperative care. The case content was identical for the SP and DPS intervention groups.
OSCE cases: Simulated patient cases for the OSCEs were designed to encourage the participants' critical thinking and application of the pain assessment skills they learned during the intervention. Two cases were developed, involving visceral and musculoskeletal postsurgical pain, respectively. The first case was a 40-year-old woman who had undergone a hysterectomy. The scenario was set two weeks after surgery, and she was suffering moderate to severe abdominal pain that interfered with sleep and recovery. The second case was a 58-year-old man who had undergone a total knee replacement. Several weeks after the operation, he reported severe knee pain that was interfering with rehabilitation. Similar to the intervention case, both OSCE cases were based on real patient situations, featuring common pain misbeliefs, communication issues around pain and inadequate pain relief.

MEASURES
Primary outcomes - pain assessment skills and pain-related misbeliefs: Pain assessment skills - OSCEs: The reliability and validity of OSCE-based performance assessment has been widely debated (40). As Hodges et al (40,41) have demonstrated, the traditional use of binary OSCE checklists to capture complex cognitive appraisal and communication skills is suboptimal. While global rating indicators can augment such checklists and improve reliability and validity, their sole use also remains controversial, with mixed results. Ideally, a combination of checklist and global rating components should be used. We found no such combined evaluation method for examining HCPs' postoperative pain assessment skills. Therefore, a Pain Assessment Skills Tool (PAST) was developed for the present trial (Appendix A). The PAST was adapted from a pain assessment template reported by Watt-Watson et al (10) and an OSCE template developed by Cleo Boyd in 1996. With permission, a combination of relevant items from both tools was used.
A series of three focused team meetings was held to determine the relevant content domains and objective criteria for each component of the tool, assemble the relevant items and delineate a scoring method. The PAST is divided into two components: a pain assessment checklist and a global rating scale. The assessment checklist comprises a series of items spanning the following content domains: pain sensory characteristics; treatment history; impact of pain on functional status, perception of self, and relationships; and past pain experiences. The global rating scale uses a series of four Likert scales to evaluate interpersonal skills and empathy, degree of coherence of pain assessment, and verbal and nonverbal expression. Two continuous summary scores are derived, ranging from 0 to 36 for the pain assessment checklist, and 0 to 24 for the global rating scale.
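The two PAST summary scores described above can be computed as simple sums. Note that the item counts below (36 one-point checklist items and four global ratings each scored 0 to 6) are assumptions chosen only to reproduce the reported 0-36 and 0-24 score ranges; the paper does not specify the tool's actual item structure or anchors.

```python
def past_checklist_score(checklist_items):
    """Sum of dichotomous checklist items (0 = not done, 1 = done).
    Assumes 36 one-point items, matching the reported 0-36 range."""
    assert len(checklist_items) == 36
    assert all(v in (0, 1) for v in checklist_items)
    return sum(checklist_items)

def past_global_score(ratings):
    """Sum of four Likert-type global ratings (interpersonal skills and
    empathy, coherence of assessment, verbal and nonverbal expression).
    Assumes each is scored 0-6, matching the reported 0-24 range."""
    assert len(ratings) == 4
    assert all(0 <= r <= 6 for r in ratings)
    return sum(ratings)
```

Keeping the two components as separate continuous scores, rather than one composite, is what later allows their inter-rater reliability to be estimated independently.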
The face and content validity of the PAST was evaluated via expert opinion. Sixteen pain experts used the PAST to evaluate a prerecorded pain assessment online. Feedback was requested on overall usability of the tool and relevance of items. Items deemed by the majority to be irrelevant were deleted. The remaining items were refined for clarity and accurate representation of the content domains. Kuder-Richardson Formula-20 (44) statistics for scales composed of dichotomous variables (43) were used to evaluate the internal consistency reliability of the Pain Beliefs Scale (PBS) on a sample of 150 prelicensure health sciences students enrolled in a pain education randomized controlled trial (42). Reliability estimates ranged from 0.67 (pretest) to 0.70 (post-test), suggesting moderately high internal consistency of the tool (42).
Secondary outcomes - HCP satisfaction and quality of the simulation experience: Participants' perceived satisfaction and quality of the simulation experience were evaluated by the Satisfaction with Learning Scale (SSLS) and the Simulation Design Scale (SDS), respectively (45). The SSLS is a 13-item tool designed to measure levels of learner satisfaction (SSLS-satisfaction) with simulation-related activities and self-confidence (SSLS-confidence) in learning (45). Content validity of the SSLS has been established by nine clinical simulation experts; internal consistency reliability on a sample of 395 nursing students was α=0.94 (45).
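As a sketch of the KR-20 statistic cited above (a special case of Cronbach's alpha for dichotomous items), the following illustrative function scores a respondents-by-items matrix of 0/1 answers. Population (ddof=0) variances are assumed throughout; the paper does not state which variance convention its cited formula used.

```python
def kr20(responses):
    """Kuder-Richardson Formula 20 for a list of respondents' dichotomous
    item vectors: KR-20 = k/(k-1) * (1 - sum(p_i * q_i) / var(total)),
    where p_i is the proportion of respondents scoring 1 on item i."""
    n = len(responses)      # number of respondents
    k = len(responses[0])   # number of items
    # item proportions p_i and summed item variances p_i * q_i
    p = [sum(r[i] for r in responses) / n for i in range(k)]
    sum_pq = sum(pi * (1 - pi) for pi in p)
    # population variance of the total scores
    totals = [sum(r) for r in responses]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - sum_pq / var_t)

# Toy example: 4 respondents x 3 items
data = [[1, 1, 1], [1, 0, 1], [0, 1, 0], [0, 0, 0]]
print(round(kr20(data), 3))  # 0.6
```

Values in the 0.67 to 0.70 range reported for the PBS sit just around the conventional 0.7 threshold for acceptable internal consistency, which is why the authors describe it as moderately high.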
The SDS (20 items) uses a 5-point scale to assess the quality of simulations with respect to clarity of objectives, learner support features, problem-solving opportunities, feedback mechanisms and fidelity (45). The SDS is subdivided into two parts, one assessing specific simulation features (SDS-total) and the other examining learners' perceived importance of those features (SDS-importance). The SDS also has established content validity as well as reliability (α=0.92 to 0.96) (45).

Data analyses and statistical power
Evaluation of the PAST: As discussed, real-time evaluations of each participant's pain assessment skills were conducted by two independent raters and one SP at each OSCE station. Intraclass correlation coefficients (ICCs) (46) were used to estimate the inter-rater reliability of the PAST pain assessment checklist and global assessment template; ICCs >0.7 were considered satisfactory (46), indicating strong agreement between raters.
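As an illustrative sketch of the inter-rater reliability computation, the following implements a two-way random-effects, absolute-agreement, single-rater ICC, often written ICC(2,1). The paper cites reference 46 for its ICC method without naming the specific form, so the choice of ICC(2,1) here is an assumption.

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `scores` is an n-subjects x k-raters matrix of ratings."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    subj_means = [sum(row) / k for row in scores]
    rater_means = [sum(row[j] for row in scores) / n for j in range(k)]
    # Mean squares from the two-way ANOVA decomposition
    msr = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)   # subjects
    msc = n * sum((m - grand) ** 2 for m in rater_means) / (k - 1)  # raters
    sse = sum((scores[i][j] - subj_means[i] - rater_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters scoring four participants: close, but not perfect, agreement
print(round(icc_2_1([[2, 3], [4, 4], [6, 6], [8, 7]]), 3))  # 0.949
```

Because ICC(2,1) penalizes systematic differences between raters (the MSC term), it is a stricter criterion than a consistency-type ICC, which suits a setting where assessors' absolute scores are compared.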
Intervention effects: It was not within the scope of the present pilot trial to collect a sufficiently sized sample for 'true' equivalence testing. Analyses were based on intention-to-treat principles. A one-way analysis of covariance (ANCOVA) of post-test scores was used to test for overall differences in PBS scores between the SP and DPS groups; pretest PBS scores were the covariates (47). Student's t tests were used to test for group differences in post-test PAST, SSLS and SDS scores (47). Bonferroni-adjusted post-hoc and multiple comparisons between groups were planned if overall associations were found at the P≤0.05 significance level (47). All data were cleaned and assessed for departures from normality; the assumptions of all parametric analyses were met.
Statistical power: It was not possible to conduct 'true' equivalence testing for the present pilot study because a meaningful margin of difference in change scores (on the primary outcome) between treatment arms was not yet known. A priori, the study assumed the following: a refusal/loss-to-follow-up rate of 10%; a mean (± SD) post-test PAST score of 17.0±3.0 in the active control SP group; equal group sizes of approximately 25 HCPs per group; and an overall type I error rate of 0.05. With a target sample size of 50, it was estimated that this would provide approximately 80% power to detect as small a difference as 2 points in the primary outcome between groups.
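The sample size reasoning above can be sketched with a standard normal-approximation power formula for a two-sample comparison of means. The paper does not state which formula or software produced the 80% figure, so everything below (two-sided alpha of 0.05, equal SDs of 3.0, 25 per group, normal approximation) is an illustrative assumption, and this simple approximation should not be expected to reproduce the reported figure exactly.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def two_sample_power(diff, sd, n_per_group, z_crit=1.959964):
    """Approximate power of a two-sided, two-sample comparison of means,
    assuming equal SDs and equal group sizes (normal approximation)."""
    se = sd * sqrt(2 / n_per_group)   # SE of the difference in means
    z = diff / se                     # standardized detectable difference
    return normal_cdf(z - z_crit) + normal_cdf(-z - z_crit)

# Trial assumptions: detect a 2-point difference, SD 3.0, ~25 HCPs per group
print(round(two_sample_power(2.0, 3.0, 25), 2))  # 0.65 under this approximation
```

Power rises quickly with group size under these assumptions, which is one reason the authors defer definitive equivalence testing to a larger trial.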

Derivation of the sample and attrition
In total, 73 potential participants were assessed for inclusion over a 12-month period. Of these potential participants, 72 were included and one was excluded because she did not work directly with postoperative patients. The acceptance rate for enrollment among those eligible was 100%. Of the 72 consenting participants, 34 were randomly assigned to the DPS group and 38 were randomly assigned to the SP group. Twenty-three participants (DPS group, n=15; SP group, n=8) did not complete an intervention session or immediate postintervention measures. Of these, seven participants withdrew without explanation and could not be contacted and 15 withdrew because of unexpected scheduling conflicts in their respective clinical settings. An additional 10 participants were unable to attend an OSCE session, also because of scheduling problems. In total, data for analyses were available from 49 participants who completed preintervention and immediate postintervention measures, and 39 who completed the follow-up OSCE sessions. The overall attrition rate, from baseline to follow-up, was 46%.

Participant characteristics
Baseline characteristics of the study groups are presented in Table 1. The mean age of the sample was 38±11 years, with 12±11 years of clinical experience, on average. The sample comprised mainly female registered nurses. The highest degree held for most was an undergraduate degree, with a small number holding a graduate degree. Level of educational preparation was not reported by some.

Evaluation of the PAST
Table 2 presents the results of the PAST inter-rater reliability assessment. Overall, the ICCs were within the acceptable range (ie, 0.72 or greater) across both OSCE stations, indicating good to excellent inter-rater reliability of the tool. The ICCs for the PAST global rating template did not vary significantly according to OSCE station, whereas the ICC for the pain assessment checklist was highest for OSCE station 2. The additional SP assessments only marginally improved the ICCs of the global assessment template, indicating a high degree of reliability of the independent assessors' ratings.

Observed pain assessment skills and knowledge of pain-related misbeliefs: Mean scores according to group, and results of Student's t test and ANCOVA testing for significant differences between groups in PAST scales and PBS scores, are presented in Tables 3 and 4, respectively. Mean scores indicate that both groups performed well during the OSCEs and demonstrated improved understanding of pain-related misbeliefs postintervention. There were no significant differences between groups in PAST pain assessment checklist scores or global assessment ratings. Similarly, no significant differences in postintervention PBS scores between the SP and DPS groups were found.

Secondary outcomes - satisfaction and perceived quality of simulation experience: Mean scores according to group, and results of participants' t tests for significant differences in satisfaction (SSLS-satisfaction) and confidence (SSLS-confidence) in learning, and perceived quality (SDS-total) and importance (SDS-importance) of learning, are presented in Table 5. Both groups rated their simulations highly with respect to learner satisfaction and design quality. No significant differences across SSLS or SDS scores were found between groups.

DISCUSSION
Our simulation interventions were found to be equivalent, suggesting that DPS is an effective simulation alternative for HCPs' education on postoperative pain assessment, with improvements in knowledge and performance comparable with SP-based simulation. Participants' satisfaction and quality ratings were high in both groups, suggesting that both simulation methods provided valuable learning experiences. The fact that our analyses yielded values of P<0.300 across outcomes suggests that potential lack of statistical power was not a strong factor in the lack of differences between groups (47). The present pilot study will be followed by an adequately powered equivalence trial, allowing for more definitive conclusions to be made about the statistical equivalence of our SP and DPS methods. A 100% acceptance rate among those screened for eligibility suggests that the opportunity to learn more about postoperative pain assessment via simulation was appealing to clinicians, especially nurses, who constituted 99% of the sample. While all those who consented to participate intended to complete their assigned simulation interventions and follow-up OSCEs, scheduling problems in the clinical setting resulted in a higher than anticipated attrition rate. Our sample consisted of senior clinicians with an average of 12 years of clinical experience. In future work, it may be important to target clinicians on entry to practice, so that participation may be incorporated in entry-to-practice orientation; such an arrangement would enable more flexibility in scheduling.

Our primary outcome, pain assessment skills, was measured via the PAST. We found the ICC for the PAST pain assessment checklist to be higher at follow-up OSCE station 2 than at station 1. This may indicate either that the participants (as a group) performed their pain assessments more uniformly the second time, or that our assessors became more familiar with using the checklist at station 2. The ICC for the global assessment template was stable across both stations and only marginally improved with the inclusion of our SPs' assessments, indicating strong inter-rater reliability among assessors. Overall, the PAST appears to be a reliable measure of pain assessment skills. Subsequent evaluation of the tool in an equivalence trial with repeated measures (ie, three assessments or more) will allow for further examination of ICC stability.

While the measures we used in the pilot trial provide a preliminary, summative evaluation of the effectiveness of DPS versus SP-based simulation, they cannot answer key questions about the formative aspects of learning that occurred in either group. For example, our adaptation of Wiseman and Snell's DPS method (38) required a priori distillation of the key learning points we wished to illustrate with respect to the potential consequences of conducting a postoperative pain assessment without empathy or skilled attention to patient and family concerns. As a part of our larger-scale equivalence trial, we plan to embed design research (49,50) elements to examine the processes of participants' situated learning in context during both types of simulation.

Design research (48,49) is concerned with uncovering the processes inherent in innovative educational methods. In addition to addressing definitively the question of equivalence of effectiveness, we also want to know whether, by virtue of design, DPS engages learners in cognitive uptake and rehearsal of postoperative pain assessment skills differently than SP-based simulation. Several design research methods have been proposed, such as videotaping learners in action to examine critical design elements. In our future work, we will incorporate design research methods to examine differences and similarities in learning that occur during SP and DPS simulations.

The methodological strengths of the present pilot study were the robust methods used to minimize biases and random error, including centrally controlled randomization, valid and reliable measures, controls placed on outcome data collection and intention-to-treat analyses. Intervention integrity was also maximized by using a standardized intervention protocol. Performance bias cannot be ruled out because it is not possible to blind participants or interveners in an education-based intervention study. Social desirability bias may also be possible due to our use of self-report measures for some outcomes. However, randomization should have equally distributed those prone to socially desirable responses. Our follow-up period was limited to two months postintervention. Therefore, the long-term sustainability of observed improvements in knowledge of pain-related misbeliefs and levels of pain assessment skill is not known. In addition, for the purposes of the present pilot study, our simulation interventions were delivered by a single facilitator. Our subsequent equivalence trial should employ multiple facilitators to enhance external validity.

CONCLUSION
Common pain-related misbeliefs contribute to the problem of unrelieved postoperative pain. Deteriorating patient-based simulation may be an effective, low-tech simulation alternative for HCPs' education on postoperative pain assessment, with improvements in knowledge and performance comparable with SP-based simulation. Our pilot study results suggest that an adequately powered equivalence trial to examine the effectiveness of DPS versus SPs is warranted.

Figure 1) The deteriorating patient simulation method. Reproduced with permission from reference 38

TABLE 2
PAST intraclass correlations across OSCE stations

TABLE 4
Comparison* of pre- and post-test Pain Beliefs Scale (PBS) scores between DPS and SP groups
Data presented as mean ± SD unless otherwise indicated. *Using analysis of covariance; †Statistically significant at P≤0.05. DPS Deteriorating patient simulation; SP Standardized patients

TABLE 5
Comparison* of SSLS and SDS scores between DPS and SP groups
Data presented as mean ± SD unless otherwise indicated. *Using Student's t test; †Statistically significant at P≤0.05. DPS Deteriorating patient simulation; SDS Simulation Design Scale; SP Standardized patients; SSLS Satisfaction with Simulated Learning Scale