
A randomised clinical trial of take-home laparoscopic training

Ebbe Thinggaard1, 2, Flemming Bjerrum1, 2, Jeanett Strandbygaard3, Lars Konge1 & Ismail Gögenur2

1 January 2019


Training on simulators has become part of how we train surgeons. Simulation training has been shown to improve patient outcomes and is a valuable addition to the traditional method of training surgeons at the operating table [1]. Although many surgical trainees and their patients have benefitted from these developments, barriers to simulation training remain. Studies have identified barriers such as access to simulators, time for training and financial constraints [2]. To overcome these barriers, simple mobile box trainers (BT) have been developed, which allow training at home at a time that suits the trainee [3]. Nonetheless, training at home without supervision poses new challenges [4]. Home training of laparoscopic skills has been shown to be feasible [5]. However, providing trainees with the freedom to organise their training could change training patterns, allowing for more distributed training in which trainees practice more frequently at shorter intervals. A distributed approach to training is beneficial for technical skills acquisition [6] and is also in line with the educational principles of deliberate practice [7] and directed self-regulated learning (DSRL) [8].

The purpose of the present study was to examine the added effects of training at home. We looked at the number of days it took to complete the training, the time spent on training, the number of training sessions and differences in final scores. Furthermore, we explored the participants’ ability to rate their own performance when training without supervision using a structured self-rating system.

METHODS

Setting

At the Copenhagen Academy for Medical Education and Simulation [9], doctors in speciality training participate in a basic laparoscopic skills training programme during the first year of their training. The course is a cross-speciality training programme for doctors from departments of gynaecology, urology and surgery [10]. The aim of the course is to prepare the course participants for their first supervised laparoscopic surgical procedure. The course consists of two formalised one-day courses separated by a period of self-regulated training on virtual reality simulators (VRS) and BT.
The first part of the programme is an introductory course, which includes theoretical classroom teaching combined with practical sessions to prepare the trainees for training on VRS and BT. After the introductory course, the participants go through a period of self-regulated training during which they book training sessions at the simulation centre and practice on both VRS and BT. At the simulation centre, they are assisted by a simulator technician who can give technical assistance and provide feedback during training. Participants are required to pass the Training and Assessment of Basic Laparoscopic Techniques (TABLT) test [11] on the BT and to reach a predefined level of proficiency on the VRS. The TABLT test is a training and assessment system consisting of five simple tasks: peg transfer, cutting, sharp dissection, blunt dissection and cyst removal. Each task has specified types of errors, and a pass/fail level has been set so that the goal is clear for the trainees. Rating is done using a simple scoring system based on time and number of errors [11]. Participants can rate their own performance when training on the TABLT and can see when they have reached the pass/fail level. When participants feel ready, they hand in a pre-test in which they rate their own performance. After handing in the pre-test, they can book a time for a proctored test at which a member of faculty is present. After reaching proficiency on the VRS and passing the TABLT test, participants can sign up for a one-day operative course.
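
To illustrate how a time-and-error score of the kind described above can be combined into a single value and compared against a pass/fail level during self-rating, a minimal sketch is given below. The time limit, error penalty and pass level in the code are hypothetical placeholders, not the published TABLT parameters, which are specified in [11].

```python
# Illustrative sketch only: the time limit, error penalty and pass level below
# are hypothetical placeholders, not the published TABLT parameters (see [11]).

def tablt_task_score(time_seconds: float, errors: int,
                     max_time: float = 600.0, error_penalty: float = 10.0) -> float:
    """Combine completion time and error count into a single task score:
    faster completion and fewer errors give a higher score, and a task that
    exceeds the time limit scores zero."""
    remaining = max(0.0, max_time - time_seconds)
    return max(0.0, remaining - error_penalty * errors)


def self_rate_session(task_results, pass_level: float = 1000.0):
    """Sum the scores of the five tasks and report whether the (hypothetical)
    pass/fail level has been reached, mirroring how a trainee could self-rate
    a practice session."""
    total = sum(tablt_task_score(t, e) for t, e in task_results)
    return total, total >= pass_level


# Example: (time in seconds, number of errors) for the five TABLT tasks
results = [(120, 1), (150, 0), (200, 2), (180, 1), (240, 3)]
total, passed = self_rate_session(results)
print(f"Total score: {total:.0f}, passed: {passed}")
```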

Participants

The course participants consisted of doctors in the first year of their speciality training. Participants who had performed more than fifty laparoscopic procedures were excluded.

Intervention

The intervention consisted of the addition of home training on a mobile BT. The intervention group trained at the simulation centre and were also given a portable BT [12] allowing them to practice at home. The control group trained at the simulation centre only. Both groups had access to training on VRS at the simulation centre.

Randomisation

The primary investigator (ET) was responsible for inclusion of participants. After enrolment, participants were randomly allocated using a computer-generated allocation sequence (randomiser). The administrator at the simulation centre retrieved the allocation sequence and kept the sequence concealed until the allocation had been finalised.

Outcomes

All participants were given a training log to record their training. Based on information from the logbooks, we looked at the number of days from enrolment to passing the TABLT test, the time spent training and the number of training sessions attended. We also explored differences in the performance levels that participants reached on their final TABLT test and recorded the participants’ ability to rate themselves.

Statistical analysis

The sample size for the trial was calculated based on the assumption that the control group would pass the TABLT test after six weeks of practice (42 days), standard deviation (SD) ± 3 weeks (± 21 days). The intervention group was expected to pass after four weeks of practice (28 days), SD ± 3 weeks (± 21 days). Setting alpha at 0.05 and beta at 0.10, a total of 24 participants were required in each group. The trial was planned with a one-year inclusion period. Accounting for inaccuracies, we expected to include a total of 50 participants in the trial during the one-year study period, during which six courses were planned with up to 72 course places. We used Student’s t-test to analyse whether there was a significant difference in the above-mentioned measurements. A p-value below 0.05 was considered statistically significant for the primary outcome. To determine the reliability of the self-rated test, we compared the participants’ ratings of their pre-test with the ratings of a trained, blinded rater. The intraclass correlation coefficient (ICC) was used to examine the reliability of the participants’ self-rating.
A statistical software package was used (SPSS version 20.0, Chicago, IL).
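
As a minimal sketch of the analyses described above, the code below applies Student’s two-sample t-test to a group comparison and estimates a two-way random effects, absolute-agreement, single-rater ICC, often denoted ICC(2,1), for self-ratings versus a blinded rater. The data are invented for illustration, and the specific ICC model is an assumption; the exact model and software settings used in the study are not stated here.

```python
# Minimal sketch of the analyses described above. The data are invented for
# illustration, and the ICC(2,1) model (two-way random effects, absolute
# agreement, single rater) is an assumption; the study's exact model and
# software settings are not stated in the text.
import numpy as np
from scipy import stats

# Hypothetical days from enrolment to passing the TABLT test in each group
home_training = np.array([70, 95, 88, 102, 61, 120, 84, 90])
centre_only = np.array([75, 99, 92, 110, 66, 118, 80, 95])

t, p = stats.ttest_ind(home_training, centre_only)  # Student's two-sample t-test
print(f"t = {t:.2f}, p = {p:.2f}")


def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1) from a matrix with one row per participant and one column per
    rater (here: column 0 = self-rating, column 1 = blinded rater)."""
    n, k = ratings.shape
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    grand = ratings.mean()
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)  # between subjects
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)  # between raters
    ss_err = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
    ms_err = ss_err / ((n - 1) * (k - 1))                     # residual
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)


# Hypothetical TABLT pre-test scores: self-rating vs. blinded rater
pre_test = np.array([[480.0, 470.0], [455.0, 460.0], [500.0, 495.0],
                     [430.0, 445.0], [510.0, 500.0], [465.0, 470.0]])
print(f"ICC(2,1) = {icc_2_1(pre_test):.2f}")
```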

Trial registration: The trial was submitted for evaluation to the Regional Ethics Committee, which determined that no approval was needed for the trial (H-3-2014-FSP31). The trial was also registered with clinicaltrials.gov prior to its commencement (NCT02243215), and it was conducted according to the CONSORT statement [13].

RESULTS

We included participants during a one-year period in which 50 doctors participated in the training course. Of the 50 participants who took part in the course, 46 were enrolled in the study and 36 completed the course within the one-year study period. Four participants dropped out of the training course, and six participants were excluded from the study as they did not complete the training course during the one-year study period. Of the 36 who completed the course, 18 were in the control group and 18 were in the intervention group, see Figure 1. For the participants’ baseline characteristics, see Table 1. At the end of the one-year study period, we performed a new sample size calculation based on the data available from the 36 participants, corresponding to 75% of the anticipated sample size. We found that 11,422 participants would be needed in each group, which was not feasible, and we therefore decided to stop recruiting participants.

We found no difference in the number of days from enrolment to passing of the TABLT test (86 days versus 89 days, p = 0.89), in the time spent training on box trainers (302 minutes versus 218 minutes, p = 0.26) or in the final test score (493 versus 460, p = 0.07) (Table 2). However, we did find a significant difference in the number of training sessions (5.8 versus 2.3, p < 0.001), see Table 2. There was good reliability when comparing the participants’ ratings of their pre-test with those of a blinded rater, ICC 0.86, p < 0.001.

DISCUSSION

In this study, we explored the added effect of training laparoscopic skills at home and found no difference in the number of days or in the time spent training to pass the TABLT test. However, we did find a significant difference in the number of training sessions attended. Our trial shows that participants training at home did not complete the course faster than participants training only at the simulation centre, but they did practice more frequently and at shorter intervals. Participants could reliably rate their own performance, and all of them were able to pass the TABLT test on a pre-test using a structured self-rating system.

We found that easier access to training did not result in participants passing the test faster. Take-home training can be challenging to implement, and uptake among surgical trainees can be difficult [14]. We found that the duration of training was generally longer in the intervention group and that training patterns varied greatly among participants. These findings demonstrate that factors other than access to training are important determinants of training duration and training patterns. The final part of the training programme was the operative course, which was held on fixed dates six times annually. Participants decided themselves when to enrol for the final course but did so before reaching proficiency on the VRS and before passing the TABLT test. This may have imposed a structure on training duration and patterns that influenced the self-regulated part of the training course, as the final course provided a deadline by which the TABLT test had to be passed. Accordingly, participants entered a training programme governed by the date of the final operative course.

Distributing training in shorter and more frequent training sessions has been shown to improve training outcomes compared with massed training sessions [6]. Distributed training is recommended for laparoscopic virtual reality simulator training [15], and learning curves, in particular, have been shown to improve with distributed compared with massed training [16]. Even though the ideal training interval for laparoscopic simulation training has not been established, short training intervals have been shown to be superior to long training intervals [17].

That participants with access to training at home did not reach a higher level on the test might be explained by their being instructed on how to rate their own performance during training: they knew they could stop training once they had reached a sufficient performance level. However, this was a deliberate choice of training strategy. Being able to rate one’s own performance allows for a more independent approach to training and has emerged from the instructional method called DSRL [8, 18], which is recommended for simulation training [19]. Principles of DSRL have been shown to be useful in VRS mastoidectomy training [20], and this approach may also be of great value for training of laparoscopic skills at home. When considering unsupervised laparoscopic skills training at home, using DSRL as a strategy would allow for a structured training programme in which trainees are in control of their own training. In the present study, we showed that participants could reliably rate their own performance on the TABLT test. Being able to reliably rate one’s own test allows trainees to monitor their own training and provides them with a tool to apply self-regulatory skills.

Limitations

In this study, we chose to investigate the added effect of training at home on a simple mobile BT while also training at a simulation centre. As we did not wish to limit the participants’ access to training, it was not possible to compare the effect of home-based training alone with that of training only at a simulation centre. A different design could have provided insight into the effects of training at home versus training at a simulation centre, but this was beyond the scope of our study. In our sample size calculation, we used a beta of 0.10; choosing a beta of 0.20 might have allowed the number of included participants to match the sample size calculation.

In our training programme, we use both VRS and BT, and mixing two training methods could cloud the findings. A trial focusing exclusively on BT might have demonstrated potential benefits of home training on a BT more clearly. However, examining training at home as a supplement was a deliberate choice of study design: we chose to conduct the study under realistic circumstances as part of an existing laparoscopic training programme. The results of our study could help guide others who may consider incorporating take-home training into their laparoscopic training course.

In the basic laparoscopy course, we also use a cross-speciality approach to laparoscopic training in which doctors from different specialities practice together. Having participants from different specialities and with different levels of experience may have had an impact on the results. On the other hand, using participants from different specialities increases the external validity, as findings can be generalised across training programmes for different specialities, and the participants’ varying levels of prior experience make the results applicable to trainees with different degrees of experience.

CONCLUSIONS

Take-home training of basic laparoscopic skills on a mobile box trainer allowed trainees to practice at their own convenience. The increased access to training did not result in trainees passing a test earlier or achieving a higher score, but they did engage in shorter and more frequent training sessions. Testing and mandatory training requirements appear to determine training patterns. Trainees could reliably rate their own performance.

CORRESPONDENCE: Ebbe Thinggaard.
E-mail: ebbe.thinggaard@regionh.dk

ACCEPTED: 12 October 2018

CONFLICTS OF INTEREST: Disclosure forms provided by the authors are available with the full text of this article at Ugeskriftet.dk/dmj


LITERATURE

  1. Grantcharov TP, Kristiansen VB, Bendix J et al. Randomized clinical trial of virtual reality simulation for laparoscopic skills training. Br J Surg 2004;91:146-50.

  2. Caban AM, Guido C, Silver M et al. Use of collapsible box trainer as a module for resident education. JSLS 2013;17:440-4.

  3. Partridge R, Hughes M, Brennan P et al. There is a worldwide shortfall of simulation platforms for minimally invasive surgery. J Surg Simul 2015;2:12-7.

  4. van Empel PJ, Verdam MG, Strypet M et al. Voluntary autonomous simulator based training in minimally invasive surgery, residents’ compliance and reflection. J Surg Educ 2012;69:564-70.

  5. Korndorffer Jr JR, Bellows CF, Tekian A et al. Effective home laparoscopic simulation training: A preliminary evaluation of an improved training paradigm. Am J Surg 2012;203:1-7.

  6. Benjamin AS, Tullis J. What makes distributed practice effective? Cogn Psyc 2010;61:228-47.

  7. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med 2004;79:S70-81.

  8. Brydges R, Dubrowski A, Regehr G. A new concept of unsupervised learning: directed self-guided learning in the health professions. Acad Med 2010;85:S49-55.

  9. Konge L, Ringsted C, Bjerrum F et al. The Simulation Centre at Rigshospitalet, Copenhagen, Denmark. J Surg Educ 2015;72:362-5.

  10. Bjerrum F, Sorensen JL, Thinggaard E et al. Implementation of a cross-specialty training program in basic laparoscopy. JSLS 2015;19:4.

  11. Thinggaard E, Bjerrum F, Strandbygaard J et al. Validity of a cross-specialty test in basic laparoscopic techniques (TABLT). Br J Surg 2015;102:1106-13.

  12. Bahsoun AN, Malik MM, Ahmed K et al. Tablet based simulation provides a new solution to accessing laparoscopic skills training. J Surg Educ 2013;70:161-3.

  13. Moher D, Schulz KF, Altman DG et al. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet 2001;357:1191-4.

  14. Nicol LG, Walker KG, Cleland J et al. Incentivising practice with take-home laparoscopic simulators in two UK Core Surgical Training programmes. BMJ STEL 2016:1-6.

  15. Gallagher AG, Ritter EM, Champion H et al. Virtual reality simulation for the operating room: proficiency-based training as a paradigm shift in surgical skills training. Ann Surg 2005;241:364-72.

  16. Andersen SA, Konge L, Caye-Thomasen P et al. Learning curves of virtual mastoidectomy in distributed and massed practice. JAMA Otolaryngol Head Neck Surg 2015;141:913-8.

  17. Stefanidis D, Walters KC, Mostafavi A et al. What is the ideal interval between training sessions during proficiency-based laparoscopic simulator training? Am J Surg 2009;197:126-9.

  18. Brydges R, Nair P, Ma I et al. Directed self-regulated learning versus instructor-regulated learning in simulation training. Med Educ 2012;46:648-56.

  19. Brydges R, Manzone J, Shanks D et al. Self-regulated learning in simulation-based training: a systematic review and meta-analysis. Med Educ 2015;49:368-78.

  20. Andersen SA, Foghsgaard S, Konge L et al. The effect of self-directed virtual reality simulation on dissection training performance in mastoidectomy. Laryngoscope 2016;126:1883-8.