Chest tube insertion (CTI) should be practiced in simulated settings before any patient interaction. Valid assessments are needed to provide feedback and support certification, notably in simulation-based training. In one study, researchers set out to develop a novel CTI assessment tool and to establish its content validity using expert opinion gathered through a structured Delphi survey.
A diverse panel of European experts was invited to participate. In round 1, each expert listed at least five procedural steps and three common CTI errors. In round 2, experts rated their agreement with including each item in the assessment tool on a 5-point Likert scale. Finally, in round 3, experts rated their agreement with including each procedural step together with its descriptive anchors. Consensus was defined as agreement on an item's inclusion by at least 80% of participants.
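The 80% consensus rule described above can be sketched in a few lines. This is a minimal illustration, not the study's actual analysis: the assumption that Likert ratings of 4 or 5 count as "agreement" is mine, not stated in the protocol.

```python
def reached_consensus(ratings, threshold=0.80, agree_levels=(4, 5)):
    """Return True if the share of agreeing ratings meets the threshold.

    ratings: iterable of 5-point Likert responses (1-5).
    agree_levels: which ratings count as agreement (assumed here: 4 and 5).
    """
    agreeing = sum(1 for r in ratings if r in agree_levels)
    return agreeing / len(ratings) >= threshold

# Hypothetical round-2 item: 25 of 30 respondents rate it 4 or 5 (83%).
print(reached_consensus([5] * 15 + [4] * 10 + [3] * 3 + [2] * 2))  # True

# Hypothetical item with only 23 of 30 agreeing (~77%) falls short.
print(reached_consensus([4] * 23 + [3] * 7))  # False
```

In practice Delphi analyses often also report medians and interquartile ranges per item, but the binary threshold shown here matches the inclusion rule reported in this study.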
Of the 105 invited clinicians, 36 took part: 26 of 75 surgeons (35%), 8 of 23 pulmonologists (35%), and 2 of 7 emergency physicians (29%). The overall response rate was 81% (29/36), with 100% (36/36) responding in round 1, 83% (30/36) in round 2, and 97% (29/30) in round 3. After condensing items and removing duplicates, round 1 yielded 23 steps and 44 errors. In round 2, agreement was reached on 15 steps (65%) and 14 errors (32%). A list of 16 errors was then presented to the panel, and 19 steps were converted into a rating scale with descriptive anchors. In round 3, experts agreed on including 17 procedural steps (89%) with descriptive anchors and all 16 errors.
The ACTION (Assessment of Competence in Chest Tube Insertion) tool, comprising a 17-step procedure-specific rating scale and a 16-item error checklist, was thus developed with the agreement of a multidisciplinary expert panel. However, further evidence is needed to support its validity.