
User Response to the Simulation of a Virtual Patient with Cranial Nerve Injury

Abstract

Technology has rarely attempted to simulate a cranial nerve (CN) exam. NERVE simulates a life-size virtual patient (VP), using speech recognition together with a Nintendo Wiimote® that serves as a virtual hand, ophthalmoscope, and eye chart. Our goal is to evaluate the responses of medical students, residents, and clinicians using the Neurological Examination Rehearsal Virtual Environment (NERVE), a CN exam simulator. This study assesses the tool's initial reception, users' ability to identify the CN lesion, and students' preferences regarding NERVE. Participants from the Medical College of Georgia, drawn from a variety of medical specialties, including 9 clinicians, 7 residents, 8 MS3s and MS4s, 20 MS2s, and 25 MS1s, performed a CN examination on a VP. There were no statistically significant differences in measures related to the actual performance of the exam, the controller, overall benefit of the experience, use of technology, or satisfaction with the technology. Even with technical limitations, medical students overall reported that NERVE had educational value. Residents had the lowest rate of correct CN identification, indicating they could be the group that benefits most from repeat exposure to CN exams. Medical students and clinicians were the best groups at identifying the correct deficit in our simulation. The next step is to assess NERVE's capability to teach students and residents the cranial nerve exam.

KEYWORDS: Cranial nerve exam, neurological disease/diagnosis, communication, education-medical, user-computer interface, computer simulation, clinical clerkship.

Introduction

To diagnose cranial nerve deficits in a patient with a hemorrhage, stroke, or tumor, the clinician must have astute history-taking and physical examination skills. While imaging remains important for confirming the diagnosis, radiologists depend on the clinician's exam for diagnosis [1].
For students' and residents' education, outside of chance encounters with actual patients, there has been no consistent method to teach and evaluate their clinical skills in diagnosing these deficits [14]. Current methods of assessing students' clinical proficiency during the neurology clerkship include the bedside examination evaluation, in which a preceptor grades history-taking and physical examination skills [2]. There are many issues with this method of evaluation, found in the neurology clerkship and other clerkships alike, including the subjectivity of evaluators [3-6]. Students are not evaluated from a diagnostic standpoint because the patients used for evaluation are either real clinical patients or standardized patients, each with limitations. Recruiting real patients can prove very costly, difficult, and time consuming, since suitable patients with significant physical exam findings are also scarce. Training standardized patients to simulate neurological deficits is especially difficult because such presentations are hard to act out convincingly. Uniformly evaluating students and residents therefore tends to be extremely difficult and subjective. Students also report a lower level of knowledge of the neurological exam and lower confidence in their abilities due to current educational practices that limit learning opportunities [7-8]. The use of virtual interactive environments has been increasing in recent years to improve skills in areas including the doctor-patient relationship, posttraumatic stress disorder, and even speech therapy [10-12]. Virtual patient simulation uses animated characters that portray abnormal clinical findings, providing an immersive learning environment that augments the user's available learning opportunities [14]. In general, studies of these simulations have shown positive potential, demonstrating a larger positive effect compared to no intervention and promoting clinical reasoning skills [17, 18].
The neurological examination, however, has not experienced the same technological development. Neurological physical examination techniques have been simulated, but simulations have been limited to a few components, such as the fundoscopic exam, using sliding photos of abnormal retinas placed into mannequin eyes [19]. Virtual simulations have likewise been limited to isolated abnormal findings. The University of San Diego developed a purely virtual, web-based eye simulator that demonstrates abnormal eye movements caused by palsies of cranial nerves 3, 4, and 6 [20]. To our knowledge, however, there are currently no virtual tools that combine the interpersonal aspects of a history and physical with abnormal neurological findings. Computer scientists at the University of Florida, through joint research partnerships with the University of Georgia and the Medical College of Georgia (MCG), have created the Neurological Examination Rehearsal Virtual Environment (NERVE) tool. Using the Virtual Patient system developed at Florida as the foundation for the simulation technology, the research group at MCG has been adapting the computer-based simulation to a clinical setting. The motive for creating NERVE was to provide an additional resource for developing practical clinical skills, including history taking and physical examination, in order to properly examine, recognize, and diagnose neurological deficits [9]. Because the patient is virtual, this resource is potentially available at all times. This allows for self-directed learning and more consistent evaluation, which gives users more control and has been shown to improve learning [13]. The NERVE tool simulates a virtual, life-sized patient with a cranial nerve deficit who can be examined through speech recognition and motion tracking using a Nintendo Wiimote®. The Wiimote® acts as an onscreen virtual hand, ophthalmoscope, or eye chart, used in concert with a spoken medical interview with the patient during the virtual exam.
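The paper does not describe NERVE's tracking code. As a purely illustrative sketch of how a Wiimote® IR-camera reading is typically turned into an on-screen pointer position (the function name and screen resolution below are our own assumptions, not NERVE's API), the mapping might look like:

```python
def ir_to_screen(ir_x, ir_y, screen_w, screen_h):
    """Map one Wiimote IR-camera blob position to a screen pixel.

    The Wiimote's IR camera reports positions on a 1024x768 grid; the
    x-axis is mirrored because the camera looks back at the sensor bar.
    Illustrative only -- this is not NERVE's actual implementation.
    """
    IR_W, IR_H = 1024, 768
    sx = (1.0 - ir_x / (IR_W - 1)) * (screen_w - 1)   # mirror horizontally
    sy = (ir_y / (IR_H - 1)) * (screen_h - 1)
    return round(sx), round(sy)

# A blob at the center of the IR frame lands at the screen center.
print(ir_to_screen(511.5, 383.5, 1920, 1080))  # (960, 540)
```

A real controller driver would also smooth successive readings and handle frames where the sensor bar's IR dots are lost; the sketch above shows only the coordinate mapping itself.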
The diplopia scenario that the virtual patient presents has been shown to have good content validity [14]. Simulating an environment similar to the actual clinical setting matters because memory improves when learning occurs in a context similar to the one in which it will be used [15]. Thus, the added immersion provided by the interface allows students and residents to more readily transfer skills learned in the virtual experience to the clinic and wards. In this study, responses were gathered from clinicians, residents, and medical students. Our goal was to evaluate responses on the usability and user experience of the technology and to compare responses between groups to determine the technology's future impact. For the purposes of this study, educational value was assessed as the participant's appraisal of whether their examination skills benefitted from using the tool. Correct identification of the cranial nerve deficit was also assessed.

Figure 1. A medical student uses NERVE to perform a cranial nerve exam

Methods

All participants were recruited from MCG, including 9 clinicians from a variety of medical specialties, 7 residents from various specialties, 8 medical student upperclassmen (MS3s and MS4s), 20 MS2s, and 25 MS1s. Their general demographics, clinical experience, and video game experience were assessed using a background survey given to all participants before beginning. Next, participants were allowed to familiarize themselves with the tool through a written tutorial explaining how to operate NERVE. Once ready, all participants were encouraged to gather a complete history and perform a cranial nerve exam, while also being informed of the technological constraints of certain components of the cranial nerve exam.
Since MS1s at MCG had been taught only history gathering and not yet the physical exam, a full written guide with step-by-step instructions for a cranial nerve exam was given to their group alone. Finally, each participant's self-reported perspective was gathered with a post-survey that assessed the overall experience, educational value, and usability. They were also asked to identify which cranial nerve lesion our virtual patient was suffering from. Our surveys used Likert rating scales, which provided participants with a range that captures the intensity of each answer. Our background experience questions used a 1-5 scale corresponding to no, low, moderate, high, and very high amounts of experience, respectively. Our post-survey used two types of scales. The first was a four-point forced-choice scale: strongly disagree, disagree, agree, and strongly agree, respectively [16]. This scale was used to evaluate participants' responses to statements about their experience of the exam, their own skills, and their perspective on the NERVE tool. The post-survey defines educational value in terms of examination skills, with these statements posed to participants: "I felt like my examination skills benefited in this study"; "Repeated exposure to this technology would improve examination techniques"; "I would use this technology in the future to help improve my exam techniques"; "I would recommend using this technology to practice examination techniques"; "I would recommend using this technology to improve examination techniques." The second scale was a 1-10 scale used to measure the impact of the interview and the realism of the tool. These two questions needed more options for participants to express the intensity of their feelings, providing a better assessment of their responses [16].
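Forced-choice responses like these are conventionally coded 1-4 and averaged into composite scores such as the table's "Controller Usability Score". The paper does not publish its scoring procedure, so the following is only a sketch of the standard approach, with a hypothetical participant's answers:

```python
# Hypothetical scoring sketch for the 4-point forced-choice scale;
# the mapping below is the conventional coding, not the study's own code.
LIKERT_4 = {
    "strongly disagree": 1,
    "disagree": 2,
    "agree": 3,
    "strongly agree": 4,
}

def composite_score(responses):
    """Mean of the numeric codes for one participant's item responses."""
    codes = [LIKERT_4[r.lower()] for r in responses]
    return sum(codes) / len(codes)

# The five educational-value items, answered by a hypothetical participant:
answers = ["agree", "agree", "strongly agree", "agree", "disagree"]
print(composite_score(answers))  # 3.0
```

Averaging item codes this way is what makes group means like "2.9 (0.5)" in the tables meaningful on the original 1-4 scale.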
This variation in rating scales for participants' perspectives on NERVE, across the general and overall questions, provides more validity for our assessment of the tool based on user feedback. Descriptive statistics were determined for all variables. To examine differences in survey items between students, residents, and clinicians, chi-square tests were used for categorical data, one-way analysis of variance (ANOVA) for continuous data, and Kruskal-Wallis tests for ordinal data. A Tukey-Kramer multiple comparison procedure was used to examine post hoc differences between students, residents, and clinicians for ANOVA models. Finally, to examine differences between MS year 1, MS year 2, and MS year 3 and 4 (combined) students, chi-square tests for categorical data, one-way ANOVA for continuous data, or Kruskal-Wallis tests for ordinal data were used. All statistical analyses were performed using SAS 9.2, and statistical significance was assessed at an alpha level of 0.05 unless otherwise noted.

Results

The majority of our participants were students, male, and white. Table 1 gives the Kruskal-Wallis or one-way ANOVA results examining differences between students, residents, and clinicians. Several differences were detected across outcomes. Students rated their physical exam skills, neurologic exam skills, and differential diagnosis skills lower than clinicians and residents did. Students had played video games, played video games featuring humans, and used the Wii® more often than clinicians. Students were more anxious about the neurological exam than both residents and clinicians and felt less prepared to perform the exam than either group. Interestingly, there were no statistically significant differences in measures related to the actual performance of the exam, the controller, overall benefit of the experience, use of technology, or satisfaction with the technology.
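The test-selection scheme described in the Methods (chi-square for categorical data, one-way ANOVA for continuous data, Kruskal-Wallis for ordinal ratings, with Tukey-style post hoc comparisons) was carried out in SAS 9.2; as a rough illustration of the same scheme, a SciPy sketch with hypothetical ratings (not the study's data) might look like:

```python
from scipy import stats

# Hypothetical 4-point ratings for three groups (NOT the study's data).
students   = [3, 4, 3, 2, 4, 3, 3]
residents  = [2, 3, 3, 2, 2, 3, 2]
clinicians = [2, 2, 3, 2, 2, 2, 3]

# Ordinal survey items: Kruskal-Wallis test across the three groups.
h_stat, kw_p = stats.kruskal(students, residents, clinicians)

# Continuous measures (e.g., years of training): one-way ANOVA ...
f_stat, anova_p = stats.f_oneway(students, residents, clinicians)

# ... with a Tukey-style post hoc comparison of the group pairs.
post_hoc = stats.tukey_hsd(students, residents, clinicians)

# Categorical outcomes (e.g., correct vs. incorrect identification):
# chi-square test on a contingency table of [correct, incorrect] counts.
chi2, chi_p, dof, expected = stats.chi2_contingency([[40, 13], [3, 4], [7, 2]])

print(round(kw_p, 3), round(anova_p, 3), round(chi_p, 3))
```

Matching the test to the measurement level matters here because the 1-4 forced-choice items are ordinal, so a rank-based test like Kruskal-Wallis avoids assuming the response categories are equally spaced.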
Regarding similarity of the patient simulation to a real exam, residents gave higher scores than clinicians. The general trend showed that overall opinion of NERVE was positive, but not overwhelmingly so. Trends indicate that clinicians and MS1s were generally more critical of NERVE. Most feedback focused on frustrations with technical constraints; complaints about the speech-recognition software were our most frequent feedback. Even with technical limitations, clinicians, residents, and medical students favored the view that NERVE has educational value. All groups also viewed their time with NERVE as a beneficial experience.

Table 1. Kruskal-Wallis tests or one-way ANOVA for differences between participant types. Values are mean (SD) unless noted; columns are Clinicians / Residents / All Students, with p-values where reported. Cells unrecoverable from the source are marked "…".

Participants, n (%): 9 (13) / 7 (10) / 53 (77)
Years of Post-Undergraduate Education/Training: 17.1 (8.0) / 7.6 (1.0) / 1.6 (0.6); p<0.0001
Physical Exam Skills (1=lowest, 5=highest): 3.6 (0.5) / 4.3 (0.5) / 2.3 (1.1); p<0.0001
Differential Diagnosis Skills (1=lowest, 5=highest): 3.7 (0.5) / 4.3 (0.5) / 2.0 (0.9); p<0.0001
Neurological Exam Skills (1=lowest, 5=highest): 2.9 (0.8) / 3.7 (1.0) / 2.2 (1.0); p=0.0035
Play Video Games (1=never, 5=daily): 2.0 (0.9) / 2.7 (0.8) / 3.0 (1.0); p=0.0204
Play Video Games Featuring Humans: 1.8 (1.0) / 2.5 (0.8) / 2.8 (1.0); p=0.0207
Experience with Wii® (1=never, 5=daily): 1.8 (1.0) / 2.7 (1.1) / 2.6 (0.8); p=0.0314
Anxious about performing Neurologic Exam (1=strongly disagree, 4=strongly agree): 1.7 (0.9) / 1.9 (0.7) / 2.6 (0.7)
Prepared to perform Neurologic Exam (1=strongly disagree, 4=strongly agree): 3.3 (0.7) / 3.0 (1.0) / 2.0 (1.0)
Overall NERVE Interview Experience (1=worst, 10=best): 5.4 (2.3) / 5.7 (1.6) / 5.1 (1.9)
Educational value of NERVE on Exam Skill (1=strongly disagree, 4=strongly agree): 2.9 (0.5) / 2.8 (0.7) / 3.1 (0.7)
NERVE's positive influence on Exam Skill (1=strongly disagree, 4=strongly agree): 3.2 (0.6) / 2.6 (0.3) / 2.9 (0.4)
Controller Usability Score (1=strongly disagree, 4=strongly agree): 2.7 (0.4) / 2.5 (0.5) / 2.9 (0.7)
Overall Satisfaction with Controller (1=strongly disagree, 4=strongly agree): 3.0 (0.7) / 3.3 (0.6) / …
Overall Beneficial Experience (1=strongly disagree, 4=strongly agree): … / … / …
Similarity of Simulation to Real Patient (1=not realistic, 10=realistic): 4.8 (1.7) / 6.9 (0.9) / 5.7 (1.5)
Technology Enabled Effective Exam (1=strongly disagree, 4=strongly agree): 2.5 (0.5) / 2.6 (0.6) / 2.5 (0.5)
Satisfaction with Technology (1=strongly disagree, 4=strongly agree): … / 2.7 (0.8) / 2.6 (0.6)
Correct Cranial Nerve, n (% correct): 7 (87.5%) / 3 (42.9%) / 40 (75.5%)

Table 2 examines differences between medical program years among students; statistically significant differences appeared only for the rating of the patient interview. Third- and fourth-year medical students rated their patient interview higher than both first- and second-year medical students. No other statistically significant differences were detected between medical program years.

Table 2. Kruskal-Wallis tests or one-way ANOVA for differences between medical school years.
Values are mean (SD) unless noted; columns are MS 3&4 / MS 2 / MS 1, with p-values where reported. Cells unrecoverable from the source are marked "…".

Participants, n (% of total students): 8 (15) / 20 (38) / 25 (47)
Overall NERVE Interview Experience (1=worst, 10=best): 6.6 (1.6) / 4.7 (1.8) / 4.8 (1.9); p=0.0339
Educational value of NERVE on Exam Skill (1=strongly disagree, 4=strongly agree): 3.0 (0.7) / 2.9 (0.6) / 3.1 (1.0)
NERVE's positive influence on Exam Skill (1=strongly disagree, 4=strongly agree): 3.2 (0.4) / 3.2 (0.7) / 2.7 (0.3)
Controller Usability Score (1=strongly disagree, 4=strongly agree): 2.1 (0.6) / 3.1 (0.7) / 3.1 (0.6)
Overall Satisfaction with Controller (1=strongly disagree, 4=strongly agree): 3.3 (0.5) / 3.2 (0.6) / …
Overall Beneficial Experience (1=strongly disagree, 4=strongly agree): … / … / …
Similarity of Simulation to Real Patient (1=not realistic, 10=realistic): 6.5 (1.1) / 5.7 (1.7) / 5.6 (1.4)
Technology Enabled Effective Exam (1=strongly disagree, 4=strongly agree): 3.0 (0.0) / 2.5 (0.7) / 2.6 (0.6)
Satisfaction with Technology (1=strongly disagree, 4=strongly agree): … / 2.6 (0.6) / 2.6 (0.6)
Correct Cranial Nerve, n (% correct): 8 (100%) / 12 (60%) / 20 (80%)

Figure 2 shows the overall trend across groups in correctly identifying the cranial nerve deficit. With all students combined, clinicians identified the lesion correctly most often. However, when the student groups were separated, all of the MS 3&4s identified the lesion correctly, scoring higher than the clinicians. The lowest-scoring group in the entire study was the residents, who identified the lesion incorrectly more often than either the combined or the separated student groups.

Figure 2. Percentage correct for cranial nerve lesion identification by skill level

Discussion

Our results are consistent with past studies evaluating simulation in the medical field for educational purposes, showing positive feedback from students. Students report benefiting because they can learn at their own pace, with the added opportunity for feedback provided by the technology. This suggests the technology has potential as a useful tool for teaching and practicing cranial nerve exams. Continued development and improvement of NERVE's technical aspects will likely make the tool more effective for its purpose. Our study shows that NERVE is accepted in an educational setting and points to the potential for this technology to have a teaching impact as the technology improves. The students had the opportunity to recognize cranial nerve deficits, a potential complication of strokes, tumors, and other processes, as well as to practice thorough history-taking and examination skills for diagnosis. Residents, as a group, responded with the highest approval for their overall experience with NERVE, and they also rated NERVE as most resembling an actual patient. However, they also had the lowest percentage of correct cranial nerve deficit identification. More research is needed to determine whether increased exposure to NERVE improves performance on cranial nerve exams, and to establish the validity of the virtual patient experience for measuring participants' diagnostic skills. Another future aspect of study is the cost effectiveness of the tool. Our expenses for running the study included a television, the computer to run the software, the Wii®, and multiple infrared cameras; we estimate this equipment costs a few thousand dollars. The startup investment would be higher with this technology than with preparing a standardized patient.
However, in the long term, paying a standardized patient by the hour for every student should prove more expensive than the high initial cost of a virtual patient that is available to all students at any time, though a full cost assessment is needed in the future. Most participants had no problem with the controller and in fact liked having all the tools in one controller. Being able to fully implement the clinical tools in a clinical setting, however, will remain a major challenge. Another challenge is the speech-recognition technology, which performed worse with female participants. With continued programming and modifications, these issues can potentially be fixed. Our next study, however, will focus on whether repeat sessions with NERVE improve a medical student's capability to perform a complete cranial nerve exam, thereby teaching the exam through direct practice. In its current state, NERVE provides an additional resource that allows medical students and residents to interact with patients with rare cranial nerve deficits. Infrequent presentations can mean a lack of exposure to patients with cranial lesions. These virtual patients provide exposure to, and proficiency in, abnormal findings that are difficult to include in any medical school curriculum. With this tool we hope to meet the Consortium of Neurology Clerkship Directors' goal to optimize methods of teaching the neurologic examination to medical students.

Bio-Algorithms and Med-Systems, de Gruyter

Publisher
de Gruyter
Copyright
Copyright © 2012
ISSN
1895-9091
eISSN
1896-530X
DOI
10.2478/bams-2012-0001

Abstract

Technology has rarely attempted to simulate a CN exam. NERVE simulates a life-size virtual patient (VP), using speech recognition with a Nintendo Wiimote® serving as a virtual hand, ophthalmoscope, and eye-chart. This study assesses the introductory reception, ability to identify the CN lesion, and students' preference of NERVE. Our goal is to evaluate the responses from medical students, residents, and clinicians using the Neurological Examination Rehearsal Virtual Environment (NERVE), a cranial nerve (CN) exam simulator. Medical College of Georgia participants from a variety of medical specialties, including 9 clinicians, 7 residents, and 8 MS3 and 4s, 20 MS 2s, and 25 MS 1s performed a CN examination on a VP. There were no statistically significant differences in measures related to the actual performance of the exam, the controller, overall benefit of the experience, use of technology or satisfaction with the technology. Even with technical limitations, overall medical student's reported NERVE having educational value. Residents had the lowest rate of correct CN identification, indicating they could be the group that most benefits from repeat exposure to CN exams. Medical students and clinicians were the best groups at identifying the correct deficit for our simulation. The next step is to assess NERVE's capability to teach students and residents the cranial nerve exam. KEYWORDS: Cranial Nerve exam, neurological disease/diagnosis, communication, education-medical, user-computer interface, computer simulation, clinical clerkship. Introduction To diagnose cranial nerve deficits in a patient with a hemorrhage, stroke, or tumor, the clinician is required to have astute history collecting and physical examination skills. While imaging is still important for confirmation of the diagnosis, Radiologists depend on the clinician's exam for diagnosis [1]. 
For students and residents education, outside the chance encounter with actual patients, there has been no consistent method to teach and evaluate their clinical skills in diagnosing these deficits [14]. Current methods of assessing student clinical proficiency during the Neurological Clerkship include the Bedside Examination Evaluation where a preceptor evaluates based on history-taking and physical examination skills [2]. There are many issues associated with this current method of evaluation found in Neurology clerkship and other clerkships including the subjectivity of evaluators [3-6]. The students are not evaluated from a diagnostic standpoint because the patients used for evaluation are either clinical or standardized. Recruiting real patients can prove to be very costly, difficult, and time consuming since patients tend to also lack significant physical exam findings. Training standardized patients to simulate neurological processes can be especially difficult to demonstrate because their presentations can be very difficult. Therefore, uniformly evaluating students and residents tend to also be extremely difficult and subjective. The students also report a lower level of knowledge of the neurological exam and lower confidence in their abilities due to the current educational practice that limit educational opportunities [7-8]. The use of virtual interactive environments has been increasing in recent years to improve skills including in the doctor-patient relationship, posttraumatic stress disorder, and even to held with speech therapy [10-12]. The virtual patient simulation uses virtually animated characters that portray abnormal clinical findings, providing an immersive learning environment that augments the user's available learning opportunities [14]. In general, studies of these simulations have shown positive potential, showing larger positive effect compared to no intervention and promoting clinical reasoning skills [17, 18]. 
Neurological examination however has not experienced the same technological development. Neurological physical examination techniques have been simulated, but have been limited to a few components like the fundoscopic exam, using sliding photos of abnormal retinas placed into mannequin eyes [19]. Virtual simulation have also been limited to abnormal findings. University of San Diego developed a purely virtual web-based eye simulator that demonstrates abnormal eye movement caused by cranial nerve palsies in Cranial Nerve 3, 4, and 6 [20]. However to our knowledge, there are currently no virtual tools that combine the interpersonal aspects of a history and physical with the abnormal neurological findings. Computer scientists at the University of Florida along with joint research partnerships at the University of Georgia and the Medical College of Georgia (MCG) have created the Neurological Examination Rehearsal Virtual Environment (NERVE) tool. Using the Virtual Patient system developed at Florida as the foundation for the simulation technology, the research group at MCG has been adapting the computer-based simulation to a clinical setting. The motive for creating NERVE was to provide an additional resource for developing practical clinical skills including history taking and physical examination to properly examine, recognize, and diagnosis neurological deficits [9]. Having a virtual patient means this resource is potentially available at all times. This allows for self-directed learning and more consistent evaluation, which gives users more control and has been shown to improve learning [13]. The NERVE tool is capable of simulating a virtual life-sized patient with a cranial nerve deficit that can be examined by speech recognition and motion tracking using a Nintendo Wiimote®. The Wiimote® acts as an onscreen virtual hand, ophthalmoscope, or eye chart used concordantly with a spoken medical interview involving the patient during the virtual exam. 
The scenario of the virtual patient's diplopia describes has been shown to have good content validity [14]. The importance of simulating an environment that is similar to the actual clinical setting is because memory improves when the learning occurs in a similar context to which it is used [15]. Thus, the added immersion provided by the interface allows students and residents to more readily transfer skills learned in the virtual experience to the clinic and wards. In this study, responses were gathered from clinicians, residents, and medical students. Our goal was to evaluate the responses for the usability and the user's experience of the technology and compare between each group to determine future impact of the technology. For the benefit of this study, educational value was assessed as the participant's appraisal of their examination skills in relation to whether or not it benefitted from using the tool. Correct identification of the cranial nerve deficit was also assessed. Figure 1. A medical student uses NERVE to perform a Cranial Nerve exam Methods All participants were recruited from MCG, including 9 clinicians from a variety of medical specialties, 7 residents from various specialties, 8 medical student upper classmen (MS3 and 4s], 20 MS 2s, and 25 MS 1s. Their general demographics, clinical experience, and video game experience were assessed using a background survey given to all participants before beginning. Next, participants were allowed to familiarize themselves with the tool by receiving a written tutorial explaining how to operate NERVE. Once ready, all participants were encouraged to gather a complete history and perform a cranial nerve exam while also being informed of the technological constraints of certain components of the cranial nerve exam. 
Since MS1s at MCG were at a stage where they have only been taught how to gather history but have not yet been taught the physical exam, a full written guide with stepby-step instructions of a cranial nerve exam was given only to their group. Finally, all the participants' self-reported perspective was gathered with a post-survey that assessed the overall experience, educational value, and usability. They were also asked to identify which cranial nerve lesion our virtual patient was suffering from. Our surveys used Likert rating scale, which provided our participants with a range that captures the intensity of each answer. Our background experience questions used a 1-5 scale based on if the participant had no, low, moderate, high, and very high amount of experience, respectively. Our postsurvey used two types of scales. The first was a four-point forced-choice scaling method, based on strongly disagreeing, disagreeing, agreeing, and strongly disagreeing, respectively [16]. This scale was used to evaluate the participant's response to statements on their experience of the exam, their own skills, and their perspective of the NERVE tool. The post-survey defines the educational value as examination skills, with these questions asked of the participants: I felt like my examination skills benefited in this study, Repeated exposure to this technology would improve examination techniques, I would use this technology in the future to help improve my exam techniques, I would recommend using this technology to practice examination techniques, I would recommend using this technology to improve examination techniques. The second scale used was a 1-10 severity scale to measure the impact of the interview and the realism of the tool. These two questions needed more options for the participants to express their intensity of feelings, providing better assessment of their responses[16]. 
This variation in rating scale for the participants perspective on NERVE in the general and overall questions provide more validity for our assessment of the tool based on user feedback. Descriptive statistics were determined for all variables. To examine differences in survey items between students, residents and clinicians, chisquare tests for categorical data, one-way analysis of variance (ANOVA) for continuous data, or Kruskal-Wallis tests for ordinal data were used. A TukeyKramer multiple comparison procedure was used to examine post hoc differences between students, residents and clinicians for ANOVA models. Finally, to examine differences between MS year 1, MS year 2, and MS year 3 and 4 (combined) students, chi-square tests for categorical data, one-way ANOVA for continuous data or Kruskal-Wallis tests for ordinal data were used. All statistical analyses were performed using SAS 9.2 and statistical significance was assessed using an alpha level of 0.05 unless otherwise noted. Results The majority of our participants were students, male, and white. Table 1 gives the results for the Kruskal Wallis or one-way ANOVA results examining differences between students, residents and clinicians. Several differences were detected for various outcomes. Students rated their physical exam skills, neurologic exam skills, and differential diagnosis skills lower than clinicians and residents. Students had played video games, played video games featuring humans, and had experience with the Wii® more often than clinicians. Students were more anxious about the neurological exam than both residents and clinicians and felt less prepared to perform the exam than both clinicians and residents. Interestingly, there were no statistically significant differences in measures related to the actual performance of the exam, the controller, overall benefit of the experience, use of technology, or satisfaction with the technology. 
Regarding similarity of the patient simulation to a real exam, residents had higher scores than clinicians. The general trend showed that the overall opinion of NERVE was positive, but not overwhelmingly so. Clinicians and MS1s were in general more critical of NERVE. Most feedback focused on frustrations with technical constraints; complaints about the speech-recognition software were the most frequent. Even with these technical limitations, clinicians, residents, and medical students agreed that NERVE has educational value, and all groups viewed their time with NERVE as a beneficial experience.

Table 1. Kruskal-Wallis tests or one-way ANOVA for differences between participant types. Values are mean (SD) unless otherwise noted; – = not available.

Variable | Clinicians | Residents | All Students | p-value
Participants, n (%) | 9 (13) | 7 (10) | 53 (77) |
Years of Post-Undergraduate Education/Training | 17.1 (8.0) | 7.6 (1.0) | 1.6 (0.6) | <0.0001
Physical Exam Skills (1=lowest, 5=highest) | 3.6 (0.5) | 4.3 (0.5) | 2.3 (1.1) | <0.0001
Differential Diagnosis Skills (1=lowest, 5=highest) | 3.7 (0.5) | 4.3 (0.5) | 2.0 (0.9) | <0.0001
Neurological Exam Skills (1=lowest, 5=highest) | 2.9 (0.8) | 3.7 (1.0) | 2.2 (1.0) | 0.0035
Play Video Games (1=never, 5=daily) | 2.0 (0.9) | 2.7 (0.8) | 3.0 (1.0) | 0.0204
Play Video Games Featuring Humans | 1.8 (1.0) | 2.5 (0.8) | 2.8 (1.0) | 0.0207
Experience with Wii® (1=never, 5=daily) | 1.8 (1.0) | 2.7 (1.1) | 2.6 (0.8) | 0.0314
Anxious about performing Neurologic Exam (1=strongly disagree, 4=strongly agree) | 1.7 (0.9) | 1.9 (0.7) | 2.6 (0.7) | –
Prepared to perform Neurologic Exam (1=strongly disagree, 4=strongly agree) | 3.3 (0.7) | 3.0 (1.0) | 2.0 (1.0) | –
Overall NERVE Interview Experience (1=worst, 10=best) | 5.4 (2.3) | 5.7 (1.6) | 5.1 (1.9) | –
Educational value of NERVE on Exam Skill (1=strongly disagree, 4=strongly agree) | 2.9 (0.5) | 2.8 (0.7) | 3.1 (0.7) | –
NERVE's positive influence on Exam Skill (1=strongly disagree, 4=strongly agree) | 3.2 (0.6) | 2.6 (0.3) | 2.9 (0.4) | –
Controller Usability Score (1=strongly disagree, 4=strongly agree) | 2.7 (0.4) | 2.5 (0.5) | 2.9 (0.7) | –
Overall Satisfaction with Controller (1=strongly disagree, 4=strongly agree) | 3.0 (0.7) | 3.3 (0.6) | – | –
Overall Beneficial Experience (1=strongly disagree, 4=strongly agree) | – | – | – | –
Similarity of Simulation to Real Patient (1=not realistic, 10=realistic) | 4.8 (1.7) | 6.9 (0.9) | 5.7 (1.5) | –
Technology Enabled Effective Exam (1=strongly disagree, 4=strongly agree) | 2.5 (0.5) | 2.6 (0.6) | 2.5 (0.5) | –
Satisfaction with Technology (1=strongly disagree, 4=strongly agree) | – | 2.7 (0.8) | 2.6 (0.6) | –
Correct Cranial Nerve, n (% correct) | 7 (87.5%) | 3 (42.9%) | 40 (75.5%) | –

Examining differences between medical program years among students (Table 2) showed a statistically significant difference only for the rating of the patient interview: third- and fourth-year medical students rated their patient interview higher than both first- and second-year medical students. No other statistically significant differences were detected between medical program years.

Table 2. Kruskal-Wallis tests or one-way ANOVA for differences between medical school years. Values are mean (SD) unless otherwise noted; – = not available.
Variable | MS 3&4 | MS 2 | MS 1 | p-value
Participants, n (% total students) | 8 (15) | 20 (38) | 25 (47) |
Overall NERVE Interview Experience (1=worst, 10=best) | 6.6 (1.6) | 4.7 (1.8) | 4.8 (1.9) | 0.0339
Educational value of NERVE on Exam Skill (1=strongly disagree, 4=strongly agree) | 3.0 (0.7) | 2.9 (0.6) | 3.1 (1.0) | –
NERVE's positive influence on Exam Skill (1=strongly disagree, 4=strongly agree) | 3.2 (0.4) | 3.2 (0.7) | 2.7 (0.3) | –
Controller Usability Score (1=strongly disagree, 4=strongly agree) | 2.1 (0.6) | 3.1 (0.7) | 3.1 (0.6) | –
Overall Satisfaction with Controller (1=strongly disagree, 4=strongly agree) | 3.3 (0.5) | 3.2 (0.6) | – | –
Overall Beneficial Experience (1=strongly disagree, 4=strongly agree) | – | – | – | –
Similarity of Simulation to Real Patient (1=not realistic, 10=realistic) | 6.5 (1.1) | 5.7 (1.7) | 5.6 (1.4) | –
Technology Enabled Effective Exam (1=strongly disagree, 4=strongly agree) | 3.0 (0.0) | 2.5 (0.7) | 2.6 (0.6) | –
Satisfaction with Technology (1=strongly disagree, 4=strongly agree) | – | 2.6 (0.6) | 2.6 (0.6) | –
Correct Cranial Nerve, n (% correct) | 8 (100%) | 12 (60%) | 20 (80%) | –

Figure 2 shows the overall trend across groups in correctly identifying the cranial nerve deficit. With all students combined, the clinicians identified the lesion correctly most often. When the student groups were separated, however, all of the MS 3&4s identified the lesion correctly, scoring higher than the clinicians. The lowest-scoring group in the entire study was the residents, who identified the lesion incorrectly more often than either the combined or the separated student groups.

Figure 2.
Percentage correct for cranial nerve lesion identification by skill level.

Discussion

Our results are consistent with past studies evaluating simulation in the medical field for educational purposes, which likewise report positive feedback from students. Students benefit because they can learn at their own pace, with the added opportunity of feedback provided by the technology. This suggests that the technology has potential as a useful tool for teaching and practicing cranial nerve exams; continued development and improvement of NERVE's technical aspects will likely make the tool more effective for its purpose. Our study shows that NERVE is accepted in an educational setting and points toward a teaching impact as the technology improves. The students had the opportunity to recognize cranial nerve deficits, a potential complication of strokes, tumors, and other processes, while also practicing thorough history-taking and examination skills for diagnosis. Residents, as a group, gave the highest ratings for overall experience with NERVE and rated it as most resembling an actual patient; however, they also had the lowest percentage of correct cranial nerve deficit identification. More research is needed to determine whether increased exposure to NERVE improves performance on cranial nerve exams, and to establish the validity of the virtual patient experience for measuring participants' diagnostic skills. Another aspect for future study is the cost-effectiveness of the tool. Our expenses for running the study included a television, the computer to run the software, the Wii®, and multiple infrared cameras, which we estimate at a few thousand dollars. The startup investment is higher for this technology than for preparing a standardized patient.
However, in the long term, paying a standardized patient by the hour for every student should cost more than the high initial investment in a virtual patient that is available to all students at any time, though a full cost assessment is still needed. Most participants had no problem with the controller and in fact liked having all the tools combined in one controller. Being able to fully implement the clinical tools in a clinical setting, however, will remain a major challenge. Another is the speech-recognition technology, which performed worse with female participants. With continued programming and modification, these issues can potentially be fixed. Our future study will focus on whether repeated sessions with NERVE improve a medical student's ability to perform a complete cranial nerve exam, thereby teaching them in the process through direct contact. In its current state, NERVE provides an additional resource that allows medical students and residents to interact with patients with rare cranial nerve deficits. Infrequent presentations can mean a lack of exposure to patients with cranial lesions; these virtual patients provide exposure to, and proficiency with, abnormal findings that are difficult to include in any medical school curriculum. With this tool we hope to meet the Consortium of Neurology Clerkship Directors' goal of optimizing methods of teaching the neurologic examination to medical students.

Journal: Bio-Algorithms and Med-Systems (De Gruyter)
Published: Jan 1, 2012
