Building an audio/video-feedback system for simulation training in medical education: a project study and cookbook using open source software

Background: Simulation training in medical education is a valuable tool for skill acquisition. Standard audio/video-feedback systems for training surveillance and subsequent video feedback are expensive and often not available.

Methods: We investigated solutions for a low-budget audio/video-feedback system based on consumer hardware and open source software.

Results: Our results indicate that inexpensive, movable network cameras are suitable for high-quality video transmission, including bidirectional audio transmission and an integrated streaming platform. In combination with a laptop, a WLAN connection, and the open source software iSpyServer, one or more cameras represent the simplest, yet fully functional audio/video-feedback system. For streaming purposes, the open source software VLC media player offers comprehensive functionality. Using the powerful VideoLAN Media Manager, it is possible to generate a split-screen video comprising different video and audio streams. Optionally, this system can be augmented by analog audio hardware. In this paper, we present how these different modules can be set up and combined to provide an audio/video-feedback system for a simulation ambulance.

Conclusions: We conclude that open source software and consumer hardware offer the opportunity to build a low-budget, feature-rich, and high-quality audio/video-feedback system that can be used in realistic medical simulations.

Keywords: audio/video-feedback system; audio/video system; medical simulation; simulation training; streaming.
Abbreviations: FPS, frames per second; H.264/MPEG-4 AVC, H.264 or Moving Picture Experts Group-4 Advanced Video Coding; HTTP, Hypertext Transfer Protocol; IP, Internet Protocol; MJPEG, Motion JPEG (Joint Photographic Experts Group); SAR, sample aspect ratio; URL, Uniform Resource Locator; VLM, VideoLAN Media Manager; x264, software library for encoding streams in the H.264/MPEG-4 AVC format.

Moritz Mahling and Alexander Münch contributed equally to this work.

*Corresponding author: Christoph Castan, Medical School, Faculty of Medicine, University of Tübingen, 72076 Tübingen, Germany, E-mail: christoph.castan@agn-tuebingen.de
Moritz Mahling, Alexander Münch and Leopold Haffner: Medical School, Faculty of Medicine, University of Tübingen, Tübingen, Germany
Paul Schubert, Andreas Manger and Jörg Reutershan: Department of Anesthesiology and Intensive Care Medicine, University of Tübingen, Tübingen, Germany
Jan Griewatz: Competence Centre for University Teaching in Medicine, Faculty of Medicine, University of Tübingen, Tübingen, Germany
Nora Celebi: Ärztezentrum Ostend, Stuttgart, Germany
Reimer Riessen: Department of Internal Medicine, Medical Intensive Care Unit, University of Tübingen, Tübingen, Germany
Verena Conrad: DocLab, Faculty of Medicine, University of Tübingen, Tübingen, Germany
Anne Herrmann-Werner: DocLab, Faculty of Medicine, University of Tübingen, Tübingen, Germany; and Department of Internal Medicine, Psychosomatic Medicine and Psychotherapy, University of Tübingen, Tübingen, Germany

Introduction

Patient safety is an important aspect of medicine and should consequently be integrated into medical education. Currently, with increasingly sophisticated simulators, performing routine procedures on patients without prior simulation training is no longer appropriate [1].
Although simulators initially focused on the acquisition of procedural and individual skills, simulation training can also be used to train clinical and team-based skills in more complex scenarios [2]. In medical education, the following types of simulation can be used [2]: manikin-based simulation, part-task trainers, simulated patients, virtual reality trainers, and screen-based simulators. In this article, we focus on manikin-based and simulated patient training, for which the described audio/video-feedback system is commonly used to perform a video-enhanced debriefing following the simulation training.

Although many institutions use simulation-based training in daily practice, evidence supporting the superiority of simulation training over other teaching techniques is still limited. Simulation training is well established, and this type of training has been investigated for specific procedures (e.g., Maran and Glavin [3]). According to a recent meta-analysis by Lorello et al., simulation is associated with moderate to large effects on satisfaction, skills, and behavior, but small or negligible effects on patient-oriented outcomes, knowledge, and time [4].

The use of (video) debriefing after simulation training is a common practice for consolidating the acquired skills and knowledge. Feedback given in this way is an important requisite for consolidating knowledge [5]. Savoldelli et al. found a lack of improvement when debriefing was not offered for a simulation scenario, particularly in complex, non-technical skill training [i.e., Crisis Resource Management (CRM) team training] [6]. However, the evidence supporting the addition of video playback to oral feedback is controversial: most trials showed no disadvantages related to video debriefing, though it was not always a benefit for the participant [4].
Video debriefing allows participants to realize what they actually did, not what they thought they were doing [5]. Debriefers should avoid using very long or unrelated video content, which could distract from the discussed topic [5].

In the context of developing a simulation ambulance (SIMON), we decided to integrate an audio/video-feedback system based on consumer hardware and open source software. During this process, we developed different solutions. This article focuses on these solutions and describes their setup.

Anatomy of an audio/video-feedback system

The structure of a simulation area

The classic simulation area structure consists of a simulation area, a control room, and a debriefing room [1] (Figure 1, Table 1). All these areas are usually connected by an audio/video system. The simulation area (Figure 1A) represents the core element of every simulation center. Depending on the type of simulation (low-fidelity/single-task simulation vs. high-fidelity/full-scale simulation center), the simulation area can differ in size and demands. For full-scale simulations, its size should ideally match that of the original environment (such as an operating room, trauma room, or ambulance).

Figure 1: Example room setup of a high-fidelity simulator. Pictured are the simulation area (A) with a simulation manikin and cameras, the control room (B) to control the manikin and audio-video transmission, and the debriefing area (C) with a video projector for live streaming and video debriefing.

Table 1: The anatomy of a simulation center.
Room                 Description                                                         Low fidelity   High fidelity/full scale   Simulation ambulance
Simulation area      Medical environment, manikin                                        1 camera       Multiple cameras           3-4 cameras
Control room         Control simulator, bidirectional audio communication, observation   (+)            +                          +
Debriefing room      Replay recorded videos for debriefing                               (+)            +                          +
Live broadcast room  Live observation/audio-video broadcast from the simulation area     -              (+)                        +

When controlling a scenario with a simulation manikin, a control room (Figure 1B) is needed. A physical separation between the simulation area and the control room is useful to achieve auditory and visual separation; thus, for example, the trainers can communicate without being heard by the participants. To establish a one-way visual connection between the simulation area and the control room, mirror surfaces for direct observation or a live view via cameras for indirect observation can be installed. To communicate with the simulation area, a bidirectional audio connection must be set up to speak through the manikin or give additional information about the simulation (the so-called voice of god).

In contrast to the strenuous setting in the simulation area, the debriefing room (Figure 1C) should represent a comfortable area for all trainees (usually not more than 12-15 persons) to create a pleasant atmosphere for the debriefing [1]. The debriefing room can also be used as a live view room for those trainees not involved in the actual simulation to follow the scenario passively. For video debriefing, recordings of the training scenario (e.g., on a laptop), a video projector, and an audio system must be available. If the system is also to be used for live viewing, a live view connection is also necessary.

Different types of audio/video systems and methods for their application are discussed below. Ultimately, each simulation center requires a tailored solution to meet all needs and requirements, which have to be defined in advance. Although there is no "standard solution," professional systems available on the market usually include a simulation manikin, one to three cameras for video recording, audio transmission between trainer and trainee, and solutions for video storage and data management.

The basics

This article describes several hardware and software components that can be combined to set up a simulation environment. Most of the software components are available open source and free of charge, with the exception of Windows (Microsoft Corporation, Redmond, WA, USA), which we use as the operating system for our software environment due to the software constraints of our simulated patient manikin.

The equipment

Although full-scale manikins are becoming more realistic, not all changes in vital functions can be simulated such that they are noticeable by the trainee. A simulated patient monitor with a design similar to clinically used monitors can help highlight changes in vital states. Furthermore, especially in high-fidelity simulations, large amounts of data are produced and may require storage. Therefore, sufficient storage capacity has to be provided. For in-house trainings or changing simulation sites in particular, mobile solutions with fast and easy setups can be used for simulations in familiar environments. Therefore, the entire equipment can be stored in mobile containers. For the very basic setup, these boxes have to contain a laptop, a camera, and a tripod. Because this solution is flexible and portable, it can be easily transported to a full-scale simulator and integrated into existing simulation environments.

Simulation environment in a box: a laptop and a camera

Building an audio/video-feedback system in a simulation environment can be as simple as connecting a laptop and the appropriate camera. In this section, we describe the most simple and affordable solution for a stationary and/or portable simulation environment.
Although limited to one camera position, this setup features live audio and video transmission, video recording, and bidirectional audio communication, and it can be set up within a few minutes.

The hardware requirements regarding processor speed, memory, and disk space depend strongly on the camera used. However, we recommend using a laptop with at least 2 GB of memory, a modern dual-core 2-GHz processor, and, if possible, a solid-state disk.

The camera

For the purpose of this paper, we have chosen the "Foscam FI9821W V2" IP (Internet Protocol) camera (ShenZhen Foscam Intelligent Technology Co. Ltd., Shenzhen, China) (Figure 2). Despite being more of a "consumer" than a "professional" product, we found that this camera provided sufficient video and audio recording quality, offered an adequate web interface, and was reasonably priced (currently 75-100). However, many other appropriate IP-based cameras can be integrated in a similar way.

The laptop

The second component of this setup is a laptop computer (Figure 2), which only requires an Ethernet and/or WLAN (wireless local area network) connection, speakers, and a microphone. If required, the laptop can be connected to a projector and audio system for larger audiences.

Figure 2: Laptop (A) and camera (B) that can be used in the simplest setup for simulation training.

Setting up and connecting the camera and laptop

The camera and laptop should be placed following the requirements of the simulation area ("Anatomy of an audio/video-feedback system" section). In most situations, it will be necessary to set up the camera and laptop in different rooms; hence, directly listening and speaking to the trainees is impossible without an audio transmission system. In our experience, digital voice communication often suffers from a (mostly minimal) transmission delay. Should the trainees be able to overhear the operator's original voice (e.g., because of insufficient sound separation), this would yield a double presentation of the audio signal, thus impeding comprehensibility.

Communication between the laptop and camera can be achieved either over a WLAN network (ad hoc or stationary) or over a wired Ethernet connection (Figure 3). If possible, a wired connection should be chosen, as it has proven more stable and reliable in practice and often yields a lower delay than wireless connections. An encrypted connection should be preferred. Once successfully set up, the camera is assigned an IP address in the local network that can be accessed from the laptop.

Figure 3: Different setups for the connection between laptop and camera. Wired Ethernet connection without (1) or with (2) a router. WLAN connection with (3) or without (4) a router.

The simulation training

The only step required to prepare the training session for recording is starting a web browser and accessing the web interface provided by the camera by entering the IP address of the camera in the address bar. Most of the time, it will be necessary to install a browser plugin and log in using a manufacturer-provided username and password. After logging in, a live transmission from the camera video stream is displayed (Figure 4). The camera can be controlled using the buttons on the left side of the screen. Audio playback (from camera to laptop) and audio recording (from laptop to camera) can be activated using buttons on the bottom-left side of the screen.
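If the web interface does not load, a quick check of whether the camera's IP address and port are reachable at all can save troubleshooting time. The following Python sketch is our own stdlib-only illustration, not part of the camera software; the address and port are placeholders that have to be replaced with the values assigned in the local network.

```python
import socket

def camera_reachable(ip: str, port: int = 88, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the camera's web interface succeeds.
    The default port 88 is only an assumption; use the port configured on the camera."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical address -- substitute the IP assigned in your local network:
# camera_reachable("192.168.1.11", 88)
```

A `False` result usually points to a wrong IP address, a blocked port, or a camera that has not yet joined the network.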
The integrated web interface features a built-in recording function that saves the video to the local disk of the laptop (the folder the video is saved in can be specified in the settings dialog). The recording can be started and stopped using buttons on the lower-right end of the web interface (Figure 4). An example video recording is provided in Supplemental Data, Video 1 that accompanies the article at http://www.degruyter.com/view/j/bams.2015.11.issue-2/bams-2015-0010/bams-2015-0010.xml?format=INT.

Benefits and limitations

In this section, we show how a simple audio/video-feedback system can be built using a low-budget camera and a laptop (Table 2). The benefits of this setup are the low price, the portability of the system, and the ease of setup and use. However, when recording, the video streams of the cameras are saved individually and must therefore be post-processed. Although bidirectional audio transmission is possible, the quality of the integrated microphones and speakers is limited in most cases.

Possible extensions of this setup

The described setup can be easily extended to add functionality. The web interface of our camera model supports the integration of additional cameras in a "split-screen" view. For this, the additional cameras also need to be integrated into the network using a network switch and/or WLAN access point. In the next step, the web interface can be configured to support the additional cameras. Another extension of the setup is the integration of a live broadcast room, allowing a larger audience to follow the scenario passively. This requires a second laptop or computer to be connected to the camera network.

Multiple-camera setup: iSpyServer as central relay

When more than one camera and/or cameras from different manufacturers are used and combined, we make use of the fact that every camera provides not only a web interface but also "direct" access to the video stream.
This will often be a Uniform Resource Locator (URL), usually consisting of the protocol that is used ("http," "rtmp," etc.), a colon with two slashes ("://"), the IP address of the camera (e.g., "192.168.1.11"), a specific port (e.g., 8080), and an additional resource name ("/video"). The final URL is camera-specific and will look like the following: "http://192.168.1.11/videoMain." Some cameras provide more than one stream, e.g., to support different video formats.

Figure 4: User interface for motion control (upper left, highlighted red dots), recording and audio (bottom, red dots), and camera settings (top, red dots).

Table 2: Components needed for the simulation environment in a box.

Component         Required   Functionality                                                                                        Approx. price
IP-based camera   Yes        Video acquisition (pan/tilt/zoom), audio recording, audio playback                                   75-100
Laptop with WLAN  Yes        Video playback (live/recorded), audio playback (live/recorded), audio communication, video recording 300
Wired LAN cable   Optional                                                                                                        10
Projector         Optional

A community of programmers interested in security and surveillance created the open source software package iSpy (by Sean Tearney), which mainly consists of the central software (iSpy) and an additional iSpyServer (http://www.ispyconnect.com). This software was originally designed for surveillance applications, but it can also be used to support simulation technology. For example, setting up iSpy to integrate different camera streams can be achieved using an integrated wizard in a few steps, as the iSpy package contains information about many different camera architectures (Supplemental Data, PDF 1). The layout of the streams can then be adjusted to individual needs, and the iSpy software has the ability to record videos and/or to control bidirectional audio communication using integrated buttons and a unified interface.
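The URL anatomy described above can also be sketched programmatically. The following helper is our own illustration (not part of any camera's firmware); the protocol, address, port, and resource name are the generic example values from the text, not a real device.

```python
from urllib.parse import urlunsplit

def stream_url(protocol: str, ip: str, port: int, resource: str) -> str:
    """Assemble a direct stream URL from the parts described above:
    protocol, "://", camera IP address, port, and resource name."""
    return urlunsplit((protocol, f"{ip}:{port}", resource, "", ""))

# Generic example with the illustrative values from the text:
print(stream_url("http", "192.168.1.11", 8080, "/video"))  # http://192.168.1.11:8080/video
```

For a concrete camera, the resource name and port have to be taken from the manufacturer's documentation, as they differ between models and firmware versions.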
The iSpy package also contains the iSpyServer software, a special program that can be used to relay streams. Additionally, the software can selectively capture and stream any selected display of the server it runs on (e.g., for a simulated patient monitor; "Integration of simulated patient monitor software" section). Thus, it is possible to relay all streams using one type of software, which becomes especially useful when cameras from different manufacturers are used. We use iSpyServer to relay three MJPEG [Motion JPEG (Joint Photographic Experts Group)] streams from three cameras as well as our simulated patient monitor as the fourth stream. (For the camera models we used, we recommend the MJPEG stream, as it yields a significantly lower delay than the H.264 stream. Therefore, one of the streams has to be set to the MJPEG format using the following command: http://IP_ADDRESS:PORT/cgi-bin/CGIProxy.fcgi?usr=USERNAME&pwd=PASSWORD&cmd=setSubStreamFormat&format=1.)

Let the world know: split-screen streaming using VLC media player

Introduction to streaming with the VLC media player

When more than one camera is used, broadcasting in a split-screen view (e.g., a 2×2 matrix) may be favorable. In professional video processing, this is usually achieved using special hardware. However, the capacity of current computers and the availability of functional open source software also allow for processing the video (e.g., combining four streams into one) and broadcasting the result in a standard format to the network (e.g., as a video stream). We found the widespread VLC media player (VideoLAN non-profit organization, Paris, France; version 2.2.0, 64 bit) to be suitable for video processing and broadcasting.

Controlling the VLC media player using a VLM file

It is important to recognize that the VLC media player can be controlled by means other than the graphical interface.
The software incorporates the VideoLAN Media Manager (VLM), which makes it possible to control multiple streams using one VLC instance. The settings used by VLM can be defined in a configuration file named "*.vlm". In this file, the required video and audio inputs are defined. These are sent to a module called mosaic via a mosaic bridge. Afterwards, the mosaic module is configured to arrange the inputs in a desired way (e.g., in a 2×2 matrix), transcode this matrix to a specific video format (e.g., H.264/MPEG-4 AVC), and stream the video.

Initially, we start building the VLM configuration file as follows:

# Camera 1 configuration:
# Set up a new element named "Cam1"
new Cam1 broadcast enabled
# Define the input (fetch the iSpyServer stream from localhost)
setup Cam1 input http://127.0.0.1/?camid=1
# Define the amount of caching (we chose 800 ms)
setup Cam1 option network-caching=800
# Set the fps to 30/sec. Adding "/1" is needed.
setup Cam1 option image-fps=30/1
# Define the output. In this case, send to the mosaic bridge with ID=1
# (we will need that later) and a defined width, height, and
# corresponding sample aspect ratio (SAR)
setup Cam1 output #mosaic-bridge{id=1,width=640,height=360,sar=1.77}

With the addition of more cameras, this code is to be repeated with a changed name (Cam2, Cam3, Cam4), a changed input ("?camid=..."), and a changed mosaic-bridge ID. We use the relayed MJPEG stream sent from iSpyServer as the input. Alternatively, to fetch the stream directly from the camera, the input has to be set as follows, after replacing the IP address, username, and password with the chosen values:

setup Cam? input "http://CAMERA-IP/cgi-bin/CGIStream.cgi?cmd=GetMJStream&usr=USERNAME_HERE&pwd=PASSWORD_HERE"

(The command GetMJStream requests a stream in the MJPEG format. We observed a lower latency using this format.)

After defining the four input streams, they have to be combined into a split screen. First, a canvas (background) has to be created:

# First, we define a new stream called "bg". This is the background of the split-screen view.
new bg broadcast enabled
# As we do not need a movie as background, we just use a static image file. For this purpose, we
# use a black PNG image file with the dimensions of the final stream (i.e., 1280x720 pixels)
setup bg input "FULL_PATH_TO_BACKGROUND.png"
# We want this image file to be of unlimited duration, so we set the duration to "-1"
setup bg option image-duration=-1
# The image file should have the same frames per second
setup bg option image-fps=30/1

As we have now set up the four camera streams as well as the background "canvas", we can start the video processing as follows:

setup bg output #transcode{sfilter="mosaic",venc=x264{fps=30/1,keyint=30,min-keyint=30,crf=26,threads=8,profile=main,preset=faster,tune=zerolatency},fps=30}:bridge-in{delay=410}:std{access=http{user=USERNAME,pwd=PASSWORD,mime=video/mpeg},mux=ts,dst=:8080/}

As this is a rather complicated line of code, its separate parts are discussed individually:

setup bg output #transcode

In contrast to the camera streams, which are sent to the mosaic bridge, this output is sent to the transcode module of VLC. Subsequently, the parameters of transcode are defined. First, we use the subpicture filter (sfilter) mosaic to build the split-screen matrix:

sfilter="mosaic",

Additionally, a video format for the resulting stream has to be set using the venc parameter (video encoding). We chose x264, for which some codec-specific parameters have to be defined:

venc=x264{fps=30/1,keyint=30,min-keyint=30,crf=26,threads=8,profile=main,preset=faster,tune=zerolatency}

Afterwards, we define the frames per second (FPS) used for the transcode module:

,fps=30}:

(Note the ":" between the different commands of the output chain, which delimits the individual elements of the chain.)

Up to this point, we have only processed the video signal. To additionally import the audio signal from the audio bridge, we need to use the bridge-in command. As there might be a shift in the timing of the audio and video signals, an additional delay can be specified in milliseconds for the bridge-in command:

:bridge-in{delay=410}

Now that we have processed and merged the audio and video signals, these need to be streamed.
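Because the per-camera VLM stanzas differ only in the element name, the camid, and the mosaic-bridge ID, and because the pixel offset list needed later when VLC is started follows a simple grid rule, both can be generated instead of typed by hand. The following Python helper is our own sketch, not part of VLC or iSpy; the parameter values mirror the ones used above.

```python
def camera_stanza(n: int) -> str:
    """VLM stanza for camera n: element name CamN, iSpyServer input ?camid=N,
    and mosaic-bridge ID n (caching, fps, and tile size as used in the text)."""
    return "\n".join([
        f"new Cam{n} broadcast enabled",
        f"setup Cam{n} input http://127.0.0.1/?camid={n}",
        f"setup Cam{n} option network-caching=800",
        f"setup Cam{n} option image-fps=30/1",
        f"setup Cam{n} output #mosaic-bridge{{id={n},width=640,height=360,sar=1.77}}",
    ])

def mosaic_offsets(cols: int, rows: int, width: int, height: int) -> str:
    """Comma-separated list of X- and Y-positions for a cols x rows grid of
    width x height tiles, as expected by the mosaic-offsets startup parameter."""
    values = []
    for row in range(rows):
        for col in range(cols):
            values += [col * width, row * height]
    return ",".join(map(str, values))

# Four stanzas plus the 2x2 offset list for 640x360 tiles:
vlm_cameras = "\n\n".join(camera_stanza(n) for n in range(1, 5))
print(mosaic_offsets(2, 2, 640, 360))  # 0,0,640,0,0,360,640,360
```

The generated text can simply be pasted into the "*.vlm" file; for larger grids (e.g., 3×3), only the grid dimensions and tile size need to be changed.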
For this purpose, we make use of the standard module (std) and define how the stream can be accessed (i.e., via the HTTP protocol), how the access is to be controlled [using a username (user) and password (pwd)], how the stream has to be encapsulated [i.e., muxed; we recommend the MPEG2/TS muxer (ts)], and from which destination (dst) the stream can be accessed [i.e., ":8080/", which means that the stream is made available on the local machine under port 8080]:

:std{access=http{user=USERNAME,pwd=PASSWORD,mime=video/mpeg},mux=ts,dst=:8080/}

Now, all required configuration parameters are set. The last step in the VLM file is to start all the individual streams as follows:

control Cam1 play
control Cam2 play
control Cam3 play
control Cam4 play
# Alternatively, also start the analog audio
# control audio play
control bg play

Starting the VLC media player

Having specified the VLM configuration file, we can start the VLC media player with the necessary parameters. For this, either a new shortcut has to be created or the command line is to be used. The command to start the VLC player is as follows:

PATH_TO_vlc.exe --vlm-conf="PATH_TO_VLM.vlm" --mosaic-keep-picture --mosaic-position=2 --mosaic-order="2,1,3,4" --mosaic-offsets="0,0,640,0,0,360,640,360"

In the above command, PATH_TO_vlc.exe has to be replaced with the full path to the executable file (exe) of the VLC media player. We then specify the VLM file to be used with the --vlm-conf parameter and the full path to the VLM file created above (PATH_TO_VLM.vlm). Surprisingly, some configuration statements need to be defined as parameters when executing vlc.exe (instead of being set in the VLM file). First, we prevent the mosaic filter from resizing our videos (--mosaic-keep-picture) and define how we want the mosaic module to position the individual elements (--mosaic-position): 0: automatic positioning, 1: fixed positions, 2: user-defined offsets (https://wiki.videolan.org/Documentation:Modules/mosaic/). We chose "2", and then define the mosaic IDs using --mosaic-order, followed by the individual offsets in pixels (--mosaic-offsets) as a comma-separated list of X- and Y-positions.

If VLC is run with the above parameters and the VLM configuration file, a stream combining the four substreams should be available under the IP address of the computer running the VLC media player on port 8080.

Integration of simulated patient monitor software

There are several situations in which a simulated patient monitor is required: either in the presence of a high-fidelity simulation manikin that comes with patient monitor software (some are available on the market) or without a specific manikin using universal patient monitor software (e.g., the Vital Sign Simulator project by Florian Schwander, http://www.sourceforge.net/projects/vitalsignsim/). In both cases, it is necessary to display the patient monitor software in the simulation area. This can be achieved with a separate computer that is connected to the network, or with a second display, e.g., added to the server running the VLC media player and iSpyServer software. To capture the screen displaying the vital sign simulator, a screen capture has to be added in iSpyServer. It is possible to define specific constraints (i.e., which area of the screen is captured) and to set a frame capture interval (usually, 10 FPS is sufficient). To integrate the simulated patient monitor into the split-screen stream created by the VLC media player, one of the camera streams has to be replaced with the stream URL of the captured display in the VLM configuration. The patient monitor is then integrated in the split-screen stream (Figure 5). The complete VLM configuration file is provided in Supplemental Data, VLM configuration.

Adding analog audio

All solutions described above make use of audio transmission via the IP cameras, i.e., digital audio transmission. This provides certain advantages (no additional cables needed, etc.), but its main disadvantages are a relatively low audio quality and a varying delay in the audio transmission. In our experience, this is especially troublesome if direct patient communication is to be simulated, as a delay yields a significant simulation artifact.

Figure 5: Setup of the audio/video-feedback system that we use. Server, simulator, and client (red boxes) are individual hardware components with connections displayed as dotted (wireless) or solid (wired) lines. Individual software components are shown as boxes with black borders. VGA, video graphics array; DVI, digital visual interface; TCP/IP, transmission control protocol/internet protocol; HTTPS, secure hypertext transfer protocol.

Due to this disadvantage, we chose to implement an analog audio transmission that enables bidirectional communication between the operators, the simulation area, and the patient manikin. To optimize the audio quality, we further implemented an analog noise gate and compressor as well as a digital mixer. The output is then routed to the server running the iSpyServer and VLC media player software.
The following code has to be adapted to the local hardware components and integrated into the VLM configuration file:

new audio broadcast enabled
setup audio input "dshow://"
setup audio option dshow-vdev=none
# Specify the audio device:
setup audio option dshow-adev="DEVICE DESCRIPTION"
setup audio option dshow-aspect-ratio=4\:3
setup audio option dshow-chroma=
setup audio option dshow-caching=500
setup audio option dshow-fps=0
setup audio option no-dshow-config
setup audio option no-dshow-tuner
setup audio option dshow-audio-input=-1
setup audio option dshow-audio-output=-1
setup audio option dshow-amtuner-mode=1
setup audio option dshow-audio-channels=2
setup audio option dshow-audio-samplerate=44100
setup audio option dshow-audio-bitspersample=0
setup audio option live-caching=300
setup audio option volume=1024
# Route the output to the mosaic bridge after transcoding
setup audio output #transcode{acodec=mpga,ab=192,channels=2,samplerate=44100}:bridge-out
# Add this line to the end of the VLM file:
control audio play

The above code integrates the audio from the analog input of the server into the VLC-broadcasted stream. (To generate the VLM configuration for the correct audio device, it may be easier to use the VLM configuration tool that is integrated in the VLC media player and can be accessed via the "Tools" menu.)

Figure 6: Room used for live view and debriefing with a video projector and laptop.

Building the audio/video-feedback system for a simulation ambulance

Emergency medicine requires different sorts of competencies that have to be learned: individual skills, communication, team skills, and decision making under time constraints in the context of limited information on the patient [1, 7]. A simple setup such as that shown in the
section "Simulation environment in a box: a laptop and a camera" might be sufficient to set up an emergency medicine scenario with an audio/video-feedback system. However, to reproduce the complexity of these emergency situations, a full-scale simulator might be preferred. A full-scale ambulance simulator was developed to prepare medical students for their future work as emergency physicians; therefore, all of the features listed previously were implemented. A life-sized ambulance dummy was equipped with modern medical devices, a full-scale manikin on a stretcher, and an audio/video-feedback system. Three cameras and the vital signs monitor were merged into a split screen, streamed to the live view room (Figure 6), and recorded for video debriefings. The control room allows live observation of the simulation scenario, bidirectional audio transmission, and control of the manikin. For easier handling of video recording, camera control, and video observation, an individual software solution was developed to enable even inexperienced operators to use the simulator. However, building the system requires a moderate to high amount of technical skill as well as some time to become familiar with the suggested software. Using controlling software programmed "on top" of the software infrastructure, it is possible to simplify use for instructors and teachers. However, the system is still not "consumer ready" compared to solutions available on the market. In contrast to commercial systems, there is no direct, company-driven support or assistance contract available to support the integration of the audio/video system. The high flexibility and scalability of this system comes with the downside that no features beyond audio and video recording are directly integrated, i.e., vital signs, event recording, and/or the control of a scenario. Furthermore, we established the system for only one simulation area.
Although the components are theoretically extendable to support multiple rooms, this would complicate the architecture further. Thus, while our system offers high flexibility and low hardware and software costs, it requires considerable technical skill and a moderate amount of time for integration and customization.

Limitations

Using the instructions provided above, it is possible to build a simple audio/video-feedback system using mainly open source software and affordable hardware components.

Conclusions

In this article, we highlight different ways to build audio/video-feedback systems using consumer hardware and mostly open source software. These components facilitate highly specific setups at different levels of complexity. In the future, these components might enable the construction of a simulation center for institutions without large funds or for centers that require a highly customizable solution. In our opinion, the solutions described above have the potential to be a powerful alternative to market solutions, which often imply technical restrictions and require expensive support contracts. We encourage the reader to explore possible open source-based solutions, thus supporting the open source community and fostering standardization and interoperability.

Related content and documentation

An excellent, general streaming (including VLM) manual: https://www.videolan.org/doc/streaming-howto/en/
The VLC media player Mosaic manual: https://wiki.videolan.org/Mosaic
An excellent H.264 encoding guide: http://www.avidemux.org/admWiki/doku.php?id=tutorial:h.264
(URLs checked for validity on March 27, 2015)

DocLab team, and the physicians of the Department of Anesthesiology and Intensive Care Medicine in Tübingen for their valuable support.

Author contributions: All the authors have accepted responsibility for the entire content of the submitted manuscript and approved submission.
Research funding: Financial support for this project came from the "PROFIL" program of the Faculty of Medicine and from the Department of Anesthesiology and Intensive Care Medicine, University of Tübingen. Employment or leadership: None declared. Honorarium: None declared. Competing interests: The funding organization(s) played no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the report for publication.

Bio-Algorithms and Med-Systems (de Gruyter)

Publisher: de Gruyter
Copyright © 2015
ISSN: 1895-9091
eISSN: 1896-530X
DOI: 10.1515/bams-2015-0010
Although simulators initially focused on the acquisition of procedural and individual skills, simulation training can also be used to train clinical and team-based skills in more complex scenarios [2]. In medical education, the following different types of simulation can be used [2]: manikin-based simulation, part-task trainers, simulated patients, virtual reality trainers, and screen-based simulators.
In this article, we focus on manikin-based and simulated patient training, for which the described audio/video-feedback system is commonly used to perform a video-enhanced debriefing following the simulation training. Although many institutions use simulation-based training in daily practice, evidence supporting the superiority of simulation training over other teaching techniques is still limited. Simulation training is well established, and this type of training has been investigated for specific procedures (e.g., Maran and Glavin [3]). According to a recent meta-analysis by Lorello et al., simulation is associated with moderate to large effects on satisfaction, skills, and behavior, but small or negligible effects on patient-oriented outcomes, knowledge, and time [4]. (Video) debriefing after simulation training is a common practice for consolidating the acquired skills and knowledge; feedback given in this way is an important requisite for consolidating knowledge [5]. Savoldelli et al. found a lack of improvement when debriefing was not offered for a simulation scenario, particularly in complex, non-technical skill training [i.e., Crisis Resource Management (CRM) team training] [6]. However, the evidence supporting video playback as a supplement to oral feedback is controversial: most trials showed no disadvantages related to video debriefing, though it did not always benefit the participants [4]. Video debriefing allows participants to see what they actually did, not what they thought they were doing [5]; instructors should avoid very long or unrelated footage, which could distract from the topic under discussion [5]. In the context of developing a simulation ambulance (SIMON), we decided to integrate an audio/video-feedback system based on consumer hardware and open source software. During this process, we developed different solutions. This article focuses on these solutions and describes their setup.
Anatomy of an audio/video-feedback system

The structure of a simulation area

The classic simulation area structure consists of a simulation area, a control room, and a debriefing room [1] (Figure 1, Table 1). All these areas are usually connected by an audio/video system. The simulation area (Figure 1A) represents the core element of every simulation center. Depending on the type of simulation (low-fidelity/single-task simulation vs. high-fidelity/full-scale simulation center), the simulation area can differ in size and demands. For full-scale simulations, its size should ideally match that of the original environment (such as an operating room, trauma room, or ambulance).

Figure 1: Example room setup of a high-fidelity simulator. Pictured are the simulation area (A) with a simulation manikin and cameras, the control room (B) to control the manikin and audio-video transmission, and the debriefing area (C) with a video projector for live streaming and video debriefing.

Table 1: The anatomy of a simulation center (values given for low fidelity / high fidelity, full scale / simulation ambulance).
- Simulation area (medical environment, manikin): 1 camera / multiple cameras / 3-4 cameras
- Control room (control simulator, bidirectional audio communication, observation): (+) / + / +
- Debriefing room (replay recorded videos for debriefing): (+) / + / +
- Live broadcast room (live observation/audio-video broadcast from the simulation area): - / (+) / +

When controlling a scenario with a simulation manikin, a control room (Figure 1B) is needed. A physical separation between the simulation area and the control room is useful to achieve auditory and visual separation; thus, for example, the trainers can communicate without being heard by the participants.
To establish a one-way visual connection between the simulation area and the control room, mirror surfaces (for direct observation) or a live view via cameras (for indirect observation) can be installed. To communicate with the simulation area, bidirectional audio communication must be set up to speak through the manikin or to give additional information about the simulation (the so-called voice of God). In contrast to the strenuous setting of the simulation area, the debriefing room (Figure 1C) should be a comfortable area for all trainees (usually not more than 12-15 persons) to create a pleasant atmosphere for the debriefing [1]. The debriefing room can also serve as a live view room for trainees not involved in the current simulation, allowing them to follow the scenario passively. For video debriefing, recordings of the training scenario (e.g., on a laptop), a video projector, and an audio system must be available. If the system is also to be used for live viewing, a live view connection is necessary.

Different types of audio/video systems and methods for their application are discussed below. Ultimately, each simulation center requires a tailored solution that meets all needs and requirements, which have to be defined in advance. Although there is no "standard solution," professional systems available on the market usually include a simulation manikin, one to three cameras for video recording, audio transmission between trainer and trainee, and solutions for video storage and data management.

The basics

This article describes several hardware and software components that can be combined to set up a simulation environment. Most of the software components are available open source and free of charge, with the exception of Windows (Microsoft Corporation, Redmond, WA, USA), which we use as the operating system for our software environment due to the software constraints of our simulated patient manikin.

The equipment

Although full-scale manikins are becoming more realistic, not all changes in vital functions can be simulated such that they are noticeable by the trainee. A simulated patient monitor with a design similar to clinically used monitors can help highlight changes in vital signs. Furthermore, especially in high-fidelity simulations, large amounts of data are produced and may require storage; therefore, sufficient storage capacity has to be provided. For in-house trainings or changing simulation sites in particular, mobile solutions with fast and easy setups can be used for simulations in familiar environments. Therefore, the entire equipment can be stored in mobile containers. For the very basic setup, these boxes have to contain a laptop, a camera, and a tripod. Because this solution is flexible and portable, it can be easily transported to a full-scale simulator and integrated into existing simulation environments.

Simulation environment in a box: a laptop and a camera

Building an audio/video-feedback system in a simulation environment can be as simple as connecting a laptop and an appropriate camera. In this section, we describe the simplest and most affordable solution for a stationary and/or portable simulation environment. Although limited to one camera position, this setup features live audio and video transmission, video recording, and bidirectional audio communication, and it can be set up within a few minutes.

The hardware requirements regarding processor speed, memory, and disk space are highly dependent on the camera used. However, we recommend a laptop with at least 2 GB of memory, a modern dual-core 2-GHz processor, and, if possible, a solid-state disk.
The camera

For the purpose of this paper, we chose the "Foscam FI9821 W V2" IP (Internet Protocol) camera (ShenZhen Foscam Intelligent Technology Co. Ltd., Shenzhen, China) (Figure 2). Despite being "consumer" rather than "professional" hardware, we found that this camera provides sufficient video and audio recording quality, offers an adequate web interface, and comes at a reasonable price (currently 75-100). However, many other IP-based cameras can be integrated in a similar way.

Figure 2: Laptop (A) and camera (B) that can be used in the simplest setup for simulation training.

The laptop

The second component of this setup is a laptop computer (Figure 2), which only requires an Ethernet and/or WLAN (wireless local area network) connection, speakers, and a microphone. If required, the laptop can be connected to a projector and an audio system for larger audiences.

Setting up and connecting the camera and laptop

The camera and laptop should be placed according to the requirements of the simulation area ("Anatomy of an audio/video-feedback system" section). In most situations, it will be necessary to set up the camera and laptop in different rooms; hence, directly listening and speaking to the trainees is impossible without an audio transmission system. In our experience, digital voice communication often suffers from a (mostly minimal) transmission delay. Should the trainees be able to overhear the operators' original voice (e.g., because of insufficient sound separation), this would yield a double presentation of the audio signal, impeding comprehensibility. Communication between the laptop and camera can be achieved either via a WLAN network (ad hoc or stationary) or via a wired Ethernet connection (Figure 3). If possible, a wired connection should be chosen, as it has proven more stable and reliable in practice and often yields a lower delay than wireless connections. An encrypted connection should be preferred. Once successfully set up, the camera is assigned an IP address in the local network and can be accessed from the laptop.

Figure 3: Different setups for the connection between laptop and camera: wired Ethernet connection without (1) or with (2) a router; WLAN connection with (3) or without (4) a router.

The simulation training

The only step required to prepare the training session for recording is starting a web browser and accessing the web interface provided by the camera by entering the camera's IP address in the address bar. Most of the time, it will be necessary to install a browser plugin and log in using a manufacturer-provided username and password. After logging in, a live transmission of the camera's video stream is displayed (Figure 4). The camera can be controlled using the buttons on the left side of the screen. Audio playback (from camera to laptop) and audio recording (from laptop to camera) can be activated using buttons on the bottom-left side of the screen. The integrated web interface features a built-in recording function that saves the video to the local disk of the laptop (the target folder can be specified in the settings dialog). The recording can be started and stopped using buttons at the lower-right end of the web interface (Figure 4). An example video recording is provided in Supplemental Data, Video 1, which accompanies the article at http://www.degruyter.com/view/j/bams.2015.11.issue-2/bams-2015-0010/bams-2015-0010.xml?format=INT.
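Before opening the camera's web interface, it can help to verify that the camera is reachable on the network at all. The following is a minimal, illustrative sketch (assuming Python is available on the laptop; the helper function is our own, not part of the camera software, and the example IP matches the one used later in the text):

```python
# Illustrative reachability check: try a plain TCP connection to the
# camera's IP address and HTTP port before opening the web interface.
import socket

def camera_reachable(ip, port=80, timeout=2.0):
    """Return True if a TCP connection to ip:port succeeds within timeout."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical address; adjust to the camera's actual IP/port):
if camera_reachable("192.168.1.11", 80):
    print("camera answers on port 80")
```

If the check fails, the cabling, WLAN configuration, or the IP address assignment described above should be re-examined before troubleshooting the web interface itself.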
Figure 4: User interface for motion control (upper left, highlighted red dots), recording and audio (bottom, red dots), and camera settings (top, red dots).

Benefits and limitations

In this section, we showed how a simple audio/video-feedback system can be built using a low-budget camera and a laptop (Table 2). The benefits of this setup are its low price, its portability, and its ease of setup and use. However, when recording, the video streams of the cameras are saved individually and must therefore be post-processed. Although bidirectional audio transmission is possible, the quality of the integrated microphones and speakers is limited in most cases.

Table 2: Components needed for the simulation environment in a box.
- IP-based camera (required): video acquisition (pan/tilt/zoom), audio recording, audio playback; approx. price 75-100
- Laptop with WLAN (required): video playback (live/recorded), audio playback (live/recorded), audio communication, video recording; approx. price 300
- Wired LAN cable (optional): approx. price 10
- Projector (optional)

Possible extensions of this setup

The described setup can easily be extended to add functionality. The web interface of our camera model supports the integration of additional cameras in a "split-screen" view. For this, the additional cameras also need to be integrated into the network using a network switch and/or WLAN access point. In the next step, the web interface can be configured to support additional cameras. Another extension of the setup integrates a live broadcast room, allowing a larger audience to follow the scenario passively. This requires a second laptop or computer connected to the camera network.

Multiple-camera setup: iSpyServer as central relay

When more than one camera and/or cameras from different manufacturers are used and combined, we make use of the fact that every camera provides not only a web interface but also "direct" access to its video stream. This is usually a Uniform Resource Locator (URL) consisting of the protocol that is used ("http", "rtmp", etc.), a colon with two slashes ("://"), the IP address of the camera (e.g., "192.168.1.11"), a specific port (e.g., 8080), and an additional resource name ("/video"). The final URL is camera-specific and will look like the following: "http://192.168.1.11/videoMain". Some cameras provide more than one stream, e.g., to support different video formats.

A community of programmers interested in security and surveillance created the open source software package iSpy (by Sean Tearney), which mainly consists of the central software (iSpy) and the additional iSpyServer (http://www.ispyconnect.com). This software was originally designed for surveillance applications but can also be used to support simulation technology. For example, setting up iSpy to integrate different camera streams can be achieved using an integrated wizard in a few steps, as the iSpy package contains information about many different camera architectures (Supplemental Data, PDF 1). The layout of the streams can then be adjusted to individual needs, and iSpy can record videos and/or control bidirectional audio communication using integrated buttons and a unified interface.

The iSpy package also contains the iSpyServer software, a special program that can be used to relay streams. Additionally, the software can selectively capture and stream any display of the server it runs on (e.g., for a simulated patient monitor; "Integration of simulated patient monitor software" section). Thus, it is possible to relay all streams using one piece of software. This becomes especially useful when cameras from different manufacturers are used.
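The URL anatomy described above can be sketched as a small helper. The IP address, port, and resource names below are the examples from the text; the function itself is purely illustrative and not part of iSpy or the camera firmware:

```python
# Illustrative: assemble a camera stream URL from its parts
# (protocol, "://", IP address, optional port, resource name),
# following the anatomy described in the text.
def stream_url(protocol, ip, resource, port=None):
    host = ip if port is None else "%s:%d" % (ip, port)
    return "%s://%s%s" % (protocol, host, resource)

# Example values taken from the text:
print(stream_url("http", "192.168.1.11", "/video", 8080))  # http://192.168.1.11:8080/video
print(stream_url("http", "192.168.1.11", "/videoMain"))    # http://192.168.1.11/videoMain
```

Such a helper is convenient when several cameras from different manufacturers, each with their own resource names, have to be registered in iSpyServer.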
We use iSpyServer to relay three MJPEG [Motion JPEG (Joint Photographic Experts Group)] streams from three cameras as well as our simulated patient monitor as a fourth stream. (For the camera models we used, we recommend the MJPEG stream, as it yields a significantly lower delay than the H.264 stream. For this, one of the streams has to be set to the MJPEG format using the following command: http://IP_ADDRESS:PORT/cgi-bin/CGIProxy.fcgi?usr=USERNAME&pwd=PASSWORD&cmd=setSubStreamFormat&format=1.)

Let the world know: split-screen streaming using VLC media player

Introduction to streaming with the VLC media player

When more than one camera is used, broadcasting in a split-screen view (e.g., a 2×2 matrix) may be favorable. In professional video processing, this is usually achieved using special hardware. However, the computing capacity of current computers and the availability of functional open source software also allow processing the video (e.g., combining four streams into one) and broadcasting the result to the network in a standard format (e.g., as a video stream). We found the widespread VLC media player (VideoLAN non-profit organization, Paris, France; version 2.2.0, 64 bit) suitable for video processing and broadcasting.

Controlling the VLC media player using a VLM file

It is important to recognize that the VLC media player can be controlled in ways other than through the graphical interface. The software incorporates the VideoLAN Media Manager (VLM), which makes it possible to control multiple streams using one VLC instance. The settings used by VLM can be defined in a configuration file named "*.vlm". In this file, the required video and audio inputs are defined. These are sent to a module called mosaic via a mosaic bridge. Afterwards, the mosaic bridge is configured to arrange the inputs in a desired way (e.g., in a 2×2 matrix), transcode this matrix to a specific video format (e.g., H.264/MPEG-4 AVC), and stream the video.
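The setSubStreamFormat request shown earlier follows a fixed template, so it can be assembled programmatically, e.g., when several cameras have to be switched to MJPEG at once. This is an illustrative sketch; the IP address, port, and credentials are example placeholders, not real defaults:

```python
# Illustrative: build the setSubStreamFormat request from the URL
# template given in the text (format=1 selects the MJPEG sub-stream
# on the camera model used here).
def substream_format_url(ip, port, usr, pwd, fmt=1):
    return ("http://%s:%d/cgi-bin/CGIProxy.fcgi"
            "?usr=%s&pwd=%s&cmd=setSubStreamFormat&format=%d"
            % (ip, port, usr, pwd, fmt))

# Example with placeholder credentials:
print(substream_format_url("192.168.1.11", 8080, "USERNAME", "PASSWORD"))
```

The resulting URL can then be opened in a browser or fetched with any HTTP client to apply the setting.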
Initially, we start building the VLM configuration file as follows:

# Camera 1 configuration:
# Set up a new element named "Cam1"
new Cam1 broadcast enabled
# Define the input (fetch the iSpyServer stream from localhost)
setup Cam1 input http://127.0.0.1/?camid=1
# Define the amount of caching (we chose 800 ms)
setup Cam1 option network-caching = 800
# Set fps to 30/sec. Adding "/1" is needed.
setup Cam1 option image-fps = 30/1
# Define the output. In this case, send to the mosaic bridge with ID = 1
# (we will need that later) and a defined width, height, and Sample Aspect Ratio (SAR)
setup Cam1 output #mosaic-bridge{id = 1,width = 640,height = 360,SAR = 1.77}

With the addition of more cameras, this code is run in multiple instances with a changed name (Cam2, Cam3, Cam4), a changed input ("?camid=..."), and a changed mosaic bridge ID. We use the relayed MJPEG stream sent from iSpyServer as the input. Alternatively, to fetch the stream directly from the camera, the input has to be set as follows after replacing the IP address, username, and password with the chosen values:

setup Cam? input "http://CAMERA-IP/cgi-bin/CGIStream.cgi?cmd=GetMJStream&usr=USERNAME_HERE&pwd=PASSWORD_HERE"

(The command GetMJStream was used, which refers to a stream in the MJPEG format. We observed a lower latency using this format.)

After defining the four input streams, they have to be combined into a split screen. First, a canvas (background) has to be created:

# First, we define a new stream called "bg". This is the background of the split-screen view
new bg broadcast enabled
# As we do not need a movie as background, we just use a static image file.
# For this purpose, we use a black PNG image file with the dimensions of the
# final stream (i.e., 1280x720 pixels)
setup bg input "FULL_PATH_TO_BACKGROUND.png"
# We want this image file to be of unlimited duration, so we set the duration to "-1"
setup bg option image-duration = -1
# The image file should have the same frames per second
setup bg option image-fps = 30/1

As we have now set up the four camera streams as well as the background "canvas", we can start the video processing as follows:

setup bg output #transcode{sfilter = "mosaic",venc = x264{fps = 30/1,keyint = 30,min-keyint = 30,crf = 26,threads = 8,profile = main,preset = faster,tune = zerolatency},fps = 30}:bridge-in{delay = 410}:std{access = http{user = USERNAME,pwd = PASSWORD,mime = video/mpeg},mux = ts,dst = :8080/}

As this is a rather complicated line of code, its separate parts are discussed individually:

setup bg output #transcode

In contrast to sending the stream to the mosaic bridge, the output is sent to the transcode module of VLC. Subsequently, the parameters of transcode are defined. First, we use the subpicture filter (sfilter) mosaic to build the split-screen matrix:

sfilter = "mosaic",

Additionally, a video format for the resulting stream has to be set using the venc parameter (video encoding). We chose x264, for which some codec-specific parameters have to be defined:

venc = x264{fps = 30/1,keyint = 30,min-keyint = 30,crf = 26,threads = 8,profile = main,preset = faster,tune = zerolatency}

Afterwards, we define the frames per second (FPS) used for the transcode module:

,fps = 30}:

(Note the ":" between the different commands of the output chain, which delimits the individual elements of the chain.)

Up to this point, we have only processed the video signal. To further import the audio signal from the audio bridge, we need to use the bridge-in command. As there might be a shift in the timing of the audio and video signals, an additional delay can be specified in milliseconds for the bridge-in command:

:bridge-in{delay = 410}

Because we have processed and merged the audio and video signals, these need to be streamed.
For this purpose, we make use of the standard module (std) and define how the stream can be accessed (i.e., via the HTTP protocol), how access is controlled [using a username (user) and password (pwd)], how the stream is encapsulated [i.e., muxed; we recommend the MPEG2/TS muxer (ts)], and at which destination (dst) the stream can be accessed [i.e., ":8080/", which means that the stream is made available under the local IP address (localhost) on port 8080]:

:std{access = http{user = USERNAME,pwd = PASSWORD,mime = video/mpeg},mux = ts,dst = :8080/}

(The --mosaic-position option, set when starting VLC, accepts 0 for automatic positioning, 1 for fixed positions, and 2 for a user-defined offset; see https://wiki.videolan.org/Documentation:Modules/mosaic/. We chose "2", and then define the mosaic IDs using --mosaic-order, followed by the individual offsets in pixels (--mosaic-offsets) as a comma-separated list of X and Y positions. If VLC is run with these parameters together with the VLM configuration file, a stream combining the four substreams should be available under the IP address of the computer running the VLC media player on port 8080.)

Now, all required configuration parameters are set. The next step is to start all the individual streams as follows:

control Cam1 play
control Cam2 play
control Cam3 play
control Cam4 play
# Alternatively, also start the analog audio
# control audio play
control bg play

Integration of simulated patient monitor software

There are several situations in which a simulated patient monitor is required: either with a high-fidelity simulation manikin that comes with patient monitor software (some are available on the market) or, without a specific manikin, using universal patient monitor software (e.g., the Vital Sign Simulator project by Florian Schwander, http://www.sourceforge.net/projects/vitalsignsim/). In both cases, it is necessary to display the patient monitor software in the simulation area.
This can be achieved with a separate computer connected to the network, or with a second display, e.g., attached to the server running the VLC media player and iSpyServer software. To capture the screen displaying the vital sign simulator, a screen capture has to be added in iSpyServer. It is possible to define specific constraints (i.e., which area of the screen is captured) and to set a frame capture interval (usually, 10 FPS is sufficient). To integrate the simulated patient monitor into the split-screen stream created by the VLC media player, one of the camera streams has to be replaced with the stream URL of the captured display in the VLM configuration. The patient monitor is then integrated into the split-screen stream (Figure 5). The complete VLM configuration file is provided in Supplemental Data, VLM configuration.

Starting the VLC media player

Having specified the VLM configuration file, we now start the VLC media player with the necessary parameters. For this, either a new shortcut has to be created or the command line is used. The command to start the VLC player is as follows:

PATH_TO_vlc.exe --vlm-conf = "PATH_TO_VLM.vlm" --mosaic-keep-picture --mosaic-position = 2 --mosaic-order = "2,1,3,4" --mosaic-offsets = "0,0,640,0,0,360,640,360"

In the above command, PATH_TO_vlc.exe has to be replaced with the full path to the executable file (exe) of the VLC media player. We then specify the VLM file to be used with the --vlm-conf option and the full path to the VLM file created above (PATH_TO_VLM.vlm). Surprisingly, some configuration statements need to be defined as parameters when executing vlc.exe (instead of in the VLM file).
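The --mosaic-offsets value used above ("0,0,640,0,0,360,640,360") is simply the top-left corner of each 640×360 tile in the 2×2 grid, listed as X,Y pairs. A small illustrative sketch (our own helper, not part of VLC) that computes such an offset list for any grid:

```python
# Compute a --mosaic-offsets list (X,Y pairs, row by row) for a grid
# of equally sized tiles. Illustrative helper, not part of VLC.
def mosaic_offsets(cols, rows, tile_w, tile_h):
    coords = []
    for r in range(rows):
        for c in range(cols):
            coords += [c * tile_w, r * tile_h]
    return ",".join(str(v) for v in coords)

print(mosaic_offsets(2, 2, 640, 360))  # 0,0,640,0,0,360,640,360
```

Note that the offsets pair up positionally with the IDs listed in --mosaic-order, so changing the order there rearranges which camera lands in which tile without recomputing the offsets.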
First, we prevent the mosaic filter from resizing our videos (­mosaic-keep-picture) and define how we want the mosaic module to position the individual elements (­mosaic-position): 0: automatic Adding analog audio All solutions described make use of audio transmission via the IP cameras, i.e., a digital audio transmission. This provides certain advantages (no additional cables needed, etc.), but its main disadvantage is relatively low audio quality and a variation in delay of the audio transmission. According to our experience, this is especially troublesome if a direct patient communication is to be simulated, as a delay yields a significant simulation artifact. Mahling et al.: A project study and cookbook using open source software97 Figure 5:Setup of the audio/video-feedback system that we use. Server, simulator, and client (red boxes) are individual hardware components with connections displayed as dotted (wireless) or solid lines (wired). Individual software components are shown as boxes with black borders. VGA, Video graphics array; DVI, digital visual interface; TCP/IP, transmission control protocol/internet protocol; HTTPS, secure hypertext transport protocol. Due to this disadvantage, we chose to implement an analog audio transmission that enables a bidirectional communication between the operators, the simulation area, and the patient manikin. To optimize the audio quality, we further implemented an analog noise-gate and compressor as well as a digital mixer. The output is then routed to the server running iSpyServer and VLC media player. 
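For orientation, the camera streams feeding the mosaic are defined in the VLM file by the same pattern as the audio broadcast below (the complete file is provided in the Supplemental Data). The following is a minimal, hypothetical sketch of one camera entry, modeled on the VideoLAN mosaic documentation; the stream URL, credentials, element ID, and tile size are placeholders, not values from our setup:

```
# Hypothetical VLM entry for one camera (placeholders in capitals)
new Cam1 broadcast enabled
setup Cam1 input "http://USERNAME:PASSWORD@CAMERA_IP/video.mjpg"
# Video goes to mosaic element 1 as a 640x360 tile; audio is passed to the bridge
setup Cam1 output #duplicate{dst=mosaic-bridge{id=1,width=640,height=360},select=video,dst=bridge-out{id=1},select=audio}
```

The bg broadcast started with "control bg play" then renders these elements via the mosaic sub-filter and exposes the combined picture through the std output described earlier.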
The following code has to be adapted to the local hardware components and integrated into the VLM configuration file:

new audio broadcast enabled
setup audio input "dshow://"
setup audio option dshow-vdev = none
# Specify the audio device:
setup audio option dshow-adev = "DEVICE DESCRIPTION"
setup audio option dshow-aspect-ratio = 4\:3
setup audio option dshow-chroma =
setup audio option dshow-caching = 500
setup audio option dshow-fps = 0
setup audio option no-dshow-config
setup audio option no-dshow-tuner
setup audio option dshow-audio-input = -1
setup audio option dshow-audio-output = -1
setup audio option dshow-amtuner-mode = 1
setup audio option dshow-audio-channels = 2
setup audio option dshow-audio-samplerate = 44100
setup audio option dshow-audio-bitspersample = 0
setup audio option live-caching = 300
setup audio option volume = 1024
# Route the output to the mosaic bridge after transcoding
setup audio output #transcode{acodec = mpga,ab = 192,channels = 2,samplerate = 44100}:bridge-out
# Add this line to the end of the VLM file:
control audio play

The above code integrates the audio from the analog input of the server into the VLC-broadcast stream. (To generate the VLM configuration for the correct audio device, it might be easier to use the VLM configuration tool that is integrated in the VLC media player and can be accessed via the "Tools" menu.)

Building the audio/video-feedback system for a simulation ambulance

Emergency medicine requires different kinds of competencies that have to be learned: individual skills, communication, team skills, and decision making under time constraints in the context of limited information on the patient [1, 7]. A simple setup such as that shown in the section "Simulation environment in a box: a laptop and a camera" might be sufficient to set up an emergency medicine scenario with an audio/video-feedback system. However, to reproduce the complexity of these emergency situations, a full-scale simulator might be preferred. A full-scale ambulance simulator was developed to prepare medical students for their future role as emergency physicians. To this end, all of the features listed previously were implemented. A life-sized ambulance dummy was equipped with modern medical devices, a full-scale manikin on a stretcher, and an audio/video-feedback system. Three cameras and the vital signs monitor were merged into a split screen, streamed to the live-view room (Figure 6), and recorded for video debriefings.

Figure 6: Room used for live view and debriefing with beamer and laptop.

The control room allows live observation of the simulation scenario, bidirectional audio transmission, and control of the manikin. For easier handling of video recording, camera control, and video observation, an individual software solution was developed to enable even inexperienced operators to use the simulator. However, building the system requires a moderate to high amount of technical skill as well as some time to become familiar with the suggested software. Using controlling software that is programmed "on top" of the software infrastructure, it is possible to simplify the use for instructors and teachers. However, the system is still not "consumer ready" compared to solutions available on the market. In contrast to commercial systems, there is no direct, company-driven support or assistance contract available to support the integration of the audio/video system. The high flexibility and scalability of this system comes with the downside that no features beyond audio and video recording are directly integrated, i.e., no vital signs, event recording, and/or controlling of a scenario. Furthermore, we only established the system to serve one simulation area.
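As a side note, the recording for video debriefing does not require special software: a second VLC instance on the client side can display the combined stream and write it to a file at the same time, using VLC's duplicate output module. A hypothetical command for this purpose (the server address, credentials, and output file name are placeholders):

```
PATH_TO_vlc.exe http://USERNAME:PASSWORD@SERVER_IP:8080/ --sout "#duplicate{dst=display,dst=std{access=file,mux=ts,dst=debriefing.ts}}"
```

Recording the MPEG-2/TS stream to a .ts file avoids re-encoding, and the resulting file can be played back directly in the VLC media player during the debriefing.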
Although the components are theoretically extendable to support multiple rooms, this would complicate the architecture further. Thus, while our system offers high flexibility and low hardware and software costs, it requires considerable technical skill and a moderate amount of time for integration and customization.

Limitations

Using the instructions provided above, it is possible to build a simple audio/video-feedback system using mainly open source software and affordable hardware components.

Conclusions

In this article, we highlight different ways to build audio/video-feedback systems using consumer hardware and mostly open source software. These components facilitate a highly specific setup and different levels of complexity. In the future, these components might enable the construction of a simulation center for institutions without large funds or for centers that want a highly customizable solution. In our opinion, the solutions described above have the potential to be a powerful alternative to market solutions, which often imply technical restrictions and require expensive support contracts. We encourage the reader to explore possible open source-based solutions, thus supporting the open source community and fostering standardization and interoperability.

Related content and documentation

An excellent, general streaming (including VLM) manual: https://www.videolan.org/doc/streaming-howto/en/
The VLC media player mosaic manual: https://wiki.videolan.org/Mosaic
An excellent H.264 encoding guide: http://www.avidemux.org/admWiki/doku.php?id=tutorial:h.264
(URLs checked for validity on March 27, 2015)

Acknowledgments: … DocLab team, and the physicians of the Department of Anesthesiology and Intensive Care Medicine in Tübingen for their valuable support.

Author contributions: All the authors have accepted responsibility for the entire content of the submitted manuscript and approved submission.
Research funding: Financial support for this project came from the "PROFIL" program of the Faculty of Medicine and from the Department of Anesthesiology and Intensive Care Medicine, University of Tübingen.

Employment or leadership: None declared.

Honorarium: None declared.

Competing interests: The funding organization(s) played no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the report for publication.

Journal: Bio-Algorithms and Med-Systems, de Gruyter

Published: Jun 15, 2015
