Augmented Reality in Neurosurgery: A Review of Current Concepts and Emerging Applications

Augmented reality (AR) superimposes computer-generated virtual objects onto the user’s view of the real world. Among medical disciplines, neurosurgery has long been at the forefront of image-guided surgery, and it continues to push the frontiers of AR technology in the operating room. In this systematic review, we explore the history of AR in neurosurgery and examine the literature on current neurosurgical applications of AR. Significant challenges to surgical AR exist, including compounded sources of registration error, impaired depth perception, visual and tactile temporal asynchrony, and operator inattentional blindness. Nevertheless, the ability to accurately display multiple three-dimensional datasets congruently over the area where they are most useful, coupled with future advances in imaging, registration, display technology, and robotic actuation, portends a promising role for AR in the neurosurgical operating room.


Intraoperative image guidance has been used in multiple surgical disciplines over the past two decades to localize subsurface targets that cannot be visualized directly. Although significant advances have been made, current navigation paradigms require surgeons to mentally transform two-dimensional (2D) patient-specific images (e.g., computed tomography [CT] or magnetic resonance imaging [MRI]) into three-dimensional (3D) anatomy, relate 3D computer-rendered anatomy back to the patient, and then manipulate instruments in the surgical field while looking at a separate display. Augmented reality (AR) systems hold the promise of congruent virtual and physical realms: computer-generated 2D or 3D images are superimposed onto a user’s vision of the real world. 1 This contrasts with virtual reality (VR), in which the user is fully immersed in a computer-generated environment without real-world input, 2 an approach impractical in an operating room but useful in simulation exercises. 3

Although its display was not computer-generated, the first system overlaying a virtual image registered to a hidden object was described in 1938 in Austria, using an arrangement of x-ray tubes and mirrors to reveal the position of a hidden bullet. 4 Head-up displays were developed in the 1940s to display radar information and artificial horizons in military aircraft; however, it was not until 1968 that a tracked head-mounted display (HMD) was developed by Sutherland. 5 With a mechanical ceiling-mounted head position tracking mechanism, this device allowed the overlay of analog line drawings onto the user’s vision of the real world (Figure 1). Medical applications of AR began in the mid-1980s, with the augmentation of a neurosurgical monoscopic operating microscope with CT images 6 and the development of a video see-through HMD in the early 1990s for the augmentation of ultrasound images. 4

Figure 1 The first described head-mounted display, with ceiling-mounted mechanical head position tracking mechanism. 5

Surgical applications of AR have been reviewed extensively over the past decade, with several recent systematic reviews. 2 , 4 , 7 , 8 Reviews of AR applications specifically in neurosurgery, however, are limited, with recent articles relatively minimal in scope. 9 , 10 Here, we review the history of AR particularly as it pertains to neurosurgery, detail the modern paradigms of AR setups as well as their current neurosurgical applications, and explore challenges and future directions in the field.

Components of AR

Surgical AR systems comprise three core components. First, a virtual image or environment must be modeled. In modern AR systems, using neurosurgery as an example, this typically involves a computer-generated 3D reconstruction of a subsurface target, often sourced from segmented cross-sectional imaging (CT or MRI), with color- or texture-coded differentiation between anatomic structures. Virtual images are classically overlaid on the user’s vision of the real world as solid or wire-mesh overlays (Figure 2). Nonphotorealistic, or “inverse-realism,” augmentation techniques may improve visualization and depth perception. 11 In contrast with earlier systems, modern AR devices use on-demand augmentation, whereby virtual image layers may be removed when desired.

Figure 2 Current methods of overlaying virtual content. The example shown is a minimally invasive lumbar hemilaminectomy captured with a head-mounted camera. (A) No augmentation; (B) solid overlay; and (C) wire-mesh overlay.

The second requirement for AR systems is the registration of virtual environments with real space. This is particularly critical in AR because our perceptual systems are more sensitive to visual misalignments than to the kinetic errors common in VR. 12 Registration may be accomplished through a number of means and is the subject of significant ongoing research. Frame-based techniques create a rigid 3D Cartesian system in which the position and pose of imaging devices may be determined, allowing registration of a virtual environment as well as rapid updates as the real-world viewing position changes. More commonly, frameless registration methods are used, point-matching virtual and real spaces using known rigid anatomic landmarks, including bony landmarks for cranial and spinal surgery and (relatively) stationary “vessel signatures” for vascular procedures. 2 , 13 This is often enhanced with surface mapping using infrared light-emitting diode (IR-LED)-tracked instruments or laser range scanners. 14
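
At its core, frameless point-matching registration reduces to estimating the rigid transform that best aligns matched fiducial points in image space and patient space. The sketch below shows the standard SVD-based (Kabsch) least-squares solution together with the root-mean-square fiducial registration error; the function name and use of NumPy are illustrative assumptions, not details of any cited system.

```python
import numpy as np

def register_points(P, Q):
    """Rigid registration mapping point set P onto Q.

    P, Q: (N, 3) arrays of matched fiducials (image space, patient space).
    Returns the rotation R, translation t, and the root-mean-square
    fiducial registration error (FRE) of the fit.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (P - p0).T @ (Q - q0)
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection (det = -1) solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q0 - R @ p0
    fre = np.sqrt(np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1)))
    return R, t, fre
```

With four or more non-collinear fiducials this recovers the transform exactly in the noise-free case; in practice, the reported FRE only lower-bounds the clinically relevant target registration error at depth.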

The final requirement for functional AR is a display technology to combine the virtual and real environments. Display techniques may be categorized broadly as HMDs, augmented external monitors, augmented optics, augmented windows, and image projections (Figure 3). 4 Virtual environments may be projected onto an HMD, overlaid onto either the user’s vision of the real world (optical see-through), or onto a video feed of the real environment (video see-through). Augmented monitors are simply standalone screens displaying virtual content overlaid onto a video feed from the real world. Augmented optics involve direct augmentation of the oculars of an operating microscope or binoculars. Augmented windows are an emerging technology in which a semitransparent screen is placed directly over the surgical site, allowing the display of virtual objects (on the screen) directly over the real object underneath. Last, virtual environments may be projected directly onto the patient using a standard computer projector, without a separate display. 15

Figure 3 Examples of current AR display methods. (A) Video see-through HMD, with head-mounted video camera 35 ; (B) user’s view of output from video see-through HMD, with augmentation calibration marker (gray) and overlaid vertebroplasty needle trajectories (red, yellow) 35 ; and (C) image projection of cortex and deep lesion (red) onto skin surface for incision planning. 15

AR in Neurosurgery

Neurosurgery has long been at the forefront of image-guided surgery, with the first frameless stereotactic navigation systems being developed for intracranial tumor localization in the early 1990s. It is unsurprising that many surgical applications of AR were pioneered for neurosurgery (Figure 4). The first augmented operating microscope was developed in 1985 at Dartmouth for cranial surgery. 6 Segmented 2D preoperative CT slices were displayed monoscopically into the optics of a standard operating microscope, which was registered to the operating table using an acoustic localizer system. Real-time tool tracking was not possible, however, because repositioning of the microscope necessitated reregistration with the operating table, taking ~20 seconds. It was not until 1995 that the first augmented stereo microscope, offering accurate depth perception, was developed in the United Kingdom. 16 This system allowed for the multicolor display of segmented 3D cross-sectional imaging data directly into the microscope oculars, as solid or wire-mesh overlays. Intraoperative registration accuracy of 2 to 3 mm was reported.

Figure 4 Timeline of neurosurgical applications of augmented reality.

The first video augmentation devices in neurosurgery were developed in 1994. 17 , 18 In both systems, a video camera was mounted on a stereotactic frame, allowing registration to the operating table, and trained on the patient from the surgeon’s presumed perspective. Multicolor 3D reconstructions of segmented CT or MRI data were overlaid onto the video feed on an external display. The surgery was subsequently performed either under direct vision or via the external screen.

AR for endovascular applications was demonstrated first in 1998, overlaying reconstructed preoperative vascular anatomy, from CT or MR angiography, onto a virtual screen displaying real-time x-ray fluoroscopy data. 19 With registration accuracies of 2 to 3 mm, this system was intended to obviate the additional contrast load required to generate angiographic roadmaps.

Although endoscopes have been in use in general surgery since the 1980s, the first augmented neurosurgical endoscope was developed in 2002 for endonasal transsphenoidal approaches. 20 Volumetric 3D reconstructions of preoperative CT or MRI data were overlaid onto the endoscope video feed on an external display. IR-LEDs were used to track the endoscope relative to the patient, allowing display of the endoscope trajectory relative to delicate neurovascular structures.

Current Applications

A comprehensive review was performed on the recent literature pertaining to AR for human clinical neurosurgical applications. MEDLINE, Web of Science, and Scopus were searched for English-language literature from 2000 through 2015 using the search terms (augment* AND reality AND (neurosurgery OR spine OR endovascular)). The search was conducted in August 2016. Nonduplicated, peer-reviewed original investigations encompassing in vivo, human phantom, or human cadaveric specimens were included. Of 126 screened abstracts, 44 were either not relevant to AR or neurosurgery or were commentaries on other primary investigations; 14 were reports on VR devices. The full texts of the remaining 68 articles were reviewed independently by two authors (DG, NMA). From these were excluded 15 reviews of previously published literature and 20 technical/engineering papers without clinical translation, leaving 33 primary manuscripts (Table 1). 13 , 15 Given the significant heterogeneity in reported outcomes, pooled statistics were not computed.

Table 1 Summary of studies on neurosurgical applications of AR

* All values are presented as means or percentages.

AVM=arteriovenous malformation; CTA=computed tomography angiography; fps=frames per second; MCA=middle cerebral artery; mRS=modified Rankin score; NIR=near-infrared; OA=occipital artery; PICA=posterior inferior cerebellar artery; STA=superficial temporal artery; US=ultrasound.

Of the 33 articles on neurosurgical AR, the majority were for applications in tumor resection (16 articles, 48%), open neurovascular surgery (9 articles, 27%), or spinal procedures (7 articles, 21%). Four articles pertained to the stereotactic localization of ventriculostomy catheters or simply a tracked probe (12%) and one pertained to cortical resection in epilepsy. Notably, there were no recent publications on AR for endovascular procedures. Of the 33 total studies, four assessed the role of AR for trainee simulation (12%), with the remainder devoted to intraoperative applications. Nineteen studies were conducted with some in vivo human clinical testing (58%), whereas 14 were exclusively cadaveric or phantom studies. AR stereomicroscopes were assessed in five studies (15%), although three were from the same center. AR HMDs were investigated in three articles (9%), image projection techniques in four (12%; two from the same center), and AR windows in four (12%; two from the same center). All other studies used external AR displays, either standalone or tablets/smartphones.

Evaluation and reporting of outcomes from the use of AR devices were highly heterogeneous across studies. A summary of reported outcomes for each study is presented in Table 1. Subjective feedback on operator comfort, usability, and/or depth perception was reported by most studies, often dichotomized as “satisfactory/unsatisfactory.” Studies investigating AR simulators typically quantified accuracy for the specific simulated task, for instance, the translational deviation of a virtual ventriculostomy catheter from its ideal target or the percentage of pedicle screws placed in satisfactory position. 29 , 38 , 39 For clinical studies, the most commonly quantified metrics included setup time and overall registration error. Overall registration errors were calculated differently between studies, unsurprising given the variety of augmentation techniques used and hence the types of errors introduced. For instance, camera calibration errors apply to any system using video imaging of the real world, but are obviated in optical see-through HMDs. Error in tracking and transforming eye movements, however, applies primarily to optical HMDs. Errors in virtual image overlay or reprojection, occurring to various extents with each type of augmented display, were typically not reported separately. Nonetheless, overall registration errors for cranial AR ranged from 0.3 to 4.2 mm, with most studies reporting 2 to 3 mm. This is well within the range of accuracy achieved by current neuronavigation systems.
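
When the component errors (camera calibration, patient-to-image registration, overlay reprojection) can be treated as independent and zero-mean, a root-sum-square combination gives a rough sense of how they compound into an overall registration error. This is a simplifying assumption, not how any cited study computed its figures, and the millimeter values below are purely illustrative.

```python
import math

def combined_registration_error(components_mm):
    """Root-sum-square combination of independent, zero-mean error sources (mm).

    Real error components are often correlated and reported inconsistently
    across studies, so this is only a back-of-envelope estimate.
    """
    return math.sqrt(sum(e * e for e in components_mm))

# e.g. camera calibration 1.0 mm, patient registration 1.5 mm,
# overlay reprojection 0.8 mm (illustrative values only)
print(round(combined_registration_error([1.0, 1.5, 0.8]), 2))  # → 1.97
```

Note that the combined error grows much more slowly than the sum of the components, which is why a system with several ~1 mm error sources can still land in the 2 to 3 mm range reported by most cranial studies.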

AR in Vascular Neurosurgery

With the exception of one study in which AR was used by a remote surgeon to guide a carotid endarterectomy 31 and one in which volumetric intracranial CT angiography (CTA) data were overlaid onto a real-world video feed, 47 AR for vascular neurosurgery has focused on the augmentation of stereomicroscopes. Microscope overlays include either fluorescence images from intraoperative indocyanine green (ICG) angiography or segmented preoperative CTA/magnetic resonance angiography/digital subtraction angiography (DSA). 13 , 40 , 41 , 46 Registration of the virtual environment to the patient is done by tracking each component using a standard IR-LED–based neuronavigation system, which is typically used as standard of care. Verification of registration is performed at each step: skin incision, craniotomy (using standard anatomic landmarks), and arachnoid dissection (by registering the ‘vessel signature’ of the exposed cortex to the preoperative CTA/DSA). 13

Overlay of the target vasculature optimizes both skin incision and craniotomy, with a smaller craniotomy fashioned in 63% of AR-guided cases than would have been planned without AR in one series of aneurysm clipping. 41 In this series, AR guidance was felt to be most useful for aneurysms requiring an unusual trajectory or with limited exposure and hidden branches; cases done with AR showed no difference in intraoperative clip correction rates or 3-month patient functional outcomes relative to procedures without AR.

For extracranial-intracranial bypass procedures, AR overlays offered the additional advantages of readily identifying donor vessels on the skin surface, facilitating skin incision and vessel harvest. AR guidance proved superior to manual pulse palpation and comparable to Doppler ultrasound or intraoperative DSA-guided donor vessel identification. 13 Craniotomy size was also minimized because of AR display of the preoperatively identified recipient vessel sites.

The role of AR in arteriovenous malformation (AVM) resection may be more limited; in the few series to date, although vessel augmentation was useful for skin incision, craniotomy, and resection planning, the complexity of arterial feeders in most AVMs was not resolvable with current systems, particularly in the context of surrounding hemorrhage from preoperative rupture. 40 Identification of the depth of feeding arteries with AR views was also problematic, despite the use of manually identified markers on deep feeding arteries. 47

Intraoperative setup of the AR stereomicroscope requires approximately 20 additional minutes beyond the registration of a standard optical navigation system: 10 minutes for registration of the microscope and 10 minutes for verification of registration accuracy. 41 Therefore, there is minimal disruption of the surgical workflow, particularly once the procedure is under way. Segmentation and merging of preoperative cross-sectional imaging as well as DSA, however, does entail additional time before surgery.

AR in Skull Base/Tumor Surgery

As with neurovascular applications, AR guidance is particularly useful in the initial stages of surgery for planning skin incisions and minimizing the extent of craniotomy. When tumor boundaries and planned resection margins are segmented preoperatively, along with adjacent neurovascular structures to be preserved, AR overlays of these targets facilitate maximal safe resection, particularly for gliomas. In one series of 74 patients, 64 with primary or recurrent gliomas, AR overlay of volumetric CT/MRI data was achieved with no additional surgical time or complications and reduced both intensive care unit and hospital length of stay by 40% to 50% relative to non-AR cases. 23 AR also offers direct visualization of superficial and deep venous structures, which is particularly useful in the resection of large convexity, parasagittal, and parafalcine meningiomas. 26 However, as with any neuronavigation system guided by preoperative imaging, current AR devices are unable to account for brain shift during cranial surgery, which may represent a significant source of registration error once large volumes of tumor or cerebrospinal fluid have been removed. 51 Recent advancements in surgical navigation include real-time registration updates from intraoperative 3D ultrasound, accounting for brain shift on a time scale on the order of minutes. 52 Ultrasound-based registration updates are now beginning to be applied to AR views for tumor surgery. 53

For endoscopic endonasal transsphenoidal approaches to the skull base, although anatomic landmarks are typically sufficient to target midline and avoid injury to the carotid arteries and optic apparatus, these landmarks are absent in reoperations. In the one series of augmented neuroendoscopes to date, AR overlays of both endoscope trajectory and neurovascular anatomy were highly valuable in reaching the sellar floor safely in redo procedures, with no additional operative time or hardware setup required. 20

AR in Stereotactic Localization and Functional Neurosurgery

The advantages of AR overlays in providing “x-ray vision” to identify deep intracranial structures may be extended to ventriculostomy insertion. AR ventriculostomy simulators providing haptic feedback and 3D visualization of intracranial catheter trajectory have been instructional for junior residents in appreciating not only a proper target, the foramen of Monro, but also an appropriate trajectory and the adjacent nuclei to be avoided. AR simulators have also allowed for the real-time quantification of trainee accuracy, revealing trends of improvement with multiple attempts as well as with seniority in training. 38 , 39

AR in Spinal Surgery

Although the literature on spinal applications of AR in open procedures is relatively sparse, there is promise in the ability of AR to provide real-time trajectory guidance for percutaneous instrumentation at or superficial to skin level. 35 Current spinal stereotactic navigation systems are able to guide hardware trajectory relative to bony anatomy, but leave the skin projection of these trajectories at the discretion of the surgeon. The literature on spinal AR has largely focused on percutaneous vertebroplasty/kyphoplasty, although these are readily adaptable to percutaneous pedicle screw placement through identical transpedicular approaches.

C-arm intraoperative fluoroscopy is classically used to guide percutaneous instrumentation. One augmentation technique has involved the placement of a video camera in-line with the x-ray axis, allowing overlay of x-ray and real-world images. In a small cadaver study, although AR decreased radiation exposure compared to a C-arm–only technique, breach rates of pedicle instrumentation were 40%, far greater than the 5% to 15% accepted by most practicing surgeons. 28 , 54

Overlay of 3D-reconstructed MRI or CT data is an alternative technique. 32 In one cadaveric study overlaying intraoperative MRI for vertebroplasty guidance, needle-tip target errors averaged 6.1 mm; however, with a mean of six intraoperative MRI scans required per level, this approach is cumbersome for human clinical application. 44 In a study projecting 3D-reconstructed spine CT imaging onto cadaveric torsos for transpedicular approaches, the AR projection facilitated appropriate positioning of the C-arm for initial targeting, with an entry point error of 4.4 mm; however, AR alone could not accurately guide the needle to its final position because of a lack of angular information, with a target error of 9.1 mm. 45

Although potentially very useful in percutaneous applications, relevant given the emerging indications for minimally invasive procedures, AR for percutaneous spinal surgery remains insufficiently accurate for clinical application. The overlay of multiplanar cross-sectional imaging rather than 3D reconstructions only, similar to the displays of current stereotactic navigation systems, may provide the angular information required for more accurate targeting of implants to their final transpedicular position.

Challenges in AR

Display of 3D virtual objects onto real-world images presents multiple challenges, some specific to certain display techniques. A basic requirement for AR is the accurate registration of real and virtual spaces, which requires knowledge of the pose and optical characteristics of both real and virtual cameras. 55 Registration errors in video see-through systems, in which the real world is imaged through a video camera, are constituted by errors in camera calibration, image distortion, and object-to-patient registration. 12 Optical see-through systems, although eliminating the need for camera calibration, require tracking of head and eye movements for synchronization of real and virtual content from varying perspectives, introducing additional error. 55 Eye tracking is unnecessary for image projection techniques and AR windows. Unfortunately, projection of 2D light onto 3D surfaces becomes inaccurate with highly curved surfaces and is less useful once direct line of sight to the patient is unavailable, for instance, with the introduction of a microscope or other equipment adjacent to the operating table. AR windows, although able to display content from any perspective without eye tracking, must be placed over the area to be imaged and thus obstruct the surgical field. However, in endovascular and other procedures where the site of manipulation is distant from the target, AR windows may be appropriate.

Even with geometrically correct positional registration, problems with impaired depth perception may arise. In viewing a native scene, the human eye converges on a particular 3D point and accommodates onto that plane to view the image clearly. These focus cues are combined with numerous monocular and binocular depth cues, including shading, texture, stereopsis, motion parallax, and occlusion, to generate 3D perception in the brain. Discrepancy between accommodation and convergence impairs depth perception, most evident in optical see-through AR, in which the focal plane of the virtual image is at the level of the display panel, whereas the eye must accommodate at a longer distance onto the real-world target to see it clearly. 56 Accommodation-convergence discrepancy may also lead to visual fatigue, headaches, and diplopia, particularly after prolonged use. 57 Injection of 3D images into the oculars of stereomicroscopes somewhat alleviates this, but the focal plane of the virtual image remains incongruent with that of the target. Video see-through displays, either HMDs or external displays, minimize perceptual discrepancies between real and virtual environments by having full control of both. 25 Unfortunately, they are hampered by limited resolution relative to the native eye. Recent work on multifocal-plane stereoscopic displays, via either spatial or temporal multiplexing, shows promise for the proper display of depth and focus cues in AR. 58

Occlusion, the partial blockage of an object’s view by another, nearer object, is an important monocular depth cue for the perception of relative depth. 59 A well-documented challenge with AR environments is the occlusion of the operator’s hands or instruments by superimposed virtual images, leading to misperceptions of relative proximity. Multiple techniques of occlusion handling have been described, for instance, detecting edges and color-specific surfaces in the camera feed and retaining and displaying these features over the virtual object. 11 , 60
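
The edge-retention idea can be sketched in a few lines: blend the virtual layer into the camera frame, then restore the real pixels wherever a strong real-world edge (an instrument, a hand) is detected, so those features appear to occlude the overlay. This toy NumPy version with a finite-difference edge detector is an illustrative stand-in for the published methods, not a reimplementation of them; the function name and thresholds are assumptions.

```python
import numpy as np

def composite_with_occlusion(real, virtual, alpha=0.6, edge_thresh=30.0):
    """Blend a virtual overlay into a real camera frame, keeping strong
    real-world edges on top so they occlude the virtual layer.

    real, virtual: (H, W, 3) float arrays with values in [0, 255].
    """
    gray = real.mean(axis=2)
    # crude gradient-magnitude edge detector (central finite differences)
    gy, gx = np.gradient(gray)
    edges = np.hypot(gx, gy) > edge_thresh
    # naive AR blend everywhere...
    out = (1 - alpha) * real + alpha * virtual
    # ...then let real-world edges occlude the overlay
    out[edges] = real[edges]
    return out
```

A production system would use a proper edge or color-segmentation pipeline and temporal smoothing of the mask, but the compositing logic, overlay everywhere except on detected real features, is the same.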

Temporal synchronization of virtual and real environments is an additional challenge with all AR systems, particularly with rapid perspective changes. This has been most apparent with optical see-through systems, in which even slight delays in remapping of the virtual environment to the real world, following a change in position, are visually jarring to the surgeon. 4 By controlling both real and virtual “cameras,” video see-through systems eliminate relative visual lag; however, lag between visual and tactile feedback cannot be avoided by this technique and should ideally be less than 80 ms for the accurate manipulation of delicate structures. 61

Finally, though image fusion in AR offers the benefit of visualizing multiple 3D datasets congruently, extraneous information may distract surgeons from unpredictable findings in the operative field. Termed “inattentional blindness,” this phenomenon has been extensively studied in the aviation industry, but only recently so in surgery. 62 The predominant driver of inattentional blindness appears to be greater cognitive load of the primary task, 62 , 63 although augmentation of the visual field has also been implicated in multiple studies. 64 - 66 Use of wire-mesh and “inverse-realism” overlay techniques, rather than solid overlay, has been suggested to potentially reduce inattentional blindness. 66

Future Directions

Although AR has proven useful in the overlay of multiple 3D imaging modalities onto the working area of interest, much work needs to be done to improve utility, streamline workflow, and minimize surgeon distraction and visual fatigue. Improvements in registration techniques, currently the subject of significant research for nonaugmented navigation technologies, will allow automatic intraoperative reregistration to account for soft-tissue deformation or changes in patient positioning. Improvements in the range of content that can be overlaid, for instance, computational fluid dynamics quantifications of blood flow from intraoperative angiograms, will broaden the scope of AR to AVM resections and other procedures where purely anatomic data are not necessarily useful.

Advances in display technology, particularly in augmented optics for microscopes, will streamline AR integration into existing hardware as well as improve visualization and depth perception. The intent of AR is to display contextually relevant content over the area where it is required, theoretically minimizing the current ergonomic hindrances requiring surgeons to look simultaneously at multiple displays, for preoperative planning and intraoperative navigation. Part of the improvements in display ergonomics may come from using compact or readily wearable devices, arenas in which the consumer electronics and gaming sectors are making significant progress. The use of smartphone or tablet cameras and displays, coupled with internal accelerometers for positional tracking, has already shown promise for inexpensive video see-through AR. 50 Given their relative affordability, tablet-based AR windows are particularly useful in telesurgical applications for developing nations. 67 Consumer AR HMDs, such as Microsoft’s HoloLens, are optimized for consumption of video and gaming content, but certainly may be applicable to surgical environments with the addition of tracking capabilities for head pose and position.

Finally, the interaction between surgeons, augmented displays, and robotically actuated instruments shows tremendous promise for faster and safer surgery in multiple disciplines. Here again, the consumer and gaming sectors have made significant strides in how users interact with virtual content, particularly in gaming environments, where gesture controls must match or exceed the comfort and accuracy provided by classic handheld controllers. Novel techniques to improve interaction with virtual content, including the ability to freeze and manipulate virtual objects over a live real-world scene, are in development. 68 Haptic devices, including styli and gloves, continue to evolve in the consumer arena in an effort to improve tactile feedback; although these may not be practical in a sterile intraoperative setting, they show significant promise for preoperative planning and for surgical education. 69


In an era when image guidance is used increasingly across multiple surgical disciplines, AR represents the next frontier where guidance systems are integrated seamlessly into the surgical workflow. We review here the current state of AR using neurosurgery as an example, as one of the surgical disciplines most heavily reliant on advanced imaging. This work represents one of the most comprehensive recent overviews of neurosurgery-specific applications of AR. Challenges to the routine adoption of augmented displays remain, from technical aspects, such as depth misperception and temporal asynchrony, to human factors, such as visual fatigue and inattentional blindness. Nonetheless, rapid advances in display technology and interaction techniques, driven in part by the consumer gaming industry, promise a burgeoning role for AR in the modern neurosurgical operating room.

Acknowledgments and Funding

Salary support for DG is provided in part by a Canadian Institutes of Health Research Postdoctoral Fellowship (FRN 142931).


DG, NA, NN, SG, CM, and VXDY do not have anything to disclose.


References

1. Tang SL, Kwoh CK, Teo MY, Sing NW, Ling KV. Augmented reality systems for medical applications. IEEE Eng Med Biol Mag. 1998;17:49-58.
2. Shuhaiber JH. Augmented reality in surgery. Arch Surg. 2004;139:170-174.
3. Alaraj A, Charbel FT, Birk D, et al. Role of cranial and spinal virtual and augmented reality simulation using immersive touch modules in neurosurgical training. Neurosurgery. 2013;72(Suppl 1):115-123.
4. Sielhorst T, Feuerstein M, Navab N. Advanced medical displays: a literature review of augmented reality. J Disp Technol. 2008;4:451-467.
5. Sutherland IE. A head-mounted three dimensional display. Proc AFIPS Fall Jt Comput Conf. 1968:757-764.
6. Roberts DW, Strohbehn JW, Hatch JF, Murray W, Kettenberger H. A frameless stereotaxic integration of computerized tomographic imaging and the operating microscope. J Neurosurg. 1986;65:545-549.
7. Rankin TM, Slepian MJ, Armstrong DG. Augmented reality in surgery. In: Technological Advances in Surgery, Trauma and Critical Care. New York: Springer New York; 2015, p. 59-71.
8. Kersten-Oertel M, Jannin P, Collins DL. The state of the art of visualization in mixed reality image guided surgery. Comput Med Imaging Graph. 2013;37:98-112.
9. Tagaytayan R, Kelemen A, Sik-Lanyi C. Augmented reality in neurosurgery. Arch Med Sci. 2016:1-7.
10. Bastien S, Peuchot B, Tanguy A. Augmented reality in spine surgery: critical appraisal and status of development. Stud Health Technol Inform. 2002;88:153-156.
11. Lerotic M, Chung AJ, Mylonas G, Yang G-Z. pq-space based non-photorealistic rendering for augmented reality. Med Image Comput Comput Assist Interv. 2007;10:102-109.
12. Tuceryan M, Greer DS, Whitaker RT, et al. Calibration requirements and procedures for a monitor-based augmented reality system. IEEE Trans Vis Comput Graph. 1995;1:255-273.
13. Cabrilo I, Schaller K, Bijlenga P. Augmented reality-assisted bypass surgery: embracing minimal invasiveness. World Neurosurg. 2015;83:596-602.
14. Grimson WL, Ettinger GJ, White SJ, Lozano-Perez T, Wells WM, Kikinis R. An automatic registration method for frameless stereotaxy, image guided surgery, and enhanced reality visualization. IEEE Trans Med Imaging. 1996;15:129-140.
15. Besharati Tabrizi L, Mahvash M. Augmented reality-guided neurosurgery: accuracy and intraoperative application of an image projection technique. J Neurosurg. 2015;123:206-211.
16. Edwards PJ, Hawkes DJ, Hill DLG, et al. Augmentation of reality using an operating microscope for otolaryngology and neurosurgical guidance. J Image Guid Surg. 1995;1:172-178.
17. Gleason PL, Kikinis R, Altobelli D, et al. Video registration virtual reality for nonlinkage stereotactic surgery. Stereotact Funct Neurosurg. 1994;63:139-143.
18. Gildenberg PL, Ledoux R, Cosman E, Labuz J. The exoscope—a frame-based video/graphics system for intraoperative guidance of surgical resection. Stereotact Funct Neurosurg. 1994;63:23-25.
19. Masutani Y, Dohi T, Yamane F, Iseki H, Takakura K. Augmented reality visualization system for intravascular neurosurgery. Comput Aided Surg. 1998;3:239-247.
20. Kawamata T, Iseki H, Shibasaki T, Hori T. Endoscopic augmented reality navigation system for endonasal transsphenoidal surgery to treat pituitary tumors: technical note. Neurosurgery. 2002;50:1393-1397.
21. Paul P, Fleig O, Jannin P. Augmented virtuality based on stereoscopic reconstruction in multimodal image-guided neurosurgery: methods and performance evaluation. IEEE Trans Med Imaging. 2005;24:1500-1511.
22. Pandya A, Siadat MR, Auner G. Design, implementation and accuracy of a prototype for medical augmented reality. Comput Aided Surg. 2005;10:23-35.
23. Gildenberg PL, Labuz J. Use of a volumetric target for image-guided surgery. Neurosurgery. 2006;59:651-659.
24. Lovo EE, Quintana JC, Puebla MC, et al. A novel, inexpensive method of image coregistration for applications in image-guided surgery using augmented reality. Neurosurgery. 2007;60:362-366.
25. Kockro RA, Tsai YT, Ng I, et al. Dex-ray: augmented reality neurosurgical navigation with a handheld video probe. Neurosurgery. 2009;65:795-798.
26. Low D, Lee CK, Dip LLT, Ng WH, Ang BT, Ng I. Augmented reality neurosurgical planning and navigation for surgical excision of parasagittal, falcine and convexity meningiomas. Br J Neurosurg. 2010;24:69-74.
27. Bisson M, Cheriet F, Parent S. 3D visualization tool for minimally invasive discectomy assistance. Stud Health Technol Inform. 2010;158:55-60.
28. Navab N, Heining SM, Traub J. Camera augmented mobile C-arm (CAMC): calibration, accuracy study, and clinical applications. IEEE Trans Med Imaging. 2010;29:1412-1423.
29. Luciano CJ, Banerjee PP, Bellotte B, et al. Learning retention of thoracic pedicle screw placement using a high-resolution augmented reality simulator with haptic feedback. Neurosurgery. 2011;69:ons14-ons19; discussion ons19.
30. Wang A, Mirsattari SM, Parrent AG, Peters TM. Fusion and visualization of intraoperative cortical images with preoperative models for epilepsy surgical planning and guidance. Comput Aided Surg. 2011;16:149-160.
31. Shenai MB, Dillavou M, Shum C, et al. Virtual interactive presence and augmented reality (VIPAR) for remote surgical assistance. Neurosurgery. 2011;68:200-207.
32. Weiss CR, Marker DR, Fischer GS, Fichtinger G, Machado AJ, Carrino JA. Augmented reality visualization using image-overlay for MR-guided interventions: system description, feasibility, and initial evaluation in a spine phantom. AJR Am J Roentgenol. 2011;196:W305-W307.
33. Azimi E, Doswell J, Kazanzides P. Augmented reality goggles with an integrated tracking system for navigation in neurosurgery. IEEE Virtual Real Conf 2012 Proc. 2012:123-124.
34. Chang YZ, Hou JF, Tsao YH, Lee ST. Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery. In: Tescher AG, editor. Applied Digital Image Processing, Vol. 8499. Bellingham: SPIE-Int Soc Optical Engineering; 2012.
35. Abe Y, Sato S, Kato K, et al. A novel 3D guidance system using augmented reality for percutaneous vertebroplasty: technical note. J Neurosurg Spine. 2013;19:492-501.
36. Mahvash M, Besharati Tabrizi L. A novel augmented reality system of image projection for image-guided neurosurgery. Acta Neurochir. 2013;155:943-947.
37. Inoue D, Cho B, Mori M, et al. Preliminary study on the clinical application of augmented reality neuronavigation. J Neurol Surg. 2013;74:71-76.
38. Yudkowsky R, Luciano C, Banerjee P, et al. Practice on an augmented reality/haptic simulator and library of virtual brains improves residents' ability to perform a ventriculostomy. Simul Healthc. 2013;8:25-31.
39. Hooten KG, Lister JR, Lombard G, Lizdas DE, Lampotang S, Rajon DA, et al. Mixed reality ventriculostomy simulation: experience in neurosurgical residency. Neurosurgery. 2014;10(Suppl 4):576-581.
40. Cabrilo I, Bijlenga P, Schaller K. Augmented reality in the surgery of cerebral arteriovenous malformations: technique assessment and considerations. Acta Neurochir. 2014;156:1769-1774.
41. Cabrilo I, Bijlenga P, Schaller K. Augmented reality in the surgery of cerebral aneurysms: a technical report. Neurosurgery. 2014;10(Suppl 2):251-252.
42. Deng W, Li F, Wang M, Song Z. Easy-to-use augmented reality neuronavigation using a wireless tablet PC. Stereotact Funct Neurosurg. 2014;92:17-24.
43. Kersten-Oertel M, Gerard I, Drouin S, et al. Augmented reality in neurovascular surgery: first experiences. In: Linte CA, Yaniv Z, Fallavollita P, Abolmaesumi P, Holmes DR, editors. Augmented Environments for Computer Assisted Interventions, Vol. 8678. Berlin: Springer-Verlag; 2014, p. 80-89.
44. Fritz J, U-Thainual P, Ungi T, et al. MR-guided vertebroplasty with augmented reality image overlay navigation. Cardiovasc Intervent Radiol. 2014;37:1589-1596.
45. Wu JR, Wang ML, Liu KC, Hu MH, Lee PY. Real-time advanced spinal surgery via visible patient model and augmented reality system. Comput Methods Programs Biomed. 2014;113:869-881.
46. Watson JR, Martirosyan N, Skoch J, Lemole GM, Anton R, Romanowski M. Augmented microscopy with near-infrared fluorescence detection. In: Pogue BW, Gioux S, editors. Proc. SPIE, Molecular-Guided Surgery: Molecules, Devices, and Applications, Vol. 9311. Bellingham: SPIE-Int Soc Optical Engineering; 2015.
47. Kersten-Oertel M, Gerard I, Drouin S, et al. Augmented reality in neurovascular surgery: feasibility and first uses in the operating room. Int J Comput Assist Radiol Surg. 2015;10:1823-1836.
48. Abhari K, Baxter JSH, Chen ECS, et al. Training for planning tumour resection: augmented reality and human factors. IEEE Trans Biomed Eng. 2015;62:1466-1477.
49. Watanabe E, Satoh M, Konno T, Hirai M, Yamaguchi T. The trans-visible navigator: a see-through neuronavigation system using augmented reality. World Neurosurg. 2016;87:399-405.
50. Eftekhar B. A smartphone app to assist scalp localization of superficial supratentorial lesions—technical note. World Neurosurg. 2016;85:359-363.
51. Hill DL, Maurer CR, Maciunas RJ, Barwise JA, Fitzpatrick JM, Wang MY. Measurement of intraoperative brain surface deformation under a craniotomy. Neurosurgery. 1998;43:514-526; discussion 526-528.
52. Reinertsen I, Lindseth F, Askeland C, Iversen DH, Unsgård G. Intra-operative correction of brain-shift. Acta Neurochir (Wien). 2014;156:1301-1310.
53. Gerard IJ, Kersten-Oertel M, Drouin S, et al. Improving patient specific neurosurgical models with intraoperative ultrasound and augmented reality visualizations in a neuronavigation environment. In: Oyarzun Laura C, Shekhar R, Wesarg S, et al., editors. Clinical Image-Based Procedures. Translational Research in Medical Imaging, Vol. 9401, Lecture Notes in Computer Science; 2016, p. 28-35.
54. Shin BJ, James AR, Njoku IU, Härtl R. Pedicle screw navigation: a systematic review and meta-analysis of perforation risk for computer-navigated versus freehand insertion. J Neurosurg Spine. 2012;17:113-122.
55. Tuceryan M, Genc Y, Navab N. Single-point active alignment method (SPAAM) for optical see-through HMD calibration for augmented reality. Presence Teleoperators Virtual Environ. 2002;11:259-276.
56. Watt SJ, Akeley K, Ernst MO, Banks MS. Focus cues affect perceived depth. J Vis. 2005;5:834-862.
57. Bando T, Iijima A, Yano S. Visual fatigue caused by stereoscopic images and the search for the requirement to prevent them: a review. Displays. 2012;33:76-83.
58. Hu X, Hua H. An optical see-through multi-focal-plane stereoscopic display prototype enabling nearly correct focus cues. Proc SPIE. 2013;8648:86481A.
59. Nagata S. How to reinforce perception of depth in single two-dimensional pictures. Proc SID. 1983;25:239-246.
60. Kersten-Oertel M, Chen SS, Drouin S, Sinclair DS, Collins DL. Augmented reality visualization for guidance in neurovascular surgery. Stud Health Technol Inform. 2012;173:225-229.
61. Ware C, Balakrishnan R. Reaching for objects in VR displays: lag and frame rate. ACM Trans Comput Hum Interact. 1994;1:331-356.
62. Hughes-Hallett A, Mayer EK, et al. Inattention blindness in surgery. Surg Endosc. 2015;29:3184-3189.
63. Simons DJ, Chabris CF. Gorillas in our midst: sustained inattentional blindness for dynamic events. Perception. 1999;28:1059-1074.
64. Dixon BJ, Daly MJ, Chan HH, Vescan A, Witterick IJ, Irish JC. Inattentional blindness increased with augmented reality surgical navigation. Am J Rhinol Allergy. 2014;28:433-437.
65. Dixon BJ, Daly MJ, Chan H, Vescan AD, Witterick IJ, Irish JC. Surgeons blinded by enhanced navigation: the effect of augmented reality on attention. Surg Endosc. 2013;27:454-461.
66. Marcus HJ, Pratt P, Hughes-Hallett A, et al. Comparative effectiveness and safety of image guidance systems in surgery: a preclinical randomised study. Lancet. 2015;385:S64.
67. Davis MC, Can DD, Pindrik J, Rocque BG, Johnston JM. Virtual interactive presence in global surgical education: international collaboration through augmented reality. World Neurosurg. 2016;86:103-111.
68. Arshad H, Chowdhury SA, Chun LM, Parhizkar B, Obeidy WK. A freeze-object interaction technique for handheld augmented reality systems. Multimed Tools Appl. 2016;75:5819-5839.
69. Tang JKT, Tewell J. Emerging human-toy interaction techniques with augmented and mixed reality. New York: Springer International Publishing; 2015, p. 77-105.
