
Camera Augmented Mobile C-arm (CAMC)

Project page for Camera-Augmented X-ray research

What is CAMC?

Mobile C-arms are essential tools in everyday trauma and orthopedic surgery, as many procedures are guided by X-ray fluoroscopy. Even when a 3D CT reconstruction is computed offline, X-ray fluoroscopy often remains the interventional image-guidance modality. The success of the procedure, however, depends on the surgeon's ability to mentally recreate the spatio-temporal intraoperative situation from two-dimensional fluoroscopic X-ray images. The medical imaging research community has invested considerable effort in replacing fluoroscopic guidance with an interactive display of 3D bone models created from preoperative CT studies and tracked in real time. The basic motivation for proposing the Camera-Augmented Mobile C-arm (CAMC) is to enable the mobile C-arm itself to provide 3D reconstruction results.

This subpage provides an overview of all camera-augmented X-ray research at the CAMP chair. Our research covers topics such as X-ray calibration, advanced AR visualization through co-registration of RGB and X-ray images, intra-operative navigation of surgical tools, 3D CBCT reconstruction, panoramic X-ray image stitching, 3D-3D calibration of CBCT and RGBD data, radiation exposure estimation, and more. Over the last two decades, more than 60 papers on the Camera-Augmented Mobile C-arm have been published. The following section gives an overview of the history of the CAMC system.

History of CAMC

In 1998, Nassir Navab, Matthias Mitschke, and Ali Bani-Hashemi, working at Siemens Corporate Research, Princeton, NJ, USA, filed a number of patents [1-4] describing a mirror system for superimposing X-ray and video images, as well as a patent titled "Method and apparatus using a virtual detector for three-dimensional reconstruction from x-ray images" [5]. In their 1999 paper "Merging visible and invisible: Two camera-augmented mobile C-arm (CAMC) applications" [6], they described two applications of a camera-augmented C-arm. The first application merges tomographic reconstructions of the 3D volume with video images. The second introduces a double-mirror system and an appropriate calibration procedure for a real-time overlay of X-ray and optical images. The pictures below show the first CAMC prototype with the attached double-mirror system and CCD camera.

By attaching a CCD camera to a low-cost mobile C-arm and using the concept of a "virtual detector plane", the X-ray projection geometry can be recovered, and thus the volume of interest can be reconstructed. This integrated camera solution was compared to an external tracking system for motion estimation and achieved better reconstruction results [7].
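As a rough illustration (not the CAMC calibration code itself), "recovering the projection geometry" means determining, for each view, a pinhole-style projection P = K [R | t] that maps 3D points to detector pixels; this per-view mapping is the prerequisite for tomographic reconstruction. All matrix values below are illustrative:

```python
# Minimal pinhole-projection sketch: given intrinsics K and pose [R | t]
# of the X-ray source for one view, a 3D point maps to a detector pixel.
# Values are illustrative, not calibration results.

def project(K, R, t, X):
    """Project 3D point X through P = K [R | t]; return pixel (u, v)."""
    # source/camera coordinates: Xc = R @ X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # apply intrinsics and dehomogenize
    x = [sum(K[i][j] * Xc[j] for j in range(3)) for i in range(3)]
    return (x[0] / x[2], x[1] / x[2])

# identity orientation, source 1000 mm in front of the detector plane
K = [[1000.0, 0.0, 256.0],   # focal length and principal point in pixels
     [0.0, 1000.0, 256.0],
     [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 1000.0]

u, v = project(K, R, t, [0.0, 0.0, 0.0])  # point on the optical axis
```

With such a matrix known for every C-arm pose, each X-ray image constrains the volume along known rays, which is what makes reconstruction from a set of views possible.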

The 2000 paper "Interventions under Video-Augmented X-Ray Guidance: Application to Needle Placement" [8] presented the first application of the double-mirror system for merging X-ray and optical images in real time, making it the first system to provide direct optical augmentation of X-ray images.

In 2003, an additional patent for "Real-time acquisition of co-registered X-ray and optical images" was filed [9]. A concept for intra-operative repositioning was developed and published in 2006 [10]. Further research extended the system, for example with a multi-view setup for 3D navigation in trauma and orthopedic surgery that tracks the surgical drill [11]. In 2010, 40 orthopedic and trauma surgeries were conducted using the CAMC system.

In the following years, research addressed parallax-free image stitching for panoramic X-ray views [12] and modelling the kinematics of the mobile C-arm and operating table as an integrated 6-DOF C-arm X-ray imaging system, which enables mobile C-arms to be (re)positioned relative to the patient table with six degrees of freedom in 3D Cartesian space [13].

Olivier Pauly et al. presented a learning-based paradigm that identifies relevant objects in both the X-ray and optical images captured with the CAMC system and builds a fused image with improved perception of the scene [14].

In 2015, Habert et al. showed the potential of replacing the RGB camera with a depth (RGB-D) camera and presented a visualization application in which an X-ray image is fused with the 3D reconstruction of the surgical scene using the mirror system [15]. Lee et al. [16] and Fotouhi et al. [17] presented a calibration technique for registering CBCT data to RGBD surface data. The system enables an intuitive 3D visualization that overlays both physical and anatomical information from arbitrary views. The 2016 paper "Preclinical usability study of multiple augmented reality concepts for K-wire placement" compared this system against conventional X-ray imaging and the mirror-based video-augmented X-ray system [18]. The 3D visualization of patient, tool, and DRR showed clear advantages over conventional X-ray imaging, providing intuitive feedback for placing medical tools correctly and efficiently.
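Registering CBCT data to RGBD surface data amounts to estimating a rigid transform between two coordinate frames. As a toy illustration (not the authors' method), a paired-point rigid registration can be solved in closed form; the sketch below does this in 2D with made-up point values, where the real problem is 3D:

```python
import math

# Toy paired-point rigid registration (2D for brevity): given matched
# points from two coordinate frames, recover the rotation angle and
# translation in closed form. CBCT-to-RGBD calibration poses the same
# problem in 3D; all point values here are illustrative.

def rigid_2d(src, dst):
    n = len(src)
    cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n; cy_d = sum(p[1] for p in dst) / n
    # accumulate cross- and dot-products of centered point pairs
    s_cross = s_dot = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cx_s; ys -= cy_s; xd -= cx_d; yd -= cy_d
        s_cross += xs * yd - ys * xd
        s_dot += xs * xd + ys * yd
    theta = math.atan2(s_cross, s_dot)
    # translation maps the rotated source centroid onto the destination centroid
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, (tx, ty)

# synthetic ground truth: rotate by 30 degrees, translate by (5, -2)
ang = math.radians(30.0)
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]
dst = [(math.cos(ang) * x - math.sin(ang) * y + 5.0,
        math.sin(ang) * x + math.cos(ang) * y - 2.0) for x, y in src]
theta, (tx, ty) = rigid_2d(src, dst)
```

Once such a transform is known, any point in the CBCT volume can be expressed in the RGBD camera frame (and vice versa), which is what enables the fused 3D visualization described above.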

In their 2017 paper "Pose-aware C-arm for automatic re-initialization of interventional 2D/3D image registration", Fotouhi et al. use an RGBD camera mounted near the X-ray detector to compute the pose of the C-arm with RGBD-SLAM. In a second step, this pose estimate serves as the initialization for 2D/3D intensity-based image registration using digitally reconstructed radiographs (DRRs) [19].
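The role of the camera-derived pose can be illustrated with a toy example, not the method of [19]: intensity-based registration searches for the transform that maximizes a similarity metric (here, normalized cross-correlation) between the rendered and the fixed image, and a good initialization keeps that search local. The sketch below reduces this to a 1D shift search over synthetic data:

```python
# Toy intensity-based registration sketch: find the integer shift that
# best aligns a small "moving" profile (standing in for a DRR) with a
# "fixed" profile (standing in for the X-ray) by maximizing normalized
# cross-correlation (NCC) inside a window around an initial guess.
# In the real system the initial pose comes from RGBD-SLAM; here the
# data and the search space are entirely synthetic and 1D.

def ncc(a, b):
    """Normalized cross-correlation of two equal-length sequences."""
    n = len(a)
    ma = sum(a) / n; mb = sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def register(fixed, moving, init_shift, window=3):
    """Search shifts in [init - window, init + window]; return the best."""
    best_shift, best_score = init_shift, -2.0
    for s in range(init_shift - window, init_shift + window + 1):
        patch = fixed[s:s + len(moving)] if s >= 0 else []
        if len(patch) < len(moving):
            continue  # shift falls outside the fixed image
        score = ncc(patch, moving)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

fixed = [0, 0, 0, 1, 4, 9, 4, 1, 0, 0, 0, 0]
moving = [1, 4, 9, 4, 1]                    # same profile, true shift = 3
shift = register(fixed, moving, init_shift=2)
```

The intensity-based metric is blind to anything outside the search window, which is why an automatic re-initialization from the C-arm pose matters: it places the starting point close enough to the true alignment for the local optimization to succeed.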

Tucker et al. [20] combined 2D/3D intensity-based registration of intra-operative X-ray images to a pre-operatively acquired CT volume with an RGBD sensor on the C-arm to fuse optical information with the patient's pre-operative medical data and provide an augmented reality environment. Recent research uses marker-less, dynamic SLAM-based tracking of the environment (the operating theater) to provide in situ visualization of pre- and intra-operative 3D medical data at the surgical site [21].

This section provided an overview of the research done at the CAMP chair over two decades by presenting selected papers. A complete list of all CAMC-related publications can be found in the "Publications" section at the bottom of this page.

Literature

[1] Method for aligning an apparatus for superimposing X-ray and video images, Ali Bani-Hashemi, Nassir Navab, Matthias Mitschke, US Patent 6,229,873, Sep 30, 1999.

[2] Laser-based method for aligning apparatus for superimposing X-ray and video images, Ali Bani-Hashemi, Nassir Navab, Matthias Mitschke, US Patent 6,227,704, Sep 30, 1999.

[3] Apparatus for superimposition of X-ray and video images, Ali Bani-Hashemi, Nassir Navab, Matthias Mitschke, US Patent 6,473,489, Sep 30, 1999.

[4] Method for aligning and superimposing X-ray and video images, Ali Bani-Hashemi, Nassir Navab, Matthias Mitschke, US Patent 6,447,163, Sep 30, 1999.

[5] Method and apparatus using a virtual detector for three-dimensional reconstruction from x-ray images, Nassir Navab, Matthias Mitschke, US Patent 6,236,704, Dec 15, 1998.

[6] Merging visible and invisible: Two camera-augmented mobile C-arm (CAMC) applications, Navab, Nassir, Bani-Hashemi, Ali, and Mitschke, Matthias, In Proceedings of 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR), pp 134-141, San Francisco, CA, USA, 1999.

[7] Camera-augmented mobile C-arm (CAMC) application: 3D reconstruction using a low-cost mobile C-arm, Navab, Nassir, Mitschke, Matthias, and Schütz, Oliver, In Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp 688-697, Cambridge, UK, 1999.

[8] Interventions under video-augmented X-ray guidance: Application to needle placement, Mitschke, Matthias, Bani-Hashemi, Ali, and Navab, Nassir, In Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp 858-868, Pittsburgh, PA, USA, 2000.

[9] Real-time acquisition of co-registered X-ray and optical images, Nassir Navab, James P. Williams, US Patent 7,198,404, Mar 04, 2003.

[10] Visual servoing for intraoperative positioning and repositioning of mobile C-arms, Navab, Nassir, Wiesner, Stefan, Benhimane, Selim, Euler, Ekkehard, and Heining, Sandro Michael, In Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp 551-560, Copenhagen, Denmark, 2006.

[11] A multi-view opto-Xray imaging system, Traub, Joerg, Heibel, Tim Hauke, Dressel, Philipp, Heining, Sandro Michael, Graumann, Rainer, and Navab, Nassir, In Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp 18-25, Brisbane, Australia, 2007.

[12] Parallax-free long bone X-ray image stitching, Wang, Lejing, Traub, Joerg, Weidert, Simon, Heining, Sandro Michael, Euler, Ekkehard, and Navab, Nassir, In Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp 173-180, London, UK, 2009.

[13] Closed-form inverse kinematics for intra-operative mobile C-arm positioning with six degrees of freedom, Wang, Lejing, Zou, Rui, Weidert, Simon, Landes, Juergen, Euler, Ekkehard, Burschka, Darius, and Navab, Nassir, In Proceedings of Medical Imaging 2011: Visualization, Image-Guided Procedures and Modeling, pp 485-493, Lake Buena Vista (Orlando), Florida, USA, 2011.

[14] Supervised classification for customized intraoperative augmented reality visualization, Pauly, Olivier, Katouzian, Amin, Eslami, Abouzar, Fallavollita, Pascal, and Navab, Nassir, In Proceedings of 2012 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp 311-312, Atlanta, GA, USA, 2012.

[15] RGBDX: first design and experimental validation of a mirror-based RGBD X-ray imaging system, Habert, Séverine, Gardiazabal, José, Fallavollita, Pascal, and Navab, Nassir, In Proceedings of 2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp 13-18, Fukuoka, Japan, 2015.

[16] Calibration of RGBD camera and cone-beam CT for 3D intra-operative mixed reality visualization, Sing Chun Lee, Bernhard Fuerst, Javad Fotouhi, Marius Fischer, Greg Osgood, Nassir Navab, International journal of computer assisted radiology and surgery, 11 (6), pp 967-975, 2016.

[17] Interventional 3D augmented reality for orthopedic and trauma surgery, Javad Fotouhi, Bernhard Fuerst, Sing Chun Lee, Matthias Keicher, Marius Fischer, Simon Weidert, Ekkehard Euler, Nassir Navab, Greg Osgood, In Proceedings of 16th annual meeting of the Int. Society for Computer Assisted Orthopedic Surgery (CAOS), 2016.

[18] Preclinical usability study of multiple augmented reality concepts for K-wire placement, Marius Fischer, Bernhard Fuerst, Sing Chun Lee, Javad Fotouhi, Severine Habert, Simon Weidert, Ekkehard Euler, Greg Osgood, Nassir Navab, International Journal of Computer Assisted Radiology and Surgery, 11(7), pp 1007-1014, 2016.

[19] Pose-aware C-arm for automatic re-initialization of interventional 2D/3D image registration, Javad Fotouhi, Bernhard Fuerst, Alex Johnson, Sing Chun Lee, Russell Taylor, Greg Osgood, Nassir Navab, Mehran Armand, International Journal of Computer Assisted Radiology and Surgery, 12(7), pp 1221-1230, 2017.

[20] Towards clinical translation of augmented orthopedic surgery: from pre-op CT to intra-op x-ray via RGBD sensing, Emerson Tucker, Javad Fotouhi, Mathias Unberath, Sing Chun Lee, Bernhard Fuerst, Alex Johnson, Mehran Armand, Greg Osgood, Nassir Navab, In Proceedings of Medical Imaging 2018: Imaging Informatics for Healthcare, Research and Applications, 105790J, 2018.

[21] Closing the Calibration Loop: An Inside-Out-Tracking Paradigm for Augmented Reality in Orthopedic Surgery, Jonas Hajek, Mathias Unberath, Javad Fotouhi, Bastian Bier, Sing Chun Lee, Greg Osgood, Andreas Maier, Mehran Armand, Nassir Navab, In Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp 299-306, Granada, Spain, 2018.

Videos

Complete List of CAMC-related Publications