Équipe AVR - Automatique Vision et Robotique

Internship topics

Revision dated 17 November 2020 at 10:38 by Cindy.rolland (added an internship offer)

Developing an Augmented Reality tool to visualize X-ray radiations

Internship offer

Located on the campus of Strasbourg's University Hospital, the CAMMA research group develops new tools and methods based on machine learning and computer vision to support the medical staff working in the operating room.

Figure: AR visualization of a surgeon's radiation exposure

Mission: Within an innovative team, contribute to the development and optimization of an application for visualizing simulated X-rays in augmented reality. The purpose of this application is to raise awareness about the use of radiation in the operating room, for in-situ safety teaching.
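The radiation fields to be visualized come from Monte Carlo simulation (run on GPU in the actual project). As a purely illustrative toy, and not the project's code, the Monte Carlo idea can be sketched in plain Python: trace many photons from a source, let each scatter randomly until it is absorbed or escapes, and accumulate deposited energy on a grid. The function name and the 2-D random-walk model are assumptions for illustration only, not a physical X-ray transport model.

```python
import random

def simulate_dose(n_photons=10000, grid=21, absorb_p=0.3, seed=0):
    # Toy 2-D Monte Carlo transport: each photon starts at the grid
    # centre (the "source") and random-walks until it is absorbed
    # (depositing its energy in the current cell) or leaves the grid.
    rng = random.Random(seed)
    dose = [[0.0] * grid for _ in range(grid)]
    for _ in range(n_photons):
        x = y = grid // 2
        while 0 <= x < grid and 0 <= y < grid:
            if rng.random() < absorb_p:
                dose[y][x] += 1.0          # energy deposited here
                break
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy          # isotropic scatter step
    return dose
```

The resulting dose map peaks near the source and decays outward; in the real application, maps like this (computed with proper physics) are what the HoloLens overlays onto the room.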

Based on Microsoft HoloLens technology, this application places you in the context of an intervention relying on X-ray radiation, simulated on the GPU with the Monte Carlo method. The tests will be performed in a hybrid room equipped with RGBD cameras that will need to be registered with the HoloLens. Communication between the HoloLens and the system will use the Wi-Fi protocol.
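Registering the RGBD cameras with the HoloLens amounts to estimating the rigid transform between their coordinate frames. A standard way to do this, given a set of corresponding 3-D points observed in both frames, is the SVD-based Kabsch algorithm; a minimal sketch follows (the function name and the assumption of already-matched point pairs are illustrative, not the project's actual pipeline).

```python
import numpy as np

def rigid_transform(src, dst):
    # Least-squares rigid transform (R, t) such that dst_i ≈ R @ src_i + t,
    # estimated from N >= 3 corresponding 3-D points (Kabsch algorithm).
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In practice the correspondences would come from a calibration target visible to both the RGBD cameras and the HoloLens.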

Required profile: You are a student in the last year of an engineering school or of a research master's degree specialized in computer science, looking for an end-of-study internship:

  • you are serious and motivated
  • you have skills in computer vision
  • you are strongly attracted by augmented reality
  • you are able to work in a team
  • you have good English skills, both written and spoken


Desirable skills:

  • Experience with C++, C# and/or Unity
  • Experience developing an AR application, ideally on HoloLens

Duration: 5 to 6 months

Starting date: January-February 2020

Job types: Full-time, Internship

Website: http://camma.u-strasbg.fr/

Contact information:




Related publications:

N. Loy Rodas, J. Bert, D. Visvikis, M. de Mathelin, N. Padoy, Pose Optimization of a C-arm Imaging Device to Reduce Intraoperative Radiation Exposure of Staff and Patient during Interventional Procedures, IEEE International Conference on Robotics and Automation (ICRA), 2017

N. Loy Rodas, F. Barrera, N. Padoy, See It With Your Own Eyes: Marker-less Mobile Augmented Reality for Radiation Awareness in the Hybrid Room, IEEE Transactions on Biomedical Engineering (TBME), vol. 64, no. 2, pp. 429-440, Feb. 2017, doi:10.1109/TBME.2016.2560761

N. Loy Rodas, N. Padoy, Seeing Is Believing: Increasing Intraoperative Awareness to Scattered Radiation in Interventional Procedures by Combining Augmented Reality, Monte Carlo Simulations and Wireless Dosimeters, International Journal of Computer Assisted Radiology and Surgery (IJCARS), MICCAI Special Issue, vol. 10, no. 8, pp. 1181-1191, 2015

Computer vision for robotic flexible endoscopy

PDF file of the internship proposal

Title: Environment reconstruction using a monocular endoscopic camera

Keywords: visual tracking, shape from motion, depth recovery, medical robotics

Duration: approximately 5 months (ideally between February and August 2021)

Grant: legal grant for training periods (~550 euros/month)

Location: ICube robotics platform, at the IHU Strasbourg

Context: This internship takes place within the scope of assisting medical procedures with robotic flexible endoscopes.

The AVR team of the ICube laboratory has developed a robotic platform for endoluminal surgery called STRAS (see photo below). It is a telemanipulated system equipped with an endoscopic camera and two articulated instruments, with 3 degrees of freedom each. In addition to conventional telemanipulation control, we aim to add automatic modes to the robot in order to perform tasks such as automated scanning or automatic endoscope positioning. To reach this goal, one of the difficulties to be tackled is reconstructing the shape of the environment with the only available sensor: a monocular endoscopic camera.

Figure: automatic task viewed from the endoscopic camera
Figure: the STRAS robotic system

Problem to be solved: In this project, we aim to reconstruct the shape and position of the environment (tissues in in vivo conditions, phantoms in laboratory setups) with respect to the endoscopic camera. Since the camera is monocular, shape and structure from motion will primarily be used to reconstruct the environment and the camera motions up to a scale factor. Shape from shading could also be envisioned. The difficulties are the low quality of endoscopic images, the limited lateral displacement of the endoscope, and the possible interactions of the instruments with the tissues, which create disturbing motions and deformations. In a second step, we will try to reconstruct the metric shape and positions. This can be done using odometric measurements on the endoscope. However, these measurements are known to be imprecise. Specific strategies will thus be needed to recover the unknown scale factor, using for instance Bayesian filtering approaches or machine learning techniques.
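To make the scale-factor problem concrete: monocular structure from motion yields frame-to-frame translations only up to an unknown scale s, while the endoscope's odometry gives noisy metric translations. The simplest estimator (a baseline the Bayesian or learning approaches would refine) is the closed-form least squares scale aligning the two sets of translations. A minimal sketch, with an illustrative function name and synthetic data assumed:

```python
import numpy as np

def recover_scale(mono_t, odo_t):
    # Least-squares scale s minimising sum_i ||s * m_i - o_i||^2, where
    # m_i are up-to-scale frame-to-frame translations from monocular
    # structure from motion and o_i are the (metric, noisy) odometry ones.
    # Setting the derivative to zero gives the closed form:
    #   s = sum_i (m_i . o_i) / sum_i ||m_i||^2
    m = np.asarray(mono_t, dtype=float)
    o = np.asarray(odo_t, dtype=float)
    return float((m * o).sum() / (m * m).sum())
```

Multiplying the up-to-scale reconstruction by s yields metric shape and positions; a Bayesian filter would instead track s (and its uncertainty) over time as new motion pairs arrive.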

Work to be carried out: The intern will develop algorithms for shape reconstruction from monocular images, relying on state-of-the-art methods for tissue tracking in endoscopy (gastroenterology in particular). Algorithms have already been implemented for pure tracking and can serve as a basis. Techniques for depth estimation will then be developed, focusing on the use of embedded measurements provided by the robot encoders. If needed, a second miniature camera could be added to the setup. Tests will be carried out in the laboratory on phantoms and on in vivo images acquired during previous preclinical trials.
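Once the camera motion between two frames is known (from structure from motion or from the robot encoders), the depth of a tracked feature follows from triangulation. A minimal linear (DLT) two-view triangulation sketch is below; the projection matrices and matched pixel coordinates are assumed inputs, and the function name is illustrative, not part of the existing tracking code.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation of one point observed in two views.
    # P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image
    # coordinates of the same tracked feature. Each view contributes
    # two rows of the homogeneous system A @ X = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                  # null-space vector = homogeneous point
    return X[:3] / X[3]         # dehomogenise
```

Applied to every tracked feature, this yields the sparse environment shape; if the baseline comes from odometry, the result is metric directly.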

Work environment: The internship will take place on the medical robotics platform of the ICube laboratory, located at the IHU (Institut Hospitalo-Universitaire) in the heart of Strasbourg. The intern will be supervised by Florent Nageotte (associate professor in medical robotics) and Philippe Zanne (engineer, responsible for the STRAS robotic system). The intern will have access to a computer for development, to image acquisition systems, to in vivo images, and to the robotic device for laboratory testing. Developments will be made in C/C++ or Python, possibly with Matlab for prototyping.

Covid-19 conditions: If sanitary constraints prevent the internship from being carried out on site, a large part of the work could be done remotely, working on data acquired offline; only testing on the robot would be impossible. The intern would have to work on his/her own laptop, either developing and running algorithms locally or remotely on a connected machine.

Candidate profile: We are looking for second-year Master students, or engineering-school students at the Master 2 level, with a major in computer vision, or in robotics / computer science with a strong interest or experience in computer vision. Interest in medical applications is a plus. Proficiency in C/C++ or Python is mandatory.

Conditions: 5 to 6 months between February 2021 and August/September 2021. The intern will receive the legal “gratification” (around 550€/month).

Application: Interested candidates should send a CV/resume, their master program and grades (if available), and a motivation letter to Nageotte@unistra.fr, mentioning “computer vision internship” in the email subject.

Topics in Computer Vision / Deep Learning (CAMMA: Computational Analysis and Modeling of Medical Activities)

We are looking for motivated and talented students with knowledge in computer vision and/or machine learning who can contribute to the development of our computer vision system for the operating room.

Please feel free to contact Nicolas Padoy if you are interested in doing your master's thesis or an internship with us (funding of ~500 euros/month will be provided during 4 to 6 months). The successful candidates will be part of a dynamic and international research group hosted within the IRCAD institute at the University Hospital of Strasbourg. They will thereby have direct contact with clinicians and industrial partners, and access to an exceptional research environment. The CAMMA project is supported by the laboratory of excellence CAMI, the IdEx Unistra, and the MixSurg Institute.


  • Deep Learning for Activity Recognition in Large Video Databases
  • Multi-view Human Body Tracking for the Operating Room using RGBD Cameras

More information about CAMMA