LISA (Laboratory of Image Synthesis and Analysis) brings together expertise in image processing and analysis, pattern recognition, image synthesis and virtual reality. Its LISA-IA unit focuses on image analysis and pattern recognition, developing new methods for 2D and 3D object segmentation, recognition and tracking, multi-modal image registration, as well as machine and deep learning methods for signal and image processing. In the latter context, research is carried out on coping with imperfect (weak or noisy) annotations and on methods for evaluating algorithms in situations where no ground truth is available. The developed algorithms target biomedical and industrial applications. Following a problem-centered approach, the unit tackles all hardware and software aspects of the processing chain in multidisciplinary teams (MDs, biologists, engineers, computer scientists, mathematicians, as well as art historians and archaeologists) and multi-institutional collaborations to deliver functional applications. The research is funded both by institutional/public funds and by industry collaborations. LISA's achievements include one patent, several highly cited biomedical papers, the implementation of acquisition and thermoregulation devices for live cell imaging, multi-media event organization and international cultural heritage projects.
The LISA-VR unit studies advanced Computational Imaging techniques to capture and render natural scenes in 3D for Virtual Reality (cf. attached pictures 1 & 2) and Holography (cf. attached pictures 3 & 4). Instead of explicitly modeling the scene with 3D models, as is typically done in 3D video games, we aim at low-cost light-field solutions, characterizing not only the light intensity but also the light direction at each point in space, i.e., the plenoptic function. To obtain a specific view of the scene, only a subset of the various light directions is rendered, using so-called Depth Image-Based Rendering techniques that rely on images alone to create proper 3D perspective illusions. Their principles have been known for a long time, but it remains challenging to make them work in practical settings; in particular, depth sensing/estimation, disocclusion handling and the camera architectural settings remain open research questions. The ultimate goal is to properly capture the 3D scene's light field with as little equipment as possible, while reaching high-resolution virtual views of the scene for Holographic portraiture and Virtual Reality, the latter also imposing stringent real-time rendering constraints.
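The Depth Image-Based Rendering principle described above can be illustrated with a minimal forward-warping sketch. This is a generic illustration under simplifying assumptions (a pinhole camera with intrinsics `K`, a translated virtual camera without rotation), not LISA's actual pipeline: each pixel is back-projected to 3D using its depth, moved into the virtual camera frame, and re-projected; a z-buffer keeps the nearest surface, and target pixels that receive no source pixel remain holes, which is exactly the disocclusion problem mentioned above.

```python
import numpy as np

def dibr_warp(image, depth, K, t):
    """Forward-warp a reference view to a virtual camera translated by t.

    image : (h, w) grayscale reference view
    depth : (h, w) per-pixel depth in the reference camera
    K     : (3, 3) pinhole intrinsics
    t     : length-3 translation of the virtual camera (no rotation here)

    Returns the virtual view; disoccluded pixels are marked with -1.
    """
    h, w = depth.shape
    K_inv = np.linalg.inv(K)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])  # 3 x N homogeneous

    X = (K_inv @ pix) * depth.ravel()      # back-project each pixel to 3D
    Xv = X - np.reshape(t, (3, 1))         # express in the virtual camera frame
    proj = K @ Xv                          # re-project into the virtual view

    out = np.full((h, w), -1.0)            # -1 marks disocclusion holes
    zbuf = np.full((h, w), np.inf)
    img = image.ravel()
    for j in range(pix.shape[1]):
        z = proj[2, j]
        if z <= 0:                         # behind the virtual camera
            continue
        x = int(round(proj[0, j] / z))
        y = int(round(proj[1, j] / z))
        if 0 <= x < w and 0 <= y < h and z < zbuf[y, x]:
            zbuf[y, x] = z                 # keep the nearest surface
            out[y, x] = img[j]
    return out
```

With a fronto-parallel scene at depth 2 and focal length 100, a lateral translation of 0.02 shifts the image by one pixel (disparity = f·tx/z) and leaves a one-pixel hole at the image border; a real renderer would inpaint such holes from neighboring views.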