
From two images to a 3D object

Researchers at the Technical University of Munich (TUM) have succeeded in generating precise 3D reconstructions of objects using images from only two camera perspectives. The method works even with images captured in the objects' natural surroundings. Previously, such reconstructions required hundreds of perspectives or laboratory conditions. Camera-based reconstructions are used, for example, in autonomous driving and in the preservation of historical monuments.

Jun 20, 2024, 3:29:53 PM
Julia Rinner, Technische Universität München

In recent years, neural methods have become widespread in camera-based reconstruction. In most cases, however, they need hundreds of camera perspectives. Conventional photometric methods, on the other hand, can compute highly precise reconstructions even of objects with textureless surfaces, but they typically work only under controlled lab conditions.

More precise reconstructions despite small numbers of data points

Daniel Cremers, professor of Computer Vision and Artificial Intelligence at TUM, leader of the Munich Center for Machine Learning (MCML) and a director of the Munich Data Science Institute (MDSI), has developed a method together with his team that combines the two approaches. It couples a neural network representing the surface with a precise model of the illumination process that accounts for light absorption and for the distance between the object and the light source. The brightness in the images is used to determine the angle and distance of the surface relative to the light source (see the first sketch below). "That enables us to model the objects with much greater precision than existing processes. We can use the natural surroundings for our reconstructions and can reconstruct relatively textureless objects," says Daniel Cremers.

Applications in autonomous driving and preservation of historical artefacts

The method can be used to preserve historical monuments or digitize museum exhibits. If these are destroyed or decay over time, photographic images can be used to reconstruct the originals and create authentic replicas. Prof. Cremers' team also develops neural camera-based reconstruction methods for autonomous driving, where a camera films the vehicle's surroundings. The autonomous car can model its surroundings in real time, build a three-dimensional representation of the scene, and use it to make decisions. The process is based on neural networks that predict 3D point clouds for individual video frames, which are then merged into a large-scale model of the roads travelled (see the second sketch below).

More information:

- Daniel Cremers is a professor of computer vision and artificial intelligence. He is also a director of the Munich Data Science Institute (MDSI) and the Munich Center for Machine Learning (MCML). His research focuses mainly on computer vision and machine learning, with the goal of developing algorithms for precise 3D reconstructions and image analysis.
- The study will be presented at the Conference on Computer Vision and Pattern Recognition (CVPR) in Seattle (USA) from June 17 to 21, 2024. CVPR is the largest and most important conference in its field.
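To make the shading idea above more concrete, the first sketch below shows a generic near-point-light Lambertian model: brightness falls off with the squared distance to the light and with the angle between the surface normal and the light direction. The function name, variables and the exact falloff term are assumptions for illustration, not the authors' published formulation.

```python
import numpy as np

def near_light_lambertian(points, normals, albedo, light_pos, light_intensity):
    """Predicted brightness of surface points lit by a single nearby point light.

    Illustrative near-light Lambertian model (assumed, not the paper's exact one):
    brightness ~ albedo * cos(angle to light) * intensity / distance^2.

    points:          (N, 3) surface points
    normals:         (N, 3) unit surface normals
    albedo:          (N,)   diffuse reflectance per point
    light_pos:       (3,)   position of the point light
    light_intensity: scalar radiant intensity of the light
    """
    to_light = light_pos[None, :] - points                  # vectors from surface to light
    dist = np.linalg.norm(to_light, axis=1)                 # object-to-light distances
    l_hat = to_light / dist[:, None]                        # unit light directions
    cos_theta = np.clip((normals * l_hat).sum(axis=1), 0.0, None)  # shading angle term
    return albedo * light_intensity * cos_theta / dist**2   # inverse-square falloff
```

In an analysis-by-synthesis setup of this kind, the surface predicted by the neural network is adjusted until the brightness it would produce under such a model matches the brightness observed in the two input images.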
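The driving pipeline described above merges per-frame point clouds into one large-scale model. The second sketch shows that merging step in its simplest hypothetical form, assuming each video frame comes with a predicted point cloud and an estimated camera pose; the data layout is an assumption, not the team's actual pipeline.

```python
import numpy as np

def merge_point_clouds(frames):
    """Fuse per-frame point clouds (in camera coordinates) into one world-frame cloud.

    frames: list of (points, R, t) tuples, where
        points: (N, 3) 3D points predicted for one video frame, in camera coordinates
        R:      (3, 3) camera-to-world rotation for that frame
        t:      (3,)   camera position in world coordinates
    """
    world_points = []
    for points, R, t in frames:
        # Transform each frame's points into the shared world frame, then accumulate.
        world_points.append(points @ R.T + t)
    return np.concatenate(world_points, axis=0)
```

A real system would additionally filter outliers and downsample the merged cloud, but the core idea is this change of coordinates into a common frame.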

Contact for scientific information:

Daniel Cremers
Technical University of Munich
Professorship for Computer Vision and Artificial Intelligence
cremers@tum.de

Original Publication:

Mohammed Brahimi, Bjoern Haefner, Zhenzhang Ye, Bastian Goldluecke, Daniel Cremers: Sparse Views, Near Light: A Practical Paradigm for Uncalibrated Point-light Photometric Stereo. Conference on Computer Vision and Pattern Recognition (CVPR), 2024.

Source:

https://idw-online.de/de/news835679