Jason Rambach

Senior Researcher, Team Leader Spatial Sensing and Machine Perception
@ German Research Center for AI (DFKI)

jasonrambach.png

Jason Rambach received his PhD in Computer Science from the University of Kaiserslautern in 2020 for his dissertation entitled “Learning Priors for Augmented Reality Tracking and Scene Understanding”. He has been at DFKI Augmented Vision in Kaiserslautern since 2015. Currently, he is a Senior Researcher and Deputy Director, leading the team “Spatial Sensing and Machine Perception”, which works on Scene Perception and Reasoning using Machine Learning.

His research interests include Object Pose Estimation and Tracking, Semantic Scene Understanding, Anomaly Detection, Hybrid AI, Robotic Vision, and Augmented Reality. He has over 50 publications in leading Computer Vision and Augmented Reality conferences, a Best Paper Award from the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 2017, and five awards at the BOP Object Pose Estimation challenges 2022 and 2023 at ECCV and ICCV. He is a reviewer for several scientific journals and conferences (CVPR, T-PAMI, ECCV, ICCV, ICRA, IROS, WACV, BMVC). Since 2022, he has been the coordinator of the EU Horizon project HumanTech, which applies AI in the construction industry to Scan2BIM, Wearables, and Assistance Robots.

news

Mar 22, 2025 Organizing the SPADE Workshop at the Intelligent Vehicles Symposium (IV) 2025
Feb 26, 2025 Article on object pose estimation for symmetric objects published by IEEE Transactions on Image Processing (TIP) Journal
Oct 23, 2024 Invited Talk at the NEM Summit 2024 on Building Virtual Worlds with 3D Sensing and AI
Jun 18, 2024 3rd place at the Scan2BIM challenge of the CV4AEC Workshop at CVPR 2024
Apr 02, 2024 Organized the 2nd Workshop on AI and Robotics in Construction at the European Robotics Forum 2024
Mar 05, 2024 Paper on object pose estimation accepted at CVPR 2024 (code available)
Mar 04, 2024 Best Industrial Paper award at ICPRAM 2024
Jan 03, 2024 Paper on 3D semantic segmentation from single spherical images presented at WACV 2024
Nov 01, 2023 Three new research projects started: KIMBA, ReVise_UP, BERTHA
Oct 19, 2023 Invited Talk at IGIC 2023 on Meaningful AR/XR through AI Perception
Oct 03, 2023 Three awards at the Object Pose Estimation challenge (BOP 2023) at ICCV 2023
Sep 15, 2023 Paper on Scan2CAD with retrieval and deformation accepted at ICCV 2023
Sep 01, 2023 Paper on hybrid 3D face tracking accepted at ICIP 2023
Aug 29, 2023 Two papers on radar-camera fusion at GCPR 2023 and EUSIPCO 2023
Jun 20, 2023 3rd place at the Scan2BIM challenge of the CV4AEC Workshop at CVPR 2023
May 14, 2023 Hiring (Senior) Researchers for projects on automated waste sorting and recycling
May 14, 2023 Co-organizing the session on Human Factors in Construction Robotics at ARSO 2023
Mar 25, 2023 Organized a Workshop on AI and Robotics in Construction at the European Robotics Forum 2023
Jan 20, 2023 Paper on 3D Object detection accepted at the Robotics and Automation Letters (RA-L) journal
Jan 10, 2023 We released the “Rada” Radar Driver Activity Dataset
Oct 27, 2022 Two awards at the Object Pose Estimation challenge (BOP 2022) at ECCV 2022
Oct 01, 2022 Editing a special issue of Sensors on 3D Sensing, Semantic Reconstruction and Modelling
Jul 12, 2022 Two papers presented at CVPR 2022 (1 main track + 1 workshop paper)
Jul 05, 2022 Keynote on Computer Vision at the French-German Research and Innovation event
Jun 01, 2022 For the next 3 years I will be coordinating the EU Horizon Project HumanTech for AI in the construction industry
Dec 06, 2021 Two papers accepted at BMVC 2021
Oct 21, 2021 Paper on Radar-Camera fusion accepted at WACV 2022
Aug 31, 2021 We released two Time-of-Flight datasets (in-car, smart building) at https://vizta-tof.kl.dfki.de/
Jun 16, 2021 Paper on depth image vs. point cloud segmentation accepted at ICIP 2021
Mar 31, 2021 Paper on plane segmentation accepted at ICRA 2021
Dec 16, 2020 Paper on radar multipath detection presented at ICPR 2020
Jul 14, 2020 Successfully defended my PhD Thesis “Learning priors for augmented reality tracking and scene understanding”