AI-based multi-sensor fusion for solving challenging driving scenarios
The complexity of the real world poses many challenges, such as varying weather and lighting conditions, moving or rare objects, geographical differences, and various unlikely events that are underrepresented or even missing in current datasets.
We aim to tackle some of these challenges with AI-based multi-sensor fusion. LIDARs provide accurate depth measurements but offer low resolution and no color or texture information, while cameras provide superior resolution and color sensing but no depth measurements. We build a LIDAR-camera low-level fusion system that applies deep learning to automotive-grade solid-state LIDARs and cameras, combining and learning from the two complementary information modalities: a sparse point cloud and a camera image. The resulting real-time low-level multi-sensor fusion enables superior world modeling around the car.
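The abstract does not describe the fusion architecture itself, but a common first step in low-level LIDAR-camera fusion is to project the sparse point cloud into the camera's image plane so that both modalities are spatially aligned. The sketch below illustrates this idea with NumPy; the function names, the pinhole-camera assumption, and the simple channel-stacking fusion are illustrative assumptions, not Continental's actual method.

```python
import numpy as np

def lidar_to_sparse_depth(points, K, image_shape):
    """Project LIDAR points (N, 3), given in the camera frame, onto the
    image plane to build a sparse depth map aligned with the RGB image.
    Points behind the camera or outside the image bounds are dropped.
    (Illustrative sketch: assumes a pinhole camera with intrinsics K.)"""
    h, w = image_shape
    # Keep only points in front of the camera (positive depth z).
    pts = points[points[:, 2] > 0]
    # Pinhole projection: (u*z, v*z, z) = K @ (x, y, z)
    uvw = pts @ K.T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.zeros((h, w), dtype=np.float32)  # 0 means "no measurement"
    depth[v[inside], u[inside]] = pts[inside, 2]
    return depth

def fuse_inputs(rgb, depth):
    """Naive low-level fusion: stack the sparse depth map as a fourth
    channel, so a network sees RGB + depth per pixel."""
    return np.concatenate([rgb, depth[..., None]], axis=-1)
```

In a real system the depth channel is extremely sparse, so the network (or a preprocessing step such as depth completion) has to cope with the many zero-valued pixels; this is one of the aspects that makes learning from the two modalities jointly non-trivial.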
Deep Learning for Point Cloud Team Lead, Continental
Robert is the Product Owner of the Low Level Fusion team, working on research and development of machine-learning-based multi-sensor fusion technologies. The AI-based products enabled by the team bring cutting-edge solutions to safety-critical autonomous driving applications.
Previously, Robert worked at the intersection of quantitative finance and machine learning, helping New York- and London-based clients on site take advantage of machine learning and analytics and prepare for changes in global markets.
Robert has a background in Computer Engineering, Machine Learning, and Business Intelligence from BME and AIT.