Laboratory involved : Heudiasyc - Université de Technologie de Compiègne
Location: Compiègne, France (45 min by train from Paris)
Duration: 24 weeks
Keywords: Pole-like features, LiDAR, cameras, Machine learning, Deep learning, Semi-automatic data annotation
If you are interested, send CV and motivation letter to maxime.noizet[AT]hds.utc.fr and philippe.xu[AT]hds.utc.fr
Detection of road features such as traffic signs, traffic lights or road markings is an essential task for intelligent vehicles. Indeed, it contributes strongly to decision making for navigation, as well as to other tasks such as lane keeping based on road markings.
In addition, detecting various road features can greatly help localization, whether relative, as in lane keeping, or absolute, with a position estimated in a working frame. Being able to localize a vehicle accurately and with high confidence is essential for autonomous tasks. For this purpose, multi-sensor fusion is usually used.
Global Navigation Satellite Systems (GPS, Galileo, ...) and dead-reckoning sensors providing vehicle dynamics data are classically used to obtain a global positioning. Nevertheless, this is often insufficient on its own, and perception data can be used to complement it. In particular, detected road features can be associated with features contained in so-called vector maps. These maps can contain geometric and even semantic information on a large number of road features, including those mentioned above, as well as poles, street lamps, etc.
Various sensors can be used. LiDAR sensors scan the environment with lasers and retrieve 3D information along with return intensity. Thus, traffic signs, which are highly retroreflective, can be extracted simply based on their intensity. Detecting additional features such as poles can be very useful for localization, but requires more complex algorithms. Cameras, on the other hand, can also help detect road features, although without 3D geometric information. They are typically used for traffic sign classification or traffic light detection. They can also help by estimating the angle between the vehicle heading and the line of sight to a road feature.
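To illustrate the intensity-based extraction mentioned above, here is a minimal sketch (not code from the project): retroreflective surfaces such as sign plates return far more energy than asphalt or vegetation, so a simple threshold on normalized intensity already isolates good sign candidates. The point values and threshold are illustrative assumptions.

```python
import numpy as np

def extract_high_intensity_points(points, intensities, threshold=0.8):
    """Keep LiDAR returns whose normalized intensity exceeds a threshold.

    Retroreflective traffic signs stand out strongly against the
    low-intensity background (road surface, vegetation, facades).
    """
    mask = intensities >= threshold
    return points[mask]

# Toy cloud: 4 points (x, y, z) with normalized intensities.
points = np.array([[10.0, 2.0, 1.5],
                   [10.1, 2.1, 1.6],
                   [5.0, -3.0, 0.2],
                   [20.0, 0.0, 0.0]])
intensities = np.array([0.95, 0.90, 0.10, 0.30])

candidates = extract_high_intensity_points(points, intensities)
print(len(candidates))  # 2 candidate sign points
```

In practice the surviving points would then be clustered (e.g. by Euclidean distance) to form individual sign candidates; poles, by contrast, need geometric criteria rather than intensity alone.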
Associating these road features with a vector map makes it possible to correct the position and heading of the vehicle. Furthermore, estimating the position and semantics of road features can also help in general navigation, as well as in map correction.
Learning-based techniques require large amounts of annotated data. Depending on the road features studied, existing datasets may be limited, and it may be necessary to build our own. However, manually annotating data has a significantly high cost, so automatic annotation may be necessary.
For this project, the objective will be to exploit the Pandora Sensor Kit, composed of a LiDAR and 5 cameras, in order to build a LiDAR/camera detector able to extract various features and their semantics. For each detected feature, an indication of the detection mode will be provided: camera only, LiDAR only, or both combined. The work will include:
• Global improvements of the perception pipeline: pole-like feature detection, classification, ...
• Design of a road-feature extractor exploiting the full capabilities of the Pandora sensor kit (synchronized multi-camera and LiDAR).
• Study on automatic data annotation using localization ground truth, HD map and map-matching techniques.
• Implementation (ROS) and integration of a real-time module on a robotic vehicle.
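The automatic annotation task above can be sketched as follows: given a ground-truth camera pose and an HD map, 3D map features are projected into the image with a pinhole model, yielding pixel labels without manual annotation. This is a hypothetical sketch, not the project's pipeline; the intrinsics, pose, and pole position are made up for the example.

```python
import numpy as np

def project_map_features(features_world, T_world_cam, K):
    """Project 3D map features into an image using a known camera pose.

    features_world: (N, 3) points in the map frame.
    T_world_cam:    4x4 camera-to-world transform (ground truth).
    K:              3x3 pinhole intrinsics matrix.
    Returns pixel coordinates of the features visible in front of the camera.
    """
    T_cam_world = np.linalg.inv(T_world_cam)
    pts_h = np.hstack([features_world, np.ones((len(features_world), 1))])
    pts_cam = (T_cam_world @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0.1          # discard points behind the camera
    uv = (K @ pts_cam[in_front].T).T
    return uv[:, :2] / uv[:, 2:3]           # perspective division -> pixels

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)                               # camera at the map origin
poles = np.array([[0.0, 0.0, 10.0]])        # one pole 10 m along the optical axis
print(project_map_features(poles, T, K))    # lands at the principal point
```

In practice, map-matching would first refine the association between projected map features and raw detections, and the resulting labels could bootstrap the training of the learning-based detectors mentioned earlier.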
Technical environment: Intelligent vehicle platform, ROS, C++, Python