Exwayz is a French start-up based in Paris. At Exwayz we develop an SDK that accelerates the development of autonomous systems. We provide innovative real-time tools that allow these systems to locate themselves and understand their environment through 3D LiDAR data processing. Moreover, our product works with any 3D LiDAR sensor on the market.
3D LiDAR processing is a challenging task. These innovative sensors produce 3D point clouds, i.e. collections of (x, y, z) coordinates. Their inherent lack of structure makes it difficult to define the “neighborhood” of a point, unlike images, where the neighbors of a pixel are simply the adjacent pixels.
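To illustrate this difficulty, neighborhood queries on an unstructured cloud typically require a spatial index such as a k-d tree. A minimal sketch on a synthetic random cloud using NumPy and SciPy (the cloud size and query point are arbitrary, for illustration only):

```python
import numpy as np
from scipy.spatial import cKDTree

# A synthetic cloud of 100,000 unordered (x, y, z) points: no grid adjacency.
rng = np.random.default_rng(0)
cloud = rng.uniform(-10.0, 10.0, size=(100_000, 3))

# A k-d tree answers neighborhood queries in O(log n) instead of O(n).
tree = cKDTree(cloud)
dists, idx = tree.query(np.array([0.0, 0.0, 0.0]), k=5)  # 5 nearest neighbors

neighbors = cloud[idx]  # shape (5, 3), sorted by distance to the query point
```

For an image, the equivalent query is a constant-time array slice around a pixel, which is precisely the structure point clouds lack.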
At Exwayz, the data processing has two additional specificities:
●Point cloud anisotropy, due to the radial sampling of the data: the farther from the sensor, the sparser the points.
●Processing has to be done in real time: our algorithms must run at least at the sensor rate, i.e. less than 50 ms to process more than 100,000 points.
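To give a rough sense of this constraint (figures taken from the bullet above), the per-point time budget works out as follows:

```python
# Sensor-rate constraint: 50 ms to process more than 100,000 points.
budget_s = 50e-3
n_points = 100_000

per_point_ns = budget_s / n_points * 1e9  # ~500 ns per point
throughput = n_points / budget_s          # ~2 million points per second
print(f"{per_point_ns:.0f} ns/point, {throughput:.0f} points/s")
```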
PhD subject : Neural Radiance Fields for semantic 3D LiDAR data synthesis
The context of this PhD is 3D LiDAR data simulation. Simulation has attracted growing interest for the development of autonomous systems: it can produce extensive amounts of data at very low cost, and it can be used to generate edge-case scenarios. Today's simulators are more and more realistic, and recent developments have integrated LiDAR data into them, as depicted in the figure above.
Nonetheless, today's simulators need virtual 3D scenes as input, and building 3D scenes and scenarios is an expert task that takes time and resources. Our goal in this PhD is to address this problem by building the simulation from real-world captured data. The main industrial interest is to “automate” the creation of virtual scenarios representing the real world, so that mobile robots can be trained specifically for their future deployment environments, e.g. a warehouse or an industrial site.
Most simulators use a geometric approach based on ray tracing of the scene geometry; this is the case of the CARLA simulator [1]. Recent developments with conditional GANs [2,3] have improved the realism of synthetic RGB images produced by the graphics engine of a video game, but these methods still need 3D models as input, which is an expert task and takes time. Other work [4] focused on mixed-reality approaches, integrating synthetic 3D models into real-life scanned 3D scenes. This method is interesting; however, the path of the robot is fixed and cannot be simulated, so the number of possible scenarios is low.
In this PhD we focus on recent developments in Neural Radiance Fields [5]. Neural Radiance Fields have attracted growing interest over the past two years. As depicted in the figure above, these methods reconstruct the complex volumetric light response of an existing scene from a collection of RGB images. This function is highly complex because the light response of a given object depends on the position of the viewer. Recent developments have made NeRFs scalable [8], faster to train and render (e.g. NVlabs/instant-ngp), and usable for both 3D reconstruction [9,11] and path planning [10].
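At the core of these methods is a simple discrete quadrature of the volume rendering integral [5]. A minimal NumPy sketch, with hypothetical per-sample densities and colors along one ray; note that the same weights also yield an expected termination depth, which is what makes NeRFs a natural fit for range sensors:

```python
import numpy as np

def render_ray(sigmas, rgbs, deltas):
    """Discrete NeRF volume rendering along one ray [5]:
    alpha_i = 1 - exp(-sigma_i * delta_i),
    T_i = prod_{j<i} (1 - alpha_j),  weight_i = T_i * alpha_i."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas
    color = weights @ rgbs               # expected color along the ray
    depth = weights @ np.cumsum(deltas)  # expected termination distance
    return color, depth

# A nearly opaque red surface at the 6th of 10 samples (3.0 m along the ray):
sigmas = np.zeros(10); sigmas[5] = 50.0
rgbs = np.tile([1.0, 0.0, 0.0], (10, 1))
deltas = np.full(10, 0.5)
color, depth = render_ray(sigmas, rgbs, deltas)  # color ~ (1, 0, 0), depth ~ 3.0
```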
LiDAR is an active sensor technology that relies on time-of-flight measurement of laser pulses. As depicted in the figure above, a LiDAR measures both depth and intensity images. The goal of this PhD is to design a new framework integrating LiDAR into the recent work on Neural Radiance Fields (NeRF) [5]. It will allow us to simulate new LiDAR measurements in a previously scanned environment.
●Design a LiDAR-based NeRF framework.
●Simulate static scenes (intensity (+ambient) + depth).
●Include moving objects (deformable NeRFs [6,7]).
●Leverage semantic information in the simulation framework.
●Simulate dynamic scenarios for mobile robotics.
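As background for the depth channel mentioned above, the time-of-flight principle reduces to a one-line conversion. A minimal sketch (the 200 ns echo is an arbitrary example value):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_to_range(round_trip_s):
    """LiDAR range from a round-trip time of flight: d = c * t / 2."""
    return C * round_trip_s / 2.0

r = tof_to_range(200e-9)  # a 200 ns echo corresponds to ~29.98 m
```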
References
[1] Dosovitskiy, A., Ros, G., Codevilla, F., Lopez, A., & Koltun, V. (2017). CARLA: An open urban driving simulator. In Conference on Robot Learning (pp. 1-16). PMLR.
[2] Wang, T. C., Liu, M. Y., Zhu, J. Y., Tao, A., Kautz, J., & Catanzaro, B. (2018). High-resolution image synthesis and semantic manipulation with conditional GANs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 8798-8807).
[3] Yang, Z., Chai, Y., Anguelov, D., Zhou, Y., Sun, P., Erhan, D., ... & Kretzschmar, H. (2020). SurfelGAN: Synthesizing realistic sensor data for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 11118-11127).
[4] Fang, J., Zhou, D., Yan, F., Zhao, T., Zhang, F., Ma, Y., ... & Yang, R. (2020). Augmented LiDAR simulator for autonomous driving. IEEE Robotics and Automation Letters, 5(2), 1931-1938.
[5] Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., & Ng, R. (2020). NeRF: Representing scenes as neural radiance fields for view synthesis. In European Conference on Computer Vision (pp. 405-421). Springer, Cham.
[6] Park, K., Sinha, U., Barron, J. T., Bouaziz, S., Goldman, D. B., Seitz, S. M., & Martin-Brualla, R. (2021). Nerfies: Deformable neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 5865-5874).
[7] Pumarola, A., Corona, E., Pons-Moll, G., & Moreno-Noguer, F. (2021). D-NeRF: Neural radiance fields for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 10318-10327).
[8] Tancik, M., Casser, V., Yan, X., Pradhan, S., Mildenhall, B., Srinivasan, P. P., ... & Kretzschmar, H. (2022). Block-NeRF: Scalable large scene neural view synthesis. arXiv preprint arXiv:2202.05263.
[9] Rematas, K., Liu, A., Srinivasan, P. P., Barron, J. T., Tagliasacchi, A., Funkhouser, T., & Ferrari, V. (2021). Urban radiance fields. arXiv preprint arXiv:2111.14643.
[10] Adamkiewicz, M., Chen, T., Caccavale, A., Gardner, R., Culbertson, P., Bohg, J., & Schwager, M. (2022). Vision-only robot navigation in a neural radiance world. IEEE Robotics and Automation Letters, 7(2), 4606-4613.
[11] Xu, Q., Xu, Z., Philip, J., Bi, S., Shu, Z., Sunkavalli, K., & Neumann, U. (2022). Point-NeRF: Point-based neural radiance fields. arXiv preprint arXiv:2201.08845.
●Master's degree in vision/signal processing and/or machine learning.
●Rigor and autonomy.
●Writing and oral presentation skills.
●Fluent spoken and written English.
●Machine learning / deep learning.
●Good programming skills (C++, Python).
Duration: 3 years.
Starting date: October 2022
The Exwayz team (currently hosted at Station F, Paris 75013) and the CMM laboratory will both be involved in this PhD. The PhD candidate will spend part of his/her time in each structure, evolving in an academic environment while taking on exciting start-up challenges.
Institution: MINES ParisTech
Research Unit: Mathematics and Systems
Centre de Morphologie Mathématique (CMM) / Mines ParisTech
35 rue Saint-Honoré, 77305 Fontainebleau Cedex
Beatriz Marcotegui, Full Professor at CMM
Tel: 01.64.69.48.04
E-mail: email@example.com
Santiago Velasco, Researcher at CMM
Tel: 01.64.69.47.96
E-mail: firstname.lastname@example.org
Hassan Bouchiba, CEO at Exwayz, PhD in 3D vision
Tel: 06.78.51.75.25
E-mail: email@example.com
Mathias Corsia, CTO at Exwayz
Tel: 06.46.74.46.19
E-mail: firstname.lastname@example.org
Please send your application by e-mail to the four contacts mentioned above, with a detailed, up-to-date résumé, a motivation letter, recommendation letter(s), and academic transcripts.