IndEgo: A Dataset of Industrial Scenarios and Collaborative Work for Egocentric Assistants

Vivek Chavan1,2,*, Yasmina Imgrund2,†, Tung Dao2,†, Sanwantri Bai3,†, Bosong Wang4,†, Ze Lu5,†, Oliver Heimann1, Jörg Krüger1,2
1Fraunhofer IPK, Berlin     2Technical University of Berlin     3University of Tübingen
4RWTH Aachen University     5Leibniz University Hannover
*Project Lead. Correspondence: contact@vivekchavan.com
†Work done during student theses/projects at Fraunhofer IPK, Berlin.
Published at NeurIPS 2025

Welcome to IndEgo, an open-source dataset and framework for industrial egocentric vision, accepted to the NeurIPS 2025 Datasets & Benchmarks Track. It is designed to support training, real-time guidance, process improvement, and collaboration.


🎥 Industrial Scenarios

- Assembly/Disassembly
- Inspection/Repair
- Logistics
- Woodworking
- Miscellaneous

📘 About

IndEgo introduces a multimodal egocentric + exocentric video dataset capturing common industrial activities such as assembly/disassembly, inspection, repair, logistics, and woodworking.

It includes 3,460 egocentric videos (~197h) and 1,092 exocentric videos (~97h) with synchronised eye gaze, audio narration, hand pose, motion, and semi-dense point clouds.

IndEgo enables research on egocentric assistants for training, real-time guidance, process improvement, and collaboration in industrial settings.
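As a starting point, the sketch below shows one way to browse a recording by overlaying the eye-gaze signal on the egocentric video. The directory layout, file names, and CSV columns here are illustrative assumptions, not the dataset's documented structure; adapt them to the actual release.

```python
# Minimal sketch: overlay eye gaze on an egocentric clip.
# File names and CSV columns are hypothetical; adapt to the release layout.
from pathlib import Path

import cv2
import pandas as pd

recording = Path("IndEgo/assembly_disassembly/recording_001")  # hypothetical path

# Hypothetical gaze file: one row per sample, with a timestamp in seconds
# and normalised (x, y) gaze coordinates in the egocentric image frame.
gaze = pd.read_csv(recording / "eye_gaze.csv")

cap = cv2.VideoCapture(str(recording / "egocentric.mp4"))
fps = cap.get(cv2.CAP_PROP_FPS)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    t = frame_idx / fps
    # Take the gaze sample closest in time to the current frame.
    sample = gaze.iloc[(gaze["timestamp_s"] - t).abs().argmin()]
    h, w = frame.shape[:2]
    cv2.circle(frame, (int(sample["gaze_x"] * w), int(sample["gaze_y"] * h)),
               12, (0, 255, 0), 2)
    cv2.imshow("IndEgo egocentric view + gaze", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
    frame_idx += 1

cap.release()
cv2.destroyAllWindows()
```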

⚙️ Technology

IndEgo combines egocentric recordings with exocentric camera views, capturing synchronised eye gaze, audio narration, hand pose, motion data, and semi-dense point clouds alongside the video streams.

*(Technology concept diagram)*
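To give a feel for the non-video modalities, here is a small sketch that visualises a semi-dense point cloud. The file name and column layout are illustrative assumptions; consult the release documentation for the actual export format.

```python
# Minimal sketch: scatter-plot a semi-dense point cloud from one recording.
# The file name and column order are assumptions for illustration only.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical CSV with one 3D point per row: x, y, z in metres.
points = np.loadtxt("recording_001/semidense_points.csv", delimiter=",", skiprows=1)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
# Subsample for speed; semi-dense clouds can contain millions of points.
idx = np.random.choice(len(points), size=min(20000, len(points)), replace=False)
ax.scatter(points[idx, 0], points[idx, 1], points[idx, 2], s=0.5)
ax.set_xlabel("x [m]")
ax.set_ylabel("y [m]")
ax.set_zlabel("z [m]")
plt.show()
```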

🎬 IndEgo Dataset Multimodality

🚀 Try It: No Setup Required

Run IndEgo’s core logic directly in your browser with Google Colab — no installation needed.

Open In Colab

🧩 Citation

If you use IndEgo in your research, please cite our paper:

```bibtex
@inproceedings{Chavan2025IndEgo,
  author    = {Vivek Chavan and Yasmina Imgrund and Tung Dao and Sanwantri Bai and Bosong Wang and Ze Lu and Oliver Heimann and J{\"o}rg Kr{\"u}ger},
  title     = {IndEgo: A Dataset of Industrial Scenarios and Collaborative Work for Egocentric Assistants},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track},
  year      = {2025},
  url       = {https://neurips.cc/virtual/2025/poster/121501}
}
```

🏆 Acknowledgments & Funding

This work is funded by the German Federal Ministry of Research, Technology and Space (BMFTR) and the German Aerospace Center (DLR) under the KIKERP project (Grant No. 16IS23055C) in the KI4KMU program. We thank the Meta AI team and Reality Labs for the Project Aria initiative, including the research kit, the open-source tools and related services. The data collection for this study was carried out at the IWF research labs and the test field at TU Berlin. Lastly, we sincerely thank the student volunteers and workers who participated in the data collection process.