Pervasive Artificial Intelligence Research (PAIR) Labs
On Self-Maneuvered Patrolling Robots with Artificial Intelligence and Multi-Sensory Data Fusion Technology
Principal Investigator: Professor Yu-Chee Tseng
Summary
In this project, we focus on developing AI and multi-sensory data fusion technology. The developed approaches improve machine (i.e., robot) perception in a smart environment and increase the intelligence of robots. The core research topic is how to use artificial intelligence to analyze a wide range of sensory data and extract high-level information that improves work efficiency and enables advanced human-robot services. The proposed system can identify people even without capturing biometric features.
Keywords
Image Recognition, Deep Learning, Indoor Localization, Inertial Sensor, Sensor Fusion, Robot, ROS
Innovations
- [Supervision capability] Fusion of surveillance camera (RGB) and location data. We design a pairing mechanism that couples human objects detected in video with their IDs, and the person identification (PID) results can be visualized on a screen. A prototype system has been developed, and extensive experiments have been conducted to analyze the correctness of the tagging results; a pairing sketch follows this list.
- [Third Eye] Fusion of depth camera (RGB-D) and inertial sensor data. We develop a person identification system that combines wearable devices with an RGB-D camera. The depth camera captures the skeleton data of the people in view, while an inertial sensor worn by each person captures that user's motions. The skeleton and inertial data are collected and transmitted to our fusion server, whose goal is to pair each inertial stream with a skeleton. Finally, we tag the IDs of the skeletons in the visualization result; a matching sketch also follows this list.
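The pairing mechanism for the supervision capability can be framed as an assignment problem: each visual track is matched to the beacon trajectory it follows most closely. Below is a minimal sketch of this idea, assuming both trajectory sources have already been time-aligned and projected into a common world frame; all function and variable names are illustrative, not the project's actual code.

```python
# Minimal sketch: pair camera tracks with beacon (location) tracks by
# minimizing mean trajectory distance. Names here are illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment

def pair_tracks(camera_tracks, beacon_tracks):
    """Pair visual tracks with person IDs via the Hungarian algorithm.

    camera_tracks: dict {track_id: (T, 2) array of world coordinates}
    beacon_tracks: dict {person_id: (T, 2) array of world coordinates}
    Both trajectories are assumed time-aligned to the same length T.
    """
    cam_ids = list(camera_tracks)
    bec_ids = list(beacon_tracks)
    cost = np.zeros((len(cam_ids), len(bec_ids)))
    for i, c in enumerate(cam_ids):
        for j, b in enumerate(bec_ids):
            # Mean point-wise distance between the two trajectories.
            cost[i, j] = np.linalg.norm(
                camera_tracks[c] - beacon_tracks[b], axis=1).mean()
    rows, cols = linear_sum_assignment(cost)  # Hungarian assignment
    return {cam_ids[i]: bec_ids[j] for i, j in zip(rows, cols)}
```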
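For the Third Eye system, one plausible matching cue is the agreement between the acceleration implied by a skeleton's joint motion and the acceleration measured by the worn inertial sensor. The sketch below is a simplification under the assumption that both streams share the same sampling rate and length; the names are ours, not the deployed system's.

```python
# Minimal sketch: pair RGB-D skeletons with worn IMUs by correlating
# acceleration magnitudes. Assumes time-aligned, equal-rate streams.
import numpy as np
from scipy.optimize import linear_sum_assignment

def accel_magnitude_from_skeleton(wrist_xyz, dt):
    """Estimate acceleration magnitude from (T, 3) wrist positions
    via a second-order finite difference; returns (T-2,) values."""
    acc = np.diff(wrist_xyz, n=2, axis=0) / dt**2
    return np.linalg.norm(acc, axis=1)

def pair_skeletons_with_imus(skeletons, imus, dt=1.0 / 30):
    """skeletons: {skeleton_id: (T, 3) wrist positions}
       imus:      {person_id: (T-2,) accel magnitude from the sensor}
    Pairs each skeleton with the IMU whose motion it best matches."""
    s_ids, i_ids = list(skeletons), list(imus)
    score = np.zeros((len(s_ids), len(i_ids)))
    for a, s in enumerate(s_ids):
        est = accel_magnitude_from_skeleton(skeletons[s], dt)
        for b, m in enumerate(i_ids):
            # Pearson correlation between the two acceleration series.
            score[a, b] = np.corrcoef(est, imus[m])[0, 1]
    rows, cols = linear_sum_assignment(-score)  # maximize correlation
    return {s_ids[a]: i_ids[b] for a, b in zip(rows, cols)}
```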
Benefits
- Supervision capability: By fusing location trajectories derived from video data and iBeacon data, we achieve PID at long camera-to-person distances.
- Third Eye: By fusing skeleton data from the RGB-D camera with motion data from wearable inertial measurement units (IMUs), we achieve short-distance, highly accurate PID.
- Other Ongoing Research
  - Building a robot platform that enables the robot to track a specific person and climb up and down stairs. Fig. 3 shows the structure of the tracking robot we are developing with Prof. Wayne Wang (NTNU); a minimal following-controller sketch appears after this list.
  - Demonstrating approaches for multi-sensory fusion in V2V (Vehicle-to-Vehicle) communications. Fig. 4 shows the V2V data fusion results.
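As an illustration of the person-tracking behavior, the following rospy node sketches a proportional controller that keeps the robot a fixed distance behind a tracked person. The '/target_person' topic, its PointStamped layout, and the gain values are assumptions made for this sketch; the actual platform's interfaces may differ.

```python
#!/usr/bin/env python
# Minimal rospy sketch of a person-following controller (illustrative).
# Assumption: a tracker publishes the person's position in the robot
# frame (x forward, y left) on '/target_person' as a PointStamped.
import math

import rospy
from geometry_msgs.msg import PointStamped, Twist

DESIRED_DIST = 1.0       # standoff distance from the person (m)
K_LIN, K_ANG = 0.5, 1.0  # proportional gains (assumed values)

def on_person(msg):
    cmd = Twist()
    # Drive forward/backward to hold the standoff distance.
    cmd.linear.x = K_LIN * (msg.point.x - DESIRED_DIST)
    # Rotate so the person stays centered in front of the robot.
    cmd.angular.z = K_ANG * math.atan2(msg.point.y, msg.point.x)
    pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('person_follower')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rospy.Subscriber('/target_person', PointStamped, on_person)
    rospy.spin()
```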