Posted on 2021-05-22, 14:17. Authored by Adrian Bulzacki.
This work describes the implementation of various human-robot interaction systems in a functioning mobile robot. The project integrates tracking of human faces and objects, face recognition, gesture recognition, body tracking, stereo vision, speech synthesis, and voice recognition. Most of these systems were custom designed for this project, and their design is explained in detail throughout this report. A unique vector-based approach is used for gesture recognition. The focus is not on the mechanics and electronics of the human-robot interaction system, but rather on the robot's information processing. Combining many information processing systems allows robots to interact with human users more naturally and provides a natural conduit for future cooperative human-robot efforts. This project lays the groundwork for a large collaborative effort aimed at creating what may become one of the most advanced human-interactive robots in the world.
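The report details the vector-based gesture recognition approach in later chapters; the abstract does not specify it. As a purely illustrative, hypothetical sketch (not the author's method), one common way to realize a vector-based gesture classifier is to flatten tracked joint positions into a normalized feature vector and compare it against stored gesture templates by cosine similarity. The function names and template data below are invented for illustration.

```python
import numpy as np

def pose_to_vector(joints):
    """Flatten a list of (x, y, z) joint positions into one feature vector,
    normalized so the representation is invariant to overall scale."""
    v = np.asarray(joints, dtype=float).ravel()
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def classify_gesture(sample, templates):
    """Return the label of the stored template whose vector is most similar
    to the observed sample, using cosine similarity as the score."""
    sample_vec = pose_to_vector(sample)
    best_label, best_score = None, -np.inf
    for label, template in templates.items():
        score = float(np.dot(sample_vec, pose_to_vector(template)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical usage: two stored gesture templates and one observed pose.
templates = {
    "wave":  [(0.0, 1.0, 0.2), (0.1, 1.4, 0.2), (0.3, 1.7, 0.1)],
    "point": [(0.0, 1.0, 0.2), (0.4, 1.1, 0.5), (0.8, 1.1, 0.9)],
}
observed = [(0.0, 1.0, 0.2), (0.12, 1.38, 0.22), (0.28, 1.72, 0.08)]
print(classify_gesture(observed, templates))  # expected: "wave"
```

A nearest-template scheme like this is only one possible instantiation; the actual system described in the report may use different features, normalization, or classifiers.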