Toronto Metropolitan University

The Augmentation of Urban Search and Rescue Dogs With Sensing, Control, and Actuation--Extending the Metaphor, "Dog as Robot"

Thesis posted on 2022-11-03, 16:56, authored by Jimmy Quang Minh Ngoc Tran
When disaster strikes in urban areas, the devastating results are collapsed structures that may contain voids, and people trapped within. To a large extent, the speed with which these victims can be found and extricated determines their likelihood of survival. Specially trained and equipped emergency first responders are tasked with trying to save their lives by locating and extricating trapped victims from these dangerous environments. Telepresence systems can help first responders search for casualties from a safe location. Most automated search systems intended for use in urban disasters come in the form of remotely operated robots. This work takes a different approach to telepresence and robotics: it extends previous work that exploits the intelligence and characteristics of trained search dogs, combined with compatible technology, as components in new kinds of telepresence systems for urban search and rescue (USAR) operations.

The Canine Remote Deployment System (CRDS) is a tool that emergency responders can use to deliver critical supplies to trapped victims in rubble using dogs. The first contribution of this work is the development of a bark detection system for automatically triggering the deployment of packages near trapped victims from the CRDS, guaranteeing accurate package deployment even when remote communication with the dog is impossible.

A well-known ground robot problem is the difficulty of designing a mobility mechanism that can traverse rubble. Another contribution of this thesis is the Canine Assisted Robot Deployment (CARD) framework and the design of a robot capable of being carried by a search dog. This extends the responder's telepresence in rescue operations by bringing robots much deeper into the disaster site than current methods allow.

Visual odometry is used for location tracking in GPS-denied environments and can be applied in rescue operations. This research explores the limitations of RGB-D cameras for visual odometry in this application. An algorithm called the pseudo-Random Interest Points Extractor was developed to track images over visually feature-sparse areas, with the potential use of visually reconstructing canine search paths to victims. This work concentrates on visual odometry computed from data collected by a search dog-mounted RGB-D camera. Model stabilization is difficult due to the dog's constant and unpredictable movements, as the data contains many motion-blurred images. An algorithm called the Intelligent Frame Selector is shown to improve visual odometry for systems carried by search dogs by intelligently filtering the data and selecting only usable frames. The algorithm can be applied beneficially to any general visual odometry pipeline, as the technique reduces cumulative error by using less data.
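The abstract does not describe how the Intelligent Frame Selector decides which frames are usable. The sketch below is a minimal illustration of the general idea of pre-filtering motion-blurred frames before a visual odometry pipeline, assuming variance of the Laplacian as a sharpness proxy; the function names and threshold are hypothetical, not taken from the thesis.

```python
# Minimal sketch of blur-based frame selection for a visual odometry pipeline.
# Assumption: sharpness is scored with the variance of the Laplacian, a common
# proxy for motion blur; the thesis may use a different criterion.
import cv2


def is_usable(frame_bgr, sharpness_threshold=100.0):
    """Return True if the frame appears sharp enough to feed to visual odometry."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness >= sharpness_threshold


def select_frames(frames, sharpness_threshold=100.0):
    """Filter a sequence of camera frames, keeping only the usable ones."""
    return [f for f in frames if is_usable(f, sharpness_threshold)]
```

In a dog-mounted setup, a filter like this would run ahead of feature extraction and pose estimation, so that blurred frames captured during rapid head motion never enter the odometry chain.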

Language

eng

Degree

  • Doctor of Philosophy

Program

  • Computer Science

Granting Institution

Ryerson University

LAC Thesis Type

  • Dissertation

Year

2019
