Date of Award
1-1-2022
Degree Type
Thesis
Degree Name
Master of Science in Mechanical Engineering and Applied Mechanics
Department
Mechanical, Industrial and Systems Engineering
First Advisor
Musa Jouaneh
Abstract
Installation of mechanical fasteners is one of industry's most common tasks handled by robotic systems. Because of its high repeatability and low cycle times, it is a practical task to automate. Disassembly, re-manufacturing, and extraction of mechanical fasteners are, however, highly difficult to automate and are predominantly handled by human operators. This is largely due to the unpredictable and unknown condition of an end-of-life (EOL) product, where predefined fastener locations are not known with respect to the robot's coordinate system. This task can require any number of advanced techniques, several of which are explored in this work. Research was conducted to develop a system capable of extracting cross-recessed mechanical fasteners using advanced computer vision, by means of a Deep Convolutional Neural Network (DCNN) trained to detect cross-recessed screws. The system was deployed on a collaborative robotic platform with custom tooling and employs several other techniques, such as a digital twin to assist in the disassembly activity. The system uses state-based behavior and real-time control via digital simulation software and was extensively evaluated for detecting and extracting mechanical fasteners.
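The abstract does not describe how a detection is converted into robot coordinates; as a rough illustration of how a DCNN detection might be "globalized" into the robot's coordinate system, the sketch below back-projects a detected screw's pixel location and stereo depth reading into a 3D point and maps it into the robot base frame. The intrinsics, pixel location, and camera-to-base transform are made-up example values, not the calibration used in the thesis.

    import numpy as np

    def pixel_to_robot(u, v, depth_mm, fx, fy, cx, cy, T_base_cam):
        """Pinhole back-projection followed by a homogeneous transform."""
        # 3D point in the camera frame (mm) from the pinhole model
        x_cam = (u - cx) * depth_mm / fx
        y_cam = (v - cy) * depth_mm / fy
        p_cam = np.array([x_cam, y_cam, depth_mm, 1.0])
        # Map into the robot base frame with a 4x4 camera-to-base transform
        return (T_base_cam @ p_cam)[:3]

    if __name__ == "__main__":
        # Example values only: detection at pixel (700, 420), 400 mm depth reading
        T_base_cam = np.eye(4)
        T_base_cam[:3, 3] = [250.0, 0.0, 500.0]   # assumed camera pose on the robot, mm
        print(pixel_to_robot(700.0, 420.0, 400.0, 920.0, 920.0, 640.0, 360.0, T_base_cam))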
Several imaging and extraction strategies were developed throughout this work based on rigorous testing of a stereoscopic camera platform and characterization of the DCNN that was used. Two methods were implemented and tested, followed by the introduction of a third method that is discussed later in the work. Given the limitations of the DCNN detection range and the stereoscopic camera's minimum depth measurement range, the extraction strategy for these cross-recessed screws (CRS) was designed primarily to acquire the globalized coordinates of targets from between 300 and 520 mm. A single-shot, high-level imaging strategy, taken from above 350 mm to survey the workspace and identify the locations of all targets, was tested first. The open-loop extraction commands from this method typically yielded 44%-78% successful extractions when tested on a symmetric-stepped CRS testing artifact. A two-stage imaging strategy, which repositions the camera based on the initial high-level coordinates to globalize the targets for extraction, was then tested with the same artifact. The second imaging strategy typically yielded bulk extraction results of 78%-89% successful extractions, with 100% achieved as a second-pass yield.
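As a minimal sketch of the two-stage strategy described above, the following outline surveys the workspace once, refines each target's coordinates after repositioning the camera, and then attempts extraction. The survey, refine, and extract callables are hypothetical stand-ins for the thesis's camera, DCNN, and robot interfaces, not its actual API.

    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    @dataclass
    class Target:
        x: float  # robot-frame coordinates, mm
        y: float
        z: float

    def two_stage_extraction(
        survey: Callable[[], List[Target]],   # single high-level survey shot (above 350 mm)
        refine: Callable[[Target], Target],   # re-image after repositioning the camera over a target
        extract: Callable[[Target], bool],    # drive the extraction tool to the refined coordinates
    ) -> Tuple[int, int]:
        """Return (successful extractions, targets attempted)."""
        targets = survey()
        successes = 0
        for coarse in targets:
            refined = refine(coarse)   # second imaging pass near the target
            if extract(refined):
                successes += 1
        return successes, len(targets)

A second pass of the same loop over any targets that failed would correspond to the second-pass yield reported above.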
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Clark, Austin, "AUTOMATED EXTRACTION OF FASTENERS USING DIGITAL TWIN ROBOTICS SOFTWARE AND DCNN VISION SYSTEM" (2022). Open Access Master's Theses. Paper 2275.
https://digitalcommons.uri.edu/theses/2275