Title: Optimization-Based Robot Programming with Application to Human-Robot Interaction
Other Titles: Optimalisatiegebaseerd robotprogrammeren met toepassing op mens-robot interactie
Authors: Decré, Wilm
Issue Date: 13-Jul-2011
Abstract: Existing commercial robot control systems are designed for classical large-scale, high-productivity industrial robot tasks, which are typically of low geometric complexity (e.g. point-to-point motions or tool trajectory tracking) and involve limited sensor data processing. In new robotic applications, more complex geometric motions and sensor-based control are becoming increasingly relevant, and existing approaches reach their limits.

As a first contribution, a new paradigm for representing and implementing sensor-based robot tasks is proposed, covering task modeling as well as robot control and the estimation of task uncertainties. The new method is denoted Task Specification using Constraints (TaSC). The key element in this paradigm is that robot motions are viewed as solutions of constrained optimization problems, with sets of constraints and one or more objective functions. The selection of these constraints and objective functions is made on a higher, discrete-decision level: the skill level. Three numerical problem formulations are proposed: a generic formulation and two specific cases. The generic formulation supports non-instantaneous constraints and objective functions (such as the time to execute a robot task), giving the robot programmer great freedom in specifying robot tasks. The two specific formulations, which are extended reformulations of existing approaches, restrict the possible constraints and objective functions to instantaneous functions and known geometric paths, respectively. In contrast to the generic case, however, these formulations result in convex optimization problems that can be solved efficiently online and with guaranteed global optimality.

Human-robot interaction is desired or even essential in many modern robotics applications, such as robot-assisted rehabilitation.
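The idea of computing a robot motion as the solution of a convex optimization problem with instantaneous constraints can be illustrated with a minimal velocity-level sketch. This is a generic example, not the thesis's actual formulation: a hypothetical 2-link planar arm, a desired end-effector velocity, and joint-velocity limits, solved as a bounded least-squares problem (a convex QP) at each control cycle.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Hypothetical 2-link planar arm (link lengths are illustrative).
def jacobian(q, l1=1.0, l2=0.8):
    """End-effector Jacobian of a planar 2R arm at joint angles q."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

q = np.array([0.3, 0.6])        # current joint angles [rad]
v_des = np.array([0.2, -0.1])   # desired end-effector velocity [m/s]
qdot_max = 0.5                  # joint-velocity limit [rad/s]

# One control cycle: find joint velocities that best realize v_des,
# subject to box constraints on joint velocity -- a convex QP that a
# solver handles efficiently online with guaranteed global optimality.
res = lsq_linear(jacobian(q), v_des, bounds=(-qdot_max, qdot_max))
qdot = res.x
print(qdot)  # joint velocities within the +/-0.5 rad/s limits
```

In the TaSC spirit, a higher-level skill layer would decide which constraints and objective terms enter this problem at each instant; here only a tracking objective and velocity bounds are shown.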
As a second contribution, the thesis presents an adaptive feedforward control approach for actively assisting humans in human-robot cooperation tasks. The control architecture can be fully integrated into the TaSC framework. To model motion trajectories, an invariant-description-based parametric modeling approach for six-degree-of-freedom motion trajectories is used. This approach facilitates building a library of motion models in a systematic way. A constrained-optimization-based parameter estimation technique is developed to estimate the model parameters, both in a batch and in a recursive scheme.

As a third contribution, the thesis discusses 'best practices' for the concrete implementation of the developed techniques and their integration in existing robot controllers. This contribution is exemplified using a state-of-the-art robot system: the KUKA light-weight robot.

Finally, as a fourth contribution, the thesis presents ample simulation as well as experimental results: time-optimal point-to-point motions with kinematic, operational-space and/or dynamic constraints; a laser tracing task using a redundant robot system; a bolt-nut assembly task using a humanoid robot; a box picking task involving a human and two 7-degree-of-freedom (DOF) robot arms; a teleoperation system based on two 7-DOF impedance-controlled robots; and, last but not least, a robot assistant for the experimental analysis of the functioning/kinesiology of the upper extremity.
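The recursive parameter-estimation scheme mentioned for the motion models can be illustrated with a generic recursive least-squares (RLS) update for a model that is linear in its parameters. This is a stand-in sketch, not the thesis's constrained-optimization estimator; the model and data below are purely illustrative.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least-squares step with forgetting factor lam."""
    phi_c = phi.reshape(-1, 1)
    e = y - float(phi @ theta)                            # prediction error
    K = (P @ phi_c) / (lam + float(phi_c.T @ P @ phi_c))  # gain vector
    theta = theta + K.ravel() * e                         # parameter update
    P = (P - K @ phi_c.T @ P) / lam                       # covariance update
    return theta, P

# Fit a toy trajectory model y = a + b*t from streaming samples
# (true parameters a = 1, b = 2; data generated without noise).
theta = np.zeros(2)
P = 1e6 * np.eye(2)                 # large initial covariance: weak prior
for t in range(10):
    phi = np.array([1.0, float(t)]) # regressor [1, t]
    theta, P = rls_update(theta, P, phi, 1.0 + 2.0 * t)
print(theta)  # converges close to [1. 2.]
```

A batch scheme would instead solve one least-squares problem over all samples at once; the recursive form trades that for a constant-cost update per new measurement, which suits online human-robot cooperation.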
ISBN: 978-94-6018-393-5
Publication status: published
KU Leuven publication type: TH
Appears in Collections:Production Engineering, Machine Design and Automation (PMA) Section

Files in This Item:
File | Status | Size | Format
PhD_Wilm_Decre_archive.pdf | Published | 4896Kb | Adobe PDF

These files are only available to some KU Leuven Association staff members.


All items in Lirias are protected by copyright, with all rights reserved.