Robots are ideal surrogates for performing tasks that are dull, dirty, and dangerous. To fully achieve this ideal, a robotic teammate should be able to autonomously perform human-level tasks in unstructured environments where we do not want humans to go. In this paper, we take a step toward realizing that vision by integrating state-of-the-art advancements in intelligence, perception, and manipulation on the RoMan (Robotic Manipulation) platform. RoMan comprises two 7-degree-of-freedom (DoF) limbs connected to a 1-DoF torso and mounted on a tracked base. Multiple lidars are used for navigation, and a stereo depth camera provides point clouds for grasping. Each limb has a 6-DoF force-torque sensor at the wrist, with a dexterous 3-finger gripper on one limb and a stronger 4-finger claw-like hand on the other. Tasks begin with an operator specifying a mission type, a desired final destination for the robot, and a general region where the robot should look for grasps. All other portions of the task are completed autonomously. This includes navigation, object identification and pose estimation (if the object is known) via deep learning or perception through search, fine maneuvering, grasp planning via a grasp library, arm motion planning, and manipulation planning (e.g., dragging if the object is deemed too heavy to freely lift). Finally, we present initial test results on two notional tasks: clearing a road of debris, such as a heavy tree or a pile of unknown light debris, and opening a hinged container to retrieve a bag inside it.
This paper presents results from an experiment performed at the Combat Capabilities Development Command, Army Research Laboratory, Autonomous Systems Division (ASD) on the precision of a 7-degree-of-freedom robotic manipulator used on the RoMan robotic platform. We quantified the imprecision in the arm end-effector final position after arm movements over distances ranging from 362 mm to 1300 mm. In theory, for open-loop grasping, one should be able to compute the final X-Y-Z position of the gripper using forward kinematics. In practice, uncertainty in the arm calibration induces uncertainty in the forward kinematics, so it is desirable to measure this imprecision after different arm calibrations. Forty-one runs were performed under different calibration regimes. Ground truth was provided by measuring arm motions with a Vicon motion capture system while the chassis of the platform remained stationary throughout the experiment. Using a digital protractor to align the arm joints to the ground plane for a "Level" type calibration, the average total offset of the gripper in 3D space was 19.6 mm with a maximum of about 30 mm. After a "Field" (i.e., Hand-Eye) calibration, which aligned fiducials on the joints, the average total offset was 37.8 mm with a maximum of about 80 mm. Distance travelled by the arm was found to be uncorrelated with total offset. The experiment demonstrated that the total (X, Y, Z) offset in the gripper final position is reduced significantly if the robot arm is first calibrated using a standard "Level" calibration. The "Field" calibration method results in a significant increase in offset variation.
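The per-run metric described above is simply the Euclidean distance between the forward-kinematics-predicted gripper position and the Vicon-measured ground truth, with the mean and maximum taken over all runs. A minimal sketch in Python; the run data below are hypothetical placeholders, not values from the experiment:

```python
import math

def total_offset(predicted, measured):
    """Euclidean distance (mm) between the forward-kinematics-predicted
    gripper position and the motion-capture ground-truth position."""
    return math.dist(predicted, measured)

# Hypothetical (predicted XYZ, measured XYZ) pairs in mm, one per run.
runs = [
    ((500.0, 200.0, 300.0), (512.0, 205.0, 310.0)),
    ((900.0, -100.0, 450.0), (915.0, -108.0, 440.0)),
]

offsets = [total_offset(p, m) for p, m in runs]
mean_offset = sum(offsets) / len(offsets)   # reported per calibration regime
max_offset = max(offsets)                   # reported per calibration regime
```

Correlating `offsets` against the commanded travel distance of each run (e.g., with a Pearson coefficient) is how one would check the paper's finding that offset is uncorrelated with distance travelled.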