Haptics and Virtual Fixtures

The Problem

In teleoperation, operator feedback is often primarily visual. While visual display is sufficient for navigation tasks, it is well known that the human sense of touch is critical for more dexterous tasks. Haptic (force) feedback can also guide the operator more effectively through a telemanipulated task performed via robotic devices. Such feedback overlaid on the visual display is termed a “virtual fixture”. Haptic virtual fixtures generally implement one of two functional modes: forbidden-region fixtures or guidance fixtures. The former assist the robot by limiting its motion into restricted areas; for example, a forbidden-region fixture in a telerobotic surgery application could apply a resistive force when the user enters a dangerous region or orientation. Guidance fixtures instead steer the robot’s motion along a desired path. In all cases, the challenge lies in conveying the remote robot’s environment to the operator while simultaneously assisting the user with force feedback during the structured teleoperation task.
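To make the forbidden-region mode concrete, below is a minimal sketch of a planar forbidden-region fixture. The function name, plane geometry, and stiffness value are illustrative assumptions, not this project’s implementation:

```python
import numpy as np

def forbidden_region_force(tip_pos, plane_point, plane_normal, stiffness=500.0):
    """Resistive force for a planar forbidden-region fixture (illustrative).

    The half-space behind the plane (opposite the normal) is forbidden.
    When the tool tip penetrates it, a spring force proportional to the
    penetration depth pushes the tip back toward the boundary.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    penetration = np.dot(plane_point - tip_pos, n)  # > 0 once the tip is inside
    if penetration <= 0.0:
        return np.zeros(3)                  # allowed region: render no force
    return stiffness * penetration * n      # spring force along the boundary normal

# Example: forbid the region below z = 0 (normal points into the allowed space).
force = forbidden_region_force(np.array([0.10, 0.00, -0.002]),
                               plane_point=np.array([0.0, 0.0, 0.0]),
                               plane_normal=np.array([0.0, 0.0, 1.0]))
print(force)  # [0. 0. 1.] -> 1 N pushing the tool back out of the forbidden region
```

A guidance fixture can be sketched the same way, with the spring pulling the tip toward the nearest point on a desired path rather than away from a boundary.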

Our Approach

We provide noncontact, real-time methods for haptic rendering based on 3D depth information captured by commercially available RGB-D cameras. These methods deliver realistic force feedback from a remote location in real time, without displacing or contacting the environment with the slave device. Moreover, since the haptic rendering is done entirely in software, forbidden-region virtual fixtures can be built in real time around identifiable objects to, for example, protect delicate or sensitive regions from contact.
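For intuition, here is a simplified sketch of proxy-based force rendering over a point cloud, in the spirit of the proxy method cited under Publications. The function, constants, and flat test cloud are assumptions for illustration, not the project’s actual algorithm:

```python
import numpy as np

def proxy_update(hip, proxy, cloud, contact_radius=0.005, step=0.001, stiffness=300.0):
    """One servo-loop update of a simplified proxy method (illustrative).

    The proxy is a virtual point that chases the haptic interaction point
    (HIP) but refuses to move within `contact_radius` of any cloud point;
    the rendered force is a spring stretched between proxy and HIP.
    """
    direction = hip - proxy
    dist = np.linalg.norm(direction)
    if dist > 1e-9:
        candidate = proxy + min(step, dist) * direction / dist
        # Advance the proxy only if the new position stays clear of the cloud.
        if np.min(np.linalg.norm(cloud - candidate, axis=1)) > contact_radius:
            proxy = candidate
    return proxy, stiffness * (proxy - hip)  # spring pulls the HIP toward the proxy

# Example: a flat patch of depth-camera points at z = 0, with the HIP pushed below it.
cloud = np.array([[x, y, 0.0] for x in np.linspace(-0.05, 0.05, 20)
                              for y in np.linspace(-0.05, 0.05, 20)])
proxy = np.array([0.0, 0.0, 0.02])
hip = np.array([0.0, 0.0, -0.01])
for _ in range(50):                      # a few haptic servo iterations
    proxy, force = proxy_update(hip, proxy, cloud)
print(proxy, force)  # proxy halts near the surface; force pushes the HIP back up
```

In the full method, the cloud would be refreshed at the camera’s frame rate while the proxy updates at haptic rates, so time-varying scenes can be rendered without the slave device ever touching the environment.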

Impact

Virtual fixtures for teleoperation can provide automatic safety measures during stressful or delicate operations such as surgery. Safety is inherently increased by limiting the operator’s workspace to safe regions, and this form of shared autonomy can also reduce the physical and mental workload of the task.

Affiliated Students and Faculty: Fredrik Rydén, Kevin Huang, Howard Chizeck, Blake Hannaford

Related Media:

https://www.youtube.com/watch?v=yTOsKHu60FU

Publications:

F. Rydén, ‘Tech to the future: Making a “Kinection” with haptic interaction,’ IEEE Potentials, pp. 34-36, May 2012.

F. Rydén, H. J. Chizeck, S. Nia Kosari, H. King, B. Hannaford, ‘Using Kinect and a Haptic Interface for Implementation of Real-Time Virtual Fixtures,’ Proceedings of the Robotics: Science and Systems (RSS) Workshop on RGB-D: Advanced Reasoning with Depth Cameras, Los Angeles, June 2011.

F. Rydén, S. Nia Kosari, H. J. Chizeck, ‘Proxy Method for Fast Haptic Rendering from Time Varying Point Clouds,’ Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2011), San Francisco, September 2011.