Researchers at the University of California, Berkeley have demonstrated the potential for robotic systems in medical applications to learn manipulation skills from video demonstrations by imitation. They developed an artificial intelligence (AI) algorithm that acquires motion-centric representations of surgical suturing skills from video demonstrations for imitation learning.

The Motion2Vec system watched, analyzed and labeled surgery videos and used that knowledge to teach a da Vinci surgical robot to apply sutures across incisions. Siamese neural networks were applied to compare and group similar images in a protocol designed to teach robots suturing tasks. These networks match video of what the robotic manipulator arms are doing against existing video of a human doctor making the same motions.

[Figure: Nearest neighbor imitation (right) for a suturing demonstration (left). Source: University of California, Berkeley]
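As a rough illustration of that idea (not the authors' Motion2Vec code), the sketch below pairs a small Siamese-style frame encoder with a contrastive loss and nearest-neighbor lookup in the learned embedding space. It assumes PyTorch; the network shape, loss margin and frame size are illustrative choices.

```python
# Minimal sketch of Siamese-embedding + nearest-neighbor imitation.
# All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameEncoder(nn.Module):
    """Maps a video frame to a low-dimensional, unit-norm embedding."""
    def __init__(self, embed_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool -> (B, 32, 1, 1)
        )
        self.fc = nn.Linear(32, embed_dim)

    def forward(self, x):
        z = self.conv(x).flatten(1)          # (B, 32)
        return F.normalize(self.fc(z), dim=1)

def siamese_contrastive_loss(z_a, z_b, same_label, margin=0.5):
    """Pull embeddings of same-labeled frame pairs together, push others apart."""
    d = (z_a - z_b).pow(2).sum(dim=1).sqrt()
    pos = same_label * d.pow(2)                      # similar pairs: shrink distance
    neg = (1 - same_label) * F.relu(margin - d).pow(2)  # dissimilar: enforce margin
    return (pos + neg).mean()

def nearest_neighbor_imitation(query_frames, demo_frames, encoder):
    """For each robot frame, return the index of the closest expert demo frame."""
    with torch.no_grad():
        zq = encoder(query_frames)        # (Nq, D)
        zd = encoder(demo_frames)         # (Nd, D)
        return torch.cdist(zq, zd).argmin(dim=1)

# Usage on random stand-in data (real inputs would be suturing video frames):
encoder = FrameEncoder()
robot = torch.randn(4, 3, 64, 64)
demo = torch.randn(10, 3, 64, 64)
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])  # 1 = same motion, 0 = different
print(siamese_contrastive_loss(encoder(robot), encoder(demo[:4]), labels).item())
print(nearest_neighbor_imitation(robot, demo, encoder))  # e.g. tensor([7, 2, 2, 9])
```

In this kind of setup, training the encoder on labeled pairs groups frames showing the same motion, after which matching a robot frame to a human demonstration reduces to a nearest-neighbor query.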

An 85.5% segmentation accuracy was documented in tests with a dual-arm robot, an improvement over several state-of-the-art baselines. The research, published on arXiv, also reports an average targeting error of 0.94 cm.
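For context, segmentation accuracy of this kind is typically a per-frame measure. The snippet below is a minimal sketch of how such a score can be computed, assuming each frame carries one ground-truth action-segment label; the label names and arrays are illustrative, not the paper's evaluation code.

```python
# Per-frame segmentation accuracy; labels are hypothetical examples.
import numpy as np

def segmentation_accuracy(predicted, ground_truth):
    """Fraction of frames whose predicted segment label matches the annotation."""
    predicted = np.asarray(predicted)
    ground_truth = np.asarray(ground_truth)
    return (predicted == ground_truth).mean()

gt   = ["insert", "insert", "pull", "pull", "pull", "tie"]
pred = ["insert", "pull",   "pull", "pull", "pull", "tie"]
print(f"{segmentation_accuracy(pred, gt):.1%}")  # 83.3%
```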

Ongoing research will next focus on learning closed-loop policies on the robot from the embedded video representations and on providing useful feedback for training and assistance in remote surgical procedures.

To contact the author of this article, email shimmelstein@globalspec.com