The Motivation

Existing solutions are too difficult to use. Over 3M people suffer from upper limb paralysis and need robotic assistive devices to perform activities of daily living. However, the NIH reports that 65% of this population does not use these devices. Clinical rehabilitation experts agree this is because existing solutions are simply too difficult to use.

Usable existing solutions have limited functionality. Current solutions force users to select a grasp before each movement, yet a hand requires at least 33 different grasps to perform daily activities. This creates a tradeoff between ease of use and hand functionality.


The Solution


Using vision, we can make prostheses and orthoses easier to use and more functional than ever before. Eye-tracking technology combined with a sensor-enriched artificial intelligence system makes the robotic device aware of the user's intended grasp. The user simply decides "when" to grasp.

The user's motion becomes faster, more accurate, and more fluid over time: using artificial intelligence, the system improves with each use. Furthermore, the user now has access to every possible type of grasp, much like a fully functional hand.


Bringing it to Life 

Prototype v0.1: Object-Specific Grasps

The first step was to build a proof of concept as quickly as possible. We 3D printed an Open Bionics prosthetic hand in flexible NinjaFlex material, powered by linear actuators in each finger and an Arduino Uno. Objects were presented to the laptop's built-in camera, a MATLAB script processed the image in real time, and the hand formed a predetermined, hardcoded grasp associated with the object presented. We used QR codes to identify objects and grasps before moving to more sophisticated computer vision methods. Here is a video from Sling Health's 2017 Demo Day.
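To give a sense of how simple that first loop was, here is a minimal Python sketch of the same idea (our original implementation was a MATLAB script; the grasp encodings, object labels, and serial port below are illustrative assumptions, not the original code):

```python
# qr_grasp_demo.py -- illustrative sketch of prototype v0.1's control loop.
# NOTE: the grasp codes, object labels, and serial port are hypothetical placeholders.
import cv2
import serial

# Map decoded QR labels to single-byte grasp commands for the Arduino (assumed encoding).
GRASP_FOR_LABEL = {
    "bottle": b"C",   # cylindrical grasp
    "key":    b"L",   # lateral/key pinch
    "ball":   b"S",   # spherical grasp
}

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # port name is an assumption
detector = cv2.QRCodeDetector()
camera = cv2.VideoCapture(0)  # laptop's built-in camera

while True:
    ok, frame = camera.read()
    if not ok:
        break
    label, _, _ = detector.detectAndDecode(frame)
    if label in GRASP_FOR_LABEL:
        arduino.write(GRASP_FOR_LABEL[label])  # Arduino drives the finger actuators
    cv2.imshow("v0.1 demo", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

camera.release()
cv2.destroyAllWindows()
```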

 

Prototype v0.2:  Object-Specific Grasps with Eye Tracking

We purchased an open-source eye-tracking headset from Pupil Labs and developed a real-time object detection plugin for it. The technical improvement here is twofold: we no longer need QR codes to classify objects, and we know which object the user is focused on (rather than classifying every object in the field of view with no way of telling which one matters). Below is a video of the plugin in action.

The real-time visualizations on the top-left screen of the video show the following:

  1. Bounding boxes & labels around recognized objects
  2. A gaze point (the red dot) which shows where the user is looking (projected onto the image frame)
  3. The object on which the user is focused (an 'X' is placed at the center of the object closest to the user's gaze point)

The plugin was developed using Python, OpenCV, and TensorFlow. It ran solely on the CPU, so the detection model (trained on the COCO dataset) had to be very lightweight. We achieved about 10-15 FPS, at the cost of apples occasionally being perceived as donuts.
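The heart of the plugin is deciding which detected object the user is actually looking at. Here is a simplified sketch of that matching step (the function name, data layout, and distance threshold are illustrative, not the plugin's exact code):

```python
import numpy as np

def object_in_focus(detections, gaze_xy, max_dist=100):
    """Pick the detected object whose bounding-box center is closest to the gaze point.

    detections: list of (label, (xmin, ymin, xmax, ymax)) in pixel coordinates
    gaze_xy:    (x, y) gaze point projected onto the camera frame
    max_dist:   ignore objects farther than this many pixels from the gaze point
    """
    gaze = np.asarray(gaze_xy, dtype=float)
    best_label, best_dist = None, max_dist
    for label, (xmin, ymin, xmax, ymax) in detections:
        center = np.array([(xmin + xmax) / 2.0, (ymin + ymax) / 2.0])
        dist = np.linalg.norm(center - gaze)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label  # None if the user isn't looking at any detected object

# Example: the gaze lands closest to the cup, so the cup is the object in focus.
detections = [("apple", (10, 10, 60, 60)), ("cup", (200, 120, 280, 220))]
print(object_in_focus(detections, gaze_xy=(230, 160)))  # -> "cup"
```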

The code, along with more technical details, is freely available here: https://github.com/jesseweisberg/pupil

Pupil Labs wrote a blog post about our plugin here!

Now, if we incorporate this technology with the prosthetic hand, the user can perform a grasp based on the object in focus. To do this, we set up a ROS interface that handles the information flow between the eye-tracking/object-detection program and the Arduino, which controls the prosthetic hand. When an object is detected and the user is fixated on it, the object's label is sent to the Arduino. On the Arduino, we hardcoded grasps for some of the object classes in our object detection model; when it receives a label with a hardcoded grasp, it performs the object-specific grasp. Here's a video showing it in action. (The video does not show the sEMG functionality, where the user must flex to actuate a grasp, but that has been developed.)
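Conceptually, the ROS glue layer boils down to publishing the fixated object's label on a topic that the Arduino (via rosserial) listens to, with the Arduino mapping each label to its hardcoded grasp. Here is a hedged rospy sketch of that publisher; the topic name, message type, and graspable classes are assumptions for illustration, not necessarily the repo's exact interface:

```python
#!/usr/bin/env python
# Illustrative-only publisher: send the label of the gaze-fixated object to the
# Arduino, which runs a hardcoded grasp for each label it recognizes.
# Topic name, message type, and the graspable classes are assumptions.
import rospy
from std_msgs.msg import String

GRASPABLE = {"bottle", "cup", "apple", "banana"}  # classes with hardcoded grasps (assumed subset)

def get_fixated_object_label():
    # Placeholder for the eye-tracking/object-detection plugin's output
    # (the detected object closest to the gaze point, as described above).
    return "bottle"

def main():
    rospy.init_node("grasp_commander")
    pub = rospy.Publisher("/fixated_object", String, queue_size=1)
    rate = rospy.Rate(10)  # roughly the detection frame rate
    while not rospy.is_shutdown():
        label = get_fixated_object_label()
        if label in GRASPABLE:
            pub.publish(String(data=label))  # Arduino maps this label to a grasp
        rate.sleep()

if __name__ == "__main__":
    main()
```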

Once again, the code, along with more technical details, is freely available here: https://github.com/jesseweisberg/pupil

 

Prototype v0.3: Incorporating AI and Beyond (in progress)

The next step is an adaptive control system design that gives the prosthetic hand the capability to learn from every grasp. We are currently exploring several options to accomplish this, such as incorporating force-sensing resistors at the fingertips and sensor fusion with sEMG input. We are also working on reinforcement learning algorithms that can learn how users interact with their surroundings and make the prosthetic experience more fluid.
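As a toy illustration of what "learning from every grasp" could mean, here is a deliberately tiny sketch of a per-object grip-force setpoint being nudged after each attempt (the object classes, force units, and update rule are made up for illustration; this is not the in-progress algorithm):

```python
# Illustrative-only sketch: adjust a per-object grip-force setpoint after each grasp
# based on simple feedback (e.g., from fingertip force sensing). All values are hypothetical.
grip_force = {"bottle": 2.0, "apple": 1.5}   # initial setpoints in newtons (assumed)
LEARNING_RATE = 0.1

def update_after_grasp(obj, slipped, crushed):
    """Raise the setpoint if the object slipped, lower it if it was crushed."""
    if slipped:
        grip_force[obj] += LEARNING_RATE
    elif crushed:
        grip_force[obj] -= LEARNING_RATE

update_after_grasp("bottle", slipped=True, crushed=False)
print(grip_force["bottle"])  # -> 2.1
```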

We created a platform to give us more precision and flexibility in our algorithm development. We built a 3D-printed robotic arm from an open-source design (BCN3D Moveo), created a model for simulation (ROS and RViz), and developed a real-time control interface connected to the simulation.

Code and more details on creating this platform: https://github.com/jesseweisberg/moveo_ros
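To give a rough idea of what that real-time control interface does, here is a minimal rospy sketch that relays the simulated arm's joint states to the physical arm's microcontroller over serial (the topic, serial protocol, port name, and steps-per-radian value are placeholder assumptions, not the repo's exact code):

```python
#!/usr/bin/env python
# Illustrative bridge: mirror the simulated arm's joint states onto the physical
# Moveo's stepper controller. Serial protocol, port, and scaling are placeholders.
import math
import rospy
import serial
from sensor_msgs.msg import JointState

STEPS_PER_RADIAN = 3200 / (2 * math.pi)  # assumed 3200 microsteps per revolution
port = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.1)  # port name is an assumption

def on_joint_state(msg):
    # Convert each simulated joint angle (radians) to a step target and send it.
    steps = [int(pos * STEPS_PER_RADIAN) for pos in msg.position]
    line = ",".join(str(s) for s in steps) + "\n"
    port.write(line.encode("ascii"))

def main():
    rospy.init_node("moveo_joint_bridge")
    rospy.Subscriber("/joint_states", JointState, on_joint_state, queue_size=1)
    rospy.spin()

if __name__ == "__main__":
    main()
```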

 

The Team


Pin-wei Chen

PhD Rehabilitation Science, Washington University in St. Louis

 


Jesse Weisberg

Masters of Engineering in Robotics, Washington University in St. Louis


SriHarsha Kondapalli

PhD Electrical & Systems Engineering, Washington University in St. Louis