Mind Over Machine


When my daughter was in middle school, I sometimes found myself at talent shows featuring goofy skits. A favorite was called The Dressing Table. A girl sat with a makeshift dressing table in front of her, pretending to face a mirror. Seated Girl wore a very large sweatshirt, but her arms were not in the sleeves. Kneeling behind her, where the audience couldn’t see, was a friend with her head hidden inside the same sweatshirt and her arms thrust through those sleeves, making it look as though Seated Girl had very short arms. Seated Girl announced theatrically, “I think I’ll put on some lipstick!”

The hands of Kneeling Girl scrambled comically around on the table until they landed on a tube of bright red lipstick, which she then applied, very badly, somewhere in the vicinity of Seated Girl’s mouth, before Seated Girl announced, “Now I’ll do my hair!”

This skit came to mind while I was thinking about the subject of this article: the brain-computer interface, or BCI. Seated Girl was, in a sense, paralyzed. Tucked inside the giant sweatshirt, her hands were useless. Kneeling Girl’s job was to compensate, using her hands to do the task her friend was naming for the audience.

This is a little like how a BCI system is supposed to work, only without words. When the paralyzed person simply thinks of doing a task, that intention is decoded and translated to machine language. The translated thought is then sent to an actual machine, which carries the intention out seamlessly and instantly, as if the machine were an extension of the body.

But how does “thinking of doing a task” get decoded and translated? With great care, friends.

Decoding the Neurons

Scientists have known for almost a hundred years that our brains generate measurable electrical current. They’ve been working out the implications of that fact for the last half-century and probably will continue to do so for as long as our species survives. Each of our brains has roughly 86 billion individual neurons, the cells that produce that current. Every neuron lives in a particular place in the brain and has well-defined sets of connections to other neurons. Collections of individual neurons working in sync to generate tiny currents: that is thought.

Let me say that again. What we mean by the word “thought,” in physical terms, is electrical current flowing between nerve cells residing in the brain.

What we experience as intention — I’m going to put that apple on the cutting board — is, in the world of the body, just specifically located groups of neurons passing electrical current in predictable ways. It’s that grouping and those firing patterns that form a translatable code. With the right equipment, the code can be read, translated and delivered to a robot designed to make grabbing an apple and setting it on a cutting board a trivial task.
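
If you like to see ideas in code, here is a rough sketch, in Python, of what reading that code might look like at its very simplest. Every detail in it is invented for illustration — the channel count, the weights, the function names — and the decoders actually used in BCI labs are far more sophisticated, continuously recalibrated statistical models. The sketch only shows the core idea: a snapshot of firing rates across an array of electrodes can be mapped, by a learned set of weights, onto a cursor movement.

    import numpy as np

    # Purely illustrative: the channel count, weights and mapping are invented.
    # Imagine 100 electrodes reporting firing rates while the subject imagines
    # moving a cursor, and a weight matrix learned during a calibration session.
    n_electrodes = 100
    rng = np.random.default_rng(0)

    # Hypothetical calibration result: a linear map from 100 firing rates
    # to an (x, y) cursor velocity.
    decoder_weights = rng.normal(size=(2, n_electrodes)) * 0.01

    def decode_velocity(firing_rates):
        """Map firing rates (spikes per second, one per electrode) to a
        cursor velocity. A real system would filter, smooth and recalibrate."""
        return decoder_weights @ firing_rates

    # One simulated snapshot of activity across the array.
    rates = rng.poisson(lam=20, size=n_electrodes)
    vx, vy = decode_velocity(rates)
    print(f"move cursor by ({vx:.2f}, {vy:.2f})")

That, in miniature, is the translation step: patterns of electrical activity in, instructions for a machine out.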

Nicho Hatsopoulos, professor of organismal biology and anatomy at the University of Chicago, published some of the early BCI research, in which human subjects attempted to use their thoughts to move computer cursors. “We said, ‘Think about moving your hand to move a computer mouse, so as to move the cursor.’ That helped us build this decoding, so we could take the brain signals and have them move the cursor,” says Hatsopoulos. “But then what happened was, over time, we asked this one subject in particular what they were thinking about, and the subject said, ‘I’m not thinking about moving my hand anymore. I’m just thinking about moving the cursor. I don’t think about my hand anymore.’”

Anyone who has ever learned to touch-type or play the piano will relate to this kind of experience. At some point, you stop being careful and intentional about each key; it’s as if your brain has created a shortcut. Your fingers and the keyboard have become, in a sense, component parts of a single entity.

Nathan Copeland blew President Obama’s mind with his 2016 demo of brain-computer interface. “That is unbelievable,” Obama said. “Nathan is moving his hand with his brain.”

Controlling a cursor on a screen with sheer thought is pretty astonishing, but it’s limited in terms of how it can help people with mobility issues. Moving that cursor lets you open email. You can play video games. You can write novels. But you can’t put a ripe apple on a cutting board, pick up a knife, cut off a slice and put it in your mouth. For that sort of thing, you need a version of Kneeling Girl’s hands, which is where robots enter the picture.

The robot currently in use for this kind of learning is at Dr. Michael Boninger’s lab at the University of Pittsburgh Medical Center. Made by the German company KUKA, it’s known as the LBR iiwa: LBR for the German Leichtbauroboter, or lightweight robot, and iiwa for “intelligent industrial work assistant.” It’s a powerful machine. With the right programming, it could build a car all by itself.

One of the end goals of BCI research is to figure out how to make the robot feel like an extension of the body. Professor Hatsopoulos calls this process embodiment, describing how his tennis racket, over time, has become embodied as an extension of his right arm.

Embodiment of the KUKA will have to involve more than thought-driven fine motor control, though, complicated and astonishing as that achievement is. For the robot to feel like an extension of the subject’s body, it will also be necessary for the subject to have a feel for what the robot is touching. How heavy is it? Is the mass equally distributed, or is one end heavier than the other? How hard do you have to squeeze it to get hold of it, and how hard can you squeeze it before you break it? In a word, you need sensation.

Getting the Feel of It

Only a few people in the world are in a position to tell us about progress in that direction. One of them is Nathan Copeland, and with the help of the implants, he has fist-bumped President Obama, fed himself tacos and leveled up in video games like Final Fantasy.

Copeland has four implants in his brain: two in the region that directs motion in his right hand and two more in the region that registers sensation when something touches that hand — or would register sensation if he still had a working spinal cord. His was damaged at C5 in a car crash in 2004, and he hasn’t been able to feel or move his hands since. But he has the implants and — in the lab at least — the robot.

Each implant is a 4-by-4-millimeter array, less than half the size of the nail on your pinky finger. Each array has 100 microelectrodes, and each of those is 1.5 millimeters long and about as wide as a grain of sand. They’re tiny. The surgery to get them correctly placed on the surface of the brain is uncomplicated but intense: five or six hours of meticulous work, followed by a week or so of the usual post-operative discomfort. After that, you can’t feel them.

There are leads coming away from the arrays and through the skull to a pair of what the scientists call “pedestals” on top of Copeland’s head. The pedestals are where the cables connect, allowing the computers to translate thought into action and touch into a version of sensation.

The sensory implants don’t create an exact analog of natural touch, but they do allow him to recognize various kinds of contact with the KUKA’s hand. He can “feel” tapping, pressure and tingles. He’s spent many, many hours patiently helping researchers identify which particular neurons in his brain must be given the tiny burst of electricity that translates, for him, into those feelings. Just as thought is a pattern of electrical activity, so is sensation. The seamless, instantaneous integration of those two kinds of patterns is what will, eventually, make the robot feel genuinely embodied.
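
To picture the reverse direction, here is an equally rough sketch, again with every detail invented for illustration. The real mapping between touch on the robot hand and stimulation in Copeland’s sensory cortex is worked out, electrode by electrode, over those many hours in the lab; the sketch only shows the shape of the idea, a lookup from where the robot is touched to which electrodes receive a tiny, carefully limited pulse of current.

    # Purely illustrative: electrode numbers, amplitudes and labels are invented.
    # A hypothetical map from where the robot hand is touched to which
    # electrodes in the sensory array should receive a small stimulation pulse.
    TOUCH_TO_STIMULATION = {
        "index_fingertip": {"electrodes": [12, 13],  "amplitude_uA": 40},
        "thumb_pad":       {"electrodes": [55],      "amplitude_uA": 60},
        "palm":            {"electrodes": [2, 3, 7], "amplitude_uA": 30},
    }

    def stimulation_for_touch(location, pressure):
        """Scale the stimulation with how hard the robot hand is pressed.
        pressure is a 0.0-1.0 reading from a hypothetical force sensor."""
        plan = TOUCH_TO_STIMULATION[location]
        amplitude = plan["amplitude_uA"] * max(0.0, min(pressure, 1.0))
        return plan["electrodes"], amplitude

    electrodes, amp = stimulation_for_touch("index_fingertip", pressure=0.5)
    print(f"stimulate electrodes {electrodes} at {amp:.0f} microamps")

Pattern in, feeling out: the same trick as the motor side, run in reverse.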

In the meantime, Copeland would very much like not to be the only one. Dr. Boninger, in Pittsburgh, and Dr. Hatsopoulos, in Chicago, are looking to enroll four new subjects, two in each location. These people will have to be prepared for an extended time commitment: four hours per session, twice a week, for at least a year. They’ll need to live near one of those two labs, to have almost no function in at least one hand and, most of all, to bring a sense of adventure and the ability to commit for the long run.

One of the early subjects of the work with cursors called her implants Lewis and Clark, which captures the sort of approach volunteers need to bring. A sense of humor helps, too. Copeland jokes about controlling a sword to play Fruit Ninja in real life. “I was like, we can lab expense a sword, right? And then you can throw fruit and I’ll try and cut it or something,” he says.

You have to admire his style. When asked why he would sign up for something so invasive and in so early a stage of development, he says that because he can, he has to. That’s more than style; that’s character.

Resources
• KUKA, kuka.com/en-us/products/robotics-systems/industrial-robots/lbr-iiwa
• Copeland’s YouTube: youtube.com/playlist?list=PL-UehttXKgFOyFysBj6yMtmz8X0mwgLhR
• For more info on the clinical trials, or to sign up: scitrials.org/trial/NCT03811301


