A Carle Illinois College of Medicine student has developed a software co-pilot that offers a new level of autonomy to individuals with paralysis. Sangjoon Lee is part of a team of scientists leveraging artificial intelligence to improve the capabilities of existing wearable brain-computer interfaces (BCIs) – technology that harnesses signals from the user’s nervous system to trigger movement of an assistive device such as a robotic arm. The new system, under development at the University of California, Los Angeles, represents a paradigm shift: it incorporates data from the user’s surroundings to predict the intended movement.
“In essence, instead of relying solely on neural signals, our system blends inferring human intent with environmental context and decoded neural signals to complete the action with the human, known as shared autonomy,” Lee said.
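To make the idea of shared autonomy concrete, the sketch below shows one simple way such a blend could work, written in Python. The function name, the linear blending rule, and the weight alpha are illustrative assumptions made for this article, not details of the team’s published system.

import numpy as np

def shared_autonomy_step(decoded_action, copilot_action, alpha=0.5):
    """Blend the user's decoded command with the co-pilot's inferred action.

    decoded_action : movement command decoded from neural signals (e.g., EEG)
    copilot_action : action the co-pilot infers from environmental context
    alpha          : blending weight; 0 = pure user control, 1 = full co-pilot
    NOTE: this linear blend is a hypothetical illustration, not the
    published system's actual control law.
    """
    decoded_action = np.asarray(decoded_action, dtype=float)
    copilot_action = np.asarray(copilot_action, dtype=float)
    return (1.0 - alpha) * decoded_action + alpha * copilot_action

# Example: the user drifts right while the co-pilot infers an up-and-right target.
print(shared_autonomy_step([1.0, 0.0], [0.7, 0.7], alpha=0.4))  # -> [0.88 0.28]

In a blend like this, the co-pilot’s suggestion nudges the decoded command toward the inferred goal rather than overriding the user.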
Brain-computer interface systems use signals from the brain to enable people with severely limited movement to perform tasks such as moving a computer cursor or controlling a robotic arm. Surgically implanted systems that directly interpret brainwaves have been effective at capturing these signals, but they haven’t been widely adopted because of their cost and risks. Noninvasive wearable systems, by contrast, have been limited by a low signal-to-noise ratio: the desired signal is weak and hard to distinguish from background ‘noise.’ This makes motor control difficult for users, especially in tasks that require precision or are goal-directed.
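To see why a low signal-to-noise ratio matters, here is a tiny synthetic illustration in Python; the amplitudes are arbitrary values chosen for the example, not real EEG measurements.

import numpy as np

rng = np.random.default_rng(0)

# A weak oscillation standing in for the "intent" signal of interest,
# buried under much larger background activity (values are illustrative).
t = np.linspace(0.0, 1.0, 500)
signal = 0.5 * np.sin(2 * np.pi * 10 * t)
noise = 5.0 * rng.standard_normal(t.size)
recording = signal + noise

snr_db = 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))
print(f"signal-to-noise ratio: {snr_db:.1f} dB")  # far below 0 dB: noise dominates

A decoder working from a recording like this has very little reliable information per sample, which is why noninvasive systems have struggled with precise, goal-directed control.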
Working with a team of engineers in UCLA’s Neural Engineering and Computation Lab, led by Professor Jonathan Kao, Lee helped decode movement intentions from electroencephalography (EEG) tracings, which record the brain’s electrical activity. “The AI co-pilot uses environmental context — for example, video feed of the robotic arm — together with task dynamics, control history, and the decoder’s current output to infer intent,” Lee said.
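As one illustration of how those inputs could be combined, the hypothetical sketch below scores candidate targets (say, objects detected in the robotic arm’s camera feed) by how consistently the user’s recent decoded movements point toward them. The heuristic, the function name, and the inputs are assumptions made for this example, not the team’s published method.

import numpy as np

def infer_intended_target(cursor_pos, recent_velocities, candidate_targets):
    # Score each candidate target by the average cosine similarity between
    # the user's recent decoded velocities and the direction to that target.
    # This simple heuristic is illustrative only, not the published algorithm.
    cursor_pos = np.asarray(cursor_pos, dtype=float)
    scores = []
    for target in candidate_targets:
        direction = np.asarray(target, dtype=float) - cursor_pos
        direction /= np.linalg.norm(direction) + 1e-9
        sims = [
            float(np.dot(v / (np.linalg.norm(v) + 1e-9), direction))
            for v in np.asarray(recent_velocities, dtype=float)
        ]
        scores.append(float(np.mean(sims)))
    return candidate_targets[int(np.argmax(scores))], scores

# Example: two blocks are visible; recent motion points toward the first one.
target, scores = infer_intended_target(
    cursor_pos=[0.0, 0.0],
    recent_velocities=[[1.0, 0.1], [0.9, 0.0]],
    candidate_targets=[[5.0, 0.0], [0.0, 5.0]],
)
print(target, [round(s, 2) for s in scores])  # -> [5.0, 0.0] [1.0, 0.05]

A co-pilot built along these lines could then help steer the cursor or arm toward the highest-scoring target while the user remains in the loop.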
When the team integrated their algorithm with a wearable brain-computer interface, the results were remarkable. Participants were challenged to steer a computer cursor onto target points and to rearrange blocks using a robotic arm. The new co-pilot increased participants’ performance nearly fourfold.
“Our framework design has already demonstrated that an AI co-pilot can substantially enhance control performance for paralyzed individuals, even when paired with a relatively simple reinforcement learning model architecture,” Lee said. “The next step is to harness more advanced AI architectures. We anticipate that using models with greater generalization for the co-pilot could significantly expand the co-pilot’s ability to infer intent and adapt in real time,” he said.
With these new capabilities, Lee says the system could help people with movement limitations accomplish more complex motor tasks.
“A wearable system equipped with an AI co-pilot has the potential to restore a level of autonomy that has been out of reach for individuals who are paralyzed. Such a device would allow users to independently perform daily tasks – tasks that currently require constant assistance from caregivers,” Lee said.
The team’s findings were published recently in the journal Nature Machine Intelligence. The project was funded by the National Institutes of Health and the UCLA Science Hub for Humanity and Artificial Intelligence.