Neurosoft’s Dr Nicolas Vachicouras on the evolution of neurological rehabilitation

Combining recorded brain data with electrical muscle stimulation could open new doorways for neurorehabilitation. By Ross Law.


Traditional methods of neurological rehabilitation for those who have suffered a stroke or spinal cord injury and lost movement in their arms and hands include exercise programmes and activities intended to improve movement. However, this process can take months or even years, depending on the severity of the injury, and requires frequent hospital visits from patients. 

What if there were a way to evolve the standard processes for neurological rehabilitation by uniting sophisticated technologies into a single system, enabling patients living with neurological conditions to regain movement in real time, or to independently chart their course of recovery over the long term?  

The Synapsuit Project, spearheaded by the Wyss Center, has set out to achieve this aim by uniting brain interfacing technology with AI and a muscle stimulation exosuit to capture and translate brain signals for movement intentionality. 

The exosuit, which is designed to be worn for multiple years, uses captured brain data for muscle stimulation, effectively serving as a bridge between the brain and the body. 

Neurosoft Bioelectronics, which specialises in soft bioelectronic devices used for brain interfacing, spun off five years ago from the Swiss Federal Institute of Technology (EPFL) in Lausanne, Switzerland. 

The company recently partnered with the Wyss Center for its Synapsuit Project alongside partners including the Korea Electronics Technology Institute (KETI) and the Swiss Innovation Agency. 

According to GlobalData’s medical device pipeline database, 407 neurological diagnostic and monitoring devices are in various stages of development globally. GlobalData estimates that digital health technologies in neurology will continue to grow, and forecasts that collaborations and investment will continue to drive digital trends. 

Medical Device Network sat down with Dr Nicolas Vachicouras, Neurosoft’s CEO and co-founder, to understand more about the soft electrodes the company has been developing and how its device is involved in the Wyss Center’s Synapsuit Project. 

Ross Law: What does Neurosoft do? 

Nicolas Vachicouras: We develop neuro-interfaces, devices that are in contact with the brain or spinal cord, to record activity from the brain in a similar manner to how an electrocardiogram (ECG) records signals in the heart. Our devices can also electrically stimulate the brain in a similar way to how a pacemaker stimulates the heart.

Ross Law: How has Neurosoft aimed to approach brain interfacing differently?

Nicolas Vachicouras: Many of the brain interfacing devices available on the market today are made of quite large, stiff platinum discs, used for example to stimulate the spinal cord to address chronic pain, or to detect epilepsy. 

The lab I came from realised these interfaces were not ideal for interfacing with the brain, because the brain and spinal cord are very soft and prone to a lot of movement. The spinal cord case is obvious – there is a lot of movement, especially at the level of the neck – but even the brain is constantly moving, slowly. Bringing a very rigid device into this soft, moving environment risks damaging the brain, and can result in a foreign body reaction, in which the body attacks the device and can eventually cause it to stop working. 

These factors motivated the development of what we call soft electrodes, devices which are not only flexible but also stretchable. You can think of our device as a membrane with sensors, which come at the surface of brain tissue and can record or stimulate the brain. 

The elastic properties of our device allow for safer brain interfacing and better contact, with less risk of damaging blood vessels or the brain itself. Our soft electrodes can also be placed in the brain’s valley-like sulci; some of the regions that encode specific hand movements, and even movements of other parts of the body, lie within these folds. 

In the context of the Synapsuit Project, the idea is to leverage our technology to provide high-resolution recordings from the human brain, and this is where other modules, contributed by different project partners, come in. 

Ross Law: Can you explain how the different components of the Synapsuit unify with one another?

Nicolas Vachicouras: You start by having a patient who cannot move their arms or hands, and typically this is due to a stroke or spinal cord injury. Their brain still functions. They are still able to think, but the information does not travel past the neck and so nothing happens. 

The Synapsuit starts with sensors on our electrodes picking up an intention. If a patient is thinking of moving their arm, for example, even if they are not able to, you still have the activity of the brain that corresponds to ‘I want to move my left arm’. 

This information is then transmitted to a head-mounted device developed by the Wyss Center called the Active Brain Implant Live Information Transfer sYstem (ABILITY), which connects to our electrodes to process brain data and transmit it digitally outside of the body. This is the second block. 

Another block, also developed by the Wyss Center, is artificial intelligence (AI). Every single sensor on our electrode device provides brain data, which the AI then translates into actual functions. 

The information interpreted by the AI is then sent to the exoskeleton, which is developed by KETI. The goal here is to use functional electrical stimulation – a non-invasive technique which delivers stimulation through the skin – to activate the right muscles based on the information from the brain. This piece effectively serves as a bridge between the brain and the arms. 

Ross Law: Can you tell us about the clinical trials you took part in last August? 

Nicolas Vachicouras: It is important to note that the project is at its beginning, so the building blocks are currently being developed in parallel, and the plan is to merge them eventually. Last year’s clinical study, which took place in Houston, Texas, involved just our device by itself, and here we showed that it was able to safely interface with the human brain and record high-quality brain data. In the test, our device was used to record the brain activity of patients with epilepsy. It was not clinically relevant to the Synapsuit Project as such; it was more about showing we could record quality data from the human brain.

Ross Law: What are the aims of your next clinical trial?

Nicolas Vachicouras: Our goal in the upcoming weeks is to start recording brain data from patients who are moving their arms. To do this, we are starting with healthy patients, in the sense that they have a connection between their brain and their body. We’re going to put a glove on their hand. The glove, which contains a strain sensor, is going to detect all the movement. We will ask participants to move their hands and elbows for around ten minutes, with different movements, and then we will record the activity from the brain. We will know exactly what is supposed to be happening in real life based on the signals, and then we will see how well the AI can decode this information. 
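The supervised setup Vachicouras describes – glove-derived movement labels paired with simultaneously recorded brain activity, used to train and evaluate a decoder – can be sketched in miniature. The code below is purely illustrative and not the Synapsuit Project's actual pipeline: it simulates multichannel "brain" features whose means depend on the movement class, fits a simple least-squares linear decoder, and measures how well the decoded movements match the glove labels. All names (`movements`, channel counts, noise levels) are invented for the example.

```python
# Illustrative sketch only: the Synapsuit decoding pipeline is not public.
# We simulate brain-signal features whose statistics depend on the movement
# being performed, then train a linear decoder against glove-derived labels.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_trials = 32, 300
movements = ["rest", "hand_open", "elbow_flex"]  # hypothetical classes

# Each movement class shifts the mean activity of the recording channels.
class_means = rng.normal(0.0, 1.0, size=(len(movements), n_channels))
labels = rng.integers(0, len(movements), size=n_trials)   # glove ground truth
X = class_means[labels] + rng.normal(0.0, 0.5, size=(n_trials, n_channels))

# One-vs-all least-squares linear decoder: learn channel weights per class.
Y = np.eye(len(movements))[labels]             # one-hot movement targets
W, *_ = np.linalg.lstsq(X, Y, rcond=None)      # (n_channels, n_classes)
pred = np.argmax(X @ W, axis=1)                # decoded movement per trial
accuracy = float((pred == labels).mean())
print(f"decoding accuracy against glove labels: {accuracy:.2f}")
```

Real brain-computer interface decoders work on filtered, windowed neural features and are validated on held-out data, but the core idea is the same: the glove supplies the ground truth that tells the AI what each pattern of brain activity was supposed to mean.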

The other technical challenge lies in how well we can connect our device to Wyss Center’s ABILITY system from a mechanical and electrical standpoint in a way that is reliable long term; because the idea is that you implant this device not just for a few months, but for 20 years. 

In turn, KETI are currently showing that they can stimulate different muscles to activate the likes of the right finger, or the left hand, in a completely independent way. Eventually, we’ll combine both technologies and control their wearable parts with the inputs gathered from our device in the brain. 

Ross Law: What are your expectations following this next trial?

Nicolas Vachicouras: In around two years, we want to have shown at least one prototype of the full system together. After that, we can hopefully launch bigger studies, on larger populations of patients, with the full system.

