Greg had been living in a nursing facility since his mid-30s. An attack six years earlier had left him barely conscious and unable to speak or eat, and two years of therapy brought little improvement. He might have spent the rest of his life silent and cut off from the outside world. Instead, at age 38, Greg enrolled in a clinical trial and received a brain implant.
Surgeons inserted electrodes on either side of his thalamus, the brain’s primary relay station.
People in the minimally conscious state have intact brain circuits, but those circuits are underactivated, says Joseph Fins, MD, head of the Division of Medical Ethics at Weill Cornell Medicine in New York City. Delivering electrical impulses to the affected areas can reactivate those circuits and restore lost or impaired function.
Fins, who co-authored a study about Greg’s surgery in Nature, compares these implants to pacemakers for the brain.
Every 30 days for six months, the researchers switched Greg’s device on and off to see how the electrical stimulation, or the lack of it, affected his abilities. What they observed was remarkable.
“With the deep brain stimulator, he was able to recite the first 16 words of the Pledge of Allegiance, speak in six- or seven-word sentences, describe his love for his mother, shop at Old Navy, and state a preference for the style of clothing his mother was buying,” says Fins, who chronicled Greg’s journey in his book Rights Come to Mind: Brain Injury, Ethics, and the Struggle for Consciousness.
Greg found his voice once more after six years of silence.
Success stories like his, though, are not without controversy. The technology has raised a number of ethical questions: Can a person who is only minimally conscious give consent for brain surgery? What happens to trial participants once the research ends? How can people’s neural data be used and protected responsibly?
“Move fast and break things” is a particularly bad approach here, says Veljko Dubljevic, PhD, an associate professor of science, technology, and society at North Carolina State University. He is alluding to Silicon Valley’s unofficial motto; Silicon Valley is also home to Neuralink, Elon Musk’s neurotechnology company.
Neuralink was founded in 2016, nearly a decade after the study of Greg’s brain implant was published. Yet Musk’s company, due in part to its founder’s frequently exaggerated claims, has done the most to bring neurotechnology to the attention of the general public. (In 2019, Musk said his brain-computer interface would be implanted in people in 2020; he has since pushed that goal to 2022.) Although the device is officially named the Link, Musk has called it “the Fitbit in your skull.”
Brain-computer interfaces (BCIs) have already been implanted in 36 people, according to Blackrock, a leading manufacturer of the devices. Neuralink stands out because of its ambitious plan to implant more than 1,000 electrodes as thin as a human hair. If the Link works as intended, monitoring a person’s brain activity and translating it into commands a computer can carry out, people with neurological conditions such as quadriplegia could regain a great deal of independence.
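At a conceptual level, a BCI like this is a decoding pipeline: electrodes record neural activity, software extracts features from that activity, and a decoder translates the features into a command, such as moving a cursor. The minimal Python sketch below illustrates that idea under loose assumptions; the electrode count echoes the figure above, but the telemetry function, decoder weights, and baseline are hypothetical stand-ins, not Neuralink’s (or anyone else’s) actual software.

```python
import numpy as np

# Minimal sketch of a BCI decoding loop (illustrative only).
# Assumptions: read_spike_counts() returns per-electrode spike counts for one
# short time window, and a linear decoder has already been fit that maps those
# counts to a 2-D cursor velocity. Real systems add filtering, per-user
# calibration, and safety checks.

N_ELECTRODES = 1024          # echoes the >1,000 hair-thin electrodes described above
rng = np.random.default_rng(0)

# Hypothetical pre-trained decoder weights: a (2 x N_ELECTRODES) matrix mapping
# spike counts to cursor velocity in x and y.
decoder_weights = rng.normal(scale=0.01, size=(2, N_ELECTRODES))

def read_spike_counts():
    """Stand-in for the implant's telemetry: spike counts per electrode per 50 ms bin."""
    return rng.poisson(lam=2.0, size=N_ELECTRODES)

def decode_cursor_velocity(spike_counts):
    """Linear decode: velocity = W @ (counts - baseline)."""
    baseline = 2.0  # assumed expected spike count at rest
    return decoder_weights @ (spike_counts - baseline)

cursor = np.zeros(2)
for _ in range(20):                      # 20 windows, roughly one second of control
    vx, vy = decode_cursor_velocity(read_spike_counts())
    cursor += np.array([vx, vy])         # integrate velocity into a cursor position
print("decoded cursor position:", cursor)
```

In a real system, the decoder would be fit to each user during calibration sessions and wrapped in extensive signal processing, rather than using fixed, made-up weights as this sketch does.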
The Origins of Brain Implants
BCIs, brain implants that can communicate with an outside device (usually a computer), are sometimes portrayed as science-fiction fantasies that visionaries like Musk are bringing to life. In fact, they owe a great deal to a long-established technology: deep brain stimulation (DBS). In 1948, a neurosurgeon at Columbia University implanted an electrode in the brain of a woman with depression and anorexia. The patient was improving until the wire broke a few weeks later, but the groundwork had been laid for longer-term neuromodulation.
It was movement disorders, not depression, that ultimately propelled DBS into mainstream medicine. In the late 1980s, French researchers published a study showing the devices could ease essential tremor and the tremor associated with Parkinson’s disease. The FDA approved DBS for essential tremor in 1997 and for Parkinson’s in 2002. Today, DBS is the most common surgical treatment for Parkinson’s disease.
Since then, deep brain stimulation has been used, often experimentally, to treat a wide range of conditions, including addiction, Tourette’s syndrome, and obsessive-compulsive disorder. The advances are striking: newer closed-loop devices can respond to brain activity in real time, recognizing, for instance, when an epileptic seizure is about to begin and then delivering an electrical pulse to stop it.
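To make the closed-loop idea concrete, here is a minimal Python sketch of the sense-detect-stimulate cycle. It assumes a simulated signal source, a crude high-frequency-power feature as a stand-in for a seizure signature, and a print statement in place of a real stimulator; actual responsive devices rely on validated, patient-specific detection algorithms rather than a toy rule like this.

```python
import numpy as np

# Toy closed-loop ("responsive") stimulation logic. Everything here is simulated:
# read_window() stands in for a short segment of recorded brain signal, and the
# print call stands in for delivering a stimulation pulse.

FS = 250                                  # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)

def read_window(n=FS, seizure=False):
    """One second of simulated signal; 'seizure' adds a strong 30 Hz oscillation."""
    t = np.arange(n) / FS
    base = rng.normal(scale=1.0, size=n)
    return base + (8.0 * np.sin(2 * np.pi * 30 * t) if seizure else 0.0)

def high_freq_power(x):
    """Detection feature: average spectral power above 25 Hz."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
    return spectrum[freqs > 25].mean()

# Calibrate a detection threshold from baseline activity, then run the loop:
# sense a window, check the feature, stimulate if it crosses the threshold.
baseline = np.mean([high_freq_power(read_window()) for _ in range(20)])
threshold = 3.0 * baseline

for step in range(10):
    window = read_window(seizure=(step == 7))      # inject one simulated event
    if high_freq_power(window) > threshold:
        print(f"step {step}: seizure-like activity detected, stimulation delivered")
```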
In clinical studies, BCIs have helped paralyzed people move prosthetic limbs. Implanted electrodes have allowed a blind woman to make out lines, shapes, and letters. In July, Synchron, widely regarded as Neuralink’s chief rival, implanted its Stentrode device in its first human subject in the United States, launching a first-of-its-kind FDA-approved trial and giving Synchron a head start over Neuralink (which is still in the animal-testing phase). In Australian research, patients with amyotrophic lateral sclerosis, or ALS (commonly known as Lou Gehrig’s disease), have used the Stentrode to bank and shop online.
Given such recent advances, it can be hard to see any drawbacks to brain implants. But neuroethicists caution that if we don’t take proactive measures, and if companies don’t build ethical considerations into the very foundation of neurotechnology, there could be harmful consequences.
The Ethics of Durability and Safety
It’s tempting to brush these worries off as unfounded. But with some 200,000 people worldwide already living with implanted deep brain stimulators, neurotechnology has taken hold. It remains unclear, for instance, who is responsible for the ongoing care of people who received devices through clinical trials.
Even when patients report benefits, those benefits can fade as the brain gradually encases the implant in glial tissue. This “scarification,” Dubljevic says, obstructs the electrical signal and makes the implant less able to communicate. Yet removing a device carries serious risks of its own, such as bleeding in the brain. And despite innovative designs that try to avoid the problem (the Stentrode, for instance, is placed in a blood vessel rather than through open brain surgery), many devices are still implanted, probe-like, deep inside the brain.
Device removal is typically offered at the end of a study, though the cost is often not covered as part of the trial. According to a paper in the journal Neuron, researchers often ask the patient’s insurance to pay for the surgery. But insurers are not required to cover removal of a brain implant unless it is medically necessary; in most cases, a patient’s dislike of the device is not enough.
And not every recipient is happy with the device. In interviews, patients report that these implants can alter their sense of identity and make them feel less like themselves, particularly if they were already prone to a negative self-image.
Some people, Dubljevic says, “feel like they’re dominated by the device,” compelled to heed its warnings, skipping a walk or putting their daily activities on hold whenever a seizure might be on the horizon.
Paul Ford, PhD, who heads the Cleveland Clinic’s NeuroEthics Program, says it is more common for patients to feel in control and more self-aware. But even people who like their devices and want to keep them can face a lack of post-trial support, particularly if the implant wasn’t statistically shown to be beneficial.
When the battery on the device eventually runs out, surgery will be required to replace it.
“Who’s going to cover that cost? It’s not part of the clinical trial,” Fins says. It’s like giving people Teslas, he adds, without any charging stations along the way.
As neurotechnology advances, Fins argues, health care institutions must invest in the infrastructure needed to maintain brain implants, just as anyone with a pacemaker can walk into a hospital and have a cardiologist adjust or replace the device.
If we are serious about developing this technology, he says, we must take our obligations to these people seriously.
The Principles of Privacy
Concerns about brain implants go beyond their medical risks to the vast amounts of private information they collect. Dubljevic compares today’s neural data to blood samples drawn 50 years ago, before genetic information could be extracted from them. Today, those same samples can easily be linked to specific individuals.
Likewise, he says, as technology advances, more and more private information may be extracted from recorded brain data. It isn’t mind-reading now, in any way, shape, or form. But in perhaps 20 or 30 years, it could become something close to it.
The phrase “mind-reading” comes up often in this field.
Fins describes it as “kind of the science fantasy version of where the technology is now.” (At this time, brain implants cannot read minds.)
But as device signals become clearer, the data will become more revealing. In the future, Dubljevic says, researchers might be able to infer attitudes or psychological states. Based on brain patterns, “someone could be characterised as less attentive or less intellectual,” he says.
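To show what such an inference might look like in practice, here is a deliberately simple Python sketch: a classifier trained on synthetic “brain data” features that labels recordings as attentive or not. Every number in it is invented, and the point is only to illustrate how readily a label can be generated from neural data, not that such labels would be valid.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy illustration of the privacy concern above, not a real capability claim.
# Features and labels are entirely synthetic.

rng = np.random.default_rng(2)
n = 200

# Hypothetical per-session features: alpha-band power and beta-band power.
alpha = rng.normal(loc=1.0, scale=0.3, size=n)
beta = rng.normal(loc=1.0, scale=0.3, size=n)
X = np.column_stack([alpha, beta])

# Invented ground truth: lower alpha power loosely tagged as "attentive" (label 1).
y = (alpha < 1.0).astype(int)

model = LogisticRegression().fit(X, y)
new_session = np.array([[0.8, 1.1]])            # one new, made-up recording
print("predicted 'attentive' probability:", model.predict_proba(new_session)[0, 1])
```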
Brain data could also reveal previously undetected medical conditions, such as a past stroke, which might be used to raise a person’s insurance premiums or deny coverage altogether. And brain implants could, in theory, be hijacked by hackers, who could shut them off or send errant signals to the user’s brain.
Fins and some other scholars argue that storing brain data is no riskier than keeping medical records on your phone.
“It’s about cybersecurity in general,” he explains.
Others, though, see brain data as uniquely personal.
Neural data, according to a report by UNESCO’s International Bioethics Committee (IBC), are the only data that reveal a person’s mental processes. If the premise is that “I am defined by my brain,” the report suggests, then neural data may be seen as the source of the self, requiring their own definition and protection.
“The brain is such a vital part of who we are, of what makes us us,” says Laura Cabrera, PhD, chair of neuroethics at Penn State University. “Who owns the data? Is it the medical system? Is it you, the user or the patient? In my opinion, that hasn’t really been settled.”
Many of the controls proposed for what Google and Facebook can collect and share could also be applied to data about brain activity. Some argue that the industry default should be to keep brain data private, rather than asking users to opt out of sharing. Dubljevic, however, takes a more nuanced view, since scientific collaboration and transparency depend on researchers sharing their raw data.
For him, the answer is clearly transparency, not halting research. As part of the consent process, Cabrera adds, patients should be told where their data will be stored, for how long, and how it will be used. She points out that the United States passed a law in 2008 banning discrimination based on genetic information in health insurance and employment, which could serve as a useful precedent.
The Legal Issue
Lawmakers around the world are studying the issue of brain data. A few years ago, a visit from a Columbia University neurobiologist prompted Chile’s Senate to draft a measure governing how neurotechnology may be used and how its data would be protected.
The amendment guaranteed that scientific and technological development “will be at the service of people and will be carried out with respect for life and physical and mental integrity.”
The neuro-rights bill was effectively killed in September, when Chile’s proposed new constitution was rejected. But similar legislation is under consideration in other countries. In 2021, France revised its bioethics law to prohibit discrimination based on brain data and to allow the banning of technologies that alter brain function.
Fins isn’t entirely convinced that this kind of legislation is helpful. He points to people like Greg, the 38-year-old whose brain implant allowed him to speak once more. If experimenting with or altering the brain’s function were forbidden, he says, “you couldn’t find out whether there was hidden consciousness” (mental awareness that isn’t immediately visible), “thereby destining people to terrible isolation.”
Access to neurotechnology must also be safeguarded, particularly for individuals who require it for communication.
“It’s one thing to go against someone’s wishes. That is a breach of personhood and of consent,” Fins says. “To intervene in order to foster agency is something entirely different.”
When a patient is only minimally conscious, a medical surrogate, such as a family member, can often give consent on their behalf. Overly strict rules could make it impossible for such patients to receive neural implants at all.
It’s a complex space, Fins says.
The Future of Brain Implants
For now, brain implants are used only for medical purposes. But in some circles, Dubljevic says, enhancement is an aspiration. Animal studies hint at what might be possible. In a 2013 study, scientists recorded the brain activity of rats as they navigated a maze, then transmitted that neural information, via electrical stimulation, to rats in a different lab. The second group of rodents moved through the maze as if they were already familiar with it, raising the possibility that memory transfer could someday become a reality. Scenarios like these raise the prospect of social inequality, since only the wealthiest might be able to afford cognitive enhancement.
They could also lead to ethically troubling military applications.
In a 2017 commentary published in Nature, a group of researchers wrote that they had heard officials at DARPA and the U.S. Intelligence Advanced Research Projects Activity discuss plans to equip soldiers and analysts with enhanced mental abilities (“super-intelligent agents”). Some experts have called for strict international restrictions on military use of the technology, similar to the Geneva Protocol’s limits on chemical and biological weapons. Brain implants could even become mandatory for soldiers, who might be required to take part in experiments.
For entrepreneurs and scientists alike, the temptation to explore every potential use of neurotechnology will likely be too great to resist. That is why safeguards are crucial.
In a 2020 article published in Philosophies, a group of researchers led by Dubljevic observed that while the deployment of such a revolutionary technology could raise numerous ethical concerns and dilemmas, “what is astonishing is the paucity of suggestions to overcome them.”
It is crucial, he says, for the field to move forward with the right mindset, emphasizing collaboration and making ethics a priority at every stage.
How, Dubljevic asks, can we anticipate potential problems and identify solutions before they occur? A little preemptive thought goes a long way.