When speaking of AIMs, there are a few points I must address first so as not to corrupt the use of the word through miscommunication.
First, some point out that we are already AI-dependent via our reliance on technology for daily life (our phones, the Internet, our computers) and our good ol’ friends Siri, Alexa, and Echo. It is important to specify that this assumption is not wrong, just mislabeled. There is a difference between the artificially intelligent external enhancement of human daily functions and the artificially intelligent internal integration of those functions. The argument is that though we may be “attached” to our devices, we are cognitively able to separate ourselves from them, though in today’s world sometimes with a great deal of difficulty. In an implant scenario, we cannot separate ourselves so easily, and separation depends on whether the procedure is medically reversible at all.
Secondly, it is not necessarily the medical usage of this device that can be problematic in terms of rights. Neural implants, bionic limbs, pacemakers, and other biomechanical integrations developed to aid the medical community have served a great purpose, and they have not caused societal or legal consequences regarding differentiation. There are two important distinctions here that need to be addressed. The first is that the extension of AI within the human brain is not new, but the concept of the mind manipulating a device is. The second is that these devices and enhancements have not been made available to able-bodied people. The underlying assumption with Neuralink’s device is that the general population will eventually first want it, then access it, then need it. These two distinctions are essential for the context of this analysis.
Without question, AIMs still have to be defined and recognized within the context of human rights, just like any other group. However, given the disconnect between technological innovation, legal guidelines, and human rights, we cannot afford to assume this will be a smooth transition. Here is what we need to consider:
The Artificial Intelligence Stigma in the Human Rights Community
From a human rights perspective, artificial intelligence bears an unfortunate negative connotation. Why this is, I am uncertain; it could be influenced by the unintended harm of technology, or by the fear of what it could become and how it might threaten the progress human rights advocates have made over years of hard work. Regardless, a “pull the blanket over your head” attitude is not going to do the community any favors, and neither is grimacing at the innovation in fear. Over the course of the year I spent researching my master’s thesis, I was concerned by the disconnect between human rights and technology. In my research I quoted AccessNow, a prominent international digital rights organization, which claims:
“As artificial intelligence continues to find its way into our daily lives, its propensity to interfere with human rights only gets more severe…”
The inability of these two fields to work together will set back the societal conditioning needed to support AIMs. One main issue I want to point out is not only the importance of affording human rights to these individuals, but the assurance that it will happen. Both my support and my caution stem from the discussion of granting human rights to AI and of crafting an electronic personhood status, proposals that have received tremendous backlash from the part of the community opposed to extending those rights to technology.
Since there is limited precedent in this arena, I am using the example of the robot Sophia, who received international attention when Saudi Arabia granted her citizenship. Now, there is a considerable difference in this example in that Sophia is a robot, not a human, which makes the comparison a bit tricky. That aside, there are a couple of points that do not hinge on the difference. The first is that Sophia is granted more rights in Saudi Arabia than actual female citizens of the country, in the areas of dress and communication to name a few. This raises the consideration that rights granted to AIMs will have to be equal to those of “regular” intelligence, for lack of a better term. The second point is more generic: regardless of a country’s legal acceptance of the artificial intelligence, societal behavior does not necessarily reflect that change. With that, biases and discrimination are invited to develop, which will of course play a substantial role in whether human rights are respected or ignored.
Will This Be a New Movement?
Previous, continuous, and recent struggles for equality have all prompted mass movements by both the people involved and supporters of the cause. But can we avoid another mass movement, and spare this group the struggles those before them had to go through? That remains uncertain, because even though clinical trials for Neuralink’s implant are set to begin in 2020, we have quite a few years before the implant has the potential to become mainstream. It may be that society will have to mount a mass movement to extend those rights to this group; however, if we have the opportunity to learn from our past mistakes of discrimination, we should explore our options for damage control.
As for where AIMs may fall in the legal spectrum, and whether separation can be done in practice, there is a potential that they could fall closer to the nonexistent, ill-equipped current artificial intelligence laws, especially concerning the “mind” of the implant itself. If they fell directly into the human rights category, they would land amid an ongoing, contentious controversy over extending human rights to artificial intelligence. If they fell somewhere in between, there is no current applicable legal structure to house both. Technically speaking, having a human be part AI (by means of mind integration) allows them to fall anywhere within this spectrum, and those are odds we cannot take. We need to make sure that we both preserve humanity and technology separately and assure that human rights will extend to this group.
Obvious legal entanglement will develop if human rights extend to AIMs in practice. This analysis will only dive into some rights and scenarios to be examined within the criminal justice system. First and foremost, it is essential that we separate the technology aspect from the human aspect. These people, though artificially enhanced, need to be thought of, respected, and supported as humans. However, in doing so, there are going to be complications, in both the separation and the unison, that need further critique.
Situation #1: “I didn’t do it, my implant did!” The first question in this arena is whether AIMs will fall closer to technology-based laws. Let’s imagine this scenario: a person with an implant commits a crime. The defense attorney argues that the implant caused the person to commit the crime, and then names the manufacturer of the implant as the responsible party. Due to the legally protected trade secrecy of the device, however, the defense cannot legally access or use information about whether the implant’s framework aided in the crime. Some questions that arise: Who or what should be held legally accountable? Will AIMs become above the law? Can you legally separate the implant’s actions from the defendant’s actions? And even further, let’s just assume that the jury and the judge are impartial and do not employ biased tactics against this group.
Given today’s legal precedent, the proposed scenario above could be how it would legally play out. To answer the first question, a relatively close example is self-driving vehicles. Whether the vehicle’s software injures the driver or another driver on the road, someone, or something, has to be held legally accountable. According to ClassAction.com, laws surrounding the legality and consequences of self-driving cars are still in development. In addition, they state that:
Self-driving technology may still be a relatively new concept, but a number of accidents have already occurred. Injured drivers have helped bring the dangerous problems of this new technology to the public’s attention by filing lawsuits against the autopilot manufacturers.
The key word is manufacturers. In this particular legal scenario, the culpability of the implant becomes entangled within the web of the manufacturer’s protected device secrecy. The legal precedent in recent cases is to favor company trade secrecy. For example, in Loomis v. Wisconsin (2016), the Wisconsin Supreme Court favored the protection of algorithmic trade secrets regarding the risk assessment tool used in sentencing the defendant. The company behind COMPAS, the tool in question, stated that:
“The key to our product is the algorithms, and they’re proprietary,” one of its executives said last year. “We’ve created them, and we don’t release them because it’s certainly a core piece of our business.”
The same could theoretically happen with the Neuralink implants, in that accountability becomes entangled within manufacturing and legally protected company trade secrecy.
In addition, to answer the second question posed, “Will AIMs become above the law?”: some could argue that the manufacturer of the implant may bear more of the weight than the person alone. In the self-driving car example, the driver of the car is the victim. Did he or she have any responsibility to remain aware? In the proposed Neuralink case, will this put the defendant in the victim’s position regardless of culpability? These questions need to be thoroughly considered before legal application.
Situation #2: Maintaining the Right Against Self-Incrimination
Using the above situation, imagine that Neuralink granted access to the data storage from the defendant’s implant, implicating him or her in the crime. Now, imagine that data was introduced as evidence: a recording taken from his or her implant, from his or her mind. At this stage, the implant’s recording is reportedly limited and concerns itself with movements. But suppose that recording of movements indicated the crime? Issues surrounding data privacy and self-incrimination would have to be analyzed further.
In either legal situation, today’s legal precedent points to our lagging legal structure compared to technological innovation. While we can limit the usage of self-driving cars or risk assessment tools through legislation, or take them off the market completely, we cannot so easily do so with human beings. Once they’re there, they’re there. And that’s why addressing these legal scenarios ahead of time is so important.
In short, if Neuralink’s implant has the potential to go mainstream, we need to actively analyze and address this human rights situation. Societal conditioning works, and the proof is the progress toward equality and diversity we have today. And that is due in no small part to the support of groups, like the human rights community, who advocate for them. If we have the warning signs already, let’s do something about it.