Artificial Law

For many years, people have been debating these questions about Mary Shelley’s Frankenstein: who is the real monster? Who is to blame? There is a lot of gray area, an unclear line where Victor’s responsibility ends and the monster’s begins. In the book, both characters commit terrible deeds: Victor abandons his unstable creation with no regard for the consequences, and the monster kills Victor’s family out of revenge. In my eyes, these two characters mirror each other very closely in their corrupt morals and actions, as discussed in the doppelgänger theory, which I agree with. However, for the topic I am going to examine, we must first look at artificial intelligence. Simplified, the phrase means the ability of a man-made object, usually a computer or machine, to think, learn, and make decisions on its own. I think this idea can be applied to Frankenstein’s monster. So, essentially, my main question is: since Frankenstein’s creation is technically artificial intelligence, could it be held criminally responsible for the murders of William, Henry, and Elizabeth?

For a proper crime to occur, two elements must be present: a wrongful deed (actus reus) and a guilty mind (mens rea). We know that the murders are the wrongful deeds, but is the guilty mind the monster’s or Victor’s? An article from MIT Technology Review called “When an AI Finally Kills Someone, Who Will Be Responsible?” analyzes the same question in three scenarios. In this blog I will explain how two of these scenarios could relate to the novel.

[Image from Pixabay]

The first scenario is called “perpetrator via another”. This applies when the offence was committed by an animal or a mentally ill person, who would thereby be proclaimed innocent, while whoever instructed them could be held liable (arXiv). Now, even though the monster is not human, I don’t think we can classify him as an ‘animal’ in this context. Within a year of observing the De Lacey family he is able to learn and comprehend language, something that usually takes a human four to six years of daily practice and interaction to master, interaction the monster never truly had. As for whether he is mentally sound, it’s less clear. We know that the reason he killed was that Victor gave him a motive for revenge by leaving him to deal with the toils of the world alone. We also know that the monster is logical and eloquent, as shown through his narration in chapters 11-16 and his conversations with Victor. He uses complex sentences and rational arguments, delivered calmly, to make valid points. With this level of intelligence, one would assume the creature could also understand feelings like empathy and mercy, but with no real reciprocal connections to others, I don’t think he did. Even with his intellectual maturity, his emotional and moral capacities never evolved, leaving him child-like, hyper-fixated on revenge, and, in my opinion, not mentally sane. So the monster cannot carry the guilty mind himself, and since Victor never instructed or directed the killings, he cannot be the ‘perpetrator via another’ either. Therefore, in this scenario, the guilt would not fall on the monster or Victor.

[Image from Wikimedia]

“The second scenario, known as natural probable consequence, occurs when the ordinary actions of an AI system might be used inappropriately to perform a criminal act…The key question here is whether the programmer of the machine knew that this outcome was a probable consequence of its use” (arXiv). I think this scenario comes back to Victor’s striking lack of consideration for the consequences of creating unnatural life. We know that Victor never intended to use his accomplishment for malicious purposes if he succeeded in creating it. The problem is that Victor could never think past his obsession with proving he could father life. In the many months he spent researching and executing his plan, I can’t understand how he never stopped and thought, ‘hm, what’s going to happen if this actually works?’ Nevertheless, he never did, and we’re left to infer that Victor had no knowledge that this could be the outcome of his invention. So, again, Victor is not to blame for the murders. Then who is? In my opinion, when we try to break it down into one or the other, as we want to, we can never get an answer that satisfies everybody. There is always a different lens to look through, an alternative angle to consider. This is why my final answer is: both, or neither. The two characters are two sides of the same coin; one would not exist without the other molding them into what they become. Victor built a being out of nothing, and the creature broke a man into a mouse, both breaking natural laws and possessing too much power for a happy ending to ever be an option.

 

arXiv, Emerging Technology from the. “When an AI Finally Kills Someone, Who Will Be Responsible?” MIT Technology Review, 12 Mar. 2018, www.technologyreview.com/s/610459/when-an-ai-finally-kills-someone-who-will-be-responsible/. Accessed 13 May 2019.

 

Shelley, Mary Wollstonecraft, and Douglas Clegg. Frankenstein, or The Modern Prometheus. Penguin, 2013.
