Tech Blog

Meet Norman – A Psychopath AI Created by MIT and How Moral Measures Can Be Implemented to Stop Him

Happy Halloween, readers!

With Halloween season just behind us, we thought it would be a great time to revisit ‘Norman,’ the psychopath AI developed as an experiment by MIT. Given the public’s frequent misperceptions of AI, we wanted to explore how Norman was developed and, more importantly, how to prevent AI algorithms from becoming destructive and unpleasant. As the leader in Artificial Emotional Intelligence (AEI), we at BPU Holdings understand the importance of studying all forms of AI, even bad ones. The better we understand them, the better positioned we are to remain the leading AEI technology in the world.

On April Fools’ Day 2018, MIT introduced a psychopathic AI they named Norman (after Norman Bates from the film Psycho). Scientists fed Norman data from one of the darkest corners of Reddit to show how AI can go wrong when biased data is used to train machine learning algorithms.

The way Norman sees the world is very different from that of your average AI...

After feeding Norman data from a particularly gruesome subreddit focused primarily on death and gore, the researchers subjected him and a standard image-captioning AI to the Rorschach inkblot test to determine what each AI would see in the ink blots. Here are some of the results:

Inkblot #1

Norman sees: “A man is electrocuted and catches to death”

Standard AI sees: “A group of birds sitting on top of a tree branch”

Inkblot #7

Norman sees: “Man is murdered by machine gun in broad daylight”

Standard AI sees: “A black and white photo of a baseball glove.”

See more of what Norman sees at norman-ai.mit.edu/

Understandably, the thought of an evil or malevolent AI is enough to make most people uneasy; however, MIT created Norman largely to show that an AI’s behavior depends on the sort of data that is given to it. Suppose a person biased the data given to a machine toward the overly happy: rainbows, unicorns, and kittens. The result would be a very cheery AI that makes choices based on that bias. Subjecting this happy AI to the Rorschach inkblot test, for example, would yield results different from even the standard AI’s.
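To make that concrete, here is a deliberately minimal sketch in Python. The caption data and the “model” are our own toy illustration, not how MIT actually built Norman (which used a neural image-captioning network): two identical captioners are trained on different corpora, and because an inkblot is ambiguous, the training bias is essentially all that shows up in the output.

```python
import random
from collections import Counter

# Toy training corpora (made up for illustration; not MIT's data).
GRUESOME_CAPTIONS = [
    "a man is shot and falls to his death",
    "a body lies in the street after a crash",
    "a man is electrocuted by a broken power line",
]
EVERYDAY_CAPTIONS = [
    "a group of birds sitting on a tree branch",
    "a person holding a baseball glove",
    "a small vase of flowers on a wooden table",
]

def train(corpus):
    # "Training" here is just counting word frequencies.
    return Counter(word for caption in corpus for word in caption.split())

def describe_inkblot(model, length=6, seed=0):
    # The inkblot itself is ambiguous, so it contributes nothing:
    # the caption is sampled purely from the training distribution.
    rng = random.Random(seed)
    words, weights = zip(*model.items())
    return " ".join(rng.choices(words, weights=weights, k=length))

norman = train(GRUESOME_CAPTIONS)
standard = train(EVERYDAY_CAPTIONS)
print("Norman sees:  ", describe_inkblot(norman))
print("Standard sees:", describe_inkblot(standard))
```

Same architecture, same ambiguous input; only the data differs, and so do the descriptions.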

This brings us to a discussion of ethics and morals when it comes to AI.  Now that AIs have the ability to create other AIs with no human guidance (such as Google’s AutoML, detailed in this article from Wired.com), we need to explore the limitations that may need to be in place.

Can we train AI to know the difference between good and evil?

Ethics and morals are regularly used interchangeably, but there is a distinction. Ethics is a code of conduct, a set of rules that comes from an outside source. Morals are more internal: what you, the individual, consider right or wrong. Both ethics and morals can vary considerably between cultures. Most cultures would consider taking a life both morally and ethically wrong, but there are some where the lines blur and social morals dictate decisions over another human life.

So how do we build a sense of right and wrong into our AI? MIT is working on exactly that. MIT’s Moral Machine is a platform that gathers data from people on moral decisions made by machine intelligence (in this case, self-driving cars). People are presented with moral quandaries and asked to pick the better of two bad scenarios. After you complete your answers, Moral Machine gives you a breakdown of the data you entered and how it compares to the answers of others. From my personal experience, it’s a difficult exercise, and I walked away from my computer feeling really drained. If you’d like to give your input, follow the link above.
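For readers who like to see structure, here is a hypothetical sketch of how a Moral Machine-style dilemma and its crowd responses might be represented. The class names and fields are entirely our own invention, not Moral Machine’s actual data model.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class Outcome:
    description: str
    lives_at_risk: int

@dataclass(frozen=True)
class Dilemma:
    option_a: Outcome
    option_b: Outcome

def tally(votes):
    # Aggregate crowd choices into the kind of breakdown the site
    # shows you after you finish answering.
    counts = Counter(votes)
    total = sum(counts.values())
    return {choice: count / total for choice, count in counts.items()}

dilemma = Dilemma(
    option_a=Outcome("swerve into a barrier, harming the passenger", 1),
    option_b=Outcome("continue straight, harming two pedestrians", 2),
)
votes = ["a", "a", "b", "a", "b", "a", "a"]  # hypothetical responses
print(tally(votes))  # {'a': 0.71..., 'b': 0.28...}
```

The point of aggregating answers this way is that no single respondent defines “right”; the crowd’s distribution of choices becomes the training signal.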

There are many articles out now on the topic of ethics, morals, and AI. The one I found most comprehensive was posted by Ambarish Mitra (CEO and co-founder of Blippar) on Quartz. His conclusion is that we have the ability to train AIs to know right from wrong, and that, in the process, AI could actually end up teaching us about morality.

As technology advances at a rapid rate, we must consider and explore the ideas of ethics and morals within AI and AEI.  It will be challenging, but it will be interesting to see what the future brings.

What it all boils down to is this: AIs are only as good (or bad) as the data fed to them. Giving an AI broader and deeper data greatly reduces bias. More data, and more varied data, is better data. And that makes for better AI overall.
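As a final sketch, here is one simple, commonly assumed way to put that principle into practice: audit a labeled dataset for skew, then resample it so no single category dominates training. The functions below are our own illustration, not a specific library’s API.

```python
import random
from collections import Counter

def skew_report(labels):
    # Show the share of each label so lopsided data is visible up front.
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

def balanced_sample(examples, labels, per_label, seed=0):
    # Draw the same number of examples per label (sampling with
    # replacement, so underrepresented labels can still fill their quota).
    rng = random.Random(seed)
    by_label = {}
    for example, label in zip(examples, labels):
        by_label.setdefault(label, []).append(example)
    sample = []
    for label, pool in by_label.items():
        sample.extend(rng.choices(pool, k=per_label))
    return sample

labels = ["gore"] * 90 + ["everyday"] * 10   # a Norman-style skew
examples = list(range(len(labels)))
print(skew_report(labels))                   # {'gore': 0.9, 'everyday': 0.1}
balanced = balanced_sample(examples, labels, per_label=50)
```

A skew report like this would have flagged Norman’s training diet immediately: 90% gore is not a dataset, it’s a worldview.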

(And you can help Norman get better by doing just that: giving him more data. You can do so by clicking here.)

Zimgo