
We present Norman, the world's first psychopath AI.
Norman, an artificial intelligence project from the Massachusetts Institute of Technology (MIT), was born from the fact that the data used to train a machine learning algorithm can significantly influence its behavior.
So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it.
The same method can see very different things in an image, even sick things, if it is trained on the wrong (or the right!) data set.
Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms.
The MIT team trained Norman on data from Reddit, then compared its captions with those of a standard image captioning neural network. Here is what both AIs see in Rorschach inkblot tests.
Norman is an AI trained to perform image captioning, a popular deep learning method for generating a textual description of an image. We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to documenting and observing the disturbing reality of death.
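The project page does not disclose Norman's architecture, but image captioning is typically done with an encoder-decoder model: a pretrained CNN encodes the image into a feature vector, and a recurrent decoder generates the caption token by token. Below is a minimal PyTorch sketch of that standard setup; all layer sizes and names are illustrative assumptions, not the MIT team's actual code.

```python
# Illustrative encoder-decoder captioning model. This is a generic sketch of
# the standard technique, NOT the Norman project's actual architecture.
import torch
import torch.nn as nn
import torchvision.models as models

class CaptionModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Pretrained CNN backbone as the image encoder; its classification
        # head is replaced with a projection into the embedding space.
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
        self.encoder = backbone
        # LSTM decoder that predicts the caption one token at a time.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        # Encode each image to a single feature vector and prepend it to the
        # embedded caption tokens as the decoder's first input step.
        feats = self.encoder(images).unsqueeze(1)   # (B, 1, E)
        tokens = self.embed(captions)               # (B, T, E)
        inputs = torch.cat([feats, tokens], dim=1)  # (B, T+1, E)
        out, _ = self.lstm(inputs)
        return self.head(out)                       # next-token logits
```

In this framing, the only thing that changes between Norman and a conventional captioner is the text the decoder learns from, which is exactly the point of the experiment.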
Then, the MIT team compared Norman's responses with those of a standard image captioning neural network (trained on the MSCOCO dataset) on Rorschach inkblots, a test used to detect underlying thought disorders.
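The comparison itself is simple: feed both models the same inkblot image and contrast the captions they produce. A hypothetical harness, assuming each model exposes a `caption(image)` method (an interface invented here for illustration), might look like this:

```python
# Hypothetical comparison harness; `norman` and `baseline` are assumed to be
# captioning models with a caption(image) -> str method. Nothing here comes
# from the MIT team's actual code.
from pathlib import Path
from PIL import Image

def compare_on_inkblots(norman, baseline, inkblot_dir):
    for path in sorted(Path(inkblot_dir).glob("*.png")):
        image = Image.open(path).convert("RGB")
        print(f"{path.name}:")
        print(f"  standard AI: {baseline.caption(image)}")
        print(f"  Norman:      {norman.caption(image)}")
```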
(Note: Due to ethical concerns, the MIT team introduced bias only through image captions from the subreddit, which were later matched with randomly generated inkblots; therefore, no image of a real person dying was used in this experiment.)
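Under that constraint, the training data would pair scraped caption text with synthetic inkblots rather than with the original photographs. A minimal sketch of such a pairing step, with the inkblot generator left as an assumed hypothetical function, could be:

```python
# Sketch of the data-construction step described in the note above: bias
# enters only through caption text, paired with synthetic inkblots, so no
# original subreddit image is used. `make_random_inkblot` is hypothetical.
from typing import Any, Callable, List, Tuple

def build_training_pairs(
    scraped_captions: List[str],
    make_random_inkblot: Callable[[], Any],  # assumed inkblot generator
) -> List[Tuple[Any, str]]:
    return [(make_random_inkblot(), caption) for caption in scraped_captions]
```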
Browse what Norman sees, or help Norman fix himself by taking this survey.