Microsoft's AI chatbot goes Nazi
Pretty hilarious story I found today. Apparently Microsoft researchers launched an AI chatbot aimed at young people in the United States on a number of social networks. It learns by talking to others on platforms like Twitter and Kik.
AI site: https://tay.ai/
![[Image: tay-artificial-intelligence-twitter.png]](https://2.bp.blogspot.com/-_9aPRgIBTKE/VvQwTEI496I/AAAAAAAAnWg/M_5YSsQDT0ohL2FEkDlMIk0WVX4TE_z0w/s1600/tay-artificial-intelligence-twitter.png)
> **Microsoft Wrote:** "The AI chatbot Tay is a machine learning project, designed for human engagement," a Microsoft spokesperson said. "It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."
Full article: https://thehackernews.com/2016/03/tay-ar...gence.html
I suppose I understand Microsoft's reasoning for taking down the bot; being a big company, they want to avoid a media circus. But personally, I find it kind of funny. I don't really think it's right to take down an AI for expressing inappropriate opinions, at least if it's actually capable of forming opinions of its own. I guess I'm kind of a radical for free speech; it's a fine, grey, blurry line.
Edit:
What do you think? Should an AI be entitled to the same rights for freedom of expression as humans?