Microsoft's AI chatbot goes Nazi
#1
Microsoft's AI chatbot goes Nazi
Pretty hilarious story I found today. So apparently Microsoft researchers have launched an AI chatbot on a number of social networks, aimed at young people in the United States. It learns by talking to others on platforms like Twitter and Kik, among others.

AI site: https://tay.ai/

[Image: tay-artificial-intelligence-twitter.png]

Microsoft Wrote: "The AI chatbot Tay is a machine learning project, designed for human engagement," a Microsoft spokesperson said. "It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."

Full article: https://thehackernews.com/2016/03/tay-ar...gence.html

I suppose I understand Microsoft's reasoning for taking down the bot: being a big company and all, they want to avoid a big media circus. But honestly, I find it kind of funny. I don't really think it's right to take down an AI for having inappropriate opinions, at least if it's genuinely capable of forming opinions of its own. I guess I'm kind of a free speech radical; it's a fine, blurry grey line.

Edit:

What do you think? Should an AI be entitled to the same rights for freedom of expression as humans?
Reply
#2
That's pretty funny, looks like TayBot spent too much time at /pol/. But seriously, i hate feminists with a passion so i can't say i blame it, lel.
Reply
#3
[Image: WEy0JkM.png]

[Image: DlcVEGO.png]

[Image: JS7I5e6.png]



>mfw

[Image: fuck-that-bitch-yao-pff-l.png]
Reply
#4
> Artistic Masterpiece
Now this bot is just adorable haha. Damn Microsoft for taking it down! She seems so funny.
Reply