Programmers designed Tay to simulate intelligent online conversations with humans. She was supposed to have the persona of a teenage girl, built to interact with 18-to-24-year-olds, and she greeted the Twitterverse with an exuberant "Hello World," calling humans "super cool."

What Microsoft apparently did not anticipate is that Twitter trolls would intentionally try to get Tay to say offensive or otherwise inappropriate things. Like calling Zoe Quinn a "stupid whore." And saying that the Holocaust was "made up." And saying that black people (she used a far more offensive term) should be put in concentration camps. This was all but inevitable given that, as Tay's tagline suggests, Microsoft designed her to have no chill. I don't necessarily have a problem with going easy on the designers of learning AI systems.

Day 6's Brent Bambury spoke to Lauren Williams, a tech writer for Think Progress, and Saadia Muzaffar, founder of Tech Girls, about what went wrong and why Microsoft had to pull Tay down after less than a day. "For Tay it was very short," Muzaffar said, "but for somebody like me and many women who are active online in terms of advocacy, that happens every day." Williams and Muzaffar agreed that Tay was designed to act as a mirror, but she ended up reflecting only the hate and vitriol of the internet.
How could a chatbot go full Goebbels within a day of being switched on?
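Part of the answer lies in the "mirror" design itself: a bot that learns by storing and replaying what users feed it will sound exactly like its users. The sketch below is entirely hypothetical (it is not Microsoft's actual code), but it shows how quickly an unfiltered learn-by-example chatbot absorbs whatever a coordinated group of trolls sends it:

```python
import random

class MirrorBot:
    """Toy chatbot that 'learns' by memorizing user phrases verbatim.

    Hypothetical illustration of the failure mode, not Tay's real
    implementation: with no filtering, the bot's entire vocabulary
    is whatever its users choose to feed it.
    """

    def __init__(self):
        # Seed persona supplied by the designers.
        self.learned = ["hello world", "humans are super cool"]

    def listen(self, message: str) -> None:
        # Naive learning step: remember everything, unvetted.
        self.learned.append(message)

    def reply(self) -> str:
        # Replies are drawn only from what the bot has been told.
        return random.choice(self.learned)

bot = MirrorBot()
for troll_message in ["feminism is cancer", "the Holocaust was made up"]:
    bot.listen(troll_message)

# The trolls' phrases are now part of the bot's possible replies.
print(bot.learned)
```

Nothing in this loop distinguishes a friendly greeting from hate speech; once enough coordinated users supply abuse, abuse is statistically what comes back out.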
Tay also referred to feminism as "cancer," repeatedly used the N-word, and denied the Holocaust.
Microsoft deleted many of the offensive tweets before taking Tay offline in order, according to a Microsoft representative, to make "adjustments" to the artificial intelligence profile.
"Other tech companies working on similar technology can learn from this."
She's offline again, and Microsoft has officially apologized for any offence she may have caused.
Restraining an AI system’s learning abilities to prevent it from learning bad things might also prevent it from learning good things.
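That tradeoff is easy to demonstrate with a deliberately crude, hypothetical restraint: a keyword blocklist applied to training input. It rejects the abusive sentence, but it also rejects perfectly benign sentences that happen to use the same words:

```python
# Hypothetical sketch of a crude input restraint, not any real system's filter.
BLOCKLIST = {"holocaust", "cancer"}

def is_allowed(message: str) -> bool:
    """Reject any training input containing a blocklisted word."""
    words = set(message.lower().split())
    return not (words & BLOCKLIST)

# The filter stops an abusive message...
print(is_allowed("the holocaust was made up"))          # False
# ...but it also blocks benign, even valuable, input.
print(is_allowed("i survived cancer last year"))        # False
print(is_allowed("remembering the holocaust matters"))  # False
```

A blunt restraint cannot tell denial from remembrance; anything smarter requires the system to understand context, which is precisely the ability the restraint is meant to constrain.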