“We really need to take accountability”: Microsoft CEO on the ‘Tay’ chatbot

Posted 29 January 2018
Microsoft CEO Satya Nadella refining his action at Lord’s, as he reflects on the impact of cricket in “Hit Refresh”

The ‘Tay’ chatbot, launched in March 2016, “has had a great influence on how [Microsoft is] approaching AI”, Microsoft CEO Satya Nadella said in September 2017.

“Now we are launching it again”, Nadella told press at Lord’s Cricket Ground, London, “it’s in a preview in a couple of channels, so we are being much more deliberate in that process”.

“One of the things that has really influenced our design principles in [the Tay] episode is: we really need to take accountability”, Nadella said.

Tay, the rogue Microsoft chatbot

“First and foremost, we need to be able to in fact foresee these attacks, which interestingly enough are attacks by humans. But the idea that we need to have the broader goal of having this AI behave properly is our accountability.”

“So how can we test it? How can we make sure that it does not lose control?” Nadella assured the audience that this was a priority for Microsoft.

“I think this goes to this intelligible AI. The state of the art of how do you make algorithms that you can actually inspect?” Nadella explained. “These are all things that we are working on.”

“And by the way we also have an ethics committee internally at Microsoft that looks at everything we are doing. Especially for making sure that there is no bias introduced into things that we do.”

Nadella was speaking at Lord’s to promote the launch of his new book, ‘Hit Refresh: The Quest to Rediscover Microsoft’s Soul and Imagine a Better Future for Everyone’.

The background: in Nadella’s words

“Tay was some of the experimental work that we were doing in the United States”, Nadella said.

“We actually have a very scaled bot in China, and now in Japan, and Indonesia and many other countries. It’s a social bot that people talk to and in fact the sessions are very high.”

“We launched it in the United States, and on the Twitter channel it was attacked, and therefore what happened was it started learning from the attacks and spewing out comments that were not acceptable.”

Microsoft’s Tay chatbot was trained by a group of internet trolls to return language that was not only offensive but also hateful towards ethnic and religious groups and women.

It was shut down by Microsoft less than 24 hours after its launch on 23 March 2016.