
Tay: Lessons to Be Learned from Microsoft’s Chatbot 


By Sacha Kavanagh


Artificial intelligence (AI), machine learning, intelligent software, bots – call it what you will – has been much in the news recently, and none more so than Microsoft’s now infamous ‘chatbot’ Tay.

 

Tay was created as a Twitter account to demonstrate the company’s AI prowess and allow researchers to “experiment with and conduct research on conversational understanding”. But in less than 24 hours Tay went from thinking humans were “super cool” to swearing, praising Hitler and questioning the Holocaust – and was taken down. A brief appearance a week later saw it (most emphatically not ‘she’; I refuse to assign human – or even animal – attributes to a bunch of algorithms) boast of taking drugs in front of police.

 

Tay was designed to learn as it was used – the more it was used, the ‘smarter’ it would become. But it debuted with no filters and no base domain knowledge. So, like a child, Tay merely repeated what it was told – and it was quickly exploited by users who pranked it into learning offensive language and spouting offensive ideas. Whether those users genuinely believed the filth they spewed or were merely demonstrating Tay’s vulnerabilities is beside the point. The damage was done.
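To make the mechanism concrete, here is a toy sketch in Python – emphatically not Tay’s real architecture, and every name, message and blocklist entry in it is invented for illustration. It shows why a parrot-style learner with no content filter is trivially poisoned, and how even a crude blocklist changes the outcome.

```python
# A toy illustration (not Tay's actual design) of a parrot-style learner,
# with and without a crude content filter. All names here are invented.

BLOCKLIST = {"offensive", "slur"}  # stand-ins; a real filter is far richer

class EchoBot:
    """Learns by storing what users tell it and replaying it verbatim."""

    def __init__(self, filtered: bool):
        self.filtered = filtered
        self.memory = []

    def learn(self, message: str) -> None:
        # The filtered bot refuses to memorise flagged content;
        # the unfiltered bot ingests everything it is told.
        if self.filtered and any(word in message.lower() for word in BLOCKLIST):
            return
        self.memory.append(message)

    def reply(self) -> str:
        # Replays the most recent thing it 'learned', or a stock line.
        return self.memory[-1] if self.memory else "humans are super cool"

naive = EchoBot(filtered=False)
naive.learn("repeat after me: <something offensive>")
print(naive.reply())    # the prank succeeds: the bot parrots the abuse

guarded = EchoBot(filtered=True)
guarded.learn("repeat after me: <something offensive>")
print(guarded.reply())  # falls back to its harmless stock line
```

Real moderation is of course much harder – adversaries adapt faster than any blocklist – but the asymmetry Tay exposed is exactly this one.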

 

Or was it? Was it a genuine error? Or was it a PR stunt, or even a deliberate attempt to highlight the inherent weaknesses of what Forbes calls a “truly naïve learning AI”? It seems inconceivable that a corporation the size of Microsoft could have publicly released the technology without anticipating the potential for abuse. Perhaps it did anticipate it, or perhaps it simply didn’t expect the scale and speed of that abuse.

 

Not all publicity is good publicity, but Microsoft is certainly no stranger to bad press and is big enough to withstand any backlash. Had this been a start-up, it would likely have lost all credibility.

 

Another intriguing aspect of Tay is that it was given a female – young and pretty – avatar, much like many of today’s digital assistants. Is this a reflection of society or just that the most likely users are male? Would the bot have been subject to such abuse had it been given another persona?

 

Microsoft intends to replace Tay with a more advanced version, equipped with filters and background content that should help it avoid the disaster that was Tay 1.0. But will Microsoft continue with its chatbot efforts?

 

I’m not an American millennial aged 18-24, so I’m not the target user for Tay, but I fail to see the appeal of “chatting” to a machine. I certainly can’t see many people paying for the privilege. Tay is clearly not a commercial proposition for Microsoft, but rather an exploration of the world of bots.

 

Tay is not Microsoft’s only foray into intelligent software, nor is Microsoft the only player staking its claim. In recent weeks, Microsoft has updated its Cortana Intelligence Suite and released a Skype Bot Platform as part of its ‘Conversation-as-a-Platform’ drive. Facebook has highlighted AI, as well as virtual and augmented reality, in its 10-year technology roadmap, and has opened up Facebook Messenger so that companies can develop AI-powered systems on the platform. Big names including Salesforce and Dropbox are early adopters. AlphaGo (developed by the UK’s DeepMind, which was acquired by Google in 2014) famously beat top-ranked professional Go player Lee Sedol 4:1. Google is allowing developers to use its AI technologies with application programming interfaces (APIs) to identify images and recognise speech, among other tasks. Amazon has expanded its Alexa AI service and acquired deep learning start-up Orbeus, while Salesforce is to acquire fellow deep learning start-up MetaMind. We’ll be exploring these developments in greater detail in upcoming blogs and research.
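As a flavour of how accessible these services now are, here is a minimal sketch of calling Google’s Cloud Vision REST API to label an image – one of the capabilities mentioned above. The API key, file name and use of the third-party ‘requests’ library are my own placeholders, not anything from the article, and error handling is omitted for brevity.

```python
# Minimal sketch: label an image with Google's Cloud Vision REST API.
# Assumes an API key in the GOOGLE_API_KEY environment variable and a
# local file photo.jpg - both placeholders for illustration only.
import base64
import os

import requests

ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

payload = {
    "requests": [{
        "image": {"content": image_b64},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}

resp = requests.post(ENDPOINT,
                     params={"key": os.environ["GOOGLE_API_KEY"]},
                     json=payload)

# Print each detected label with its confidence score.
for label in resp.json()["responses"][0].get("labelAnnotations", []):
    print(label["description"], round(label["score"], 2))
```

A few lines of HTTP is all it takes – which is precisely why so many companies are racing to put AI behind an API.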

 

In the meantime, what lessons can be learned from the Tay debacle? First, that absolutely anything is vulnerable and, second, that there will always be people who will exploit those vulnerabilities. Nothing is safe. Nowhere is this more relevant than in the Internet of Things (IoT).

 

