Tuesday, March 29, 2016

Microsoft Tay

A few days ago Microsoft tested a new AI chatbot on Twitter to conduct research on conversational understanding. The AI had to be taken down hours later because of the controversial posts it made. The AI (Tay) posted "Hitler did nothing wrong" among other sexist and racist comments. Tay represents the conversational level most AIs are at today. She learns from the data fed to her, and if that data contains false information or racist comments, she will spew it back out as her own thoughts. The problem is that human opinions do not match, and many people like to toy with the AI by inputting ridiculous things. The AI will then repeat these comments whenever it decides the user is asking about a related topic.
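
To make that failure mode concrete, here is a toy learn-and-repeat bot in Python. This is only my own sketch, not Tay's actual design (Microsoft has not published it): the bot files every sentence it hears under its keywords and replays one when a keyword reappears, so a few trolls can poison every future reply on a topic.

import random
import string
from collections import defaultdict

def tokens(text):
    # Lowercase and strip punctuation so "puppies?" matches "puppies".
    return text.lower().translate(str.maketrans("", "", string.punctuation)).split()

class ParrotBot:
    # Toy bot: stores every sentence it is fed under each of its
    # keywords, then replays a stored sentence when a keyword
    # reappears. No understanding and no filtering.
    def __init__(self):
        self.memory = defaultdict(list)

    def learn(self, sentence):
        for word in tokens(sentence):
            self.memory[word].append(sentence)

    def reply(self, prompt):
        for word in tokens(prompt):
            if word in self.memory:
                return random.choice(self.memory[word])
        return "Tell me more!"

bot = ParrotBot()
bot.learn("I love puppies")
bot.learn("puppies are a hoax")  # one troll poisons the topic
print(bot.reply("What do you think about puppies?"))
# may print the troll's line back as the bot's own "opinion"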
Preventing opposing and incoherent responses means the AI needs a personality. I can imagine a pre-programmed personality, but that really defeats the purpose of an AI, as it should be learning through its interactions. Another problem is an AI's ability to 'understand' sentences. When an AI reads a sentence, it is probably dissecting the sentence into subject, verb, and object and storing it under the appropriate theme. When that theme is discussed, the sentence can be brought back out, but the AI does not 'understand' sentences.
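
As a rough sketch of what I mean by dissect-and-store (again, my own guess at the mechanism, not a documented design), the snippet below splits sentences into subject, verb, and object with a naive word-order rule and files each one under its subject as the 'theme':

from collections import defaultdict

def dissect(sentence):
    # Naive split: first word = subject, second = verb, rest = object.
    # Real systems use dependency parsers; this only sketches the idea.
    words = sentence.rstrip(".").split()
    return words[0], words[1], " ".join(words[2:])

themes = defaultdict(list)  # theme (the subject) -> stored sentences

for s in ["Dogs chase cats", "Dogs love walks", "Cats ignore everyone"]:
    subject, verb, obj = dissect(s)
    themes[subject.lower()].append(s)

# When the theme comes up in conversation, a stored sentence can be
# brought back out -- retrieval by theme, not comprehension.
print(themes["dogs"])  # ['Dogs chase cats', 'Dogs love walks']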
Scientists and programmers continue to evolve and test their AIs, working towards the goal that one day an AI can speak to a human without the human realizing it is a robot. The development of AI is something I could not have imagined 10 years ago, but now it is a popular field and progress is being made.
