Facebook Has Not Shut Down an AI Program That Was Getting Dangerously Smart

Around the middle of the month, FastCo Design reported on two AI agents designed by Facebook that invented their own gibberish language and were subsequently switched off by the company. The story was quickly picked up and reported as the rise of the machines, along with various other doom-and-gloom variants in which AI takes over human lives.

A snippet of the conversation:

Bob: “I can can I I everything else.”

Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”

Fortunately, Facebook’s AI Research (FAIR) unit had posted a blog entry on the topic last month explaining the purpose behind these two AI agents. The aim was to show that it is “possible for dialog agents with differing goals (implemented as end-to-end-trained neural networks) to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes.” The agents were given a task, in this case negotiating how to divide a set of goods (books, hats, and balls), and had to bargain until both parties agreed.
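To make the task concrete, here is a toy sketch of the negotiation setup described above: two agents with private valuations bargain over a shared pool of items. This is purely illustrative; the item values and the simple greedy "split" rule are assumptions for demonstration, not FAIR's actual learned negotiation policy.

```python
# Toy version of the FAIR negotiation task: a pool of books, hats, and
# balls, with each agent holding its own private valuation of the items.
POOL = {"book": 2, "hat": 2, "ball": 3}          # items on the table

VALUES_A = {"book": 3, "hat": 1, "ball": 1}      # agent A's (hidden) values
VALUES_B = {"book": 1, "hat": 3, "ball": 1}      # agent B's (hidden) values

def greedy_split(pool, values_a, values_b):
    """Assign each item to whichever agent values it more.

    A crude stand-in for the negotiation policy the bots learn; the real
    system reaches a split through a dialog, not a one-shot rule.
    """
    take_a, take_b = {}, {}
    for item, count in pool.items():
        if values_a[item] >= values_b[item]:
            take_a[item] = count
        else:
            take_b[item] = count
    return take_a, take_b

def reward(take, values):
    """Score a final allocation under an agent's private valuation."""
    return sum(values[item] * n for item, n in take.items())

take_a, take_b = greedy_split(POOL, VALUES_A, VALUES_B)
print(reward(take_a, VALUES_A), reward(take_b, VALUES_B))  # prints: 9 6
```

The key point the example captures is that each agent optimises its own reward, so a mutually agreeable split must be negotiated rather than dictated.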

FAIR machine learning bots in conversation

The end goal was a chatbot (chatbots themselves are fairly commonplace) that could learn from human interaction to negotiate deals so seamlessly that the end user would not realise they were interacting with a bot. According to FAIR, this goal was met. The catch was that the bots were not incentivised to use English or any other human-comprehensible language, so they drifted into a gibberish shorthand best suited to their task, which, stripped of context, does come off as creepy.

Speaking on the point, FAIR visiting researcher Dhruv Batra stated:

“Agents will drift off understandable language and invent code words for themselves. Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”

Facebook did shut down the conversation, but not because of some threat of imminent nuclear war; the considerations were far more mundane. FAIR researcher Mike Lewis told FastCo that the intention of the project was to have bots that could converse fluently with humans, not with each other. The team is therefore working on training the bots to converse in a more legible manner.
FAIR Dialog rollouts
Although it would behove us to be mindful of such incidents, this is what happens when you take two machine learning agents and let them learn from each other without any constraints. Once put in context, these messages are not half as threatening as some news stories painted them to be. The more immediate danger of such uncontrolled language mutations is debugging: when such a system goes wrong, its internal chatter is unintelligible to the engineers who must diagnose it.
Going forward, we will see more such chatbots come into play as AI takes over more mundane tasks, and yes, this will require human oversight, but oversight does not equate to overreach. Hopefully, it also means we do not entrust anything as dangerous as a network of killer robots and nuclear silos to a master AI.