
Microsoft's "Zo" chatbot picked up some offensive habits

Zo went cray-cray like Tay.

It seems that creating well-behaved chatbots isn't easy. Over a year after Microsoft's "Tay" bot went full-on racist on Twitter, its successor "Zo" is suffering a similar affliction.

Despite Microsoft programming Zo to ignore politics and religion, the folks at BuzzFeed News managed to get the bot to react to the restricted topics with surprising (and extremely controversial) results. One of these exchanges saw Zo refer to the Qur'an as "very violent." It also opined on the death of Osama bin Laden, claiming his "capture" came after "years of intelligence gathering under more than one administration." Microsoft claims the errors in its behavior have now been corrected.

Just last year, Microsoft's Tay bot went from emulating the tone of a supposedly hip teenager to spouting racist tirades within the span of a day. To make matters worse, the entire debacle unfolded on Twitter for everyone to see, forcing Microsoft to disable it. As a result, the company kept Zo within the confines of the messaging app Kik and its mid-sized user base. But it seems the chatbot still managed to pick up some bad habits.

Microsoft blamed Tay's downfall on a coordinated effort by a group of users to corrupt the bot, but it claims no such attempt was made against Zo. The chatbot is still available on Kik, and Microsoft says it has no plans to disable it.
