Microsoft’s new A.I. chatbot went off the rails on Wednesday, posting a series of incredibly racist messages in response to questions from Twitter users.

Redmond introduced "Tay" this week: a bot that responds to users' queries and emulates the casual, jokey speech patterns of a stereotypical millennial. The bot was supposed to learn from its interactions on social media, but it began spewing racist comments within a day of launch, company officials said.

Aimed at 18- to 24-year-olds, Tay was launched as an experiment in conversational understanding, with the chatbot becoming smarter and offering a more personalized experience the more someone interacted with "her" on social media. Microsoft launched Tay on Twitter and on the messaging platforms GroupMe and Kik.

The bot was designed to get smarter as it learned from users' conversations. The major flaw, however, was that it could not assess the offensive nature of its own statements. Some Twitter users seized on this vulnerability, turning the naive chatbot into a racist troll.
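To see how that kind of flaw plays out, here is a minimal, hypothetical Python sketch of a bot that "learns" by storing user phrases and repeating them back. The NaiveEchoBot class, its learn/respond methods, and the blocklist filter are illustrative assumptions for this sketch, not Microsoft's actual implementation; the point is only that without some check on incoming content, whatever users feed the bot can come straight back out.

```python
import random


class NaiveEchoBot:
    """Toy chatbot that 'learns' by storing user phrases and repeating them.

    Illustrative sketch only; this is not how Tay actually worked.
    """

    def __init__(self, blocklist=None):
        self.memory = []                       # phrases absorbed from users
        self.blocklist = set(blocklist or [])  # words the bot refuses to learn

    def learn(self, phrase: str) -> None:
        # The critical step: without this check, abusive input is stored
        # and can be repeated back to anyone later.
        if any(word in phrase.lower() for word in self.blocklist):
            return
        self.memory.append(phrase)

    def respond(self) -> str:
        # Replies are just a random sample of what the bot has absorbed.
        return random.choice(self.memory) if self.memory else "hellooo world"


# A bot with no filter happily repeats whatever it was fed.
unfiltered = NaiveEchoBot()
unfiltered.learn("humans are super cool")
unfiltered.learn("some abusive phrase a troll repeats at the bot")
print(unfiltered.respond())  # may echo the troll verbatim

# The same bot with even a crude blocklist refuses to absorb flagged input.
filtered = NaiveEchoBot(blocklist={"abusive"})
filtered.learn("humans are super cool")
filtered.learn("some abusive phrase a troll repeats at the bot")
print(filtered.respond())  # only ever echoes the benign phrase
```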

"The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical," a Microsoft representative told ABC News in a statement today. "Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."

At present, it is unclear what adjustments are being made or when Tay will be back online. The bot no longer responds to messages; its last post came from its Twitter handle, @TayandYou.

What are your thoughts on Microsoft’s handling of the incident? Let us know in the comments below.