
Despite their impressive capabilities, artificial intelligence (AI) chatbots remain prone to hallucinations and glitches. In recent weeks, xAI's Grok chatbot fell prey to one such bug, which caused it to remark on a supposed "genocide against white citizens" in South Africa.
According to the Elon Musk-owned AI company, an unauthorised change to Grok's software prompted the chatbot to raise the politically controversial topic. The company said it is committed to addressing the issue promptly to avoid offending users.
In a post on X (formerly Twitter), xAI said the alteration was made early on Wednesday and bypassed its standard monitoring process, leading Grok to comment on the sensitive topic in violation of the company's internal policies, Reuters reported.
Grok users took to X to share screenshots of their interactions with the chatbot, which showed it raising the topic of "white genocide" even in unrelated conversations.
Since coming to light, the glitch has fuelled heated debate over political bias, hate speech, and the accuracy of AI chatbots, concerns that have stirred controversy globally since the launch of ChatGPT in 2022.
Critics of South Africa's land acquisition policy, including Musk, have characterised it as discriminatory against white citizens, while the South African government says there is no evidence of persecution.
In response, xAI reportedly plans to publish Grok's system prompts on GitHub, allowing the public to scrutinise them and provide feedback.