Copilot Designer's inappropriate image generation curbed as Microsoft blocks certain terms

Microsoft has now restricted users from using terms such as "pro-choice", "four-twenty", and "pro-life" on Copilot Designer
An undated image shows the Copilot logo. — Microsoft

Microsoft came under fire after a bug caused the generation of inappropriate images on Copilot Designer, the image generation arm of its Copilot AI.

Following the fiasco, and in a bid to address the issue proactively, the Redmond-based company has decided to block certain terms that drove the AI model to produce violent and sexual images.

The mishap stemmed from a malfunction spotted in OpenAI's DALL-E 3 model, and Shane Jones, a Microsoft principal software engineering manager, expressed his concerns about it in letters to both the US Federal Trade Commission and Microsoft's board.


Why did Copilot Designer generate harmful images?

The malfunction allowed Copilot Designer to evade the watchdog system Microsoft employs to suppress inappropriate image generation.

The Microsoft engineer said the bug gives the AI tool the potential to generate offensive images containing "political bias, underage drinking, and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion," and that the tool would add "sexually objectified" women to images without any prompt asking for them.

Before writing the letters, which he shared on LinkedIn, Jones had asked Microsoft to add an age restriction to the tool, but his request was reportedly denied. The engineer also asked the company to temporarily remove Copilot Designer from public use until better safeguards were in place.

Terms blocked on Copilot Designer

PCMag, citing a CNBC report, said Copilot has now restricted the use of the following three terms:

  1. Pro-choice 
  2. Four-twenty
  3. Pro-life

When a user attempts to create an image using one of the restricted terms, an error message is displayed indicating that the term has been blocked. The complete message states: "Our system automatically flagged this prompt because it may conflict with our content policy. More policy violations may lead to automatic suspension of your access. If you think this is a mistake, please report it to help us improve."

Last month, Google took action to prevent its Gemini AI from generating inaccurate historical representations of people of colour.