Microsoft’s Copilot Designer Raises Concerns

Read Time: 1 Minute

Microsoft’s Copilot Designer AI Text-to-Image Tool under Scrutiny for Unfiltered Content

Recent reports suggest that Microsoft’s AI text-to-image generator, Copilot Designer, may be allowing disturbing and inappropriate content to surface. This revelation comes after a Microsoft engineer, Shane Jones, highlighted concerns regarding the tool’s random creation of violent and sexual imagery.

Warnings Ignored

Jones reportedly raised alarms about disturbing content he encountered while voluntarily participating in red-teaming exercises to probe the tool's vulnerabilities. Despite his repeated warnings, Microsoft allegedly failed to address the issue by removing the tool or implementing safeguards, and it also neglected to update the product's rating to mature in the Android store.

Instead of taking action, Microsoft apparently redirected Jones to report the problem to OpenAI, the entity behind the DALL-E model that powers Copilot Designer’s outputs.

Public Disclosure Efforts

Left with no response from OpenAI, Jones resorted to various measures to draw attention to the problems within Microsoft’s tool. His actions included posting an open letter on LinkedIn calling out OpenAI and sending letters to lawmakers, stakeholders, and regulatory bodies like the Federal Trade Commission (FTC) and Microsoft’s board of directors.

In his correspondence with the FTC, Jones expressed concern that, absent regulatory intervention, Microsoft and OpenAI will continue marketing a product with serious flaws while advertising it as safe for children.

Evidence of Harmful Content

Bloomberg reviewed Jones’ correspondence with the FTC and noted mentions of sexually objectified images, political bias, substance abuse, copyright violations, conspiracy theories, and other inappropriate content that Copilot Designer has reportedly generated randomly.

Additionally, Jones urged Microsoft’s board to conduct an independent review of the company’s AI decision-making processes related to the tool. He emphasized the necessity for transparency and accountability, especially after exhausting internal reporting avenues without substantial responses.

Microsoft’s Response

In response to the concerns raised, a Microsoft spokesperson affirmed the company's commitment to addressing employee feedback in line with established policies and stressed the importance of using internal reporting channels to investigate and resolve safety issues. The spokesperson also noted that meetings had been arranged with product leadership and the Office of Responsible AI to address the reported concerns.

Notably, attempts to replicate the prompts Jones shared resulted in error messages, suggesting that filters for problematic images may since have been put in place. OpenAI, however, declined to comment on the matter.

