Microsoft landed in hot water this week over an insensitive AI-generated poll that speculated on the cause of death of a young Australian woman. The poll appeared alongside a news article on Microsoft’s Start portal, sparking outrage and damaging the publisher’s reputation.
Poll Asks for Opinions on Tragic Death
The poll ran next to a story originally published by The Guardian about a 21-year-old water polo coach in Sydney who was found dead at her school with severe head injuries.
The AI-generated poll asked readers to weigh in on potential reasons for the woman’s death, giving three options: murder, accident or suicide.
The speculative poll was deeply inappropriate given the tragic nature of the case and the ongoing investigation into the cause of death.
Backlash Blamed Guardian Journalist
Many outraged readers blamed the offensive poll on the Guardian journalist who wrote the article, demanding the reporter be fired even though the journalist had no role in creating the poll.
The Guardian said the incident caused “significant reputational damage” by falsely tying their organization to the poll. Readers directed harsh criticism at the news outlet over the AI content appended to their story.
Microsoft Deactivates AI Polls, Apologizes
Facing a storm of criticism, Microsoft swiftly removed AI-generated polls from all news articles on its platforms.
The company admitted the poll was an “inappropriate use of generative AI” and promised to examine how its systems produced such insensitive content.
Microsoft President Brad Smith received a letter from The Guardian’s CEO expressing dismay over the reputational harm caused by attributing the poll to Guardian journalism.
Incident Highlights AI Content Moderation Challenges
The disturbing poll underscores the difficulties tech firms face in effectively moderating AI-created content. AI can sometimes generate harmful or unethical output not intended by its developers.
Microsoft will need to implement stronger guardrails to filter out inappropriate AI material and prevent a repeat of this incident. Experts say the technology remains unreliable for handling sensitive topics.
The tech giant pledged to learn from its mistakes and improve oversight of its AI systems. But the event demonstrated how AI can go astray when carelessly applied, creating public relations headaches.
Publisher Suffered Unfair Damage From Microsoft’s AI Misstep
While Microsoft took responsibility for the AI problem, The Guardian bore significant and unfair blowback as readers wrongly assumed the news outlet had added the poll to its own story.
The Guardian had no part in the AI poll but still saw its reputation tarnished and faced public calls to punish its journalist over something Microsoft did.
The situation reveals the tricky position publishers are in as tech platforms augment their content with AI elements outside their control. News outlets bear the brunt of public outrage even over AI they did not create.
Incident Fuels Debate Over AI’s Societal Impact
The poll controversy feeds into broader discussions around AI ethics and responsibilities. It highlighted potential downsides of unleashing immature AI systems on the public.
Microsoft noted it is still early in its journey of developing AI capabilities responsibly. The tech industry faces pressure to adopt safeguards and think critically about AI’s societal ramifications as adoption accelerates.
Generating speculative polls about private individuals’ deaths represents exactly the sort of harmful AI behavior that experts have warned about as the technology proliferates. Microsoft’s misstep provided a sobering case study.
The intense backlash sparked by the poll will compel companies to carefully consider how they deploy AI, especially for sensitive topics. Getting AI ethics right remains an urgent priority for the tech sector.