AI Ethics Under Fire: OpenAI's Deal with the US Military Sparks Controversy and Backlash
The world of artificial intelligence just got a lot more complicated. Just a few minutes ago, Chris Vallance and Laura Cress, technology reporters for the BBC, broke the news that OpenAI is revising its agreement with the US government after facing intense criticism. The initial deal, which allowed the US military to use OpenAI's technology in classified operations, was announced in a manner the company itself has described as 'opportunistic and sloppy'.
But here's where it gets controversial: the agreement has raised crucial questions about the ethical use of AI in warfare and the balance of power between governments and private companies. OpenAI's statement on Saturday said the revised deal includes more safeguards than any previous agreement for classified AI deployments, surpassing even those in Anthropic's arrangements.
And this is the part most people miss: on Monday, OpenAI's CEO, Sam Altman, announced further changes on X, pledging that the company's systems will not be used for domestic surveillance of US citizens. Additionally, intelligence agencies such as the NSA will require a contract modification to access OpenAI's technology.
Altman admitted that the company rushed the initial announcement, saying, 'The issues are super complex and demand clear communication.' He acknowledged the backlash, stating that the company's intention had been to de-escalate the situation, but that the move came across as opportunistic.
The backlash from users was swift once OpenAI's partnership with the Pentagon was revealed. Data from Sensor Tower shows a significant spike in ChatGPT uninstalls following the announcement. Meanwhile, Anthropic's AI model, Claude, surged in popularity and topped Apple's App Store rankings, despite Anthropic having been blacklisted by the Trump administration for refusing to compromise on its principle of not creating fully autonomous weapons.
However, Claude's role in the US and Israel's conflict with Iran has recently come to light, raising further ethical concerns. The Pentagon has remained silent on its dealings with Anthropic.
AI's role in the military is multifaceted, from logistics optimization to rapid information processing. Palantir, a US company, provides AI-powered data analytics tools to governments, including the US, Ukraine, and NATO, for intelligence gathering, surveillance, counterterrorism, and military operations. The UK Ministry of Defence recently signed a substantial contract with Palantir.
The BBC's interview with Palantir's UK operations head, Louis Mosley, highlighted the integration of Palantir's Maven platform into NATO. The platform aggregates military data from many sources, which is then analyzed by AI models such as Claude to support strategic decision-making. However, large language models can make errors or even fabricate information, a phenomenon known as 'hallucinating'.
NATO's Task Force Maven emphasizes human oversight, with Lieutenant Colonel Amanda Gustave insisting that AI-assisted decisions are always supervised by humans. And while Palantir advocates a 'human in the loop' approach, it does not support a blanket ban on autonomous weapons, unlike Anthropic.
With Anthropic now excluded from the Pentagon, Oxford University's Professor Mariarosaria Taddeo warns that the most safety-conscious player is missing from the table. That absence raises important questions about the future of AI ethics in military applications.
Stay tuned as we delve deeper into the world of AI and its implications during the BBC's AI Unpacked week.