According to Platformer, Microsoft has eliminated its entire AI ethics and society team, which was responsible for ensuring that Microsoft’s AI products ship with safeguards to mitigate social harms, during the company’s most recent layoff of 10,000 employees.
According to former employees, the ethics and society team was an important part of Microsoft’s strategy to reduce the risks of using OpenAI technology in Microsoft products. Before it was dissolved, the team created a “responsible innovation toolkit” to help Microsoft engineers forecast and mitigate the harms that AI could cause.
Platformer’s report came just as OpenAI released GPT-4, possibly its most powerful AI model yet, which is already powering Bing search, according to Reuters.
In a statement, Microsoft said it is “committed to developing AI products and experiences in a safe and responsible manner, and does so by investing in people, processes, and partnerships that prioritize this.”
Microsoft described the ethics and society team’s work as “trailblazing,” and stated that the company has spent the last six years investing in and expanding the size of its Office of Responsible AI. That office, as well as Microsoft’s other responsible AI working groups, the Aether Committee and Responsible AI Strategy in Engineering, are still in operation.
Emily Bender, an expert in computational linguistics and the ethical issues of natural-language processing at the University of Washington, joined other critics in condemning Microsoft’s decision to eliminate its ethics and society team. As an outsider, Bender believes Microsoft’s decision was “short-sighted.” “Given how difficult and important this work is, any significant cuts to the people doing the work are damning,” she added.
A breakdown of the ethics and society team’s history
CNBC reported that Microsoft began building teams dedicated to researching responsible AI in 2017. Platformer stated that by 2020, that effort included an ethics and society team that at its peak had 30 members. However, as the AI race with Google heated up, Microsoft began moving most of the ethics and society team’s members to specific product teams last October. Employees told Platformer that this left only seven people dedicated to putting the team’s “ambitious plans” into action.
That was too much work for such a small team, and former team members told Platformer that Microsoft didn’t always act on their recommendations, such as mitigation strategies proposed for Bing Image Creator to prevent it from copying living artists’ brands. (Microsoft has denied this, saying the tool was modified before launch to address the team’s concerns.)
According to Platformer, while the team was being reduced last fall, Microsoft’s corporate vice president of AI, John Montgomery, stated that there was a great deal of pressure to “take these most recent OpenAI models and the ones that come after them and move them into customers’ hands at a very high speed.” Employees expressed “significant” concerns about the potential negative consequences of this speed-based strategy, but Montgomery insisted that “the pressures remain the same.”
Although the ethics and society team’s size was shrinking, Microsoft assured the team that it would not be eliminated. That changed on March 6, when the remaining members were told during a Zoom meeting that it was “business critical” to dissolve the team completely.
Bender added that the decision is particularly disappointing because Microsoft “managed to gather some really great people working on AI, ethics, and societal impact for technology.” For a time, she said, the team appeared to be “actually even fairly empowered at Microsoft.” But according to Bender, Microsoft’s move “basically says” that if the ethics and society team’s recommendations aren’t going to make the company money in the short term, “they have to go.”
According to experts like Bender, Microsoft appears to be less interested in funding a team dedicated to telling the company to slow down when AI models may pose risks, including legal risks. One employee told Platformer that they were concerned about what would happen to the brand and users now that there appeared to be no one to say “no” when potentially irresponsible designs were pushed to users.
“The worst part is that we’ve exposed the business and people to risk,” one former employee told Platformer.
The risky future of responsible AI
When Microsoft relaunched Bing with AI, users quickly discovered that the Bing Chat tool was acting strangely: creating conspiracies, spreading misinformation, and even seemingly slandering people. Until now, tech companies such as Microsoft and Google have been trusted to self-regulate AI tool releases, identifying risks and mitigating harms. However, Bender, who coauthored a paper with former Google AI ethics researcher Timnit Gebru criticizing the large language models that many AI tools rely on, a paper that resulted in Gebru’s dismissal from Google, told Ars that “self-regulation as a model is not going to work.”
To invest in ethical AI teams, “there has to be external pressure,” Bender said.
In light of the “present wave of AI hype,” Bender urges regulators to step in now if society wants more transparency from businesses. Without a strong grasp of how those tools could harm them, people risk jumping on the bandwagon to use popular products, as they did with AI-powered Bing, which now has 100 million monthly active users.
“Every person who uses this, in my opinion, ought to be very aware of what they’re working with,” Bender added. “And I can’t think of any businesses that are doing that well.”
It’s “frightening,” according to Bender, that businesses seem obsessed with cashing in on AI hype claiming that AI “is going to be as massive and chaotic as the Internet was.” Companies should “think about what could go wrong” instead.
According to a Microsoft representative, the Office of Responsible AI is now responsible for this task.
The Office of Responsible AI
Since the layoffs, that office, which offers cross-company support for tasks including analyzing delicate use cases and supporting laws that safeguard customers, has also expanded in size and scope, according to a Microsoft representative.
Bender believes that society should support regulations rather than rely on businesses like Microsoft to uphold moral standards. She tweeted that the better long-term approach is not “micromanaging specific technologies, but rather to establish and protect rights.”
Bender advises users to “never accept AI medical advice, legal counsel, psychotherapy,” or other sensitive applications of AI until there are enough rules in place, more transparency about potential risks, and better information literacy among users.
“That strikes me as extremely, very short-sighted,” Bender remarked of the current AI boom.