OpenAI Forms Committee for AI Safety

OpenAI has established a committee to assess the safety of its artificial intelligence (AI) technology, following significant changes at the company. The decision came after the departure of its most senior executive focused on safety and the subsequent dissolution of his team.

The newly formed committee will spend about three months conducting a comprehensive assessment of the risks posed by OpenAI’s technology. It will then present a report, after which the company plans to publicly share an update on the recommendations it has adopted, in a manner consistent with its safety and security goals, OpenAI said in a blog post on Tuesday.

OpenAI also said it has recently begun training its latest AI model. The firm’s rapid progress in AI has raised concerns about how it manages the potential risks of such advanced technology. Tensions came to a head for a brief period last autumn, when chief executive Sam Altman was temporarily ousted by the board amid disagreements with co-founder and chief scientist Ilya Sutskever over the pace of AI product development and the steps needed to mitigate potential harm.

Those anxieties resurfaced this month when Sutskever and his deputy, Jan Leike, left the company. The pair led a team at OpenAI devoted to addressing the long-term threats posed by superhuman AI. After departing, Leike complained that his team had struggled for resources at OpenAI, a sentiment echoed by other employees who have left the company.

Following Sutskever’s exit, his team was disbanded; the company said its work would continue within its broader research unit, headed by John Schulman, who took on the new title of head of alignment science.

The company has also faced scrutiny over its handling of staff departures, a situation highlighted last week when it dropped a policy that allowed it to cancel the equity of former employees who criticised the company. Responding to criticism from former staff, OpenAI acknowledged the need to address these concerns and said it anticipated further criticism.

OpenAI’s safety committee will comprise three board members – chairman Bret Taylor, Quora CEO Adam D’Angelo and former Sony Entertainment executive Nicole Seligman – along with six employees, including Schulman and Altman. The company said it will continue to consult outside experts, including Rob Joyce, a homeland security adviser in the Donald Trump administration, and John Carlin, a former Justice Department official under President Joe Biden. – Bloomberg
