OpenAI Outlines An AI Safety Plan, Allows The Board To Reverse Decisions

Written by Deepak Bhagat, In Tech, Published On December 19, 2023

OpenAI announced a proposal on its website Monday to address safety in its most advanced models, including allowing the board to overturn safety judgments. The Microsoft-backed company will deploy its latest technology only if it is deemed safe in areas such as cybersecurity and nuclear threats. An internal advisory panel will analyze safety reports and pass them to management and the board; while executives make the final call, the board can overturn their decisions. The company has also pledged fixes for user-reported issues with GPT-4. Since ChatGPT's debut a year ago, AI researchers and the public alike have worried about the technology's risks.

Generative AI impresses with its ability to write poetry and essays, but it also raises safety concerns because it can spread falsehoods and manipulate people. In April, a group of AI industry leaders and experts issued an open letter calling for a six-month pause on developing systems more powerful than OpenAI's GPT-4, citing risks to society.

A May Reuters/Ipsos poll indicated that over two-thirds of Americans worry about AI's adverse effects, and 61% believe it could threaten civilization.

OpenAI's newly announced "preparedness" team will continuously evaluate its AI systems across four risk categories — cybersecurity and chemical, nuclear, and biological threats — and work to reduce any dangers the technology may pose. The framework defines "catastrophic" risks as "any risk which could result in hundreds of billions of dollars in economic damage or lead to the severe harm or death of many individuals", and the company will track these risks on an ongoing basis.

Aleksander Madry, who leads the preparedness team and is on leave from MIT, told Bloomberg News that his team will send monthly reports to a new internal safety advisory group. That panel will evaluate the team's work and make recommendations to Mr. Altman and the company's board, which was reshaped after the CEO's brief ouster.

These reports will help Mr. Altman and his leadership team decide whether to release a new AI system, though the board can overrule them. The "preparedness" team, unveiled in October, is one of three AI safety groups at OpenAI: the "safety systems" team examines current products such as GPT-4, while the "superalignment" team studies hypothetical, extremely powerful future AI systems.

Mr. Madry said his team will repeatedly rate OpenAI's most advanced, unreleased AI models as "low", "medium", "high", or "critical" across various hazard categories. The team will also implement mitigations to reduce those risks and evaluate their effectiveness. Under the new criteria, OpenAI will deploy only models rated "medium" risk or lower.
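
The deployment rule described above can be sketched as a simple threshold check. This is purely an illustration based on the article's description — the category names, function name, and data layout are assumptions, not OpenAI's actual implementation:

```python
# Hypothetical sketch of the "deploy only 'medium' or lower" gate.
# Risk levels in ascending order of severity, per the article.
RISK_LEVELS = ["low", "medium", "high", "critical"]

def may_deploy(ratings):
    """Return True only if every tracked risk category is rated
    'medium' or lower (illustrative helper, not OpenAI's real API)."""
    threshold = RISK_LEVELS.index("medium")
    return all(RISK_LEVELS.index(level) <= threshold
               for level in ratings.values())

# Example: one category rated "high" blocks deployment.
ratings = {
    "cybersecurity": "low",
    "chemical": "medium",
    "nuclear": "low",
    "biological": "high",
}
print(may_deploy(ratings))  # False: "biological" exceeds the threshold
```

The point of the check is that the gate is conjunctive: a single category above "medium" is enough to block release, regardless of how low the other ratings are.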

Mr. Madry expects other companies to use OpenAI's framework to assess the risks of their own AI models. He said the rules formalize many of the evaluation processes OpenAI already followed, and that he and his team worked out the details over the previous two months with feedback from within the company.
