
ChatGPT-maker OpenAI’s new guidelines can help avoid ‘catastrophic risks’ of AI


Soon after Sam Altman’s return as CEO of the Microsoft-backed firm, OpenAI on Monday released a set of guidelines warning about the “catastrophic risks” of artificial intelligence.

OpenAI has issued new guidelines to evaluate the risks of AI (AFP)

The newest guidelines published by ChatGPT-maker OpenAI are for gauging “catastrophic risks” from artificial intelligence in models currently being developed. The document, titled “Preparedness Framework,” says existing studies fall short when it comes to evaluating the risks of AI.


OpenAI, in its latest guidelines, said, “We believe the scientific study of catastrophic risks from AI has fallen far short of where we need to be.” The guidelines further state that the framework should “help address this gap.”

A monitoring and evaluations team, announced in October, will focus on “frontier models” currently in development whose capabilities exceed those of the most advanced existing AI software, in an attempt to evaluate the risks of the new technology.

The evaluations team will assess each new model and assign it a level of risk, from “low” to “critical,” in four main categories. Only models with a risk score of “medium” or below can be deployed, according to the framework.

Four categories of risks for AI models

The first category concerns cybersecurity and the model’s ability to carry out large-scale cyberattacks.

The second will measure the software’s propensity to help create a chemical mixture, an organism (such as a virus) or a nuclear weapon, all of which could be harmful to humans.

The third category concerns the persuasive power of the model, such as the extent to which it can influence human behavior.

The fourth and last category of risk concerns the potential autonomy of the model, in particular whether it can escape the control of the programmers who created it.

The results of these assessments will be sent to OpenAI’s Safety Advisory Group, which will make recommendations to CEO Sam Altman or another member of the board.

(With inputs from AFP)

