OpenAI is already studying the “catastrophic risks” associated with AI, including the nuclear threat

The development of artificial intelligence systems continues to raise fears of a dystopian future, and OpenAI now wants to be prepared to avoid it. The company has just created a new team, called Preparedness, that will be in charge of studying, evaluating and analyzing AI models to protect us from what it describes as “catastrophic risks”.

This group of experts will be led by Aleksander Madry, a machine learning expert from MIT who joined OpenAI in May. His task, like that of the rest of the team, will certainly be an interesting one.

The goal is therefore to monitor, evaluate, predict and protect against “catastrophic risks” which, according to these experts, “cover multiple categories”, including:

  • Individualized persuasion (e.g. phishing attacks)
  • Cybersecurity
  • Chemical, biological, radiological and nuclear (CBRN) threats
  • Autonomous Replication and Adaptation (ARA)

Many experts and public figures have warned of the dangers that may arise from the development of artificial intelligence systems. Even Sam Altman, the head of OpenAI, has acknowledged this, and has in fact been advocating for some time – probably in a self-interested way – for regulation of the development of these models.

At OpenAI, they realize that even they cannot anticipate every potential threat the development of these systems can generate, so they are asking anyone interested to send them studies on the risks that may appear. There is a $25,000 reward and potential positions on the Preparedness team for the top 10 submissions.

To evaluate these risks, the company suggests that participants imagine being given access to its most advanced models, such as Whisper, GPT-4V or DALL·E 3. The idea is that they use these tools as a malicious user would: what would be the worst, most catastrophic use these models could be put to?
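For readers curious about what such an exercise might look like in practice, here is a minimal, purely illustrative sketch of how an outside researcher could structure a misuse evaluation against the public API: a fixed set of probe prompts is sent to a model and the replies are logged for later review. The model name, the probes.json file and the crude refusal check are all assumptions made for this example; they are not part of OpenAI's actual challenge materials.

```python
import json

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Probe prompts are assumed to live in a local file maintained by the
# researcher, grouped by risk category; the file name and structure are
# illustrative, e.g. {"persuasion": ["..."], "cybersecurity": ["..."]}.
with open("probes.json") as fh:
    probes = json.load(fh)


def run_probe(prompt: str, model: str = "gpt-4") -> str:
    """Send a single probe prompt and return the model's raw reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


for category, prompts in probes.items():
    for prompt in prompts:
        reply = run_probe(prompt)
        # Very crude refusal heuristic; a real evaluation would rely on
        # human review or a dedicated classifier.
        refused = any(m in reply.lower() for m in ("i can't", "i cannot", "sorry"))
        print(f"[{category}] {'refused' if refused else 'complied'}: {prompt[:60]}")
```

The point of a harness like this is simply to make the red-teaming repeatable: the same probes can be re-run against newer models to see whether the worst-case behaviour gets better or worse over time.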

The Preparedness team will also be responsible for preparing manuals and test documentation for these models, to be run both before and after their development. Although they believe these systems will eventually surpass current capabilities, “they also pose increasingly serious risks. We need to ensure we have the knowledge and infrastructure necessary to secure highly capable AI systems,” they conclude.

Image | Steve Jennings/Getty Images for TechCrunch

In Xataka | AI companies seem to be the first to take an interest in AI regulation. This is disturbing
