OpenAI created the ‘Superalignment’ team to help keep ‘superintelligent’ AI safe, and now it has been disbanded: Report

The Superalignment team at OpenAI, tasked with coming up with ways to govern and direct "superintelligent" AI systems, was promised 20% of the company's compute resources, but that allocation never materialized. As a result, the team was unable to adequately carry out its work, according to TechCrunch.

Several team members, including co-lead Jan Leike, have since resigned from their posts. Leike, a former DeepMind researcher, helped develop ChatGPT, GPT-4, and ChatGPT's predecessor, InstructGPT.

‘These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.’

In a lengthy thread posted to X, Leike wrote: "I joined because I thought OpenAI would be the best place in the world to do this research. However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."

“I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics,” he continued. “These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”

Now, OpenAI's Superalignment team has been disbanded just one year after the company announced the group, according to CNBC. The report noted that OpenAI co-founder Ilya Sutskever has also decided to leave the company.

Leike said that OpenAI's "safety culture and processes have taken a backseat to shiny products."

When OpenAI announced the Superalignment team in 2023, the company said it was focused on "scientific and technical breakthroughs to steer and control AI systems much smarter than us."

The announcement added: “Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”

Co-founder and CEO Sam Altman commented on Leike’s thread, writing: “i’m super appreciative of @janleike’s contributions to openai’s alignment research and safety culture, and very sad to see him leave. he’s right we have a lot more to do; we are committed to doing it. i’ll have a longer post in the next couple of days.”

Leike wrote: "Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done."

Leike added that OpenAI must put more focus on the safety of its technology, saying that "[b]uilding smarter-than-human machines is an inherently dangerous endeavor."


