One of the tech CEOs who signed a letter calling for a six-month pause on AI labs training powerful systems warned that such technology threatens “human extinction.”
“As stated by many, including these models’ developers, the risk is human extinction,” Connor Leahy, CEO of Conjecture, told Fox News Digital this week. Conjecture describes itself as working to make “AI systems boundable, predictable and safe.”
Leahy is one of more than 2,000 experts and tech leaders who signed a letter this week calling for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” The letter is backed by Tesla and Twitter CEO Elon Musk, as well as Apple co-founder Steve Wozniak, and argues that “AI systems with human-competitive intelligence can pose profound risks to society and humanity.”
Leahy said that “a small group of people are building AI systems at an irresponsible pace far beyond what we can keep up with, and it is only accelerating.”
“We don’t understand these systems, and larger ones will be even more powerful and harder to control. We should pause now on larger experiments and redirect our focus towards developing reliable, bounded AI systems.”
Leahy pointed to previous statements from AI research leader Sam Altman, CEO of OpenAI, the lab behind GPT-4. OpenAI’s latest deep learning model “exhibits human-level performance on various professional and academic benchmarks,” according to the lab.
Leahy noted that earlier this year, Altman told Silicon Valley media outlet StrictlyVC that the worst-case scenario for AI is “lights out for all of us.”
Leahy said that even as far back as 2015, Altman warned on his blog that “development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.”
The heart of the argument for pausing AI research is to give policymakers and the labs themselves time to develop safeguards, so that researchers can keep developing the technology without the reported threat of upending lives across the world through disinformation.
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter states.
Currently, the U.S. has a handful of AI-related bills before Congress, some states have also tried to tackle the issue, and the White House has published a blueprint for an “AI Bill of Rights.” But experts Fox News Digital previously spoke to said that companies currently face no consequences for violating such guidelines.
When asked whether the tech community is at a critical moment to pull the reins on powerful AI technology, Leahy said that “there are only two times to react to an exponential.”
“Too early or too late. We’re not too far from existentially dangerous systems, and we need to refocus before it’s too late.”
“I hope more companies and developers will be on board with this letter. I want to make clear that this only affects a small section of the tech field and the AI field in general: only a handful of companies are focusing on hyperscaling to build God-like systems as quickly as possible,” Leahy added in his comment to Fox News Digital.
OpenAI did not immediately respond to Fox News Digital regarding Leahy’s comments on AI risking human extinction.