AI’s ability to learn poses challenge to regulators, companies: ‘A little bit scary’

The capacity of artificial intelligence systems to learn things they were never explicitly taught will pose a significant challenge both to the companies creating and marketing these tools and to the federal regulators tasked with protecting the consumers who use them, a member of the Federal Trade Commission predicted.

“Personally, and I say this with respect, I do not see the existential threats to our society that others do,” FTC Commissioner Alvaro Bedoya said in a recent speech made available this week. “Yet when you combine these statements with the unpredictability and inexplicability of these models, the sum total is something that we as consumer protection authorities have never reckoned with.”

Bedoya was speaking to the International Association of Privacy Professionals about the tendency of generative AI systems to pick up knowledge and intuition about subjects even when programmers aren’t focusing on those topics. Bedoya said even some developers have described this aspect of AI as “a little bit scary.”


As an example, Bedoya noted that even though the second iteration of OpenAI’s ChatGPT wasn’t fed much information on how to translate English to French, it could do the job reasonably well. Some large language models like ChatGPT can also play chess, even though technical experts aren’t sure how they picked up that skill.

Bedoya said this unpredictable nature of AI could mean future problems for the companies creating these systems, and for the FTC as it tries to protect consumers.


“Let me put it this way,” Bedoya said. “When the iPhone was first released, it was many things: a phone, a camera, a web browser, an email client, a calendar, and more. Imagine launching the iPhone – having 100 million people using it – but not knowing what it can do or why it can do those things, all while claiming to be frightened of it. That is what we’re facing today.”

Bedoya also made it clear that the FTC is prepared to regulate these products using its mandate to protect consumers from deceptive claims and injury. And he warned that AI developers would likely run into trouble with his agency if they oversell what AI is able to achieve.

One possible pitfall for these companies is leading people to believe that AI systems are sentient. He said that while technical experts will say these systems are not emoting and are merely imitating human speech, companies could still be liable if consumers come away with a different perception.

“The law doesn’t turn on how a trained expert reacts to a technology – it turns on how regular people understand it,” he said.


“I urge companies to think twice before they deploy a product that is designed in a way that may lead people to feel they have a trusted relationship with it or think that it is a real person,” he said. “I urge companies to think hard about how their technology will affect people’s mental health – particularly kids and teenagers.”

Bedoya also warned that under current law, companies won’t be able to defend themselves from consumer complaints by saying their AI system developed unanticipated knowledge or skills.

“The inexplicability or unpredictability of a product is rarely a legally cognizable defense,” he said. “We have frequently brought actions against companies for the failure to take reasonable measures to prevent reasonably foreseeable risks.”
