Apple Legend Raises Red Flags on AI

By Brian O’Connell

Artificial intelligence is growing by leaps and bounds: spending in the sector is expected to reach $154 billion in 2023, according to International Data Corp., a roughly 27% increase over 2022.

While the future seems bright for AI, some technology heavy hitters are warning that artificial intelligence could pose massive societal problems if it goes unregulated.


Take Steve Wozniak, co-founder of Apple (AAPL). The industry legend issued a warning of his own in a recent interview with the BBC.

“AI is so intelligent it’s open to the bad players, the ones that want to trick you about who they are,” Wozniak said.

Wozniak told the BBC he doubts AI will replace humans, calling the technology “emotionless.” He did say, however, that AI applications like ChatGPT can mimic human behavior so expertly that they can cause myriad headaches.

“Since ChatGPT sounds so intelligent . . . a human really has to take responsibility for what is generated by AI,” he noted.

Sounding an alarm on AI won’t be enough if public policymakers don’t move aggressively to regulate the technology before bad actors exploit it, Wozniak said. Large technology firms won’t be much help, as they “feel they can get away with anything,” he noted.

Turning away from regulatory and industry oversight would be a big mistake that would significantly harm the public, Wozniak added. “I think the forces that drive for money usually win out, which is sort of sad,” he said.

The BBC interview wasn’t the first time Wozniak called out the AI sector.

In March, Wozniak; Elon Musk, a co-founder of OpenAI, the company behind ChatGPT; and hundreds of other technology and business leading lights published an open letter under the Future of Life Institute banner, calling for a six-month pause on advanced AI development.

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” the Future of Life Institute letter stated. “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk the loss of control of our civilization?”

“Such decisions must not be delegated to unelected tech leaders,” the letter noted.

©The Global Calcuttan
All Rights Reserved