American philanthropist Bill Gates speaks during the Seventh Global Fund Replenishment Conference in New York on September 21, 2022.

Mandel Ngan | AFP | Getty Images

Microsoft co-founder Bill Gates believes in the potential of AI, and he frequently says that models like the one at the heart of ChatGPT are the most important advance in technology since the personal computer.

The technology’s arrival can lead to problems like deepfakes, biased algorithms, and cheating at school, he says, but he predicts that the problems it creates are solvable.

“One thing that is clear from all that has been written so far about the dangers of AI — and much has been written — is that no one has all the answers,” Gates wrote in a blog post this week. “Another thing that is clear to me is that the future of AI is not as bleak as some think or as rosy as others think.”

By airing a middle-of-the-road view of AI’s risks, Gates could shift the debate over the technology away from doomsday scenarios and toward more limited regulation that addresses present-day harms, just as governments around the world grapple with how to regulate the technology and its potential downsides. On Tuesday, for example, senators received a classified briefing on artificial intelligence and the military.

Gates is one of the most prominent voices on AI and its regulation. He also remains closely affiliated with Microsoft, which has invested in OpenAI and integrated ChatGPT into its core products, including Office.

In the blog post, Gates points to how society has adapted to major changes in the past to argue that humans will adapt to artificial intelligence as well.

“For example, it will have a huge impact on education, as portable calculators did a few decades ago and, more recently, allowing computers into the classroom,” Gates writes.

The kind of regulation the technology needs, Gates suggests, is “speed limits and seat belts.”

“Soon after the first car hit the road, there was the first crash. But we didn’t ban cars—we adopted speed limits, safety standards, licensing requirements, drink-driving laws, and other rules of the road,” Gates wrote.

Gates worries about some of the challenges arising from the adoption of the technology, including how it could change people’s jobs, and about “hallucinations,” the tendency of models like ChatGPT to invent facts, documents, and people.

As an example, he cites deepfakes, which use artificial intelligence models to let people easily create fake videos impersonating someone else and which, he writes, can be used to deceive people or sway elections.

But he also holds out hope that people will get better at recognizing deepfakes, and he cites deepfake detectors being developed by Intel and by DARPA, a government research funder. He proposes regulation that would clearly define which types of deepfakes are legal.

He also worries about AI being used to write code that searches for the kinds of vulnerabilities needed to hack computers, and he proposes creating a global regulatory body along the lines of the International Atomic Energy Agency.
