Meta’s chief artificial intelligence scientist Yann LeCun, speaking at the Viva Tech conference in Paris, said that AI does not currently have human-level intelligence but could reach it one day.

Existing AI systems like ChatGPT do not have human-level intelligence and are barely smarter than a dog, Meta’s AI chief said, as debate rages over the dangers of the fast-growing technology.

ChatGPT, developed by OpenAI, is based on a so-called large language model. This means the AI system has been trained on huge amounts of text data, allowing users to pose questions and requests to it while the chatbot responds in language we understand.

The rapid development of artificial intelligence has worried senior technologists that the technology, if unchecked, could pose risks to society. Tesla CEO Elon Musk said this year that artificial intelligence is “one of the greatest risks to the future of civilization.”

At the Viva Tech conference on Wednesday, Jacques Attali, a French economic and social theorist who writes about technology, said whether AI is good or bad will depend on its use.

“If you use AI to develop more fossil fuels, it will be terrible. If you use AI (to develop) more terrible weapons, it will be terrible,” Attali said. “On the contrary, AI can be amazing for health, amazing for education, amazing for culture.”

On the same panel, Yann LeCun, chief artificial intelligence scientist at Facebook parent company Meta, was asked about the current limitations of AI. He focused on generative AI built on large language models, saying such systems are not very intelligent because they are trained only on language.

“These systems are still very limited, and they don’t have any understanding of the underlying reality of the real world, because they’re purely trained on text, a massive amount of text,” LeCun said.

“Most of human knowledge has nothing to do with language … so that part of the human experience is not captured by AI.”

LeCun added that AI systems can now pass the United States bar exam, which is required to become a lawyer. However, he said, AI cannot load a dishwasher, which a 10-year-old could “learn in 10 minutes.”

“What that tells you is we are missing something really big … to reach not just human-level intelligence, but even dog intelligence,” LeCun concluded.

Meta’s AI chief said the company is working on training AI on video rather than just language, which is a far tougher task.

In another example of AI’s current limitations, he said, a five-month-old baby will look at an object that appears to float and not think much of it. A nine-month-old baby, however, will look at the same object and be surprised, because it understands that objects should not float.

“We have no idea how to reproduce this capacity with machines today. Until we can do that, we are not going to have human-level intelligence; we are not going to have dog-level or cat-level (intelligence),” LeCun said.

Will robots take over?

In a pessimistic tone about the future, Attali said, “It is known that humanity will face many dangers in the next three or four decades.”

He cited climate disasters and war among his top fears, along with the concern that robots could “turn against us.”

During the discussion, Meta’s LeCun said that there will one day be machines smarter than humans, but that this should not be seen as a danger.

“We should not see this as a threat; we should see this as something very beneficial. Every one of us will have an AI assistant … it will be like a staff assisting you in your daily life that is smarter than yourself,” LeCun said.

The scientist added that these AI systems need to be “basically controllable and subservient to humans.” He also rejected the idea that robots would take over the world.

“The fear that has been popularized by science fiction is that if robots are smarter than us, they are going to want to take over the world … There is no correlation between being smart and wanting to take over,” LeCun said.

Ethics and regulation of artificial intelligence

Weighing the risks and opportunities of AI, Attali concluded that guardrails are needed for the technology’s development. But he was not sure who would provide them.

“Who will set the boundaries?” he asked.
