Google Cloud CEO Thomas Kurian speaks at the company’s 2019 Cloud Computing Conference.
Michael Short | Bloomberg | Getty Images
LONDON – Google is in productive early conversations with EU regulators about the bloc’s ground-breaking AI regulations and how it and other companies can build AI safely and responsibly, the company’s head of cloud computing told CNBC.
The internet search pioneer is working on tools to address a number of the key concerns surrounding AI — including the worry that it may become difficult to distinguish between human-generated and AI-generated content.
“We are having productive conversations with the EU government, because we want to find a path forward,” Thomas Kurian said in an interview, speaking with CNBC exclusively from the company’s London office.
“These technologies have risks, but they also have a huge potential to generate real value for people.”
Kurian said Google is working on technologies to make sure people can distinguish between human-generated and AI-generated content. The company revealed its “watermark” solution that labels AI-generated images at its I/O event last month.
The move hints at how Google and other large technology companies are working on ways to bring private-sector-driven oversight to AI ahead of formal regulation of the technology.
AI systems are evolving at a rapid pace, with tools like ChatGPT and Stable Diffusion capable of producing things that were beyond the capabilities of previous iterations of the technology. ChatGPT and tools like it are increasingly being used by computer programmers as companions to help them produce code, for example.
However, a major concern of EU policymakers and regulators is that generative AI models have lowered the barrier to mass production of content based on copyright-infringing material, and could harm artists and other creative professionals who rely on royalties for income. Generative AI models are trained on huge sets of publicly available internet data, much of which is copyrighted.
Earlier this month, members of the European Parliament approved legislation aimed at regulating artificial intelligence in the bloc. The law, known as the EU AI Act, includes provisions to ensure that training data for generative AI tools does not violate copyright laws.
“We have a lot of European customers who are building generative AI applications using our platform,” Kurian said. “We continue to work with the EU government to make sure we understand their concerns.”
“We provide tools, for example, to recognize whether content was created by a model. That is just as important as saying copyright matters, because if you can’t tell what was created by a human and what was created by a model, you won’t be able to enforce it.”
Artificial intelligence has become a key battleground in the global technology industry as companies compete for a leading role in developing the technology — particularly generative AI, which can produce new content from user prompts.
What generative AI can do, from producing lyrics to creating code, has astounded academics and boards of directors.
But it has also raised concerns about job displacement, misinformation, and bias.
Several senior researchers and employees within Google’s own ranks have expressed concern about the pace at which AI development is moving.
Google employees called the company’s announcement of Bard, its conversational AI chatbot rival to Microsoft-backed OpenAI’s ChatGPT, “hasty,” “a dud,” and “un-Googley” in messages on the internal forum Memegen, for example.
Several former prominent Google researchers have also sounded the alarm about the company’s handling of AI and what they say is a lack of interest in the ethical development of the technology.
They include Timnit Gebru, the former co-lead of Google’s AI ethics team, who departed after raising the alarm about the company’s internal guidelines on AI ethics, and Geoffrey Hinton, the machine-learning pioneer known as the “Godfather of AI,” who recently left the company over concerns that its aggressive push into AI was spiraling out of control.
To that end, Google’s Kurian wants global regulators to know the company is not afraid of regulation — it welcomes it.
“We’ve said broadly that we welcome regulation,” Kurian told CNBC. “We believe these technologies are powerful enough, and they should be regulated in a responsible way, and we are working with governments in the European Union, the United Kingdom and many other countries to ensure that they are adopted in the right way.”
Elsewhere in the global rush to regulate AI, the U.K. has introduced a framework of AI principles for regulators to enforce themselves, rather than writing formal rules into law. In the United States, President Joe Biden’s administration and several U.S. government agencies have also proposed frameworks for regulating AI.
A chief gripe among tech industry insiders, however, is that regulators are not the fastest movers when it comes to responding to innovative new technologies. That is why many companies are devising their own approaches to building guardrails around AI, rather than waiting for formal laws to pass.
Watch: AI isn’t hype, it’s a “transformational technology,” says Wedbush Securities’ Dan Ives