Washington: Amazon, Google, Meta, Microsoft and other companies leading the development of AI technology have agreed to meet a set of AI safeguards brokered by President Joe Biden’s administration.
The White House said on Friday that it has secured voluntary commitments from seven US companies aimed at ensuring the safety of their artificial intelligence products before they are launched. Some of the commitments require third-party oversight of the way commercial AI systems operate, though they don’t detail who will audit the technology or hold companies accountable.
A rush of commercial investment in generative AI tools that can write persuasive human-like text and output new images and other media has fueled public fascination as well as concern about their ability to deceive people and spread disinformation, among other dangers.
The White House said in a statement that the four tech giants, along with OpenAI, maker of ChatGPT, and startups Anthropic and Inflection, have committed to security testing “conducted in part by independent experts” to protect against key risks, such as biosecurity and cybersecurity.
The companies have also committed to methods for reporting vulnerabilities in their systems and to using digital watermarks to help distinguish AI-generated images, known as deepfakes, from real ones.
The White House said they would publicly report flaws and risks in their technology, including impacts on fairness and bias.
The voluntary commitments are intended as an immediate way to address risks before a long-term push to get Congress to pass laws regulating the technology.
Some advocates for AI regulation said Biden’s move is a start, but more needs to be done to hold the companies and their products accountable.
“History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations,” said a statement from James Steyer, founder and CEO of the nonprofit Common Sense Media.
Senate Majority Leader Chuck Schumer, D-N.Y., said he would introduce legislation to regulate AI. He has held a number of briefings with government officials to educate senators on an issue that has attracted bipartisan interest.
A number of tech executives have called for regulation, and several of them went to the White House in May to speak with Biden, Vice President Kamala Harris and other officials.
But some experts and upstart competitors worry that the kind of regulation being floated could be a boon for deep-pocketed first-movers led by OpenAI, Google and Microsoft, as smaller players are elbowed out by the high cost of making their AI systems, known as large language models, adhere to regulatory constraints.
The BSA software trade group, which includes Microsoft as a member, said Friday that it welcomed the Biden administration’s efforts to set rules for high-risk AI systems.
“Enterprise software companies look forward to working with the administration and Congress to enact legislation that addresses the risks associated with artificial intelligence and enhances its benefits,” the group said in a statement.
A number of countries have been looking at ways to regulate AI, including European Union lawmakers who were negotiating sweeping AI rules for the 27-nation bloc.
UN Secretary-General António Guterres recently said the UN is the “perfect place” to adopt global standards, and appointed a board to report on global AI governance options by the end of the year.
Guterres also said he welcomes calls by some countries to establish a new UN body to support global efforts to govern AI, inspired by models such as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.
The White House said on Friday that it has already consulted on the voluntary commitments with a number of countries.


