People walk past the New York Times building in New York City.

Andrew Burton | Getty Images

Newsroom leaders are preparing for chaos as they consider guardrails to protect their content against disinformation and AI-driven aggregation.

The New York Times and NBC News are among the organizations in preliminary talks with other media companies, big tech platforms and Digital Content Next, the industry's digital news trade organization, to develop rules around how their content can be used by natural-language artificial intelligence tools, according to people familiar with the matter.

The latest trend — generative artificial intelligence — can produce seemingly novel blocks of text or images in response to complex queries such as "Write an earnings report in the style of the poet Robert Frost" or "Draw a picture of an iPhone as painted by Vincent van Gogh."

Some of these generative AI programs, such as OpenAI's ChatGPT and Google's Bard, are trained on vast amounts of publicly available information from the internet, including journalism and copyrighted art. In some cases, the material they produce is lifted almost verbatim from these sources.

Publishers fear these programs could undermine their business models by publishing repurposed content without credit, and by creating an explosion of inaccurate or misleading content that erodes trust in online news.

Digital Content Next, which represents more than 50 of the largest U.S. media organizations, including The Washington Post and Wall Street Journal parent News Corp., this week published seven principles for the "Development and Governance of Generative AI." They address issues around safety, compensation for intellectual property, transparency, accountability and fairness.

The principles are meant to be a vehicle for future discussion rather than industry-specific rules. They include: "Publishers are entitled to negotiate for and receive fair compensation for use of their IP" and "Deployers of GAI systems should be held accountable for system outputs." Digital Content Next shared the principles with its board and relevant committees on Monday.

News outlets grapple with artificial intelligence

Digital Content Next's "Principles for Development and Governance of Generative AI":

  1. GAI developers and publishers must respect the rights of the creators of their content.
  2. Publishers have the right to negotiate and obtain fair compensation for the use of their intellectual property.
  3. Copyright laws protect creators from unauthorized use of their content.
  4. GAI systems must be transparent to publishers and users.
  5. Deployers of GAI systems should be accountable for the output of those systems.
  6. GAI systems must not create or risk creating unfair market or competition outcomes.
  7. GAI systems must be secure and address privacy risks.

Jason Kint, CEO of Digital Content Next, said the urgency behind building a system of rules and standards for generative AI is intense.

"I've never seen anything go from emerging issue to dominating so many workstreams in my time as CEO," said Kint, who has led Digital Content Next since 2014. "We've had 15 meetings since February, with all kinds of media organizations."

Axios CEO Jim VandeHei said the question of how generative AI will unfold in the coming months and years has dominated media conversations.

“Four months ago, I wasn’t thinking or talking about AI. Now, that’s all we talk about,” VandeHei said. “If you own a company and AI isn’t something you care about, you’re crazy.”

Lessons from the past

Generative AI presents both potential efficiencies and threats to the news business. The technology can create new content — such as games, travel guides and recipes — that benefits consumers and helps cut costs.

But the media industry is equally concerned about the threats AI poses. Digital media companies have seen their business models flounder in recent years as social media and search firms, primarily Google and Facebook, reaped the rewards of digital advertising. Vice declared bankruptcy last month, and BuzzFeed shares have traded below $1 for more than 30 days, prompting a delisting notice from the Nasdaq stock exchange.

Against that backdrop, media leaders such as IAC chairman Barry Diller and News Corp. CEO Robert Thomson are urging big tech companies to pay for any content they use to train AI models.

Thomson made his case during opening remarks at the International News Media Association's World Congress in New York on May 25.

At a Semafor conference in New York in April, Diller said the news industry should band together to demand payment, or threaten litigation under copyright law, sooner rather than later.

"What you have to do is get the industry to say you cannot scrape our content until you work out systems where the publisher gets paid," Diller said. "If you actually take these [AI] systems and don't connect them to a process where there's some way of getting compensated for it, all will be lost."

Fighting misinformation

Beyond balance-sheet concerns, AI's most pressing issue for news organizations is alerting users to what's real and what isn't.

"Broadly, I'm optimistic about this as a technology for us, with the big caveat that the technology poses huge risks for journalism when it comes to verifying content authenticity," said Chris Berend, head of digital at NBC News Group, who added that he expects AI to work alongside humans in the newsroom rather than replace them.

There are already signs of AI's potential to spread disinformation. Last month, a verified Twitter account called "Bloomberg Feed" tweeted a fake photograph of an explosion at the Pentagon outside Washington, D.C. While the image was quickly debunked, it led to a brief dip in stock prices. More advanced fakes could create greater confusion and spark unnecessary panic. They could also damage brands: "Bloomberg Feed" has no affiliation with the media company Bloomberg LP.

"It's the beginning of what is going to be hellfire," VandeHei said. "This country is going to see a mass proliferation of mass garbage. Is this real, or is this not real? Add this to a society already wondering what is or isn't real."

The US government may regulate Big Tech’s development of AI, VandeHei said, but the pace of regulation is likely to lag the speed at which the technology is being used.

Technology companies and newsrooms are working to combat potentially destructive AI fakes, such as the recently fabricated image of Pope Francis wearing a large puffer coat. Google said last month it would embed information in images that lets users detect whether they were created with AI.

Chris Looft, coordinating producer of visual verification at Disney's ABC News, said the network "has a team working around the clock, checking the veracity of online video."

"Even with generative AI tools or AI models built on large language models like ChatGPT, it doesn't change the fact that we're already doing this work," Looft said. "The process stays the same: combining reporting with visual techniques to confirm a video's authenticity. That means picking up the phone and talking to eyewitnesses or analyzing metadata."

Ironically, one of the first uses of AI to take over human labor in the newsroom may be fighting AI itself. NBC News' Berend predicts an arms race in the coming years of AI policing AI, as both media and technology companies invest in software that can sort the real from the fake and label it properly.

"The fight against disinformation will take computing power," Berend said. "One of the central challenges of content verification is a technology challenge. It's so big that it has to be done through partnership."

The confluence of rapidly evolving, powerful technology, input from dozens of large companies and U.S. government regulation has led some media executives to privately acknowledge that the coming months could be very messy. The hope is that today's more mature digital industry can reach workable solutions faster than in the early days of the internet.

Disclosure: NBCUniversal is the parent company of the NBC News Group, which includes both NBC News and CNBC.

WATCH: We need to regulate generative AI

