Will generative AI applications like ChatGPT make us more productive, save us time, and help us be healthier, smarter, and happier? The answer, for now, is "maybe."
Generative AI is as real as it gets in terms of revolutionizing work, culture, and the nature of creativity. It’s transformative for many industries and will likely become as ubiquitous in our homes as Siri and Alexa.
Star Wars creator George Lucas clearly saw this coming. If you ask me, I predict that this year our children will tear the wrapping off a Christmas present to find a talking robot, whether a tiny R2-D2 or an elegant golden C-3PO powered by generative AI, tucked under the tree.
I root for generative AI not just as a technology executive but as a parent. The idea of my kids playing with AI doesn't scare me. I would rather they engage with AI that indexes trusted information than learn about science, healthcare, and life hacks on TikTok. Likewise, I'd rather my kids sharpen their reasoning skills with video games like Zelda than with mindless TV viewing.
Despite its popularity, generative AI is still in its infancy.
Here are two things that need to happen for AI to usher in an entirely new boom — in a way that will benefit our children, as well as technical innovations, education, and investments:
1. Generative AI needs to be trained on reliable information
Large language models, or LLMs, the technology underlying generative AI applications, learn to carry on a conversation by crunching through massive data sources on the web and predicting which word should come next. It's reminiscent of Google's "autocomplete" or "did you mean" suggestions when we search, just on steroids.
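To make the "predicting the next word" idea concrete, here is a deliberately tiny sketch. Real LLMs use neural networks trained on billions of documents; this toy uses simple bigram counts over a few made-up sentences (the corpus and function names are illustrative, not anything from an actual product), but the autocomplete-style principle is the same: look at the current word, and suggest the word most often seen after it.

```python
from collections import Counter, defaultdict

# A tiny toy corpus standing in for "massive data sources on the web".
corpus = (
    "the force is strong with this one "
    "the force will be with you always "
    "the dark side of the force is powerful"
).split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, autocomplete-style."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # → "force" (it follows "the" three times here)
```

The sketch also hints at why training data matters so much: the model can only echo what it was fed, so a corpus full of misinformation produces confident-sounding misinformation.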
If you spend some time with generative AI, you will see that it is very good indeed. Is it perfect? No, but it can definitely converse with you about a lot of different things, and it sounds smart.
But because generative AI has been trained on content produced by fallible human beings, false information seeps into its answers. Some of this bad information can be funny, like asking Google's Bard chatbot, "Did Anakin Skywalker fight Darth Vader?" and getting, "Yes, they fought three times." (It's funny because they're the same person.)
Or it can be malicious, like asking the AI "Is sunscreen good for you?" and getting "maybe" because it was trained on input polluted by a TikTok disinformation campaign.
This is where publishers come in, and why they play a huge role in our AI-driven future. By training these systems on reliable information and high-quality media, we can ensure generative AI reflects the best of what the world has to offer.
News publishers have checks and balances to report the news accurately; editors dedicate their entire careers to it. I would trust a journalist's assessment of breaking news over an influencer's take on TikTok any day. Yes, I said it.
2. Generative AI needs attribution and compensation for its sources
There is a fundamental question about the business model for generative AI companies when it comes to how their information sources are indexed, how those sources get credit for their contributions, and how they are ultimately paid.
Generative AI companies need to standardize exactly what is being indexed, how often, and how that translates into the answers their systems surface. There needs to be more transparency than simply listing the sources as bullet points below the answers.
Generative AI companies need to take a stand: will they pay for the data sources they ingest daily? News publishers contributing accurate answers to generative AI are providing an important service at a time when misinformation is rife on social networks. News outlets must be paid; the question is how.
Adam Singolda is the CEO of the online content recommendation company Taboola.