Alexej Savreux, 34, of Kansas City says he’s done all kinds of work over the years. He’s made fast-food sandwiches. He’s been a custodian and a junk hauler. And he’s done sound work for live theater.
These days, though, his work is less hands-on: He’s an AI trainer.
Savreux is part of a hidden army of contract workers doing the behind-the-scenes labor of teaching AI systems to analyze data so they can generate the kinds of text and images that have fascinated users of newly popular products like ChatGPT. To improve the AI’s accuracy, he labels images and makes predictions about what text the apps should generate next.
The pay: $15 an hour and up, with no benefits.
Away from the spotlight, Savreux and other contract workers have spent countless hours in the past few years teaching OpenAI’s systems to give better responses in ChatGPT. Their feedback fills an urgent, endless need for the company and its competitors in artificial intelligence: streams of sentences, labels and other information that serve as training data.
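Neither OpenAI nor its contractors publish the exact shape of that training data, but a minimal, hypothetical sketch in Python of the two kinds of records this labeling work produces (an image tag and a judgment about generated text) might look something like this; every field name and value here is illustrative, not an actual OpenAI schema:

```python
# Hypothetical sketch of the kinds of records data-labeling work produces.
# Field names and values are illustrative; this is not OpenAI's format.

# An image-labeling record: a worker tags what an image contains.
image_label = {
    "image_id": "img_00421",            # hypothetical identifier
    "labels": ["bicycle", "street", "rain"],
    "labeler_id": "contractor_107",
}

# A text-preference record: a worker judges candidate continuations,
# the kind of human feedback used to steer what a model generates next.
preference_label = {
    "prompt": "What is the capital of France?",
    "candidates": [
        "The capital of France is Paris.",
        "France is a country in Europe.",
    ],
    "ranking": [0, 1],  # index 0 judged the better continuation
    "labeler_id": "contractor_107",
}
```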
“We are grunt workers, but there would be no AI language systems without it,” said Savreux, who has done work for tech startups including OpenAI, the San Francisco company that released ChatGPT in November and set off a wave of hype around generative AI.
“You can design all the neural networks you want, you can involve all the researchers you want, but without labelers, you don’t have ChatGPT. You don’t have anything,” Savreux said.
It’s not a job that will bring Savreux fame or riches, but it is essential and often overlooked work in the field of artificial intelligence, where the glamour of new technological frontiers can overshadow the labor of contract workers.
“So much talk about AI is congratulatory,” said Sonam Jindal, program lead for AI, labor and the economy at the Partnership on AI, a San Francisco-based nonprofit that promotes research and education on artificial intelligence.
“But we’re missing a big part of the story: that this still relies heavily on a large human workforce,” she said.
The tech industry has relied for decades on the labor of thousands of lower-skilled, lower-paid workers to build its computer empires: from the punch-card operators of the 1950s to the more recent Google contractors who complained about their second-class status, including the yellow badges that set them apart from full-time employees. Gig work through sites like Amazon Mechanical Turk grew more popular early in the pandemic.
Now, the burgeoning AI industry is taking a similar path.
The work is defined by its precarious, on-demand nature, with people employed under written contracts either directly by a company or through a third-party vendor that specializes in temp work or outsourcing. Benefits like health insurance are rare or nonexistent, which translates to lower costs for tech companies, and the work is usually anonymous, with all the credit going to tech executives and researchers.
The Partnership on AI warned in a 2021 report that demand was surging for what it calls “data enrichment work.” It recommended that the industry commit to fair compensation and other improved practices, and last year it published voluntary guidelines for companies to follow.
“So much talk about AI is congratulatory.”
Sonam Jindal
Program lead for AI, labor and the economy at the Partnership on AI
DeepMind, an AI subsidiary of Google, is the only tech company so far to publicly commit to those guidelines.
“A lot of people have realized this is important,” Jindal said. “The challenge now is getting companies to do this.”
“This is a new kind of work created by AI,” she added. “We have the potential for this to be a high-quality job and for the workers who do it to be respected and recognized for their contribution to enabling this progress.”
Demand has arrived, and some AI contract workers are asking for more. In Nairobi, Kenya, more than 150 people who have done AI work for Facebook, TikTok and ChatGPT voted Monday to form a union, citing low pay and the mental toll of the work, Time magazine reported. Facebook and TikTok did not immediately respond to requests for comment on the vote. OpenAI declined to comment.
So far, AI contract work has not inspired a similar movement in the United States among the Americans quietly building AI systems one word at a time.
Savreux, who works from home on a laptop, got into AI contracting after seeing a job posting online. He credits the AI work, along with an earlier job at the sandwich chain Jimmy John’s, with helping lift him out of homelessness.
“People sometimes underestimate these necessary, arduous jobs,” he said. “It’s the essential, entry-level work of machine learning.” And $15 an hour is more than the Kansas City minimum wage.
Job postings for AI contractors point to the appeal of working in a fast-evolving industry as well as the sometimes grinding nature of the work. An ad from Invisible Technologies, an outsourcing firm, for an “Advanced AI Data Trainer” describes the position as entry-level, with pay starting at $15 an hour, but also as work that could be “beneficial to humanity.”
“Think of it as being a language arts teacher or a personal tutor to some of the most influential technology in the world,” the job posting says. It doesn’t name Invisible’s client, but it says the new hire will work “within protocols developed by the world’s leading AI researchers.” Invisible did not immediately respond to a request for more information about its listings.
There is no definitive tally of the number of contractors who work for AI companies, but it is an increasingly popular form of work around the world. Time magazine reported in January that OpenAI relied on low-paid Kenyan workers to label text that contained hate speech or sexually abusive language so its apps could better identify toxic content.
Online news outlet Semafor reported in January that OpenAI had hired about 1,000 remote contractors in places such as Eastern Europe and Latin America to label data or train the company’s software on computer engineering tasks.
OpenAI remains a small company, with about 375 employees as of January, CEO Sam Altman said on Twitter, but that number does not include contractors and does not reflect the full scope or ambitions of the operation. A spokesperson for OpenAI said no one was available to answer questions about its use of AI contractors.
The work of generating data to train AI models is not always easy, and it is sometimes complex enough to attract would-be AI entrepreneurs.
Jatin Kumar, 22, of Austin, Texas, said he has been doing AI contract work for a year, since graduating from college with a computer science degree, and that the work gives him an early look at where generative AI technology is heading in the near term.
“What it allows you to do is start thinking about ways to use this technology before it hits the public market,” Kumar said. He’s also working on his own tech startup, Bonsai, which makes software to help with hospital billing.
Kumar said his main work was generating prompts: engaging in back-and-forth conversations with the chatbot, the kind of conversational coaching that is part of the long process of training AI systems. He said the tasks started out very simple and became more complicated as he gained experience.
“Every 30 or 45 minutes, you’ll get a new task, generating new prompts,” he said. The prompts might be as simple as “What is the capital of France?”
Kumar said he and about 100 other contractors worked on tasks to generate training data, correct answers and fine-tune the model by providing feedback on its responses.
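Kumar doesn’t describe the exact shape of that feedback, but a common technique for turning human judgments into a training signal is to convert a labeler’s ranking of candidate answers into pairwise comparisons for a reward model. A minimal sketch in Python, with all names and example data hypothetical:

```python
# Hypothetical sketch: converting a labeler's ranking of model answers
# into pairwise comparisons, a common way to build reward-model training
# data. Names and example data are illustrative, not OpenAI's pipeline.
from itertools import combinations

def ranking_to_pairs(prompt, answers, ranking):
    """Yield (prompt, preferred, rejected) triples from a full ranking.

    `ranking` lists answer indices from best to worst, as judged by
    the human labeler.
    """
    for better, worse in combinations(ranking, 2):
        yield (prompt, answers[better], answers[worse])

pairs = list(ranking_to_pairs(
    "What is the capital of France?",
    ["Paris.", "The capital of France is Paris.", "I have no idea."],
    ranking=[1, 0, 2],  # answer 1 judged best, answer 2 worst
))
# Each (prompt, preferred, rejected) triple becomes one training example
# teaching a reward model which kinds of responses humans favor.
```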
Other workers, he said, handled conversations that had been “flagged”: reading examples submitted by ChatGPT users who, for one reason or another, reported the chatbot’s answer to the company for review. When a flagged conversation comes in, he said, it is sorted by the type of error involved and then used in the further training of the AI models.
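A rough sketch of the triage step Kumar describes, in which flagged conversations are grouped by error type before being routed back into training; the error categories and record fields here are assumptions for illustration, not OpenAI’s actual taxonomy:

```python
# Hypothetical sketch of the triage Kumar describes: flagged conversations
# are grouped by error type before being routed into further training.
# The categories and fields are assumptions for illustration.
from collections import defaultdict

def triage(flagged_conversations):
    """Group user-flagged conversations by the reviewer's error label."""
    buckets = defaultdict(list)
    for convo in flagged_conversations:
        buckets[convo.get("error_type", "unreviewed")].append(convo)
    return buckets

reports = [
    {"id": 1, "error_type": "factual_error", "text": "..."},
    {"id": 2, "error_type": "refusal", "text": "..."},
]
by_type = triage(reports)  # each bucket can feed a different training queue
```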
“Initially, it started as a way for me to help out with OpenAI and learn about existing technologies,” said Kumar. “But now, I can’t see myself walking away from this role.”