ChatGPT Is Reshaping Crowd Work
While some workers may shun AI, the temptation to use it is very real for others. The field can be “dog-eat-dog,” Bob says, making labor-saving tools attractive. To find the best-paying gigs, crowd workers frequently use scripts that flag lucrative tasks, scour reviews of task requesters, or join better-paying platforms that vet workers and requesters.
CloudResearch began developing an in-house ChatGPT detector last year after its founders saw the technology’s potential to undermine their business. Cofounder and CTO Jonathan Robinson says the tool involves capturing key presses, asking questions that ChatGPT responds to differently than people do, and looping humans in to review freeform text responses.
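CloudResearch has not published how its detector works, but the keystroke-capture idea can be illustrated with a toy heuristic: if a freeform answer shows up with almost no typing events, or appears far faster than a person plausibly types, flag it for human review. The event fields, thresholds, and function names below are assumptions made for illustration, not CloudResearch’s implementation.

```python
from dataclasses import dataclass

# Illustrative thresholds (assumptions, not CloudResearch's actual values)
MAX_PLAUSIBLE_CHARS_PER_SEC = 15   # sustained output faster than this is suspicious
MIN_KEYPRESSES_PER_CHAR = 0.5      # far fewer key presses than characters suggests pasting


@dataclass
class ResponseEvents:
    """Minimal record of what a survey page might observe for one freeform answer."""
    text: str                # the submitted answer
    keypress_count: int      # key presses recorded in the answer box
    seconds_focused: float   # time the answer box had focus
    paste_detected: bool     # whether a paste event fired in the box


def flag_for_review(r: ResponseEvents) -> list[str]:
    """Return the reasons, if any, that this answer should go to a human reviewer."""
    reasons = []
    n = len(r.text)
    if r.paste_detected:
        reasons.append("text was pasted into the answer box")
    if n > 0 and r.keypress_count / n < MIN_KEYPRESSES_PER_CHAR:
        reasons.append("too few key presses for the amount of text")
    if r.seconds_focused > 0 and n / r.seconds_focused > MAX_PLAUSIBLE_CHARS_PER_SEC:
        reasons.append("text appeared faster than plausible typing speed")
    return reasons


if __name__ == "__main__":
    answer = ResponseEvents(text="x" * 600, keypress_count=12,
                            seconds_focused=8.0, paste_detected=True)
    print(flag_for_review(answer))
```

A heuristic like this only ranks answers for human review; as Robinson describes it, people stay in the loop for the final call on freeform text.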
Others argue that researchers should take it upon themselves to establish trust. Justin Sulik, a cognitive science researcher at the University of Munich who uses CloudResearch to source participants, says that basic decency—fair pay and honest communication—goes a long way. If workers trust that they’ll still get paid, requesters could simply ask at the end of a survey whether the participant used ChatGPT. “I think online workers are blamed unfairly for doing things that office workers and academics might do all the time, which is just making our own workflows more efficient,” Sulik says.
Ali Alkhatib, a social computing researcher, suggests it could be more productive to consider how underpaying crowd workers might incentivize the use of tools like ChatGPT. “Researchers need to create an environment that allows workers to take the time and actually be contemplative,” he says. Alkhatib cites work by Stanford researchers who developed a line of code that tracks how long a microtask takes, so that requesters can calculate how to pay a minimum wage.
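The arithmetic behind such a timer is simple: log how long the task actually takes, then work backward from a target hourly wage to a per-task payment. The sketch below assumes a median over pilot timings and a $15/hour target; both are illustrative choices, not the settings of the Stanford researchers’ tool.

```python
import statistics


def per_task_pay(durations_sec: list[float], hourly_wage: float = 15.00) -> float:
    """Suggest a per-task payment that meets a target hourly wage.

    Uses the median completion time so a few unusually fast or slow
    workers don't skew the result (an illustrative choice).
    """
    median_sec = statistics.median(durations_sec)
    return round(hourly_wage * median_sec / 3600, 2)


if __name__ == "__main__":
    # Completion times (in seconds) logged for the same microtask in a pilot batch
    observed = [48, 55, 61, 52, 70, 49, 58]
    print(f"Pay at least ${per_task_pay(observed):.2f} per task")
```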
Creative study design can also help. When Sulik and his colleagues wanted to measure the contingency illusion, a belief in the causal relationship between unrelated events, they asked participants to move a cartoon mouse around a grid and then guess which rules won them the cheese. Those prone to the illusion chose more hypothetical rules. Part of the design’s intention was to keep things interesting, says Sulik, so that the Bobs of the world wouldn’t zone out. “And no one’s going to train an AI model just to play your specific little game.”
ChatGPT-inspired suspicion could make things more difficult for crowd workers, who must already look out for phishing scams that harvest personal data through bogus tasks and spend unpaid time taking qualification tests. After an uptick in low-quality data in 2018 set off a bot panic on Mechanical Turk, demand increased for surveillance tools to ensure workers were who they claimed to be.
Phelim Bradley, the CEO of Prolific, a UK-based crowd work platform that vets participants and requesters, says his company has begun working on a product to identify ChatGPT users and either educate or remove them. But he has to stay within the bounds of the EU’s General Data Protection Regulation privacy laws. Some detection tools “could be quite invasive if they’re not done with the consent of the participants,” he says.