Amazon says it has put in place mechanisms – both automatic and manual – to prevent misuse. But experts and users have expressed concern that the program could still be overrun by bad actors, as has been the case with some previous big tech crowdsourcing projects.
“They’re playing with fire, really,” said Ginger Gorman, an online hate expert and the author of Troll Hunting.
Gorman said sites such as Wikipedia fight trolls by employing an intensive editing structure. She added that, from the outside, that appears to be something Alexa Answers lacks.
According to Amazon, answers will be automatically rejected if they are “obscene, threatening, defamatory, invasive of privacy, or infringing of intellectual property rights (including publicity rights)”. Fast Company reported that Amazon’s algorithms will filter out profanity and politically slanted questions, backed by at least some human editors.
An Amazon spokesperson said the company has filters that flag contributors who edit an answer repeatedly, and that help prevent potentially offensive questions and answers from surfacing. Users can also upvote or downvote answers to help weed out problems.
“High-quality answers are important to us, and this is something we take seriously; we will continue to evolve Alexa Answers,” the spokesperson added.
Washington Post, with staff reporters