YouTube AI Moderator Tightens the Screws for Content Creators
Due to the COVID-19 pandemic, Google moved many YouTube employees to remote work. During this period, the task of checking uploaded content for compliance with the video hosting service's rules was entrusted to artificial intelligence. As it turned out, the automatic moderator was very picky: in the second quarter of 2020, it removed a record number of "dubious" videos in the history of the service, including some that were completely harmless.
A report published on Google's official website shows that more than 11.4 million videos were deleted between April and June, compared with fewer than 9 million videos in the same period last year. The company noted that this was expected and explained why it decided to automate the process: since moderators usually work in offices under supervision, allowing them to do this work outside a controlled environment could lead to unintentional disclosure of users' personal data.
"In response to COVID-19, we’ve taken steps to protect our extended workforce and reduce in-office staffing. As a result, we are temporarily relying more on technology to help with some of the work normally done by human reviewers, which means we are removing more content that may not be violative of our policies. This impacts some of the metrics in this report and will likely continue to impact metrics moving forward." the company's website says.
Anticipating that the deletion of videos that do not violate YouTube's rules would lead to more appeals from content creators, Google expanded the staff handling disputes. Indeed, the number of complaints about erroneous video removals grew from 166,000 in the first quarter of this year to more than 325,000 in the second, and the number of reinstated videos rose accordingly. The company has not yet said how long the AI will continue to stand in for YouTube's full-time employees.