As YouTube struggles with accusations of harboring misinformation, a new report reveals that some of its personnel might have contributed to the proliferation of harmful videos.
According to a report by business website Bloomberg, several insiders at both YouTube and parent company Google had raised concerns about the growing presence of harmful videos years earlier. Executives at both companies, however, reportedly downplayed the issues and vetoed proposed solutions in favor of driving more engagement.
The report traces the problem to how YouTube’s recommendation system works. In 2016, Google developed a new way for the system to sift through the flood of newly uploaded videos, based on a paper written by its engineers. It relies on the site’s neural-network-based artificial intelligence to predict the next video a viewer is likely to watch.
The research outlined ways that YouTube’s AI could counter “clickbait” videos that misrepresented their content. Bloomberg noted, however, that it did not detail any actions to be taken against potential landmines like political extremism or content labeled as child-friendly that was anything but.
According to Harvard University fellow Brittan Heller, the makers of such content eventually exploited this gap. They took advantage of the tendency of outrageous videos to go viral and used them to attract more attention.
Bloomberg’s report said this became a frequently debated topic within YouTube, with insiders referring to it as “bad virality.” Former Google privacy engineer Yonatan Zunger said he proposed a change under which videos that were “close to the line” would remain on the site but be excluded from recommendations to limit their exposure. The company, he said, eventually turned the proposal down.
The report also noted that YouTube Chief Executive Officer Susan Wojcicki knew about the situation but declined to take an active role. Wojcicki did not respond to a request for comment on the matter.
Micah Schaffer, a former policy writer for YouTube, noted that signs of trouble with malicious content appeared as early as YouTube’s first years. Schaffer said that during his tenure, the team encountered a sudden surge of videos praising anorexia. To curb their spread, YouTube age-restricted the videos and removed them from recommendations.
Schaffer argued that the same approach should have been applied to the scores of conspiracy theory videos that now plague the site. YouTube, he said, should have scrutinized these videos more closely to keep them from becoming a dominant presence in the site’s landscape.
In response to the continuing problem of inappropriate content on its platform, YouTube has taken some drastic measures. One is the removal of channels that harbor dangerous content, with those posing as kids’ channels among the main targets; other channels posting contentious content have since been targeted as well.
The site has also recently unveiled a new tool to help viewers gauge the truthfulness of a video. The tool, which Wojcicki presented herself, displays a text box on videos that question established facts, containing links to reliable sources. The executive said she hopes the tool will help lessen the influence of such stories.
While creators and viewers have welcomed these moves, they are unlikely to provide a permanent solution to the surge of misinformation and harmful content. Until YouTube’s leadership takes a more active role in solving the problem, viewers will need to stay vigilant against harmful videos, and parents will need to monitor what their kids are actually watching.
Date: July 9, 2019 / Categories: YouTube / Author: Rich Drees