The video-sharing giant has rolled out a comprehensive policy detailing how it plans to take action against election-related content.
In February, YouTube announced plans to remove videos that contain misleading information about the upcoming election, because, according to the company, such videos can pose a “serious risk of egregious harm.” This is the first time the video-sharing giant has laid out a comprehensive plan for handling political videos and viral false information.
YouTube, which is owned by Google, rolled out the full plan on the day of the Iowa caucuses, when voters began indicating their preferred Democratic presidential candidate. Previously, YouTube had relied on a patchwork of different policies to address false or misleading videos.
Leslie Miller, YouTube’s vice president of government affairs and public policy, issued a statement about the move. Over the last few years, she said, the company has increased its efforts to make the platform a more reliable source of news and information, and has also taken steps to make YouTube an open platform for healthy political discourse.
Other tech companies have also taken steps to grapple with online misinformation, aware that it is likely to increase ahead of the election this November. In January, Facebook announced that it would take down videos manipulated by artificial intelligence in ways that mislead viewers. However, the social media giant also said that it would continue to permit political advertisements and would not police them for truthfulness.
Twitter, on the other hand, has banned political ads entirely. It said it would not take down tweets published by world leaders, although it may label them differently.
YouTube faces a daunting task in dealing with election-related misinformation: more than 500 hours of video are uploaded to the platform every minute. The company has also faced concerns about its recommendation algorithm, which can push viewers toward extremist and radical views by surfacing more of that type of content in its suggestions.
In a blog post, YouTube said it would take down content that gives viewers wrong information about the election, such as an incorrect voting date or false information about participating in the census. The company also said it would remove content that lies about a political candidate’s citizenship status or eligibility for public office. As an example of a video posing a serious risk, YouTube cited one manipulated to make it appear that a government official had died.
The platform also said it would remove channels that try to impersonate a person or another channel, hide their country of origin, or conceal a connection with a government. Similarly, videos that artificially inflate their views, comments, likes, or other metrics through automated systems would be removed.
However, YouTube is likely to face questions about whether it applies its new policies consistently as Election Day nears. Like Facebook and Twitter, it faces the challenge of applying its standards for what counts as public deception uniformly across all the videos on its platform.
Graham Brookie, director of the Atlantic Council’s Digital Forensic Research Lab, commented on YouTube’s election-related policies. According to him, the policy gives the platform more flexibility in responding to disinformation, but it also places the responsibility for choosing how to respond squarely on YouTube itself. That is especially true when it comes to defining the authoritative content the platform plans to elevate, and to setting the thresholds for removing manipulated videos such as deepfakes.
Ivy Choi, a YouTube spokeswoman, noted that a video’s content and context would determine whether it is taken down. She also said the platform would focus on videos that were doctored or technically manipulated to mislead viewers, rather than on clips merely taken out of context.
Choi cited last year’s viral video of Speaker Nancy Pelosi as an example. The video was slowed down to make it appear as if Pelosi, a Democrat from California, were slurring her words. Under YouTube’s policies, the video would be removed because it was technically manipulated.
Another example is a video of former Vice President Joseph R. Biden Jr. responding to a voter in New Hampshire. The video was misleadingly cut to make it appear that Biden had made racist remarks. According to Choi, such videos can stay on YouTube.
The spokeswoman also said that deepfakes would be taken down if the platform determines they were created with malicious intent. Deepfakes are videos manipulated using artificial intelligence to make subjects look different or appear to say words they never actually said. Whether YouTube removes parody videos, however, would again depend on both their content and the context in which they are presented.
According to Renee DiResta, technical research manager at the Stanford Internet Observatory, which studies disinformation, the platform’s new policy is an attempt to address what it identifies as a newer form of harm.
However, DiResta said the drawback is that social channels tend to present information to the viewers most likely to believe it. For that reason, videos that remain on YouTube with the intent of misleading viewers about the upcoming election have a high chance of succeeding.
Date: November 4, 2020 / Categories: News / Author: Joy P