Social media platforms allow users to voice their opinions about important matters, which makes them valuable tools for raising awareness. Pride Month, a month-long celebration empowering the LGBTQIA+ community, would not be as successful without social media.
Unfortunately, social media can also be misused. Some people use platforms like YouTube, Facebook, and Twitter to attack others. Alongside misinformation, hate speech is a major problem for these platforms. Sometimes, malicious posts even spill over into real-life violence. All of this makes the networks feel less safe and inclusive.
Thankfully, tech companies are responding to the issue and taking steps to remove hate speech from their platforms.
Recently, YouTube promised to take fresh steps to combat online extremism.
US President Joe Biden called on Americans to fight racism and extremism during a summit at the White House. He gathered experts and survivors and included bipartisan local leaders in the summit.
As part of this movement, Biden said he’d ask Congress to do more to hold social media companies accountable for spreading hate. Attendees gave him a standing ovation when he said that.
More specifically, Biden wants Congress to remove the special immunity granted to social media companies. He also wants to impose stronger security requirements on all of them.
What exactly is the “special immunity” that Biden wants to get rid of? It is Section 230, a law that shields online companies from liability for content posted by their users. Regulators believe social media companies have become too comfortable behind this shield: since they can’t be held accountable, they are doing little to stop the spread of hate speech on their platforms.
The White House has repeatedly called for revoking Section 230. It has also supported ramping up antitrust and transparency enforcement on technology companies.
Whether Section 230 will actually be revoked remains to be seen. One thing is certain, though: Biden’s speech caught the attention of tech companies.
For years, online platforms like YouTube and Facebook have been at odds with regulators. Critics say these platforms allow hate speech, lies, and violent rhetoric to flourish on their services.
Of course, social media companies always deny this. In interviews and on their blogs, they insist they are taking care of hate speech on their platforms.
However, some events and investigations suggest otherwise. The January 6 attack on the US Capitol last year was organized and fueled in large part on social media.
Following the summit at the White House, major tech companies like Meta and YouTube announced that they would be expanding policies against extremism.
YouTube said it would remove content that glorifies violent acts, even if the creator is not affiliated with a terrorist organization.
YouTube already prohibited violent incitement before the White House summit on fighting hate-fueled violence. In some cases, however, the platform failed to apply its existing policies to videos promoting militia groups involved in the US Capitol riot.
The Tech Transparency Project (TTP) is an information and research hub for journalists, academics, and policymakers that aims to hold large technology companies accountable. TTP published a report on YouTube’s failure to remove content from pro-militia groups.
The investigation, published in May, found 435 pro-militia videos on YouTube, including 85 posted since the January 6 Capitol attack. That is concerning because some of the videos offered training advice. For example, some taught viewers how to carry out guerrilla-style ambushes.
Jack Malon, a YouTube spokesman, declined to say whether YouTube would change its approach to that content under the new policy. But he said the new policy enables the video-streaming platform to go further with enforcement than it had previously.
In other words, YouTube has not committed to changing its approach to that specific content. However, the new policy against extremism removes obstacles that previously kept the company from pushing enforcement further.
Aside from removing more content that incites violence, YouTube promised one more thing: the company said it plans to launch a media literacy campaign. With it, YouTube hopes to teach younger users how to spot the manipulation tactics malicious actors use to spread misinformation.
That is a smart move. As the adage goes, prevention is better than cure. Once a person has seen a misleading video, it can be difficult to convince them that the information they consumed is false. Teaching viewers to identify misleading information before they believe it is the better approach.
YouTube started testing media literacy tips ads last year. It occasionally plays 15-second media literacy ads before videos. The goal is to “prompt critical thinking.”
YouTube selected some users for the experiment: users in the US watching select videos. A 15-second ad plays before the video, offering tips on how to judge which information to trust online.
Google shared one video from the media literacy campaign. It encourages the viewer to check the source of a video, then assess that source: is it credible or not? Only then should the viewer decide whether to trust the information in the video.
Like other ads, media literacy campaign ads are skippable, and we all know what happens when a skip button is available: it lessens the chances of users actually watching the ad. That makes the campaign’s effectiveness questionable.
Perhaps, YouTube’s latest announcement means they are taking this campaign to the next step. More people could potentially see these ads play before the videos they want to watch on the platform.
Meta Platforms announced it would partner with researchers from the Center on Terrorism, Extremism, and Counterterrorism from the Middlebury Institute of International Studies.
Meanwhile, Microsoft says it plans to make basic, more affordable versions of its artificial intelligence and machine learning tools available to schools and smaller organizations to help them detect and prevent violence.
Date: January 17, 2023 / Categories: YouTube / Author: Rich Drees