YouTube’s Hate Speech Battle: This Week in Digital

‘THIS WEEK IN DIGITAL’ SEEKS TO IGNITE DISCUSSION ABOUT THE BIGGEST ISSUES IN THE TECH WORLD. CHECK IN EACH FRIDAY TO STAY UP-TO-DATE AND GET INFORMED ABOUT EVERYTHING THAT IS, DIGITAL.

The internet is a wild and wonderful place. It has driven advances in so many areas and connected us like never before. But it has brought dangers too. Online radicalisation by extremist groups has forced platform leaders to take preventative measures. Notably, YouTube is recruiting machine learning alongside human moderation to fight its hate speech battle.

This plight is one of the ugliest sides of social media. Platforms have become breeding grounds for misinformation that targets vulnerable users. Because we all have an equal voice online, even the despicable can connect with misguided and troubled minds.

Despite the obvious need to eradicate it, this issue divides opinion. At what point does moderating hate stop, and restricting free speech begin? Should members of contrasting religions be able to engage in debate, or just share cat pictures? There’s no quick fix, but something definitely has to be done.

Frustratingly, human moderation alone has proven inadequate. The German government recently proposed ludicrous fines for social platforms that fail to remove distasteful content within 24 hours. With 400 hours of video uploaded to YouTube every minute, is that really possible without machines?

YouTube’s Hate Speech Battle

As reported by Fortune, YouTube will use machine learning to step up its efforts against terrorism. It will also apply “tougher standards” when determining whether other videos are too controversial. Last month, the video giant began redirecting searches for extremist content to anti-terrorism videos instead.

This follows British Prime Minister Theresa May describing online communities as “safe spaces” for extremist groups and calling on tech companies to do more in the fight. Facebook, Twitter and others have rolled out their own initiatives aimed at cutting down on hate speech across their respective services.

YouTube and its creators have also struggled through the recent ‘Adpocalypse’ nightmare, taking a big financial hit from tightened advertiser restrictions. Hopefully these new measures will restore the platform’s standing with advertisers and win back ad revenue.

YouTube says its technology is making moderation easier and more effective. It’s confident the new efforts will remove double the number of flagged videos in half the time.

Will it Work?

To be honest, we’re not sure how we feel about this one. We’re obviously delighted to say goodbye to the subculture of hate on social platforms. However, some arguments against machine moderation are certainly valid.

Human intervention takes longer, but it’s better. Humans can analyse content properly and in context. Blanket-blocking anything that comes close to a guideline violation isn’t right. Discussion and empathy around divisive issues are part of what makes the internet great. It’s a platform to share our thoughts and develop together.

Yes, there’s a real problem with extremist groups, and we all need to be vigilant about it. But we’ll never fully eradicate bigotry or hate; that’s a downside of free speech. Our concern is that, in trying, we lose the chance to connect with one another.

We saw these dangers with the likes of PewDiePie last February. YouTube’s biggest star received intense criticism after stupid jokes he made were misrepresented in the media. Without research, he was accused of being an alt-right leader and an anti-Semite, and he lost considerable production and revenue opportunities without investigation or trial.

Now, we’re not defending his content for a second. But making a judgement call on his videos without fully understanding what he did is wrong. That’s the danger of looking for a head to roll prematurely. Misinformation and restriction can be just as deadly to creators as advertising boycotts.

We just fear that machine moderation will be too constricting. You can discuss racial, religious or political issues without being offensive, and honestly, it’s important to do so! Should an online news channel be flagged for mentioning an attack of some sort if it’s literally just reporting it? Humans can grasp these distinctions in a way even the best AI can’t yet.
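
To illustrate the worry, here’s a deliberately naive sketch in Python. This is purely our own toy example, not YouTube’s actual system (which isn’t public and is far more sophisticated): a crude keyword filter flags a straight news report just as readily as genuine incitement, because it can’t read context.

```python
# Toy illustration only: a naive keyword filter has no sense of context,
# so it treats reporting about violence the same as promoting it.

FLAGGED_TERMS = {"attack", "bomb", "extremist"}  # hypothetical blocklist

def naive_flag(text: str) -> bool:
    """Flag any text containing a blocked term, regardless of intent."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    return not FLAGGED_TERMS.isdisjoint(words)

incitement = "Join us and attack the unbelievers"
news_report = "Police confirmed the attack injured three people"

print(naive_flag(incitement))   # True  -- correctly flagged
print(naive_flag(news_report))  # True  -- false positive: it's just reporting
```

Real machine learning models are far smarter than a keyword list, but the same blind spot lingers: context is hard, and over-blocking is the price of getting it wrong.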

What Do You Think?

Hopefully it’s a success and hateful content becomes a thing of the past. We just hope the social media giants don’t lean on it too quickly or remove human moderation completely. If we all work together on reporting and removing hate, the battle is half won.

As always, check in each Friday to see what’s happening in the digital world. If you want to improve your digital strategy or win online, make sure to get in touch.