I. Introduction
Armed with the power of anonymity, many groups and individuals abuse social media tools to proliferate messages advocating violence against and the suppression of other peoples. Such language and its spread across online social networks (OSNs) present several problems for the managers of these networks, but rooting out such speech faces an inherent difficulty: there is no universal consensus on the definition and classification of hate speech. Merriam-Webster designates hate speech as language "expressing hatred of a particular group of people," while the European Union defines it as speech that "spread, incite, promote or justify racial hatred, xenophobia, antisemitism or other forms of hatred based on intolerance." There is a notable distinction between these two definitions: the latter places an emphasis on hateful "calls to action" while the former does not. This difference, the "call to action," typically marks the borderline between merely offensive speech and hateful speech in the eyes of OSN moderators and administrators. Regardless, the identification of hate speech remains a persistent problem for OSN moderators, and due to the sheer volume of content generated on these networks, manual moderation is infeasible.