Online hate speech victimization: consequences for victims’ feelings of insecurity
Crime Science volume 13, Article number: 4 (2024)
Abstract
This paper addresses the question of whether and to what extent the experience of online hate speech affects victims’ sense of security. Studies on hate crime in general show that such crimes are associated with a significantly higher feeling of insecurity, but there is little evidence concerning feelings of insecurity due to online hate speech. Based on a secondary analysis of a representative population survey on cybercrime conducted in Lower Saxony, Germany, in 2020 (N = 4,102), we tested three hypotheses regarding the effects of offline and online hate speech on feelings of insecurity. Compared to non-victims, victims of online hate speech exhibit a more pronounced feeling of insecurity outside the Internet, while victims of other forms of cybercrime do not differ from non-victims in this regard. We found no effect of offline hate speech once relevant control variables were included in the statistical model. Possible reasons for this finding may lie in the characteristics of the phenomenon of online hate speech, for example that hateful content spreads uncontrollably on the Internet and reaches victims even in protected private spheres.
Introduction
While the Internet has become a seemingly indispensable part of our lives, its digital landscape has also given rise to new challenges. With the growing importance of digital communication, online hate speech has increased sharply in recent years (Costello et al., 2017; Ștefăniță & Buf, 2021). Hate speech is defined as a verbal attack against a certain group of people with a common characteristic, such as race, gender, ethnic group, religion, or political preference (Castaño-Pulgarín et al., 2021). Due to the perpetrators' (perceived) prejudicial motives, hate speech is a form of "Group-Related Misanthropy" (Zick et al., 2008). Depending on the respective legal provisions, acts of hate speech can be hate crimes (Sheppard et al., 2021) and online hate speech can be a form of cyber-enabled crime.Footnote 1 Regardless of its legal assessment, hate speech can have serious consequences for those affected. Therefore, we use criminological terms such as “victim” or “victimization” in reference to hate speech although not all acts of hate speech are necessarily illegal.
As online hate speech has risen, it has become a focus of scientific interest in various disciplines (Benier, 2017; Paz et al., 2020). A large body of literature discusses the consequences of exposure to online hate, concluding that the experience of online hate negatively affects the mental health and well-being of both victims and observers (Näsi et al., 2015; Stahel & Baier, 2023; Tynes, 2006; Tynes et al., 2016; Walther, 2022), for example in the form of increased depression, anxiety, and self-doubt. Nevertheless, recent articles have highlighted that some consequences of online hate speech remain insufficiently researched from a victimological perspective (e.g., Wachs et al., 2022). Here, we focus on feelings of insecurity, as studies have shown that prejudice-motivated crimes outside the Internet are associated with increased feelings of insecurity among victims (Benier, 2017; Dreißigacker et al., 2020; Gelber & McNamara, 2016; McDevitt et al., 2001). This raises the question of whether hate speech, as a form of online prejudice-motivated incident, has similar consequences. Since the Internet, and more specifically the growing importance of social media as an everyday means of communication, makes one vulnerable to hate speech almost constantly, an influence on feelings of insecurity outside the Internet would suggest a far-reaching significance of hate speech in the daily lives of those affected. Understanding the consequences of online hate speech is thus crucial not only for the mental and emotional well-being of individuals but also on a societal level.
A more nuanced understanding of the impact on different demographic groups can help to identify specific minorities or marginalized groups which are disproportionately affected by online hate speech and develop targeted interventions and policies that aim to protect the rights and well-being of all victims. Therefore, the question whether online hate speech also influences feelings of insecurity outside the Internet is highly relevant.
State of research
Hate crime and feelings of insecurity
As already explained above, some acts of hate speech qualify as criminal acts in the legal systems of some countries, and therefore as a form of hate crime. There are several studies on the impact of hate crime on victims. However, these studies mostly refer to incidents outside the Internet or do not differentiate between online and offline acts. A common finding is that victims of hate crimes experience more severe psychological consequences, such as anger, stress, and fear, compared to those affected by crimes not motivated by prejudice (Barnes & Ephross, 1994; Ehrlich et al., 2003; Herek et al., 1999; Iganski, 2019; Iganski & Lagou, 2016; McDevitt et al., 2001).
In addition, victims of hate crime have a greater sense of insecurity than non-hate crime victims (Benier, 2017; Gelber & McNamara, 2016). For example, in the survey by McDevitt et al. (2001), over two-fifths of hate crime victims reported feeling unsafe when alone in their neighborhood at night, compared to just under one-third of non-hate crime victims. Similar differences have also been reported in numerous studies in Germany (Church & Coester, 2021; Dreißigacker, 2018; Dreißigacker et al., 2020; Groß et al., 2018). An increased feeling of insecurity is related to lower trust in state institutions such as the police (Blanco & Ruiz, 2013) and lower generalized trust in other people. It affects the assessment of the personal risk of becoming a victim of similarly motivated acts outside the Internet (Dreißigacker et al., 2020; Groß et al., 2018), the avoidance of certain places as well as behavioral changes (Iganski, 2019; Mellgren et al., 2017).
Online hate speech and feelings of insecurity
Regarding online hate speech as a potentially criminal form of prejudice-motivated online harassment, there have been various studies on detection (Qian et al., 2021; Schmidt & Wiegand, 2017; Warner & Hirschberg, 2012), prevalence (Dreißigacker et al., 2020; Geschke et al., 2019; Kansok-Dusche et al., 2022; Saha et al., 2019), the consequences for society (Bilewicz & Soral, 2020), regulation (Bleich, 2011; Judge & Nel, 2018; Reed, 2009; Sheppard et al., 2021), and possible risk and protective factors for potential victims (Costello et al., 2017; Garland et al., 2022; Hinduja & Patchin, 2022; Wright et al., 2021). However, the impact of online hate speech on the lives of victims, specifically on the feeling of insecurity outside the Internet, has hardly been studied so far (Berg & Johansson, 2016; Salmi et al., 2007). A positive correlation was found between victimization of adolescents and depressive symptoms (Wachs et al., 2022) and population surveys indicate that experiencing online hate speech is positively associated with loneliness (Stahel & Baier, 2023) and negatively associated with psychological well-being (Geschke et al., 2019; Waldron, 2012) and life satisfaction (Stahel & Baier, 2023). Nevertheless, there is no empirical evidence on the relationship between experiencing hate speech online and feelings of insecurity offline.
Moreover, even though the specifics of online hate speech compared to offline hate speech are increasingly being discussed (Brown, 2018; Citron, 2014; Cohen-Almagor, 2011), the existing empirical studies on the consequences of online hate speech hardly make systematic comparisons to those affected by (cyber)crime without a hate motive. Moreover, they mostly refer only to specific victim groups, such as the LGBTQ + community (Herek et al., 1999, 2002; Ștefăniță & Buf, 2021), religious groups (Awan & Zempi, 2016), or to youth and adolescents or young adults (Hawdon et al., 2014; Keipi et al., 2017; Saha et al., 2019; Wachs et al., 2022). This study contributes to the existing literature by providing insights into the impact of hate speech on feelings of insecurity in comparison to cybercrimes in a representative sample of the general population.
Theoretical considerations
Janoff-Bulman and Hanson Frieze (1983) noted that criminal victimization can shatter victims’ basic assumptions about themselves and the world. Consequently, they may no longer be able to see the world as a safe place and feel unsafe and vulnerable. A criminal victimization can therefore have serious consequences and affect feelings of safety, particularly if victims are unable to cope with their victimization and such experiences cannot be integrated into their own worldview. This leads to the question of how victims of hate speech deal with their victimization. Following Sykes and Matza’s neutralization thesis (Sykes & Matza, 1957), crime victims in general use various neutralization techniques and social support (Green & Pomeroy, 2007) to reduce negative reactions or emotions such as fear, insecurity, guilt, and shame (Agnew, 1985; Ferraro & Johnson, 1983; Maruna & Copes, 2005; Weiss, 2011). These techniques include victimization denial, vulnerability denial, denial of one's innocence, or denial of (serious) harm. According to Agnew (1985), such rationalizations may explain the low global correlation between general victimization and fear of crime and feelings of insecurity that has frequently been found in victimization surveys (DuBow et al., 1979).
However, the effectiveness of such neutralization techniques may vary as a function of the characteristics of the victimized person (like age, gender, or education) (Agnew, 1985), the level of social support (for example from family and friends) (Green & Pomeroy, 2007; Wright et al., 2021), and, most importantly here, the type of victimization (such as delict type/severity).
Based on the neutralization thesis on the processing and effects of crime, we not only assume that those affected by hate speech have difficulties denying their own vulnerability; on the contrary, the perceived prejudice motive of the perpetrators is likely to further increase perceived vulnerability and thus the feeling of insecurity. Hate crime in general has been said to convey a "message character" (Bannenberg et al., 2006): it degrades all members of a certain social group and thus suggests further victimization in the future. It is also associated with an "incitement character" (Bannenberg et al., 2006), meaning the assault can be perceived as an appeal to be imitated by like-minded people with a similar ideology. We assume that hate speech, too, is strongly associated with a message and an incitement character. For those affected by hate speech, it thus signals that they should expect similarly motivated acts, which are not specified in terms of the type of incident. Hate speech experiences occur based on personal characteristics that cannot simply be changed or hidden. In this respect, those affected by hate crime or hate speech may find it difficult to avoid it through their own behavior, which is likely to increase their perceived vulnerability (McDevitt et al., 2001).
Moreover, we assume that hate speech experienced online rather than outside the Internet should have an even higher impact on subjective vulnerability and perceived feelings of insecurity. Brown (2018) points out that online hate speech is more spontaneous and immediate, more widespread via social media, and permanently present. While a verbal attack on the street may fade away, the attack on social media remains present for both the victim and the perpetrator's peers. It can be called up again at any time and spread uncontrollably. In addition, those potentially affected by hate speech can be reached in their own homes if they do not avoid digital communication. Both aspects were evident in a qualitative interview study in which those affected by cyberbullying reported an increased burden due to the feeling of being permanently exposed to cyberbullying, even in their own homes (Müller et al., 2022).
In summary, we assume that neutralization techniques are less effective in the context of hate speech victimization. Given their message and inciting character, instances of it should therefore increase feelings of insecurity among those affected. This should be particularly the case in the context of online experiences as these are more permanent, less controllable and harder to avoid.
Methods
Hypotheses
Based on our theoretical considerations and the state of research on the connection between hate speech and feelings of insecurity, and the considerations regarding more severe consequences of online hate speech compared to offline hate speech, the following hypotheses will be tested:
H1: Having experienced offline hate speech increases feelings of insecurity outside the Internet compared to not having experienced crime.
H2: Having experienced online hate speech increases feelings of insecurity outside the Internet compared to not having experienced crime, and the effect is likely to be even stronger than for offline hate speech.
H3: Having experienced both offline and online hate speech increases feelings of insecurity outside the Internet cumulatively compared to not having experienced crime, i.e., more than online hate speech or offline hate speech alone.
Based on previous findings, and to exclude confounding variables and increase statistical power, gender, age, migration background, urban or rural living environment, and social support are included as control variables. On average, women feel more insecure than men (Smith & Torstensson, 1997). Increasing age may also be associated with higher insecurity due to decreasing mental and physical capacity and the associated higher vulnerability in case of victimization (Parker & Ray, 1990). People with a migration background may feel more insecure due to their status in the majority society (Ortega & Myles, 1987), and residents of urban areas also show higher feelings of insecurity compared to residents of less anonymous rural areas (Belyea & Zingraff, 1988; Scarborough et al., 2010; Snedker, 2015). Finally, various studies show that social support can be a protective factor regarding the consequences of different types of crime for victims (Hardyns et al., 2018; Kimpe et al., 2020; Leets, 2002; Wachs et al., 2022).
Data collection
The following analysis is based on data from a representative population survey (16 years and older) in Lower Saxony (N = 10,000), a German federal state, regarding the experiences and consequences of cybercrime and other potentially harmful online experiences that are not (yet) criminal offenses in Germany.Footnote 2 The paper–pencil survey was conducted between August and October 2020 based on a two-stage sampling procedure. First, a sample of 73 municipalities was selected by GESIS—Leibniz Institute for the Social Sciences. In a second step, the target persons to be interviewed were randomly selected from the registers of the respective residents' registration offices. The selected persons were then contacted and sent a 16-page questionnaire, with the option either to complete it in writing and return it in a pre-stamped envelope or to answer it online. In addition, a monetary incentive of a five-euro note attached to the questionnaire was used. After two weeks, all survey participants were sent a reminder/thank-you letter. Overall, 9,636 questionnaires could be delivered. Of these, 4,102 people participated, 511 of them online. This resulted in a response rate of 42.6%.
Cases with missing values and respondents who stated that they did not use the Internet for private purposes were excluded. After the data cleansing (e.g., excluding speeders with a completion time less than 5 min of the online questionnaire and implausible/contradictory answers), this resulted in a final sample of N = 3,293.
Sample
A total of 52.0% of the respondents were female (Table 1). The cases of non-binary respondents were not included in this analysis since their number was in the low single digits and could not be meaningfully evaluated separately. The average age of the respondents was 49.2 years with a standard deviation of 17.7 years. A total of 14.5% had a migration background, meaning they or at least one parent was not born in Germany. In terms of location, 27.1% of the respondents lived in a municipality/town with more than 50,000 inhabitants (50,000 to 1,000,000 inhabitants), and the rest of the respondents lived in a municipality/town with fewer than 50,000 inhabitants.Footnote 3
Operationalization
Dependent variable
Following Groß et al. (2018), McDevitt et al. (2001), and Tseloni and Zarafonitou (2008), the following items were used to measure feelings of insecurity outside the Internet: "In general, how safe do you feel in your neighborhood?" "… in your apartment/house?" "… alone in your neighborhood at night?" "… alone in your neighborhood at night when you meet a stranger?". Response options ranged from 1: "Very safe," 2: "Safe," 3: "Somewhat safe," 4: "Somewhat unsafe," 5: "Unsafe," to 6: "Very unsafe." The individual items were combined into a mean index (Table 1). The internal consistency of the items was good (Cronbach’s α = 0.83).
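The index construction and reliability check can be illustrated with a brief sketch. This is not the authors' code; the responses below are hypothetical, and the function implements the standard Cronbach's alpha formula:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each single item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical answers to the four insecurity items (1 = very safe ... 6 = very unsafe)
responses = np.array([
    [2, 1, 3, 4],
    [1, 1, 2, 2],
    [3, 2, 4, 5],
    [2, 2, 3, 3],
])

insecurity_index = responses.mean(axis=1)  # mean index per respondent
alpha = cronbach_alpha(responses)          # internal consistency of the scale
```

Alpha values above roughly 0.8 are conventionally read as good internal consistency, which matches the α = 0.83 reported for the scale.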
Independent variables
Assignments to different (non-)victim groups served as independent variables. Following Wachs and Wright (2018), online hate speech victims (online HSVs) included those who at some point had experienced at least one of the following items (lifetime prevalence): "Someone has insulted me online or sent me other unpleasant messages online…", "Someone has spread lies or rumors about me online", "Someone has excluded me from online groups, chats, or online games", "Someone has threatened or bullied me online…", and "Someone has made fun of me online because of my gender, national origin, race, religious affiliation, or sexual orientation". Similar items were used for experiences of offline hate speech. Thus, in addition to online hate speech victims (n = 51), two further groups were distinguished: offline hate speech victims (n = 202) and both online and offline hate speech victims (n = 71). Note that not all items cover criminal acts; the described behaviors may nevertheless seriously impair those affected.
Cybercrime victims (CVs) who were not online hate speech victims (n = 1282) included those who had not experienced hate speech online or offline but had experienced at least one of the other cybercrime types surveyed in their life, for example online fraud or a ransomware attack. The non-victims (NVs) included those who had never experienced any of the surveyed offense types (n = 1687).
Each of the three constructs (CV, online HSV, and offline HSV) was surveyed separately in the questionnaire. For CV, participants were informed that the following questions related to specific experiences with cybercrime. Before being asked about an online hate speech victimization, participants were given a definition of online hate speech: they were told that it refers to insults or hurtful posts, comments, videos, or images on the Internet targeting a person's gender, national origin, race, religious affiliation, or sexual orientation. For offline HSV, participants were asked at the end of the questionnaire whether they had experienced the respective incidents outside the Internet.
The detailed operationalization of all victimization forms (Online Hate Speech, Offline Hate Speech, Cybercrime) is shown in Table 3 in the Appendix.
Control variables
In addition to the control variables age, gender, migration background, and number of inhabitants in the home municipality/city, the degree of social support was assessed with a short scale based on Fydrich et al. (2009) and Kliem et al. (2015). The following items were used and combined into a mean index: "I receive a lot of understanding and security from others," "There is someone very close to me whose help I can always count on," "I have friends/relatives who will definitely take time to listen if I need someone to talk to," "If I’m very depressed, I know whom I can turn to." The response options ranged from 1: "Does not apply at all," 2: "Does not apply," 3: "Does not really apply," 4: "Rather applies," 5: "Applies," to 6: "Applies completely." The internal consistency of these items was also good (Cronbach’s α = 0.85).
Results
Descriptive statistics
In the descriptive evaluation in Fig. 1, online hate speech victims stand out. In this group, the correlation of social support with the feeling of insecurity deviates most clearly from that of non-victims (red reference line: male non-victims), especially among male online hate speech victims with little social support. As expected, feelings of insecurity among women exceed those of men in all groups but are most pronounced among online hate speech victims. In contrast, the level of insecurity among female offline hate speech victims hardly differs from that of female non-victims or cybercrime victims. Among female online and offline hate speech victims, social support seems to have at best a minor influence on feelings of insecurity.
Hypothesis testing
The associations of victimization types with feelings of insecurity were estimated using two multiple linear regression models (Table 2), with the independent variables introduced simultaneously.Footnote 4 In the first model (Model 1), only (non-)victim groups were included as independent variables. Compared to non-victims, hate speech victims (whether online, offline, or online and offline) have significantly higher feelings of insecurity outside the Internet, while cybercrime victims show no statistically significant coefficient in this regard. The two largest coefficients belong to victims of both online and offline hate speech (b = 0.30, β = 0.04) and online hate speech victims (b = 0.29, β = 0.04), followed by offline hate speech victims (b = 0.14, β = 0.03). The included victim groups alone account for about 1% of the variance in feelings of insecurity (R2 = 0.01).
In the second model, to determine whether the association with the types of victimization remains stable, the control variables described above were included. The coefficient of an online hate speech victimization hardly changes in Model 2 (b = 0.27), whereas the coefficient of respondents who were victims of both online and offline hate speech becomes slightly smaller (b = 0.22). The coefficient of an offline hate speech victimization, at b = 0.08, is no longer significantly different from zero. The latter is related to controlling for respondent gender: once gender is included in the model, the significant coefficient of an offline hate speech victimization disappears.
As expected, increased social support is associated with significantly decreased feelings of insecurity, women have stronger feelings of insecurity than men, and those in larger municipalities/towns (50,000 inhabitants or more) feel more insecure than those in smaller municipalities/towns. In contrast, age and migration background have no independent correlations with feelings of insecurity. When comparing the standardized coefficients within Model 2, social support (β = − 0.19) and gender (β = 0.17) have the greatest explanatory power. With the variables included in Model 2, about 10% of the variance of the feelings of insecurity can be explained (R2 = 0.10).Footnote 5
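The structure of such a model — dummy-coded victim groups plus controls, with standardized betas computed from the unstandardized slopes — can be sketched on simulated data. All variable names and values below are illustrative (the simulated effect of 0.27 for the online hate speech dummy mirrors the reported coefficient); this is neither the survey data nor the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated respondents: one victimization dummy and two controls
online_hsv = rng.integers(0, 2, n)        # 1 = online hate speech victim
female = rng.integers(0, 2, n)            # 1 = female
social_support = rng.normal(4.5, 1.0, n)  # mean index on a roughly 1-6 scale

# Simulated outcome with effects resembling the reported directions
insecurity = (2.0 + 0.27 * online_hsv + 0.40 * female
              - 0.20 * social_support + rng.normal(0, 0.8, n))

# OLS via least squares: intercept plus predictors
X = np.column_stack([np.ones(n), online_hsv, female, social_support])
b, *_ = np.linalg.lstsq(X, insecurity, rcond=None)

# Standardized coefficients: beta_j = b_j * sd(x_j) / sd(y)
betas = b[1:] * X[:, 1:].std(axis=0, ddof=1) / insecurity.std(ddof=1)
```

Comparing the betas rather than the raw slopes shows which predictor carries the most explanatory weight, as done in the text for social support (β = −0.19) and gender (β = 0.17).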
Ultimately, our data could only confirm H2: online hate speech increases feelings of insecurity outside the Internet both in comparison to non-victims and to victims of offline hate speech. In contrast, H1 and H3 could not be confirmed: victims of offline hate speech did report stronger feelings of insecurity than non-victims. However, this relationship disappeared when controlling for various sociodemographic characteristics, especially gender (H1). Moreover, there was no cumulative correlation between online and offline hate speech and feelings of insecurity as hypothesized in H3: Victims who had been victimized by both online and offline hate speech at least once in their lives had significantly stronger feelings of insecurity compared to non-victims but not compared to victims of online-only hate speech.
Discussion
Using a representative dataset of the resident population in Lower Saxony, Germany, we explored the question of whether online hate speech victimization affects feelings of insecurity outside the Internet. For this purpose, we tested three hypotheses using multiple linear regression. We confirmed that online hate speech increases feelings of insecurity outside the Internet compared to non-victims and victims of offline hate speech (H2), which points to differences in coping with different types of victimization, in line with our assumptions based on the neutralization thesis (Sykes & Matza, 1957). One possible explanation is that online hate speech is associated with messages to victims and incitements to like-minded potential perpetrators, which spread uncontrollably via the Internet and affect victims even in protected private spheres (Brown, 2018). In addition, the harmful content may remain visible to the victim and a large audience because the incident does not necessarily violate laws or the terms of use of the communication platforms. Subjectively, it may also be harder to escape the digital space than to terminate a stressful hate speech situation outside the Internet, as smartphones keep people online almost all the time. Another important aspect may be that online hate speech often happens in the digital public sphere and cannot easily be removed from platforms. Moreover, hurtful messages are at least temporarily stored in inboxes and on mobile phones, which may be perceived as an intrusion into personal space, especially since those affected may be "victimized" everywhere they use their phone—even at home (Müller et al., 2022). All these factors combined might increase the vulnerability of online hate speech victims, especially since the recognizable personal characteristics that motivated the perpetrators cannot easily be discarded, and a similarly motivated attack may be possible in other contexts and outside the Internet.
Although the effect size of the association between online hate speech experiences and feelings of insecurity was rather small, it must be interpreted as an average effect across the whole sample and may therefore be higher in individual cases, but of course also lower. It may well be that some aspects of the hate speech experience, for example its motivation or its severity, are associated with more serious effects. Other individual factors (level of education, social networks, etc.) could also play a role, as could situational and contextual characteristics (counter-speech by third parties, social structure of the neighborhood, access to support facilities, etc.). Future research on the connection between online hate speech and feelings of insecurity should therefore consider additional factors. For instance, Hawdon et al. (2017) suggest that exposure to online hate is linked to varying degrees of risky online behavior. Moreover, exposure to online hate material does not always have negative consequences, possibly due to different coping strategies among victims (Obermaier et al., 2018; Obermaier & Schmuck, 2022).
Contrary to our expectations, when key variables are controlled for, offline hate speech victimization does not significantly affect feelings of insecurity compared to non-victims, nor does it have a cumulative reinforcing effect when combined with online hate speech victimization. Thus, H1 and H3 could not be confirmed. However, the analysis could not control for whether and how the reported offline hate speech experiences differed from the reported online hate speech experiences, for example in terms of motivation and severity (Iganski & Lagou, 2015; Mellgren et al., 2017). One indication of possible differences is that offline hate speech was reported more frequently by women and respondents with a migration background, whereas no corresponding correlations were found for online hate speech victimizations (see Appendix Fig. 2).
Some limitations should be mentioned when interpreting the results. First, this is a secondary analysis of a cross-sectional survey; it was not conducted to answer the current research question and does not allow for causal inference. Victimizations were surveyed retrospectively, with the drawback that distant memories may be distorted. As stated in the introduction, hate speech is not necessarily a crime, as the legal assessment depends on the country. Also, the respondents' assessments regarding the illegality of the acts and the motivation of the perpetrators were subjective. However, the question of whether online hate speech has potentially damaging consequences is independent of the legal assessment and depends on the perspective of those affected. To include a sufficiently large number of cases for statistical evaluation, lifetime prevalence had to be used instead of annual prevalence; the victimized thus include all persons who had ever experienced a corresponding act, and the experienced victimizations can therefore also lie further in the past. Since the number of such cases is relatively small, further differentiation between types of severe (online) hate speech victimization and the modeling of interaction effects were not possible. Except for offline hate speech victimization, no other prejudice-motivated types of victimization, group memberships (such as LGBTQ+, religion, etc.), or personal characteristics on which the victimization may have been based were asked about. Corresponding comparisons, for example between xenophobic, homophobic, sexist, or racist acts, could therefore not be made and should be considered in future studies.
Conclusions
The main aim of this study was to examine whether and to what extent the experience of online hate speech affects victims’ sense of security. Overall, we found that online hate speech affects feelings of insecurity, even outside the Internet. Compared to non-victims and victims of offline hate speech, victims of online hate speech exhibit a more pronounced feeling of insecurity outside the Internet. The reasons for this finding may lie in the characteristics of the phenomenon of online hate speech. Since online hate speech exposes and attacks victims based on their personal characteristics and group affiliation, victims and others must fear (renewed) victimization by people who share the perpetrator's mindset at any time, even outside the Internet. This uncertainty thus transfers to the victims’ sense of insecurity outside the Internet.
Because of its unique characteristics, online hate speech can have a profound impact on the psychological well-being of its victims, leading not only to feelings of fear or anxiety but also to insecurity. Our study’s emphasis on the transfer of insecurity from online to offline spaces underscores the interconnectedness of these domains. This interconnectedness underlines the importance of understanding and addressing feelings of insecurity induced by online hate speech, as it challenges traditional boundaries between virtual and real-world experiences. Our results emphasize the urgent need for ongoing efforts to combat online hate speech and its offline ramifications, pointing to the lasting impact it has on victims’ lives and to the importance of specific interventions and support mechanisms. Anti-hate speech initiatives should not only focus on mitigating the spread of hateful online content but also address the psychological consequences and the emotional well-being of the victims. One possible measure is to increase awareness of the issue and its impact on victims’ well-being. Our findings also underline the importance of further judicial analyses as well as collaborative efforts between online platforms and law enforcement to strengthen laws and regulations aimed at combating online hate speech. As the digital landscape continues to evolve, addressing the psychological and societal impact of online hate speech remains a pressing concern. Due to its relevance for fear of crime in general and the increasing prevalence of online hate speech, our results will hopefully encourage further empirical research on the consequences of online hate speech.
Availability of data and materials
The dataset on which this work relies has not been shared publicly. However, the authors are willing to share the data upon request under a data use agreement.
Notes
The assessment of whether (online) hate speech constitutes a crime depends on the country and its specific legal provisions. In Germany, hate speech is punishable if it exceeds the limits of freedom of expression and violates the rights of others. Possible offenses related to hate speech include insult, incitement to hatred, incitement to commit crimes, and approval of crimes.
To minimize the risk of emotional strain, participants were clearly informed about the topic of the survey in the cover letter as well as on the first page of the questionnaire. It was also explicitly stated that participation was voluntary and could be withdrawn at any time without further consequences. In addition, an information sheet provided participants with the contact details of a victim counseling service in case they needed help.
A more detailed description of the sample can be found in Müller et al. (2022).
To test the predictors of the regression models for collinearity, we calculated variance inflation factors (VIF) with the R package "car" under R version 4.2.1. The highest VIF in model 2 is around 1.2, so there is no indication of multicollinearity (James et al., 2013).
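The collinearity check described above can be sketched as follows. This is a minimal illustration in Python rather than the R package "car" used in the paper, and it runs on simulated stand-in predictors (not the survey data): the VIF of predictor j is 1/(1 − R²_j), where R²_j comes from regressing predictor j on all remaining predictors.

```python
import numpy as np

# Simulated stand-ins for model predictors (NOT the survey data):
# a continuous covariate and two binary dummies, drawn independently.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.normal(size=n),                     # e.g. a standardized age variable
    rng.integers(0, 2, n).astype(float),    # e.g. a gender dummy
    rng.integers(0, 2, n).astype(float),    # e.g. a victimization dummy
])

def vif(X):
    """VIF_j = 1 / (1 - R^2_j), regressing column j on the other columns."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])   # intercept + other predictors
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid.var() / y.var()            # OLS residuals have mean zero
        out.append(1.0 / (1.0 - r2))
    return out

print([round(v, 2) for v in vif(X)])  # values near 1 indicate no collinearity
```

Because the simulated predictors are drawn independently, all VIFs land close to 1; values well above 5 or 10 are the conventional warning signs of multicollinearity.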
For model validation, see Fig. 3 in the Appendix. To additionally test the robustness of the findings, a bootstrap procedure was applied: model 2 was estimated repeatedly for 5,000 random samples drawn from the data set. The R package "car" under R version 4.2.1 was used for this purpose (Fox & Weisberg, 2018). The bootstrapping results raise some concerns about robustness, as the significant regression weights of online HSVs and of combined online and offline HSVs were present in only 94% of the bootstrap samples (see Table 4 in the Appendix). Additional research is therefore required to confirm these findings.
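The resampling logic behind this robustness check can be sketched as follows. This Python illustration uses simulated data with a modest true effect (not the survey data, and not the paper's R/"car" implementation): rows are resampled with replacement, the model is refit on each resample, and the distribution of the coefficient across resamples indicates how stable the estimate is.

```python
import numpy as np

# Simulated stand-in data (NOT the survey data): a binary predictor
# (e.g. a victimization dummy) with a modest true effect of 0.4 on the outcome.
rng = np.random.default_rng(7)
n = 400
x = rng.integers(0, 2, n).astype(float)
y = 0.4 * x + rng.normal(size=n)

def ols_slope(x, y):
    """Slope from a simple OLS regression of y on x (with intercept)."""
    A = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1]

B = 2000  # the paper used 5,000 bootstrap replications
slopes = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)          # resample rows with replacement
    slopes[b] = ols_slope(x[idx], y[idx])

lo, hi = np.percentile(slopes, [2.5, 97.5])   # percentile bootstrap 95% CI
print(f"bootstrap 95% CI for the slope: [{lo:.2f}, {hi:.2f}]")
```

A coefficient whose bootstrap interval excludes zero in, say, only 94% of resamples (as reported above) is less robust than one that clears the threshold in virtually every resample, which is why the authors call for additional confirmation.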
References
Agnew, R. S. (1985). Neutralizing the impact of crime. Criminal Justice and Behavior, 12(2), 221–239. https://doi.org/10.1177/0093854885012002005
Awan, I., & Zempi, I. (2016). The affinity between online and offline anti-Muslim hate crime: Dynamics and impacts. Aggression and Violent Behavior, 27, 1–8. https://doi.org/10.1016/j.avb.2016.02.001
Bannenberg, B., Rössner, D., & Coester, M. (2006). Hasskriminalität, extremistische Kriminalität, politisch motivierte Kriminalität und ihre Prävention. In R. Egg (Ed.), Extremistische Kriminalität: Kriminologie und Prävention (pp. 17–59). KrimZ.
Barnes, A., & Ephross, P. H. (1994). The impact of hate violence on victims: Emotional and behavioral responses to attacks. Social Work, 39(3), 247–251.
Belyea, M. J., & Zingraff, M. T. (1988). Fear of crime and residential location. Rural Sociology, 53(4), 473–486.
Benier, K. (2017). The harms of hate: Comparing the neighbouring practices and interactions of hate crime victims, non-hate crime victims and non-victims. International Review of Victimology, 23(2), 1–23. https://doi.org/10.1177/0269758017693087
Berg, M., & Johansson, T. (2016). Trust and safety in the segregated city: Contextualizing the relationship between institutional trust, crime-related insecurity and generalized trust. Scandinavian Political Studies, 39(4), 458–481. https://doi.org/10.1111/1467-9477.12069
Bilewicz, M., & Soral, W. (2020). Hate speech epidemic. The dynamic effects of derogatory language on intergroup relations and political radicalization. Political Psychology, 41(S1), 3–33. https://doi.org/10.1111/pops.12670
Blanco, L., & Ruiz, I. (2013). The impact of crime and insecurity on trust in democracy and institutions. American Economic Review, 103(3), 284–288. https://doi.org/10.1257/aer.103.3.284
Bleich, E. (2011). The rise of hate speech and hate crime laws in liberal democracies. Journal of Ethnic and Migration Studies, 37(6), 917–934. https://doi.org/10.1080/1369183X.2011.576195
Brown, A. (2018). What is so special about online (as compared to offline) hate speech? Ethnicities, 18(3), 297–326. https://doi.org/10.1177/1468796817709846
Castaño-Pulgarín, S. A., Suárez-Betancur, N., Vega, L. M. T., & López, H. M. H. (2021). Internet, social media and online hate speech: Systematic review. Aggression and Violent Behavior, 58, 101608. https://doi.org/10.1016/j.avb.2021.101608
Church, D., & Coester, M. (2021). Opfer von Vorurteilskriminalität: Thematische Auswertung des Deutschen Viktimisierungssurvey 2017 (Forschungsbericht 2021/4). Wiesbaden. https://www.bka.de/SharedDocs/Downloads/DE/Publikationen/Publikationsreihen/Forschungsergebnisse/2021KKFAktuell_OpferVorurteilskriminalitaet.pdf. Accessed 6 Feb 2024
Citron, D. K. (2014). Hate crimes in cyberspace. Harvard University Press.
Cohen-Almagor, R. (2011). Fighting hate and bigotry on the internet. Policy & Internet, 3(3), 89–114. https://doi.org/10.2202/1944-2866.1059
Costello, M., Hawdon, J., & Ratliff, T. N. (2017). Confronting online extremism: The effect of self-help, collective efficacy, and guardianship on being a target for hate speech. Social Science Computer Review, 35(5), 587–605. https://doi.org/10.1177/0894439316666272
de Kimpe, L., Ponnet, K., Walrave, M., Snaphaan, T., Pauwels, L., & Hardyns, W. (2020). Help, I need somebody: Examining the antecedents of social support seeking among cybercrime victims. Computers in Human Behavior, 108, 106310. https://doi.org/10.1016/j.chb.2020.106310
Dreißigacker, A. (2018). Erfahrungen und Folgen von Vorurteilskriminalität: Schwerpunktergebnisse der Dunkelfeldstudie des Landeskriminalamtes Schleswig-Holstein 2017 (KFN-Forschungsbericht No. 145). Hannover. https://kfn.de/wp-content/uploads/2019/03/FB_145.pdf. Accessed 6 Feb 2024
Dreißigacker, A., Riesner, L., & Groß, E. (2020). Vorurteilskriminalität: Ergebnisse der Dunkelfeldstudien der Landeskriminalämter Niedersachsen und Schleswig-Holstein 2017. In C. Grafl, M. Stempkowski, K. Beclin, & I. Haider (Eds.), Neue Kriminologische Schriftenreihe: „Sag, wie hast du's mit der Kriminologie?": Die Kriminologie im Gespräch mit ihren Nachbardisziplinen (Vol. 118, pp. 125–150). Forum Verlag Godesberg. https://doi.org/10.25365/phaidra.213
DuBow, F., McCabe, E., & Kaplan, G. (1979). Reactions to crime: A critical review of the literature. National Institute of Law Enforcement and Criminal Justice.
Ehrlich, H. J., Larcom, B. E. K., & Purvis, R. D. (2003). The traumatic effects of ethnoviolence. In B. Perry (Ed.), Hate and bias crime: A reader (pp. 153–170). Routledge.
Ferraro, K. J., & Johnson, J. M. (1983). How women experience battering: The process of victimization. Social Problems, 30(3), 325–339. https://doi.org/10.2307/800357
Fox, J., & Weisberg, S. (2018). Bootstrapping regression models in R: An appendix to An R Companion to Applied Regression. https://socialsciences.mcmaster.ca/jfox/Books/Companion/appendices/Appendix-Bootstrapping.pdf. Accessed 6 Feb 2024
Fydrich, T., Sommer, G., Tydecks, S., & Brähler, E. (2009). Fragebogen zur sozialen Unterstützung (F-SozU): Normierung der Kurzform (K-14). Zeitschrift für Medizinische Psychologie, 18, 43–48.
Garland, J., Ghazi-Zahedi, K., Young, J.-G., Hébert-Dufresne, L., & Galesic, M. (2022). Impact and dynamics of hate and counter speech online. EPJ Data Science. https://doi.org/10.1140/epjds/s13688-021-00314-6
Gelber, K., & McNamara, L. (2016). Evidencing the harms of hate speech. Social Identities, 22(3), 324–341. https://doi.org/10.1080/13504630.2015.1128810
Geschke, D., Klaßen, A., Quent, M., & Richter, C. (2019). #Hass im Netz: Der schleichende Angriff auf unsere Demokratie: Eine bundesweite repräsentative Untersuchung (Forschungsbericht). Jena.
Green, D. L., & Pomeroy, E. C. (2007). Crime victims: What is the role of social support? Journal of Aggression, Maltreatment & Trauma, 15(2), 97–113. https://doi.org/10.1300/J146v15n02_06
Groß, E., Pfeiffer, H., & Andree, C. (2018). Vorurteilskriminalität (Hate Crime): Erfahrungen und Folgen. Hannover. https://www.lka.polizei-nds.de/download/73836/Sondermodul_Hasskriminalitaet_2017.pdf. Accessed 6 Feb 2024
Hardyns, W., Pauwels, L. J. R., & Heylen, B. (2018). Within-individual change in social support, perceived collective efficacy, perceived disorder and fear of crime: Results from a two-wave panel study. The British Journal of Criminology, 58(5), 1254–1270. https://doi.org/10.1093/bjc/azy002
Hawdon, J., Oksanen, A., & Räsänen, P. (2014). Victims of online groups: American youth's exposure to online hate speech. In J. Hawdon, J. Ryan, & M. Lucht (Eds.), The causes and consequences of group violence: From bullies to terrorists (pp. 165–182). Lexington Books.
Hawdon, J., Oksanen, A., & Räsänen, P. (2017). Exposure to online hate in four nations: A cross-national consideration. Deviant Behavior, 38(3), 254–266. https://doi.org/10.1080/01639625.2016.1196985
Herek, G. M., Cogan, J. C., & Gillis, J. R. (2002). Victim experiences in hate crimes based on sexual orientation. Journal of Social Issues, 58(2), 319–339. https://doi.org/10.1111/1540-4560.00263
Herek, G. M., Gillis, J. R., & Cogan, J. C. (1999). Psychological sequelae of hate-crime victimization among lesbian, gay, and bisexual adults. Journal of Consulting and Clinical Psychology, 67(6), 945–951. https://doi.org/10.1037//0022-006x.67.6.945
Hinduja, S., & Patchin, J. W. (2022). Bias-based cyberbullying among early adolescents: Associations with cognitive and affective empathy. The Journal of Early Adolescence. https://doi.org/10.1177/02724316221088757
Iganski, P. (2019). Hate crime victimization survey: Report. https://www.osce.org/files/f/documents/8/c/424193.pdf. Accessed 6 Feb 2024
Iganski, P., & Lagou, S. (2015). Hate crimes hurt some more than others: Implications for the just sentencing of offenders. Journal of Interpersonal Violence, 30(10), 1696–1718. https://doi.org/10.1177/0886260514548584
Iganski, P., & Lagou, S. (2016). The psychological impact of hate crimes on victims: An exploratory analysis of data from the US National Crime Victimization Survey. In E. Dunbar, A. Blanco, & D. Crèvecoeur-MacPhail (Eds.), The psychology of hate crimes as domestic terrorism: U.S. and global issues (pp. 279–292). ABC-CLIO.
James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An introduction to statistical learning with applications in R (Vol. 103). Springer, New York. https://doi.org/10.1007/978-1-4614-7138-7
Janoff-Bulman, R., & Hanson Frieze, I. (1983). A theoretical perspective for understanding reactions to victimization. Journal of Social Issues, 39(2), 1–17.
Judge, M., & Nel, J. A. (2018). Psychology and hate speech: A critical and restorative encounter. South African Journal of Psychology, 48(1), 15–20. https://doi.org/10.1177/0081246317728165
Kansok-Dusche, J., Ballaschk, C., Krause, N., Zeißig, A., Seemann-Herz, L., Wachs, S., & Bilz, L. (2022). A systematic review on hate speech among children and adolescents: Definitions, prevalence, and overlap with related phenomena. Trauma, Violence, & Abuse. https://doi.org/10.1177/15248380221108070
Keipi, T., Näsi, M., Oksanen, A., & Räsänen, P. (2017). Online hate and harmful content: Cross-national perspectives. Routledge Taylor & Francis Group.
Kliem, S., Mößle, T., Rehbein, F., Hellmann, D. F., Zenger, M., & Brähler, E. (2015). A brief form of the perceived social support questionnaire (F-SozU) was developed, validated, and standardized. Journal of Clinical Epidemiology, 68(5), 551–562. https://doi.org/10.1016/j.jclinepi.2014.11.003
Leets, L. (2002). Experiencing hate speech: Perceptions and responses to anti-Semitism and antigay speech. Journal of Social Issues, 58(2), 341–361. https://doi.org/10.1111/1540-4560.00264
Maruna, S., & Copes, H. (2005). What have we learned from five decades of neutralization research? Crime and Justice, 32, 221–320. https://doi.org/10.1086/655355
McDevitt, J., Balboni, J., Garcia, L., & Gu, J. (2001). Consequences for victims: A comparison of bias- and non-bias-motivated assaults. American Behavioral Scientist, 45(4), 697–713. https://doi.org/10.1177/0002764201045004010
Mellgren, C., Andersson, M., & Ivert, A.-K. (2017). For whom does hate crime hurt more? A comparison of consequences of victimization across motives and crime types. Journal of Interpersonal Violence, 36(3–4), 1–25. https://doi.org/10.1177/0886260517746131
Müller, P., Dreißigacker, A., & Isenhardt, A. (2022). Cybercrime gegen Privatpersonen: Ergebnisse einer repräsentativen Bevölkerungsbefragung in Niedersachsen (KFN-Forschungsbericht No. 168). Hannover. https://kfn.de/wp-content/uploads/Forschungsberichte/FB_168.pdf. Accessed 6 Feb 2024
Näsi, M., Räsänen, P., Hawdon, J., Holkeri, E., & Oksanen, A. (2015). Exposure to online hate material and social trust among Finnish youth. Information Technology & People, 28(3), 607–622. https://doi.org/10.1108/ITP-09-2014-0198
Obermaier, M., Hofbauer, M., & Reinemann, C. (2018). Journalists as targets of hate speech. How German journalists perceive the consequences for themselves and how they cope with it. Studies in Communication and Media, 7(4), 499–524. https://doi.org/10.5771/2192-4007-2018-4-499
Obermaier, M., & Schmuck, D. (2022). Youths as targets: Factors of online hate speech victimization among adolescents and young adults. Journal of Computer-Mediated Communication, 27(4), zmac012. https://doi.org/10.1093/jcmc/zmac012
Ortega, S. T., & Myles, J. L. (1987). Race and gender effects on fear of crime: An interactive model with age. Criminology, 25(1), 133–152. https://doi.org/10.1111/j.1745-9125.1987.tb00792.x
Parker, K. D., & Ray, M. C. (1990). Fear of crime: An assessment of related factors. Sociological Spectrum, 10(1), 29–40. https://doi.org/10.1080/02732173.1990.9981910
Paz, M. A., Montero-Díaz, J., & Moreno-Delgado, A. (2020). Hate speech: A systematized review. SAGE Open, 10(4), 1–12. https://doi.org/10.1177/2158244020973022
Qian, J., Wang, H., ElSherief, M., & Yan, X. (2021). Lifelong learning of hate speech classification on social media. Advance online publication. https://doi.org/10.48550/arXiv.2106.02821
Reed, C. (2009). The challenge of hate speech online. Information & Communications Technology Law, 18(2), 79–82. https://doi.org/10.1080/13600830902812202
Saha, K., Chandrasekharan, E., & de Choudhury, M. (2019). Prevalence and psychological effects of hateful speech in online college communities. In Proceedings of the ACM Conference on Web Science, 2019, 255–264. https://doi.org/10.1145/3292522.3326032
Salmi, V., Smolej, M., & Kivivuori, J. (2007). Crime victimization, exposure to crime news and social trust among adolescents. Young, 15(3), 255–272. https://doi.org/10.1177/110330880701500303
Scarborough, B. K., Like-Haislip, T. Z., Novak, K. J., Lucas, W. L., & Alarid, L. F. (2010). Assessing the relationship between individual characteristics, neighborhood context, and fear of crime. Journal of Criminal Justice, 38(4), 819–826. https://doi.org/10.1016/j.jcrimjus.2010.05.010
Schmidt, A., & Wiegand, M. (2017). A survey on hate speech detection using natural language processing. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media (pp. 1–10). https://doi.org/10.18653/v1/W17-1101
Sheppard, K. G., Lawshe, N. L., & McDevitt, J. (2021). Hate crimes in a cross-cultural context. In H. N. Pontell (Ed.), Oxford research encyclopedia of criminology and criminal justice. Oxford University Press. https://doi.org/10.1093/acrefore/9780190264079.013.564
Smith, W. R., & Torstensson, M. (1997). Gender differences in risk perception and neutralizing fear of crime: Toward resolving the paradoxes. British Journal of Criminology, 37(4), 608–634. https://doi.org/10.1093/oxfordjournals.bjc.a014201
Snedker, K. A. (2015). Neighborhood conditions and fear of crime: A reconsideration of sex differences. Crime & Delinquency, 61(1), 45–70. https://doi.org/10.1177/0011128710389587
Stahel, L., & Baier, D. (2023). Digital hate speech experiences across age groups and their impact on well-being: A nationally representative survey in Switzerland. Cyberpsychology, Behavior and Social Networking, 26(7), 519–526. https://doi.org/10.1089/cyber.2022.0185
Ștefăniță, O., & Buf, D.-M. (2021). Hate speech in social media and its effects on the LGBT community: A review of the current research. Romanian Journal of Communication and Public Relations, 23(1), 47. https://doi.org/10.21018/rjcpr.2021.1.322
Sykes, G. M., & Matza, D. (1957). Techniques of neutralization: A theory of delinquency. American Sociological Review, 22(6), 664–670. https://doi.org/10.2307/2089195
Tseloni, A., & Zarafonitou, C. (2008). Fear of crime and victimization. European Journal of Criminology, 5(4), 387–409. https://doi.org/10.1177/1477370808095123
Tynes, B. M. (2006). Children, adolescents, and the culture of online hate. In N. E. Dowd, D. G. Singer, & R. F. Wilson (Eds.), Handbook of children, culture, and violence (pp. 267–288). Sage Publications.
Tynes, B. M., Rose, C. A., Hiss, S., Umaña-Taylor, A. J., Mitchell, K., & Williams, D. (2016). Virtual environments, online racial discrimination, and adjustment among a diverse, school-based sample of adolescents. International Journal of Gaming and Computer-Mediated Simulations, 6(3), 1–16. https://doi.org/10.4018/ijgcms.2014070101
Wachs, S., Gámez-Guadix, M., & Wright, M. F. (2022). Online hate speech victimization and depressive symptoms among adolescents: The protective role of resilience. Cyberpsychology, Behavior and Social Networking, 25(7), 416–423. https://doi.org/10.1089/cyber.2022.0009
Wachs, S., & Wright, M. F. (2018). Associations between bystanders and perpetrators of online hate: The moderating role of toxic online disinhibition. International Journal of Environmental Research and Public Health. https://doi.org/10.3390/ijerph15092030
Waldron, J. (2012). The harm in hate speech. Harvard University Press.
Walther, J. B. (2022). Social media and online hate. Current Opinion in Psychology, 45, 101298. https://doi.org/10.1016/j.copsyc.2021.12.010
Warner, W., & Hirschberg, J. (2012). Detecting hate speech on the World Wide Web. In Proceedings of the 2012 Workshop on Language in Social Media (pp. 19–26).
Weiss, K. G. (2011). Neutralizing sexual victimization: A typology of victims' non-reporting accounts. Theoretical Criminology, 15(4), 445–467. https://doi.org/10.1177/1362480610391527
Wright, M. F., Wachs, S., & Gámez-Guadix, M. (2021). Youths' coping with cyberhate: Roles of parental mediation and family support. Comunicar, 29(67), 21–33. https://doi.org/10.3916/C67-2021-02
Zick, A., Wolf, C., Küpper, B., Davidov, E., Schmidt, P., & Heitmeyer, W. (2008). The syndrome of group-focused enmity: The interrelation of prejudices tested with multiple cross-sectional and panel data. Journal of Social Issues, 64(2), 363–383. https://doi.org/10.1111/j.1540-4560.2008.00566.x
Acknowledgements
The authors thank the anonymous reviewers for their helpful comments.
Funding
This work is based on the data of the population survey in Lower Saxony 2020 as part of the project Cybercrime against private users funded by the Pro*Niedersachsen funding program of the Lower Saxony Ministry of Science and Culture.
Author information
Authors and Affiliations
Contributions
Author 1: Conceptualization (equal); Formal analysis; Methodology (lead); Visualization; Writing—original draft (equal); Writing—review & editing (equal). Author 2: Investigation; Conceptualization (equal); Writing—original draft (equal); Writing—review & editing (equal). Author 3: Investigation (lead); Project administration; Supervision (project); Writing—review & editing (equal). Author 4: Conceptualization (equal); Methodology (supporting); Supervision; Writing—review & editing (equal).
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article
Dreißigacker, A., Müller, P., Isenhardt, A. et al. Online hate speech victimization: consequences for victims’ feelings of insecurity. Crime Sci 13, 4 (2024). https://doi.org/10.1186/s40163-024-00204-y
Received:
Accepted:
Published:
DOI: https://doi.org/10.1186/s40163-024-00204-y