Abstract: Given the prevalence of online hate speech and the harms and risks it poses to targeted people, democratic discourse and public security, it is necessary to combat online hate speech. For this purpose, internet intermediaries play a crucial role as new governors of online speech. However, there is no universal definition of hate speech, and rules concerning it vary across countries depending on their social, ethical, legal and religious backgrounds. The answer to the question of who can be held liable for online hate speech likewise varies across countries depending on their social, cultural, historical, legal and political backgrounds. The First Amendment, cyberliberalism and the priority of promoting the emerging internet industry have shaped the U.S. model, which offers intermediaries broad exemptions from liability for third-party illegal content. Conversely, the Chinese model of cyberpaternalism prefers to control online content on ideological, political and national security grounds through indirect methods, whereas the European Union (EU) and most European countries, including Germany, choose a middle ground that seeks to balance the restriction of illegal online hate speech against freedom of speech and internet innovation. It is worth noting that there is a heated debate over whether intermediary liability exemptions remain suitable for today's world, and there is a tendency in the EU to expand intermediary liability by imposing obligations on online platforms to tackle illegal hate speech. These reforms, however, are in turn criticized because they could erode the EU legal framework and privatize law enforcement through algorithmic tools. These critical issues relate to the central questions of whether intermediaries should be liable for user-generated illegal hate speech at all and, if so, how they should fulfill such liability. Based on an analysis of the differing standpoints of cyberliberalists and cyberpaternalists on internet regulation, as well as the arguments of proponents and opponents of intermediary liability exemptions, especially the debates over factual impracticality and legal restraints, the impact on internet innovation and the chilling effect on freedom of speech when intermediaries bear liability for illegal third-party content, the paper argues that the arguments for intermediary liability exemptions are no longer tenable or plausible in the Web 3.0 era. The outdated intermediary immunity doctrine needs to be reformed and amended. Furthermore, intermediaries are becoming the new governors of online speech, and platforms now have the power to curtail online hate speech. Attention should therefore turn to the appropriate design of intermediaries' legal responsibilities. Three suggestions are offered: imposing liability on intermediaries for illegal hate speech requires national law and international human rights norms as the outer boundary; openness, transparency and accountability as internal constraints; and the balancing of multiple interests and the involvement of multiple stakeholders in the internet governance regime.