Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One of the relevant capabilities is in-context learning, in which the model receives natural-language instructions or task demonstrations and generates the expected outputs for test instances without additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs under settings ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval. Furthermore, it is found that the Zephyr model achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, it is confirmed that the evaluated models perform well in hate-text detection, as they beat the best result on the HatEval task leaderboard. The error analysis shows that in-context learning had difficulty distinguishing between types of hate speech and figurative language, whereas the fine-tuned approach tends to produce many false positives.
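As a rough illustration of the zero-shot setting described in this abstract, the sketch below prompts an instruction-tuned LLM to label a message. The checkpoint name, prompt wording, and label set are assumptions for demonstration, not the authors' setup.

```python
# Illustrative sketch (not the paper's code): zero-shot hate/sexism screening
# by prompting an instruction-tuned LLM. Checkpoint and labels are assumed.
from transformers import pipeline

pipe = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta")

def classify_zero_shot(text: str) -> str:
    prompt = (
        "Classify the following message as 'sexist', 'hateful' or 'neither'. "
        "Answer with a single word.\n\n"
        f"Message: {text}\nLabel:"
    )
    out = pipe(prompt, max_new_tokens=3, do_sample=False)[0]["generated_text"]
    return out[len(prompt):].strip().lower()   # keep only the generated label

print(classify_zero_shot("Example message to screen."))
```

A few-shot variant would simply prepend a handful of labeled demonstrations to the prompt before the test message.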
Detecting hate speech automatically in social media forensics has emerged as a highly challenging task due to the complex nature of the language used on such platforms. Several methods currently exist for classifying hate speech, but they still suffer from ambiguity when differentiating between hateful and offensive content, and they also lack accuracy. The work suggested in this paper uses a combination of the Whale Optimization Algorithm (WOA) and Particle Swarm Optimization (PSO) to adjust the weights of two Multi-Layer Perceptrons (MLPs) for neutrosophic-set classification. During the training process of the MLP, the WOA is employed to explore and determine the optimal set of weights, after which the PSO algorithm fine-tunes the weights to optimize the MLP's performance. Additionally, two separate MLP models are employed: one MLP is dedicated to predicting degrees of truth membership, while the other focuses on predicting degrees of falsity membership. The difference between these memberships quantifies uncertainty, indicating the degree of indeterminacy in the predictions. The experimental results indicate the superior performance of our model compared to previous work when evaluated on the Davidson dataset.
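The sketch below illustrates one plausible reading of the dual-MLP neutrosophic idea in this abstract: one model estimates truth membership, another estimates falsity membership, and their gap drives an indeterminacy score. Ordinary backprop training and synthetic data stand in for the WOA/PSO weight search and the Davidson dataset.

```python
# Minimal sketch of the dual-MLP neutrosophic idea (assumed reading of the
# abstract, not the authors' code). Standard training replaces WOA/PSO here.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                         # placeholder features (e.g. TF-IDF)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)          # placeholder hate / not-hate labels

mlp_truth = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X, y)
mlp_false = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=1).fit(X, 1 - y)

T = mlp_truth.predict_proba(X)[:, 1]                   # degree of truth membership
F = mlp_false.predict_proba(X)[:, 1]                   # degree of falsity membership
I = 1.0 - np.abs(T - F)                                # small gap between T and F -> high indeterminacy

print("mean indeterminacy:", I.mean())
```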
In recent years, the usage of social networking sites has increased considerably in the Arab world. It has empowered individuals to express their opinions, especially in politics. Furthermore, various organizations that operate in the Arab countries have embraced social media in their day-to-day business activities at different scales, a development attributed to business owners' understanding of social media's importance for business development. However, Arabic morphology is complicated to process because nearly 10,000 roots and more than 900 patterns act as the basis for verbs and nouns. Hate speech over online social networking sites has turned out to be a worldwide issue that reduces the cohesion of civil societies. Against this background, the current study develops a Chaotic Elephant Herd Optimization with Machine Learning for Hate Speech Detection (CEHOML-HSD) model for the Arabic language. The presented CEHOML-HSD model concentrates on identifying and categorising Arabic text as hate speech or normal. To attain this, the CEHOML-HSD model follows several sub-processes. At the initial stage, the model undergoes data pre-processing with the help of a TF-IDF vectorizer. Secondly, the Support Vector Machine (SVM) model is utilized to detect and classify hate speech texts written in Arabic. Lastly, the CEHO approach is employed for fine-tuning the parameters involved in the SVM. The CEHO approach is developed by combining chaotic functions with the classical EHO algorithm, and the design of the CEHO algorithm for parameter tuning constitutes the novelty of the work. A widespread experimental analysis was executed to validate the enhanced performance of the proposed CEHOML-HSD approach, and the comparative study outcomes established its supremacy over other approaches.
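A minimal sketch of the TF-IDF plus SVM stage is given below, with a plain grid search standing in for the chaotic elephant herd optimization that the paper uses to tune the SVM parameters; the texts and parameter grid are placeholders.

```python
# Sketch of the TF-IDF + SVM stage only (assumed pipeline, not the authors' code).
# GridSearchCV stands in for the CEHO parameter search described in the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

texts = [  # placeholder stand-ins for Arabic social media posts
    "placeholder normal post one", "placeholder normal post two",
    "placeholder normal post three", "placeholder normal post four",
    "placeholder hateful post one", "placeholder hateful post two",
    "placeholder hateful post three", "placeholder hateful post four",
]
labels = [0, 0, 0, 0, 1, 1, 1, 1]          # 0 = normal, 1 = hate speech

pipe = Pipeline([("tfidf", TfidfVectorizer()), ("svm", SVC())])
param_grid = {"svm__C": [0.1, 1, 10], "svm__gamma": ["scale", 0.1]}
search = GridSearchCV(pipe, param_grid, cv=2)
search.fit(texts, labels)
print(search.best_params_)
```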
Arabic, one of the world's oldest languages, is characterized by rich and complicated grammatical formats. Furthermore, Arabic morphology can be perplexing because nearly 10,000 roots and 900 patterns form the basis for verbs and nouns. The Arabic language also consists of distinct variations used within a community and in particular situations. Social media sites are a medium for expressing opinions and for social phenomena like racism, hatred, offensive language, and all kinds of verbal violence. Such conduct does not impact particular nations, communities, or groups only; it extends beyond such areas into people's everyday lives. This study introduces an Improved Ant Lion Optimizer with Deep Learning Driven Offensive and Hate Speech Detection (IALODL-OHSD) model on Arabic cross-corpora. The presented IALODL-OHSD model mainly aims to detect and classify offensive/hate speech expressed on social media. In the IALODL-OHSD model, a three-stage process is performed, namely pre-processing, word embedding, and classification. Primarily, data pre-processing is performed to transform the Arabic social media text into a useful format. In addition, the word2vec word embedding process is utilized to produce word embeddings. The attention-based cascaded long short-term memory (ACLSTM) model is utilized for the classification process. Finally, the IALO algorithm is exploited as a hyperparameter optimizer to boost classifier results. To illustrate the results of the IALODL-OHSD model, a detailed set of simulations was performed. The extensive comparison study portrayed the enhanced performance of the IALODL-OHSD model over other approaches.
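The sketch below shows one way the classification stage could look: stacked ("cascaded") LSTMs with an attention layer, preceded by an embedding layer where word2vec vectors could be loaded. The layer sizes are illustrative assumptions, and the ant lion optimizer's hyperparameter search is not reproduced.

```python
# Rough sketch of an attention-based cascaded LSTM classifier (assumed
# architecture, not the authors' code). Hyperparameters are fixed here;
# in the paper they would be tuned by the improved ant lion optimizer.
import tensorflow as tf

VOCAB, EMB_DIM, MAX_LEN = 20000, 100, 80                 # assumed sizes

inp = tf.keras.Input(shape=(MAX_LEN,))
x = tf.keras.layers.Embedding(VOCAB, EMB_DIM)(inp)       # word2vec weights could be loaded here
x = tf.keras.layers.LSTM(64, return_sequences=True)(x)   # first LSTM of the cascade
x = tf.keras.layers.LSTM(64, return_sequences=True)(x)   # second (cascaded) LSTM
att = tf.keras.layers.Attention()([x, x])                # attention over LSTM states
x = tf.keras.layers.GlobalAveragePooling1D()(att)
out = tf.keras.layers.Dense(2, activation="softmax")(x)  # offensive/hate vs. not

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```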
VIOLENCE IS NOT ONLY WRONG, IT'S DISEASED - it's always painful and too often fatal - as with Martin Luther King, no less. When emotions run high, doctors have difficulty making progress - calm, dispassionate review of the obvious evidence is vital, if you aim for less pain and fewer deaths. This paper is based on the self-evident precept that violated children unmistakeably predate violent adults. The remedy, highlighted here, is unusual in all psychiatry, in that it is backed by solid, irrefutable, objective, scientific evidence - from brain scans, no less - at least it is, for those willing to look. The paper has 6 parts: 1. Introduction; 2. Un-memorising Terror; 3. Nutritious Emotions; 4. Tyrannical Revenge; 5. The Way to Cure Nucleargeddon Is Paved With Good Intentions; 6. Conclusion. Parenting is a troubled skill, largely because mis-parenting perpetuates itself. As the poet Philip Larkin says of parents, "they fill you with the faults they had, and add some extra just for you". Larkin moderates his criticism with "they may not mean to, but they do". Sadly, his "solution" - "don't have any kids yourself" - can extinguish the human race as reliably as ever revengeful Emotional Dwarfism will.
Mobile devices with social media applications are the prevalent user equipment for generating and consuming digital hate content. The objective of this paper is to propose a mobile edge computing architecture for regulating and reducing hate content at the user's level. To this end, a profile of hate content is obtained from the results of multiple studies through quantitative and qualitative analyses. The profiling yielded different categories of hate content driven by gender, religion, race, and disability. Based on this information, an architectural framework is developed to regulate and reduce hate content at the user's level in the mobile computing environment. The proposed architecture is a novel approach to reducing hate content generation and its impact.
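Purely as an illustration of user-level filtering against the profiled categories (gender, religion, race, disability), the toy sketch below scores a post against placeholder lexicons; a deployed edge component would rely on a trained model rather than keyword lists, and all names here are hypothetical.

```python
# Toy, on-device filter sketch (not from the paper): score a post against
# the profiled hate categories. Lexicons and threshold are placeholders.
HATE_LEXICONS = {
    "gender": {"<gender-slur-1>", "<gender-slur-2>"},
    "religion": {"<religious-slur-1>"},
    "race": {"<racial-slur-1>"},
    "disability": {"<ableist-slur-1>"},
}

def flag_post(text: str, threshold: int = 1) -> dict:
    tokens = set(text.lower().split())
    hits = {cat: len(tokens & words) for cat, words in HATE_LEXICONS.items()}
    return {"block": sum(hits.values()) >= threshold, "hits": hits}

print(flag_post("an ordinary, harmless post"))
```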
Hate crimes are a cultural phenomenon perceived by most as an occurrence that should be uprooted from society. Yet, to date, we have been unable to do so. Hate crimes are the subject of research and commentary by experts in various fields. In this regard, most scholars agree that a hate-based crime is distinguished from a "regular" criminal offence by the motive: the attack is aimed at a victim who is part of a differentiated minority group. However, when reading the relevant documents in the area, it seems that the differences between the experts start at the most basic point: what constitutes a hate crime? This article analyses the concept of "hate crimes" via an interdisciplinary approach aimed at fleshing out the fundamental gaps in the research. We have found that the problems include, inter alia, discrepancies in the definition of hate crimes, methodological difficulties regarding validity and legitimacy (mainly due to the absence of information based on the attacker's point of view), and the lack of agreement on the appropriate legal methods required to deal with the ramifications of hate crimes. While Part I of this paper revolves around the theoretical aspects of the questions put forth at the centre of this article, Part II looks at the same questions from a legal viewpoint. The correlation between the two parts shows the impact the methodological difficulties have on enforcement endeavors. This relation is further advanced through the examination of test cases from different countries, among them Israel. Finally, the article concludes by suggesting a few thoughts on how to overcome the theoretical problems and make the enforcement efforts more efficient.
Under the banner of freedom of speech, hate speech has become more and more widespread, especially in the past decade. Generally, the constituent elements of hate speech are manifested in four aspects (Jiang, 2015): the way of expression, the object, the intention of expression, and the harmful consequences. Through these four aspects, hate speech can deal a heavy blow to the stability and security of the whole society with the help of social media. Hence, this paper puts forward an analysis method for recognizing and resisting hate speech under different conditions.
Frankenstein, the first science fiction novel in the world, mainly tells the story of a young scientist, Victor Frankenstein: how he created the monster and how the monster destroyed his life. In the novel, love exists in everyone's heart, including the monster's. On the other side, hate also exists in the novel's characters. Both love and hate are depicted in the novel, and in the end love proves more powerful than hate and overcomes it.
The internet has brought together people from diverse cultures, backgrounds, and languages, forming a global community. However, this unstoppable growth in online presence and user numbers has introduced several new challenges. The structure of the cyberspace panopticon, the utilization of big data and its manipulation by interest groups, and the emergence of various ethical issues in digital media, such as deceptive content, deepfakes, and echo chambers, have become significant concerns. When combined with the characteristics of digital dissemination and rapid global interaction, these factors have paved the way for ethical problems related to the production, proliferation, and legitimization of hate speech. Moreover, certain images have gained widespread acceptance as though they were real, despite having no factual basis. The realization that much of the information and imagery considered true is, in fact, a virtual illusion has become a commonly discussed truth. The alarming increase and growing legitimacy of hate speech within the digital realm, made possible by social media, are leading us toward an unavoidable outcome. This study aims to investigate the reality of hate speech in this context. To achieve this goal, the research question is formulated as follows: "Does social media, particularly Twitter, contain content that includes hate speech, incendiary information, and news?" The study's population is social media, with the sample consisting of hate speech content found on Twitter. Qualitative research methods are employed in this study.
The general problem of this research was how students respond to hate speech. The purpose of the study was to obtain an overview of students' (1) perceptions, (2) attitudes, and (3) actions/participation regarding hate speech. The research approach was quantitative and descriptive, using a survey method. The population of this study comprised all the administrators of the student executive boards at UNTAN, IAIN, and IKIP PGRI Pontianak, totaling 162 students. The sample size of 115 students was determined by Slovin's formula, and respondents were chosen randomly. Data were collected with a questionnaire and analysed using percentage-based quantitative descriptive techniques. The general conclusion of the study is that student responses to hate speech are good. The specific conclusions are: (1) regarding perceptions (knowledge) of hate speech, on average 78.26% of students know about hate speech and 21.74% do not; (2) regarding attitudes, on average 78.14% of students do not agree with hate speech and 21.86% agree; and (3) regarding actions or participation, on average 78.51% of students have never taken part in hate speech and 21.49% have.
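For reference, the reported sample of 115 follows from Slovin's formula n = N / (1 + N e^2) with N = 162, assuming the conventional 5% margin of error (the abstract does not state e):

```python
# Worked check of the sample size via Slovin's formula n = N / (1 + N * e^2).
N = 162            # population: student executive board members
e = 0.05           # assumed margin of error (not stated in the abstract)
n = N / (1 + N * e**2)
print(round(n))    # -> 115, matching the reported sample size
```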
Purpose - Hate speech is an expression of intense hatred. Twitter has become a popular analytical tool for the prediction and monitoring of abusive behaviors. Hate speech detection with social media data has received special research attention in recent studies; hence the need to design a generic metadata architecture and an efficient feature extraction technique to enhance hate speech detection. Design/methodology/approach - This study proposes hybrid embeddings enhanced with a topic inference method and an improved cuckoo search neural network for hate speech detection in Twitter data. The proposed method uses a hybrid embeddings technique that includes Term Frequency-Inverse Document Frequency (TF-IDF) for word-level feature extraction and Long Short-Term Memory (LSTM), a variant of the recurrent neural network architecture, for sentence-level feature extraction. The extracted features from the hybrid embeddings then serve as input into the improved cuckoo search neural network for the prediction of a tweet as hate speech, offensive language, or neither. Findings - The proposed method showed better results when tested on the collected Twitter datasets compared to other related methods. To validate its performance, a t-test and post hoc multiple comparisons were used to compare the significance and means of the proposed method against other related methods for hate speech detection, and a paired-sample t-test was also conducted. Research limitations/implications - The evaluation results showed that the proposed method outperforms other related methods with a mean F1-score of 91.3. Originality/value - The main novelty of this study is the use of an automatic topic spotting measure based on a naïve Bayes model to improve feature representation.
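The sketch below illustrates the hybrid-embedding construction described here, with TF-IDF word-level features concatenated to LSTM sentence-level features; a plain dense classifier stands in for the improved cuckoo search neural network, and the tweets and labels are placeholders.

```python
# Sketch of the hybrid-embedding idea (assumed construction, not the authors'
# code): TF-IDF word-level features + LSTM sentence-level features, concatenated.
import numpy as np
import tensorflow as tf
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = ["example tweet one", "another example tweet", "a third sample tweet", "one more tweet"]
labels = np.array([0, 1, 2, 0])                         # 0 = neither, 1 = offensive, 2 = hate

tfidf = TfidfVectorizer().fit_transform(tweets).toarray()          # word-level features

tok = tf.keras.preprocessing.text.Tokenizer(num_words=1000)
tok.fit_on_texts(tweets)
seqs = tf.keras.preprocessing.sequence.pad_sequences(tok.texts_to_sequences(tweets), maxlen=10)
encoder = tf.keras.Sequential([tf.keras.layers.Embedding(1000, 32), tf.keras.layers.LSTM(16)])
sent_feats = encoder.predict(seqs)                                  # sentence-level features

hybrid = np.hstack([tfidf, sent_feats])                             # hybrid embedding

clf = tf.keras.Sequential([tf.keras.layers.Dense(32, activation="relu"),
                           tf.keras.layers.Dense(3, activation="softmax")])
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
clf.fit(hybrid, labels, epochs=2, verbose=0)            # stand-in for the cuckoo-search-trained network
```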
Spatial prediction of any geographic phenomenon can be an intractable problem. Predicting sparse and uncertain spatial events related to many influencing factors necessitates the integration of multiple data sources. We present an innovative approach that combines data in a Discrete Global Grid System (DGGS) and uses machine learning for analysis. A DGGS provides a structured input for multiple types of spatial data, consistent over multiple scales. This data framework facilitates the training of an Artificial Neural Network (ANN) to map and predict a phenomenon. Spatial lag regression models (SLRM) are used to evaluate and rank the outputs of the ANN. In our case study, we predict hate crimes in the USA. Hate crimes receive attention from the mass media and the scientific community, but data on such events is sparse. We trained the ANN with data ingested into the DGGS based on a 50% sample of hate crimes as identified by the Southern Poverty Law Center (SPLC). Our spatial prediction is up to 78% accurate and is verified at the state level against the independent FBI hate crime statistics with a fit of 80%. The derived risk maps are a guide to action for policy makers and law enforcement.
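As a toy illustration of the prediction step only, the sketch below fits a small neural network to synthetic per-grid-cell covariates and counts; the DGGS ingestion and the spatial lag regression used to evaluate and rank the ANN outputs are not reproduced.

```python
# Toy sketch of the ANN prediction step (assumed setup, not the authors' pipeline):
# map per-cell covariates from a discrete global grid to hate-crime counts.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 8))                                       # per-cell covariates (population, income, ...)
y = np.maximum(0, X @ rng.normal(size=8) + rng.normal(size=500))    # synthetic event counts

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out cells:", round(ann.score(X_te, y_te), 3))
```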
Considering the prevalence of online hate speech and its harm and risks to the targeted people, democratic discourse, and public security, it is necessary to combat online hate speech. For this purpose, internet intermediaries play a crucial role as new governors of online speech. However, there is no universal definition of hate speech. Rules concerning it vary across countries depending on their social, ethical, legal, and religious backgrounds. The answer to the question of who can be held liable for online hate speech also varies across countries depending on their social, cultural, historical, legal, and political backgrounds. The First Amendment, cyberliberalism, and the priority of promoting the emerging internet industry led to the U.S. model, which offers intermediaries wide exemptions from liability for third-party illegal content. Conversely, the Chinese model of cyberpaternalism prefers to control online content on ideological, political, and national security grounds through indirect methods, whereas the European Union (EU) and most European countries, including Germany, choose the middle ground to achieve a balance between restricting illegal online hate speech and protecting freedom of speech as well as internet innovation. It is worth noting that there is a heated discussion on whether intermediary liability exemptions are still suitable for the world today, and there is a tendency in the EU to expand intermediary liability by imposing obligations on online platforms to tackle illegal hate speech. However, these reforms are in turn criticized because they could lead to erosion of the EU legal framework as well as privatization of law enforcement through algorithmic tools. These critical issues relate to the central questions of whether intermediaries should be liable for user-generated illegal hate speech at all and, if so, how they should fulfill these liabilities. Based on an analysis of the different basic standpoints of cyberliberalists and cyberpaternalists on internet regulation, as well as the arguments of proponents and opponents of intermediary liability exemptions (especially the debates over factual impracticality and legal restraints, the impact on internet innovation, and the chilling effect on freedom of speech when intermediaries bear liability for illegal third-party content), the paper argues that the arguments for intermediary liability exemptions are no longer tenable or plausible in the web 3.0 era. The outdated intermediary immunity doctrine needs to be reformed and amended. Furthermore, intermediaries are becoming the new governors of online speech, and platforms now have the power to curtail online hate speech. Thus, attention should turn to the appropriate design of the legal responsibilities of intermediaries. Three suggestions follow: imposing liability on intermediaries for illegal hate speech requires national law and international human rights norms as the outer boundary; openness, transparency, and accountability as internal constraints; and a balance of multiple interests and the involvement of multiple stakeholders in the internet governance regime.
Some years ago I worked on a merger between my company and another-although, in my opinion, there is no such thing as a merger. There is only a process by which one company gobbles up a competitor. Sometimes it's a big fish downing a little one. On other occasions, a tiny fish with big leverage swallows an entity ten times its size, like a snake eating a boar.