The growth of the internet and technology has had a significant effect on social interactions. False information has become an important research topic due to the massive amount of misinformed content on social networks. It is very easy for any user to spread misinformation through the media; misinformation is therefore a problem for professionals, organizations, and societies. Hence, it is essential to assess the credibility and validity of the news articles being shared on social media. The core challenge is to distinguish between accurate and false information. Recent studies focus on news article content, such as titles and descriptions, which has limited their achievements. However, there are two commonly agreed-upon features of misinformation: first, the title and text of an article, and second, user engagement. For the news context, we extracted different user engagements with articles, for example, tweets (read-only), retweets, likes, and shares. We calculate user credibility and combine it with the article content and the user's context. After combining both features, we used three natural language processing (NLP) feature extraction techniques: Term Frequency-Inverse Document Frequency (TF-IDF), Count-Vectorizer (CV), and Hashing-Vectorizer (HV). We then applied different machine learning classifiers to label articles as real or fake: Support Vector Machine (SVM), Naive Bayes (NB), Random Forest (RF), Decision Tree (DT), Gradient Boosting (GB), and K-Nearest Neighbors (KNN). The proposed method has been tested on a real-world dataset, FakeNewsNet, whose repository we refined according to our required features. The dataset contains 23,000+ articles with millions of user engagements. The highest accuracy score, 93.4%, is achieved using count-vector features and a Random Forest classifier. Our findings confirmed that the proposed classifier can effectively classify misinformation in social networks.
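The best-performing combination reported above (count-vector features feeding a Random Forest) can be sketched as a scikit-learn pipeline. The toy headlines and labels below are illustrative placeholders, not the FakeNewsNet data, and the hyperparameters are defaults rather than the authors' settings.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline

# Count-vector features -> Random Forest, the pairing reported as most accurate.
pipeline = Pipeline([
    ("vectorizer", CountVectorizer()),
    ("classifier", RandomForestClassifier(n_estimators=100, random_state=0)),
])

# Illustrative placeholder data (1 = real, 0 = fake).
texts = [
    "scientists confirm vaccine safety in large trial",
    "miracle cure hidden by doctors revealed",
    "official report details economic growth figures",
    "shocking secret the government does not want you to know",
]
labels = [1, 0, 1, 0]

pipeline.fit(texts, labels)
preds = pipeline.predict(["new study confirms safety findings"])
```

Swapping `CountVectorizer` for `TfidfVectorizer` or `HashingVectorizer` reproduces the other two feature-extraction variants the abstract compares.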
Despite the salience of misinformation and its consequences, there remains a considerable gap in research on the broader tendencies in collective cognition that compel individuals to spread misinformation so excessively. This study examined social learning as an antecedent of engaging with misinformation online. Using data released by Twitter for academic research in 2018, tweets that included URL news links to both known misinformation and reliable domains were analyzed. Lindström's computational reinforcement learning model was adapted as an expression of social learning, in which a Twitter user's posting frequency of news links depends on the relative engagement received in consequence. The research found that those who shared misinformation were highly sensitive to social reward. Inflation of positive social feedback was associated with a decrease in posting latency, indicating that users who posted misinformation were strongly influenced by social learning. However, the posting frequency of authentic news sharers remained fixed, even after receiving an increase in relative and absolute engagement. The results identified social learning as a contributor to the spread of misinformation online. In addition, behavior driven by social validation suggests a positive correlation between posting frequency, gratification received from posting, and a growing mental-health dependency on social media. Developing interventions against the spread of misinformation online may profit from assessing which online environments amplify social learning, particularly the conditions under which misinformation proliferates.
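The reward-learning mechanism described above can be illustrated with a minimal Rescorla-Wagner-style value update, in which posting latency shrinks as expected social reward grows. This is a schematic sketch, not Lindström's actual model; the function names, learning rate, and latency mapping are illustrative assumptions.

```python
def update_value(v, reward, alpha=0.3):
    """One reinforcement-learning update: move the expected social
    reward v toward the observed reward at learning rate alpha."""
    return v + alpha * (reward - v)

def posting_latency(v, base_hours=24.0):
    """Illustrative mapping from expected reward to posting latency:
    the more engagement a user has come to expect, the shorter the
    wait before the next post."""
    return base_hours / (1.0 + max(v, 0.0))

# Simulate a user whose posts keep receiving strong positive feedback.
v = 0.0
for reward in [5.0, 5.0, 5.0]:
    v = update_value(v, reward)

before = posting_latency(0.0)  # latency before any social reward
after = posting_latency(v)     # latency after repeated positive feedback
```

Under this sketch, inflating positive feedback raises `v` and drives `after` well below `before`, mirroring the decreased posting latency observed for misinformation sharers.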
Health information seeking has a long history. Health information empowers consumers to make informed health decisions and plays a significant role in many health activities, such as self-diagnosis, chronic disease management, and patient-physician communication. The effectiveness of such empowerment rests on one common assumption, namely information quality: accurate information can foster consumers' informed health decisions, while distorted information can lead to severe health crises.
Cyberterrorism poses a significant threat to the national security of the United States of America (USA), with critical infrastructure, such as commercial facilities, dams, emergency services, food and agriculture, healthcare and public health, and transportation systems, virtually at risk. This is due primarily to the country's heavy dependence on computer networks. With both domestic and international terrorists increasingly targeting vulnerabilities in computer systems and networks, information sharing among security agencies has become critical. Cyberterrorism can be regarded as the purest form of information warfare. This literature review examines cyberterrorism and strategic communications, focusing on domestic cyberterrorism. Notable themes include the meaning of cyberterrorism, how cyberterrorism differs from cybercrime, and the threat posed by cyberterrorism to the USA. Prevention and deterrence of cyberterrorism through information sharing and legislation are also key themes. Finally, gaps in knowledge are identified, and questions warranting additional research are outlined.
Data visualization blends art and science to convey stories from data via graphical representations. Given the variety of problems, applications, requirements, and design goals, it is challenging to combine these two components at their full force. While the art component involves creating visually appealing and easily interpreted graphics for users, the science component requires accurate representations of a large amount of input data. Without the science component, visualization cannot serve its role of creating correct representations of the actual data, leading to wrong perception, interpretation, and decisions. It might be even worse if incorrect visual representations were intentionally produced to deceive viewers. To address common pitfalls in graphical representations, this paper focuses on identifying and understanding the root causes of misinformation in graphical representations. We reviewed misleading data visualization examples in scientific publications collected from indexing databases and then projected them onto the fundamental units of visual communication, such as color, shape, size, and spatial orientation. Moreover, a text mining technique was applied to extract practical insights from common visualization pitfalls. Cochran's Q test and McNemar's test were conducted to examine whether there is any difference in the proportions of common errors among color, shape, size, and spatial orientation. The findings showed that the pie chart is the most misused graphical representation and that size is the most critical issue. Statistically significant differences were also observed in the proportions of errors among color, shape, size, and spatial orientation.
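McNemar's test, used above for pairwise comparisons of error proportions, reduces to a simple chi-square statistic on the two discordant cells of a paired 2x2 table. The sketch below implements it with the standard library only; the counts are hypothetical, not the paper's data, and Cochran's Q (which generalizes this to more than two categories and needs a chi-square distribution with k-1 degrees of freedom) is omitted.

```python
import math

def mcnemar(b, c):
    """McNemar's chi-square test with continuity correction for paired
    binary outcomes; b and c are the two discordant-cell counts.
    Returns the test statistic and its p-value (1 degree of freedom)."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    # A chi-square variable with 1 df is a squared standard normal,
    # so its survival function is erfc(sqrt(stat / 2)).
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

# Hypothetical paired counts: 25 charts misusing color but not size,
# 10 misusing size but not color.
stat, p = mcnemar(25, 10)
```

With these illustrative counts the statistic is 5.6 and the difference is significant at the 0.05 level.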
Purpose: The rapid advancement of technology in online communication and fingertip access to the Internet have resulted in the expedited dissemination of fake news, engaging a global audience at low cost through news channels, freelance reporters, and websites. Amid the coronavirus disease 2019 (COVID-19) pandemic, individuals are inundated with false and potentially harmful claims and stories, which may harm the vaccination process. Psychological studies reveal that the human ability to detect deception is only slightly better than chance; therefore, there is a growing need to develop automated strategies to combat fake news that traverses these platforms at an alarming rate. This paper systematically reviews existing fake news detection technologies by exploring various machine learning and deep learning techniques pre- and post-pandemic, which, to the best of the authors' knowledge, has not been done before. Design/methodology/approach: The detailed literature review on fake news detection is divided into three major parts. The authors searched papers published in 2017 or later on fake news detection approaches based on deep learning and machine learning. The papers were initially searched through the Google Scholar platform and scrutinized for quality, with "Scopus" and "Web of Science" kept as quality indexing parameters. All research gaps and available databases, data pre-processing and feature extraction techniques, and evaluation methods for current fake news detection technologies have been explored and illustrated using tables, charts, and trees. Findings: The paper is divided into two approaches, namely machine learning and deep learning, to present a better understanding and a clear objective. The authors then present a viewpoint on which approach is better, along with future research trends, issues, and challenges for researchers, given the relevance and urgency of a detailed and thorough analysis of existing models. The paper also delves into fake news detection during COVID-19, from which it can be inferred that research and modeling are shifting toward the use of ensemble approaches. Originality/value: The study also identifies several novel automated web-based approaches used by researchers to assess the validity of pandemic news that have proven successful, although currently reported accuracy has not yet reached consistent levels in the real world.
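The ensemble approaches noted in the findings often combine several detectors by majority vote. The snippet below is a generic hard-voting sketch, not a method from the reviewed papers; the per-model predictions are hypothetical.

```python
from collections import Counter

def hard_vote(model_predictions):
    """Majority-vote ensemble: for each article, return the label that
    most of the individual models predicted."""
    return [Counter(labels).most_common(1)[0][0]
            for labels in zip(*model_predictions)]

# Hypothetical per-article labels from three detectors (1 = real, 0 = fake).
preds_a = [1, 0, 0, 1]
preds_b = [1, 0, 1, 1]
preds_c = [0, 0, 0, 1]
combined = hard_vote([preds_a, preds_b, preds_c])  # -> [1, 0, 0, 1]
```

With an odd number of binary classifiers no ties can occur; soft voting (averaging predicted probabilities) is a common variant when the models expose confidence scores.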
Funding/acknowledgment: The author would like to thank the anonymous reviewers and respected editors for taking valuable time to go through the manuscript.