Abstract: Government credibility is an important asset of contemporary national governance, an important criterion for evaluating government legitimacy, and a key factor in measuring the effectiveness of government governance. In recent years, research on government credibility has focused mostly on theories and mechanisms, with little empirical work on the topic. This article applies variable selection models from social statistics to the issue of government credibility, in order to study it empirically and to identify its core influencing factors from a statistical perspective. Specifically, the article uses four regression-based methods and three random-forest-based methods to study the influencing factors of government credibility across China's provinces, and compares the performance of these seven variable selection methods along several dimensions. The results show that the methods differ in parsimony, accuracy, and variable importance rankings, and therefore carry different weight in the study of government credibility. This study provides a methodological reference for variable selection models in social science research, and also offers a multidimensional comparative perspective for analyzing the influencing factors of government credibility.
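The abstract does not name the seven specific methods, so as a minimal sketch of the two families it describes, the following pairs one regression-based selector (LASSO) with one random-forest importance ranking on synthetic data; the data shapes and method choices are illustrative assumptions, not the paper's setup.

```python
# Illustrative sketch only: one regression-based selector (LASSO) and one
# random-forest-based ranking, on synthetic data standing in for
# provincial indicators. Not the paper's actual seven methods.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(31, 10))   # e.g. 31 provinces x 10 candidate factors
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.1, size=31)

# Regression-based selection: LASSO shrinks irrelevant coefficients to zero.
lasso = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(lasso.coef_ != 0)

# Random-forest-based selection: rank variables by impurity importance.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]

print("LASSO keeps features:", selected)
print("RF importance ranking:", ranking)
```

The two families answer the question differently: LASSO returns a sparse subset, while the forest returns a full ranking, which is one reason the abstract compares them on parsimony and importance ordering rather than on a single score.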
Funding: supported by the National Natural Science Foundation of China (62033010) and the Aeronautical Science Foundation of China (2019460T5001).
Abstract: The theoretical model used in Kalman filtering is quite often not sufficiently accurate for practical applications, because the noise covariances are not exactly known. Our previous work reveals that in such a scenario the filter-calculated mean square error (FMSE) and the true mean square error (TMSE) become inconsistent, whereas FMSE and TMSE are consistent in a Kalman filter with an accurate model. This can lead to low credibility of the state estimate regardless of whether Kalman filters or adaptive Kalman filters are used. It is therefore important to study this inconsistency, since it is vital to understand the quantitative influence induced by the inaccurate model. To this end, the concept of credibility is adopted to discuss the inconsistency problem in this paper. To formulate the degree of credibility, a trust factor is constructed based on the FMSE and the TMSE. However, the trust factor cannot be computed directly, since the TMSE is unavailable in practical applications. Based on the definition of the trust factor, its estimation is reduced to online estimation of the TMSE. More importantly, a necessary and sufficient condition is found, which turns out to be the basis for designing Kalman filters with higher performance. Accordingly, beyond trust factor estimation with the Sage-Husa technique (TFE-SHT), three novel trust factor estimation methods are proposed: a direct numerical solving method (TFE-DNS), a particle swarm optimization method (PSO), and an expectation maximization-particle swarm optimization method (EM-PSO). Both analysis and simulation results show that the proposed TFE-DNS outperforms TFE-SHT when a single noise covariance is unknown. Meanwhile, the proposed EM-PSO performs better than EM and PSO in estimating the credibility degree and the state when both noise covariances must be estimated online.
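The FMSE-TMSE inconsistency the abstract describes can be reproduced in a few lines: run a scalar Kalman filter whose assumed process noise is wrong, and compare the covariance the filter reports against the Monte-Carlo error. This is a sketch of the phenomenon only; the noise values are assumptions, and the paper's trust factor construction and estimation methods are not reproduced.

```python
# Minimal scalar sketch of FMSE-TMSE inconsistency under a mismatched
# process noise covariance (assumed values; not the paper's setup).
import numpy as np

rng = np.random.default_rng(1)
Q_true, R = 1.0, 0.5     # true process / measurement noise variances
Q_assumed = 0.01         # filter's (wrong) process noise variance
n_runs, n_steps = 500, 50

sq_err = 0.0
for _ in range(n_runs):
    x, xhat, P = 0.0, 0.0, 1.0
    for _ in range(n_steps):
        x = x + rng.normal(scale=np.sqrt(Q_true))    # true random walk
        z = x + rng.normal(scale=np.sqrt(R))         # measurement
        P = P + Q_assumed                            # predict with wrong Q
        K = P / (P + R)                              # Kalman gain
        xhat = xhat + K * (z - xhat)                 # measurement update
        P = (1 - K) * P
    sq_err += (x - xhat) ** 2

fmse = P                  # filter-calculated MSE at the final step
tmse = sq_err / n_runs    # Monte-Carlo estimate of the true MSE
print(f"FMSE={fmse:.3f}  TMSE={tmse:.3f}")
```

Because the filter underestimates the process noise, it is overconfident: the reported FMSE settles far below the actual TMSE, which is exactly the gap a trust factor built from the two quantities is meant to measure.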
Abstract: The growth of the internet and related technology has had a significant effect on social interactions. False information has become an important research topic due to the massive amount of misinformed content on social networks. It is very easy for any user to spread misinformation through the media, so misinformation is a problem for professionals, organizations, and societies. Hence, it is essential to assess the credibility and validity of news articles shared on social media. The core challenge is to distinguish accurate information from false information. Recent studies focus on news article content, such as titles and descriptions, which limits their achievements. However, there are two commonly agreed-upon feature groups for misinformation: first, the title and text of an article, and second, user engagement. For the news context, we extracted different user engagements with articles, for example tweets (read-only), retweets, likes, and shares. We calculate user credibility and combine it with the article content and the user's context. After combining both feature groups, we used three natural language processing (NLP) feature extraction techniques: Term Frequency-Inverse Document Frequency (TF-IDF), Count-Vectorizer (CV), and Hashing-Vectorizer (HV). Then we applied different machine learning classifiers to label articles as real or fake: Support Vector Machine (SVM), Naive Bayes (NB), Random Forest (RF), Decision Tree (DT), Gradient Boosting (GB), and K-Nearest Neighbors (KNN). The proposed method has been tested on a real-world dataset, "FakeNewsNet", which we refined according to our required features. The dataset contains 23,000+ articles with millions of user engagements. The highest accuracy score is 93.4%, achieved with count vector features and a random forest classifier. Our findings confirm that the proposed classifier effectively classifies misinformation in social networks.
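The text side of the pipeline described above can be sketched with scikit-learn: each of the three vectorizers feeds a classifier, and the abstract reports the CV + random forest pair as the most accurate. The toy in-memory texts below stand in for FakeNewsNet, and the user-credibility features the paper combines with the text are omitted, so this is a shape-of-the-pipeline sketch rather than the authors' implementation.

```python
# Hedged sketch of the vectorizer + classifier pipeline; toy data stands
# in for FakeNewsNet, and user-credibility features are omitted.
from sklearn.feature_extraction.text import (
    TfidfVectorizer, CountVectorizer, HashingVectorizer)
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

texts = ["official report confirms economic growth figures",
         "shocking secret cure doctors don't want you to know",
         "city council approves new transit budget",
         "celebrity miracle diet melts fat overnight"]
labels = [1, 0, 1, 0]   # 1 = real, 0 = fake (toy labels)

# Same classifier behind each of the three feature extraction techniques.
results = {}
for name, vec in [("TF-IDF", TfidfVectorizer()),
                  ("CV", CountVectorizer()),
                  ("HV", HashingVectorizer(n_features=2**10))]:
    clf = make_pipeline(vec, RandomForestClassifier(n_estimators=100,
                                                    random_state=0))
    clf.fit(texts, labels)
    results[name] = int(clf.predict(["miracle cure melts fat"])[0])

print(results)  # 0/1 label per feature-extraction technique
```

Swapping the forest for SVM, NB, DT, GB, or KNN is a one-line change in the pipeline, which is what makes the 3-vectorizer x 6-classifier comparison in the paper cheap to run.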