Funding: supported by the National Natural Science Foundation of China (62033010) and the Aeronautical Science Foundation of China (2019460T5001).
Abstract: In practice, the theoretical model used in Kalman filtering is often not sufficiently accurate, because the noise covariances are not exactly known. Our previous work reveals that in such a scenario the filter-calculated mean square error (FMSE) and the true mean square error (TMSE) become inconsistent, whereas FMSE and TMSE are consistent in a Kalman filter with an accurate model. This can lead to low credibility of the state estimate, whether a Kalman filter or an adaptive Kalman filter is used. Studying this inconsistency is therefore important, since it is vital to understand the quantitative influence of inaccurate models. To this end, this paper adopts the concept of credibility to discuss the inconsistency problem. To quantify the degree of credibility, a trust factor is constructed from the FMSE and the TMSE. However, the trust factor cannot be computed directly, since the TMSE is unavailable in practical applications. Based on the definition of the trust factor, its estimation is reduced to online estimation of the TMSE. More importantly, a necessary and sufficient condition is found, which turns out to be the basis for designing Kalman filters with higher performance. Accordingly, beyond trust factor estimation with the Sage-Husa technique (TFE-SHT), three novel trust factor estimation methods are proposed: a direct numerical solving method (TFE-DNS), a particle swarm optimization method (PSO) and an expectation maximization-particle swarm optimization method (EM-PSO). Both analysis and simulation results show that the proposed TFE-DNS outperforms TFE-SHT in the case of a single unknown noise covariance. Meanwhile, the proposed EM-PSO consistently outperforms EM and PSO in estimating the credibility degree and the state when both noise covariances must be estimated online.
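The FMSE/TMSE inconsistency described in this abstract can be reproduced with a toy experiment. The sketch below is a minimal scalar Kalman filter, not the paper's method; the random-walk model, the noise levels and the Monte Carlo setup are all illustrative assumptions. It compares the filter-calculated error covariance (FMSE) with a Monte Carlo estimate of the true MSE (TMSE) when the assumed measurement noise covariance differs from the true one:

```python
import random

def kalman_mse(R_true, R_assumed, Q=0.01, steps=200, trials=500, seed=0):
    """Scalar random-walk Kalman filter; returns (FMSE, TMSE).

    FMSE: filter-calculated MSE, i.e. the final error covariance P.
    TMSE: true MSE, estimated by Monte Carlo over independent runs.
    """
    rng = random.Random(seed)
    fmse, sq_err = 0.0, 0.0
    for _ in range(trials):
        x, xhat, P = 0.0, 0.0, 1.0
        for _ in range(steps):
            x += rng.gauss(0.0, Q ** 0.5)          # true state: random walk
            P += Q                                  # predicted covariance
            z = x + rng.gauss(0.0, R_true ** 0.5)   # measurement with the TRUE noise
            K = P / (P + R_assumed)                 # gain uses the ASSUMED noise
            xhat += K * (z - xhat)
            P = (1.0 - K) * P
        fmse = P                                    # P is deterministic, same every run
        sq_err += (x - xhat) ** 2
    return fmse, sq_err / trials
```

With `R_assumed == R_true` the two values agree; with an overconfident `R_assumed` the filter reports a much smaller error than it actually achieves, so a trust-factor-style ratio of the two drops well below 1. The exact trust-factor definition in the paper is not reproduced here.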
Abstract: The growth of the internet and technology has had a significant effect on social interactions. False information has become an important research topic due to the massive amount of misinformed content on social networks. It is very easy for any user to spread misinformation through the media, which makes misinformation a problem for professionals, organizations and societies. Hence, it is essential to assess the credibility and validity of news articles being shared on social media. The core challenge is to distinguish between accurate and false information. Recent studies focus on news article content, such as titles and descriptions, which has limited their achievements. However, there are two commonly agreed-upon feature groups for misinformation: first, the title and text of an article, and second, user engagement. For the news context, we extracted different user engagements with articles, for example tweets (read-only), retweets, likes and shares. We calculate user credibility and combine it with the article content and the user's context. After combining both feature groups, we used three natural language processing (NLP) feature extraction techniques: Term Frequency-Inverse Document Frequency (TF-IDF), Count-Vectorizer (CV) and Hashing-Vectorizer (HV). We then applied different machine learning classifiers to label articles as real or fake: Support Vector Machine (SVM), Naive Bayes (NB), Random Forest (RF), Decision Tree (DT), Gradient Boosting (GB) and K-Nearest Neighbors (KNN). The proposed method was tested on a real-world dataset, "fakenewsnet", which we refined according to our required features; it contains more than 23,000 articles with millions of user engagements. The highest accuracy score is 93.4%, achieved with count vector features and a Random Forest classifier. Our findings confirm that the proposed classifier can effectively classify misinformation in social networks.
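The three feature-extraction techniques named in this abstract can be sketched in plain Python. This is a toy illustration of what CV, TF-IDF and HV compute over whitespace-tokenized text, not the paper's pipeline; a real system would use scikit-learn's vectorizers, and the example documents are invented:

```python
import math
from collections import Counter

def count_vectorize(docs):
    """Count-Vectorizer: map each document to raw term counts over a shared vocabulary."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    rows = []
    for d in docs:
        row = [0] * len(vocab)
        for w, c in Counter(d.lower().split()).items():
            row[index[w]] = c
        rows.append(row)
    return vocab, rows

def tfidf_vectorize(docs):
    """TF-IDF: reweight counts by smoothed inverse document frequency
    (sklearn-style idf = ln((1+n)/(1+df)) + 1; row normalization omitted for brevity)."""
    vocab, counts = count_vectorize(docs)
    n = len(docs)
    df = [sum(1 for row in counts if row[j] > 0) for j in range(len(vocab))]
    idf = [math.log((1 + n) / (1 + dfj)) + 1 for dfj in df]
    return vocab, [[c * idf[j] for j, c in enumerate(row)] for row in counts]

def hashing_vectorize(docs, n_features=16):
    """Hashing-Vectorizer: fixed-width features via hashing, no stored vocabulary.
    (Python's built-in hash is used here for brevity; sklearn uses MurmurHash.)"""
    rows = []
    for d in docs:
        row = [0] * n_features
        for w in d.lower().split():
            row[hash(w) % n_features] += 1
        rows.append(row)
    return rows
```

The trade-off the abstract exploits: CV and TF-IDF need a vocabulary built from the corpus, while HV is stateless and memory-bounded but cannot map features back to words.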
Funding: This work was financially supported by the National Key Research and Development Program of China (2022YFB3103200).
Abstract: With the development of technology, the connected vehicle has been upgraded from a traditional transport vehicle to an information terminal and an energy storage terminal. The data of intelligent connected vehicles (ICVs) is the key to maximizing their efficiency. However, in the context of increasingly strict global data security supervision and compliance, vehicle data sharing faces numerous problems: complex types of connected vehicle data, poor data collaboration between the IT (information technology) and OT (operation technology) domains, differing data format standards, a lack of shared trust sources, difficulty in ensuring the quality of shared data, a lack of data control rights, and difficulty in defining data ownership. As a result, data islands are widespread. This study proposes FADSF (Fuzzy Anonymous Data Share Frame), an automobile data sharing scheme based on blockchain. The data holder publishes the shared data information and stores the corresponding labels on the blockchain. The data demander browses the data directory information to select, purchase and verify data assets. Meanwhile, this paper designs a data structure, the Data Discrimination Bloom Filter (DDBF), to support complaints about illegal data. When the number of complaints reaches a threshold, the audit traceability contract is triggered to punish the illegal data publisher, aiming to improve data quality and maintain a healthy data sharing ecology. The scheme is tested on Ethereum to demonstrate its feasibility, efficiency and security.
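The DDBF named in this abstract builds on the standard Bloom filter idea. The sketch below is a textbook Bloom filter only; the paper's DDBF layout, hash count, sizing and on-chain encoding are not specified here, and the identifiers are invented. It shows the property the complaint mechanism relies on: compact set membership with possible false positives but no false negatives:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per item over an m-bit array.
    `in` may rarely report a false positive, but never a false negative."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _positions(self, item):
        # Derive k positions by salting SHA-256 with the hash index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))
```

In a complaint workflow of the kind the abstract describes, such a filter could compactly record identifiers of complained-about data assets, with a separate counter deciding when the audit contract fires; that pairing is an illustrative assumption, not the paper's design.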