Funding: supported by the Research Fund for the Doctoral Program (New Teachers) of the Ministry of Education of China under Grant No. 20121103120032; the Humanity and Social Science Youth Foundation of the Ministry of Education of China under Grant No. 13YJCZH065; the General Program of the Science and Technology Development Project of the Beijing Municipal Education Commission of China under Grant No. km201410005012; the Open Research Fund of the Beijing Key Laboratory of Trusted Computing; and the Open Research Fund of the Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education.
Abstract: Online social networks have gradually permeated every aspect of people's lives. As a research hotspot in social networks, user influence is of theoretical and practical significance for information transmission, optimization, and integration. A prominent application is viral marketing, which aims to use a small number of targeted influential users to initiate cascades of influence that create a global increase in product adoption. In this paper, we mainly analyze user-influence evaluation methods based on the IDM evaluation model, the PageRank evaluation model, the user behavior model, and other influence evaluation models popular in current social networks. We then extract the core ideas of these models to build our own influence evaluation model from two aspects: relationship and activity. Finally, the proposed approach is validated on real-world datasets, and the experimental results show that our method is both effective and stable.
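Since the abstract above builds on the PageRank evaluation model, a minimal sketch of PageRank by power iteration may make the idea concrete. The follower graph, damping factor, and node numbering below are illustrative assumptions, not the paper's actual model.

```python
# Minimal PageRank by power iteration on a small directed "follows" graph.
# An edge (u, v) means u follows v, so u transfers influence score to v.
def pagerank(edges, n, damping=0.85, iters=100):
    out_deg = [0] * n
    for u, v in edges:
        out_deg[u] += 1
    rank = [1.0 / n] * n
    for _ in range(iters):
        # Teleport term plus score flowing along edges.
        new = [(1.0 - damping) / n] * n
        for u, v in edges:
            new[v] += damping * rank[u] / out_deg[u]
        # Dangling nodes (no outgoing edges) spread their rank uniformly.
        dangling = sum(rank[u] for u in range(n) if out_deg[u] == 0)
        rank = [r + damping * dangling / n for r in new]
    return rank

# Node 2 is followed by every other node, so it earns the highest score.
edges = [(0, 2), (1, 2), (3, 2), (2, 0)]
scores = pagerank(edges, 4)
print(max(range(4), key=lambda i: scores[i]))  # 2
```

With damping 0.85 the scores form a probability distribution, so they can be compared directly across users of one network.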
Funding: partially supported by grants from the China 863 High-tech Program (Grant No. 2015AA016002); the Specialized Research Fund for the Doctoral Program of Higher Education (Grant No. 20131103120001); the National Key Research and Development Program of China (Grant No. 2016YFB0800204); the National Science Foundation of China (No. 61502017); and the Scientific Research Common Program of the Beijing Municipal Commission of Education (KM201710005024).
Abstract: Cloud computing is very useful for big data owners who do not want to manage IT infrastructure and big data technical details. However, it is hard for a big data owner to trust a multi-layer outsourced big data system in a cloud environment and to verify which outsourced service leads to a problem. Similarly, the cloud service provider cannot simply trust the data computation applications. Finally, the verification data itself may also leak sensitive information about the cloud service provider and the data owner. We propose a new three-level definition of verification, a threat model, and corresponding trusted policies based on the different roles in an outsourced big data system in the cloud. We also provide two policy enforcement methods for building a trusted data computation environment by measuring both the MapReduce application and its behaviors, based on trusted computing and aspect-oriented programming. To prevent sensitive information leakage from the verification process, we provide a privacy-preserving verification method. Finally, we implement TPTVer, a Trusted third Party based Trusted Verifier, as a proof-of-concept system. Our evaluation and analysis show that TPTVer can provide trusted verification for multi-layered outsourced big data systems in the cloud with low overhead.
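The measurement step described above can be loosely illustrated in classical terms: compute a digest of the application and admit it only if the digest appears on a whitelist. This is a hypothetical sketch of the measure-then-verify idea from trusted computing, not TPTVer's actual interface, and it omits the behavioral measurements the paper also performs.

```python
import hashlib

# Hypothetical sketch of application "measurement": hash the program bytes
# and trust it only if the digest matches a known-good whitelist entry.
def measure(program_bytes: bytes) -> str:
    return hashlib.sha256(program_bytes).hexdigest()

def is_trusted(program_bytes: bytes, whitelist: set) -> bool:
    return measure(program_bytes) in whitelist

# Illustrative application identifiers, not real MapReduce binaries.
good_app = b"wordcount-mapreduce-v1"
whitelist = {measure(good_app)}
print(is_trusted(good_app, whitelist))         # True
print(is_trusted(b"tampered-app", whitelist))  # False
```

In a real deployment the whitelist digest would be anchored in tamper-resistant hardware rather than held in ordinary memory.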
Funding: This work was partially supported by China's National Key R&D Program (No. 2018YFB0803600); the Natural Science Foundation of China (No. 61801008); the Beijing Natural Science Foundation (No. L172049); the Scientific Research Common Program of the Beijing Municipal Commission of Education (No. KM201910005025); and the Defense Industrial Technology Development Program (No. JCKY2016204A102).
Abstract: Security threats to smart and autonomous vehicles can cause severe consequences such as traffic accidents, economically damaging traffic jams, hijacking, misdirection onto wrong routes, and financial losses for businesses and governments. Smart and autonomous vehicles are connected wirelessly, which makes them more attractive to attackers due to the open nature of wireless communication. One such problem is the rogue attack, in which the attacker pretends to be a legitimate user or access point by using a fake identity. To address the rogue attack, we propose a reinforcement learning algorithm that identifies rogue nodes by exploiting the channel state information of the communication link. We consider both vehicle-to-vehicle and vehicle-to-infrastructure communication links. We evaluate the performance of the proposed technique by measuring the rogue attack probability, false alarm rate (FAR), mis-detection rate (MDR), and the utility function of a receiver for different test threshold values of the reinforcement learning algorithm. The results show that the FAR and MDR decrease significantly when an appropriate threshold value is selected, improving the receiver's utility.
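The threshold test behind the FAR/MDR trade-off can be sketched with a toy simulation: successive channel gains from the legitimate sender differ only by receiver noise, while a rogue transmitting from another location produces a larger gain jump. The noise level, rogue offset, and thresholds below are made-up values, and the sketch replaces the paper's reinforcement learning with a fixed test.

```python
import random

random.seed(0)

# Flag "rogue" when the observed channel-gain jump exceeds the threshold.
def flags_rogue(gain_jump, threshold):
    return abs(gain_jump) > threshold

def rates(threshold, trials=20000, noise=0.1, rogue_offset=1.0):
    # FAR: legitimate jumps (pure noise) wrongly flagged as rogue.
    false_alarms = sum(
        flags_rogue(random.gauss(0, noise), threshold) for _ in range(trials))
    # MDR: rogue jumps (offset plus noise) that escape detection.
    missed = sum(
        not flags_rogue(rogue_offset + random.gauss(0, noise), threshold)
        for _ in range(trials))
    return false_alarms / trials, missed / trials  # (FAR, MDR)

for thr in (0.1, 0.3, 0.5):
    far, mdr = rates(thr)
    print(f"threshold={thr}: FAR={far:.3f} MDR={mdr:.3f}")
```

Raising the threshold drives the FAR down while the MDR rises only once the threshold approaches the rogue's offset, which is why an appropriately chosen threshold improves the receiver's utility.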
Funding: supported by the National Natural Science Foundation of China (No. 61672070, 62173010); the Beijing Municipal Natural Science Foundation (No. 4192005, 4202025); the Beijing Municipal Education Commission Project (No. KM201910005008, KM201911232003); and the Beijing Innovation Center for Future Chips (No. KYJJ2018004).
Abstract: Kernel adaptive algorithms are nonlinear extensions of adaptive filtering algorithms and are widely used in non-stationary signal processing. However, the classic benchmark time series have relatively regular and simple distributions, whereas the electroencephalograph (EEG) signal is more random and non-stationary, so online prediction of EEG signals can further verify the robustness and applicability of kernel adaptive algorithms. Moreover, the purpose of modeling and analyzing EEG time series is to discover and extract valuable information and to reveal the internal relations of EEG signals, and time series prediction plays an important role in EEG time series analysis. In this paper, the kernel RLS tracker (KRLST) is applied to online prediction of motor imagery EEG signals and compared with 13 other kernel adaptive algorithms. The experimental results show that the KRLST algorithm performs best on the brain-computer interface (BCI) dataset.
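For a flavor of online kernel adaptive prediction, the sketch below implements kernel least-mean-squares (KLMS), a simpler relative of the KRLST algorithm discussed above; the window length, step size, and test series are illustrative choices, not the paper's setup.

```python
import math

# Minimal KLMS online predictor: each new input window becomes a kernel
# centre whose weight is set from the instantaneous prediction error.
def gauss_kernel(x, c, width=1.0):
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, c)) / (2 * width ** 2))

def klms_online(series, m=4, step=0.2):
    centres, weights, sq_errors = [], [], []
    for t in range(m, len(series)):
        x, d = series[t - m:t], series[t]        # input window, target
        y = sum(w * gauss_kernel(x, c) for w, c in zip(weights, centres))
        e = d - y                                 # prediction error
        centres.append(x)
        weights.append(step * e)
        sq_errors.append(e * e)
    return sq_errors

# A smooth toy series stands in for an EEG channel.
series = [math.sin(0.3 * t) for t in range(200)]
sq_errors = klms_online(series)
# Errors shrink as the dictionary of kernel centres grows.
print(sum(sq_errors[:20]) / 20, sum(sq_errors[-20:]) / 20)
```

KRLST improves on this scheme with recursive least-squares updates, a fixed-budget dictionary, and forgetting for non-stationary signals, which is what the comparison in the paper is about.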
Funding: supported by the National Natural Science Foundation of China (No. 61672070, 61501007, 11675199, 61572004 and 81501155); the Key Project of the Beijing Municipal Education Commission (No. KZ201910005008); the General Science and Technology Project of the Beijing Municipal Education Commission (No. KM201610005023); the Beijing Municipal Natural Science Foundation (No. 4182005); the Clinical Technology Innovation Program of the Beijing Municipal Administration of Hospitals (No. XMLX201805); and the Beijing Municipal Science & Technology Commission (No. Z171100000117004).
Abstract: Source localization of focal electrical activity from the scalp electroencephalogram (sEEG) signal is generally modeled as an inverse problem that is highly ill-posed. In this paper, a novel source localization method is proposed that models the EEG inverse problem using spatio-temporal long short-term memory (LSTM) recurrent neural networks. The network consists of two parts, sEEG encoding and source decoding, which model the sEEG signal and regress the source location. As there are not enough annotated sEEG signals corresponding to specific source locations, simulated data are generated with a forward model using the finite element method (FEM) to serve as part of the training signals. A framework for source localization is proposed to estimate the source position based on the simulated training data. Experiments on simulated testing data show good robustness to noisy signals, and the proposed network solves the EEG inverse problem with a spatio-temporal deep network. The results show that the proposed method overcomes the highly ill-posed linear inverse problem through data-driven learning.
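The ill-posedness mentioned above can be seen in the classical baseline that data-driven methods aim to improve on: with far fewer electrodes than candidate sources, the minimum-norm (Tikhonov) estimate explains the scalp data but blurs the source. The sizes and the random stand-in for the lead-field matrix below are toy assumptions.

```python
import numpy as np

# The EEG inverse problem x = L @ s is severely underdetermined: many
# source patterns s explain the same scalp data x.
rng = np.random.default_rng(0)
n_electrodes, n_sources = 8, 50
L = rng.standard_normal((n_electrodes, n_sources))  # toy lead-field matrix

s_true = np.zeros(n_sources)
s_true[17] = 1.0                                    # one focal source
x = L @ s_true                                      # noiseless scalp signal

# Minimum-norm estimate: s_hat = L^T (L L^T + lam * I)^{-1} x
lam = 1e-3
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_electrodes), x)

# s_hat reproduces the scalp data almost exactly, yet spreads energy over
# many sources -- the blurring a learned spatio-temporal model tries to fix.
print(float(np.linalg.norm(L @ s_hat - x)))
```

Because the minimum-norm solution is the smallest-norm explanation of the data, a genuinely focal source comes back smeared; the paper's LSTM encoder-decoder instead learns the mapping from simulated (signal, location) pairs.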
Funding: supported by the Natural Science Foundation of Beijing, China (Grant No. 4172006) and the Beijing Municipal Education Commission of China (Grant No. km201410005012).
Abstract: In recent years, with the rapid development of the Internet of Things (IoT), RFID tags, industrial controllers, sensor nodes, smart cards, and other small computing devices have been deployed ever more widely. Lightweight cryptography came into being to help protect such low-power, low-cost IoT devices. To establish a standard cryptographic algorithm suitable for constrained environments, NIST started its lightweight cryptography standardization process in 2016 and published the second round of candidate algorithms in August 2019. SKINNY-Hash, built on the sponge construction, is one of the second-round candidates, as is SKINNY-AEAD; the tweakable block cipher SKINNY is the basic component of both. Although cryptanalysts have published several results on SKINNY and SKINNY-AEAD, there are no cryptanalysis results on SKINNY-Hash. Based on differential cryptanalysis and mixed integer linear programming (MILP), we perform differential cryptanalysis on SKINNY-Hash. The core is to set up the inequalities of the MILP model. It is hard to obtain the inequalities of the substitution (i.e., the S-box) with the previous method. Through a careful study of the permutation, we partition the substitution into a nonlinear part and a linear part, and obtain a series of inequalities for the MILP model that describe the high-probability differentials. As a result, we propose a differential hash collision path for 3-round SKINNY-tk3-Hash. By adjusting the bit rate of SKINNY-tk3-Hash, we propose a 7-round collision path for the simplified algorithm. The cryptanalysis in this paper will help promote the NIST lightweight cryptography standardization process.
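The starting point for MILP inequalities in differential cryptanalysis is the S-box's difference distribution table (DDT). A minimal DDT computation is sketched below using the PRESENT cipher's 4-bit S-box as a stand-in, since the actual SKINNY S-box is not reproduced here.

```python
# ddt[a][b] counts the inputs x for which S[x] ^ S[x ^ a] == b; the
# high-count (a, b) pairs are the high-probability differentials that the
# MILP inequalities must describe.
def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for a in range(n):
        for x in range(n):
            table[a][sbox[x] ^ sbox[x ^ a]] += 1
    return table

# The PRESENT cipher's 4-bit S-box, used here only as an illustration.
present_sbox = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
table = ddt(present_sbox)
print(table[0][0])                          # 16: zero difference is deterministic
print(max(max(row) for row in table[1:]))   # best non-trivial transition count
```

Each nonzero DDT entry corresponds to a transition probability of `count / 16`, and the MILP model is built by encoding which (input difference, output difference) pairs are possible and at what cost.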
Abstract: Human saccade is a dynamic process of information pursuit. Many methods use either global or local context cues to model human saccadic scan-paths. In contrast, this paper introduces a model for gaze movement control that uses both global and local cues. To test the model, an experiment was conducted to collect human eye movement data using an SMI iVIEW X Hi-Speed eye tracker with a sampling rate of 1250 Hz. The experiment used a two-by-four mixed design over the locations of the targets and the four initial positions. We compare the saccadic scan-paths generated by the proposed model against human eye movement data on a face benchmark dataset. Experimental results demonstrate that the scan-paths simulated by the proposed model are similar to human saccades in terms of fixation order, Hausdorff distance, and prediction accuracy for both static fixation locations and dynamic scan-paths.
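Of the similarity measures named above, the Hausdorff distance between two scan-paths is simple to sketch; the fixation coordinates below are made-up illustration values, not data from the experiment.

```python
import math

# Hausdorff distance between two scan-paths, each a sequence of fixation
# points: the largest distance from any point in one path to its nearest
# neighbour in the other, taken in both directions.
def hausdorff(path_a, path_b):
    def directed(p, q):
        return max(min(math.dist(a, b) for b in q) for a in p)
    return max(directed(path_a, path_b), directed(path_b, path_a))

human = [(10, 10), (40, 12), (80, 50)]   # illustrative human fixations
model = [(12, 11), (42, 10), (78, 52)]   # illustrative simulated fixations
print(round(hausdorff(human, model), 3))  # 2.828
```

Note that the Hausdorff distance ignores fixation order, which is why the paper also reports order-based agreement and prediction accuracy.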
Funding: supported by the Beijing Natural Science Foundation under Grant Nos. 4182006 and 4162005, and the National Natural Science Foundation of China under Grant Nos. 61572053, 61472048, 61671087, U1636106, 61602019, and 61502016.
Abstract: The difficulty of quantum key agreement is realizing its security and fairness at the same time. This paper presents a new three-party quantum key agreement protocol based on the continuous-variable single-mode squeezed state. The three parties participating in the agreement are peer entities and make equal contributions to the final key; no single participant, nor any two participants, can determine the shared key separately. The security analysis shows that the proposed protocol can resist both external and internal attacks.
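The fairness property, that no one or two participants can determine the key alone, has a simple classical analogy (the quantum mechanics of squeezed states is not modeled here): each party contributes an independent random share and the key is the XOR of all shares.

```python
import secrets

# Classical analogy for fairness only: the final key is the XOR of three
# independent random shares, so any one or two parties alone are missing a
# uniformly random share and cannot fix the key by themselves.
def contribute(nbytes=16):
    return secrets.token_bytes(nbytes)

def combine(*shares):
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

alice, bob, charlie = contribute(), contribute(), contribute()
key = combine(alice, bob, charlie)

# Two colluding parties hold only a partial XOR; Charlie's share is still
# needed to reach the agreed key.
partial = combine(alice, bob)
print(combine(partial, charlie) == key)  # True
```

The quantum protocol achieves the analogous guarantee with squeezed-state measurements rather than classical shares, while also protecting the exchange against eavesdropping.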
Funding: supported by the Innovation Platform Construction of Qinghai Province (No. 2016-ZJ-Y04) and the Basic Research Program of Qinghai Province (No. 2016-ZJ-740).
Abstract: Text extraction is an important initial step in digitizing historical documents. In this paper, we present a text extraction method for historical Tibetan document images based on block projections. The task of text extraction is treated as a text-area detection and localization problem. The images are divided equally into blocks, and the blocks are filtered using the categories of connected components and the corner-point density. By analyzing the projections of the filtered blocks, the approximate text areas can be located and the text regions extracted. Experiments on a dataset of historical Tibetan documents demonstrate the effectiveness of the proposed method.
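The projection idea above can be sketched on a toy binary image: summing foreground pixels per row gives a profile whose non-zero runs locate text lines. This simplified sketch omits the block division and the connected-component and corner-density filtering the paper applies first.

```python
# Row projection of a binary image: the count of foreground pixels in each
# row. Non-zero runs of the profile mark candidate text lines.
def row_projection(image):
    return [sum(row) for row in image]

def text_rows(profile):
    runs, start = [], None
    for i, v in enumerate(profile):
        if v > 0 and start is None:
            start = i                       # a text run begins
        elif v == 0 and start is not None:
            runs.append((start, i - 1))     # the run ends on the row above
            start = None
    if start is not None:
        runs.append((start, len(profile) - 1))
    return runs

# Toy 8-row image: rows 1-2 and 5-6 contain "ink" (1 = foreground pixel).
image = [
    [0, 0, 0, 0], [1, 1, 0, 1], [0, 1, 1, 1], [0, 0, 0, 0],
    [0, 0, 0, 0], [1, 0, 1, 1], [1, 1, 1, 0], [0, 0, 0, 0],
]
print(text_rows(row_projection(image)))  # [(1, 2), (5, 6)]
```

On real pages the same profile is computed per block after noise components have been filtered out, which is what makes the runs correspond to actual text areas rather than stains or decorations.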