Funding: This work is supported by the Provincial Key Science and Technology Special Project of Henan (No. 221100240100).
Abstract: In recent years, the rapid development of computer software has led to numerous security problems, particularly software vulnerabilities. These flaws can cause significant harm to users' privacy and property. Current security defect detection relies on manual or expert reasoning, leading to missed detections and high false detection rates. Artificial intelligence has enabled neural network models based on machine learning or deep learning to mine vulnerabilities intelligently, reducing missed alarms and false alarms. This project therefore studies Java source code defect detection methods for defects such as null pointer reference exceptions, cross-site scripting (XSS), and Structured Query Language (SQL) injection. The project uses the open-source Javalang library to parse Java source code, performs a deep traversal of the AST to obtain the null-syntax feature library, and converts the Java source code into a dependency graph. The resulting feature vector is then used as the learning target for the neural network. Four network types, Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Bi-directional LSTM (BiLSTM), and Attention Mechanism combined with BiLSTM, are used to investigate the code defects above, including null pointer reference exception, XSS, and SQL injection defects. Experimental results show that the Attention Mechanism combined with BiLSTM is the most effective for defect recognition, verifying the correctness of the method.
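As an illustration of the AST-based feature extraction step described above, the following minimal Python sketch uses the open-source javalang library to parse a Java snippet and collect the sequence of AST node types by depth-first traversal. The toy Java snippet, the vocabulary mapping, and the idea of feeding the resulting integer sequence to a CNN/LSTM classifier are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch: turn Java source into an AST node-type sequence with javalang,
# the kind of syntactic feature stream a CNN/LSTM/BiLSTM classifier could consume.
# The sample code and the index mapping are illustrative assumptions.
import javalang

java_source = """
public class Demo {
    public int length(String s) {
        return s.length();   // potential null pointer dereference if s is null
    }
}
"""

tree = javalang.parse.parse(java_source)

# Depth-first walk over the AST; javalang yields (path, node) pairs.
node_types = [type(node).__name__ for _, node in tree]
print(node_types)

# Map node types to integer ids so the sequence can be embedded by a neural network.
vocab = {t: i for i, t in enumerate(sorted(set(node_types)))}
feature_vector = [vocab[t] for t in node_types]
print(feature_vector)
```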
Funding: This paper is supported by the Major Science and Technology Projects of Henan Province under Grant No. 221100210400.
Abstract: As indispensable components of superconducting circuit-based quantum computers, Josephson junctions determine how well superconducting qubits perform. Reverse Monte Carlo (RMC) can be used to recreate a Josephson junction's atomic structure from experimental data, and the impact of the structure on the junction's properties can be investigated by combining different analysis techniques. In order to build a physical model of the atomic structure and then analyze the factors that affect its performance, this paper briefly reviews the development and evolution of the RMC algorithm. It also summarizes the modeling process and structural feature analysis of the Josephson junction in combination with different feature extraction techniques for electrical characterization devices. Additionally, the obstacles and potential directions of Josephson junction modeling, which serves as the theoretical foundation for the production of superconducting quantum devices at the atomic level, are discussed.
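For orientation, the following Python sketch shows the core accept/reject loop of a generic Reverse Monte Carlo refinement: an atom is displaced at random, a simulated observable (here a stand-in pair-distance histogram) is recomputed, and the move is kept with probability exp(-Δχ²/2). The observable, box size, and experimental target are placeholder assumptions; a real Josephson-junction RMC model would fit measured structure factors and enforce physical constraints.

```python
# Generic Reverse Monte Carlo sketch: random atomic moves accepted when they
# reduce the mismatch chi^2 between simulated and "experimental" data.
# All inputs below (box, target histogram, step size) are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_atoms, box = 64, 10.0
positions = rng.uniform(0, box, size=(n_atoms, 3))
bins = np.linspace(0.5, 5.0, 20)

def pair_histogram(pos):
    """Histogram of pairwise distances (minimum-image convention ignored for brevity)."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    d = d[np.triu_indices(len(pos), k=1)]
    h, _ = np.histogram(d, bins=bins, density=True)
    return h

target = pair_histogram(rng.uniform(0, box, size=(n_atoms, 3)))  # stand-in "experiment"
chi2 = np.sum((pair_histogram(positions) - target) ** 2)

for step in range(5000):
    i = rng.integers(n_atoms)
    trial = positions.copy()
    trial[i] = (trial[i] + rng.normal(scale=0.2, size=3)) % box  # random displacement
    chi2_new = np.sum((pair_histogram(trial) - target) ** 2)
    # Metropolis-style acceptance on chi^2 rather than energy.
    if chi2_new < chi2 or rng.random() < np.exp(-(chi2_new - chi2) / 2):
        positions, chi2 = trial, chi2_new

print("final chi^2:", chi2)
```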
Funding: This work was supported by the National Key R&D Program of China (No. 2022YFB3102904), the National Natural Science Foundation of China (No. 62172435, U23A20305), and the Key Research and Development Project of Henan Province (No. 221111321200).
Abstract: Geolocating social media users aims to discover the real geographical locations of users from their publicly available data, which can support online location-based applications such as disaster alerts and local content recommendations. Social relationship-based methods represent a classical approach to geolocating social media users. However, geographically proximate relationships are sparse and challenging to discern within social networks, which affects the accuracy of user geolocation. To address this challenge, we propose user geolocation methods that integrate neighborhood geographical distribution and social structure influence (NGSI) to improve geolocation accuracy. Firstly, we propose a method for evaluating the homophily of locations based on the k-order neighborhood geographic distribution (k-NGD) similarity among users. There are notable differences in the distribution of k-NGD similarity between location-proximate and non-location-proximate users. Exploiting this distinction, we filter out non-location-proximate social relationships to enhance location homophily in the social network. To better utilize the location-proximate relationships in social networks, we propose a graph neural network algorithm based on social structure influence. The algorithm performs a weighted aggregation of the information from users' multi-hop neighborhoods, thereby mitigating the over-smoothing problem of user features and improving user geolocation performance. Experimental results on a real social media dataset demonstrate that the neighborhood geographical distribution similarity metric can effectively filter out non-location-proximate social relationships. Moreover, compared with 7 existing social relationship-based user geolocation methods, the proposed method achieves multi-granularity user geolocation and improves accuracy by 4.84% to 13.28%.
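A minimal sketch of the neighborhood-similarity filtering idea: each user's k-order neighborhood is summarized as a distribution over coarse locations, and social edges whose endpoint distributions are too dissimilar are dropped. The toy graph, the location labels, and the cosine-similarity threshold are illustrative assumptions rather than the paper's exact k-NGD definition.

```python
# Sketch: filter social edges by similarity of neighbors' geographic distributions.
# The toy graph, location labels, and threshold are assumptions for illustration.
import numpy as np
from collections import defaultdict

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e")]
known_location = {"a": "cityX", "b": "cityX", "c": "cityX", "d": "cityY"}
cities = ["cityX", "cityY"]

neighbors = defaultdict(set)
for u, v in edges:
    neighbors[u].add(v)
    neighbors[v].add(u)

def k_neighborhood(user, k=2):
    """Users reachable within k hops (excluding the user itself)."""
    frontier, seen = {user}, {user}
    for _ in range(k):
        frontier = {w for u in frontier for w in neighbors[u]} - seen
        seen |= frontier
    return seen - {user}

def geo_distribution(user, k=2):
    """Normalized distribution of known locations in the k-order neighborhood."""
    counts = np.zeros(len(cities))
    for w in k_neighborhood(user, k):
        if w in known_location:
            counts[cities.index(known_location[w])] += 1
    return counts / counts.sum() if counts.sum() else counts

def cosine(p, q):
    denom = np.linalg.norm(p) * np.linalg.norm(q)
    return float(p @ q / denom) if denom else 0.0

# Keep only edges whose endpoints have similar neighborhood geographic distributions.
kept = [(u, v) for u, v in edges
        if cosine(geo_distribution(u), geo_distribution(v)) >= 0.7]
print(kept)
```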
Funding: The work presented in this paper is supported by the National Key R&D Program of China (No. 2016YFB0801303, 2016QY01W0105), the National Natural Science Foundation of China (No. U1636219, 61602508, 61772549, U1736214, 61572052), the Plan for Scientific Innovation Talent of Henan Province (No. 2018JR0018), and the Key Technologies R&D Program of Henan Province (No. 162102210032).
Abstract: Precise localization techniques for indoor Wi-Fi access points (APs) have important applications in security inspection. However, due to the interference of environmental factors such as multipath propagation and NLOS (Non-Line-of-Sight), existing methods for localizing indoor Wi-Fi access points based on RSS ranging tend to have low accuracy because the RSS (Received Signal Strength) is difficult to measure accurately. Therefore, this paper proposes a localization algorithm for indoor Wi-Fi access points based on the relative relationship of signal strengths and region division. The algorithm hierarchically divides the room where the target Wi-Fi AP is located; on each region division line, a modified signal collection device measures the RSS in two directions at each reference point. All RSS values are compared, and the region with the relatively largest signal strength is selected as the next candidate region. The location coordinates of the target Wi-Fi AP are obtained by successively narrowing the localization region until the candidate region is smaller than the accuracy threshold. A total of 360 experiments are carried out in this paper with 8 types of Wi-Fi APs, including fixed APs and portable APs. The experimental results show that the average localization error of the proposed algorithm is 0.30 meters and the minimum localization error is 0.16 meters, a localization accuracy significantly higher than that of existing typical indoor Wi-Fi access point localization methods.
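The successive region-narrowing loop can be sketched as follows: the current region is split, RSS is sampled along the division line in both directions, and the half with the stronger relative signal becomes the next candidate region until it shrinks below the accuracy threshold. The measure_rss function, the simulated AP position, and the simple path-loss model are hypothetical stand-ins for the modified signal collection device used in the paper.

```python
# Sketch of hierarchical region division for Wi-Fi AP localization.
# measure_rss() below simulates the directional RSS readings taken on the
# division line; in the paper these come from a modified collection device.
import numpy as np

rng = np.random.default_rng(1)
true_ap = np.array([3.2, 6.7])          # hidden AP position (assumption)

def measure_rss(point, toward):
    """Simulated directional RSS: stronger when pointing toward and near the AP."""
    d = np.linalg.norm(true_ap - point)
    gain = 3.0 if np.dot(true_ap - point, toward) > 0 else -3.0
    return -40 - 20 * np.log10(d + 0.1) + gain + rng.normal(0, 0.5)

region = np.array([[0.0, 10.0], [0.0, 10.0]])   # [[x_min, x_max], [y_min, y_max]]
threshold = 0.2                                  # accuracy threshold in meters

while np.max(region[:, 1] - region[:, 0]) > threshold:
    axis = int(np.argmax(region[:, 1] - region[:, 0]))   # split the longer side
    mid = region[axis].mean()
    other = 1 - axis
    # Reference points on the division line, measured in both directions.
    refs = np.linspace(region[other, 0], region[other, 1], 5)
    readings = {+1: [], -1: []}
    for r in refs:
        p = np.empty(2)
        p[axis], p[other] = mid, r
        for sign in (+1, -1):
            direction = np.zeros(2)
            direction[axis] = sign
            readings[sign].append(measure_rss(p, direction))
    # Keep the half-region on the side with the larger relative signal strength.
    if max(readings[+1]) >= max(readings[-1]):
        region[axis, 0] = mid
    else:
        region[axis, 1] = mid

print("estimated AP location:", region.mean(axis=1))
```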
Funding: This work was supported in part by the National Natural Science Foundation of China (No. 61401512, 61602508, 61772549, 6141512 and U1636219), the National Key R&D Program of China (No. 2016YFB0801303 and 2016QY01W0105), the Key Technologies R&D Program of Henan Province (No. 162102210032), and the Key Science and Technology Research Project of Henan Province (No. 152102210005).
Abstract: To detect and recover randomly tampered areas, a combined-decision-based self-embedding watermarking scheme is proposed herein. In this scheme, the image is first partitioned into 2×2 blocks. Next, the high 5 bits of each block's average value are embedded into its offset block. The tampering type of a block is detected by comparing the watermarks of its pre-offset and post-offset blocks. Theoretical analysis and experiments demonstrate that the proposed scheme not only has a lower false detection ratio but also performs better in resisting random tampering.
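A small numpy sketch of the embedding step described above: the image is split into 2×2 blocks, the high 5 bits of each block's average are taken as the watermark, and that watermark is written into the low bits of a block at a fixed offset. The offset rule and the way the 5 bits are packed into the offset block's low bits are illustrative assumptions; the paper's combined-decision detection logic is not reproduced here.

```python
# Sketch: embed the high 5 bits of each 2x2 block's average into its offset block.
# The offset rule and the low-bit packing below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
B = 2                                    # block size
rows, cols = img.shape[0] // B, img.shape[1] // B
offset = (rows // 2, cols // 2)          # block offset where the watermark is stored

watermarked = img.copy()
for i in range(rows):
    for j in range(cols):
        block = img[i*B:(i+1)*B, j*B:(j+1)*B]
        wm = int(block.mean()) >> 3      # high 5 bits of the block average
        # Target (offset) block that will carry this block's watermark.
        ti, tj = (i + offset[0]) % rows, (j + offset[1]) % cols
        flat = watermarked[ti*B:(ti+1)*B, tj*B:(tj+1)*B].flatten()
        # 5 bits into 4 pixels: two LSBs of the first pixel, one LSB of each of the rest.
        flat[0] = (flat[0] & 0xFC) | (wm >> 3)
        for k in range(3):
            bit = (wm >> (2 - k)) & 1
            flat[k + 1] = (flat[k + 1] & 0xFE) | bit
        watermarked[ti*B:(ti+1)*B, tj*B:(tj+1)*B] = flat.reshape(B, B)

print("pixels changed:", int(np.count_nonzero(watermarked != img)))
```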
Abstract: When performing reverse analysis of a program's binary code, functions from cryptographic libraries are often encountered. In order to reduce the workload, a cryptographic library model has been designed for analysts. The model uses a formalized approach to describe the framework of cryptology and the structure of cryptographic functions, completes the mapping from cryptographic function properties to their architecture, and finally presents the results of data analysis and mapping. The model solves two problems: first, it makes the hierarchy of cryptographic functions in the library clear; second, it reveals related information such as the associated cryptographic algorithms and protocols. The results can be displayed graphically. The model can find relevant knowledge for analysts automatically and rapidly, which is helpful for learning the overall abstract structure of cryptology.
Funding: The National Natural Science Foundation of China (Grant Nos. 61972413, 61901525, and 62002385), the National Key R&D Program of China (Grant No. 2021YFB3100100), and RGC under Grant No. N HKUST619/17 from Hong Kong, China.
Abstract: In a recent paper, Hu et al. defined the complete weight distributions of quantum codes and proved the MacWilliams identities, and as applications they showed how such weight distributions may be used to obtain the Singleton-type and Hamming-type bounds for asymmetric quantum codes. In this paper we extend their study much further and obtain several new results concerning the complete weight distributions of quantum codes and their applications. In particular, we provide a new proof of the MacWilliams identities for the complete weight distributions of quantum codes. We obtain new information about the weight distributions of quantum MDS codes and the double weight distribution of asymmetric quantum MDS codes. We derive new identities involving the complete weight distributions of two different quantum codes. We estimate the complete weight distributions of quantum codes under special conditions and show that quantum BCH codes obtained by the Hermitian construction from primitive, narrow-sense BCH codes satisfy these conditions, and hence these estimates apply.
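For orientation, the classical MacWilliams identity for a linear code C over F_q relates the Hamming weight enumerator of C to that of its dual; the complete weight distributions of quantum codes studied by Hu et al. and extended here generalize this type of transform. The formula below is the standard classical statement, given only as background, not the quantum version proved in the paper.

```latex
% Classical MacWilliams identity for a linear code C \subseteq \mathbb{F}_q^n with dual C^{\perp},
% where W_C(x,y) = \sum_{c \in C} x^{\,n-\mathrm{wt}(c)}\, y^{\mathrm{wt}(c)}:
W_{C^{\perp}}(x, y) \;=\; \frac{1}{|C|}\; W_{C}\bigl(x + (q-1)y,\; x - y\bigr)
```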
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 61872449, U1804263, 62172435, 62172141, 61772173), the Zhongyuan Science and Technology Innovation Leading Talent Project, China (No. 214200510019), the Natural Science Foundation of Henan (No. 222300420004), the Major Public Welfare Special Projects of Henan Province (No. 201300210100), and the Key Research and Development Project of Henan Province (No. 221111321200).
Abstract: With the widespread use of network infrastructures such as 5G and low-power wide-area networks, a large number of Internet of Things (IoT) device nodes are connected to the network, generating massive amounts of data. Therefore, achieving anonymous authentication of IoT nodes and secure data transmission is a great challenge. At present, blockchain technology is widely used in authentication and data storage due to its decentralization and immutability. Recently, Fan et al. proposed a secure and efficient blockchain-based IoT authentication and data sharing scheme. We studied it as one of the state-of-the-art protocols and found that the scheme does not consider resistance to ephemeral secret compromise attacks or the anonymity of IoT nodes. To overcome these security flaws, this paper proposes an enhanced authentication and data transmission scheme, which is verified by formal security proofs and informal security analysis. Furthermore, Scyther is applied to prove the security of the proposed scheme. Moreover, it is demonstrated that the proposed scheme achieves better performance in terms of communication and computational cost compared to other related schemes.
Funding: The National Natural Science Foundation of China Youth Project (62302520).
Abstract: With the increasing proportion of encrypted traffic in cyberspace, the classification of encrypted traffic has become a core technology in network supervision. In recent years, many different solutions have emerged in this field. Most methods identify and classify traffic by extracting spatiotemporal characteristics of data flows or byte-level features of packets. However, due to changes in data transmission mediums, such as fiber optics and satellites, temporal features can exhibit significant variations caused by changes in communication links and transmission quality. Additionally, partial spatial features can change for reasons such as data reordering and retransmission. Faced with these challenges, identifying encrypted traffic solely based on packet byte-level features is significantly difficult. To address this, we propose a universal packet-level encrypted traffic identification method, ComboPacket. This method utilizes convolutional neural networks to extract deep features of the current packet and its contextual information, and employs spatial and channel attention mechanisms to select and locate effective features. Experimental data show that ComboPacket can effectively distinguish between encrypted traffic service categories (e.g., File Transfer Protocol, FTP, and Peer-to-Peer, P2P) and encrypted traffic application categories (e.g., BitTorrent and Skype). Validated on the ISCX VPN-nonVPN dataset, it achieves classification accuracies of 97.0% and 97.1% for service and application categories, respectively. It also provides shorter training times and higher recognition speeds. The performance and recognition capabilities of ComboPacket are significantly superior to the existing classification methods mentioned.
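To make the packet-level architecture concrete, here is a minimal PyTorch sketch of a 1-D convolutional classifier over raw packet bytes with a squeeze-and-excitation-style channel attention block. The layer sizes, the 1500-byte input length, and the number of classes are assumptions and do not reproduce ComboPacket's exact design or its spatial attention branch.

```python
# Minimal sketch: 1-D CNN over raw packet bytes with channel attention,
# in the spirit of packet-level encrypted-traffic classifiers such as ComboPacket.
# All layer sizes and the class count are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style reweighting of convolutional channels."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                      # x: (batch, channels, length)
        w = self.fc(x.mean(dim=-1))            # squeeze over the length dimension
        return x * w.unsqueeze(-1)             # excite: rescale each channel

class PacketClassifier(nn.Module):
    def __init__(self, n_classes=6, length=1500):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            ChannelAttention(64),
            nn.AdaptiveAvgPool1d(1))
        self.head = nn.Linear(64, n_classes)

    def forward(self, packet_bytes):           # packet_bytes: (batch, length) in [0, 255]
        x = packet_bytes.float().unsqueeze(1) / 255.0
        return self.head(self.features(x).squeeze(-1))

model = PacketClassifier()
dummy = torch.randint(0, 256, (8, 1500))       # a batch of 8 fixed-length packets
print(model(dummy).shape)                      # torch.Size([8, 6])
```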
Funding: The work presented in this paper is supported by the National Key R&D Program of China (Nos. 2016YFB0801303, 2016QY01W0105), the National Natural Science Foundation of China (Nos. U1636219, U1804263, 61602508, 61772549, U1736214, 61572052), and the Plan for Scientific Innovation Talent of Henan Province (No. 2018JR0018).
Abstract: High-density, reliable street-level landmarks are one of the important foundations of street-level geolocation. However, existing methods cannot obtain enough street-level landmarks in a short period of time. In this paper, a street-level landmark acquisition method based on SVM (Support Vector Machine) classifiers is proposed. Firstly, the port detection results of IPs with known services are vectorized, and the vectorization results are used as the input for SVM training. Then, the kernel function and penalty factor are adjusted during SVM classifier training to obtain the optimal SVM classifiers. After that, a classifier sequence is constructed, and IPs with unknown services are classified using the sequence. Finally, according to the domain name corresponding to each IP, the relationship between the classified server IP and the organization name is established. Experimental results in the cities of Guangzhou and Wuhan in China show that the proposed method can serve as a supplement to existing typical methods, since the number of obtained street-level landmarks is increased substantially and the median geolocation error using the evaluated landmarks is reduced by about 2 km.
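A condensed scikit-learn sketch of the classifier-training step: port-scan results for IPs with known services are turned into binary feature vectors, an SVM is tuned over kernel and penalty factor via grid search, and unknown IPs are then labeled. The port list, the toy samples, and the label set are assumptions for illustration.

```python
# Sketch: train an SVM on vectorized port-detection results and classify
# IPs with unknown services. Ports, samples, and labels are toy assumptions.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

PORTS = [21, 22, 25, 53, 80, 110, 143, 443, 3306, 8080]

def vectorize(open_ports):
    """Binary vector: 1 if the port was detected open, 0 otherwise."""
    return [1 if p in open_ports else 0 for p in PORTS]

known = [
    ({80, 443}, "web"), ({80, 443, 8080}, "web"),
    ({25, 110, 143}, "mail"), ({25, 143}, "mail"),
    ({53}, "dns"), ({53, 22}, "dns"),
]
X = [vectorize(ports) for ports, _ in known]
y = [label for _, label in known]

# Tune kernel and penalty factor C, then keep the best classifier.
search = GridSearchCV(SVC(), {"kernel": ["rbf", "linear"], "C": [0.1, 1, 10]}, cv=2)
search.fit(X, y)

unknown_ips = {"203.0.113.7": {80, 8080}, "203.0.113.9": {25, 110}}
for ip, ports in unknown_ips.items():
    print(ip, "->", search.predict([vectorize(ports)])[0])
```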
Funding: This work was supported by the National Key Research and Development Program of China (Grant No. 2018YF0804001).
Abstract: At present, there is a problem of false positives caused by an overly broad mimic scope in mimic transformation technology. Previous studies have focused on "compensation" methods to deal with this problem, which are expensive and cannot fundamentally solve it. This paper provides new insights into coping with the situation. Firstly, this study summarizes the false-positive problem in mimic transformation and analyzes its possible harm and root causes. Secondly, three properties of the mimic scope are proposed. Based on these three properties and security quantification technology, the best mimic component set theory is put forward to solve the false-positive problem. Two algorithms are given, the supplemental method and the subtraction method. The best mimic component set obtained by these two algorithms can fundamentally solve the mimic system's false-positive problem while also reducing the cost of mimic transformation, thus making up for the shortcomings of previous research.
Funding: This work was supported by the National Natural Science Foundation of China (Nos. U1736214, U1804263, U1636219, 61772281, 61772549, and 61872448), the National Key R&D Program of China (Nos. 2016YFB0801303, 2016QY01W0105), and the Science and Technology Innovation Talent Project of Henan Province (No. 184200510018).
Abstract: Multiple-image steganography refers to hiding secret messages in multiple natural images to minimize the leakage of secret messages during transmission. Current multiple-image steganography algorithms mainly distribute the payloads as sparsely as possible across multiple cover images to improve the detection error rate of stego images. In order to distribute the payloads accurately and efficiently among the cover images, this paper proposes a multiple-image steganography method for JPEG images based on optimal payload redistribution. Firstly, the algorithm uses the principle of dynamic programming to redistribute the payloads of the cover images, which reduces the time required for payload distribution. Then, the difference between the features of the cover images and the stego images is reduced to increase the detection error rate of the stego images. Secondly, this paper uses a data decomposition mechanism based on the Vandermonde matrix. Even if part of the data is lost during the transmission of the secret messages, as long as the data loss rate is less than the data redundancy rate, the original secret messages can be recovered. Experimental results show that the proposed method improves the efficiency of payload distribution compared with existing multiple-image steganography. At the same time, the algorithm can achieve the optimal payload distribution of multiple-image steganography to improve the anti-statistical-detection performance of stego images.
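The Vandermonde-based decomposition can be illustrated with a short numpy example: a secret block of k values is expanded into n > k shares by multiplying with an n×k Vandermonde matrix, and any k surviving shares suffice to solve for the original block. Working over the reals with floating point (rather than a finite field, as a practical scheme would use) and the specific k, n values are simplifying assumptions.

```python
# Sketch: Vandermonde-matrix data decomposition with redundancy.
# Any k of the n shares recover the secret; real arithmetic is used here for
# brevity, whereas a deployable scheme would work over a finite field.
import numpy as np

k, n = 3, 5                                  # k data symbols, n shares (n - k redundant)
secret = np.array([42.0, 7.0, 19.0])         # toy secret block (assumption)

nodes = np.arange(1, n + 1, dtype=float)     # distinct evaluation points
V = np.vander(nodes, N=k, increasing=True)   # n x k Vandermonde matrix
shares = V @ secret                          # one share per row

# Simulate loss of (n - k) shares during transmission.
surviving = [0, 2, 4]
V_sub, y_sub = V[surviving], shares[surviving]

recovered = np.linalg.solve(V_sub, y_sub)    # k equations, k unknowns
print(np.allclose(recovered, secret))        # True
```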
Funding: This work was supported by the National Natural Science Foundation of China (No. 61379151, 61401512, 61572052, U1636219), the National Key Research and Development Program of China (No. 2016YFB0801303, 2016QY01W0105), and the Key Technologies Research and Development Program of Henan Province (No. 162102210032).
Abstract: In view of the fact that current adaptive steganography algorithms have difficulty resisting scaling attacks, and that the existing method resisting scaling attacks only targets the nearest neighbor interpolation method, this paper proposes an image steganography algorithm based on quantization index modulation that resists both scaling attacks and statistical detection. For the spatial image, this paper uses a watermarking algorithm based on quantization index modulation to extract the embedded domain. Then, the embedding distortion function of the new embedded domain is constructed based on S-UNIWARD steganography, and minimum-distortion coding is used to embed the secret messages. Finally, according to the embedding modification amplitude of the secret messages in the new embedded domain, the quantization index modulation algorithm is applied to realize the final embedding of the secret messages in the original embedded domain. The experimental results show that the proposed algorithm is robust to the three common interpolation attacks: nearest neighbor interpolation, bilinear interpolation and bicubic interpolation. Compared with the classical steganography algorithm S-UNIWARD, the average correct extraction rate of embedded messages increases from 50% to over 93% after a 0.5-fold scaling attack using bicubic interpolation. The proposed algorithm also has higher detection resistance than the original watermarking algorithm based on quantization index modulation.
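The quantization index modulation primitive itself is compact enough to show directly: a cover value is quantized to one of two interleaved lattices depending on the message bit, and the bit is read back by checking which lattice the received value is nearer to. The step size and the toy cover values are assumptions; the paper's distortion function, coding, and scaling-robust embedded domain are not reproduced.

```python
# Sketch: basic quantization index modulation (QIM) embed / extract.
# Step size and cover values are toy assumptions for illustration.
import numpy as np

DELTA = 8.0   # quantization step

def qim_embed(value, bit):
    """Quantize to the lattice selected by the bit (offset by DELTA/2 for bit 1)."""
    offset = bit * DELTA / 2.0
    return np.round((value - offset) / DELTA) * DELTA + offset

def qim_extract(value):
    """Return the bit whose lattice is closest to the received value."""
    d0 = abs(value - qim_embed(value, 0))
    d1 = abs(value - qim_embed(value, 1))
    return 0 if d0 <= d1 else 1

cover = np.array([100.3, 57.8, 201.1, 66.0])
bits = [1, 0, 1, 1]
stego = np.array([qim_embed(v, b) for v, b in zip(cover, bits)])
noisy = stego + np.random.default_rng(3).normal(0, 1.0, size=stego.shape)  # mild distortion
print([qim_extract(v) for v in noisy])   # recovers [1, 0, 1, 1] for small noise
```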
Funding: Supported by the National High Technology Research and Development Program of China (863 Program) (No. 2009-aa012201) and the Key Library of Communication Technology (No. 9140C1103040902).
Abstract: This article aims to design a new Multivariate Quadratic (MQ) public-key scheme that avoids the linearization attack and the differential attack against the Matsumoto-Imai (MI) scheme. Based on the original scheme, our new scheme, named the Multi-layer MI (MMI) scheme, has a multi-layer central map structure. Firstly, this article introduces the MI scheme and describes the linearization attack and the differential attack; it then describes the design of MMI in detail and proves that MMI can resist both linearization and differential attacks. Besides, this article also proves that MMI can resist recent eXtended Linearization (XL)-like methods. In the end, this article concludes that MMI also maintains the efficiency of MI.
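For readers unfamiliar with the MI scheme that MMI builds on, the standard central map and the bilinear relation exploited by Patarin's linearization attack are recalled below; this is textbook background on Matsumoto-Imai, not material taken from the MMI construction itself.

```latex
% Matsumoto-Imai central map over the extension field \mathbb{F}_{q^n},
% with \theta chosen so that \gcd(q^{\theta}+1,\, q^{n}-1)=1 (making F invertible):
F(X) = X^{\,q^{\theta}+1}, \qquad X \in \mathbb{F}_{q^{n}}
% Patarin's linearization attack uses the bilinear relation between a plaintext a
% and its ciphertext b = F(a):
a\, b^{\,q^{\theta}} \;=\; a^{\,q^{2\theta}}\, b
```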
Funding: Supported by the National Key R&D Program of China 2016YFB0801303 (F.L. received the grant, the sponsor's website is https://service.most.gov.cn/), the National Key R&D Program of China 2016QY01W0105 (X.L. received the grant, the sponsor's website is https://service.most.gov.cn/), the National Natural Science Foundation of China U1636219 (X.L. received the grant, the sponsor's website is http://www.nsfc.gov.cn/), the National Natural Science Foundation of China 61602508 (J.L. received the grant, the sponsor's website is http://www.nsfc.gov.cn/), the National Natural Science Foundation of China 61772549 (F.L. received the grant, the sponsor's website is http://www.nsfc.gov.cn/), the National Natural Science Foundation of China U1736214 (F.L. received the grant, the sponsor's website is http://www.nsfc.gov.cn/), the National Natural Science Foundation of China U1804263 (X.L. received the grant, the sponsor's website is http://www.nsfc.gov.cn/), and the Science and Technology Innovation Talent Project of Henan Province 184200510018 (X.L. received the grant, the sponsor's website is http://www.hnkjt.gov.cn/).
Abstract: Existing IP geolocation algorithms based on delay similarity often rely on the principle that geographically adjacent IPs have similar delays. However, this principle is often invalid in the real Internet environment, which leads to unreliable geolocation results. To improve the accuracy and reliability of locating IPs in the real Internet, a street-level IP geolocation algorithm based on landmark clustering is proposed. Firstly, we use probes to measure the known landmarks to obtain their delay vectors and cluster the landmarks using them. Secondly, the landmarks are clustered again by their latitude and longitude, and the intersection of these two clustering results is taken to form training sets. Thirdly, we train multiple neural networks to learn the mapping relationship between delay and location in each training set. Finally, we select one of the neural networks for the target based on delay similarity and relative hop counts, and then geolocate the target with this network. Because it brings together delay clustering and geographical coordinate clustering, the proposed algorithm largely reduces the inconsistency between them and strengthens the mapping relationship between delay and location. We evaluate the algorithm through a series of experiments in Hong Kong, Shanghai, Zhengzhou and New York. The experimental results show that the proposed algorithm achieves street-level IP geolocation, and compared with existing typical street-level geolocation algorithms, it improves geolocation reliability significantly.
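A compressed scikit-learn sketch of the training pipeline described above: landmarks are clustered once by delay vectors and once by coordinates, the intersection of the two partitions forms training sets, and a small neural network per set learns the delay-to-location mapping. The synthetic landmark data, the cluster counts, and the use of MLPRegressor are assumptions standing in for the paper's measurement data and network design.

```python
# Sketch: cluster landmarks by delay and by coordinates, intersect the two
# partitions, and fit one delay->location regressor per resulting training set.
# The synthetic data and model choices below are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
n_landmarks, n_probes = 200, 6
delays = rng.gamma(shape=2.0, scale=10.0, size=(n_landmarks, n_probes))   # ms, per probe
coords = rng.uniform([22.2, 114.0], [22.5, 114.3], size=(n_landmarks, 2)) # lat, lon

delay_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(delays)
geo_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(coords)

# Each (delay cluster, geo cluster) intersection becomes one training set.
models = {}
for key in set(zip(delay_labels, geo_labels)):
    idx = [i for i, k in enumerate(zip(delay_labels, geo_labels)) if k == key]
    if len(idx) < 10:                       # skip tiny intersections
        continue
    reg = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    models[key] = reg.fit(delays[idx], coords[idx])

# Geolocate a target: pick the model of the most delay-similar landmark's set.
target_delay = delays[0] + rng.normal(0, 1.0, size=n_probes)
nearest = int(np.argmin(np.linalg.norm(delays - target_delay, axis=1)))
key = (delay_labels[nearest], geo_labels[nearest])
if key in models:
    print("estimated lat/lon:", models[key].predict([target_delay])[0])
```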
Abstract: By allocating and changing the IP addresses of source and destination hosts, network address space randomization constructs a dynamic and heterogeneous network to decrease the possibility and predictability of attacks. This research mainly exploits the features of OpenFlow networks, including the decoupling of the data plane and control plane, centralized control of the network, and dynamic updating of forwarding rules. It combines the advantages of network address space randomization technology with the features of the OpenFlow network and designs a novel solution for IP conversion in the Floodlight controller. The research can help improve unpredictability and decrease the possibility of worm attacks and IP sniffing through IP allocation.
Funding: Supported by the Natural Science Foundation of China under Grant Nos. 61272042, 61100202 and 61170235.
Abstract: For nonlinear feedback shift registers (NFSRs), their greatest common subfamily may not be unique. Given two NFSRs, the authors only consider the case in which their greatest common subfamily exists and is unique. If the greatest common subfamily is exactly the set of all sequences that can be generated by both of them, the authors can determine it by Gröbner basis theory. Otherwise, the authors can determine it under some conditions and partly solve the problem.
Funding: This study was supported in part by the National Natural Science Foundation of China (Nos. 61401512, 61602508, 61772549, U1636219 and U1736214), the National Key R&D Program of China (No. 2016YFB0801303 and 2016QY01W0105), the Key Technologies R&D Program of Henan Province (No. 162102210032), and the Key Science and Technology Research Project of Henan Province (No. 152102210005).
Abstract: When dealing with large-scale programs, many automatic vulnerability mining techniques encounter problems such as path explosion, state explosion, and low efficiency. Decomposing large-scale programs based on security-sensitive functions helps solve the above problems, but manual identification of security-sensitive functions is a tedious task, especially for large-scale programs. This study proposes a method to mine security-sensitive functions whose arguments need to be checked before they are called. Two argument-checking identification algorithms are proposed based on the analysis of two implementations of argument checking. Based on these algorithms, security-sensitive functions are detected according to the ratio of invocation instances whose arguments have been protected to the total number of instances. The results of experiments on three well-known open-source projects show that the proposed method outperforms competing methods in the literature.
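The ratio-based detection step lends itself to a tiny sketch: for each callee, count how many of its call sites check the arguments beforehand, and flag the function as security-sensitive when the protected fraction exceeds a threshold. The call-site records and the 0.8 threshold are hypothetical; the paper's two argument-checking identification algorithms, which produce such records from real code, are not reimplemented here.

```python
# Sketch: flag security-sensitive functions by the ratio of call sites whose
# arguments are checked before the call. The records and threshold are assumptions.
from collections import defaultdict

# (callee, arguments_checked_before_call) pairs, e.g. produced by static analysis.
call_sites = [
    ("memcpy", True), ("memcpy", True), ("memcpy", False), ("memcpy", True),
    ("strcpy", True), ("strcpy", True),
    ("printf", False), ("printf", False), ("printf", True),
]

THRESHOLD = 0.8

stats = defaultdict(lambda: [0, 0])          # callee -> [protected, total]
for callee, checked in call_sites:
    stats[callee][1] += 1
    if checked:
        stats[callee][0] += 1

for callee, (protected, total) in sorted(stats.items()):
    ratio = protected / total
    tag = "security-sensitive" if ratio >= THRESHOLD else "not flagged"
    print(f"{callee}: {protected}/{total} protected ({ratio:.2f}) -> {tag}")
```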
Funding: This work was supported by the National Key R&D Program of China under Grant 2017YFB0802901.
Abstract: Code defects can lead to software vulnerabilities and even produce vulnerability risks. Existing research shows that code detection technology based on text analysis can, to some extent, judge whether object-oriented code files are defective. However, these detection techniques are mainly based on text features and have weak detection capabilities across programs. Compared with the uncertainty of code and text caused by developer personalization, a programming language has a stricter logical specification, which reflects the rules and requirements of the language itself and the developer's underlying way of thinking. This article replaces text analysis with programming logic modeling, breaking through the limitation of code text analysis that relies solely on the probability of sentence/word occurrence in the code. It proposes a programming logic construction method for object-oriented languages based on method constraint relationships, selects features using hypothesis testing ideas, and constructs a support vector machine classifier to detect class files with defects, reducing the impact of personalized programming on the detection method. In the experiments, some representative Android applications were selected to test and compare the proposed method. In terms of code defect detection accuracy, through cross validation, both the proposed method and the existing leading methods reach an average accuracy of more than 90%. In cross-program detection, the method proposed in this paper is superior to the other two leading methods in accuracy, recall and F1 value.
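The feature-selection-plus-classifier stage can be sketched with scikit-learn: hypothesis-test-style scoring (here a chi-squared test) keeps the most discriminative method-constraint features, and an SVM is trained on the reduced vectors to label class files as defective or not. The random feature matrix, the number of selected features, and the chi2/SVC choices are assumptions, not the paper's exact statistics or features.

```python
# Sketch: hypothesis-test-based feature selection followed by an SVM classifier
# for defective/non-defective class files. Data and parameters are toy assumptions.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n_files, n_features = 120, 40
X = rng.integers(0, 5, size=(n_files, n_features)).astype(float)  # constraint-relation counts
y = rng.integers(0, 2, size=n_files)                               # 1 = defective class file

# Make a handful of features weakly informative so selection has something to find.
X[:, :5] += y[:, None] * rng.integers(1, 3, size=(n_files, 5))

model = make_pipeline(SelectKBest(chi2, k=10), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```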
Funding: The National Key Research and Development Program of China (Grant No. 2018YF0804003) and the National Key Research and Development Program of China under Grant No. 2017YFB0803204.
Abstract: As an active defense technique, multi-variant execution (MVX) can detect attacks by monitoring the consistency of heterogeneous variants executing in parallel. Compared with patch-style passive defense, MVX can defend against known and even unknown vulnerability-based attacks without relying on attack feature information. However, variants generated with software diversity technologies can introduce new vulnerabilities when they execute in parallel. First, we analyze the security of MVX theory from the perspective of formal description. Then we summarize the general forms and techniques of attacks against MVX and analyze the new vulnerabilities arising from the combination of variant generation technologies. We propose SecMVX, a secure MVX architecture and variant generation technology. Experimental evaluations based on CVEs and the SPEC 2006 benchmark show that SecMVX introduces 11.29% average time overhead and avoids vulnerabilities caused by the improper combination of variant generation technologies while keeping the defensive ability of MVX.