When dealing with large-scale programs, many automatic vulnerability mining techniques encounter problems such as path explosion, state explosion, and low efficiency. Decomposing large-scale programs based on security-sensitive functions helps solve these problems, but manually identifying security-sensitive functions is a tedious task, especially for large-scale programs. This study proposes a method to mine security-sensitive functions whose arguments need to be checked before they are called. Two argument-checking identification algorithms are proposed based on an analysis of two implementations of argument checking. Building on these algorithms, security-sensitive functions are detected from the ratio of invocation instances whose arguments have been protected to the total number of instances. Experiments on three well-known open-source projects show that the proposed method outperforms competing methods in the literature.
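The ratio test described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the function name, the 0.8 threshold, and the minimum-call cutoff are all assumptions.

```python
# Hypothetical sketch of the ratio-based detection step: a callee whose
# arguments are checked before most of its invocations is flagged as
# security-sensitive. `call_sites` maps a callee name to a list of booleans,
# True when that invocation's arguments were found to be checked.

def find_security_sensitive(call_sites, ratio_threshold=0.8, min_calls=5):
    """Flag functions whose arguments are usually checked before calls."""
    sensitive = []
    for callee, checked_flags in call_sites.items():
        if len(checked_flags) < min_calls:   # too few instances to judge
            continue
        ratio = sum(checked_flags) / len(checked_flags)
        if ratio >= ratio_threshold:
            sensitive.append((callee, ratio))
    return sorted(sensitive, key=lambda t: -t[1])

calls = {
    "memcpy": [True, True, True, True, False, True],  # mostly protected
    "printf": [False, False, True, False, False],     # rarely protected
}
print(find_security_sensitive(calls))  # only "memcpy" is flagged
```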
To deal with the complex association relationships between classes in an object-oriented software system, a novel approach for identifying refactoring opportunities is proposed. The approach detects complex and duplicated many-to-many association relationships in source code and provides guidance for further refactoring. First, source code is transformed into an abstract syntax tree from which all data members of each class are extracted; each class is then characterized by the set of association classes holding its data members. Next, classes sharing common associations are obtained by comparing the different association-class sets in an integrated analysis. Finally, subject to pre-defined thresholds, all candidate class sets for refactoring and their common association classes are saved and exported. The approach was tested on 4 projects. The results show that precision is over 96% when the threshold is 3 and 100% when the threshold is 4. The approach also executes efficiently: a project with more than 500 classes takes less than 4 s, indicating that it can be applied to projects of different scales to identify refactoring opportunities effectively.
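The integrated-analysis step above can be pictured as set intersection over association-class sets. A minimal sketch, assuming a plain dict-of-sets representation and pairwise comparison (the paper's actual data structures and grouping may differ):

```python
# Classes whose association-class sets overlap by at least `threshold`
# members are reported as refactoring candidates, mirroring the paper's
# pre-defined threshold (3 or 4 in the experiments). Class names are made up.

from itertools import combinations

def refactoring_candidates(assoc, threshold=3):
    """Return class pairs sharing >= threshold common association classes."""
    candidates = []
    for a, b in combinations(sorted(assoc), 2):
        common = assoc[a] & assoc[b]
        if len(common) >= threshold:
            candidates.append((a, b, sorted(common)))
    return candidates

assoc = {
    "Order":  {"Customer", "Product", "Invoice", "Address"},
    "Quote":  {"Customer", "Product", "Invoice", "Discount"},
    "Report": {"Logger"},
}
print(refactoring_candidates(assoc, threshold=3))
```

With threshold 3, only `Order` and `Quote` are reported, since they share three association classes.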
Facing the threat of Internet attacks, malware classification is one of the promising solutions in the fields of intrusion detection and digital forensics. In previous work, researchers performed dynamic analysis, or static analysis after reverse engineering, but malware developers use anti-virtual-machine (VM) and obfuscation techniques to evade malware classifiers. By deploying honeypots, malware source code can be collected and analyzed; source code analysis yields a better classification for understanding attackers' purposes and for forensics. In this paper, a novel classification approach is proposed based on content similarity and directory structure similarity. Such a classification avoids re-analyzing known malware and allocates resources to new malware. Malware classification also lets network administrators know the purpose of attackers. The experimental results demonstrate that the proposed system classifies malware efficiently with a small misclassification ratio, and its performance is better than VirusTotal.
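The two similarity signals above can be mixed as a weighted score. The Jaccard measure and the 50/50 weighting here are illustrative assumptions; the paper's actual similarity functions may differ.

```python
# Combine file-content token overlap with directory-structure overlap to
# score how likely two source-code samples belong to the same malware family.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def malware_similarity(sample1, sample2, w_content=0.5, w_dirs=0.5):
    """Weighted mix of content similarity and directory-structure similarity."""
    content = jaccard(sample1["tokens"], sample2["tokens"])
    dirs = jaccard(sample1["dirs"], sample2["dirs"])
    return w_content * content + w_dirs * dirs

bot_a = {"tokens": {"connect", "flood", "irc"}, "dirs": {"src", "bin"}}
bot_b = {"tokens": {"connect", "flood", "scan"}, "dirs": {"src", "bin"}}
print(malware_similarity(bot_a, bot_b))  # -> 0.75: likely the same family
```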
One way to speed up the execution of sequential programs is to divide them into concurrent segments and execute those segments in parallel over a distributed computing environment. We argue that the execution speedup primarily depends on the degree of concurrency between the identified segments as well as the communication overhead between them. To guarantee the best speedup, we have to obtain the maximum possible concurrency degree between the identified segments while taking communication overhead into consideration. Existing code-distributor and multi-threading approaches do not fulfill such requirements; hence, they cannot predict the expected distributability gains in advance. To overcome these limitations, we propose a novel approach for verifying the distributability of sequential object-oriented programs. The proposed approach lets users see the maximum speedup gains before the actual distributed implementation: it computes an objective function that measures different distribution values for the same program, taking both remote and sequential calls into consideration. Experimental results showed that the proposed approach successfully determines the distributability of different real-life software applications compared with their real-life sequential and distributed implementations.
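An objective function in the spirit described above balances concurrency gain against communication cost. The concrete formula below (critical-segment time plus total remote-call overhead) is a hedged assumption for illustration, not the paper's exact objective.

```python
# Predict speedup of a candidate segmentation: the sequential runtime divided
# by the parallel runtime, where the parallel runtime is dominated by the
# slowest segment plus the overhead of all remote calls between segments.

def predicted_speedup(segment_times, comm_costs):
    """segment_times: per-segment compute times when run in parallel.
    comm_costs: overhead of each remote call between segments."""
    sequential = sum(segment_times)
    parallel = max(segment_times) + sum(comm_costs)
    return sequential / parallel

# Three segments and two remote calls between them: 10 s of sequential work
# runs in 5 s distributed, a predicted 2x speedup.
print(predicted_speedup([4.0, 3.0, 3.0], [0.5, 0.5]))  # -> 2.0
```

A segmentation whose predicted speedup falls below 1 is not worth distributing, which is the kind of advance verdict the approach aims to give.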
To compress hyperspectral images, a low-complexity discrete cosine transform (DCT)-based distributed source coding (DSC) scheme with Gray code is proposed. Unlike most existing DSC schemes, which apply a transform in the spatial domain, the proposed algorithm applies the transform in the spectral domain. A set-partitioning-based approach reorganizes the DCT coefficients into a wavelet-like tree structure and extracts the sign, refinement, and significance bitplanes. The extracted refinement bits are Gray encoded. Because of the dependency along the line dimension of hyperspectral images, a low-density parity-check (LDPC)-based Slepian-Wolf coder is adopted to implement the DSC strategy. Experimental results on an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) dataset show that the proposed paradigm achieves up to 6 dB improvement over DSC-based coders that apply the transform in the spatial domain, with significantly reduced computational complexity and memory usage.
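The Gray encoding of refinement bits mentioned above is the standard binary-reflected Gray code, shown here as a sketch; adjacent values map to codewords differing in a single bit, which makes strongly correlated coefficient values cheaper for the Slepian-Wolf coder to handle.

```python
# Binary-reflected Gray code: encode is one XOR with a shift; decode folds
# the shifted prefixes back in.

def gray_encode(n: int) -> int:
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Neighbouring magnitudes differ in exactly one Gray-coded bit.
print([bin(gray_encode(v))[2:].zfill(3) for v in range(4)])
# -> ['000', '001', '011', '010']
```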
The detection of software vulnerabilities in C and C++ code attracts much attention and interest today. This paper proposes a new framework, DrCSE, to improve software vulnerability detection. It uses an intelligent computation technique combining two methods, data rebalancing and representation learning, to analyze and evaluate the code property graph (CPG) of source code and detect the abnormal behavior of software vulnerabilities. To do so, DrCSE combines three main processing techniques: (i) building source code feature profiles, (ii) rebalancing data, and (iii) contrastive learning. Step (i) extracts the source code's features from the vertices and edges of the CPG. The data-rebalancing step supports the training process by balancing the experimental dataset. Finally, contrastive learning learns the important features of the source code by finding and pulling similar samples together while pushing outliers away. The experimental part of this paper demonstrates the superiority of the DrCSE framework for detecting source code security vulnerabilities on the Verum dataset. The proposed method achieves good performance on all metrics, notably Precision and Recall scores of 39.35% and 69.07%, respectively, proving the efficiency of the framework; it performs better than other approaches, with a 5% boost in both Precision and Recall. To our survey to date, this is the best reported result for software vulnerability detection on the Verum dataset.
This article presents a proposal for a model of a compositional microprogram control unit (CMCU) with output identification, adapted for implementation in complex programmable logic devices (CPLDs) equipped with integrated memory modules [1]. An approach applying two sources of code and one-hot encoding is used in a base CMCU model with output identification [2] [3]. The article depicts a complete processing example for the proposed CMCU model. Furthermore, it discusses the advantages and disadvantages of the approach in question and presents the results of experiments conducted on a real CPLD system.
This article presents a modification to the method that applies two sources of data. The modification is illustrated with the example of a compositional microprogram control unit (CMCU) model with a base structure implemented in complex programmable logic devices (CPLDs). First, the conditions needed to apply the method are presented, followed by the results of its implementation in real hardware.
To provide ultra-low-latency and highly energy-efficient communication for intelligent services, sixth-generation (6G) wireless communication networks need to escape the dilemma of the diminishing gains of the separate-optimization paradigm. In this context, this paper provides a comprehensive tutorial on how joint source-channel coding (JSCC) can be employed to improve overall system performance. We first introduce the communication requirements and performance metrics for 6G. Then, we give an overview of the source-channel separation theorem and explain why it may not hold in practical applications. In addition, we focus on two new JSCC schemes, the double low-density parity-check (LDPC) codes and the double polar codes, giving their detailed encoding and decoding processes and corresponding performance simulations. In a nutshell, this paper constitutes a tutorial on JSCC schemes tailored to the needs of future 6G communications.
An improved FGS (Fine Granular Scalability) coding method based on human visual characteristics is proposed in this letter. The method adjusts the FGS coding frame rate according to an evaluation of the video sequences, so as to improve coding efficiency and the subjective perceived quality of the reconstructed images. Finally, a fine-granular joint source-channel coding scheme is proposed based on this source coding method, which not only utilizes network resources efficiently but also guarantees reliable transmission of the video information.
An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. Under infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementations, it can be realized by encoding variable-length blocks using a piecewise-linear chaotic map within the precision of the register length. In the decoding process, the bit shift in the register tracks the synchronization of the initial value and the corresponding block, so all variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well, with high efficiency and minor compression loss compared with traditional entropy coding.
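The core idea, a message as a trajectory of a piecewise-linear map whose branch widths match the symbol probabilities, can be sketched with exact rational arithmetic. This is a generic illustration of chaotic (arithmetic-coding-like) source coding, not the paper's register-based finite-precision scheme: the map, the probabilities, and the choice of anchor point are assumptions.

```python
# A skewed binary map adapted to P(0) = p: branch [0, p) emits symbol 0,
# branch [p, 1) emits symbol 1. Encoding composes the inverse branches
# backwards, so iterating the forward map from x0 replays the message.
# Fractions keep the round trip exact (a stand-in for register precision).

from fractions import Fraction

def encode(symbols, p):
    """Map the message to an initial point by composing inverse branches."""
    x = Fraction(1, 2)
    for s in reversed(symbols):
        x = p * x if s == 0 else p + (1 - p) * x
    return x

def decode(x, p, n):
    """Iterate the forward map, reading one symbol per step."""
    out = []
    for _ in range(n):
        if x < p:
            out.append(0)
            x = x / p
        else:
            out.append(1)
            x = (x - p) / (1 - p)
    return out

p = Fraction(3, 4)            # source with P(symbol 0) = 3/4
msg = [0, 0, 1, 0, 1, 0, 0, 0]
x0 = encode(msg, p)
print(decode(x0, p, len(msg)) == msg)  # -> True
```

Because the more probable symbol owns the wider branch, likely messages land in wider intervals and need fewer digits of x0, which is where the entropy-approaching compression comes from.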
A new neural-network-based method for solving the congestion control problem arising at the user-network interface (UNI) of ATM networks is proposed in this paper. Unlike previous methods, where the coding rate for all traffic sources is tuned as a whole via the controller output signals, the proposed method adjusts the coding rate for only a part of the traffic sources when congestion occurs, while the remaining sources keep sending cells at the previous coding rate. The controller output signals comprise the source coding rate and the percentage of sources that send cells at that rate. The control methods not only minimize the cell loss rate but also guarantee the quality of the information (such as voice) fed into the multiplexer buffer. Simulations with 150 ADPCM voice sources fed into the multiplexer buffer showed that the proposed methods outperform previous methods on performance indices such as cell loss rate (CLR) and voice quality.
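The partial-adjustment idea above is simple to state in code. This sketch only shows the actuation step; the rates and the percentage here are made-up values, whereas the paper derives both from a neural-network controller.

```python
# On congestion, only `percent`% of the sources drop to the lower coding
# rate; the rest keep sending at their previous rate.

def apply_control(rates, low_rate, percent):
    """Set percent% of the sources to low_rate; leave the rest unchanged."""
    n_adjust = len(rates) * percent // 100
    return [low_rate] * n_adjust + rates[n_adjust:]

rates = [32.0] * 10                       # 10 sources at 32 kbit/s ADPCM
new_rates = apply_control(rates, 24.0, 30)
print(new_rates.count(24.0), new_rates.count(32.0))  # -> 3 7
```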
A novel joint source-channel distortion model is proposed that can accurately estimate the average distortion in progressive image transmission. To improve the precision of the model, the redundancy generated by a forbidden symbol in the arithmetic codes is used to distinguish quantization distortion from channel distortion, and all coefficients from the first erroneous one to the end of the sequence are set to a value within the variance range of the coefficients instead of zero. The error propagation arising from entropy coding, which is disregarded in most conventional joint source-channel coding (JSCC) systems, can then be accurately estimated. The precision of the model, in terms of average peak signal-to-noise ratio, is improved by about 0.5 dB compared with classical works. An efficient unequal error protection system based on the model is developed and can be used in wireless communication systems.
Distributed source coding (DSC) is applied to interferential multispectral image compression owing to the strong correlation among image frames. Many DSC systems in the literature use a feedback channel (FC) to control the rate at the decoder, which limits the application of DSC. Based on an analysis of the image data, a rate control approach is proposed to avoid the FC. Low-complexity motion compensation is first applied to estimate the side information at the encoder. Using a polynomial fitting method, a new mathematical model is then derived to estimate the rate from the correlation between the source and the side information. Experimental results show that the estimated rate is a good approximation of the actual rate required with an FC while incurring only a small bit-rate overhead. The compression scheme performs comparably with FC-based DSC systems and significantly outperforms JPEG2000.
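The fitting idea above, predicting the Slepian-Wolf rate from a source/side-information correlation measure, can be sketched with the simplest polynomial, a degree-1 fit. The sample points below are fabricated for illustration, and the paper's model may use a higher degree and a different correlation measure.

```python
# Fit rate (bits per pixel) as a linear function of a correlation measure
# between source and side information, then predict the rate for a new
# frame without any feedback channel.

samples = [(0.99, 0.40), (0.97, 0.75), (0.95, 1.10), (0.93, 1.45), (0.91, 1.80)]

def linear_fit(points):
    """Ordinary least-squares line through (x, y) points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

slope, intercept = linear_fit(samples)

def predict_rate(corr):
    return slope * corr + intercept

# Lower correlation -> higher rate needed, as expected.
print(round(predict_rate(0.94), 3))  # -> 1.275
```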
Csiszár's strong coding theorem for the discrete memoryless source is generalized to the arbitrarily varying source. We also determine the asymptotic error exponent for the arbitrarily varying source.
Robust video streaming over highly error-prone wireless channels has attracted much attention. In this paper, the authors introduce an effective algorithm that combines the unequal error protection capability of the channel multiplexing protocol H.223 Annex D with the new H.263++ Annex V data partitioning. Based on the optimal trade-off between these two technologies, the joint source and channel coding algorithm achieves stronger error resilience. Simulation results show its superiority over the separate coding mode and some unequal-error-protection modes under recommended wireless channel error patterns.
Smart contracts have led to more efficient development in finance and healthcare, but vulnerabilities in contracts pose high risks to their future applications. Current vulnerability detection methods for contracts are either based on fixed expert rules, which are inefficient, or rely on simplistic deep learning techniques that do not fully leverage contract semantic information; there is therefore ample room for improvement in detection precision. To solve these problems, this paper proposes a vulnerability detector based on deep learning, graph representation, and the Transformer, called GRATDet. The method first performs swapping, insertion, and symbolization operations on contract functions, increasing the amount of small-sample data. Each line of code is then treated as a basic semantic element, and information such as control and data relationships is extracted to construct a new representation in the form of a line graph (LG), which exposes structural features absent from the serialized presentation of the contract. Finally, the node and edge information of the graph is jointly learned using an improved Transformer-GP model to extract information both globally and locally, and the fused features are used for vulnerability detection. The method's effectiveness in reentrancy vulnerability detection is verified experimentally, where the F1 score reaches 95.16%, exceeding state-of-the-art methods.
To solve the problems caused by military software security issues, this paper first introduces a software fault injection technique, namely the main static fault injection method: program mutation. Source code for testing this algorithm is then put forward, and on this basis buffer overflow testing based on program mutation is conducted. Finally, several military software source codes are tested for buffer overflows using a deficiency tracking system (DTS) tool. Experimental results show the effectiveness of the proposed algorithm.
Any linear transform can be expressed uniformly as a matrix, and multiple transforms can easily be composed by matrix multiplication. When performing file transfers, files can therefore be encrypted with a matrix transformation. This article presents a matrix-based encryption and decryption algorithm for electronic documents, which relies on a special class of matrices arising from combinatorial problems. The method is a feasible and effective way to improve the security of electronic document systems, and the source code and programming software are given at the end.
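The matrix-transformation idea above can be sketched in the style of a Hill cipher: multiply byte blocks by an invertible matrix modulo 256 and decrypt with its modular inverse. The particular 2x2 matrix here is an illustrative choice; the article's special combinatorial matrix class is not reproduced.

```python
# Encrypt/decrypt byte pairs with an invertible matrix mod 256.
# M has determinant 1, so it is invertible modulo 256; M_INV is its
# modular inverse (M @ M_INV == I mod 256).

M     = [[1, 1], [1, 2]]
M_INV = [[2, 255], [255, 1]]

def apply(matrix, block):
    return [sum(m * b for m, b in zip(row, block)) % 256 for row in matrix]

def transform(matrix, data: bytes) -> bytes:
    assert len(data) % 2 == 0  # a real implementation would pad
    out = []
    for i in range(0, len(data), 2):
        out.extend(apply(matrix, data[i:i + 2]))
    return bytes(out)

plain = b"document"
cipher = transform(M, plain)
assert transform(M_INV, cipher) == plain  # decryption inverts encryption
print(cipher.hex())
```

Composing several such matrices by multiplication yields a single equivalent transform, which is exactly the chaining property the abstract mentions. Note that a plain linear cipher like this is weak on its own; it only illustrates the mechanics.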
A robust progressive image transmission scheme over broadband wireless fading channels is developed for 4th-generation (4G) wireless communication systems in this paper. The proposed scheme is based on space-time block-coded orthogonal frequency-division multiplexing (OFDM) with 4 transmit antennas and 2 receive antennas, and uses a simplified minimum mean square error (MMSE) detector instead of a maximum likelihood (ML) detector. Considering that the DCT is simpler and more widely applied in industry than wavelet transforms, a progressive image compression method based on the DCT, called mean-subtract embedded DCT (MSEDCT), is developed; it adds a simple mean-subtract step for the redundancy of the reorganized DC blocks to a structure similar to the embedded zerotree wavelet (EZW) coding method. After analyzing and testing the bit importance of the progressive MSEDCT bitstreams, a layered unequal-error-protection method of joint source-channel coding based on Reed-Solomon (RS) codes is used to protect different parts of the bitstream, providing different QoS assurances and good flexibility. Simulation experiments show that the proposed scheme effectively mitigates fading effects and obtains better image transmission quality, with 10-20 dB average peak-signal-noise-ratio (PSNR) gains at the median Eb/N0 over schemes without space-time coded OFDM or with equal error protection and space-time coded OFDM.
基金Funding: Supported by the National Natural Science Foundation of China (No. 90104013) and the 863 Project (2001AA121061).
文摘Abstract: An improved FGS (Fine Granular Scalability) coding method based on human visual characteristics is proposed in this letter. The method adjusts the FGS coding frame rate according to an evaluation of the video sequence, so as to improve the coding efficiency and the subjective perceived quality of the reconstructed images. Finally, a fine-granular joint source-channel coding scheme is proposed based on this source coding method, which not only utilizes network resources efficiently but also guarantees reliable transmission of the video information.
基金Funding: Project supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (Grant No. CityU 123009).
文摘Abstract: An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. Under infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementations, it can be realized by encoding variable-length blocks with a piecewise linear chaotic map within the precision of the register length. In the decoding process, the bit shift in the register tracks the synchronization between the initial value and the corresponding block, so all variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well, with high efficiency and only minor compression loss compared with traditional entropy coding.
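The core mechanism of chaos-based source coding can be illustrated with a minimal sketch (illustrative probabilities and function names are assumptions, not the paper's implementation): each symbol owns a sub-interval of [0, 1) sized to its probability, encoding iterates the inverse branches of the piecewise linear map backwards over the message, and decoding replays the map forwards:

```python
# Sketch: chaotic source coding with a piecewise linear map (illustrative only).
# Symbol i owns the interval [cum[i], cum[i] + probs[i]) of [0, 1); the i-th
# branch of the map stretches that interval linearly onto [0, 1). Matching the
# interval widths to symbol probabilities is what yields the compression gain.

def encode(message, probs):
    """Backward-iterate the inverse branches to get one initial condition."""
    cum = [0.0]
    for p in probs:
        cum.append(cum[-1] + p)
    x = 0.5  # any point of [0, 1) can serve as the terminal state
    for s in reversed(message):
        x = cum[s] + probs[s] * x  # inverse of the s-th linear branch
    return x

def decode(x, probs, length):
    """Forward-iterate the map; the branch index visited is the symbol."""
    cum = [0.0]
    for p in probs:
        cum.append(cum[-1] + p)
    out = []
    for _ in range(length):
        s = max(i for i in range(len(probs)) if cum[i] <= x)
        out.append(s)
        x = (x - cum[s]) / probs[s]  # apply the s-th branch of the map
    return out

probs = [0.8, 0.2]              # a skewed binary source
msg = [0, 0, 1, 0, 0, 0, 1, 0]
x0 = encode(msg, probs)
assert decode(x0, probs, len(msg)) == msg
```

Under infinite precision this is equivalent to arithmetic coding; the paper's contribution lies in making the finite-register version track synchronization via bit shifts, which this float-based sketch does not model.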
文摘Abstract: A new neural-network-based method for solving the congestion control problem arising at the user network interface (UNI) of ATM networks is proposed in this paper. Unlike previous methods, where the coding rate for all traffic sources is tuned as a whole through the controller output signals, the proposed method adjusts the coding rate for only a part of the traffic sources when congestion occurs, while the remaining sources keep sending cells at their previous coding rate. The controller output signals include the source coding rate and the percentage of sources that send cells at that coding rate. The control method not only minimizes the cell loss rate but also guarantees the quality of the information (such as voice) fed into the multiplexer buffer. Simulations with 150 ADPCM voice sources fed into the multiplexer buffer showed that the proposed method outperforms previous methods in performance indices such as the cell loss rate (CLR) and voice quality.
基金Funding: The National Natural Science Foundation of China (No. 60202006).
文摘Abstract: A novel joint source-channel distortion model is proposed, which can accurately estimate the average distortion in progressive image transmission. To improve the precision of the model, the redundancy introduced by a forbidden symbol in the arithmetic code is used to distinguish quantization distortion from channel distortion, and all coefficients from the first erroneous one to the end of the sequence are set to a value within the variance range of the coefficients instead of zero. In this way the error propagation arising from the entropy coding, which is disregarded in most conventional joint source-channel coding (JSCC) systems, can be accurately estimated. The precision of the model, in terms of average peak signal-to-noise ratio, is improved by about 0.5 dB compared with classical works. An efficient unequal error protection system based on the model is developed, which can be used in wireless communication systems.
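The modeling choice described above, replacing the coefficients lost to error propagation with an in-range value rather than zero, can be illustrated with a small synthetic sketch (the data and function names are assumptions for illustration, not the paper's codec):

```python
# Sketch of the distortion-modeling idea (synthetic data; not the paper's
# codec): once entropy decoding breaks at the first erroneous symbol, every
# later coefficient is unusable. Conventional models reset them to zero; the
# model above resets them to a value inside the coefficients' spread instead.

def propagation_mse(coeffs, first_error, fill):
    """MSE when coefficients from first_error onward are replaced by `fill`."""
    n = len(coeffs)
    return sum((c - fill) ** 2 for c in coeffs[first_error:]) / n

coeffs = [12.0, -7.5, 6.0, -5.2, 4.8, -4.1, 3.5, -3.0]  # toy DCT-like tail
k = 3                                # position of the first channel error
mean_fill = sum(coeffs[k:]) / len(coeffs[k:])
mse_zero = propagation_mse(coeffs, k, 0.0)
mse_fill = propagation_mse(coeffs, k, mean_fill)
# The mean minimizes the squared deviation, so the in-range fill never yields
# a larger estimated propagation distortion than zero-filling does.
assert mse_fill <= mse_zero
```

This only shows why the fill-value choice matters for the distortion estimate; the paper's actual model additionally uses the forbidden-symbol redundancy to locate the first error.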
基金Funding: Supported by the National Natural Science Foundation of China (Nos. 60532060 and 60672117) and the Program for Changjiang Scholars and Innovative Research Team in University (PCSIRT).
文摘Abstract: Distributed source coding (DSC) is applied to interferential multispectral image compression owing to the strong correlation among the image frames. Many DSC systems in the literature use a feedback channel (FC) for rate control at the decoder, which limits the applicability of DSC. Based on an analysis of the image data, a rate control approach is proposed that avoids the FC. Low-complexity motion compensation is first applied to estimate the side information at the encoder. Using a polynomial fitting method, a new mathematical model is then derived to estimate the rate from the correlation between the source and the side information. Experimental results show that the estimated rate is a good approximation of the actual rate required with an FC, at the cost of only a small bit-rate overhead. The compression scheme performs comparably to FC-based DSC systems and outperforms JPEG2000 significantly.
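The polynomial-fitting step can be sketched as ordinary least squares over (correlation, rate) points (the training numbers and names below are toy assumptions; the real statistic and rate samples come from the codec, which is not reproduced here):

```python
# Sketch of encoder-side rate modeling by polynomial fitting: fit
# rate = f(correlation) by least squares, then predict the rate for a new
# frame without any decoder feedback channel.

def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via normal equations + Gaussian elimination."""
    n = degree + 1
    # Normal equations (A^T A) c = A^T y for the Vandermonde matrix A.
    ata = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                 # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for row in range(col + 1, n):
            f = ata[row][col] / ata[col][col]
            for j in range(col, n):
                ata[row][j] -= f * ata[col][j]
            aty[row] -= f * aty[col]
    coeffs = [0.0] * n                   # back substitution
    for row in range(n - 1, -1, -1):
        s = aty[row] - sum(ata[row][j] * coeffs[j] for j in range(row + 1, n))
        coeffs[row] = s / ata[row][row]
    return coeffs

def predict(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

# Toy training points: weaker source/side-information correlation -> higher rate.
corr = [0.1, 0.3, 0.5, 0.7, 0.9]
rate = [0.95, 0.72, 0.55, 0.33, 0.12]
model = polyfit(corr, rate, 2)
estimated = predict(model, 0.5)   # rate estimate for a new correlation value
```

The small gap between the fitted curve and the measured points corresponds to the "little bit-rate overhead" the abstract reports relative to feedback-driven rate control.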
文摘Abstract: Csiszar's strong coding theorem for the discrete memoryless source is generalized to arbitrarily varying sources. We also determine the asymptotic error exponent for arbitrarily varying sources.
文摘Abstract: Robust video streaming over highly error-prone wireless channels has attracted much attention. In this paper, the authors introduce an effective algorithm that combines the unequal error protection capability of the channel multiplexing protocol H.223 Annex D with the new H.263++ Annex V data partitioning. Based on an optimal trade-off between these two technologies, the joint source and channel coding algorithm achieves stronger error resilience. Simulation results show its superiority over the separate coding mode and some unequal error protection modes under the recommended wireless channel error patterns.
基金Funding: Supported by the Science and Technology Program Project (No. 2020A02001-1) of the Xinjiang Autonomous Region, China.
文摘Abstract: Smart contracts have enabled more efficient development in finance and healthcare, but vulnerabilities in contracts pose high risks to their future applications. Current vulnerability detection methods for contracts are either based on fixed expert rules, which are inefficient, or rely on simplistic deep learning techniques that do not fully leverage contract semantic information, so there is ample room for improvement in detection precision. To solve these problems, this paper proposes a vulnerability detector based on deep learning techniques, graph representation, and the Transformer, called GRATDet. The method first performs swapping, insertion, and symbolization operations on contract functions, increasing the amount of small-sample data. Each line of code is then treated as a basic semantic element, and information such as control and data relationships is extracted to construct a new representation in the form of a line graph (LG), which exposes structural features that differ from the serialized representation of the contract. Finally, the node and edge information of the graph is jointly learned using an improved Transformer-GP model to extract information both globally and locally, and the fused features are used for vulnerability detection. The effectiveness of the method for reentrancy vulnerability detection is verified in experiments, where the F1 score reaches 95.16%, exceeding state-of-the-art methods.
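The line-graph idea, one node per source line with edges for control and data relationships, can be sketched with simple heuristics (this is a hedged toy extraction, not GRATDet's actual pipeline; the regexes and example code are assumptions):

```python
# Hedged sketch of a "line graph" code representation: each source line is a
# node; an edge is added when a line reads a variable that an earlier line
# assigned (data dependence) or when lines are control-adjacent.
import re

def build_line_graph(lines):
    assigns = {}         # variable -> line index of its latest assignment
    edges = set()
    for i, line in enumerate(lines):
        m = re.match(r"\s*(\w+)\s*=\s*(.+)", line)
        used = set(re.findall(r"[A-Za-z_]\w*", m.group(2) if m else line))
        for var in used:
            if var in assigns:
                edges.add((assigns[var], i))   # data-dependence edge
        if i > 0:
            edges.add((i - 1, i))              # control-adjacency edge
        if m:
            assigns[m.group(1)] = i
    return edges

code = [
    "balance = msg_value",
    "fee = balance / 100",
    "payout = balance - fee",
]
g = build_line_graph(code)
assert (0, 2) in g   # line 2 reads `balance`, assigned at line 0
assert (1, 2) in g   # line 2 reads `fee`, assigned at line 1
```

In the paper, such edge structure (rather than the flat token sequence) is what the graph/Transformer model consumes; a real extractor would of course parse Solidity rather than pattern-match assignments.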
文摘Abstract: To address the problems caused by military software security issues, this paper first introduces a software fault injection technique, namely the main static fault injection method: program mutation. Source code for testing this algorithm is then presented, and on this basis buffer overflow testing based on program mutation is conducted. Finally, several military software source codes are subjected to buffer overflow testing using the deficiency tracking system (DTS) tool. Experimental results show the effectiveness of the proposed algorithm.
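Program mutation as a static fault-injection step can be sketched generically (a single relational-operator mutation on a toy function; the paper's own mutation operators and target code are not reproduced here):

```python
# Hedged sketch of program mutation for fault injection: rewrite the AST so
# that `<` becomes `<=`, a classic off-by-one fault of exactly the kind that
# buffer-overflow testing aims to expose.
import ast

class RelationalMutator(ast.NodeTransformer):
    def visit_Compare(self, node):
        self.generic_visit(node)
        # Weaken every strict less-than into less-than-or-equal.
        node.ops = [ast.LtE() if isinstance(op, ast.Lt) else op
                    for op in node.ops]
        return node

source = "def in_bounds(i, n):\n    return i < n\n"
tree = ast.parse(source)
mutant = ast.unparse(RelationalMutator().visit(tree))  # Python 3.9+
assert "i <= n" in mutant

ns = {}
exec(mutant, ns)
assert ns["in_bounds"](5, 5)  # the mutant now accepts an out-of-range index
```

A test suite that fails to kill such a mutant (i.e., never notices the accepted out-of-range index) is exactly the kind of gap that mutation-based buffer-overflow testing reveals.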
文摘Abstract: Any linear transform can be expressed in a consistent matrix form, and multiple transforms can be composed simply by matrix multiplication. When transferring files, such matrix transforms can be used to encrypt them. The article presents a matrix-based encryption and decryption algorithm for electronic documents, which relies on a special class of matrices arising from combinatorial problems. The method is feasible and effective for improving the security of electronic document systems, and the source code and programming software are provided.
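The encrypt-by-matrix principle can be illustrated with a textbook Hill-cipher-style sketch over bytes mod 256 (the article's special matrix class is not specified, so the key matrix here is an illustrative assumption; this toy scheme is not cryptographically secure):

```python
# Hedged illustration of matrix-based encryption: multiply byte pairs by an
# invertible matrix mod 256 to encrypt, and by its inverse to decrypt.

KEY = [[1, 1], [1, 2]]          # det = 1, so invertible mod 256
KEY_INV = [[2, 255], [255, 1]]  # inverse of KEY, reduced mod 256

def apply_matrix(m, data):
    out = bytearray()
    for i in range(0, len(data), 2):          # process byte pairs
        v = (data[i], data[i + 1])
        out.append((m[0][0] * v[0] + m[0][1] * v[1]) % 256)
        out.append((m[1][0] * v[0] + m[1][1] * v[1]) % 256)
    return bytes(out)

def encrypt(plain):
    if len(plain) % 2:
        plain += b"\x00"                      # pad to an even length
    return apply_matrix(KEY, plain)

def decrypt(cipher):
    # Note: padding is not stripped in this sketch.
    return apply_matrix(KEY_INV, cipher)

doc = b"secret document!"
assert decrypt(encrypt(doc)) == doc
assert encrypt(doc) != doc
```

Composability falls out for free: encrypting with two keys in sequence is the same as encrypting once with their matrix product, which is the "connected together by matrix multiplication" property the abstract mentions.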
文摘Abstract: A robust progressive image transmission scheme over broadband wireless fading channels is developed for fourth-generation (4G) wireless communication systems in this paper. The proposed scheme is based on space-time block coded orthogonal frequency-division multiplexing (OFDM) with 4 transmit antennas and 2 receive antennas, and uses a simplified minimum mean square error (MMSE) detector instead of a maximum likelihood (ML) detector. Considering that the DCT is simpler and more widely applied in industry than wavelet transforms, a progressive image compression method based on the DCT, called mean-subtract embedded DCT (MSEDCT), is developed; it applies a simple mean-subtract step to the redundancy of the reorganized DC blocks, in addition to a structure similar to the embedded zerotree wavelet (EZW) coding method. After analyzing and testing the bit importance of the progressive MSEDCT bitstreams, a layered unequal error protection method for joint source-channel coding based on Reed-Solomon (RS) codes is used to protect different parts of the bitstreams, providing different QoS assurances and good flexibility. Simulation experiments show that the proposed scheme can effectively mitigate fading effects and achieves better image transmission, with 10-20 dB average peak signal-to-noise ratio (PSNR) gains at median Eb/No over schemes without space-time coded OFDM or with equal error protection and space-time coded OFDM.
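The layered unequal error protection principle can be shown in miniature (repetition codes and a deterministic toy channel stand in for the paper's RS codes and fading channel; all layers, strengths, and names here are illustrative assumptions):

```python
# Hedged toy illustration of layered UEP: earlier, more important layers of a
# progressive bitstream get stronger codes, so the base layer survives channel
# errors that corrupt the refinement layers.

def rep_encode(bits, r):
    return [b for b in bits for _ in range(r)]

def rep_decode(coded, r):
    # Majority vote over each group of r repeated bits.
    return [int(2 * sum(coded[i:i + r]) > r) for i in range(0, len(coded), r)]

def channel(bits):
    """Deterministic toy channel: flip every 4th bit (25% error rate)."""
    return [b ^ (i % 4 == 0) for i, b in enumerate(bits)]

layers = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]  # most -> least important
strength = [7, 3, 1]       # repetition factor per layer: the UEP profile
received = [rep_decode(channel(rep_encode(lay, r)), r)
            for lay, r in zip(layers, strength)]
assert received[0] == layers[0]   # heavily protected base layer survives
assert received[2] != layers[2]   # the weakly protected layer is corrupted
```

In the actual scheme the per-layer RS code rates are chosen from the measured bit importance of the MSEDCT bitstream, which is what yields the graded QoS the abstract describes.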