To compress hyperspectral images, a low-complexity discrete cosine transform (DCT)-based distributed source coding (DSC) scheme with Gray code is proposed. Unlike most existing DSC schemes, which apply the transform in the spatial domain, the proposed algorithm applies the transform in the spectral domain. A set-partitioning-based approach is applied to reorganize the DCT coefficients into a wavelet-like tree structure and extract the sign, refinement, and significance bitplanes. The extracted refinement bits are Gray encoded. Because of the dependency along the line dimension of hyperspectral images, a low-density parity-check (LDPC)-based Slepian-Wolf coder is adopted to implement the DSC strategy. Experimental results on the airborne visible/infrared imaging spectrometer (AVIRIS) dataset show that the proposed paradigm achieves up to 6 dB improvement over DSC-based coders that apply the transform in the spatial domain, with significantly reduced computational complexity and memory storage.
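The abstract does not spell out the Gray mapping it applies to the refinement bits; a minimal sketch of the standard binary-reflected Gray code, whose property that adjacent values differ in one bit is what makes refinement bitplanes friendlier to compress:

```python
def binary_to_gray(n: int) -> int:
    # Binary-reflected Gray code: consecutive integers differ
    # in exactly one bit of their Gray representation.
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    # Invert by cumulative XOR of progressively shifted copies.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

For example, 5 (binary 101) maps to Gray 111, and the inverse recovers 5; the round trip holds for any non-negative integer.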
An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In a finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of the register length. In the decoding process, the bit shift in the register tracks the synchronization of the initial value and the corresponding block, so all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well, with high efficiency and minor compression loss compared with traditional entropy coding.
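The paper's specific map and register handling are not given in the abstract; a toy sketch of the underlying idea using a skew tent map, where encoding iterates the map backward (choosing the branch preimage for each symbol) and decoding iterates it forward (the branch visited recovers the symbol):

```python
def tent_encode(symbols: list[int], p: float) -> float:
    # Backward iteration of a skew tent map: for each symbol pick the
    # preimage in the matching branch, starting from 0.5. The branch
    # widths (p, 1-p) play the role of symbol probabilities.
    x = 0.5
    for s in reversed(symbols):
        x = x * p if s == 0 else 1 - x * (1 - p)
    return x

def tent_decode(x0: float, p: float, n: int) -> list[int]:
    # Forward iteration: the branch containing the current state
    # identifies one source symbol per step.
    symbols, x = [], x0
    for _ in range(n):
        if x < p:
            symbols.append(0)
            x = x / p
        else:
            symbols.append(1)
            x = (1 - x) / (1 - p)
    return symbols
```

With p matched to the symbol statistics, the trajectory's initial value is the compressed representation; the round trip is exact here only because p = 0.5 keeps the arithmetic within binary floating point, which is a stand-in for the paper's fixed-register treatment.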
Distributed source coding (DSC) is applied to interferential multispectral image compression owing to the strong correlation among the image frames. Many DSC systems in the literature use a feedback channel (FC) to control the rate at the decoder, which limits the applicability of DSC. Based on an analysis of the image data, a rate control approach is proposed to avoid the FC. Low-complexity motion compensation is applied first to estimate side information at the encoder. Using a polynomial fitting method, a new mathematical model is then derived to estimate the rate based on the correlation between the source and the side information. The experimental results show that the estimated rate is a good approximation to the actual rate required by an FC, while incurring only a small bit-rate overhead. The compression scheme performs comparably with the FC-based DSC system and outperforms JPEG2000 significantly.
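The exact model and its coefficients are not given in the abstract; a hedged sketch of the general shape of such a rate model, fitting a polynomial to (correlation, measured rate) pairs and then predicting the rate for a new frame without a feedback channel — the data points below are synthetic, for illustration only:

```python
import numpy as np

# Made-up calibration points: correlation between source and side
# information vs. measured Slepian-Wolf rate (bits/pixel).
rho = np.array([0.80, 0.85, 0.90, 0.95, 0.99])
rate = np.array([0.72, 0.60, 0.46, 0.30, 0.11])

# Fit a quadratic rate model R(rho) by least squares.
coeffs = np.polyfit(rho, rate, deg=2)
predict = np.poly1d(coeffs)

# Estimate the encoding rate for a frame whose correlation is 0.93,
# replacing the decoder-side rate feedback.
est = predict(0.93)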
In order to provide ultra-low-latency and highly energy-efficient communication for intelligent applications, sixth generation (6G) wireless communication networks need to break out of the dilemma of the diminishing gains of the separated optimization paradigm. In this context, this paper provides a comprehensive tutorial that overviews how joint source-channel coding (JSCC) can be employed to improve overall system performance. To this end, we first introduce the communication requirements and performance metrics for 6G. Then, we provide an overview of the source-channel separation theorem and why it may not hold in practical applications. In addition, we focus on two new JSCC schemes, the double low-density parity-check (LDPC) codes and the double polar codes, giving their detailed coding and decoding processes and corresponding performance simulations. In a nutshell, this paper constitutes a tutorial on JSCC schemes tailored to the needs of future 6G communications.
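The double-LDPC and double-polar constructions are beyond an abstract-level sketch, but the parity-check relation at the heart of any LDPC scheme is compact: a binary vector c is a valid codeword iff Hc = 0 (mod 2). A toy illustration (the 3x6 matrix is illustrative, not an actual LDPC code from the paper):

```python
import numpy as np

# Toy parity-check matrix; real LDPC matrices are large and sparse.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def syndrome(c: np.ndarray) -> np.ndarray:
    # All-zero syndrome <=> c satisfies every parity check.
    return H.dot(c) % 2

codeword = np.array([1, 0, 1, 1, 1, 0])  # satisfies all three checks
```

A nonzero syndrome flags a corrupted word, which is the signal iterative LDPC decoders work from.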
An improved FGS (Fine Granular Scalability) coding method is proposed in this letter, based on human visual characteristics. The method adjusts the FGS coding frame rate according to an evaluation of the video sequences, so as to improve the coding efficiency and the subjective perceived quality of the reconstructed images. Finally, a fine granular joint source-channel coding scheme is proposed based on this source coding method, which not only utilizes network resources efficiently but also guarantees reliable transmission of the video information.
This article presents a proposal for a model of a compositional microprogram control unit (CMCU) with output identification, adapted for implementation in complex programmable logic devices (CPLD) equipped with integrated memory modules [1]. An approach that applies two sources of code and one-hot encoding has been used in a base CMCU model with output identification [2] [3]. The article depicts a complete processing example for the proposed CMCU model. Furthermore, it also discusses the advantages and disadvantages of the approach in question and presents the results of experiments conducted on a real CPLD system.
The article presents a modification to the method which applies two sources of data. The modification is illustrated on the example of a compositional microprogram control unit (CMCU) model with the base structure implemented in complex programmable logic devices (CPLD). First, the conditions needed to apply the method are presented, followed by the results of its implementation in real hardware.
Csiszar's strong coding theorem for the discrete memoryless source is generalized to the arbitrarily varying source. We also determine the asymptotic error exponent for the arbitrarily varying source.
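In strong source coding theorems of this kind, error exponents are typically expressed through the Kullback-Leibler divergence D(Q||P) between an empirical type Q and the source distribution P; a minimal computation of that quantity (a standard definition, not the paper's specific exponent expression):

```python
from math import log2

def kl_divergence(q, p):
    # D(Q||P) in bits: governs the exponential decay rate of the
    # probability that a source emits a sequence of atypical type Q.
    return sum(qi * log2(qi / pi) for qi, pi in zip(q, p) if qi > 0)
```

For instance, D((1/2, 1/2) || (1/4, 3/4)) = 1 - (1/2)log2(3) ≈ 0.2075 bits, and the divergence of a distribution from itself is zero.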
Robust video streaming over highly error-prone wireless channels has attracted much attention. In this paper the authors introduce an effective algorithm that joins the unequal error protection capability of the channel multiplexing protocol H.223 Annex D with the new H.263++ Annex V data partitioning. Based on the optimal trade-off between these two technologies, the joint source and channel coding algorithm achieves stronger error resilience. The simulation results show its superiority over the separate coding mode and some unequal error protection modes under recommended wireless channel error patterns.
The detection of software vulnerabilities in code written in the C and C++ languages attracts much attention and interest today. This paper proposes a new framework called DrCSE to improve software vulnerability detection. It uses an intelligent computation technique based on the combination of two methods, data rebalancing and representation learning, to analyze and evaluate the code property graph (CPG) of the source code for detecting the abnormal behavior of software vulnerabilities. To do that, DrCSE combines three main processing techniques: (i) building the source code feature profiles, (ii) rebalancing the data, and (iii) contrastive learning. Method (i) extracts the source code's features based on the vertices and edges of the CPG. The data rebalancing method supports the training process by balancing the experimental dataset. Finally, the contrastive learning technique learns the important features of the source code by finding and pulling similar ones together while pushing outliers away. The experimental part of this paper demonstrates the superiority of the DrCSE framework for detecting source code security vulnerabilities on the Verum dataset. The proposed method achieves good performance on all metrics, especially Precision and Recall scores of 39.35% and 69.07%, respectively, proving the efficiency of the DrCSE framework. It performs better than other approaches, with a 5% boost in Precision and a 5% boost in Recall. Overall, according to our survey, this is the best result to date for the software vulnerability detection problem on the Verum dataset.
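The abstract does not say which rebalancing method DrCSE uses; as an assumed stand-in, the simplest variant is random oversampling, which duplicates minority-class samples until all classes match the majority count:

```python
import random

def rebalance(samples, labels, seed=0):
    # Naive random oversampling: duplicate randomly chosen
    # minority-class samples until every class reaches the
    # majority-class count. Illustrative only.
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_x, out_y = [], []
    for y, group in by_class.items():
        extra = [rng.choice(group) for _ in range(target - len(group))]
        for s in group + extra:
            out_x.append(s)
            out_y.append(y)
    return out_x, out_y
```

Vulnerability datasets are heavily skewed toward non-vulnerable samples, which is why some balancing step usually precedes training.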
A new neural-network-based method for solving the problem of congestion control arising at the user network interface (UNI) of ATM networks is proposed in this paper. Unlike previous methods, where the coding rate for all traffic sources as controller output signals is tuned as a whole, the proposed method adjusts the coding rate for only a part of the traffic sources, while the remaining sources send cells at the previous coding rate when congestion occurs. The controller output signals include the source coding rate and the percentage of sources that send cells at the corresponding coding rate. The control methods not only minimize the cell loss rate but also guarantee the quality of the information (such as voice sources) fed into the multiplexer buffer. Simulations with 150 ADPCM voice sources fed into the multiplexer buffer showed that the proposed methods have an advantage over previous methods in performance indices such as cell loss rate (CLR) and voice quality.
In order to deal with the complex association relationships between classes in an object-oriented software system, a novel approach for identifying refactoring opportunities is proposed. The approach can be used to detect complex and duplicated many-to-many association relationships in source code, and to provide guidance for further refactoring. In the approach, source code is first transformed into an abstract syntax tree from which all data members of each class are extracted; each class is then characterized by a set of association classes saving its data members. Next, classes in common associations are obtained by comparing the different association class sets in an integrated analysis. Finally, subject to pre-defined thresholds, all class sets that are candidates for refactoring, together with their common association classes, are saved and exported. This approach is tested on 4 projects. The results show that the precision is over 96% when the threshold is 3, and 100% when the threshold is 4. Meanwhile, the approach has good execution efficiency: the execution time for a project with more than 500 classes is less than 4 s, which indicates that it can be applied to projects of different scales to identify their refactoring opportunities effectively.
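The paper targets object-oriented source generally; as an illustrative stand-in for its first step (AST extraction of per-class data members), here is the same idea expressed with Python's `ast` module, collecting names assigned to `self` in each class:

```python
import ast

def class_data_members(source: str) -> dict[str, set[str]]:
    # Parse source into an AST and collect each class's data
    # members: attribute names written to `self` inside methods.
    members: dict[str, set[str]] = {}
    for cls in ast.walk(ast.parse(source)):
        if not isinstance(cls, ast.ClassDef):
            continue
        fields = set()
        for node in ast.walk(cls):
            if (isinstance(node, ast.Attribute)
                    and isinstance(node.ctx, ast.Store)
                    and isinstance(node.value, ast.Name)
                    and node.value.id == "self"):
                fields.add(node.attr)
        members[cls.name] = fields
    return members
```

The resulting per-class member sets are the raw material the approach compares to find classes sharing common associations.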
When dealing with large-scale programs, many automatic vulnerability mining techniques encounter problems such as path explosion, state explosion, and low efficiency. Decomposing large-scale programs based on security-sensitive functions helps solve these problems, but manual identification of security-sensitive functions is a tedious task, especially for large-scale programs. This study proposes a method to mine security-sensitive functions whose arguments need to be checked before they are called. Two argument-checking identification algorithms are proposed based on an analysis of two implementations of argument checking. Based on these algorithms, security-sensitive functions are detected from the ratio of invocation instances whose arguments have been protected to the total number of instances. The results of experiments on three well-known open-source projects show that the proposed method outperforms competing methods in the literature.
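The ratio-based detection criterion can be sketched directly; the threshold value below is an assumed placeholder, since the abstract does not state the one used in the paper:

```python
def sensitivity_score(instances: list[bool]) -> float:
    # Each invocation instance is True if its arguments were checked
    # before the call; the score is the fraction of checked instances.
    checked = sum(1 for protected in instances if protected)
    return checked / len(instances)

def is_security_sensitive(instances: list[bool], threshold: float = 0.7) -> bool:
    # Hypothetical threshold: a function whose callers usually check
    # its arguments is flagged as security-sensitive.
    return sensitivity_score(instances) >= threshold
```

The intuition: if most call sites defensively validate a function's arguments, developers evidently treat that function as dangerous, so it is a good seed for focused vulnerability mining.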
We present an unequal decoding power allocation (UDPA) approach for minimizing receiver power consumption subject to a given quality of service (QoS), by exploiting data partitioning and turbo decoding. We assign unequal forward error correction (FEC) decoding power to data partitions with different priorities by jointly considering the source coding, channel coding, and receiver power consumption. The proposed scheme is applied to H.264 video over an additive white Gaussian noise (AWGN) channel; it achieves an excellent trade-off between video delivery quality and power consumption, and yields significant power savings compared with the conventional equal decoding power allocation (EDPA) approach in wireless video transmission.
Smart contracts have led to more efficient development in finance and healthcare, but vulnerabilities in contracts pose high risks to their future applications. Current vulnerability detection methods for contracts are either based on fixed expert rules, which are inefficient, or rely on simplistic deep learning techniques that do not fully leverage contract semantic information; there is therefore ample room for improvement in detection precision. To solve these problems, this paper proposes a vulnerability detector based on deep learning techniques, graph representation, and the Transformer, called GRATDet. The method first performs swapping, insertion, and symbolization operations on contract functions, increasing the amount of small-sample data. Each line of code is then treated as a basic semantic element, and information such as control and data relationships is extracted to construct a new representation in the form of a line graph (LG), which exposes more structural features than the serialized presentation of the contract. Finally, the node and edge information of the graph is jointly learned using an improved Transformer-GP model to extract information globally and locally, and the fused features are used for vulnerability detection. The effectiveness of the method for reentrancy vulnerability detection is verified in experiments, where the F1 score reaches 95.16%, exceeding state-of-the-art methods.
A robust progressive image transmission scheme over broadband wireless fading channels is developed for fourth generation (4G) wireless communication systems in this paper. The proposed scheme is based on space-time block coded orthogonal frequency-division multiplexing (OFDM) with 4 transmit antennas and 2 receive antennas, and uses a simplified minimum mean square error (MMSE) detector instead of maximum likelihood (ML) detectors. Considering that the DCT is simpler and more widely applied in industry than wavelet transforms, a progressive image compression method based on the DCT, called mean-subtract embedded DCT (MSEDCT), is developed, with a simple mean-subtract method for the redundancy of reorganized DC blocks in addition to a structure similar to the embedded zerotree wavelet (EZW) coding method. Then, after analyzing and testing the bit importance of the progressive MSEDCT bitstreams, a layered unequal error protection method of joint source-channel coding based on Reed-Solomon (RS) codes is used to protect different parts of the bitstreams, providing different QoS assurances and good flexibility. Simulation experiments show that the proposed scheme can effectively mitigate fading effects and obtain better image transmission performance, with 10-20 dB average peak-signal-to-noise ratio (PSNR) gains at median Eb/No over schemes without space-time coded OFDM or with equal error protection and space-time coded OFDM.
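The reported gains are measured in PSNR; for reference, the standard definition of that metric (not code from the paper) is 10·log10(peak²/MSE):

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    # Peak-signal-to-noise ratio in dB for 8-bit images (peak = 255).
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Because the scale is logarithmic, a 10-20 dB gain corresponds to a 10x-100x reduction in mean squared reconstruction error.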
We consider a quadratic Gaussian distributed lossy source coding setup with the additional constraint of identical reconstructions at the encoder and the decoder. The setup consists of two correlated Gaussian sources, one of which has to be reconstructed within some distortion constraint and match a corresponding reconstruction at the encoder, while the other source acts as coded side information. We study the trade-off between the rates of the two encoders for a given distortion constraint on the reconstruction. An explicit characterization of this trade-off is the main result of the paper. We also give close inner and outer bounds for the discrete memoryless version of the problem.
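For context on the quantities being traded off, the classical single-source baseline is the Gaussian rate-distortion function R(D) = (1/2)·log2(σ²/D) under mean-squared-error distortion (a textbook formula, not the paper's two-encoder region):

```python
from math import log2

def gaussian_rate(variance: float, distortion: float) -> float:
    # R(D) for a memoryless Gaussian source under MSE distortion,
    # in bits per sample; zero once D is at or above the variance.
    return max(0.0, 0.5 * log2(variance / distortion))
```

For a unit-variance source, hitting distortion 0.25 costs exactly 1 bit per sample, and allowing D ≥ σ² costs nothing; side information and the identical-reconstruction constraint shift this baseline, which is what the paper characterizes.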
Multispectral time delay and integration charge coupled device (TDICCD) image compression requires a low-complexity encoder because it is usually performed on board, where energy and memory are limited. The Consultative Committee for Space Data Systems (CCSDS) has proposed an image data compression (CCSDS-IDC) algorithm which is so far the most widely implemented in hardware. However, it cannot reduce the spectral redundancy in multispectral images. In this paper, we propose a low-complexity improved CCSDS-IDC (ICCSDS-IDC)-based distributed source coding (DSC) scheme for multispectral TDICCD images consisting of a few bands. Our scheme is based on an ICCSDS-IDC approach that uses a bit plane extractor to parse the differences between the original image and its wavelet-transformed coefficients. The output of the bit plane extractor is encoded by a first-order entropy coder, and a low-density parity-check-based Slepian-Wolf (SW) coder is adopted to implement the DSC strategy. Experimental results on space multispectral TDICCD images show that the proposed scheme significantly outperforms the CCSDS-IDC-based coder in each band.
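The bit plane extraction step can be sketched generically (this is the common bitplane decomposition, not the CCSDS-IDC extractor itself):

```python
import numpy as np

def bit_planes(image: np.ndarray, depth: int = 8) -> list[np.ndarray]:
    # Split an unsigned-integer image into binary bit planes,
    # most significant plane first - the kind of bitplane stream
    # a Slepian-Wolf coder consumes plane by plane.
    return [(image >> b) & 1 for b in range(depth - 1, -1, -1)]
```

Summing each plane back with its weight reconstructs the original pixel values exactly, so the decomposition is lossless and the coder can stop after any prefix of planes for a progressive bitstream.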
In-network data aggregation is severely affected by attacks on information in transit. This is an important problem, since wireless sensor networks (WSN) are highly vulnerable to node compromises under such attacks. As a result, a large error arises in the aggregate computed at the base station due to false sub-aggregate values contributed by compromised nodes, and falsified event messages forwarded through intermediate nodes waste their limited energy as well. Since wireless sensor nodes are battery operated, they have low computational power and energy. In view of this, algorithms designed for wireless sensor nodes should extend the lifetime, use less computation, and enhance security so as to prolong the network lifetime. This article presents a Vernam cipher cryptographic technique based data compression algorithm using the Huffman source coding scheme, in order to enhance the security and lifetime of energy-constrained wireless sensor nodes. In addition, the scheme is evaluated using different processor-based sensor node implementations, and the results are compared against other existing schemes. In particular, we present a secure lightweight algorithm for wireless sensor nodes which consumes less energy in operation. Using this, the entropy improvement is achieved to a great extent.
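The Vernam cipher half of the scheme is tiny to express: a byte-wise XOR with a key at least as long as the data, where applying the same key twice restores the plaintext (a generic sketch of the cipher, not the paper's key management):

```python
def vernam(data: bytes, key: bytes) -> bytes:
    # One-time-pad style XOR; encryption and decryption are the
    # same operation, which suits low-power sensor nodes.
    if len(key) < len(data):
        raise ValueError("Vernam cipher needs a key at least as long as the data")
    return bytes(d ^ k for d, k in zip(data, key))
```

XOR costs a single machine operation per byte, which is why it pairs well with Huffman compression on battery-operated nodes: the compression shrinks the radio payload and the cipher adds almost no CPU cost.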
A novel joint source-channel distortion model is proposed, which can accurately estimate the average distortion in progressive image transmission. To improve the precision of the model, the redundancy generated by a forbidden symbol in the arithmetic codes is used to distinguish the quantization distortion from the channel distortion; all the coefficients from the first erroneous one to the end of the sequence are set to a value within the variance range of the coefficients instead of zero, so the error propagation originating in the entropy coding can be estimated, which is disregarded in most conventional joint source-channel coding (JSCC) systems. The precision of the model in terms of average peak signal-to-noise ratio has been improved by about 0.5 dB compared with classical works. An efficient unequal error protection system based on the model is developed, which can be used in wireless communication systems.
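The cost and benefit of the forbidden-symbol trick have simple closed forms: reserving probability eps for a symbol that is never encoded adds -log2(1-eps) bits of redundancy per coded symbol, while the chance that a corrupted stream goes undetected shrinks as (1-eps)^n over n decoded symbols (standard arithmetic-coding facts, not formulas quoted from the paper):

```python
from math import log2

def forbidden_symbol_redundancy(eps: float) -> float:
    # Bits of redundancy per coded symbol spent on the forbidden
    # interval of probability eps.
    return -log2(1.0 - eps)

def undetected_after(eps: float, n: int) -> float:
    # Probability a channel error survives n decoded symbols without
    # the decoder ever landing in the forbidden interval.
    return (1.0 - eps) ** n
```

Small eps values keep the rate overhead tiny while still localizing the first error, which is exactly what lets the model separate channel distortion from quantization distortion.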
基金supported by the National Natural Science Foundationof China (60702012)the Scientific Research Foundation for the Re-turned Overseas Chinese Scholars, State Education Ministry
文摘To compress hyperspectral images, a low complexity discrete cosine transform (DCT)-based distributed source coding (DSC) scheme with Gray code is proposed. Unlike most of the existing DSC schemes, which utilize transform in spatial domain, the proposed algorithm applies transform in spectral domain. Set-partitioning-based approach is applied to reorganize DCT coefficients into waveletlike tree structure and extract the sign, refinement, and significance bitplanes. The extracted refinement bits are Gray encoded. Because of the dependency along the line dimension of hyperspectral images, low density paritycheck-(LDPC)-based Slepian-Wolf coder is adopted to implement the DSC strategy. Experimental results on airborne visible/infrared imaging spectrometer (AVIRIS) dataset show that the proposed paradigm achieves up to 6 dB improvement over DSC-based coders which apply transform in spatial domain, with significantly reduced computational complexity and memory storage.
基金Project supported by the Research Grants Council of the Hong Kong Special Administrative Region,China (Grant No.CityU 123009)
文摘An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of register length. In the decoding process, the bit shift in the register can track the synchronization of the initial value and the corresponding block. Therefore, all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well with high efficiency and minor compression loss when compared with traditional entropy coding.
基金Supported by the National Natural Science Foundation of China (No. 60532060 60672117), the Program for Changjiang Scholars and Innovative Research Team in University (PCS1TR).
文摘Distributed source coding (DSC) is applied to interferential multispectral image compression owing to strong correlation among the image frames. Many DSC systems in the literature use feedback channel (FC) to control rate at the decoder, which limits the application of DSC. Upon an analysis of the image data, a rate control approach is proposed to avoid FC. Low-complexity motion compensation is applied first to estimate side information at the encoder. Using a polynomial fitting method, a new mathematical model is then derived to estimate rate based on the correlation between the source and side information. The experimental results show that our estimated rate is a good approximation to the actual rate required by FC while incurring a little bit-rate overhead. Our compression scheme performs comparable with the FC based DSC system and outperforms JPEG2000 significantly.
基金supported by National Natural Science Foundation of China(No.92067202,No.62001049,&No.62071058)Beijing Natural Science Foundation under Grant 4222012Beijing University of Posts and Telecommunications-China Mobile Research Institute Joint Innovation Center。
文摘In order to provide ultra low-latency and high energy-efficient communication for intelligences,the sixth generation(6G)wireless communication networks need to break out of the dilemma of the depleting gain of the separated optimization paradigm.In this context,this paper provides a comprehensive tutorial that overview how joint source-channel coding(JSCC)can be employed for improving overall system performance.For the purpose,we first introduce the communication requirements and performance metrics for 6G.Then,we provide an overview of the source-channel separation theorem and why it may not hold in practical applications.In addition,we focus on two new JSCC schemes called the double low-density parity-check(LDPC)codes and the double polar codes,respectively,giving their detailed coding and decoding processes and corresponding performance simulations.In a nutshell,this paper constitutes a tutorial on the JSCC scheme tailored to the needs of future 6G communications.
基金Supported by National Natural Science Foundation of China (No.90104013) and 863 project(2001AA121061)
文摘An improved FGS (Fine Granular Scalability) coding method is proposed in this letter, which is based on human visual characteristics. This method adjusts FGS coding frame rate according to the evaluation of video sequences so as to improve the coding efficiency and subject perceived quality of reconstructed images. Finally, a fine granular joint source channel coding is proposed based on the source coding method, which not only utilizes the network resources efficiently, but guarantees the reliable transmission of video information.
文摘This article presents a proposal for a model of a microprogram control unit (CMCU) with output identification adapted for implementation in complex programmable logic devices (CPLD) equipped with integrated memory modules [1]. An approach which applies two sources of code and one-hot encoding has been used in a base CMCU model with output identification [2] [3]. The article depicts a complete example of processing for the proposed CMCU model. Furthermore, it also discusses the advantages and disadvantages of the approach in question and presents the results of the experiments conducted on a real CPLD system.
文摘The article presents a modification to the method which applies two sources of data. The modification is depicted on the example of a compositional microprogram control unit (CMCU) model with base structure implemented in the complex programmable logic devices (CPLD). First, the conditions needed to apply the method are presented, followed by the results of its implementation in real hardware.
文摘Csiszar's strong coding theorem for discrete memoryless scarce is generalized to arbitrarily varying source.We also determine the asymptotic error exponent for arbitrarily wrying source.
文摘Robust video streaming through high error prone wireless channel has attracted much attention. In this paper the authors introduce an effective algorithm by joining the Unequal Error Protection ability of the channel multiplexing protocol H.223 Annex D, and the new H.263++ Annex V Data Partition together. Based on the optimal trade off of these two technologies, the Joint Source and Channel Coding algorithm can result in stronger error resilience. The simulation results have shown its superiority against separate coding mode and some Unequal Error Protection mode under recommended wireless channel error patterns.
文摘The detection of software vulnerabilities written in C and C++languages takes a lot of attention and interest today.This paper proposes a new framework called DrCSE to improve software vulnerability detection.It uses an intelligent computation technique based on the combination of two methods:Rebalancing data and representation learning to analyze and evaluate the code property graph(CPG)of the source code for detecting abnormal behavior of software vulnerabilities.To do that,DrCSE performs a combination of 3 main processing techniques:(i)building the source code feature profiles,(ii)rebalancing data,and(iii)contrastive learning.In which,the method(i)extracts the source code’s features based on the vertices and edges of the CPG.The method of rebalancing data has the function of supporting the training process by balancing the experimental dataset.Finally,contrastive learning techniques learn the important features of the source code by finding and pulling similar ones together while pushing the outliers away.The experiment part of this paper demonstrates the superiority of the DrCSE Framework for detecting source code security vulnerabilities using the Verum dataset.As a result,the method proposed in the article has brought a pretty good performance in all metrics,especially the Precision and Recall scores of 39.35%and 69.07%,respectively,proving the efficiency of the DrCSE Framework.It performs better than other approaches,with a 5%boost in Precision and a 5%boost in Recall.Overall,this is considered the best research result for the software vulnerability detection problem using the Verum dataset according to our survey to date.
文摘A new neural network based method for solving the problem of congestion control arising at the user network interface (UNI) of ATM networks is proposed in this paper. Unlike the previous methods where the coding rate for all traffic sources as controller output signals is tuned in a body, the proposed method adjusts the coding rate for only a part of the traffic sources while the remainder sources send the cells in the previous coding rate in case of occurrence of congestion. The controller output signals include the source coding rate and the percentage of the sources that send cells at the corresponding coding rate. The control methods not only minimize the cell loss rate but also guarantee the quality of information (such as voice sources) fed into the multiplexer buffer. Simulations with 150 ADPCM voice sources fed into the multiplexer buffer showed that the proposed methods have advantage over the previous methods in the aspect of the performance indices such as cell loss rate (CLR) and voice quality.
Abstract: To deal with the complex association relationships between classes in an object-oriented software system, a novel approach for identifying refactoring opportunities is proposed. The approach detects complex and duplicated many-to-many association relationships in source code and provides guidance for further refactoring. In this approach, the source code is first transformed into an abstract syntax tree from which all data members of each class are extracted; each class is then characterized by the set of association classes storing its data members. Next, classes with common associations are obtained by comparing the different association class sets in an integrated analysis. Finally, subject to pre-defined thresholds, all class sets that are candidates for refactoring, together with their common association classes, are saved and exported. The approach is tested on 4 projects. The results show that the precision is over 96% when the threshold is 3, and 100% when the threshold is 4. The approach also executes efficiently: a project with more than 500 classes takes less than 4 s, which indicates that it can be applied to projects of different scales to identify refactoring opportunities effectively.
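The core comparison step, finding classes whose association class sets overlap, can be sketched with plain set intersection. The function and class names below are hypothetical; the `threshold` parameter mirrors the thresholds of 3 and 4 used in the experiments.

```python
def common_associations(assoc, threshold=3):
    """Find pairs of classes that share at least `threshold`
    association classes (candidates for refactoring).

    assoc maps each class name to the set of classes
    storing its data members."""
    candidates = []
    names = sorted(assoc)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = assoc[a] & assoc[b]          # common association classes
            if len(shared) >= threshold:
                candidates.append(((a, b), sorted(shared)))
    return candidates

# Hypothetical per-class association sets extracted from the AST.
assoc = {
    "Order":  {"Customer", "Product", "Invoice", "Shipment"},
    "Quote":  {"Customer", "Product", "Invoice", "Discount"},
    "Report": {"Logger"},
}
for classes, shared in common_associations(assoc, threshold=3):
    print(classes, "->", shared)
# ('Order', 'Quote') -> ['Customer', 'Invoice', 'Product']
```

Raising the threshold from 3 to 4 would drop this pair, which matches the precision/recall trade-off reported above.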
Funding: This study was supported in part by the National Natural Science Foundation of China (Nos. 61401512, 61602508, 61772549, U1636219, and U1736214), the National Key R&D Program of China (Nos. 2016YFB0801303 and 2016QY01W0105), the Key Technologies R&D Program of Henan Province (No. 162102210032), and the Key Science and Technology Research Project of Henan Province (No. 152102210005).
Abstract: When dealing with large-scale programs, many automatic vulnerability mining techniques encounter problems such as path explosion, state explosion, and low efficiency. Decomposing large-scale programs based on security-sensitive functions helps solve these problems, but manually identifying security-sensitive functions is a tedious task, especially for large-scale programs. This study proposes a method to mine security-sensitive functions, i.e., functions whose arguments need to be checked before they are called. Two argument-checking identification algorithms are proposed, based on an analysis of two common implementations of argument checking. Using these algorithms, security-sensitive functions are detected from the ratio of invocation instances whose arguments are protected to the total number of instances. Experiments on three well-known open-source projects show that the proposed method outperforms competing methods in the literature.
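The detection criterion, the ratio of protected invocation instances to the total number of instances, can be sketched as follows. The data format and function names are assumptions for illustration; in the real system the checked/unchecked flags come from the two argument-checking identification algorithms.

```python
from collections import defaultdict

def security_sensitive(call_sites, ratio_threshold=0.75):
    """Flag callees whose arguments are checked before most invocations.

    call_sites: list of (callee_name, args_checked: bool) pairs,
    one per invocation instance found in the program."""
    checked, total = defaultdict(int), defaultdict(int)
    for callee, is_checked in call_sites:
        total[callee] += 1
        if is_checked:
            checked[callee] += 1
    # A callee is security-sensitive when its protected-call ratio
    # meets the threshold.
    return {f for f in total
            if checked[f] / total[f] >= ratio_threshold}

# Hypothetical invocation instances: memcpy is usually guarded, printf is not.
sites = [("memcpy", True), ("memcpy", True), ("memcpy", True),
         ("memcpy", False), ("printf", False), ("printf", False)]
print(security_sensitive(sites, ratio_threshold=0.75))  # {'memcpy'}
```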
Funding: Supported by the Scientific Research Innovation Project of the Shanghai Municipal Education Commission (Grant No. 08YZ18), the Key Project of the Natural Science Foundation of China (Grant No. 60832003), the National Natural Science Foundation of China (Grant Nos. 60972137 and 60672052), the Innovation Foundation Project of Shanghai University, and the Special Research Foundation of Shanghai Excellent Youth University Teacher Training.
Abstract: We present an unequal decoding power allocation (UDPA) approach for minimizing receiver power consumption subject to a given quality of service (QoS), by exploiting data partitioning and turbo decoding. We assign unequal forward error correction (FEC) decoding power to data partitions of different priority by jointly considering source coding, channel coding, and receiver power consumption. The proposed scheme is applied to H.264 video over an additive white Gaussian noise (AWGN) channel; it achieves an excellent tradeoff between video delivery quality and power consumption, and yields significant power savings compared with the conventional equal decoding power allocation (EDPA) approach in wireless video transmission.
Funding: Supported by the Science and Technology Program Project (No. 2020A02001-1) of Xinjiang Autonomous Region, China.
Abstract: Smart contracts have enabled more efficient development in finance and healthcare, but vulnerabilities in contracts pose high risks to their future applications. Current vulnerability detection methods for contracts are either based on fixed expert rules, which are inefficient, or rely on simplistic deep learning techniques that do not fully leverage contract semantic information, so there is ample room for improvement in detection precision. To solve these problems, this paper proposes a vulnerability detector based on deep learning, graph representation, and the Transformer, called GRATDet. The method first performs swapping, insertion, and symbolization operations on contract functions, increasing the amount of data available for small samples. Each line of code is then treated as a basic semantic element, and information such as control and data relationships is extracted to construct a new representation in the form of a Line Graph (LG), which exposes more structural features than the serialized representation of the contract. Finally, the node and edge information of the graph are jointly learned using an improved Transformer-GP model to extract information both globally and locally, and the fused features are used for vulnerability detection. The effectiveness of the method for reentrancy vulnerability detection is verified in experiments, where the F1 score reaches 95.16%, exceeding state-of-the-art methods.
Abstract: A robust progressive image transmission scheme over broadband wireless fading channels is developed for 4th-generation (4G) wireless communication systems in this paper. The proposed scheme is based on space-time block coded orthogonal frequency-division multiplexing (OFDM) with 4 transmit antennas and 2 receive antennas, and uses a simplified minimum mean square error (MMSE) detector instead of a maximum likelihood (ML) detector. Since the DCT is simpler and more widely used in industry than wavelet transforms, a progressive image compression method based on the DCT, called mean-subtract embedded DCT (MSEDCT), is developed; it applies a simple mean-subtract step to the redundancy of the reorganized DC blocks, in addition to a structure similar to the embedded zerotree wavelet (EZW) coding method. After analyzing and testing the bit importance of the progressive MSEDCT bitstreams, a layered unequal error protection method for joint source-channel coding based on Reed-Solomon (RS) codes is used to protect different parts of the bitstreams, providing different QoS assurances and good flexibility. Simulation experiments show that the proposed scheme effectively mitigates fading effects and obtains better image transmission quality, with 10-20 dB average peak-signal-to-noise-ratio (PSNR) gains at median Eb/N0 over schemes without space-time coded OFDM, or with equal error protection and space-time coded OFDM.
Abstract: We consider a quadratic Gaussian distributed lossy source coding setup with an additional constraint of identical reconstructions at the encoder and the decoder. The setup consists of two correlated Gaussian sources, one of which has to be reconstructed within some distortion constraint and must match the corresponding reconstruction at the encoder, while the other source acts as coded side information. We study the trade-off between the rates of the two encoders for a given distortion constraint on the reconstruction. An explicit characterization of this trade-off is the main result of the paper. We also give close inner and outer bounds for the discrete memoryless version of the problem.
Funding: Supported by the National High Technology Research and Development Program of China (Grant No. 863-2-5-1-13B).
Abstract: Multispectral time delay and integration charge-coupled device (TDICCD) image compression requires a low-complexity encoder because it is usually performed on board, where energy and memory are limited. The Consultative Committee for Space Data Systems (CCSDS) has proposed an image data compression (CCSDS-IDC) algorithm that is so far the most widely implemented in hardware; however, it cannot reduce spectral redundancy in multispectral images. In this paper, we propose a low-complexity improved CCSDS-IDC (ICCSDS-IDC)-based distributed source coding (DSC) scheme for multispectral TDICCD images consisting of a few bands. Our scheme uses a bit plane extractor to parse the differences between the original image and its wavelet-transformed coefficients; the extractor's output is encoded by a first-order entropy coder. A low-density parity-check-based Slepian-Wolf (SW) coder is adopted to implement the DSC strategy. Experimental results on space multispectral TDICCD images show that the proposed scheme significantly outperforms the CCSDS-IDC-based coder in each band.
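A bit plane extractor of the kind mentioned above can be sketched in a few lines: plane k collects bit (width-1-k) of every sample, most significant plane first. This is a generic illustration of bit plane extraction, not the ICCSDS-IDC implementation.

```python
def bit_planes(values, width=8):
    """Split integer samples into bit planes, most significant first.
    Plane k holds bit (width-1-k) of every sample."""
    return [[(v >> (width - 1 - k)) & 1 for v in values]
            for k in range(width)]

samples = [0b1010, 0b0111, 0b1100]
for k, plane in enumerate(bit_planes(samples, width=4)):
    print(f"plane {k}: {plane}")
# plane 0: [1, 0, 1]   (MSBs)
# plane 1: [0, 1, 1]
# plane 2: [1, 1, 0]
# plane 3: [0, 1, 0]   (LSBs)
```

Each plane is a binary sequence, which is exactly the form a first-order entropy coder or a Slepian-Wolf coder expects as input.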
Abstract: In-network data aggregation is severely affected by information-in-transit attacks. This is an important problem, since wireless sensor networks (WSN) are highly vulnerable to node compromises under this attack: false sub-aggregate values contributed by compromised nodes produce a large error in the aggregate computed at the base station, and falsified event messages forwarded through intermediate nodes waste their limited energy as well. Since wireless sensor nodes are battery operated, they have low computational power and energy; algorithms designed for them should therefore use little computation and enhance security, so as to extend the network lifetime. This article presents a data compression algorithm based on the Vernam cipher cryptographic technique and the Huffman source coding scheme, in order to enhance both the security and the lifetime of energy-constrained wireless sensor nodes. The scheme is evaluated on different processor-based sensor node implementations, and the results are compared against other existing schemes. In particular, we present a secure, lightweight algorithm for wireless sensor nodes that consumes little energy for its operation, with which the entropy improvement is achieved to a greater extent.
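The two building blocks named above, a Vernam (XOR) cipher and Huffman source coding, can be sketched minimally as follows. How the paper combines them on a sensor node is not reproduced here, and the key-generation line is purely illustrative (a real one-time pad requires a truly random key as long as the message).

```python
import heapq
from collections import Counter
from itertools import count

def huffman_code(data):
    """Build a Huffman code table {symbol: bitstring} for `data`."""
    heap = [(freq, n, sym) for n, (sym, freq) in
            zip(count(), Counter(data).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                         # degenerate one-symbol input
        return {heap[0][2]: "0"}
    tie = count(len(heap))                     # unique tiebreaker for heapq
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tie), (left, right)))
    table = {}
    def walk(node, prefix):                    # assign 0/1 down the tree
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            table[node] = prefix
    walk(heap[0][2], "")
    return table

def vernam(data: bytes, key: bytes) -> bytes:
    """Vernam (one-time-pad) encryption: byte-wise XOR with the key."""
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"sensor reading: 42"
key = bytes((i * 37 + 11) % 256 for i in range(len(msg)))   # demo key only
cipher = vernam(msg, key)
assert vernam(cipher, key) == msg          # XOR twice restores the plaintext
table = huffman_code(msg)
compressed_bits = sum(len(table[b]) for b in msg)
print(compressed_bits < 8 * len(msg))      # True: fewer bits than raw bytes
```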
Funding: The National Natural Science Foundation of China (No. 60202006).
Abstract: A novel joint source-channel distortion model is proposed that can accurately estimate the average distortion in progressive image transmission. To improve the precision of the model, the redundancy introduced by a forbidden symbol in the arithmetic code is used to distinguish quantization distortion from channel distortion; all coefficients from the first erroneous one to the end of the sequence are set to a value within the variance range of the coefficients instead of zero, so the error propagation caused by entropy coding, which is disregarded in most conventional joint source-channel coding (JSCC) systems, can be accurately estimated. The precision of the model in terms of average peak-signal-to-noise ratio is improved by about 0.5 dB compared with classical works. An efficient unequal error protection system based on the model is developed, which can be used in wireless communication systems.
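The forbidden-symbol redundancy mentioned above follows a standard analysis: reserving probability ε for a symbol that is never transmitted shrinks each coding interval by a factor (1-ε), costing -log2(1-ε) bits per symbol, while a decoder driven off-track by a channel error enters the forbidden region with probability ε at each subsequent step. The sketch below assumes this standard model, which may differ in detail from the paper's.

```python
import math

def redundancy_per_symbol(eps):
    """Extra bits per source symbol spent on the forbidden symbol."""
    return -math.log2(1.0 - eps)

def detection_probability(eps, n):
    """Probability a channel error is detected within n decoded symbols:
    a corrupted decoder stays in valid intervals w.p. (1 - eps) each step."""
    return 1.0 - (1.0 - eps) ** n

eps = 0.05
print(f"redundancy: {redundancy_per_symbol(eps):.4f} bits/symbol")
for n in (10, 50, 100):
    print(f"P(detect within {n} symbols) = {detection_probability(eps, n):.3f}")
```

Small ε keeps the added redundancy low while still localizing the first erroneous coefficient within a bounded decoding delay, which is what lets the model separate channel distortion from quantization distortion.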