Journal Articles
Found 20 articles
Low complexity DCT-based distributed source coding with Gray code for hyperspectral images (cited by 1)
1
Authors: Rongke Liu, Jianrong Wang, Xuzhou Pan. 《Journal of Systems Engineering and Electronics》 SCIE EI CSCD, 2010, Issue 6, pp. 927-933 (7 pages)
To compress hyperspectral images, a low complexity discrete cosine transform (DCT)-based distributed source coding (DSC) scheme with Gray code is proposed. Unlike most existing DSC schemes, which apply the transform in the spatial domain, the proposed algorithm applies the transform in the spectral domain. A set-partitioning-based approach is applied to reorganize DCT coefficients into a wavelet-like tree structure and extract the sign, refinement, and significance bitplanes. The extracted refinement bits are Gray encoded. Because of the dependency along the line dimension of hyperspectral images, a low-density parity-check (LDPC)-based Slepian-Wolf coder is adopted to implement the DSC strategy. Experimental results on the airborne visible/infrared imaging spectrometer (AVIRIS) dataset show that the proposed paradigm achieves up to 6 dB improvement over DSC-based coders which apply the transform in the spatial domain, with significantly reduced computational complexity and memory storage.
Keywords: image compression, hyperspectral images, distributed source coding (DSC), discrete cosine transform (DCT), Gray code, band-interleaved-by-pixel (BIP)
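The Gray encoding of refinement bits mentioned in this abstract is the standard binary-reflected Gray code, in which consecutive values differ in exactly one bit. A minimal sketch of that encoding (a generic illustration, not the authors' implementation):

```python
def gray_encode(n: int) -> int:
    # Binary-reflected Gray code: consecutive integers map to
    # codewords that differ in exactly one bit.
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    # Invert by XOR-folding the shifted codeword back down.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Consecutive values differ in one bit, e.g. 5 -> 0b111, 6 -> 0b101.
codes = [gray_encode(i) for i in range(8)]
```

The single-bit-change property is what makes Gray-coded refinement bitplanes better correlated across spectral bands, which in turn helps the Slepian-Wolf coder.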
An efficient chaotic source coding scheme with variable-length blocks
2
Authors: 林秋镇, 黄国和, 陈剑勇. 《Chinese Physics B》 SCIE EI CAS CSCD, 2011, Issue 7, pp. 94-100 (7 pages)
An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of the register length. In the decoding process, the bit shift in the register can track the synchronization of the initial value and the corresponding block, so all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs with high efficiency and minor compression loss compared with traditional entropy coding.
Keywords: chaos, compression, source coding, finite-precision implementation
Joint Source-Channel Coding for 6G Communications (cited by 2)
3
Authors: Yanfei Dong, Jincheng Dai, Kai Niu, Sen Wang, Yifei Yuan. 《China Communications》 SCIE CSCD, 2022, Issue 3, pp. 101-115 (15 pages)
In order to provide ultra-low-latency and highly energy-efficient communication for intelligent agents, sixth generation (6G) wireless communication networks need to break out of the dilemma of the diminishing gains of the separated optimization paradigm. In this context, this paper provides a comprehensive tutorial that overviews how joint source-channel coding (JSCC) can be employed to improve overall system performance. For this purpose, we first introduce the communication requirements and performance metrics for 6G. Then, we provide an overview of the source-channel separation theorem and why it may not hold in practical applications. In addition, we focus on two new JSCC schemes, the double low-density parity-check (LDPC) codes and the double polar codes, giving their detailed coding and decoding processes and corresponding performance simulations. In a nutshell, this paper constitutes a tutorial on JSCC schemes tailored to the needs of future 6G communications.
Keywords: 6G, joint source and channel coding, double LDPC codes, double polar codes
Adaptation of the Two Sources of Code and One-Hot Encoding Method for Designing a Model of Microprogram Control Unit with Output Identification (cited by 3)
4
Authors: Lukasz Smolinski, Alexander Barkalov, Larysa Titarenko. 《Intelligent Control and Automation》, 2015, Issue 2, pp. 116-125 (10 pages)
This article presents a proposal for a model of a microprogram control unit (CMCU) with output identification, adapted for implementation in complex programmable logic devices (CPLDs) equipped with integrated memory modules [1]. An approach which applies two sources of code and one-hot encoding has been used in a base CMCU model with output identification [2] [3]. The article depicts a complete example of processing for the proposed CMCU model. Furthermore, it discusses the advantages and disadvantages of the approach in question and presents the results of experiments conducted on a real CPLD system.
Keywords: CPLD, PAL, UFM, CLB, two sources of code, one-hot encoding, CMCU
The Application and Adaptation of the Two Sources of Code and Natural Encoding Method for Designing a Model of Microprogram Control Unit with Base Structure (cited by 2)
5
Authors: Lukasz Smolinski, Alexander Barkalov, Larysa Titarenko. 《Circuits and Systems》, 2014, Issue 12, pp. 301-308 (8 pages)
The article presents a modification to the method which applies two sources of data. The modification is illustrated with the example of a compositional microprogram control unit (CMCU) model with base structure implemented in complex programmable logic devices (CPLDs). First, the conditions needed to apply the method are presented, followed by the results of its implementation in real hardware.
Keywords: CPLD, PAL, UFM, CLB, two sources of code, one-hot encoding, natural encoding, CMCU
STRONG CODING THEOREM AND ASYMPTOTIC ERROR EXPONENT OF ARBITRARILY VARYING SOURCE
6
Authors: 符方伟, 沈世镒. 《Acta Mathematica Scientia》 SCIE CSCD, 1996, Issue 1, pp. 23-30 (8 pages)
Csiszár's strong coding theorem for the discrete memoryless source is generalized to the arbitrarily varying source. We also determine the asymptotic error exponent for the arbitrarily varying source.
Keywords: arbitrarily varying source, coding theorem, error exponent, information quantity, types
A New Framework for Software Vulnerability Detection Based on an Advanced Computing
7
Authors: Bui Van Cong, Cho Do Xuan. 《Computers, Materials & Continua》 SCIE EI, 2024, Issue 6, pp. 3699-3723 (25 pages)
The detection of software vulnerabilities in code written in C and C++ attracts a lot of attention and interest today. This paper proposes a new framework called DrCSE to improve software vulnerability detection. It uses an intelligent computation technique based on the combination of two methods, data rebalancing and representation learning, to analyze and evaluate the code property graph (CPG) of the source code for detecting abnormal behavior of software vulnerabilities. To do that, DrCSE combines three main processing techniques: (i) building the source code feature profiles, (ii) rebalancing data, and (iii) contrastive learning. Method (i) extracts the source code's features based on the vertices and edges of the CPG. The data rebalancing method supports the training process by balancing the experimental dataset. Finally, the contrastive learning technique learns the important features of the source code by finding and pulling similar ones together while pushing the outliers away. The experimental part of this paper demonstrates the superiority of the DrCSE framework for detecting source code security vulnerabilities using the Verum dataset. The proposed method achieves good performance on all metrics, with Precision and Recall scores of 39.35% and 69.07%, respectively, proving the efficiency of the DrCSE framework. It performs better than other approaches, with a 5% boost in both Precision and Recall. Overall, according to our survey to date, this is the best research result for the software vulnerability detection problem on the Verum dataset.
Keywords: source code vulnerability, source code vulnerability detection, code property graph, feature profile, contrastive learning, data rebalancing
Automatic Mining of Security-Sensitive Functions from Source Code
8
Authors: Lin Chen, Chunfang Yang, Fenlin Liu, Daofu Gong, Shichang Ding. 《Computers, Materials & Continua》 SCIE EI, 2018, Issue 8, pp. 199-210 (12 pages)
When dealing with large-scale programs, many automatic vulnerability mining techniques encounter problems such as path explosion, state explosion, and low efficiency. Decomposition of large-scale programs based on security-sensitive functions helps solve these problems, but manual identification of security-sensitive functions is a tedious task, especially for large-scale programs. This study proposes a method to mine security-sensitive functions whose arguments need to be checked before they are called. Two argument-checking identification algorithms are proposed based on the analysis of two implementations of argument checking. Based on these algorithms, security-sensitive functions are detected from the ratio of invocation instances whose arguments have been protected to the total number of instances. Results of experiments on three well-known open-source projects show that the proposed method outperforms competing methods in the literature.
Keywords: code mining, vulnerabilities, static analysis, security-sensitive function, source code
Unequal decoding power allocation for efficient video transmission
9
Authors: 王永芳, 余松煜, 杨小康, 张兆杨. 《Journal of Shanghai University (English Edition)》, 2010, Issue 1, pp. 60-65 (6 pages)
We present an unequal decoding power allocation (UDPA) approach for minimizing receiver power consumption subject to a given quality of service (QoS), by exploiting data partitioning and turbo decoding. We assign unequal decoding power for forward error correction (FEC) to data partitions with different priorities by jointly considering source coding, channel coding, and receiver power consumption. The proposed scheme is applied to H.264 video over an additive white Gaussian noise (AWGN) channel; it achieves an excellent tradeoff between video delivery quality and power consumption, and yields significant power savings compared with the conventional equal decoding power allocation (EDPA) approach in wireless video transmission.
Keywords: power allocation, turbo code, data partition, joint source and channel code
GRATDet:Smart Contract Vulnerability Detector Based on Graph Representation and Transformer
10
Authors: Peng Gong, Wenzhong Yang, Liejun Wang, Fuyuan Wei, KeZiErBieKe HaiLaTi, Yuanyuan Liao. 《Computers, Materials & Continua》 SCIE EI, 2023, Issue 8, pp. 1439-1462 (24 pages)
Smart contracts have enabled more efficient development in finance and healthcare, but vulnerabilities in contracts pose high risks to their future applications. Current vulnerability detection methods for contracts are either based on fixed expert rules, which are inefficient, or rely on simplistic deep learning techniques that do not fully leverage contract semantic information; there is therefore ample room for improvement in detection precision. To solve these problems, this paper proposes a vulnerability detector based on deep learning techniques, graph representation, and Transformer, called GRATDet. The method first performs swapping, insertion, and symbolization operations on contract functions, increasing the amount of small-sample data. Each line of code is then treated as a basic semantic element, and information such as control and data relationships is extracted to construct a new representation in the form of a Line Graph (LG), which shows more structural features than the serialized presentation of the contract. Finally, the node and edge information of the graph are jointly learned using an improved Transformer-GP model to extract information globally and locally, and the fused features are used for vulnerability detection. The effectiveness of the method in reentrancy vulnerability detection is verified in experiments, where the F1 score reaches 95.16%, exceeding state-of-the-art methods.
Keywords: vulnerability detection, smart contract, graph representation, deep learning, source code
The partial side information problem with additional reconstructions
11
Author: Viswanathan Ramachandran. 《Digital Communications and Networks》 SCIE, 2020, Issue 1, pp. 123-128 (6 pages)
We consider a quadratic Gaussian distributed lossy source coding setup with an additional constraint of identical reconstructions at the encoder and the decoder. The setup consists of two correlated Gaussian sources, wherein one of them has to be reconstructed within some distortion constraint and match a corresponding reconstruction at the encoder, while the other source acts as coded side information. We study the trade-off between the rates of the two encoders for a given distortion constraint on the reconstruction. An explicit characterization of this trade-off is the main result of the paper. We also give close inner and outer bounds for the discrete memoryless version of the problem.
Keywords: rate-distortion theory, source coding, network information theory, Wyner-Ziv compression, MMSE estimation, correlated Gaussian sources
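As background to the quadratic Gaussian setup above, the single-source rate-distortion function under mean squared error has the closed form R(D) = (1/2) log2(sigma^2 / D) for D <= sigma^2. A small sketch of this textbook formula (background only, not the paper's two-encoder trade-off region):

```python
from math import log2

def gaussian_rate(variance: float, distortion: float) -> float:
    # Shannon rate-distortion function of a memoryless Gaussian source
    # under mean squared error: R(D) = 0.5 * log2(variance / D) bits
    # per sample for D < variance, and 0 otherwise.
    if distortion >= variance:
        return 0.0
    return 0.5 * log2(variance / distortion)

# Halving the target distortion always costs exactly half a bit per sample.
```

The two-encoder region characterized in the paper reduces to this curve when the side-information encoder is switched off.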
An image compression method for space multispectral time delay and integration charge coupled device camera
12
Authors: 李进, 金龙旭, 张然峰. 《Chinese Physics B》 SCIE EI CAS CSCD, 2013, Issue 6, pp. 360-365 (6 pages)
Multispectral time delay and integration charge coupled device (TDICCD) image compression requires a low-complexity encoder because it is usually completed on board, where energy and memory are limited. The Consultative Committee for Space Data Systems (CCSDS) has proposed an image data compression (CCSDS-IDC) algorithm which is so far the most widely implemented in hardware. However, it cannot reduce spectral redundancy in multispectral images. In this paper, we propose a low-complexity improved CCSDS-IDC (ICCSDS-IDC)-based distributed source coding (DSC) scheme for multispectral TDICCD images consisting of a few bands. Our scheme is based on an ICCSDS-IDC approach that uses a bit-plane extractor to parse the differences between the original image and its wavelet-transformed coefficients. The output of the bit-plane extractor is encoded by a first-order entropy coder. A low-density parity-check-based Slepian-Wolf (SW) coder is adopted to implement the DSC strategy. Experimental results on space multispectral TDICCD images show that the proposed scheme significantly outperforms the CCSDS-IDC-based coder in each band.
Keywords: multispectral CCD images, Consultative Committee for Space Data Systems image data compression (CCSDS-IDC), distributed source coding (DSC)
Implementation of an Efficient Light Weight Security Algorithm for Energy-Constrained Wireless Sensor Nodes
13
Authors: A. Saravanaselvan, B. Paramasivan. 《Circuits and Systems》, 2016, Issue 9, pp. 2234-2241 (8 pages)
In-network data aggregation is severely affected by attacks on information in transit. This is an important problem since wireless sensor networks (WSNs) are highly vulnerable to node compromises due to such attacks. As a result, a large error appears in the aggregate computed at the base station due to false sub-aggregate values contributed by compromised nodes, and falsified event messages forwarded through intermediate nodes waste their limited energy as well. Since wireless sensor nodes are battery operated, they have low computational power and energy. In view of this, algorithms designed for wireless sensor nodes should use little computation and enhance security so as to extend the network lifetime. This article presents a Vernam cipher based data compression algorithm using the Huffman source coding scheme in order to enhance the security and lifetime of energy-constrained wireless sensor nodes. In addition, the scheme is evaluated on different processor-based sensor node implementations, and the results are compared against other existing schemes. In particular, we present a secure lightweight algorithm for wireless sensor nodes which consumes little energy in operation, and the entropy improvement achieved with it is substantial.
Keywords: in-network data aggregation, security attacks, Vernam cipher cryptographic technique, Huffman source coding, entropy
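The Huffman source coding step described in this abstract can be illustrated with a minimal code-table builder that repeatedly merges the two lightest subtrees (a generic sketch, not the paper's algorithm; it assumes at least two distinct symbols):

```python
import heapq

def huffman_codes(freqs: dict) -> dict:
    # Build a prefix-free code table from symbol frequencies by
    # repeatedly merging the two lightest subtrees.
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)  # unique key so the dicts are never compared
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Frequent symbols get shorter codewords.
table = huffman_codes({"a": 5, "b": 2, "c": 1, "d": 1})
```

In the paper's setting the compressed bitstream would then be XORed with a one-time keystream (the Vernam cipher); the sketch covers only the compression side.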
Method of military software security and vulnerability testing based on process mutation
14
Authors: 金丽亚, 王荣辉, 肖庆. 《Journal of Measurement Science and Instrumentation》 CAS, 2013, Issue 3, pp. 228-230 (3 pages)
To solve the problems caused by military software security issues, this paper first introduces a software fault injection technique, namely the main static fault injection method: program mutation. Then, source code for testing this algorithm is put forward. On this basis, buffer overflow testing based on program mutation is conducted. Finally, several military software source codes are tested for buffer overflow using a deficiency tracking system (DTS) tool. Experimental results show the effectiveness of the proposed algorithm.
Keywords: military software testing, fault injection, buffer overflow, source code scanning
Evolutionary algorithm based index assignment algorithm for noisy channel
15
Authors: 李天昊, 余松煜. 《Journal of Systems Engineering and Electronics》 SCIE EI CSCD, 2004, Issue 3, pp. 431-435 (5 pages)
A globally optimal solution to vector quantization (VQ) index assignment on a noisy channel, the evolutionary algorithm based index assignment algorithm (EAIAA), is presented. The algorithm yields a significant reduction in average distortion due to channel errors over conventional arbitrary index assignment, as confirmed by experimental results over the memoryless binary symmetric channel (BSC) for various bit error rates.
Keywords: joint source/channel coding, vector quantization, index assignment, binary symmetric channel
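The quantity being optimized here can be made concrete with a brute-force baseline: for a tiny scalar codebook and a memoryless BSC, the expected distortion of any index assignment can be computed exactly (an illustrative toy, not the EAIAA algorithm itself; the 4-level codebook is an assumption for the example):

```python
from itertools import permutations

def expected_distortion(assignment, codebook, p):
    # assignment[i] is the 2-bit channel index transmitted for codeword i;
    # each index bit flips independently with crossover probability p.
    bits = 2
    d = 0.0
    for i, x in enumerate(codebook):
        tx = assignment[i]
        for rx in range(1 << bits):
            flips = bin(tx ^ rx).count("1")
            prob = (p ** flips) * ((1 - p) ** (bits - flips))
            decoded = codebook[assignment.index(rx)]
            # Equiprobable codewords, mean squared error distortion.
            d += prob * (x - decoded) ** 2 / len(codebook)
    return d

codebook = [0.0, 1.0, 2.0, 3.0]
natural = expected_distortion([0, 1, 2, 3], codebook, 0.1)
best = min(expected_distortion(list(a), codebook, 0.1)
           for a in permutations(range(4)))
```

Exhaustive search over all (2^b)! assignments is infeasible beyond a few bits, which is precisely why an evolutionary search such as EAIAA is attractive.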
Scalable broadcast with network coding in heterogeneous networks
16
Authors: SI Jing-jing, ZHUANG Bo-jin, CAI An-ni. 《The Journal of China Universities of Posts and Telecommunications》 EI CSCD, 2010, Issue 5, pp. 72-79 (8 pages)
This article studies, from a theoretical point of view, the scalable broadcast scheme realized through the joint application of layered source coding, unequal error protection (UEP), and random network coding. The success probability for any non-source node in a heterogeneous network to recover the most important layers of the source data is deduced. This probability proves that in this broadcast scheme every non-source node with enough capacity can always recover the source data partially or entirely as long as the finite field size is sufficiently large. Furthermore, a special construction for the local encoding kernel at the source node is proposed. With this special construction, an increased success probability for partial decoding at any non-source node is achieved, i.e., the partial decodability offered by the scalable broadcast scheme is improved.
Keywords: network coding, linear broadcast, layered source coding, UEP
An analytical model for source code distributability verification
17
Authors: Ayaz ISAZADEH, Jaber KARIMPOUR, Islam ELGEDAWY, Habib IZADKHAH. 《Journal of Zhejiang University-Science C (Computers and Electronics)》 SCIE EI, 2014, Issue 2, pp. 126-138 (13 pages)
One way to speed up the execution of sequential programs is to divide them into concurrent segments and execute those segments in parallel over a distributed computing environment. We argue that the execution speedup primarily depends on the degree of concurrency between the identified segments, as well as the communication overhead between them. To guarantee the best speedup, we have to obtain the maximum possible concurrency degree between the identified segments, taking communication overhead into consideration. Existing code distributor and multi-threading approaches do not fulfill such requirements; hence, they cannot provide the expected distributability gains in advance. To overcome such limitations, we propose a novel approach for verifying the distributability of sequential object-oriented programs. The proposed approach enables users to see the maximum speedup gains before the actual distributability implementation, as it computes an objective function used to measure different distribution values for the same program, taking into consideration both remote and sequential calls. Experimental results show that the proposed approach successfully determines the distributability of different real-life software applications, compared with their real-life sequential and distributed implementations.
Keywords: code distributability, synchronous calls, asynchronous calls, distributed software systems, source code
Research on Classification of Malware Source Code
18
Authors: 陈嘉玫, 赖谷鑫. 《Journal of Shanghai Jiaotong University (Science)》 EI, 2014, Issue 4, pp. 425-430 (6 pages)
In the face of the threat of Internet attacks, malware classification is one of the promising solutions in the field of intrusion detection and digital forensics. In previous work, researchers performed dynamic analysis, or static analysis after reverse engineering. But malware developers use anti-virtual-machine (VM) and obfuscation techniques to evade malware classifiers. By means of deployed honeypots, malware source code can be collected and analyzed. Source code analysis provides a better classification for understanding the purpose of attackers and for forensics. In this paper, a novel classification approach is proposed, based on content similarity and directory structure similarity. Such a classification avoids re-analyzing known malware and allocates resources to new malware. Malware classification also lets network administrators know the purpose of attackers. The experimental results demonstrate that the proposed system can classify malware efficiently with a small misclassification ratio, and its performance is better than that of VirusTotal.
Keywords: malware, source code classification, static analysis, honeypot
Modelling the utility of group testing for public health surveillance
19
Authors: Günther Koliander, Georg Pichler. 《Infectious Disease Modelling》, 2021, Issue 1, pp. 1009-1024 (16 pages)
In epidemic or pandemic situations, resources for testing the infection status of individuals may be scarce. Although group testing can help to significantly increase testing capabilities, the (repeated) testing of entire populations can exceed the resources of any country. We thus propose an extension of the theory of group testing that takes into account the fact that definitively specifying the infection status of each individual is impossible. Our theory builds on assigning to each individual an infection status (healthy/infected), as well as an associated cost function for erroneous assignments. This cost function is versatile; e.g., it can take into account that false negative assignments are worse than false positive assignments, and that false assignments in critical areas, such as among health care workers, are more severe than in the general population. Based on this model, we study the optimal use of a limited number of tests to minimize the expected cost. More specifically, we utilize information-theoretic methods to give a lower bound on the expected cost and describe simple strategies that can significantly reduce the expected cost over currently known strategies. A detailed example is provided to illustrate our theory.
Keywords: group testing, public health surveillance, source coding, rate-distortion theory
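For orientation, the classical counting bound shows why group testing saves tests when prevalence is low: exactly identifying which k of n individuals are infected requires enough binary test outcomes to distinguish all C(n, k) possibilities. This is the standard bound for exact identification, not the cost-based bound derived in the paper above:

```python
from math import ceil, comb, log2

def counting_lower_bound(n: int, k: int) -> int:
    # Any strategy (adaptive or not) that exactly identifies the k
    # infected among n individuals must distinguish C(n, k) outcomes,
    # so it needs at least ceil(log2 C(n, k)) binary tests.
    return ceil(log2(comb(n, k)))

# One infected person among 100 needs only 7 tests, not 100
# (and adaptive binary splitting achieves this for k = 1).
```

The paper relaxes exact identification, which is precisely why its information-theoretic bound is on expected cost rather than on the number of tests.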
Summarizing Software Artifacts: A Literature Review (cited by 5)
20
Authors: Najam Nazar, Yan Hu, He Jiang. 《Journal of Computer Science & Technology》 SCIE EI CSCD, 2016, Issue 5, pp. 883-909 (27 pages)
This paper presents a literature review in the field of summarizing software artifacts, focusing on bug reports, source code, mailing lists, and developer discussions. From Jan. 2010 to Apr. 2016, numerous summarization techniques, approaches, and tools were proposed to satisfy the ongoing demand for improving software performance and quality and to help developers understand the problems at hand. Since the aforementioned artifacts contain both structured and unstructured data, researchers have applied different machine learning and data mining techniques to generate summaries. This paper therefore first provides a general perspective on the state of the art, describing the types of artifacts, the approaches to summarization, and the common portions of experimental procedures shared among these artifacts. Moreover, we discuss the applications of summarization, i.e., what tasks have been accomplished through summarization. Next, the paper presents tools that were created for summarization tasks or employed during them. In addition, we present the summarization evaluation methods employed in the selected studies, as well as other important factors used for evaluating generated summaries, such as adequacy and quality. We also briefly present modern communication channels and the complementarities and commonalities among different software artifacts. Finally, some thoughts about the challenges applicable to the existing studies in general, as well as future research directions, are discussed. This survey will give future researchers a broad and useful background on the main aspects of this research field.
Keywords: mining software repositories, mining software engineering data, machine learning, summarizing software artifacts, summarizing source code