Integrated sensing and communication (ISAC) is one of the main usage scenarios for 6G wireless networks. To most efficiently utilize the limited wireless resources, integrated super-resolution sensing and communication (ISSAC) has recently been proposed to significantly improve sensing performance in ISAC systems with super-resolution algorithms such as Multiple Signal Classification (MUSIC). However, traditional super-resolution sensing algorithms suffer from prohibitive computational complexity in orthogonal frequency-division multiplexing (OFDM) systems due to the large dimensions of the signals in the subcarrier and symbol domains. To address this issue, we propose a novel two-stage approach that significantly reduces the computational complexity of super-resolution range estimation. The key idea of the proposed scheme is to first uniformly decimate signals in the subcarrier domain, so that the computational complexity is significantly reduced without missing any target in the range domain. However, the decimation operation may result in range ambiguity due to pseudo peaks, which is addressed in the second stage, where the total collocated subcarrier data are used to verify the detected peaks. Compared with traditional MUSIC algorithms, the proposed scheme reduces computational complexity by two orders of magnitude while maintaining range resolution and unambiguity. Simulation results verify the effectiveness of the proposed scheme.
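The first stage described above can be sketched numerically. The following is a minimal illustration, not the paper's implementation: all parameters (subcarrier spacing, decimation factor, target ranges, noise level) are assumptions, and the second-stage verification against the full subcarrier data is omitted. MUSIC is run on every D-th subcarrier only, so the covariance matrix and eigendecomposition shrink accordingly.

```python
import numpy as np

# Assumed illustrative parameters (not from the paper).
c = 3e8
df = 120e3                          # subcarrier spacing in Hz
N, D, L = 128, 4, 16                # subcarriers, decimation factor, subarray length
ranges_true = np.array([30.0, 45.0])
taus = 2 * ranges_true / c          # round-trip delays

rng = np.random.default_rng(0)
n = np.arange(N)
h = sum(np.exp(-2j * np.pi * n * df * t) for t in taus)   # OFDM channel estimate
h = h + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

def music_range_spectrum(x, spacing, grid_m, L, n_targets=2):
    """MUSIC pseudo-spectrum over a range grid, using forward spatial
    smoothing to build a covariance matrix from a single snapshot."""
    M = len(x) - L + 1
    X = np.stack([x[m:m + L] for m in range(M)], axis=1)
    R = X @ X.conj().T / M
    _, V = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = V[:, :L - n_targets]              # noise subspace
    steer = np.exp(-2j * np.pi * np.outer(np.arange(L), spacing * 2 * grid_m / c))
    return 1.0 / np.linalg.norm(En.conj().T @ steer, axis=0) ** 2

# Stage 1: MUSIC on the decimated subcarriers (N/D samples, spacing D*df).
grid = np.arange(0.0, 80.0, 0.05)
P = music_range_spectrum(h[::D], D * df, grid, L)
peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
est = sorted(grid[i] for i in sorted(peaks, key=lambda i: P[i])[-2:])
```

With these assumed values both targets sit well inside the decimated unambiguous range c/(2·D·df), so stage 2 would simply confirm the two peaks.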
Martens proposed a highly efficient and simply formed DFT algorithm, the RCFA, whose efficiency is comparable with that of the WFTA or the PFA, and whose structure is similar to that of the FFT. The authors have proved that, in the radix-2 case, the RCFA is exactly equivalent to the twiddle-factor-merged decimation-in-frequency FFT algorithm. The twiddle-factor-merged decimation-in-time FFT algorithm is provided in this paper. Thus, in either case, the FFT algorithm currently in use can be replaced by the more efficient twiddle-factor-merged FFT algorithm, with exactly the same external properties and a similar internal structure. Software implementing the twiddle-factor-merged FFT algorithm (TMFFT) is also provided.
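The twiddle-factor-merged variant itself is not reproduced here, but a plain radix-2 decimation-in-time FFT shows where the twiddle factors enter the butterfly, which is the point at which the merging applies:

```python
import numpy as np

def fft_dit(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    even, odd = fft_dit(x[0::2]), fft_dit(x[1::2])   # decimate in time
    tw = np.exp(-2j * np.pi * np.arange(N // 2) / N)  # twiddle factors W_N^k
    return np.concatenate([even + tw * odd, even - tw * odd])
```

The output agrees with `numpy.fft.fft` for any power-of-two length.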
In this paper, we construct some continuous but non-differentiable functions defined by quinary decimals, which are Kiesswetter-like functions. We discuss their properties, then investigate the Hausdorff dimensions of the graphs of these functions and give a detailed proof.
Purpose: With more and more digital collections of various information resources becoming available, the challenge of assigning subject index terms and classes from quality knowledge organization systems is also increasing. While the ultimate purpose is to understand the value of automatically produced Dewey Decimal Classification (DDC) classes for Swedish digital collections, the paper aims to evaluate the performance of six machine learning algorithms as well as a string-matching algorithm based on characteristics of DDC.
Design/methodology/approach: State-of-the-art machine learning algorithms require at least 1,000 training examples per class. The complete data set at the time of research comprised 143,838 records, which had to be reduced to the top three hierarchical levels of DDC in order to provide sufficient training data (totaling 802 classes in the training and testing sample, out of 14,413 classes at all levels).
Findings: Evaluation shows that a Support Vector Machine with linear kernel outperforms the other machine learning algorithms, as well as the string-matching algorithm, on average; the string-matching algorithm outperforms machine learning for specific classes when the characteristics of DDC are most suitable for the task. Word embeddings combined with different types of neural networks (simple linear network, standard neural network, 1D convolutional neural network, and recurrent neural network) produced worse results than the Support Vector Machine, but come close, with the benefit of a smaller representation size. Analysis of feature impact in machine learning shows that using keywords, or combining titles and keywords, gives better results than using only titles as input. Stemming only marginally improves the results. Removing stop words reduced accuracy in most cases, while removing less frequent words increased it marginally. The greatest impact is produced by the number of training examples: 81.90% accuracy on the training set is achieved when at least 1,000 records per class are available in the training set, and 66.13% when too few records (often fewer than 100 per class) are available for training; and these results hold only for the top three hierarchical levels (803 instead of 14,413 classes).
Research limitations: Having to reduce the number of hierarchical levels to the top three levels of DDC, because of the lack of training data for all classes, skews the results so that they work in experimental conditions but barely for end users in operational retrieval systems.
Practical implications: In conclusion, purely automatic DDC assignment does not work for operative information retrieval systems, whether using machine learning (because of the lack of training data for the large number of DDC classes) or the string-matching algorithm (because DDC characteristics perform well for automatic classification only in a small number of classes). Over time, more training examples may become available, and DDC may be enriched with synonyms in order to enhance the accuracy of automatic classification, which may also benefit information retrieval performance based on DDC. In order for quality information services to reach the objective of the highest possible precision and recall, automatic classification should never be implemented on its own; instead, machine-aided indexing that combines the efficiency of automatic suggestions with the quality of human decisions at the final stage should be the way forward.
Originality/value: The study explored machine learning on a large classification system of over 14,000 classes which is used in operational information retrieval systems. Due to the lack of sufficient training data across the entire set of classes, an approach complementing machine learning, that of string matching, was applied. This combination should be explored further, since it provides the potential for real-life applications with large target classification systems.
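The string-matching idea can be illustrated in miniature: score each DDC class by term overlap between a record's text and the class caption, and return the best-scoring class. The captions below are toy stand-ins, not the real DDC schedules, and the scoring is deliberately naive.

```python
# Toy captions (hypothetical; the real DDC schedules are far richer).
ddc_captions = {
    "004": "computer science data processing",
    "512": "algebra number theory",
    "629": "astronautics engineering vehicles",
}

def match_ddc(text, captions=ddc_captions):
    """Return the class whose caption shares the most words with the text."""
    words = set(text.lower().split())
    scores = {cls: len(words & set(cap.split())) for cls, cap in captions.items()}
    return max(scores, key=scores.get)

print(match_ddc("data structures in computer science"))   # prints 004
```

As the abstract notes, such matching works well only where captions happen to use the record's vocabulary, which is why it complements rather than replaces machine learning.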
A three-part comb decimator is presented in this paper for applications with severe requirements on circuit performance and frequency response. Based on the modified prime factorization method and multistage polyphase decomposition, an efficient non-recursive structure for the cascaded integrator-comb (CIC) decimation filter is derived. Utilizing this structure as the core part, the proposed comb decimator can not only loosen the limitation on the decimation ratio, but also balance the tradeoff among overall power consumption, circuit area, and maximum speed. Further, to improve the frequency response of the comb decimator, a cosine prefilter is introduced as the preprocessing part to increase the aliasing rejection, and an optimum sine-based filter is used as the compensation part to decrease the passband droop.
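The equivalence underlying the non-recursive CIC structure can be checked behaviorally. A sketch, with R and N chosen arbitrarily: the classic recursive (Hogenauer) form of N integrators, a downsampler, and N combs produces the same samples as a cascade of N length-R moving sums followed by downsampling, since both realize H(z) = ((1 − z^−R)/(1 − z^−1))^N.

```python
import numpy as np

def cic_decimate_nonrecursive(x, R=8, N=3):
    """Non-recursive CIC: cascade of N length-R moving sums, then downsample by R."""
    h = np.array([1.0])
    for _ in range(N):
        h = np.convolve(h, np.ones(R))           # builds ((1-z^-R)/(1-z^-1))^N
    return np.convolve(x, h)[:len(x)][::R]

def cic_decimate_recursive(x, R=8, N=3):
    """Classic Hogenauer form: N integrators at the high rate, N combs at the low rate."""
    y = x.astype(float)
    for _ in range(N):
        y = np.cumsum(y)                         # integrator 1/(1 - z^-1)
    y = y[::R]                                   # decimate
    for _ in range(N):
        y = y - np.concatenate(([0.0], y[:-1]))  # comb 1 - z^-1 at the low rate
    return y
```

The non-recursive form is what the polyphase decomposition in the paper then restructures for power and speed; the prefilter and compensation parts are not modeled here.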
Traditional evolutionary algorithms (EAs) are based on binary codes, real-number codes, structure codes, and so on. However, these coding strategies have their own advantages and disadvantages for function optimization. In this paper a new Decimal Coding Strategy (DCS), which is convenient for space division and alterable precision, is proposed, and a theoretical analysis of its implicit parallelism and convergence is also given. We also redesign several genetic operators for the decimal code. In order to utilize the historical information of existing individuals in the process of evolution and to avoid repeated exploration, strategies of space shrinking and alterable precision are adopted. Finally, the evolutionary algorithm based on decimal coding (DCEA) is applied to function optimization, parameter optimization, and mixed-integer nonlinear programming. A comparison with traditional GAs is made, and the experimental results show that the performance of the DCEA is better than that of traditional GAs.
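A toy sketch of the coding idea (not the paper's DCEA, which has redesigned crossover and space-shrinking operators): a value in [0, 1) is a list of decimal digits, so precision can be altered by simply using more digits, and mutation acts on one digit at a time. Here a bare (1+1) hill climber minimizes an assumed test function.

```python
import random

def decode(digits):
    """Map a digit list [d1, d2, ...] to 0.d1d2... in [0, 1)."""
    return sum(d * 10 ** -(i + 1) for i, d in enumerate(digits))

def mutate(digits, rng):
    child = digits[:]
    child[rng.randrange(len(child))] = rng.randrange(10)   # redraw one digit
    return child

def evolve(f, n_digits=6, iters=20000, seed=1):
    """Minimal (1+1) evolution on the decimal code; accepts improving mutants."""
    rng = random.Random(seed)
    best = [rng.randrange(10) for _ in range(n_digits)]
    for _ in range(iters):
        cand = mutate(best, rng)
        if f(decode(cand)) < f(decode(best)):
            best = cand
    return decode(best)

x_best = evolve(lambda v: (v - 0.3) ** 2)   # assumed toy objective
```

A full DCEA would add population-based selection, decimal crossover, and the space-shrinking step; this sketch only shows that digitwise mutation suffices to home in on an optimum.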
This paper introduces decimated filter banks for the one-dimensional empirical mode decomposition (1D-EMD). These filter banks can provide perfect reconstruction and allow for an arbitrary tree structure. Since the EMD is a data-driven decomposition, it is a very useful analysis instrument for non-stationary and non-linear signals. However, the traditional 1D-EMD has the disadvantage of expanding the data: large data sets can be generated, as the amount of data to be stored increases with every decomposition level. The 1D-EMD can be thought of as having the structure of a single dyadic filter. However, a methodology to incorporate the decomposition into an arbitrary tree structure has not yet been reported in the literature. This paper shows how to extend the 1D-EMD into any arbitrary tree structure while maintaining the perfect reconstruction property. Furthermore, the technique allows for downsampling the decomposed signals. This paper thus presents a method to minimize the data-expansion drawback of the 1D-EMD by using decimation and merging the EMD coefficients. The proposed algorithm is applicable to any arbitrary tree structure, including a full binary tree structure.
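The decimate-and-merge idea can be shown with a stand-in two-channel bank. EMD sifting itself is omitted; a Haar pair is used instead, because it makes the key property explicit: each analysis stage halves the data in both branches (no expansion), yet the synthesis stage reconstructs the input exactly.

```python
import numpy as np

def analysis(x):
    """Haar analysis stage: filter and decimate by 2 (len(x) must be even)."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # lowpass branch, downsampled
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # highpass branch, downsampled
    return s, d

def synthesis(s, d):
    """Upsample and merge: perfect reconstruction of the analysis stage."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x
```

Applying `analysis` recursively to either branch yields any tree structure, dyadic or not, while the total number of stored coefficients stays equal to the input length.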
The infinite time-evolving block decimation (iTEBD) algorithm provides an efficient way to determine the ground state and dynamics of quantum lattice systems in the thermodynamic limit. In this paper we suggest an optimized way to perform the iTEBD calculation, which takes advantage of additional reduced decompositions to speed up the calculation. Numerical calculations show that, for comparable computation time, our method provides more accurate results than the traditional iTEBD, especially for lattice systems with large on-site degrees of freedom.
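The block-decimation step at the heart of (i)TEBD is a truncated SVD of a two-site tensor: split the tensor and keep at most χ singular values per bond. The full imaginary-time evolution and the paper's additional reduced decompositions are omitted; this only shows the compression primitive.

```python
import numpy as np

def truncate_bond(theta, chi_max):
    """Split a two-site tensor (reshaped into a matrix) by SVD and keep at
    most chi_max singular values: the bond-compression step of (i)TEBD."""
    U, S, Vh = np.linalg.svd(theta, full_matrices=False)
    chi = min(chi_max, len(S))
    return U[:, :chi], S[:chi], Vh[:chi, :]
```

When the tensor's rank does not exceed χ the compression is exact; otherwise the discarded singular values quantify the truncation error.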
Entity and symbolic fraction comparison tasks separating the identification and semantic access stages, based on event-related potential technology, were used to investigate neural differences between fraction and decimal strategies in the magnitude processing of nonsymbolic entities and symbolic numbers. The experimental results show that continuous entities elicit a stronger left-lateralized anterior N2 for decimals, while discretized ones elicit a more significant right-lateralized posterior N2 for fractions during the identification stage. On the other hand, decimals elicit a stronger N2 over left-lateralized fronto-central sites, while fractions elicit a more profound P2 over right-lateralized fronto-central sites and an N2 at biparietal regions during the semantic access stage. Hence, for nonsymbolic entity processing, alignments of decimals and continuous entities activate the phonological network, while alignments of fractions and discretized entities trigger the visuospatial regions. For symbolic number processing, exact strategies with rote arithmetic retrieval in a verbal format are used in decimal processing, while approximate strategies with complex magnitude processing in a visuospatial format are used in fraction processing.
Substitution boxes, or S-boxes, play a significant role in the encryption and decryption of bit-level plaintext and ciphertext, respectively. Irreducible polynomials (IPs) have been used to construct 4-bit or 8-bit substitution boxes in many cryptographic block ciphers. In the Advanced Encryption Standard, the elements of the 8-bit S-box have been obtained from the Multiplicative Inverse (MI) of elemental polynomials (EPs) of the first IP over the Galois field GF(2^8), with an additive element added. In this paper, a mathematical method to obtain monic IPs over the Galois field GF(p^q), and the algorithm of the said method together with a discussion of its execution time, are illustrated with examples. The method is very similar to the multiplication of two polynomials over GF(p^q), but differs in execution. The decimal equivalents of polynomials are used to identify Basic Polynomials (BPs), EPs, IPs, and Reducible Polynomials (RPs). The monic RPs are determined by this method and cancelled out to produce the monic IPs. The non-monic IPs are obtained by multiplying the monic IPs by α, where α ∈ GF(p^q) assumes values from 2 to (p − 1).
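The cancellation idea can be sketched on integer encodings, in the spirit of the "decimal equivalents" above (the exact procedure in the paper may differ): a polynomial over GF(p) is encoded by its coefficients as base-p digits, monic reducible polynomials of degree q are exactly the products of two lower-degree monic polynomials, and removing those products from all monic degree-q encodings leaves the monic irreducibles.

```python
def poly_mul(a, b, p):
    """Multiply two base-p-encoded polynomials (digit convolution mod p)."""
    da = []
    while a: da.append(a % p); a //= p
    db = []
    while b: db.append(b % p); b //= p
    out = [0] * (len(da) + len(db) - 1)
    for i, ai in enumerate(da):
        for j, bj in enumerate(db):
            out[i + j] = (out[i + j] + ai * bj) % p
    return sum(c * p ** k for k, c in enumerate(out))

def monic_irreducibles(p, q):
    """All monic irreducible degree-q polynomials over GF(p), as encodings."""
    monic = lambda d: range(p ** d, 2 * p ** d)   # leading coefficient 1
    reducible = set()
    for d1 in range(1, q // 2 + 1):               # cancel monic x monic products
        for a in monic(d1):
            for b in monic(q - d1):
                reducible.add(poly_mul(a, b, p))
    return sorted(set(monic(q)) - reducible)
```

For example, over GF(2) the only irreducible quadratic is x² + x + 1, encoded as 7, and the irreducible cubics are x³ + x + 1 and x³ + x² + 1, encoded as 11 and 13.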
Decimal arithmetic is desirable for the high-precision requirements of many financial, industrial, and scientific applications. Furthermore, hardware support for decimal arithmetic has gained momentum with IEEE 754-2008, which standardized decimal floating point. This paper presents a new architecture for two-operand and multi-operand signed-digit decimal addition. Signed-digit architectures are advantageous because there are no carry-propagate chains. The proposed signed-digit adder reduces the critical path delay by parallelizing the correction stage inherent to decimal addition. For performance evaluation, we synthesize and compare multiple unsigned and signed-digit multi-operand decimal adder architectures on 0.18 μm CMOS VLSI technology. Synthesis results for 2, 4, 8, and 16 operands with 8 decimal digits provide critical data for determining each adder's performance and scalability.
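Why signed-digit addition has no carry-propagate chain can be shown behaviorally. The sketch below is a textbook-style scheme, not the paper's architecture: with digits in [−9, 9], the transfer out of position i depends only on the operand digits at position i, so every position can be computed in parallel and no carry ripples more than one place.

```python
def sd_add(a, b):
    """Signed-digit decimal addition: operands are digit vectors in [-9, 9],
    least-significant digit first."""
    n = len(a)
    t = [0] * (n + 1)                        # transfer digits in {-1, 0, 1}
    w = [0] * n
    for i in range(n):                       # every position is independent
        v = a[i] + b[i]
        t[i + 1] = 1 if v >= 9 else (-1 if v <= -9 else 0)
        w[i] = v - 10 * t[i + 1]             # interim sum kept in [-8, 8]
    s = [w[i] + t[i] for i in range(n)]      # final digits stay in [-9, 9]
    return s, t[n]

def sd_value(digits, top=0):
    """Integer value of a least-significant-first signed-digit vector."""
    return sum(d * 10 ** i for i, d in enumerate(digits)) + top * 10 ** len(digits)
```

Because the interim sum is confined to [−8, 8], adding the incoming transfer can never overflow a digit, which is the property hardware implementations exploit.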
This paper deals with the technique of using comb filters for FIR decimation in digital signal processing. The process of decreasing the sampling frequency of a sampled signal is called decimation. When decimating filters are used in systems where different parts operate at different sample rates, only a portion of the out-of-band frequencies folds into the pass band. A filter design tuned to the aliasing frequencies, all of which could otherwise fold into the pass band, not only provides multiple stop bands but also exhibits computational efficiency and performance superiority over a single-stop-band design. These filters are referred to as multiband designs in the family of FIR filters. Two other special versions of FIR filter design are halfband and comb filter designs, both of which are particularly useful for reducing the computational requirements in multirate designs. The proposed comb FIR decimation procedure is not only efficient but also opens up a new vista of simplicity and elegance in computing the Multiplications per Second (MPS) and Additions per Second (APS) of the desired filter, over and above the halfband designs.
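The computational saving that motivates such multirate FIR designs comes from the polyphase identity: filtering at the high rate and then discarding M−1 of every M outputs wastes work, whereas filtering the M decimated input phases at the low rate produces the same samples with roughly 1/M of the multiplies. A generic sketch (not the paper's comb design):

```python
import numpy as np

def decimate_direct(x, h, M):
    """Reference: full-rate FIR filtering, then keep every M-th output."""
    return np.convolve(x, h)[::M]

def decimate_polyphase(x, h, M):
    """Polyphase form: filter the M input phases at the LOW rate and sum."""
    Ly = (len(x) + len(h) - 2) // M + 1
    y = np.zeros(Ly)
    for m in range(M):
        hm = h[m::M]                                  # phase m of the filter
        if hm.size == 0:
            continue
        pad = (m + M - 1) // M                        # x[sM - m] = 0 for sM < m
        xm = np.concatenate((np.zeros(pad), x[(M - m) % M::M]))
        ym = np.convolve(xm, hm)[:Ly]
        y[:len(ym)] += ym
    return y
```

Counting operations in the polyphase form is also how figures like MPS and APS become straightforward to tabulate for a candidate design.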
Sampling rate conversion is widely used to decrease the computational load and storage requirements of a system. The fractional Fourier transform (FRFT) is a powerful tool for the analysis of nonstationary signals, especially chirp-like signals. It has thus become an active area in the signal processing community, with many applications in radar, communication, electronic warfare, and information security. It is therefore necessary to generalize the theorems for Fourier-domain analysis of decimation and interpolation. Firstly, this paper defines the digital frequency in the fractional Fourier domain (FRFD) through the sampling theorems with the FRFT. Secondly, FRFD analysis of decimation and interpolation is proposed using the digital frequency in the FRFD, followed by studies of the interpolation filter and the decimation filter in the FRFD. Using these results, FRFD analysis of sampling rate conversion by a rational factor is illustrated. The noble identities of decimation and interpolation in the FRFD are then deduced from these results and the fractional convolution theorem. The theorems proposed in this study are the basis for generalizing multirate signal processing to the FRFD, which can advance filter bank theory in the FRFD. Finally, the theorems introduced in this paper are validated by simulations.
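The classical special case of the noble identity being generalized above (the ordinary Fourier domain, i.e. the FRFT with rotation angle π/2) is easy to verify numerically: decimating by M and then filtering with H(z) gives the same output as filtering with H(z^M) and then decimating. The filter and signal below are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(32)
h = rng.standard_normal(5)          # arbitrary prototype filter taps
M = 4

h_up = np.zeros(M * (len(h) - 1) + 1)
h_up[::M] = h                        # H(z^M): M-1 zeros between taps

lhs = np.convolve(x, h_up)[::M]      # filter by H(z^M), then decimate by M
rhs = np.convolve(x[::M], h)         # decimate by M, then filter by H(z)
```

The FRFD versions in the paper replace ordinary convolution with fractional convolution, but the structure of the identity is the same.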
Freezing and crystallization of commercial ethylene carbonate-based binary electrolytes, leading to irreversible damage to lithium-ion batteries (LIBs), remain a significant challenge for the survival of energy storage devices at extremely low temperatures (below −40 °C). Herein, a decimal solvent-based high-entropy electrolyte is developed with an unprecedentedly low freezing point of −130 °C to significantly extend the service temperature range of LIBs, far superior to the −30 °C of the commercial counterpart. Distinguished from conventional electrolytes, this molecularly disordered solvent mixture greatly suppresses the freezing crystallization of the electrolyte, providing good protection for LIBs from possible mechanical damage at extremely low temperatures. Benefiting from this, our high-entropy electrolyte exhibits an extraordinarily high ionic conductivity of 0.62 mS·cm⁻¹ at −60 °C, several orders of magnitude higher than that of the frozen commercial electrolytes. Impressively, LIBs utilizing the decimal electrolyte can be charged and discharged even at an ultra-low temperature of −60 °C, maintaining high capacity retention (~80% at −40 °C) as well as remarkable rate capability. This study provides design strategies for low-temperature electrolytes to extend the service temperature range of LIBs, creating a new avenue for improving the survival and operation of various energy storage systems under extreme environmental conditions.
Power analysis has been a powerful and thoroughly studied threat to implementations of block ciphers and public-key algorithms, but not yet to stream ciphers. Based on the power consumption differences between two neighboring clock cycles, this paper presents a correlation power analysis (CPA) attack on the synchronous stream cipher DECIMv2 (the tweaked version of the original submission DECIM). The attack resynchronizes the cryptographic device repeatedly with many different initialization values (IVs) to obtain enough power traces. Then, by modeling the statistical properties of the differential power traces with correlation coefficients, the proposed attack algorithm can completely reveal the secret key of DECIMv2. Furthermore, a simulated attack is mounted to confirm the validity of the algorithm. The results show that the entire secret key of DECIMv2 can be restored within several minutes by performing 12 CPA attacks. It appears that there are still some defects in the design of DECIMv2, and some further improvements should be made to resist the proposed attack.
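The core CPA mechanics can be sketched with simulated traces. This is a generic Hamming-weight CPA on a stand-in nonlinear S-box, not DECIMv2's keystream generator, and the key, trace count, and noise level are all assumptions: correlate the measured leakage with the leakage predicted under every key guess, and the correct guess stands out.

```python
import numpy as np

rng = np.random.default_rng(0)
sbox = rng.permutation(256)            # stand-in nonlinear S-box (not DECIM's)
key = 0x5A                             # assumed secret key byte
n_traces = 500
pts = rng.integers(0, 256, n_traces)   # known varying inputs (akin to IVs)

hw = np.array([bin(v).count("1") for v in range(256)])
# Simulated traces: Hamming-weight leakage of the keyed S-box output plus noise.
traces = hw[sbox[pts ^ key]] + 0.5 * rng.standard_normal(n_traces)

# CPA: Pearson correlation between measured traces and each guess's prediction.
corrs = np.empty(256)
for guess in range(256):
    pred = hw[sbox[pts ^ guess]]
    corrs[guess] = np.corrcoef(pred, traces)[0, 1]
recovered = int(np.argmax(corrs))
```

Against a real device the "trace" would be a sampled power waveform and the prediction would follow the cipher's actual register updates, but the correlation step is the same.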
This work presents an oversampled high-order single-loop single-bit sigma-delta analog-to-digital converter followed by a multi-stage decimation filter. Design details and measurement results for the whole chip are presented for a TSMC 0.18 μm CMOS implementation that achieves virtually ideal 16-bit performance over a 640 kHz baseband. The modulator is a fully differential circuit that operates from a single 1.8 V power supply. With an oversampling ratio of 64 and a clock rate of 81.92 MHz, the modulator achieves a 94 dB dynamic range. The decimator achieves a pass-band ripple of less than 0.01 dB, a stop-band attenuation of 80 dB, and a transition band from 640 to 740 kHz. The whole chip consumes only 56 mW at a 1.28 MHz output rate and occupies a die area of 1 × 2 mm^2.
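A behavioral sketch makes the modulator-plus-decimator principle concrete. This is a first-order, idealized model with an assumed oversampling ratio, far simpler than the high-order chip above: the single-bit stream averages to the input, and the decimation filter recovers the value at the low rate.

```python
import numpy as np

def sigma_delta_1bit(x):
    """Behavioral first-order single-bit modulator: one integrator feeding a
    two-level quantizer whose output is fed back and subtracted."""
    acc, fb = 0.0, 0.0
    bits = np.empty(len(x))
    for i, v in enumerate(x):
        acc += v - fb                        # integrate the error
        fb = 1.0 if acc >= 0 else -1.0       # single-bit quantizer
        bits[i] = fb
    return bits

osr = 64                                     # assumed oversampling ratio
x = np.full(4096, 0.3)                       # DC input well inside (-1, 1)
bits = sigma_delta_1bit(x)
decimated = bits.reshape(-1, osr).mean(axis=1)   # crude averaging decimator
```

A real design would use a high-order loop and a multi-stage filter with the pass-band and stop-band specifications quoted above; the averaging here only illustrates why oversampling plus decimation trades rate for resolution.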
In digital furniture design, skillful designers usually use professional software to create new furniture designs with various textures, and then take advantage of rendering tools to produce eye-catching results. Generally, a fine-grained furniture model holds many geometric details, inducing a significant time cost for rendering and a large data size for storage, neither of which is desired in application scenarios where efficiency is emphasized. To accelerate the rendering process while preserving the rendering quality as much as possible, we develop a novel decimation technique which not only reduces the number of faces in furniture models, but also retains their geometric and texture features. Two metrics are utilized in our approach to measure the distortion of texture features. Using these two metrics as guidance for decimation, high texture distortion can be avoided while simplifying the geometric models. We are therefore able to build multi-level representations with different levels of detail based on the initial design. Our experimental results show that the developed technique achieves excellent visual effects on the decimated furniture models.
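For contrast with the metric-guided technique above, here is the simplest family of mesh decimation schemes, vertex clustering: snap vertices to a regular grid, merge all vertices in a cell (keeping the first as representative), and drop faces that collapse. It reduces the face count but applies no texture-distortion metric, which is exactly the limitation the paper's approach addresses.

```python
import numpy as np

def cluster_decimate(vertices, faces, cell=0.5):
    """Vertex-clustering decimation of a triangle mesh.
    vertices: (N, 3) float array; faces: list of vertex-index triples."""
    keys = np.floor(vertices / cell).astype(int)      # grid cell of each vertex
    cell_index = {}
    remap = np.empty(len(vertices), dtype=int)
    new_vertices = []
    for i, key in enumerate(map(tuple, keys)):
        if key not in cell_index:
            cell_index[key] = len(new_vertices)
            new_vertices.append(vertices[i])          # first vertex represents cell
        remap[i] = cell_index[key]
    new_faces = [tuple(remap[list(f)]) for f in faces]
    new_faces = [f for f in new_faces if len(set(f)) == 3]   # drop collapsed faces
    return np.array(new_vertices), new_faces
```

A quality-aware decimator replaces the blind grid snap with per-collapse cost metrics, such as the two texture-distortion metrics described above.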
Funding: Supported by the National Natural Science Foundation of China under Grant No. 62071114.
Funding: Supported by the China Postdoctoral Science Foundation (20080431379).
Funding: Supported in part by an internal grant of Eastern Washington University.
Abstract: This paper introduces decimated filter banks for the one-dimensional empirical mode decomposition (1D-EMD). These filter banks can provide perfect reconstruction and allow for an arbitrary tree structure. Since the EMD is a data-driven decomposition, it is a very useful analysis instrument for non-stationary and non-linear signals. However, the traditional 1D-EMD has the disadvantage of expanding the data: large data sets can be generated, as the amount of data to be stored increases with every decomposition level. The 1D-EMD can be thought of as having the structure of a single dyadic filter, but a methodology for incorporating the decomposition into an arbitrary tree structure has not yet been reported in the literature. This paper shows how to extend the 1D-EMD into any arbitrary tree structure while maintaining the perfect reconstruction property. Furthermore, the technique allows for downsampling the decomposed signals. This paper thus presents a method to minimize the data-expansion drawback of the 1D-EMD by using decimation and merging the EMD coefficients. The proposed algorithm is applicable to any arbitrary tree structure, including a full binary tree structure.
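The storage argument rests on a basic property of decimation: splitting a signal into its decimated polyphase components preserves both the total sample count and perfect reconstruction. A toy sketch of that property alone (the paper builds full EMD filter banks on top of it):

```python
import numpy as np

def decimate_split(x):
    """Split a signal into its two polyphase (even/odd) components.
    Together they hold exactly len(x) samples -- no data expansion."""
    return x[0::2], x[1::2]

def merge(even, odd):
    """Interleave the decimated components back: perfect reconstruction."""
    y = np.empty(len(even) + len(odd), dtype=even.dtype)
    y[0::2], y[1::2] = even, odd
    return y
```

Applying the split recursively to either component yields an arbitrary dyadic tree, and merging bottom-up restores the original signal exactly.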
Funding: Project supported by the Fundamental Research Funds for the Central Universities (Grant No. FRF-TP-19-013A3).
Abstract: The infinite time-evolving block decimation algorithm (iTEBD) provides an efficient way to determine the ground state and dynamics of quantum lattice systems in the thermodynamic limit. In this paper we suggest an optimized way to carry out the iTEBD calculation, which takes advantage of additional reduced decompositions to speed up the calculation. The numerical calculations show that, for a comparable computation time, our method provides more accurate results than the traditional iTEBD, especially for lattice systems with large on-site degrees of freedom.
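The "decimation" at the heart of (i)TEBD is a bond truncation: a two-site tensor, reshaped into a matrix, is split by SVD and only the χ largest singular values are kept, with the discarded ones giving the truncation error exactly. A hedged sketch of that single step (the full iTEBD loop, gate application and canonical form are beyond a few lines):

```python
import numpy as np

def truncate_bond(theta, chi):
    """Split a two-site tensor theta (reshaped into a matrix) via SVD and
    keep the chi largest singular values -- the TEBD truncation step."""
    u, s, vh = np.linalg.svd(theta, full_matrices=False)
    keep = min(chi, len(s))
    err = np.sqrt(np.sum(s[keep:] ** 2))   # Frobenius truncation error
    return u[:, :keep], s[:keep], vh[:keep, :], err
```

By the Eckart-Young theorem this is the optimal rank-χ approximation in the Frobenius norm, which is why TEBD discards the smallest singular values.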
Funding: The National Natural Science Foundation of China (Nos. 62077013 and 61773114) and the Jiangsu Provincial Innovation Project for Scientific Research of Graduate Students in Universities (No. KYCX17_0160).
Abstract: Entity and symbolic fraction comparison tasks separating the identification and semantic-access stages, based on event-related potential (ERP) technology, were used to investigate neural differences between fraction and decimal strategies in the magnitude processing of nonsymbolic entities and symbolic numbers. The experimental results show that, during the identification stage, continuous entities elicit a stronger left-lateralized anterior N2 for decimals, while discretized entities elicit a more significant right-lateralized posterior N2 for fractions. During the semantic-access stage, decimals elicit a stronger N2 over left-lateralized fronto-central sites, while fractions elicit a more pronounced P2 over right-lateralized fronto-central sites and an N2 at biparietal regions. Hence, for nonsymbolic entity processing, alignments of decimals with continuous entities activate the phonological network, while alignments of fractions with discretized entities engage visuospatial regions. For symbolic number processing, exact strategies with rote arithmetic retrieval in a verbal format are used for decimals, while approximate strategies with complex magnitude processing in a visuospatial format are used for fractions.
Abstract: Substitution boxes, or S-boxes, play a significant role in the encryption and decryption of bit-level plaintext and ciphertext, respectively. Irreducible polynomials (IPs) have been used to construct 4-bit and 8-bit substitution boxes in many cryptographic block ciphers. In the Advanced Encryption Standard, the elements of the 8-bit S-box are obtained from the multiplicative inverse (MI) of elemental polynomials (EPs) of the 1st IP over the Galois field GF(2^8) by adding an additive element. In this paper, a mathematical method to obtain monic IPs over the Galois field GF(p^q), and the algorithm of the said method with a discussion of its execution time, are illustrated with an example. The method is very similar to the multiplication of two polynomials over GF(p^q) but differs in execution. The decimal equivalents of polynomials are used to identify basic polynomials (BPs), EPs, IPs and reducible polynomials (RPs). The monic RPs are determined by this method and cancelled out to produce the monic IPs. The non-monic IPs are obtained by multiplying the monic IPs by α, where α ∈ GF(p^q) assumes values from 2 to (p − 1).
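The cancellation idea can be sketched in a few lines: represent each monic polynomial over GF(p) by its base-p integer ("decimal equivalent"), generate all monic reducible polynomials of degree q as products of lower-degree monic polynomials, and cancel them, leaving the monic IPs. This is a hedged reconstruction in the spirit of the method, not the paper's exact algorithm:

```python
from itertools import product

def poly_mul(a, b, p):
    """Multiply coefficient lists (lowest degree first) over GF(p)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def to_int(poly, p):
    """Base-p integer ('decimal equivalent') of a coefficient list."""
    return sum(c * p ** i for i, c in enumerate(poly))

def monic_polys(deg, p):
    """All monic polynomials of a given degree over GF(p)."""
    for tail in product(range(p), repeat=deg):
        yield list(tail) + [1]

def monic_irreducibles(q, p):
    """Sieve: every monic reducible of degree q is a product of a monic
    factor of degree d <= q//2 and a monic cofactor; cancel all of them."""
    reducible = set()
    for d in range(1, q // 2 + 1):
        for a in monic_polys(d, p):
            for b in monic_polys(q - d, p):
                reducible.add(to_int(poly_mul(a, b, p), p))
    return sorted(to_int(f, p) for f in monic_polys(q, p)
                  if to_int(f, p) not in reducible)
```

For example, over GF(2) the only monic irreducible quadratic is x^2 + x + 1, whose binary-coefficient integer equivalent is 7.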
Abstract: Decimal arithmetic is desirable for the high-precision requirements of many financial, industrial and scientific applications. Furthermore, hardware support for decimal arithmetic has gained momentum with IEEE 754-2008, which standardized decimal floating-point arithmetic. This paper presents a new architecture for two-operand and multi-operand signed-digit decimal addition. Signed-digit architectures are advantageous because there are no carry-propagate chains. The proposed signed-digit adder reduces the critical-path delay by parallelizing the correction stage inherent to decimal addition. For performance evaluation, we synthesize and compare multiple unsigned and signed-digit multi-operand decimal adder architectures in 0.18 μm CMOS VLSI technology. Synthesis results for 2, 4, 8, and 16 operands with 8 decimal digits provide critical data for determining each adder's performance and scalability.
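The carry-free property can be modeled in software: each position computes, in parallel, a transfer digit t in {-1, 0, 1} and an interim sum u in [-8, 8], and the final digit u + t is guaranteed to stay in [-9, 9], so no carry chain can form. A sketch of one standard correction rule (the paper's contribution, parallelizing the correction stage, concerns the hardware realization, which this toy model does not capture):

```python
def sd_add(x, y):
    """Carry-free signed-digit decimal addition.
    x, y: digit lists in [-9, 9], least significant first."""
    n = max(len(x), len(y))
    x = x + [0] * (n - len(x))
    y = y + [0] * (n - len(y))
    t = [0] * (n + 1)   # transfer digits in {-1, 0, 1}
    u = [0] * n         # interim sums in [-8, 8]
    for i in range(n):                  # position-parallel in hardware
        w = x[i] + y[i]                 # w in [-18, 18]
        if w >= 9:
            t[i + 1], u[i] = 1, w - 10
        elif w <= -9:
            t[i + 1], u[i] = -1, w + 10
        else:
            u[i] = w
    # final digits s_i = u_i + t_i stay within [-9, 9]: no further carry
    return [u[i] + t[i] for i in range(n)] + [t[n]]

def sd_value(d):
    """Integer value of a signed-digit list (least significant first)."""
    return sum(di * 10 ** i for i, di in enumerate(d))
```

Because each transfer digit depends only on one position's digits, all positions can be corrected simultaneously, which is the delay advantage the abstract describes.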
Abstract: This paper deals with the use of comb filters for FIR decimation in digital signal processing. The process of decreasing the sampling frequency of a sampled signal is called decimation. When decimating filters are used in systems whose parts operate at different sample rates, only a portion of the out-of-band frequencies folds into the pass band. A filter design tuned to the aliasing frequencies, all of which could otherwise fold into the pass band, not only provides multiple stop bands but also exhibits computational efficiency and performance superiority over a single-stop-band design. Such filters are referred to as multiband designs in the FIR filter family. Two other special FIR designs are the half-band and comb filter designs, both of which are particularly useful for reducing the computational requirements of multirate designs. The proposed comb FIR decimation procedure is not only efficient but also offers a simple and elegant way to compute the multiplications per second (MPS) and additions per second (APS) of the desired filter, over and above the half-band designs.
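The computational saving behind decimating FIR designs comes from the polyphase identity: filtering at the input rate and then discarding M-1 of every M outputs wastes work, whereas the polyphase form computes only the surviving samples, cutting MPS and APS by roughly a factor of M. A sketch of the identity with a generic FIR filter (an illustration, not the paper's comb design):

```python
import numpy as np

def decimate_direct(x, h, M):
    """Filter at the input rate, then keep every M-th sample (wasteful)."""
    return np.convolve(x, h)[::M]

def decimate_polyphase(x, h, M):
    """Split h into M subfilters that run at the output rate, so only the
    samples surviving decimation are ever computed: MPS = taps * f_out."""
    parts = []
    for k in range(M):
        hk = h[k::M]                      # k-th polyphase branch of h
        if len(hk) == 0:
            continue
        # branch input x_k[j] = x[j*M - k] (delay chain feeding branch k)
        xk = x[::M] if k == 0 else np.concatenate([[0.0], x[M - k::M]])
        parts.append(np.convolve(xk, hk))
    n = max(len(p) for p in parts)
    y = np.zeros(n)
    for p in parts:                       # sum the branch outputs
        y[:len(p)] += p
    return y
```

Both routes give the same samples; the polyphase route simply never computes the outputs that decimation would throw away.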
Funding: The National Natural Science Foundation of China (Grant Nos. 60232010 and 60572094) and the National Natural Science Foundation of China for Distinguished Young Scholars (Grant No. 60625104).
Abstract: Sampling rate conversion is widely used to reduce the computational amount and storage load of a system. The fractional Fourier transform (FRFT) is a powerful tool for the analysis of nonstationary signals, especially chirp-like signals, and has become an active area in the signal processing community, with many applications in radar, communication, electronic warfare, and information security. It is therefore necessary to generalize the theory of Fourier-domain analysis of decimation and interpolation. This paper first defines the digital frequency in the fractional Fourier domain (FRFD) through the sampling theorems of the FRFT. Second, an FRFD analysis of decimation and interpolation is proposed using this digital frequency, followed by studies of the interpolation filter and decimation filter in the FRFD. Using these results, an FRFD analysis of sampling rate conversion by a rational factor is illustrated. The noble identities of decimation and interpolation in the FRFD are then deduced from the previous results and the fractional convolution theorem. The proposed theorems form the basis for generalizing multirate signal processing to the FRFD and can advance filter bank theory in the FRFD. Finally, the theorems introduced in this paper are validated by simulations.
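The classical Fourier-domain noble identity that the paper carries over to the FRFD states that filtering with H(z^M) followed by M-fold decimation equals M-fold decimation followed by filtering with H(z). A short numerical check of the classical identity (the FRFD version additionally involves chirp modulations, omitted here):

```python
import numpy as np

def upsample_filter(h, M):
    """Expand H(z) to H(z^M): insert M-1 zeros between taps."""
    hM = np.zeros((len(h) - 1) * M + 1)
    hM[::M] = h
    return hM

def lhs(x, h, M):
    """Filter with H(z^M) at the input rate, then decimate by M."""
    return np.convolve(x, upsample_filter(h, M))[::M]

def rhs(x, h, M):
    """Noble identity: decimate by M first, then filter with H(z)."""
    return np.convolve(x[::M], h)
```

The identity holds sample-for-sample, which is why multirate systems push filters across decimators to run them at the lower rate.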
Funding: This study was supported by the National Research Foundation, Prime Minister's Office, Singapore, under the Nanomaterials for Energy and Water Management CREATE Programme, and by the Energy Innovation Research Programme (EIRP) administered by the Energy Market Authority (No. NRF2015EWT-EIRP002-008).
Abstract: Freezing and crystallization of commercial ethylene carbonate-based binary electrolytes, which cause irreversible damage to lithium-ion batteries (LIBs), remain a significant challenge for the survival of energy storage devices at extremely low temperatures (below −40 °C). Herein, a decimal (ten-solvent) high-entropy electrolyte is developed with an unprecedentedly low freezing point of −130 °C to significantly extend the service temperature range of LIBs, far surpassing the −30 °C of the commercial counterpart. Distinguished from conventional electrolytes, this molecularly disordered solvent mixture greatly suppresses freezing and crystallization of the electrolyte, providing good protection for LIBs from possible mechanical damage at extremely low temperatures. Benefiting from this, our high-entropy electrolyte exhibits an extraordinarily high ionic conductivity of 0.62 mS·cm−1 at −60 °C, several orders of magnitude higher than that of the frozen commercial electrolyte. Impressively, LIBs using the decimal electrolyte can be charged and discharged even at an ultra-low temperature of −60 °C, maintaining high capacity retention (about 80% at −40 °C) as well as remarkable rate capability. This study provides design strategies for low-temperature electrolytes to extend the service temperature range of LIBs, creating a new avenue for improving the survival and operation of various energy storage systems under extreme environmental conditions.
Funding: Supported by the National Basic Research Program of China (2007CB311201) and the National Natural Science Foundation of China (60833008, 60803149).
Abstract: Power analysis is a powerful and thoroughly studied threat to implementations of block ciphers and public-key algorithms, but not yet to stream ciphers. Based on the power consumption differences between two neighboring clock cycles, this paper presents a correlation power analysis (CPA) attack on the synchronous stream cipher DECIM v2 (the tweaked version of the original submission DECIM). The attack resynchronizes the cryptographic device repeatedly with many different initialization values (IVs) to obtain enough power traces. Then, by modeling the statistical properties of the differential power traces with correlation coefficients, the proposed attack algorithm can completely reveal the secret key of DECIM v2. Furthermore, a simulated attack is mounted to confirm the validity of the algorithm. The results show that the entire secret key of DECIM v2 can be recovered within several minutes by performing 12 CPA attacks. It appears that there are still defects in the design of DECIM v2, and further improvements should be made to resist the proposed attack.
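The core of any CPA attack is the correlation-coefficient key ranking: predicted leakage under each key guess is correlated with the measured traces, and the correct guess maximizes the correlation. A generic single-byte sketch with a simulated Hamming-weight leakage model (a toy stand-in, not the paper's differential-trace model for DECIM v2):

```python
import numpy as np

HW = np.array([bin(v).count("1") for v in range(256)])  # Hamming weights

def simulate_traces(key_byte, ivs, noise=1.0, seed=0):
    """Toy leakage: power ~ Hamming weight of iv XOR key, plus noise."""
    rng = np.random.default_rng(seed)
    return HW[ivs ^ key_byte] + noise * rng.standard_normal(len(ivs))

def cpa_recover(ivs, traces):
    """Correlate measured power with the leakage predicted under each
    key guess; the correct guess maximizes the correlation coefficient."""
    corrs = np.empty(256)
    for guess in range(256):
        model = HW[ivs ^ guess]
        corrs[guess] = np.corrcoef(model, traces)[0, 1]
    return int(np.argmax(corrs))
```

With a few thousand traces the correct guess stands out clearly above the "ghost peaks" produced by closely related wrong guesses.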
Abstract: This work presents an oversampled high-order single-loop single-bit sigma-delta analog-to-digital converter followed by a multi-stage decimation filter. Design details and measurement results for the whole chip are presented for a TSMC 0.18 μm CMOS implementation that achieves virtually ideal 16-bit performance over a 640 kHz baseband. The modulator is a fully differential circuit that operates from a single 1.8 V power supply. With an oversampling ratio of 64 and a clock rate of 81.92 MHz, the modulator achieves a 94 dB dynamic range. The decimator achieves a pass-band ripple of less than 0.01 dB, a stop-band attenuation of 80 dB, and a transition band from 640 to 740 kHz. The whole chip consumes only 56 mW at a 1.28 MHz output rate and occupies a die area of 1 × 2 mm².
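The chip's principle, noise-shaping modulation followed by decimation, can be illustrated with a first-order single-bit model and the crudest possible decimation filter, block averaging (the actual design is a high-order modulator with a multi-stage filter, far beyond this sketch):

```python
import numpy as np

def sigma_delta_1st(x):
    """First-order single-bit sigma-delta: integrate the error between
    the input and the fed-back 1-bit output (+1 or -1)."""
    acc, bits = 0.0, np.empty(len(x))
    for i, xi in enumerate(x):
        out = 1.0 if acc >= 0 else -1.0
        bits[i] = out
        acc += xi - out          # integrator accumulates quantization error
    return bits

def decimate_average(bits, R):
    """Crudest decimation filter: average non-overlapping blocks of R."""
    n = len(bits) // R
    return bits[: n * R].reshape(n, R).mean(axis=1)
```

For a DC input, the accumulator stays bounded, so the average of the 1-bit stream converges to the input value as the decimation ratio grows.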
Funding: This work was supported by the Natural Science Foundation of Guangdong Province of China under Grant No. 2017A030313347.
Abstract: In digital furniture design, skillful designers usually use professional software to create new furniture designs with various textures and then use rendering tools to produce eye-catching results. A fine-grained furniture model generally holds many geometric details, which incur significant rendering time and a large storage footprint, both undesirable in application scenarios where efficiency is emphasized. To accelerate rendering while preserving the rendering quality as much as possible, we develop a novel decimation technique that not only reduces the number of faces on furniture models but also retains their geometric and texture features. Two metrics are used in our approach to measure the distortion of texture features. With these metrics guiding the decimation, high texture distortion can be avoided when simplifying the geometric models. We are therefore able to build multi-level representations with different levels of detail based on the initial design. Our experimental results show that the developed technique achieves excellent visual effects on the decimated furniture models.