Funding: The National Natural Science Foundation of China (No. 62077013, 61773114) and the Jiangsu Provincial Innovation Project for Scientific Research of Graduate Students in Universities (No. KYCX17_0160).
Abstract: Entity and symbolic fraction comparison tasks that separate the identification and semantic-access stages were used, together with event-related potential (ERP) recordings, to investigate neural differences between fraction and decimal strategies in the magnitude processing of nonsymbolic entities and symbolic numbers. The experimental results show that, during the identification stage, continuous entities elicit a stronger left-lateralized anterior N2 for decimals, while discretized entities elicit a more pronounced right-lateralized posterior N2 for fractions. During the semantic-access stage, decimals elicit a stronger N2 over left-lateralized fronto-central sites, while fractions elicit a more pronounced P2 over right-lateralized fronto-central sites and an N2 at biparietal regions. Hence, for nonsymbolic entity processing, alignments of decimals with continuous entities activate the phonological network, while alignments of fractions with discretized entities engage visuospatial regions. For symbolic number processing, exact strategies with rote arithmetic retrieval in a verbal format are used for decimals, while approximate strategies with complex magnitude processing in a visuospatial format are used for fractions.
Abstract: This paper presents a method of computation called the cumulative method, which is based on a repeated cumulative process. The cumulative method is adapted to the purposes of computation, particularly multiplication and division, and these two operations are represented by algebraic formulas. An advantage of the method is that the cumulative process can be performed on decimal numbers. The paper aims to establish a basic and useful formula valid for the two fundamental arithmetic operations of multiplication and division. The new cumulative method proved to be more flexible and made it possible to extend multiplication and division based on repeated addition/subtraction to decimal numbers.
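As a rough illustration of extending repeated addition to decimals, the sketch below scales both operands to integers, accumulates one of them, and rescales the result. This is a minimal reading of the abstract, not the paper's actual formulas; the function name and structure are invented.

```python
from decimal import Decimal

def cumulative_multiply(a: str, b: str) -> Decimal:
    """Multiply two decimals by repeated addition after scaling to integers."""
    x, y = Decimal(a), Decimal(b)
    dx = max(-x.as_tuple().exponent, 0)   # decimal places in a
    dy = max(-y.as_tuple().exponent, 0)   # decimal places in b
    xi, yi = int(x.scaleb(dx)), int(y.scaleb(dy))
    total = 0
    for _ in range(abs(yi)):              # the repeated cumulative process
        total += abs(xi)
    sign = -1 if (xi < 0) != (yi < 0) else 1
    return Decimal(sign * total).scaleb(-(dx + dy))  # restore the decimal point

print(cumulative_multiply("2.5", "0.4"))  # prints 1.00
```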
Funding: Supported by the National Natural Science Foundation of China under Grant No. 62071114.
Abstract: Integrated sensing and communication (ISAC) is one of the main usage scenarios for 6G wireless networks. To utilize the limited wireless resources most efficiently, integrated super-resolution sensing and communication (ISSAC) has recently been proposed, which significantly improves the sensing performance of ISAC systems by applying super-resolution algorithms such as Multiple Signal Classification (MUSIC). However, traditional super-resolution sensing algorithms suffer from prohibitive computational complexity in orthogonal frequency-division multiplexing (OFDM) systems, due to the large dimensions of the signals in the subcarrier and symbol domains. To address this issue, we propose a novel two-stage approach that significantly reduces the computational complexity of super-resolution range estimation. The key idea of the proposed scheme is to first uniformly decimate the signals in the subcarrier domain, so that the computational complexity is significantly reduced without missing any target in the range domain. However, the decimation operation may introduce range ambiguity in the form of pseudo peaks; this is addressed in the second stage, where the full set of subcarrier data is used to verify the detected peaks. Compared with traditional MUSIC algorithms, the proposed scheme reduces the computational complexity by two orders of magnitude while maintaining the range resolution and unambiguity. Simulation results verify the effectiveness of the proposed scheme.
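As a toy illustration of the decimate-then-verify idea, the sketch below builds an OFDM-style channel for one target, forms a range profile from every D-th subcarrier (cheap but D-fold ambiguous), and then scores only the candidate peaks against the full subcarrier set. A simple matched-filter profile stands in for MUSIC, and all parameters are invented.

```python
import numpy as np

c = 3e8
N, df, D = 1024, 120e3, 8                 # subcarriers, spacing (Hz), decimation
true_range = 420.0                        # assumed target range in metres
n = np.arange(N)
# Frequency-domain channel for a single target: a phase ramp across subcarriers.
h = np.exp(-1j * 2 * np.pi * n * df * 2 * true_range / c)
h += 0.05 * (np.random.randn(N) + 1j * np.random.randn(N))   # noise

def range_profile(data, idx, ranges):
    # Matched-filter profile |a(r)^H h| over a grid of candidate ranges.
    steer = np.exp(-1j * 2 * np.pi * np.outer(idx * df, 2 * ranges / c))
    return np.abs(steer.conj().T @ data)

r_grid = np.linspace(0, c / (2 * df), 4096, endpoint=False)  # unambiguous span
# Stage 1: decimated subcarriers -> roughly 1/D the work, D-fold pseudo peaks.
p1 = range_profile(h[::D], n[::D], r_grid)
candidates = r_grid[p1 > 0.8 * p1.max()]
# Stage 2: verify the few candidates against the full subcarrier data.
p2 = range_profile(h, n, candidates)
print("estimated range:", candidates[np.argmax(p2)])
```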
Abstract: Accurate frequency estimation in a wideband digital receiver using the FFT algorithm encounters challenges such as spectral leakage, which results from the FFT's assumption of signal periodicity. High-resolution FFTs impose heavy computational demands, and estimating frequencies at non-integer multiples of the frequency resolution is exceptionally challenging. This paper introduces two novel methods for enhanced frequency precision, polynomial interpolation and array indexing, and compares their results with super-resolution and scalloping-loss approaches. Simulation results demonstrate the effectiveness of the proposed methods in contemporary radar systems, with array indexing providing the best frequency estimation despite utilizing the most hardware resources. The paper demonstrates a trade-off between accurate frequency estimation and hardware resources when comparing polynomial interpolation and array indexing.
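Polynomial interpolation of an FFT peak is commonly done by fitting a parabola through the peak bin and its two neighbours; the sketch below shows this standard form of the idea (not necessarily the paper's exact variant), with made-up signal parameters.

```python
import numpy as np

fs, N = 1000.0, 256
f_true = 123.37                          # deliberately between FFT bins
t = np.arange(N) / fs
x = np.sin(2 * np.pi * f_true * t)

mag = np.abs(np.fft.rfft(x * np.hanning(N)))
k = int(np.argmax(mag))                  # coarse estimate: the peak bin
a, b, c = mag[k - 1], mag[k], mag[k + 1]
delta = 0.5 * (a - c) / (a - 2 * b + c)  # vertex of the fitted parabola
f_est = (k + delta) * fs / N             # refined, sub-bin frequency estimate
print(f"true {f_true} Hz, estimated {f_est:.2f} Hz")
```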
Funding: Project supported by the Fundamental Research Funds for the Central Universities (Grant No. FRF-TP-19-013A3).
Abstract: The infinite time-evolving block decimation (iTEBD) algorithm provides an efficient way to determine the ground state and dynamics of quantum lattice systems in the thermodynamic limit. In this paper we propose an optimized way to perform the iTEBD calculation, which takes advantage of additional reduced decompositions to speed up the calculation. Numerical calculations show that, for comparable computation time, our method provides more accurate results than traditional iTEBD, especially for lattice systems with large on-site degrees of freedom.
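The core of any TEBD-style method is the block-decimation step: apply a two-site gate and truncate the Schmidt spectrum with an SVD. The sketch below shows one such step for a transverse-field Ising gate; it is a generic illustration, not the paper's optimized reduced-decomposition scheme, and the bond dimension and state are stand-ins.

```python
import numpy as np

chi, d, dt, g = 16, 2, 0.05, 1.0          # bond dim, local dim, step, field
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
H = np.kron(Z, Z) + 0.5 * g * (np.kron(X, np.eye(2)) + np.kron(np.eye(2), X))
w, v = np.linalg.eigh(H)
U = (v @ np.diag(np.exp(-dt * w)) @ v.T).reshape(d, d, d, d)   # exp(-dt*H)

theta = np.random.randn(chi, d, d, chi)    # stand-in two-site block tensor
theta = np.tensordot(U, theta, axes=([2, 3], [1, 2])).transpose(2, 0, 1, 3)
m = theta.reshape(chi * d, d * chi)
u, s, vt = np.linalg.svd(m, full_matrices=False)   # Schmidt decomposition
A = u[:, :chi].reshape(chi, d, chi)        # truncated left tensor
B = vt[:chi, :].reshape(chi, d, chi)       # truncated right tensor
lam = s[:chi] / np.linalg.norm(s[:chi])    # kept, renormalized Schmidt values
print("discarded weight:", 1 - np.sum(s[:chi] ** 2) / np.sum(s ** 2))
```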
Abstract: A novel, stable 16-bit stereo audio fifth-order ΣΔ A/D converter is proposed, consisting of switched-capacitor ΣΔ modulators, a decimation filter, and a bandgap circuit. A method for stabilizing a high-order single-stage ΣΔ modulator is also proposed, and a new multistage comb filter is used as the front-end decimation filter. The ΣΔ A/D converter achieves a peak SNR of 96 dB and a dynamic range of 96 dB. The ADC was implemented in 0.5 μm 5 V CMOS technology; the die occupies only 4.1 mm × 2.4 mm and dissipates 90 mW.
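As a behavioural sketch of the signal chain (1-bit ΣΔ modulation followed by comb decimation), the code below models a first-order modulator, since a stable fifth-order loop would take far more code; the rates and oversampling ratio are illustrative only.

```python
import numpy as np

osr, fs_audio = 64, 48_000
fs = osr * fs_audio                        # oversampled modulator rate
t = np.arange(1 << 14) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)     # 1 kHz test tone

integ, y, bits = 0.0, 0.0, []
for sample in x:                           # first-order sigma-delta loop
    integ += sample - y                    # integrate the quantization error
    y = 1.0 if integ >= 0 else -1.0        # 1-bit quantizer with feedback
    bits.append(y)
stream = np.array(bits)

# Single-stage comb decimator: boxcar average over each OSR-long block.
decimated = stream[: len(stream) // osr * osr].reshape(-1, osr).mean(axis=1)
print(decimated[:8])                       # coarse reconstruction of the tone
```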
Abstract: A 16-bit sigma-delta audio analog-to-digital converter is developed. It consists of an analog modulator and a digital decimator. A standard second-order single-loop architecture is employed in the modulator, and chopper stabilization is applied to the first integrator to eliminate 1/f noise. A low-power, area-efficient decimator is used, which includes a polyphase comb filter and a wave digital filter. The converter achieves a 92 dB dynamic range over the 96 kHz audio band. The single chip occupies 2.68 mm² in a 0.18 μm six-metal CMOS process and dissipates only 15.5 mW.
Abstract: This pedagogical proposal was implemented in the mathematics curricular space, in the first year of the basic cycle of the multi-grade class at IPEM 116 "Manuel Belgrano" Rural Annex Punta del Agua, with an Agro-Environment orientation, Tercero Arriba Department, province of Córdoba, Argentina. It was developed in response to the difficulties students showed in establishing the relationship between natural, fractional, decimal, and percentage numbers. Faced with this situation, it proposes using Cuisenaire rods as a teaching resource to stimulate and develop students' logical capacities within the framework of thoughtful and operational mathematical thinking, applying them in the resolution of specific learning situations. The construction and application of this resource allowed students to develop understanding and assimilation of the concepts with which they had difficulties, and promoted their learning.
Abstract: The digital down converter (DDC) is a core component of the next-generation high-frequency (HF) radar. In order to realize single-chip integration of the digital receiver hardware in the next-generation HF radar, a new FPGA-based design for the DDC is presented. Some important and practical applications are given in this paper, and the results demonstrate the design's validity. Because the parameters can be adjusted freely as needed, the DDC system can be adapted to the next-generation HF radar system.
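A DDC reduces to three steps: mix the input samples to baseband with a numerically controlled oscillator, low-pass filter, and decimate. The sketch below shows this generic structure in software (not the paper's FPGA design); the rates, carrier, and filter length are invented.

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs, f_c, D = 1e6, 250e3, 10               # input rate, carrier, decimation factor
t = np.arange(10_000) / fs
rf = np.cos(2 * np.pi * (f_c + 2e3) * t)  # a tone 2 kHz above the carrier

nco = np.exp(-2j * np.pi * f_c * t)       # numerically controlled oscillator
baseband = rf * nco                       # complex mix down to near DC
lpf = firwin(101, (fs / D) / 2, fs=fs)    # anti-alias low-pass for rate fs/D
filtered = lfilter(lpf, 1.0, baseband)
out = filtered[::D]                       # decimate: keep every D-th sample
print(out[:5])                            # complex baseband at 100 kHz rate
```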
Abstract: In this work, a power-efficient butterfly-unit-based FFT architecture is presented. The butterfly unit is designed using floating-point fused arithmetic units, namely a two-term dot-product unit and an add-subtract unit, which operate on complex data values. A modified fused floating-point two-term dot product and an enhanced model of the Radix-4 FFT butterfly unit are proposed. The modified fused two-term dot product is designed using a Radix-16 Booth multiplier, which reduces switching activity compared with the Radix-8 Booth multiplier of the existing system and also reduces the required area. The proposed architecture is implemented efficiently for the Radix-4 decimation-in-time (DIT) FFT butterfly with the two floating-point fused arithmetic units. The enhanced architecture is synthesized, implemented, placed, and routed on an FPGA device using the Xilinx ISE tool. It is observed that the Radix-4 DIT fused floating-point FFT butterfly requires 50.17% less space and 12.16% less power compared with existing methods, and the enhanced model requires 49.82% less space on the FPGA device compared with the proposed design. Reduced power consumption is also addressed through a reusability technique, which yields an 11.42% power reduction of the enhanced model compared with the proposed design.
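The arithmetic inside a radix-4 DIT butterfly is exactly what the fused dot-product and add-subtract units accelerate: three twiddle multiplies followed by add/subtract combinations. A plain software version of that butterfly, checked against a 4-point FFT, is sketched below.

```python
import numpy as np

def radix4_dit_butterfly(x, w1, w2, w3):
    # Twiddle multiplies (the fused dot-product units' job in hardware).
    a, b, c, d = x[0], x[1] * w1, x[2] * w2, x[3] * w3
    # Add-subtract stages combining the four branches.
    t0, t1 = a + c, a - c
    t2, t3 = b + d, b - d
    return np.array([t0 + t2, t1 - 1j * t3, t0 - t2, t1 + 1j * t3])

# With unit twiddles, one butterfly is a 4-point DFT; verify against numpy.
x = np.random.randn(4) + 1j * np.random.randn(4)
print(np.allclose(radix4_dit_butterfly(x, 1, 1, 1), np.fft.fft(x)))  # True
```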
Abstract: Edge detection is a fundamental issue in image analysis. This paper proposes multirate algorithms for the efficient implementation of edge detectors, and a design example is illustrated. Multirate (decimation and/or interpolation) signal processing algorithms can achieve considerable savings in computation and storage. The proposed algorithms yield mapping relations between non-multirate and multirate expressions of the z-transfer functions in terms of time-varying coefficients, instead of the traditional polyphase-decomposition counterparts. These mapping properties can be readily utilized to analyze and synthesize multirate edge-detection filters efficiently. VHSIC Hardware Description Language (VHDL) simulation results verify the efficiency of the algorithms for real-time Field Programmable Gate Array (FPGA) implementation.
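The computational-savings idea can be seen in a generic multirate pipeline: filter and decimate the image, run the edge detector at the lower rate, and interpolate the edge map back. This is only a schematic stand-in for the paper's time-varying-coefficient transfer-function mappings.

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(256, 256)                       # stand-in image
small = ndimage.gaussian_filter(img, 1.0)[::2, ::2]  # anti-alias, then decimate
gx = ndimage.sobel(small, axis=0)                    # edge detector now runs on
gy = ndimage.sobel(small, axis=1)                    # 1/4 as many pixels
edges = np.hypot(gx, gy)                             # gradient magnitude
full = ndimage.zoom(edges, 2, order=1)               # interpolate back up
print(img.shape, "->", small.shape, "->", full.shape)
```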
Abstract: A method combining the least-squares support vector machine (LS-SVM) and Monte Carlo (MC) simulation is used to calculate foundation settlement reliability. When using LS-SVM, choosing the training dataset and the values of the LS-SVM parameters is key. For representativeness, an orthogonal experimental design with four factors and five levels is used to choose the inputs of the training dataset, and the outputs are calculated using Fast Lagrangian Analysis of Continua (FLAC). The decimal ant colony algorithm (DACA) is used to determine the parameters. Calculation results show that the values of the two LS-SVM parameters (the regularization parameter and the kernel parameter δ²) have a great effect on the performance of LS-SVM. After training the LS-SVM, the inputs are sampled according to their probability distributions and the outputs are predicted with the trained LS-SVM, so that the reliability analysis can be performed by the MC method. A program written in Matlab is employed to calculate the reliability. Results show that the method combining LS-SVM and MC simulation is applicable to the reliability analysis of soft foundation settlement.
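The surrogate-plus-Monte-Carlo pattern is sketched below in Python, with scikit-learn's kernel ridge regression standing in for LS-SVM (the two are closely related least-squares kernel methods) and a fabricated settlement function standing in for the FLAC outputs; the distributions and limit state are invented.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
# Training set: a small design over four soil factors; outputs are a fake
# settlement function standing in for FLAC simulation results (metres).
X_train = rng.uniform([10, 0.2, 5, 1], [30, 0.5, 20, 3], size=(25, 4))
y_train = (0.02 * X_train[:, 0] - 0.3 * X_train[:, 1]
           + 0.01 * X_train[:, 2] + 0.05 * X_train[:, 3]
           + rng.normal(0, 0.01, 25))

surrogate = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.1).fit(X_train, y_train)

# Monte Carlo: sample the factors from assumed distributions, predict the
# settlement with the surrogate, and count limit-state exceedances.
X_mc = rng.normal([20, 0.35, 12, 2], [3, 0.05, 2, 0.3], size=(100_000, 4))
settlement = surrogate.predict(X_mc)
limit = 0.8                                   # allowable settlement, assumed
print("failure probability:", np.mean(settlement > limit))
```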
Abstract: Purpose: With more and more digital collections of various information resources becoming available, the challenge of assigning subject index terms and classes from quality knowledge organization systems is also increasing. While the ultimate purpose is to understand the value of automatically produced Dewey Decimal Classification (DDC) classes for Swedish digital collections, the paper aims to evaluate the performance of six machine learning algorithms as well as a string-matching algorithm based on characteristics of DDC. Design/methodology/approach: State-of-the-art machine learning algorithms require at least 1,000 training examples per class. The complete data set at the time of research comprised 143,838 records, which had to be reduced to the top three hierarchical levels of DDC in order to provide sufficient training data (totaling 802 classes in the training and testing sample, out of 14,413 classes at all levels). Findings: Evaluation shows that a Support Vector Machine with linear kernel outperforms the other machine learning algorithms as well as the string-matching algorithm on average; the string-matching algorithm outperforms machine learning for specific classes where the characteristics of DDC are most suitable for the task. Word embeddings combined with different types of neural networks (simple linear network, standard neural network, 1D convolutional neural network, and recurrent neural network) produced worse results than the Support Vector Machine, but reached close results, with the benefit of a smaller representation size. The impact of features in machine learning shows that using keywords or combining titles and keywords gives better results than using only titles as input. Stemming only marginally improves the results. Removing stop-words reduced accuracy in most cases, while removing less frequent words increased it marginally. The greatest impact is produced by the number of training examples: 81.90% accuracy on the training set is achieved when at least 1,000 records per class are available, and 66.13% when too few records (often fewer than 100 per class) are available for training; these figures hold only for the top three hierarchical levels (803 instead of 14,413 classes). Research limitations: Having to reduce the number of hierarchical levels to the top three levels of DDC, because of the lack of training data for all classes, skews the results so that they hold in experimental conditions but barely for end users in operational retrieval systems. Practical implications: In conclusion, purely automatic DDC assignment does not work for operative information retrieval systems, either using machine learning (because of the lack of training data for the large number of DDC classes) or using the string-matching algorithm (because DDC characteristics support automatic classification well only for a small number of classes). Over time, more training examples may become available, and DDC may be enriched with synonyms in order to enhance the accuracy of automatic classification, which may also benefit information retrieval performance based on DDC. In order for quality information services to reach the objective of the highest possible precision and recall, automatic classification should never be implemented on its own; instead, machine-aided indexing that combines the efficiency of automatic suggestions with the quality of human decisions at the final stage should be the way forward. Originality/value: The study explored machine learning on a large classification system of over 14,000 classes which is used in operational information retrieval systems. Due to the lack of sufficient training data across the entire set of classes, an approach complementing machine learning, that of string matching, was applied. This combination should be explored further since it provides the potential for real-life applications with large target classification systems.
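A minimal version of the best-performing setup reported above (a linear SVM over title and keyword text predicting a truncated DDC class) might look like the following; the records are invented placeholders, not the Swedish collection data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy records: "title keywords" text paired with a top-three-level DDC class.
records = [
    ("algebraic topology homology manifolds", "514"),
    ("swedish folk music fiddle traditions", "781"),
    ("photosynthesis arctic mosses botany", "571"),
    ("medieval scandinavian church architecture", "726"),
    ("linear algebra matrix decompositions", "512"),
    ("choral singing technique voice", "783"),
]
texts, classes = zip(*records)

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(texts, classes)
print(model.predict(["matrix factorization numerical methods"]))  # e.g. ['512']
```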
Funding: Supported by the China Postdoctoral Science Foundation (20080431379).
Abstract: A three-part comb decimator is presented in this paper for applications with severe requirements on circuit performance and frequency response. Based on a modified prime-factorization method and multistage polyphase decomposition, an efficient non-recursive structure for the cascaded integrator-comb (CIC) decimation filter is derived. Using this structure as the core part, the proposed comb decimator can not only loosen the limitation on the decimation ratio but also balance the trade-off among overall power consumption, circuit area, and maximum speed. Further, to improve the frequency response of the comb decimator, a cosine prefilter is introduced as the preprocessing part to increase the aliasing rejection, and an optimum sine-based filter is used as the compensation part to decrease the passband droop.
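A non-recursive comb decimator is the cascade H(z) = [(1 − z⁻ᴿ)/(1 − z⁻¹)]ᴺ, i.e., N boxcar filters of length R, followed by downsampling. The sketch below shows that baseline structure (without the paper's polyphase decomposition, cosine prefilter, or droop compensation); R and N are illustrative.

```python
import numpy as np

def comb_decimate(x, R=8, N=3):
    h = np.ones(R) / R              # one comb stage: length-R moving average
    for _ in range(N):              # cascade N stages: [(1-z^-R)/(1-z^-1)]^N
        x = np.convolve(x, h)
    return x[::R]                   # downsample by the decimation ratio R

fs = 64_000
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 500 * t) + 0.3 * np.sin(2 * np.pi * 9_000 * t)
y = comb_decimate(x)                # the 9 kHz component, which would alias
print(len(x), "->", len(y))         # to 1 kHz, lands near a comb null
```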
Abstract: In this paper, we construct some continuous but non-differentiable functions defined by quinary decimals, which are Kiesswetter-like functions. We discuss their properties, then investigate the Hausdorff dimensions of the graphs of these functions and give a detailed proof.
Abstract: An efficient reconfigurable VLSI architecture for the 1-D 5/3 and 9/7 wavelet transforms adopted in the JPEG2000 proposal, based on the lifting scheme, is proposed. An embedded decimation technique based on folding and time multiplexing, together with an embedded boundary data extension technique, is adopted to optimize the design. These techniques significantly reduce the required numbers of multipliers, adders, and registers, as well as the amount of external memory access, and thereby efficiently decrease the hardware cost and power consumption of the design. The architecture is designed to generate one output per clock cycle, with the detail and approximation components of the input signal available alternately. Experimental simulation and comparison results demonstrate that the proposed architecture has lower hardware complexity and is thus suited for embedded applications. The presented architecture is simple, regular, and scalable, and well suited for VLSI implementation.
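In software, the 1-D 5/3 lifting transform is just a predict step and an update step with integer arithmetic and symmetric boundary extension, as sketched below; this is the standard JPEG2000 5/3 formulation, with the hardware folding and multiplexing of the architecture above left out.

```python
def dwt53(x):
    """One level of the 1-D 5/3 lifting transform (even-length input)."""
    x = list(map(int, x))
    even, odd = x[0::2], x[1::2]
    d = []                                   # predict step -> detail
    for i, o in enumerate(odd):
        right = even[i + 1] if i + 1 < len(even) else even[i]  # mirror edge
        d.append(o - ((even[i] + right) >> 1))
    s = []                                   # update step -> approximation
    for i, e in enumerate(even):
        left = d[i - 1] if i > 0 else d[0]                     # mirror edge
        s.append(e + ((left + d[i] + 2) >> 2))
    return s, d

s, d = dwt53([10, 12, 11, 13, 40, 41, 39, 12])
print("approximation:", s, "\ndetail:", d)
```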
Funding: Jointly supported by the Foundation for Humanities and Social Sciences of the Chinese Ministry of Education (Grant No. 11BTQ007) and the Shanghai Society for Library Science (Grant No. 10BSTX02).
Abstract: Purpose: This study discusses strategies for mapping from Dewey Decimal Classification (DDC) numbers to Chinese Library Classification (CLC) numbers based on co-occurrence mapping, while minimizing manual intervention. Design/methodology/approach: Several statistical tables were created based on frequency counts of the mapping relations, using samples of USMARC records that contain both DDC and CLC numbers. A manual table was created through direct mapping. In order to find reasonable mapping strategies, the mapping results were compared from three aspects: the sample size, the choice between one-to-one and one-to-multiple mapping relations, and the role of a manual mapping table. Findings: A larger sample size provides more DDC numbers in the mapping table. The statistical table including one-to-multiple DDC-CLC relations provides a higher ratio of correct matches than the one including only one-to-one relations. The manual mapping table does not produce better results than the statistical tables; therefore, we should make full use of statistical mapping tables and avoid time-consuming manual mapping as much as possible. Research limitations: All the sample sizes were small, and DDC editions were not considered. One-to-multiple DDC-CLC relations in the records were collected in the mapping table, but how to select the one appropriate CLC number in the matching process needs further study. Practical implications: The ratio of correct matches based on the statistical mapping table reached about 90% for CLC top-level classes and 76% for second-level classes in our study. The statistical mapping table will be improved to realize the automatic classification of e-resources and shorten the cataloging cycle significantly. Originality/value: The mapping results were investigated from different aspects in order to find suitable DDC-to-CLC mapping strategies while minimizing manual intervention. The findings have facilitated the establishment of a DDC-CLC mapping system for practical applications.
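The core co-occurrence counting can be sketched in a few lines: tally DDC-CLC pairs from records that carry both numbers, then keep the CLC candidates ranked by frequency (the one-to-multiple table the study found most effective). The records below are invented placeholders.

```python
from collections import Counter, defaultdict

# (DDC, CLC) pairs as they might be extracted from USMARC records.
pairs = [("510", "O1"), ("510", "O1"), ("510", "N"),
         ("004", "TP3"), ("004", "TP3"), ("780", "J6")]

table = defaultdict(Counter)
for ddc, clc in pairs:                       # build the statistical table
    table[ddc][clc] += 1

# One-to-multiple mapping: CLC candidates ordered by observed frequency.
mapping = {ddc: [clc for clc, _ in cnt.most_common()]
           for ddc, cnt in table.items()}
print(mapping["510"])                        # ['O1', 'N']
```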