Abstract: In this work, a power-efficient butterfly-unit-based FFT architecture is presented. The butterfly unit is designed using floating-point fused arithmetic units: a two-term dot-product unit and an add-subtract unit, both operating on complex data values. A modified fused floating-point two-term dot product and an enhanced model of the Radix-4 FFT butterfly unit are proposed. The modified fused two-term dot product is built around a Radix-16 Booth multiplier, which reduces both switching activity and area compared with the Radix-8 Booth multiplier used in the existing system. The proposed architecture is implemented efficiently for a Radix-4 decimation-in-time (DIT) FFT butterfly with the two floating-point fused arithmetic units. The enhanced architecture is synthesized, implemented, and placed and routed on an FPGA device using the Xilinx ISE tool. The Radix-4 DIT fused floating-point FFT butterfly requires 50.17% less area and 12.16% less power than the existing methods, and the proposed enhanced model requires 49.82% less area on the FPGA device than the proposed design. Power consumption is further reduced through a reusability technique, which yields an 11.42% power reduction for the enhanced model compared with the proposed design.
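To make the data flow concrete, the sketch below models the Radix-4 DIT butterfly in Python, expressing each complex (twiddle) multiplication as two two-term dot products, the operation one fused hardware unit performs, followed by an add/subtract stage. The function names and the software framing are illustrative assumptions; the paper describes an FPGA hardware design, not software.

```python
import cmath

def fused_dot2(a, b, c, d):
    """Two-term dot product a*b + c*d: one fused floating-point unit in hardware."""
    return a * b + c * d

def cmul(x, w):
    """Complex multiply built from two fused two-term dot products."""
    re = fused_dot2(x.real, w.real, -x.imag, w.imag)   # Re = xr*wr - xi*wi
    im = fused_dot2(x.real, w.imag, x.imag, w.real)    # Im = xr*wi + xi*wr
    return complex(re, im)

def radix4_dit_butterfly(x0, x1, x2, x3, k=0, N=4):
    """Radix-4 DIT butterfly: twiddle-rotate inputs, then an add/subtract network."""
    w = lambda p: cmath.exp(-2j * cmath.pi * p * k / N)  # twiddle factor W_N^(pk)
    a, b, c, d = x0, cmul(x1, w(1)), cmul(x2, w(2)), cmul(x3, w(3))
    # add/subtract stage (the fused add-subtract unit produces sum and difference together)
    s02, d02 = a + c, a - c
    s13, d13 = b + d, b - d
    return (s02 + s13, d02 - 1j * d13, s02 - s13, d02 + 1j * d13)
```

With k = 0 and N = 4 the butterfly reduces to a plain 4-point DFT, which gives a quick correctness check against the textbook transform.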
Funding: This paper is based on a talk given in Nanjing, P. R. China, July 2004.
Abstract: We present a constructive generalization of the Abel-Gontscharoff series expansion to higher dimensions. A constructive application to a problem of multivariate interpolation is also investigated. In addition, two algorithms for constructing the basis functions of the interpolants are given.
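For reference, the classical univariate case underlying this generalization can be recalled as follows: with the equally spaced nodes z_n = an, the Abel-Gontscharoff expansion reduces to Abel's series, a formal expansion valid for suitable entire functions f (the notation here is standard, not taken from the paper):

```latex
f(x) = f(0) + \sum_{n=1}^{\infty} \frac{f^{(n)}(an)}{n!}\, x\,(x - an)^{n-1}
```

The multivariate theory replaces the nodes an by configurations of points in higher dimensions, which is where the constructive basis-function algorithms of the paper come in.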
Abstract: The dot product of basis vectors on the super-surface of constraints of the nonlinear non-holonomic space, together with the Mesherskii equations, may serve as the fundamental dynamical equations of a mechanical system with variable mass. These are very simple and convenient for computation. From these known equations, the equations of Chaplygin, Nielsen, Appell, Mac-Millan et al. are derived; it is unnecessary to introduce the definition of Appell-Chetaev or of Niu Qinping for the virtual displacement. These results are compatible with the D'Alembert-Lagrange principle.
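For context, the Mesherskii (Meshchersky) equation for a point of variable mass, which the abstract takes as a starting point, has the familiar form (notation assumed here, not taken from the paper):

```latex
m\,\frac{d\mathbf{v}}{dt} = \mathbf{F} + \frac{dm}{dt}\,\mathbf{u}
```

where m(t) is the instantaneous mass, F is the external force, and u is the velocity of the ejected (or accreted) mass relative to the body; the term (dm/dt)u is the reactive (thrust) force.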
Abstract: Spam emails pose a threat to individuals. The daily proliferation of spam emails has rendered traditional machine learning and deep learning methods for screening them ineffective and inefficient. In our research, we employ deep neural networks such as RNN, LSTM, and GRU, incorporating attention mechanisms including Bahdanau attention, scaled dot-product (SDP) self-attention, and Luong attention for spam email filtering. We evaluate our approach on several publicly available datasets, including the Trec spam, Enron spam, SMS spam collection, and Ling spam datasets, which together constitute a substantial custom dataset. For the Enron dataset, we attain an accuracy of 99.97% using LSTM with SDP self-attention. Our custom dataset exhibits the highest accuracy, 99.01%, when employing GRU with SDP self-attention. The SMS spam collection dataset yields a peak accuracy of 99.61% with LSTM and SDP attention. Using GRU (Gated Recurrent Unit) alongside the Luong and SDP attention mechanisms, we achieve a peak accuracy of 99.89% on the Ling spam dataset. For the Trec spam dataset, the most accurate results are achieved using LSTM with Luong attention, at 99.01%. Our performance analyses consistently indicate that the scaled dot-product attention mechanism combined with gated recurrent units (GRU) delivers the most effective results. In summary, our research underscores the efficacy of advanced deep learning techniques and attention mechanisms for spam email filtering, with remarkable accuracy across multiple datasets. This approach presents a promising solution to the ever-growing problem of spam emails.
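The scaled dot-product attention the abstract refers to can be sketched in a few lines of plain Python: each query is scored against every key by a dot product, the scores are scaled by the square root of the key dimension and softmaxed, and the values are averaged with those weights. This is a minimal pedagogical sketch of the standard mechanism, not the authors' implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: lists of vectors (lists of floats); returns one output row per query."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # score each key against the query, scaled by sqrt(d_k)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)
        # weighted average of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

In the spam-filtering pipeline described above, Q, K, and V would all come from the GRU/LSTM hidden states (self-attention), and the attended representation feeds the final spam/ham classifier.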
Abstract: Let m and n be fixed positive integers and P a space of real polynomials in m variables. The authors study functions f : R → R which map Gram matrices, based upon n points of R^m, into matrices which are nonnegative definite with respect to P. Among other things, the authors discuss continuity, differentiability, convexity, and convexity in the sense of Jensen of such functions.
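To make the objects concrete: a Gram matrix of n points in R^m has entries G[i][j] = <x_i, x_j>, and the functions studied act entrywise on such matrices. The sketch below builds a Gram matrix and tests nonnegative definiteness with a tolerant Cholesky-style elimination; the helper names are illustrative, and positivity "with respect to P" in the paper is a more general notion than the ordinary PSD check shown here.

```python
def gram_matrix(points):
    """Gram matrix G[i][j] = <x_i, x_j> for points x_i in R^m."""
    return [[sum(a * b for a, b in zip(x, y)) for y in points] for x in points]

def apply_entrywise(f, M):
    """Apply a scalar function f : R -> R to every entry of M."""
    return [[f(v) for v in row] for row in M]

def is_psd(M, tol=1e-9):
    """Nonnegative definiteness of a symmetric matrix via Schur-complement elimination."""
    n = len(M)
    A = [row[:] for row in M]
    for k in range(n):
        d = A[k][k]
        if d < -tol:
            return False          # negative pivot: not PSD
        if d <= tol:
            # zero pivot: the rest of its row must vanish too, or M is indefinite
            if any(abs(A[k][j]) > tol ** 0.5 for j in range(k + 1, n)):
                return False
            continue
        for i in range(k + 1, n):  # eliminate: pass to the Schur complement
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j] / d
    return True
```

For instance, squaring each entry of a Gram matrix preserves nonnegative definiteness (a special case of the Schur product theorem), which is the kind of entrywise behavior the paper characterizes.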
Funding: Supported by the Fundamental Research Funds for the Central Universities (Grant No. NZ2015106), the National Natural Science Foundation of China (Grant Nos. 11471106 and 11371133), and the Natural Science Foundation of Hunan (Grant No. 14JJ2043).
Abstract: In this paper, we prove that for every integer n ≥ 1 there exists a Petersen power Pn with nonorientable genus and Euler genus precisely n. This improves the upper bound in Mohar and Vodopivec's result [J. Graph Theory, 67, 1-8 (2011)] that for every integer k (2 ≤ k ≤ n-1) a Petersen power Pn exists with nonorientable genus and Euler genus precisely k.