Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 11147009, 11347026, and 11244005), the Natural Science Foundation of Shandong Province, China (Grant Nos. ZR2013AM012 and ZR2012AM004), and the Natural Science Foundation of Liaocheng University, China.
Abstract: By extending the usual Weyl transformation to the s-parameterized Weyl transformation, with s a real parameter, we obtain the s-parameterized quantization scheme, which includes P–Q quantization, Q–P quantization, and Weyl ordering as its three special cases. Some operator identities can be derived directly by virtue of the s-parameterized quantization scheme.
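For orientation, a closely related s-parameterized kernel in the boson-operator setting is the Cahill–Glauber operator T(α; s); the sketch below uses a generic convention, and the paper's own normalization and variables may differ:

```latex
% Cahill--Glauber s-parameterized kernel (generic convention; the
% paper's own normalization may differ).
\begin{align}
  T(\alpha;s) &= \int \frac{d^{2}\xi}{\pi}\,
     e^{\alpha\xi^{*}-\alpha^{*}\xi+\frac{s}{2}|\xi|^{2}}\,D(\xi),
  \qquad D(\xi)=e^{\xi a^{\dagger}-\xi^{*}a},\\
  \rho &= \int \frac{d^{2}\alpha}{\pi}\,W(\alpha;-s)\,T(\alpha;s),
  \qquad W(\alpha;s)=\mathrm{Tr}\,[\rho\,T(\alpha;s)].
\end{align}
% s = 1, -1, 0 recover normal (P-function), antinormal (Q-function),
% and Weyl (Wigner) ordering, the analog of the three special cases above.
```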
Funding: Project supported by the President Foundation of the Chinese Academy of Sciences.
Abstract: For a mesoscopic L-C circuit, besides Louisell's quantization scheme, in which the electric charge q and the electric current I are quantized as the coordinate operator Q and the momentum operator P respectively, in this paper we propose a new quantization scheme in the context of number-phase quantization through the standard Lagrangian formalism. A comparison between this number-phase quantization and the Josephson junction's Cooper-pair number-phase-difference quantization scheme is made.
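For context, a minimal sketch of the Louisell-type scheme that the abstract takes as its starting point (standard circuit-quantization material, not the paper's new number-phase construction):

```latex
% Louisell-type quantization of an L-C circuit (standard material).
\begin{align}
  \mathcal{L} &= \tfrac{1}{2}L\dot q^{2} - \frac{q^{2}}{2C},
  \qquad p = \frac{\partial\mathcal{L}}{\partial\dot q} = L\dot q = LI,\\
  H &= \frac{p^{2}}{2L} + \frac{q^{2}}{2C}
  \;\xrightarrow{\;q\to Q,\;p\to P\;}\;
  \frac{P^{2}}{2L} + \frac{Q^{2}}{2C},
  \qquad [Q,P] = i\hbar,
\end{align}
% i.e., a harmonic oscillator with frequency \omega = 1/\sqrt{LC}.
```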
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 10775097 and 10874174).
Abstract: By introducing the s-parameterized generalized Wigner operator into phase-space quantum mechanics, we invent the technique of integration within the s-ordered product of operators (which contains the normally ordered, antinormally ordered, and Weyl ordered products of operators as its special cases). The s-ordered operator expansion formula for density operators, with $s\!\vdots\;\vdots\!s$ denoting the s-ordered product, is derived:
$$\rho=\frac{2}{1-s}\int\frac{d^{2}\beta}{\pi}\,\langle-\beta|\rho|\beta\rangle\;s\!\vdots\,\exp\!\left\{\frac{2}{s-1}\left(s|\beta|^{2}-\beta^{*}a+\beta a^{\dagger}-a^{\dagger}a\right)\right\}\vdots\!s\,,$$
where $|\beta\rangle$ is a coherent state. The s-parameterized quantization scheme is thus completely established.
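As a quick numerical illustration (ours, not the paper's) of the weight function ⟨−β|ρ|β⟩ appearing in the expansion, the sketch below evaluates it in a truncated Fock basis and checks it against the vacuum, for which ⟨−β|0⟩⟨0|β⟩ = e^{−|β|²}:

```python
import numpy as np
from scipy.special import gammaln

# Illustrative sketch: evaluate <-beta|rho|beta>, the weight function in
# the s-ordered expansion formula, in a truncated Fock basis.
N = 40  # Fock-space truncation

def coherent(beta):
    """Fock amplitudes c_n = e^{-|beta|^2/2} beta^n / sqrt(n!)."""
    n = np.arange(N)
    return np.exp(-abs(beta) ** 2 / 2) * beta ** n / np.exp(0.5 * gammaln(n + 1))

def weight(rho, beta):
    """<-beta| rho |beta> for a density matrix rho in the Fock basis."""
    return np.conj(coherent(-beta)) @ rho @ coherent(beta)

rho_vac = np.zeros((N, N), dtype=complex)
rho_vac[0, 0] = 1.0                     # rho = |0><0|
beta = 0.7 + 0.3j
print(weight(rho_vac, beta), np.exp(-abs(beta) ** 2))  # both ~ e^{-|beta|^2}
```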
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 11175113), the Natural Science Foundation of Shandong Province, China (Grant No. Y2008A16), the University Experimental Technology Foundation of Shandong Province, China (Grant No. S04W138), and the Natural Science Foundation of Heze University, Shandong Province, China (Grant Nos. XY07WL01 and XY08WL03).
Abstract: Based on the generalized Weyl quantization scheme, which relies on the generalized Wigner operator O_k(p, q) with a real parameter k and unifies the P–Q, Q–P, and Weyl ordering of operators at k = 1, −1, 0, respectively, we find the mutual transformations between δ(p − P)δ(q − Q), δ(q − Q)δ(p − P), and O_k(p, q), which are, respectively, the integration kernels of the P–Q, Q–P, and generalized Weyl quantization schemes. These mutual transformations provide a new approach to deriving the Wigner functions of quantum states. The normally ordered and Weyl ordered forms of O_k(p, q) are also derived, which help to put operators into their normal and Weyl ordering, respectively.
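As a hedged illustration of what these kernels do (standard correspondence rules stated in a generic convention; any symbol not in the abstract is ours), a classical function h(p, q) is mapped to an operator by integrating it against each kernel:

```latex
% Correspondence rules (generic convention): a classical h(p,q) is
% mapped to an operator by each quantization kernel.
\begin{align}
  H_{PQ} &= \int\! dp\,dq\; h(p,q)\,\delta(p-P)\,\delta(q-Q)
           && \text{(all $P$ to the left),}\\
  H_{QP} &= \int\! dp\,dq\; h(p,q)\,\delta(q-Q)\,\delta(p-P)
           && \text{(all $Q$ to the left),}\\
  H_{k}  &= \int\! dp\,dq\; h(p,q)\,O_{k}(p,q)
           && \text{(generalized Weyl scheme).}
\end{align}
% Example: h = qp yields PQ, QP, and -- at k = 0 (Weyl) -- (QP+PQ)/2.
```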
Abstract: A speech compression coding method based on the Wavelet Transform and Vector Quantization (VQ) is developed and studied. The Wavelet Transform or Wavelet Packet Transform is used to process the speech signal, VQ is then used to compress the transform coefficients, and entropy coding is used to decrease the bit rate. Experimental results show that a speech signal sampled at an 8 kHz sampling rate with 8-bit quantization, i.e., a 64 kbit/s bit rate, can be compressed to 6–8 kbit/s while still maintaining high speech quality and a low delay of only 8 ms.
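A minimal sketch of the described pipeline (ours, for illustration; the wavelet choice, vector dimension, and codebook size are arbitrary assumptions rather than the authors' settings), using PyWavelets and SciPy's k-means as a stand-in vector quantizer:

```python
import numpy as np
import pywt                            # PyWavelets
from scipy.cluster.vq import kmeans2   # stand-in vector quantizer

fs = 8000
x = np.random.randn(2 * fs)            # stand-in for 2 s of 8 kHz speech

# 1. Wavelet (or wavelet packet) analysis of the signal.
coeffs = pywt.wavedec(x, "db4", level=4)
flat = np.concatenate(coeffs)

# 2. Vector quantization: group coefficients into 4-dim vectors and map
#    each to the nearest of 64 codebook entries (6 bits per vector).
vecs = flat[: len(flat) // 4 * 4].reshape(-1, 4)
codebook, labels = kmeans2(vecs, 64, minit="++")

# 3. The entropy of the index stream bounds the rate after entropy coding.
p = np.bincount(labels, minlength=64) / len(labels)
H = -np.sum(p[p > 0] * np.log2(p[p > 0]))
print(f"~{H / 4:.2f} bits/coefficient after VQ + entropy coding")
```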
Funding: Supported by the China Postdoctoral Science Foundation under Grant No. 2022M721707, the National Natural Science Foundation of China under Grant Nos. 62002175 and 62272248, the Special Funding for Excellent Enterprise Technology Correspondent of Tianjin under Grant No. 21YDTPJC00380, and the Open Project Foundation of the Information Security Evaluation Center of Civil Aviation, Civil Aviation University of China, under Grant No. ISECCA-202102.
Abstract: Exploring a suitable quantizing scheme with a suitable mixed-precision policy is the key to compressing deep neural networks (DNNs) with high efficiency and accuracy. This exploration implies heavy workloads for domain experts, so an automatic compression method is needed. However, the huge search space of automatic methods imposes a large computing budget, which makes the automatic process challenging to apply in real scenarios. In this paper, we propose an end-to-end framework named AutoQNN for automatically quantizing different layers with different schemes and bitwidths, without any human labor. AutoQNN can efficiently seek desirable quantizing schemes and mixed-precision policies for mainstream DNN models by combining three techniques: quantizing scheme search (QSS), quantizing precision learning (QPL), and quantized architecture generation (QAG). QSS introduces five quantizing schemes and defines three new schemes as a candidate set for scheme search, and then uses the Differentiable Neural Architecture Search (DNAS) algorithm to seek the desired per-layer or per-model scheme from the set. QPL is, to the best of our knowledge, the first method to learn mixed-precision policies by reparameterizing the bitwidths of quantizing schemes; it efficiently optimizes both the classification loss and the precision loss of DNNs and obtains a near-optimal mixed-precision model within a limited model size and memory footprint. QAG converts arbitrary architectures into corresponding quantized ones without manual intervention, enabling end-to-end neural network quantization. We have implemented AutoQNN and integrated it into Keras. Extensive experiments demonstrate that AutoQNN consistently outperforms state-of-the-art quantization methods. For 2-bit weights and activations of AlexNet and ResNet18, AutoQNN achieves accuracies of 59.75% and 68.86%, respectively, improvements of up to 1.65% and 1.74% over state-of-the-art methods. Notably, compared with the full-precision AlexNet and ResNet18, the 2-bit models incur accuracy degradation of only 0.26% and 0.76%, respectively, which can fulfill practical application demands.
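To make QPL's reparameterization idea concrete, here is a minimal Keras-style sketch (ours; the layer name, candidate bitwidths, uniform quantizer, and straight-through estimator are assumptions, not AutoQNN's actual implementation) that learns a soft mixture over candidate bitwidths:

```python
import tensorflow as tf

class LearnedBitwidthQuant(tf.keras.layers.Layer):
    """Soft, differentiable choice among candidate bitwidths (sketch)."""

    def __init__(self, candidates=(2, 4, 8), **kwargs):
        super().__init__(**kwargs)
        self.candidates = candidates

    def build(self, input_shape):
        # One logit per candidate bitwidth; softmax gives mixing weights.
        self.logits = self.add_weight(
            name="bit_logits", shape=(len(self.candidates),),
            initializer="zeros", trainable=True)

    @staticmethod
    def _quantize(x, bits):
        # Uniform quantizer on [-1, 1] with a straight-through estimator.
        levels = 2.0 ** bits - 1.0
        xc = tf.clip_by_value(x, -1.0, 1.0)
        q = tf.round((xc + 1.0) / 2.0 * levels) / levels * 2.0 - 1.0
        return xc + tf.stop_gradient(q - xc)  # gradient flows through xc

    def call(self, x):
        w = tf.nn.softmax(self.logits)
        # Differentiable mixture over candidate precisions.
        return tf.add_n([w[i] * self._quantize(x, b)
                         for i, b in enumerate(self.candidates)])
```

In such a setup, the expected bitwidth Σᵢ wᵢbᵢ can be penalized as a differentiable proxy for the precision loss, and only the argmax candidate is kept at the end of training.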
Funding: Supported by the National Natural Science Foundation of China under Contract Nos. 11435001 and 11775041, and the National Key Basic Research Program of China under Contract Nos. G2013CB834400 and 2015CB856900.
Abstract: We propose a modified version of the Faddeev–Popov (FP) quantization approach for non-Abelian gauge field theory that avoids the Gribov ambiguity. We show that by introducing a new method of inserting the correct identity into the Yang–Mills generating functional, with the identity generated by an integral over a subgroup of the gauge group, the problem of Gribov ambiguity can be removed naturally. Meanwhile, by handling the absolute value of the FP determinant with the method introduced by Williams and collaborators, we lift the Jacobian determinant together with the absolute value and obtain a local Lagrangian. The new Lagrangian has a nilpotent symmetry, which can be viewed as an analog of the Becchi–Rouet–Stora–Tyutin symmetry.
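For orientation, the textbook FP insertion that the paper modifies (schematic; in the paper the identity is instead generated by an integral over a subgroup of the gauge group, and the absolute value of the determinant is treated as described):

```latex
% Standard FP construction (background, stated schematically).
\begin{align}
  1 &= \Delta_{FP}[A]\int \mathcal{D}g\;\delta\bigl(G(A^{g})\bigr),
  \qquad
  \Delta_{FP}[A]=\det\!\left(\frac{\delta G(A^{g})}{\delta g}\right),\\
  Z &= \int \mathcal{D}A\;\Delta_{FP}[A]\,\delta\bigl(G(A)\bigr)\,
       e^{iS_{YM}[A]} .
\end{align}
% Gribov ambiguity: G(A^{g}) = 0 can admit several solutions g along a
% gauge orbit, so the inserted "identity" over-counts; restricting the
% group integral, as in the paper, is one way to address this.
```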