In previous works, the theoretical and experimental deterministic scalar kinematic structures, the theoretical and experimental deterministic vector kinematic structures, the theoretical and experimental deterministic scalar dynamic structures, and the theoretical and experimental deterministic vector dynamic structures have been developed to compute the exact solution for deterministic chaos of the exponential pulsons and oscillons that is governed by the nonstationary three-dimensional Navier-Stokes equations. To explore properties of the kinetic energy, rectangular, diagonal, and triangular summations of a matrix of the kinetic energy and general terms of various sums have been used in the current paper to develop quantization of the kinetic energy of deterministic chaos. Nested structures of a cumulative energy pulson, an energy pulson of propagation, an internal energy oscillon, a diagonal energy oscillon, and an external energy oscillon have been established. In turn, the energy pulsons and oscillons include group pulsons of propagation, internal group oscillons, diagonal group oscillons, and external group oscillons. Sequentially, the group pulsons and oscillons contain wave pulsons of propagation, internal wave oscillons, diagonal wave oscillons, and external wave oscillons. Consecutively, the wave pulsons and oscillons are composed of elementary pulsons of propagation, internal elementary oscillons, diagonal elementary oscillons, and external elementary oscillons. Topology, periodicity, and integral properties of the exponential pulsons and oscillons have been studied using the novel method of the inhomogeneous Fourier expansions via eigenfunctions in coordinates and time.
Symbolic computations of the exact expansions have been performed using the experimental and theoretical programming in Maple. Results of the symbolic computations have been justified by probe visualizations.
The uncertainty principle is a fundamental principle of quantum mechanics, but its exact mathematical expression cannot obtain correct results when used to solve theoretical problems such as the energy levels of hydrogen atoms, one-dimensional deep potential wells, one-dimensional harmonic oscillators, and double-slit experiments. Even after approximate treatment, the results obtained are not completely consistent with those obtained by solving Schrödinger’s equation. This indicates that further research on the uncertainty principle is necessary. Therefore, using the de Broglie matter wave hypothesis, we quantize the action of an elementary particle in natural coordinates and obtain the quantization condition and a new deterministic relation. Using this quantization condition, we obtain the energy level formulas of an elementary particle in different conditions in a classical way that is completely consistent with the results obtained by solving Schrödinger’s equation. A new physical interpretation is given for the particle eigenfunction independence of probability for an elementary particle: an elementary particle is in a particle state at the space-time point where the action is quantized, and in a wave state in the rest of the space-time region. The space-time points of particle nature and the wave regions of particle motion constitute the continuous trajectory of particle motion. When an elementary particle is in a particle state, it is localized, whereas in the wave state region, it is nonlocalized.
The quantization algorithm compresses the original network by reducing the numerical bit width of the model, which improves the computation speed. Because different layers have different redundancy and sensitivity to data bit width, reducing the bit width results in a loss of accuracy, and it is therefore difficult to determine the optimal bit width for different parts of the network with guaranteed accuracy. Mixed precision quantization can effectively reduce the amount of computation while keeping the model accuracy basically unchanged. In this paper, a hardware-aware mixed precision quantization strategy optimal assignment algorithm adapted to low bit width is proposed, and reinforcement learning is used to automatically predict the mixed precision that meets the constraints of hardware resources. In the state-space design, the standard deviation of weights is used to measure the distribution difference of data, the execution speed feedback of simulated neural network accelerator inference is used as the environment to limit the action space of the agent, and the accuracy of the quantization model after retraining is used as the reward function to guide the agent through deep reinforcement learning training. The experimental results show that the proposed method obtains a suitable layer-by-layer quantization strategy under the condition that the computational resources are satisfied, and the model accuracy is effectively improved. The proposed method is highly automated and fairly general, with strong application potential in the fields of mixed precision quantization and embedded neural network model deployment.
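The state-space feature described above, the standard deviation of each layer's weights, can be illustrated with a toy bit-allocation heuristic. This sketch is not the paper's reinforcement learning agent; the bucketing rule and the candidate bit widths are assumptions made purely for illustration:

```python
import statistics

def layer_bit_widths(layer_weights, bit_choices=(2, 4, 8)):
    """Assign higher bit widths to layers whose weights spread more.

    layer_weights: dict mapping layer name -> flat list of weights.
    Returns dict mapping layer name -> chosen bit width.
    """
    stds = {name: statistics.pstdev(w) for name, w in layer_weights.items()}
    lo, hi = min(stds.values()), max(stds.values())
    assignment = {}
    for name, s in stds.items():
        # Normalize the std into [0, 1] and pick a bit-width bucket:
        # layers with wider weight distributions get more bits.
        t = 0.0 if hi == lo else (s - lo) / (hi - lo)
        idx = min(int(t * len(bit_choices)), len(bit_choices) - 1)
        assignment[name] = bit_choices[idx]
    return assignment
```

In the paper this signal is only one feature of the RL state; an agent trained with accuracy as the reward would refine such a crude per-layer ranking.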
Imbalanced datasets are common in practical applications, and oversampling methods using fuzzy rules have been shown to enhance the classification performance of imbalanced data by taking into account the relationships between data attributes. However, the creation of fuzzy rules typically depends on expert knowledge, which may not fully leverage the label information in the training data and may be subjective. To address this issue, a novel fuzzy rule oversampling approach is developed based on the learning vector quantization (LVQ) algorithm. In this method, the label information of the training data is utilized to determine the antecedent part of If-Then fuzzy rules by dynamically dividing attribute intervals using LVQ. Subsequently, fuzzy rules are generated and adjusted to calculate rule weights. The number of new samples to be synthesized for each rule is then computed, and samples from the minority class are synthesized based on the newly generated fuzzy rules. This results in the establishment of a fuzzy rule oversampling method based on LVQ. To evaluate the effectiveness of this method, comparative experiments are conducted on 12 publicly available imbalanced datasets against five other sampling techniques, each combined with a support vector machine. The experimental results demonstrate that the proposed method significantly enhances the classification algorithm across seven performance indicators, including a boost of 2.15% to 12.34% in accuracy, 6.11% to 27.06% in G-mean, and 4.69% to 18.78% in AUC. These results show that the proposed method can more efficiently improve the classification performance of imbalanced data.
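The LVQ step at the heart of this method can be sketched with the classic LVQ1 update rule, which pulls the nearest prototype toward a sample of the same class and pushes it away otherwise. This is a generic textbook sketch, not the paper's exact interval-division procedure:

```python
def lvq1_step(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 update on a set of labeled prototype vectors.

    prototypes: list of feature vectors (lists of floats).
    labels: class label of each prototype.
    x, y: training sample and its label; lr: learning rate.
    Returns the index of the updated (nearest) prototype.
    """
    # Find the prototype closest to x (squared Euclidean distance).
    def sqdist(p):
        return sum((pi - xi) ** 2 for pi, xi in zip(p, x))
    k = min(range(len(prototypes)), key=lambda i: sqdist(prototypes[i]))
    # Attract on a label match, repel on a mismatch.
    sign = 1.0 if labels[k] == y else -1.0
    prototypes[k] = [pi + sign * lr * (xi - pi)
                     for pi, xi in zip(prototypes[k], x)]
    return k
```

After training, the converged prototype positions induce the attribute-interval boundaries from which the If-Then rule antecedents are built.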
An aperture design technique using multi-step amplitude quantization for two-dimensional solid-state active phased arrays to achieve low sidelobe is described. It can be applied to antennas with arbitrary complex aperture. Also, the gain drop and sidelobe degradation due to random amplitude and phase errors and element (or T/R module) failures are investigated.
The demand for adopting neural networks in resource-constrained embedded devices is continuously increasing. Quantization is one of the most promising solutions to reduce computational cost and memory usage on embedded devices. In order to reduce the complexity and overhead of deploying neural networks on integer-only hardware, most current quantization methods use a symmetric quantization mapping strategy to quantize a floating-point neural network into an integer network. However, although symmetric quantization has the advantage of easier implementation, it is sub-optimal for cases where the range is skewed rather than symmetric, which often comes at the cost of lower accuracy. This paper proposes an activation redistribution-based hybrid asymmetric quantization method for neural networks. The proposed method takes the data distribution into consideration and can resolve the contradiction between quantization accuracy and ease of implementation, balance the trade-off between clipping range and quantization resolution, and thus improve the accuracy of the quantized neural network. The experimental results indicate that the accuracy of the proposed method is 2.02% and 5.52% higher than the traditional symmetric quantization method for classification and detection tasks, respectively. The proposed method paves the way for computationally intensive neural network models to be deployed on devices with limited computing resources. Code will be available at https://github.com/ycjcy/Hybrid-Asymmetric-Quantization.
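The symmetric-versus-asymmetric trade-off the abstract describes can be made concrete with a minimal uniform quantizer. This is a generic sketch, not the paper's hybrid method; it only shows why an asymmetric mapping wastes fewer levels when the data range is skewed (as with post-ReLU activations):

```python
def quantize(xs, bits=8, symmetric=True):
    """Uniformly quantize a list of floats and return the dequantized values."""
    if symmetric:
        # Signed levels centered at zero, e.g. [-127, 127] for 8 bits.
        qmax = 2 ** (bits - 1) - 1
        scale = max(abs(min(xs)), abs(max(xs))) / qmax
        q = [max(-qmax, min(qmax, round(x / scale))) for x in xs]
        return [qi * scale for qi in q]
    # Asymmetric: unsigned levels [0, 2^bits - 1] spread over [lo, hi].
    lo, hi = min(xs), max(xs)
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels
    zero_point = round(-lo / scale)
    q = [max(0, min(levels, round(x / scale) + zero_point)) for x in xs]
    return [(qi - zero_point) * scale for qi in q]
```

For skewed inputs such as [0, 1, 2, 4], the symmetric mapping reserves half its levels for negative values that never occur, so its reconstruction error is larger than the asymmetric one's at the same bit width.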
The quantization thermal excitation isotherms based on the maximum triad spin number (G) of each energy level for a metal cluster were derived as a function of temperature by expanding the binomial theorems according to energy levels. From them the quantized geometric mean heat capacity equations are expressed in sequence. Among them, five quantized geometric heat capacity equations fit best the experimental heat capacity data of metal atoms at constant pressure. In the derivations we assume that the triad spin composed of an electron, its proton, and its neutron in a metal cluster becomes a basic unit of thermal excitation. The Boltzmann constant (kB) is found to be the average specific heat of an energy level in a metal cluster, and the constant (kK) is found to be the average specific heat of a photon in a metal cluster. The core triad spin made of free neutrons may exist as an additional second energy level. The energy levels are grouped according to the forms of four spins throughout two axes. The Planck constant is theoretically obtained from the ratio of the internal energy of the metal (U) to the total isotherm number (N) through the equipartition theorem.
We compare two action integrals and identify the Lagrangian multiplier as setting up a constraint equation (on cosmological expansion). This is a direct result of the fourth equation of our manuscript, which unconventionally compares the action integral of general relativity with the second derived action integral, which then permits Equation (5), a bound on the cosmological constant. What we have done is to replace the Hamber quantum gravity reference-based action integral with a result from John Klauder’s “Enhanced Quantization”. In doing so, with Padmanabhan’s treatment of the inflaton, we then initiate an explicit bound upon the cosmological constant. The other approximation is to use the inflaton results and conflate them with John Klauder’s action principle: with the idea of a potential well, generalized by Klauder, with a wall of space-time in the pre-Planckian regime, we ask what bounds the cosmological constant prior to inflation, and we obtain an upper bound on the mass of a graviton. We conclude with a redo of a multiverse version of the Penrose cyclic conformal cosmology. Our objective is to show how a value of the rest mass of the heavy graviton is invariant from cycle to cycle. All this is possible due to Equation (4). We compare all these with the results of Reference [1] in the conclusion, while showing their relevance to early-universe production of black holes, with a volume of space producing 100 black holes of about 10^2 times the Planck mass. Initially evaluated in a space-time of about 10^3 Planck lengths in spherical extent, we assume a starting entropy of about 1000.
The recently developed magic-intensity trapping technique for neutral atoms efficiently mitigates the detrimental effect of light shifts on atomic qubits and substantially enhances the coherence time. This technique relies on applying a bias magnetic field precisely parallel to the wave vector of a circularly polarized trapping laser field. However, due to the vector light shift experienced by the trapped atoms, it is challenging to precisely define a parallel magnetic field, especially at low bias magnetic field strength, for the magic-intensity trapping of 85Rb qubits. In this work, we present a method to calibrate the angle between the bias magnetic field and the trapping laser field using compensating magnetic fields in the two directions orthogonal to the bias magnetic field direction. Experimentally, with a constant-depth trap and a fixed bias magnetic field, we measure the respective resonant frequencies of the atomic qubits in a linearly polarized trap and a circularly polarized one via conventional microwave Rabi spectra with different compensating magnetic fields, and we obtain the corresponding total magnetic fields via the respective resonant frequencies using the Breit-Rabi formula. With known total magnetic fields, the angle is a function of the two compensating magnetic fields. Finally, the projection of the angle on either of the directions orthogonal to the bias magnetic field direction can be reduced to 0(4)° by applying specific compensating magnetic fields. The measurement error is mainly attributed to the fluctuation of the atomic temperature. Moreover, we demonstrate that, even for a small angle, the effect is strong enough to cause large decoherence of Rabi oscillations in a magic-intensity trap. Although the compensation method demonstrated here was developed for the magic-intensity trapping technique, it can be applied to a variety of similar precision measurements with trapped neutral atoms.
In this paper, we investigate network-assisted full-duplex (NAFD) cell-free millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) systems with digital-to-analog converter (DAC) quantization and fronthaul compression. We propose to maximize the weighted uplink and downlink sum rate by jointly optimizing the power allocation of both the transmitting remote antenna units (T-RAUs) and uplink users and the variances of the downlink and uplink fronthaul compression noises. To deal with this challenging problem, we apply a successive convex approximation (SCA) method to handle the non-convex bidirectional limited-capacity fronthaul constraints. The simulation results verify the convergence of the proposed SCA-based algorithm and show the impact of fronthaul capacity and DAC quantization on the spectral efficiency of NAFD cell-free mmWave massive MIMO systems. Moreover, some insightful conclusions are obtained from the comparisons of spectral efficiency, which show that NAFD achieves better performance gains than co-time co-frequency full-duplex cloud radio access network (CCFD C-RAN) in the case of practical limited-resolution DACs. Specifically, the performance gap with 8-bit DAC quantization is larger than that with 1-bit DAC quantization, attaining a 5.5-fold improvement.
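The sensitivity to DAC resolution reported above is consistent with the standard additive quantization noise model, under which an ideal b-bit uniform quantizer driven by a full-scale sinusoid yields a signal-to-quantization-noise ratio of about 6.02b + 1.76 dB. This is a textbook sketch of that model, not the paper's system model:

```python
import math

def dac_sqnr_db(bits):
    """SQNR of an ideal uniform b-bit quantizer for a full-scale sinusoid."""
    full_scale = 2.0                          # peak-to-peak range, arbitrary
    step = full_scale / 2 ** bits             # quantization step size
    noise_var = step ** 2 / 12.0              # uniform quantization noise power
    signal_var = (full_scale / 2) ** 2 / 2.0  # sinusoid power A^2 / 2
    return 10 * math.log10(signal_var / noise_var)
```

Each extra DAC bit buys roughly 6 dB of quantization SNR, which is why 8-bit DACs open up a much larger performance gap than 1-bit DACs in the comparison above.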
A new steganographic method by pixel-value differencing (PVD) using general quantization ranges of pixel pairs’ difference values is proposed. The objective of this method is to provide a data embedding technique with a range table with range widths not limited to powers of 2, extending PVD-based methods to enhance their flexibility and data-embedding rates without changing their capabilities to resist security attacks. Specifically, the conventional PVD technique partitions a grayscale image into 1×2 non-overlapping blocks. The entire range [0, 255] of all possible absolute values of the pixel pairs’ grayscale differences in the blocks is divided into multiple quantization ranges. The width of each quantization range is a power of two to facilitate the direct embedding of the bit information with high embedding rates. Without using power-of-two range widths, the embedding rates can drop using conventional embedding techniques. In contrast, the proposed method uses general quantization range widths, and a multiple-based number conversion mechanism is employed skillfully to implement the use of non-power-of-two range widths, with each pixel pair being employed to embed a digit in the multiple-based number. All the message bits are converted into a big multiple-based number whose digits can be embedded into the pixel pairs with a higher embedding rate. Good experimental results showed the feasibility of the proposed method and its resistance to security attacks. In addition, implementation examples are provided, where the proposed method adopts non-power-of-two range widths and employs multiple-based number conversion to expand the data-hiding and steganalysis-resisting capabilities of other PVD methods.
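The multiple-based (mixed-radix) number conversion can be sketched as follows: the message bits form one large integer, which is decomposed into digits where digit i ranges over the width of quantization range i. The pixel-pair embedding itself is omitted, and the function names are illustrative, not the paper's:

```python
def bits_to_mixed_radix(bits, widths):
    """Convert a bit string into digits of a multiple-based number,
    one digit per pixel pair, where widths[i] is the (not necessarily
    power-of-two) quantization range width used by pair i."""
    n = int(bits, 2)            # the whole message as one integer
    digits = []
    for w in widths:            # least-significant digit first
        digits.append(n % w)
        n //= w
    assert n == 0, "not enough pixel pairs for this message"
    return digits

def mixed_radix_to_bits(digits, widths, nbits):
    """Inverse conversion, as used at extraction time."""
    n = 0
    for d, w in zip(reversed(digits), reversed(widths)):
        n = n * w + d
    return format(n, f"0{nbits}b")
```

Because the capacity is the product of the widths rather than a product of powers of two, non-power-of-two range widths lose no embedding capacity under this conversion.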
Massive computational complexity and the memory requirements of artificial intelligence models impede their deployability on edge computing devices of the Internet of Things (IoT). While Power-of-Two (PoT) quantization has been proposed to improve the efficiency of edge inference for Deep Neural Networks (DNNs), existing PoT schemes require a huge amount of bit-wise manipulation and have large memory overhead, and their efficiency is bounded by the bottleneck of computation latency and memory footprint. To tackle this challenge, we present an efficient inference approach on the basis of PoT quantization and model compression. An integer-only scalar PoT quantization (IOS-PoT) is designed jointly with a distribution-loss regularizer, wherein the regularizer minimizes quantization errors and training disturbances. Additionally, two-stage model compression is developed to effectively reduce the memory requirement and alleviate bandwidth usage in communications of networked heterogeneous learning systems. The product look-up table (P-LUT) inference scheme is leveraged to replace bit-shifting with only indexing and addition operations, achieving low-latency computation and enabling efficient edge accelerators. Finally, comprehensive experiments on Residual Networks (ResNets) and efficient architectures with the Canadian Institute for Advanced Research (CIFAR), ImageNet, and Real-world Affective Faces Database (RAF-DB) datasets indicate that our approach achieves a 2× to 10× reduction in both weight size and computation cost in comparison to state-of-the-art methods. A P-LUT accelerator prototype is implemented on the Xilinx KV260 Field Programmable Gate Array (FPGA) platform for accelerating convolution operations, with performance results showing that P-LUT reduces the memory footprint by 1.45× and achieves more than 3× power efficiency and 2× resource efficiency compared to the conventional bit-shifting scheme.
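The basic PoT mapping that such schemes build on rounds each weight to the nearest signed power of two, so that multiplication reduces to a shift (or, in the P-LUT scheme, a table lookup). A minimal sketch with an assumed exponent range, not the paper's IOS-PoT formulation:

```python
import math

def pot_quantize(w, min_exp=-6, max_exp=0):
    """Round a weight to the nearest signed power of two, or to zero.

    The representable values are {0, +/-2^min_exp, ..., +/-2^max_exp};
    the exponent range here is an illustrative assumption.
    """
    if w == 0:
        return 0.0
    sign = 1.0 if w > 0 else -1.0
    # Nearest exponent in the log domain, clamped to the allowed range.
    e = round(math.log2(abs(w)))
    e = max(min_exp, min(max_exp, e))
    # Map weights below half the smallest level to zero.
    return 0.0 if abs(w) < 2.0 ** min_exp / 2 else sign * 2.0 ** e
```

Storing only the sign and the small integer exponent is what makes PoT weights cheap in memory, while the exponent doubles as a direct index into a product look-up table.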
Linear temporal logic (LTL) is an intuitive and expressive language to specify complex control tasks, and how to design an efficient control strategy for an LTL specification is still a challenge. In this paper, we implement the dynamic quantization technique to propose a novel hierarchical control strategy for nonlinear control systems under LTL specifications. Based on the regions of interest involved in the LTL formula, an accepting path is derived first to provide a high-level solution for the controller synthesis problem. Second, we develop a dynamic quantization based approach to verify the realization of the accepting path. The realization verification establishes the necessity of the controller design and yields a sequence of quantization regions for the controller design. Third, the techniques of dynamic quantization and abstraction-based control are combined to establish the local-to-global control strategy. Both abstraction construction and controller design are local and dynamic, thereby resulting in a potential reduction of the computational complexity. Since each quantization region can be considered locally and individually, the proposed hierarchical mechanism is more efficient and can solve much larger problems than many existing methods. Finally, the proposed control strategy is illustrated via two examples from the path planning and tracking problems of mobile robots.
What does it mean to study PDE (Partial Differential Equations)? How and what should one do “to claim proudly that I’m studying a certain PDE”? Newtonian mechanics uses mainly ODE (Ordinary Differential Equations) and describes nicely the movements of the Sun, Moon, Earth, etc. Now, so-called quantum phenomena are described by, say, the Schrödinger equation, a PDE which explains both wave and particle characters after quantization of an ODE. The coupled Maxwell-Dirac equation is also “quantized”, and the theory of QED (Quantum Electro-Dynamics) was invented by physicists. Though it is said that QED gives very good coincidence between theoretical and experimentally observed quantities, what is the equation corresponding to QED? Or, is it possible to describe QED by an “equation” in the naive sense?
We justify and extend the standard model of elementary particle physics by generalizing the theory of relativity and quantum mechanics. The usual assumption that space and time are continuous implies, indeed, that it should be possible to measure arbitrarily small intervals of space and time, but we do not know whether that is true. It is thus more realistic to consider an extremely small “quantum of length” of yet unknown value a. It is only required to be a universal constant for all inertial frames, like c and h. This yields a logically consistent theory and accounts for elementary particles by means of four new quantum numbers. They define “particle states” in terms of modulations of wave functions at the smallest possible scale in space-time. The resulting classification of elementary particles also accounts for dark matter. Antiparticles are redefined without needing negative energy states, and recently observed “anomalies” can be explained.
In the field of image and data compression, there are always new approaches being tried and tested to improve the quality of the reconstructed image and to reduce the computational complexity of the algorithm employed. However, there is no one perfect technique that can offer both the maximum compression possible and the best reconstruction quality for any type of image. Depending on the level of compression desired and the characteristics of the input image, a suitable choice must be made from the options available. For example, in the field of video compression, the integer adaptation of the discrete cosine transform (DCT) with fixed quantization is widely used in view of its ease of computation and adequate performance. There exist transforms, like the discrete Tchebichef transform (DTT), which are suitable too but remain largely unexploited. This work aims to bridge this gap and examine cases where DTT could be an alternative compression transform to DCT based on various image quality parameters. A multiplier-free fast implementation of the integer DTT (ITT) of size 8 × 8 is also studied for its low computational complexity. Due to the uneven spread of data across images, some areas might have intricate detail, whereas others might be rather plain. This prompts the use of a compression method that can be adapted according to the amount of detail. So, instead of fixed quantization, this paper employs quantization that varies with the characteristics of each image block. This implementation is free from additional computational or transmission overhead. The image compression performance of ITT and the integer DCT (ICT), using both variable and fixed quantization, is compared across a variety of images, and the cases suitable for ITT-based image compression employing variable quantization are identified.
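The block-adaptive quantization idea can be sketched by deriving the quantization step from a block's variance, so flat blocks are quantized finely and detailed blocks coarsely. The scaling rule and the constants below are illustrative assumptions, not the paper's scheme:

```python
def adaptive_qstep(block, base_step=16.0, lo=0.5, hi=2.0):
    """Pick a quantization step for one image block from its activity.

    Flat blocks (low variance) get a finer step to avoid visible banding;
    busy blocks (high variance) tolerate a coarser step because detail
    masks quantization error. Constants are illustrative.
    """
    n = len(block)
    mean = sum(block) / n
    var = sum((p - mean) ** 2 for p in block) / n
    # Map the variance to a scale factor clamped to [lo, hi].
    factor = max(lo, min(hi, var / 256.0))
    return base_step * factor
```

In a codec following this idea, the transform coefficients of each block would be divided by this step before rounding; the adaptation costs no side information if the decoder can recompute the same activity measure.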
According to the formula for the translational motion of a vector along an infinitesimal closed curve in gravitational space, this article shows that space and time are both quantized; the so-called central singularity of the Schwarzschild metric does not exist physically, and Einstein’s theory of gravity is compatible with traditional quantum theory in essence; the quantized gravitational space is just the spin network which consists of infinite quantized loops linking and intersecting each other; and whether a particle is in a spin eigenstate depends on the translational track of its spin vector in gravitational space.
A photon structure is advanced based on the experimental evidence and the vector potential quantization at a single photon level. It is shown that the photon is neither a point particle nor an infinite wave but behaves rather like a local “wave-corpuscle” extended over a wavelength, occupying a minimum quantization volume and guided by a non-local vector potential real wave function. The quantized vector potential oscillates over a wavelength with circular left or right polarization giving birth to orthogonal magnetic and electric fields whose amplitudes are proportional to the square of the frequency. The energy and momentum are carried by the local wave-corpuscle guided by the non-local vector potential wave function suitably normalized.
A low sidelobe aperture design method of multi-step amplitude quantization with pedestal is proposed, and a general analysis and formulas are described. The computation results are compared with those of our previous method, "Multi-Step Amplitude Quantization" (MSAQ), on peak sidelobe level, aperture efficiency, normalized input power, and sidelobe degradation with tolerance. It is shown that, under the same conditions, the method presented in this paper is better than MSAQ.
A fast encoding algorithm is presented which makes full use of two characteristics of a vector: its sum and its variance. In this paper, a vector is separated into two subvectors: one is the first half of the coordinates, and the other contains the remaining coordinates. Three inequalities based on the sums and variances of a vector and its two subvectors are introduced to reject those codewords which cannot be the nearest codeword. The simulation results show that the proposed algorithm is faster than the improved equal-average equal-variance nearest neighbor search (EENNS) algorithm.
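The sum-based rejection test underlying such algorithms follows from the Cauchy-Schwarz inequality: for k-dimensional vectors, ||x - c||^2 >= (sum(x) - sum(c))^2 / k. A minimal sketch using only the sum test on the whole vector (the paper additionally applies variance tests and tests on the two subvectors):

```python
def nearest_codeword(x, codebook):
    """Vector-quantization search with sum-based codeword rejection.

    A codeword c is skipped without computing the full distance whenever
    (sum(x) - sum(c))^2 / k already exceeds the best distance found so
    far, which is a lower bound on ||x - c||^2 by Cauchy-Schwarz.
    """
    k = len(x)
    sx = sum(x)
    best_i, best_d = -1, float("inf")
    for i, c in enumerate(codebook):
        bound = (sx - sum(c)) ** 2 / k
        if bound >= best_d:
            continue                 # rejected without a distance computation
        d = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d
```

In practice the codeword sums (and variances) are precomputed once for the whole codebook, so each rejection costs a single subtraction and comparison instead of a k-dimensional distance.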
文摘In previous works, the theoretical and experimental deterministic scalar kinematic structures, the theoretical and experimental deterministic vector kinematic structures, the theoretical and experimental deterministic scalar dynamic structures, and the theoretical and experimental deterministic vector dynamic structures have been developed to compute the exact solution for deterministic chaos of the exponential pulsons and oscillons that is governed by the nonstationary three-dimensional Navier-Stokes equations. To explore properties of the kinetic energy, rectangular, diagonal, and triangular summations of a matrix of the kinetic energy and general terms of various sums have been used in the current paper to develop quantization of the kinetic energy of deterministic chaos. Nested structures of a cumulative energy pulson, an energy pulson of propagation, an internal energy oscillon, a diagonal energy oscillon, and an external energy oscillon have been established. In turn, the energy pulsons and oscillons include group pulsons of propagation, internal group oscillons, diagonal group oscillons, and external group oscillons. Sequentially, the group pulsons and oscillons contain wave pulsons of propagation, internal wave oscillons, diagonal wave oscillons, and external wave oscillons. Consecutively, the wave pulsons and oscillons are composed of elementary pulsons of propagation, internal elementary oscillons, diagonal elementary oscillons, and external elementary oscillons. Topology, periodicity, and integral properties of the exponential pulsons and oscillons have been studied using the novel method of the inhomogeneous Fourier expansions via eigenfunctions in coordinates and time. Symbolic computations of the exact expansions have been performed using the experimental and theoretical programming in Maple. Results of the symbolic computations have been justified by probe visualizations.
Abstract: The uncertainty principle is a fundamental principle of quantum mechanics, but its exact mathematical expression cannot obtain correct results when used to solve theoretical problems such as the energy levels of hydrogen atoms, one-dimensional deep potential wells, one-dimensional harmonic oscillators, and double-slit experiments. Even after approximate treatment, the results obtained are not completely consistent with those obtained by solving Schrödinger’s equation. This indicates that further research on the uncertainty principle is necessary. Therefore, using the de Broglie matter wave hypothesis, we quantize the action of an elementary particle in natural coordinates and obtain the quantization condition and a new deterministic relation. Using this quantization condition, we obtain the energy level formulas of an elementary particle in different conditions in a classical way that is completely consistent with the results obtained by solving Schrödinger’s equation. A new physical interpretation is given for the particle eigenfunction independence of probability for an elementary particle: an elementary particle is in a particle state at the space-time point where the action is quantized, and in a wave state in the rest of the space-time region. The space-time points of particle nature and the wave regions of particle motion constitute the continuous trajectory of particle motion. When an elementary particle is in a particle state, it is localized, whereas in the wave state region, it is nonlocalized.
Abstract: The quantization algorithm compresses the original network by reducing the numerical bit width of the model, which improves the computation speed. Because different layers have different redundancy and sensitivity to data bit width, reducing the data bit width will result in a loss of accuracy, and it is therefore difficult to determine the optimal bit width for different parts of the network with guaranteed accuracy. Mixed precision quantization can effectively reduce the amount of computation while keeping the model accuracy basically unchanged. In this paper, a hardware-aware mixed precision quantization strategy optimal assignment algorithm adapted to low bit width is proposed, and reinforcement learning is used to automatically predict the mixed precision that meets the constraints of hardware resources. In the state-space design, the standard deviation of weights is used to measure the distribution difference of data, the execution speed feedback of simulated neural network accelerator inference is used as the environment to limit the action space of the agent, and the accuracy of the quantization model after retraining is used as the reward function to guide the agent to carry out deep reinforcement learning training. The experimental results show that the proposed method obtains a suitable model layer-by-layer quantization strategy under the condition that the computational resources are satisfied, and the model accuracy is effectively improved. The proposed method has strong intelligence and certain universality and has strong application potential in the field of mixed precision quantization and embedded neural network model deployment.
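The std-based state design described above can be loosely illustrated with a toy heuristic: layers whose weights are more widely distributed receive wider bit widths. This is only a hedged sketch of the bit-allocation idea; the paper's actual method is a deep reinforcement learning agent, and the function name, grouping rule, and bit options below are illustrative assumptions, not the authors' algorithm.

```python
import statistics

def allocate_bits(layer_weights, bit_options=(2, 4, 8)):
    """Toy heuristic inspired by the state-space design: rank layers by the
    standard deviation of their weights and give low-variance layers the
    narrowest bit widths. Names and grouping are illustrative only."""
    stds = [statistics.pstdev(w) for w in layer_weights]
    order = sorted(range(len(stds)), key=lambda i: stds[i])
    bits = [0] * len(stds)
    # split the ranked layers into equal groups; the lowest-std group
    # gets the fewest bits, the highest-std group the most
    per_group = max(1, len(stds) // len(bit_options))
    for rank, i in enumerate(order):
        g = min(rank // per_group, len(bit_options) - 1)
        bits[i] = bit_options[g]
    return bits
```

In a real mixed-precision pipeline this crude ranking would be replaced by the hardware-in-the-loop reward the abstract describes; the sketch only shows how weight spread can drive per-layer bit assignment.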
Funding: Funded by the National Science Foundation of China (62006068), the Hebei Natural Science Foundation (A2021402008), the Natural Science Foundation of Scientific Research Project of Higher Education in Hebei Province (ZD2020185, QN2020188), and the 333 Talent Supported Project of Hebei Province (C20221026).
Abstract: Imbalanced datasets are common in practical applications, and oversampling methods using fuzzy rules have been shown to enhance the classification performance of imbalanced data by taking into account the relationship between data attributes. However, the creation of fuzzy rules typically depends on expert knowledge, which may not fully leverage the label information in training data and may be subjective. To address this issue, a novel fuzzy rule oversampling approach is developed based on the learning vector quantization (LVQ) algorithm. In this method, the label information of the training data is utilized to determine the antecedent part of If-Then fuzzy rules by dynamically dividing attribute intervals using LVQ. Subsequently, fuzzy rules are generated and adjusted to calculate rule weights. The number of new samples to be synthesized for each rule is then computed, and samples from the minority class are synthesized based on the newly generated fuzzy rules. This results in the establishment of a fuzzy rule oversampling method based on LVQ. To evaluate the effectiveness of this method, comparative experiments are conducted on 12 publicly available imbalanced datasets against five other sampling techniques in combination with the support vector machine. The experimental results demonstrate that the proposed method can significantly enhance the classification algorithm across seven performance indicators, including a boost of 2.15% to 12.34% in Accuracy, 6.11% to 27.06% in G-mean, and 4.69% to 18.78% in AUC. These results show that the proposed method is capable of more efficiently improving the classification performance of imbalanced data.
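The attribute-interval division in this method rests on learning vector quantization. A minimal sketch of the classic LVQ1 update rule (the nearest prototype moves toward a same-label sample and away from a different-label one) is shown below; it is a generic illustration, not the paper's full oversampling pipeline, and all function names are mine.

```python
def lvq1_train(samples, labels, prototypes, proto_labels, lr=0.1, epochs=20):
    """LVQ1: for each training sample, find the nearest prototype and move
    it toward the sample if the labels match, away from it otherwise."""
    protos = [list(p) for p in prototypes]
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # nearest prototype by squared Euclidean distance
            j = min(range(len(protos)),
                    key=lambda k: sum((a - b) ** 2 for a, b in zip(x, protos[k])))
            sign = 1.0 if proto_labels[j] == y else -1.0
            protos[j] = [p + sign * lr * (a - p) for a, p in zip(x, protos[j])]
    return protos
```

After training, the learned prototype positions partition the attribute axes, which is the kind of label-driven interval division the abstract attributes to LVQ.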
Abstract: An aperture design technique using multi-step amplitude quantization for two-dimensional solid-state active phased arrays to achieve low sidelobe is described. It can be applied to antennas with arbitrary complex apertures. Also, the gain drop and sidelobe degradation due to random amplitude and phase errors and element (or T/R module) failures are investigated.
Funding: The Qian Xuesen Youth Innovation Foundation from China Aerospace Science and Technology Corporation (Grant Number 2022JY51).
Abstract: The demand for adopting neural networks in resource-constrained embedded devices is continuously increasing. Quantization is one of the most promising solutions to reduce computational cost and memory storage on embedded devices. In order to reduce the complexity and overhead of deploying neural networks on integer-only hardware, most current quantization methods use a symmetric quantization mapping strategy to quantize a floating-point neural network into an integer network. However, although symmetric quantization has the advantage of easier implementation, it is sub-optimal for cases where the range could be skewed and not symmetric. This often comes at the cost of lower accuracy. This paper proposes an activation redistribution-based hybrid asymmetric quantization method for neural networks. The proposed method takes data distribution into consideration and can resolve the contradiction between quantization accuracy and ease of implementation, balance the trade-off between clipping range and quantization resolution, and thus improve the accuracy of the quantized neural network. The experimental results indicate that the accuracy of the proposed method is 2.02% and 5.52% higher than the traditional symmetric quantization method for classification and detection tasks, respectively. The proposed method paves the way for computationally intensive neural network models to be deployed on devices with limited computing resources. Codes will be available on https://github.com/ycjcy/Hybrid-Asymmetric-Quantization.
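For readers unfamiliar with the asymmetric mapping the paper builds on, here is a minimal sketch of generic uniform asymmetric quantization with a scale and integer zero-point derived from the observed range. It is not the authors' activation-redistribution method; the function names and the 8-bit default are assumptions.

```python
def asymmetric_quantize(x, num_bits=8):
    """Uniform asymmetric quantization: map the observed [min, max] range
    onto the integer grid [0, 2^b - 1] via a scale and a zero-point, so a
    skewed range wastes no quantization levels (unlike symmetric mapping)."""
    qmin, qmax = 0, (1 << num_bits) - 1
    lo, hi = min(x), max(x)
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # the range must contain 0 exactly
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in x]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # reconstruct real values; error per element is at most scale / 2
    return [(v - zero_point) * scale for v in q]
```

Note that real 0.0 maps exactly to the integer zero-point, a standard requirement so that zero-padding stays exact after quantization.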
Abstract: The quantization thermal excitation isotherms based on the maximum triad spin number (G) of each energy level for a metal cluster were derived as a function of temperature by expanding the binomial theorems according to energy levels. From them, the quantized geometric mean heat capacity equations are expressed in sequence. Among them, the five quantized geometric heat capacity equations fit best the experimental heat capacity data of metal atoms at constant pressure. In the derivations we assume that the triad spin, composed of an electron, its proton, and its neutron in a metal cluster, becomes a basic unit of thermal excitation. The Boltzmann constant (kB) is found to be an average specific heat of an energy level in a metal cluster, and the constant (kK) is found to be an average specific heat of a photon in a metal cluster. The core triad spin made of free neutrons may exist as one additional, second energy level. The energy levels are grouped according to the forms of four spins throughout two axes. The Planck constant is theoretically obtained from the ratio of the internal energy of the metal (U) to the total isotherm number (N) through the equipartition theorem.
Abstract: We look at a comparison of two action integrals and identify the Lagrangian multiplier as setting up a constraint equation (on cosmological expansion). This is a direct result of the fourth equation of our manuscript, which unconventionally compares the action integral of general relativity with the second derived action integral, which then permits Equation (5), a bound on the cosmological constant. What we have done is to replace the Hamber quantum gravity reference-based action integral with a result from John Klauder’s “Enhanced Quantization”. In doing so, with Padmanabhan’s treatment of the inflaton, we then initiate an explicit bound upon the cosmological constant. The other approximation is to use the inflaton results and conflate them with John Klauder’s action principle, using the idea of a potential well, generalized by Klauder, with a wall of space-time in the pre-Planckian regime, to ask what bounds the cosmological constant prior to inflation and to obtain an upper bound on the mass of a graviton. We conclude with a redo of a multiverse version of the Penrose cyclic conformal cosmology. Our objective is to show how a value of the rest mass of the heavy graviton is invariant from cycle to cycle. All this is possible due to Equation (4). We compare all these with the results of Reference [1] in the conclusion, while showing their relevance to early-universe production of black holes, and the volume of space producing 100 black holes of value about 10^2 times the Planck mass. Initially evaluated in a space-time of about 10^3 Planck lengths, in spherical length, we assume a starting entropy of about 1000 initially.
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 12104414, 12122412, 12104464, and 12104413) and the China Postdoctoral Science Foundation (Grant No. 2021M702955).
Abstract: The recently developed magic-intensity trapping technique for neutral atoms efficiently mitigates the detrimental effect of light shifts on atomic qubits and substantially enhances the coherence time. This technique relies on applying a bias magnetic field precisely parallel to the wave vector of a circularly polarized trapping laser field. However, due to the presence of the vector light shift experienced by the trapped atoms, it is challenging to precisely define a parallel magnetic field, especially at a low bias magnetic field strength, for the magic-intensity trapping of 85Rb qubits. In this work, we present a method to calibrate the angle between the bias magnetic field and the trapping laser field with the compensating magnetic fields in the other two directions orthogonal to the bias magnetic field direction. Experimentally, with a constant-depth trap and a fixed bias magnetic field, we measure the respective resonant frequencies of the atomic qubits in a linearly polarized trap and a circularly polarized one via conventional microwave Rabi spectra with different compensating magnetic fields and obtain the corresponding total magnetic fields via the respective resonant frequencies using the Breit-Rabi formula. With known total magnetic fields, the angle is a function of the other two compensating magnetic fields. Finally, the projection value of the angle on either of the directions orthogonal to the bias magnetic field direction can be reduced to 0(4)° by applying specific compensating magnetic fields. The measurement error is mainly attributed to the fluctuation of atomic temperature. Moreover, it also demonstrates that, even for a small angle, the effect is strong enough to cause large decoherence of Rabi oscillation in a magic-intensity trap. Although the compensation method demonstrated here is explored for the magic-intensity trapping technique, it can be applied to a variety of similar precision measurements with trapped neutral atoms.
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) under Grants 61971127, 61871465, and 61871122; in part by the National Key Research and Development Program under Grant 2020YFB1806600; and in part by the open research fund of the National Mobile Communications Research Laboratory, Southeast University, under Grant 2022D11.
Abstract: In this paper, we investigate network-assisted full-duplex (NAFD) cell-free millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) systems with digital-to-analog converter (DAC) quantization and fronthaul compression. We propose to maximize the weighted uplink and downlink sum rate by jointly optimizing the power allocation of both the transmitting remote antenna units (T-RAUs) and uplink users and the variances of the downlink and uplink fronthaul compression noises. To deal with this challenging problem, we further apply a successive convex approximation (SCA) method to handle the non-convex bidirectional limited-capacity fronthaul constraints. The simulation results verify the convergence of the proposed SCA-based algorithm and analyze the impact of fronthaul capacity and DAC quantization on the spectral efficiency of the NAFD cell-free mmWave massive MIMO systems. Moreover, some insightful conclusions are obtained through comparisons of spectral efficiency, which show that NAFD achieves better performance gains than co-time co-frequency full-duplex cloud radio access network (CCFD C-RAN) in the case of practical limited-resolution DACs. Specifically, the performance gap with 8-bit DAC quantization is larger than that with 1-bit DAC quantization, attaining a 5.5-fold improvement.
Abstract: A new steganographic method by pixel-value differencing (PVD) using general quantization ranges of pixel pairs’ difference values is proposed. The objective of this method is to provide a data embedding technique with a range table whose range widths are not limited to powers of 2, extending PVD-based methods to enhance their flexibility and data-embedding rates without changing their capabilities to resist security attacks. Specifically, the conventional PVD technique partitions a grayscale image into 1×2 non-overlapping blocks. The entire range [0, 255] of all possible absolute values of the pixel pairs’ grayscale differences in the blocks is divided into multiple quantization ranges. The width of each quantization range is a power of two to facilitate the direct embedding of bit information with high embedding rates. Without using power-of-two range widths, the embedding rates can drop using conventional embedding techniques. In contrast, the proposed method uses general quantization range widths, and a multiple-based number conversion mechanism is employed skillfully to implement the use of non-power-of-two range widths, with each pixel pair being employed to embed a digit in the multiple-based number. All the message bits are converted into a big multiple-based number whose digits can be embedded into the pixel pairs with a higher embedding rate. Good experimental results showed the feasibility of the proposed method and its resistance to security attacks. In addition, implementation examples are provided, where the proposed method adopts non-power-of-two range widths and employs multiple-based number conversion to expand the data-hiding and steganalysis-resisting capabilities of other PVD methods.
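The multiple-based (mixed-radix) number conversion can be sketched as follows: the message bits form one large integer whose digits are extracted using the per-pair range widths as radices, so non-power-of-two widths still carry one integral digit per pixel pair. This is a hedged illustration of the conversion step only; the PVD embedding and extraction themselves are omitted, and the function names are mine.

```python
def bits_to_mixed_radix(bits, widths):
    """Convert a message bitstring into mixed-radix digits, one digit per
    pixel pair, where widths[i] is that pair's (possibly non-power-of-two)
    quantization-range width."""
    value = int(bits, 2) if bits else 0
    digits = []
    for w in widths:
        digits.append(value % w)  # digit for this pixel pair, in [0, w)
        value //= w
    if value:
        raise ValueError("message too long for the available capacity")
    return digits

def mixed_radix_to_bits(digits, widths, nbits):
    """Inverse conversion performed at extraction time."""
    value, place = 0, 1
    for d, w in zip(digits, widths):
        value += d * place
        place *= w
    return format(value, "0{}b".format(nbits))
```

The total capacity is the product of the widths, so a width of, say, 5 or 6 contributes log2(5) or log2(6) bits instead of being rounded down to a power of two.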
Funding: This work was supported by the Open Fund Project of the State Key Laboratory of Intelligent Vehicle Safety Technology under Grant No. IVSTSKL-202311, the Key Projects of the Science and Technology Research Programme of Chongqing Municipal Education Commission under Grant No. KJZD-K202301505, the Cooperation Project between Chongqing Municipal Undergraduate Universities and Institutes Affiliated to the Chinese Academy of Sciences in 2021 under Grant No. HZ2021015, and the Chongqing Graduate Student Research Innovation Program under Grant No. CYS240801.
Abstract: Massive computational complexity and memory requirements of artificial intelligence models impede their deployability on edge computing devices of the Internet of Things (IoT). While Power-of-Two (PoT) quantization is proposed to improve the efficiency of edge inference of Deep Neural Networks (DNNs), existing PoT schemes require a huge amount of bit-wise manipulation and have large memory overhead, and their efficiency is bounded by the bottleneck of computation latency and memory footprint. To tackle this challenge, we present an efficient inference approach on the basis of PoT quantization and model compression. An integer-only scalar PoT quantization (IOS-PoT) is designed jointly with a distribution loss regularizer, wherein the regularizer minimizes quantization errors and training disturbances. Additionally, two-stage model compression is developed to effectively reduce memory requirements and alleviate bandwidth usage in communications of networked heterogeneous learning systems. The product look-up table (P-LUT) inference scheme is leveraged to replace bit-shifting with only indexing and addition operations, achieving low-latency computation and enabling efficient edge accelerators. Finally, comprehensive experiments on Residual Networks (ResNets) and efficient architectures with the Canadian Institute for Advanced Research (CIFAR), ImageNet, and Real-world Affective Faces Database (RAF-DB) datasets indicate that our approach achieves a 2×∼10× improvement in the reduction of both weight size and computation cost in comparison to state-of-the-art methods. A P-LUT accelerator prototype is implemented on the Xilinx KV260 Field Programmable Gate Array (FPGA) platform for accelerating convolution operations, with performance results showing that P-LUT reduces memory footprint by 1.45× and achieves more than 3× power efficiency and 2× resource efficiency compared to the conventional bit-shifting scheme.
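A hedged sketch of the two ingredients named above, PoT rounding and a product look-up table, is given below. It is a generic illustration, not the IOS-PoT scheme or the paper's P-LUT accelerator; the exponent range and all names are assumptions of mine.

```python
import math

def pot_quantize(w, min_exp=-6, max_exp=0):
    """Round a weight to the nearest signed power of two within
    [2^min_exp, 2^max_exp]; returns (sign, exponent)."""
    if w == 0.0:
        return 0, min_exp
    sign = 1 if w > 0 else -1
    e = round(math.log2(abs(w)))       # nearest power-of-two exponent
    e = max(min_exp, min(max_exp, e))  # clamp to the representable range
    return sign, e

def build_plut(activations, min_exp=-6, max_exp=0):
    """Product look-up table: plut[a][e - min_exp] = activations[a] * 2^e.
    Inference then replaces bit-shifting with indexing and addition."""
    return [[a * (2.0 ** e) for e in range(min_exp, max_exp + 1)]
            for a in activations]
```

With the table in hand, a PoT-quantized multiply becomes one table lookup plus a sign, which is the latency/footprint trade-off the abstract attributes to P-LUT.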
Funding: Supported by the Fundamental Research Funds for the Central Universities (DUT22RT(3)090), the National Natural Science Foundation of China (61890920, 61890921, 62122016, 08120003), and the Liaoning Science and Technology Program (2023JH2/101700361).
Abstract: Linear temporal logic (LTL) is an intuitive and expressive language to specify complex control tasks, and how to design an efficient control strategy for LTL specifications is still a challenge. In this paper, we implement the dynamic quantization technique to propose a novel hierarchical control strategy for nonlinear control systems under LTL specifications. Based on the regions of interest involved in the LTL formula, an accepting path is derived first to provide a high-level solution for the controller synthesis problem. Second, we develop a dynamic quantization based approach to verify the realization of the accepting path. The realization verification results in the necessity of the controller design and a sequence of quantization regions for the controller design. Third, the techniques of dynamic quantization and abstraction-based control are combined to establish the local-to-global control strategy. Both abstraction construction and controller design are local and dynamic, thereby resulting in a potential reduction of the computational complexity. Since each quantization region can be considered locally and individually, the proposed hierarchical mechanism is more efficient and can solve much larger problems than many existing methods. Finally, the proposed control strategy is illustrated via two examples from the path planning and tracking problems of mobile robots.
Abstract: What does it mean to study PDE (partial differential equations)? How and what should one do “to claim proudly that I’m studying a certain PDE”? Newtonian mechanics uses mainly ODE (ordinary differential equations) and describes nicely the movements of the Sun, Moon, Earth, etc. Now, the so-called quantum phenomenon is described by, say, the Schrödinger equation, a PDE which explains both wave and particle characters after quantization of an ODE. The coupled Maxwell-Dirac equation is also “quantized”, and the QED (quantum electrodynamics) theory was invented by physicists. Though it is said that this QED gives very good coincidence between theoretical and experimentally observed quantities, what is the equation corresponding to QED? Or, is it possible to describe QED by an “equation” in the naive sense?
Abstract: We justify and extend the standard model of elementary particle physics by generalizing the theory of relativity and quantum mechanics. The usual assumption that space and time are continuous implies, indeed, that it should be possible to measure arbitrarily small intervals of space and time, but we do not know whether that is true or not. It is thus more realistic to consider an extremely small “quantum of length” of yet unknown value a. It is only required to be a universal constant for all inertial frames, like c and h. This yields a logically consistent theory and accounts for elementary particles by means of four new quantum numbers. They define “particle states” in terms of modulations of wave functions at the smallest possible scale in space-time. The resulting classification of elementary particles also accounts for dark matter. Antiparticles are redefined, without needing negative energy states, and recently observed “anomalies” can be explained.
Abstract: In the field of image and data compression, there are always new approaches being tried and tested to improve the quality of the reconstructed image and to reduce the computational complexity of the algorithm employed. However, there is no one perfect technique that can offer both the maximum compression possible and the best reconstruction quality for any type of image. Depending on the level of compression desired and the characteristics of the input image, a suitable choice must be made from the options available. For example, in the field of video compression, the integer adaptation of the discrete cosine transform (DCT) with fixed quantization is widely used in view of its ease of computation and adequate performance. There exist transforms, like the discrete Tchebichef transform (DTT), which are suitable too, but are potentially unexploited. This work aims to bridge this gap and examine cases where DTT could be an alternative compression transform to DCT based on various image quality parameters. A multiplier-free fast implementation of integer DTT (ITT) of size 8 × 8 is also studied for its low computational complexity. Due to the uneven spread of data across images, some areas might have intricate detail, whereas others might be rather plain. This prompts the use of a compression method that can be adapted according to the amount of detail. So, instead of fixed quantization, this paper employs quantization that varies depending on the characteristics of the image block. This implementation is free from additional computational or transmission overhead. The image compression performance of ITT and the integer DCT (ICT), using both variable and fixed quantization, is compared for a variety of images, and the cases suitable for ITT-based image compression employing variable quantization are identified.
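The block-adaptive idea (coarser quantization where detail masks the error) can be sketched with a per-block quantization step derived from the block's variance. This is a generic stand-in under my own assumptions, not the paper's ITT/ICT scheme; the step formula and its constants are purely illustrative.

```python
import math

def block_qstep(block, base_step=4.0, gain=2.0):
    """Derive a quantization step from block variance: flat blocks keep a
    fine step, busy (high-variance) blocks get a coarser one, since detail
    perceptually masks quantization error. Constants are illustrative."""
    n = len(block)
    mean = sum(block) / n
    var = sum((v - mean) ** 2 for v in block) / n
    return base_step * (1.0 + gain * math.log1p(var) / 8.0)

def quantize_block(block, step):
    return [round(v / step) for v in block]

def dequantize_block(q, step):
    return [v * step for v in q]
```

In a transform-coding pipeline this step would scale the transform coefficients rather than raw pixels, but the variance-driven adaptation is the same.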
Abstract: According to the formula for the translational motion of a vector along an infinitesimal closed curve in gravitational space, this article shows that space and time are both quantized; the so-called central singularity of the Schwarzschild metric does not exist physically, and Einstein’s theory of gravity is compatible with the traditional quantum theory in essence; the quantized gravitational space is just the spin network which consists of infinite quantized loops linking and intersecting each other; and whether the particle is in a spin eigenstate depends on the translational track of its spin vector in gravitational space.
Abstract: A photon structure is advanced based on the experimental evidence and the vector potential quantization at a single-photon level. It is shown that the photon is neither a point particle nor an infinite wave but behaves rather like a local “wave-corpuscle” extended over a wavelength, occupying a minimum quantization volume and guided by a non-local vector potential real wave function. The quantized vector potential oscillates over a wavelength with circular left or right polarization, giving birth to orthogonal magnetic and electric fields whose amplitudes are proportional to the square of the frequency. The energy and momentum are carried by the local wave-corpuscle guided by the non-local vector potential wave function, suitably normalized.
Abstract: A low-sidelobe aperture design method of multi-step amplitude quantization with pedestal is proposed, and a general analysis and formulas are described. The computation results compared with our previous method, Multi-Step Amplitude Quantization (MSAQ), on peak sidelobe level, aperture efficiency, normalized input power, and sidelobe degradation with tolerance are given. It is shown that, under the same conditions, the method presented in this paper is better than MSAQ.
Abstract: A fast encoding algorithm is presented which makes full use of two characteristics of a vector: its sum and variance. In this paper, a vector is separated into two subvectors: one is the first half of the coordinates and the other contains the remaining coordinates. Three inequalities based on the characteristics of the sums and variances of a vector and its two subvectors are introduced to reject those codewords which cannot be the nearest codeword. The simulation results show that the proposed algorithm is faster than the improved equal average equal variance nearest neighbor search (EENNS) algorithm.
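The rejection idea behind this family of fast VQ encoders can be sketched with the standard mean/variance lower bound ‖x − c‖² ≥ n(m_x − m_c)² + (v_x − v_c)², where m is the vector mean and v = ‖x − m·1‖. The code below is a hedged illustration of that bound applied in a full search, not the paper's two-subvector, three-inequality algorithm; function names are mine.

```python
import math

def features(v):
    # mean and "variance norm" ||v - mean*1|| used by ENNS/EENNS-style bounds
    n = len(v)
    m = sum(v) / n
    var = math.sqrt(sum((x - m) ** 2 for x in v))
    return m, var

def nearest_codeword(x, codebook):
    """Nearest-codeword search with mean/variance rejection: a candidate
    whose lower bound already exceeds the best distance found so far is
    skipped without computing its full distortion."""
    n = len(x)
    mx, vx = features(x)
    best_i, best_d = -1, float("inf")
    for i, c in enumerate(codebook):
        mc, vc = features(c)
        lower = n * (mx - mc) ** 2 + (vx - vc) ** 2
        if lower >= best_d:
            continue  # rejected by the cheap lower bound
        d = sum((a - b) ** 2 for a, b in zip(x, c))
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d
```

The bound holds because the mean component and its orthogonal complement decompose the squared distance, and the complement's norm difference lower-bounds its distance; splitting the vector into two subvectors, as the paper does, simply tightens bounds of this kind.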