Using a Linde reference, as well as another from Padmanabhan, for the calculation of how the early universe expands, we obtain by default a scale factor expanding as t to the power alpha, with alpha approximately equal to the square root of five. From there we estimate the number of initial particles produced at the very beginning, which leads us to conclude that the graviton would be a preferred initial by-product. The argument for gravitons also reflects a choice of how the decay of initial BEC condensates of Planck-sized black holes would commence, using the work produced by Chavanis on BEC condensates and black holes. The objective is to obtain the initial frequency spread, the strength of GW production, and a suggestion as to what polarization state may be accessible from initial conditions.
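A minimal sketch of the power law quoted above, assuming only a(t) ∝ t^α with α ≈ √5 taken from the abstract (units and normalization are arbitrary, not the authors'): for a pure power law the Hubble parameter is H(t) = α/t, and α > 1 implies accelerated expansion.

```python
# Sketch of the quoted power-law expansion a(t) = a0 * t**alpha, alpha ~ sqrt(5).
# Illustration only; the exponent is the value stated in the abstract.
import math

alpha = math.sqrt(5)  # ~ 2.236

def scale_factor(t, a0=1.0):
    """Scale factor a(t) = a0 * t**alpha (t in arbitrary units)."""
    return a0 * t**alpha

def hubble(t):
    """H(t) = (da/dt)/a = alpha/t for a pure power law."""
    return alpha / t

# alpha > 1 means d2a/dt2 ~ alpha*(alpha - 1)*t**(alpha - 2) > 0: acceleration
accelerating = alpha * (alpha - 1) > 0
print(hubble(2.0))   # alpha/2
print(accelerating)  # True
```

Doubling the time multiplies the scale factor by 2^α ≈ 4.7, which is the sense in which this expansion is much faster than radiation- or matter-dominated growth.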
This paper explains that the terms “horizontal and vertical scales” are not appropriate in map projection theory. Instead, the authors suggest using the term “scales in the direction of coordinate axes.” Since it is not possible to read a local linear scale factor in the direction of a coordinate axis immediately from the definition of a local linear scale factor, this paper considers the derivation of new formulae that enable local linear scale factors in the directions of the coordinate x and y axes to be calculated. The formula for computing the local linear scale factor in any direction defined by dx and dy is also derived. Furthermore, the position and magnitude of the extreme values of the local linear scale factor are considered and new formulae derived.
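A numerical illustration of the quantity being discussed (not the paper's formulae): the local linear scale factor in a given direction is the ratio of a small projected displacement to the corresponding distance on the sphere. For a conformal projection such as normal Mercator, the factor is the same in every direction, so the scales along the x and y axes coincide; the projection formulas and radius below are standard textbook values, assumed for the demo.

```python
# Estimate the local linear scale factor of the normal Mercator projection in a
# chosen azimuth by finite differences; for a conformal map it equals sec(lat)
# in all directions.
import math

R = 6371000.0  # sphere radius, m (assumed)

def mercator(lat, lon):
    x = R * lon
    y = R * math.log(math.tan(math.pi / 4 + lat / 2))
    return x, y

def scale_factor(lat, lon, az, h=1e-6):
    """Local linear scale factor at (lat, lon), azimuth az, all in radians."""
    dlat = h * math.cos(az)
    dlon = h * math.sin(az) / math.cos(lat)  # same ground distance in any az
    x0, y0 = mercator(lat, lon)
    x1, y1 = mercator(lat + dlat, lon + dlon)
    ds_map = math.hypot(x1 - x0, y1 - y0)
    ds_sphere = R * h
    return ds_map / ds_sphere

lat = math.radians(60.0)
print(scale_factor(lat, 0.0, 0.0))          # ~ 1/cos(60 deg) = 2 (north)
print(scale_factor(lat, 0.0, math.pi / 2))  # ~ 2 as well (east)
```

For a non-conformal projection the same finite-difference probe would return different values for different azimuths, which is exactly why direction-specific formulae are needed.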
Existing methods of physiological signal analysis based on nonlinear dynamic theories only examine the complexity difference of the signals under a single sampling frequency. We developed a technique to measure, through a frequency scale factor, a multifractal characteristic parameter intimately associated with physiological activities. This parameter is highly sensitive to physiological and pathological status. Mice received various drugs to imitate different physiological and pathological conditions, and the distributions of mass exponent spectrum curvature over scale factors were determined from the electrocardiogram (ECG) signals of healthy and drug-injected mice. Next, we determined the characteristic frequency scope in which the signal has the highest complexity and is most sensitive to impaired cardiac function, and examined the relationships between heart rate, heartbeat dynamic complexity, and the sensitive frequency scope of the ECG signal. We found that all animals exhibit a scale factor range in which the absolute magnitude of the ECG mass exponent spectrum curvature reaches its maximum, and that this range (or frequency scope) does not change with the number of calculated data points or the maximal coarse-grained scale factor. Further, the heart rate of the mice was not necessarily associated with the nonlinear complexity of cardiac dynamics, but was closely related to the most sensitive ECG frequency scope determined by characterizing these complex dynamic features for given heartbeat conditions. Finally, we found that the health status of the mouse heart was directly related to heartbeat dynamic complexity, the two being positively correlated within the scale factor range around the extremum region of the multifractal parameter. With increasing heart rate, the sensitive frequency scope shifted to a relatively high location. In conclusion, these data provide an important theoretical and practical basis for the early diagnosis of cardiac disorders.
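The "scale factor" scanned in the analysis above corresponds to coarse-graining the signal at successively larger scales before computing the multifractal statistics. The snippet below shows only that standard coarse-graining step on a toy trace; the authors' mass exponent spectrum computation itself is more involved and is not reproduced here.

```python
# Coarse-grain a time series at scale factor tau: average non-overlapping
# windows of length tau. This is the usual first step of multiscale analysis.
def coarse_grain(signal, tau):
    """Return the series averaged over non-overlapping windows of length tau."""
    n = len(signal) // tau
    return [sum(signal[i * tau:(i + 1) * tau]) / tau for i in range(n)]

ecg = [0.0, 1.0, 0.0, -1.0] * 5     # toy stand-in for an ECG trace
print(coarse_grain(ecg, 2))          # [0.5, -0.5, 0.5, -0.5, ...]
print(len(coarse_grain(ecg, 4)))     # 5
```

Sweeping tau and recomputing the multifractal parameter at each scale is what produces the distribution over scale factors whose extremum region the study identifies.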
Geometric or sub-scale modeling techniques are used for the evaluation of large and complex dynamic structures to ensure accurate reproduction of the load path, leading to the true dynamic characteristics of such structures. The sub-scale modeling technique is very effective for predicting the vibration characteristics of the original large structure when experimental testing is not feasible due to the absence of a large testing facility. Previous research focused more on free and harmonic vibration cases, with little or no consideration of the readily encountered random vibration. A sub-scale modeling technique is proposed for estimating the vibration characteristics of any large-scale structure, such as launch vehicles or megastructures, under various vibration load cases by utilizing a precise scaled-down model of that dynamic structure. In order to establish an analytical correlation between the original structure and its scaled models, different scale models of an isotropic cantilever beam are selected and analyzed under various vibration conditions (i.e., free, harmonic, and random) using the finite element package ANSYS. The developed correlations are also validated through experimental testing. The predictions made from the vibratory response of the scaled-down beam through the established sets of correlations are found to be similar to the response measured from testing of the original beam structure. The established correlations are equally applicable to the prediction of the dynamic characteristics of any complex structure through its scaled-down models. This paper thus presents a modified sub-scale modeling technique that enables accurate prediction of the vibration characteristics of large and complex structures under not only sinusoidal but also random vibrations.
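A hedged sketch of the simplest correlation of the kind described above: for a geometrically scaled isotropic cantilever beam of the same material, the classical Euler-Bernoulli natural frequency scales inversely with the geometric scale factor. The formula and the dimensions below are textbook values chosen for illustration, not the paper's correlations.

```python
# First natural frequency of a rectangular cantilever beam (Euler-Bernoulli),
# used to show the 1/lambda frequency scaling of a geometrically scaled model.
import math

def cantilever_freq(E, rho, L, b, h, mode_const=1.875104):
    """First bending frequency in Hz; E in Pa, rho in kg/m^3, L, b, h in m."""
    I = b * h**3 / 12.0   # second moment of area of the cross-section
    A = b * h             # cross-section area
    return (mode_const**2 / (2 * math.pi)) * math.sqrt(E * I / (rho * A)) / L**2

# Steel beam and a 1:4 scaled model (assumed dimensions)
E, rho = 210e9, 7850.0
f_full = cantilever_freq(E, rho, L=2.0, b=0.10, h=0.02)
f_model = cantilever_freq(E, rho, L=0.5, b=0.025, h=0.005)
print(f_model / f_full)   # 4: frequency scales as 1/lambda for lambda = 1/4
```

Relations of this form let a measured model frequency be mapped back to the full-size structure; the paper's contribution is extending such correlations beyond free and harmonic cases to random excitation.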
One of the crucial and challenging issues for researchers is presenting an appropriate approach to evaluate the aerodynamic characteristics of air cushion vehicles (ACVs) in terms of system design parameters. One such issue is introducing a suitable approach to analyze the effect of geometric shape on the aerodynamic characteristics of ACVs. The main novelty of this paper lies in presenting an innovative method to study the effect of geometric shape on air cushion lift force, which has not been investigated thus far. Moreover, this paper introduces, for the first time, a new approximate mathematical formula for calculating the air cushion lift force in terms of parameters including the air gap, lateral gaps, air inlet velocity, and scaling factor. We calculate the aerodynamic lift force applied to nine different shapes of the air cushions used in ACVs through the ANSYS Fluent software. The geometrical shapes studied are rectangular, square, equilateral triangle, circular, and elliptic, plus four combined shapes: circle-rectangle, circle-square, hexagonal, and fillet square. Results showed that the cushion with a circular pattern produces the highest lift force among the geometric shapes under the same conditions. The increase in cushion lift force of the fillet square relative to the plain square can be attributed to the fillet and its increasing radius.
In the software engineering literature, it is commonly believed that economies of scale do not occur in software Development and Enhancement Projects (D&EP): their per-unit cost does not decrease but increases with the growth of the project's product size. Thus it is diseconomies of scale that occur in them. The significance of this phenomenon results from the fact that it is commonly considered one of the fundamental objective causes of their low effectiveness. This is of particular significance with regard to Business Software Systems (BSS) D&EP, which are characterized by exceptionally low effectiveness compared to other software D&EP. The paper therefore aims at answering the following two questions: (1) Do economies of scale really not occur in BSS D&EP? (2) If economies of scale may occur in BSS D&EP, what factors promote them? These issues fall among the economics problems of software engineering research and practice.
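The economies-versus-diseconomies question above can be phrased with a simple power-law effort model: if effort = a * size^b (a and b hypothetical parameters, not estimates from the paper), then per-unit cost falls with size only when b < 1, while b > 1 gives the diseconomies of scale commonly reported for software projects.

```python
# Per-unit cost under a power-law effort model effort = a * size**b.
# b > 1 -> diseconomies of scale; b < 1 -> economies of scale.
def unit_cost(size, a=1.0, b=1.2):
    """Effort per unit of product size: a * size**(b - 1)."""
    return a * size**b / size

print(unit_cost(100.0) > unit_cost(10.0))                  # True for b = 1.2
print(unit_cost(100.0, b=0.8) < unit_cost(10.0, b=0.8))    # True for b = 0.8
```

In this framing, the paper's second question amounts to asking which project factors push the effective exponent b below 1 for BSS D&EP.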
To overcome drawbacks of conventional methods such as irregular circuit construction and low system throughput, a new factor correction scheme for the coordinate rotation digital computer (CORDIC) algorithm is proposed. Based on the relationship between the iteration formulae, a new iteration formula is introduced, which reduces the correction operation to several simple shift and add operations. As one key part, the effects caused by rounding error are analyzed mathematically, and it is concluded that these effects can be reduced by an appropriate selection of coefficients in the iteration formula. The model is then set up in Matlab and coded in the Verilog HDL language. The proposed algorithm is also synthesized and verified on a field-programmable gate array (FPGA). The results show that, for the same precision, the new scheme requires only one additional clock cycle and no change in the elementary iteration compared with the conventional algorithm. In addition, the circuit realization is regular and the change in system throughput is minimal.
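For reference, a minimal conventional CORDIC in rotation mode. The scheme in the abstract folds the gain correction into the iterations themselves; here the constant gain 1/K is simply applied after the loop, which is the usual textbook approach being improved upon.

```python
# Textbook CORDIC (rotation mode): shift-and-add rotations with a single
# constant-gain correction at the end.
import math

N = 24  # number of iterations

ANGLES = [math.atan(2.0**-i) for i in range(N)]
K = 1.0
for i in range(N):
    K *= 1.0 / math.sqrt(1.0 + 2.0**(-2 * i))  # cumulative gain correction

def cordic_sin_cos(theta):
    """Return (sin, cos) of theta in [-pi/2, pi/2] via CORDIC rotations."""
    x, y, z = 1.0, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        z -= d * ANGLES[i]
    return y * K, x * K

s, c = cordic_sin_cos(math.pi / 6)
print(s, c)   # ~ 0.5, ~ 0.8660
```

Each iteration uses only shifts (multiplication by 2^-i) and additions, which is why the hardware cost of where the gain correction happens matters so much.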
In order to analyze the effect of wavelength-dependent radiation-induced attenuation (RIA) on the mean transmission wavelength in optical fiber and on the scale factor of interferometric fiber optic gyroscopes (IFOGs), three types of polarization-maintaining (PM) fibers are tested by using a 60Co γ-radiation source. The observed differences in mean wavelength shift (MWS) behavior among the fibers are interpreted by color-center theory, involving dose-rate-dependent absorption bands in the ultraviolet and visible ranges and total-dose-dependent near-infrared absorption bands. To evaluate the mean wavelength variation in a fiber coil and the induced scale factor change for space-borne IFOGs under the low radiation doses of a space environment, the influence of dose rate on the mean wavelength is investigated by testing four germanium (Ge) doped fibers and two germanium-phosphorus (Ge-P) codoped fibers irradiated at different dose rates. Experimental results indicate that the Ge-doped fibers show the least mean wavelength shift during irradiation, and that the mean wavelength of optical signals transmitted in these fibers shifts to a shorter wavelength in a low-dose-rate radiation environment. Finally, the change in the scale factor of an IFOG resulting from the mean wavelength shift is estimated and tested, and it is found that significant radiation-induced scale factor variation must be considered during the design of space-borne IFOGs.
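The coupling between mean wavelength and scale factor can be sketched from the Sagnac relation: the IFOG phase is 2πLD/(λc) times the rotation rate, so the scale factor is proportional to 1/λ and a mean wavelength shift Δλ maps directly into a relative scale-factor error of -Δλ/λ. The numbers below are illustrative, not the paper's measurements.

```python
# Relative IFOG scale-factor change caused by a mean wavelength shift,
# using SF proportional to 1/lambda (Sagnac phase = 2*pi*L*D/(lambda*c) * Omega).
def scale_factor_change(lambda0, d_lambda):
    """Relative scale-factor change for mean wavelength shift d_lambda."""
    return -d_lambda / lambda0

# Example: 0.1 nm blue shift on a 1550 nm source (illustrative values)
ppm = scale_factor_change(1550e-9, -0.1e-9) * 1e6
print(ppm)   # ~ +64.5 ppm scale-factor error
```

Errors of tens of ppm are why the radiation-induced wavelength shift has to be budgeted in the design of high-accuracy space-borne IFOGs.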
Multifidelity surrogates (MFSs) replace computationally intensive models by synergistically combining information from different fidelity data, with a significant improvement in modeling efficiency. In this paper, a modified MFS (MMFS) model based on a radial basis function (RBF) is proposed, in which two fidelities of information can be analyzed by adaptively obtaining the scale factor. In the MMFS, an RBF was employed to establish the low-fidelity model. The correlation matrix of the high-fidelity samples and the corresponding low-fidelity responses were integrated into an expansion matrix to determine the scaling function parameters. The shape parameters of the basis function were optimized by minimizing the leave-one-out cross-validation error of the high-fidelity sample points. The performance of the MMFS was compared with those of other MFS models (MFS-RBF and cooperative RBF) and a single-fidelity RBF using four benchmark test functions, by which the impacts of different high-fidelity sample sizes on the prediction accuracy were also analyzed. The sensitivity of the MMFS model to the randomness of the design of experiments (DoE) was investigated by repeating sampling plans with 20 different DoEs. A stress analysis of a steel plate is presented to highlight the prediction ability of the proposed MMFS model. This research proposes a new multifidelity modeling method that can fully use two fidelity sample sets, rapidly calculate model parameters, and exhibit good prediction accuracy and robustness.
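A stripped-down scaling surrogate in the spirit of the abstract, though far simpler than the authors' MMFS: a cheap low-fidelity model is corrected by a single scale factor rho fitted by least squares on a few high-fidelity samples. The two model functions are hypothetical stand-ins.

```python
# Constant-scale-factor multifidelity correction: find rho minimizing
# sum((hi - rho * lo)**2), i.e. rho = sum(lo*hi) / sum(lo*lo).
def fit_scale_factor(lo_vals, hi_vals):
    """Least-squares scale factor mapping low-fidelity to high-fidelity data."""
    num = sum(l * h for l, h in zip(lo_vals, hi_vals))
    den = sum(l * l for l in lo_vals)
    return num / den

f_lo = lambda x: x**2          # cheap model (hypothetical)
f_hi = lambda x: 2.0 * x**2    # expensive "truth" (hypothetical)
xs = [0.5, 1.0, 1.5]           # the few affordable high-fidelity samples
rho = fit_scale_factor([f_lo(x) for x in xs], [f_hi(x) for x in xs])
print(rho)   # 2.0: the surrogate rho * f_lo now reproduces f_hi
```

The MMFS generalizes this idea by letting the scaling vary over the input space through an RBF expansion instead of a single constant.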
Estimation of the random errors due to shot noise of photomultiplier tube (PMT) or avalanche photodiode (APD) detectors is necessary in lidar observation. Because the number of incident electrons follows a Poisson distribution, the standard deviation is proportional to the square root of the mean value. Based on this relationship, a noise scale factor (NSF) is introduced into the estimation, which requires only a single data sample. This method overcomes the distraction of atmospheric fluctuations during the calculation of random errors. The results show that this method is feasible and reliable.
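The NSF idea rests on Poisson statistics: if raw photoelectron counts are scaled by a detector gain g, the standard deviation of the output is NSF times the square root of its mean, with NSF = sqrt(g). A quick check on synthetic detector data (the gain, rate, and seed are arbitrary choices for the demo):

```python
# Verify sd = NSF * sqrt(mean) for gain-scaled Poisson counts, NSF = sqrt(g).
import math
import random

random.seed(0)

def poisson(lam):
    """Knuth's multiplicative Poisson sampler (fine for lam around 100)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

g = 4.0  # hypothetical detector gain (output counts per photoelectron)
samples = [g * poisson(100.0) for _ in range(2000)]
m = sum(samples) / len(samples)
sd = math.sqrt(sum((s - m) ** 2 for s in samples) / (len(samples) - 1))
nsf = sd / math.sqrt(m)
print(nsf)   # close to sqrt(g) = 2
```

Because NSF depends on the detector rather than the atmosphere, it can be calibrated once and then applied to a single lidar profile, which is the practical advantage claimed above.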
The scaled boundary finite element method (SBFEM) is a recently developed numerical method combining the advantages of both finite element methods (FEM) and boundary element methods (BEM), with its own special features as well. One of its most prominent advantages is the capability of calculating stress intensity factors (SIFs) directly from the stress solutions, whose singularities at crack tips are represented analytically. This advantage is exploited in this study to model static and dynamic fracture problems. For static problems, a remeshing algorithm as simple as that used in the BEM is developed, while retaining the generality and flexibility of the FEM. Fully automatic modelling of mixed-mode crack propagation is then realised by combining the remeshing algorithm with a propagation criterion. For dynamic fracture problems, a newly developed series-increasing solution to the SBFEM governing equations in the frequency domain is applied to calculate dynamic SIFs. Three plane problems are modelled. The numerical results show that the SBFEM can accurately predict static and dynamic SIFs, cracking paths, and load-displacement curves, using only a fraction of the degrees of freedom generally needed by traditional finite element methods.
The prediction of dynamic crack propagation in brittle materials is still an important issue in many engineering fields. The remeshing technique based on the scaled boundary finite element method (SBFEM) is extended to predict dynamic crack propagation in brittle materials. The structure is first divided into a number of super-elements, only the boundaries of which need to be discretized with line elements. In the SBFEM formulation, the stiffness and mass matrices of the super-elements can be coupled seamlessly with standard finite elements, so the advantages of versatility and flexibility of the FEM are well maintained. The transient response of the structure can be calculated directly in the time domain using a standard time-integration scheme. The dynamic stress intensity factor (DSIF) during crack propagation can then be solved analytically due to the semi-analytical nature of the SBFEM. Only a fine mesh discretization of the crack-tip super-element is needed to ensure the required accuracy in determining the stress intensity factor (SIF). According to the predicted crack-tip position, a simple remeshing algorithm with minimal mesh changes is suggested to simulate the dynamic crack propagation. Numerical examples indicate that the proposed method can effectively handle dynamic crack propagation in a finite rectangular plate containing a central crack. Comparison with results available in the literature shows good agreement.
We study projective synchronization with different scaling factors (PSDF) in networks of N coupled chaotic systems. Using adaptive linear control, sufficient criteria for the PSDF in symmetrically and asymmetrically coupled networks are separately given, based on the Lyapunov function method and left-eigenvalue theory. Numerical simulations for a generalized chaotic unified system are presented to verify the theoretical results.
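A minimal projective-synchronization sketch with a single scaling factor beta: a Lorenz drive system and a response forced so that the error e = y - beta*x obeys e' = -k*e. This is plain full-state active control, much simpler than the adaptive network scheme above, and is only meant to show the response tracking the scaled drive state. Gains, initial conditions, and the Lorenz parameters are standard assumed values.

```python
# Projective synchronization demo: response y tracks beta * x for a Lorenz drive.
def lorenz(s, sigma=10.0, rho=28.0, b=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - b * z]

def step(x, y, beta, k=5.0, dt=1e-3):
    """One explicit-Euler step of drive x and controlled response y."""
    fx, fy = lorenz(x), lorenz(y)
    # control: u = beta*f(x) - f(y) - k*(y - beta*x) gives e' = -k*e
    u = [beta * fx[i] - fy[i] - k * (y[i] - beta * x[i]) for i in range(3)]
    x = [x[i] + dt * fx[i] for i in range(3)]
    y = [y[i] + dt * (fy[i] + u[i]) for i in range(3)]
    return x, y

beta = -2.0                                  # scaling factor
x, y = [1.0, 1.0, 1.0], [5.0, -4.0, 7.0]     # mismatched initial states
for _ in range(20000):                       # 20 s of simulated time
    x, y = step(x, y, beta)
err = max(abs(y[i] - beta * x[i]) for i in range(3))
print(err)   # ~0: the response follows beta times the drive state
```

With a negative beta the response runs as a scaled mirror image of the drive; the adaptive criteria in the paper replace this known-model controller with gains tuned online across a whole network.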
Considering that hardware implementation of the normalized min-sum (NMS) decoding algorithm for low-density parity-check (LDPC) codes is difficult due to the uncertainty of the scale factor, an NMS decoding algorithm with a variable scale factor is proposed for the near-earth space LDPC code (8177, 7154) in the Consultative Committee for Space Data Systems (CCSDS) standard. The shift characteristics of the field-programmable gate array (FPGA) are used to optimize the quantization data of the check nodes, and finally the LDPC decoder is realized. Simulation and experimental results show that the designed FPGA-based LDPC decoder adopts the scale factor in the NMS decoding algorithm to improve decoding performance, simplify the hardware structure, accelerate convergence, and improve error-correction ability.
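The scale factor being discussed enters the check-node update at the heart of normalized min-sum: each outgoing message is the sign product of the other incoming messages times alpha times their minimum magnitude. The alpha below is a commonly used fixed value for illustration, not the variable factor designed in the paper.

```python
# Normalized min-sum check-node update for one check node.
def nms_check_update(msgs, alpha=0.75):
    """Outgoing messages: sign product * alpha * min |m| over the other inputs."""
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]
        sign = 1.0
        for m in others:
            sign = -sign if m < 0 else sign
        out.append(sign * alpha * min(abs(m) for m in others))
    return out

print(nms_check_update([2.0, -1.0, 4.0]))   # [-0.75, 1.5, -0.75]
```

Because alpha < 1 can be realized with shifts and adds (e.g. 0.75 = 1/2 + 1/4), varying it costs little on an FPGA, which is what the shift-based optimization above exploits.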
To increase the variety and security of communication, we present definitions of modified projective synchronization with complex scaling factors (CMPS) for real chaotic systems and complex chaotic systems, where the complex scaling factors establish a link between real chaos and complex chaos. Considering all situations of unknown parameters and the pseudo-gradient condition, we design adaptive CMPS schemes based on the speed-gradient method, for a real drive chaotic system with a complex response chaotic system and for a complex drive chaotic system with a real response chaotic system, respectively. Convergence factors and a dynamical control strength are added to regulate the convergence speed and increase robustness. Numerical simulations verify the feasibility and effectiveness of the presented schemes.
Global bathymetry models are usually of low accuracy along the coastlines of polar areas due to the harsh climatic environment and complex topography. Satellite altimetric gravity data can serve as a supplement and play a key role in bathymetry modeling over these regions. The Synthetic Aperture Radar (SAR) altimeters on missions like CryoSat-2 and Sentinel-3A/3B can relieve the waveform contamination that affects conventional altimeters and provide data with improved accuracy and spatial resolution. In this study, we investigate the potential of SAR altimetric gravity data for enhancing coastal bathymetry, where the effects of SAR altimetry data on local bathymetry modeling are quantified and evaluated. Furthermore, we study the effects on bathymetry modeling of different scale factor calculation approaches, where a partition-wise scheme is implemented. A numerical experiment over the South Sandwich Islands near Antarctica suggests that using SAR-based altimetric gravity data improves local coastal bathymetry modeling by a magnitude of 3.55 m within 10 km of offshore areas, compared with the model calculated without SAR altimetry data. Moreover, by using the partition-wise scheme for scale factor calculation, the quality of the coastal bathymetry model is improved by 7.34 m compared with the result derived from the traditional method. These results indicate the superiority of using SAR altimetry data in coastal bathymetry inversion.
This study addresses three challenges in the image watermarking field: robustness, imperceptibility, and capacity. To reach a high capacity, a novel similarity-based edge detection algorithm was developed that finds more edge points than traditional techniques. The colored watermark image was created by inserting a randomly generated message at the edge points detected by this algorithm. To ensure robustness and imperceptibility, the watermark and cover images were combined in the high-frequency subbands using the Discrete Wavelet Transform and Singular Value Decomposition. In the watermarking stage, the watermark image was weighted by an adaptive scaling factor calculated from the standard deviation of the similarity image. According to the results, the proposed edge-based color image watermarking technique achieves high payload capacity, imperceptibility, and robustness to all attacks. In addition, the highest performance values were obtained against the rotation attack, for which sufficient robustness had not been reached in related studies.
We revisit how we applied Weber's 1961 initiation of the quantization of early-universe fields to the issue of what holds for a wormhole mouth. While wormhole models are well understood, there is no comparable consensus as to how the mouth of a wormhole could generate signals. We try to develop a model for doing so, and then revisit the wormhole while considering a tokamak model, which we used in a different publication as a way of generating GW and gravitons.
A novel closed-loop control strategy for a silicon microgyroscope (SMG) is proposed. The SMG is sealed in a metal can package in the drive and sense modes and works at an air pressure of 10 Pa; its quality factor exceeds 10 000. Self-oscillating and closed-loop methods based on electrostatic force feedback are adopted in both the measurement and control circuits. Single-side driving and sensing methods are used to simplify the drive circuit. Dual-channel decomposition and reconstruction closed loops are applied in the sense mode. The test results demonstrate that the useful signals and quadrature signals do not interact with each other because their phases are decoupled. With a scale factor of 9.6 mV/((°)/s) over a full measurement range of ±300 (°)/s, the zero-bias stability reaches 28 (°)/h with a nonlinearity coefficient of 400 × 10^-6 and a simulated bandwidth of more than 100 Hz. The overall performance is improved by two orders of magnitude in comparison with that at atmospheric pressure.
In this paper, three existing source spectral models for stochastic finite-fault modeling of ground motion are reviewed. The three models were used to calculate the far-field received energy at a site from a vertical fault and the mean spectral ratio over 15 stations of the Northridge earthquake, and then compared. From the comparison, a measure was identified that is necessary to keep the far-field received energy independent of subfault size and to avoid overestimation of the long-period spectral level. Two improvements were made to one of the three models (the model based on dynamic corner frequency): (i) a new method to compute the subfault corner frequency was proposed, in which the subfault corner frequency is determined from a basic value calculated from the total seismic moment of the entire fault plus an increment depending on the seismic moment assigned to the subfault; and (ii) the difference in the radiation energy from each subfault was incorporated into the scaling factor. The improved model was also compared with the unimproved model through the far-field received energy and the mean spectral ratio. The comparison shows that the improved model makes the received energy more independent of subfault size and decreases the degree of overestimation of the long-period spectral amplitude.
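The dynamic-corner-frequency discussion above builds on the standard Brune omega-squared source spectrum. A reference implementation of that baseline is sketched below with typical parameter values (shear velocity, stress drop, moment), not the paper's improved formulation.

```python
# Brune omega-squared source model: acceleration spectrum and corner frequency.
import math

def brune_spectrum(f, m0, fc):
    """Acceleration source spectrum ~ M0 * (2*pi*f)**2 / (1 + (f/fc)**2)."""
    return m0 * (2 * math.pi * f) ** 2 / (1.0 + (f / fc) ** 2)

def corner_frequency(beta_kms, stress_drop_bars, m0_dyne_cm):
    """Brune corner frequency fc = 4.9e6 * beta * (stress drop / M0)**(1/3);
    beta in km/s, stress drop in bars, M0 in dyne*cm, fc in Hz."""
    return 4.9e6 * beta_kms * (stress_drop_bars / m0_dyne_cm) ** (1.0 / 3.0)

fc = corner_frequency(3.5, 50.0, 1.0e25)   # roughly an Mw ~ 6 event
print(fc)                                   # ~ 0.29 Hz
```

In the dynamic-corner-frequency approach, fc is allowed to decrease as rupture grows instead of staying fixed per subfault; the paper's improvement further ties each subfault's corner frequency to the moment assigned to it.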
Funding: supported by the National Natural Science Foundation of China (Grant No. 61003169), the Ph.D. Programs Foundation of the Ministry of Education of China (Grant No. 20090095120013), and the Technology Funding Project of China University of Mining and Technology (Grant No. 2008C004).
文摘Existing methods of physiological signal analysis based on nonlinear dynamic theories only examine the complexity difference of the signals under a single sampling frequency.We developed a technique to measure the multifractal characteristic parameter intimately associated with physiological activities through a frequency scale factor.This parameter is highly sensitive to physiological and pathological status.Mice received various drugs to imitate different physiological and pathological conditions,and the distributions of mass exponent spectrum curvature with scale factors from the electrocardiogram (ECG) signals of healthy and drug injected mice were determined.Next,we determined the characteristic frequency scope in which the signal was of the highest complexity and most sensitive to impaired cardiac function,and examined the relationships between heart rate,heartbeat dynamic complexity,and sensitive frequency scope of the ECG signal.We found that all animals exhibited a scale factor range in which the absolute magnitudes of ECG mass exponent spectrum curvature achieve the maximum,and this range (or frequency scope) is not changed with calculated data points or maximal coarse-grained scale factor.Further,the heart rate of mice was not necessarily associated with the nonlinear complexity of cardiac dynamics,but closely related to the most sensitive ECG frequency scope determined by characterization of this complex dynamic features for certain heartbeat conditions.Finally,we found that the health status of the hearts of mice was directly related to the heartbeat dynamic complexity,both of which were positively correlated within the scale factor around the extremum region of the multifractal parameter.With increasing heart rate,the sensitive frequency scope increased to a relatively high location.In conclusion,these data provide important theoretical and practical data for the early diagnosis of cardiac disorders.
文摘Geometric or sub-scale modeling techniques are used for the evaluation of large and complex dynamic structures to ensure accurate reproduction of load path and thus leading to true dynamic characteristics of such structures. The sub-scale modeling technique is very effective in the prediction of vibration characteristics of original large structure when the experimental testing is not feasible due to the absence of a large testing facility. Previous researches were more focused on free and harmonic vibration case with little or no consideration for readily encountered random vibration. A sub-scale modeling technique is proposed for estimating the vibration characteristics of any large scale structure such as Launch vehicles, Mega structures, etc., under various vibration load cases by utilizing precise scaled-down model of that dynamic structure. In order to establish an analytical correlation between the original structure and its scaled models, different scale models of isotropic cantilever beam are selected and analyzed under various vibration conditions( i.e. free, harmonic and random) using finite element package ANSYS. The developed correlations are also validated through experimental testing The prediction made from the vibratory response of the scaled-down beam through the established sets of correlation are found similar to the response measured from the testing of original beam structure. The established correlations are equally applicable in the prediction of dynamic characteristics of any complex structure through its scaled-down models. This paper presents modified sub-scale modeling technique that enables accurate prediction of vibration characteristics of large and complex structure under not only sinusoidal but also for random vibrations.
Abstract: A crucial and challenging issue for researchers is presenting an appropriate approach to evaluate the aerodynamic characteristics of air cushion vehicles (ACVs) in terms of system design parameters, including a suitable approach to analyzing the effect of geometric shape on the aerodynamic characteristics of ACVs. The main novelty of this paper lies in presenting an innovative method to study the effect of geometric shape on the air cushion lift force, which has not been investigated thus far. Moreover, this paper introduces, for the first time, a new approximate mathematical formula for calculating the air cushion lift force in terms of parameters including the air gap, lateral gaps, air inlet velocity, and scaling factor. We calculate the aerodynamic lift force applied to nine different shapes of the air cushions used in ACVs through the ANSYS Fluent software. The geometric shapes studied are rectangular, square, equilateral-triangle, circular, and elliptic shapes, together with four combined shapes: circle-rectangle, circle-square, hexagonal, and fillet square. Results showed that the cushion with a circular pattern produces the highest lift force among the geometric shapes under the same conditions. The increase in cushion lift force of the fillet square relative to the plain square can be attributed to the fillet and its increasing radius.
Abstract: In the software engineering literature, it is commonly believed that economies of scale do not occur in software Development and Enhancement Projects (D&EP): their per-unit cost does not decrease but increases as the product size of such projects grows, i.e., they exhibit diseconomies of scale. The significance of this phenomenon stems from the fact that it is commonly considered one of the fundamental objective causes of their low effectiveness. This is particularly significant for Business Software Systems (BSS) D&EP, which are characterized by exceptionally low effectiveness compared to other software D&EP. This paper therefore aims at answering the following two questions: (1) Do economies of scale really not occur in BSS D&EP? (2) If economies of scale may occur in BSS D&EP, what factors promote them? These issues belong to the economics problems of software engineering research and practice.
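The economies-versus-diseconomies question above is often framed as a power-law cost model, cost = a * size^b, where b > 1 indicates diseconomies of scale (per-unit cost rising with size) and b < 1 indicates economies of scale. The sketch below fits the exponent by least squares on logs; the project data are invented for illustration, not taken from the paper.

```python
import math

# Hedged sketch: detecting (dis)economies of scale by fitting
# cost = a * size^b. Data are illustrative, not empirical.

def fit_power_law(sizes, costs):
    """Least-squares fit of log(cost) = log(a) + b * log(size)."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

sizes = [100, 200, 400, 800]   # e.g. function points (invented)
costs = [50, 120, 290, 700]    # effort units (invented)
a, b = fit_power_law(sizes, costs)
print(b > 1)   # True -> per-unit cost grows with size: diseconomies
```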
Funding: The National High Technology Research and Development Program of China (863 Program) (No. 2007AA01Z280)
Abstract: To overcome drawbacks of conventional methods such as irregular circuit construction and low system throughput, a new scale factor correction scheme for the coordinate rotation digital computer (CORDIC) algorithm is proposed. Based on the relationship between the iteration formulae, a new iteration formula is introduced, which reduces the correction operation to several simple shift and add operations. As a key part of the analysis, the effects of rounding error are analyzed mathematically, and it is concluded that these effects can be reduced by an appropriate selection of coefficients in the iteration formula. The model is then set up in MATLAB and coded in the Verilog HDL language. The proposed algorithm is also synthesized and verified on a field-programmable gate array (FPGA). The results show that, for the same precision, the new scheme requires only one additional clock cycle and no change to the elementary iteration compared with the conventional algorithm. In addition, the circuit realization is regular and the change in system throughput is minimal.
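For readers unfamiliar with why a scale factor correction is needed at all: each CORDIC micro-rotation stretches the vector, so the accumulated gain K must be divided out. The sketch below shows the conventional approach, with K corrected in a final multiply; the paper's contribution is to fold this correction into the shift-and-add iterations themselves, which this sketch does not reproduce.

```python
import math

# Hedged sketch of conventional CORDIC rotation with explicit scale
# factor correction (gain K divided out at the end).

def cordic_cos_sin(theta, iterations=32):
    """Compute (cos, sin) by CORDIC micro-rotations, correcting the gain."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))  # accumulated 1/gain
    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        # each step is a shift-and-add in fixed-point hardware
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * K, y * K   # scale factor correction

c, s = cordic_cos_sin(math.pi / 6)
print(round(c, 6), round(s, 6))   # ~ (0.866025, 0.5)
```

In hardware, the final multiply by K is exactly the irregular, throughput-limiting step the proposed scheme replaces with shifts and adds.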
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 61007040)
Abstract: In order to analyze the effect of wavelength-dependent radiation-induced attenuation (RIA) on the mean transmission wavelength in optical fiber and on the scale factor of interferometric fiber optic gyroscopes (IFOGs), three types of polarization-maintaining (PM) fibers are tested using a 60Co γ-radiation source. The different mean wavelength shift (MWS) behaviors observed for the different fibers are interpreted by color-center theory, involving dose-rate-dependent absorption bands in the ultraviolet and visible ranges and total-dose-dependent near-infrared absorption bands. To evaluate the mean wavelength variation in a fiber coil and the induced scale factor change for space-borne IFOGs under the low radiation doses of a space environment, the influence of dose rate on the mean wavelength is investigated by testing four germanium (Ge) doped fibers and two germanium-phosphorus (Ge-P) codoped fibers irradiated at different dose rates. Experimental results indicate that the Ge-doped fibers show the smallest mean wavelength shift during irradiation, and that their mean optical transmission wavelength shifts to a shorter wavelength in a low-dose-rate radiation environment. Finally, the change in the scale factor of an IFOG resulting from the mean wavelength shift is estimated and tested, and it is found that this significant radiation-induced scale factor variation must be considered during the design of space-borne IFOGs.
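The link between a mean wavelength shift and the IFOG scale factor can be made concrete with the standard interferometric relation SF = 2πLD/(λc), which gives dSF/SF ≈ -dλ/λ to first order. The coil length, diameter, and shift magnitude below are assumed round numbers for illustration, not the paper's test values.

```python
import math

# Hedged sketch: mapping a radiation-induced mean wavelength shift (MWS)
# into a relative IFOG scale factor change. All numbers are assumed.

C = 299792458.0   # speed of light, m/s

def ifog_scale_factor(L, D, lam):
    """Interferometric scale factor SF = 2*pi*L*D / (lambda*c)."""
    return 2 * math.pi * L * D / (lam * C)

L, D = 1000.0, 0.1        # 1 km coil, 10 cm diameter (assumed)
lam0 = 1310e-9            # mean transmission wavelength, m
dlam = -0.05e-9           # -0.05 nm radiation-induced MWS (assumed)

sf0 = ifog_scale_factor(L, D, lam0)
sf1 = ifog_scale_factor(L, D, lam0 + dlam)
rel_ppm = (sf1 - sf0) / sf0 * 1e6
print(round(rel_ppm, 2))  # ~ +38 ppm for a -0.05 nm shift
```

Even sub-0.1 nm shifts thus translate into tens-of-ppm scale factor errors, which is why the abstract flags the effect for space-borne IFOG design.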
Funding: Supported by the National Key R&D Program of China (Grant No. 2018YFB1700704).
Abstract: Multifidelity surrogates (MFSs) replace computationally intensive models by synergistically combining information from different-fidelity data, with a significant improvement in modeling efficiency. In this paper, a modified MFS (MMFS) model based on a radial basis function (RBF) is proposed, in which two fidelities of information can be analyzed by adaptively obtaining the scale factor. In the MMFS, an RBF is employed to establish the low-fidelity model. The correlation matrix of the high-fidelity samples and the corresponding low-fidelity responses are integrated into an expansion matrix to determine the scaling function parameters. The shape parameters of the basis function are optimized by minimizing the leave-one-out cross-validation error of the high-fidelity sample points. The performance of the MMFS is compared with those of other MFS models (MFS-RBF and cooperative RBF) and a single-fidelity RBF using four benchmark test functions, by which the impacts of different high-fidelity sample sizes on prediction accuracy are also analyzed. The sensitivity of the MMFS model to the randomness of the design of experiments (DoE) is investigated by repeating the sampling plans with 20 different DoEs. A stress analysis of a steel plate is presented to highlight the prediction ability of the proposed MMFS model. This research proposes a new multifidelity modeling method that can fully use two-fidelity sample sets, rapidly calculate model parameters, and exhibit good prediction accuracy and robustness.
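The scale-factor idea at the heart of such multifidelity surrogates can be shown in its most minimal form: approximate the expensive model as y_hi(x) ≈ ρ · y_lo(x), fitting ρ by least squares on the scarce high-fidelity samples. This is deliberately much simpler than the paper's MMFS (which uses RBF models and an adaptive scale function); the two test functions and sample points are invented.

```python
# Hedged, minimal sketch of a multiplicative scale-factor multifidelity
# surrogate. Constant rho only; the paper's model is adaptive and RBF-based.

def low_fidelity(x):
    return 0.5 * (6 * x - 2) ** 2        # cheap model (illustrative)

def high_fidelity(x):
    return (6 * x - 2) ** 2 + 0.3        # expensive model (illustrative)

xs_hi = [0.0, 0.4, 0.6, 1.0]             # scarce high-fidelity samples
rho = sum(high_fidelity(x) * low_fidelity(x) for x in xs_hi) / \
      sum(low_fidelity(x) ** 2 for x in xs_hi)

def mfs(x):
    """Multifidelity prediction: scaled low-fidelity model."""
    return rho * low_fidelity(x)

err = max(abs(mfs(x) - high_fidelity(x)) for x in [0.1, 0.5, 0.9])
print(rho > 1.5, err < 1.5)   # rho ~ 2 recovers the fidelity gap cheaply
```

Letting ρ vary with x (the "adaptively obtained scale factor" of the abstract) is what turns this constant-ratio correction into the full MMFS.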
Funding: Supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB05040300) and the National Natural Science Foundation of China (Grant No. 41205119)
Abstract: Estimation of the random errors due to shot noise of photomultiplier tube (PMT) or avalanche photodiode (APD) detectors is very necessary in lidar observation. Because the number of incident electrons follows a Poisson distribution, the standard deviation remains proportional to the square root of the mean value. Based on this relationship, a noise scale factor (NSF) is introduced into the estimation, which requires only a single data sample. This method avoids the disturbance of atmospheric fluctuations in the calculation of random errors. The results show that the method is feasible and reliable.
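The Poisson relationship the NSF method relies on is easy to verify numerically: for pure shot noise, std = NSF · sqrt(mean) with NSF ≈ 1 in raw photon counts (detector gain and digitization make NSF differ from 1 in practice). The sampler and the synthetic count level below are illustrative, not lidar data.

```python
import random, math

# Hedged sketch: the std ~ NSF * sqrt(mean) relationship behind
# noise-scale-factor error estimation. Synthetic Poisson counts only.

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for modest lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

rng = random.Random(7)
mean_counts = 50.0
sample = [poisson(mean_counts, rng) for _ in range(5000)]

m = sum(sample) / len(sample)
var = sum((x - m) ** 2 for x in sample) / (len(sample) - 1)
nsf = math.sqrt(var) / math.sqrt(m)   # ~1 for pure shot noise
print(round(nsf, 2))
```

Once NSF is calibrated, the random error of any single lidar profile point with mean signal S can be estimated as NSF · sqrt(S), with no need to average over atmospheric fluctuations.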
Funding: Supported by the National Natural Science Foundation of China (50579081) and the Australian Research Council (DP0452681). The English text was polished by Keren Wang.
Abstract: The scaled boundary finite element method (SBFEM) is a recently developed numerical method combining the advantages of both finite element methods (FEM) and boundary element methods (BEM), with special features of its own. One of its most prominent advantages is the capability of calculating stress intensity factors (SIFs) directly from stress solutions whose singularities at crack tips are analytically represented. This advantage is exploited in this study to model static and dynamic fracture problems. For static problems, a remeshing algorithm as simple as that used in the BEM is developed, while retaining the generality and flexibility of the FEM. Fully automatic modeling of mixed-mode crack propagation is then realized by combining the remeshing algorithm with a propagation criterion. For dynamic fracture problems, a newly developed series-increasing solution to the SBFEM governing equations in the frequency domain is applied to calculate dynamic SIFs. Three plane problems are modeled. The numerical results show that the SBFEM can accurately predict static and dynamic SIFs, cracking paths, and load-displacement curves, using only a fraction of the degrees of freedom generally needed by traditional finite element methods.
Funding: Supported by the Key Program of the National Natural Science Foundation of China (No. 51138001), the Science Fund for Creative Research Groups of the National Natural Science Foundation of China (No. 51121005), the Fundamental Research Funds for the Central Universities (DUT13LK16), the Young Scientists Fund of the National Natural Science Foundation of China (No. 51109134), and the China Postdoctoral Science Foundation (No. 2011M500814)
Abstract: The prediction of dynamic crack propagation in brittle materials is still an important issue in many engineering fields. The remeshing technique based on the scaled boundary finite element method (SBFEM) is extended to predict dynamic crack propagation in brittle materials. The structure is first divided into a number of super-elements, only the boundaries of which need to be discretized with line elements. In the SBFEM formulation, the stiffness and mass matrices of the super-elements can be coupled seamlessly with standard finite elements, so the versatility and flexibility of the FEM are well maintained. The transient response of the structure can be calculated directly in the time domain using a standard time-integration scheme. The dynamic stress intensity factor (DSIF) during crack propagation can then be solved analytically thanks to the semi-analytical nature of the SBFEM. Only a fine mesh discretization of the crack-tip super-element is needed to ensure the required accuracy in determining the stress intensity factor (SIF). According to the predicted crack-tip position, a simple remeshing algorithm with minimal mesh changes is suggested to simulate dynamic crack propagation. Numerical examples indicate that the proposed method can effectively handle dynamic crack propagation in a finite rectangular plate containing a central crack. Comparison with results available in the literature shows good agreement.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 60575038)
Abstract: We study projective synchronization with different scaling factors (PSDF) in networks of N coupled chaotic systems. Using adaptive linear control, sufficient criteria for PSDF in symmetrically and asymmetrically coupled networks are given separately, based on the Lyapunov function method and left-eigenvalue theory. Numerical simulations for a generalized chaotic unified system are presented to verify the theoretical results.
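Projective synchronization with a scaling factor can be illustrated in its simplest two-system form: steer a response system so that y(t) → α·x(t) for a chaotic drive trajectory x(t). The sketch below uses a Lorenz drive and a basic active linear control (error dynamics ė = -k·e); the paper's setting — N network nodes with adaptive control — is deliberately beyond this minimal example.

```python
# Hedged sketch: projective synchronization y -> alpha * x via active
# linear control on a Lorenz drive. Illustrative, not the paper's scheme.

def lorenz(s):
    x, y, z = s
    return (10.0 * (y - x), x * (28.0 - z) - y, x * y - 8.0 / 3.0 * z)

alpha, k, dt = 2.0, 5.0, 0.001
x = [1.0, 1.0, 1.0]    # drive state
y = [5.0, -3.0, 9.0]   # response state (mismatched start)

for _ in range(20000):
    fx = lorenz(x)
    # control: follow alpha*f(x) plus linear feedback on the error y - alpha*x
    u = [alpha * fx[i] - k * (y[i] - alpha * x[i]) for i in range(3)]
    x = [x[i] + dt * fx[i] for i in range(3)]
    y = [y[i] + dt * u[i] for i in range(3)]

err = max(abs(y[i] - alpha * x[i]) for i in range(3))
print(err < 1e-3)   # response has locked onto alpha times the drive
```

With this control law the error e = y - αx obeys ė = -k·e exactly, so it decays regardless of the chaotic drive; the network criteria in the paper generalize this Lyapunov argument to coupled nodes.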
Abstract: Because hardware implementation of the normalized min-sum (NMS) decoding algorithm for low-density parity-check (LDPC) codes is difficult due to the uncertainty of the scale factor, an NMS decoding algorithm with a variable scale factor is proposed for the near-Earth space LDPC codes (8177,7154) in the Consultative Committee for Space Data Systems (CCSDS) standard. The shift characteristics of the field-programmable gate array (FPGA) are used to optimize the quantized data of the check nodes, and the full LDPC decoder function is then realized. Simulation and experimental results show that the designed FPGA-based LDPC decoder adopts the scale factor in the NMS decoding algorithm to improve decoding performance, simplify the hardware structure, accelerate convergence, and improve error-correction ability.
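The two ideas in the abstract — the normalized min-sum check-node update and a shift-friendly scale factor — can be sketched together. Here α = 0.75 is realized as x - (x >> 2), mirroring how an FPGA applies the factor with shifts and subtracts; the message values and the fixed α are illustrative and do not reproduce the variable-scale-factor scheme or the CCSDS (8177,7154) decoder itself.

```python
# Hedged sketch: normalized min-sum (NMS) check-node update with a
# shift-and-subtract scale factor, as used in FPGA LDPC decoders.

def scale_075(x):
    """Multiply a non-negative integer message by 0.75 via shift/subtract."""
    return x - (x >> 2)

def nms_check_node(msgs):
    """For each edge, return sign product times scaled min over the others."""
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]
        sign = 1
        for m in others:
            if m < 0:
                sign = -sign
        mag = min(abs(m) for m in others)
        out.append(sign * scale_075(mag))
    return out

print(nms_check_node([8, -5, 12, -3]))   # -> [3, -3, 3, -4]
```

Making α vary (per iteration or per SNR region), as the paper proposes, only changes which shift/subtract combination is selected, so the datapath stays multiplier-free.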
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 61273088, 10971120, and 61001099) and the Natural Science Foundation of Shandong Province, China (Grant No. ZR2010FM010)
Abstract: To increase the variety and security of communication, we present definitions of modified projective synchronization with complex scaling factors (CMPS) for real chaotic systems and complex chaotic systems, where the complex scaling factors establish a link between real chaos and complex chaos. Considering all situations of unknown parameters and the pseudo-gradient condition, we design adaptive CMPS schemes based on the speed-gradient method, for a real drive chaotic system with a complex response chaotic system and for a complex drive chaotic system with a real response chaotic system, respectively. Convergence factors and a dynamical control strength are added to regulate the convergence speed and increase robustness. Numerical simulations verify the feasibility and effectiveness of the presented schemes.
Funding: Supported by the National Natural Science Foundation of China (No. 42004008), the Natural Science Foundation of Jiangsu Province, China (No. BK20190498), the Fundamental Research Funds for the Central Universities (No. B220202055), and the State Scholarship Fund from the Chinese Scholarship Council (No. 201306270014).
Abstract: Global bathymetry models are usually of low accuracy along the coastlines of polar areas due to the harsh climatic environment and complex topography. Satellite altimetric gravity data can serve as a supplement and play a key role in bathymetry modeling over these regions. The Synthetic Aperture Radar (SAR) altimeters in missions like CryoSat-2 and Sentinel-3A/3B can relieve the waveform contamination that affects conventional altimeters and provide data with improved accuracy and spatial resolution. In this study, we investigate the potential application of SAR altimetric gravity data in enhancing coastal bathymetry, where the effects on local bathymetry modeling introduced by SAR altimetry data are quantified and evaluated. Furthermore, we study the effects on bathymetry modeling of different scale factor calculation approaches, implementing a partition-wise scheme. A numerical experiment over the South Sandwich Islands near Antarctica suggests that using SAR-based altimetric gravity data improves local coastal bathymetry modeling by 3.55 m within 10 km of offshore areas, compared with the model calculated without SAR altimetry data. Moreover, by using the partition-wise scheme for scale factor calculation, the quality of the coastal bathymetry model is improved by 7.34 m compared with the result derived from the traditional method. These results indicate the superiority of using SAR altimetry data in coastal bathymetry inversion.
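The partition-wise scale factor idea can be sketched in a stripped-down form: within each partition, estimate a factor linking residual gravity to depth from control points, instead of fitting one global factor over the whole area. The linear gravity-depth link, the two partitions, and all numbers below are purely illustrative assumptions, not the paper's method or data.

```python
# Hedged sketch: partition-wise vs global scale factor estimation for
# gravity-based bathymetry inversion. All values are invented.

def fit_scale(grav, depth):
    """Least-squares factor s minimizing sum of (depth - s*grav)^2."""
    return sum(g * d for g, d in zip(grav, depth)) / sum(g * g for g in grav)

# control points (residual gravity, shipborne depth) in two partitions
partitions = {
    "offshore":  ([2.0, 3.5, 5.0], [-40.0, -71.0, -99.0]),
    "nearshore": ([1.0, 1.5, 2.5], [-12.0, -19.0, -31.0]),
}

factors = {name: fit_scale(g, d) for name, (g, d) in partitions.items()}

all_g = [g for gs, _ in partitions.values() for g in gs]
all_d = [d for _, ds in partitions.values() for d in ds]
global_factor = fit_scale(all_g, all_d)

print(round(factors["offshore"], 2), round(factors["nearshore"], 2),
      round(global_factor, 2))
```

Because the fitted factors differ between partitions, a single global factor misfits both regions, which is the motivation for the partition-wise scheme.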
Abstract: This study addresses three challenges in the image watermarking field: robustness, imperceptibility, and capacity. To reach a high capacity, a novel similarity-based edge detection algorithm was developed that finds more edge points than traditional techniques. The colored watermark image was created by inserting a randomly generated message at the edge points detected by this algorithm. To ensure robustness and imperceptibility, the watermark and cover images were combined in the high-frequency subbands using the Discrete Wavelet Transform and Singular Value Decomposition. In the watermarking stage, the watermark image was weighted by an adaptive scaling factor calculated from the standard deviation of the similarity image. According to the results, the proposed edge-based color image watermarking technique achieves high payload capacity, imperceptibility, and robustness to all tested attacks. In addition, the highest performance values were obtained against the rotation attack, to which related studies have not achieved sufficient robustness.
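The adaptive-scaling-factor step can be sketched in a simplified, pixel-domain form: derive the embedding strength from the standard deviation of a similarity image, then embed additively. The paper embeds in DWT/SVD subbands, which this sketch does not reproduce; the `base` and `k` constants and all pixel values are assumptions for illustration.

```python
import math

# Hedged sketch: adaptive watermark strength from the standard deviation
# of a similarity image, with simple additive embedding. Illustrative only.

def std_dev(block):
    m = sum(block) / len(block)
    return math.sqrt(sum((p - m) ** 2 for p in block) / len(block))

def adaptive_alpha(similarity, base=0.05, k=0.002):
    """Embed more strongly where the similarity image varies more."""
    return base + k * std_dev(similarity)

cover      = [120, 130, 125, 140, 135, 128]   # illustrative pixels
watermark  = [1, -1, 1, 1, -1, -1]            # bipolar message bits
similarity = [10, 60, 35, 80, 5, 50]          # edge/similarity response

alpha = adaptive_alpha(similarity)
marked = [c + alpha * w for c, w in zip(cover, watermark)]
extracted = [1 if m - c > 0 else -1 for m, c in zip(marked, cover)]
print(extracted == watermark)   # True: message recoverable
```

Tying alpha to local variability is what lets the scheme raise strength (robustness) in busy regions without visible artifacts in smooth ones.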
Abstract: We revisit how Weber in 1961 initiated the quantization of early-universe fields, and apply that program to the question of what happens at a wormhole mouth. While wormhole models are well understood, there is no comparable consensus on how the mouth of a wormhole could generate signals. We develop a model for doing so and then revisit it, considering the wormhole alongside a tokamak model we used in a different publication as a way of generating gravitational waves and gravitons.
Funding: The National High Technology Research and Development Program of China (863 Program) (No. 2002AA812038) and the National Defense Pre-Research Support Program (No. 41308050109)
Abstract: A novel closed-loop control strategy for a silicon microgyroscope (SMG) is proposed. The SMG is sealed in a metal-can package for the drive and sense modes and works under an air pressure of 10 Pa; its quality factor exceeds 10 000. Self-oscillating and closed-loop methods based on electrostatic force feedback are adopted in both the measurement and control circuits. Single-side driving and sensing methods are used to simplify the drive circuit. Dual-channel decomposition and reconstruction closed loops are applied in the sense mode. Test results demonstrate that the useful signals and quadrature signals do not interact with each other because of the decoupling of their phases. With a scale factor of 9.6 mV/((°)/s) over a full measurement range of ±300 (°)/s, the zero-bias stability reaches 28 (°)/h with a nonlinearity coefficient of 400 × 10^-6 and a simulated bandwidth of more than 100 Hz. The overall performance is improved by two orders of magnitude compared with operation at atmospheric pressure.
Funding: National Natural Science Foundation of China under Grant Nos. 50778058 and 90715038; National Key Technology R&D Program under Contract No. 2006BAC13B02
Abstract: In this paper, three existing source spectral models for stochastic finite-fault modeling of ground motion are reviewed. The three models are used to calculate the far-field received energy at a site from a vertical fault and the mean spectral ratio over 15 stations of the Northridge earthquake, and are then compared. From the comparison, a measure necessary to keep the far-field received energy independent of subfault size and to avoid overestimation of the long-period spectral level was observed. Two improvements were made to one of the three models (i.e., the model based on dynamic corner frequency): (i) a new method to compute the subfault corner frequency was proposed, in which the subfault corner frequency is determined from a basic value calculated from the total seismic moment of the entire fault plus an increment depending on the seismic moment assigned to the subfault; and (ii) the difference in radiated energy from each subfault was incorporated into the scaling factor. The improved model was compared with the unimproved model through the far-field received energy and the mean spectral ratio. The comparison shows that the improved model makes the received energy more independent of subfault size and reduces the overestimation of the long-period spectral amplitude.
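The dynamic-corner-frequency idea that the improved model builds on can be sketched with the commonly used Motazedian-Atkinson-style relation, in which the corner frequency assigned to a subfault decreases as the number of ruptured subfaults grows. This sketch shows only that baseline behavior, not the paper's basic-value-plus-increment modification; the moment, shear velocity, and stress drop are assumed round numbers.

```python
# Hedged sketch: dynamic corner frequency in stochastic finite-fault
# modeling. Constants follow the common 4.9e6 formulation; inputs assumed.

def dynamic_corner_freq(n_ruptured, n_total, m0_total, beta, stress_drop):
    """fc = 4.9e6 * beta * (stress_drop / (N_R * M0_ave))**(1/3),
    with M0 in dyne-cm, beta in km/s, stress_drop in bars, fc in Hz."""
    m0_ave = m0_total / n_total   # average subfault moment
    return 4.9e6 * beta * (stress_drop / (n_ruptured * m0_ave)) ** (1.0 / 3.0)

m0 = 1.0e26          # total seismic moment, dyne-cm (assumed)
beta, dsig = 3.7, 50.0
fcs = [dynamic_corner_freq(n, 100, m0, beta, dsig) for n in (1, 10, 100)]
print([round(f, 3) for f in fcs])   # corner frequency drops as rupture grows
```

Because the late-rupture subfaults get low corner frequencies, naive versions of this model can inflate the long-period spectral level, which is exactly the overestimation the paper's modified corner-frequency rule targets.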