The corrosion rate is a crucial factor that impacts the longevity of materials in different applications. After friction stir processing (FSP), the refined grain structure leads to a notable decrease in corrosion rate. However, a better understanding of the correlation between the FSP process parameters and the corrosion rate is still lacking. The current study used machine learning to establish the relationship between the corrosion rate and the FSP process parameters (rotational speed, traverse speed, and shoulder diameter) for WE43 alloy. A Taguchi L27 design of experiments was used for the experimental analysis. In addition, synthetic data were generated using particle swarm optimization for virtual sample generation (VSG). Applying VSG increased the prediction accuracy of the machine learning models. A sensitivity analysis was performed using Shapley Additive Explanations to determine the key factors affecting the corrosion rate; the shoulder diameter had a markedly stronger influence than the traverse speed. A graphical user interface (GUI) was created to predict the corrosion rate from the identified factors. Although this study focuses on the WE43 alloy, its findings can also be used to predict the corrosion rate of other magnesium alloys.
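As a rough illustration of the workflow described above, the following sketch fits a regression model to the three FSP parameters and ranks them with Shapley Additive Explanations. The file name, column names, and choice of a random forest are illustrative assumptions, not details taken from the study.

```python
# Hypothetical sketch: regress corrosion rate on FSP parameters and rank them with SHAP.
# File name, column names, and the random-forest model are assumptions, not the paper's setup.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
import shap

df = pd.read_csv("fsp_we43.csv")            # assumed file holding the 27 Taguchi L27 runs
X = df[["rotational_speed", "traverse_speed", "shoulder_diameter"]]
y = df["corrosion_rate"]

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)        # Shapley Additive Explanations for tree models
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)            # visual ranking of parameter influence
```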
Data processing of small samples is an important and valuable research problem in electronic equipment testing. Because it is difficult and complex to determine the probability distribution of small samples, traditional probability theory cannot readily be used to process such samples or to assess their degree of uncertainty. Using grey relational theory and norm theory, this article proposes the grey distance information approach, which is based on the grey distance information quantity of a sample and the average grey distance information quantity of the samples. The definitions of these two quantities, together with their characteristics and algorithms, are introduced. The related problems, including the algorithm for the estimated value, the standard deviation, and the acceptance and rejection criteria for the samples and estimated results, are also addressed. Moreover, the information whitening ratio is introduced to select the weighting algorithm and to compare different samples. Several examples demonstrate the application of the proposed approach and show that it is feasible and effective while placing no demands on the probability distribution of the small samples.
Wideband spectrum sensing with a high-speed analog-to-digital converter (ADC) presents a challenge for practical systems. The Nyquist folding receiver (NYFR) is a promising scheme for achieving cost-effective real-time spectrum sensing, but it is subject to the complexity of processing the modulated outputs. Here, a multipath NYFR architecture with a stepped sampling rate across the different paths is proposed. The different numbers of digital channels for each path are designed based on the Chinese remainder theorem (CRT). The detectable frequency range is then divided into multiple frequency grids, and the Nyquist zone (NZ) of the input can be obtained by sensing these grids. High-precision parameter estimation is thus performed by exploiting the NYFR characteristics. Compared with existing methods, the proposed scheme overcomes the challenges of NZ estimation, information loss, heavy computation, low accuracy, and high false-alarm probability. Comparative simulation experiments verify the effectiveness of the proposed architecture.
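The CRT step named in the abstract can be illustrated with a minimal solver: a frequency-grid index observed modulo the pairwise-coprime channel counts of the different paths is recovered uniquely. The channel counts 5 and 7 below are illustrative assumptions.

```python
# Minimal Chinese-remainder-theorem sketch: recover a grid index from its residues in
# paths whose channel counts (moduli) are pairwise coprime. Counts 5 and 7 are assumed.
from math import prod

def crt(residues, moduli):
    """Solve x = r_i (mod m_i) for pairwise-coprime moduli m_i."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) is the modular inverse (Python 3.8+)
    return x % M

true_index = 23
print(crt([true_index % 5, true_index % 7], [5, 7]))   # -> 23
```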
This paper contributes to the structural reliability problem by presenting a novel approach that enables identification of stochastic oscillatory processes as a critical input for given mechanical models. The identification development follows a transparent image processing paradigm, completely independent of state-of-the-art structural dynamics, aiming to deliver a simple and general-purpose method. Validation of the proposed importance sampling strategy is based on multi-scale clusters of realizations of digitally generated non-stationary stochastic processes. Good agreement with reference pure Monte Carlo results indicates significant potential for reducing the computational cost of estimating first-passage probabilities, an important feature in fields such as probabilistic seismic design and risk assessment generally.
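For readers unfamiliar with the general idea, the sketch below shows plain importance sampling on a toy rare-event problem: a tail probability is estimated by drawing from a shifted proposal and reweighting with the likelihood ratio. It is only a generic illustration of the variance-reduction principle, not the paper's image-processing-based identification or its excitation models.

```python
# Generic importance-sampling sketch (a toy example, not the paper's method): estimate the
# rare tail probability P(X > b) for X ~ N(0,1) by sampling from a shifted proposal N(b,1).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
b, n = 4.0, 10_000                           # threshold and sample size

x = rng.normal(loc=b, size=n)                # proposal density q = N(b, 1)
w = norm.pdf(x) / norm.pdf(x, loc=b)         # likelihood ratio p(x) / q(x)
p_is = np.mean((x > b) * w)                  # unbiased IS estimate of the tail probability

print(p_is, 1.0 - norm.cdf(b))               # compare with the exact value ~3.17e-5
```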
We propose a novel all-optical sampling method using nonlinear polarization rotation in a semiconductor optical amplifier. A rate-equation model capable of describing the all-optical sampling mechanism is presented. Based on this model, we investigate the optimized operating parameters of the proposed system by simulating the output intensity of the probe light as a function of the input polarization angle, the phase induced by the polarization controller, and the orientation of the polarization beam splitter. The simulated results show that a good linear slope and a large linear dynamic range can be obtained, which is suitable for all-optical sampling. The operating power of the pump light can be less than 1 mW. The presented all-optical sampling method can potentially operate at sampling rates up to hundreds of GS/s and requires only low optical power.
A novel data stream partitioning method is proposed to resolve problems of range-aggregation continuous queries over parallel streams in the power industry. The first step of this method is to sample the data in parallel, implemented as an extended reservoir-sampling algorithm. A skip factor based on the change ratio of data values is introduced to describe the distribution characteristics of the data values adaptively. The second step is to partition the flux of the data streams evenly, implemented with two alternative equal-depth histogram generation algorithms that fit different cases: one for incremental maintenance based on heuristics and the other for periodic updates that generate an approximate partition vector. Experimental results on actual data prove that the method is efficient, practical, and suitable for processing time-varying data streams.
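The first step builds on classical reservoir sampling; a plain Algorithm R sketch is given below as the baseline. The paper's skip factor and equal-depth histogram partitioning are not reproduced here.

```python
# Classic reservoir sampling (Algorithm R), the baseline that the extended algorithm builds
# on; the skip factor and histogram partitioning from the paper are not reproduced here.
import random

def reservoir_sample(stream, k):
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)              # fill the reservoir with the first k items
        else:
            j = random.randint(0, i)         # each later item replaces one with prob. k/(i+1)
            if j < k:
                sample[j] = item
    return sample

print(reservoir_sample(range(10_000), 5))
```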
China's continental deposition basins are characterized by complex geological structures and varied reservoir lithologies, so high-precision exploration methods are needed. High-density spatial sampling is a new technology for increasing the accuracy of seismic exploration. We briefly discuss point-source and point-receiver technology, analyze the in-situ high-density spatial sampling method, introduce the symmetric sampling principles presented by Gijs J. O. Vermeer, and discuss high-density spatial sampling technology from the point of view of wave-field continuity. We emphasize the analysis of high-density spatial sampling characteristics, including the advantages of densely sampled first breaks for investigating near-surface structure and improving static-correction precision, and the use of dense receiver spacing at short offsets to increase the effective coverage at shallow depth and the accuracy of reflection imaging. Coherent noise is not aliased, so the precision of noise analysis and suppression increases. High-density spatial sampling enhances wave-field continuity and the accuracy of various mathematical transforms, which benefits wave-field separation. Finally, we point out that the difficult part of high-density spatial sampling technology is the data processing, and that more research is needed on methods for analyzing and processing the resulting huge volumes of seismic data.
This paper presents an efficient technique for processing 3D meshed surfaces via spherical wavelets. More specifically, an input 3D mesh is first transformed into a spherical vector signal by a fast, low-distortion spherical parameterization approach based on symmetry analysis of 3D meshes. This signal is then sampled on the sphere with the help of an adaptive sampling scheme. Finally, the sampled signal is transformed into the wavelet domain by the spherical wavelet transform, where many 3D mesh processing operations, such as smoothing, enhancement, and compression, can be implemented. Our main contribution lies in incorporating a fast, low-distortion spherical parameterization approach and an adaptive sampling scheme into the framework for processing 3D meshed surfaces by spherical wavelets, which can handle surfaces with complex shapes. A number of experimental examples demonstrate that our algorithm is robust and efficient.
This fully digital beam position measurement instrument is designed for beam position monitoring and machine research in the Shanghai Synchrotron Radiation Facility. The signals received from four position-sensitive detectors are narrow pulses with a repetition rate up to 499.654 MHz and a pulse width of around 100 ps, and their dynamic range can vary over more than 40 dB in machine research. By employing the under-sampling technique based on high-speed, high-resolution A/D conversion, the entire processing procedure is performed by digital signal processing algorithms integrated in a single Field Programmable Gate Array. The system functions well in laboratory and commissioning tests, demonstrating a position resolution (at the turn-by-turn rate of 694 kHz) better than 7 μm over the input amplitude range of -40 dBm to 10 dBm, which is well beyond the requirement.
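The under-sampling principle can be illustrated numerically: the 499.654 MHz bunch signal folds to a predictable alias frequency inside the first Nyquist zone of the ADC. The 117.3 MHz sampling rate below is an illustrative assumption, not the instrument's actual clock.

```python
# Folded (aliased) frequency seen by an under-sampling ADC. The 117.3 MHz sample rate is an
# assumed value for illustration only, not the actual clock of the described instrument.
def alias_frequency(f_in, f_s):
    f = f_in % f_s
    return min(f, f_s - f)                   # fold into the first Nyquist zone [0, f_s / 2]

f_rf = 499.654e6                             # bunch repetition rate (Hz)
f_s = 117.3e6                                # assumed ADC sampling rate (Hz)
print(alias_frequency(f_rf, f_s) / 1e6, "MHz")   # ~30.5 MHz alias carrying the position info
```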
In this work, an adaptive sampling control strategy for distributed predictive control is proposed. In the proposed method, the sampling rate of each subsystem of the controlled object is determined from periodic detection of its dynamic behavior and calculations made using a correlation function. The optimal sampling interval within the period is then obtained and sent to the corresponding sub-prediction controller, and the controller's sampling interval is changed accordingly before the next sampling period begins. In the next control period, the adaptive sampling mechanism recalculates the sampling rate of each subsystem's measurable output variable according to both the above method and the change in the dynamic behavior of the entire system, and this process is repeated. Such an adaptive sampling interval selection, based on an autocorrelation function that measures dynamic behavior, can dynamically optimize the sampling rate according to real-time changes in the dynamic behavior of the controlled object. It can also accurately capture dynamic changes, so that each sub-prediction controller can more accurately calculate the optimal control action at the next moment, significantly improving the performance of distributed model predictive control (DMPC). A comparison demonstrates that the proposed adaptive sampling DMPC algorithm has better tracking performance than the traditional DMPC algorithm.
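One plausible way to turn an autocorrelation measurement into a sampling interval is sketched below: the interval is taken as the lag at which the normalized autocorrelation of the measured output first decays below a threshold. The threshold rule and its value are assumptions for illustration; the paper's exact formula may differ.

```python
# Assumed rule for illustration (not the paper's exact formula): choose the sampling interval
# as the lag at which the output's normalized autocorrelation first drops below a threshold.
import numpy as np

def adaptive_interval(y, base_dt, threshold=0.8, max_lag=200):
    y = y - np.mean(y)
    acf0 = np.dot(y, y)
    for lag in range(1, min(max_lag, len(y) - 1)):
        acf = np.dot(y[:-lag], y[lag:]) / acf0   # normalized autocorrelation at this lag
        if acf < threshold:
            return lag * base_dt                 # faster decay -> shorter sampling interval
    return max_lag * base_dt

rng = np.random.default_rng(0)
y = np.sin(0.05 * np.arange(1000)) + 0.1 * rng.standard_normal(1000)
print(adaptive_interval(y, base_dt=0.01))
```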
A mesh editing framework is presented in this paper, which integrates Free-Form Deformation (FFD) and geometry signal processing. By using a simplified model of the original mesh, the editing task can be accomplished with a few operations. We take the deformation of the proxy and the position coordinates of the mesh models as the geometry signal. Wavelet analysis is employed to separate local detail information gracefully. The crucial innovation of this paper is a new adaptive regular sampling approach for our signal-analysis-based editing framework. In our approach, an original mesh is resampled and then refined iteratively, which reflects optimization of our proposed spectrum-preserving energy. As an extension of our spectrum editing scheme, the editing principle is applied to transferring geometry details, which yields satisfying results.
Signals are often random in character, since they cannot carry any information if they are predictable for every time t; they are therefore usually modelled as stationary random processes. On the other hand, because of the inertia of the measurement apparatus, sampled values measured in practice may not be the precise values of the signal X(t) at the times t_k (k ∈ Z), but only local averages of X(t) near t_k. This paper shows that a wide-sense (or weakly) stationary stochastic process can be approximated by generalized sampling series with local average samples.
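A small numerical illustration of the statement (not a substitute for the proof) is to feed local averages, rather than point values, into the classical Shannon series and check that the reconstruction error stays small for a slowly varying signal. The test signal, averaging width, and truncation range are all assumed values.

```python
# Numerical illustration only: replace point samples in the Shannon series by local averages
# of X(t) near t_k and check the reconstruction error. Signal and widths are assumed values.
import numpy as np

def signal(t):
    return np.sin(2 * np.pi * 0.3 * t) + 0.5 * np.cos(2 * np.pi * 0.1 * t)

T, half_width = 1.0, 0.05                    # sampling period and local-averaging half-width
tk = np.arange(-50, 51) * T
avg_samples = np.array([signal(np.linspace(t - half_width, t + half_width, 11)).mean()
                        for t in tk])        # local averages of the signal near each t_k

t = np.linspace(-5, 5, 400)
recon = sum(a * np.sinc((t - ti) / T) for a, ti in zip(avg_samples, tk))
print(np.max(np.abs(recon - signal(t))))     # small maximum error for this bandlimited signal
```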
The nonuniform distribution of the interference spectrum in wavenumber (k) space is a key issue limiting the imaging quality of Fourier-domain optical coherence tomography (FD-OCT). At present, the reconstruction quality at different depths among the various k-space processing methods is still uncertain. Using simulated and experimental interference spectra at different depths, the effects of six common processing methods, including uniform resampling (linear interpolation (LI), cubic spline interpolation (CSI), time-domain interpolation (TDI), and K-B window convolution) and nonuniform-sampling direct reconstruction (Lomb periodogram (LP) and nonuniform discrete Fourier transform (NDFT)), on the reconstruction quality of FD-OCT were quantitatively analyzed and compared in this work. The results obtained with simulated and experimental data were consistent. From the experimental results, the averaged peak intensity, axial resolution, and signal-to-noise ratio (SNR) of NDFT at depths from 0.5 to 3.0 mm were improved by about 1.9 dB, 1.4 times, and 11.8 dB, respectively, compared with the averaged indices of all the uniform resampling methods at all depths. Similarly, the improvements in the above three indices for LP were 2.0 dB, 1.4 times, and 11.7 dB, respectively. The analysis method and results in this work are helpful for selecting an appropriate k-space processing method and thereby improving the imaging quality of FD-OCT.
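The difference between the two families of methods can be sketched with a toy single-reflector spectrum: uniform resampling interpolates the fringes onto an even k grid before an FFT, while the NDFT evaluates the Fourier sum directly at the measured wavenumbers. The nonlinear k mapping and reflector depth below are assumed values for illustration only.

```python
# Toy single-reflector sketch (assumed k mapping and depth): uniform resampling by linear
# interpolation followed by an FFT versus a direct nonuniform DFT over the raw k samples.
import numpy as np

N = 1024
k_uniform = np.linspace(1.0, 1.2, N)                  # target uniform wavenumber grid
k_actual = k_uniform + 0.002 * np.sin(8 * k_uniform)  # assumed nonuniform spectrometer k axis
z0 = 300.0                                            # reflector depth (arbitrary units)
spectrum = np.cos(2 * k_actual * z0)                  # interference fringes sampled at k_actual

resampled = np.interp(k_uniform, k_actual, spectrum)  # LI resampling onto the uniform grid
a_li = np.abs(np.fft.rfft(resampled))                 # A-scan via the resampling route

z_grid = np.linspace(0.0, 600.0, N)                   # NDFT evaluated on the raw samples
a_ndft = np.abs(np.exp(-2j * np.outer(z_grid, k_actual)) @ spectrum)
print(z_grid[np.argmax(a_ndft)])                      # peak close to the true depth of 300
```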
The Tracy-Widom distribution was first discovered in the study of the largest eigenvalues of high-dimensional Gaussian unitary ensembles (GUE), and since then it has appeared in a number of apparently distinct research fields. It is believed that the Tracy-Widom distribution has a universal character similar to that of the classic normal distribution. The Airy2 process is defined through finite-dimensional distributions with the Tracy-Widom distribution as its marginal distribution. In this introductory survey, we briefly review some basic notions, intuitive background, and fundamental properties concerning the Tracy-Widom distribution and the Airy2 process. For ease of reading, the paper starts with some simple and well-known facts about normal distributions, Gaussian processes, and their sample path properties.
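The GUE origin of the distribution is easy to reproduce empirically: the centred and rescaled largest eigenvalue N^(1/6)(lambda_max - 2*sqrt(N)) of an N x N GUE matrix fluctuates according to the Tracy-Widom (beta = 2) law, whose mean is about -1.77 and standard deviation about 0.90. The matrix size and number of trials below are arbitrary illustrative choices.

```python
# Empirical sketch of the GUE largest-eigenvalue scaling behind the Tracy-Widom law; the
# matrix size and trial count are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def gue_largest_eigenvalue(n):
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    h = (a + a.conj().T) / 2                 # Hermitian GUE matrix, semicircle edge at 2*sqrt(n)
    return np.linalg.eigvalsh(h)[-1]

n, trials = 200, 500
scaled = np.array([n ** (1 / 6) * (gue_largest_eigenvalue(n) - 2 * np.sqrt(n))
                   for _ in range(trials)])
print(scaled.mean(), scaled.std())           # compare with TW2: mean ~ -1.77, std ~ 0.90
```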
Many sampling formulas are available for processes in a baseband (-a, a) at the Nyquist rate a/π. However, telecommunications signals have power spectra that occupy two or more bands. It is known that PNS (periodic non-uniform sampling) allows errorless reconstruction at rates smaller than the Nyquist rate. For instance, PNS2 can be used in the two-band case (-a, -b) ∪ (b, a) at the Landau rate (a-b)/π. We prove a set of formulas that apply in cases more general than PNS2. They take into account two sampling sequences that may or may not be periodic and may or may not have the same mean rate.
Value at Risk (VaR) is an important tool for estimating the risk of a financial portfolio under significant loss. Although Monte Carlo simulation is a powerful tool for estimating VaR, it is quite inefficient, since the event of significant loss is usually rare. Previous studies suggest that the performance of Monte Carlo simulation can be improved by importance sampling if the market returns follow normal or t distributions. The first contribution of our paper is to extend the importance sampling method to jump-diffusion market returns, which can more precisely model the high peaks, heavy tails, and jumps of market returns reported in numerous empirical studies. This paper also points out that for portfolios whose huge losses are triggered by significantly distinct events, naively applying the importance sampling method can result in poor performance. The second contribution of our paper is to develop a hybrid importance sampling method for this problem. Our method decomposes a Monte Carlo simulation into sub-simulations, each of which focuses on only one huge-loss event. The performance of each sub-simulation is thus improved by importance sampling, and the overall performance is optimized by determining the allotment of samples to each sub-simulation with a Lagrange multiplier. Numerical experiments verify the superiority of our method.
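One standard way to split a fixed budget of samples across sub-simulations with a Lagrange multiplier is sketched below: minimizing the summed variance sum_i sigma_i^2 / n_i subject to sum_i n_i = N gives n_i proportional to sigma_i. This is an assumed textbook allocation used for illustration; the paper's exact objective and constraint may differ.

```python
# Assumed textbook Lagrange-multiplier allocation (the paper's exact objective may differ):
# minimizing sum(sigma_i^2 / n_i) subject to sum(n_i) = N yields n_i proportional to sigma_i.
import numpy as np

def allocate(sigmas, total):
    sigmas = np.asarray(sigmas, dtype=float)     # per-event estimator spread from pilot runs
    n = total * sigmas / sigmas.sum()
    return np.maximum(1, np.round(n)).astype(int)

print(allocate([0.02, 0.10, 0.05], total=100_000))   # e.g. three distinct huge-loss events
```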
The ADSP-TS101 is a high-performance DSP with good parallel processing capability and high speed. According to the real-time processing requirements of underwater acoustic communication algorithms, a real-time parallel processing system with multi-channel synchronous sampling, composed of multiple ADSP-TS101s, is designed and implemented. In the hardware design, field programmable gate array (FPGA) logic control is adopted for the multi-channel synchronous sampling module, and the cluster/data-flow associated pin connection mode is adopted for the multiprocessor parallel processing configuration. The software is optimized through two kinds of communication: broadcast writes over the shared bus and point-to-point transfers over the link ports. Through system installation, connection debugging, and experiments in a lake, the results show that the real-time parallel processing system has good stability and real-time processing capability and meets the technical design requirements for real-time processing.
In industrial process control systems, there is overwhelming evidence that economic or technical limitations leave some key variables very difficult to measure online. The data-driven soft sensor is an effective solution because it provides a reliable and stable online estimate of such variables. This paper employs a deep neural network with multiscale feature extraction layers to build soft sensors, which are applied to the benchmark Tennessee-Eastman process (TEP) and a real wind farm case. The comparison of modelling results demonstrates that the multiscale feature extraction layers have the following advantages over other methods. First, they significantly reduce the number of parameters compared to other deep neural networks. Second, they can powerfully extract dataset characteristics. Finally, with fully considered historical measurements, they capture richer useful information and provide improved representations compared to traditional data-driven models.
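One common realization of such a layer, sketched here purely as an assumption since the paper's exact architecture is not given in the abstract, applies parallel 1-D convolutions with different kernel sizes to a window of historical measurements and concatenates the resulting feature maps.

```python
# Assumed multiscale feature extraction block (the paper's exact architecture may differ):
# parallel 1-D convolutions with different kernel sizes, concatenated along the channel axis.
import torch
import torch.nn as nn

class MultiscaleBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 9)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):                        # x: (batch, process variables, time steps)
        return torch.relu(torch.cat([branch(x) for branch in self.branches], dim=1))

x = torch.randn(8, 33, 64)                       # e.g. 33 process variables over 64 past samples
print(MultiscaleBlock(33, 16)(x).shape)          # -> torch.Size([8, 48, 64])
```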
The reservoir volumetric approach represents a widely accepted but flawed method of petroleum play resource calculation. In this paper, we propose a combination of techniques that can improve the applicability and quality of the resource estimation. These techniques include: 1) the use of the Multivariate Discovery Process model (MDP) to derive unbiased distribution parameters of reservoir volumetric variables and to reveal correlations among the variables; 2) the use of the Geo-anchored method to estimate simultaneously the number of oil and gas pools in the same play; and 3) the cross-validation of assessment results from different methods. These techniques are illustrated using an example of crude oil and natural gas resource assessment of the Sverdrup Basin, Canadian Archipelago. The example shows that when direct volumetric measurements of the untested prospects are not available, the MDP model can help derive unbiased estimates of the distribution parameters by using information from the discovered oil and gas accumulations. It also shows that estimating the number of oil and gas accumulations and the associated size ranges from a discovery process model can provide an alternative and efficient approach when inadequate geological data hinder the estimation. Cross-examination of assessment results derived using different methods allows one to focus on and analyze the causes of the major differences, thus providing a more reliable assessment outcome.
Within the framework of feasibility studies for a reversible, deep geological repository of high- and intermediate-level long-lived radioactive waste (HLW, IL-LLW), the French National Radioactive Waste Management Agency (Andra) is investigating the Callovo-Oxfordian (COx) formation near Bure (northeast part of France) as a potential host rock for the repository. The hydro-mechanical (HM) behaviour is an important issue to design and optimise different components of the disposal such as shaft, ramp, drift, and waste package disposal facilities. Over the past 20 years, a large number of laboratory experiments have been carried out to characterise and understand the HM behaviours of COx claystones. At the beginning, samples came from deep boreholes drilled at the ground surface with oil-base mud. From 2000 onwards, with the launch of the construction of the Meuse/Haute-Marne Underground Research Laboratory (MHM URL), most samples have been extracted from a large number of air-drilled boreholes in the URL. In parallel, various constitutive models have been developed for modelling. The thermo-hydro-mechanical (THM) behaviours of the COx claystones were investigated under different repository conditions. Core samples are subjected to a complex HM loading path before testing, due to drilling, conditioning and preparation. Various kinds of effects on the characteristics of the claystones are highlighted and discussed, and the procedures for core extraction and packaging as well as a systematic sample preparation protocol are proposed in order to minimise the uncertainties on test results. The representativeness of the test results is also addressed with regard to the in situ rock mass.