Experimental techniques based on synchrotron radiation (SR) facilities have emerged alongside the development of SR sources. Accordingly, detector miniaturization has become important for the advancement of SR experimental techniques. In this study, a miniaturized detector was realized by coupling a commercial silicon PIN photodiode (SPPD) into a beamstop, with the aim not only of acquiring X-ray absorption fine structure (XAFS) spectra but also of protecting the downstream two-dimensional detector from high-brilliance X-ray radiation damage in certain combined techniques. This mini SPPD detector coupled to a beamstop was used as the rear detector in both a conventional sampling scheme and a novel high-frequency (HF) sampling scheme to collect transmission XAFS spectra. Traditional ion chambers were also used to collect transmission XAFS spectra, which served as the reference. The XAFS spectra were quantitatively analyzed and compared; the results demonstrate that collecting XAFS spectra with this SPPD is feasible under both the conventional and HF sampling schemes. This study provides a new detector-selection scheme for acquiring quick-scanning XAFS (QXAFS) and HF-sampling XAFS spectra, and the SPPD detector presented here can partially meet the requirements of detector miniaturization.
We propose a low-speed photonic sampling scheme for independent high-frequency characterization of a Mach–Zehnder modulator (MZM) and a photodetector (PD) in an optical link. A low-speed mode-locked laser diode (MLLD) provides an ultrawideband optical stimulus with a scalable frequency range, serving as the photonic sampling source of the link. The uneven spectral lines of the MLLD are first characterized using symmetric modulation within the frequency range of interest. Then, the electro-optically modulated signals are down-converted to the first Nyquist zone, yielding a self-referenced extraction of the modulation depth and half-wave voltage of the MZM without correcting for the responsivity fluctuation of the PD in the link. Finally, the frequency responsivity of the PD is measured in a self-referenced manner under null modulation of the MZM. Because the frequency responses of the MZM and the PD can be obtained independently, our method enables self-referenced high-frequency measurement of a high-speed optical link. In a proof-of-concept experiment, a 96.9 MS/s MLLD is used to measure an MZM and a PD over a frequency range of up to 50 GHz. The consistency between our method and the conventional method verifies the ultrawideband, self-referenced high-frequency characterization of high-speed MZMs and PDs.
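The down-conversion step above folds each microwave modulation frequency into the first Nyquist zone of the sampling comb. A minimal sketch of that alias mapping, assuming an ideal comb at the MLLD repetition rate (the helper `alias_frequency` is illustrative, not code from the paper):

```python
def alias_frequency(f, fs):
    """Image of frequency f in the first Nyquist zone [0, fs/2]."""
    r = f % fs               # fold into one comb period
    return min(r, fs - r)    # reflect into [0, fs/2]

fs = 96.9e6                  # MLLD repetition rate (96.9 MS/s), as in the paper
for f in (1e9, 20e9, 50e9):  # modulation frequencies up to 50 GHz
    assert 0.0 <= alias_frequency(f, fs) <= fs / 2
```

Every tone up to 50 GHz thus lands below 48.45 MHz, where it can be digitized by low-speed electronics.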
The aim of this study is to investigate the impact of landslide and non-landslide sampling strategies on the performance of landslide susceptibility assessment (LSA). The study area is the Feiyun catchment in Wenzhou City, Southeast China. Two types of landslide samples, combined with seven non-landslide sampling strategies, resulted in a total of 14 scenarios. The landslide susceptibility map (LSM) for each scenario was generated using a random forest model. The receiver operating characteristic (ROC) curve and statistical indicators were calculated to assess the impact of each dataset sampling strategy. The results show that higher accuracies were achieved when using the landslide core as positive samples, combined with non-landslide samples drawn from the very-low-susceptibility zone or a buffer zone. The results reveal the influence of landslide and non-landslide sampling strategies on the accuracy of LSA, providing a reference for subsequent researchers aiming to obtain a more reasonable LSM.
Global variance reduction is a bottleneck in Monte Carlo shielding calculations: the global variance reduction problem requires that the statistical error be uniform over the entire space. This study proposes a grid-AIS method for the global variance reduction problem, based on the AIS method and implemented in the Monte Carlo program MCShield. The proposed method was validated using the VENUS-III international benchmark problem and a self-shielding calculation example. For the VENUS-III benchmark, the grid-AIS method significantly reduced the variance of the statistical errors over the MESH grids, from 1.08×10^-2 to 3.84×10^-3, a 64.00% reduction, demonstrating that the grid-AIS method is effective for global problems. The self-shielding calculation shows that the grid-AIS method produces accurate computational results. Moreover, the grid-AIS method is approximately one order of magnitude more computationally efficient than the AIS method and approximately two orders of magnitude more efficient than the conventional Monte Carlo method.
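The quantity the benchmark reports, the variance of the per-mesh relative statistical errors, is the natural figure of merit for global variance reduction: driving it toward zero means the errors are uniform everywhere. A minimal sketch of that metric (the numbers below are hypothetical, not from the VENUS-III results):

```python
from statistics import pvariance

def error_dispersion(rel_errors):
    """Population variance of per-mesh relative statistical errors;
    the global variance-reduction goal is to make this small (uniform
    errors across all mesh cells)."""
    return pvariance(rel_errors)

# hypothetical per-mesh relative errors before/after importance sampling
before = [0.30, 0.02, 0.25, 0.01]
after = [0.10, 0.08, 0.09, 0.11]
assert error_dispersion(after) < error_dispersion(before)
```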
In this paper, we establish a new multivariate Hermite sampling series involving samples of the function itself and of its mixed and non-mixed partial derivatives of arbitrary order. This multivariate form of Hermite sampling is valid for certain classes of multivariate entire functions satisfying suitable growth conditions. We show that many known results, including those in Commun Korean Math Soc, 2002, 17: 731-740; Turk J Math, 2017, 41: 387-403; and Filomat, 2020, 34: 3339-3347, are special cases of our results. Moreover, we estimate the truncation error of this sampling based on localized sampling without any decay assumption. Illustrative examples are also presented.
Rapid acquisition of the kinematic deformation field and seismic intensity distribution of large earthquakes is crucial for post-seismic emergency rescue, disaster assessment, and future seismic risk research. Advances in GNSS observation and data processing have given GNSS, especially high-frequency GNSS, an important role in this field. We used the differential positioning method to process 1 Hz GNSS data from 98 sites within 1000 km of the MS 7.4 Maduo earthquake epicenter, obtaining the kinematic deformation field and the seismic intensity distribution based on the peak ground velocity derived from the displacement waveforms. The results show that: 1) horizontal coseismic response deformation ranging from 25 mm to 301 mm can be observed within a 1000 km radius of the epicenter; the coseismic response deformation on the east and west sides shows a bilateral asymmetry that differs markedly from the symmetry of the surface rupture; 2) the seismic intensities obtained from high-frequency GNSS and from field investigations are consistent in the scope and orientation of the high-intensity area, although the former is generally slightly smaller than the latter; 3) there may be barriers on the eastern side of the seismogenic fault. The Maduo earthquake induced a tectonic stress loading effect on the western Kunlun Pass-Jiangcuo fault (KPJF) and the Maqin-Maqu segment, implying higher seismic risk there in the future.
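The intensity estimate above rests on one numerical step: differentiating the GNSS displacement waveform to velocity and taking its peak. A minimal sketch under idealized assumptions (a central-difference derivative and a synthetic sinusoidal record, not the paper's actual processing chain):

```python
import math

def peak_ground_velocity(displacement, dt):
    """Central-difference the displacement record to velocity and
    return the peak absolute value (PGV)."""
    v = [(displacement[i + 1] - displacement[i - 1]) / (2 * dt)
         for i in range(1, len(displacement) - 1)]
    return max(abs(x) for x in v)

# synthetic 1 Hz GNSS record: 5 cm oscillation at 0.1 Hz
d = [0.05 * math.sin(2 * math.pi * 0.1 * t) for t in range(60)]
pgv = peak_ground_velocity(d, 1.0)  # near 2*pi*0.1*0.05 ≈ 0.03 m/s
assert 0.02 < pgv < 0.035
```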
Nitrogen doping has been widely used to improve the performance of carbon electrodes in supercapacitors, particularly their high-frequency response. However, the charge storage and electrolyte-ion response mechanisms of different nitrogen dopants at high frequencies remain unclear. In this study, melamine-foam carbons with different configurations of surface-doped N were formed by gradient carbonization, and the effects of these configurations on the high-frequency response behavior of supercapacitors were analyzed. Combining experiments with first-principles calculations, we found that pyrrolic N, characterized by a higher adsorption energy, increases the charge storage capacity of the electrode at high frequencies, whereas graphitic N, with a lower adsorption energy, increases the speed of the ion response. We propose adsorption energy as a practical descriptor for electrode/electrolyte design in high-frequency applications, offering a more universal approach to improving the performance of N-doped carbon materials in supercapacitors.
The specialized equipment used in long-line tunnel engineering is evolving toward large-scale, multifunctional, and complex designs. The vibration generated by high-frequency units during regular operation is borne by the unit foundations, and both the vibration magnitude and the operating frequency vary across engineering contexts, leading to variations in the dynamic response of the foundation. High-frequency units behave very differently under different startup conditions and startup times; in severe cases this can prevent operational requirements from being met, impair the normal function of the tunnel, and cause damage to the foundation structure, personnel, and property. This article formulates a finite element model with solid elements based on three-dimensional elasticity theory and uses field measurements to validate the model and determine its key parameter configurations. By proposing a comprehensive startup timing function for high-frequency dynamic machines under different startup conditions, simulating the frequency and magnitude variations during startup, and suggesting functions for these variations, a simulated startup schedule function for high-frequency machines is constructed through coupling. Taking into account the choice of transient dynamic analysis step length, the dynamic response of the dynamic foundation as it passes through its fundamental frequency is obtained. The validation checks whether the structural response exceeds the safety threshold during the critical phase in which unit startup traverses the structural resonance region. Design recommendations for the dynamic foundations of high-frequency units are provided, taking into account the startup process of the machine and ensuring the safe operation of the tunnel.
High-frequency oscillation (HFO) of grid-connected wind power generation systems (WPGS) is one of the most critical issues of recent years threatening the safe grid access of WPGS. Ensuring that a WPGS can damp HFO is becoming increasingly vital for the development of wind power. The HFO phenomena of wind turbines under different scenarios usually have different mechanisms. Hence, engineers need to understand the working mechanisms of the different HFO damping technologies and select the appropriate one to ensure effective oscillation damping in practical engineering. This paper introduces the general assumptions made when analyzing HFO in WPGS, systematically summarizes the causes of HFO in different scenarios, analyzes in depth the key points and difficulties of HFO damping under each scenario, and then compares the technical performance of the various HFO suppression methods to provide adequate references for engineers applying the technology. Finally, the paper discusses likely future research difficulties in the HFO problem, as well as future trends in the demand for HFO damping.
This study presents the design of a modified attribute control chart based on a double sampling (DS) np chart combined with generalized multiple dependent state (GMDS) sampling to monitor the mean life of a product using a time-truncated life test under the Weibull distribution. The proposed control chart supports the examination of variation in the mean lifespan of a particular product during manufacturing. Three control limit levels are used: the warning control limit, the inner control limit, and the outer control limit; together they enhance the capability for detecting variation. A genetic algorithm can be used for optimization of the in-control process, whereby the optimal parameters of the proposed control chart are established. Control chart performance is assessed using the average run length, while the influence of the model parameters on the control chart solution is assessed via a sensitivity analysis based on an orthogonal experimental design with multiple linear regression. A comparative study based on the out-of-control average run length showed that the developed control chart offers greater sensitivity in detecting process shifts while using smaller samples on average than existing control charts. Finally, to demonstrate its utility, the developed control chart is applied to simulated data with parameters drawn from a real data set.
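The two core ingredients above, a Weibull time-truncated failure probability and an average run length (ARL), can be illustrated with a deliberately simplified sketch: a plain np chart with a single limit, estimated by Monte Carlo, rather than the paper's DS-GMDS design. All parameter values below are hypothetical.

```python
import math, random

def weibull_fail_prob(t0, shape, scale):
    """P(lifetime <= t0) for a Weibull item under a time-truncated test."""
    return 1.0 - math.exp(-((t0 / scale) ** shape))

def average_run_length(n, limit, p, trials=400, seed=1):
    """Monte Carlo ARL of a plain np chart: inspect n items each period
    and signal when the failure count exceeds `limit`."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        run = 0
        while True:
            run += 1
            fails = sum(rng.random() < p for _ in range(n))
            if fails > limit:
                break
        total += run
    return total / trials
```

A shift that raises the failure probability should shorten the ARL; e.g. `average_run_length(20, 4, 0.20)` comes out well below `average_run_length(20, 4, 0.10)`.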
The advent of self-attention mechanisms within Transformer models has significantly propelled the advancement of deep learning algorithms, yielding outstanding achievements across diverse domains. Nonetheless, self-attention mechanisms falter when applied to datasets with intricate semantic content and extensive dependency structures. In response, this paper introduces a Diffusion Sampling and Label-Driven Co-attention Neural Network (DSLD), which adopts a diffusion sampling method to capture more comprehensive semantic information from the data. Additionally, the model leverages the joint correlation information of labels and data in computing text representations, correcting semantic representation biases in the data and increasing the accuracy of the semantic representation. Finally, the model computes the classification results by synthesizing these rich data semantic representations. Experiments on seven benchmark datasets show that the proposed model achieves competitive results compared with state-of-the-art methods.
The rapid advancement and broad application of machine learning (ML) have driven a groundbreaking revolution in computational biology. One of the most cutting-edge and important applications of ML is its integration with molecular simulations to improve sampling efficiency over the vast conformational space of large biomolecules. This review focuses on recent studies that apply ML-based techniques to the exploration of protein conformational landscapes. We first highlight recent developments in ML-aided enhanced sampling methods, including heuristic algorithms and neural networks designed to refine the selection of reaction coordinates for constructing bias potentials, or to facilitate exploration of unsampled regions of the energy landscape. We then review autoencoder-based methods that combine molecular simulations and deep learning to expand the search for protein conformations. Lastly, we discuss cutting-edge methodologies for the one-shot generation of protein conformations with accurate Boltzmann weights. Collectively, this review demonstrates the promising potential of machine learning to revolutionize our insight into the complex conformational ensembles of proteins.
Peer-to-peer (P2P) overlay networks provide message transmission capabilities for blockchain systems, and improving data transmission efficiency in P2P networks can greatly enhance the performance of blockchain systems. However, traditional blockchain P2P networks face a common challenge: a mismatch between upper-layer traffic requirements and the underlying physical network topology. This mismatch results in redundant data transmission and inefficient routing, severely constraining the scalability of blockchain systems. To address these pressing issues, we propose FPSblo, an efficient transmission method for blockchain networks. Our inspiration stems from the Farthest Point Sampling (FPS) algorithm, a well-established technique widely used in point cloud processing. We treat blockchain nodes as points in a point cloud and select a representative set of nodes to prioritize message forwarding, so that messages reach the network edge quickly and are evenly distributed. Compared with the Kadcast transmission model, a classic improved model for blockchain P2P transmission networks, our experiments show that FPSblo reduces transmission redundancy by 34.8% and the overload rate by 37.6%, demonstrating that it enhances the transmission capability of P2P networks in blockchain.
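The Farthest Point Sampling algorithm that inspired the work above is simple to state: repeatedly add the point farthest from the set chosen so far, where distance to a set means distance to its nearest member. A minimal sketch (generic FPS on 2-D points, not the paper's node-selection code):

```python
import math

def farthest_point_sampling(points, k, start=0):
    """Greedy FPS: grow the chosen set by always adding the point with
    the largest distance to its nearest already-chosen point."""
    chosen = [start]
    dist = [math.dist(p, points[start]) for p in points]
    for _ in range(k - 1):
        nxt = max(range(len(points)), key=dist.__getitem__)
        chosen.append(nxt)
        # each point keeps only its distance to the nearest chosen point
        dist = [min(d, math.dist(p, points[nxt])) for d, p in zip(dist, points)]
    return chosen

corners = [(0, 0), (0, 1), (10, 0), (10, 1)]
assert farthest_point_sampling(corners, 2) == [0, 3]  # (10,1) is farthest from (0,0)
```

The greedy rule naturally spreads the chosen nodes out, which is exactly the "evenly distributed" property the transmission method exploits.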
Dielectric barrier discharge (DBD) plasma excited by a high-frequency alternating-current (AC) power supply is widely employed for the degradation of volatile organic compounds (VOCs). However, the thermal effect generated during the discharge process leads to energy waste and low energy utilization efficiency. In this work, an innovative DBD thermally-conducted catalysis (DBD-TCC) system, which integrates high-frequency AC-DBD plasma with its own thermal effects to activate a Co/SBA-15 catalyst, was employed for toluene removal. Specifically, the Co/SBA-15 catalyst is positioned close to the ground electrode of the plasma zone and can be heated and activated by the thermal effect when the voltage exceeds 10 kV. At 12.4 kV, the temperature in the catalyst zone of the DBD-TCC system reached 261°C, increasing the toluene degradation efficiency by 17%, the CO2 selectivity by 21.2%, and the energy efficiency by 27% compared with the DBD system alone. In contrast, a DBD thermally-unconducted catalysis (DBD-TUC) system failed to enhance toluene degradation owing to insufficient heat absorption and catalyst activation, highlighting the crucial role of AC-DBD-generated heat in activating the catalyst. Furthermore, the degradation pathway and mechanism of toluene in the DBD-TCC system are hypothesized. This work is expected to provide an energy-efficient approach for the removal of VOCs by high-frequency AC-DBD plasma.
To address the slow search and tortuous paths of the Rapidly-exploring Random Tree (RRT) algorithm, a feedback-biased sampling RRT, called FS-RRT, is proposed. First, to improve the sampling efficiency of RRT and shorten the search time, the search area of the random tree is restricted. Second, to exploit information about obstacles and shorten the path length, a feedback-biased sampling strategy replaces traditional random sampling: a collision between an expanding node and an obstacle generates feedback information so that the next expanding node avoids expanding within a specific angular range. Third, an inverse optimization strategy is proposed to remove redundant points from the initial path, making the path shorter and more accurate. Finally, to ensure smooth operation of the robot in practice, auxiliary points are used to optimize a cubic Bezier curve so that the optimized path does not cross obstacles. The experimental results demonstrate that the proposed FS-RRT algorithm performs favorably against the traditional RRT algorithm and other mainstream algorithms in terms of running time, number of search iterations, and path length. The improved algorithm also performs well in narrow obstacle environments, and its effectiveness is further confirmed by experimental verification.
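For readers unfamiliar with the baseline being improved, a minimal RRT loop looks like the sketch below: sample a point, steer the nearest tree node toward it, and stop when the goal is within one step. This is an obstacle-free toy with a plain goal bias standing in for the paper's feedback-biased sampling; all parameters are illustrative.

```python
import math, random

def rrt(start, goal, step=0.5, iters=2000, goal_bias=0.2, seed=1):
    """Minimal 2-D RRT in an obstacle-free 10x10 world."""
    rng = random.Random(seed)
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        target = goal if rng.random() < goal_bias else \
            (rng.uniform(0, 10), rng.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], target))
        d = math.dist(nodes[i], target)
        if d < 1e-9:
            continue
        s = min(step, d)  # steer at most `step` toward the sample
        new = (nodes[i][0] + s * (target[0] - nodes[i][0]) / d,
               nodes[i][1] + s * (target[1] - nodes[i][1]) / d)
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) <= step:  # close enough: walk back the tree
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None
```

FS-RRT's contributions (restricted search area, collision feedback, path pruning, Bezier smoothing) all modify stages of exactly this loop.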
To measure an object's three-dimensional surface shape accurately, the influence of sampling on the measurement was studied. First, based on the spectral expressions derived via the Fourier transform, the formation of CCD pixels was analyzed and its expression given. Then, from the discrete expression of the deformed fringes obtained after sampling, the Fourier spectrum expression was derived, yielding infinitely repeated "spectral islands" in the frequency domain. Finally, after using a low-pass filter to remove the higher-order harmonic components and retain only a single fundamental-frequency component, the inverse Fourier transform was used to reconstruct the signal strength. To increase the ratio between the sampling frequency and the fundamental frequency of the grating, a method of reducing the sampling interval, i.e., the number of pixels between sampling points on a fringe, was proposed, so as to reconstruct the object's surface shape more accurately under the condition m > 4. The basic principle was verified through simulation and experiment. In the simulation, sampling intervals of 8 pixels, 4 pixels, 2 pixels, and 1 pixel were used; relative to the first case, the maximum absolute errors in the last three cases were 88.80%, 38.38%, and 31.50% of the first, and the corresponding average absolute errors were 71.84%, 43.27%, and 32.26%. This demonstrates that the smaller the sampling interval, the better the recovery. Using the same four sampling intervals in the experiment leads to the same conclusions. The simulated and experimental results show that reducing the sampling interval improves the accuracy of surface shape measurement and achieves better reconstruction results.
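The filtering step described above, keeping only the DC term and one fundamental "spectral island" and inverse-transforming, can be sketched directly. This is an illustrative discrete-Fourier toy (a pure cosine fringe with 8 samples per period), not the paper's measurement pipeline:

```python
import cmath, math

def recover_fundamental(samples):
    """DFT the sampled fringe, keep only the DC and the +/- fundamental
    bins (low-pass filtering of the 'spectral islands'), inverse-DFT."""
    n = len(samples)
    spec = [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]
    mags = [abs(c) for c in spec[1:n // 2]]
    k0 = 1 + mags.index(max(mags))  # fundamental frequency bin
    keep = {0, k0, n - k0}
    return [sum(spec[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in keep).real / n for t in range(n)]

n = 64
fringe = [1 + math.cos(2 * math.pi * t / 8) for t in range(n)]
rec = recover_fundamental(fringe)
# a fringe containing only DC + fundamental is recovered exactly
assert all(abs(a - b) < 1e-9 for a, b in zip(rec, fringe))
```

If a higher harmonic is added to the input, the filter discards it and the output returns to the pure fringe, which is the mechanism that makes the phase recovery robust.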
Disjoint sampling is critical for the rigorous and unbiased evaluation of state-of-the-art (SOTA) models, e.g., Attention Graph and Vision Transformer models. When the training, validation, and test sets overlap or share data, a bias is introduced that inflates performance metrics and prevents accurate assessment of a model's true ability to generalize to new examples. This paper presents an innovative disjoint sampling approach for training SOTA models for hyperspectral image classification (HSIC). By separating training, validation, and test data without overlap, the proposed method enables a fairer evaluation of how well a model can classify pixels it was not exposed to during training or validation. Experiments demonstrate that the approach significantly improves generalization compared with alternatives that leak training and validation data into the test data (a trivial approach is to test the model on the entire hyperspectral dataset to generate the ground truth maps; this yields higher accuracy but ultimately poor generalization). Disjoint sampling eliminates data leakage between sets, provides reliable metrics for benchmarking progress in HSIC, and is critical for advancing SOTA models and their real-world application to large-scale land mapping with hyperspectral sensors. With the disjoint test set, the deep models achieve 96.36% accuracy on the Indian Pines data, 99.73% on Pavia University, 98.29% on University of Houston, 99.43% on Botswana, and 99.88% on Salinas.
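The guarantee the paper asks for, no pixel appearing in more than one of the three sets, is easy to enforce with a per-class shuffle-and-slice. A minimal sketch (generic stratified splitting on label indices; the split fractions are illustrative, not the paper's):

```python
import random

def disjoint_split(labels, train=0.6, val=0.2, seed=0):
    """Per-class disjoint train/val/test index split: every sample index
    lands in exactly one of the three sets."""
    rng = random.Random(seed)
    splits = {"train": [], "val": [], "test": []}
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    for idxs in by_class.values():
        rng.shuffle(idxs)
        n_tr = int(train * len(idxs))
        n_va = int(val * len(idxs))
        splits["train"] += idxs[:n_tr]
        splits["val"] += idxs[n_tr:n_tr + n_va]
        splits["test"] += idxs[n_tr + n_va:]  # remainder, never reused
    return splits
```

Because the three slices partition each class's shuffled index list, leakage between sets is impossible by construction.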
We propose a new framework for the sampling, compression, and analysis of distributions of point sets and other geometric objects embedded in Euclidean spaces. Our approach involves constructing a tensor called the RaySense sketch, which captures nearest neighbors from the underlying geometry of points along a set of rays. We explore various operations that can be performed on the RaySense sketch, leading to different properties and potential applications. Statistical information about the data set can be extracted from the sketch, independently of the ray set. Line integrals on point sets can be efficiently computed using the sketch. We also present several examples illustrating applications of the proposed strategy in practical scenarios.
Physics-informed neural networks (PINNs) have become an attractive machine learning framework for obtaining solutions to partial differential equations (PDEs). PINNs embed the initial, boundary, and PDE constraints into the loss function. The performance of PINNs is generally affected by both training and sampling: training methods address the difficulties caused by the special PDE residual loss of PINNs, while sampling methods concern the location and distribution of the points at which the PDE residual loss is evaluated. However, a common problem among these original PINNs is that they neglect temporal information during training or sampling when dealing with an important PDE category, namely time-dependent PDEs, for which temporal information plays a key role. One method, Causal PINN, considers temporal causality at the training level but not at the sampling level; incorporating temporal knowledge into sampling remained to be studied. To fill this gap, we propose a novel temporal causality-based adaptive sampling method that dynamically determines the sampling ratio according to both the PDE residual and temporal causality. By designing a sampling ratio determined by both residual loss and temporal causality to control the number and location of sampled points in each temporal sub-domain, we provide a practical way to incorporate temporal information into sampling. Numerical experiments on several nonlinear time-dependent PDEs, including the Cahn-Hilliard, Korteweg-de Vries, Allen-Cahn, and wave equations, show that the proposed sampling method improves performance. We demonstrate that this relatively simple sampling method can improve prediction performance by up to two orders of magnitude compared with other methods, especially when sampling points are limited.
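One plausible way to combine the two signals described above is to weight each temporal sub-domain's residual by a causal factor that decays with the accumulated residual of earlier times, then normalize into sampling ratios. The sketch below is an interpretation in the spirit of causal training, with a hypothetical weight `exp(-eps * cumulative residual)`, not the paper's exact formula:

```python
import math

def causal_sampling_ratios(residuals, eps=1.0):
    """Sampling ratio per temporal sub-domain: residual magnitude times a
    causal weight w_i = exp(-eps * sum of residuals at earlier times), so
    later sub-domains are de-emphasized until earlier ones are resolved."""
    w, acc = [], 0.0
    for r in residuals:
        w.append(math.exp(-eps * acc) * r)
        acc += r
    total = sum(w)
    return [x / total for x in w]

ratios = causal_sampling_ratios([1.0, 1.0, 1.0])
assert ratios[0] > ratios[1] > ratios[2]  # equal residuals: earlier times win
```

Sub-domains with large residuals still attract points, but only once the residuals before them have shrunk, which is the causal ordering the method enforces.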
Dispersion fuels, known for their excellent safety performance, are widely used in advanced reactors such as high-temperature gas-cooled reactors. Compared with deterministic methods, the Monte Carlo method has advantages in the geometric modeling of stochastic media. Explicit modeling offers high computational accuracy at a high computational cost. The chord length sampling (CLS) method can improve computational efficiency by sampling the chord length during neutron transport from the matrix chord length's probability density function. This study shows that the excluded-volume effect in realistic stochastic media can introduce deviations into CLS. A chord length correction approach is proposed, in which the correction factor is obtained by developing the Particle code based on equivalent transmission probability. Numerical analysis against reference solutions from explicit modeling in the RMC code demonstrates that CLS with the proposed correction provides good accuracy in addressing the excluded-volume effect in realistic infinite stochastic media.
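In the idealized (Markovian) picture underlying CLS, matrix chord lengths follow an exponential distribution, and sampling one is a single inverse-CDF draw. A minimal sketch of that kernel (the uncorrected baseline; the paper's correction factor for the excluded-volume effect is not modeled here):

```python
import math, random

def sample_chord(mean_chord, rng):
    """Inverse-CDF draw of a chord length from the exponential
    distribution with the given mean chord length."""
    return -mean_chord * math.log(1.0 - rng.random())

rng = random.Random(42)
samples = [sample_chord(2.0, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)  # should cluster near 2.0
```

During transport, each draw decides how far a neutron travels in the matrix before encountering the next fuel particle; the excluded-volume correction would rescale this kernel.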
Funding: supported by the National Key R&D Program of China (Nos. 2017YFA0403000 and 2017YFA0403100).
Funding: The National Key Research and Development Program of China (2019YFB2203500), the National Natural Science Foundation of China (NSFC) (61927821), the Joint Research Fund of the Ministry of Education of China (6141A02022436), and the Fundamental Research Funds for the Central Universities (ZYGX2019Z011).
Abstract: We propose low-speed photonic sampling for independent high-frequency characterization of a Mach–Zehnder modulator (MZM) and a photodetector (PD) in an optical link. A low-speed mode-locked laser diode (MLLD) provides an ultra-wideband optical stimulus with a scalable frequency range, working as the photonic sampling source of the link. The uneven spectral lines of the MLLD are first characterized with symmetric modulation within the frequency range of interest. Then, the electro-optic modulated signals are down-converted to the first Nyquist frequency range, yielding self-referenced extraction of the modulation depth and half-wave voltage of the MZM without correcting the responsivity fluctuation of the PD in the link. Finally, the frequency responsivity of the PD is measured in a self-referenced manner under null modulation of the MZM. As the frequency responses of the MZM and the PD can be obtained independently, our method allows self-referenced high-frequency measurement of a high-speed optical link. In a proof-of-concept experiment, a 96.9 MS/s MLLD is used to measure an MZM and a PD within a frequency range of up to 50 GHz. The consistency between our method and the conventional method verifies the ultra-wideband and self-referenced high-frequency characterization of high-speed MZMs and PDs.
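The down-conversion step relies on frequency folding into the first Nyquist zone of the pulse train. A small helper (illustrative, not from the paper) shows where a 50 GHz stimulus lands for the 96.9 MS/s repetition rate quoted above.

```python
def alias_to_first_nyquist(f_hz, fs_hz):
    # Fold a stimulus frequency into [0, fs/2], the first Nyquist zone.
    f_mod = f_hz % fs_hz
    return min(f_mod, fs_hz - f_mod)

fs = 96.9e6                              # MLLD repetition rate from the abstract
f_if = alias_to_first_nyquist(50e9, fs)  # low-frequency image of a 50 GHz tone
```

The 50 GHz tone folds down to a few hundred kilohertz, which is why a low-speed acquisition chain suffices to read out the high-frequency response.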
Abstract: The aim of this study is to investigate the impact of landslide and non-landslide sampling strategies on the performance of landslide susceptibility assessment (LSA). The study area is the Feiyun catchment in Wenzhou City, Southeast China. Two types of landslide samples, combined with seven non-landslide sampling strategies, resulted in a total of 14 scenarios. The corresponding landslide susceptibility map (LSM) for each scenario was generated using the random forest model. The receiver operating characteristic (ROC) curve and statistical indicators were calculated and used to assess the impact of the dataset sampling strategy. The results showed that higher accuracies were achieved when using the landslide core as positive samples, combined with non-landslide sampling from the very low susceptibility zone or the buffer zone. The results reveal the influence of landslide and non-landslide sampling strategies on the accuracy of LSA, providing a reference for subsequent researchers aiming to obtain a more reasonable LSM.
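One of the non-landslide strategies compared above, sampling negatives outside a buffer around mapped landslides, can be sketched with simple rejection sampling. Coordinates, buffer radius, and sample counts below are hypothetical.

```python
import math
import random

def sample_non_landslides(landslides, n, region, buffer_m, rng):
    # Rejection-sample candidate locations, keeping only those at least
    # buffer_m away from every mapped landslide point.
    xmin, xmax, ymin, ymax = region
    out = []
    while len(out) < n:
        x = rng.uniform(xmin, xmax)
        y = rng.uniform(ymin, ymax)
        if all(math.hypot(x - lx, y - ly) >= buffer_m for lx, ly in landslides):
            out.append((x, y))
    return out

rng = random.Random(1)
slides = [(rng.uniform(0, 1000), rng.uniform(0, 1000)) for _ in range(20)]
negatives = sample_non_landslides(slides, 50, (0, 1000, 0, 1000), 100.0, rng)
```

The positives (landslide cores or scarps) and these negatives would then be fed to the random forest classifier to produce one of the 14 scenario maps.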
Funding: Supported by the Platform Development Foundation of the China Institute for Radiation Protection (No. YP21030101), the National Natural Science Foundation of China (General Program) (Nos. 12175114 and U2167209), the National Key R&D Program of China (No. 2021YFF0603600), and the Tsinghua University Initiative Scientific Research Program (No. 20211080081).
Abstract: Global variance reduction is a bottleneck in Monte Carlo shielding calculations. The global variance reduction problem requires that the statistical error be uniform over the entire space. This study proposes a grid-AIS method for the global variance reduction problem based on the AIS method, implemented in the Monte Carlo program MCShield. The proposed method was validated using the VENUS-Ⅲ international benchmark problem and a self-shielding calculation example. The results from the VENUS-Ⅲ benchmark problem showed that the grid-AIS method achieved a significant reduction in the variance of the statistical errors of the MESH grids, decreasing from 1.08×10^(-2) to 3.84×10^(-3), a 64.00% reduction. This demonstrates that the grid-AIS method is effective for the global variance reduction problem. The results of the self-shielding calculation demonstrate that the grid-AIS method produces accurate computational results. Moreover, the grid-AIS method exhibited a computational efficiency approximately one order of magnitude higher than that of the AIS method and approximately two orders of magnitude higher than that of the conventional Monte Carlo method.
Abstract: In this paper, we establish a new multivariate Hermite sampling series involving samples of the function itself and its mixed and non-mixed partial derivatives of arbitrary order. This multivariate form of Hermite sampling is valid for some classes of multivariate entire functions satisfying certain growth conditions. We show that many known results, including those in Commun Korean Math Soc, 2002, 17:731-740, Turk J Math, 2017, 41:387-403, and Filomat, 2020, 34:3339-3347, are special cases of our results. Moreover, we estimate the truncation error of this sampling based on localized sampling without a decay assumption. Illustrative examples are also presented.
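For orientation, the classical one-dimensional special case of such a series, which samples the function and its first derivative at the integers, is the derivative (Hermite) sampling expansion. It is stated here only as a reference point, for square-integrable entire functions of exponential type $2\pi$:

```latex
f(t) \;=\; \sum_{n=-\infty}^{\infty}
  \Bigl[\, f(n) + (t-n)\, f'(n) \,\Bigr]
  \left( \frac{\sin \pi (t-n)}{\pi (t-n)} \right)^{2}
```

The multivariate series of the paper generalizes expansions of this type to mixed and non-mixed partial derivatives of arbitrary order.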
Funding: Supported by grants from the National Natural Science Foundation of China (42004010) and the Beijing Natural Science Foundation (8204077).
Abstract: Rapid acquisition of the kinematic deformation field and seismic intensity distribution of large earthquakes is crucial for post-seismic emergency rescue, disaster assessment, and future seismic risk research. Advances in GNSS observation and data processing give it an important role in this field, especially high-frequency GNSS. We used the differential positioning method to process 1 Hz GNSS data from 98 sites within 1000 km of the M_(S)7.4 Maduo earthquake epicenter, and obtained the kinematic deformation field and the seismic intensity distribution using the peak ground velocity derived from the displacement waveforms. The results show that: 1) Horizontal coseismic response deformation ranging from 25 mm to 301 mm can be observed within a 1000 km radius of the epicenter. Coseismic response deformation on the east and west sides shows bilateral asymmetry, which differs markedly from the symmetry of the surface rupture. 2) The seismic intensity obtained through high-frequency GNSS and that from field investigations exhibit good consistency in the scope and orientation of the high-intensity area, although the former is generally slightly smaller than the latter. 3) There may be obstacles on the eastern side of the seismogenic fault. The Maduo earthquake induced a certain tectonic stress loading effect on the western Kunlun Pass-Jiangcuo fault (KPJF) and the Maqin-Maqu segment, resulting in higher seismic risk in the future.
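Deriving intensity from displacement waveforms reduces to numerical differentiation followed by a peak pick. A toy version with a synthetic 1 Hz record follows; the amplitude and oscillation frequency are invented for illustration, not taken from the Maduo data.

```python
import math

def peak_ground_velocity(disp_mm, dt_s):
    # Differentiate a displacement waveform with central differences,
    # then take the peak absolute velocity (PGV).
    v = [(disp_mm[i + 1] - disp_mm[i - 1]) / (2.0 * dt_s)
         for i in range(1, len(disp_mm) - 1)]
    return max(abs(x) for x in v)

dt = 1.0  # 1 Hz GNSS sampling, as in the study
t = [i * dt for i in range(300)]
# Synthetic record: 30 mm amplitude, 0.1 Hz oscillation.
d = [30.0 * math.sin(2.0 * math.pi * 0.1 * ti) for ti in t]
pgv = peak_ground_velocity(d, dt)  # mm/s
```

The PGV values per site would then be mapped to intensity classes through an empirical PGV-intensity relation.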
Abstract: Nitrogen doping has been widely used to improve the performance of carbon electrodes in supercapacitors, particularly in terms of their high-frequency response. However, the charge storage and electrolyte ion response mechanisms of different nitrogen dopants at high frequencies are still unclear. In this study, melamine foam carbons with different configurations of surface-doped N were formed by gradient carbonization, and the effects of the configurations on the high-frequency response behavior of the supercapacitors were analyzed. Using a combination of experiments and first-principles calculations, we found that pyrrolic N, characterized by a higher adsorption energy, increases the charge storage capacity of the electrode at high frequencies. On the other hand, graphitic N, with a lower adsorption energy, increases the speed of the ion response. We propose the use of adsorption energy as a practical descriptor for electrode/electrolyte design in high-frequency applications, offering a more universal approach for improving the performance of N-doped carbon materials in supercapacitors.
Funding: Smart Integration Key Technologies and Application Demonstrations of Large Scale Underground Space Disaster Prevention and Reduction in Guangzhou International Financial City ([2021]–KJ058).
Abstract: The specialized equipment used in long-line tunnel engineering is evolving in large-scale, multifunctional, and complex directions. The vibration generated by high-frequency units during regular operation is borne by the unit foundations, and the vibration magnitude and operating frequency vary across engineering contexts, leading to variations in the dynamic response of the foundation. High-frequency units yield significantly different outcomes under different startup conditions and startup times; in severe cases this results in failure to meet operational requirements, disturbs the normal function of the tunnel, and causes harm to the foundation structure, personnel, and property. This article formulates a finite element numerical model with solid elements based on three-dimensional elastic body theory and integrates field measurements to substantiate and determine the crucial parameter configurations of the finite element model. By proposing a comprehensive startup timing function for high-frequency dynamic machines under different startup conditions, simulating the frequency and magnitude variations during the startup process, and suggesting functions for the changes in frequency and magnitude, a simulated startup schedule function for high-frequency machines is created through coupling. Taking into account the selection of the transient dynamic analysis step length, the dynamic response of the dynamic foundation during its fundamental frequency crossing is obtained. The validation checks whether the structural magnitude surpasses the safety threshold during the critical phase in which unit startup traverses the structural resonance region. Design recommendations for the dynamic foundations of high-frequency units are provided, taking into account the startup process of the machine and ensuring the safe operation of the tunnel.
Funding: Supported in part by the Fundamental Research Funds for the Central Universities under Grant 2682023CX019, the National Natural Science Foundation of China under Grant U23B6007 and Grant 52307141, and the Sichuan Science and Technology Program under Grant 2024NSFSC0115.
Abstract: High-frequency oscillation (HFO) of grid-connected wind power generation systems (WPGS) is one of the most critical issues of recent years threatening the safe access of WPGS to the grid. Ensuring that WPGS can damp HFO is becoming increasingly vital for the development of wind power. The HFO phenomenon of wind turbines under different scenarios usually has different mechanisms. Hence, engineers need to understand the working mechanisms of the different HFO damping technologies and select the appropriate one to ensure the effective implementation of oscillation damping in practical engineering. This paper introduces the general assumptions made when analyzing HFO in WPGS, systematically summarizes the reasons for the occurrence of HFO in different scenarios, analyzes in depth the key points and difficulties of HFO damping under different scenarios, and then compares the technical performance of various types of HFO suppression methods to provide adequate references for engineers in the application of the technology. Finally, this paper discusses possible future research difficulties in the problem of HFO, as well as possible future trends in the demand for HFO damping.
Funding: The Science, Research and Innovation Promotion Funding (TSRI) (Grant No. FRB660012/0168), managed under Rajamangala University of Technology Thanyaburi (FRB66E0646O.4).
Abstract: This study presents the design of a modified attributed control chart based on a double sampling (DS) np chart applied in combination with generalized multiple dependent state (GMDS) sampling to monitor the mean life of a product, based on a time-truncated life test employing the Weibull distribution. The control chart developed supports the examination of mean lifespan variation for a particular product during manufacturing. Three control limit levels are used: the warning control limit, inner control limit, and outer control limit. Together, they enhance the capability for variation detection. A genetic algorithm can be used for optimization during the in-control process, whereby the optimal parameters can be established for the proposed control chart. The control chart performance is assessed using the average run length, while the influence of the model parameters on the control chart solution is assessed via sensitivity analysis based on an orthogonal experimental design with multiple linear regression. A comparative study was conducted based on the out-of-control average run length, in which the developed control chart offered greater sensitivity in the detection of process shifts while using smaller samples on average than existing control charts. Finally, to exhibit the utility of the developed control chart, this paper presents its application using simulated data with parameters drawn from a real data set.
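The three nested limit pairs can be written down directly from np-chart statistics. The sketch below assumes binomially distributed nonconforming counts; the k multipliers are illustrative placeholders, whereas the paper tunes its actual limit coefficients with a genetic algorithm under an average-run-length constraint.

```python
import math

def np_chart_limits(n, p0, k_outer=3.0, k_inner=2.0, k_warn=1.0):
    # Three nested control-limit pairs (warning, inner, outer) around the
    # in-control mean n*p0, using the binomial standard deviation.
    mu = n * p0
    sigma = math.sqrt(n * p0 * (1.0 - p0))
    def pair(k):
        return (max(0.0, mu - k * sigma), mu + k * sigma)
    return {"warning": pair(k_warn), "inner": pair(k_inner), "outer": pair(k_outer)}

limits = np_chart_limits(n=50, p0=0.05)  # sample size and fraction are illustrative
```

In the GMDS scheme, the decision for a subgroup falling between limit levels also depends on the states of preceding subgroups, which is what boosts sensitivity at small average sample sizes.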
Funding: The Communication University of China (CUC230A013) and the Fundamental Research Funds for the Central Universities.
Abstract: The advent of self-attention mechanisms within Transformer models has significantly propelled the advancement of deep learning algorithms, yielding outstanding achievements across diverse domains. Nonetheless, self-attention mechanisms falter when applied to datasets with intricate semantic content and extensive dependency structures. In response, this paper introduces a Diffusion Sampling and Label-Driven Co-attention Neural Network (DSLD), which adopts a diffusion sampling method to capture more comprehensive semantic information from the data. Additionally, the model leverages the joint correlation information of labels and data to guide the computation of the text representation, correcting semantic representation biases in the data and increasing the accuracy of the semantic representation. Ultimately, the model computes the corresponding classification results by synthesizing these rich data semantic representations. Experiments on seven benchmark datasets show that our proposed model achieves competitive results compared to state-of-the-art methods.
Funding: Project supported by the National Key Research and Development Program of China (Grant No. 2023YFF1204402), the National Natural Science Foundation of China (Grant Nos. 12074079 and 12374208), the Natural Science Foundation of Shanghai (Grant No. 22ZR1406800), and the China Postdoctoral Science Foundation (Grant No. 2022M720815).
Abstract: The rapid advancement and broad application of machine learning (ML) have driven a groundbreaking revolution in computational biology. One of the most cutting-edge and important applications of ML is its integration with molecular simulations to improve the sampling efficiency of the vast conformational space of large biomolecules. This review focuses on recent studies that utilize ML-based techniques to explore the protein conformational landscape. We first highlight the recent development of ML-aided enhanced sampling methods, including heuristic algorithms and neural networks designed to refine the selection of reaction coordinates for the construction of bias potentials, or to facilitate the exploration of unsampled regions of the energy landscape. Further, we review the development of autoencoder-based methods that combine molecular simulations and deep learning to expand the search for protein conformations. Lastly, we discuss cutting-edge methodologies for the one-shot generation of protein conformations with precise Boltzmann weights. Collectively, this review demonstrates the promising potential of machine learning to revolutionize our insight into the complex conformational ensembles of proteins.
Funding: This research was supported by the National Key R&D Program of China (No. 2021YFB2700800) and the GHfund B (No. 202302024490).
Abstract: Peer-to-peer (P2P) overlay networks provide message transmission capabilities for blockchain systems. Improving data transmission efficiency in P2P networks can greatly enhance the performance of blockchain systems. However, traditional blockchain P2P networks face a common challenge: there is often a mismatch between the upper-layer traffic requirements and the underlying physical network topology. This mismatch results in redundant data transmission and inefficient routing, severely constraining the scalability of blockchain systems. To address these pressing issues, we propose FPSblo, an efficient transmission method for blockchain networks. Our inspiration for FPSblo stems from the Farthest Point Sampling (FPS) algorithm, a well-established technique widely utilized in point cloud image processing. In this work, we analogize blockchain nodes to points in a point cloud image and select a representative set of nodes to prioritize message forwarding so that messages reach the network edge quickly and are evenly distributed. Moreover, we compare our model with the Kadcast transmission model, a classic improved model for blockchain P2P transmission networks. The experimental findings show that the FPSblo model reduces transmission redundancy by 34.8% and reduces the overload rate by 37.6%. Experimental analysis confirms that the FPS-BT model enhances the transmission capability of the P2P network in blockchain.
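The FPS rule that inspired FPSblo is short enough to state in full: repeatedly add the point whose distance to the already-chosen set is largest. A plain-Python sketch follows; the node coordinates are illustrative, and the paper applies the rule to blockchain nodes rather than 2-D points.

```python
import math

def farthest_point_sampling(points, k, start=0):
    # Greedy FPS: maintain, for every point, its distance to the chosen set,
    # and repeatedly promote the point with the largest such distance.
    chosen = [start]
    dist = [math.dist(p, points[start]) for p in points]
    while len(chosen) < k:
        nxt = max(range(len(points)), key=lambda i: dist[i])
        chosen.append(nxt)
        for i, p in enumerate(points):
            dist[i] = min(dist[i], math.dist(p, points[nxt]))
    return chosen

pts = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 5), (1, 1)]
sel = farthest_point_sampling(pts, 3)
```

Selecting forwarding nodes this way spreads them out, which is the property the paper exploits so that messages reach the network edge quickly and evenly.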
Funding: Supported by the National Natural Science Foundation of China (No. 52177130) and the Key Projects for Industrial Prospects and Core Technology Research in Suzhou City (No. SYC2022029).
Abstract: Dielectric barrier discharge (DBD) plasma excited by a high-frequency alternating-current (AC) power supply is widely employed for the degradation of volatile organic compounds (VOCs). However, the thermal effect generated during the discharge process leads to energy waste and low energy utilization efficiency. In this work, an innovative DBD thermally-conducted catalysis (DBD-TCC) system, integrating high-frequency AC-DBD plasma and its generated thermal effects to activate the Co/SBA-15 catalyst, was employed for toluene removal. Specifically, Co/SBA-15 catalysts are positioned close to the ground electrode of the plasma zone and can be heated and activated by the thermal effect when the voltage exceeds 10 kV. At 12.4 kV, the temperature in the catalyst zone reached 261℃ in the DBD-TCC system, resulting in increases in toluene degradation efficiency of 17%, CO_(2) selectivity of 21.2%, and energy efficiency of 27%, respectively, compared to the DBD system alone. In contrast, the DBD thermally-unconducted catalysis (DBD-TUC) system fails to enhance toluene degradation due to insufficient heat absorption and catalytic activation, highlighting the crucial role of AC-DBD generated heat in the activation of the catalyst. Furthermore, the degradation pathway and mechanism of toluene in the DBD-TCC system were hypothesized. This work is expected to provide an energy-efficient approach for the removal of VOCs by high-frequency AC-DBD plasma.
Funding: Provided by Shaanxi Province's Key Research and Development Plan (No. 2022NY-087).
Abstract: To address the slow search and tortuous paths of the Rapidly Exploring Random Tree (RRT) algorithm, a feedback-biased sampling RRT, called FS-RRT, is proposed based on RRT. First, to improve the sampling efficiency of RRT and thereby shorten the search time, the search area of the random tree is restricted. Second, to obtain better information about obstacles and shorten the path length, a feedback-biased sampling strategy is used instead of traditional random sampling: the collision of an expanding node with an obstacle generates feedback information so that the next expanding node avoids expanding within a specific angle range. Third, an inverse optimization strategy is proposed to remove redundant points from the initial path, making the path shorter and more accurate. Finally, to ensure the smooth operation of the robot in practice, auxiliary points are used to optimize the cubic Bezier curve so that the optimized path does not cross obstacles. The experimental results demonstrate that, compared to the traditional RRT algorithm, the proposed FS-RRT algorithm performs favorably against mainstream algorithms regarding running time, number of search iterations, and path length. Moreover, the improved algorithm also performs well in narrow obstacle environments, and its effectiveness is further confirmed by experimental verification.
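The smoothing step above uses cubic Bezier segments. A minimal evaluator follows; the control points are hypothetical, and the paper additionally inserts auxiliary points so the curve clears obstacles.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    # Evaluate one cubic Bezier segment at parameter t in [0, 1]
    # using the Bernstein-basis form.
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Sample a smooth arc between two waypoints of a piecewise-linear RRT path.
path = [cubic_bezier((0, 0), (2, 4), (6, 4), (8, 0), i / 20.0) for i in range(21)]
```

The curve interpolates its end control points and stays inside the convex hull of all four, which is why moving the two interior points (or adding auxiliary ones) is enough to steer the smoothed path away from obstacles.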
Abstract: In order to accurately measure an object's three-dimensional surface shape, the influence of sampling on the measurement was studied. First, on the basis of spectrum expressions derived through the Fourier transform, the generation of CCD pixels was analyzed and its expression was given. Then, based on the discrete expression of the deformed fringes obtained after sampling, its Fourier spectrum expression was derived, resulting in an infinitely repeated "spectra island" in the frequency domain. Finally, after using a low-pass filter to remove high-order harmonic components and retaining only one fundamental frequency component, the inverse Fourier transform was used to reconstruct the signal strength. A method of reducing the sampling interval, i.e., the number of pixels between sampling points, was proposed to increase the ratio between the sampling frequency and the fundamental frequency of the grating, so as to reconstruct the object's surface shape more accurately under the condition m>4. The basic principle was verified through simulation and experiment. In the simulation, the sampling intervals were 8 pixels, 4 pixels, 2 pixels, and 1 pixel; the maximum absolute error values obtained in the last three situations were 88.80%, 38.38%, and 31.50% of that in the first situation, respectively, and the corresponding average absolute error values were 71.84%, 43.27%, and 32.26%. This demonstrates that the smaller the sampling interval, the better the recovery. Taking the same four sampling intervals in the experiment as in the simulation leads to the same conclusions. The simulated and experimental results show that reducing the sampling interval can improve the accuracy of surface shape measurement and achieve better reconstruction results.
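The "keep one fundamental component" step can be illustrated with a single-bin DFT: when an integer number of samples spans one fringe period, bin 1 of the DFT over that period isolates the fundamental, whose phase carries the height information. A stdlib-only sketch with m = 8 samples per fringe, comfortably above the m > 4 condition; the signal parameters are invented.

```python
import cmath
import math

def fringe_phase(samples):
    # Single-bin DFT at the fundamental fringe frequency. With exactly one
    # fringe period in the record, bin 1 contains only the fundamental,
    # so its argument is the fringe phase.
    n = len(samples)
    x1 = sum(s * cmath.exp(-2j * math.pi * k / n) for k, s in enumerate(samples))
    return cmath.phase(x1)

true_phase = 0.7
m = 8  # samples per fringe (satisfies the m > 4 condition)
samples = [128 + 100 * math.cos(2 * math.pi * k / m + true_phase) for k in range(m)]
est = fringe_phase(samples)
```

With fewer samples per fringe, the repeated "spectra islands" move closer to the fundamental and leak into the filtered band, which is the error mechanism the study quantifies.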
Funding: The Researchers Supporting Project Number (RSPD2024R848), King Saud University, Riyadh, Saudi Arabia.
Abstract: Disjoint sampling is critical for rigorous and unbiased evaluation of state-of-the-art (SOTA) models, e.g., Attention Graph and Vision Transformer. When training, validation, and test sets overlap or share data, a bias is introduced that inflates performance metrics and prevents accurate assessment of a model's true ability to generalize to new examples. This paper presents an innovative disjoint sampling approach for training SOTA models for Hyperspectral Image Classification (HSIC). By separating training, validation, and test data without overlap, the proposed method facilitates a fairer evaluation of how well a model can classify pixels it was not exposed to during training or validation. Experiments demonstrate that the approach significantly improves a model's generalization compared to alternatives that include training and validation data in the test data (a trivial approach involves testing the model on the entire hyperspectral dataset to generate the ground truth maps; this approach produces higher accuracy but ultimately results in low generalization performance). Disjoint sampling eliminates data leakage between sets and provides reliable metrics for benchmarking progress in HSIC. Disjoint sampling is critical for advancing SOTA models and their real-world application to large-scale land mapping with hyperspectral sensors. Overall, with the disjoint test set, the deep models achieve 96.36% accuracy on the Indian Pines data, 99.73% on the Pavia University data, 98.29% on the University of Houston data, 99.43% on the Botswana data, and 99.88% on the Salinas data.
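The disjoint protocol itself is simple to state: shuffle the labeled pixel indices once and cut them into three non-overlapping sets. A sketch follows; the 60/20/20 split and the sample count are illustrative, not the paper's exact protocol.

```python
import random

def disjoint_split(n_samples, train=0.6, val=0.2, seed=0):
    # Shuffle indices once, then cut into pairwise-disjoint
    # train / validation / test index sets.
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    a = int(train * n_samples)
    b = int((train + val) * n_samples)
    return set(idx[:a]), set(idx[a:b]), set(idx[b:])

tr, va, te = disjoint_split(10249)  # labeled-pixel count is illustrative
```

Because every index lands in exactly one set, test pixels are guaranteed to be unseen during both training and validation, which is the leakage guarantee the paper argues for.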
Funding: Supported by the National Science Foundation (Grant No. DMS-1440415); partially supported by a grant from the Simons Foundation, NSF Grants DMS-1720171 and DMS-2110895, and a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada.
Abstract: We propose a new framework for the sampling, compression, and analysis of distributions of point sets and other geometric objects embedded in Euclidean spaces. Our approach involves constructing a tensor called the RaySense sketch, which captures nearest neighbors from the underlying geometry of points along a set of rays. We explore various operations that can be performed on the RaySense sketch, leading to different properties and potential applications. Statistical information about the data set can be extracted from the sketch, independent of the ray set. Line integrals on point sets can be efficiently computed using the sketch. We also present several examples illustrating applications of the proposed strategy in practical scenarios.
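A minimal version of the sketching idea: walk each ray in fixed steps and record the nearest data point at every step. This toy (2-D, brute-force nearest neighbor, invented coordinates) only illustrates the structure of the RaySense tensor, not the paper's full construction.

```python
import math

def raysense_sketch(points, rays, n_steps=8):
    # For each ray (origin o, direction d), record the nearest data point
    # at evenly spaced positions along the ray.
    sketch = []
    for (o, d) in rays:
        row = []
        for s in range(n_steps):
            q = (o[0] + s * d[0], o[1] + s * d[1])
            row.append(min(points, key=lambda p: math.dist(p, q)))
        sketch.append(row)
    return sketch

pts = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
sk = raysense_sketch(pts, [((0.0, 0.0), (1.0, 0.0))], n_steps=5)
```

Rows of the resulting tensor are piecewise-constant in the sampled points, which is what makes statistics and line integrals over the point set cheap to read off the sketch.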
Funding: Project supported by the Key National Natural Science Foundation of China (Grant No. 62136005) and the National Natural Science Foundation of China (Grant Nos. 61922087, 61906201, and 62006238).
Abstract: Physics-informed neural networks (PINNs) have become an attractive machine learning framework for obtaining solutions to partial differential equations (PDEs). PINNs embed initial, boundary, and PDE constraints into the loss function. The performance of PINNs is generally affected by both training and sampling. Specifically, training methods focus on how to overcome the training difficulties caused by the special PDE residual loss of PINNs, and sampling methods are concerned with the location and distribution of the sampling points upon which evaluations of the PDE residual loss are accomplished. However, a common problem among these original PINNs is that they omit special temporal information utilization during the training or sampling stages when dealing with an important PDE category, namely, time-dependent PDEs, where temporal information plays a key role in the algorithms used. There is one method, called Causal PINN, that considers temporal causality at the training level but not special temporal utilization at the sampling level. Incorporating temporal knowledge into sampling remains to be studied. To fill this gap, we propose a novel temporal causality-based adaptive sampling method that dynamically determines the sampling ratio according to both PDE residual and temporal causality. By designing a sampling ratio determined by both residual loss and temporal causality to control the number and location of sampled points in each temporal sub-domain, we provide a practical solution that incorporates temporal information into sampling. Numerical experiments on several nonlinear time-dependent PDEs, including the Cahn–Hilliard, Korteweg–de Vries, Allen–Cahn, and wave equations, show that our proposed sampling method can improve performance. We demonstrate that using such a relatively simple sampling method can improve prediction performance by up to two orders of magnitude compared with the results from other methods, especially when points are limited.
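The idea of a sampling ratio driven jointly by residual and causality can be illustrated with a toy budget allocator over temporal sub-domains; the weighting below is a stand-in, not the paper's formula.

```python
def allocate_points(residuals, causal_weights, total_points):
    # Split a sampling budget across temporal sub-domains in proportion to
    # residual * causal weight. Causal weights stay small for later
    # sub-domains until earlier ones are sufficiently trained.
    scores = [r * w for r, w in zip(residuals, causal_weights)]
    z = sum(scores)
    alloc = [int(total_points * s / z) for s in scores]
    alloc[-1] += total_points - sum(alloc)  # give rounding leftovers to the last slab
    return alloc

res = [0.1, 0.5, 2.0, 4.0]   # mean PDE residual per temporal sub-domain
w = [1.0, 0.8, 0.3, 0.05]    # causality gate: damp the as-yet-untrusted late slabs
counts = allocate_points(res, w, 1000)
```

Note that the largest raw residual (the last slab) does not receive the most points: the causal gate redirects the budget toward the earliest sub-domain that still has a substantial residual.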