In order to enhance grain sampling efficiency, in this work a truss-type multi-rod grain sampling machine is designed and tested. The sampling machine primarily consists of a truss support mechanism, a main carriage mechanism, an auxiliary carriage mechanism, sampling rods, and a PLC controller. The movement of the main carriage on the truss, the movement of the auxiliary carriage on the main carriage, and the vertical movement of the sampling rods on the auxiliary carriage are controlled through PLC programming. The sampling machine accurately controls the position of the sampling rods, enabling random sampling with six rods to ensure comprehensive, randomized coverage. Sampling experiments were conducted, and the results showed that the multi-rod grain sampling machine samples simultaneously with six rods, achieving a sampling frequency of 38 times per hour. The round trip of the sampling rods takes 33 seconds per cycle, and the sampling range in the length direction reaches 18 m. This study provides valuable insights for the design of multi-rod grain sampling machines.
The laboratories in the bauxite processing industry are always under a heavy workload of sample collection, analysis, and compilation of results. After size reduction in the grinding mills, samples of bauxite are collected at intervals of 3 to 4 hours. Large bauxite processing plants producing 1 million tons of pure aluminium can have three grinding mills, so the total number of samples to be tested in one day reaches 18 to 24. The bauxite ore sample coming from the grinding mill is tested for its particle size and composition. For composition testing, the sample is first prepared by fusing it with X-ray flux and is then sent for X-ray fluorescence analysis. Afterwards, the crucibles are washed in ultrasonic baths for the next test. The whole procedure takes about 2 - 3 hours. With a large number of samples reaching the laboratory, the chances of error in composition analysis increase. In this study, we used a composite sampling methodology to reduce the number of samples reaching the laboratory without compromising their validity. The average composition of fifteen samples was measured against composite samples, the mean difference was calculated, and the standard deviation and paired t-test values were evaluated against predetermined critical values obtained from a two-tailed test. The paired t-test values were much lower than the critical values, validating the composition obtained through composite sampling. The composite sampling approach reduced not only the number of samples but also the chemicals used in the laboratory. The objective of an improved analytical protocol that reduces the number of samples reaching the laboratory was achieved without compromising the quality of analytical results.
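As a hedged illustration of the validation step described above, the following Python sketch compares individual-sample averages against composite-sample results with a paired t-test; the numbers and the alumina column are invented for demonstration and are not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical Al2O3 (%) results: average of individual samples vs. the
# corresponding composite sample for several periods. Values are illustrative only.
individual_mean = np.array([48.2, 47.9, 49.1, 48.5, 47.6, 48.8])
composite       = np.array([48.0, 48.1, 48.9, 48.7, 47.5, 48.6])

diff = individual_mean - composite
t_stat, p_value = stats.ttest_rel(individual_mean, composite)  # paired t-test

print(f"mean difference = {diff.mean():.3f}, std = {diff.std(ddof=1):.3f}")
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# If |t| stays below the two-tailed critical value (about 2.571 for df = 5 at
# alpha = 0.05), the composite result is treated as equivalent to the average.
```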
In this paper, we establish a new multivariate Hermite sampling series involving samples of the function itself and of its mixed and non-mixed partial derivatives of arbitrary order. This multivariate form of Hermite sampling is valid for certain classes of multivariate entire functions satisfying suitable growth conditions. We show that many known results, including those in Commun Korean Math Soc, 2002, 17: 731-740, Turk J Math, 2017, 41: 387-403, and Filomat, 2020, 34: 3339-3347, are special cases of our results. Moreover, we estimate the truncation error of this sampling based on localized sampling without a decay assumption. Illustrative examples are also presented.
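For orientation, the classical one-dimensional Hermite sampling theorem that such multivariate series generalize reconstructs a square-integrable entire function of exponential type at most 2π from samples of f and f' at the integers; this standard textbook form, not the series established in the paper, reads

$$ f(t)=\sum_{n=-\infty}^{\infty}\Big[f(n)+(t-n)f'(n)\Big]\,\operatorname{sinc}^{2}(t-n),\qquad \operatorname{sinc}(t)=\frac{\sin(\pi t)}{\pi t}. $$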
Wideband spectrum sensing with a high-speed analog-to-digital converter (ADC) presents a challenge for practical systems. The Nyquist folding receiver (NYFR) is a promising scheme for achieving cost-effective real-time spectrum sensing, but it is subject to the complexity of processing the modulated outputs. To address this, a multipath NYFR architecture with a step-sampling rate across the different paths is proposed. The numbers of digital channels for the paths are designed based on the Chinese remainder theorem (CRT). The detectable frequency range is then divided into multiple frequency grids, and the Nyquist zone (NZ) of the input can be obtained by sensing these grids. Thus, high-precision parameter estimation is performed by exploiting the NYFR characteristics. Compared with existing methods, the proposed scheme overcomes the difficulties of NZ estimation, information loss, heavy computation, low accuracy, and high false-alarm probability. Comparative simulation experiments verify the effectiveness of the proposed architecture.
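The core idea of resolving the Nyquist zone from several paths can be illustrated with a plain Chinese-remainder-theorem reconstruction. The sketch below uses invented channel counts and a hypothetical zone index purely to show how pairwise-coprime residues pin down a unique NZ; it is not the paper's signal-processing chain.

```python
from math import prod

def crt(moduli, residues):
    """Reconstruct x (mod prod(moduli)) from x mod m_i = r_i, moduli pairwise coprime."""
    M = prod(moduli)
    x = 0
    for m_i, r_i in zip(moduli, residues):
        M_i = M // m_i
        x += r_i * M_i * pow(M_i, -1, m_i)   # modular inverse (Python 3.8+)
    return x % M

# Hypothetical example: three paths with pairwise-coprime numbers of digital channels.
channels = [3, 4, 5]          # invented channel counts per path
true_nz = 37                  # hypothetical Nyquist-zone index of the input
residues = [true_nz % m for m in channels]   # what each path would observe
print(crt(channels, residues))               # -> 37, unique within 3*4*5 = 60 zones
```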
The aim of this study is to investigate the impact of landslide and non-landslide sampling strategies on the performance of landslide susceptibility assessment (LSA). The study area is the Feiyun catchment in Wenzhou City, Southeast China. Two types of landslide samples, combined with seven non-landslide sampling strategies, resulted in a total of 14 scenarios. The corresponding landslide susceptibility map (LSM) for each scenario was generated using the random forest model. The receiver operating characteristic (ROC) curve and statistical indicators were calculated and used to assess the impact of the dataset sampling strategy. The results showed that higher accuracies were achieved when using the landslide core as positive samples, combined with non-landslide samples drawn from the very-low-susceptibility zone or the buffer zone. The results reveal the influence of landslide and non-landslide sampling strategies on the accuracy of LSA, which provides a reference for subsequent researchers aiming to obtain a more reasonable LSM.
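A hedged sketch of the kind of experiment described: train a random forest on one positive/negative sampling scenario and score it with ROC-AUC. The synthetic features and the way negatives are drawn are placeholders, not the study's GIS data pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholder conditioning factors (slope, elevation, rainfall, ...) for
# landslide-core cells (label 1) and non-landslide cells sampled from a chosen
# zone (label 0). Real work would extract these from raster/GIS layers.
X_pos = rng.normal(loc=1.0, size=(300, 8))
X_neg = rng.normal(loc=0.0, size=(300, 8))   # e.g., the "very low zone" strategy
X = np.vstack([X_pos, X_neg])
y = np.concatenate([np.ones(300), np.zeros(300)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("ROC-AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```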
Global variance reduction is a bottleneck in Monte Carlo shielding calculations. The global variance reduction problem requires that the statistical error across the entire space be uniform. This study proposes a grid-AIS method for the global variance reduction problem based on the AIS method, implemented in the Monte Carlo program MCShield. The proposed method was validated using the VENUS-Ⅲ international benchmark problem and a self-shielding calculation example. The results for the VENUS-Ⅲ benchmark problem showed that the grid-AIS method significantly reduced the variance of the statistical errors of the MESH grids, from 1.08×10^(-2) to 3.84×10^(-3), a 64.00% reduction, demonstrating that the grid-AIS method is effective for the global variance reduction problem. The results of the self-shielding calculation demonstrate that the grid-AIS method produces accurate computational results. Moreover, the grid-AIS method exhibited a computational efficiency approximately one order of magnitude higher than that of the AIS method and approximately two orders of magnitude higher than that of the conventional Monte Carlo method.
This study presents the design of a modified attribute control chart based on a double sampling (DS) np chart applied in combination with generalized multiple dependent state (GMDS) sampling to monitor the mean life of a product, using a time-truncated life test under the Weibull distribution. The developed control chart supports the examination of mean-lifespan variation for a particular product during manufacturing. Three control limit levels are used: the warning control limit, the inner control limit, and the outer control limit; together, they enhance the capability for variation detection. A genetic algorithm is used for optimization during the in-control process, whereby the optimal parameters of the proposed control chart are established. The control chart performance is assessed using the average run length, while the influence of the model parameters on the control chart solution is assessed via sensitivity analysis based on an orthogonal experimental design with multiple linear regression. A comparative study based on the out-of-control average run length showed that the developed control chart offers greater sensitivity in detecting process shifts while using smaller samples on average than existing control charts. Finally, to exhibit the utility of the developed control chart, this paper presents its application using simulated data with parameters drawn from a real data set.
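A heavily simplified, hedged sketch of the underlying probability calculation: under a Weibull time-truncated life test, the chance that an item fails before the truncation time drives the defective count, and a plain Shewhart np chart built on that probability gives a baseline average run length (ARL). This is only a stand-in for intuition; it is not the DS + GMDS design or the genetic-algorithm optimization of the paper, and the shape parameter, truncation ratio, and sample size are assumptions.

```python
import math
from scipy.stats import binom

beta, a, n = 2.0, 0.5, 50   # assumed Weibull shape, truncation ratio t0/mu0, sample size

def fail_prob(mean_ratio):
    """P(fail before t0) when the true mean life is mean_ratio times the target mean."""
    # Weibull(shape beta, scale eta) has mean eta * Gamma(1 + 1/beta), so
    # (t0/eta)^beta = (a * Gamma(1 + 1/beta) / mean_ratio)^beta.
    return 1.0 - math.exp(-(a * math.gamma(1 + 1 / beta) / mean_ratio) ** beta)

p0 = fail_prob(1.0)                        # in-control failure probability
ucl = binom.ppf(0.99865, n, p0)            # illustrative 3-sigma-like upper limit
arl0 = 1.0 / binom.sf(ucl, n, p0)          # ARL = 1 / P(count exceeds the limit)
print(f"p0 = {p0:.4f}, UCL = {ucl:.0f}, in-control ARL ≈ {arl0:.1f}")
```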
The advent of self-attention mechanisms within Transformer models has significantly propelled the advancement of deep learning algorithms, yielding outstanding achievements across diverse domains. Nonetheless, self-attention mechanisms falter when applied to datasets with intricate semantic content and extensive dependency structures. In response, this paper introduces a Diffusion Sampling and Label-Driven Co-attention Neural Network (DSLD), which adopts a diffusion sampling method to capture more comprehensive semantic information from the data. Additionally, the model leverages the joint correlation information of labels and data in computing the text representation, correcting semantic representation biases in the data and increasing the accuracy of the semantic representation. Ultimately, the model computes the classification results by synthesizing these rich semantic representations. Experiments on seven benchmark datasets show that the proposed model achieves competitive results compared with state-of-the-art methods.
The rapid advancement and broad application of machine learning (ML) have driven a groundbreaking revolution in computational biology. One of the most cutting-edge and important applications of ML is its integration with molecular simulations to improve the sampling efficiency of the vast conformational space of large biomolecules. This review focuses on recent studies that utilize ML-based techniques to explore the protein conformational landscape. We first highlight the recent development of ML-aided enhanced sampling methods, including heuristic algorithms and neural networks designed to refine the selection of reaction coordinates for the construction of bias potentials or to facilitate the exploration of unsampled regions of the energy landscape. We then review autoencoder-based methods that combine molecular simulations and deep learning to expand the search for protein conformations. Lastly, we discuss cutting-edge methodologies for the one-shot generation of protein conformations with precise Boltzmann weights. Collectively, this review demonstrates the promising potential of machine learning in revolutionizing our insight into the complex conformational ensembles of proteins.
Peer-to-peer (P2P) overlay networks provide message transmission capabilities for blockchain systems, and improving data transmission efficiency in P2P networks can greatly enhance blockchain performance. However, traditional blockchain P2P networks face a common challenge: there is often a mismatch between the upper-layer traffic requirements and the underlying physical network topology. This mismatch results in redundant data transmission and inefficient routing, severely constraining the scalability of blockchain systems. To address these issues, we propose FPSblo, an efficient transmission method for blockchain networks. Our inspiration for FPSblo stems from the Farthest Point Sampling (FPS) algorithm, a well-established technique widely utilized in point cloud image processing. In this work, we analogize blockchain nodes to points in a point cloud image and select a representative set of nodes to prioritize message forwarding, so that messages reach the network edge quickly and are evenly distributed. We compare our model with the Kadcast transmission model, a classic improvement model for blockchain P2P transmission networks; the experimental findings show that FPSblo reduces transmission redundancy by 34.8% and the overload rate by 37.6%. The experimental analysis shows that FPSblo enhances the transmission capability of the P2P network in blockchain systems.
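A minimal sketch of the Farthest Point Sampling idea the method borrows, applied to node coordinates. This is the generic greedy FPS algorithm, not the FPSblo relay-selection logic itself, and the 2-D "latency space" embedding of the nodes is invented.

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy FPS: pick k points, each maximizing its distance to those already chosen."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    chosen = [int(rng.integers(n))]
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))                 # farthest from the current set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return chosen

# Invented 2-D embedding of 100 blockchain nodes.
nodes = np.random.default_rng(1).random((100, 2))
print(farthest_point_sampling(nodes, k=8))   # indices of 8 well-spread relay nodes
```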
To address the slow search and tortuous paths of the Rapidly-exploring Random Tree (RRT) algorithm, a feedback-biased sampling RRT, called FS-RRT, is proposed based on RRT. First, to improve the sampling efficiency of RRT and shorten the search time, the search area of the random tree is restricted. Second, to obtain better information about obstacles and shorten the path length, a feedback-biased sampling strategy is used instead of traditional random sampling: a collision between an expanding node and an obstacle generates feedback information so that the next expanding node avoids expanding within a specific angle range. Third, an inverse optimization strategy is proposed to remove redundant points from the initial path, making the path shorter and more accurate. Finally, to ensure smooth operation of the robot in practice, auxiliary points are used to optimize the cubic Bezier curve so that the smoothed path does not cross obstacles. The experimental results demonstrate that the proposed FS-RRT algorithm performs favorably against the traditional RRT and other mainstream algorithms in terms of running time, number of search iterations, and path length. The improved algorithm also performs well in narrow obstacle environments, and its effectiveness is further confirmed by experimental verification.
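For context, the sketch below is a compact vanilla RRT in a 2-D world with circular obstacles, i.e., the baseline that FS-RRT improves on; the feedback-biased sampling, path pruning, and Bezier smoothing described above are not implemented here, and the obstacles, start, and goal are invented.

```python
import math, random

OBSTACLES = [(5.0, 5.0, 1.5), (7.5, 2.5, 1.0)]     # (x, y, radius), invented
START, GOAL, STEP, GOAL_TOL = (1.0, 1.0), (9.0, 9.0), 0.5, 0.5

def collides(p):
    return any(math.dist(p, (ox, oy)) <= r for ox, oy, r in OBSTACLES)

def steer(frm, to):
    d = math.dist(frm, to)
    t = min(1.0, STEP / d) if d > 0 else 0.0
    return (frm[0] + t * (to[0] - frm[0]), frm[1] + t * (to[1] - frm[1]))

def rrt(max_iters=5000, seed=0):
    random.seed(seed)
    nodes, parent = [START], {0: None}
    for _ in range(max_iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        new = steer(nodes[near], sample)
        if collides(new):
            continue                 # plain rejection; FS-RRT would feed this collision back
        nodes.append(new)
        parent[len(nodes) - 1] = near
        if math.dist(new, GOAL) < GOAL_TOL:
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i]); i = parent[i]
            return path[::-1]
    return None

print(len(rrt() or []))   # number of waypoints in the found path
```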
In order to accurately measure an object's three-dimensional surface shape, the influence of sampling on the measurement was studied. First, on the basis of spectrum expressions derived through the Fourier transform, the generation of CCD pixels was analyzed and its expression was given. Then, based on the discrete expression of the deformed fringes obtained after sampling, their Fourier spectrum expression was derived, resulting in an infinitely repeated "spectrum island" in the frequency domain. Finally, a low-pass filter was used to remove the high-order harmonic components and retain only one fundamental frequency component, and the inverse Fourier transform was used to reconstruct the signal strength. A method of reducing the sampling interval, i.e., increasing the number of sampling points per fringe, was proposed to increase the ratio between the sampling frequency and the fundamental frequency of the grating, so as to reconstruct the object's surface shape more accurately under the condition m > 4. The basic principle was verified through simulation and experiment. In the simulation, the sampling intervals were 8 pixels, 4 pixels, 2 pixels, and 1 pixel; relative to the first case, the maximum absolute errors obtained in the last three cases were 88.80%, 38.38%, and 31.50% of the first, respectively, and the corresponding average absolute errors were 71.84%, 43.27%, and 32.26%. This demonstrates that the smaller the sampling interval, the better the recovery effect. Using the same four sampling intervals in the experiment leads to the same conclusions. The simulated and experimental results show that reducing the sampling interval can improve the accuracy of object surface shape measurement and achieve better reconstruction results.
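A hedged sketch of the Fourier-transform fringe analysis loop the abstract describes: sample a deformed fringe pattern, keep only the fundamental lobe with a crude band-pass window, and invert the transform to recover the phase. The fringe period, phase modulation, and filter width are invented for illustration.

```python
import numpy as np

N, period = 1024, 32                     # invented: samples per line, fringe period in samples
x = np.arange(N)
phase = 2.0 * np.sin(2 * np.pi * x / N)  # invented object-induced phase modulation
fringe = 0.5 + 0.5 * np.cos(2 * np.pi * x / period + phase)   # deformed fringe intensity

spec = np.fft.fft(fringe)
f0 = N // period                         # index of the fundamental frequency
keep = np.zeros(N, dtype=complex)
keep[f0 - 5 : f0 + 6] = spec[f0 - 5 : f0 + 6]   # retain one fundamental lobe only

analytic = np.fft.ifft(keep)             # complex signal carrying the phase
recovered = np.unwrap(np.angle(analytic)) - 2 * np.pi * x * f0 / N
recovered -= recovered.mean()            # remove the constant offset for comparison
print("max abs phase error:", np.max(np.abs(recovered - (phase - phase.mean()))))
```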
We propose a new framework for the sampling, compression, and analysis of distributions of point sets and other geometric objects embedded in Euclidean spaces. Our approach involves constructing a tensor called the RaySense sketch, which captures nearest neighbors from the underlying geometry of points along a set of rays. We explore various operations that can be performed on the RaySense sketch, leading to different properties and potential applications. Statistical information about the data set can be extracted from the sketch, independent of the ray set. Line integrals on point sets can be efficiently computed using the sketch. We also present several examples illustrating applications of the proposed strategy in practical scenarios.
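A hedged sketch of the sampling primitive described above: march sample points along a few random rays and record, for each, its nearest neighbor in the point cloud. This only illustrates the idea of a ray-indexed nearest-neighbor tensor, not the actual RaySense construction or its normalization; the cloud and ray parameters are invented.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
cloud = rng.random((2000, 3))                  # invented point cloud in the unit cube
tree = cKDTree(cloud)

n_rays, n_steps = 16, 32
origins = rng.random((n_rays, 3))
directions = rng.normal(size=(n_rays, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

t = np.linspace(0.0, 1.0, n_steps)             # sample positions along each ray
ray_points = origins[:, None, :] + t[None, :, None] * directions[:, None, :]
dist, idx = tree.query(ray_points.reshape(-1, 3))

# "Sketch" tensor: for every ray sample, the coordinates of its nearest cloud point.
sketch = cloud[idx].reshape(n_rays, n_steps, 3)
print(sketch.shape, float(dist.mean()))
```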
Physics-informed neural networks (PINNs) have become an attractive machine learning framework for obtaining solutions to partial differential equations (PDEs). PINNs embed initial, boundary, and PDE constraints into the loss function. The performance of PINNs is generally affected by both training and sampling: training methods focus on overcoming the difficulties caused by the special PDE residual loss of PINNs, while sampling methods are concerned with the location and distribution of the points at which the PDE residual loss is evaluated. However, a common problem among the original PINNs is that they omit special temporal-information utilization during training or sampling when dealing with an important PDE category, namely time-dependent PDEs, for which temporal information plays a key role. One method, Causal PINN, considers temporal causality at the training level but not at the sampling level; incorporating temporal knowledge into sampling remains to be studied. To fill this gap, we propose a novel temporal causality-based adaptive sampling method that dynamically determines the sampling ratio according to both the PDE residual and temporal causality. By designing a sampling ratio determined by both the residual loss and temporal causality to control the number and location of sampled points in each temporal sub-domain, we provide a practical way of incorporating temporal information into sampling. Numerical experiments on several nonlinear time-dependent PDEs, including the Cahn–Hilliard, Korteweg–de Vries, Allen–Cahn, and wave equations, show that the proposed sampling method can improve performance. We demonstrate that this relatively simple sampling method can improve prediction performance by up to two orders of magnitude compared with other methods, especially when sampling points are limited.
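A hedged sketch of the kind of allocation rule the abstract describes: split the time axis into sub-domains, weight each by its mean PDE residual and a causality factor that discounts later times until earlier ones are resolved, then allocate collocation points proportionally. The exponential causality weight and the residual values below are assumptions for illustration, not the paper's exact formula.

```python
import numpy as np

def allocate_points(mean_residuals, total_points, eps=1.0):
    """Distribute collocation points over temporal sub-domains.

    mean_residuals[i]: current mean PDE residual in the i-th time slab.
    Assumed causality weight: w_i = exp(-eps * cumulative residual of earlier
    slabs), so later slabs receive fewer points until earlier ones converge.
    """
    r = np.asarray(mean_residuals, dtype=float)
    causal = np.exp(-eps * np.concatenate(([0.0], np.cumsum(r)[:-1])))
    score = r * causal
    ratio = score / score.sum()
    return np.maximum(1, np.round(ratio * total_points)).astype(int)

# Example: residuals are still large in early slabs, so most points go there.
print(allocate_points([0.9, 0.7, 0.4, 0.2, 0.1], total_points=1000))
```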
Dispersion fuels, known for their excellent safety performance, are widely used in advanced reactors such as high-temperature gas-cooled reactors. Compared with deterministic methods, the Monte Carlo method has more advantages in the geometric modeling of stochastic media. Explicit modeling offers high computational accuracy at a high computational cost. The chord length sampling (CLS) method can improve computational efficiency by sampling the chord length during neutron transport from the probability density function of the matrix chord length. This study shows that the excluded-volume effect in realistic stochastic media can introduce certain deviations into CLS. A chord length correction approach is proposed to obtain the chord length correction factor by developing the Particle code based on equivalent transmission probability. Numerical analysis against reference solutions from explicit modeling in the RMC code demonstrated that CLS with the proposed correction method provides good accuracy in addressing the excluded-volume effect in realistic infinite stochastic media.
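A hedged illustration of the chord length sampling idea itself: between encounters with fuel particles, the flight distance through the matrix is drawn from an exponential distribution whose mean follows the commonly used approximation for randomly dispersed spheres, lambda_m = 4R(1 - phi)/(3 phi). The particle radius and packing fraction are assumptions, and the excluded-volume correction factor discussed above is not applied here.

```python
import numpy as np

rng = np.random.default_rng(0)
R, phi = 0.025, 0.07            # cm; assumed TRISO-like particle radius and packing fraction
lam_m = 4.0 * R * (1.0 - phi) / (3.0 * phi)   # approximate mean matrix chord length

chords = rng.exponential(scale=lam_m, size=100_000)   # sampled matrix chord lengths
print(f"mean matrix chord length: target {lam_m:.4f} cm, sampled {chords.mean():.4f} cm")
```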
BACKGROUND: Pancreatic ductal leaks complicating endoscopic ultrasonography-guided tissue sampling (EUS-TS) can manifest as acute pancreatitis. CASE SUMMARY: A 63-year-old man presented with persistent abdominal pain and weight loss. Diagnosis: Laboratory findings revealed elevated carbohydrate antigen 19-9 (5920 U/mL) and carcinoembryonic antigen (23.7 ng/mL) levels. Magnetic resonance imaging of the pancreas revealed an approximately 3 cm ill-defined space-occupying lesion in the inferior aspect of the head, with severe encasement of the superior mesenteric artery. Pancreatic ductal adenocarcinoma was confirmed after pathological examination of specimens obtained by EUS-TS using the fanning method. Interventions and outcomes: The following day, the patient experienced severe abdominal pain with high amylase (265 U/L) and lipase (1173 U/L) levels. Computed tomography of the abdomen revealed edematous wall thickening of the second portion of the duodenum with adjacent fluid collections and a suspected leak from either the distal common bile duct or the main pancreatic duct in the head. Endoscopic retrograde cholangiopancreatography revealed dye leakage in the head of the main pancreatic duct. Therefore, a 5F, 7 cm linear plastic stent was deployed into the pancreatic duct to divert the pancreatic juice. The patient's abdominal pain improved immediately after pancreatic stent insertion, and amylase and lipase levels normalized within a week. Neoadjuvant chemotherapy was then initiated. CONCLUSION: Using the fanning method in EUS-TS can inadvertently damage the pancreatic duct and may lead to clinically significant pancreatitis. Placing a pancreatic stent may immediately resolve acute pancreatitis and shorten the waiting time for curative therapy. When using the fanning method during EUS-TS, ductal structures should be excluded to prevent pancreatic ductal leakage.
Background: Functional mapping, despite its proven efficiency, suffers from a "chicken or egg" scenario in that poor spatial features lead to inadequate spectral alignment and vice versa during training, often resulting in slow convergence, high computational costs, and learning failures, particularly when small datasets are used. Methods: A novel method is presented for dense shape correspondence, whereby the spatial information transformed by neural networks is combined with projections onto spectral maps to overcome the "chicken or egg" challenge by selectively sampling only points with high confidence in their alignment. These points then contribute to the alignment and spectral loss terms, boosting training and accelerating convergence by a factor of five. To ensure fully unsupervised learning, the Gromov–Hausdorff distance metric was used to select the points with the maximal alignment score, i.e., those displaying the most confidence. Results: The effectiveness of the proposed approach was demonstrated on several benchmark datasets, with results superior to those of spectral- and spatial-based methods. Conclusions: The proposed method provides a promising new approach to dense shape correspondence, addressing key challenges in the field and offering significant advantages over current methods, including faster convergence, improved accuracy, and reduced computational costs.
Uniform linear array (ULA) radars are widely used in the collision-avoidance radar systems of small unmanned aerial vehicles (UAVs). In practice, a ULA's multi-target direction-of-arrival (DOA) estimation performance suffers significant degradation owing to the limited number of physical elements. To improve the underdetermined DOA estimation performance of a ULA radar mounted on a small UAV platform, we propose a nonuniform linear motion sampling underdetermined DOA estimation method. Using the motion of the UAV platform, the echo signal is sampled at different positions. Then, according to the concept of the difference co-array, a virtual ULA with more array elements and a larger aperture is synthesized to increase the degrees of freedom (DOFs). Through position analysis of the original and motion arrays, we propose a nonuniform linear motion sampling method based on the ULA for determining the optimal DOFs. Without increasing the aperture of the physical array, the proposed method obtains a high DOF with fewer sampling runs and greatly improves the underdetermined DOA estimation performance of the ULA. The results of numerical simulations verify the superior performance of the proposed method.
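A small sketch of the difference co-array bookkeeping mentioned above: given element positions (physical, or augmented by positions visited as the platform moves), form all pairwise position differences and count the unique and consecutive lags, which indicate the usable degrees of freedom. The example positions are invented and only illustrate the counting, not the paper's optimal sampling design.

```python
import numpy as np

def difference_coarray(positions):
    """Sorted unique lags of the difference co-array for integer element positions."""
    positions = np.asarray(positions)
    return np.unique(positions[:, None] - positions[None, :])

# Invented example, in half-wavelength units: a 4-element physical ULA versus a
# hypothetical nested-array-like set of positions obtained by platform motion.
physical = [0, 1, 2, 3]
motion   = [0, 1, 2, 3, 4, 9, 14, 19]

for name, pos in [("physical ULA", physical), ("motion-sampled", motion)]:
    lags = difference_coarray(pos)
    nonneg = lags[lags >= 0]
    # length of the contiguous run 0, 1, 2, ... of nonnegative lags
    contiguous = next((i for i in range(len(nonneg)) if nonneg[i] != i), len(nonneg))
    print(f"{name}: {len(lags)} unique lags, {contiguous} consecutive nonnegative lags")
```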
This paper introduces the principle of the PPS-based adaptive cluster sampling method and the calculation of its modified HH and HT estimators. It compares PPS-based adaptive cluster sampling with SRS sampling and with SRS-based adaptive cluster sampling, analyzes the differences among them, and discusses the advantages and scope of the PPS adaptive cluster sampling method. From the case analysis, the following conclusions are drawn: 1) adaptive cluster sampling is more accurate than SRS sampling; 2) for SRS-based adaptive cluster sampling, the HT estimator is more stable than the HH estimator; 3) the two estimators of the PPS adaptive cluster sampling method differ little in estimating the population mean, but the HT estimator has a smaller variance and is more suitable; 4) the HH estimator of PPS adaptive cluster sampling equals the HH estimator of SRS adaptive cluster sampling, but its variance is larger and unstable.
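For reference, the classical modified estimators of the population mean in SRS-based adaptive cluster sampling, which the PPS variants adjust by the unequal draw probabilities, are

$$ \hat{\mu}_{HH}=\frac{1}{n}\sum_{i=1}^{n}\bar{y}_{\Psi_i},\qquad \bar{y}_{\Psi_i}=\frac{1}{m_i}\sum_{j\in\Psi_i}y_j, $$

the average of the means of the networks $\Psi_i$ (of sizes $m_i$) intersected by the $n$ initially selected units, and

$$ \hat{\mu}_{HT}=\frac{1}{N}\sum_{k=1}^{K}\frac{y_k^{*}}{\alpha_k}, $$

where $y_k^{*}$ is the total of the $k$-th distinct network in the sample, $\alpha_k$ is the probability that the initial sample intersects that network, and $N$ is the population size. These are the standard forms; the exact PPS modifications used in the paper are not reproduced here.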
In this paper, by combining sampling methods for food statistics with years of practical sampling experience, various sampling points and the corresponding sampling methods are summarized. The aim is to help discover food safety risks and improve the level of food safety.