The encapsulation of lunar samples is a core research area in the third phase of the Chinese Lunar Exploration Program. The seal assembly, opening and closing mechanism (OCM), and locking mechanism are the core components of the encapsulation device for the lunar samples, and the requirements of a tight seal, light weight, and low power make the design of these core components difficult. In this study, a combined sealing assembly, OCM, and locking mechanism were investigated for the device. The sealing architecture consists of rubber and an Ag-In alloy, and a theory was built to analyze the seal. Experiments on the electroplated Au coating on the knife-edge revealed that the hermetic seal can be significantly improved. The driving principle of coaxial double-helical pairs was investigated and used to design the OCM. Moreover, a locking mechanism was created using an electric initiating explosive device with orifice damping. By optimizing the design, the output parameters were adjusted to meet the requirements of the lunar explorer. The experimental results showed that the helium leak rate of the test pieces was not more than 5×10^(-11) Pa·m^(3)·s^(-1), the minimum power of the OCM was 0.3 W, and the total weight of the principle prototype was 2.9 kg. The explosive-driven locking mechanism has low impact. This investigation solved the difficulties in achieving a tight seal, light weight, and low power for the lunar explorer, and the results can also be used to explore other extraterrestrial objects in the future.
Objective To evaluate the diagnostic value of histopathological examination of ultrasound-guided puncture biopsy samples in extrapulmonary tuberculosis (EPTB). Methods This study was conducted at the Shanghai Public Health Clinical Center. A total of 115 patients underwent ultrasound-guided puncture biopsy, followed by MGIT 960 culture (culture), smear, GeneXpert MTB/RIF (Xpert), and histopathological examination. These assays were evaluated for their effectiveness in diagnosing EPTB against two different diagnostic criteria: liquid culture and the composite reference standard (CRS). Results When the CRS was used as the reference standard, the sensitivity and specificity of culture, smear, Xpert, and histopathological examination were (44.83%, 89.29%), (51.72%, 89.29%), (70.11%, 96.43%), and (85.06%, 82.14%), respectively. Based on liquid culture tests, the sensitivity and specificity of smear, Xpert, and pathological examination were (66.67%, 72.60%), (83.33%, 63.01%), and (92.86%, 45.21%), respectively. Histopathological examination showed the highest sensitivity but the lowest specificity. Further, we found that the combination of Xpert and histopathological examination showed a sensitivity of 90.80% and a specificity of 89.29%. Conclusion Ultrasound-guided puncture sampling is safe and effective for the diagnosis of EPTB. Compared with culture, smear, and Xpert, histopathological examination showed higher sensitivity but lower specificity. The combination of histopathology with Xpert showed the best performance characteristics.
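Sensitivity and specificity reduce to simple ratios over a 2×2 contingency table. As a minimal sketch in Python (the counts below are hypothetical, chosen to be consistent with the reported histopathology figures of 85.06%/82.14% under the assumption of 87 CRS-positive and 28 CRS-negative patients among the 115):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts (87 CRS-positive, 28 CRS-negative patients assumed):
sens, spec = sensitivity_specificity(tp=74, fn=13, tn=23, fp=5)
print(f"sensitivity = {sens:.2%}, specificity = {spec:.2%}")
# → sensitivity = 85.06%, specificity = 82.14%
```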
We conduct an experimental study, supported by theoretical analysis, of single-pulse laser ablation of copper to investigate the interactions between the laser and the material at different sample temperatures, and to predict the changes in ablation morphology and lattice temperature. To investigate the effect of sample temperature on femtosecond laser processing, we conduct experiments and simulate the thermal behavior of copper irradiated by a femtosecond laser using a two-temperature model. The simulation results show that both the electron peak temperature and the relaxation time needed to reach equilibrium increase as the initial sample temperature rises. When the sample temperature rises from 300 K to 600 K, the maximum lattice temperature of the copper surface increases by about 6500 K under femtosecond laser irradiation, and the ablation depth increases by 20%. The simulated ablation depths follow the same general trend as the experimental values. This work provides a theoretical basis and technical support for developing femtosecond laser processing of metal materials.
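The electron-lattice energy exchange in the two-temperature model can be illustrated with a zero-dimensional sketch. The constants below (electron heat-capacity coefficient gamma, lattice heat capacity cl, electron-phonon coupling g) are order-of-magnitude values for copper, and the instantaneous electron heating to 8000 K stands in for the laser source term; this illustrates only the relaxation stage, not the paper's full simulation:

```python
def relax_ttm(te0=8000.0, tl0=300.0, gamma=96.6, cl=3.5e6, g=1.0e17,
              dt=1e-15, steps=50000):
    """Zero-dimensional two-temperature model: hot electrons at te0 (K) cool
    by transferring energy to the lattice through the coupling term g*(Te-Tl).
    Electron heat capacity is gamma*Te (J m^-3 K^-2); cl is in J m^-3 K^-1."""
    te, tl = te0, tl0
    for _ in range(steps):
        q = g * (te - tl)              # volumetric energy flow, W/m^3
        te -= q * dt / (gamma * te)    # electrons cool
        tl += q * dt / cl              # lattice heats
    return te, tl

te, tl = relax_ttm()
print(f"after 50 ps: Te = {te:.0f} K, Tl = {tl:.0f} K")
```

Re-running with a higher initial lattice temperature tl0 lets one explore the qualitative trends discussed in the abstract.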
Regular fastener detection is necessary to ensure the safety of railways. However, the number of abnormal fasteners is significantly lower than the number of normal fasteners on real railways, and existing supervised inspection methods have insufficient detection ability in cases of imbalanced samples. To solve this problem, we propose an approach based on deep convolutional neural networks (DCNNs), which consists of three stages: fastener localization, abnormal fastener sample generation based on saliency detection, and fastener state inspection. First, a lightweight YOLOv5s is designed to achieve fast and precise localization of fastener regions. Then, the foreground clip region of a fastener image is extracted by the designed fastener saliency detection network (F-SDNet) and combined with data augmentation to generate a large number of abnormal fastener samples and balance the numbers of abnormal and normal samples. Finally, a fastener inspection model called Fastener ResNet-8 is constructed and trained with the augmented fastener dataset. Results show the effectiveness of our proposed method in solving the problem of sample imbalance in fastener detection. Qualitative and quantitative comparisons show that the proposed F-SDNet outperforms other state-of-the-art methods in clip region extraction, reaching an MAE of 0.0215 and a max F-measure of 0.9635. In addition, the fastener state inspection model reached 86.2 FPS, with an average accuracy of 98.7% on a test set of 614 augmented fastener images and 99.9% on 7505 real fastener images.
Perovskite solar cells (PSCs) have developed tremendously over the past decade. However, the key factors influencing the power conversion efficiency (PCE) of PSCs remain incompletely understood, due to the complexity and coupling of the structural and compositional parameters. In this research, we demonstrate an effective approach to optimizing PSC performance via machine learning (ML). To address the challenges posed by limited samples, we propose a feature mask (FM) method, which augments the training samples through feature transformation rather than synthetic data. Using this approach, a squeeze-and-excitation residual network (SEResNet) model achieves a root-mean-square error (RMSE) of 0.833% and a Pearson's correlation coefficient (r) of 0.980. Furthermore, we employ the permutation importance (PI) algorithm to investigate the key features for PCE. Subsequently, we predict the PCE through high-throughput screenings, in which we study the relationship between PCE and chemical composition. Finally, we conduct experiments to validate the consistency between the results predicted by ML and the experimental results. In this work, ML demonstrates the capability to predict device performance, extract key parameters from complex systems, and accelerate the transition from laboratory findings to commercial applications.
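The two reported regression metrics are straightforward to compute; a minimal pure-Python sketch (the PCE values below are made-up illustrations, not the paper's data):

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between two equal-length sequences."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def pearson_r(x, y):
    """Pearson's correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

measured  = [18.2, 20.1, 21.5, 22.8, 24.0]   # hypothetical PCE values, %
predicted = [18.0, 20.4, 21.2, 23.1, 23.8]
print(f"RMSE = {rmse(measured, predicted):.3f}%, r = {pearson_r(measured, predicted):.3f}")
```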
Accurate and reliable fault detection is essential for the safe operation of electric vehicles. Support vector data description (SVDD) has been widely used in the field of fault detection. However, constructing the hypersphere boundary only describes the distribution of unlabeled samples; the distribution of faulty samples cannot be effectively described, and faulty data are easily missed owing to the imbalance of the sample distribution. Meanwhile, parameter selection is critical to detection performance, and empirical parameterization is generally time-consuming and laborious and may not find the optimal parameters. Therefore, this paper proposes a semi-supervised data-driven method that improves the SVDD algorithm and achieves excellent fault detection performance. By incorporating faulty samples into the underlying SVDD model, training better handles the missed detection of faulty samples caused by the imbalance in the distribution of abnormal samples, and the hypersphere boundary is modified to classify the samples more accurately. A Bayesian-optimized NSVDD (BO-NSVDD) model was constructed to quickly and accurately optimize hyperparameter combinations. In the experiments, electric vehicle operation data with four common fault types are used to evaluate the performance against five other models, and the results show that the BO-NSVDD model presents superior detection performance for each type of fault data, especially for imperceptible early and minor faults, where its advantages are very obvious. Finally, the strong robustness of the proposed method is verified by adding noise of different intensities to the dataset.
In engineering applications, most traditional early-warning radars estimate only one adaptive weight vector for adaptive interference suppression in a pulse repetition interval (PRI). Therefore, if the training samples used to calculate the weight vector do not contain the jamming, the jamming cannot be removed by adaptive spatial filtering. If the weight vector is constantly updated in the range dimension, the training data may contain target echo signals, resulting in a signal-cancellation effect. To cope with the situation in which the training samples are contaminated by the target signal, an iterative training-sample selection method based on a non-homogeneous detector (NHD) is proposed in this paper for updating the weight vector over the entire range dimension. The principle is presented, and the validity is proven by simulation results.
The deep mining of coal resources is accompanied by severe environmental challenges and various potential engineering hazards. NPR (negative Poisson's ratio) bolts can effectively control large deformations in the surrounding rock. This paper focuses on the mechanical properties of the NPR bolt under static disturbance load. A deep nonlinear mechanical experimental system was used to study the mechanical behavior of rock samples of different anchoring types (unanchored / PR anchored / 2G NPR anchored) under static disturbance load. The whole failure process of the rock samples was recorded by a high-speed camera to obtain real-time failure characteristics, and the acoustic emission signal was collected to obtain key characteristic parameters such as acoustic emission count, energy, and frequency. The deformation at failure was calculated and analyzed with digital speckle software. The findings indicate that the failure mode of the rock is influenced by the anchoring type. Compared with the unanchored rock samples, the peak failure strength of the 2G NPR bolt anchored samples increases by 6.5%, the cumulative acoustic emission count and cumulative energy decrease by 62.16% and 62.90%, respectively, the maximum deformation at bearing capacity increases by 59.27%, and the failure time is delayed by 42.86%. Compared with rock anchored by PR (Poisson's ratio) bolts, the peak failure strength of the 2G NPR bolt anchored samples under static disturbance load increases by 5.94%, the cumulative acoustic emission count and cumulative energy decrease by 47.16% and 43.86%, respectively, the maximum deformation at bearing capacity increases by 50.43%, and the failure time is delayed by 32%. Anchoring with the 2G NPR bolt thus effectively reduces the risk of damage caused by static disturbance load. These results demonstrate that the support effect of 2G NPR bolt materials surpasses that of PR bolts.
Accurate prediction of rockburst proneness is one of the challenges in assessing rockburst risk and selecting effective control measures. This study aims to assess rockburst proneness by considering the energy characteristics and qualitative information during rock failure. Several representative rock types in cylindrical and cuboidal sample shapes were tested under uniaxial compression, and the failure progress was recorded by a high-speed camera. The far-field ejection mass ratio (FEMR) was determined from the qualitative failure information of the rock samples. The peak-strength energy impact index and the residual elastic energy index were used to quantitatively evaluate the rockburst proneness of both cylindrical and cuboidal samples. Further, the performance of these two indices was analyzed by comparing their estimates with the FEMR. The results show that the accuracy of the residual elastic energy index is significantly higher than that of the peak-strength energy impact index. The residual elastic energy index and the FEMR are in good agreement for both cylindrical and cuboidal rock materials. This is because these two indices essentially reflect the common energy release mechanism characterized by the mass, ejection velocity, and ejection distance of rock fragments. It suggests that both the FEMR and the residual elastic energy index can be used to accurately measure the rockburst proneness of cylindrical and cuboidal samples based on uniaxial compression tests.
The longitudinal dispersion of the projectile in shooting tests of two-dimensional trajectory correction fuses with fixed canards is so large that it sometimes exceeds the correction ability of the correction fuse actuator. The impact point easily deviates from the target, and thus the correction result cannot be readily evaluated. However, the cost of shooting tests is considerably high, so many tests for data collection cannot be conducted. To address this issue, this study proposes an aiming method for shooting tests based on a small sample size. The proposed method uses the Bootstrap method to expand the test data; repeatedly iterates and corrects the position of the simulated theoretical impact points through an improved compatibility-test method; and dynamically adjusts the weight of the prior distribution of the simulation results based on Kullback-Leibler divergence, which to some extent avoids the real data being "submerged" by the simulation data and achieves a fused Bayesian estimation of the dispersion center. The experimental results show that when the simulation accuracy is sufficiently high, the proposed method yields a smaller mean-square deviation in estimating the dispersion center and higher shooting accuracy than the three comparison methods, which better reflects the effect of the control algorithm and helps test personnel iterate their proposed structures and algorithms. In addition, this study provides a knowledge base for further comprehensive studies in the future.
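The Bootstrap expansion step can be sketched as resampling the observed impact points with replacement and reading a percentile interval off the resample means. The coordinates below are invented for illustration, and the paper's compatibility-test iteration and KL-divergence prior weighting are not reproduced here:

```python
import random
import statistics

def bootstrap_center(points, n_boot=2000, seed=42):
    """Percentile bootstrap of the mean of a 1-D sample (e.g. the range
    coordinate of impact points). Returns (point_estimate, 2.5th, 97.5th)."""
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choices(points, k=len(points)))
        for _ in range(n_boot)
    )
    lo = means[int(0.025 * n_boot)]
    hi = means[int(0.975 * n_boot)]
    return statistics.fmean(points), lo, hi

impacts = [102.3, 98.7, 105.1, 99.4, 101.8, 97.9, 103.6]  # hypothetical, m
center, lo, hi = bootstrap_center(impacts)
print(f"center ≈ {center:.1f} m, 95% CI [{lo:.1f}, {hi:.1f}]")
```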
Background Functional mapping, despite its proven efficiency, suffers from a "chicken or egg" scenario, in that poor spatial features lead to inadequate spectral alignment and vice versa during training, often resulting in slow convergence, high computational costs, and learning failures, particularly when small datasets are used. Methods A novel method is presented for dense-shape correspondence, whereby the spatial information transformed by neural networks is combined with the projections onto spectral maps to overcome the "chicken or egg" challenge by selectively sampling only points with high confidence in their alignment. These points then contribute to the alignment and spectral loss terms, boosting training and accelerating convergence by a factor of five. To ensure fully unsupervised learning, the Gromov–Hausdorff distance metric was used to select the points with the maximal alignment score displaying the most confidence. Results The effectiveness of the proposed approach was demonstrated on several benchmark datasets, with results reported as superior to those of spectral and spatial-based methods. Conclusions The proposed method provides a promising new approach to dense-shape correspondence, addressing the key challenges in the field and offering significant advantages over current methods, including faster convergence, improved accuracy, and reduced computational costs.
Sample size determination typically relies on a power analysis based on a frequentist conditional approach. The latter can be seen as a particular case of the two-priors approach, which allows one to build four distinct power functions to select the optimal sample size. We revise this approach when the focus is on testing a single binomial proportion. We consider exact methods and introduce a conservative criterion to account for the typical non-monotonic behavior of the power functions when dealing with discrete data. The main purpose of this paper is to present a Shiny App providing a user-friendly, interactive tool to apply these criteria. The app also provides specific tools to elicit the analysis and design prior distributions, which are the core of the two-priors approach.
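For a single binomial proportion, the classical conditional power that the two-priors approach generalizes can be computed exactly from binomial tails. A sketch for the one-sided test H0: p ≤ p0 versus H1: p > p0 (the two-priors machinery and the App's conservative criterion for non-monotonicity are beyond this snippet):

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def exact_power(n, p0, p1, alpha=0.05):
    """Smallest critical value k whose exact size is <= alpha, the exact
    size at p0, and the exact power at the design alternative p1."""
    k = next(k for k in range(n + 1) if binom_tail(n, p0, k) <= alpha)
    return k, binom_tail(n, p0, k), binom_tail(n, p1, k)

k, size, power = exact_power(n=30, p0=0.5, p1=0.7)
print(f"reject if X >= {k}; exact size = {size:.4f}; power = {power:.4f}")
```

Scanning this power over n exhibits the saw-tooth (non-monotonic) behavior that motivates the conservative criterion.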
The corrosion rate is a crucial factor that impacts the longevity of materials in different applications. After friction stir processing (FSP), the refined grain structure leads to a notable decrease in the corrosion rate. However, a better understanding of the correlation between the FSP process parameters and the corrosion rate is still lacking. The current study used machine learning to establish the relationship between the corrosion rate and the FSP process parameters (rotational speed, traverse speed, and shoulder diameter) for the WE43 alloy. The Taguchi L27 design of experiments was used for the experimental analysis. In addition, synthetic data were generated using particle swarm optimization for virtual sample generation (VSG). The application of VSG increased the prediction accuracy of the machine learning models. A sensitivity analysis was performed using Shapley Additive Explanations to determine the key factors affecting the corrosion rate; the shoulder diameter had a significant impact compared with the traverse speed. A graphical user interface (GUI) was created to predict the corrosion rate from the identified factors. This study focuses on the WE43 alloy, but its findings can also be used to predict the corrosion rate of other magnesium alloys.
The objectives of this paper are to demonstrate the algorithms employed by three statistical software programs (R, Real Statistics using Excel, and SPSS) for calculating the exact two-tailed probability of the Wald-Wolfowitz one-sample runs test for randomness, to present a novel approach for computing this probability, and to compare the four procedures by generating samples of 10 and 11 data points, varying the parameters n<sub>0</sub> (number of zeros) and n<sub>1</sub> (number of ones), as well as the number of runs. Fifty-nine samples are created to replicate the behavior of the distribution of the number of runs with 10 and 11 data points. The exact two-tailed probabilities for the four procedures were compared using Friedman's test. Given the significant difference in central tendency, post-hoc comparisons were conducted using Conover's test with the Benjamini-Yekutieli correction. It is concluded that the procedures of Real Statistics using Excel and R exhibit some inadequacies in the calculation of the exact two-tailed probability, whereas the new proposal and the SPSS procedure are deemed more suitable. The proposed robust algorithm has a more transparent rationale than the SPSS one, albeit being somewhat more conservative. We recommend its implementation for this test and its application to others, such as the binomial and sign tests.
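All four procedures rest on the same exact null distribution of the number of runs, which has a closed form in n0 and n1. A sketch in Python; the two-tailed rule shown (summing the probabilities of all run counts whose point probability does not exceed that of the observed count) is one common convention and is not claimed to match any of the four procedures exactly:

```python
from math import comb

def runs_pmf(n0, n1):
    """Exact P(R = r) for the number of runs R in a random arrangement
    of n0 zeros and n1 ones (Wald-Wolfowitz null distribution)."""
    total = comb(n0 + n1, n0)
    pmf = {}
    for k in range(1, min(n0, n1) + 1):
        # 2k runs: k runs of zeros interleaved with k runs of ones, 2 orders
        pmf[2 * k] = 2 * comb(n0 - 1, k - 1) * comb(n1 - 1, k - 1) / total
        # 2k+1 runs: k+1 runs of one symbol, k of the other
        odd = (comb(n0 - 1, k - 1) * comb(n1 - 1, k)
               + comb(n0 - 1, k) * comb(n1 - 1, k - 1))
        if odd:
            pmf[2 * k + 1] = odd / total
    return pmf

def exact_two_tailed(n0, n1, r_obs):
    """Point-probability method: sum P(R = r) over all r with
    P(R = r) <= P(R = r_obs)."""
    pmf = runs_pmf(n0, n1)
    p_obs = pmf.get(r_obs, 0.0)
    return sum(p for p in pmf.values() if p <= p_obs + 1e-12)

pmf = runs_pmf(5, 5)
print(f"sum = {sum(pmf.values()):.6f}, P(R=2) = {pmf[2]:.6f}")
```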
This paper addresses the sampled-data multi-objective active suspension control problem for an in-wheel motor driven electric vehicle subject to stochastic sampling periods and asynchronous premise variables. The focus is placed on the scenario in which the dynamical state of the half-vehicle active suspension system is transmitted over an in-vehicle controller area network that only permits the transmission of sampled data packets. For this purpose, a stochastic sampling mechanism is developed such that the sampling periods can randomly switch among different values with certain mathematical probabilities. Then, an asynchronous fuzzy sampled-data controller, featuring premise variables distinct from those of the active suspension system, is constructed to eliminate the stringent requirement that the sampled-data controller share the same grades of membership. Furthermore, novel criteria for both stability analysis and controller design are derived to guarantee that the resultant closed-loop active suspension system is stochastically stable with simultaneous H2 and H∞ performance requirements. Finally, the effectiveness of the proposed stochastic sampled-data multi-objective control method is verified via several numerical case studies in both the time domain and the frequency domain under various road disturbance profiles.
Tea plants are susceptible to diseases during their growth. These diseases seriously affect the yield and quality of tea. Effective prevention and control of diseases requires accurate identification of the diseases. With the development of artificial intelligence and computer vision, automatic recognition of plant diseases using image features has become feasible. As the support vector machine (SVM) is suitable for high-dimension, high-noise, and small-sample learning, this paper uses the SVM learning method to segment the disease spots of diseased tea plants. An improved Conditional Deep Convolutional Generative Adversarial Network with Gradient Penalty (C-DCGAN-GP) was used to augment the segmented tea plant spot images. Finally, the Visual Geometry Group 16 (VGG16) deep learning classification network was trained on the expanded tea lesion images to realize tea disease recognition.
Sparse representation plays an important role in face recognition research. As a deformable sample classification task, face recognition is often used to test the performance of classification algorithms. In face recognition, differences in expression, angle, posture, and lighting conditions have become key factors that affect recognition accuracy. Essentially, there may be significant differences between different image samples of the same face, which makes image classification very difficult. Therefore, how to build a robust virtual image representation becomes a vital issue. To solve these problems, this paper proposes a novel image classification algorithm. First, to better retain the global features and contour information of the original sample, the algorithm uses an improved non-linear image representation method to highlight the low-intensity and high-intensity pixels of the original training sample, thus generating a virtual sample. Second, by the principle of sparse representation, the linear expression coefficients of the original sample and the virtual sample are calculated. After obtaining these two types of coefficients, the distance between the original sample and the test sample and the distance between the virtual sample and the test sample are calculated and converted into distance scores. Finally, a simple and effective weight fusion scheme is adopted to fuse the classification scores of the original image and the virtual image; the fused score determines the final classification result. The experimental results show that the proposed method outperforms other typical sparse representation classification methods.
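The closing fusion step amounts to a weighted combination of two distance-derived scores. In this toy sketch the sparse-coding residuals are replaced by plain Euclidean distances, and the weight w and the exp(-d) score conversion are illustrative choices, not the paper's:

```python
import math

def fused_label(test, originals, virtuals, labels, w=0.6):
    """Classify `test` by fusing distance scores from the original and
    virtual galleries: score = w * s_orig + (1 - w) * s_virtual,
    with each distance d converted to a similarity score s = exp(-d)."""
    best_label, best_score = None, -1.0
    for orig, virt, lab in zip(originals, virtuals, labels):
        s = (w * math.exp(-math.dist(test, orig))
             + (1 - w) * math.exp(-math.dist(test, virt)))
        if s > best_score:
            best_label, best_score = lab, s
    return best_label

# Toy 2-D features: two classes, one original and one virtual sample each.
originals = [(0.0, 0.0), (5.0, 5.0)]
virtuals  = [(0.5, 0.2), (4.8, 5.1)]
labels    = ["A", "B"]
print(fused_label((0.3, 0.1), originals, virtuals, labels))  # → A
```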
Traditional object detectors based on deep learning rely on plenty of labeled samples, which are expensive to obtain. Few-shot object detection (FSOD) attempts to solve this problem by learning to detect objects from a few labeled samples, but the performance is often unsatisfactory due to the scarcity of samples. We believe that the main reasons restricting the performance of few-shot detectors are that (1) positive samples are scarce and (2) the quality of positive samples is low. Therefore, we put forward a novel few-shot object detector based on YOLOv4, starting from improving both the quantity and quality of positive samples. First, we design a hybrid multivariate positive sample augmentation (HMPSA) module to amplify the quantity of positive samples and increase positive sample diversity while suppressing negative samples. Then, we design a selective non-local fusion attention (SNFA) module to help the detector better learn the target features and improve the feature quality of positive samples. Finally, we optimize the loss function to make it more suitable for the task of FSOD. Experimental results on PASCAL VOC and MS COCO demonstrate that our designed few-shot object detector has competitive performance compared with other state-of-the-art detectors.
In order to solve the problems of weak prediction stability and generalization ability of neural network models in yarn quality prediction for small samples, a prediction model based on the AdaBoost algorithm (AdaBoost model) was established. A prediction model based on a linear regression algorithm (LR model) and a prediction model based on a multi-layer perceptron neural network (MLP model) were established for comparison. Prediction experiments for yarn evenness and yarn strength were implemented. Determination coefficients and prediction errors were used to evaluate the prediction accuracy of these models, and K-fold cross validation was used to evaluate their generalization ability. In the prediction experiments, the determination coefficient of the yarn evenness prediction result of the AdaBoost model is 76% and 87% higher than that of the LR model and the MLP model, respectively. The determination coefficient of the yarn strength prediction result of the AdaBoost model is slightly higher than those of the other two models. Considering that the yarn evenness dataset has a weaker linear relationship with the cotton dataset than the yarn strength dataset does, the AdaBoost model has the best adaptability to nonlinear datasets among the three models. In addition, the AdaBoost model shows generally better results in the cross-validation experiments and in the series of prediction experiments at eight different training set sample sizes. It is proved that the AdaBoost model not only has good prediction accuracy but also has good prediction stability and generalization ability for small samples.
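The K-fold cross validation used to compare generalization ability can be sketched with a plain index-splitting helper (model training itself is omitted; the fold count and seed are arbitrary):

```python
import random

def kfold_splits(n, k, seed=7):
    """Yield (train_idx, test_idx) pairs for k-fold cross validation
    over n samples, after one random shuffle of the indices."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]          # k near-equal folds
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

for train, test in kfold_splits(n=10, k=5):
    print(len(train), len(test))                   # 8 2, five times
```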
In airborne gamma-ray spectrum processing, different analysis methods, technical requirements, analysis models, and calculation methods need to be established. To meet the engineering practice requirements of airborne gamma-ray measurements and improve computational efficiency, an improved shuffled frog leaping algorithm-particle swarm optimization convolutional neural network (SFLA-PSO CNN) for large-sample quantitative analysis of airborne gamma-ray spectra is proposed herein. This method was used to train the weights of the neural network, optimize the structure of the network, and delete redundant connections, enabling the neural network to acquire the capability of quantitative spectrum processing. In full-spectrum data processing, the method can perform energy spectrum peak searching and peak area calculations. After network training, the mean SNR and RMSE of the spectral lines were 31.27 and 2.75, respectively, satisfying the demand for noise reduction. To test the processing ability of the algorithm on large samples of airborne gamma spectra, this study considered the measured data from the Saihangaobi survey area as an example for spectral analysis. The results show that calculation of a single peak area takes only 0.13-0.15 ms, and the average relative errors of the peak areas in the U, Th, and K spectra are 3.11%, 9.50%, and 6.18%, respectively, indicating the high processing efficiency and accuracy of this algorithm. The performance of the model can be further improved by optimizing related parameters, but it already meets the requirements of practical engineering measurement. This study provides a new idea for the full-spectrum processing of airborne gamma rays.
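The peak-searching and peak-area functions that the network learns to perform can be illustrated with a conventional baseline method: flag local maxima above a threshold, then integrate counts net of a linear baseline drawn between the peak's edge channels. This classical sketch runs on a synthetic spectrum and is not the SFLA-PSO CNN itself:

```python
def find_peaks(counts, threshold):
    """Indices of local maxima above threshold (simple 3-point test)."""
    return [i for i in range(1, len(counts) - 1)
            if counts[i] > threshold
            and counts[i] >= counts[i - 1] and counts[i] > counts[i + 1]]

def net_peak_area(counts, left, right):
    """Gross counts in [left, right] minus a linear baseline interpolated
    between the edge channels (trapezoid baseline subtraction)."""
    gross = sum(counts[left:right + 1])
    n = right - left + 1
    baseline = (counts[left] + counts[right]) * n / 2.0
    return gross - baseline

# Synthetic spectrum: flat background of 10 counts with a triangular peak.
spec = [10] * 20
for i, extra in enumerate([5, 20, 60, 20, 5], start=8):
    spec[i] += extra
print(find_peaks(spec, threshold=30), net_peak_area(spec, 7, 13))
# → [10] 110.0
```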
Funding: Supported by the Research Foundation of CLEP of China (Grant No. TY3Q20110003).
Abstract: The encapsulation of lunar samples is a core research area in the third phase of the Chinese Lunar Exploration Program. The seal assembly, opening and closing mechanism (OCM), and locking mechanism are the core components of the lunar sample encapsulation device, and the requirements of a tight seal, light weight, and low power make the design of these core components difficult. In this study, a combined sealing assembly, OCM, and locking mechanism were investigated for the device. The sealing architecture consists of rubber and an Ag-In alloy, and a theory was developed to analyze the seal. Experiments with an electroplated Au coating on the knife-edge revealed that the hermetic seal can be significantly improved. The driving principle for coaxial double-helical pairs was investigated and used to design the OCM. Moreover, a locking mechanism was created using an electric initiating explosive device with orifice damping. By optimizing the design, the output parameters were adjusted to meet the requirements of the lunar explorer. The experimental results showed that the helium leak rate of the test pieces was not more than 5×10^(-11) Pa·m^(3)·s^(-1), the minimum power of the OCM was 0.3 W, and the total weight of the principle prototype was 2.9 kg. The explosive-driven locking mechanism has low impact. This investigation solved the difficulties in achieving a tight seal, light weight, and low power for the lunar explorer, and the results can also be applied to the exploration of other extraterrestrial objects in the future.
Funding: Funded by grants from the National Key Research and Development Program of China [2021YFC2301503, 2022YFC2302900] and the National Natural Science Foundation of China [82171739, 82171815, 81873884].
Abstract: Objective To evaluate the diagnostic value of histopathological examination of ultrasound-guided puncture biopsy samples in extrapulmonary tuberculosis (EPTB). Methods This study was conducted at the Shanghai Public Health Clinical Center. A total of 115 patients underwent ultrasound-guided puncture biopsy, followed by MGIT 960 culture (culture), smear, GeneXpert MTB/RIF (Xpert), and histopathological examination. These assays were evaluated for their effectiveness in diagnosing EPTB against two different reference standards: liquid culture and the composite reference standard (CRS). Results When CRS was used as the reference standard, the sensitivity and specificity of culture, smear, Xpert, and histopathological examination were (44.83%, 89.29%), (51.72%, 89.29%), (70.11%, 96.43%), and (85.06%, 82.14%), respectively. Against liquid culture, the sensitivity and specificity of smear, Xpert, and pathological examination were (66.67%, 72.60%), (83.33%, 63.01%), and (92.86%, 45.21%), respectively. Histopathological examination showed the highest sensitivity but the lowest specificity. Furthermore, the combination of Xpert and histopathological examination showed a sensitivity of 90.80% and a specificity of 89.29%. Conclusion Ultrasound-guided puncture sampling is safe and effective for the diagnosis of EPTB. Compared with culture, smear, and Xpert, histopathological examination showed higher sensitivity but lower specificity. The combination of histopathology with Xpert showed the best performance characteristics.
Funding: Project supported by the National Key Research and Development Program of China (Grant No. 2019YFA0307701) and the National Natural Science Foundation of China (Grant Nos. 11674128, 11674124, and 11974138).
Abstract: We conduct an experimental study, supported by theoretical analysis, of a single laser pulse ablating copper to investigate the interactions between the laser and the material at different sample temperatures, and to predict the changes in ablation morphology and lattice temperature. To investigate the effect of sample temperature on femtosecond laser processing, we conduct experiments on and simulate the thermal behavior of femtosecond laser irradiation of copper using a two-temperature model. The simulation results show that both the electron peak temperature and the relaxation time needed to reach equilibrium increase as the initial sample temperature rises. When the sample temperature rises from 300 K to 600 K, the maximum lattice temperature of the copper surface increases by about 6500 K under femtosecond laser irradiation, and the ablation depth increases by 20%. The simulated ablation depths follow the same general trend as the experimental values. This work provides a theoretical basis and technical support for developing femtosecond laser processing of metal materials.
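For reference, the two-temperature model invoked above has a standard one-dimensional form coupling the electron temperature T_e and the lattice temperature T_l (the paper's exact parameterization for copper is not reproduced here):

```latex
C_e(T_e)\,\frac{\partial T_e}{\partial t}
  = \frac{\partial}{\partial z}\!\left(k_e \frac{\partial T_e}{\partial z}\right)
  - G\,(T_e - T_l) + S(z,t),
\qquad
C_l\,\frac{\partial T_l}{\partial t} = G\,(T_e - T_l),
```

where C_e and C_l are the electron and lattice heat capacities, k_e is the electron thermal conductivity, G is the electron-phonon coupling factor, and S(z,t) is the absorbed laser source term. The reported rise in peak electron temperature and relaxation time with initial sample temperature follows from the temperature dependence of C_e and G in these equations.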
Funding: Supported in part by the National Natural Science Foundation of China (Grant Nos. 51975347 and 51907117) and in part by the Shanghai Science and Technology Program (Grant No. 22010501600).
Abstract: Regular fastener detection is necessary to ensure the safety of railways. However, the number of abnormal fasteners is significantly lower than the number of normal fasteners in real railways, and existing supervised inspection methods have insufficient detection ability in cases of imbalanced samples. To solve this problem, we propose an approach based on deep convolutional neural networks (DCNNs), which consists of three stages: fastener localization, abnormal fastener sample generation based on saliency detection, and fastener state inspection. First, a lightweight YOLOv5s is designed to achieve fast and precise localization of fastener regions. Then, the foreground clip region of a fastener image is extracted by the designed fastener saliency detection network (F-SDNet), combined with data augmentation to generate a large number of abnormal fastener samples and balance the numbers of abnormal and normal samples. Finally, a fastener inspection model called Fastener ResNet-8 is constructed and trained with the augmented fastener dataset. Results show the effectiveness of the proposed method in solving the problem of sample imbalance in fastener detection. Qualitative and quantitative comparisons show that the proposed F-SDNet outperforms other state-of-the-art methods in clip region extraction, reaching an MAE of 0.0215 and a max F-measure of 0.9635. In addition, the FPS of the fastener state inspection model reached 86.2, and the average accuracy reached 98.7% on 614 augmented fastener test sets and 99.9% on 7505 real fastener datasets.
Funding: Supported by the National Key Research and Development Program (2022YFF0609504), the National Natural Science Foundation of China (61974126, 51902273, 62005230, 62001405), and the Natural Science Foundation of Fujian Province of China (No. 2021J06009).
Abstract: Perovskite solar cells (PSCs) have developed tremendously over the past decade. However, the key factors influencing the power conversion efficiency (PCE) of PSCs remain incompletely understood, owing to the complexity and coupling of the structural and compositional parameters. In this research, we demonstrate an effective approach to optimizing PSC performance via machine learning (ML). To address the challenges posed by limited samples, we propose a feature mask (FM) method, which augments training samples through feature transformation rather than synthetic data. Using this approach, a squeeze-and-excitation residual network (SEResNet) model achieves an accuracy with a root-mean-square error (RMSE) of 0.833% and a Pearson's correlation coefficient (r) of 0.980. Furthermore, we employ the permutation importance (PI) algorithm to identify the key features for PCE. Subsequently, we predict PCE through high-throughput screening, in which we study the relationship between PCE and chemical composition. We then conduct experiments to validate the consistency between the ML predictions and the experimental results. In this work, ML demonstrates the capability to predict device performance, extract key parameters from complex systems, and accelerate the transition from laboratory findings to commercial applications.
Funding: Supported partially by the National Natural Science Foundation of China (NSFC) (No. U21A20146); the Collaborative Innovation Project of Anhui Universities (No. GXXT-2020-070); the Cooperation Project of the Anhui Future Technology Research Institute and Enterprise (No. 2023qyhz32); Development of a New Dynamic Life Prediction Technology for Energy Storage Batteries (No. KH10003598); the Opening Project of the Key Laboratory of Electric Drive and Control of Anhui Province (No. DQKJ202304); the Anhui Provincial Department of Education New Era Education Quality Project (No. 2023dshwyx019); the Special Fund for Collaborative Innovation between Anhui Polytechnic University and Jiujiang District (No. 2022cyxtb10); the Key Research and Development Program of Wuhu City (No. 2022yf42); the Open Research Fund of the Anhui Key Laboratory of Detection Technology and Energy Saving Devices (No. JCKJ2021B06); the Anhui Provincial Graduate Student Innovation and Entrepreneurship Practice Project (No. 2022cxcysj123); and the Key Scientific Research Project for Anhui Universities (No. 2022AH050981).
Abstract: Accurate and reliable fault detection is essential for the safe operation of electric vehicles. Support vector data description (SVDD) has been widely used in the field of fault detection. However, constructing the hypersphere boundary only describes the distribution of unlabeled samples; the distribution of faulty samples cannot be effectively described, and faulty data are easily missed because of the imbalance in the sample distribution. Meanwhile, parameter selection is critical to detection performance, and empirical parameterization is generally time-consuming and laborious and may not find the optimal parameters. Therefore, this paper proposes a semi-supervised data-driven method that improves the SVDD algorithm and achieves excellent fault detection performance. By incorporating faulty samples into the training of the underlying SVDD model, the method better handles the missed detection of faulty samples caused by the imbalanced distribution of abnormal samples, and the hypersphere boundary is modified to classify the samples more accurately. A Bayesian-optimized NSVDD (BO-NSVDD) model was constructed to quickly and accurately optimize hyperparameter combinations. In the experiments, electric vehicle operation data with four common fault types are used to evaluate the performance against five other models, and the results show that the BO-NSVDD model presents superior detection performance for each type of fault data, especially for imperceptible early and minor faults, where its advantages are most evident. Finally, the strong robustness of the proposed method is verified by adding noise of different intensities to the dataset.
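As a point of reference for the SVDD baseline the paper improves upon, the sketch below fits an unsupervised boundary on synthetic "healthy" data and tests a drifted fault cluster. scikit-learn has no SVDD class, but with an RBF kernel the One-Class SVM decision boundary is equivalent to SVDD; the data, nu, and gamma here are illustrative assumptions, and the paper's NSVDD additionally uses labeled faulty samples and Bayesian hyperparameter optimization, neither of which is shown.

```python
# Baseline SVDD-style boundary on synthetic battery-like data.
# One-Class SVM with an RBF kernel is equivalent to SVDD; the paper's
# NSVDD extends this with labelled faults and Bayesian optimisation.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
normal = rng.normal(loc=0.0, scale=1.0, size=(300, 2))   # healthy operation
faults = rng.normal(loc=6.0, scale=0.5, size=(10, 2))    # drifted fault cluster

model = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.2).fit(normal)
pred_normal = model.predict(normal)    # +1 inside the boundary, -1 outside
pred_fault = model.predict(faults)
print("flagged faults:", int((pred_fault == -1).sum()), "/", len(faults))
```

Because the plain boundary is trained only on unlabeled healthy data, faults that drift back toward the hypersphere would be missed, which is exactly the imbalance problem the proposed NSVDD addresses.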
Funding: Supported by the National Natural Science Foundation of China (62371049).
Abstract: In engineering applications, most traditional early-warning radars estimate only one set of adaptive weights for adaptive interference suppression in a pulse repetition interval (PRI). Therefore, if the training samples used to calculate the weight vector do not contain the jamming, the jamming cannot be removed by adaptive spatial filtering. If the weight vector is instead constantly updated in the range dimension, the training data may contain target echo signals, resulting in a signal cancellation effect. To cope with training samples contaminated by the target signal, an iterative training sample selection method based on a non-homogeneous detector (NHD) is proposed in this paper for updating the weight vector over the entire range dimension. The principle is presented, and its validity is proven by simulation results.
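The dependence on training-sample content can be seen in a minimal sample-matrix-inversion (SMI) sketch: when the training snapshots contain the jamming, the adaptive weight w = R^-1 s places a null on the jammer. The array geometry, jammer power, and snapshot count below are illustrative assumptions, not values from the paper.

```python
# SMI adaptive weights for a uniform linear array: training data that
# contains the jammer yields a null at the jammer direction.
import numpy as np

N = 8                                    # elements, half-wavelength spacing

def steer(theta_deg: float) -> np.ndarray:
    n = np.arange(N)
    return np.exp(1j * np.pi * n * np.sin(np.radians(theta_deg)))

rng = np.random.default_rng(2)
s = steer(0.0)                           # look (target) direction
jam = steer(30.0)                        # jammer direction

# Training snapshots: strong jammer plus unit-power noise.
K = 200
snaps = (10.0 * jam[:, None] * (rng.normal(size=K) + 1j * rng.normal(size=K))
         + (rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))) / np.sqrt(2))
R = snaps @ snaps.conj().T / K           # sample covariance matrix

w = np.linalg.solve(R, s)
w /= (s.conj() @ w)                      # unit gain in the look direction

gain_target = abs(w.conj() @ s)
gain_jammer = abs(w.conj() @ jam)
print(f"target gain {gain_target:.2f}, jammer gain {gain_jammer:.4f}")
```

If the jammer were absent from `snaps`, R would be near-diagonal, no null would form, and the jammer would pass through; conversely, if target echoes leak into `snaps`, the same mechanism nulls the target, which is the signal cancellation effect the proposed NHD-based sample selection avoids.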
Funding: Provided by the National Natural Science Foundation of China (52074300); the Program of the China Scholarship Council (202206430024); the National Natural Science Foundation of China Youth Science Fund (52104139); the Yueqi Young Scholars Project of China University of Mining and Technology, Beijing (2602021RC84); and Guizhou Province science and technology planning projects ([2020]3007, [2020]3008).
Abstract: The deep mining of coal resources is accompanied by severe environmental challenges and various potential engineering hazards. NPR (negative Poisson's ratio) bolts can effectively control large deformations of the surrounding rock. This paper focuses on the mechanical properties of the NPR bolt under static disturbance load. A deep nonlinear mechanical experimental system was used to study the mechanical behavior of rock samples with different anchoring types (unanchored / PR anchored / 2G NPR anchored) under static disturbance load. The whole failure process of the rock samples was captured by a high-speed camera to obtain real-time failure characteristics, and the acoustic emission signal was collected to obtain key characteristic parameters such as acoustic emission count, energy, and frequency. The deformation at failure was calculated and analyzed with digital speckle software. The findings indicate that the failure mode of the rock is influenced by the anchoring type. Compared with the unanchored rock samples, the peak failure strength of the 2G NPR bolt anchored samples increases by 6.5%, the cumulative acoustic emission count and cumulative energy decrease by 62.16% and 62.90%, respectively, the maximum deformation at bearing capacity increases by 59.27%, and the failure time is delayed by 42.86%. Compared with rock anchored by a PR (Poisson's ratio) bolt, the peak failure strength of the 2G NPR bolt anchored samples under static disturbance load increases by 5.94%, the cumulative acoustic emission count and cumulative energy decrease by 47.16% and 43.86%, respectively, the maximum deformation at bearing capacity increases by 50.43%, and the failure time is delayed by 32%. Anchoring with the 2G NPR bolt thus effectively reduces the risk of damage caused by static disturbance load. These results demonstrate that the support effect of 2G NPR bolt materials surpasses that of PR bolts.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 41877272 and 42077244) and the National Key Research and Development Program of China 2023 Key Special Project (Grant No. 2023YFC2907400).
Abstract: Accurate prediction of rockburst proneness is one of the challenges in assessing rockburst risk and selecting effective control measures. This study aims to assess rockburst proneness by considering the energy characteristics and qualitative information during rock failure. Several representative rock types, in cylindrical and cuboidal sample shapes, were tested under uniaxial compression, and the failure process was recorded by a high-speed camera. The far-field ejection mass ratio (FEMR) was determined from the qualitative failure information of the rock samples. The peak-strength energy impact index and the residual elastic energy index were used to quantitatively evaluate the rockburst proneness of both cylindrical and cuboidal samples. Further, the performance of these two indices was analyzed by comparing their estimates with the FEMR. The results show that the accuracy of the residual elastic energy index is significantly higher than that of the peak-strength energy impact index, and the residual elastic energy index and the FEMR are in good agreement for both cylindrical and cuboidal rock materials. This is because these two indices essentially reflect the common energy release mechanism characterized by the mass, ejection velocity, and ejection distance of rock fragments. This suggests that both the FEMR and the residual elastic energy index can be used to accurately measure the rockburst proneness of cylindrical and cuboidal samples based on uniaxial compression tests.
Funding: The National Natural Science Foundation of China (Grant No. 61973033) and Preliminary Research of Equipment (Grant No. 9090102010305) funded the experiments.
Abstract: The longitudinal dispersion of the projectile in shooting tests of two-dimensional trajectory correction fuzes with fixed canards is so large that it sometimes exceeds the correction ability of the correction fuze actuator. The impact point easily deviates from the target, and thus the correction result cannot be readily evaluated. However, the cost of shooting tests is too high to conduct many tests for data collection. To address this issue, this study proposes an aiming method for shooting tests based on a small sample size. The proposed method uses the Bootstrap method to expand the test data, repeatedly iterates and corrects the positions of the simulated theoretical impact points through an improved compatibility test method, and dynamically adjusts the weight of the prior distribution of the simulation results based on the Kullback-Leibler divergence, which to some extent prevents the real data from being "submerged" by the simulation data and achieves a fused Bayesian estimation of the dispersion center. The experimental results show that when the simulation accuracy is sufficiently high, the proposed method yields a smaller mean-square deviation in estimating the dispersion center and higher shooting accuracy than the three comparison methods, which better reflects the effect of the control algorithm and helps test personnel iterate their proposed structures and algorithms. In addition, this study provides a knowledge base for further comprehensive studies in the future.
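The core idea of expanding a small live-fire sample by bootstrap resampling and fusing it with a simulation-based prior can be sketched as follows. The shot coordinates, the simulated center, and the fixed prior weight are all illustrative assumptions; in particular, the fixed weight stands in for the paper's KL-divergence-based dynamic weighting, which is not reproduced here.

```python
# Bootstrap expansion of a small set of measured impact points, then a
# weighted fusion of the measured dispersion centre with a simulated prior.
import numpy as np

rng = np.random.default_rng(1)
impacts = rng.normal(loc=[12.0, -5.0], scale=8.0, size=(7, 2))  # 7 live shots

# Bootstrap expansion: resample the 7 shots with replacement many times.
boot_means = np.array([
    impacts[rng.integers(0, len(impacts), size=len(impacts))].mean(axis=0)
    for _ in range(2000)
])
data_center = boot_means.mean(axis=0)

sim_center = np.array([10.0, -4.0])   # centre predicted by simulation
w_prior = 0.3                         # fixed prior weight (illustrative)
fused_center = w_prior * sim_center + (1 - w_prior) * data_center
print("fused dispersion centre:", fused_center)
```

Keeping the prior weight below the data weight is what prevents the live data from being "submerged" by the simulation; the paper adapts this weight dynamically according to how well the two distributions agree.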
Funding: Supported by the Zimin Institute for Engineering Solutions Advancing Better Lives.
Abstract: Background Functional mapping, despite its proven efficiency, suffers from a “chicken or egg” scenario, in that, poor spatial features lead to inadequate spectral alignment and vice versa during training, often resulting in slow convergence, high computational costs, and learning failures, particularly when small datasets are used. Methods A novel method is presented for dense-shape correspondence, whereby the spatial information transformed by neural networks is combined with the projections onto spectral maps to overcome the “chicken or egg” challenge by selectively sampling only points with high confidence in their alignment. These points then contribute to the alignment and spectral loss terms, boosting training and accelerating convergence by a factor of five. To ensure fully unsupervised learning, the Gromov–Hausdorff distance metric was used to select the points with the maximal alignment score, displaying the most confidence. Results The effectiveness of the proposed approach was demonstrated on several benchmark datasets, with results reported as superior to those of spectral- and spatial-based methods. Conclusions The proposed method provides a promising new approach to dense-shape correspondence, addressing the key challenges in the field and offering significant advantages over current methods, including faster convergence, improved accuracy, and reduced computational costs.
Abstract: Sample size determination typically relies on a power analysis based on a frequentist conditional approach. The latter can be seen as a particular case of the two-priors approach, which allows one to build four distinct power functions to select the optimal sample size. We revise this approach when the focus is on testing a single binomial proportion. We consider exact methods and introduce a conservative criterion to account for the typical non-monotonic behavior of the power functions when dealing with discrete data. The main purpose of this paper is to present a Shiny app providing a user-friendly, interactive tool to apply these criteria. The app also provides specific tools to elicit the analysis and design prior distributions, which are the core of the two-priors approach.
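The frequentist conditional power that the two-priors approach generalizes can be computed exactly for a binomial proportion with the standard library alone. The sketch below tests H0: p = p0 against H1: p > p0; the numbers are illustrative, and the non-monotonicity of this power in n is what motivates the paper's conservative criterion.

```python
# Exact (conditional) power of a one-sided exact test of a binomial
# proportion: find the critical value with exact size <= alpha under p0,
# then evaluate the rejection probability under the alternative p1.
from math import comb

def binom_tail(n: int, p: float, c: int) -> float:
    """P(X >= c) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c, n + 1))

def exact_power(n: int, p0: float, p1: float, alpha: float = 0.05):
    # Smallest critical value c with P(X >= c | p0) <= alpha.
    c = next(k for k in range(n + 1) if binom_tail(n, p0, k) <= alpha)
    return c, binom_tail(n, p1, c)

c, power = exact_power(n=20, p0=0.5, p1=0.8)
print(f"reject H0 when X >= {c}; exact power at p1 = 0.8: {power:.4f}")
```

Because the critical value c jumps at discrete points as n grows, the resulting power function is saw-toothed rather than monotone in n, so the smallest n whose power clears a target may not keep clearing it at n + 1.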
Abstract: The corrosion rate is a crucial factor that impacts the longevity of materials in different applications. After friction stir processing (FSP), the refined grain structure leads to a notable decrease in corrosion rate. However, a good understanding of the correlation between the FSP process parameters and the corrosion rate is still lacking. The current study used machine learning to establish the relationship between the corrosion rate and the FSP process parameters (rotational speed, traverse speed, and shoulder diameter) for the WE43 alloy. A Taguchi L27 design of experiments was used for the experimental analysis. In addition, synthetic data were generated using particle swarm optimization for virtual sample generation (VSG), which increased the prediction accuracy of the machine learning models. A sensitivity analysis was performed using Shapley Additive Explanations to determine the key factors affecting the corrosion rate; the shoulder diameter had a significant impact compared with the traverse speed. A graphical user interface (GUI) was created to predict the corrosion rate from the identified factors. This study focuses on the WE43 alloy, but its findings can also be used to predict the corrosion rate of other magnesium alloys.
Abstract: The objectives of this paper are to demonstrate the algorithms employed by three statistical software programs (R, Real Statistics using Excel, and SPSS) for calculating the exact two-tailed probability of the Wald-Wolfowitz one-sample runs test for randomness, to present a novel approach for computing this probability, and to compare the four procedures by generating samples of 10 and 11 data points, varying the parameters n0 (number of zeros) and n1 (number of ones), as well as the number of runs. Fifty-nine samples were created to replicate the behavior of the distribution of the number of runs with 10 and 11 data points. The exact two-tailed probabilities for the four procedures were compared using Friedman's test. Given the significant difference in central tendency, post-hoc comparisons were conducted using Conover's test with the Benjamini-Yekutieli correction. It is concluded that the procedures of Real Statistics using Excel and R exhibit some inadequacies in the calculation of the exact two-tailed probability, whereas the new proposal and the SPSS procedure are more suitable. The proposed robust algorithm has a more transparent rationale than the SPSS one, albeit being somewhat more conservative. We recommend its implementation for this test and its application to others, such as the binomial and sign tests.
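All four procedures start from the same exact null distribution of the number of runs R given n0 zeros and n1 ones; they differ in how the two tails are accumulated. The sketch below computes that distribution with the standard library and, as one possible convention (not the paper's specific proposal), sums the probabilities of all outcomes no more probable than the observed one.

```python
# Exact Wald-Wolfowitz null distribution of the number of runs R for a
# binary sequence with n0 zeros and n1 ones, plus one two-tailed
# p-value convention (point-probability criterion).
from math import comb

def runs_pmf(n0: int, n1: int) -> dict:
    """P(R = r) under randomness, for r = 2 .. n0 + n1."""
    total = comb(n0 + n1, n0)
    pmf = {}
    for r in range(2, n0 + n1 + 1):
        k = r // 2
        if r % 2 == 0:   # equal numbers of 0-runs and 1-runs
            num = 2 * comb(n0 - 1, k - 1) * comb(n1 - 1, k - 1)
        else:            # one extra run of either symbol
            num = (comb(n0 - 1, k - 1) * comb(n1 - 1, k)
                   + comb(n0 - 1, k) * comb(n1 - 1, k - 1))
        if num:
            pmf[r] = num / total
    return pmf

def exact_two_tailed(n0: int, n1: int, r_obs: int) -> float:
    # Sum P(R = r) over all r no more probable than the observed r.
    pmf = runs_pmf(n0, n1)
    p_obs = pmf.get(r_obs, 0.0)
    return min(1.0, sum(p for p in pmf.values() if p <= p_obs + 1e-12))

pmf = runs_pmf(5, 5)
print("total probability:", sum(pmf.values()))
print("two-tailed p for R = 2:", exact_two_tailed(5, 5, 2))
```

The divergences the paper documents arise precisely in this last step: doubling the smaller one-sided tail, summing point probabilities, or mixing conventions can give different answers for the same (n0, n1, r).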
Abstract: This paper addresses the sampled-data multi-objective active suspension control problem for an in-wheel motor driven electric vehicle subject to stochastic sampling periods and asynchronous premise variables. The focus is placed on the scenario in which the dynamical state of the half-vehicle active suspension system is transmitted over an in-vehicle controller area network that only permits the transmission of sampled data packets. For this purpose, a stochastic sampling mechanism is developed such that the sampling periods can randomly switch among different values with certain mathematical probabilities. Then, an asynchronous fuzzy sampled-data controller, featuring premise variables distinct from those of the active suspension system, is constructed to eliminate the stringent requirement that the sampled-data controller share the same grades of membership. Furthermore, novel criteria for both stability analysis and controller design are derived to guarantee that the resultant closed-loop active suspension system is stochastically stable with simultaneous H2 and H∞ performance requirements. Finally, the effectiveness of the proposed stochastic sampled-data multi-objective control method is verified via several numerical case studies in both the time domain and the frequency domain under various road disturbance profiles.
Funding: Science and Technology Project of Jiangsu Polytechnic of Agriculture and Forestry (Project No. 2021kj56).
Abstract: Tea plants are susceptible to diseases during their growth, and these diseases seriously affect the yield and quality of tea. Effective disease prevention and control requires accurate disease identification. With the development of artificial intelligence and computer vision, automatic recognition of plant diseases from image features has become feasible. Since the support vector machine (SVM) is suitable for high-dimensional, high-noise, small-sample learning, this paper uses an SVM to segment the disease spots of diseased tea plants. An improved Conditional Deep Convolutional Generative Adversarial Network with Gradient Penalty (C-DCGAN-GP) was then used to augment the segmented tea lesion images. Finally, the Visual Geometry Group 16 (VGG16) deep learning classification network was trained on the augmented tea lesion images to recognize tea diseases.
Funding: Supported by the Research Foundation for Advanced Talents of Guizhou University under Grant (2016) No. 49; Key Disciplines of Guizhou Province Computer Science and Technology (ZDXK[2018]007); Research Projects of the Innovation Group of Education (QianJiaoHeKY[2021]022); and the National Natural Science Foundation of China (62062023).
Abstract: Sparse representation plays an important role in face recognition research. As a deformable-sample classification task, face recognition is often used to test the performance of classification algorithms. In face recognition, differences in expression, angle, posture, and lighting conditions have become key factors that affect recognition accuracy. Essentially, there may be significant differences between different image samples of the same face, which makes image classification very difficult. Therefore, how to build a robust virtual image representation becomes a vital issue. To solve the above problems, this paper proposes a novel image classification algorithm. First, to better retain the global features and contour information of the original sample, the algorithm uses an improved non-linear image representation method to highlight the low-intensity and high-intensity pixels of the original training sample, thus generating a virtual sample. Second, by the principle of sparse representation, the linear expression coefficients of the original sample and the virtual sample are calculated, respectively. After obtaining these two types of coefficients, the distance between the original sample and the test sample and the distance between the virtual sample and the test sample are calculated and converted into distance scores. Finally, a simple and effective weight fusion scheme is adopted to fuse the classification scores of the original image and the virtual image, and the fused score determines the final classification result. The experimental results show that the proposed method outperforms other typical sparse representation classification methods.
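The virtual-sample-plus-score-fusion idea can be illustrated in a few lines. The non-linear transform, the random "images", and the fusion weight below are illustrative assumptions; the paper's actual pipeline computes sparse-representation coefficients rather than raw nearest-neighbor distances.

```python
# Illustrative score fusion: build a non-linear "virtual" version of each
# training image, score the probe against both representations, and fuse.
import numpy as np

rng = np.random.default_rng(4)
train = rng.random((10, 64))             # 10 flattened face images, 1 per class
test = train[3] + 0.05 * rng.random(64)  # noisy probe from class 3

def virtual(x: np.ndarray) -> np.ndarray:
    # Emphasise low- and high-intensity pixels (illustrative transform).
    return x + (x - 0.5) ** 3

d_orig = np.linalg.norm(train - test, axis=1)                    # original scores
d_virt = np.linalg.norm(virtual(train) - virtual(test), axis=1)  # virtual scores

alpha = 0.6                              # fusion weight (illustrative)
fused = alpha * d_orig + (1 - alpha) * d_virt
print("predicted class:", int(np.argmin(fused)))
```

The fusion step is what lends robustness: a class that scores well under only one representation (for example, due to a lighting artifact) is down-ranked unless both representations agree.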
Funding: The China National Key Research and Development Program (Grant No. 2016YFC0802904), the National Natural Science Foundation of China (Grant No. 61671470), and the 62nd batch of funded projects of the China Postdoctoral Science Foundation (Grant No. 2017M623423) provided funds for conducting the experiments.
Abstract: Traditional object detectors based on deep learning rely on plenty of labeled samples, which are expensive to obtain. Few-shot object detection (FSOD) attempts to solve this problem by learning to detect objects from a few labeled samples, but the performance is often unsatisfactory due to the scarcity of samples. We believe that the main reasons restricting the performance of few-shot detectors are that (1) positive samples are scarce, and (2) the quality of positive samples is low. Therefore, we put forward a novel few-shot object detector based on YOLOv4 that improves both the quantity and the quality of positive samples. First, we design a hybrid multivariate positive sample augmentation (HMPSA) module to amplify the quantity of positive samples and increase positive sample diversity while suppressing negative samples. Then, we design a selective non-local fusion attention (SNFA) module to help the detector better learn the target features and improve the feature quality of positive samples. Finally, we optimize the loss function to make it more suitable for the task of FSOD. Experimental results on PASCAL VOC and MS COCO demonstrate that our designed few-shot object detector achieves performance competitive with other state-of-the-art detectors.
Abstract: In order to solve the problems of weak prediction stability and generalization ability of neural network models in yarn quality prediction research for small samples, a prediction model based on an AdaBoost algorithm (AdaBoost model) was established. A prediction model based on a linear regression algorithm (LR model) and a prediction model based on a multi-layer perceptron neural network algorithm (MLP model) were established for comparison. Prediction experiments on yarn evenness and yarn strength were implemented. Determination coefficients and prediction errors were used to evaluate the prediction accuracy of these models, and K-fold cross-validation was used to evaluate their generalization ability. In the prediction experiments, the determination coefficient of the yarn evenness prediction result of the AdaBoost model is 76% and 87% higher than that of the LR model and the MLP model, respectively. The determination coefficient of the yarn strength prediction result of the AdaBoost model is slightly higher than that of the other two models. Considering that the yarn evenness dataset has a weaker linear relationship with the cotton dataset than the yarn strength dataset in this paper, the AdaBoost model has the best adaptability to the nonlinear dataset among the three models. In addition, the AdaBoost model shows generally better results in the cross-validation experiments and in the series of prediction experiments at eight different training set sample sizes. It is proved that the AdaBoost model not only has good prediction accuracy but also has good prediction stability and generalization ability for small samples.
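The three-model comparison described above can be sketched with scikit-learn. The synthetic nonlinear dataset below stands in for the (unavailable) cotton/yarn data, and the hyperparameters are illustrative assumptions, not the paper's settings; the point is the evaluation protocol of mean R^2 under K-fold cross-validation for a small sample.

```python
# Comparing AdaBoost, linear regression, and MLP regressors with K-fold
# cross-validated R^2 on a small synthetic nonlinear dataset.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 5))          # small sample, 5 fibre features
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=60)

models = {
    "AdaBoost": AdaBoostRegressor(n_estimators=100, random_state=0),
    "LR": LinearRegression(),
    "MLP": MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
}
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = {name: cross_val_score(m, X, y, cv=cv, scoring="r2").mean()
          for name, m in models.items()}
for name, r2 in scores.items():
    print(f"{name}: mean R^2 = {r2:.3f}")
```

On a dataset with a strong nonlinear component like this one, the tree-based AdaBoost ensemble is typically the most stable of the three at small sample sizes, which mirrors the paper's yarn evenness finding.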
基金the National Natural Science Foundation of China(No.42127807)Natural Science Foundation of Sichuan Province(Nos.23NSFSCC0116 and 2022NSFSC12333)the Nuclear Energy Development Project(No.[2021]-88).
Abstract: In airborne gamma-ray spectrum processing, different analysis methods, technical requirements, analysis models, and calculation methods need to be established. To meet the engineering practice requirements of airborne gamma-ray measurements and improve computational efficiency, an improved shuffled frog leaping algorithm-particle swarm optimization convolutional neural network (SFLA-PSO CNN) for large-sample quantitative analysis of airborne gamma-ray spectra is proposed herein. This method was used to train the weights of the neural network, optimize the structure of the network, delete redundant connections, and enable the neural network to acquire the capability of quantitative spectrum processing. In full-spectrum data processing, this method can perform energy spectrum peak searching and peak area calculations. After network training, the mean SNR and RMSE of the spectral lines were 31.27 and 2.75, respectively, satisfying the demand for noise reduction. To test the processing ability of the algorithm on large samples of airborne gamma spectra, this study used measured data from the Saihangaobi survey area for spectral analysis. The results show that calculation of a single peak area takes only 0.13-0.15 ms, and the average relative errors of the peak areas in the U, Th, and K spectra are 3.11%, 9.50%, and 6.18%, respectively, indicating the high processing efficiency and accuracy of this algorithm. The performance of the model can be further improved by optimizing related parameters, but it already meets the requirements of practical engineering measurement. This study provides a new idea for the full-spectrum processing of airborne gamma rays.