The aim of this study is to investigate the impacts of the sampling strategy of landslide and non-landslide samples on the performance of landslide susceptibility assessment (LSA). The study area is the Feiyun catchment in Wenzhou City, Southeast China. Two types of landslide samples, combined with seven non-landslide sampling strategies, resulted in a total of 14 scenarios. The corresponding landslide susceptibility map (LSM) for each scenario was generated using the random forest model. The receiver operating characteristic (ROC) curve and statistical indicators were calculated and used to assess the impact of the dataset sampling strategy. The results showed that higher accuracies were achieved when using the landslide core as positive samples, combined with non-landslide sampling from the very low susceptibility zone or buffer zone. The results reveal the influence of landslide and non-landslide sampling strategies on the accuracy of LSA, which provides a reference for subsequent researchers aiming to obtain a more reasonable LSM.
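As a sketch of the experimental design, the 14 scenarios arise from crossing the two landslide (positive) sample types with the seven non-landslide (negative) strategies. The strategy names below are illustrative placeholders, not the paper's exact labels.

```python
from itertools import product

# Two ways of drawing positive (landslide) samples -- illustrative labels
landslide_sampling = ["landslide_core", "landslide_boundary"]

# Seven ways of drawing negative (non-landslide) samples -- illustrative labels
non_landslide_sampling = [
    "random", "buffer_zone", "very_low_zone", "low_zone",
    "slope_threshold", "river_distance", "mixed",
]

# Cross the two choices to enumerate every modeling scenario
scenarios = [
    {"positive": pos, "negative": neg}
    for pos, neg in product(landslide_sampling, non_landslide_sampling)
]
print(len(scenarios))  # 2 positive strategies x 7 negative strategies = 14
```

Each scenario would then feed one random forest fit and one ROC evaluation.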
With its generality and practicality, the combination of partial charging curves and machine learning (ML) for battery capacity estimation has attracted widespread attention. However, a clear classification, fair comparison, and performance rationalization of these methods are lacking, due to the scattered existing studies. To address these issues, we develop 20 capacity estimation methods from three perspectives: charging sequence construction, input forms, and ML models. 22,582 charging curves are generated from 44 cells with different battery chemistries and operating conditions to validate the performance. Through comprehensive and unbiased comparison, the long short-term memory (LSTM) based neural network exhibits the best accuracy and robustness. Across all 6503 tested samples, the mean absolute percentage error (MAPE) for capacity estimation using LSTM is 0.61%, with a maximum error of only 3.94%. Even with the addition of 3 mV voltage noise or the extension of sampling intervals to 60 s, the average MAPE remains below 2%. Furthermore, the charging sequences are provided with physical explanations related to battery degradation to enhance confidence in their application. Recommendations for using other competitive methods are also presented. This work provides valuable insights and guidance for estimating battery capacity based on partial charging curves.
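The headline error metrics can be reproduced for any estimator in a few lines. A minimal sketch of MAPE and maximum percentage error, on invented capacity values:

```python
def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred)) / len(y_true)

def max_pct_error(y_true, y_pred):
    """Worst-case absolute percentage error, in percent."""
    return 100.0 * max(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred))

# Toy capacities (Ah): true vs. estimated (invented numbers)
true_cap = [2.00, 1.95, 1.90]
est_cap = [2.01, 1.94, 1.88]
print(round(mape(true_cap, est_cap), 3), round(max_pct_error(true_cap, est_cap), 3))
```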
Data gaps and biases are two important issues that affect the quality of biodiversity information and downstream results. Understanding how best to fill existing gaps and account for biases is necessary to improve our current information most effectively. The two main current approaches for obtaining and improving data are (1) curation of biological collections and (2) fieldwork. However, the comparative effectiveness of these approaches in improving biodiversity data remains little explored. We used the Flora de Bogota project to study the magnitude of change in species richness, spatial coverage, and sample coverage of plant records based on curation versus fieldwork. The process of curation resulted in a decrease in species richness (through synonym and error removal), but it significantly increased the number of records per species. Fieldwork contributed a slight increase in species richness via the accumulation of new records. Additionally, curation led to greater increases in spatial coverage, species observed per locality, plant records per species, and localities per species compared with fieldwork. Overall, curation was more efficient in producing new information than fieldwork, mainly because of the large number of records available in herbaria. We recommend intensive curatorial work as the first step in increasing biodiversity data quality and quantity, to identify biases and gaps at the regional scale that can then be targeted with fieldwork. This stepwise strategy would enable fieldwork to be planned more cost-effectively given the limited resources for biodiversity exploration and characterization.
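A toy illustration of why curation lowers species richness while raising records per species: merging synonyms concentrates records under accepted names. The species names and the synonymy below are invented for the sketch.

```python
# Raw herbarium records, with one hypothetical synonym pair (invented names)
records = ["Quercus humboldtii", "Quercus humboldtiana",
           "Espeletia grandiflora", "Espeletia grandiflora",
           "Weinmannia tomentosa"]
synonyms = {"Quercus humboldtiana": "Quercus humboldtii"}  # hypothetical synonymy

# Curation step: map every record to its accepted name
curated = [synonyms.get(name, name) for name in records]

richness_before = len(set(records))                       # apparent richness
richness_after = len(set(curated))                        # richness after synonym removal
max_records_per_species = max(curated.count(s) for s in set(curated))
print(richness_before, richness_after, max_records_per_species)
```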
Global variance reduction is a bottleneck in Monte Carlo shielding calculations. The global variance reduction problem requires that the statistical error of the entire space be uniform. This study proposed a grid-AIS method for the global variance reduction problem based on the AIS method, which was implemented in the Monte Carlo program MCShield. The proposed method was validated using the VENUS-Ⅲ international benchmark problem and a self-shielding calculation example. The results from the VENUS-Ⅲ benchmark problem showed that the grid-AIS method achieved a significant reduction in the variance of the statistical errors of the MESH grids, decreasing from 1.08×10^(-2) to 3.84×10^(-3), representing a 64.00% reduction. This demonstrates that the grid-AIS method is effective for the global variance reduction problem. The results of the self-shielding calculation demonstrate that the grid-AIS method produced accurate computational results. Moreover, the grid-AIS method exhibited a computational efficiency approximately one order of magnitude higher than that of the AIS method and approximately two orders of magnitude higher than that of the conventional Monte Carlo method.
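The reported variance drop corresponds to a relative reduction of roughly 64%, which can be checked directly from the two quoted values:

```python
# Relative reduction implied by the two variances quoted in the abstract
var_before = 1.08e-2   # variance of MESH-grid statistical errors, standard AIS
var_after = 3.84e-3    # same quantity with the grid-AIS method

reduction_pct = 100.0 * (var_before - var_after) / var_before
print(round(reduction_pct, 1))  # roughly a 64% reduction
```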
The Moon provides a unique environment for investigating nearby astrophysical events such as supernovae. Lunar samples retain valuable information from these events, via detectable long-lived "fingerprint" radionuclides such as ^(60)Fe. In this work, we advanced the development of an accelerator mass spectrometry (AMS) method for detecting ^(60)Fe using the HI-13 tandem accelerator at the China Institute of Atomic Energy (CIAE). Since interferences could not be sufficiently removed solely with the existing magnetic systems of the tandem accelerator and the following Q3D magnetic spectrograph, a Wien filter with a maximum voltage of ±60 kV and a maximum magnetic field of 0.3 T was installed after the accelerator magnetic systems to lower the detection background for the low-abundance nuclide ^(60)Fe. A 1 μm thick Si_(3)N_(4) foil was installed in front of the Q3D as an energy degrader. For particle detection, a multi-anode gas ionization chamber was mounted at the center of the focal plane of the spectrograph. Finally, an ^(60)Fe sample with an abundance of 1.125×10^(-10) was used to test the new AMS system. The results indicate that ^(60)Fe can be clearly distinguished from the isobar ^(60)Ni. The sensitivity was assessed to be better than 4.3×10^(-14) based on blank sample measurements lasting 5.8 h, and could, in principle, be expected to reach approximately 2.5×10^(-15) when data are accumulated for 100 h, which is feasible for future lunar sample measurements because the main contaminants were sufficiently separated.
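The 100 h projection is consistent with the common assumption that a background-limited AMS sensitivity improves inversely with counting time. A quick check using the abstract's numbers:

```python
# Background-limited sensitivity assumed to scale as ~1/t (counting time)
sens_5p8h = 4.3e-14          # sensitivity after 5.8 h of blank measurement
t_short, t_long = 5.8, 100.0  # hours

sens_100h = sens_5p8h * t_short / t_long
print(f"{sens_100h:.2e}")  # close to the quoted ~2.5e-15
```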
How to use a few defect samples to complete defect classification is a key challenge in the production of mobile phone screens. An attention-relation network for mobile phone screen defect classification is proposed in this paper. The architecture of the attention-relation network contains two modules: a feature extraction module and a feature metric module. Different from other few-shot models, an attention mechanism is applied to metric learning in our model to measure the distance between features, so as to pay attention to the correlation between features and suppress unwanted information. Besides, we combine dilated convolution and skip connections to extract more feature information for follow-up processing. We validate the attention-relation network on the mobile phone screen defect dataset. The experimental results show that the classification accuracy of the attention-relation network is 0.9486 under the 5-way 1-shot training strategy and 0.9039 under the 5-way 5-shot setting. It achieves excellent classification of mobile phone screen defects and outperforms competing few-shot models.
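The 5-way 1-shot evaluation protocol can be sketched as episodic sampling. This is a generic episode builder rather than the paper's pipeline, and the defect class names and sample ids below are dummies.

```python
import random

def sample_episode(data_by_class, n_way=5, k_shot=1, q_query=3, rng=random):
    """Draw one few-shot episode: n_way classes, with k_shot support and
    q_query query samples per class. `data_by_class` maps label -> samples."""
    classes = rng.sample(sorted(data_by_class), n_way)
    support, query = [], []
    for label in classes:
        picks = rng.sample(data_by_class[label], k_shot + q_query)
        support += [(x, label) for x in picks[:k_shot]]
        query += [(x, label) for x in picks[k_shot:]]
    return support, query

# Hypothetical screen-defect classes with dummy sample ids
data = {f"defect_{i}": list(range(10)) for i in range(8)}
random.seed(0)
sup, qry = sample_episode(data, n_way=5, k_shot=1, q_query=3)
print(len(sup), len(qry))  # 5 support samples, 15 query samples
```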
The purpose of software defect prediction is to identify defect-prone code modules to assist software quality assurance teams with the appropriate allocation of resources and labor. In previous software defect prediction studies, transfer learning was effective in solving the problem of inconsistent project data distribution. However, target projects often lack sufficient data, which affects the performance of the transfer learning model. In addition, the presence of uncorrelated features between projects can decrease the prediction accuracy of the transfer learning model. To address these problems, this article proposes a software defect prediction method based on stable learning (SDP-SL) that combines code visualization techniques and residual networks. This method first transforms code files into code images using code visualization techniques and then constructs a defect prediction model based on these code images. During the model training process, target project data are not required as prior knowledge. Following the principles of stable learning, this paper dynamically adjusted the weights of source project samples to eliminate dependencies between features, thereby capturing the "invariance mechanism" within the data. This approach explores the genuine relationship between code defect features and labels, thereby enhancing defect prediction performance. To evaluate the performance of SDP-SL, this article conducted comparative experiments on 10 open-source projects in the PROMISE dataset. The experimental results demonstrated that in terms of the F-measure, the proposed SDP-SL method outperformed other within-project defect prediction methods by 2.11%-44.03%. In cross-project defect prediction, the SDP-SL method provided an improvement of 5.89%-25.46% in prediction performance compared to other cross-project defect prediction methods. Therefore, SDP-SL can effectively enhance within- and cross-project defect prediction.
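The F-measure used for these comparisons is the harmonic mean of precision and recall. A minimal sketch with invented confusion-matrix counts:

```python
def f_measure(tp, fp, fn):
    """F1 score from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for a defect predictor on one project
print(round(f_measure(tp=30, fp=10, fn=20), 4))  # precision 0.75, recall 0.60
```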
In electromagnetic countermeasure circumstances, synthetic aperture radar (SAR) imagery usually suffers from severe quality degradation caused by modulated interrupt sampling repeater jamming (MISRJ), which usually exhibits considerable coherence with the SAR transmission waveform together with periodic modulation patterns. This paper develops an MISRJ suppression algorithm for SAR imagery based on online dictionary learning. In the algorithm, the temporal properties of the jamming modulation are exploited by extracting and sorting MISRJ slices using fast-time autocorrelation. Online dictionary learning is then applied to separate real signals from jamming slices. Under the learned representation, time-varying MISRJs are suppressed effectively. Both simulated and real-measured SAR data are used to confirm the advantages in suppressing time-varying MISRJs over traditional methods.
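Fast-time autocorrelation reveals the period of an interrupted-sampling pattern because the sequence correlates most strongly with itself at lags equal to that period. A toy illustration, where a simple on/off pattern stands in for the jamming slices:

```python
def autocorr(x, lag):
    """Unnormalized autocorrelation of a finite sequence at a given lag."""
    return sum(a * b for a, b in zip(x, x[lag:]))

# Toy fast-time sequence with a period-4 on/off "jamming" pattern (illustrative)
pulse = [1, 0, 0, 0] * 8

lags = range(1, 8)
best_lag = max(lags, key=lambda k: autocorr(pulse, k))
print(best_lag)  # strongest self-similarity occurs at the pattern period
```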
The emergence of digital networks and the wide adoption of information on internet platforms have given rise to threats against users' private information. Many intruders actively seek such private data either for sale or other inappropriate purposes. Similarly, national and international organizations have country-level and company-level private information that could be accessed by different network attacks. Therefore, a Network Intruder Detection System (NIDS) becomes essential for protecting these networks and organizations. In the evolution of NIDS, Artificial Intelligence (AI) assisted tools and methods have been widely adopted to provide effective solutions. However, the development of NIDS still faces challenges at the dataset and machine learning levels, such as large deviations in numeric features, the presence of numerous irrelevant categorical features requiring cardinality reduction, and class imbalance in multiclass-level data. To address these challenges and offer a unified solution to NIDS development, this study proposes a novel framework that preprocesses datasets and applies a Box-Cox transformation to linearly transform the numeric features and bring them into closer alignment. Cardinality reduction was applied to categorical features through the binning method. Subsequently, the class-imbalanced dataset was addressed using the adaptive synthetic sampling (ADASYN) data generation method. Finally, the preprocessed, refined, and oversampled feature set was divided into training and test sets with an 80-20 ratio, and two experiments were conducted. In Experiment 1, binary classification was executed using four machine learning classifiers, with the extra trees classifier achieving the highest accuracy of 97.23% and an AUC of 0.9961. In Experiment 2, multiclass classification was performed, and the extra trees classifier again emerged as the most effective, achieving an accuracy of 81.27% and an AUC of 0.97. The results were evaluated based on training, testing, and total time, and a comparative analysis with state-of-the-art studies proved the robustness and significance of the applied methods in developing a timely and precision-efficient solution to NIDS.
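The one-parameter Box-Cox transform used in the preprocessing step is simple to state. A minimal sketch, where the λ value is hand-picked for illustration (in practice it is fitted, e.g. by maximum likelihood):

```python
import math

def box_cox(x, lam):
    """One-parameter Box-Cox transform for positive x."""
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1) / lam

# A right-skewed toy feature pulled toward symmetry with lambda = 0 (log case)
feature = [1.0, 2.0, 4.0, 8.0, 16.0]
transformed = [box_cox(x, 0.0) for x in feature]
print([round(v, 3) for v in transformed])  # evenly spaced after the transform
```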
In this paper, we establish a new multivariate Hermite sampling series involving samples from the function itself and its mixed and non-mixed partial derivatives of arbitrary order. This multivariate form of Hermite sampling will be valid for some classes of multivariate entire functions satisfying certain growth conditions. We will show that many known results, included in Commun Korean Math Soc, 2002, 17: 731-740, Turk J Math, 2017, 41: 387-403, and Filomat, 2020, 34: 3339-3347, are special cases of our results. Moreover, we estimate the truncation error of this sampling based on localized sampling without decay assumptions. Illustrative examples are also presented.
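For orientation, the classical one-variable, first-order Hermite sampling series, of which series like those in the papers cited above are multivariate, higher-order generalizations, reads as follows for suitable bandlimited functions (stated here as background, not as the paper's theorem):

```latex
% Classical first-order Hermite sampling in one variable, for suitable
% bandlimited f (e.g., entire of exponential type 2\pi, square-integrable on R):
f(t) \;=\; \sum_{n\in\mathbb{Z}} \Bigl[\, f(n) + (t-n)\, f'(n) \,\Bigr]
           \operatorname{sinc}^{2}(t-n),
\qquad \operatorname{sinc}(t) = \frac{\sin(\pi t)}{\pi t}.
```

Interpolating both function values and derivative values at the integers is what allows the doubled type compared with the classical Shannon series.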
The encapsulation of lunar samples is a core research area in the third phase of the Chinese Lunar Exploration Program. The seal assembly, opening and closing mechanism (OCM), and locking mechanism are the core components of the encapsulation device for the lunar samples, and the requirements of a tight seal, light weight, and low power make the design of these core components difficult. In this study, a combined sealing assembly, OCM, and locking mechanism were investigated for the device. The sealing architecture consists of rubber and an Ag-In alloy, and a theory was developed to analyze the seal. Experiments on the electroplated Au coating on the knife-edge revealed that the hermetic seal can be significantly improved. The driving principle for coaxial double-helical pairs was investigated and used to design the OCM. Moreover, a locking mechanism was created using an electric initiating explosive device with orifice damping. By optimizing the design, the output parameters were adjusted to meet the requirements of the lunar explorer. The experimental results showed that the helium leak rate of the test pieces was not more than 5×10^(-11) Pa·m^(3)·s^(-1), the minimum power of the OCM was 0.3 W, and the total weight of the principle prototype was 2.9 kg. The explosive-driven locking mechanism has low impact. This investigation solved the difficulties in achieving a tight seal, light weight, and low power for the lunar explorer, and the results can also be used in exploring other extraterrestrial objects in the future.
Objective To evaluate the diagnostic value of histopathological examination of ultrasound-guided puncture biopsy samples in extrapulmonary tuberculosis (EPTB). Methods This study was conducted at the Shanghai Public Health Clinical Center. A total of 115 patients underwent ultrasound-guided puncture biopsy, followed by MGIT 960 culture (culture), smear, GeneXpert MTB/RIF (Xpert), and histopathological examination. These assays were performed to evaluate their effectiveness in diagnosing EPTB in comparison to two different diagnostic criteria: liquid culture and the composite reference standard (CRS). Results When CRS was used as the reference standard, the sensitivity and specificity of culture, smear, Xpert, and histopathological examination were (44.83%, 89.29%), (51.72%, 89.29%), (70.11%, 96.43%), and (85.06%, 82.14%), respectively. Based on liquid culture tests, the sensitivity and specificity of smear, Xpert, and pathological examination were (66.67%, 72.60%), (83.33%, 63.01%), and (92.86%, 45.21%), respectively. Histopathological examination showed the highest sensitivity but the lowest specificity. Further, we found that the combination of Xpert and histopathological examination showed a sensitivity of 90.80% and a specificity of 89.29%. Conclusion Ultrasound-guided puncture sampling is safe and effective for the diagnosis of EPTB. Compared with culture, smear, and Xpert, histopathological examination showed higher sensitivity but lower specificity. The combination of histopathology with Xpert showed the best performance characteristics.
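The quoted percentages are mutually consistent with a split of 87 CRS-positive and 28 CRS-negative patients among the 115 enrolled (an inferred split, not stated in the abstract). For example, the histopathology row (85.06%, 82.14%) can be reproduced from hypothetical confusion-matrix counts:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts under the inferred 87-positive / 28-negative CRS split
sens, spec = sens_spec(tp=74, fn=13, tn=23, fp=5)
print(round(100 * sens, 2), round(100 * spec, 2))  # matches the histopathology row
```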
This study presents the design of a modified attributed control chart based on a double sampling (DS) np chart applied in combination with generalized multiple dependent state (GMDS) sampling to monitor the mean life of the product based on the time-truncated life test employing the Weibull distribution. The control chart developed supports the examination of the mean lifespan variation for a particular product in the process of manufacturing. Three control limit levels are used: the warning control limit, inner control limit, and outer control limit. Together, they enhance the capability for variation detection. A genetic algorithm can be used for optimization during the in-control process, whereby the optimal parameters can be established for the proposed control chart. The control chart performance is assessed using the average run length, while the influence of the model parameters upon the control chart solution is assessed via sensitivity analysis based on an orthogonal experimental design with multiple linear regression. A comparative study was conducted based on the out-of-control average run length, in which the developed control chart offered greater sensitivity in the detection of process shifts while making use of smaller samples on average than is the case for existing control charts. Finally, to exhibit the utility of the developed control chart, this paper presents its application using simulated data with parameters drawn from the real set of data.
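The average run length (ARL) criterion has a simple baseline: for a chart whose samples signal independently with probability p, ARL = 1/p. A quick simulation check of that identity (a generic memoryless chart, not the GMDS chart itself):

```python
import random

random.seed(1)

def run_length(p, rng):
    """Number of samples until the first signal, each signaling with prob p."""
    n = 1
    while rng.random() >= p:
        n += 1
    return n

p = 0.05
trials = 20000
arl_sim = sum(run_length(p, random) for _ in range(trials)) / trials
print(round(arl_sim, 1))  # close to 1/p = 20
```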
The study of machine learning has revealed that it can unleash new applications in a variety of disciplines. However, many limitations restrict the expressiveness of data-driven machine learning (ML) and deep learning (DL) techniques, and researchers are working to overcome them to fully exploit their power. Data imbalance presents major hurdles for classification and prediction problems in machine learning, restricting data analytics and the acquisition of relevant insights in practically all real-world research domains. In visual learning, network information security, failure prediction, digital marketing, healthcare, and a variety of other domains, raw data suffer from a biased distribution of one class over the other. This article aims to present a taxonomy of the approaches for handling imbalanced data problems, together with a comparative study on classification metrics and their application areas. We have explored very recent trends in techniques employed to solve class imbalance problems in datasets and have also discussed their limitations. This article has also identified open challenges for further research in the direction of class data imbalance.
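The simplest entry in any imbalance-handling taxonomy is random oversampling, which duplicates minority samples until the classes match. A minimal sketch:

```python
import random

def random_oversample(samples, labels, rng):
    """Duplicate minority-class samples until every class matches the
    majority class size -- the simplest class-imbalance remedy."""
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        extra = [rng.choice(xs) for _ in range(target - len(xs))]
        for x in xs + extra:
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y

rng = random.Random(0)
X = list(range(12))
y = ["majority"] * 10 + ["minority"] * 2
Xb, yb = random_oversample(X, y, rng)
print(yb.count("majority"), yb.count("minority"))  # balanced classes
```

Methods like SMOTE or ADASYN refine this by synthesizing new minority points instead of duplicating existing ones.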
We conduct an experimental study, supported by theoretical analysis, of single-pulse laser ablation of copper to investigate the interactions between laser and material at different sample temperatures, and to predict the changes in ablation morphology and lattice temperature. To investigate the effect of sample temperature on femtosecond laser processing, we conduct experiments and simulate the thermal behavior of femtosecond laser irradiation of copper using a two-temperature model. The simulation results show that both the electron peak temperature and the relaxation time needed to reach equilibrium increase as the initial sample temperature rises. When the sample temperature rises from 300 K to 600 K, the maximum lattice temperature of the copper surface increases by about 6500 K under femtosecond laser irradiation, and the ablation depth increases by 20%. The simulated ablation depths follow the same general trend as the experimental values. This work provides a theoretical basis and technical support for developing femtosecond laser processing of metal materials.
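The two-temperature model couples electron and lattice temperatures through an electron-phonon term. A zero-dimensional explicit-Euler sketch with invented parameter values (real simulations use temperature-dependent coefficients, realistic source terms, and spatial diffusion):

```python
# dTe/dt = (-G*(Te - Tl) + S(t)) / Ce ;  dTl/dt = G*(Te - Tl) / Cl
Ce, Cl = 2.0e4, 3.4e6   # electron / lattice heat capacities (J m^-3 K^-1, toy values)
G = 1.0e17              # electron-phonon coupling (W m^-3 K^-1, toy value)

Te = Tl = 300.0         # start at room temperature
dt = 1e-15              # 1 fs time step
for step in range(20000):
    S = 2.0e22 if step < 100 else 0.0   # 100 fs rectangular "pulse" (toy source)
    dTe = (-G * (Te - Tl) + S) / Ce * dt
    dTl = (G * (Te - Tl)) / Cl * dt
    Te += dTe
    Tl += dTl

print(round(Te), round(Tl))  # electrons and lattice nearly equilibrated
```

The electrons heat first (small Ce), then hand energy to the lattice over roughly the electron-phonon relaxation time.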
The Ailaoshan Orogen in the southeastern Tibet Plateau, situated between the Yangtze and Simao blocks, underwent a complex structural, magmatic, and metamorphic evolution, resulting in different tectonic subzones with varying structural lineaments and elemental concentrations. These elemental patterns can conceal or weaken anomalies because of the mutual influence between different anomaly areas. Dividing the whole zone into subzones based on tectonic settings, ore cluster areas, or sample catchment basins (Scb), geochemical and structural anomalies associated with gold (Au) mineralization have been identified utilizing the mean plus twice the standard deviation (Mean+2STD), factor analysis (FA), concentration-area (C-A) modeling of stream sediment geochemical data, and lineament density in both the Ailaoshan Orogen and the individual subzones. The FA in the 98 divided Scbs, 6 of which contain Au deposits, can roughly ascertain unknown rock types, identify specific element associations of known rocks, and discern porphyry- or skarn-type Au mineralization. Compared with the Mean+2STD and C-A methods applied to data from the whole orogen, which mistake anomalies for background or treat background as anomalies, the combined FA and C-A methods in the separate subzones or Scbs work well in regional metallogenic potential analysis. Mapping of lineament densities with a 10-km circle diameter is not suitable for locating Au deposits because it delineates large areas of medium-high lineament density. In contrast, the use of circle diameters of 1.3 km or 1.7 km at the ore cluster scale delineates areas with a higher concentration of lineament density, consistent with the locations of known Au deposits. By analyzing the map of faults and Au anomalies, two potential prospecting targets, Scbs 1 and 63, with sandstone as a potential host rock for Au, have been identified in the Ailaoshan Orogen. The use of combined methods in the divided subzones proved to be more effective in improving geological understanding and identifying mineralization anomalies associated with Au than analyzing the entire large area.
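The Mean+2STD anomaly threshold mentioned above is directly computable per subzone. A sketch with invented Au concentrations:

```python
import statistics

def anomaly_threshold(values):
    """Mean + 2 * (sample) standard deviation threshold for geochemical anomalies."""
    return statistics.mean(values) + 2 * statistics.stdev(values)

# Hypothetical Au concentrations (ppb) in stream sediments of one subzone
au_ppb = [2.1, 2.5, 1.8, 2.2, 2.0, 2.4, 9.5]   # one anomalous basin
thr = anomaly_threshold(au_ppb)
anomalous = [v for v in au_ppb if v > thr]
print(round(thr, 2), anomalous)
```

Because the threshold is computed within the subzone, a single strong value stands out; pooled over a whole orogen, the same value could vanish into a higher background, which is the motivation for the subdivision.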
Perovskite solar cells (PSCs) have developed tremendously over the past decade. However, the key factors influencing the power conversion efficiency (PCE) of PSCs remain incompletely understood, due to the complexity and coupling of the structural and compositional parameters. In this research, we demonstrate an effective approach to optimizing PSC performance via machine learning (ML). To address challenges posed by limited samples, we propose a feature mask (FM) method, which augments training samples through feature transformation rather than synthetic data. Using this approach, a squeeze-and-excitation residual network (SEResNet) model achieves an accuracy with a root-mean-square error (RMSE) of 0.833% and a Pearson's correlation coefficient (r) of 0.980. Furthermore, we employ the permutation importance (PI) algorithm to investigate the key features for PCE. Subsequently, we predict PCE through high-throughput screenings, in which we study the relationship between PCE and chemical compositions. After that, we conduct experiments to validate the consistency between the results predicted by ML and the experimental results. In this work, ML demonstrates the capability to predict device performance, extract key parameters from complex systems, and accelerate the transition from laboratory findings to commercial applications.
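The two reported accuracy metrics, RMSE and Pearson's r, can be sketched in a few lines; the PCE values below are invented for illustration:

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def pearson_r(x, y):
    """Pearson's correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy PCE values (%): measured vs. model-predicted (invented numbers)
measured = [18.2, 20.1, 21.5, 19.0, 22.3]
predicted = [18.0, 20.4, 21.1, 19.3, 22.0]
print(round(rmse(measured, predicted), 3), round(pearson_r(measured, predicted), 3))
```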
Identifying rare patterns for medical diagnosis is a challenging task due to heterogeneity and the volume of data. Data summarization can create a concise version of the original data that can be used for effective diagnosis. In this paper, we propose an ensemble summarization method that combines clustering and sampling to create a summary of the original data that ensures the inclusion of rare patterns. To the best of our knowledge, there has been no such technique available to augment the performance of anomaly detection techniques and simultaneously increase the efficiency of medical diagnosis. The performance of popular anomaly detection algorithms increases significantly in terms of accuracy and computational complexity when the summaries are used. Therefore, medical diagnosis becomes more effective, and our experimental results reflect that the combination of the proposed summarization scheme and all underlying algorithms used in this paper outperforms the most popular anomaly detection techniques.
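A minimal sketch of the clustering-plus-sampling idea: stratify the sampling budget by cluster so that small (rare) clusters are guaranteed representation in the summary. The cluster labels here are assumed given, e.g. from a prior k-means step; this is a generic stratified sampler, not the paper's exact scheme.

```python
import random

def summarize(points, labels, per_cluster, rng):
    """Cluster-stratified summary: sample up to `per_cluster` points from each
    cluster, so rare clusters survive the data reduction."""
    clusters = {}
    for p, c in zip(points, labels):
        clusters.setdefault(c, []).append(p)
    summary = []
    for members in clusters.values():
        k = min(per_cluster, len(members))
        summary += rng.sample(members, k)
    return summary

rng = random.Random(7)
points = list(range(103))
labels = ["common"] * 100 + ["rare"] * 3   # hypothetical cluster assignments
summary = summarize(points, labels, per_cluster=10, rng=rng)
print(len(summary))  # 10 common + 3 rare points
```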
Investigating the ignition response of nitrate ester plasticized polyether (NEPE) propellant under dynamic extrusion loading is of great significance for at least two reasons. Firstly, it helps to understand the mechanism and conditions of unwanted ignition inside a propellant charge under accidental stimulus. Secondly, it helps to evaluate the risk of a shell crevice in a solid rocket motor (SRM) under a falling or overturning scenario. In the present study, an innovative visual crevice extrusion experiment is designed using a drop-weight apparatus. The dynamic responses of NEPE propellant during extrusion loading, including compaction and compression, rapid shear flow into the crevice, stress concentration, and ignition reaction, have been observed for the first time using a high-performance high-speed camera. The ignition reaction is observed in the triangular region of the NEPE propellant sample above the crevice when the drop-weight velocity is 1.90 m/s. Based on the user material subroutine interface UMAT provided by the finite element software LS-DYNA, a viscoelastic-plastic model and a dual ignition criterion related to plastic shear dissipation are developed and applied to the local ignition response analysis under crevice extrusion conditions. Stress concentration occurs at the crevice location of the propellant sample, where the shear stress and the effective plastic work are relatively large, so an ignition reaction occurs easily there. When the sample thickness decreases from 5 mm to 2.5 mm, the shear stress increases from 22.3 MPa to 28.6 MPa, the time to reach the critical effective plastic work required for ignition is shortened from 1280 μs to 730 μs, and an ignition reaction is more easily triggered in the triangular area. A propellant sample with a small thickness is more prone to stress concentration, resulting in large shear stress and effective plastic work, triggering an ignition reaction.
The rapid advancement and broad application of machine learning (ML) have driven a groundbreaking revolution in computational biology. One of the most cutting-edge and important applications of ML is its integration with molecular simulations to improve the sampling efficiency of the vast conformational space of large biomolecules. This review focuses on recent studies that utilize ML-based techniques in the exploration of the protein conformational landscape. We first highlight the recent development of ML-aided enhanced sampling methods, including heuristic algorithms and neural networks that are designed to refine the selection of reaction coordinates for the construction of bias potentials, or to facilitate the exploration of unsampled regions of the energy landscape. Further, we review the development of autoencoder-based methods that combine molecular simulations and deep learning to expand the search for protein conformations. Lastly, we discuss cutting-edge methodologies for the one-shot generation of protein conformations with precise Boltzmann weights. Collectively, this review demonstrates the promising potential of machine learning in revolutionizing our insight into the complex conformational ensembles of proteins.
Abstract: The aim of this study is to investigate the impacts of the sampling strategies for landslides and non-landslides on the performance of landslide susceptibility assessment (LSA). The study area is the Feiyun catchment in Wenzhou City, Southeast China. Two types of landslide samples, combined with seven non-landslide sampling strategies, resulted in a total of 14 scenarios. The corresponding landslide susceptibility map (LSM) for each scenario was generated using the random forest model. The receiver operating characteristic (ROC) curve and statistical indicators were calculated and used to assess the impact of the dataset sampling strategy. The results showed that higher accuracies were achieved when using the landslide core as positive samples, combined with non-landslide sampling from the very low zone or the buffer zone. The results reveal the influence of landslide and non-landslide sampling strategies on the accuracy of LSA, which provides a reference for subsequent researchers aiming to obtain a more reasonable LSM.
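The scenario comparison described above can be sketched as follows. This is an illustrative stand-in using synthetic conditioning factors and scikit-learn, not the study's Feiyun data or code; the two negative-sampling stand-ins (well-separated "very low zone" points versus overlapping "buffer zone" points) are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical conditioning factors (e.g. slope, elevation, rainfall, NDVI)
landslides = rng.normal(1.0, 1.0, size=(n, 4))        # positive samples
very_low_zone = rng.normal(-2.0, 1.0, size=(n, 4))    # negatives far from landslides
buffer_zone = rng.normal(0.0, 1.0, size=(n, 4))       # negatives near landslides

def auc_for(negatives):
    """Train a random forest on one sampling scenario and return its test AUC."""
    X = np.vstack([landslides, negatives])
    y = np.array([1] * n + [0] * n)
    Xtr, Xte, ytr, yte = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
    return roc_auc_score(yte, rf.predict_proba(Xte)[:, 1])

print(f"AUC with very-low-zone negatives: {auc_for(very_low_zone):.3f}")
print(f"AUC with buffer-zone negatives:   {auc_for(buffer_zone):.3f}")
```

On synthetic data like this, well-separated negatives yield higher apparent AUC, which mirrors why the choice of non-landslide sampling zone changes the reported accuracy of an LSM.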
Funding: Supported by the National Natural Science Foundation of China (52075420) and the National Key Research and Development Program of China (2020YFB1708400).
Abstract: With its generality and practicality, the combination of partial charging curves and machine learning (ML) for battery capacity estimation has attracted widespread attention. However, a clear classification, fair comparison, and performance rationalization of these methods are lacking, because the existing studies are scattered. To address these issues, we develop 20 capacity estimation methods from three perspectives: charging sequence construction, input forms, and ML models. A total of 22,582 charging curves are generated from 44 cells with different battery chemistries and operating conditions to validate the performance. Through a comprehensive and unbiased comparison, the long short-term memory (LSTM) based neural network exhibits the best accuracy and robustness. Across all 6503 tested samples, the mean absolute percentage error (MAPE) for capacity estimation using LSTM is 0.61%, with a maximum error of only 3.94%. Even with the addition of 3 mV voltage noise or the extension of sampling intervals to 60 s, the average MAPE remains below 2%. Furthermore, the charging sequences are given physical explanations related to battery degradation to enhance confidence in their application. Recommendations for using other competitive methods are also presented. This work provides valuable insights and guidance for estimating battery capacity based on partial charging curves.
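The error figures quoted above follow the standard MAPE and maximum-absolute-percentage-error definitions, which can be computed as below. The capacity values here are made up for illustration, not taken from the study's 6503 samples.

```python
import numpy as np

true_capacity = np.array([2.00, 1.95, 1.90, 1.85])   # Ah, hypothetical ground truth
pred_capacity = np.array([2.01, 1.93, 1.91, 1.80])   # hypothetical LSTM estimates

# Absolute percentage error per sample, then mean (MAPE) and maximum
ape = np.abs(pred_capacity - true_capacity) / true_capacity * 100
mape = ape.mean()
max_err = ape.max()
print(f"MAPE: {mape:.2f}%  max error: {max_err:.2f}%")
```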
Funding: Supported by Colciencias doctoral funding (727-2015) and Universidad del Rosario, through a teaching assistantship and a doctoral grant.
Abstract: Data gaps and biases are two important issues that affect the quality of biodiversity information and downstream results. Understanding how best to fill existing gaps and account for biases is necessary to improve our current information most effectively. Two current main approaches for obtaining and improving data include (1) curation of biological collections, and (2) fieldwork. However, the comparative effectiveness of these approaches in improving biodiversity data remains little explored. We used the Flora de Bogota project to study the magnitude of change in species richness, spatial coverage, and sample coverage of plant records based on curation versus fieldwork. The process of curation resulted in a decrease in species richness (synonym and error removal), but it significantly increased the number of records per species. Fieldwork contributed to a slight increase in species richness, via accumulation of new records. Additionally, curation led to increases in spatial coverage, species observed by locality, the number of plant records by species, and localities by species compared to fieldwork. Overall, curation was more efficient in producing new information compared to fieldwork, mainly because of the large number of records available in herbaria. We recommend intensive curatorial work as the first step in increasing biodiversity data quality and quantity, to identify bias and gaps at the regional scale that can then be targeted with fieldwork. This stepwise strategy would enable fieldwork to be planned more cost-effectively given the limited resources for biodiversity exploration and characterization.
Funding: Supported by the Platform Development Foundation of the China Institute for Radiation Protection (No. YP21030101), the National Natural Science Foundation of China (General Program) (Nos. 12175114 and U2167209), the National Key R&D Program of China (No. 2021YFF0603600), and the Tsinghua University Initiative Scientific Research Program (No. 20211080081).
Abstract: Global variance reduction is a bottleneck in Monte Carlo shielding calculations. The global variance reduction problem requires that the statistical error be uniform over the entire space. This study proposed a grid-AIS method for the global variance reduction problem based on the AIS method, implemented in the Monte Carlo program MCShield. The proposed method was validated using the VENUS-Ⅲ international benchmark problem and a self-shielding calculation example. The results from the VENUS-Ⅲ benchmark problem showed that the grid-AIS method achieved a significant reduction in the variance of the statistical errors of the MESH grids, decreasing from 1.08×10^(-2) to 3.84×10^(-3), a 64.00% reduction. This demonstrates that the grid-AIS method is effective for global problems. The results of the self-shielding calculation demonstrate that the grid-AIS method produces accurate computational results. Moreover, the grid-AIS method exhibited a computational efficiency approximately one order of magnitude higher than that of the AIS method and approximately two orders of magnitude higher than that of the conventional Monte Carlo method.
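This is not the MCShield implementation, but the importance-sampling idea underlying AIS-style variance reduction can be illustrated on a toy rare-event problem: estimating a Gaussian tail probability, where plain Monte Carlo wastes almost all samples and a shifted, reweighted proposal concentrates them where they matter.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
N, a = 100_000, 4.0

# Plain Monte Carlo: the tail event X > 4 is so rare that almost no samples hit it.
x = rng.standard_normal(N)
plain = (x > a).mean()

# Importance sampling: draw from the shifted density N(a, 1) and reweight
# each sample by the density ratio p(y) / q(y).
y = rng.normal(a, 1.0, size=N)
w = norm.pdf(y) / norm.pdf(y, loc=a)
is_est = np.mean((y > a) * w)

exact = norm.sf(a)
print(f"exact {exact:.3e}  plain MC {plain:.3e}  importance sampling {is_est:.3e}")
```

The grid-AIS idea extends this by adapting the sampling so that statistical errors become uniform across spatial grids, rather than accurate only near the source.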
Funding: Supported by the National Natural Science Foundation of China (Nos. 12125509, 12222514, 11961141003, and 12005304), the National Key Research and Development Project (No. 2022YFA1602301), the CAST Young Talent Support Plan, the CNNC Science Fund for Talented Young Scholars, and continuous support from basic scientific research projects.
Abstract: The Moon provides a unique environment for investigating nearby astrophysical events such as supernovae. Lunar samples retain valuable information from these events, via detectable long-lived "fingerprint" radionuclides such as ^(60)Fe. In this work, we stepped up the development of an accelerator mass spectrometry (AMS) method for detecting ^(60)Fe using the HI-13 tandem accelerator at the China Institute of Atomic Energy (CIAE). Since interferences could not be sufficiently removed solely with the existing magnetic systems of the tandem accelerator and the following Q3D magnetic spectrograph, a Wien filter with a maximum voltage of ±60 kV and a maximum magnetic field of 0.3 T was installed after the accelerator magnetic systems to lower the detection background for the low-abundance nuclide ^(60)Fe. A 1 μm thick Si_(3)N_(4) foil was installed in front of the Q3D as an energy degrader. For particle detection, a multi-anode gas ionization chamber was mounted at the center of the focal plane of the spectrograph. Finally, an ^(60)Fe sample with an abundance of 1.125×10^(-10) was used to test the new AMS system. The results indicate that ^(60)Fe can be clearly distinguished from the isobar ^(60)Ni. The sensitivity was assessed to be better than 4.3×10^(-14) based on blank-sample measurements lasting 5.8 h, and could, in principle, be expected to reach approximately 2.5×10^(-15) if data were accumulated for 100 h, which is feasible for future lunar sample measurements because the main contaminants were sufficiently separated.
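As a quick consistency check, the two quoted sensitivities match a simple inverse scaling with counting time. The 1/t assumption (background-limited measurement) is mine, not stated in the abstract, but the arithmetic below reproduces the projected figure.

```python
# Sensitivity quoted after 5.8 h of blank measurements
sens_5p8h = 4.3e-14

# Assuming (hedged) a background-limited sensitivity that improves as 1/t:
projected_100h = sens_5p8h * (5.8 / 100)
print(f"projected 100 h sensitivity: {projected_100h:.2e}")
```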
Abstract: How to use a few defect samples to complete defect classification is a key challenge in the production of mobile phone screens. An attention-relation network for mobile phone screen defect classification is proposed in this paper. The architecture of the attention-relation network contains two modules: a feature extraction module and a feature metric module. Unlike other few-shot models, our model applies an attention mechanism to metric learning to measure the distance between features, so as to attend to the correlation between features and suppress unwanted information. Besides, we combine dilated convolution and skip connections to extract more feature information for follow-up processing. We validate the attention-relation network on a mobile phone screen defect dataset. The experimental results show that the classification accuracy of the attention-relation network is 0.9486 under the 5-way 1-shot training strategy and 0.9039 under the 5-way 5-shot setting. It classifies mobile phone screen defects effectively and outperforms competing methods by a clear margin.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 61867004) and the Youth Fund of the National Natural Science Foundation of China (Grant No. 41801288).
Abstract: The purpose of software defect prediction is to identify defect-prone code modules to assist software quality assurance teams with the appropriate allocation of resources and labor. In previous software defect prediction studies, transfer learning was effective in solving the problem of inconsistent project data distributions. However, target projects often lack sufficient data, which affects the performance of the transfer learning model. In addition, the presence of uncorrelated features between projects can decrease the prediction accuracy of the transfer learning model. To address these problems, this article proposes a software defect prediction method based on stable learning (SDP-SL) that combines code visualization techniques and residual networks. This method first transforms code files into code images using code visualization techniques and then constructs a defect prediction model based on these code images. During the model training process, target project data are not required as prior knowledge. Following the principles of stable learning, the method dynamically adjusts the weights of source project samples to eliminate dependencies between features, thereby capturing the "invariance mechanism" within the data. This approach explores the genuine relationship between code defect features and labels, thereby enhancing defect prediction performance. To evaluate the performance of SDP-SL, we conducted comparative experiments on 10 open-source projects in the PROMISE dataset. The experimental results demonstrated that, in terms of the F-measure, the proposed SDP-SL method outperformed other within-project defect prediction methods by 2.11%-44.03%. In cross-project defect prediction, the SDP-SL method provided an improvement of 5.89%-25.46% in prediction performance compared to other cross-project defect prediction methods. Therefore, SDP-SL can effectively enhance within- and cross-project defect prediction.
Funding: Supported by the National Natural Science Foundation of China (61771372, 61771367, 62101494), the National Outstanding Youth Science Fund Project (61525105), the Shenzhen Science and Technology Program (KQTD20190929172704911), and the Aeronautical Science Foundation of China (2019200M1001).
Abstract: In electromagnetic countermeasure circumstances, synthetic aperture radar (SAR) imagery usually suffers severe quality degradation from modulated interrupt-sampling repeater jamming (MISRJ), which usually has considerable coherence with the SAR transmission waveform together with periodic modulation patterns. This paper develops an MISRJ suppression algorithm for SAR imagery based on online dictionary learning. In the algorithm, the temporal properties of the jamming modulation are exploited by extracting and sorting MISRJ slices using fast-time autocorrelation. Online dictionary learning is then applied to separate real signals from jamming slices. Under the learned representation, time-varying MISRJs are suppressed effectively. Both simulated and real-measured SAR data are used to confirm the advantages over traditional methods in suppressing time-varying MISRJs.
Abstract: The emergence of digital networks and the wide adoption of information on internet platforms have given rise to threats against users' private information. Many intruders actively seek such private data either for sale or for other inappropriate purposes. Similarly, national and international organizations hold country-level and company-level private information that could be accessed through different network attacks. Therefore, a Network Intruder Detection System (NIDS) becomes essential for protecting these networks and organizations. In the evolution of NIDS, Artificial Intelligence (AI) assisted tools and methods have been widely adopted to provide effective solutions. However, the development of NIDS still faces challenges at the dataset and machine learning levels, such as large deviations in numeric features, the presence of numerous irrelevant categorical features requiring cardinality reduction, and class imbalance in multiclass-level data. To address these challenges and offer a unified solution to NIDS development, this study proposes a novel framework that preprocesses datasets and applies a Box-Cox transformation to linearly transform the numeric features and bring them into closer alignment. Cardinality reduction was applied to categorical features through the binning method. Subsequently, the class-imbalanced dataset was addressed using the adaptive synthetic sampling data generation method. Finally, the preprocessed, refined, and oversampled feature set was divided into training and test sets with an 80-20 ratio, and two experiments were conducted. In Experiment 1, binary classification was executed using four machine learning classifiers, with the extra trees classifier achieving the highest accuracy of 97.23% and an AUC of 0.9961. In Experiment 2, multiclass classification was performed, and the extra trees classifier again emerged as the most effective, achieving an accuracy of 81.27% and an AUC of 0.97. The results were evaluated based on training, testing, and total time, and a comparative analysis with state-of-the-art studies proved the robustness and significance of the applied methods in developing a timely and precision-efficient solution for NIDS.
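A minimal sketch of the preprocessing chain described above (Box-Cox on a skewed numeric feature, cardinality binning on a categorical feature, an 80-20 split, and an extra trees classifier), run on entirely synthetic data. The adaptive synthetic oversampling step is omitted here so the sketch depends only on NumPy, SciPy, and scikit-learn; all features, bin edges, and labels are invented.

```python
import numpy as np
from scipy.stats import boxcox
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
skewed = rng.lognormal(0.0, 1.0, size=n)               # strongly skewed numeric feature
category = rng.integers(0, 50, size=n)                 # high-cardinality categorical
y = (np.log(skewed) + rng.normal(0, 1, n) > 0.5).astype(int)

transformed, _ = boxcox(skewed)                        # Box-Cox linearization
binned = np.digitize(category, bins=[10, 20, 30, 40])  # 50 categories -> 5 bins

X = np.column_stack([transformed, binned])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2,
                                      random_state=0, stratify=y)
clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
print(f"binary AUC on synthetic data: {auc:.3f}")
```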
Abstract: In this paper, we establish a new multivariate Hermite sampling series involving samples from the function itself and its mixed and non-mixed partial derivatives of arbitrary order. This multivariate form of Hermite sampling is valid for some classes of multivariate entire functions satisfying certain growth conditions. We show that many known results, including those in Commun Korean Math Soc, 2002, 17: 731-740, Turk J Math, 2017, 41: 387-403, and Filomat, 2020, 34: 3339-3347, are special cases of our results. Moreover, we estimate the truncation error of this sampling based on localized sampling without decay assumptions. Illustrative examples are also presented.
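For orientation, the classical univariate special case, which uses samples of the function and its first derivative at uniformly spaced nodes, is usually stated as below. This is the well-known one-dimensional Hermite (derivative) sampling series for functions of exponential type $2\sigma$, written from the standard literature rather than taken from the paper itself:

```latex
f(t) \;=\; \sum_{n=-\infty}^{\infty}
\left[\, f(t_n) + (t - t_n)\, f'(t_n) \,\right]
\frac{\sin^{2}\!\big(\sigma (t - t_n)\big)}{\sigma^{2}\,(t - t_n)^{2}},
\qquad t_n = \frac{n\pi}{\sigma}.
```

The paper's series generalizes this to several variables and to mixed partial derivatives of arbitrary order.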
Funding: Supported by the Research Foundation of CLEP of China (Grant No. TY3Q20110003).
Abstract: The encapsulation of lunar samples is a core research area in the third phase of the Chinese Lunar Exploration Program. The seal assembly, the opening and closing mechanism (OCM), and the locking mechanism are the core components of the encapsulation device for the lunar samples, and the requirements of a tight seal, light weight, and low power make the design of these core components difficult. In this study, a combined sealing assembly, OCM, and locking mechanism were investigated for the device. The sealing architecture consists of rubber and an Ag-In alloy, and a theory was developed to analyze the seal. Experiments on the electroplated Au coating on the knife-edge revealed that the hermetic seal can be significantly improved. The driving principle for coaxial double-helical pairs was investigated and used to design the OCM. Moreover, a locking mechanism was created using an electric initiating explosive device with orifice damping. By optimizing the design, the output parameters were adjusted to meet the requirements of the lunar explorer. The experimental results showed that the helium leak rate of the test pieces was not more than 5×10^(-11) Pa·m^(3)·s^(-1), the minimum power of the OCM was 0.3 W, and the total weight of the principle prototype was 2.9 kg. The explosive-driven locking mechanism has low impact. This investigation solved the difficulties in achieving a tight seal, light weight, and low power for the lunar explorer, and the results can also be used to explore other extraterrestrial objects in the future.
Funding: Funded by grants from the National Key Research and Development Program of China (2021YFC2301503, 2022YFC2302900) and the National Natural Science Foundation of China (82171739, 82171815, 81873884).
Abstract: Objective: To evaluate the diagnostic value of histopathological examination of ultrasound-guided puncture biopsy samples in extrapulmonary tuberculosis (EPTB). Methods: This study was conducted at the Shanghai Public Health Clinical Center. A total of 115 patients underwent ultrasound-guided puncture biopsy, followed by MGIT 960 culture (culture), smear, GeneXpert MTB/RIF (Xpert), and histopathological examination. These assays were evaluated for their effectiveness in diagnosing EPTB against two different diagnostic criteria: liquid culture and the composite reference standard (CRS). Results: When CRS was used as the reference standard, the sensitivity and specificity of culture, smear, Xpert, and histopathological examination were (44.83%, 89.29%), (51.72%, 89.29%), (70.11%, 96.43%), and (85.06%, 82.14%), respectively. Against liquid culture, the sensitivity and specificity of smear, Xpert, and pathological examination were (66.67%, 72.60%), (83.33%, 63.01%), and (92.86%, 45.21%), respectively. Histopathological examination showed the highest sensitivity but the lowest specificity. Further, we found that the combination of Xpert and histopathological examination showed a sensitivity of 90.80% and a specificity of 89.29%. Conclusion: Ultrasound-guided puncture sampling is safe and effective for the diagnosis of EPTB. Compared with culture, smear, and Xpert, histopathological examination showed higher sensitivity but lower specificity. The combination of histopathology with Xpert showed the best performance characteristics.
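The paired (sensitivity, specificity) figures above follow the usual confusion-matrix definitions. The counts below are inferred for illustration only (chosen so that 87 CRS-positive plus 28 CRS-negative patients total 115 and reproduce the histopathology row); they are not taken from the study.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP), both in percent."""
    return tp / (tp + fn) * 100, tn / (tn + fp) * 100

# Hypothetical counts consistent with the histopathology row under CRS
sens, spec = sens_spec(tp=74, fn=13, tn=23, fp=5)
print(f"sensitivity {sens:.2f}%  specificity {spec:.2f}%")
```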
Funding: Supported by the Science, Research and Innovation Promotion Funding (TSRI) (Grant No. FRB660012/0168), managed under Rajamangala University of Technology Thanyaburi (FRB66E0646O.4).
Abstract: This study presents the design of a modified attribute control chart based on a double sampling (DS) np chart applied in combination with generalized multiple dependent state (GMDS) sampling to monitor the mean life of a product, based on a time-truncated life test employing the Weibull distribution. The control chart developed supports the examination of mean-lifespan variation for a particular product in the manufacturing process. Three control limit levels are used: the warning control limit, the inner control limit, and the outer control limit. Together, they enhance the capability for variation detection. A genetic algorithm can be used for optimization during the in-control process, whereby the optimal parameters can be established for the proposed control chart. The control chart performance is assessed using the average run length, while the influence of the model parameters on the control chart solution is assessed via sensitivity analysis based on an orthogonal experimental design with multiple linear regression. A comparative study was conducted based on the out-of-control average run length, in which the developed control chart offered greater sensitivity in detecting process shifts while using smaller samples on average than existing control charts. Finally, to exhibit the utility of the developed control chart, this paper presents its application using simulated data with parameters drawn from a real dataset.
Abstract: The study of machine learning has revealed that it can unleash new applications in a variety of disciplines. However, many limitations restrict its expressiveness, and researchers are working to overcome them to fully exploit the power of data-driven machine learning (ML) and deep learning (DL) techniques. Data imbalance presents major hurdles for classification and prediction problems in machine learning, restricting data analytics and the acquisition of relevant insights in practically all real-world research domains. In visual learning, network information security, failure prediction, digital marketing, healthcare, and a variety of other domains, raw data suffer from a biased distribution of one class over another. This article presents a taxonomy of approaches for handling imbalanced-data problems, together with a comparative study of them on classification metrics and their application areas. We explore very recent techniques employed to solve class imbalance problems in datasets and also discuss their limitations. This article also identifies open challenges for further research on class imbalance in data.
Funding: Project supported by the National Key Research and Development Program of China (Grant No. 2019YFA0307701) and the National Natural Science Foundation of China (Grant Nos. 11674128, 11674124, and 11974138).
Abstract: We conduct an experimental study, supported by theoretical analysis, of a single laser pulse ablating copper, to investigate the interactions between the laser and the material at different sample temperatures and to predict the changes in ablation morphology and lattice temperature. To investigate the effect of sample temperature on femtosecond laser processing, we conduct experiments on, and simulate the thermal behavior of, femtosecond laser irradiation of copper using a two-temperature model. The simulation results show that both the electron peak temperature and the relaxation time needed to reach equilibrium increase as the initial sample temperature rises. When the sample temperature rises from 300 K to 600 K, the maximum lattice temperature of the copper surface increases by about 6500 K under femtosecond laser irradiation, and the ablation depth increases by 20%. The simulated ablation depths follow the same general trend as the experimental values. This work provides a theoretical basis and technical support for developing femtosecond laser processing of metal materials.
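The two-temperature model referred to above is conventionally written as the coupled equations below, where $T_e$ and $T_l$ are the electron and lattice temperatures, $C_e$ and $C_l$ their heat capacities, $k_e$ the electron thermal conductivity, $G$ the electron-phonon coupling factor, and $S$ the laser source term. This is the standard textbook form; the paper's exact source term and coefficient choices may differ:

```latex
C_e \frac{\partial T_e}{\partial t}
  = \nabla \cdot \left( k_e \nabla T_e \right) - G\,(T_e - T_l) + S(z, t),
\qquad
C_l \frac{\partial T_l}{\partial t} = G\,(T_e - T_l).
```

The femtosecond pulse heats the electrons first; the lattice then warms through the coupling term $G(T_e - T_l)$ over the relaxation time discussed in the abstract.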
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42125203 and 42102107), the National Key Research and Development Project of China (Grant No. 2020YFA0714802), the "Deep-time Digital Earth" Science and Technology Leading Talents Team Funds from the Central Universities for the Frontiers Science Center for Deep-time Digital Earth, China University of Geosciences (Beijing) (Grant No. 2652023001), and the 111 Project of the Ministry of Science and Technology (Grant No. BP0719021).
Abstract: The Ailaoshan Orogen in the southeastern Tibet Plateau, situated between the Yangtze and Simao blocks, underwent a complex structural, magmatic, and metamorphic evolution, resulting in tectonic subzones with varying structural lineaments and elemental concentrations. These elements can conceal or reduce anomalies due to the mutual effects between different anomaly areas. Dividing the whole zone into subzones based on tectonic settings, ore cluster areas, or sample catchment basins (Scb), geochemical and structural anomalies associated with gold (Au) mineralization have been identified utilizing the mean plus twice the standard deviation (Mean+2STD), factor analysis (FA), concentration-area (C-A) modeling of stream sediment geochemical data, and lineament density, both in the Ailaoshan Orogen as a whole and in the individual subzones. The FA in the 98 divided Scbs, 6 of which contain Au deposits, can roughly ascertain unknown rock types, identify specific element associations of known rocks, and discern porphyry- or skarn-type Au mineralization. Compared with the Mean+2STD and C-A methods applied to data from the whole orogen, which mistake anomalies for background or treat background as anomalies, the combined FA and C-A methods applied in the separate subzones or Scbs work well in regional metallogenic potential analysis. Mapping of lineament densities with a 10-km circle diameter is not suitable for locating Au deposits because it delineates large areas of medium-high lineament density. In contrast, circle diameters of 1.3 km or 1.7 km at the ore-cluster scale delineate areas with a higher concentration of lineament density, consistent with the locations of known Au deposits. By analyzing the map of faults and Au anomalies, two potential prospecting targets, Scbs 1 and 63, with sandstone as a potential host rock for Au, have been identified in the Ailaoshan Orogen. The combined methods applied in the divided subzones proved more effective in improving geological understanding and identifying mineralization anomalies associated with Au than analyzing the entire large area.
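The Mean+2STD rule mentioned above is simply a threshold on each element's concentration distribution. A toy example with invented Au concentrations:

```python
import numpy as np

# Invented Au concentrations (ppb) for ten stream-sediment samples
au = np.array([1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 6.4, 1.0, 0.7, 7.9])

# Anomaly threshold: mean plus twice the (sample) standard deviation
threshold = au.mean() + 2 * au.std(ddof=1)
anomalies = au[au > threshold]
print(f"threshold: {threshold:.2f} ppb, anomalies: {anomalies}")
```

Note that in this example the 6.4 ppb sample escapes detection because the strong values inflate the standard deviation itself; this masking behavior is one reason such simple thresholds are complemented by C-A fractal modeling and by subdividing the region, as the abstract describes.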
Funding: Supported by the National Key Research and Development Program (2022YFF0609504), the National Natural Science Foundation of China (61974126, 51902273, 62005230, 62001405), and the Natural Science Foundation of Fujian Province of China (No. 2021J06009).
Abstract: Perovskite solar cells (PSCs) have developed tremendously over the past decade. However, the key factors influencing the power conversion efficiency (PCE) of PSCs remain incompletely understood, due to the complexity and coupling of the structural and compositional parameters. In this research, we demonstrate an effective approach to optimizing PSC performance via machine learning (ML). To address the challenges posed by limited samples, we propose a feature mask (FM) method, which augments training samples through feature transformation rather than synthetic data. Using this approach, a squeeze-and-excitation residual network (SEResNet) model achieves an accuracy with a root-mean-square error (RMSE) of 0.833% and a Pearson's correlation coefficient (r) of 0.980. Furthermore, we employ the permutation importance (PI) algorithm to investigate the key features for PCE. Subsequently, we predict PCE through high-throughput screening, in which we study the relationship between PCE and chemical compositions. After that, we conduct experiments to validate the consistency between the results predicted by ML and the experimental results. In this work, ML demonstrates the capability to predict device performance, extract key parameters from complex systems, and accelerate the transition from laboratory findings to commercial applications.
Abstract: Identifying rare patterns for medical diagnosis is a challenging task due to the heterogeneity and volume of the data. Data summarization can create a concise version of the original data that can be used for effective diagnosis. In this paper, we propose an ensemble summarization method that combines clustering and sampling to create a summary of the original data that ensures the inclusion of rare patterns. To the best of our knowledge, no existing technique both augments the performance of anomaly detection techniques and simultaneously increases the efficiency of medical diagnosis. The performance of popular anomaly detection algorithms increases significantly in terms of accuracy and computational complexity when the summaries are used. Therefore, medical diagnosis becomes more effective, and our experimental results reflect that the combination of the proposed summarization scheme with any of the underlying algorithms used in this paper outperforms the most popular anomaly detection techniques.
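One way to realize "clustering plus sampling so that rare patterns survive" is a per-cluster sampling quota, sketched below on synthetic data. This is an assumption about the general approach, not the paper's algorithm: a uniform random sample would almost always miss a 1% rare group, whereas a fixed quota per cluster keeps it in the summary.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
common = rng.normal(0, 1, size=(990, 2))     # the bulk of the records
rare = rng.normal(8, 0.5, size=(10, 2))      # a rare pattern, far from the bulk
X = np.vstack([common, rare])

k = 10
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

# Sample a fixed quota from every cluster, so small (rare) clusters survive.
quota = 5
summary_idx = np.concatenate([
    rng.choice(np.flatnonzero(labels == c),
               size=min(quota, int((labels == c).sum())), replace=False)
    for c in range(k)
])
print(f"summary size: {summary_idx.size} of {len(X)} records")
```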
Funding: The authors thank the National Natural Science Foundation of China (U22B20131) and the State Key Laboratory of Explosion Science and Technology (QNKT23-10) for supporting this project.
Abstract: Investigating the ignition response of nitrate ester plasticized polyether (NEPE) propellant under dynamic extrusion loading is of great significance in at least two cases. Firstly, it helps to understand the mechanism and conditions of unwanted ignition inside a charged propellant under accidental stimulus. Secondly, it supports evaluating the risk of a shell crevice in a solid rocket motor (SRM) in a falling or overturning scenario. In the present study, an innovative visual crevice extrusion experiment was designed using a drop-weight apparatus. The dynamic responses of NEPE propellant during extrusion loading, including compaction and compression, rapid shear flow into the crevice, stress concentration, and ignition reaction, were observed for the first time using a high-performance high-speed camera. The ignition reaction is observed in the triangular region of the NEPE propellant sample above the crevice when the drop-weight velocity is 1.90 m/s. Based on the user material subroutine interface UMAT provided by the finite element software LS-DYNA, a viscoelastic-plastic model and a dual ignition criterion related to plastic shear dissipation are developed and applied to the local ignition response analysis under crevice extrusion conditions. Stress concentration occurs at the crevice location of the propellant sample, where the shear stress and the effective plastic work are relatively large, so an ignition reaction occurs easily. When the sample thickness decreases from 5 mm to 2.5 mm, the shear stress increases from 22.3 MPa to 28.6 MPa, the time to reach the critical value of effective plastic work required for ignition shortens from 1280 μs to 730 μs, and the triangular region more easily triggers an ignition reaction. A propellant sample with a small thickness is more prone to stress concentration, resulting in large shear stress and effective plastic work, and thus more readily triggers an ignition reaction.
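A plastic-dissipation ignition criterion of the kind mentioned above can be written generically as below. This is a hedged reconstruction of the usual effective-plastic-work form (accumulated plastic dissipation compared against a critical value, combined with a stress threshold), not the paper's exact dual criterion, whose thresholds $\tau_c$ and $W_c$ are calibrated from experiments:

```latex
W_p(t) \;=\; \int_0^{t} \boldsymbol{\sigma} : \dot{\boldsymbol{\varepsilon}}^{p}\, \mathrm{d}\tau,
\qquad
\text{ignition if } \tau_{\max} \ge \tau_c \ \text{ and } \ W_p \ge W_c.
```

Under this reading, thinner samples concentrate shear at the crevice, so $W_p$ accumulates faster and the time to reach the critical work drops, consistent with the 1280 μs to 730 μs shortening reported above.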
Funding: Project supported by the National Key Research and Development Program of China (Grant No. 2023YFF1204402), the National Natural Science Foundation of China (Grant Nos. 12074079 and 12374208), the Natural Science Foundation of Shanghai (Grant No. 22ZR1406800), and the China Postdoctoral Science Foundation (Grant No. 2022M720815).
Abstract: The rapid advancement and broad application of machine learning (ML) have driven a groundbreaking revolution in computational biology. One of the most cutting-edge and important applications of ML is its integration with molecular simulations to improve the sampling efficiency of the vast conformational space of large biomolecules. This review focuses on recent studies that utilize ML-based techniques in the exploration of the protein conformational landscape. We first highlight the recent development of ML-aided enhanced sampling methods, including heuristic algorithms and neural networks designed to refine the selection of reaction coordinates for the construction of bias potentials, or to facilitate the exploration of unsampled regions of the energy landscape. Further, we review the development of autoencoder-based methods that combine molecular simulations and deep learning to expand the search for protein conformations. Lastly, we discuss cutting-edge methodologies for the one-shot generation of protein conformations with precise Boltzmann weights. Collectively, this review demonstrates the promising potential of machine learning in revolutionizing our insight into the complex conformational ensembles of proteins.