Objective To evaluate the diagnostic value of histopathological examination of ultrasound-guided puncture biopsy samples in extrapulmonary tuberculosis (EPTB). Methods This study was conducted at the Shanghai Public Health Clinical Center. A total of 115 patients underwent ultrasound-guided puncture biopsy, followed by MGIT 960 culture (culture), smear, GeneXpert MTB/RIF (Xpert), and histopathological examination. These assays were evaluated for their effectiveness in diagnosing EPTB against two different reference standards: liquid culture and a composite reference standard (CRS). Results With CRS as the reference standard, the sensitivity and specificity were 44.83% and 89.29% for culture, 51.72% and 89.29% for smear, 70.11% and 96.43% for Xpert, and 85.06% and 82.14% for histopathological examination. With liquid culture as the reference standard, the sensitivity and specificity were 66.67% and 72.60% for smear, 83.33% and 63.01% for Xpert, and 92.86% and 45.21% for histopathological examination. Histopathological examination showed the highest sensitivity but the lowest specificity. Furthermore, the combination of Xpert and histopathological examination achieved a sensitivity of 90.80% and a specificity of 89.29%. Conclusion Ultrasound-guided puncture sampling is safe and effective for the diagnosis of EPTB. Compared with culture, smear, and Xpert, histopathological examination showed higher sensitivity but lower specificity. The combination of histopathology with Xpert showed the best overall performance.
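Sensitivity and specificity follow directly from the 2×2 confusion table. A minimal sketch; the per-assay counts below are inferred from the reported percentages and the implied 87 CRS-positive / 28 CRS-negative split, so they are illustrative rather than taken from the paper's tables:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Xpert vs. the composite reference standard (counts inferred from
# 70.11% sensitivity and 96.43% specificity over 87 positives / 28 negatives).
sens, spec = sensitivity_specificity(tp=61, fn=26, tn=27, fp=1)
print(round(100 * sens, 2), round(100 * spec, 2))  # 70.11 96.43
```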
The encapsulation of lunar samples is a core research area in the third phase of the Chinese Lunar Exploration Program. The seal assembly, opening and closing mechanism (OCM), and locking mechanism are the core components of the lunar sample encapsulation device, and the requirements of a tight seal, light weight, and low power make their design difficult. In this study, a combined sealing assembly, OCM, and locking mechanism were investigated for the device. The sealing architecture consists of rubber and an Ag-In alloy, and a theoretical model was built to analyze the seal. Experiments on the electroplated Au coating on the knife-edge revealed that the hermetic seal can be significantly improved. The driving principle for coaxial double-helical pairs was investigated and used to design the OCM. Moreover, a locking mechanism was created using an electric initiating explosive device with orifice damping. By optimizing the design, the output parameters were adjusted to meet the requirements of the lunar explorer. The experimental results showed that the helium leak rate of the test pieces was not more than 5×10^(-11) Pa·m^(3)·s^(-1), the minimum power of the OCM was 0.3 W, and the total weight of the principle prototype was 2.9 kg. The explosive-driven locking mechanism has low impact. This investigation solved the difficulties in achieving a tight seal, light weight, and low power for the lunar explorer, and the results can also be used to explore other extraterrestrial objects in the future.
Perovskite solar cells (PSCs) have developed tremendously over the past decade. However, the key factors influencing the power conversion efficiency (PCE) of PSCs remain incompletely understood, due to the complexity and coupling of their structural and compositional parameters. In this research, we demonstrate an effective approach to optimizing PSC performance via machine learning (ML). To address the challenges posed by limited samples, we propose a feature mask (FM) method, which augments training samples through feature transformation rather than synthetic data. Using this approach, a squeeze-and-excitation residual network (SEResNet) model achieves a root-mean-square error (RMSE) of 0.833% and a Pearson's correlation coefficient (r) of 0.980. Furthermore, we employ the permutation importance (PI) algorithm to identify the key features for PCE. Subsequently, we predict PCE through high-throughput screenings, in which we study the relationship between PCE and chemical composition. We then conduct experiments to validate the consistency between the ML predictions and the experimental results. In this work, ML demonstrates the capability to predict device performance, extract key parameters from complex systems, and accelerate the transition from laboratory findings to commercial applications.
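The two accuracy metrics quoted here, RMSE and Pearson's r, are standard and easy to compute; a minimal sketch with toy arrays (illustrative values, not the paper's data):

```python
import math

def rmse(y_true, y_pred):
    # Root-mean-square error between measured and predicted values.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def pearson_r(xs, ys):
    # Pearson's correlation coefficient between two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(rmse([20.1, 21.5, 19.8], [20.0, 21.9, 19.5]))  # small error, in % PCE
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))         # ~1.0 for a perfect linear relation
```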
The deep mining of coal resources is accompanied by severe environmental challenges and various potential engineering hazards. NPR (negative Poisson's ratio) bolts can effectively control large deformations in the surrounding rock. This paper focuses on the mechanical properties of the NPR bolt under static disturbance load. A deep nonlinear mechanical experimental system was used to study the mechanical behavior of rock samples with different anchoring types (unanchored / PR anchored / 2G NPR anchored) under static disturbance load. The whole failure process of the rock samples was recorded by a high-speed camera to obtain real-time failure characteristics, and the acoustic emission (AE) signal was collected to obtain key AE characteristic parameters such as count, energy, and frequency. The deformation at failure was calculated and analyzed by digital speckle software. The findings indicate that the failure mode of the rock is influenced by the anchoring type. Compared with the unanchored rock samples, the peak failure strength of the 2G NPR bolt anchored samples increases by 6.5%, the cumulative AE count and cumulative AE energy decrease by 62.16% and 62.90%, respectively, the maximum deformation at bearing capacity increases by 59.27%, and the failure time is delayed by 42.86%. Compared with rock anchored by PR (Poisson's ratio) bolts, the peak failure strength of the 2G NPR bolt anchored samples under static disturbance load increases by 5.94%, the cumulative AE count and cumulative AE energy decrease by 47.16% and 43.86%, respectively, the maximum deformation at bearing capacity increases by 50.43%, and the failure time is delayed by 32%. After anchoring with 2G NPR bolts, the support effectively reduces the risk of damage caused by static disturbance load. These results demonstrate that the support effect of 2G NPR bolt materials surpasses that of PR bolts.
Accurate and reliable fault detection is essential for the safe operation of electric vehicles. Support vector data description (SVDD) has been widely used in the field of fault detection. However, constructing the hypersphere boundary only describes the distribution of unlabeled samples; the distribution of faulty samples cannot be effectively described, and faulty data are easily missed because of the imbalance of the sample distribution. Meanwhile, parameter selection is critical to detection performance, and empirical parameterization is generally time-consuming and laborious and may not find the optimal parameters. Therefore, this paper proposes a semi-supervised data-driven method that improves the SVDD algorithm and achieves excellent fault detection performance. By incorporating faulty samples into the underlying SVDD model, training better handles the missed detection of faulty samples caused by the imbalanced distribution of abnormal samples, and the hypersphere boundary is modified to classify the samples more accurately. A Bayesian Optimization NSVDD (BO-NSVDD) model was constructed to quickly and accurately optimize hyperparameter combinations. In the experiments, electric vehicle operation data with four common fault types were used to compare the performance against five other models, and the results show that the BO-NSVDD model presents superior detection performance for each type of fault data, with especially obvious advantages on imperceptible early and minor faults. Finally, the strong robustness of the proposed method is verified by adding noise of different intensities to the dataset.
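The core idea, using known faulty samples to tighten a data-description boundary, can be sketched with a crude centroid-and-radius stand-in for the SVDD hypersphere. This is illustrative only; the paper's model optimizes the center and radius with kernels and slack variables and tunes hyperparameters by Bayesian optimization:

```python
import math

def fit_nsvdd_sketch(normal, faulty):
    """Fit a naive hypersphere: center = mean of normal samples,
    radius = farthest normal sample, then shrink the radius just below
    the nearest enclosed faulty sample so known faults fall outside."""
    d = len(normal[0])
    center = tuple(sum(x[i] for x in normal) / len(normal) for i in range(d))
    radius = max(math.dist(x, center) for x in normal)
    enclosed = [math.dist(x, center) for x in faulty if math.dist(x, center) <= radius]
    if enclosed:
        # Trade-off: excluding faults may also exclude outlying normal samples.
        radius = min(enclosed) * 0.999
    return center, radius

def is_fault(x, center, radius):
    return math.dist(x, center) > radius

normal = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (3.0, 0.0)]
faulty = [(2.5, 0.0)]
center, radius = fit_nsvdd_sketch(normal, faulty)
print(is_fault((2.6, 0.3), center, radius))  # True  -> flagged as fault
print(is_fault((0.5, 0.2), center, radius))  # False -> accepted as normal
```

Note how the shrunken boundary now misclassifies the outlying normal point (3, 0): this is exactly the imbalance trade-off the BO-NSVDD formulation manages more carefully.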
In order to address the weak prediction stability and generalization ability of neural network models in yarn quality prediction for small samples, a prediction model based on the AdaBoost algorithm (AdaBoost model) was established. A prediction model based on linear regression (LR model) and one based on a multi-layer perceptron neural network (MLP model) were established for comparison. Prediction experiments on yarn evenness and yarn strength were carried out. Determination coefficients and prediction errors were used to evaluate the prediction accuracy of the models, and K-fold cross-validation was used to evaluate their generalization ability. In the experiments, the determination coefficient of the yarn evenness prediction of the AdaBoost model is 76% and 87% higher than that of the LR model and the MLP model, respectively. The determination coefficient of the yarn strength prediction of the AdaBoost model is slightly higher than that of the other two models. Considering that the yarn evenness dataset has a weaker linear relationship with the cotton dataset than the yarn strength dataset does, the AdaBoost model has the best adaptability to nonlinear data among the three models. In addition, the AdaBoost model shows generally better results in the cross-validation experiments and in prediction experiments at eight different training set sample sizes. This demonstrates that the AdaBoost model not only has good prediction accuracy but also good prediction stability and generalization ability for small samples.
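The two evaluation tools named here, the determination coefficient and K-fold cross-validation, can be sketched in a few lines (generic formulas, not the paper's code):

```python
import random

def r_squared(y_true, y_pred):
    # Determination coefficient: 1 - SS_res / SS_tot.
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def k_fold_splits(n_samples, k, seed=0):
    # Shuffle indices once, then deal them round-robin into k disjoint folds.
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

print(r_squared([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))  # close to 1 for a good fit
folds = k_fold_splits(10, 5)
print(sorted(i for fold in folds for i in fold))     # every sample used exactly once
```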
Accurate prediction of the internal corrosion rates of oil and gas pipelines could be an effective way to prevent pipeline leaks. In this study, a framework for predicting corrosion rates from a small sample of laboratory metal corrosion data was developed to provide a new perspective on pipeline corrosion prediction when real samples are insufficient. This approach employs the bagging algorithm to construct a strong learner by integrating several KNN learners. A total of 99 data points were collected and split into training and test sets at a 9:1 ratio. The training set was used to obtain the best hyperparameters by 10-fold cross-validation and grid search, and the test set was used to determine the performance of the model. The results showed that the Mean Absolute Error (MAE) of this framework is 28.06% of that of the traditional model, and that it outperforms other ensemble methods. Therefore, the proposed framework is suitable for metal corrosion prediction under small-sample conditions.
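Bagging KNN regressors amounts to bootstrap-resampling the training set, fitting one KNN learner per resample, and averaging their predictions. A self-contained sketch on synthetic data (illustrative; the paper's features, hyperparameters, and corrosion data are not reproduced here):

```python
import math
import random

def knn_predict(train, x, k=3):
    # train: list of (feature_tuple, target); average the k nearest targets.
    nearest = sorted(train, key=lambda p: math.dist(p[0], x))[:k]
    return sum(t for _, t in nearest) / k

def bagged_knn_predict(train, x, n_learners=15, k=3, seed=0):
    rng = random.Random(seed)
    preds = []
    for _ in range(n_learners):
        bootstrap = [rng.choice(train) for _ in train]  # resample with replacement
        preds.append(knn_predict(bootstrap, x, k))
    return sum(preds) / n_learners

# Toy "corrosion rate" that grows linearly with a single feature.
train = [((float(i),), 0.5 * i) for i in range(30)]
print(bagged_knn_predict(train, (10.2,)))  # close to the true value 5.1
```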
Lymnaeid snails are key intermediate hosts for the development and survival of Fasciola spp., the causative agents of fascioliasis, which are economically important parasites infecting humans and livestock globally. Current control of fascioliasis relies heavily on anthelmintic drugs, particularly triclabendazole (TCBZ), which has resulted in drug-resistant parasites and poses a significant risk because no long-term efficacious alternatives are available. Sustainable control measures at the farm level, including both parasite and snail control, will play an important role in Fasciola spp. control and reduce the reliance on anthelmintic drugs. Implementing such measures requires effective identification of snails on the property; however, lymnaeid snails are small and difficult to locate physically. Identification from environmental DNA (eDNA) is a recent approach in which physically locating snails is not required. Austropeplea tomentosa is the primary intermediate snail host for F. hepatica transmission in South-East Australia, and we present an in-field loop-mediated isothermal amplification (LAMP) and water filtering method for the detection of A. tomentosa eDNA from water samples to improve current surveillance methods. This methodology is highly sensitive, with a detection limit of 5×10^(−6) ng/μL detected in <20 minutes and a cumulative sample preparation and amplification time under 1 hour. This workflow could assist in monitoring areas to determine the risk of fascioliasis infection and in implementing snail management strategies to ultimately reduce the risk of infection for humans and livestock.
Stream sediment sampling is a significant tool in geochemical exploration. Stream sediment composition reflects the bedrock geology, overburden cover, and metalliferous mineralization. This research article focuses on assessing selected trace element concentrations in stream sediments and interpreting their inter-element relationships using multivariate statistical methods. Tagadur Ranganathaswamy Gudda and its surroundings in the Nuggihalli schist belt of southern India were investigated in the present work. The geology of the study area is complex, with a diverse range of litho units and evidence of strong structural deformation, and the area is known for its mineralization potential for chromite, vanadiferous titanomagnetite, and sulfides. The topography is characterized by undulating terrain with a radial drainage pattern. Most of the schist belt is soil covered, except for the Tagadur Ranganathaswamy Gudda area. Stream sediment samples were collected using a discrete sampling method and analyzed for Fe, Cr, Ti, V, Cu, Ni, Zn, Pb, Mn, Cd, and As using an ICP-AES spectrophotometer. The analytical data were statistically treated using the SPSS software, including descriptive statistics, normalization using natural log transformation, and factor analysis with varimax rotation. The transformed data showed a log-normal distribution, indicating the presence of geochemical anomalies. The results provide valuable insights into the geochemical processes and mineralization potential of the study area.
The statistical analysis helps in understanding the inter-element relationships and in identifying element groups and their implications for potential bedrock mineralization. Additionally, spatial analysis using inverse distance weighting interpolation provides information about the distribution of geochemical parameters across the study area. Overall, this research contributes to the understanding of stream sediment geochemistry and its application in mineral exploration. The findings have implications for future exploration efforts and can aid in the identification of potential ore deposits in the Nuggihalli schist belt and similar geological settings.
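The natural-log normalization step rests on a simple fact: concentrations that are log-normally distributed become symmetric (near-zero skew) after taking logarithms. A quick sketch with synthetic data (illustrative; not the study's SPSS workflow or its element data):

```python
import math
import random

def skewness(values):
    # Sample skewness: third central moment over the cubed standard deviation.
    n = len(values)
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n
    m3 = sum((v - mean) ** 3 for v in values) / n
    return m3 / m2 ** 1.5

rng = random.Random(42)
raw = [rng.lognormvariate(0.0, 1.0) for _ in range(2000)]  # log-normal "concentrations"
logged = [math.log(v) for v in raw]

print(skewness(raw))     # strongly right-skewed
print(skewness(logged))  # close to 0 after the natural-log transform
```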
Objective Liver transplantation is a current treatment option for hepatocellular carcinoma (HCC). The United States National Inpatient Sample database was utilized to identify risk factors that influence the outcome of liver transplantation, including locoregional recurrence, distant metastasis, and in-hospital mortality, in HCC patients with concurrent hepatitis B infection, hepatitis C infection, or alcoholic cirrhosis. Methods This retrospective cohort study included HCC patients (n=2391) from the National Inpatient Sample database who underwent liver transplantation and were diagnosed with hepatitis B or C virus infection, co-infection with hepatitis B and C, or alcoholic cirrhosis of the liver between 2005 and 2014. Associations between HCC etiology and post-transplant outcomes were examined with multivariate analysis models. Results Liver cirrhosis was due to alcohol in 10.5% of patients, hepatitis B in 6.6%, hepatitis C in 10.8%, and combined hepatitis B and C infection in 24.3%. Distant metastasis was found in 16.7% of patients infected with hepatitis B and 9% of hepatitis C patients. Local recurrence of HCC was significantly more likely in patients with hepatitis B than in those with alcohol-induced disease. Conclusion After liver transplantation, patients with hepatitis B infection have a higher risk of local recurrence and distant metastasis. Postoperative care and patient tracking are essential for liver transplant patients with hepatitis B infection.
The aim of this study is to investigate the impact of landslide and non-landslide sampling strategies on the performance of landslide susceptibility assessment (LSA). The study area is the Feiyun catchment in Wenzhou City, Southeast China. Two types of landslide samples, combined with seven non-landslide sampling strategies, resulted in a total of 14 scenarios. The landslide susceptibility map (LSM) for each scenario was generated using the random forest model. The receiver operating characteristic (ROC) curve and statistical indicators were calculated to assess the impact of the dataset sampling strategy. The results showed that higher accuracies were achieved when using the landslide core as positive samples, combined with non-landslide sampling from the very low zone or buffer zone. The results reveal the influence of landslide and non-landslide sampling strategies on the accuracy of LSA, providing a reference for subsequent researchers aiming to obtain a more reasonable LSM.
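The ROC comparison reduces to ranking: the area under the ROC curve equals the probability that a randomly chosen landslide cell receives a higher susceptibility score than a randomly chosen non-landslide cell. A minimal sketch (toy labels and scores, not the study's data):

```python
def roc_auc(labels, scores):
    """AUC via the rank (Mann-Whitney) formulation: the fraction of
    positive/negative pairs ranked correctly, with ties counting half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 1 = landslide sample, 0 = non-landslide sample; scores = predicted susceptibility.
print(roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.5, 0.1]))  # 0.75
```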
The Moon provides a unique environment for investigating nearby astrophysical events such as supernovae. Lunar samples retain valuable information from these events via detectable long-lived "fingerprint" radionuclides such as ^(60)Fe. In this work, we advanced the development of an accelerator mass spectrometry (AMS) method for detecting ^(60)Fe using the HI-13 tandem accelerator at the China Institute of Atomic Energy (CIAE). Since interferences could not be sufficiently removed solely with the existing magnetic systems of the tandem accelerator and the following Q3D magnetic spectrograph, a Wien filter with a maximum voltage of ±60 kV and a maximum magnetic field of 0.3 T was installed after the accelerator magnetic systems to lower the detection background for the low-abundance nuclide ^(60)Fe. A 1 μm thick Si_(3)N_(4) foil was installed in front of the Q3D as an energy degrader. For particle detection, a multi-anode gas ionization chamber was mounted at the center of the focal plane of the spectrograph. Finally, an ^(60)Fe sample with an abundance of 1.125×10^(-10) was used to test the new AMS system. The results indicate that ^(60)Fe can be clearly distinguished from its isobar ^(60)Ni. The sensitivity was assessed to be better than 4.3×10^(-14) based on blank sample measurements lasting 5.8 h, and could in principle reach approximately 2.5×10^(-15) if data were accumulated for 100 h, which is feasible for future lunar sample measurements because the main contaminants were sufficiently separated.
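The projected 100 h figure is consistent with simple counting statistics: for a background-free blank measurement, the abundance limit improves roughly inversely with measurement time. A quick check of that scaling (an assumption about how the extrapolation works, not a statement from the paper):

```python
def scaled_sensitivity(limit_0, t0_hours, t_hours):
    # Inverse-time scaling of a background-free abundance limit.
    return limit_0 * t0_hours / t_hours

limit_100h = scaled_sensitivity(4.3e-14, 5.8, 100.0)
print(limit_100h)  # about 2.5e-15, matching the quoted projection
```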
With its generality and practicality, the combination of partial charging curves and machine learning (ML) for battery capacity estimation has attracted widespread attention. However, a clear classification, fair comparison, and performance rationalization of these methods are lacking, due to the scattered existing studies. To address these issues, we develop 20 capacity estimation methods from three perspectives: charging sequence construction, input forms, and ML models. 22,582 charging curves are generated from 44 cells with different battery chemistries and operating conditions to validate the performance. Through comprehensive and unbiased comparison, the long short-term memory (LSTM) based neural network exhibits the best accuracy and robustness. Across all 6503 tested samples, the mean absolute percentage error (MAPE) for capacity estimation using LSTM is 0.61%, with a maximum error of only 3.94%. Even with the addition of 3 mV voltage noise or the extension of sampling intervals to 60 s, the average MAPE remains below 2%. Furthermore, the charging sequences are provided with physical explanations related to battery degradation to enhance confidence in their application. Recommendations for using other competitive methods are also presented. This work provides valuable insights and guidance for estimating battery capacity based on partial charging curves.
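MAPE, the headline metric here, is straightforward; a minimal sketch (toy capacities, not the paper's cells):

```python
def mape(y_true, y_pred):
    # Mean absolute percentage error, in percent; y_true must be non-zero.
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy example: true vs. estimated capacities in Ah.
print(mape([2.0, 4.0], [2.1, 3.8]))  # 5.0
```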
Data gaps and biases are two important issues that affect the quality of biodiversity information and downstream results. Understanding how best to fill existing gaps and account for biases is necessary to improve our current information most effectively. The two main current approaches for obtaining and improving data are (1) curation of biological collections and (2) fieldwork. However, the comparative effectiveness of these approaches in improving biodiversity data remains little explored. We used the Flora de Bogotá project to study the magnitude of change in species richness, spatial coverage, and sample coverage of plant records under curation versus fieldwork. Curation resulted in a decrease in species richness (through synonym and error removal) but significantly increased the number of records per species. Fieldwork contributed a slight increase in species richness via the accumulation of new records. Additionally, curation led to greater increases in spatial coverage, species observed per locality, plant records per species, and localities per species than fieldwork. Overall, curation was more efficient in producing new information than fieldwork, mainly because of the large number of records available in herbaria. We recommend intensive curatorial work as the first step in increasing biodiversity data quality and quantity, to identify biases and gaps at the regional scale that can then be targeted with fieldwork. This stepwise strategy would enable fieldwork to be planned more cost-effectively given the limited resources for biodiversity exploration and characterization.
Global variance reduction is a bottleneck in Monte Carlo shielding calculations. The global variance reduction problem requires that the statistical error be uniform over the entire space. This study proposes a grid-AIS method for the global variance reduction problem based on the AIS method, implemented in the Monte Carlo program MCShield. The proposed method was validated using the VENUS-III international benchmark problem and a self-shielding calculation example. The results from the VENUS-III benchmark showed that the grid-AIS method reduced the variance of the statistical errors of the MESH grids from 1.08×10^(-2) to 3.84×10^(-3), a 64.00% reduction, demonstrating that the grid-AIS method is effective for global problems. The results of the self-shielding calculation demonstrate that the grid-AIS method produces accurate results. Moreover, the grid-AIS method exhibited a computational efficiency approximately one order of magnitude higher than that of the AIS method and approximately two orders of magnitude higher than that of the conventional Monte Carlo method.
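The benefit behind AIS-style methods is the standard importance-sampling effect: drawing samples from a density shaped like the integrand shrinks the estimator's variance at the same sample count. A generic single-integral demo (not the grid-AIS algorithm or MCShield), estimating the integral of e^x over [0, 1] with uniform sampling versus the biased density p(x) = (1+x)/1.5:

```python
import math
import random

def estimate(n, seed, use_importance):
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        u = rng.random()
        if use_importance:
            # Inverse-CDF sample of p(x) = (1 + x) / 1.5 on [0, 1].
            x = -1.0 + math.sqrt(1.0 + 3.0 * u)
            samples.append(math.exp(x) * 1.5 / (1.0 + x))  # weight = f(x) / p(x)
        else:
            samples.append(math.exp(u))                    # plain uniform sampling
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var

exact = math.e - 1.0
mean_u, var_u = estimate(20000, 1, use_importance=False)
mean_i, var_i = estimate(20000, 1, use_importance=True)
print(abs(mean_u - exact), abs(mean_i - exact))  # both estimates are close
print(var_u > var_i)                             # importance sampling: lower variance
```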
The purpose of software defect prediction is to identify defect-prone code modules to help software quality assurance teams allocate resources and labor appropriately. In previous software defect prediction studies, transfer learning was effective in solving the problem of inconsistent project data distribution. However, target projects often lack sufficient data, which affects the performance of the transfer learning model, and the presence of uncorrelated features between projects can decrease its prediction accuracy. To address these problems, this article proposes a software defect prediction method based on stable learning (SDP-SL) that combines code visualization techniques and residual networks. The method first transforms code files into code images using code visualization techniques and then builds a defect prediction model on these images. During model training, target project data are not required as prior knowledge. Following the principles of stable learning, the method dynamically adjusts the weights of source project samples to eliminate dependencies between features, thereby capturing the "invariance mechanism" within the data. This approach explores the genuine relationship between code defect features and labels, enhancing defect prediction performance. To evaluate SDP-SL, comparative experiments were conducted on 10 open-source projects in the PROMISE dataset. The experimental results demonstrated that, in terms of the F-measure, the proposed SDP-SL method outperformed other within-project defect prediction methods by 2.11%-44.03% and improved on other cross-project defect prediction methods by 5.89%-25.46%. Therefore, SDP-SL can effectively enhance both within- and cross-project defect prediction.
In electromagnetic countermeasure circumstances, synthetic aperture radar (SAR) imagery usually suffers severe quality degradation from modulated interrupt-sampling repeater jamming (MISRJ), which typically exhibits strong coherence with the SAR transmission waveform together with periodic modulation patterns. This paper develops an MISRJ suppression algorithm for SAR imagery based on online dictionary learning. In the algorithm, the temporal properties of the jamming modulation are exploited by extracting and sorting MISRJ slices using fast-time autocorrelation. Online dictionary learning then separates real signals from the jamming slices. Under the learned representation, time-varying MISRJs are suppressed effectively. Experiments on both simulated and real-measured SAR data confirm the advantages over traditional methods in suppressing time-varying MISRJs.
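The fast-time autocorrelation step exploits the jamming's periodicity: for a periodically gated interference, the autocorrelation peaks at lags equal to the repetition period. A toy sketch of estimating that period (illustrative only; the paper's slice extraction and dictionary learning are not reproduced):

```python
def dominant_period(x):
    """Estimate the repetition period of a sequence as the lag (up to half
    the length) that maximizes the mean-removed autocorrelation."""
    n = len(x)
    mean = sum(x) / n
    c = [v - mean for v in x]
    def autocorr(lag):
        return sum(c[i] * c[i + lag] for i in range(n - lag))
    return max(range(1, n // 2), key=autocorr)

# Gated pulse train: 2 samples "on", 6 samples "off", repeated 16 times.
signal = [1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] * 16
print(dominant_period(signal))  # 8
```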
This study investigates the mechanical properties and failure mechanisms of layered rock with rough joint surfaces under direct shear loading. Cubic layered samples with dimensions of 100 mm × 100 mm × 100 mm were cast from rock-like materials, with anisotropy angle (α) ranging from 15° to 75° and joint roughness coefficient (JRC) ranging from 2 to 20. The direct shear tests were conducted under initial normal stress (σ_(n)) ranging from 1 MPa to 4 MPa. The test results indicate significant differences in the mechanical properties, acoustic emission (AE) responses, maximum principal strain fields, and ultimate failure modes of the layered samples under different test conditions. The peak stress increases with increasing α and reaches a maximum at α = 60° or 75°. As σ_(n) increases, the peak stress shows an increasing trend, with correlation coefficients R² ranging from 0.918 to 0.995 for the linear least-squares fits. As JRC increases from 2-4 to 18-20, the cohesion increases by 86.32% when α = 15°, while it decreases by 27.93% when α = 75°. The differences in the roughness characteristics of the shear failure surface induced by α result in anisotropic post-peak AE responses, characterized by active AE signals when α is small and quiet AE signals when α is large. For a given JRC = 6-8 and σ_(n) = 1 MPa, the cumulative AE counts increase by 224.31% as α increases from 15° to 60°, and then decrease by 14.68% as α increases from 60° to 75°. The shear failure surface forms along the weak interlayer when α = 15° and penetrates the layered matrix when α = 60°. When α = 15°, as σ_(n) increases, the adjacent weak interlayer changes the direction of tensile crack propagation, resulting in a stepped pattern of crack distribution. Increasing JRC intensifies the roughness characteristics of the shear failure surface for a small α, but the effect is not pronounced for a large α. These findings contribute to a better understanding of the mechanical responses and failure mechanisms of layered rocks subjected to shear loads.
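The quoted R² values come from fitting peak shear stress linearly against normal stress, i.e. the Mohr-Coulomb form τ_p = c + σ_n·tan φ, where the intercept is the cohesion and the slope is the tangent of the friction angle. A minimal least-squares sketch (the stress values are hypothetical, not the paper's measurements):

```python
def linear_fit(xs, ys):
    """Ordinary least squares y = intercept + slope * x, plus R-squared."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - intercept - slope * x) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical normal stresses (MPa) and peak shear stresses (MPa).
sigma_n = [1.0, 2.0, 3.0, 4.0]
tau_p = [2.1, 2.9, 3.8, 4.5]
slope, cohesion, r2 = linear_fit(sigma_n, tau_p)
print(slope, cohesion, r2)  # slope ≈ 0.81, cohesion ≈ 1.3, R² ≈ 0.998
```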
The emergence of digital networks and the wide adoption of information on internet platforms have given rise to threats against users' private information. Many intruders actively seek such private data either for sale or other inappropriate purposes. Similarly, national and international organizations have country-level and company-level private information that could be accessed by different network attacks. Therefore, the need for a Network Intruder Detection System (NIDS) becomes essential for protecting these networks and organizations. In the evolution of NIDS, Artificial Intelligence (AI) assisted tools and methods have been widely adopted to provide effective solutions. However, the development of NIDS still faces challenges at the dataset and machine learning levels, such as large deviations in numeric features, the presence of numerous irrelevant categorical features resulting in reduced cardinality, and class imbalance in multiclass-level data. To address these challenges and offer a unified solution to NIDS development, this study proposes a novel framework that preprocesses datasets and applies a Box-Cox transformation to linearly transform the numeric features and bring them into closer alignment. Cardinality reduction was applied to categorical features through the binning method. Subsequently, the class-imbalanced dataset was addressed using the adaptive synthetic sampling data generation method. Finally, the preprocessed, refined, and oversampled feature set was divided into training and test sets with an 80-20 ratio, and two experiments were conducted. In Experiment 1, binary classification was executed using four machine learning classifiers, with the extra trees classifier achieving the highest accuracy of 97.23% and an AUC of 0.9961. In Experiment 2, multiclass classification was performed, and the extra trees classifier emerged as the most effective, achieving an accuracy of 81.27% and an AUC of 0.97. The results were evaluated based on training, testing, and total time, and a comparative analysis with state-of-the-art studies proved the robustness and significance of the applied methods in developing a timely and precision-efficient solution to NIDS.
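The preprocessing pipeline described in this abstract (Box-Cox transformation of numeric features, binning for cardinality reduction, and synthetic oversampling) can be sketched in plain Python. This is an illustrative simplification, not the study's code: the function names are ours, and the naive duplication-based oversampler stands in for the adaptive synthetic sampling (ADASYN) method the authors actually used.

```python
import math
import random
from collections import Counter

def box_cox(x, lam):
    """Box-Cox transform for positive x; lam=0 reduces to the natural log."""
    return math.log(x) if lam == 0 else (x ** lam - 1) / lam

def bin_feature(value, edges):
    """Cardinality reduction: map a raw value to a coarse bin index."""
    return sum(value > e for e in edges)

def oversample(rows, labels, seed=0):
    """Naive minority-class oversampling (a stand-in for ADASYN)."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_rows, out_labels = list(rows), list(labels)
    for cls, n in counts.items():
        pool = [r for r, y in zip(rows, labels) if y == cls]
        for _ in range(target - n):
            out_rows.append(rng.choice(pool))
            out_labels.append(cls)
    return out_rows, out_labels
```

After these steps, the balanced feature set would be split 80-20 and passed to the classifiers.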
In this paper, we establish a new multivariate Hermite sampling series involving samples from the function itself and its mixed and non-mixed partial derivatives of arbitrary order. This multivariate form of Hermite sampling will be valid for some classes of multivariate entire functions satisfying certain growth conditions. We will show that many known results, included in Commun Korean Math Soc, 2002, 17: 731-740, Turk J Math, 2017, 41: 387-403, and Filomat, 2020, 34: 3339-3347, are special cases of our results. Moreover, we estimate the truncation error of this sampling based on localized sampling without decay assumption. Illustrative examples are also presented.
Funding: funded by grants from the National Key Research and Development Program of China [2021YFC2301503, 2022YFC2302900] and the National Natural Science Foundation of China [82171739, 82171815, 81873884].
Abstract: Objective: To evaluate the diagnostic value of histopathological examination of ultrasound-guided puncture biopsy samples in extrapulmonary tuberculosis (EPTB). Methods: This study was conducted at the Shanghai Public Health Clinical Center. A total of 115 patients underwent ultrasound-guided puncture biopsy, followed by MGIT 960 culture (culture), smear, GeneXpert MTB/RIF (Xpert), and histopathological examination. These assays were performed to evaluate their effectiveness in diagnosing EPTB in comparison to two different diagnostic criteria: liquid culture and a composite reference standard (CRS). Results: When CRS was used as the reference standard, the sensitivity and specificity of culture, smear, Xpert, and histopathological examination were (44.83%, 89.29%), (51.72%, 89.29%), (70.11%, 96.43%), and (85.06%, 82.14%), respectively. Based on liquid culture tests, the sensitivity and specificity of smear, Xpert, and pathological examination were (66.67%, 72.60%), (83.33%, 63.01%), and (92.86%, 45.21%), respectively. Histopathological examination showed the highest sensitivity but the lowest specificity. Further, we found that the combination of Xpert and histopathological examination showed a sensitivity of 90.80% and a specificity of 89.29%. Conclusion: Ultrasound-guided puncture sampling is safe and effective for the diagnosis of EPTB. Compared with culture, smear, and Xpert, histopathological examination showed higher sensitivity but lower specificity. The combination of histopathology with Xpert showed the best performance characteristics.
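The reported sensitivities and specificities follow directly from confusion-matrix counts. As a hedged illustration, assuming 87 CRS-positive and 28 CRS-negative patients (counts inferred from the reported percentages, not stated explicitly in the abstract), the histopathology figures can be reproduced as:

```python
def sensitivity(tp, fn):
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

# Histopathology vs CRS: 74/87 positives detected, 23/28 negatives correct
# (hypothetical counts consistent with the quoted 85.06% and 82.14%).
```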
Funding: Supported by the Research Foundation of CLEP of China (Grant No. TY3Q20110003).
Abstract: The encapsulation of lunar samples is a core research area in the third phase of the Chinese Lunar Exploration Program. The seal assembly, opening and closing mechanism (OCM), and locking mechanism are the core components of the encapsulation device for the lunar samples, and the requirements of a tight seal, light weight, and low power make the design of these core components difficult. In this study, a combined sealing assembly, OCM, and locking mechanism were investigated for the device. The sealing architecture consists of rubber and an Ag-In alloy, and a theory was built to analyze the seal. Experiments on the electroplated Au coating on the knife-edge revealed that the hermetic seal can be significantly improved. The driving principle for coaxial double-helical pairs was investigated and used to design the OCM. Moreover, a locking mechanism was created using an electric initiating explosive device with orifice damping. By optimizing the design, the output parameters were adjusted to meet the requirements of the lunar explorer. The experimental results showed that the helium leak rate of the test pieces was not more than 5×10^(-11) Pa·m^(3)·s^(-1), the minimum power of the OCM was 0.3 W, and the total weight of the principle prototype was 2.9 kg. The explosive-driven locking mechanism has low impact. This investigation solved the difficulties in achieving a tight seal, light weight, and low power for the lunar explorer, and the results can also be used to explore other extraterrestrial objects in the future.
Funding: supported by the National Key Research and Development Program (2022YFF0609504), the National Natural Science Foundation of China (61974126, 51902273, 62005230, 62001405), and the Natural Science Foundation of Fujian Province of China (No. 2021J06009).
Abstract: Perovskite solar cells (PSCs) have developed tremendously over the past decade. However, the key factors influencing the power conversion efficiency (PCE) of PSCs remain incompletely understood, due to the complexity and coupling of structural and compositional parameters. In this research, we demonstrate an effective approach to optimize PSC performance via machine learning (ML). To address challenges posed by limited samples, we propose a feature mask (FM) method, which augments training samples through feature transformation rather than synthetic data. Using this approach, a squeeze-and-excitation residual network (SEResNet) model achieves an accuracy with a root-mean-square error (RMSE) of 0.833% and a Pearson's correlation coefficient (r) of 0.980. Furthermore, we employ the permutation importance (PI) algorithm to investigate the key features for PCE. Subsequently, we predict PCE through high-throughput screenings, in which we study the relationship between PCE and chemical compositions. After that, we conduct experiments to validate the consistency between the results predicted by ML and the experimental results. In this work, ML demonstrates the capability to predict device performance, extract key parameters from complex systems, and accelerate the transition from laboratory findings to commercial applications.
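The two accuracy metrics quoted for the SEResNet model, RMSE and Pearson's r, are standard and easy to reproduce; a minimal pure-Python version (our own helper functions, not the authors' code) is:

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between two equal-length sequences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true))

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```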
Funding: provided by the National Natural Science Foundation of China (52074300), the Program of the China Scholarship Council (202206430024), the National Natural Science Foundation of China Youth Science Fund (52104139), the Yueqi Young Scholars Project of China University of Mining and Technology, Beijing (2602021RC84), and Guizhou Province science and technology planning projects ([2020]3007, [2020]3008).
Abstract: The deep mining of coal resources is accompanied by severe environmental challenges and various potential engineering hazards. NPR (negative Poisson's ratio) bolts are capable of effectively controlling large deformations in the surrounding rock. This paper focuses on the mechanical properties of the NPR bolt under static disturbance load. A deep nonlinear mechanical experimental system was used to study the mechanical behavior of rock samples with different anchoring types (unanchored/PR anchored/2G NPR anchored) under static disturbance load. The whole failure process of the rock samples was recorded by a high-speed camera to obtain real-time failure characteristics under static disturbance load. At the same time, the acoustic emission signal was collected to obtain key acoustic emission characteristic parameters such as count, energy, and frequency. The deformation at failure of the samples was calculated and analyzed by digital speckle software. The findings indicate that the failure mode of the rock is influenced by the anchoring type. The peak failure strength of 2G NPR bolt anchored rock samples exhibits an increase of 6.5% when compared to the unanchored rock samples. The cumulative count and cumulative energy of acoustic emission exhibit decreases of 62.16% and 62.90%, respectively. The maximum deformation at bearing capacity exhibits an increase of 59.27%, while the failure time demonstrates a delay of 42.86%. The peak failure strength of the 2G NPR bolt anchored samples under static disturbance load exhibits an increase of 5.94% when compared to the rock anchored by a PR (Poisson's ratio) bolt. The cumulative count and cumulative energy of acoustic emission exhibit decreases of 47.16% and 43.86%, respectively. The maximum deformation at bearing capacity exhibits an increase of 50.43%, and the failure time demonstrates a delay of 32%. After anchoring by the 2G NPR bolt, the anchoring support effectively reduces the risk of damage caused by static disturbance load. These results demonstrate that the support effect of 2G NPR bolt materials surpasses that of the PR bolt.
Funding: supported partially by the National Natural Science Foundation of China (NSFC) (No. U21A20146), the Collaborative Innovation Project of Anhui Universities (No. GXXT-2020-070), the Cooperation Project of Anhui Future Technology Research Institute and Enterprise (No. 2023qyhz32), Development of a New Dynamic Life Prediction Technology for Energy Storage Batteries (No. KH10003598), the Opening Project of the Key Laboratory of Electric Drive and Control of Anhui Province (No. DQKJ202304), the Anhui Provincial Department of Education New Era Education Quality Project (No. 2023dshwyx019), the Special Fund for Collaborative Innovation between Anhui Polytechnic University and Jiujiang District (No. 2022cyxtb10), the Key Research and Development Program of Wuhu City (No. 2022yf42), the Open Research Fund of the Anhui Key Laboratory of Detection Technology and Energy Saving Devices (No. JCKJ2021B06), the Anhui Provincial Graduate Student Innovation and Entrepreneurship Practice Project (No. 2022cxcysj123), and the Key Scientific Research Project for Anhui Universities (No. 2022AH050981).
Abstract: Accurate and reliable fault detection is essential for the safe operation of electric vehicles. Support vector data description (SVDD) has been widely used in the field of fault detection. However, constructing the hypersphere boundary only describes the distribution of unlabeled samples, while the distribution of faulty samples cannot be effectively described, and faulty data are easily missed due to the imbalance of the sample distribution. Meanwhile, parameter selection is critical to the detection performance, and empirical parameterization is generally time-consuming and laborious and may not find the optimal parameters. Therefore, this paper proposes a semi-supervised data-driven method that improves the SVDD algorithm and achieves excellent fault detection performance. By incorporating faulty samples into the underlying SVDD model, training better handles the problem of missed detection of faulty samples caused by the imbalance in the distribution of abnormal samples, and the hypersphere boundary is modified to classify the samples more accurately. The Bayesian Optimization NSVDD (BO-NSVDD) model was constructed to quickly and accurately optimize hyperparameter combinations. In the experiments, electric vehicle operation data with four common fault types are used to evaluate the performance against five other models, and the results show that the BO-NSVDD model presents superior detection performance for each type of fault data, especially for imperceptible early and minor faults, where it shows very clear advantages. Finally, the strong robustness of the proposed method is verified by adding different intensities of noise to the dataset.
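The core SVDD idea, enclosing normal samples in a hypersphere and flagging points that fall outside it, can be caricatured in a few lines. This sketch is a deliberate simplification (mean center and a distance quantile instead of the kernelized optimization, and no use of faulty samples or Bayesian optimization); it only illustrates the boundary-based detection principle:

```python
import math

def fit_hypersphere(samples, quantile=0.95):
    """Toy data description: center = mean, radius = a distance quantile."""
    dim = len(samples[0])
    center = [sum(s[i] for s in samples) / len(samples) for i in range(dim)]
    dists = sorted(math.dist(s, center) for s in samples)
    radius = dists[min(len(dists) - 1, int(quantile * len(dists)))]
    return center, radius

def is_fault(x, center, radius):
    """A point outside the hypersphere is flagged as a fault."""
    return math.dist(x, center) > radius
```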
Abstract: In order to solve the problems of weak prediction stability and generalization ability of neural network models in yarn quality prediction research for small samples, a prediction model based on an AdaBoost algorithm (AdaBoost model) was established. A prediction model based on a linear regression algorithm (LR model) and a prediction model based on a multi-layer perceptron neural network algorithm (MLP model) were established for comparison. Prediction experiments on yarn evenness and yarn strength were implemented. Determination coefficients and prediction errors were used to evaluate the prediction accuracy of these models, and K-fold cross validation was used to evaluate their generalization ability. In the prediction experiments, the determination coefficient of the yarn evenness prediction result of the AdaBoost model is 76% and 87% higher than that of the LR model and the MLP model, respectively. The determination coefficient of the yarn strength prediction result of the AdaBoost model is slightly higher than that of the other two models. Considering that the yarn evenness dataset has a weaker linear relationship with the cotton dataset than the yarn strength dataset in this paper, the AdaBoost model has the best adaptability for nonlinear datasets among the three models. In addition, the AdaBoost model shows generally better results in the cross-validation experiments and the series of prediction experiments at eight different training set sample sizes. It is proved that the AdaBoost model not only has good prediction accuracy but also has good prediction stability and generalization ability for small samples.
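K-fold cross validation, used above to assess generalization ability, partitions the data into K folds and rotates which fold serves as the test set. A minimal index generator (our own illustrative helper, not the study's code) looks like:

```python
import random

def k_fold_indices(n, k, seed=0):
    """Yield (train_indices, test_indices) pairs for K-fold cross validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test
```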
Funding: supported by the National Natural Science Foundation of China (Grant No. 52174062).
Abstract: Accurate prediction of the internal corrosion rates of oil and gas pipelines could be an effective way to prevent pipeline leaks. In this study, a framework for predicting corrosion rates from a small sample of laboratory metal corrosion data was developed to provide a new perspective on solving the problem of pipeline corrosion under the condition of insufficient real samples. This approach employed the bagging algorithm to construct a strong learner by integrating several KNN learners. A total of 99 data points were collected and split into training and test sets with a 9:1 ratio. The training set was used to obtain the best hyperparameters by 10-fold cross-validation and grid search, and the test set was used to determine the performance of the model. The results showed that the Mean Absolute Error (MAE) of this framework is 28.06% of that of the traditional model, and it outperforms other ensemble methods. Therefore, the proposed framework is suitable for metal corrosion prediction under small sample conditions.
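The bagged-KNN idea, averaging KNN predictions over bootstrap resamples of a small training set, can be sketched for a one-dimensional feature in pure Python. This is an illustrative toy (the study presumably worked with multivariate features and tuned k by grid search):

```python
import random

def knn_predict(train_x, train_y, x, k=3):
    """Mean target of the k nearest training points (1-D feature)."""
    nearest = sorted(range(len(train_x)), key=lambda i: abs(train_x[i] - x))[:k]
    return sum(train_y[i] for i in nearest) / len(nearest)

def bagging_knn_predict(train_x, train_y, x, n_estimators=10, k=3, seed=0):
    """Average KNN predictions over bootstrap resamples (bagging)."""
    rng = random.Random(seed)
    n = len(train_x)
    preds = []
    for _ in range(n_estimators):
        idx = [rng.randrange(n) for _ in range(n)]
        preds.append(knn_predict([train_x[i] for i in idx],
                                 [train_y[i] for i in idx], x, k))
    return sum(preds) / len(preds)
```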
Funding: supported by a Cooperative Research Centres Project (CRCP) awarded to Geneworks and La Trobe University. L.T. is supported by an Australian Research Training Program scholarship and the Tim Healy Memorial Scholarship awarded by the Department of Primary Industries South Australia (PIRSA).
Abstract: Lymnaeid snails are key intermediate hosts for the development and survival of Fasciola spp., the causative agents of fascioliasis, which are economically important parasites infecting humans and livestock globally. The current control method for treating fascioliasis is heavily reliant on anthelmintic drugs, particularly triclabendazole (TCBZ), which has resulted in drug-resistant parasites and poses a significant risk as there are no long-term efficacious alternatives available. Sustainable control measures at the farm level, including both parasite and snail control, will play an important role in Fasciola spp. control and reduce the reliance on anthelmintic drugs. Implementation of such sustainable control measures requires effective identification of snails on the property; however, lymnaeid snails are small and difficult to physically locate. Snail identification using an environmental DNA (eDNA) approach is a recent development in which physically locating snails is not required. Austropeplea tomentosa is the primary intermediate snail host for F. hepatica transmission in South-East Australia, and we present an in-field loop-mediated isothermal amplification and water filtering method for the detection of A. tomentosa eDNA from water samples to improve current surveillance methods. This methodology is highly sensitive, with a detection limit of 5×10^(−6) ng/μL, detected in <20 minutes, with cumulative sample preparation and amplification time under 1 hour. This proposed workflow could assist in monitoring areas to determine the risk of fascioliasis infection and in implementing strategies to manage snail populations, ultimately reducing the risk of infection for humans and livestock.
Abstract: Stream sediment sampling is a significant tool in geochemical exploration. The stream sediment composition reflects the bedrock geology, overburden cover, and metalliferous mineralization. This research article focuses on assessing selected trace element concentrations in stream sediments and interpreting their inter-element relationships using multivariate statistical methods. Tagadur Ranganathaswamy Gudda and its surroundings in the Nuggihalli schist belt of southern India have been investigated in the present work. The geology of the study area is complex, with a diverse range of litho units and evidence of strong structural deformation. The area is known for its mineralization potential for chromite, vanadiferous titanomagnetite, and sulfides. The topography of the region is characterized by undulating terrain with a radial drainage pattern. Most of the schist belt is soil covered, except the Tagadur Ranganathaswamy Gudda area. Stream sediment samples were collected using a discrete sampling method and analyzed for trace elements (Fe, Cr, Ti, V, Cu, Ni, Zn, Pb, Mn, Cd, and As) using an ICP-AES spectrometer. The analytical data were statistically treated using the SPSS software, including descriptive statistics, normalization of data using natural log transformation, and factor analysis with varimax rotation. The transformed data showed a log-normal distribution, indicating the presence of geochemical anomalies. The results of the study provide valuable insights into the geochemical processes and mineralization potential of the study area. The statistical analysis helps in understanding the inter-element relationships and identifying element groups and their implications for potential bedrock mineralization.
Additionally, spatial analysis using inverse distance weighting interpolation provides information about the distribution of geochemical parameters across the study area. Overall, this research contributes to the understanding of stream sediment geochemistry and its application in mineral exploration. The findings have implications for future exploration efforts and can aid in the identification of potential ore deposits in the Nuggihalli schist belt and similar geological settings.
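Inverse distance weighting, mentioned above for mapping the geochemical parameters, estimates a value at an unsampled location as a distance-weighted average of nearby samples. A minimal sketch (our own helper, using the conventional power-2 weight):

```python
def idw(points, values, q, power=2, eps=1e-12):
    """Inverse-distance-weighted estimate at query point q = (x, y)."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d2 = (x - q[0]) ** 2 + (y - q[1]) ** 2
        if d2 < eps:          # query coincides with a sample point
            return v
        w = 1.0 / (d2 ** (power / 2))
        num += w * v
        den += w
    return num / den
```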
Funding: This study was supported by the Chen Xiao-Ping Foundation for the Development of Science and Technology of Hubei Province (No. CXPJJH11900001-2019210).
Abstract: Objective: Liver transplantation is a current treatment option for hepatocellular carcinoma (HCC). The United States National Inpatient Sample database was utilized to identify risk factors that influence the outcome of liver transplantation, including locoregional recurrence, distant metastasis, and in-hospital mortality, in HCC patients with concurrent hepatitis B infection, hepatitis C infection, or alcoholic cirrhosis. Methods: This retrospective cohort study included HCC patients (n=2391) from the National Inpatient Sample database who underwent liver transplantation and were diagnosed with hepatitis B or C virus infection, co-infection with hepatitis B and C, or alcoholic cirrhosis of the liver between 2005 and 2014. Associations between HCC etiology and post-transplant outcomes were examined with multivariate analysis models. Results: Liver cirrhosis was due to alcohol in 10.5% of patients, hepatitis B in 6.6%, hepatitis C in 10.8%, and combined hepatitis B and C infection in 24.3%. Distant metastasis was found in 16.7% of patients infected with hepatitis B and 9% of hepatitis C patients. Local recurrence of HCC was significantly more likely to occur in patients with hepatitis B than in those with alcohol-induced disease. Conclusion: After liver transplantation, patients with hepatitis B infection have a higher risk of local recurrence and distant metastasis. Postoperative care and patient tracking are essential for liver transplant patients with hepatitis B infection.
Abstract: The aim of this study is to investigate the impacts of landslide and non-landslide sampling strategies on the performance of landslide susceptibility assessment (LSA). The study area is the Feiyun catchment in Wenzhou City, Southeast China. Two types of landslide samples, combined with seven non-landslide sampling strategies, resulted in a total of 14 scenarios. The corresponding landslide susceptibility map (LSM) for each scenario was generated using the random forest model. The receiver operating characteristic (ROC) curve and statistical indicators were calculated and used to assess the impact of the dataset sampling strategy. The results showed that higher accuracies were achieved when using the landslide core as positive samples, combined with non-landslide sampling from the very low zone or buffer zone. The results reveal the influence of landslide and non-landslide sampling strategies on the accuracy of LSA, which provides a reference for subsequent researchers aiming to obtain a more reasonable LSM.
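The ROC-based comparison above reduces to computing an AUC per scenario. The AUC equals the probability that a randomly chosen landslide cell receives a higher susceptibility score than a randomly chosen non-landslide cell (the Mann-Whitney formulation); a small illustrative implementation:

```python
def auc(labels, scores):
    """Mann-Whitney AUC: P(score of a positive > score of a negative)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```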
Funding: supported by the National Natural Science Foundation of China (Nos. 12125509, 12222514, 11961141003, and 12005304), the National Key Research and Development Project (No. 2022YFA1602301), the CAST Young Talent Support Plan, the CNNC Science Fund for Talented Young Scholars, and continuous support for basic scientific research projects.
Abstract: The Moon provides a unique environment for investigating nearby astrophysical events such as supernovae. Lunar samples retain valuable information from these events via detectable long-lived "fingerprint" radionuclides such as ^(60)Fe. In this work, we stepped up the development of an accelerator mass spectrometry (AMS) method for detecting ^(60)Fe using the HI-13 tandem accelerator at the China Institute of Atomic Energy (CIAE). Since interferences could not be sufficiently removed solely with the existing magnetic systems of the tandem accelerator and the following Q3D magnetic spectrograph, a Wien filter with a maximum voltage of ±60 kV and a maximum magnetic field of 0.3 T was installed after the accelerator magnetic systems to lower the detection background for the low-abundance nuclide ^(60)Fe. A 1 μm thick Si_(3)N_(4) foil was installed in front of the Q3D as an energy degrader. For particle detection, a multi-anode gas ionization chamber was mounted at the center of the focal plane of the spectrograph. Finally, an ^(60)Fe sample with an abundance of 1.125×10^(-10) was used to test the new AMS system. The results indicate that ^(60)Fe can be clearly distinguished from its isobar ^(60)Ni. The sensitivity was assessed to be better than 4.3×10^(-14) based on blank sample measurements lasting 5.8 h, and the sensitivity could, in principle, be expected to reach approximately 2.5×10^(-15) if data were accumulated for 100 h, which is feasible for future lunar sample measurements because the main contaminants were sufficiently separated.
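The projected 100 h figure is consistent with the assumption that abundance sensitivity scales inversely with accumulated counting time; this scaling law is our reading of the quoted numbers, not something the abstract states explicitly:

```python
def projected_sensitivity(s0, t0_hours, t_hours):
    """Scale a measured abundance sensitivity inversely with counting time."""
    return s0 * t0_hours / t_hours

# 4.3e-14 reached in 5.8 h extrapolates to ~2.5e-15 at 100 h.
```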
Funding: supported by the National Natural Science Foundation of China (52075420) and the National Key Research and Development Program of China (2020YFB1708400).
Abstract: With its generality and practicality, the combination of partial charging curves and machine learning (ML) for battery capacity estimation has attracted widespread attention. However, a clear classification, fair comparison, and performance rationalization of these methods are lacking, due to the scattered existing studies. To address these issues, we develop 20 capacity estimation methods from three perspectives: charging sequence construction, input forms, and ML models. 22,582 charging curves are generated from 44 cells with different battery chemistries and operating conditions to validate the performance. Through comprehensive and unbiased comparison, the long short-term memory (LSTM) based neural network exhibits the best accuracy and robustness. Across all 6503 tested samples, the mean absolute percentage error (MAPE) for capacity estimation using LSTM is 0.61%, with a maximum error of only 3.94%. Even with the addition of 3 mV voltage noise or the extension of sampling intervals to 60 s, the average MAPE remains below 2%. Furthermore, the charging sequences are provided with physical explanations related to battery degradation to enhance confidence in their application. Recommendations for using other competitive methods are also presented. This work provides valuable insights and guidance for estimating battery capacity based on partial charging curves.
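The headline error metric, MAPE, is simple to reproduce; a minimal helper (ours, not the paper's code) is:

```python
def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)
```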
Funding: supported by Colciencias doctoral funding (727-2015) and the Universidad del Rosario, through a teaching assistantship and a doctoral grant.
Abstract: Data gaps and biases are two important issues that affect the quality of biodiversity information and downstream results. Understanding how best to fill existing gaps and account for biases is necessary to improve our current information most effectively. Two main current approaches for obtaining and improving data are (1) curation of biological collections and (2) fieldwork. However, the comparative effectiveness of these approaches in improving biodiversity data remains little explored. We used the Flora de Bogota project to study the magnitude of change in species richness, spatial coverage, and sample coverage of plant records based on curation versus fieldwork. The process of curation resulted in a decrease in species richness (synonym and error removal), but it significantly increased the number of records per species. Fieldwork contributed a slight increase in species richness via the accumulation of new records. Additionally, curation led to greater increases in spatial coverage, species observed per locality, plant records per species, and localities per species compared to fieldwork. Overall, curation was more efficient in producing new information than fieldwork, mainly because of the large number of records available in herbaria. We recommend intensive curatorial work as the first step in increasing biodiversity data quality and quantity, to identify biases and gaps at the regional scale that can then be targeted with fieldwork. This stepwise strategy would enable fieldwork to be planned more cost-effectively given the limited resources for biodiversity exploration and characterization.
Funding: supported by the Platform Development Foundation of the China Institute for Radiation Protection (No. YP21030101), the National Natural Science Foundation of China (General Program) (Nos. 12175114, U2167209), the National Key R&D Program of China (No. 2021YFF0603600), and the Tsinghua University Initiative Scientific Research Program (No. 20211080081).
Abstract: Global variance reduction is a bottleneck in Monte Carlo shielding calculations. The global variance reduction problem requires that the statistical error be uniform over the entire space. This study proposes a grid-AIS method for the global variance reduction problem based on the AIS method, implemented in the Monte Carlo program MCShield. The proposed method was validated using the VENUS-III international benchmark problem and a self-shielding calculation example. The results from the VENUS-III benchmark problem showed that the grid-AIS method achieved a significant reduction in the variance of the statistical errors of the MESH grids, decreasing from 1.08×10^(-2) to 3.84×10^(-3), a 64.00% reduction. This demonstrates that the grid-AIS method is effective for global problems. The results of the self-shielding calculation demonstrate that the grid-AIS method produces accurate computational results. Moreover, the grid-AIS method exhibited a computational efficiency approximately one order of magnitude higher than that of the AIS method and approximately two orders of magnitude higher than that of the conventional Monte Carlo method.
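The motivation for grid-based sampling, spreading statistical error evenly over the whole space, can be illustrated with a toy stratified Monte Carlo estimator: drawing the same number of samples in every cell of a uniform grid keeps the per-cell error uniform and lowers the overall variance relative to plain sampling. This toy is only an analogy for the grid-AIS idea, not the MCShield algorithm:

```python
import random

def plain_mc(f, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(rng.random()) for _ in range(n)) / n

def grid_mc(f, n_cells, per_cell, rng):
    """Stratified estimate: equal samples in every cell of a uniform grid."""
    total = 0.0
    for c in range(n_cells):
        for _ in range(per_cell):
            total += f((c + rng.random()) / n_cells)
    return total / (n_cells * per_cell)
```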
Funding: supported by the National Natural Science Foundation of China (Grant No. 61867004) and the Youth Fund of the National Natural Science Foundation of China (Grant No. 41801288).
Abstract: The purpose of software defect prediction is to identify defect-prone code modules to assist software quality assurance teams with the appropriate allocation of resources and labor. In previous software defect prediction studies, transfer learning was effective in solving the problem of inconsistent project data distribution. However, target projects often lack sufficient data, which affects the performance of the transfer learning model. In addition, the presence of uncorrelated features between projects can decrease the prediction accuracy of the transfer learning model. To address these problems, this article proposes a software defect prediction method based on stable learning (SDP-SL) that combines code visualization techniques and residual networks. This method first transforms code files into code images using code visualization techniques and then constructs a defect prediction model based on these code images. During the model training process, target project data are not required as prior knowledge. Following the principles of stable learning, the weights of source project samples are dynamically adjusted to eliminate dependencies between features, thereby capturing the "invariance mechanism" within the data. This approach explores the genuine relationship between code defect features and labels, thereby enhancing defect prediction performance. To evaluate the performance of SDP-SL, this article conducted comparative experiments on 10 open-source projects in the PROMISE dataset. The experimental results demonstrated that in terms of the F-measure, the proposed SDP-SL method outperformed other within-project defect prediction methods by 2.11%-44.03%. In cross-project defect prediction, the SDP-SL method provided an improvement of 5.89%-25.46% in prediction performance compared to other cross-project defect prediction methods. Therefore, SDP-SL can effectively enhance within- and cross-project defect prediction.
Funding: supported by the National Natural Science Foundation of China (61771372, 61771367, 62101494), the National Outstanding Youth Science Fund Project (61525105), the Shenzhen Science and Technology Program (KQTD20190929172704911), and the Aeronautical Science Foundation of China (2019200M1001).
Abstract: In electromagnetic countermeasures circumstances, synthetic aperture radar (SAR) imagery usually suffers severe quality degradation from modulated interrupt sampling repeater jamming (MISRJ), which usually exhibits considerable coherence with the SAR transmission waveform together with periodic modulation patterns. This paper develops an MISRJ suppression algorithm for SAR imagery with online dictionary learning. In the algorithm, the temporal properties of the jamming modulation are exploited by extracting and sorting MISRJ slices using fast-time autocorrelation. Online dictionary learning then separates real signals from jamming slices. Under the learned representation, time-varying MISRJs are suppressed effectively. Both simulated and real-measured SAR data are used to confirm its advantages in suppressing time-varying MISRJs over traditional methods.
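Fast-time autocorrelation, used above to extract and sort the periodically modulated jamming slices, peaks at lags equal to the modulation period. A minimal normalized autocorrelation (an illustrative helper, not the paper's implementation):

```python
def autocorr(x, lag):
    """Normalized autocorrelation of a real sequence at a given lag."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i - lag] - mean) for i in range(lag, n))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

# A period-2 on/off pattern correlates strongly at lag 2, much as an
# interrupted-sampling jammer's duty cycle shows up at its repetition lag.
```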
Funding: financial support from the National Natural Science Foundation of China (Nos. 52174092, 51904290, 52004272, 52104125, 42372328, and U23B2091), the Natural Science Foundation of Jiangsu Province, China (Nos. BK20220157 and BK20240209), the Fundamental Research Funds for the Central Universities, China (No. 2022YCPY0202), the Xuzhou Science and Technology Project, China (Nos. KC21033 and KC22005), the Yunlong Lake Laboratory of Deep Underground Science and Engineering Project, China (No. 104023002), and the Graduate Innovation Program of China University of Mining and Technology (No. 2023WLTCRCZL052).
Abstract: This study aims to investigate the mechanical properties and failure mechanisms of layered rock with rough joint surfaces under direct shear loading. Cubic layered samples with dimensions of 100 mm × 100 mm × 100 mm were cast from rock-like materials, with the anisotropic angle (α) and joint roughness coefficient (JRC) ranging from 15° to 75° and from 2 to 20, respectively. Direct shear tests were conducted under initial normal stresses (σ_n) ranging from 1 to 4 MPa. The test results indicate significant differences in the mechanical properties, acoustic emission (AE) responses, maximum principal strain fields, and ultimate failure modes of the layered samples under different test conditions. The peak stress increases with increasing α and reaches a maximum at α = 60° or 75°. As σ_n increases, the peak stress shows an increasing trend, with correlation coefficients R² ranging from 0.918 to 0.995 for the linear least-squares fitting. As JRC increases from 2-4 to 18-20, the cohesion increases by 86.32% when α = 15°, while it decreases by 27.93% when α = 75°. The differences in the roughness characteristics of the shear failure surface induced by α result in anisotropic post-peak AE responses, characterized by active AE signals when α is small and quiet AE signals when α is large. For a given JRC = 6-8 and σ_n = 1 MPa, as α increases, the cumulative AE counts increase by 224.31% (α increasing from 15° to 60°) and then decrease by 14.68% (α increasing from 60° to 75°). The shear failure surface forms along the weak interlayer when α = 15° and penetrates the layered matrix when α = 60°. When α = 15°, as σ_n increases, the adjacent weak interlayer changes the direction of tensile crack propagation, resulting in a stepped crack distribution pattern. The increase in JRC intensifies the roughness characteristics of the shear failure surface for small α; however, the effect is not pronounced for large α. These findings contribute to a better understanding of the mechanical responses and failure mechanisms of layered rocks subjected to shear loads.
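The linear trend of peak stress against normal stress reported above (R² between 0.918 and 0.995) is a standard linear least-squares fit in the Mohr-Coulomb sense, where the intercept estimates cohesion and the slope the friction angle. A minimal sketch of such a fit; the stress values below are hypothetical illustrations, not the study's data:

```python
import numpy as np

# Hypothetical peak shear stress (MPa) at the four normal stress levels
# used in the study; tau values are illustrative only, not the paper's data.
sigma_n = np.array([1.0, 2.0, 3.0, 4.0])     # initial normal stress, MPa
tau_peak = np.array([2.1, 3.0, 3.8, 4.9])    # peak shear stress, MPa

# Linear least-squares fit: tau = c + sigma_n * tan(phi)  (Mohr-Coulomb form)
slope, intercept = np.polyfit(sigma_n, tau_peak, 1)

# Coefficient of determination R^2 for the fit
pred = slope * sigma_n + intercept
ss_res = np.sum((tau_peak - pred) ** 2)
ss_tot = np.sum((tau_peak - tau_peak.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

phi_deg = np.degrees(np.arctan(slope))       # friction angle from the slope
print(f"cohesion ~ {intercept:.2f} MPa, friction angle ~ {phi_deg:.1f} deg, R2 = {r2:.3f}")
```

The same fit repeated per anisotropic angle α would expose the cohesion changes with JRC that the abstract quantifies.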
Abstract: The emergence of digital networks and the wide adoption of information on internet platforms have given rise to threats against users' private information. Many intruders actively seek such private data either for sale or for other inappropriate purposes. Similarly, national and international organizations hold country-level and company-level private information that could be accessed through different network attacks. Therefore, a Network Intruder Detection System (NIDS) is essential for protecting these networks and organizations. In the evolution of NIDS, Artificial Intelligence (AI)-assisted tools and methods have been widely adopted to provide effective solutions. However, the development of NIDS still faces challenges at the dataset and machine learning levels, such as large deviations in numeric features, the presence of numerous irrelevant categorical features requiring cardinality reduction, and class imbalance in multiclass-level data. To address these challenges and offer a unified solution to NIDS development, this study proposes a novel framework that preprocesses datasets and applies a Box-Cox transformation to the numeric features to bring them into closer alignment. Cardinality reduction was applied to the categorical features through binning. The class-imbalanced dataset was then addressed using the adaptive synthetic sampling (ADASYN) data generation method. Finally, the preprocessed, refined, and oversampled feature set was divided into training and test sets at an 80:20 ratio, and two experiments were conducted. In Experiment 1, binary classification was performed using four machine learning classifiers, with the extra trees classifier achieving the highest accuracy of 97.23% and an AUC of 0.9961. In Experiment 2, multiclass classification was performed, and the extra trees classifier again emerged as the most effective, achieving an accuracy of 81.27% and an AUC of 0.97. The results were evaluated in terms of training, testing, and total time, and a comparative analysis with state-of-the-art studies demonstrated the robustness and significance of the applied methods in developing a timely and precise solution to NIDS.
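The two preprocessing steps named above, Box-Cox normalization of skewed numeric features and binning-based cardinality reduction of categorical features, can be sketched as follows. The feature names, distributions, and bin edges are hypothetical illustrations, not taken from the study's dataset:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical heavy-tailed numeric feature (e.g. flow duration);
# Box-Cox requires strictly positive input values.
flow_duration = rng.lognormal(mean=3.0, sigma=1.0, size=1000)

# Box-Cox transform with maximum-likelihood lambda: pulls the heavy-tailed
# feature toward a roughly symmetric distribution, reducing large deviations.
transformed, lam = stats.boxcox(flow_duration)
print(f"skew before: {stats.skew(flow_duration):.2f}, after: {stats.skew(transformed):.2f}")

# Cardinality reduction by binning: collapse a high-cardinality categorical
# feature (here, hypothetical port numbers) into a few coarse bins.
ports = rng.integers(0, 65536, size=1000)
edges = [1024, 49152, 65536]            # well-known / registered / dynamic ranges
port_bin = np.digitize(ports, edges)    # maps each port to bin 0, 1, or 2
print("distinct values:", len(np.unique(ports)), "->", len(np.unique(port_bin)))
```

In the framework described above, the resulting feature set would then be oversampled with ADASYN (e.g. from the imbalanced-learn package) on the training split before fitting the extra trees classifier.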
Abstract: In this paper, we establish a new multivariate Hermite sampling series involving samples of the function itself and of its mixed and non-mixed partial derivatives of arbitrary order. This multivariate form of Hermite sampling is valid for certain classes of multivariate entire functions satisfying suitable growth conditions. We show that many known results, including those of Commun Korean Math Soc, 2002, 17: 731-740, Turk J Math, 2017, 41: 387-403, and Filomat, 2020, 34: 3339-3347, are special cases of our results. Moreover, we estimate the truncation error of this sampling based on localized sampling without a decay assumption. Illustrative examples are also presented.
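For orientation, a sketch of the classical one-variable Hermite sampling series with first-order derivative samples, of which series like the above are multivariate, higher-order generalizations (stated here for f entire of exponential type 2π and bounded on the real line):

```latex
f(z) \;=\; \sum_{n=-\infty}^{\infty}
  \bigl[\, f(n) + f'(n)\,(z-n) \,\bigr]
  \left( \frac{\sin \pi (z-n)}{\pi (z-n)} \right)^{\!2}
```

Each term reproduces both the value and the first derivative at the node n, because the squared sinc factor equals 1 with vanishing derivative at z = n and has a double zero at every other integer.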