The dimensional accuracy of machined parts is strongly influenced by the thermal behavior of machine tools (MT). Minimizing this influence represents a key objective for any modern manufacturing industry. Thermally in...The dimensional accuracy of machined parts is strongly influenced by the thermal behavior of machine tools (MT). Minimizing this influence represents a key objective for any modern manufacturing industry. Thermally induced positioning error compensation remains the most effective and practical method in this context. However, the efficiency of the compensation process depends on the quality of the model used to predict the thermal errors. The model should consistently reflect the relationships between temperature distribution in the MT structure and thermally induced positioning errors. A judicious choice of the number and location of temperature sensitive points to represent heat distribution is a key factor for robust thermal error modeling. Therefore, in this paper, the temperature sensitive points are selected following a structured thermomechanical analysis carried out to evaluate the effects of various temperature gradients on MT structure deformation intensity. The MT thermal behavior is first modeled using finite element method and validated by various experimentally measured temperature fields using temperature sensors and thermal imaging. MT Thermal behavior validation shows a maximum error of less than 10% when comparing the numerical estimations with the experimental results even under changing operation conditions. The numerical model is used through several series of simulations carried out using varied working condition to explore possible relationships between temperature distribution and thermal deformation characteristics to select the most appropriate temperature sensitive points that will be considered for building an empirical prediction model for thermal errors as function of MT thermal state. Validation tests achieved using an artificial neural network based simplified model confirmed the efficiency of the proposed temperature sensitive points allowing the prediction of the thermally induced errors with an accuracy greater than 90%.展开更多
Thin-walled aerostructural components frequently get distorted after the machining process.Reworking to correct distortions or eventually rejecting parts significantly increases the cost.This paper proposes a new appr...Thin-walled aerostructural components frequently get distorted after the machining process.Reworking to correct distortions or eventually rejecting parts significantly increases the cost.This paper proposes a new approach to correct distortions in thin-walled components by strategically applying hammer peening on target surfaces of a machined component.Aluminium alloy 7475-T7351 was chosen for this research.The study was divided in two stages.First,the residual stresses(RS)induced by four different pneumatic hammer peening conditions(modifying the stepover distance and initial offset)were characterised in a test coupon,and one of the conditions was selected for the next stage.In the second stage,a FEM model was used to predict distortions caused by machining in a representative workpiece.Then,the RS induced by hammer peening were included in an FEM model to define two hammer peening strategies(varying the coverage area)to analyse the capability to reduce distortions.Two workpieces were machined and then treated with the simulated hammer peening strategies for experimental validation.Results in the test coupon showed that pneumatic hammer peening can generate high compressive RS(-50 to350 MPa)up to 800 lm depth,with their magnitude increasing with a reduced stepover distance.Application of hammer peening over 4% of the surface of the representative workpiece reduced the machininginduced distortions by 37%,and a coverage area of 100% led to and overcorrection by a factor of five.This confirms that hammer peening can be strategically applied(in target areas and changing the percentage of coverage)to correct low or severe distortions.展开更多
BACKGROUND:Sepsis is one of the main causes of mortality in intensive care units(ICUs).Early prediction is critical for reducing injury.As approximately 36%of sepsis occur within 24 h after emergency department(ED)adm...BACKGROUND:Sepsis is one of the main causes of mortality in intensive care units(ICUs).Early prediction is critical for reducing injury.As approximately 36%of sepsis occur within 24 h after emergency department(ED)admission in Medical Information Mart for Intensive Care(MIMIC-IV),a prediction system for the ED triage stage would be helpful.Previous methods such as the quick Sequential Organ Failure Assessment(qSOFA)are more suitable for screening than for prediction in the ED,and we aimed to fi nd a light-weight,convenient prediction method through machine learning.METHODS:We accessed the MIMIC-IV for sepsis patient data in the EDs.Our dataset comprised demographic information,vital signs,and synthetic features.Extreme Gradient Boosting(XGBoost)was used to predict the risk of developing sepsis within 24 h after ED admission.Additionally,SHapley Additive exPlanations(SHAP)was employed to provide a comprehensive interpretation of the model's results.Ten percent of the patients were randomly selected as the testing set,while the remaining patients were used for training with 10-fold cross-validation.RESULTS:For 10-fold cross-validation on 14,957 samples,we reached an accuracy of 84.1%±0.3%and an area under the receiver operating characteristic(ROC)curve of 0.92±0.02.The model achieved similar performance on the testing set of 1,662 patients.SHAP values showed that the fi ve most important features were acuity,arrival transportation,age,shock index,and respiratory rate.CONCLUSION:Machine learning models such as XGBoost may be used for sepsis prediction using only a small amount of data conveniently collected in the ED triage stage.This may help reduce workload in the ED and warn medical workers against the risk of sepsis in advance.展开更多
Recent years have witnessed significant advances in utilizing machine learning-based techniques for thermal metamaterial-based structures and devices to attain favorable thermal transport behaviors.Among the various t...Recent years have witnessed significant advances in utilizing machine learning-based techniques for thermal metamaterial-based structures and devices to attain favorable thermal transport behaviors.Among the various thermal transport behaviors,achieving thermal transparency stands out as particularly desirable and intriguing.Our earlier work demonstrated the use of a thermal metamaterial-based periodic interparticle system as the underlying structure for manipulating thermal transport behavior and achieving thermal transparency.In this paper,we introduce an approach based on graph neural network to address the complex inverse design problem of determining the design parameters for a thermal metamaterial-based periodic interparticle system with the desired thermal transport behavior.Our work demonstrates that combining graph neural network modeling and inference is an effective approach for solving inverse design problems associated with attaining desirable thermal transport behaviors using thermal metamaterials.展开更多
State of health(SOH)estimation of e-mobilities operated in real and dynamic conditions is essential and challenging.Most of existing estimations are based on a fixed constant current charging and discharging aging pro...State of health(SOH)estimation of e-mobilities operated in real and dynamic conditions is essential and challenging.Most of existing estimations are based on a fixed constant current charging and discharging aging profiles,which overlooked the fact that the charging and discharging profiles are random and not complete in real application.This work investigates the influence of feature engineering on the accuracy of different machine learning(ML)-based SOH estimations acting on different recharging sub-profiles where a realistic battery mission profile is considered.Fifteen features were extracted from the battery partial recharging profiles,considering different factors such as starting voltage values,charge amount,and charging sliding windows.Then,features were selected based on a feature selection pipeline consisting of filtering and supervised ML-based subset selection.Multiple linear regression(MLR),Gaussian process regression(GPR),and support vector regression(SVR)were applied to estimate SOH,and root mean square error(RMSE)was used to evaluate and compare the estimation performance.The results showed that the feature selection pipeline can improve SOH estimation accuracy by 55.05%,2.57%,and 2.82%for MLR,GPR and SVR respectively.It was demonstrated that the estimation based on partial charging profiles with lower starting voltage,large charge,and large sliding window size is more likely to achieve higher accuracy.This work hopes to give some insights into the supervised ML-based feature engineering acting on random partial recharges on SOH estimation performance and tries to fill the gap of effective SOH estimation between theoretical study and real dynamic application.展开更多
The selection of important factors in machine learning-based susceptibility assessments is crucial to obtain reliable susceptibility results.In this study,metaheuristic optimization and feature selection techniques we...The selection of important factors in machine learning-based susceptibility assessments is crucial to obtain reliable susceptibility results.In this study,metaheuristic optimization and feature selection techniques were applied to identify the most important input parameters for mapping debris flow susceptibility in the southern mountain area of Chengde City in Hebei Province,China,by using machine learning algorithms.In total,133 historical debris flow records and 16 related factors were selected.The support vector machine(SVM)was first used as the base classifier,and then a hybrid model was introduced by a two-step process.First,the particle swarm optimization(PSO)algorithm was employed to select the SVM model hyperparameters.Second,two feature selection algorithms,namely principal component analysis(PCA)and PSO,were integrated into the PSO-based SVM model,which generated the PCA-PSO-SVM and FS-PSO-SVM models,respectively.Three statistical metrics(accuracy,recall,and specificity)and the area under the receiver operating characteristic curve(AUC)were employed to evaluate and validate the performance of the models.The results indicated that the feature selection-based models exhibited the best performance,followed by the PSO-based SVM and SVM models.Moreover,the performance of the FS-PSO-SVM model was better than that of the PCA-PSO-SVM model,showing the highest AUC,accuracy,recall,and specificity values in both the training and testing processes.It was found that the selection of optimal features is crucial to improving the reliability of debris flow susceptibility assessment results.Moreover,the PSO algorithm was found to be not only an effective tool for hyperparameter optimization,but also a useful feature selection algorithm to improve prediction accuracies of debris flow susceptibility by using machine learning algorithms.The high and very high debris flow susceptibility zone appropriately covers 38.01%of the study area,where debris flow may occur under intensive human activities and heavy rainfall events.展开更多
This paper presents a comprehensive exploration into the integration of Internet of Things(IoT),big data analysis,cloud computing,and Artificial Intelligence(AI),which has led to an unprecedented era of connectivity.W...This paper presents a comprehensive exploration into the integration of Internet of Things(IoT),big data analysis,cloud computing,and Artificial Intelligence(AI),which has led to an unprecedented era of connectivity.We delve into the emerging trend of machine learning on embedded devices,enabling tasks in resource-limited environ-ments.However,the widespread adoption of machine learning raises significant privacy concerns,necessitating the development of privacy-preserving techniques.One such technique,secure multi-party computation(MPC),allows collaborative computations without exposing private inputs.Despite its potential,complex protocols and communication interactions hinder performance,especially on resource-constrained devices.Efforts to enhance efficiency have been made,but scalability remains a challenge.Given the success of GPUs in deep learning,lever-aging embedded GPUs,such as those offered by NVIDIA,emerges as a promising solution.Therefore,we propose an Embedded GPU-based Secure Two-party Computation(EG-STC)framework for Artificial Intelligence(AI)systems.To the best of our knowledge,this work represents the first endeavor to fully implement machine learning model training based on secure two-party computing on the Embedded GPU platform.Our experimental results demonstrate the effectiveness of EG-STC.On an embedded GPU with a power draw of 5 W,our implementation achieved a secure two-party matrix multiplication throughput of 5881.5 kilo-operations per millisecond(kops/ms),with an energy efficiency ratio of 1176.3 kops/ms/W.Furthermore,leveraging our EG-STC framework,we achieved an overall time acceleration ratio of 5–6 times compared to solutions running on server-grade CPUs.Our solution also exhibited a reduced runtime,requiring only 60%to 70%of the runtime of previously best-known methods on the same platform.In summary,our research contributes to the advancement of secure and efficient machine learning implementations on resource-constrained embedded devices,paving the way for broader adoption of AI technologies in various applications.展开更多
Traditional particle identification methods face timeconsuming,experience-dependent,and poor repeatability challenges in heavy-ion collisions at low and intermediate energies.Researchers urgently need solutions to the...Traditional particle identification methods face timeconsuming,experience-dependent,and poor repeatability challenges in heavy-ion collisions at low and intermediate energies.Researchers urgently need solutions to the dilemma of traditional particle identification methods.This study explores the possibility of applying intelligent learning algorithms to the particle identification of heavy-ion collisions at low and intermediate energies.Multiple intelligent algorithms,including XgBoost and TabNet,were selected to test datasets from the neutron ion multi-detector for reaction-oriented dynamics(NIMROD-ISiS)and Geant4 simulation.Tree-based machine learning algorithms and deep learning algorithms e.g.TabNet show excellent performance and generalization ability.Adding additional data features besides energy deposition can improve the algorithm’s performance when the data distribution is nonuniform.Intelligent learning algorithms can be applied to solve the particle identification problem in heavy-ion collisions at low and intermediate energies.展开更多
In recent decades,fog computing has played a vital role in executing parallel computational tasks,specifically,scientific workflow tasks.In cloud data centers,fog computing takes more time to run workflow applications...In recent decades,fog computing has played a vital role in executing parallel computational tasks,specifically,scientific workflow tasks.In cloud data centers,fog computing takes more time to run workflow applications.Therefore,it is essential to develop effective models for Virtual Machine(VM)allocation and task scheduling in fog computing environments.Effective task scheduling,VM migration,and allocation,altogether optimize the use of computational resources across different fog nodes.This process ensures that the tasks are executed with minimal energy consumption,which reduces the chances of resource bottlenecks.In this manuscript,the proposed framework comprises two phases:(i)effective task scheduling using a fractional selectivity approach and(ii)VM allocation by proposing an algorithm by the name of Fitness Sharing Chaotic Particle Swarm Optimization(FSCPSO).The proposed FSCPSO algorithm integrates the concepts of chaos theory and fitness sharing that effectively balance both global exploration and local exploitation.This balance enables the use of a wide range of solutions that leads to minimal total cost and makespan,in comparison to other traditional optimization algorithms.The FSCPSO algorithm’s performance is analyzed using six evaluation measures namely,Load Balancing Level(LBL),Average Resource Utilization(ARU),total cost,makespan,energy consumption,and response time.In relation to the conventional optimization algorithms,the FSCPSO algorithm achieves a higher LBL of 39.12%,ARU of 58.15%,a minimal total cost of 1175,and a makespan of 85.87 ms,particularly when evaluated for 50 tasks.展开更多
The rapid evolution of wireless communication technologies has underscored the critical role of antennas in ensuring seamless connectivity.Antenna defects,ranging from manufacturing imperfections to environmental wear...The rapid evolution of wireless communication technologies has underscored the critical role of antennas in ensuring seamless connectivity.Antenna defects,ranging from manufacturing imperfections to environmental wear,pose significant challenges to the reliability and performance of communication systems.This review paper navigates the landscape of antenna defect detection,emphasizing the need for a nuanced understanding of various defect types and the associated challenges in visual detection.This review paper serves as a valuable resource for researchers,engineers,and practitioners engaged in the design and maintenance of communication systems.The insights presented here pave the way for enhanced reliability in antenna systems through targeted defect detection measures.In this study,a comprehensive literature analysis on computer vision algorithms that are employed in end-of-line visual inspection of antenna parts is presented.The PRISMA principles will be followed throughout the review,and its goals are to provide a summary of recent research,identify relevant computer vision techniques,and evaluate how effective these techniques are in discovering defects during inspections.It contains articles from scholarly journals as well as papers presented at conferences up until June 2023.This research utilized search phrases that were relevant,and papers were chosen based on whether or not they met certain inclusion and exclusion criteria.In this study,several different computer vision approaches,such as feature extraction and defect classification,are broken down and analyzed.Additionally,their applicability and performance are discussed.The review highlights the significance of utilizing a wide variety of datasets and measurement criteria.The findings of this study add to the existing body of knowledge and point researchers in the direction of promising new areas of investigation,such as real-time inspection systems and multispectral imaging.This review,on its whole,offers a complete study of computer vision approaches for quality control in antenna parts.It does so by providing helpful insights and drawing attention to areas that require additional exploration.展开更多
Analyzing big data, especially medical data, helps to provide good health care to patients and face the risks of death. The COVID-19 pandemic has had a significant impact on public health worldwide, emphasizing the ne...Analyzing big data, especially medical data, helps to provide good health care to patients and face the risks of death. The COVID-19 pandemic has had a significant impact on public health worldwide, emphasizing the need for effective risk prediction models. Machine learning (ML) techniques have shown promise in analyzing complex data patterns and predicting disease outcomes. The accuracy of these techniques is greatly affected by changing their parameters. Hyperparameter optimization plays a crucial role in improving model performance. In this work, the Particle Swarm Optimization (PSO) algorithm was used to effectively search the hyperparameter space and improve the predictive power of the machine learning models by identifying the optimal hyperparameters that can provide the highest accuracy. A dataset with a variety of clinical and epidemiological characteristics linked to COVID-19 cases was used in this study. Various machine learning models, including Random Forests, Decision Trees, Support Vector Machines, and Neural Networks, were utilized to capture the complex relationships present in the data. To evaluate the predictive performance of the models, the accuracy metric was employed. The experimental findings showed that the suggested method of estimating COVID-19 risk is effective. When compared to baseline models, the optimized machine learning models performed better and produced better results.展开更多
Aiming at the problems of the traditional method of assessing distribution of particle size in bench blasting, a support vector machines (SVMs) regression methodology was used to predict the mean particle size (X50...Aiming at the problems of the traditional method of assessing distribution of particle size in bench blasting, a support vector machines (SVMs) regression methodology was used to predict the mean particle size (X50) resulting from rock blast fragmentation in various mines based on the statistical learning theory. The data base consisted of blast design parameters, explosive parameters, modulus of elasticity and in-situ block size. The seven input independent variables used for the SVMs model for the prediction of X50 of rock blast fragmentation were the ratio of bench height to drilled burden (H/B), ratio of spacing to burden (S/B), ratio of burden to hole diameter (B/D), ratio of stemming to burden (T/B), powder factor (Pf), modulus of elasticity (E) and in-situ block size (XB). After using the 90 sets of the measured data in various mines and rock formations in the world for training and testing, the model was applied to 12 another blast data for validation of the trained support vector regression (SVR) model. The prediction results of SVR were compared with those of artificial neural network (ANN), multivariate regression analysis (MVRA) models, conventional Kuznetsov method and the measured X50 values. The proposed method shows promising results and the prediction accuracy of SVMs model is acceptable.展开更多
TiC particles reinforced Ni-based alloy composite coatings were prepared on 7005 aluminum alloy by plasma spray. The effects of load, speed and temperature on the tribological behavior and mechanisms of the composite ...TiC particles reinforced Ni-based alloy composite coatings were prepared on 7005 aluminum alloy by plasma spray. The effects of load, speed and temperature on the tribological behavior and mechanisms of the composite coatings under dry friction were researched. The wear prediction model of the composite coatings was established based on the least square support vector machine (LS-SVM). The results show that the composite coatings exhibit smaller friction coefficients and wear losses than the Ni-based alloy coatings under different friction conditions. The predicting time of the LS-SVM model is only 12.93%of that of the BP-ANN model, and the predicting accuracies on friction coefficients and wear losses of the former are increased by 58.74%and 41.87%compared with the latter. The LS-SVM model can effectively predict the tribological behavior of the TiCP/Ni-base alloy composite coatings under dry friction.展开更多
Anodic bonding between silicon and glass with dou bl e electric fields is presented.By this means,the damage caused by the electric f ield to the movable part during bonding can be avoided and the experiment result s ...Anodic bonding between silicon and glass with dou bl e electric fields is presented.By this means,the damage caused by the electric f ield to the movable part during bonding can be avoided and the experiment result s show that.展开更多
Tactile perception plays a vital role for the human body and is also highly desired for smart prosthesis and advanced robots.Compared to active sensing devices,passive piezoelectric and triboelectric tactile sensors c...Tactile perception plays a vital role for the human body and is also highly desired for smart prosthesis and advanced robots.Compared to active sensing devices,passive piezoelectric and triboelectric tactile sensors consume less power,but lack the capability to resolve static stimuli.Here,we address this issue by utilizing the unique polarization chemistry of conjugated polymers for the first time and propose a new type of bioinspired,passive,and bio-friendly tactile sensors for resolving both static and dynamic stimuli.Specifically,to emulate the polarization process of natural sensory cells,conjugated polymers(including poly(3,4-ethylenedioxythiophen e):poly(styrenesulfonate),polyaniline,or polypyrrole)are controllably polarized into two opposite states to create artificial potential differences.The controllable and reversible polarization process of the conjugated polymers is fully in situ characterized.Then,a micro-structured ionic electrolyte is employed to imitate the natural ion channels and to encode external touch stimulations into the variation in potential difference outputs.Compared with the currently existing tactile sensing devices,the developed tactile sensors feature distinct characteristics including fully organic composition,high sensitivity(up to 773 mV N^(−1)),ultralow power consumption(nW),as well as superior bio-friendliness.As demonstrations,both single point tactile perception(surface texture perception and material property perception)and two-dimensional tactile recognitions(shape or profile perception)with high accuracy are successfully realized using self-defined machine learning algorithms.This tactile sensing concept innovation based on the polarization chemistry of conjugated polymers opens up a new path to create robotic tactile sensors and prosthetic electronic skins.展开更多
Machine learning(ML)is a type of artificial intelligence that assists computers in the acquisition of knowledge through data analysis,thus creating machines that can complete tasks otherwise requiring human intelligen...Machine learning(ML)is a type of artificial intelligence that assists computers in the acquisition of knowledge through data analysis,thus creating machines that can complete tasks otherwise requiring human intelligence.Among its various applications,it has proven groundbreaking in healthcare as well,both in clinical practice and research.In this editorial,we succinctly introduce ML applications and present a study,featured in the latest issue of the World Journal of Clinical Cases.The authors of this study conducted an analysis using both multiple linear regression(MLR)and ML methods to investigate the significant factors that may impact the estimated glomerular filtration rate in healthy women with and without non-alcoholic fatty liver disease(NAFLD).Their results implicated age as the most important determining factor in both groups,followed by lactic dehydrogenase,uric acid,forced expiratory volume in one second,and albumin.In addition,for the NAFLD-group,the 5th and 6th most important impact factors were thyroid-stimulating hormone and systolic blood pressure,as compared to plasma calcium and body fat for the NAFLD+group.However,the study's distinctive contribution lies in its adoption of ML methodologies,showcasing their superiority over traditional statistical approaches(herein MLR),thereby highlighting the potential of ML to represent an invaluable advanced adjunct tool in clinical practice and research.展开更多
A Kalman filter used in strapdown AHRS (Attitude Heading Reference System) based on micro machined inertial sensors is introduced. The composition and principle of the system are described. The attitude algorithm and ...A Kalman filter used in strapdown AHRS (Attitude Heading Reference System) based on micro machined inertial sensors is introduced. The composition and principle of the system are described. The attitude algorithm and error model of the system are derived based on the quaternion formulation. The real time quaternion based Kalman filter is designed. Simulation results show that accuracy of the system is better than 0.04 degree without disturbance of lateral acceleration and reduced to 0.44 degree with l...展开更多
In order to reduce the weight of airplane and increase its mechanical behaviors, more and more large integrated parts are applied in modern aviation industry. When machining thin-walled aeroplane parts, more than 90% ...In order to reduce the weight of airplane and increase its mechanical behaviors, more and more large integrated parts are applied in modern aviation industry. When machining thin-walled aeroplane parts, more than 90% of the materials would be removed, resulting in severe distortion of the parts due to the weakened rigidity and the release of residual stress. This might also lead to stress concentration and damage of the parts. The effect of material removal from residually stressed billet is simulated using FEA software MSC. Marc and the causations of distortion is analyzed. To verify the finite element simulation, a high speed milling test on aluminum alloy 7050T7351 is carried out. The results show that the simulation result is consistent with the experimental one. It is concluded that the release of residual stress is the main cause of machining distortion.展开更多
According to statistic data,machinery faults contribute to largest proportion of High-voltage circuit breaker failures,and traditional maintenance methods exist some disadvantages for that issue.Therefore,based on the...According to statistic data,machinery faults contribute to largest proportion of High-voltage circuit breaker failures,and traditional maintenance methods exist some disadvantages for that issue.Therefore,based on the wavelet packet decomposition approach and support vector machines,a new diagnosis model is proposed for such fault diagnoses in this study.The vibration eigenvalue extraction is analyzed through wavelet packet decomposition,and a four-layer support vector machine is constituted as a fault classifier.The Gaussian radial basis function is employed as the kernel function for the classifier.The penalty parameter c and kernel parameterδof the support vector machine are vital for the diagnostic accuracy,and these parameters must be carefully predetermined.Thus,a particle swarm optimizationsupport vector machine model is developed in which the optimal parameters c andδfor the support vector machine in each layer are determined by the particle swarm algorithm.The validity of this fault diagnosis model is determined with a real dataset from the operation experiment.Moreover,comparative investigations of fault diagnosis experiments with a normal support vector machine and a particle swarm optimization back-propagation neural network are also implemented.The results indicate that the proposed fault diagnosis model yields better accuracy and e-ciency than these other models.展开更多
An approach which combines particle swarm optimization and support vector machine(PSO–SVM)is proposed to forecast large-scale goaf instability(LSGI).Firstly,influencing factors of goaf safety are analyzed,and followi...An approach which combines particle swarm optimization and support vector machine(PSO–SVM)is proposed to forecast large-scale goaf instability(LSGI).Firstly,influencing factors of goaf safety are analyzed,and following parameters were selected as evaluation indexes in the LSGI:uniaxial compressive strength(UCS)of rock,elastic modulus(E)of rock,rock quality designation(RQD),area ration of pillar(Sp),the ratio of width to height of the pillar(w/h),depth of ore body(H),volume of goaf(V),dip of ore body(a)and area of goaf(Sg).Then LSGI forecasting model by PSO-SVM was established according to the influencing factors.The performance of hybrid model(PSO+SVM=PSO–SVM)has been compared with the grid search method of support vector machine(GSM–SVM)model.The actual data of 40 goafs are applied to research the forecasting ability of the proposed method,and two cases of underground mine are also validated by the proposed model.The results indicated that the heuristic algorithm of PSO can speed up the SVM parameter optimization search,and the predictive ability of the PSO–SVM model with the RBF kernel function is acceptable and robust,which might hold a high potential to become a useful tool in goaf risky prediction research.展开更多
文摘The dimensional accuracy of machined parts is strongly influenced by the thermal behavior of machine tools (MT). Minimizing this influence represents a key objective for any modern manufacturing industry. Thermally induced positioning error compensation remains the most effective and practical method in this context. However, the efficiency of the compensation process depends on the quality of the model used to predict the thermal errors. The model should consistently reflect the relationships between temperature distribution in the MT structure and thermally induced positioning errors. A judicious choice of the number and location of temperature sensitive points to represent heat distribution is a key factor for robust thermal error modeling. Therefore, in this paper, the temperature sensitive points are selected following a structured thermomechanical analysis carried out to evaluate the effects of various temperature gradients on MT structure deformation intensity. The MT thermal behavior is first modeled using finite element method and validated by various experimentally measured temperature fields using temperature sensors and thermal imaging. MT Thermal behavior validation shows a maximum error of less than 10% when comparing the numerical estimations with the experimental results even under changing operation conditions. The numerical model is used through several series of simulations carried out using varied working condition to explore possible relationships between temperature distribution and thermal deformation characteristics to select the most appropriate temperature sensitive points that will be considered for building an empirical prediction model for thermal errors as function of MT thermal state. Validation tests achieved using an artificial neural network based simplified model confirmed the efficiency of the proposed temperature sensitive points allowing the prediction of the thermally induced errors with an accuracy greater than 90%.
基金the financial support given from Elkartek Program to the project FRONTIERS 2022-Superficies multifuncionales en la frontera del conocimiento(KK-2022/00109)LOFAMO grant given by EPSRC(EP/X023281/1).
文摘Thin-walled aerostructural components frequently get distorted after the machining process.Reworking to correct distortions or eventually rejecting parts significantly increases the cost.This paper proposes a new approach to correct distortions in thin-walled components by strategically applying hammer peening on target surfaces of a machined component.Aluminium alloy 7475-T7351 was chosen for this research.The study was divided in two stages.First,the residual stresses(RS)induced by four different pneumatic hammer peening conditions(modifying the stepover distance and initial offset)were characterised in a test coupon,and one of the conditions was selected for the next stage.In the second stage,a FEM model was used to predict distortions caused by machining in a representative workpiece.Then,the RS induced by hammer peening were included in an FEM model to define two hammer peening strategies(varying the coverage area)to analyse the capability to reduce distortions.Two workpieces were machined and then treated with the simulated hammer peening strategies for experimental validation.Results in the test coupon showed that pneumatic hammer peening can generate high compressive RS(-50 to350 MPa)up to 800 lm depth,with their magnitude increasing with a reduced stepover distance.Application of hammer peening over 4% of the surface of the representative workpiece reduced the machininginduced distortions by 37%,and a coverage area of 100% led to and overcorrection by a factor of five.This confirms that hammer peening can be strategically applied(in target areas and changing the percentage of coverage)to correct low or severe distortions.
基金supported by the National Key Research and Development Program of China(2021YFC2500803)the CAMS Innovation Fund for Medical Sciences(2021-I2M-1-056).
文摘BACKGROUND:Sepsis is one of the main causes of mortality in intensive care units(ICUs).Early prediction is critical for reducing injury.As approximately 36%of sepsis occur within 24 h after emergency department(ED)admission in Medical Information Mart for Intensive Care(MIMIC-IV),a prediction system for the ED triage stage would be helpful.Previous methods such as the quick Sequential Organ Failure Assessment(qSOFA)are more suitable for screening than for prediction in the ED,and we aimed to fi nd a light-weight,convenient prediction method through machine learning.METHODS:We accessed the MIMIC-IV for sepsis patient data in the EDs.Our dataset comprised demographic information,vital signs,and synthetic features.Extreme Gradient Boosting(XGBoost)was used to predict the risk of developing sepsis within 24 h after ED admission.Additionally,SHapley Additive exPlanations(SHAP)was employed to provide a comprehensive interpretation of the model's results.Ten percent of the patients were randomly selected as the testing set,while the remaining patients were used for training with 10-fold cross-validation.RESULTS:For 10-fold cross-validation on 14,957 samples,we reached an accuracy of 84.1%±0.3%and an area under the receiver operating characteristic(ROC)curve of 0.92±0.02.The model achieved similar performance on the testing set of 1,662 patients.SHAP values showed that the fi ve most important features were acuity,arrival transportation,age,shock index,and respiratory rate.CONCLUSION:Machine learning models such as XGBoost may be used for sepsis prediction using only a small amount of data conveniently collected in the ED triage stage.This may help reduce workload in the ED and warn medical workers against the risk of sepsis in advance.
基金funding from the National Natural Science Foundation of China (Grant Nos.12035004 and 12320101004)the Innovation Program of Shanghai Municipal Education Commission (Grant No.2023ZKZD06).
文摘Recent years have witnessed significant advances in utilizing machine learning-based techniques for thermal metamaterial-based structures and devices to attain favorable thermal transport behaviors.Among the various thermal transport behaviors,achieving thermal transparency stands out as particularly desirable and intriguing.Our earlier work demonstrated the use of a thermal metamaterial-based periodic interparticle system as the underlying structure for manipulating thermal transport behavior and achieving thermal transparency.In this paper,we introduce an approach based on graph neural network to address the complex inverse design problem of determining the design parameters for a thermal metamaterial-based periodic interparticle system with the desired thermal transport behavior.Our work demonstrates that combining graph neural network modeling and inference is an effective approach for solving inverse design problems associated with attaining desirable thermal transport behaviors using thermal metamaterials.
基金funded by China Scholarship Council.The fund number is 202108320111 and 202208320055。
文摘State of health(SOH)estimation of e-mobilities operated in real and dynamic conditions is essential and challenging.Most of existing estimations are based on a fixed constant current charging and discharging aging profiles,which overlooked the fact that the charging and discharging profiles are random and not complete in real application.This work investigates the influence of feature engineering on the accuracy of different machine learning(ML)-based SOH estimations acting on different recharging sub-profiles where a realistic battery mission profile is considered.Fifteen features were extracted from the battery partial recharging profiles,considering different factors such as starting voltage values,charge amount,and charging sliding windows.Then,features were selected based on a feature selection pipeline consisting of filtering and supervised ML-based subset selection.Multiple linear regression(MLR),Gaussian process regression(GPR),and support vector regression(SVR)were applied to estimate SOH,and root mean square error(RMSE)was used to evaluate and compare the estimation performance.The results showed that the feature selection pipeline can improve SOH estimation accuracy by 55.05%,2.57%,and 2.82%for MLR,GPR and SVR respectively.It was demonstrated that the estimation based on partial charging profiles with lower starting voltage,large charge,and large sliding window size is more likely to achieve higher accuracy.This work hopes to give some insights into the supervised ML-based feature engineering acting on random partial recharges on SOH estimation performance and tries to fill the gap of effective SOH estimation between theoretical study and real dynamic application.
基金supported by the Second Tibetan Plateau Scientific Expedition and Research Program(Grant no.2019QZKK0904)Natural Science Foundation of Hebei Province(Grant no.D2022403032)S&T Program of Hebei(Grant no.E2021403001).
文摘The selection of important factors in machine learning-based susceptibility assessments is crucial to obtain reliable susceptibility results.In this study,metaheuristic optimization and feature selection techniques were applied to identify the most important input parameters for mapping debris flow susceptibility in the southern mountain area of Chengde City in Hebei Province,China,by using machine learning algorithms.In total,133 historical debris flow records and 16 related factors were selected.The support vector machine(SVM)was first used as the base classifier,and then a hybrid model was introduced by a two-step process.First,the particle swarm optimization(PSO)algorithm was employed to select the SVM model hyperparameters.Second,two feature selection algorithms,namely principal component analysis(PCA)and PSO,were integrated into the PSO-based SVM model,which generated the PCA-PSO-SVM and FS-PSO-SVM models,respectively.Three statistical metrics(accuracy,recall,and specificity)and the area under the receiver operating characteristic curve(AUC)were employed to evaluate and validate the performance of the models.The results indicated that the feature selection-based models exhibited the best performance,followed by the PSO-based SVM and SVM models.Moreover,the performance of the FS-PSO-SVM model was better than that of the PCA-PSO-SVM model,showing the highest AUC,accuracy,recall,and specificity values in both the training and testing processes.It was found that the selection of optimal features is crucial to improving the reliability of debris flow susceptibility assessment results.Moreover,the PSO algorithm was found to be not only an effective tool for hyperparameter optimization,but also a useful feature selection algorithm to improve prediction accuracies of debris flow susceptibility by using machine learning algorithms.The high and very high debris flow susceptibility zone appropriately covers 38.01%of the study area,where debris flow may occur under intensive human activities and heavy rainfall events.
基金supported in part by Major Science and Technology Demonstration Project of Jiangsu Provincial Key R&D Program under Grant No.BE2023025in part by the National Natural Science Foundation of China under Grant No.62302238+2 种基金in part by the Natural Science Foundation of Jiangsu Province under Grant No.BK20220388in part by the Natural Science Research Project of Colleges and Universities in Jiangsu Province under Grant No.22KJB520004in part by the China Postdoctoral Science Foundation under Grant No.2022M711689.
文摘This paper presents a comprehensive exploration into the integration of Internet of Things(IoT),big data analysis,cloud computing,and Artificial Intelligence(AI),which has led to an unprecedented era of connectivity.We delve into the emerging trend of machine learning on embedded devices,enabling tasks in resource-limited environ-ments.However,the widespread adoption of machine learning raises significant privacy concerns,necessitating the development of privacy-preserving techniques.One such technique,secure multi-party computation(MPC),allows collaborative computations without exposing private inputs.Despite its potential,complex protocols and communication interactions hinder performance,especially on resource-constrained devices.Efforts to enhance efficiency have been made,but scalability remains a challenge.Given the success of GPUs in deep learning,lever-aging embedded GPUs,such as those offered by NVIDIA,emerges as a promising solution.Therefore,we propose an Embedded GPU-based Secure Two-party Computation(EG-STC)framework for Artificial Intelligence(AI)systems.To the best of our knowledge,this work represents the first endeavor to fully implement machine learning model training based on secure two-party computing on the Embedded GPU platform.Our experimental results demonstrate the effectiveness of EG-STC.On an embedded GPU with a power draw of 5 W,our implementation achieved a secure two-party matrix multiplication throughput of 5881.5 kilo-operations per millisecond(kops/ms),with an energy efficiency ratio of 1176.3 kops/ms/W.Furthermore,leveraging our EG-STC framework,we achieved an overall time acceleration ratio of 5–6 times compared to solutions running on server-grade CPUs.Our solution also exhibited a reduced runtime,requiring only 60%to 70%of the runtime of previously best-known methods on the same platform.In summary,our research contributes to the advancement of secure and efficient machine learning implementations on resource-constrained embedded devices,paving the way for broader adoption of AI technologies in various applications.
基金This work was supported by the Strategic Priority Research Program of Chinese Academy of Sciences(No.XDB34030000)the National Key Research and Development Program of China(No.2022YFA1602404)+1 种基金the National Natural Science Foundation(No.U1832129)the Youth Innovation Promotion Association CAS(No.2017309).
文摘Traditional particle identification methods face timeconsuming,experience-dependent,and poor repeatability challenges in heavy-ion collisions at low and intermediate energies.Researchers urgently need solutions to the dilemma of traditional particle identification methods.This study explores the possibility of applying intelligent learning algorithms to the particle identification of heavy-ion collisions at low and intermediate energies.Multiple intelligent algorithms,including XgBoost and TabNet,were selected to test datasets from the neutron ion multi-detector for reaction-oriented dynamics(NIMROD-ISiS)and Geant4 simulation.Tree-based machine learning algorithms and deep learning algorithms e.g.TabNet show excellent performance and generalization ability.Adding additional data features besides energy deposition can improve the algorithm’s performance when the data distribution is nonuniform.Intelligent learning algorithms can be applied to solve the particle identification problem in heavy-ion collisions at low and intermediate energies.
基金This work was supported in part by the National Science and Technology Council of Taiwan,under Contract NSTC 112-2410-H-324-001-MY2.
文摘In recent decades,fog computing has played a vital role in executing parallel computational tasks,specifically,scientific workflow tasks.In cloud data centers,fog computing takes more time to run workflow applications.Therefore,it is essential to develop effective models for Virtual Machine(VM)allocation and task scheduling in fog computing environments.Effective task scheduling,VM migration,and allocation,altogether optimize the use of computational resources across different fog nodes.This process ensures that the tasks are executed with minimal energy consumption,which reduces the chances of resource bottlenecks.In this manuscript,the proposed framework comprises two phases:(i)effective task scheduling using a fractional selectivity approach and(ii)VM allocation by proposing an algorithm by the name of Fitness Sharing Chaotic Particle Swarm Optimization(FSCPSO).The proposed FSCPSO algorithm integrates the concepts of chaos theory and fitness sharing that effectively balance both global exploration and local exploitation.This balance enables the use of a wide range of solutions that leads to minimal total cost and makespan,in comparison to other traditional optimization algorithms.The FSCPSO algorithm’s performance is analyzed using six evaluation measures namely,Load Balancing Level(LBL),Average Resource Utilization(ARU),total cost,makespan,energy consumption,and response time.In relation to the conventional optimization algorithms,the FSCPSO algorithm achieves a higher LBL of 39.12%,ARU of 58.15%,a minimal total cost of 1175,and a makespan of 85.87 ms,particularly when evaluated for 50 tasks.
文摘The rapid evolution of wireless communication technologies has underscored the critical role of antennas in ensuring seamless connectivity.Antenna defects,ranging from manufacturing imperfections to environmental wear,pose significant challenges to the reliability and performance of communication systems.This review paper navigates the landscape of antenna defect detection,emphasizing the need for a nuanced understanding of various defect types and the associated challenges in visual detection.This review paper serves as a valuable resource for researchers,engineers,and practitioners engaged in the design and maintenance of communication systems.The insights presented here pave the way for enhanced reliability in antenna systems through targeted defect detection measures.In this study,a comprehensive literature analysis on computer vision algorithms that are employed in end-of-line visual inspection of antenna parts is presented.The PRISMA principles will be followed throughout the review,and its goals are to provide a summary of recent research,identify relevant computer vision techniques,and evaluate how effective these techniques are in discovering defects during inspections.It contains articles from scholarly journals as well as papers presented at conferences up until June 2023.This research utilized search phrases that were relevant,and papers were chosen based on whether or not they met certain inclusion and exclusion criteria.In this study,several different computer vision approaches,such as feature extraction and defect classification,are broken down and analyzed.Additionally,their applicability and performance are discussed.The review highlights the significance of utilizing a wide variety of datasets and measurement criteria.The findings of this study add to the existing body of knowledge and point researchers in the direction of promising new areas of investigation,such as real-time inspection systems and multispectral imaging.This review,on its whole,offers a complete study of computer vision approaches for quality control in antenna parts.It does so by providing helpful insights and drawing attention to areas that require additional exploration.
文摘Analyzing big data, especially medical data, helps to provide good health care to patients and face the risks of death. The COVID-19 pandemic has had a significant impact on public health worldwide, emphasizing the need for effective risk prediction models. Machine learning (ML) techniques have shown promise in analyzing complex data patterns and predicting disease outcomes. The accuracy of these techniques is greatly affected by changing their parameters. Hyperparameter optimization plays a crucial role in improving model performance. In this work, the Particle Swarm Optimization (PSO) algorithm was used to effectively search the hyperparameter space and improve the predictive power of the machine learning models by identifying the optimal hyperparameters that can provide the highest accuracy. A dataset with a variety of clinical and epidemiological characteristics linked to COVID-19 cases was used in this study. Various machine learning models, including Random Forests, Decision Trees, Support Vector Machines, and Neural Networks, were utilized to capture the complex relationships present in the data. To evaluate the predictive performance of the models, the accuracy metric was employed. The experimental findings showed that the suggested method of estimating COVID-19 risk is effective. When compared to baseline models, the optimized machine learning models performed better and produced better results.
基金Foundation item:Project (2006BAB02A02) supported by the National Key Technology R&D Program during the 11th Five-year Plan Period of ChinaProject (CX2011B119) supported by the Graduated Students' Research and Innovation Fund of Hunan Province, ChinaProject (2009ssxt230) supported by the Central South University Innovation Fund,China
文摘Aiming at the problems of the traditional method of assessing distribution of particle size in bench blasting, a support vector machines (SVMs) regression methodology was used to predict the mean particle size (X50) resulting from rock blast fragmentation in various mines based on the statistical learning theory. The data base consisted of blast design parameters, explosive parameters, modulus of elasticity and in-situ block size. The seven input independent variables used for the SVMs model for the prediction of X50 of rock blast fragmentation were the ratio of bench height to drilled burden (H/B), ratio of spacing to burden (S/B), ratio of burden to hole diameter (B/D), ratio of stemming to burden (T/B), powder factor (Pf), modulus of elasticity (E) and in-situ block size (XB). After using the 90 sets of the measured data in various mines and rock formations in the world for training and testing, the model was applied to 12 another blast data for validation of the trained support vector regression (SVR) model. The prediction results of SVR were compared with those of artificial neural network (ANN), multivariate regression analysis (MVRA) models, conventional Kuznetsov method and the measured X50 values. The proposed method shows promising results and the prediction accuracy of SVMs model is acceptable.
文摘TiC particles reinforced Ni-based alloy composite coatings were prepared on 7005 aluminum alloy by plasma spray. The effects of load, speed and temperature on the tribological behavior and mechanisms of the composite coatings under dry friction were researched. The wear prediction model of the composite coatings was established based on the least square support vector machine (LS-SVM). The results show that the composite coatings exhibit smaller friction coefficients and wear losses than the Ni-based alloy coatings under different friction conditions. The predicting time of the LS-SVM model is only 12.93%of that of the BP-ANN model, and the predicting accuracies on friction coefficients and wear losses of the former are increased by 58.74%and 41.87%compared with the latter. The LS-SVM model can effectively predict the tribological behavior of the TiCP/Ni-base alloy composite coatings under dry friction.
文摘Anodic bonding between silicon and glass with dou bl e electric fields is presented.By this means,the damage caused by the electric f ield to the movable part during bonding can be avoided and the experiment result s show that.
基金financially supported by the Sichuan Science and Technology Program(2022YFS0025 and 2024YFFK0133)supported by the“Fundamental Research Funds for the Central Universities of China.”。
文摘Tactile perception plays a vital role for the human body and is also highly desired for smart prosthesis and advanced robots.Compared to active sensing devices,passive piezoelectric and triboelectric tactile sensors consume less power,but lack the capability to resolve static stimuli.Here,we address this issue by utilizing the unique polarization chemistry of conjugated polymers for the first time and propose a new type of bioinspired,passive,and bio-friendly tactile sensors for resolving both static and dynamic stimuli.Specifically,to emulate the polarization process of natural sensory cells,conjugated polymers(including poly(3,4-ethylenedioxythiophen e):poly(styrenesulfonate),polyaniline,or polypyrrole)are controllably polarized into two opposite states to create artificial potential differences.The controllable and reversible polarization process of the conjugated polymers is fully in situ characterized.Then,a micro-structured ionic electrolyte is employed to imitate the natural ion channels and to encode external touch stimulations into the variation in potential difference outputs.Compared with the currently existing tactile sensing devices,the developed tactile sensors feature distinct characteristics including fully organic composition,high sensitivity(up to 773 mV N^(−1)),ultralow power consumption(nW),as well as superior bio-friendliness.As demonstrations,both single point tactile perception(surface texture perception and material property perception)and two-dimensional tactile recognitions(shape or profile perception)with high accuracy are successfully realized using self-defined machine learning algorithms.This tactile sensing concept innovation based on the polarization chemistry of conjugated polymers opens up a new path to create robotic tactile sensors and prosthetic electronic skins.
文摘Machine learning(ML)is a type of artificial intelligence that assists computers in the acquisition of knowledge through data analysis,thus creating machines that can complete tasks otherwise requiring human intelligence.Among its various applications,it has proven groundbreaking in healthcare as well,both in clinical practice and research.In this editorial,we succinctly introduce ML applications and present a study,featured in the latest issue of the World Journal of Clinical Cases.The authors of this study conducted an analysis using both multiple linear regression(MLR)and ML methods to investigate the significant factors that may impact the estimated glomerular filtration rate in healthy women with and without non-alcoholic fatty liver disease(NAFLD).Their results implicated age as the most important determining factor in both groups,followed by lactic dehydrogenase,uric acid,forced expiratory volume in one second,and albumin.In addition,for the NAFLD-group,the 5th and 6th most important impact factors were thyroid-stimulating hormone and systolic blood pressure,as compared to plasma calcium and body fat for the NAFLD+group.However,the study's distinctive contribution lies in its adoption of ML methodologies,showcasing their superiority over traditional statistical approaches(herein MLR),thereby highlighting the potential of ML to represent an invaluable advanced adjunct tool in clinical practice and research.
文摘A Kalman filter used in strapdown AHRS (Attitude Heading Reference System) based on micro machined inertial sensors is introduced. The composition and principle of the system are described. The attitude algorithm and error model of the system are derived based on the quaternion formulation. The real time quaternion based Kalman filter is designed. Simulation results show that accuracy of the system is better than 0.04 degree without disturbance of lateral acceleration and reduced to 0.44 degree with l...
文摘In order to reduce the weight of airplane and increase its mechanical behaviors, more and more large integrated parts are applied in modern aviation industry. When machining thin-walled aeroplane parts, more than 90% of the materials would be removed, resulting in severe distortion of the parts due to the weakened rigidity and the release of residual stress. This might also lead to stress concentration and damage of the parts. The effect of material removal from residually stressed billet is simulated using FEA software MSC. Marc and the causations of distortion is analyzed. To verify the finite element simulation, a high speed milling test on aluminum alloy 7050T7351 is carried out. The results show that the simulation result is consistent with the experimental one. It is concluded that the release of residual stress is the main cause of machining distortion.
基金Supported by National Natural Science Foundation of China(Grant No.51705372)National Science and Technology Project of the Power Grid of China(Grant No.5211DS16002L).
文摘According to statistic data,machinery faults contribute to largest proportion of High-voltage circuit breaker failures,and traditional maintenance methods exist some disadvantages for that issue.Therefore,based on the wavelet packet decomposition approach and support vector machines,a new diagnosis model is proposed for such fault diagnoses in this study.The vibration eigenvalue extraction is analyzed through wavelet packet decomposition,and a four-layer support vector machine is constituted as a fault classifier.The Gaussian radial basis function is employed as the kernel function for the classifier.The penalty parameter c and kernel parameterδof the support vector machine are vital for the diagnostic accuracy,and these parameters must be carefully predetermined.Thus,a particle swarm optimizationsupport vector machine model is developed in which the optimal parameters c andδfor the support vector machine in each layer are determined by the particle swarm algorithm.The validity of this fault diagnosis model is determined with a real dataset from the operation experiment.Moreover,comparative investigations of fault diagnosis experiments with a normal support vector machine and a particle swarm optimization back-propagation neural network are also implemented.The results indicate that the proposed fault diagnosis model yields better accuracy and e-ciency than these other models.
基金supported by the National Basic Research Program Project of China(No.2010CB732004)the National Natural Science Foundation Project of China(Nos.50934006 and41272304)+2 种基金the Graduated Students’ResearchInnovation Fund Project of Hunan Province of China(No.CX2011B119)the Scholarship Award for Excellent Doctoral Student of Ministry of Education of China and the Valuable Equipment Open Sharing Fund of Central South University(No.1343-76140000022)
文摘An approach which combines particle swarm optimization and support vector machine(PSO–SVM)is proposed to forecast large-scale goaf instability(LSGI).Firstly,influencing factors of goaf safety are analyzed,and following parameters were selected as evaluation indexes in the LSGI:uniaxial compressive strength(UCS)of rock,elastic modulus(E)of rock,rock quality designation(RQD),area ration of pillar(Sp),the ratio of width to height of the pillar(w/h),depth of ore body(H),volume of goaf(V),dip of ore body(a)and area of goaf(Sg).Then LSGI forecasting model by PSO-SVM was established according to the influencing factors.The performance of hybrid model(PSO+SVM=PSO–SVM)has been compared with the grid search method of support vector machine(GSM–SVM)model.The actual data of 40 goafs are applied to research the forecasting ability of the proposed method,and two cases of underground mine are also validated by the proposed model.The results indicated that the heuristic algorithm of PSO can speed up the SVM parameter optimization search,and the predictive ability of the PSO–SVM model with the RBF kernel function is acceptable and robust,which might hold a high potential to become a useful tool in goaf risky prediction research.