The aerospace community widely uses difficult-to-cut materials, such as titanium alloys, high-temperature alloys, metal/ceramic/polymer matrix composites, and hard and brittle materials, as well as geometrically complex components, such as thin-walled structures, microchannels, and complex surfaces. Mechanical machining is the main material removal process for the vast majority of aerospace components. However, many problems exist, including severe and rapid tool wear, low machining efficiency, and poor surface integrity. Nontraditional energy-assisted mechanical machining is a hybrid process that uses nontraditional energies (vibration, laser, electricity, etc.) to improve the machinability of local materials and decrease the burden of mechanical machining. This provides a feasible and promising method to improve the material removal rate and surface quality, reduce process forces, and prolong tool life. However, systematic reviews of this technology are lacking with respect to the current research status and development direction. This paper reviews the recent progress in the nontraditional energy-assisted mechanical machining of difficult-to-cut materials and components in the aerospace community. In addition, this paper focuses on the processing principles, material responses under nontraditional energy, resultant forces and temperatures, material removal mechanisms, and applications of these processes, including vibration-, laser-, electric-, magnetic-, chemical-, advanced coolant-, and hybrid nontraditional energy-assisted mechanical machining. Finally, a comprehensive summary of the principles, advantages, and limitations of each hybrid process is provided, and future perspectives on forward design, device development, and sustainability of nontraditional energy-assisted mechanical machining processes are discussed.
Traditional 3Ni weathering steel cannot completely meet the requirements for offshore engineering development, resulting in a trend toward the design of novel 3Ni steels with the addition of microalloy elements such as Mn or Nb for strength enhancement. The stress-assisted corrosion behavior of a novel designed high-strength 3Ni steel was investigated in the current study using the corrosion big data method. The information on the corrosion process was recorded using the galvanic corrosion current monitoring method. The gradient boosting decision tree (GBDT) machine learning method was used to mine the corrosion mechanism, and the importance of the structure factor was investigated. Field exposure tests were conducted to verify the calculated results using the GBDT method. Results indicated that the GBDT method can be effectively used to study the influence of structural factors on the corrosion process of 3Ni steel. Different mechanisms for the addition of Mn and Cu to the stress-assisted corrosion of 3Ni steel suggested that Mn and Cu have no obvious effect on the corrosion rate of non-stressed 3Ni steel during the early stage of corrosion. When the corrosion reached a stable state, the increase in Mn content increased the corrosion rate of 3Ni steel, while Cu reduced this rate. In the presence of stress, the increase in Mn content and the addition of Cu can inhibit the corrosion process. The corrosion law of outdoor-exposed 3Ni steel is consistent with the law obtained from corrosion big data technology, verifying the reliability of the big data evaluation method and the data prediction model selection.
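The GBDT factor-importance analysis described in the abstract can be sketched with scikit-learn's gradient boosting implementation. This is a minimal illustration on synthetic data: the factor names and the response function below are assumptions standing in for the study's corrosion monitoring records, not its actual dataset.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for corrosion monitoring records: four hypothetical
# structural/environmental factors per sample.
X = rng.uniform(size=(500, 4))
# Hypothetical corrosion-rate response dominated by the first two factors.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.standard_normal(500)

model = GradientBoostingRegressor(n_estimators=200, random_state=0).fit(X, y)

# Impurity-based importances rank the factors by their influence on the
# fitted corrosion-rate model (they sum to 1).
for name, imp in zip(["Mn", "Cu", "Ni", "Cl-"], model.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

Field exposure data would replace the synthetic arrays in practice; the ranking produced by `feature_importances_` is what lets such a model "mine" which factors drive the corrosion process.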
Magnesium alloys have many advantages as lightweight materials for engineering applications, especially in the automotive and aerospace fields. They undergo extensive cutting or machining while products are made out of them. Dry cutting, a sustainable machining method, causes more friction and adhesion at the tool-chip interface. One of the promising solutions to this problem is cutting tool surface texturing, which can reduce tool wear and friction in dry cutting and improve machining performance. This paper aims to investigate the impact of dimple textures (made on the flank face of cutting inserts) on tool wear and chip morphology in the dry machining of AZ31B magnesium alloy. The results show that the cutting speed was the most significant factor affecting tool flank wear, followed by feed rate and cutting depth. The tool wear mechanism was examined using scanning electron microscope (SEM) images and energy dispersive X-ray spectroscopy (EDS) analysis reports, which showed that at low cutting speed the main wear mechanism was abrasion, while at high speed it was adhesion. The chips were discontinuous at low cutting speeds and continuous at high cutting speeds. The dimple-textured flank face cutting tools facilitate the dry machining of AZ31B magnesium alloy and contribute to ecological benefits.
Difficult-to-machine materials (DMMs) are extensively applied in aviation, semiconductor, biomedicine, and other key fields due to their excellent material properties. However, traditional machining technologies often struggle to achieve ultra-precision with DMMs, owing to poor surface quality and low processing efficiency. In recent years, field-assisted machining (FAM) technology has emerged as a new generation of machining technology based on innovative principles such as laser heating, tool vibration, magnetic magnetization, and plasma modification, providing a new solution for improving the machinability of DMMs. This technology not only addresses the limitations of traditional machining methods but has also become a hot topic of research in the domain of ultra-precision machining of DMMs. Many new methods and principles have been introduced and investigated one after another, yet few studies have presented a comprehensive analysis and summarization. To fill this gap and understand the development trend of FAM, this study provides an important overview of FAM, covering different assisted machining methods, application effects, mechanism analysis, and equipment design. The current deficiencies and future challenges of FAM are summarized to lay the foundation for the further development of multi-field hybrid assisted and intelligent FAM technologies.
BACKGROUND: Sepsis is one of the main causes of mortality in intensive care units (ICUs). Early prediction is critical for reducing injury. As approximately 36% of sepsis cases occur within 24 h after emergency department (ED) admission in the Medical Information Mart for Intensive Care (MIMIC-IV), a prediction system for the ED triage stage would be helpful. Previous methods such as the quick Sequential Organ Failure Assessment (qSOFA) are more suitable for screening than for prediction in the ED, and we aimed to find a lightweight, convenient prediction method through machine learning. METHODS: We accessed MIMIC-IV for sepsis patient data in the EDs. Our dataset comprised demographic information, vital signs, and synthetic features. Extreme Gradient Boosting (XGBoost) was used to predict the risk of developing sepsis within 24 h after ED admission. Additionally, SHapley Additive exPlanations (SHAP) was employed to provide a comprehensive interpretation of the model's results. Ten percent of the patients were randomly selected as the testing set, while the remaining patients were used for training with 10-fold cross-validation. RESULTS: For 10-fold cross-validation on 14,957 samples, we reached an accuracy of 84.1%±0.3% and an area under the receiver operating characteristic (ROC) curve of 0.92±0.02. The model achieved similar performance on the testing set of 1,662 patients. SHAP values showed that the five most important features were acuity, arrival transportation, age, shock index, and respiratory rate. CONCLUSION: Machine learning models such as XGBoost may be used for sepsis prediction using only a small amount of data conveniently collected in the ED triage stage. This may help reduce the workload in the ED and warn medical workers of the risk of sepsis in advance.
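The triage-stage prediction workflow above (gradient boosting with 10-fold cross-validated ROC AUC) can be sketched as follows. This is a hedged illustration, not the study's pipeline: scikit-learn's `GradientBoostingClassifier` stands in for XGBoost, and the features and risk function are synthetic stand-ins for MIMIC-IV fields.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000
# Synthetic triage-stage variables standing in for MIMIC-IV fields.
acuity = rng.integers(1, 6, n).astype(float)   # 1 = most acute
age = rng.uniform(18, 90, n)
hr = rng.normal(85, 15, n)                     # heart rate
sbp = rng.normal(120, 20, n)                   # systolic blood pressure
rr = rng.normal(18, 4, n)                      # respiratory rate
shock_index = hr / sbp                         # derived feature, as in the study
X = np.column_stack([acuity, age, rr, shock_index])

# Hypothetical sepsis risk: lower acuity number (sicker) and higher shock
# index raise the probability of the positive label.
logit = -2.0 - 0.8 * (acuity - 3) + 4.0 * (shock_index - 0.7) + 0.02 * (age - 50)
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

clf = GradientBoostingClassifier(random_state=0)
auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean()
print(f"10-fold mean ROC AUC: {auc:.2f}")
```

With real triage data, SHAP values would then be computed on the fitted model to rank features, as the abstract describes.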
Recent years have witnessed significant advances in utilizing machine learning-based techniques for thermal metamaterial-based structures and devices to attain favorable thermal transport behaviors. Among the various thermal transport behaviors, achieving thermal transparency stands out as particularly desirable and intriguing. Our earlier work demonstrated the use of a thermal metamaterial-based periodic interparticle system as the underlying structure for manipulating thermal transport behavior and achieving thermal transparency. In this paper, we introduce an approach based on graph neural networks to address the complex inverse design problem of determining the design parameters for a thermal metamaterial-based periodic interparticle system with the desired thermal transport behavior. Our work demonstrates that combining graph neural network modeling and inference is an effective approach for solving inverse design problems associated with attaining desirable thermal transport behaviors using thermal metamaterials.
State of health (SOH) estimation of e-mobilities operated under real and dynamic conditions is essential and challenging. Most existing estimations are based on fixed constant-current charging and discharging aging profiles, which overlooks the fact that charging and discharging profiles are random and incomplete in real applications. This work investigates the influence of feature engineering on the accuracy of different machine learning (ML)-based SOH estimations acting on different recharging sub-profiles, where a realistic battery mission profile is considered. Fifteen features were extracted from the battery partial recharging profiles, considering different factors such as starting voltage values, charge amount, and charging sliding windows. Features were then selected based on a feature selection pipeline consisting of filtering and supervised ML-based subset selection. Multiple linear regression (MLR), Gaussian process regression (GPR), and support vector regression (SVR) were applied to estimate SOH, and the root mean square error (RMSE) was used to evaluate and compare the estimation performance. The results showed that the feature selection pipeline can improve SOH estimation accuracy by 55.05%, 2.57%, and 2.82% for MLR, GPR, and SVR, respectively. It was demonstrated that estimation based on partial charging profiles with a lower starting voltage, a large charge amount, and a large sliding window size is more likely to achieve higher accuracy. This work hopes to give some insights into the effect of supervised ML-based feature engineering acting on random partial recharges on SOH estimation performance and tries to fill the gap in effective SOH estimation between theoretical study and real dynamic application.
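A feature-selection-then-regression pipeline of the kind described above can be sketched with scikit-learn. The fifteen synthetic features and the SOH response below are assumptions for illustration (the study's features come from real partial-recharge profiles); the pipeline structure, i.e. a supervised subset-selection step feeding MLR or SVR, mirrors the abstract's approach.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

rng = np.random.default_rng(2)
n = 400
# 15 synthetic features standing in for partial-recharge statistics
# (starting voltage, charge amount, sliding-window stats, ...).
X = rng.standard_normal((n, 15))
# Hypothetical SOH (%) depending on only a few of the features, plus noise.
soh = 100 - 5 * X[:, 0] + 3 * X[:, 3] - 2 * X[:, 7] + rng.standard_normal(n)

Xtr, Xte, ytr, yte = train_test_split(X, soh, random_state=0)

rmses = {}
for name, reg in [("MLR", LinearRegression()), ("SVR", SVR(C=10.0))]:
    # Supervised subset selection (keep the 5 most relevant features),
    # then regression, as one pipeline.
    pipe = make_pipeline(SelectKBest(f_regression, k=5), reg)
    pipe.fit(Xtr, ytr)
    rmses[name] = mean_squared_error(yte, pipe.predict(Xte)) ** 0.5
    print(f"{name} RMSE: {rmses[name]:.2f}")
```

Comparing each estimator's RMSE with and without the selection step is exactly the comparison the study reports (the 55.05%/2.57%/2.82% improvements).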
The selection of important factors in machine learning-based susceptibility assessments is crucial to obtaining reliable susceptibility results. In this study, metaheuristic optimization and feature selection techniques were applied to identify the most important input parameters for mapping debris flow susceptibility in the southern mountain area of Chengde City in Hebei Province, China, using machine learning algorithms. In total, 133 historical debris flow records and 16 related factors were selected. The support vector machine (SVM) was first used as the base classifier, and then a hybrid model was introduced by a two-step process. First, the particle swarm optimization (PSO) algorithm was employed to select the SVM model hyperparameters. Second, two feature selection algorithms, namely principal component analysis (PCA) and PSO, were integrated into the PSO-based SVM model, which generated the PCA-PSO-SVM and FS-PSO-SVM models, respectively. Three statistical metrics (accuracy, recall, and specificity) and the area under the receiver operating characteristic curve (AUC) were employed to evaluate and validate the performance of the models. The results indicated that the feature selection-based models exhibited the best performance, followed by the PSO-based SVM and SVM models. Moreover, the performance of the FS-PSO-SVM model was better than that of the PCA-PSO-SVM model, showing the highest AUC, accuracy, recall, and specificity values in both the training and testing processes. It was found that the selection of optimal features is crucial to improving the reliability of debris flow susceptibility assessment results. Moreover, the PSO algorithm was found to be not only an effective tool for hyperparameter optimization, but also a useful feature selection algorithm for improving the prediction accuracy of debris flow susceptibility using machine learning algorithms. The high and very high debris flow susceptibility zones cover approximately 38.01% of the study area, where debris flows may occur under intensive human activities and heavy rainfall events.
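The first step of the hybrid model, PSO selecting SVM hyperparameters by maximizing cross-validated accuracy, can be sketched with a minimal swarm. The dataset below is a synthetic stand-in matching only the abstract's dimensions (133 records, 16 factors); the PSO coefficients are common textbook defaults, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Synthetic stand-in for the debris-flow dataset: 133 records, 16 factors.
X, y = make_classification(n_samples=133, n_features=16, n_informative=6,
                           random_state=0)

def fitness(log_c, log_gamma):
    """Mean 3-fold CV accuracy of an RBF-SVM with the given hyperparameters."""
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma)
    return cross_val_score(clf, X, y, cv=3).mean()

# Minimal PSO over (log10 C, log10 gamma) within assumed search bounds.
lo, hi = np.array([-1.0, -4.0]), np.array([3.0, 0.0])
n_particles, n_iters = 8, 5
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(*p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.uniform(size=(2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(*p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best (log10 C, log10 gamma):", gbest, "CV accuracy:", pbest_val.max())
```

The paper's second step would wrap a feature mask into each particle as well, so that PSO searches hyperparameters and the feature subset jointly (the FS-PSO-SVM variant).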
This paper presents a comprehensive exploration of the integration of the Internet of Things (IoT), big data analysis, cloud computing, and Artificial Intelligence (AI), which has led to an unprecedented era of connectivity. We delve into the emerging trend of machine learning on embedded devices, enabling tasks in resource-limited environments. However, the widespread adoption of machine learning raises significant privacy concerns, necessitating the development of privacy-preserving techniques. One such technique, secure multi-party computation (MPC), allows collaborative computations without exposing private inputs. Despite its potential, complex protocols and communication interactions hinder performance, especially on resource-constrained devices. Efforts to enhance efficiency have been made, but scalability remains a challenge. Given the success of GPUs in deep learning, leveraging embedded GPUs, such as those offered by NVIDIA, emerges as a promising solution. Therefore, we propose an Embedded GPU-based Secure Two-party Computation (EG-STC) framework for Artificial Intelligence (AI) systems. To the best of our knowledge, this work represents the first endeavor to fully implement machine learning model training based on secure two-party computation on an embedded GPU platform. Our experimental results demonstrate the effectiveness of EG-STC. On an embedded GPU with a power draw of 5 W, our implementation achieved a secure two-party matrix multiplication throughput of 5881.5 kilo-operations per millisecond (kops/ms), with an energy efficiency ratio of 1176.3 kops/ms/W. Furthermore, leveraging our EG-STC framework, we achieved an overall time acceleration ratio of 5-6 times compared to solutions running on server-grade CPUs. Our solution also exhibited a reduced runtime, requiring only 60% to 70% of the runtime of previously best-known methods on the same platform. In summary, our research contributes to the advancement of secure and efficient machine learning implementations on resource-constrained embedded devices, paving the way for broader adoption of AI technologies in various applications.
Traditional particle identification methods face challenges of being time-consuming, experience-dependent, and poorly repeatable in heavy-ion collisions at low and intermediate energies. Researchers urgently need solutions to the dilemma of traditional particle identification methods. This study explores the possibility of applying intelligent learning algorithms to the particle identification of heavy-ion collisions at low and intermediate energies. Multiple intelligent algorithms, including XgBoost and TabNet, were selected to test datasets from the neutron ion multi-detector for reaction-oriented dynamics (NIMROD-ISiS) and Geant4 simulations. Tree-based machine learning algorithms and deep learning algorithms, e.g., TabNet, show excellent performance and generalization ability. Adding additional data features besides energy deposition can improve the algorithm's performance when the data distribution is nonuniform. Intelligent learning algorithms can be applied to solve the particle identification problem in heavy-ion collisions at low and intermediate energies.
In recent decades, fog computing has played a vital role in executing parallel computational tasks, specifically scientific workflow tasks. In cloud data centers, fog computing takes more time to run workflow applications. Therefore, it is essential to develop effective models for Virtual Machine (VM) allocation and task scheduling in fog computing environments. Effective task scheduling, VM migration, and allocation together optimize the use of computational resources across different fog nodes. This process ensures that tasks are executed with minimal energy consumption, which reduces the chances of resource bottlenecks. In this manuscript, the proposed framework comprises two phases: (i) effective task scheduling using a fractional selectivity approach and (ii) VM allocation by an algorithm named Fitness Sharing Chaotic Particle Swarm Optimization (FSCPSO). The proposed FSCPSO algorithm integrates the concepts of chaos theory and fitness sharing to effectively balance global exploration and local exploitation. This balance enables a wide range of solutions to be explored, leading to a minimal total cost and makespan in comparison to other traditional optimization algorithms. The FSCPSO algorithm's performance is analyzed using six evaluation measures, namely Load Balancing Level (LBL), Average Resource Utilization (ARU), total cost, makespan, energy consumption, and response time. In relation to the conventional optimization algorithms, the FSCPSO algorithm achieves a higher LBL of 39.12%, an ARU of 58.15%, a minimal total cost of 1175, and a makespan of 85.87 ms, particularly when evaluated for 50 tasks.
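The two ingredients that distinguish FSCPSO from plain PSO, chaotic initialization and fitness sharing, can be illustrated on a toy objective. Everything below is a sketch under stated assumptions: a logistic map supplies the chaotic sequence, the sharing radius and PSO coefficients are illustrative defaults, and a sphere function stands in for the paper's total-cost/makespan objective.

```python
import numpy as np

def logistic_map_sequence(n, x0=0.7, r=4.0):
    """Chaotic logistic-map sequence in (0, 1), used here in place of
    uniform random numbers to initialize the swarm (chaos-theory part)."""
    seq, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def shared_fitness(raw, pos, sigma=1.0):
    """Fitness sharing: particles crowded within radius sigma are penalized,
    preserving swarm diversity (minimization: larger values are worse)."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    niche = np.maximum(0.0, 1.0 - d / sigma).sum(axis=1)
    return raw * niche

def cost(p):
    """Toy scheduling-cost surrogate (sphere function), an assumption
    standing in for the paper's cost/makespan objective."""
    return np.sum(p ** 2, axis=-1)

dim, n_particles, n_iters = 4, 20, 60
pos = logistic_map_sequence(n_particles * dim).reshape(n_particles, dim) * 10 - 5
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = shared_fitness(cost(pos), pos)
gbest = pbest[pbest_val.argmin()].copy()
rng = np.random.default_rng(4)

for _ in range(n_iters):
    r1, r2 = rng.uniform(size=(2, n_particles, 1))
    vel = 0.6 * vel + 1.6 * r1 * (pbest - pos) + 1.6 * r2 * (gbest - pos)
    pos = pos + vel
    vals = shared_fitness(cost(pos), pos)
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("best cost found:", float(cost(gbest)))
```

In the paper's setting, each particle would encode a VM-allocation decision and the objective would combine cost, makespan, and energy terms rather than this toy function.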
The rapid evolution of wireless communication technologies has underscored the critical role of antennas in ensuring seamless connectivity. Antenna defects, ranging from manufacturing imperfections to environmental wear, pose significant challenges to the reliability and performance of communication systems. This review paper navigates the landscape of antenna defect detection, emphasizing the need for a nuanced understanding of various defect types and the associated challenges in visual detection. It serves as a valuable resource for researchers, engineers, and practitioners engaged in the design and maintenance of communication systems, and the insights presented here pave the way for enhanced reliability in antenna systems through targeted defect detection measures. In this study, a comprehensive literature analysis of computer vision algorithms employed in end-of-line visual inspection of antenna parts is presented. The PRISMA principles are followed throughout the review, and its goals are to provide a summary of recent research, identify relevant computer vision techniques, and evaluate how effective these techniques are in discovering defects during inspections. It covers articles from scholarly journals as well as papers presented at conferences up until June 2023. This research utilized relevant search phrases, and papers were chosen based on whether they met certain inclusion and exclusion criteria. Several different computer vision approaches, such as feature extraction and defect classification, are broken down and analyzed, and their applicability and performance are discussed. The review highlights the significance of utilizing a wide variety of datasets and measurement criteria. The findings of this study add to the existing body of knowledge and point researchers in the direction of promising new areas of investigation, such as real-time inspection systems and multispectral imaging. As a whole, this review offers a complete study of computer vision approaches for quality control in antenna parts, providing helpful insights and drawing attention to areas that require additional exploration.
The dimensional accuracy of machined parts is strongly influenced by the thermal behavior of machine tools (MT). Minimizing this influence represents a key objective for any modern manufacturing industry. Thermally induced positioning error compensation remains the most effective and practical method in this context. However, the efficiency of the compensation process depends on the quality of the model used to predict the thermal errors. The model should consistently reflect the relationships between the temperature distribution in the MT structure and the thermally induced positioning errors. A judicious choice of the number and location of temperature-sensitive points to represent heat distribution is a key factor for robust thermal error modeling. Therefore, in this paper, the temperature-sensitive points are selected following a structured thermomechanical analysis carried out to evaluate the effects of various temperature gradients on MT structure deformation intensity. The MT thermal behavior is first modeled using the finite element method and validated against various experimentally measured temperature fields using temperature sensors and thermal imaging. MT thermal behavior validation shows a maximum error of less than 10% when comparing the numerical estimations with the experimental results, even under changing operating conditions. The numerical model is used in several series of simulations carried out under varied working conditions to explore possible relationships between temperature distribution and thermal deformation characteristics, and to select the most appropriate temperature-sensitive points to be considered for building an empirical prediction model for thermal errors as a function of the MT thermal state. Validation tests achieved using an artificial neural network-based simplified model confirmed the efficiency of the proposed temperature-sensitive points, allowing the prediction of the thermally induced errors with an accuracy greater than 90%.
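The final modeling step, an artificial neural network mapping readings at the selected temperature-sensitive points to the thermally induced error, can be sketched as below. The four sensor points, the linear-plus-nonlinear error function, and the noise level are all assumptions for illustration; the paper's model is trained on measured machine-tool data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n = 600
# Synthetic temperatures (deg C) at four assumed temperature-sensitive points.
T = rng.uniform(20.0, 45.0, size=(n, 4))
dT = T - 20.0  # temperature rise above an assumed 20 deg C ambient
# Hypothetical thermally induced positioning error (um): mostly linear in
# the temperature rise, with one mildly nonlinear term, plus noise.
err = 1.2 * dT[:, 0] + 0.8 * dT[:, 1] - 0.5 * dT[:, 2] + 0.05 * dT[:, 3] ** 2
err += rng.normal(0.0, 0.5, n)

# Small feedforward network with input standardization.
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                                 random_state=0))
net.fit(T[:500], err[:500])

# Evaluate prediction quality on held-out samples via R^2.
pred = net.predict(T[500:])
ss_res = np.sum((err[500:] - pred) ** 2)
ss_tot = np.sum((err[500:] - err[500:].mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"R^2 on held-out data: {r2:.3f}")
```

In a compensation loop, the network's predicted error would be subtracted from the commanded axis position at runtime.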
Analyzing big data, especially medical data, helps to provide good health care to patients and to confront the risks of death. The COVID-19 pandemic has had a significant impact on public health worldwide, emphasizing the need for effective risk prediction models. Machine learning (ML) techniques have shown promise in analyzing complex data patterns and predicting disease outcomes. The accuracy of these techniques is greatly affected by their parameter settings, so hyperparameter optimization plays a crucial role in improving model performance. In this work, the Particle Swarm Optimization (PSO) algorithm was used to effectively search the hyperparameter space and improve the predictive power of machine learning models by identifying the optimal hyperparameters that provide the highest accuracy. A dataset with a variety of clinical and epidemiological characteristics linked to COVID-19 cases was used in this study. Various machine learning models, including Random Forests, Decision Trees, Support Vector Machines, and Neural Networks, were utilized to capture the complex relationships present in the data. To evaluate the predictive performance of the models, the accuracy metric was employed. The experimental findings showed that the suggested method of estimating COVID-19 risk is effective. When compared to baseline models, the optimized machine learning models performed better and produced better results.
In order to reduce the weight of airplanes and improve their mechanical performance, more and more large integrated parts are applied in the modern aviation industry. When machining thin-walled airplane parts, more than 90% of the material is removed, resulting in severe distortion of the parts due to the weakened rigidity and the release of residual stress. This might also lead to stress concentration and damage to the parts. The effect of material removal from a residually stressed billet is simulated using the FEA software MSC.Marc, and the causes of distortion are analyzed. To verify the finite element simulation, a high speed milling test on aluminum alloy 7050T7351 is carried out. The results show that the simulation result is consistent with the experimental one. It is concluded that the release of residual stress is the main cause of machining distortion.
In dealing with abrasive waterjet machining (AWJM) simulation, most of the literature applies the finite element method (FEM) to build pure waterjet models or single abrasive particle erosion models. To overcome the mesh distortion caused by large deformation when using FEM and to consider the effects of both water and abrasive, smoothed particle hydrodynamics (SPH) coupled with FEM modeling for AWJM simulation is presented, in which the abrasive waterjet is modeled by SPH particles and the target material is modeled by FEM. The two parts interact through a contact algorithm. Utilizing this model, an abrasive waterjet penetrating the target material at high velocity is simulated and the mechanism of erosion is depicted. The relationships between the depth of penetration and jet parameters, including water pressure and traverse speed, etc., are analyzed based on the simulation. The simulation results agree well with the existing experimental data. Mixed multi-material SPH particles, which contain both abrasive and water, are adopted by means of a randomized algorithm, and a material model for the abrasive is presented. The study will not only provide a new powerful tool for the simulation of abrasive waterjet machining, but will also be beneficial for understanding its cutting mechanism and optimizing the operating parameters.
Al2O3 particle reinforced aluminum matrix composites (Al2O3p/Al) are more and more widely used for their excellent physical and chemical properties. However, their poor machinability leads to severe tool wear and poor machined surfaces. In this paper, laser assisted machining is adopted for machining Al2O3p/Al composites, with good results. The experiments show that in machining Al2O3p/Al composites with laser assisted machining, compared with conventional cutting, the cutting force is reduced by 30%-50%, tool wear is reduced by 20%-30%, and machined surface quality is improved. A physical model of the cutting process is set up and explains why the cutting forces are reduced. The state of the particles is the main influence on this change. When the material in the cutting zone is heated by the laser, the aluminum matrix becomes softer and deforms plastically more easily, which leads to a reduction of the pushing force from the tool onto the machined surface. The softened aluminum matrix is more easily squeezed out from the machined surface, which leads to a concentration of the Al2O3 particles in the surface layer of the machined surface. The softening effect of laser heating on the aluminum matrix reduces the pushing forces of the Al2O3 particles on the clearance face of the cutting tool, which is precisely the reason for the severe cutting tool wear in conventional machining of Al2O3p/Al composites. Because the Al2O3 particles were pushed in during the cutting process, the particle content increased in the surface layer. Because of the difference in thermal conductivity and thermal expansion between the Al matrix and the Al2O3 particles, the residual stress in the matrix changes after machining due to the extrusion of the tool, deformation of the matrix, and displacement of the Al2O3 particles in the matrix. A temperature gradient arises in the cutting zone and the workpiece surface layer, which leads to an increase in thermal stress and misfit dislocations in the matrix. The residual stress in the laser assisted hot cutting surface is compressive, and the compressive stress is nearly three times that in the conventional cutting surface. Some analysis of the mechanism of laser heat assisted machining of Al2O3p/Al composites is also given in the paper.
Gap debris, as a discharge product, is closely related to the machining process in electrical discharge machining (EDM). Many recent studies have focused on the relationships among debris size, surface texture, removal rate, and machining stability. The study of the statistical distribution of debris size contributes to this research, but it is currently still superficial. In order to obtain the distribution law of the debris particle size, a laser particle size analyzer (LPSA) combined with a scanning electron microscope (SEM) is used to analyze the EDM debris size. Firstly, the heating-drying method is applied to obtain the debris particles. Secondly, the measuring range of the LPSA is determined as 0.5–100 μm by SEM observation, and the frequency distribution histogram and the cumulative frequency distribution scattergram of the debris size are obtained using the LPSA. Thirdly, according to the distribution characteristics of the frequency distribution histogram, the statistical distribution functions of the lognormal, exponentially modified Gaussian (EMG), Gamma, and Weibull distributions are chosen to fit the histogram. Finally, the distribution law of the debris size is obtained from the fitting results. Experiments with different discharge parameters are carried out on an EDM machine designed by the authors themselves, with the machining conditions of a red-copper tool electrode, an ANSI 1045 steel workpiece, and de-ionized water as the working fluid. The experimental results indicate that the debris sizes of all experimental samples obey the Weibull distribution. The obtained distribution law is significantly important for all models established based on the debris particle size.
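Fitting a Weibull distribution to measured particle sizes and checking the goodness of fit, the core of the analysis above, can be sketched with SciPy. The sample below is synthetic (drawn from an assumed Weibull with shape 1.8 and scale 12 μm, standing in for LPSA measurements), so the fit is expected to recover those parameters.

```python
import numpy as np
from scipy import stats

# Synthetic debris diameters (um) standing in for LPSA measurements,
# drawn from an assumed Weibull distribution.
diameters = stats.weibull_min.rvs(1.8, scale=12.0, size=2000, random_state=0)

# Maximum-likelihood Weibull fit with the location fixed at zero,
# as is natural for particle sizes.
shape, loc, scale = stats.weibull_min.fit(diameters, floc=0)

# Goodness of fit via the Kolmogorov-Smirnov test against the fitted CDF.
ks = stats.kstest(diameters, "weibull_min", args=(shape, loc, scale))
print(f"fitted shape={shape:.2f}, scale={scale:.2f}, KS p-value={ks.pvalue:.3f}")
```

In the study's workflow, the same fit would be repeated for the lognormal, EMG, and Gamma candidates, and the distribution with the best fit statistic selected.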
The electric double layer with the transmission of particles was presented based on the principles of electrochemistry. In accordance with this theory, a cavitation catalysis removal mechanism for particle-based ultrasonic-pulse electrochemical compound machining (UPECM) was proposed. The removal mechanism was a particular focus and was validated by experiments. The principles and experiments of UPECM were introduced, and a removal model of UPECM based on these principles was established. Furthermore, the effects of the main processing parameters, including particle size, ultrasonic vibration amplitude, pulse voltage, and the minimum machining gap between the tool and the workpiece, on the material removal rate were studied. The results show that the particles promote ultrasonic-pulse electrochemical compound machining and thus act as the catalyst of UPECM. The results also indicate that the processing speed, machining accuracy, and surface quality can be improved under UPECM compound machining.
Parts with varied curvature features play increasingly critical roles in engineering and are often machined in high-speed continuous-path running mode to ensure machining efficiency. However, the continuous-path running trajectory error is significant during high-feed-speed machining, which seriously restricts the machining precision of such parts. To reduce the continuous-path running trajectory error without sacrificing machining efficiency, a pre-compensation method for the trajectory error is proposed. Based on the analyzed formation mechanism of the continuous-path running trajectory error, this error is estimated in advance by approximating the desired toolpath with spline curves. Then, an iterative error pre-compensation method is presented. By machining with the regenerated toolpath after pre-compensation instead of the uncompensated toolpath, the continuous-path running trajectory error can be effectively decreased without reducing the feed speed. To demonstrate the feasibility of the proposed method, a heart-curve toolpath with varied curvature features is employed. Experimental results indicate that, compared with the uncompensated processing trajectory, the maximum and average machining errors for the pre-compensated processing trajectory are reduced by 67.19% and 82.30%, respectively. An easy-to-implement solution for high-efficiency and high-precision machining of parts with varied curvature features is thus provided.
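The two steps described above, approximating the desired toolpath with a spline to estimate the trajectory error and then commanding a mirrored (pre-compensated) path, can be sketched as follows. The heart curve, the smoothing level, and the use of SciPy smoothing splines are illustrative assumptions, not the paper's actual interpolator or controller model.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical heart-curve toolpath (a varied-curvature test path, as in the abstract).
t = np.linspace(0, 2 * np.pi, 400)
x = 16 * np.sin(t) ** 3
y = 13 * np.cos(t) - 5 * np.cos(2 * t) - 2 * np.cos(3 * t) - np.cos(4 * t)

# Approximate the desired toolpath with a periodic smoothing spline; the deviation
# of the spline from the commanded points plays the role of the predicted error.
tck, u = splprep([x, y], s=2.0, per=True)
xs, ys = splev(u, tck)
err_x, err_y = xs - x, ys - y
orig = np.hypot(err_x, err_y)  # predicted trajectory error, point by point

# Pre-compensation: command the mirrored path so the executed (smoothed) path
# lands closer to the desired one.
tck2, u2 = splprep([x - err_x, y - err_y], s=2.0, per=True)
xc, yc = splev(u2, tck2)
resid = np.hypot(xc - x, yc - y)  # remaining error after one compensation pass
print(orig.mean(), resid.mean())
```

In the paper this mirroring is iterated; one pass already illustrates why the regenerated toolpath tracks the desired one more closely at the same feed speed.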
Funding: supported by the National Natural Science Foundation of China (Nos. 52075255, 92160301, 52175415, 52205475, and 92060203).
Abstract: The aerospace community widely uses difficult-to-cut materials, such as titanium alloys, high-temperature alloys, metal/ceramic/polymer matrix composites, and hard and brittle materials, and geometrically complex components, such as thin-walled structures, microchannels, and complex surfaces. Mechanical machining is the main material removal process for the vast majority of aerospace components. However, many problems exist, including severe and rapid tool wear, low machining efficiency, and poor surface integrity. Nontraditional energy-assisted mechanical machining is a hybrid process that uses nontraditional energies (vibration, laser, electricity, etc.) to improve the machinability of local materials and decrease the burden of mechanical machining. This provides a feasible and promising method to improve the material removal rate and surface quality, reduce process forces, and prolong tool life. However, systematic reviews of this technology are lacking with respect to the current research status and development direction. This paper reviews the recent progress in the nontraditional energy-assisted mechanical machining of difficult-to-cut materials and components in the aerospace community. In addition, this paper focuses on the processing principles, material responses under nontraditional energy, resultant forces and temperatures, material removal mechanisms, and applications of these processes, including vibration-, laser-, electric-, magnetic-, chemical-, advanced coolant-, and hybrid nontraditional energy-assisted mechanical machining. Finally, a comprehensive summary of the principles, advantages, and limitations of each hybrid process is provided, and future perspectives on the forward design, device development, and sustainability of nontraditional energy-assisted mechanical machining processes are discussed.
Funding: supported by the National Natural Science Foundation of China (No. 52203376) and the National Key Research and Development Program of China (No. 2023YFB3813200).
Abstract: Traditional 3Ni weathering steel cannot fully meet the requirements of offshore engineering development, so designing novel 3Ni steels with microalloying elements such as Mn or Nb added for strength enhancement has become a trend. The stress-assisted corrosion behavior of a novel high-strength 3Ni steel was investigated in this study using a corrosion big-data method. Information on the corrosion process was recorded using the galvanic corrosion current monitoring method. The gradient boosting decision tree (GBDT) machine learning method was used to mine the corrosion mechanism, and the importance of the structure factor was investigated. Field exposure tests were conducted to verify the results calculated using the GBDT method. Results indicated that the GBDT method can be effectively used to study the influence of structural factors on the corrosion process of 3Ni steel. The different mechanisms by which Mn and Cu additions affect the stress-assisted corrosion of 3Ni steel suggested that Mn and Cu have no obvious effect on the corrosion rate of non-stressed 3Ni steel during the early stage of corrosion. When the corrosion reached a stable state, an increase in Mn content increased the corrosion rate of 3Ni steel, while Cu reduced this rate. In the presence of stress, an increase in Mn content and the addition of Cu can inhibit the corrosion process. The corrosion law of outdoor-exposed 3Ni steel is consistent with that obtained from corrosion big-data technology, verifying the reliability of the big-data evaluation method and the selected data prediction model.
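The way a GBDT model can rank the influence of compositional and environmental factors on a corrosion response may be sketched as below. The dataset, the toy corrosion law, and scikit-learn's GradientBoostingRegressor are stand-ins for the paper's monitored big-data setup; the feature names are illustrative only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Hypothetical corrosion dataset: columns stand in for Mn content, Cu content,
# applied stress, and exposure time (all normalized to [0, 1]).
X = rng.uniform(size=(500, 4))
mn, cu, stress, time_ = X.T
# Toy corrosion-rate law: Mn accelerates, Cu inhibits, stress interacts with both;
# exposure time deliberately carries no signal here.
y = 0.8 * mn - 0.5 * cu + 0.3 * stress * (cu - mn) + 0.05 * rng.normal(size=500)

gbdt = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0).fit(X, y)
importance = gbdt.feature_importances_
print(dict(zip(["Mn", "Cu", "stress", "time"], importance.round(3))))
```

Since the toy law gives exposure time no effect, its learned importance should rank last, mirroring how the paper uses GBDT importances to isolate the structure factors that actually drive corrosion.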
Abstract: Magnesium alloys have many advantages as lightweight materials for engineering applications, especially in the automotive and aerospace fields, and they undergo extensive cutting or machining when products are made from them. Dry cutting, a sustainable machining method, causes more friction and adhesion at the tool-chip interface. One promising solution to this problem is cutting-tool surface texturing, which can reduce tool wear and friction in dry cutting and improve machining performance. This paper investigates the impact of dimple textures (made on the flank face of cutting inserts) on tool wear and chip morphology in the dry machining of AZ31B magnesium alloy. The results show that cutting speed was the most significant factor affecting tool flank wear, followed by feed rate and cutting depth. The tool wear mechanism was examined using scanning electron microscope (SEM) images and energy-dispersive X-ray spectroscopy (EDS) analyses, which showed that at low cutting speed the main wear mechanism was abrasion, while at high speed it was adhesion. The chips are discontinuous at low cutting speeds and continuous at high cutting speeds. Dimple-textured flank faces on cutting tools facilitate the dry machining of AZ31B magnesium alloy and contribute to ecological benefits.
Funding: supported by the National Key Research and Development Project of China (Grant No. 2023YFB3407200), the National Natural Science Foundation of China (Grant Nos. 52225506, 52375430, and 52188102), and the Program for HUST Academic Frontier Youth Team (Grant No. 2019QYTD12).
Abstract: Difficult-to-machine materials (DMMs) are extensively applied in aviation, semiconductors, biomedicine, and other key fields due to their excellent material properties. However, traditional machining technologies often struggle to achieve ultra-precision with DMMs, suffering from poor surface quality and low processing efficiency. In recent years, field-assisted machining (FAM) technology has emerged as a new generation of machining technology based on innovative principles such as laser heating, tool vibration, magnetic magnetization, and plasma modification, providing a new solution for improving the machinability of DMMs. This technology not only addresses the limitations of traditional machining methods but has also become a hot research topic in the ultra-precision machining of DMMs. Many new methods and principles have been introduced and investigated, yet few studies have presented a comprehensive analysis and summary. To fill this gap and understand the development trend of FAM, this study provides an overview of FAM covering different assisted machining methods, application effects, mechanism analysis, and equipment design. The current deficiencies and future challenges of FAM are summarized to lay the foundation for the further development of multi-field hybrid-assisted and intelligent FAM technologies.
Funding: supported by the National Key Research and Development Program of China (2021YFC2500803) and the CAMS Innovation Fund for Medical Sciences (2021-I2M-1-056).
Abstract: BACKGROUND: Sepsis is one of the main causes of mortality in intensive care units (ICUs). Early prediction is critical for reducing injury. As approximately 36% of sepsis cases occur within 24 h after emergency department (ED) admission in the Medical Information Mart for Intensive Care (MIMIC-IV), a prediction system for the ED triage stage would be helpful. Previous methods such as the quick Sequential Organ Failure Assessment (qSOFA) are more suitable for screening than for prediction in the ED, and we aimed to find a lightweight, convenient prediction method through machine learning. METHODS: We accessed MIMIC-IV for sepsis patient data in the EDs. Our dataset comprised demographic information, vital signs, and synthetic features. Extreme Gradient Boosting (XGBoost) was used to predict the risk of developing sepsis within 24 h after ED admission. Additionally, SHapley Additive exPlanations (SHAP) was employed to provide a comprehensive interpretation of the model's results. Ten percent of the patients were randomly selected as the testing set, while the remaining patients were used for training with 10-fold cross-validation. RESULTS: For 10-fold cross-validation on 14,957 samples, we reached an accuracy of 84.1%±0.3% and an area under the receiver operating characteristic (ROC) curve of 0.92±0.02. The model achieved similar performance on the testing set of 1,662 patients. SHAP values showed that the five most important features were acuity, arrival transportation, age, shock index, and respiratory rate. CONCLUSION: Machine learning models such as XGBoost may be used for sepsis prediction using only a small amount of data conveniently collected in the ED triage stage. This may help reduce the workload in the ED and warn medical workers of the risk of sepsis in advance.
Funding: funded by the National Natural Science Foundation of China (Grant Nos. 12035004 and 12320101004) and the Innovation Program of Shanghai Municipal Education Commission (Grant No. 2023ZKZD06).
Abstract: Recent years have witnessed significant advances in utilizing machine learning-based techniques for thermal metamaterial-based structures and devices to attain favorable thermal transport behaviors. Among the various thermal transport behaviors, achieving thermal transparency stands out as particularly desirable and intriguing. Our earlier work demonstrated the use of a thermal metamaterial-based periodic interparticle system as the underlying structure for manipulating thermal transport behavior and achieving thermal transparency. In this paper, we introduce an approach based on a graph neural network to address the complex inverse design problem of determining the design parameters for a thermal metamaterial-based periodic interparticle system with the desired thermal transport behavior. Our work demonstrates that combining graph neural network modeling and inference is an effective approach for solving inverse design problems associated with attaining desirable thermal transport behaviors using thermal metamaterials.
Funding: funded by the China Scholarship Council (Nos. 202108320111 and 202208320055).
Abstract: State of health (SOH) estimation of e-mobilities operated under real and dynamic conditions is essential and challenging. Most existing estimations are based on fixed constant-current charging and discharging aging profiles, overlooking the fact that charging and discharging profiles are random and incomplete in real applications. This work investigates the influence of feature engineering on the accuracy of different machine learning (ML)-based SOH estimations acting on different recharging sub-profiles, where a realistic battery mission profile is considered. Fifteen features were extracted from the battery partial recharging profiles, considering factors such as starting voltage values, charge amount, and charging sliding windows. Features were then selected through a feature selection pipeline consisting of filtering and supervised ML-based subset selection. Multiple linear regression (MLR), Gaussian process regression (GPR), and support vector regression (SVR) were applied to estimate SOH, and the root mean square error (RMSE) was used to evaluate and compare the estimation performance. The results showed that the feature selection pipeline can improve SOH estimation accuracy by 55.05%, 2.57%, and 2.82% for MLR, GPR, and SVR, respectively. It was demonstrated that estimation based on partial charging profiles with a lower starting voltage, a large charge amount, and a large sliding window size is more likely to achieve higher accuracy. This work offers insights into how supervised ML-based feature engineering acting on random partial recharges affects SOH estimation performance and helps fill the gap in effective SOH estimation between theoretical study and real dynamic applications.
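The feature-selection pipeline described above, filtering a larger feature pool down to the informative recharging features before regression, can be sketched with scikit-learn as follows. The synthetic 15-column feature matrix (3 informative columns plus 12 noise columns), the toy SOH law, and the use of plain linear regression are assumptions for illustration, not the paper's battery data or its MLR/GPR/SVR comparison.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
n = 300
# Hypothetical recharging features: 3 informative columns (stand-ins for charge
# amount, starting voltage, window size) plus 12 noise columns, i.e. 15 in total.
informative = rng.uniform(size=(n, 3))
noise = rng.normal(size=(n, 12))
X = np.hstack([informative, noise])
# Toy SOH law with a small measurement noise floor of 0.02.
soh = (1.0 - 0.3 * informative[:, 0] + 0.15 * informative[:, 1]
       - 0.1 * informative[:, 2] + 0.02 * rng.normal(size=n))

Xtr, Xte, ytr, yte = train_test_split(X, soh, random_state=0)
# Filter step: keep the k features most correlated with SOH, then regress.
selected = make_pipeline(SelectKBest(f_regression, k=3), LinearRegression()).fit(Xtr, ytr)
rmse_sel = mean_squared_error(yte, selected.predict(Xte)) ** 0.5
print(round(rmse_sel, 4))
```

The filter should recover exactly the three informative columns, and the resulting RMSE should sit near the injected noise floor, which is the mechanism behind the accuracy gains reported for the pipeline.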
Funding: supported by the Second Tibetan Plateau Scientific Expedition and Research Program (Grant No. 2019QZKK0904), the Natural Science Foundation of Hebei Province (Grant No. D2022403032), and the S&T Program of Hebei (Grant No. E2021403001).
Abstract: The selection of important factors in machine learning-based susceptibility assessments is crucial for obtaining reliable susceptibility results. In this study, metaheuristic optimization and feature selection techniques were applied to identify the most important input parameters for mapping debris flow susceptibility in the southern mountain area of Chengde City in Hebei Province, China, using machine learning algorithms. In total, 133 historical debris flow records and 16 related factors were selected. The support vector machine (SVM) was first used as the base classifier, and then a hybrid model was introduced through a two-step process. First, the particle swarm optimization (PSO) algorithm was employed to select the SVM model hyperparameters. Second, two feature selection algorithms, namely principal component analysis (PCA) and PSO, were integrated into the PSO-based SVM model, generating the PCA-PSO-SVM and FS-PSO-SVM models, respectively. Three statistical metrics (accuracy, recall, and specificity) and the area under the receiver operating characteristic curve (AUC) were employed to evaluate and validate the performance of the models. The results indicated that the feature selection-based models exhibited the best performance, followed by the PSO-based SVM and SVM models. Moreover, the performance of the FS-PSO-SVM model was better than that of the PCA-PSO-SVM model, showing the highest AUC, accuracy, recall, and specificity values in both the training and testing processes. It was found that the selection of optimal features is crucial to improving the reliability of debris flow susceptibility assessment results. Moreover, the PSO algorithm was found to be not only an effective tool for hyperparameter optimization but also a useful feature selection algorithm for improving the prediction accuracy of debris flow susceptibility using machine learning algorithms. The high and very high debris flow susceptibility zones cover 38.01% of the study area, where debris flow may occur under intensive human activities and heavy rainfall events.
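The first step of the hybrid model, using PSO to select SVM hyperparameters by maximizing cross-validated accuracy, can be sketched as a minimal swarm over (log C, log gamma). The synthetic dataset, swarm size, and inertia/acceleration constants are illustrative assumptions, not the study's 133-record dataset or tuning settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Hypothetical susceptibility dataset standing in for the debris-flow records.
X, y = make_classification(n_samples=200, n_features=8, n_informative=5, random_state=3)

def fitness(params):
    # Particles encode log10(C) and log10(gamma).
    C, gamma = 10.0 ** params
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

# Minimal particle swarm over the 2-D hyperparameter space.
n_particles, n_iter = 8, 10
pos = rng.uniform(-3, 3, (n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()
for _ in range(n_iter):
    r1, r2 = rng.uniform(size=(2, n_particles, 1))
    vel = 0.6 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()
print(gbest, pbest_fit.max())
```

The same swarm machinery can be reused for feature selection (the FS-PSO-SVM variant) by letting each particle encode a feature-inclusion mask instead of two hyperparameters.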
Funding: supported in part by the Major Science and Technology Demonstration Project of the Jiangsu Provincial Key R&D Program under Grant No. BE2023025, in part by the National Natural Science Foundation of China under Grant No. 62302238, in part by the Natural Science Foundation of Jiangsu Province under Grant No. BK20220388, in part by the Natural Science Research Project of Colleges and Universities in Jiangsu Province under Grant No. 22KJB520004, and in part by the China Postdoctoral Science Foundation under Grant No. 2022M711689.
Abstract: This paper presents a comprehensive exploration of the integration of the Internet of Things (IoT), big data analysis, cloud computing, and Artificial Intelligence (AI), which has led to an unprecedented era of connectivity. We delve into the emerging trend of machine learning on embedded devices, enabling tasks in resource-limited environments. However, the widespread adoption of machine learning raises significant privacy concerns, necessitating the development of privacy-preserving techniques. One such technique, secure multi-party computation (MPC), allows collaborative computations without exposing private inputs. Despite its potential, complex protocols and communication interactions hinder performance, especially on resource-constrained devices. Efforts to enhance efficiency have been made, but scalability remains a challenge. Given the success of GPUs in deep learning, leveraging embedded GPUs, such as those offered by NVIDIA, emerges as a promising solution. Therefore, we propose an Embedded GPU-based Secure Two-party Computation (EG-STC) framework for Artificial Intelligence (AI) systems. To the best of our knowledge, this work represents the first endeavor to fully implement machine learning model training based on secure two-party computation on an embedded GPU platform. Our experimental results demonstrate the effectiveness of EG-STC. On an embedded GPU with a power draw of 5 W, our implementation achieved a secure two-party matrix multiplication throughput of 5881.5 kilo-operations per millisecond (kops/ms), with an energy efficiency ratio of 1176.3 kops/ms/W. Furthermore, leveraging our EG-STC framework, we achieved an overall time acceleration ratio of 5-6 times compared to solutions running on server-grade CPUs. Our solution also exhibited a reduced runtime, requiring only 60% to 70% of the runtime of the previously best-known methods on the same platform. In summary, our research contributes to the advancement of secure and efficient machine learning implementations on resource-constrained embedded devices, paving the way for broader adoption of AI technologies in various applications.
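The benchmarked primitive, secure two-party matrix multiplication, is conventionally built from additive secret sharing with Beaver triples; a minimal NumPy sketch of that standard protocol is given below. This illustrates the general MPC technique only, assuming a trusted dealer for the triple; it is not the EG-STC implementation or its GPU kernels.

```python
import numpy as np

MOD = 2 ** 16  # ring Z_{2^16}, a common choice in MPC frameworks
rng = np.random.default_rng(4)

def share(x):
    # Additively secret-share x between two parties: x = x0 + x1 (mod MOD).
    x0 = rng.integers(0, MOD, x.shape)
    return x0, (x - x0) % MOD

# Trusted dealer prepares a Beaver triple (A, B, C = A @ B) in shared form.
A = rng.integers(0, MOD, (2, 2)); B = rng.integers(0, MOD, (2, 2))
C = (A @ B) % MOD
A0, A1 = share(A); B0, B1 = share(B); C0, C1 = share(C)

# The parties hold shares of the actual private inputs X and Y.
X = np.array([[1, 2], [3, 4]]); Y = np.array([[5, 6], [7, 8]])
X0, X1 = share(X); Y0, Y1 = share(Y)

# Both parties open the masked values E = X - A and F = Y - B; these reveal
# nothing about X, Y because A, B are uniformly random masks.
E = (X0 - A0 + X1 - A1) % MOD
F = (Y0 - B0 + Y1 - B1) % MOD

# Beaver's identity: X @ Y = E @ F + E @ B + A @ F + C, computed share-wise.
Z0 = (E @ F + E @ B0 + A0 @ F + C0) % MOD
Z1 = (E @ B1 + A1 @ F + C1) % MOD
Z = (Z0 + Z1) % MOD
print(Z)  # reconstructs (X @ Y) mod 2^16, i.e. [[19, 22], [43, 50]]
```

Each party's local work is plain matrix arithmetic over a ring, which is exactly the part EG-STC offloads to the embedded GPU.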
Funding: This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDB34030000), the National Key Research and Development Program of China (No. 2022YFA1602404), the National Natural Science Foundation of China (No. U1832129), and the Youth Innovation Promotion Association CAS (No. 2017309).
Abstract: Traditional particle identification methods face time-consuming, experience-dependent, and poor-repeatability challenges in heavy-ion collisions at low and intermediate energies, and researchers urgently need solutions to this dilemma. This study explores the possibility of applying intelligent learning algorithms to particle identification in heavy-ion collisions at low and intermediate energies. Multiple intelligent algorithms, including XgBoost and TabNet, were selected and tested on datasets from the neutron ion multi-detector for reaction-oriented dynamics (NIMROD-ISiS) and Geant4 simulation. Tree-based machine learning algorithms and deep learning algorithms such as TabNet show excellent performance and generalization ability. Adding data features beyond energy deposition can improve an algorithm's performance when the data distribution is nonuniform. Intelligent learning algorithms can thus be applied to solve the particle identification problem in heavy-ion collisions at low and intermediate energies.
Funding: This work was supported in part by the National Science and Technology Council of Taiwan, under Contract NSTC 112-2410-H-324-001-MY2.
Abstract: In recent decades, fog computing has played a vital role in executing parallel computational tasks, specifically scientific workflow tasks. In cloud data centers, fog computing takes more time to run workflow applications. Therefore, it is essential to develop effective models for virtual machine (VM) allocation and task scheduling in fog computing environments. Effective task scheduling, VM migration, and allocation together optimize the use of computational resources across different fog nodes. This process ensures that tasks are executed with minimal energy consumption, which reduces the chance of resource bottlenecks. In this manuscript, the proposed framework comprises two phases: (i) effective task scheduling using a fractional selectivity approach and (ii) VM allocation using a proposed algorithm named Fitness Sharing Chaotic Particle Swarm Optimization (FSCPSO). The FSCPSO algorithm integrates the concepts of chaos theory and fitness sharing, effectively balancing global exploration and local exploitation. This balance enables a wide range of solutions, leading to minimal total cost and makespan in comparison to other traditional optimization algorithms. The FSCPSO algorithm's performance is analyzed using six evaluation measures: Load Balancing Level (LBL), Average Resource Utilization (ARU), total cost, makespan, energy consumption, and response time. Relative to the conventional optimization algorithms, the FSCPSO algorithm achieves a higher LBL of 39.12%, an ARU of 58.15%, a minimal total cost of 1175, and a makespan of 85.87 ms, particularly when evaluated on 50 tasks.
Abstract: The rapid evolution of wireless communication technologies has underscored the critical role of antennas in ensuring seamless connectivity. Antenna defects, ranging from manufacturing imperfections to environmental wear, pose significant challenges to the reliability and performance of communication systems. This review navigates the landscape of antenna defect detection, emphasizing the need for a nuanced understanding of the various defect types and the associated challenges in visual detection. It serves as a valuable resource for researchers, engineers, and practitioners engaged in the design and maintenance of communication systems, and the insights presented here pave the way for enhanced reliability in antenna systems through targeted defect detection measures. In this study, a comprehensive literature analysis of computer vision algorithms employed in end-of-line visual inspection of antenna parts is presented. The PRISMA principles are followed throughout the review, whose goals are to summarize recent research, identify relevant computer vision techniques, and evaluate how effective these techniques are at discovering defects during inspections. The review covers articles from scholarly journals as well as papers presented at conferences up until June 2023. Relevant search phrases were used, and papers were chosen according to inclusion and exclusion criteria. Several computer vision approaches, such as feature extraction and defect classification, are broken down and analyzed, and their applicability and performance are discussed. The review highlights the significance of utilizing a wide variety of datasets and measurement criteria. The findings add to the existing body of knowledge and point researchers toward promising new areas of investigation, such as real-time inspection systems and multispectral imaging. On the whole, this review offers a complete study of computer vision approaches for quality control of antenna parts, providing helpful insights and drawing attention to areas that require additional exploration.
Abstract: The dimensional accuracy of machined parts is strongly influenced by the thermal behavior of machine tools (MT). Minimizing this influence represents a key objective for any modern manufacturing industry. Thermally induced positioning-error compensation remains the most effective and practical method in this context. However, the efficiency of the compensation process depends on the quality of the model used to predict the thermal errors. The model should consistently reflect the relationships between the temperature distribution in the MT structure and the thermally induced positioning errors. A judicious choice of the number and location of temperature-sensitive points to represent heat distribution is a key factor for robust thermal error modeling. Therefore, in this paper, the temperature-sensitive points are selected following a structured thermomechanical analysis carried out to evaluate the effects of various temperature gradients on MT structure deformation intensity. The MT thermal behavior is first modeled using the finite element method and validated against various experimentally measured temperature fields using temperature sensors and thermal imaging. This validation shows a maximum error of less than 10% when comparing the numerical estimations with the experimental results, even under changing operating conditions. The numerical model is then used in several series of simulations under varied working conditions to explore possible relationships between temperature distribution and thermal deformation characteristics, and to select the most appropriate temperature-sensitive points for building an empirical model that predicts thermal errors as a function of the MT thermal state. Validation tests using an artificial neural network-based simplified model confirmed the efficiency of the proposed temperature-sensitive points, allowing the prediction of the thermally induced errors with an accuracy greater than 90%.
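The final modeling step, an artificial neural network mapping readings at the temperature-sensitive points to the thermally induced error, can be sketched as below. The four sensor channels, the toy thermal-error law, and scikit-learn's MLPRegressor are illustrative assumptions, not the paper's FEM-derived data or network.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n = 600
# Hypothetical readings from four temperature-sensitive points (K above ambient).
T = rng.uniform(0, 15, (n, 4))
# Toy thermal-error law in micrometres; coefficients are illustrative only.
err = (1.2 * T[:, 0] + 0.8 * T[:, 1] - 0.3 * T[:, 2]
       + 0.1 * T[:, 3] ** 1.5 + rng.normal(0, 0.5, n))

Ttr, Tte, etr, ete = train_test_split(T, err, random_state=0)
# Small ANN surrogate: scale the sensor channels, then fit one hidden layer.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                   random_state=0))
model.fit(Ttr, etr)
score = model.score(Tte, ete)  # R^2 on held-out thermal states
print(round(score, 3))
```

With well-chosen sensitive points the mapping is nearly deterministic, so even a small network recovers the error to within the noise floor, consistent with the >90% prediction accuracy reported above.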
Abstract: Analyzing big data, especially medical data, helps to provide good health care to patients and to mitigate the risk of death. The COVID-19 pandemic has had a significant impact on public health worldwide, emphasizing the need for effective risk prediction models. Machine learning (ML) techniques have shown promise in analyzing complex data patterns and predicting disease outcomes, and their accuracy is greatly affected by the choice of parameters. Hyperparameter optimization plays a crucial role in improving model performance. In this work, the Particle Swarm Optimization (PSO) algorithm was used to efficiently search the hyperparameter space and improve the predictive power of machine learning models by identifying the optimal hyperparameters that provide the highest accuracy. A dataset with a variety of clinical and epidemiological characteristics linked to COVID-19 cases was used in this study. Various machine learning models, including Random Forests, Decision Trees, Support Vector Machines, and Neural Networks, were utilized to capture the complex relationships present in the data. The accuracy metric was employed to evaluate the predictive performance of the models. The experimental findings showed that the suggested method of estimating COVID-19 risk is effective: when compared to baseline models, the optimized machine learning models performed better and produced better results.
Abstract: In order to reduce the weight of airplanes and improve their mechanical behavior, more and more large integrated parts are applied in the modern aviation industry. When machining thin-walled airplane parts, more than 90% of the material may be removed, resulting in severe distortion of the parts due to weakened rigidity and the release of residual stress. This might also lead to stress concentration and damage to the parts. The effect of material removal from a residually stressed billet is simulated using the FEA software MSC.Marc, and the causes of distortion are analyzed. To verify the finite element simulation, a high-speed milling test on aluminum alloy 7050T7351 is carried out. The results show that the simulation is consistent with the experiment. It is concluded that the release of residual stress is the main cause of machining distortion.
Funding: supported by the Shandong Provincial Natural Science Foundation of China (Grant No. Y2007A07).
Abstract: In dealing with abrasive waterjet machining (AWJM) simulation, most of the literature applies the finite element method (FEM) to build pure-waterjet models or single-abrasive-particle erosion models. To overcome the mesh distortion caused by large deformation in FEM and to consider the effects of both water and abrasive, a smoothed particle hydrodynamics (SPH) coupled FEM model for AWJM simulation is presented, in which the abrasive waterjet is modeled by SPH particles and the target material is modeled by FEM; the two parts interact through a contact algorithm. Using this model, a high-velocity abrasive waterjet penetrating the target material is simulated and the erosion mechanism is depicted. The relationships between the depth of penetration and jet parameters, including water pressure and traverse speed, are analyzed based on the simulation, and the simulation results agree well with existing experimental data. Mixed multi-material SPH particles, which contain abrasive and water, are generated by means of a randomized algorithm, and a material model for the abrasive is presented. This study not only provides a new, powerful tool for the simulation of abrasive waterjet machining, but is also beneficial for understanding its cutting mechanism and optimizing the operating parameters.
Abstract: Al2O3-particle-reinforced aluminum matrix composites (Al2O3p/Al) are increasingly widely used for their excellent physical and chemical properties. However, their poor machinability leads to severe tool wear and poor machined surfaces. In this paper, laser-assisted machining is adopted for the Al2O3p/Al composite, with good results. The experiments show that, compared with conventional cutting, laser-assisted machining reduces the cutting force by 30%-50%, reduces tool wear by 20%-30%, and improves the machined surface quality. A physical model of the cutting process is set up and explains why the cutting forces are reduced; the state of the particles is the main influence on this change. When the material in the cutting zone is heated by the laser, the aluminum matrix becomes softer and deforms plastically more easily, which reduces the pushing force from the tool on the machined surface. The softened aluminum matrix is more easily squeezed out from the machined surface, which leads to a concentration of Al2O3 particles in the surface layer. The softening effect of laser heating on the aluminum matrix reduces the pushing forces of the Al2O3 particles on the clearance face of the cutting tool; these forces are precisely the cause of the severe tool wear in conventional machining of Al2O3p/Al composites. Because Al2O3 particles are pushed in during the cutting process, the particle content increases in the surface layer. Because of the differences in thermal conductivity and thermal expansion between the Al matrix and the Al2O3 particles, the residual stress in the matrix changes after machining due to the extrusion of the tool, deformation of the matrix, and displacement of the Al2O3 particles in the matrix. A temperature gradient arises in the cutting zone and the workpiece surface layer, leading to increased thermal stress and misfit dislocations in the matrix. The residual stress in the laser-assisted hot-cutting surface is compressive, and the compressive stress is nearly three times that in the conventionally cut surface. The paper also analyzes the mechanism of laser-heat-assisted machining of the Al2O3p/Al composite.
Funding: Supported by the Research Fund for the Doctoral Program of the Ministry of Education of China (Grant No. 20090041110031) and the National Natural Science Foundation of China (Grant No. 50575033).
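The softening argument in the abstract above can be made quantitative with a Johnson-Cook-style thermal-softening factor, in which flow stress falls as temperature approaches the matrix melting point. The numbers below are illustrative assumptions, not measurements from the paper.

```python
# Minimal sketch of the thermal-softening argument: a Johnson-Cook-style
# factor (1 - T*)**m lowers the matrix flow stress as the laser preheats
# the cutting zone.  All parameter values are illustrative, not measured.

def softening_factor(temp_c: float, t_room: float = 20.0,
                     t_melt: float = 660.0, m: float = 1.0) -> float:
    """Fraction of room-temperature flow stress retained at temp_c (deg C)."""
    t_star = (temp_c - t_room) / (t_melt - t_room)  # homologous temperature
    return (1.0 - t_star) ** m

sigma_room = 150.0                                # MPa, assumed matrix flow stress
sigma_hot = sigma_room * softening_factor(400.0)  # laser-preheated cutting zone

# A matrix preheated to 400 deg C retains well under half its cold flow
# stress, consistent in direction with the reported force reduction:
assert sigma_hot < 0.5 * sigma_room
```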
Abstract: Gap debris, as a discharge product, is closely related to the machining process in electrical discharge machining (EDM). Much recent research has focused on the relationships among debris size, surface texture, removal rate, and machining stability. The statistical distribution of debris size contributes to this research, but its study remains superficial. To obtain the distribution law of debris particle size, a laser particle size analyzer (LPSA) combined with a scanning electron microscope (SEM) is used to analyze EDM debris. First, a heating-and-drying method is applied to collect the debris particles. Second, the measuring range of the LPSA is determined as 0.5-100 μm by SEM observation, and the frequency-distribution histogram and cumulative-frequency scattergram of debris size are obtained with the LPSA. Third, according to the characteristics of the frequency-distribution histogram, lognormal, exponentially modified Gaussian (EMG), Gamma, and Weibull distribution functions are fitted to the histogram. Finally, the distribution law of the debris size is obtained from the fitting results. Experiments with different discharge parameters are carried out on an EDM machine designed by the authors, with a red-copper tool electrode, an ANSI 1045 workpiece, and de-ionized water as the working fluid. The experimental results indicate that the debris sizes of all experimental samples obey the Weibull distribution. The obtained distribution law is important for all models established on the basis of debris particle size.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 51275116), the Aero Science Foundation of China (Grant No. 2012ZE77010), and the Postdoctoral Science Research Development Foundation of Heilongjiang Province, China (Grant No. LBH-Q11090).
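The Weibull-fitting step in the abstract above can be sketched with a standard Weibull probability plot: for sorted sizes, ln(-ln(1-F)) versus ln(x) is linear with slope k. The data here are synthetic stand-ins for the LPSA measurements, with illustrative shape and scale values.

```python
import math
import random

# Synthetic LPSA-like debris sizes drawn from a Weibull distribution
# (shape k=1.8, scale 12 um -- illustrative values, not the paper's data),
# sampled by inverse-CDF: x = lam * (-ln(1-U))**(1/k).
random.seed(0)
k_true, lam_true = 1.8, 12.0
sizes = sorted(lam_true * (-math.log(1.0 - random.random())) ** (1.0 / k_true)
               for _ in range(2000))

# Weibull probability plot: ln(-ln(1-F)) vs ln(x) is linear with slope k
# and intercept -k*ln(lambda); fit by ordinary least squares with
# median-rank plotting positions F_i = (i + 0.5)/n.
n = len(sizes)
xs = [math.log(x) for x in sizes]
ys = [math.log(-math.log(1.0 - (i + 0.5) / n)) for i in range(n)]
mx, my = sum(xs) / n, sum(ys) / n
k_hat = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
lam_hat = math.exp(mx - my / k_hat)

# The fitted parameters recover the generating distribution:
assert abs(k_hat - k_true) < 0.3
assert abs(lam_hat - lam_true) < 1.5
```

A near-linear Weibull plot of the real debris data would support the paper's finding; a curved plot would favor one of the other candidate distributions.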
Abstract: An electric double-layer model incorporating particle transport was developed based on electrochemical principles. In accordance with this theory, a particle-based cavitation-catalysis removal mechanism for ultrasonic-pulse electrochemical compound machining (UPECM) was proposed. This removal mechanism was a particular focus and was validated by experiments. The principles and experiments of UPECM are introduced, and a removal model based on these principles is established. Furthermore, the effects of the main processing parameters on the material removal rate, including particle size, ultrasonic vibration amplitude, pulse voltage, and the minimum machining gap between the tool and the workpiece, are studied. The results show that the particles promote the compound machining process and thus act as a catalyst for UPECM. The results also indicate that processing speed, machining accuracy, and surface quality can all be improved under UPECM.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 51575087, 51205041), the Science Fund for Creative Research Groups (Grant No. 51321004), the Basic Research Foundation of the Key Laboratory of the Liaoning Educational Committee, China (Grant No. LZ2014003), and a Research Project of the Ministry of Education of China (Grant No. 113018A).
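The electrochemical side of the UPECM removal model discussed above is commonly grounded in Faraday's law, with the current set by the pulse voltage and the instantaneous gap. The sketch below is a hedged illustration of that dependence only; every parameter value is an assumption, not taken from the paper.

```python
# Hedged sketch of an electrochemical removal-rate term via Faraday's law
# with an ohmic gap model: current density j = kappa * U / gap.
# All parameter values below are illustrative, not from the paper.
F_CONST = 96485.0  # Faraday constant, C/mol

def mrr_mm3_per_s(voltage_v, gap_mm, kappa=0.05, area_mm2=10.0,
                  eta=0.8, molar_mass_g=55.8, z=2, density_g_mm3=7.8e-3):
    """Anodic volumetric removal rate (mm^3/s) for an assumed workpiece."""
    current_a = kappa * voltage_v / gap_mm * area_mm2        # ohmic gap model
    return eta * current_a * molar_mass_g / (z * F_CONST * density_g_mm3)

# A smaller machining gap or a higher pulse voltage raises the removal
# rate, matching the parameter trends the abstract investigates:
assert mrr_mm3_per_s(12.0, 0.05) > mrr_mm3_per_s(12.0, 0.10)
assert mrr_mm3_per_s(15.0, 0.10) > mrr_mm3_per_s(12.0, 0.10)
```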
Abstract: Parts with varied-curvature features play increasingly critical roles in engineering and are often machined in high-speed continuous-path running mode to ensure machining efficiency. However, the continuous-path running trajectory error is significant during high-feed-speed machining, which seriously restricts the achievable precision for such parts. To reduce this trajectory error without sacrificing machining efficiency, a pre-compensation method is proposed. Based on an analysis of the formation mechanism of the continuous-path running trajectory error, the error is estimated in advance by approximating the desired toolpath with spline curves. An iterative error pre-compensation method is then presented: by machining with the toolpath regenerated after pre-compensation instead of the uncompensated one, the trajectory error can be effectively decreased without reducing the feed speed. To demonstrate the feasibility of the proposed method, a heart-curve toolpath with varied-curvature features is employed. Experimental results indicate that, compared with the uncompensated processing trajectory, the maximum and average machining errors for the pre-compensated trajectory are reduced by 67.19% and 82.30%, respectively. The method thus provides an easy-to-implement solution for high-efficiency, high-precision machining of parts with varied-curvature features.
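The iterative pre-compensation idea above can be sketched schematically: model the machine's continuous-path response, predict the trajectory error it causes, and fold the negated error back into the commanded path until the residual converges. The first-order-lag servo model and sine toolpath below are stand-ins for the real machine dynamics and heart-curve path, not the paper's models.

```python
import math

def track(cmd, alpha=0.3):
    """Toy servo model: first-order lag applied to the commanded positions
    (a stand-in for the real continuous-path running dynamics)."""
    out, y = [], cmd[0]
    for c in cmd:
        y += alpha * (c - y)
        out.append(y)
    return out

# Stand-in for the varied-curvature toolpath (the paper uses a heart curve):
desired = [math.sin(0.1 * i) for i in range(200)]

# Iterative pre-compensation: subtract the predicted trajectory error
# from the command, then re-predict, for a few iterations.
cmd = desired[:]
for _ in range(5):
    err = [a - d for a, d in zip(track(cmd), desired)]
    cmd = [c - e for c, e in zip(cmd, err)]

max_err_before = max(abs(a - d) for a, d in zip(track(desired), desired))
max_err_after = max(abs(a - d) for a, d in zip(track(cmd), desired))

# The pre-compensated command tracks far better at the same "feed speed":
assert max_err_after < 0.2 * max_err_before
```

The iteration converges here because the lag model's error operator is a contraction; a real implementation would estimate the error from the spline-approximated toolpath and the identified machine dynamics instead.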