The aerospace community widely uses difficult-to-cut materials, such as titanium alloys, high-temperature alloys, metal/ceramic/polymer matrix composites, hard and brittle materials, and geometrically complex components, such as thin-walled structures, microchannels, and complex surfaces. Mechanical machining is the main material removal process for the vast majority of aerospace components. However, many problems exist, including severe and rapid tool wear, low machining efficiency, and poor surface integrity. Nontraditional energy-assisted mechanical machining is a hybrid process that uses nontraditional energies (vibration, laser, electricity, etc.) to improve the machinability of local materials and decrease the burden of mechanical machining. This provides a feasible and promising method to improve the material removal rate and surface quality, reduce process forces, and prolong tool life. However, systematic reviews of this technology are lacking with respect to the current research status and development direction. This paper reviews the recent progress in the nontraditional energy-assisted mechanical machining of difficult-to-cut materials and components in the aerospace community. In addition, this paper focuses on the processing principles, material responses under nontraditional energy, resultant forces and temperatures, material removal mechanisms, and applications of these processes, including vibration-, laser-, electric-, magnetic-, chemical-, advanced coolant-, and hybrid nontraditional energy-assisted mechanical machining. Finally, a comprehensive summary of the principles, advantages, and limitations of each hybrid process is provided, and future perspectives on forward design, device development, and sustainability of nontraditional energy-assisted mechanical machining processes are discussed.
With the continuous development and advancement of science and technology, tool path planning has received extensive attention. Within this field, curved surface generation and data processing are the focus of management and design. They require the full application of the reverse design of complex curved-surface components to complete numerical control machining, effective optimization and upgrading, and the integration of point cloud data collection and point cloud data processing, so that the corresponding computer numerical control machining model can deliver its actual value. This paper briefly analyzes the basic principles of curved surface reconstruction, discusses the reverse design of complex curved components, and describes the experimental processes and results of the associated computer numerical control machining; it is intended as a reference only.
The visual analysis of common neurological disorders such as epileptic seizures in electroencephalography (EEG) is a demanding operation that is prone to errors, which has motivated researchers to develop effective automated seizure detection methods. This paper proposes a robust automatic seizure detection method that can establish a reliable diagnosis of these diseases. The proposed method consists of three steps: (i) remove artifacts from the EEG data using a Savitzky-Golay filter and multi-scale principal component analysis (MSPCA); (ii) extract features from the EEG signals using signal decomposition representations based on empirical mode decomposition (EMD), the discrete wavelet transform (DWT), and the dual-tree complex wavelet transform (DTCWT), which overcome the non-linearity and non-stationarity of EEG signals; and (iii) assign the feature vector to the relevant class (i.e., the seizure class "ictal" or the seizure-free class "interictal") using machine learning techniques such as the support vector machine (SVM), k-nearest neighbor (k-NN), and linear discriminant analysis (LDA). The experimental results were based on two EEG datasets generated from the CHB-MIT database, with and without an overlapping process. The results show the effectiveness of the proposed method, which achieves a classification accuracy of up to 100% and outperforms similar state-of-the-art methods.
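A purely illustrative sketch (not code from the paper) of how such a pipeline can be wired together is shown below: DWT-based feature extraction per EEG window followed by an SVM classifier, assuming PyWavelets and scikit-learn are available and using placeholder arrays in place of the CHB-MIT segments and labels.

```python
# Illustrative sketch only: DWT feature extraction + SVM classification of EEG windows.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def dwt_features(segment, wavelet="db4", level=4):
    """Summarize each DWT sub-band of a 1-D EEG segment with simple statistics."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats.extend([np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)])
    return np.array(feats)

# Placeholder data standing in for (ictal/interictal) EEG windows and labels.
rng = np.random.default_rng(0)
segments = rng.standard_normal((40, 1024))
labels = rng.integers(0, 2, size=40)

X = np.vstack([dwt_features(s) for s in segments])
clf = SVC(kernel="rbf", C=1.0)
print("5-fold CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```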
Nowadays, power quality issues are becoming a significant research topic because of the increasing inclusion of very sensitive devices and considerable renewable energy sources. In general, most of the previous power quality classification techniques focused on single power quality events and did not include an optimal feature selection process. This paper presents a classification system that employs Wavelet Transform and the RMS profile to extract the main features of the measured waveforms containing either single or complex disturbances. A data mining process is designed to select the optimal set of features that better describes each disturbance present in the waveform. Support Vector Machine binary classifiers organized in a "One Vs Rest" architecture are individually optimized to classify single and complex disturbances. The parameters that rule the performance of each binary classifier are also individually adjusted using a grid search algorithm that helps them achieve optimal performance. This specialized process significantly improves the total classification accuracy. Several single and complex disturbances were simulated in order to train and test the algorithm. The results show that the classifier is capable of identifying >99% of single disturbances and >97% of complex disturbances.
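As an illustration only (not taken from the paper), the sketch below shows the per-class tuning idea: one SVM binary classifier per disturbance class in a one-vs-rest arrangement, each adjusted independently by a grid search over its own parameters, assuming scikit-learn and placeholder feature and label arrays.

```python
# Illustrative sketch: independently grid-searched one-vs-rest SVM binary classifiers.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 12))   # placeholder wavelet/RMS features
y = rng.integers(0, 4, size=200)     # placeholder disturbance class labels

param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]}
binary_classifiers = {}
for cls in np.unique(y):
    # One binary SVM per class (class vs. rest), tuned on its own grid.
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3)
    search.fit(X, (y == cls).astype(int))
    binary_classifiers[cls] = search.best_estimator_
    print(f"class {cls}: best params {search.best_params_}")
```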
Machine learning models were used to improve the accuracy of the China Meteorological Administration Multisource Precipitation Analysis System (CMPAS) in complex terrain areas by combining rain gauge precipitation with topographic factors such as altitude, slope, slope direction, slope variability, and surface roughness, and meteorological factors such as temperature and wind speed. The correction results demonstrate that the ensemble learning methods have a considerable corrective effect and that the three methods adopted in the study (Random Forest, AdaBoost, and Bagging) produce similar results. The mean bias between CMPAS and observations at 85% of the automatic weather stations dropped by more than 30%. The plateau region displays the largest accuracy increase, the winter season shows the greatest error reduction, and the correction outcome improves as precipitation decreases. Additionally, the precision for heavy precipitation processes has improved to some degree. For individual stations, the error fluctuation range of the revised CMPAS is significantly reduced.
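A minimal sketch (illustrative only, with synthetic placeholder data rather than CMPAS fields) of the general correction idea: train an ensemble regressor on the first-guess precipitation plus terrain and meteorological predictors against gauge observations, assuming scikit-learn.

```python
# Illustrative sketch: ensemble correction of a precipitation estimate using
# terrain and meteorological predictors (synthetic placeholder data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([
    rng.gamma(2.0, 2.0, n),      # first-guess precipitation (mm)
    rng.uniform(0, 4000, n),     # altitude (m)
    rng.uniform(0, 45, n),       # slope (deg)
    rng.uniform(0, 1, n),        # surface roughness (arbitrary units)
    rng.uniform(-10, 30, n),     # temperature (deg C)
    rng.uniform(0, 15, n),       # wind speed (m/s)
])
y = X[:, 0] * rng.normal(1.0, 0.2, n)   # placeholder "gauge truth"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("mean abs bias before:", np.mean(np.abs(X_te[:, 0] - y_te)))
print("mean abs bias after: ", np.mean(np.abs(model.predict(X_te) - y_te)))
```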
Modern manufacturing systems are expected to undertake multiple tasks and to be flexible enough for extensive customization, and these trends make production systems more and more complicated. The advantage of a complex production system is its capability to fulfill more intensive production of goods and to adapt to various parameters under different conditions. The disadvantage of a complex system, on the other hand, is that control difficulties rise dramatically as complexity increases. Moreover, classical methods struggle to control a complex system, and searching for an appropriate control policy tends to become more complicated. Thanks to the development of machine learning technology, more possible solutions to this problem are available. In this paper, a hybrid machine learning algorithm that integrates a genetic algorithm and a reinforcement learning algorithm is proposed to address the accuracy of the control policy and the system optimization issue in the simulation of a complex manufacturing system. The objective of this paper is to reduce the makespan and the due date in the manufacturing system. Three use cases, based on different product recipes, are employed to validate the algorithm, and the results prove the applicability of the hybrid algorithm. In addition, some of the results obtained are helpful for finding a solution for complex system optimization and manufacturing system structure transformation.
The security of Federated Learning (FL)/Distributed Machine Learning (DML) is gravely threatened by data poisoning attacks, which destroy the usability of the model by contaminating the training samples; such attacks are therefore called causative availability indiscriminate attacks. Because existing data sanitization methods are hard to apply to real-time applications due to their tedious process and heavy computations, we propose a new supervised batch detection method for poison that can quickly sanitize the training dataset before local model training. We design a training dataset generation method that helps to enhance accuracy and uses data complexity features to train a detection model, which is then used in an efficient batch hierarchical detection process. Our model accumulates knowledge about poison, which can be expanded by retraining to adapt to new attacks. Being neither attack-specific nor scenario-specific, our method is applicable to FL/DML as well as other online or offline scenarios.
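The batch screening idea can be pictured with the hedged sketch below (illustrative only; the paper's specific data complexity features and hierarchical procedure are not reproduced): a detector is trained once on labeled clean and poisoned examples and then filters each incoming training batch before local training.

```python
# Illustrative sketch: a supervised detector that screens training batches
# for poisoned samples before local model training (placeholder features).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
clean_feats = rng.normal(0.0, 1.0, (300, 8))    # hypothetical complexity features, clean
poison_feats = rng.normal(1.5, 1.0, (300, 8))   # hypothetical complexity features, poisoned
X_det = np.vstack([clean_feats, poison_feats])
y_det = np.concatenate([np.zeros(300), np.ones(300)])

detector = GradientBoostingClassifier().fit(X_det, y_det)

def sanitize_batch(batch_features, batch_samples):
    """Keep only the samples the detector labels as clean (0)."""
    keep = detector.predict(batch_features) == 0
    return [s for s, ok in zip(batch_samples, keep) if ok]
```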
A data lake (DL) denotes a vast reservoir or repository of data. It accumulates substantial volumes of data and employs advanced analytics to correlate data from diverse origins containing various forms of semi-structured, structured, and unstructured information. These systems use a flat architecture and run different types of data analytics. NoSQL databases are non-tabular and store data in a different manner than relational tables. NoSQL databases come in various forms, including key-value pairs, documents, wide columns, and graphs, each based on its own data model. They offer simpler scalability and generally outperform traditional relational databases. While NoSQL databases can store diverse data types, they lack full support for the atomicity, consistency, isolation, and durability features found in relational databases. Consequently, employing machine learning approaches becomes necessary to categorize complex structured query language (SQL) queries. Results indicate that the most frequently used automatic classification technique in processing SQL queries on NoSQL databases is machine learning-based classification. Overall, this study provides an overview of the automatic classification techniques used in processing SQL queries on NoSQL databases. Understanding these techniques can aid in the development of effective and efficient NoSQL database applications.
Heart monitoring improves quality of life. Electrocardiograms (ECGs or EKGs) detect heart irregularities. Machine learning algorithms enable several ECG diagnosis processing methods. The first method uses raw ECG and time-series data. The second method classifies the ECG by patient experience. The third translates ECG impulses into Q-wave, R-wave, and S-wave (QRS) features, which carry richer information. Because ECG signals vary naturally between humans and activities, we combine the three feature selection methods to improve classification accuracy and diagnosis. Classification using all three approaches together has not been examined until now. Several researchers have found that Machine Learning (ML) techniques can improve ECG classification. This study compares popular machine learning techniques for evaluating ECG features. Four algorithms, namely Support Vector Machine (SVM), Decision Tree, Naive Bayes, and Neural Network, are compared in terms of their categorization results. SVM plus prior knowledge has the highest accuracy (99%) of the four ML methods. QRS characteristics failed to identify signals without chaos theory. With 99.8% classification accuracy, the Decision Tree technique outperformed all previous experiments.
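For illustration only (not the paper's code or data), the sketch below compares the four named classifiers on placeholder ECG feature vectors using scikit-learn and cross-validation.

```python
# Illustrative sketch: comparing SVM, Decision Tree, Naive Bayes, and a neural
# network on placeholder ECG feature vectors.
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.standard_normal((300, 20))   # placeholder QRS/time-series features
y = rng.integers(0, 2, size=300)     # placeholder normal/abnormal labels

models = {
    "SVM": SVC(kernel="rbf"),
    "Decision Tree": DecisionTreeClassifier(max_depth=5),
    "Naive Bayes": GaussianNB(),
    "Neural Network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
}
for name, model in models.items():
    print(f"{name}: {cross_val_score(model, X, y, cv=5).mean():.3f}")
```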
Digitization precision analysis is currently an important tool to ensure the design precision of machine tools. Related research on precision modeling and analysis mainly focuses on the geometric precision and motion precision of the machine tool and on the forming motion precision of the workpiece surface. For a machine tool with a complex forming motion, there is no accurate correspondence between the existing precision design criteria and the machining precision of the workpiece. Therefore, a design scheme for machine tool precision based on error prediction is proposed, whose core is a two-stage digitization precision analysis. The first stage addresses the technology system: it completes the precision distribution and inspection from the workpiece to the various component parts of the technology system and achieves the total output precision of the machine tool under the specified machining precision. The second stage addresses the machine tool system: it completes the precision distribution and inspection from the output precision of the machine tool to the machine tool components. This article takes the YK3610 gear hobber as an example to describe the error models of the two systems and the basic application method; the practical cutting precision of this machine tool reaches grade 5-4-4. The proposed method can provide reliable guidance for the precision design of machine tools with complex forming motions.
A hybrid two-stage flowshop scheduling problem was considered which involves m identical parallel machines at Stage 1 and a burn-in processor M at Stage 2, with the makespan taken as the minimization objective. This scheduling problem is NP-hard in general. We divide it into eight subcases. Except for the following two subcases: (1) b ≥ a_n, max{m, B} < n; (2) a_1 ≤ b ≤ a_n, m ≤ B < n, the NP-hardness of all other subcases was proved or pointed out, corresponding approximation algorithms were constructed, and their worst-case performances were estimated. In these approximation algorithms, the Multifit and PTAS algorithms, respectively, were used to schedule the jobs on the m identical parallel machines.
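For readers unfamiliar with the Multifit heuristic mentioned above, the following is a hedged sketch of its standard form for m identical parallel machines (minimizing makespan): binary-search a machine capacity and test it with first-fit-decreasing packing. The paper adapts such algorithms to its hybrid flowshop subcases; this sketch shows only the basic idea.

```python
# Illustrative sketch of the standard Multifit idea for m identical parallel
# machines: binary-search a capacity C and check whether first-fit-decreasing
# packs all jobs into m machines of capacity C.
def ffd_fits(jobs, m, capacity):
    """First-fit decreasing: can all jobs be packed into m machines of the given capacity?"""
    loads = [0.0] * m
    for p in sorted(jobs, reverse=True):
        for i in range(m):
            if loads[i] + p <= capacity:
                loads[i] += p
                break
        else:
            return False
    return True

def multifit(jobs, m, iterations=20):
    """Return an upper bound on the makespan found by the Multifit search."""
    lo = max(max(jobs), sum(jobs) / m)       # trivial lower bound on the makespan
    hi = max(max(jobs), 2 * sum(jobs) / m)   # capacity at which FFD is known to succeed
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if ffd_fits(jobs, m, mid):
            hi = mid
        else:
            lo = mid
    return hi

print(multifit([7, 5, 4, 4, 3, 2, 2], m=3))
```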
The prediction of intrinsically disordered proteins is a hot research area in bioinformatics. Due to the high cost of experimental methods for evaluating disordered regions of protein sequences, it is becoming increasingly important to predict those regions through computational methods. In this paper, we developed a novel scheme that employs sequence complexity to calculate six features for each residue of a protein sequence: the Shannon entropy, the topological entropy, the sample entropy, and three amino acid preferences, namely Remark 465, Deleage/Roux, and Bfactor (2STD). In particular, we introduced the sample entropy for calculating time-series complexity by mapping the amino acid sequence to a time series of 0-9. To our knowledge, the sample entropy has not previously been used for predicting IDPs and hence is used for the first time in our study. In addition, the scheme uses a properly sized sliding window over every protein sequence, which greatly improves the prediction performance. Finally, we applied seven machine learning algorithms and tested them with 10-fold cross-validation on the dataset R80 collected by Yang et al. and on the dataset DIS1556 from the Database of Protein Disorder (DisProt) (https://www.disprot.org), which contains experimentally determined intrinsically disordered proteins (IDPs). The results showed that k-Nearest Neighbor was the most appropriate classifier, with an overall prediction accuracy of 92%. Furthermore, our method uses just six features and hence requires lower computational complexity.
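As a small illustration of one of the named features (not the authors' implementation), the sketch below computes a per-residue Shannon entropy over a sliding window of an amino acid sequence; the window size and example sequence are arbitrary placeholders.

```python
# Illustrative sketch: per-residue Shannon entropy over a sliding window of an
# amino acid sequence (window size and sequence are arbitrary placeholders).
import math
from collections import Counter

def shannon_entropy(window):
    """Shannon entropy (bits) of the residue distribution in a window."""
    counts = Counter(window)
    total = len(window)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def sliding_entropy(sequence, window_size=21):
    """Entropy feature for each residue, using a centered window clipped at the ends."""
    half = window_size // 2
    return [
        shannon_entropy(sequence[max(0, i - half): i + half + 1])
        for i in range(len(sequence))
    ]

print(sliding_entropy("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", window_size=11)[:5])
```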
In this paper, single machine scheduling problems with variable processing times are raised. The criteria considered include minimizing the schedule length of all jobs, the flow time, and the number of tardy jobs. The complexity of the problem is determined.
Parallel machine problems with a single server and release times are generalizations of classical parallel machine problems. Before processing, each job must be loaded on a machine, which takes a certain release time and a certain setup time. All these setups have to be done by a single server, which can handle at most one job at a time. In this paper, we continue studying complexity results for the parallel machine problem with a single server and release times. New complexity results are derived for special cases.
In this paper, we use machine learning techniques to form a cancer cell model that displays the growth and promotion of synaptic and electrical signals. Such a technique can be applied directly to a spiking neural network of cancer cell synapses. The results show that machine learning techniques for the spiking network of cancer cell synapses provide powerful neuron models and potential supervisors for different implementations. The changes in the neural activity of the tumor microenvironment caused by synaptic and electrical signals are described. The model can be applied to the training of neural networks for cancer cells and tumors to reproduce complex spatiotemporal dynamics, and to relate the excitatory synaptic structures between tumors and neurons in the brain to complex human health behaviors.
This paper uses the concept of algorithmic efficiency to present a unified theory of intelligence. Intelligence is defined informally, formally, and computationally. We introduce the concept of dimensional complexity in algorithmic efficiency and deduce that an optimally efficient algorithm has zero time complexity, zero space complexity, and an infinite dimensional complexity. This algorithm is used to generate the number line.
To address the shortcomings of diagnosis systems for complex electronic devices, such as failure models that are hard to build and low fault isolation resolution, a new hierarchical modeling and diagnosis method is proposed based on the multisignal model and the support vector machine (SVM). The multisignal model is used to describe the failure propagation relationships in an electronic device system, and the most probable faulty printed circuit boards (PCBs) can be found by Bayesian inference. The exact failure modes in the PCBs can then be identified by the SVM. The results show that the proposed modeling and diagnosis method is effective and suitable for the diagnosis of complex electronic devices.