In this paper, we propose a structural developmental neural network to address the plasticity-stability dilemma, computational inefficiency, and lack of prior knowledge in continual unsupervised learning. This model uses competitive learning rules and dynamic neurons with information saturation to achieve parameter adjustment and adaptive structure development. Dynamic neurons adjust their information saturation after winning the competition and use this parameter to modulate parameter adjustment and division timing. By dividing to generate new neurons, the network not only remains sensitive to novel features but can also subdivide repeatedly learned classes. The dynamic neurons with information saturation and a division mechanism can simulate the long short-term memory of the human brain, which enables the network to continually learn new samples while maintaining previous learning results. The parent-child relationship between neurons arising from neuronal division enables the network to simulate the human cognitive process of gradually refining the perception of objects. By setting the clustering layer parameter, users can choose the desired degree of class subdivision. Experimental results on artificial and real-world datasets demonstrate that the proposed model is feasible for unsupervised learning tasks in instance-increment and class-increment scenarios and outperforms prior structural developmental neural networks.
In order to solve the problem of automatic defect detection and process control in the welding and arc additive process, the paper monitors the current, voltage, audio, and other data during the welding process and extracts the minimum value, standard deviation, and deviation from the voltage and current data. It extracts features such as root mean square, spectral centroid, and zero-crossing rate from the audio data, fuses the features extracted from multiple sensor signals, and establishes multiple supervised and unsupervised machine learning models, which are used to detect abnormalities in the welding process. The experimental results show that the established machine learning models achieve high accuracy: among the supervised learning models, AdaBoost reaches a balanced accuracy of 0.957, and the unsupervised Isolation Forest model reaches a balanced accuracy of 0.909.
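The audio features named in the abstract above (root mean square, spectral centroid, zero-crossing rate) can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's pipeline: the signal, sampling rate, and windowing here are illustrative assumptions.

```python
import numpy as np

def extract_features(signal, sample_rate):
    """Compute simple time- and frequency-domain features from a 1-D signal."""
    rms = np.sqrt(np.mean(signal ** 2))                      # root mean square energy
    zcr = np.mean(np.abs(np.diff(np.sign(signal))) > 0)      # zero-crossing rate
    spectrum = np.abs(np.fft.rfft(signal))                   # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)   # spectral centroid
    return {"min": signal.min(), "std": signal.std(), "rms": rms,
            "zcr": zcr, "centroid": centroid}

# A 100 Hz sine sampled at 8 kHz: the spectral centroid should sit near 100 Hz.
t = np.arange(8000) / 8000.0
feats = extract_features(np.sin(2 * np.pi * 100 * t), 8000)
```

In practice these features would be computed per window over the streaming weld signal and concatenated with the voltage/current statistics before training.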
This paper reports distinct spatio-spectral properties of Zen-meditation EEG (electroencephalograph), compared with resting EEG, by implementing an unsupervised machine learning scheme to cluster the brain mappings of centroid frequency (BMFc). Zen practitioners simultaneously concentrate on the third ventricle, hypothalamus and corpora quadrigemina to universalize all brain neurons to construct a <i>detached</i> brain and gradually change the normal brain traits, leading to the process of brain neuroplasticity. During such tri-aperture concentration, EEG exhibits prominent diffuse high-frequency oscillations. The unsupervised self-organizing map (SOM) clusters the dataset of quantitative EEG by matching the input feature vector Fc and the output cluster center through the SOM network weights. The input dataset contains brain mappings of 30 centroid frequencies extracted from CWT (continuous wavelet transform) coefficients. According to the SOM clustering results, resting EEG is dominated by global low-frequency (<14 Hz) activities, except channels T7, F7 and TP7 (>14.4 Hz), whereas Zen-meditation EEG exhibits globally high-frequency (>16 Hz) activities throughout the entire record. Beta waves with a wide range of frequencies are often associated with active concentration. Nonetheless, clinical reports disclose that benzodiazepines, a medication for anxiety, insomnia and panic attacks used to relieve mind/body stress, often induce <i>beta buzz</i>. We may hypothesize that Zen-meditation practitioners attain a unique state of mindfulness concentration under optimal body-mind relaxation.
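The SOM step above matches each input feature vector to its best-matching unit and pulls neighboring weights toward it. A minimal 1-D SOM sketch in NumPy, with toy data standing in for the EEG centroid-frequency mappings (the map size, learning schedule, and data are illustrative assumptions):

```python
import numpy as np

def train_som(data, n_units=4, epochs=50, lr0=0.5, sigma0=1.0, seed=0):
    """Train a 1-D self-organizing map: each input finds its best-matching
    unit (BMU); the BMU and its neighbors move toward the input."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n_units, data.shape[1]))
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                    # decaying learning rate
        sigma = max(sigma0 * (1 - epoch / epochs), 1e-3)   # shrinking neighborhood
        for x in data:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))
            dist = np.abs(np.arange(n_units) - bmu)
            h = np.exp(-dist ** 2 / (2 * sigma ** 2))      # neighborhood function
            w += lr * h[:, None] * (x - w)
    return w

def assign(data, w):
    return np.array([np.argmin(np.linalg.norm(w - x, axis=1)) for x in data])

# Two well-separated blobs (stand-ins for rest vs. meditation feature vectors)
# should map to different SOM units.
rng = np.random.default_rng(1)
low = rng.normal(0.0, 0.1, size=(20, 3))
high = rng.normal(5.0, 0.1, size=(20, 3))
w = train_som(np.vstack([low, high]))
labels = assign(np.vstack([low, high]), w)
```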
Underwater image enhancement aims to restore a clean appearance and thus improve the quality of underwater degraded images. Current methods feed the whole image directly into the model for enhancement. However, they ignore that the R, G and B channels of underwater degraded images present varied degrees of degradation due to the selective absorption of light. To address this issue, we propose an unsupervised multi-expert learning model that considers the enhancement of each color channel. Specifically, an unsupervised architecture based on a generative adversarial network is employed to alleviate the need for paired underwater images. Based on this, we design a generator, including a multi-expert encoder, a feature fusion module and a feature fusion-guided decoder, to generate the clear underwater image. Accordingly, a multi-expert discriminator is proposed to verify the authenticity of the R, G and B channels, respectively. In addition, content perceptual loss and edge loss are introduced into the loss function to further improve the content and details of the enhanced images. Extensive experiments on public datasets demonstrate that our method achieves more pleasing results in visual quality, and various metrics (PSNR, SSIM, UIQM and UCIQE) evaluated on our enhanced images are markedly improved.
Unsupervised learning algorithms can effectively address sample imbalance. To detect battery consistency anomalies in new energy vehicles, we adopt a variety of unsupervised learning algorithms to evaluate and predict the battery consistency of three vehicles using charging fragment data from actual operating conditions. We extract battery-related features, such as the mean of the maximum difference, standard deviation, and entropy of the batteries, and then apply principal component analysis to reduce the dimensionality and record the amount of preserved information. We then build models through a collection of unsupervised learning algorithms for the anomaly detection of cell consistency faults. We also determine whether unsupervised and supervised learning algorithms can address the battery consistency problem and document the parameter tuning process. In addition, we compare the prediction effectiveness of charging and discharging features modeled individually and in combination, determine that charging and discharging features should be modeled in combination, and visualize the multidimensional data for fault detection. Experimental results show that the unsupervised learning algorithm is effective in visualizing and predicting battery cell consistency faults and can accurately predict faults in real time. The "distance + boxplot" algorithm shows the best performance, with a prediction accuracy of 80%, a recall rate of 100%, and an F1 score of 0.89. The proposed approach can be applied to monitor battery consistency faults in real time and reduce the possibility of disasters arising from consistency faults.
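The "distance + boxplot" idea above can be illustrated with the standard boxplot (IQR) outlier rule applied to per-cell distance scores. The scores below are invented for illustration; the paper's actual distance metric and data are not reproduced here.

```python
import numpy as np

def boxplot_outliers(scores, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR], the standard boxplot rule."""
    q1, q3 = np.percentile(scores, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return (scores < lo) | (scores > hi)

# Hypothetical distance-to-fleet-centroid scores for 10 cells; one cell drifts.
distances = np.array([0.92, 1.0, 1.1, 0.95, 1.05, 1.0, 0.98, 1.02, 1.0, 6.0])
flags = boxplot_outliers(distances)
```

Only the drifting cell falls outside the whiskers, so it alone is flagged as a consistency anomaly.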
Highly stretchable and robust strain sensors are rapidly emerging as promising candidates for a diverse range of wearable electronics. The main challenges for the practical application of wearable electronics are energy consumption and device aging. Energy consumption mainly depends on the conductivity of the sensor, which is also a key factor in determining device aging. Here, we design a liquid metal (LM)-embedded hydrogel as a sensing material to overcome the barriers of energy consumption and device aging in wearable electronics. The sensing material simultaneously exhibits high conductivity (up to 22 S m⁻¹), low elastic modulus (23 kPa), and ultrahigh stretchability (1500%) with excellent robustness (consistent performance over 12,000 mechanical cycles). A motion monitoring system is composed of the intrinsically soft LM-embedded hydrogel as the sensing material, a microcontroller, signal-processing circuits, a Bluetooth transceiver, and self-organizing-map-based software for the visualization of multi-dimensional data. This system, integrating multiple functions including signal conditioning, processing, and wireless transmission, achieves hand-gesture monitoring as well as sign-to-verbal translation. This approach provides an ideal strategy for deaf-mute people to communicate with others and broadens the application of wearable electronics.
Stroke is a leading cause of disability and mortality worldwide, necessitating the development of advanced technologies to improve its diagnosis, treatment, and patient outcomes. In recent years, machine learning techniques have emerged as promising tools in stroke medicine, enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches. This abstract provides a comprehensive overview of machine learning's applications, challenges, and future directions in stroke medicine. Recently introduced machine learning algorithms have been extensively employed in all fields of stroke medicine. Machine learning models have demonstrated remarkable accuracy in imaging analysis, diagnosing stroke subtypes, risk stratification, guiding medical treatment, and predicting patient prognosis. Despite the tremendous potential of machine learning in stroke medicine, several challenges must be addressed. These include the need for standardized and interoperable data collection, robust model validation and generalization, and the ethical considerations surrounding privacy and bias. In addition, integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care. Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis, tailored treatment selection, and improved prognostication. Continued research and collaboration among clinicians, researchers, and technologists are essential for overcoming challenges and realizing the full potential of machine learning in stroke care, ultimately leading to enhanced patient outcomes and quality of life. This review aims to summarize the current implications of machine learning in stroke diagnosis, treatment, and prognostic evaluation, and to explore the future perspectives these techniques can provide in combating this disabling disease.
The wear of metal cutting tools will progressively rise as the cutting time goes on. Heavy wear on the tool will generate significant noise and vibration, negatively impacting the forming accuracy and the surface integrity of the workpiece. Hence, during the cutting process, it is imperative to continually monitor the tool wear state and promptly replace any heavily worn tools to guarantee the quality of the cutting. Conventional tool wear monitoring models, which are based on machine learning, are built specifically for the intended cutting conditions. However, these models require retraining when the cutting conditions change, which leaves them with no application value when the cutting conditions change frequently. This manuscript proposes a method for monitoring tool wear based on unsupervised deep transfer learning. Exploiting the similarity of the tool wear process under varying working conditions, a tool wear recognition model that can adapt to both current and previous working conditions has been developed by utilizing historical cutting monitoring data. To extract and classify cutting vibration signals, the unsupervised deep transfer learning network comprises a one-dimensional (1D) convolutional neural network (CNN) with a multi-layer perceptron (MLP). To achieve distribution alignment of deep features through the maximum mean discrepancy algorithm, a domain-adaptive layer is embedded in the penultimate layer of the network. A platform for monitoring tool wear during end milling has been constructed. The proposed method was verified through a full life test of end milling under multiple working conditions with a Cr12MoV steel workpiece. Our experiments demonstrate that the transfer learning model maintains a classification accuracy of over 80%. In comparison with the most advanced tool wear monitoring methods, the presented model guarantees superior performance in the target domains.
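The maximum mean discrepancy (MMD) used above for distribution alignment compares kernel means of source- and target-domain features. A minimal NumPy sketch with an RBF kernel (the feature dimensions, kernel bandwidth, and data here are illustrative, not the paper's setup):

```python
import numpy as np

def mmd_rbf(X, Y, gamma=0.1):
    """Squared maximum mean discrepancy with an RBF kernel:
    MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
# Matching distributions give a much smaller MMD than mean-shifted ones,
# which is what the domain-adaptive layer minimizes during training.
same = mmd_rbf(rng.normal(size=(100, 5)), rng.normal(size=(100, 5)))
shifted = mmd_rbf(rng.normal(size=(100, 5)), rng.normal(2.0, 1.0, size=(100, 5)))
```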
Time series anomaly detection is crucial in various industrial applications to identify unusual behaviors within time series data. Due to the challenges associated with annotating anomaly events, time series reconstruction has become a prevalent approach for unsupervised anomaly detection. However, effectively learning representations and achieving accurate detection results remain challenging due to the intricate temporal patterns and dependencies in real-world time series. In this paper, we propose a cross-dimension attentive feature fusion network for time series anomaly detection, referred to as CAFFN. Specifically, a series and feature mixing block is introduced to learn representations in 1D space. Additionally, a fast Fourier transform is employed to convert the time series into 2D space, providing the capability for 2D feature extraction. Finally, a cross-dimension attentive feature fusion mechanism is designed that adaptively integrates features across different dimensions for anomaly detection. Experimental results on real-world time series datasets demonstrate that CAFFN performs better than other competing methods in time series anomaly detection.
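The FFT-based 1D-to-2D conversion mentioned above can be illustrated by finding the dominant period from the spectrum and folding the series into a (cycles × period) grid. This is a generic sketch of the idea, not CAFFN's actual transform; the series below is synthetic.

```python
import numpy as np

def fold_by_dominant_period(x):
    """Use the FFT to find the dominant period, then reshape the 1-D series
    into a 2-D (cycles x period) array for 2-D feature extraction."""
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freq_bin = int(np.argmax(spec[1:])) + 1          # skip the DC bin
    period = round(len(x) / freq_bin)
    n_cycles = len(x) // period
    return x[: n_cycles * period].reshape(n_cycles, period)

# A noisy series with period 24 folds into 40 rows of length 24.
rng = np.random.default_rng(0)
t = np.arange(24 * 40)
series = np.sin(2 * np.pi * t / 24) + 0.1 * rng.normal(size=t.size)
grid = fold_by_dominant_period(series)
```

In the 2-D view, each row is one cycle, so anomalies show up as rows that break the column-wise regularity.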
Anomaly detection in high-dimensional data is a critical research issue with serious implications for real-world problems. Many issues in this field remain unsolved, and several modern anomaly detection methods struggle to maintain adequate accuracy due to the highly descriptive nature of big data. Such a phenomenon is referred to as the "curse of dimensionality", which affects traditional techniques in terms of both accuracy and performance. Thus, this research proposes a hybrid model based on a Deep Autoencoder Neural Network (DANN) with five layers to reduce the difference between the input and output. The proposed model was applied to a real-world gas turbine (GT) dataset that contains 87620 columns and 56 rows. During the experiments, two issues were investigated and solved to enhance the results. The first is the dataset class imbalance, which was solved using the SMOTE technique. The second is poor performance, which can be addressed with an optimization algorithm. Several optimization algorithms were investigated and tested, including stochastic gradient descent (SGD), RMSprop, Adam, and Adamax; the Adamax optimization algorithm showed the best results when employed to train the DANN model. The experimental results show that our proposed model can detect anomalies by efficiently reducing the high dimensionality of the dataset, with an accuracy of 99.40%, an F1-score of 0.9649, an Area Under the Curve (AUC) rate of 0.9649, and a minimal loss function during training.
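An autoencoder scores anomalies by reconstruction error: normal data compresses and reconstructs well, anomalies do not. As a dependency-free stand-in for the DANN above, the same idea can be sketched with a linear bottleneck (PCA projection); the data and component count are illustrative assumptions, not the paper's model.

```python
import numpy as np

def pca_anomaly_scores(X, n_components=2):
    """Score anomalies by reconstruction error after projecting onto the
    top principal components -- a linear analogue of an autoencoder bottleneck."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components]                  # principal directions
    recon = Xc @ P.T @ P + mu              # compress, then reconstruct
    return np.linalg.norm(X - recon, axis=1)

# Normal data lies near a 2-D subspace of a 10-D space; one point does not.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))
basis = rng.normal(size=(2, 10))
normal = latent @ basis + 0.05 * rng.normal(size=(200, 10))
anomaly = rng.normal(0, 3, size=(1, 10))
scores = pca_anomaly_scores(np.vstack([normal, anomaly]), n_components=2)
```

The anomalous point receives by far the largest reconstruction error, which is the signal the deep autoencoder thresholds on.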
Particle image velocimetry (PIV) is an essential method in experimental fluid dynamics. In recent years, the development of deep learning-based methods has inspired new approaches to tackle the PIV problem, considerably improving the accuracy of PIV. However, supervised learning of PIV is driven by large volumes of data with ground truth information. Therefore, the authors consider unsupervised PIV methods. There has been some work on unsupervised PIV, but it is not nearly as effective as supervised learning PIV. The authors try to improve the effectiveness and accuracy of unsupervised PIV by adding classical PIV methods and physical constraints. In this paper, the authors propose an unsupervised PIV method combined with the cross-correlation method and a divergence-free constraint, which obtains better performance than other unsupervised PIV methods. The authors compare some classical PIV methods and some deep learning methods, such as LiteFlowNet, LiteFlowNet-en, and UnLiteFlowNet, with the authors' model on a synthetic dataset. Besides, the authors contrast the results of LiteFlowNet, UnLiteFlowNet and the authors' model on experimental particle images. As a result, the authors' model shows comparable performance with classical PIV methods as well as supervised PIV methods and outperforms the previous unsupervised PIV method in most flow cases.
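The cross-correlation method at the core of classical PIV estimates the displacement of a particle pattern between two frames from the peak of their correlation map. A minimal FFT-based sketch (integer-pixel only, random synthetic "particles"; real PIV adds windowing and sub-pixel peak fitting):

```python
import numpy as np

def estimate_shift(win_a, win_b):
    """Estimate integer pixel displacement between two interrogation windows
    via FFT-based circular cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks beyond the half-window into negative displacements.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)

# A random particle pattern rolled by (3, -2) should be recovered exactly.
rng = np.random.default_rng(0)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, (3, -2), axis=(0, 1))
shift = estimate_shift(frame1, frame2)
```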
The coronavirus disease 2019 (COVID-19) has severely disrupted both human life and the health care system. Timely diagnosis and treatment have become increasingly important; however, the distribution and size of lesions vary widely among individuals, making it challenging to accurately diagnose the disease. This study proposed a deep-learning disease diagnosis model based on weakly supervised learning and clustering visualization (W_CVNet) that fuses classification with segmentation. First, the data were preprocessed: an optimizable weakly supervised segmentation preprocessing method (O-WSSPM) was used to remove redundant data and solve the category imbalance problem. Second, a deep-learning fusion method was used for feature extraction and classification recognition: a dual asymmetric complementary bilinear feature extraction method (D-CBM) was used to fully extract complementary features, which solved the problem of insufficient feature extraction by a single deep learning network. Third, an unsupervised learning method based on Fuzzy C-Means (FCM) clustering was used to segment and visualize COVID-19 lesions, enabling physicians to accurately assess lesion distribution and disease severity. In this study, 5-fold cross-validation was used, and the results showed that the network had an average classification accuracy of 85.8%, outperforming six recent advanced classification models. W_CVNet can effectively provide physicians with automated diagnostic aid to determine whether the disease is present and, for COVID-19 patients, to further predict the lesion area.
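Fuzzy C-Means, used above for lesion segmentation, alternates between updating soft memberships and cluster centers. A plain NumPy sketch on toy 2-D points (the real model clusters image features; the data, cluster count, and fuzzifier here are illustrative):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy C-means: alternate membership and centroid updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships sum to 1
    for _ in range(n_iter):
        centers = (U.T ** m @ X) / (U.T ** m).sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        p = d ** (2 / (m - 1))
        U = (1.0 / p) / (1.0 / p).sum(axis=1, keepdims=True)
    return centers, U

# Two separated blobs: memberships become near one-hot per blob.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (30, 2)), rng.normal(4, 0.2, (30, 2))])
centers, U = fuzzy_c_means(X)
labels = U.argmax(axis=1)
```

For segmentation, `X` would hold per-pixel intensities or features, and the membership map itself gives the soft lesion visualization.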
Nowadays, there is tremendous growth in biometric authentication and cybersecurity applications. Thus, an efficient way of storing and securing personal biometric patterns is mandatory in most governmental and private sectors. Therefore, designing and implementing robust security algorithms for users' biometrics is still a hot research area to be investigated. This work presents a powerful biometric security system (BSS) to protect different biometric modalities such as faces, iris, and fingerprints. The proposed BSS model is based on hybridizing an auto-encoder (AE) network and a chaos-based ciphering algorithm to cipher the details of the stored biometric patterns and ensure their secrecy. The employed AE network is an unsupervised deep learning (DL) structure used in the proposed BSS model to extract the main biometric features. These obtained features are utilized to generate two random chaos matrices. The first random chaos matrix is used to permute the pixels of the biometric images, while the second random matrix is used to further cipher and confuse the resulting permuted biometric pixels using a two-dimensional (2D) chaotic logistic map (CLM) algorithm. To assess the efficiency of the proposed BSS, (1) different standardized color and grayscale images of the examined fingerprint, face, and iris biometrics were used, and (2) comprehensive security and recognition evaluation metrics were measured. The assessment results have proven the authentication and robustness superiority of the proposed BSS model compared to other existing BSS models. For example, the proposed BSS succeeds in achieving a high area under the receiver operating characteristic (AROC) value of 99.97% and low rates of 0.00137, 0.00148, and 0.00157 for the equal error rate (EER), false reject rate (FRR), and false accept rate (FAR), respectively.
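The permute-then-confuse scheme described above can be illustrated with a simplified 1-D logistic map standing in for the paper's AE-derived chaos matrices and 2-D CLM: one chaotic sequence orders a pixel permutation, a second one forms an XOR keystream. All parameters here (`r`, keys, image) are illustrative assumptions.

```python
import numpy as np

def logistic_sequence(n, x0=0.7, r=3.99):
    """Iterate the chaotic logistic map x -> r*x*(1-x)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return xs

def encrypt(img, key=0.7):
    """Permute pixels with a chaos-derived permutation (confusion),
    then XOR with a chaos-derived keystream (diffusion)."""
    flat = img.ravel()
    perm = np.argsort(logistic_sequence(flat.size, x0=key))
    stream = (logistic_sequence(flat.size, x0=key / 2) * 256).astype(np.uint8)
    return (flat[perm] ^ stream).reshape(img.shape), perm

def decrypt(enc, perm, key=0.7):
    stream = (logistic_sequence(enc.size, x0=key / 2) * 256).astype(np.uint8)
    flat = enc.ravel() ^ stream          # undo the keystream
    out = np.empty_like(flat)
    out[perm] = flat                     # undo the permutation
    return out.reshape(enc.shape)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
enc, perm = encrypt(img)
dec = decrypt(enc, perm)
```

Decryption inverts the two stages in reverse order and recovers the original image exactly.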
Intelligent diagnosis approaches with shallow architectural models play an essential role in healthcare. Deep Learning (DL) models with unsupervised learning concepts have been proposed because high-quality feature extraction and adequate labelled details significantly influence shallow models. On the other hand, skin lesion-based segregation and disintegration procedures play an essential role in earlier skin cancer detection. However, artefacts, unclear boundaries, poor contrast, and varying lesion sizes make detection difficult. To address these issues in skin lesion diagnosis, this study creates the UDLS-DDOA model, an intelligent Unsupervised Deep Learning-based Stacked Auto-encoder (UDLS) optimized by Dynamic Differential Annealed Optimization (DDOA). Pre-processing, segregation, feature removal or separation, and disintegration are part of the proposed skin lesion diagnosis model. Pre-processing of skin lesion images occurs at the initial level for noise removal using the top-hat filter and inpainting methodology. Following that, a Fuzzy C-Means (FCM) segregation procedure is performed using a Quasi-Oppositional Elephant Herd Optimization (QOEHO) algorithm. Besides, a novel feature extraction technique using the UDLS technique is applied, where parameter tuning takes place using DDOA. In the end, the disintegration procedure is accomplished using a SoftMax (SM) classifier. The UDLS-DDOA model is tested against the International Skin Imaging Collaboration (ISIC) dataset, and the experimental results are examined using various computational attributes. The simulation results demonstrate that the UDLS-DDOA model significantly outperforms the compared methods.
Supervised learning aims to build a function or model that seeks as many mappings as possible between the training data and outputs, where each training datum is predicted as a label to match its corresponding ground-truth value. Although supervised learning has achieved great success in many tasks, sufficient data supervision for labels is not accessible in many domains because accurate data labelling is costly and laborious, particularly in medical image analysis; the cost of a dataset with ground-truth labels is much higher than in other domains. Therefore, it is worthwhile to focus on weakly supervised learning for medical image analysis, as it is more applicable to practical applications. In this review, the authors give an overview of the latest progress of weakly supervised learning in medical image analysis, including incomplete, inexact, and inaccurate supervision, and introduce related works on different applications of medical image analysis. Related concepts are illustrated to help readers get an overview ranging from supervised to unsupervised learning within the scope of machine learning. Furthermore, the challenges and future work of weakly supervised learning in medical image analysis are discussed.
In recent years, functional data has been widely used in finance, medicine, biology and other fields. Current clustering analysis can solve problems in finite-dimensional space, but it is difficult to apply directly to the clustering of functional data. In this paper, we propose a new unsupervised clustering algorithm based on adaptive weights. In the absence of initialization parameters, we use entropy-type penalty terms and a fuzzy partition matrix to find the optimal number of clusters. At the same time, we introduce a measure based on adaptive weights to reflect the difference in information content between different clustering metrics. Simulation experiments show that the proposed algorithm achieves higher purity than some existing algorithms.
Satellite image classification is crucial in various applications such as urban planning, environmental monitoring, and land use analysis. In this study, the authors present a comparative analysis of different supervised and unsupervised learning methods for satellite image classification, focusing on a case study of Casablanca using Landsat 8 imagery. This research aims to identify the most effective machine-learning approach for accurately classifying land cover in an urban environment. The methodology consists of pre-processing the Landsat imagery of Casablanca; the authors extract relevant features, partition them into training and test sets, and then apply random forest (RF), support vector machine (SVM), classification and regression tree (CART), gradient tree boost (GTB), decision tree (DT), and minimum distance (MD) algorithms. Through a series of experiments, the authors evaluate the performance of each machine learning method in terms of accuracy and Kappa coefficient. This work shows that random forest is the best-performing algorithm, with an accuracy of 95.42% and a Kappa coefficient of 0.94. The authors discuss the factors behind its performance, including data characteristics, feature selection, and model configuration.
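Of the classifiers compared above, minimum distance (MD) is the simplest to sketch, and the Kappa coefficient can be computed alongside accuracy. The two-class "spectral" values below are invented for illustration, not Landsat 8 reflectances:

```python
import numpy as np

def minimum_distance_classifier(X_train, y_train, X_test):
    """Assign each feature vector to the class with the nearest class mean."""
    classes = np.unique(y_train)
    means = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(X_test[:, None, :] - means[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

def kappa(y_true, y_pred):
    """Cohen's Kappa: observed agreement corrected for chance agreement."""
    po = (y_true == y_pred).mean()
    classes = np.unique(y_true)
    pe = sum((y_true == c).mean() * (y_pred == c).mean() for c in classes)
    return (po - pe) / (1 - pe)

# Toy 3-band features for two hypothetical land-cover classes.
rng = np.random.default_rng(0)
water = rng.normal([0.05, 0.10, 0.02], 0.01, size=(50, 3))
veg = rng.normal([0.04, 0.30, 0.45], 0.02, size=(50, 3))
X = np.vstack([water, veg])
y = np.array([0] * 50 + [1] * 50)
pred = minimum_distance_classifier(X, y, X)
accuracy = (pred == y).mean()
```

On such well-separated classes MD classifies perfectly; its weakness, which the tree ensembles address, is classes with overlapping or non-spherical spectral distributions.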
The performance of traditional vibration-based fault diagnosis methods greatly depends on handcrafted features extracted using signal processing algorithms, which require significant amounts of domain knowledge and human labor and do not generalize well to new diagnosis domains. Recently, unsupervised representation learning has provided a promising alternative to feature extraction in traditional fault diagnosis due to its superior ability to learn from unlabeled data. Given that vibration signals usually contain multiple temporal structures, this paper proposes a multiscale representation learning (MSRL) framework to learn useful features directly from raw vibration signals, with the aim of capturing rich and complementary fault pattern information at different scales. In our proposed approach, a coarse-grained procedure is first employed to obtain multiple scale signals from an original vibration signal. Then, sparse filtering, a newly developed unsupervised learning algorithm, is applied to automatically learn useful features from each scale signal, and the learned features at each scale are concatenated one by one to obtain multiscale representations. Finally, the multiscale representations are fed into a supervised classifier to obtain diagnosis results. Our proposed approach is evaluated using two case studies: motor bearing and wind turbine gearbox fault diagnosis. Experimental results show that the proposed MSRL approach can take full advantage of the availability of unlabeled data to learn discriminative features and achieves better performance, with higher accuracy and stability, than traditional approaches.
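The coarse-grained procedure above is the standard multiscale step: average consecutive non-overlapping windows to obtain one signal per scale. A minimal sketch (the ramp signal and scale choices are illustrative, not the paper's vibration data):

```python
import numpy as np

def coarse_grain(signal, scale):
    """Coarse-graining step of multiscale analysis: average consecutive
    non-overlapping windows of length `scale`."""
    n = len(signal) // scale
    return signal[: n * scale].reshape(n, scale).mean(axis=1)

raw = np.arange(12, dtype=float)          # stand-in for a raw vibration signal
scales = [coarse_grain(raw, s) for s in (1, 2, 4)]
```

Each coarse-grained signal would then be fed to sparse filtering separately, and the per-scale features concatenated into the multiscale representation.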
This paper investigates the intelligent load monitoring problem with applications to practical energy management scenarios in smart grids. As one of the critical components paving the way to smart grids' success, an intelligent and feasible non-intrusive load monitoring (NILM) algorithm is urgently needed. However, most recent research on NILM has not dealt with the practical problems that arise when it is applied to the power grid, i.e., ①limited communication for slow-change systems; ②the requirement of low-cost hardware at the users' side; and ③the inconvenience of adapting to new households. Therefore, a novel NILM algorithm based on a biology-inspired spiking neural network (SNN) has been developed to overcome the existing challenges. To provide intelligence in NILM, the developed SNN features an unsupervised learning rule, i.e., spike-time-dependent plasticity (STDP), which only requires the user to label one instance for each appliance while adapting to a new household. To upgrade the feasibility of NILM, the designed spiking neurons mimic the mechanism of human brain neurons, which can be constructed from a resistor-capacitor (RC) circuit. In addition, a distributed computing system has been designed that divides the SNN into two parts, i.e., smart outlets and local servers. Since the information flows as sparse binary vectors among spiking neurons in the developed SNN-based NILM, the high-frequency data can easily be compressed as spike times and sent to a local server with limited communication capability, which traditional NILM cannot handle. Finally, a series of experiments are conducted using a benchmark public dataset, and the effectiveness of the developed SNN-based NILM is demonstrated through comparisons with other emerging NILM algorithms such as convolutional neural networks.
Typically, magnesium alloys have been designed using a so-called hill-climbing approach, with rather incremental advances over the past century. Iterative and incremental alloy design is slow and expensive, but more i...Typically, magnesium alloys have been designed using a so-called hill-climbing approach, with rather incremental advances over the past century. Iterative and incremental alloy design is slow and expensive, but more importantly it does not harness all the data that exists in the field. In this work, a new approach is proposed that utilises data science and provides a detailed understanding of the data that exists in the field of Mg-alloy design to date. In this approach, first a consolidated alloy database that incorporates 916 datapoints was developed from the literature and experimental work. To analyse the characteristics of the database, alloying and thermomechanical processing effects on mechanical properties were explored via composition-process-property matrices. An unsupervised machine learning(ML) method of clustering was also implemented, using unlabelled data, with the aim of revealing potentially useful information for an alloy representation space of low dimensionality. In addition, the alloy database was correlated to thermodynamically stable secondary phases to further understand the relationships between microstructure and mechanical properties. This work not only introduces an invaluable open-source database, but it also provides, for the first-time data, insights that enable future accelerated digital Mg-alloy design.展开更多
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 61825305 and U21A20518).
Abstract: In this paper, we propose a structural developmental neural network to address the plasticity-stability dilemma, computational inefficiency, and lack of prior knowledge in continual unsupervised learning. This model uses competitive learning rules and dynamic neurons with information saturation to achieve parameter adjustment and adaptive structure development. Dynamic neurons adjust their information saturation after winning the competition and use this parameter to modulate both the parameter adjustment and the timing of division. By dividing to generate new neurons, the network not only remains sensitive to novel features but can also subdivide classes learnt repeatedly. The dynamic neurons with information saturation and the division mechanism simulate the long short-term memory of the human brain, which enables the network to continually learn new samples while maintaining previous learning results. The parent-child relationship between neurons arising from neuronal division enables the network to simulate the human cognitive process of gradually refining the perception of objects. By setting the clustering-layer parameter, users can choose the desired degree of class subdivision. Experimental results on artificial and real-world datasets demonstrate that the proposed model is feasible for unsupervised learning tasks in instance-increment and class-increment scenarios and outperforms prior structural developmental neural networks.
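The mechanism this abstract describes (winner-take-all competition, a saturation counter that damps learning for well-trained neurons, and division once saturation passes a threshold) can be illustrated in a few lines. This is a hedged sketch of the general idea only: the learning-rate schedule, the saturation increment, the division threshold, and the post-division reset below are illustrative assumptions, not the paper's actual rules.

```python
import numpy as np

def competitive_step(weights, saturation, x, eta0=0.5, div_threshold=5.0):
    """One step of saturation-modulated competitive learning.

    weights    : (n_neurons, dim) prototype vectors
    saturation : (n_neurons,) information saturation per neuron
    x          : (dim,) input sample
    Returns the possibly grown (weights, saturation) and the winner index.
    """
    # winner-take-all: the closest prototype wins the competition
    dists = np.linalg.norm(weights - x, axis=1)
    win = int(np.argmin(dists))
    # saturation rises with each win and damps the parameter adjustment,
    # so well-learned neurons become increasingly stable (memory retention)
    lr = eta0 / (1.0 + saturation[win])
    weights[win] += lr * (x - weights[win])
    saturation[win] += 1.0
    # a saturated neuron divides: the child copies the parent and stays
    # plastic, keeping the network sensitive to novel features
    if saturation[win] >= div_threshold:
        weights = np.vstack([weights, weights[win] + 0.01])
        saturation = np.append(saturation, 0.0)
        saturation[win] = 0.0  # hypothetical reset after division
    return weights, saturation, win
```

Feeding the same sample repeatedly moves the winning prototype toward it with a shrinking step, then triggers a division, which is the stability-then-growth behaviour the abstract attributes to dynamic neurons.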
Abstract: To solve the problem of automatic defect detection and process control in the welding and arc additive process, this paper monitors current, voltage, audio, and other data during the welding process. It extracts the minimum value, standard deviation, and deviation from the voltage and current data, and spectral features such as root mean square, spectral centroid, and zero-crossing rate from the audio data; it then fuses the features extracted from the multiple sensor signals and builds several supervised and unsupervised machine learning models to detect abnormalities in the welding process. The experimental results show that the established models achieve high accuracy: among the supervised models, AdaBoost reaches a balanced accuracy of 0.957, and the unsupervised Isolation Forest reaches a balanced accuracy of 0.909.
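As a rough illustration of the pipeline this abstract describes (per-window statistical and spectral features fused across sensors, then an unsupervised detector), here is a sketch using scikit-learn's IsolationForest on synthetic current/voltage/audio windows. The specific statistics, the synthetic signal parameters, and the contamination level are assumptions for illustration, not the paper's exact features or data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

def window_features(current, voltage, audio):
    """Fuse simple statistics from three sensor channels into one vector,
    loosely mirroring the min / std / spectral features in the abstract."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.arange(spectrum.size)
    centroid = (freqs * spectrum).sum() / (spectrum.sum() + 1e-12)
    zcr = np.mean(np.abs(np.diff(np.sign(audio))) > 0)  # zero-crossing rate
    rms = np.sqrt(np.mean(audio ** 2))
    return np.array([current.min(), current.std(),
                     voltage.min(), voltage.std(),
                     rms, centroid, zcr])

# 100 normal welding windows plus 5 with a disturbed arc (assumed data)
normal = [window_features(rng.normal(200, 2, 256), rng.normal(25, 0.5, 256),
                          rng.normal(0, 0.1, 256)) for _ in range(100)]
faulty = [window_features(rng.normal(150, 30, 256), rng.normal(18, 5, 256),
                          rng.normal(0, 1.5, 256)) for _ in range(5)]
X = np.vstack(normal + faulty)

clf = IsolationForest(contamination=0.05, random_state=0).fit(X)
labels = clf.predict(X)  # +1 = normal, -1 = anomaly
```

The disturbed windows produce clearly different fused feature vectors, so the forest isolates them quickly; in practice the contamination rate would be tuned on labelled validation welds.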
Abstract: This paper reports distinct spatio-spectral properties of Zen-meditation EEG (electroencephalograph) compared with resting EEG, by implementing an unsupervised machine learning scheme to cluster the brain mappings of centroid frequency (BMFc). Zen practitioners simultaneously concentrate on the third ventricle, hypothalamus and corpora quadrigemina to universalize all brain neurons to construct a "detached" brain and gradually change the normal brain traits, leading to the process of brain neuroplasticity. During such tri-aperture concentration, EEG exhibits prominent diffuse high-frequency oscillations. The unsupervised self-organizing map (SOM) clusters the dataset of quantitative EEG by matching the input feature vector Fc to the output cluster center through the SOM network weights. The input dataset contains brain mappings of 30 centroid frequencies extracted from CWT (continuous wavelet transform) coefficients. According to the SOM clustering results, resting EEG is dominated by global low-frequency (<14 Hz) activities, except at channels T7, F7 and TP7 (>14.4 Hz), whereas Zen-meditation EEG exhibits globally high-frequency (>16 Hz) activities throughout the entire record. Beta waves over a wide range of frequencies are often associated with active concentration. Nonetheless, clinical reports disclose that benzodiazepines, the medication for anxiety, insomnia and panic attacks used to relieve mind/body stress, often induce "beta buzz". We may hypothesize that Zen-meditation practitioners attain a unique state of mindful concentration under optimal body-mind relaxation.
Funding: Supported in part by the National Key Research and Development Program of China (2020YFB1313002); the National Natural Science Foundation of China (62276023, U22B2055, 62222302, U2013202); the Fundamental Research Funds for the Central Universities (FRF-TP-22-003C1); and the Postgraduate Education Reform Project of Henan Province (2021SJGLX260Y).
Abstract: Underwater image enhancement aims to restore a clean appearance and thus improve the quality of underwater degraded images. Current methods feed the whole image directly into the model for enhancement. However, they ignore that the R, G and B channels of underwater degraded images present varied degrees of degradation due to the selective absorption of light. To address this issue, we propose an unsupervised multi-expert learning model that considers the enhancement of each color channel. Specifically, an unsupervised architecture based on a generative adversarial network is employed to alleviate the need for paired underwater images. Based on this, we design a generator, including a multi-expert encoder, a feature fusion module and a feature-fusion-guided decoder, to generate the clear underwater image. Accordingly, a multi-expert discriminator is proposed to verify the authenticity of the R, G and B channels, respectively. In addition, content perceptual loss and edge loss are introduced into the loss function to further improve the content and details of the enhanced images. Extensive experiments on public datasets demonstrate that our method achieves more pleasing results in visual quality, with clear improvements in various metrics (PSNR, SSIM, UIQM and UCIQE) evaluated on the enhanced images.
Abstract: Unsupervised learning algorithms can effectively handle sample imbalance. To address battery consistency anomalies in new energy vehicles, we adopt a variety of unsupervised learning algorithms to evaluate and predict the battery consistency of three vehicles using charging-fragment data from actual operating conditions. We extract battery-related features, such as the mean of the maximum difference, standard deviation, and entropy of the batteries, and then apply principal component analysis to reduce the dimensionality and record the amount of preserved information. We then build models from a collection of unsupervised learning algorithms for the anomaly detection of cell-consistency faults. We also determine whether unsupervised and supervised learning algorithms can address the battery consistency problem and document the parameter-tuning process. In addition, we compare the prediction effectiveness of charging and discharging features modeled individually and in combination, determine the choice of charging and discharging features to be modeled in combination, and visualize the multidimensional data for fault detection. Experimental results show that the unsupervised learning algorithms are effective in visualizing and predicting cell-consistency faults, and can accurately predict faults in real time. The "distance + boxplot" algorithm shows the best performance, with a prediction accuracy of 80%, a recall rate of 100%, and an F1 score of 0.89. The proposed approach can be applied to monitor battery consistency faults in real time and reduce the possibility of disasters arising from consistency faults.
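The best-performing "distance + boxplot" rule lends itself to a compact sketch: reduce the fused cell features with PCA, score each sample by its distance from the centre of the reduced space, and flag scores beyond the Tukey boxplot fence. The synthetic features and the 1.5 x IQR fence below are conventional illustrative choices, not the paper's exact settings.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# assumed cell features: mean max voltage difference, std, entropy, ...
healthy = rng.normal(0.0, 1.0, size=(200, 6))
faulty = rng.normal(6.0, 1.0, size=(8, 6))   # inconsistent cells drift away
X = np.vstack([healthy, faulty])

# reduce dimensionality, then score each sample by its distance to the
# centre of the reduced space (the "distance" part of distance + boxplot)
Z = PCA(n_components=2).fit_transform(X)
dist = np.linalg.norm(Z - Z.mean(axis=0), axis=1)

# boxplot rule: anything beyond Q3 + 1.5 * IQR is flagged as an anomaly
q1, q3 = np.percentile(dist, [25, 75])
flags = dist > q3 + 1.5 * (q3 - q1)
```

Because the fence comes from the bulk of the (mostly healthy) distance distribution, no labelled faults are needed, which is the appeal of this family of methods for fleet monitoring.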
Funding: National Natural Science Foundation of China (Grant/Award Numbers: 22176221, 51763010, 51963011); Central Public-interest Scientific Institution Basal Research Fund (CAFS) (Grant/Award Number: 2020TD75); Jiangxi Provincial Double Thousand Talents Plan-Youth Program (Grant/Award Number: JXSQ2019201108); Jiangxi Key Laboratory of Flexible Electronics (Grant/Award Number: 20212BCD42004).
Abstract: Highly stretchable and robust strain sensors are rapidly emerging as promising candidates for a diverse range of wearable electronics. The main challenges for the practical application of wearable electronics are energy consumption and device aging. Energy consumption mainly depends on the conductivity of the sensor, which is also a key factor in device aging. Here, we design a liquid metal (LM)-embedded hydrogel as a sensing material to overcome the barriers of energy consumption and device aging in wearable electronics. The sensing material simultaneously exhibits high conductivity (up to 22 S m⁻¹), low elastic modulus (23 kPa), and ultrahigh stretchability (1500%) with excellent robustness (consistent performance over 12000 mechanical cycles). A motion-monitoring system is composed of the intrinsically soft LM-embedded hydrogel as the sensing material, a microcontroller, signal-processing circuits, a Bluetooth transceiver, and self-organizing-map-based software for the visualization of multi-dimensional data. This system, integrating multiple functions including signal conditioning, processing, and wireless transmission, achieves hand-gesture monitoring as well as sign-to-verbal translation. This approach provides an ideal strategy for deaf-mute people to communicate with others and broadens the application of wearable electronics.
Abstract: Stroke is a leading cause of disability and mortality worldwide, necessitating the development of advanced technologies to improve its diagnosis, treatment, and patient outcomes. In recent years, machine learning techniques have emerged as promising tools in stroke medicine, enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches. This review provides a comprehensive overview of machine learning's applications, challenges, and future directions in stroke medicine. Recently introduced machine learning algorithms have been extensively employed in all fields of stroke medicine. Machine learning models have demonstrated remarkable accuracy in imaging analysis, diagnosing stroke subtypes, risk stratification, guiding medical treatment, and predicting patient prognosis. Despite the tremendous potential of machine learning in stroke medicine, several challenges must be addressed. These include the need for standardized and interoperable data collection, robust model validation and generalization, and the ethical considerations surrounding privacy and bias. In addition, integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care. Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis, tailored treatment selection, and improved prognostication. Continued research and collaboration among clinicians, researchers, and technologists are essential for overcoming these challenges and realizing the full potential of machine learning in stroke care, ultimately leading to enhanced patient outcomes and quality of life. This review aims to summarize the current implications of machine learning in stroke diagnosis, treatment, and prognostic evaluation, and to explore the future perspectives these techniques can provide in combating this disabling disease.
Funding: The National Key Research and Development Program of China (No. 2020YFB1713500); the Natural Science Basic Research Program of Shaanxi (Grant No. 2023JCYB289); the National Natural Science Foundation of China (Grant No. 52175112); and the Fundamental Research Funds for the Central Universities (Grant No. ZYTS23102).
Abstract: The wear of metal cutting tools progressively rises as cutting time goes on. A heavily worn tool generates significant noise and vibration, negatively impacting forming accuracy and the surface integrity of the workpiece. Hence, during the cutting process, it is imperative to continually monitor the tool wear state and promptly replace any heavily worn tools to guarantee cutting quality. Conventional tool wear monitoring models based on machine learning are built specifically for the intended cutting conditions and require retraining whenever the cutting conditions change; such methods have no application value if the cutting conditions change frequently. This manuscript proposes a method for monitoring tool wear based on unsupervised deep transfer learning. Owing to the similarity of the tool wear process under varying working conditions, a tool wear recognition model that can adapt to both current and previous working conditions is developed from historical cutting monitoring data. To extract and classify cutting vibration signals, the unsupervised deep transfer learning network comprises a one-dimensional (1D) convolutional neural network (CNN) with a multi-layer perceptron (MLP). To achieve distribution alignment of deep features through the maximum mean discrepancy algorithm, a domain-adaptive layer is embedded in the penultimate layer of the network. A platform for monitoring tool wear during end milling has been constructed. The proposed method was verified through a full life test of end milling under multiple working conditions with a Cr12MoV steel workpiece. Our experiments demonstrate that the transfer learning model maintains a classification accuracy of over 80%. In comparison with the most advanced tool wear monitoring methods, the presented model guarantees superior performance in the target domains.
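The domain-adaptive layer above aligns source and target feature distributions by minimising the maximum mean discrepancy (MMD). A numpy sketch of the (biased) RBF-kernel MMD estimator follows; the kernel bandwidth and the synthetic "working condition" features are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def mmd_rbf(X, Y, gamma=0.1):
    """Biased estimate of squared maximum mean discrepancy between samples
    X and Y under the Gaussian kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(100, 3))       # features from a past working condition
tgt_near = rng.normal(0.0, 1.0, size=(100, 3))  # same distribution: small MMD
tgt_far = rng.normal(3.0, 1.0, size=(100, 3))   # shifted condition: large MMD
```

During training, this quantity (computed on deep features of source and target batches) is added to the classification loss, so the network learns features whose distributions match across cutting conditions.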
Funding: Supported in part by the National Natural Science Foundation of China (Grants 62376172, 62006163, 62376043); in part by the National Postdoctoral Program for Innovative Talents (Grant BX20200226); and in part by the Sichuan Science and Technology Planning Project (Grants 2022YFSY0047, 2022YFQ0014, 2023ZYD0143, 2022YFH0021, 2023YFQ0020, 24QYCX0354, 24NSFTD0025).
Abstract: Time series anomaly detection is crucial in various industrial applications to identify unusual behaviors within the time series data. Due to the challenges associated with annotating anomaly events, time series reconstruction has become a prevalent approach for unsupervised anomaly detection. However, effectively learning representations and achieving accurate detection results remain challenging due to the intricate temporal patterns and dependencies in real-world time series. In this paper, we propose a cross-dimension attentive feature fusion network for time series anomaly detection, referred to as CAFFN. Specifically, a series and feature mixing block is introduced to learn representations in 1D space. Additionally, a fast Fourier transform is employed to convert the time series into 2D space, providing the capability for 2D feature extraction. Finally, a cross-dimension attentive feature fusion mechanism is designed that adaptively integrates features across different dimensions for anomaly detection. Experimental results on real-world time series datasets demonstrate that CAFFN performs better than other competing methods in time series anomaly detection.
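One common way to give a 1D series a 2D structure, folding it by the dominant period found with an FFT, can be sketched as follows. This is an illustrative stand-in for the FFT-based 2D conversion mentioned in the abstract, not CAFFN's exact construction.

```python
import numpy as np

def series_to_2d(x):
    """Fold a 1D series into a 2D array along its dominant FFT period,
    so that 2D feature extractors can see within-period (columns) and
    across-period (rows) patterns at once."""
    spec = np.abs(np.fft.rfft(x))
    spec[0] = 0.0                          # ignore the DC component
    k = int(np.argmax(spec))               # dominant frequency bin
    period = max(1, len(x) // k)           # samples per cycle
    rows = len(x) // period
    return x[: rows * period].reshape(rows, period)

t = np.arange(1000)
x = np.sin(2 * np.pi * t / 50)             # signal with a period of 50 samples
img = series_to_2d(x)
```

For a clean periodic signal each row is one cycle, so the columns of the folded array are nearly constant; anomalies then appear as localized disturbances in the 2D layout.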
Funding: This research was fully supported by Universiti Teknologi PETRONAS, under the Yayasan Universiti Teknologi PETRONAS (YUTP) Fundamental Research Grant Scheme (YUTP-015LC0-123).
Abstract: Anomaly detection in high-dimensional data is a critical research issue with serious implications for real-world problems. Many issues in this field remain unsolved, so several modern anomaly detection methods struggle to maintain adequate accuracy due to the highly descriptive nature of big data. Such a phenomenon is referred to as the "curse of dimensionality", which affects traditional techniques in terms of both accuracy and performance. Thus, this research proposes a hybrid model based on a Deep Autoencoder Neural Network (DANN) with five layers to reduce the difference between the input and output. The proposed model was applied to a real-world gas turbine (GT) dataset that contains 87620 columns and 56 rows. During the experiments, two issues were investigated and solved to enhance the results. The first is the dataset's class imbalance, which was solved using the SMOTE technique. The second issue is poor performance, which can be addressed with an optimization algorithm. Several optimization algorithms were investigated and tested, including stochastic gradient descent (SGD), RMSprop, Adam and Adamax; the Adamax optimization algorithm showed the best results when employed to train the DANN model. The experimental results show that our proposed model can detect anomalies by efficiently reducing the high dimensionality of the dataset, with an accuracy of 99.40%, an F1-score of 0.9649, an Area Under the Curve (AUC) of 0.9649, and a minimal loss function during the hybrid model's training.
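SMOTE's core mechanism, synthesising minority samples along segments between a minority point and one of its k nearest minority-class neighbours, can be sketched in numpy as follows. This illustrates the idea only; practical work such as the study above would typically use the imbalanced-learn implementation.

```python
import numpy as np

def smote_sketch(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by linear interpolation
    between a randomly chosen sample and a random one of its k nearest
    minority-class neighbours (the essence of SMOTE)."""
    if rng is None:
        rng = np.random.default_rng(0)
    # pairwise distances within the minority class only
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nbrs[i, rng.integers(k)]
        lam = rng.random()                     # position along the segment
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

rng = np.random.default_rng(0)
X_min = rng.normal(5.0, 0.5, size=(20, 4))     # scarce anomaly class
X_syn = smote_sketch(X_min, n_new=80, rng=rng)
```

Because every synthetic point is a convex combination of two real minority samples, the new points stay inside the minority region instead of being naive duplicates.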
Funding: Natural Science Foundation of Zhejiang Province (Grant/Award Number: LY21F030003); National Key Research and Development Program of China (Grant/Award Number: 2019YFB1705800); National Natural Science Foundation of China (Grant/Award Number: 61973270).
Abstract: Particle image velocimetry (PIV) is an essential method in experimental fluid dynamics. In recent years, the development of deep-learning-based methods has inspired new approaches to the PIV problem, considerably improving the accuracy of PIV. However, supervised learning for PIV is driven by large volumes of data with ground-truth information. Therefore, the authors consider unsupervised PIV methods. There has been some work on unsupervised PIV, but it is not nearly as effective as supervised PIV. The authors improve the effectiveness and accuracy of unsupervised PIV by adding classical PIV methods and physical constraints. In this paper, the authors propose an unsupervised PIV method combined with the cross-correlation method and a divergence-free constraint, which obtains better performance than other unsupervised PIV methods. The authors compare classical PIV methods and deep learning methods, such as LiteFlowNet, LiteFlowNet-en, and UnLiteFlowNet, with their model on a synthetic dataset, and contrast the results of LiteFlowNet, UnLiteFlowNet and their model on experimental particle images. As a result, the authors' model shows performance comparable with classical PIV methods as well as supervised PIV methods, and outperforms the previous unsupervised PIV method in most flow cases.
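The classical cross-correlation step that the authors combine with their network estimates a window's displacement from the peak of an FFT-based correlation map. A minimal sketch on a synthetic particle pattern with a known shift (integer-pixel accuracy only; real PIV adds sub-pixel peak fitting):

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Estimate the integer pixel shift between two interrogation windows
    using the FFT-based circular cross-correlation at the heart of
    classical PIV."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # indices past the midpoint correspond to negative shifts (wrap-around)
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))  # particles moved by (3, -2)
```

Unsupervised PIV networks can reuse this signal (or a warping loss derived from it) in place of ground-truth flow fields, which is the combination the abstract describes.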
Funding: Funded by the Open Foundation of Anhui Engineering Research Center of Intelligent Perception and Elderly Care, Chuzhou University (No. 2022OPA03); the Higher Education Natural Science Foundation of Anhui Province (No. KJ2021B01); and the Innovation Team Projects of Universities in Guangdong (No. 2022KCXTD057).
Abstract: The coronavirus disease 2019 (COVID-19) has severely disrupted both human life and the health care system. Timely diagnosis and treatment have become increasingly important; however, the distribution and size of lesions vary widely among individuals, making it challenging to accurately diagnose the disease. This study proposed a deep-learning disease diagnosis model based on weakly supervised learning and clustering visualization (W_CVNet) that fuses classification with segmentation. First, the data were preprocessed: an optimizable weakly supervised segmentation preprocessing method (O-WSSPM) was used to remove redundant data and solve the category imbalance problem. Second, a deep-learning fusion method was used for feature extraction and classification recognition: a dual asymmetric complementary bilinear feature extraction method (D-CBM) was used to fully extract complementary features, which solved the problem of insufficient feature extraction by a single deep learning network. Third, an unsupervised learning method based on Fuzzy C-Means (FCM) clustering was used to segment and visualize COVID-19 lesions, enabling physicians to accurately assess lesion distribution and disease severity. In this study, 5-fold cross-validation was used, and the results showed that the network had an average classification accuracy of 85.8%, outperforming six recent advanced classification models. W_CVNet can effectively provide physicians with automated diagnostic aid to determine whether the disease is present and, for COVID-19 patients, to further predict the lesion area.
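Standard fuzzy C-means, the clustering behind the lesion-visualization stage, alternates between membership and centre updates. A compact numpy sketch follows; the fuzzifier m = 2 and the fixed iteration budget are conventional choices, not the paper's settings.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, rng=None):
    """Standard fuzzy c-means: alternate soft-membership and centre updates.
    Returns (centres, memberships); each membership row sums to one."""
    if rng is None:
        rng = np.random.default_rng(0)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        # centres are membership-weighted means of the data
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=-1) + 1e-12
        # u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
    return centres, U
```

Unlike hard k-means, each pixel keeps a graded membership per cluster, which is what makes the soft lesion maps in the abstract possible.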
Abstract: Nowadays, there is tremendous growth in biometric authentication and cybersecurity applications. Thus, an efficient way of storing and securing personal biometric patterns is mandatory in most governmental and private sectors, and designing and implementing robust security algorithms for users' biometrics is still a hot research area. This work presents a powerful biometric security system (BSS) to protect different biometric modalities such as faces, irises, and fingerprints. The proposed BSS model is based on hybridizing an auto-encoder (AE) network and a chaos-based ciphering algorithm to cipher the details of the stored biometric patterns and ensure their secrecy. The employed AE network is an unsupervised deep learning (DL) structure used in the proposed BSS model to extract the main biometric features. These features are utilized to generate two random chaos matrices. The first random chaos matrix is used to permute the pixels of the biometric images, while the second is used to further cipher and confuse the resulting permuted pixels using a two-dimensional (2D) chaotic logistic map (CLM) algorithm. To assess the efficiency of the proposed BSS, (1) different standardized color and grayscale images of the examined fingerprint, face, and iris biometrics were used, and (2) comprehensive security and recognition evaluation metrics were measured. The assessment results have proven the authentication and robustness superiority of the proposed BSS model compared to other existing BSS models. For example, the proposed BSS succeeds in achieving a high area under the receiver operating characteristic (AROC) curve value of 99.97% and low rates of 0.00137, 0.00148, and 0.00157 for the equal error rate (EER), false reject rate (FRR), and false accept rate (FAR), respectively.
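The permute-then-confuse pattern described above can be sketched with 1D logistic maps standing in for the paper's 2D chaotic logistic map. The map parameter r and the key values below are illustrative; a real system would derive everything from a proper key schedule rather than hard-coded constants.

```python
import numpy as np

def logistic_stream(x0, n, r=3.99):
    """Iterate the chaotic logistic map x <- r * x * (1 - x) n times."""
    out, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

def encrypt(img, key=(0.3456, 0.789)):
    flat = img.ravel()
    perm = np.argsort(logistic_stream(key[0], flat.size))   # chaotic pixel permutation
    mask = (logistic_stream(key[1], flat.size) * 256).astype(np.uint8)
    return (flat[perm] ^ mask).reshape(img.shape)           # confusion via XOR mask

def decrypt(cipher, key=(0.3456, 0.789)):
    perm = np.argsort(logistic_stream(key[0], cipher.size))
    flat = cipher.ravel() ^ (logistic_stream(key[1], cipher.size) * 256).astype(np.uint8)
    out = np.empty_like(flat)
    out[perm] = flat                                        # undo the permutation
    return out.reshape(cipher.shape)
```

Both the permutation and the XOR mask regenerate deterministically from the key, so decryption needs no side information beyond the key itself; chaotic sensitivity to the initial value is what makes brute-forcing the stream impractical.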
Funding: The Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, funded this research work through Project Number IFP-2020-133.
Abstract: Intelligent diagnosis approaches with shallow architectural models play an essential role in healthcare. Deep Learning (DL) models with unsupervised learning concepts have been proposed because high-quality feature extraction and adequate labelled details significantly influence shallow models. Meanwhile, skin-lesion segregation and disintegration procedures play an essential role in earlier skin cancer detection. However, artefacts, unclear boundaries, poor contrast, and varying lesion sizes make detection difficult. To address these issues, this study creates the UDLS-DDOA model, an intelligent Unsupervised Deep Learning-based Stacked Auto-encoder (UDLS) optimized by Dynamic Differential Annealed Optimization (DDOA). Pre-processing, segregation, feature removal or separation, and disintegration are part of the proposed skin lesion diagnosis model. Pre-processing of skin lesion images occurs at the initial level for noise removal using the top-hat filter and inpainting methodology. Following that, a Fuzzy C-Means (FCM) segregation procedure is performed using a Quasi-Oppositional Elephant Herd Optimization (QOEHO) algorithm. Besides, a novel feature extraction technique using the UDLS technique is applied, where parameter tuning takes place using DDOA. In the end, the disintegration procedure is accomplished using a SoftMax (SM) classifier. The UDLS-DDOA model is tested on the International Skin Imaging Collaboration (ISIC) dataset, and the experimental results are examined using various computational attributes. The simulation results demonstrate that the UDLS-DDOA model significantly outperforms the compared methods.
Funding: Supported by MRC, UK (MC_PC_17171); Royal Society, UK (RP202G0230); BHF, UK (AA/18/3/34220); Hope Foundation for Cancer Research, UK (RM60G0680); GCRF, UK (P202PF11); Sino-UK Industrial Fund, UK (RP202G0289); LIAS, UK (P202ED10, P202RE969); Data Science Enhancement Fund, UK (P202RE237); Fight for Sight, UK (24NN201); Sino-UK Education Fund, UK (OP202006); and BBSRC, UK (RM32G0178B8).
Abstract: Supervised learning aims to build a function or model that captures as many mappings as possible between the training data and outputs, where each training example is predicted as a label matching its corresponding ground-truth value. Although supervised learning has achieved great success in many tasks, sufficient data supervision is not accessible in many domains because accurate data labelling is costly and laborious, particularly in medical image analysis, where the cost of a dataset with ground-truth labels is much higher than in other domains. Therefore, it is worthwhile to focus on weakly supervised learning for medical image analysis, as it is more applicable to practical applications. In this review, the authors give an overview of the latest progress of weakly supervised learning in medical image analysis, including incomplete, inexact, and inaccurate supervision, and introduce related works on different applications for medical image analysis. Related concepts are illustrated to give readers an overview ranging from supervised to unsupervised learning within the scope of machine learning. Furthermore, the challenges and future work of weakly supervised learning in medical image analysis are discussed.
Abstract: In recent years, functional data has been widely used in finance, medicine, biology and other fields. Current clustering analysis can solve problems in finite-dimensional space, but it is difficult to apply directly to the clustering of functional data. In this paper, we propose a new unsupervised clustering algorithm based on adaptive weights. In the absence of initialization parameters, we use entropy-type penalty terms and a fuzzy partition matrix to find the optimal number of clusters. At the same time, we introduce a measure based on adaptive weights to reflect the difference in information content between different clustering metrics. Simulation experiments show that the proposed algorithm achieves higher purity than some existing algorithms.
Abstract: Satellite image classification is crucial in various applications such as urban planning, environmental monitoring, and land use analysis. In this study, the authors present a comparative analysis of different supervised and unsupervised learning methods for satellite image classification, focusing on a case study of Casablanca using Landsat 8 imagery. This research aims to identify the most effective machine-learning approach for accurately classifying land cover in an urban environment. The methodology consists of pre-processing the Landsat imagery of Casablanca, extracting relevant features, partitioning them into training and test sets, and then applying random forest (RF), support vector machine (SVM), classification and regression tree (CART), gradient tree boost (GTB), decision tree (DT), and minimum distance (MD) algorithms. Through a series of experiments, the authors evaluate the performance of each machine learning method in terms of accuracy and Kappa coefficient. This work shows that random forest is the best-performing algorithm, with an accuracy of 95.42% and a Kappa coefficient of 0.94. The authors discuss the factors behind the algorithms' performance, including data characteristics, feature selection, and model configuration.
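The supervised side of such a comparison reduces to a familiar scikit-learn flow: split labelled pixels, fit a random forest, and report accuracy and the Kappa coefficient. The synthetic "spectral band" values below are illustrative stand-ins, not Landsat data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# synthetic "pixels": 6 spectral bands for three assumed land-cover classes
n = 300
X = np.vstack([rng.normal(mu, 1.0, size=(n, 6)) for mu in (0.0, 4.0, 8.0)])
y = np.repeat([0, 1, 2], n)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
acc = accuracy_score(y_te, pred)      # fraction of correct pixels
kappa = cohen_kappa_score(y_te, pred)  # agreement corrected for chance
```

Kappa is reported alongside accuracy because it discounts agreement that would occur by chance, which matters when land-cover classes are imbalanced.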
Funding: Supported by the Hebei Provincial Natural Science Foundation of China (Grant No. F2016203421).
Abstract: The performance of traditional vibration-based fault diagnosis methods greatly depends on handcrafted features extracted using signal processing algorithms, which require significant amounts of domain knowledge and human labor and do not generalize well to new diagnosis domains. Recently, unsupervised representation learning has provided a promising alternative to feature extraction in traditional fault diagnosis due to its superior ability to learn from unlabeled data. Given that vibration signals usually contain multiple temporal structures, this paper proposes a multiscale representation learning (MSRL) framework to learn useful features directly from raw vibration signals, with the aim of capturing rich and complementary fault pattern information at different scales. In our proposed approach, a coarse-grained procedure is first employed to obtain multiple scale signals from an original vibration signal. Then, sparse filtering, a newly developed unsupervised learning algorithm, is applied to automatically learn useful features from each scale signal, and the learned features at each scale are concatenated one by one to obtain multiscale representations. Finally, the multiscale representations are fed into a supervised classifier to obtain diagnosis results. Our proposed approach is evaluated using two different case studies: motor bearing and wind turbine gearbox fault diagnosis. Experimental results show that the proposed MSRL approach can take full advantage of the availability of unlabeled data to learn discriminative features, and achieves better performance with higher accuracy and stability compared to traditional approaches.
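The coarse-grained procedure in this abstract follows the usual multiscale construction: the scale-s series averages consecutive non-overlapping windows of length s of the raw signal. A sketch:

```python
import numpy as np

def coarse_grain(x, scale):
    """Coarse-grained series at a given scale: the mean of consecutive,
    non-overlapping windows of length `scale` of the raw signal."""
    n = len(x) // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)

x = np.arange(10, dtype=float)     # stand-in for a raw vibration signal
scales = [coarse_grain(x, s) for s in (1, 2, 3)]
```

Scale 1 returns the raw signal, and larger scales progressively smooth out fast structure, so features learned per scale (here, by sparse filtering) capture fault information at different temporal resolutions before concatenation.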
Funding: Supported by the SGCC Science and Technology Program under the project "Distributed High-Speed Frequency Control Under UHVDC Bipolar Blocking Fault Scenario" (No. SGGR0000DLJS1800934).
Abstract: This paper investigates the intelligent load monitoring problem with applications to practical energy management scenarios in smart grids. As one of the critical components for paving the way to smart grids' success, an intelligent and feasible non-intrusive load monitoring (NILM) algorithm is urgently needed. However, most recent research on NILM has not dealt with the practical problems that arise when it is applied to the power grid, i.e., 1) limited communication for slow-change systems; 2) the requirement of low-cost hardware at the users' side; and 3) the inconvenience of adapting to new households. Therefore, a novel NILM algorithm based on a biology-inspired spiking neural network (SNN) has been developed to overcome these challenges. To provide intelligence in NILM, the developed SNN features an unsupervised learning rule, i.e., spike-time-dependent plasticity (STDP), which only requires the user to label one instance for each appliance when adapting to a new household. To improve feasibility, the designed spiking neurons mimic the mechanism of human brain neurons and can be constructed from a resistor-capacitor (RC) circuit. In addition, a distributed computing system has been designed that divides the SNN into two parts, i.e., smart outlets and local servers. Since the information flows as sparse binary vectors among the spiking neurons in the developed SNN-based NILM, the high-frequency data can easily be compressed as spike times and sent to the local server over the limited communication channel, which traditional NILM cannot handle. Finally, a series of experiments is conducted using a benchmark public dataset, and the effectiveness of the developed SNN-based NILM is demonstrated through comparisons with other emerging NILM algorithms such as convolutional neural networks.
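The RC-circuit spiking neuron mentioned above is essentially a leaky integrate-and-fire unit: the capacitor integrates input current, the resistor leaks it, and crossing a threshold emits a spike time, which is exactly the sparse message sent to the local server. The constants below are illustrative, not the paper's circuit values.

```python
import numpy as np

def lif_spikes(current, dt=1e-3, R=1e7, C=1e-9, v_thresh=0.5, v_reset=0.0):
    """Leaky integrate-and-fire neuron modelled as an RC circuit:
    C dV/dt = -V/R + I(t); a spike is emitted and V reset at threshold."""
    v, tau, spikes = v_reset, R * C, []
    for t, i_in in enumerate(current):
        v += dt * (-v / tau + i_in / C)   # Euler step of the RC dynamics
        if v >= v_thresh:
            spikes.append(t)              # only spike times leave the neuron
            v = v_reset
    return spikes
```

A strong input current drives the membrane over threshold repeatedly, while a weak one saturates below it and produces no spikes at all, which is why the outgoing data stream stays sparse and cheap to transmit.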
Funding: The authors acknowledge the support of the Monash-IITB Academy Scholarship; funded in part by the Australian Research Council (DP190103592).
Abstract: Typically, magnesium alloys have been designed using a so-called hill-climbing approach, with rather incremental advances over the past century. Iterative and incremental alloy design is slow and expensive, but more importantly it does not harness all the data that exists in the field. In this work, a new approach is proposed that utilises data science and provides a detailed understanding of the data that exists in the field of Mg-alloy design to date. In this approach, a consolidated alloy database incorporating 916 datapoints was first developed from the literature and experimental work. To analyse the characteristics of the database, alloying and thermomechanical processing effects on mechanical properties were explored via composition-process-property matrices. An unsupervised machine learning (ML) method, clustering, was also implemented on the unlabelled data, with the aim of revealing potentially useful information in an alloy representation space of low dimensionality. In addition, the alloy database was correlated to thermodynamically stable secondary phases to further understand the relationships between microstructure and mechanical properties. This work not only introduces an invaluable open-source database but also provides, for the first time, data-driven insights that enable future accelerated digital Mg-alloy design.
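The unsupervised step in this abstract, clustering unlabelled alloy datapoints in a low-dimensional representation space, commonly pairs standardisation, PCA, and k-means. A sketch on synthetic composition/processing features follows; the feature values and the three assumed "alloy family" centres are illustrative, not entries from the actual database.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# synthetic stand-in for alloy datapoints: e.g. two solute contents (wt.%),
# a trace addition, and a processing temperature, around three family centres
centres = np.array([[6, 1, 0.2, 350], [3, 0.5, 1.0, 420], [9, 2, 0.1, 300]])
X = np.vstack([c + rng.normal(0, [0.3, 0.1, 0.05, 10], size=(40, 4))
               for c in centres])

# standardise (features have very different units), project to 2D, cluster
Z = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
```

Plotting Z coloured by cluster label is the kind of low-dimensional alloy representation the abstract refers to: groups that emerge without labels can then be inspected against known alloy families and secondary-phase predictions.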