Journal Articles
3,003 articles found
Feature extraction for machine learning-based intrusion detection in IoT networks
1
Authors: Mohanad Sarhan, Siamak Layeghy, Nour Moustafa, Marcus Gallagher, Marius Portmann 《Digital Communications and Networks》 SCIE CSCD 2024, No. 1, pp. 205-216 (12 pages)
A large number of network security breaches in IoT networks have demonstrated the unreliability of current Network Intrusion Detection Systems (NIDSs). Consequently, network interruptions and loss of sensitive data have occurred, which has led to an active research area for improving NIDS technologies. In an analysis of related works, it was observed that most researchers aim to obtain better classification results by using a set of untried combinations of Feature Reduction (FR) and Machine Learning (ML) techniques on NIDS datasets. However, these datasets differ in feature sets, attack types, and network design. Therefore, this paper aims to discover whether these techniques can be generalised across various datasets. Six ML models are utilised: a Deep Feed Forward (DFF), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Decision Tree (DT), Logistic Regression (LR), and Naive Bayes (NB). Three Feature Extraction (FE) algorithms, Principal Component Analysis (PCA), Auto-encoder (AE), and Linear Discriminant Analysis (LDA), are evaluated using three benchmark datasets: UNSW-NB15, ToN-IoT, and CSE-CIC-IDS2018. Although the PCA and AE algorithms have been widely used, the determination of their optimal number of extracted dimensions has been overlooked. The results indicate that no clear FE method or ML model can achieve the best scores for all datasets. The optimal number of extracted dimensions has been identified for each dataset, and LDA degrades the performance of the ML models on two datasets. The variance is used to analyse the extracted dimensions of LDA and PCA. Finally, this paper concludes that the choice of datasets significantly alters the performance of the applied techniques. We believe that a universal (benchmark) feature set is needed to facilitate further advancement and progress of research in this field.
Keywords: Feature extraction; machine learning; Network intrusion detection system; IoT
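A minimal sketch of the kind of feature-reduction comparison described in the abstract above, using scikit-learn on a synthetic stand-in for an NIDS dataset (the feature count, reduced dimensions, and classifier are illustrative assumptions, not the paper's exact setup):

```python
# Hedged illustration: PCA vs. LDA as feature-extraction front-ends for a classifier.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic flow records: 40 features, binary label (benign vs. attack).
X, y = make_classification(n_samples=5000, n_features=40, n_informative=12,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

reducers = {
    "PCA-10": PCA(n_components=10),
    "LDA-1": LinearDiscriminantAnalysis(n_components=1),  # at most n_classes - 1
}
for name, reducer in reducers.items():
    clf = make_pipeline(reducer, LogisticRegression(max_iter=1000))
    clf.fit(X_tr, y_tr)
    print(name, "F1 =", round(f1_score(y_te, clf.predict(X_te)), 3))
```

After fitting, the explained variance of the PCA components is available via `explained_variance_ratio_`, which is the kind of variance analysis the abstract mentions for choosing the number of extracted dimensions.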
Automated Machine Learning Algorithm Using Recurrent Neural Network to Perform Long-Term Time Series Forecasting
2
Authors: Ying Su, Morgan C. Wang, Shuai Liu 《Computers, Materials & Continua》 SCIE EI 2024, No. 3, pp. 3529-3549 (21 pages)
Long-term time series forecasting stands as a crucial research domain within the realm of automated machine learning (AutoML). At present, forecasting, whether rooted in machine learning or statistical learning, typically relies on expert input and necessitates substantial manual involvement. This manual effort spans model development, feature engineering, hyper-parameter tuning, and the intricate construction of time series models. The complexity of these tasks renders complete automation unfeasible, as they inherently demand human intervention at multiple junctures. To surmount these challenges, this article proposes leveraging Long Short-Term Memory, a variant of Recurrent Neural Networks, harnessing memory cells and gating mechanisms to facilitate long-term time series prediction. However, the forecasting accuracy of particular neural networks and traditional models can degrade significantly when addressing long-term time-series tasks. Therefore, our research demonstrates that this innovative approach outperforms the traditional Autoregressive Integrated Moving Average (ARIMA) method in forecasting long-term univariate time series. ARIMA is a high-quality and competitive model in time series prediction, yet it requires significant preprocessing effort. Using multiple accuracy metrics, we have evaluated both ARIMA and the proposed method on simulated time-series data and real data, over both short and long horizons. Furthermore, our findings indicate its superiority over alternative network architectures, including Fully Connected Neural Networks, Convolutional Neural Networks, and Nonpooling Convolutional Neural Networks. Our AutoML approach enables non-professionals to attain highly accurate and effective time series forecasting, and can be widely applied to various domains, particularly in business and finance.
Keywords: Automated machine learning; autoregressive integrated moving average; neural networks; time series analysis
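A minimal sketch of LSTM-based univariate forecasting with sliding windows, in the spirit of the approach above (the synthetic series, window length, and layer sizes are illustrative assumptions):

```python
# Hedged illustration: a small LSTM fitted on sliding windows of a univariate series.
import numpy as np
import tensorflow as tf

# Synthetic univariate series (trend + seasonality + noise).
t = np.arange(2000, dtype="float32")
series = 0.01 * t + np.sin(2 * np.pi * t / 50) + 0.1 * np.random.randn(2000)

window = 48
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., None]  # shape: (samples, window, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(64),   # memory cells and gating for long contexts
    tf.keras.layers.Dense(1),   # one-step-ahead prediction
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:-200], y[:-200], epochs=5, batch_size=64, verbose=0)
print("held-out MSE:", model.evaluate(X[-200:], y[-200:], verbose=0))
```

Longer horizons can then be forecast recursively by feeding predictions back into the input window, which is one common way such models are compared against ARIMA baselines.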
Prediction of Porous Media Fluid Flow with Spatial Heterogeneity Using Criss-Cross Physics-Informed Convolutional Neural Networks
3
Authors: Jiangxia Han, Liang Xue, Ying Jia, Mpoki Sam Mwasamwasa, Felix Nanguka, Charles Sangweni, Hailong Liu, Qian Li 《Computer Modeling in Engineering & Sciences》 SCIE EI 2024, No. 2, pp. 1323-1340 (18 pages)
Recent advances in deep neural networks have shed new light on physics, engineering, and scientific computing. Reconciling the data-centered viewpoint with physical simulation is one of the research hotspots. The physics-informed neural network (PINN) is currently the most general framework, and is popular due to the convenience of constructing NNs and its excellent generalization ability. The automatic differentiation (AD)-based PINN model is suitable for homogeneous scientific problems; however, it is unclear how AD can enforce flux continuity across boundaries between cells of different properties, where spatial heterogeneity is represented by grid cells with different physical properties. In this work, we propose a criss-cross physics-informed convolutional neural network (CC-PINN) learning architecture, aiming to learn the solution of parametric PDEs with spatial heterogeneity of physical properties. To achieve the seamless enforcement of flux continuity and the integration of physical meaning into the CNN, a predefined 2D convolutional layer is proposed to accurately express transmissibility between adjacent cells. The efficacy of the proposed method was evaluated through predictions of several petroleum reservoir problems with spatial heterogeneity and compared against the state-of-the-art PINN through numerical analysis as a benchmark, which demonstrated the superiority of the proposed method over the PINN.
Keywords: Physics-informed neural networks (PINN); flow in porous media; convolutional neural networks; spatial heterogeneity; machine learning
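The key ingredient named above is a predefined convolution that encodes transmissibility between neighbouring grid cells. As a generic, hedged illustration (not the paper's CC-PINN layer), the sketch below computes the standard harmonic-mean transmissibility across faces of a heterogeneous permeability field, the quantity such a layer would need to respect for flux continuity:

```python
# Hedged illustration: harmonic-mean transmissibility between horizontally adjacent cells.
import numpy as np

def face_transmissibility_x(perm, dx=1.0, dy=1.0):
    """Transmissibility on the faces between horizontally adjacent grid cells."""
    k_left, k_right = perm[:, :-1], perm[:, 1:]
    k_face = 2.0 * k_left * k_right / (k_left + k_right)  # harmonic mean
    return k_face * dy / dx

# Two-region heterogeneous field: low permeability on the left, high on the right.
perm = np.ones((4, 6))
perm[:, 3:] = 100.0
print(face_transmissibility_x(perm))  # values stay close to the smaller permeability at the interface
```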
Multi-Scale-Matching neural networks for thin plate bending problem (Cited by: 1)
4
Authors: Lei Zhang, Guowei He 《Theoretical & Applied Mechanics Letters》 CAS CSCD 2024, No. 1, pp. 11-15 (5 pages)
Physics-informed neural networks are a useful machine learning method for solving differential equations, but they encounter challenges in effectively learning thin boundary layers within singular perturbation problems. To resolve this issue, multi-scale-matching neural networks are proposed to solve the singular perturbation problems. Inspired by matched asymptotic expansions, the solution is decomposed into inner solutions for small scales and outer solutions for large scales, corresponding to boundary layers and outer regions, respectively. Moreover, to conform with neural networks, we introduce exponential stretched variables in the boundary layers to avoid semi-infinite region problems. Numerical results for the thin plate problem validate the proposed method.
Keywords: Singular perturbation; Physics-informed neural networks; Boundary layer; machine learning
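For readers unfamiliar with the inner/outer decomposition mentioned above, a textbook one-dimensional boundary-layer problem (not the thin-plate equation itself) illustrates matched asymptotic expansions and the role of a stretched inner variable:

```latex
% Hedged illustration on a classic model problem, not the paper's plate equation.
\[
  \epsilon y'' + y' = 1, \qquad y(0)=y(1)=0, \qquad 0<\epsilon\ll 1 .
\]
% Outer solution (drop the \epsilon y'' term):  y_{\mathrm{out}}(x) = x - 1.
% Inner solution near x = 0 with stretched variable \xi = x/\epsilon:
\[
  Y'' + Y' = 0 \;\Rightarrow\; Y(\xi) = A + B e^{-\xi}, \qquad
  A = y_{\mathrm{out}}(0) = -1, \quad Y(0)=0 \Rightarrow B = 1 .
\]
% Composite approximation, uniformly valid on [0,1]:
\[
  y(x) \approx x - 1 + e^{-x/\epsilon},
\]
% which satisfies both boundary conditions and resolves the thin layer at x = 0.
```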
Application of Convolutional Neural Networks in Classification of GBM for Enhanced Prognosis
5
Author: Rithik Samanthula 《Advances in Bioscience and Biotechnology》 CAS 2024, No. 2, pp. 91-99 (9 pages)
The lethal brain tumor “Glioblastoma” has the propensity to grow over time. To improve patient outcomes, it is essential to classify GBM accurately and promptly in order to provide a focused and individualized treatment plan. To this end, deep learning methods, particularly Convolutional Neural Networks (CNNs), have demonstrated a high level of accuracy in a myriad of medical image analysis applications as a result of recent technical breakthroughs. The overall aim of the research is to investigate how CNNs can be used to classify GBMs using data from medical imaging, to improve prognosis precision and effectiveness. This research study will demonstrate a suggested methodology that makes use of the CNN architecture and is trained using a database of MRI images of this tumor. The constructed model will be assessed based on its overall performance. Extensive experiments and comparisons with conventional machine learning techniques and existing classification methods will also be made. It will be crucial to emphasize the possibility of early and accurate prediction in a clinical workflow because it can have a big impact on treatment planning and patient outcomes. The paramount objective is to not only address the classification challenge but also to outline a clear pathway towards enhancing prognosis precision and treatment effectiveness.
Keywords: Glioblastoma; machine learning; Artificial Intelligence; neural networks; Brain Tumor; Cancer; TensorFlow; Layers; Cytoarchitecture; Deep learning; Deep neural Network; Training Batches
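A minimal sketch of the kind of CNN classifier the abstract describes, assuming single-channel MRI-like slices resized to 128 x 128 with a binary GBM label (architecture and sizes are illustrative assumptions, not the study's exact model):

```python
# Hedged illustration: a small 2-D CNN for binary classification of image slices.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),                    # regularisation
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(GBM)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
# Training would read labelled MRI slices, e.g. via
# tf.keras.utils.image_dataset_from_directory("mri_slices/", image_size=(128, 128),
#                                              color_mode="grayscale")
```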
Comparative Analysis of Machine Learning Models for Customer Churn Prediction in the U.S. Banking and Financial Services: Economic Impact and Industry-Specific Insights
6
Authors: Omoshola S. Owolabi, Prince C. Uche, Nathaniel T. Adeniken, Oghenekome Efijemue, Samuel Attakorah, Oluwabukola G. Emi-Johnson, Emmanuel Hinneh 《Journal of Data Analysis and Information Processing》 2024, No. 3, pp. 388-418 (31 pages)
Customer churn poses a significant challenge for the banking and finance industry in the United States, directly affecting profitability and market share. This study conducts a comprehensive comparative analysis of machine learning models for customer churn prediction, focusing on the U.S. context. The research evaluates the performance of logistic regression, random forest, and neural networks using industry-specific datasets, considering the economic impact and practical implications of the findings. The exploratory data analysis reveals unique patterns and trends in the U.S. banking and finance industry, such as the age distribution of customers and the prevalence of dormant accounts. The study incorporates macroeconomic factors to capture the potential influence of external conditions on customer churn behavior. The findings highlight the importance of leveraging advanced machine learning techniques and comprehensive customer data to develop effective churn prevention strategies in the U.S. context. By accurately predicting customer churn, financial institutions can proactively identify at-risk customers, implement targeted retention strategies, and optimize resource allocation. The study discusses the limitations and potential future improvements, serving as a roadmap for researchers and practitioners to further advance the field of customer churn prediction in the evolving landscape of the U.S. banking and finance industry.
Keywords: Churn Prediction; machine learning; Economic Impact; Industry-Specific Insights; Logistic Regression; Random Forest; neural networks
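A minimal sketch of the model comparison described above, using cross-validated ROC AUC on a synthetic, imbalanced churn-style table (the models mirror the abstract, but the data and settings are illustrative assumptions):

```python
# Hedged illustration: logistic regression vs. random forest vs. a small neural network.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Imbalanced labels, as is typical for churn data.
X, y = make_classification(n_samples=4000, n_features=20, weights=[0.8, 0.2],
                           random_state=42)

models = {
    "logistic_regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "neural_network": make_pipeline(StandardScaler(),
                                    MLPClassifier(hidden_layer_sizes=(32, 16),
                                                  max_iter=500, random_state=42)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: ROC AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
```

Macroeconomic covariates of the kind the study mentions would simply enter as additional feature columns alongside the customer-level attributes.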
Pioneering role of machine learning in unveiling intensive care unit-acquired weakness
7
Author: Silvano Dragonieri 《World Journal of Clinical Cases》 SCIE 2024, No. 13, pp. 2157-2159 (3 pages)
In the research published in the World Journal of Clinical Cases, Wang and Long conducted a quantitative analysis to delineate the risk factors for intensive care unit-acquired weakness (ICU-AW) utilizing advanced machine learning methodologies. The study employed a multilayer perceptron neural network to accurately predict the incidence of ICU-AW, focusing on critical variables such as ICU stay duration and mechanical ventilation. This research marks a significant advancement in applying machine learning to clinical diagnostics, offering a new paradigm for predictive medicine in critical care. It underscores the importance of integrating artificial intelligence technologies in clinical practice to enhance patient management strategies and calls for interdisciplinary collaboration to drive innovation in healthcare.
Keywords: Intensive care unit-acquired weakness; machine learning; Multilayer perceptron neural network; Predictive medicine; Interdisciplinary collaboration
Software Defect Prediction Using Hybrid Machine Learning Techniques: A Comparative Study
8
Authors: Hemant Kumar, Vipin Saxena 《Journal of Software Engineering and Applications》 2024, No. 4, pp. 155-171 (17 pages)
When a customer uses software, defects may occur that can be removed in updated versions of the software. Hence, in the present work, a robust examination of cross-project software defect prediction is elaborated through an innovative hybrid machine learning framework. The proposed technique combines an advanced deep neural network architecture with ensemble models such as Support Vector Machine (SVM), Random Forest (RF), and XGBoost. The study evaluates the performance by considering multiple software projects, namely CM1, JM1, KC1, and PC1, using datasets from the PROMISE Software Engineering Repository. The three hybrid models that are compared are Hybrid Model-1 (SVM, RandomForest, XGBoost, Neural Network), Hybrid Model-2 (GradientBoosting, DecisionTree, LogisticRegression, Neural Network), and Hybrid Model-3 (KNeighbors, GaussianNB, Support Vector Classification (SVC), Neural Network); Hybrid Model-3 surpasses the others in terms of recall, F1-score, accuracy, ROC AUC, and precision. The presented work offers valuable insights into the effectiveness of hybrid techniques for cross-project defect prediction, providing a comparative perspective on early defect identification and mitigation strategies.
Keywords: Defect Prediction; Hybrid Techniques; Ensemble Models; machine learning; neural Network
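A minimal sketch of a soft-voting hybrid in the spirit of Hybrid Model-3 (KNeighbors, GaussianNB, SVC, and a neural network); the PROMISE data loading is omitted and a synthetic defect-style dataset is used as a stand-in:

```python
# Hedged illustration: soft-voting ensemble of KNN, Gaussian NB, SVC and an MLP.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Defect data is typically imbalanced, hence the skewed class weights.
X, y = make_classification(n_samples=2000, n_features=21, weights=[0.85, 0.15],
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

hybrid = VotingClassifier(
    estimators=[
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
        ("nb", GaussianNB()),
        ("svc", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("mlp", make_pipeline(StandardScaler(),
                              MLPClassifier(hidden_layer_sizes=(32,),
                                            max_iter=500, random_state=1))),
    ],
    voting="soft",  # average predicted probabilities across the four learners
)
hybrid.fit(X_tr, y_tr)
print(classification_report(y_te, hybrid.predict(X_te), digits=3))
```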
Deep learning neural networks for spatially explicit prediction of flash flood probability (Cited by: 5)
9
Authors: Mahdi Panahi, Abolfazl Jaafari, Ataollah Shirzadi, Himan Shahabi, Omid Rahmati, Ebrahim Omidvar, Saro Lee, Dieu Tien Bui 《Geoscience Frontiers》 SCIE CAS CSCD 2021, No. 3, pp. 370-383 (14 pages)
Flood probability maps are essential for a range of applications, including land use planning and developing mitigation strategies and early warning systems. This study describes the potential application of two architectures of deep learning neural networks, namely convolutional neural networks (CNN) and recurrent neural networks (RNN), for spatially explicit prediction and mapping of flash flood probability. To develop and validate the predictive models, a geospatial database that contained records of the historical flood events and geo-environmental characteristics of the Golestan Province in northern Iran was constructed. The step-wise weight assessment ratio analysis (SWARA) was employed to investigate the spatial interplay between floods and different influencing factors. The CNN and RNN models were trained using the SWARA weights and validated using the receiver operating characteristics technique. The results showed that the CNN model (AUC = 0.832, RMSE = 0.144) performed slightly better than the RNN model (AUC = 0.814, RMSE = 0.181) in predicting future floods. Further, these models demonstrated an improved prediction of floods compared to previous studies that used different models in the same study area. This study showed that the spatially explicit deep learning neural network models are successful in capturing the heterogeneity of spatial patterns of flood probability in the Golestan Province, and the resulting probability maps can be used for the development of mitigation plans in response to future floods. The general policy implication of our study suggests that the design, implementation, and verification of flood early warning systems should be directed to approximately 40% of the land area characterized by high and very high susceptibility to flooding.
Keywords: Spatial modeling; machine learning; Convolutional neural networks; Recurrent neural networks; GIS; Iran
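A minimal sketch of the validation step reported above (ROC AUC and RMSE on held-out flood and non-flood locations); the two probability vectors below are synthetic stand-ins for CNN and RNN susceptibility predictions, not the study's outputs:

```python
# Hedged illustration: compare two flood-probability models with AUC and RMSE.
import numpy as np
from sklearn.metrics import mean_squared_error, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)  # 1 = historical flood location
p_cnn = np.clip(y_true * 0.7 + rng.normal(0.15, 0.20, 500), 0, 1)  # stand-in CNN output
p_rnn = np.clip(y_true * 0.6 + rng.normal(0.20, 0.25, 500), 0, 1)  # stand-in RNN output

for name, p in [("CNN", p_cnn), ("RNN", p_rnn)]:
    auc = roc_auc_score(y_true, p)
    rmse = mean_squared_error(y_true, p) ** 0.5
    print(f"{name}: AUC = {auc:.3f}, RMSE = {rmse:.3f}")
```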
Extraction Fuzzy Linguistic Rules from Neural Networks for Maximizing Tool Life in High-speed Milling Process (Cited by: 2)
10
Authors: SHEN Zhigang, HE Ning, LI Liang 《Chinese Journal of Mechanical Engineering》 SCIE EI CAS CSCD 2009, No. 3, pp. 341-346 (6 pages)
In the metal cutting industry it is common practice to search for the optimal combination of cutting parameters in order to maximize tool life for a fixed minimum value of material removal rate (MRR). After the advent of the high-speed milling (HSM) process, much experimental and theoretical research has been done for this purpose, mainly emphasizing the optimization of the cutting parameters. It is highly beneficial to convert raw data into a comprehensive knowledge-based expert system using fuzzy logic as the reasoning mechanism. In this paper an attempt is presented to extract rules from a fuzzy neural network (FNN) so as to obtain the most effective knowledge base for a given set of data. Experiments were conducted to determine the best values of cutting speeds that can maximize tool life for different combinations of input parameters. A fuzzy neural network was constructed based on the fuzzification of the input parameters and the cutting speed. After the training process, raw rule sets were extracted, and a rule pruning approach was proposed to obtain concise linguistic rules. The estimation process with fuzzy inference showed that the optimized combination of fuzzy rules provided an estimation error of only 6.34 m/min, as compared to 314 m/min for a randomized combination of rules.
Keywords: high-speed milling; rule extraction; neural network; fuzzy logic
Interpretable machine learning optimization (InterOpt) for operational parameters: A case study of highly-efficient shale gas development (Cited by: 1)
11
Authors: Yun-Tian Chen, Dong-Xiao Zhang, Qun Zhao, De-Xun Liu 《Petroleum Science》 SCIE EI CAS CSCD 2023, No. 3, pp. 1788-1805 (18 pages)
An algorithm named InterOpt for optimizing operational parameters is proposed based on interpretable machine learning, and is demonstrated via the optimization of shale gas development. InterOpt consists of three parts: a neural network is used to construct an emulator of the actual drilling and hydraulic fracturing process in the vector space (i.e., a virtual environment); the Shapley value method from interpretable machine learning is applied to analyze the impact of geological and operational parameters in each well (i.e., single-well feature impact analysis); and ensemble randomized maximum likelihood (EnRML) is conducted to optimize the operational parameters to comprehensively improve the efficiency of shale gas development and reduce the average cost. In the experiment, InterOpt provides different drilling and fracturing plans for each well according to its specific geological conditions, and finally achieves an average cost reduction of 9.7% for a case study with 104 wells.
Keywords: Interpretable machine learning; Operational parameters optimization; Shapley value; Shale gas development; neural network
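A minimal sketch of the single-well feature-impact stage using the shap package on a generic surrogate model; the feature names and the gradient-boosting emulator below are stand-ins, and InterOpt's neural-network emulator and EnRML optimiser are not reproduced here:

```python
# Hedged illustration: Shapley-value attribution of inputs for a surrogate (emulator) model.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
feature_names = ["lateral_length", "stages", "fluid_volume", "porosity", "toc"]  # illustrative
X = rng.normal(size=(300, 5))
y = 2.0 * X[:, 0] + 1.0 * X[:, 2] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=300)

emulator = GradientBoostingRegressor().fit(X, y)  # stand-in for the NN emulator
explainer = shap.Explainer(emulator, X)           # Shapley-value estimates per prediction
shap_values = explainer(X[:20])                   # one attribution vector per "well"

for name, impact in zip(feature_names, np.abs(shap_values.values).mean(axis=0)):
    print(f"{name}: mean |SHAP| = {impact:.3f}")
```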
A Comprehensive Investigation of Machine Learning Feature Extraction and Classification Methods for Automated Diagnosis of COVID-19 Based on X-ray Images (Cited by: 7)
12
Authors: Mazin Abed Mohammed, Karrar Hameed Abdulkareem, Begonya Garcia-Zapirain, Salama A. Mostafa, Mashael S. Maashi, Alaa S. Al-Waisy, Mohammed Ahmed Subhi, Ammar Awad Mutlag, Dac-Nhuong Le 《Computers, Materials & Continua》 SCIE EI 2021, No. 3, pp. 3289-3310 (22 pages)
The quick spread of the Coronavirus Disease (COVID-19) infection around the world is considered a real danger for global health. The biological structure and symptoms of COVID-19 are similar to those of other viral chest maladies, which makes it challenging to improve approaches for the efficient identification of COVID-19 disease. In this study, an automatic prediction of COVID-19 identification is proposed to automatically discriminate between healthy and COVID-19 infected subjects in X-ray images, using two successful modern approaches: traditional machine learning methods (e.g., artificial neural network (ANN), support vector machine (SVM) with linear and radial basis function (RBF) kernels, k-nearest neighbor (k-NN), Decision Tree (DT), and CN2 rule inducer techniques) and deep learning models (e.g., MobileNets V2, ResNet50, GoogleNet, DarkNet and Xception). A large X-ray dataset has been created and developed, namely COVID-19 vs. Normal (400 healthy cases and 400 COVID cases). To the best of our knowledge, it is currently the largest publicly accessible COVID-19 dataset with the largest number of X-ray images of confirmed COVID-19 infection cases. Based on the results obtained from the experiments, it can be concluded that all the models performed well; the deep learning models achieved the optimum accuracy of 98.8% with the ResNet50 model. In comparison, among the traditional machine learning techniques, the SVM demonstrated the best result with an accuracy of 95%, with the RBF kernel achieving 94%, for the prediction of coronavirus disease 2019.
Keywords: Coronavirus disease; COVID-19 diagnosis; machine learning; convolutional neural networks; ResNet50; artificial neural network; support vector machine; X-ray images; feature transfer learning
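A minimal sketch of ResNet50-based transfer learning for the two-class X-ray task described above (the frozen-backbone setup, input size, and classification head are common illustrative choices, not necessarily the study's exact configuration):

```python
# Hedged illustration: ResNet50 backbone with a small binary head for COVID-19 vs. normal X-rays.
import tensorflow as tf

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # transfer learning: keep the ImageNet features frozen at first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(COVID-19)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
# Training would read the 400 + 400 image dataset, e.g. via
# tf.keras.utils.image_dataset_from_directory("xray/", image_size=(224, 224))
```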
Lateral interaction by Laplacian-based graph smoothing for deep neural networks
13
Authors: Jianhui Chen, Zuoren Wang, Cheng-Lin Liu 《CAAI Transactions on Intelligence Technology》 SCIE EI 2023, No. 4, pp. 1590-1607 (18 pages)
Lateral interaction in the biological brain is a key mechanism that underlies higher cognitive functions. The linear self-organising map (SOM) introduces lateral interaction in a general form in which signals of any modality can be used. Some approaches directly incorporate SOM learning rules into neural networks, but they incur complex operations and poor extendibility. An efficient way to implement lateral interaction in deep neural networks is not well established. The use of Laplacian Matrix-based Smoothing (LS) regularisation is proposed for implementing lateral interaction in a concise form. The authors' derivation and experiments show that lateral interaction implemented by the SOM model is a special case of LS-regulated k-means, and they both show the topology-preserving capability. The authors also verify that LS-regularisation can be used in conjunction with the end-to-end training paradigm in deep auto-encoders. Additionally, the benefits of LS-regularisation in relaxing the requirement of parameter initialisation in various models and improving the classification performance of prototype classifiers are evaluated. Furthermore, the topologically ordered structure introduced by LS-regularisation in the feature extractor can improve the generalisation performance on classification tasks. Overall, LS-regularisation is an effective and efficient way to implement lateral interaction and can be easily extended to different models.
Keywords: artificial neural networks; biologically plausible; Laplacian-based graph smoothing; lateral interaction; machine learning
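A minimal numerical sketch of the Laplacian-smoothing idea: arrange output units on a simple 1-D topology and penalise differences between neighbouring units' weight vectors, which equals the quadratic form tr(W^T L W) with graph Laplacian L = D - A. This is a generic illustration of LS regularisation, not the authors' training code:

```python
# Hedged illustration: Laplacian-based smoothing penalty over laterally connected units.
import numpy as np

rng = np.random.default_rng(0)
n_units, dim = 8, 16
W = rng.normal(size=(n_units, dim))  # one weight vector per output unit

# Chain topology: unit i laterally interacts with unit i + 1.
A = np.zeros((n_units, n_units))
for i in range(n_units - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A       # graph Laplacian L = D - A

# Penalty written as a sum over lateral edges ...
edge_sum = sum(np.sum((W[i] - W[i + 1]) ** 2) for i in range(n_units - 1))
# ... equals the quadratic form tr(W^T L W), the term added to the training loss.
quad_form = np.trace(W.T @ L @ W)
print(np.isclose(edge_sum, quad_form))  # True

# In training: total_loss = task_loss + lambda_ls * quad_form, computed on the
# weights of the layer whose units should be topologically ordered.
```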
A data-driven machine learning approach for yaw control applications of wind farms
14
Authors: Christian Santoni, Zexia Zhang, Fotis Sotiropoulos, Ali Khosronejad 《Theoretical & Applied Mechanics Letters》 CSCD 2023, No. 5, pp. 341-352 (12 pages)
This study proposes a cost-effective machine-learning based model for predicting velocity and turbulence kinetic energy fields in the wake of wind turbines for yaw control applications. The model consists of an auto-encoder convolutional neural network (ACNN) trained to extract the features of turbine wakes using instantaneous data from large-eddy simulation (LES). The proposed framework is demonstrated by applying it to the Sandia National Laboratory Scaled Wind Farm Technology facility consisting of three 225 kW turbines. LES of this site is performed for different wind speeds and yaw angles to generate datasets for training and validating the proposed ACNN. It is shown that the ACNN accurately predicts turbine wake characteristics for cases with turbine yaw angle and wind speed that were not part of the training process. Specifically, the ACNN is shown to reproduce the wake redirection of the upstream turbine and the secondary wake steering of the downstream turbine accurately. Compared to the brute-force LES, the ACNN developed herein is shown to reduce the overall computational cost required to obtain the steady-state first- and second-order statistics of the wind farm by about 85%.
Keywords: Wind energy; machine learning; Yaw control; Large eddy simulations; Convolutional neural networks
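A minimal sketch of an auto-encoder convolutional network of the general kind described above, mapping two-channel flow snapshots (e.g., velocity and turbulence kinetic energy) to a compact latent code and back; the grid size, channel count, and layer sizes are illustrative assumptions, not the paper's ACNN:

```python
# Hedged illustration: a small convolutional auto-encoder for 2-channel flow snapshots.
import tensorflow as tf

encoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 2)),
    tf.keras.layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64),  # latent wake features
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(16 * 16 * 32, activation="relu"),
    tf.keras.layers.Reshape((16, 16, 32)),
    tf.keras.layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2DTranspose(2, 3, strides=2, padding="same"),  # reconstructed fields
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.summary()
# Training pairs would come from LES snapshots at the various wind speeds and yaw angles.
```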
Empirical Analysis of Neural Networks-Based Models for Phishing Website Classification Using Diverse Datasets
15
Authors: Shoaib Khan, Bilal Khan, Saifullah Jan, Subhan Ullah, Aiman 《Journal of Cyber Security》 2023, No. 1, pp. 47-66 (20 pages)
Phishing attacks pose a significant security threat by masquerading as trustworthy entities to steal sensitive information, a problem that persists despite user awareness. This study addresses the pressing issue of phishing attacks on websites and assesses the performance of three prominent Machine Learning (ML) models: Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM). The models are evaluated on authentic datasets sourced from the Kaggle and Mendeley repositories. Extensive experimentation and analysis reveal that the CNN model achieves the highest accuracy of 98%, while LSTM shows the lowest accuracy of 96%. These findings underscore the potential of ML techniques in enhancing phishing detection systems and bolstering cybersecurity measures against evolving phishing tactics, offering a promising avenue for safeguarding sensitive information and online security.
Keywords: Artificial neural networks; phishing websites; network security; machine learning; phishing datasets; classification
Framework for TCAD augmented machine learning on multi-I-V characteristics using convolutional neural network and multiprocessing
16
Authors: Thomas Hirtz, Steyn Huurman, He Tian, Yi Yang, Tian-Ling Ren 《Journal of Semiconductors》 EI CAS CSCD 2021, No. 12, pp. 86-94 (9 pages)
In a world where data is increasingly important for making breakthroughs, microelectronics is a field where data is sparse and hard to acquire. Only a few entities have the infrastructure that is required to automate the fabrication and testing of semiconductor devices. This infrastructure is crucial for generating sufficient data for the use of new information technologies. This situation generates a cleavage between most of the researchers and the industry. To address this issue, this paper introduces a widely applicable approach for creating custom datasets using simulation tools and parallel computing. The multi-I-V curves that we obtained were processed simultaneously using convolutional neural networks, which gave us the ability to predict a full set of device characteristics with a single inference. We prove the potential of this approach through two concrete examples of useful deep learning models that were trained using the generated data. We believe that this work can act as a bridge between the state of the art of data-driven methods and more classical semiconductor research, such as device engineering, yield engineering or process monitoring. Moreover, this research gives anybody the opportunity to start experimenting with deep neural networks and machine learning in the field of microelectronics, without the need for expensive experimentation infrastructure.
Keywords: machine learning; neural networks; semiconductor devices; simulation
Diffractive Deep Neural Networks at Visible Wavelengths (Cited by: 8)
17
Authors: Hang Chen, Jianan Feng, Minwei Jiang, Yiqun Wang, Jie Lin, Jiubin Tan, Peng Jin 《Engineering》 SCIE EI 2021, No. 10, pp. 1483-1491 (9 pages)
Optical deep learning based on diffractive optical elements offers unique advantages for parallel processing, computational speed, and power efficiency. One landmark method is the diffractive deep neural network (D^(2)NN) based on three-dimensional printing technology operated in the terahertz spectral range. Since the terahertz bandwidth involves limited interparticle coupling and material losses, this paper extends the D^(2)NN to visible wavelengths. A general theory including a revised formula is proposed to resolve any contradictions between wavelength, neuron size, and fabrication limitations. A novel visible-light D^(2)NN classifier is used to recognize unchanged targets (handwritten digits ranging from 0 to 9) and targets that have been changed (i.e., targets that have been covered or altered) at a visible wavelength of 632.8 nm. The obtained experimental classification accuracy (84%) and numerical classification accuracy (91.57%) quantify the match between the theoretical design and the fabricated system performance. The presented framework can be used to apply a D^(2)NN to various practical applications and to design other new applications.
Keywords: Optical computation; Optical neural networks; Deep learning; Optical machine learning; Diffractive deep neural networks
State of the art in applications of machine learning in steelmaking process modeling (Cited by: 6)
18
Authors: Runhao Zhang, Jian Yang 《International Journal of Minerals, Metallurgy and Materials》 SCIE EI CAS CSCD 2023, No. 11, pp. 2055-2075 (21 pages)
With the development of automation and informatization in the steelmaking industry, the human brain gradually fails to cope with the increasing amount of data generated during the steelmaking process. Machine learning technology provides a new method, beyond production experience and metallurgical principles, for dealing with large amounts of data. The application of machine learning in the steelmaking process has become a research hotspot in recent years. This paper provides an overview of the applications of machine learning in steelmaking process modeling, involving hot metal pretreatment, primary steelmaking, secondary refining, and some other aspects. The three most frequently used machine learning algorithms in steelmaking process modeling are the artificial neural network, support vector machine, and case-based reasoning, with proportions of 56%, 14%, and 10%, respectively. Collected data in steelmaking plants are frequently faulty; thus, data processing, especially data cleaning, is crucially important to the performance of machine learning models. The detection of variable importance can be used to optimize the process parameters and guide production. Machine learning is used in hot metal pretreatment modeling mainly for endpoint S content prediction. The predictions of the endpoints of element compositions and the process parameters are widely investigated in primary steelmaking. Machine learning is used in secondary refining modeling mainly for ladle furnace, Ruhrstahl–Heraeus, vacuum degassing, argon oxygen decarburization, and vacuum oxygen decarburization processes. Further development of machine learning in steelmaking process modeling can be realized through additional efforts in the construction of the data platform, the industrial transfer of research achievements to the practical steelmaking process, and the improvement of the universality of the machine learning models.
Keywords: machine learning; steelmaking process modeling; artificial neural network; support vector machine; case-based reasoning; data processing
Detection Collision Flows in SDN Based 5G Using Machine Learning Algorithms (Cited by: 1)
19
Authors: Aqsa Aqdus, Rashid Amin, Sadia Ramzan, Sultan S. Alshamrani, Abdullah Alshehri, El-Sayed M. El-kenawy 《Computers, Materials & Continua》 SCIE EI 2023, No. 1, pp. 1413-1435 (23 pages)
The rapid advancement of wireless communication is forming a hyper-connected 5G network in which billions of linked devices generate massive amounts of data. The traffic control and data forwarding functions are decoupled in software-defined networking (SDN), which allows the network to be programmable. Each switch in SDN keeps track of forwarding information in a flow table. The SDN switches must search the flow table for the flow rules that match the packets in order to handle incoming packets. Due to the vast quantity of data in data centres, the capacity of the flow table restricts the data plane's forwarding capabilities, yet the SDN must handle traffic from across the whole network. The flow table depends on Ternary Content-Addressable Memory (TCAM) for storing and quickly searching rules; it is restricted in capacity owing to its elevated cost and energy consumption. Whenever the flow table is abused and overflowing, the usual rules cannot be executed quickly. In this case, we consider low-rate flow table overflowing that causes collision flow rules to be installed and consumes excessive existing flow table capacity by delivering packets that do not fit the flow table at a low rate. This study introduces machine learning techniques for detecting and categorizing low-rate collision flows in the SDN flow table, using a Feed Forward Neural Network (FFNN), K-Means, and Decision Tree (DT). We generate two network topologies, Fat Tree and Simple Tree, with the Mininet simulator, coupled to the OpenDayLight (ODL) controller. The efficiency and efficacy of the suggested algorithms are assessed using several indicators such as query success rate, propagation delay, overall dropped packets, energy consumption, bandwidth usage, latency rate, and throughput. The findings show that the suggested technique for tackling the flow table congestion problem minimizes the number of flows while retaining the statistical consistency of the 5G network. By applying the proposed flow method and checking whether a packet may move from point A to point B without breaking certain rules, the evaluation tool examines every flow against a set of criteria. The FFNN combined with the DT and K-means algorithms obtains accuracies of 96.29% and 97.51%, respectively, in the identification of collision flows, when compared with existing methods from the literature.
Keywords: 5G networks; software-defined networking (SDN); OpenFlow; load balancing; machine learning (ML); feed forward neural network (FFNN); k-means; decision tree (DT)
Applying Neural-Network-Based Machine Learning to Additive Manufacturing: Current Applications, Challenges, and Future Perspectives (Cited by: 19)
20
Authors: Xinbo Qi, Guofeng Chen, Yong Li, Xuan Cheng, Changpeng Li 《Engineering》 SCIE EI 2019, No. 4, pp. 721-729 (9 pages)
Additive manufacturing (AM), also known as three-dimensional printing, is gaining increasing attention from academia and industry due to the unique advantages it has in comparison with traditional subtractive manufacturing. However, AM processing parameters are difficult to tune, since they can exert a huge impact on the printed microstructure and on the performance of the subsequent products. It is a difficult task to build a process-structure-property-performance (PSPP) relationship for AM using traditional numerical and analytical models. Today, the machine learning (ML) method has been demonstrated to be a valid way to perform complex pattern recognition and regression analysis without an explicit need to construct and solve the underlying physical models. Among ML algorithms, the neural network (NN) is the most widely used model due to the large datasets that are currently available, strong computational power, and sophisticated algorithm architectures. This paper overviews the progress of applying the NN algorithm to several aspects of the AM whole chain, including model design, in situ monitoring, and quality evaluation. Current challenges in applying NNs to AM and potential solutions for these problems are then outlined. Finally, future trends are proposed in order to provide an overall discussion of this interdisciplinary area.
Keywords: Additive manufacturing; 3D printing; neural network; machine learning; Algorithm