Journal Articles
9,578 articles found
1. Advancements in machine learning for material design and process optimization in the field of additive manufacturing
Authors: Hao-ran Zhou, Hao Yang, Huai-qian Li, Ying-chun Ma, Sen Yu, Jian Shi, Jing-chang Cheng, Peng Gao, Bo Yu, Zhi-quan Miao, Yan-peng Wei. China Foundry (SCIE, EI, CAS, CSCD), 2024, No. 2, pp. 101-115 (15 pages).
Additive manufacturing technology is highly regarded due to its advantages, such as high precision and the ability to address complex geometric challenges. However, the development of additive manufacturing processes is constrained by issues like unclear fundamental principles, complex experimental cycles, and high costs. Machine learning, as a novel artificial intelligence technology, has the potential to engage deeply in the development of additive manufacturing processes, assisting engineers in learning and developing new techniques. This paper provides a comprehensive overview of the research and applications of machine learning in the field of additive manufacturing, particularly in model design and process development. First, it introduces the background and significance of machine learning-assisted design in additive manufacturing processes. It then delves into the application of machine learning in additive manufacturing, focusing on model design and process guidance. Finally, it summarizes and forecasts the development trends of machine learning technology in the field of additive manufacturing.
Keywords: additive manufacturing, machine learning, material design, process optimization, intersection of disciplines, embedded machine learning
2. Reliable calculations of nuclear binding energies by the Gaussian process of machine learning
Authors: Zi-Yi Yuan, Dong Bai, Zhen Wang, Zhong-Zhou Ren. Nuclear Science and Techniques (SCIE, EI, CAS, CSCD), 2024, No. 6, pp. 130-144 (15 pages).
Reliable calculations of nuclear binding energies are crucial for advancing the research of nuclear physics. Machine learning provides an innovative approach to exploring complex physical problems. In this study, the nuclear binding energies are modeled directly using a machine-learning method called the Gaussian process. First, the binding energies for 2238 nuclei with Z > 20 and N > 20 are calculated using the Gaussian process in a physically motivated feature space, yielding an average deviation of 0.046 MeV and a standard deviation of 0.066 MeV. The results show the good learning ability of the Gaussian process in the studies of binding energies. Then, the predictive power of the Gaussian process is studied by calculating the binding energies for 108 nuclei newly included in AME2020. The theoretical results are in good agreement with the experimental data, reflecting the good predictive power of the Gaussian process. Moreover, the α-decay energies for 1169 nuclei with 50 ≤ Z ≤ 110 are derived from the theoretical binding energies calculated using the Gaussian process. The average deviation and the standard deviation are, respectively, 0.047 MeV and 0.070 MeV. Notably, the calculated α-decay energies for the two new isotopes 204Ac (Huang et al. Phys Lett B 834, 137484 (2022)) and 207Th (Yang et al. Phys Rev C 105, L051302 (2022)) agree well with the latest experimental data. These results demonstrate that the Gaussian process is reliable for the calculations of nuclear binding energies. Finally, the α-decay properties of some unknown actinide nuclei are predicted using the Gaussian process. The predicted results can be useful guides for future research on binding energies and α-decay properties.
Keywords: nuclear binding energies, decay, machine learning, Gaussian process
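The entry above describes fitting binding energies with Gaussian-process regression in a physically motivated feature space. A minimal sketch of that idea follows, assuming a toy (Z, N)-based feature set and placeholder targets; it is illustrative only and is not the authors' code or data.

```python
# Minimal sketch: Gaussian-process regression of binding energies, assuming a
# simple (Z, N)-based feature space; not the feature set used in the paper.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def features(Z, N):
    A = Z + N
    # Illustrative inputs: proton/neutron numbers, mass number,
    # isospin asymmetry, and simple parity (pairing) indicators.
    return np.column_stack([Z, N, A, (N - Z) / A, Z % 2, N % 2])

# Placeholder training data; real targets would come from AME2020.
rng = np.random.default_rng(0)
Z_train = rng.integers(21, 110, size=200)
N_train = rng.integers(21, 160, size=200)
y_train = 8.0 * (Z_train + N_train) + rng.normal(0, 2, size=200)  # dummy binding energies (MeV)

kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(features(Z_train, N_train), y_train)

# Predict with uncertainty for a new nucleus, e.g. Z=90, N=117.
mean, std = gp.predict(features(np.array([90]), np.array([117])), return_std=True)
print(f"predicted binding energy: {mean[0]:.3f} +/- {std[0]:.3f} MeV")
```

The per-prediction standard deviation returned by the Gaussian process is what makes it attractive for extrapolation studies such as the AME2020 test described above.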
3. Predicting grain size-dependent superplastic properties in friction stir processed ZK30 magnesium alloy with machine learning methods
Authors: Farid Bahari-Sambran, Fernando Carreno, C.M. Cepeda-Jiménez, Alberto Orozco-Caballero. Journal of Magnesium and Alloys (SCIE, EI, CAS, CSCD), 2024, No. 5, pp. 1931-1943 (13 pages).
The aim of this work is to predict, for the first time, the high-temperature flow stress dependency on the grain size and the underlying deformation mechanism using two machine learning models, random forest (RF) and artificial neural network (ANN). With that purpose, a ZK30 magnesium alloy was friction stir processed (FSP) using three different severe conditions to obtain fine-grain microstructures (with average grain sizes between 2 and 3 μm) prone to extensive superplastic response. The three friction stir processed samples clearly deformed by the grain boundary sliding (GBS) deformation mechanism at high temperatures. The maximum elongations to failure, well over 400% at a high strain rate of 10^(-2) s^(-1), were reached at 400 ℃ in the material with the coarsest grain size of 2.8 μm, and at 300 ℃ for the finest grain size of 2 μm. Nevertheless, the superplastic response decreased at 350 ℃ and 400 ℃ due to thermal instabilities and grain coarsening, which makes it difficult to assess the operative deformation mechanism at such temperatures. This work highlights that the machine learning models considered, especially the ANN model with higher accuracy in predicting flow stress values, allow adequately determining the superplastic creep behavior, including other possible grain size scenarios.
Keywords: machine learning, artificial intelligence, magnesium alloys, superplasticity, friction stir processing, grain coarsening
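As a concrete illustration of the regression task in the entry above, here is a minimal random-forest sketch predicting flow stress from grain size, temperature, and strain rate; the feature choice and placeholder data are assumptions, not the authors' dataset.

```python
# Minimal sketch: random-forest prediction of high-temperature flow stress from
# grain size, temperature and strain rate (placeholder data, illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
# Columns: grain size (um), temperature (C), log10(strain rate, 1/s).
X = np.column_stack([rng.uniform(2.0, 3.0, 500),
                     rng.uniform(250, 450, 500),
                     rng.uniform(-4, -1, 500)])
y = 50 - 0.05 * X[:, 1] + 10 * X[:, 2] + rng.normal(0, 1, 500)  # dummy flow stress (MPa)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", r2_score(y_te, rf.predict(X_te)))
print("feature importances (grain size, T, strain rate):", rf.feature_importances_)
```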
4. Prediction of corrosion rate for friction stir processed WE43 alloy by combining PSO-based virtual sample generation and machine learning
Authors: Annayath Maqbool, Abdul Khalad, Noor Zaman Khan. Journal of Magnesium and Alloys (SCIE, EI, CAS, CSCD), 2024, No. 4, pp. 1518-1528 (11 pages).
The corrosion rate is a crucial factor that impacts the longevity of materials in different applications. After undergoing friction stir processing (FSP), the refined grain structure leads to a notable decrease in corrosion rate. However, a better understanding of the correlation between the FSP process parameters and the corrosion rate is still lacking. The current study used machine learning to establish the relationship between the corrosion rate and FSP process parameters (rotational speed, traverse speed, and shoulder diameter) for WE43 alloy. The Taguchi L27 design of experiments was used for the experimental analysis. In addition, synthetic data was generated using particle swarm optimization for virtual sample generation (VSG). The application of VSG has led to an increase in the prediction accuracy of machine learning models. A sensitivity analysis was performed using Shapley Additive Explanations to determine the key factors affecting the corrosion rate. The shoulder diameter had a significant impact in comparison to the traverse speed. A graphical user interface (GUI) has been created to predict the corrosion rate using the identified factors. This study focuses on the WE43 alloy, but its findings can also be used to predict the corrosion rate of other magnesium alloys.
Keywords: corrosion rate, friction stir processing, virtual sample generation, particle swarm optimization, machine learning, graphical user interface
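The sensitivity-analysis step in the entry above (Shapley Additive Explanations over the three FSP parameters) can be sketched as follows; the model and data are placeholders, not the study's Taguchi L27 results.

```python
# Minimal sketch of SHAP-based sensitivity analysis on a tree model; the three
# inputs mirror the FSP parameters named in the abstract, data are placeholders.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
# rotational speed (rpm), traverse speed (mm/min), shoulder diameter (mm)
X = np.column_stack([rng.uniform(800, 1600, 200),
                     rng.uniform(20, 80, 200),
                     rng.uniform(10, 20, 200)])
y = 0.002 * X[:, 2] ** 2 + 0.0005 * X[:, 1] + rng.normal(0, 0.01, 200)  # dummy corrosion rate

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per feature = its overall contribution to predictions.
importance = np.abs(shap_values).mean(axis=0)
for name, imp in zip(["rotational speed", "traverse speed", "shoulder diameter"], importance):
    print(f"{name}: {imp:.4f}")
```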
5. State of the art in applications of machine learning in steelmaking process modeling
Authors: Runhao Zhang, Jian Yang. International Journal of Minerals, Metallurgy and Materials (SCIE, EI, CAS, CSCD), 2023, No. 11, pp. 2055-2075 (21 pages). Cited 6 times.
With the development of automation and informatization in the steelmaking industry, the human brain gradually fails to cope with an increasing amount of data generated during the steelmaking process. Machine learning technology provides a new method other than production experience and metallurgical principles in dealing with large amounts of data. The application of machine learning in the steelmaking process has become a research hotspot in recent years. This paper provides an overview of the applications of machine learning in steelmaking process modeling involving hot metal pretreatment, primary steelmaking, secondary refining, and some other aspects. The three most frequently used machine learning algorithms in steelmaking process modeling are the artificial neural network, support vector machine, and case-based reasoning, demonstrating proportions of 56%, 14%, and 10%, respectively. Collected data in the steelmaking plants are frequently faulty. Thus, data processing, especially data cleaning, is crucially important to the performance of machine learning models. The detection of variable importance can be used to optimize the process parameters and guide production. Machine learning is used in hot metal pretreatment modeling mainly for endpoint S content prediction. The predictions of the endpoints of element compositions and the process parameters are widely investigated in primary steelmaking. Machine learning is used in secondary refining modeling mainly for ladle furnaces, Ruhrstahl–Heraeus, vacuum degassing, argon oxygen decarburization, and vacuum oxygen decarburization processes. Further development of machine learning in steelmaking process modeling can be realized through additional efforts in the construction of the data platform, the industrial transformation of the research achievements to the practical steelmaking process, and the improvement of the universality of the machine learning models.
Keywords: machine learning, steelmaking process modeling, artificial neural network, support vector machine, case-based reasoning, data processing
6. Workpiece Locating and Post Processing Systems on 6-DOF CNC Milling Machine
Authors: Wang Rui, Zhong Shisheng, Wang Zhixing. Transactions of Nanjing University of Aeronautics and Astronautics (EI), 2006, No. 2, pp. 138-143 (6 pages).
A conventional, non-computerized numerical control machine is updated by mounting a six degree-of-freedom (DOF) parallel mechanism on it, thus obtaining a new CNC machine. The structure of this CNC milling machine is introduced, and the workpiece locating system and the post-processing system for the cutter location (CL) data file are analyzed. The new machine has the advantages of low cost, simple structure, good rigidity, and high precision. It is easy to transform and can be used to machine workpieces with complex surfaces.
Keywords: parallel kinematic machine, CNC milling machine, workpiece locating system, post-processing system
7. Machine learning-driven optimization of plasma-catalytic dry reforming of methane
Authors: Yuxiang Cai, Danhua Mei, Yanzhen Chen, Annemie Bogaerts, Xin Tu. Journal of Energy Chemistry (SCIE, EI, CAS, CSCD), 2024, No. 9, pp. 153-163 (11 pages).
This study investigates the dry reforming of methane (DRM) over Ni/Al2O3 catalysts in a dielectric barrier discharge (DBD) non-thermal plasma reactor. A novel hybrid machine learning (ML) model is developed to optimize the plasma-catalytic DRM reaction with limited experimental data. To address the non-linear and complex nature of the plasma-catalytic DRM process, the hybrid ML model integrates three well-established algorithms: regression trees, support vector regression, and artificial neural networks. A genetic algorithm (GA) is then used to optimize the hyperparameters of each algorithm within the hybrid ML model. The ML model achieved excellent agreement with the experimental data, demonstrating its efficacy in accurately predicting and optimizing the DRM process. The model was subsequently used to investigate the impact of various operating parameters on the plasma-catalytic DRM performance. We found that the optimal discharge power (20 W), CO2/CH4 molar ratio (1.5), and Ni loading (7.8 wt%) resulted in the maximum energy yield at a total flow rate of ~51 mL/min. Furthermore, we investigated the relative significance of each operating parameter on the performance of the plasma-catalytic DRM process. The results show that the total flow rate had the greatest influence on the conversion, with a significance exceeding 35% for each output, while the Ni loading had the least impact on the overall reaction performance. This hybrid model demonstrates a remarkable ability to extract valuable insights from limited datasets, enabling the development and optimization of more efficient and selective plasma-catalytic chemical processes.
Keywords: plasma catalysis, machine learning, process optimization, dry reforming of methane, syngas production
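To make the GA hyperparameter-tuning step in the entry above concrete, here is a minimal sketch that tunes only one of the three component models (an SVR) with a tiny mutation-only genetic loop; the data, search ranges, and the omission of crossover and of the regression-tree/ANN components are all simplifications for illustration.

```python
# Minimal sketch of GA-style hyperparameter tuning for one component model (SVR);
# the paper's hybrid model also includes regression trees and an ANN, omitted here.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
# Placeholder data: operating parameters -> energy yield (dummy values).
X = rng.uniform(0, 1, size=(60, 4))      # power, CO2/CH4 ratio, Ni loading, flow rate
y = X @ np.array([0.5, 0.3, 0.1, 0.8]) + rng.normal(0, 0.05, 60)

def fitness(ind):
    C, gamma = ind
    model = SVR(C=C, gamma=gamma)
    return cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error").mean()

# Tiny GA over (C, gamma): truncation selection + log-scale Gaussian mutation.
pop = [(10 ** rng.uniform(-1, 3), 10 ** rng.uniform(-3, 1)) for _ in range(20)]
for _ in range(10):
    parents = sorted(pop, key=fitness, reverse=True)[:10]
    children = [(max(1e-3, c * 10 ** rng.normal(0, 0.2)),
                 max(1e-4, g * 10 ** rng.normal(0, 0.2))) for c, g in parents]
    pop = parents + children

best = max(pop, key=fitness)
print("best (C, gamma):", best, "CV score:", fitness(best))
```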
8. A Cooperated Imperialist Competitive Algorithm for Unrelated Parallel Batch Machine Scheduling Problem
Authors: Deming Lei, Heen Li. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 1855-1874 (20 pages).
This study focuses on the scheduling problem of unrelated parallel batch processing machines (BPM) with release times, a scenario derived from the moulding process in a foundry. In this process, a batch is initially formed, placed in a sandbox, and then the sandbox is positioned on a BPM for moulding. The complexity of the scheduling problem increases due to the consideration of BPM capacity and sandbox volume. To minimize the makespan, a new cooperated imperialist competitive algorithm (CICA) is introduced. In CICA, the number of empires is not a parameter, and four empires are maintained throughout the search process. Two types of assimilation are achieved: the strongest and weakest empires cooperate in their assimilation, while the remaining two empires, having a close normalization total cost, combine in their assimilation. A new form of imperialist competition is proposed to prevent insufficient competition, and the unique features of the problem are effectively utilized. Computational experiments are conducted across several instances, and a significant amount of experimental results show that the new strategies of CICA are effective, indicating promising advantages for the considered BPM scheduling problems.
Keywords: release time, assimilation, imperialist competitive algorithm, batch processing machines, scheduling
9. Machine Learning Techniques Using Deep Instinctive Encoder-Based Feature Extraction for Optimized Breast Cancer Detection
Authors: Vaishnawi Priyadarshni, Sanjay Kumar Sharma, Mohammad Khalid Imam Rahmani, Baijnath Kaushik, Rania Almajalid. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 2441-2468 (28 pages).
Breast cancer (BC) is one of the leading causes of death among women worldwide, as it has emerged as the most commonly diagnosed malignancy in women. Early detection and effective treatment of BC can help save women's lives. Developing an efficient technology-based detection system can lead to non-destructive and preliminary cancer detection techniques. This paper proposes a comprehensive framework that can effectively diagnose cancerous cells from benign cells using the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) data set. The novelty of the proposed framework lies in the integration of various techniques, where the fusion of deep learning (DL), traditional machine learning (ML) techniques, and enhanced classification models have been deployed using the curated dataset. The analysis outcome proves that the proposed enhanced RF (ERF), enhanced DT (EDT), and enhanced LR (ELR) models for BC detection outperformed most of the existing models with impressive results.
Keywords: autoencoder, breast cancer, deep neural network, convolutional neural network, image processing, machine learning, deep learning
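The entry above fuses deep feature extraction with enhanced classical classifiers. A minimal sketch of that general pattern follows, using a small Keras autoencoder whose bottleneck features feed a random forest; the architecture, sizes, and placeholder data are assumptions, not the paper's enhanced models or the CBIS-DDSM set.

```python
# Minimal sketch: encoder-based feature extraction feeding a random-forest classifier.
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(500, 64 * 64)).astype("float32")  # flattened image patches (placeholder)
y = rng.integers(0, 2, size=500)                               # 0 = benign, 1 = malignant (placeholder)

inputs = tf.keras.Input(shape=(64 * 64,))
encoded = tf.keras.layers.Dense(128, activation="relu")(inputs)
bottleneck = tf.keras.layers.Dense(32, activation="relu")(encoded)
decoded = tf.keras.layers.Dense(64 * 64, activation="sigmoid")(bottleneck)

autoencoder = tf.keras.Model(inputs, decoded)
encoder = tf.keras.Model(inputs, bottleneck)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)  # unsupervised reconstruction

Z = encoder.predict(X, verbose=0)                # 32-dimensional learned features
Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Z_tr, y_tr)
print("held-out accuracy:", clf.score(Z_te, y_te))
```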
10. Terrorism Attack Classification Using Machine Learning: The Effectiveness of Using Textual Features Extracted from GTD Dataset
Authors: Mohammed Abdalsalam, Chunlin Li, Abdelghani Dahou, Natalia Kryvinska. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 2, pp. 1427-1467 (41 pages).
One of the biggest dangers to society today is terrorism, where attacks have become one of the most significant risks to international peace and national security. Big data, information analysis, and artificial intelligence (AI) have become the basis for making strategic decisions in many sensitive areas, such as fraud detection, risk management, medical diagnosis, and counter-terrorism. However, there is still a need to assess how terrorist attacks are related, initiated, and detected. For this purpose, we propose a novel framework for classifying and predicting terrorist attacks. The proposed framework posits that neglected text attributes included in the Global Terrorism Database (GTD) can influence the accuracy of the model's classification of terrorist attacks, where each part of the data can provide vital information to enrich the ability of classifier learning. Each data point in a multiclass taxonomy has one or more tags attached to it, referred to as "related tags." We applied machine learning classifiers to classify terrorist attack incidents obtained from the GTD. A transformer-based technique called DistilBERT extracts and learns contextual features from text attributes to acquire more information from text data. The extracted contextual features are combined with the "key features" of the dataset and used to perform the final classification. The study explored different experimental setups with various classifiers to evaluate the model's performance. The experimental results show that the proposed framework outperforms the latest techniques for classifying terrorist attacks with an accuracy of 98.7% using a combined feature set and extreme gradient boosting classifier.
Keywords: artificial intelligence, machine learning, natural language processing, data analytics, DistilBERT, feature extraction, terrorism classification, GTD dataset
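The pipeline in the entry above (DistilBERT text features combined with tabular key features and an extreme gradient boosting classifier) can be sketched roughly as follows; the example texts, tabular attributes, and labels are placeholders rather than actual GTD fields.

```python
# Minimal sketch: DistilBERT contextual features from text attributes, concatenated
# with tabular "key features", fed to an XGBoost classifier. Illustrative data only.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from xgboost import XGBClassifier

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
bert = AutoModel.from_pretrained("distilbert-base-uncased")

def embed(texts):
    """Masked mean-pooled DistilBERT embeddings for a list of short texts."""
    enc = tokenizer(texts, padding=True, truncation=True, max_length=64, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc).last_hidden_state          # (batch, seq_len, 768)
    mask = enc["attention_mask"].unsqueeze(-1)
    return ((out * mask).sum(1) / mask.sum(1)).numpy()

texts = ["armed assault on a checkpoint", "bombing of a government building"]  # placeholder summaries
key_features = np.array([[1, 0, 3], [0, 1, 5]], dtype=float)                   # placeholder tabular attributes
labels = np.array([0, 1])                                                      # placeholder attack-type classes

X = np.hstack([embed(texts), key_features])
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(X, labels)
print(clf.predict(X))
```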
11. Comparative Analysis of Machine Learning Algorithms for Email Phishing Detection Using TF-IDF, Word2Vec, and BERT
Authors: Arar Al Tawil, Laiali Almazaydeh, Doaa Qawasmeh, Baraah Qawasmeh, Mohammad Alshinwan, Khaled Elleithy. Computers, Materials & Continua (SCIE, EI), 2024, No. 11, pp. 3395-3412 (18 pages).
Cybercriminals often use fraudulent emails and fictitious email accounts to deceive individuals into disclosing confidential information, a practice known as phishing. This study utilizes three distinct methodologies, Term Frequency-Inverse Document Frequency, Word2Vec, and Bidirectional Encoder Representations from Transformers, to evaluate the effectiveness of various machine learning algorithms in detecting phishing attacks. The study uses feature extraction methods to assess the performance of Logistic Regression, Decision Tree, Random Forest, and Multilayer Perceptron algorithms. The best results for each classifier using Term Frequency-Inverse Document Frequency were Multilayer Perceptron (Precision: 0.98, Recall: 0.98, F1-score: 0.98, Accuracy: 0.98). Word2Vec's best results were Multilayer Perceptron (Precision: 0.98, Recall: 0.98, F1-score: 0.98, Accuracy: 0.98). The highest performance was achieved using the Bidirectional Encoder Representations from Transformers model, with Precision, Recall, F1-score, and Accuracy all reaching 0.99. This study highlights how advanced pre-trained models, such as Bidirectional Encoder Representations from Transformers, can significantly enhance the accuracy and reliability of fraud detection systems.
Keywords: attacks, email phishing, machine learning, security, Bidirectional Encoder Representations from Transformers (BERT), text classifier, natural language processing (NLP)
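One of the configurations compared in the entry above is TF-IDF features with a multilayer perceptron. A minimal, self-contained sketch of that combination is below; the two example emails and their labels are placeholders.

```python
# Minimal sketch: TF-IDF features with an MLP classifier for phishing detection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended, verify your password at this link now",
    "Meeting moved to 3 pm, agenda attached for tomorrow's review",
]
labels = [1, 0]  # 1 = phishing, 0 = legitimate (placeholder labels)

model = make_pipeline(
    TfidfVectorizer(lowercase=True, stop_words="english", ngram_range=(1, 2)),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
)
model.fit(emails, labels)
print(model.predict(["Urgent: confirm your password to avoid suspension"]))
```

Swapping the first pipeline stage for Word2Vec or BERT embeddings reproduces the other two feature-extraction setups the study compares.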
12. An Example of Machine Vision Applied in Printing Quality Checking: Research on the Checking of Printing Quality by Image Processing
Authors: Tang Wanyou, Wang Wenfeng. Microcomputer Information (Peking University Core), 2008, No. 6, pp. 45-47 (3 pages). Cited 5 times.
The traditional printing checking method always uses printing control strips, but the results are not very good in terms of repeatability and stability. In this paper, image-based checking methods for printing quality are taken as the research object. On the basis of the traditional checking methods of printing quality, and by combining the methods and theory of digital image processing with printing theory in the new domain of image quality checking, a printing quality checking system based on image processing is constructed, and the theoretical design and the model of this system are expounded. This is an application of machine vision. The system uses a high-resolution industrial CCD (charge-coupled device) color camera. It can display real-time photographs on the monitor and input the video signal to the image acquisition card, and the image data are then transmitted through the computer's PCI bus to memory. At the same time, the system carries out processing and data analysis. The method is verified by experiments, which mainly concern image data conversion and the display of ink amounts in printing.
Keywords: machine vision, printing quality inspection, image processing, data conversion, ink amount display
13. Machine Learning Approaches for the Solution of the Riemann Problem in Fluid Dynamics: a Case Study
Authors: Vitaly Gyrya, Mikhail Shashkov, Alexei Skurikhin, Svetlana Tokareva. Communications on Applied Mathematics and Computation (EI), 2024, No. 3, pp. 1832-1859 (28 pages).
We present our results from using a machine learning (ML) approach for the solution of the Riemann problem for the Euler equations of fluid dynamics. The Riemann problem is an initial-value problem with piecewise-constant initial data, and it represents a mathematical model of the shock tube. The solution of the Riemann problem is the building block for many numerical algorithms in computational fluid dynamics, such as finite-volume or discontinuous Galerkin methods. Therefore, a fast and accurate approximation of the solution of the Riemann problem and construction of the associated numerical fluxes is of crucial importance. The exact solution of the shock tube problem is fully described by the intermediate pressure and mathematically reduces to finding a solution of a nonlinear equation. Prior to delving into the complexities of ML for the Riemann problem, we consider a much simpler, yet very informative, problem of learning roots of quadratic equations based on their coefficients. We compare two approaches: (i) Gaussian process (GP) regressions, and (ii) neural network (NN) approximations. Among these approaches, NNs prove to be more robust and efficient, although GP can be appreciably more accurate (about 30%). We then use our experience with the quadratic equation to apply the GP and NN approaches to learn the exact solution of the Riemann problem from the initial data or coefficients of the gas equation of state (EOS). We compare GP and NN approximations in both regression and classification analysis and discuss the potential benefits and drawbacks of the ML approach.
Keywords: machine learning (ML), neural network (NN), Gaussian process (GP), Riemann problem, numerical fluxes, finite-volume method
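The warm-up problem in the entry above, learning a root of a quadratic from its coefficients and comparing GP against NN regression, is easy to reproduce in a toy form; the sketch below uses synthetic coefficients chosen so the discriminant stays positive, and its hyperparameters are arbitrary.

```python
# Minimal sketch: learn the larger root of a*x^2 + b*x + c = 0 from (a, b, c),
# comparing a Gaussian process against a small neural network.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(5)
a = rng.uniform(0.5, 2.0, 2000)
b = rng.uniform(1.0, 3.0, 2000)
c = rng.uniform(-2.0, -0.5, 2000)            # c < 0 keeps the discriminant positive
X = np.column_stack([a, b, c])
root = (-b + np.sqrt(b**2 - 4*a*c)) / (2*a)  # target: the larger root

X_tr, X_te, y_tr, y_te = X[:1500], X[1500:], root[:1500], root[1500:]

gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_tr[:400], y_tr[:400])               # GP kept small: O(n^3) training cost
nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("GP MAE:", mean_absolute_error(y_te, gp.predict(X_te)))
print("NN MAE:", mean_absolute_error(y_te, nn.predict(X_te)))
```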
14. LKMT: Linguistics Knowledge-Driven Multi-Task Neural Machine Translation for Urdu and English
Authors: Muhammad Naeem Ul Hassan, Zhengtao Yu, Jian Wang, Ying Li, Shengxiang Gao, Shuwan Yang, Cunli Mao. Computers, Materials & Continua (SCIE, EI), 2024, No. 10, pp. 951-969 (19 pages).
Thanks to the strong representation capability of pre-trained language models, supervised machine translation models have achieved outstanding performance. However, the performance of these models drops sharply when the scale of the parallel training corpus is limited. Considering that the pre-trained language model has a strong ability for monolingual representation, the key challenge for machine translation is to construct an in-depth relationship between the source and target languages by injecting lexical and syntactic information into pre-trained language models. To alleviate the dependence on the parallel corpus, we propose a Linguistics Knowledge-Driven Multi-Task (LKMT) approach to inject part-of-speech and syntactic knowledge into pre-trained models, thus enhancing machine translation performance. On the one hand, we integrate part-of-speech and dependency labels into the embedding layer and exploit a large-scale monolingual corpus to update all parameters of the pre-trained language model, thus ensuring the updated language model contains potential lexical and syntactic information. On the other hand, we leverage an extra self-attention layer to explicitly inject linguistic knowledge into the pre-trained language model-enhanced machine translation model. Experiments on the benchmark dataset show that our proposed LKMT approach improves the Urdu-English translation accuracy by 1.97 points and the English-Urdu translation accuracy by 2.42 points, highlighting the effectiveness of our LKMT framework. Detailed ablation experiments confirm the positive impact of part-of-speech and dependency parsing on machine translation.
Keywords: Urdu NMT (neural machine translation), Urdu natural language processing, Urdu linguistic features, low-resource language, linguistic features, pre-trained model
15. Study on Licker-In and Flat Speeds of Carding Machine and Its Effects on Quality of Cotton Spinning Process
Authors: Md. Mominul Motin, Ayub Nabi Khan, Md. Obaidur Rahman. Journal of Textile Science and Technology, 2023, No. 3, pp. 198-214 (17 pages). Cited 1 time.
Spinning has a significant influence on all textile processes. Combinations of all the capital equipment display the process' critical condition. By transforming unprocessed fibers into carded sliver and yarn, the carding machine serves a critical role in the textile industry. The carding machine's licker-in and flat speeds are crucial operational factors that have a big influence on the finished goods' quality. The purpose of this study is to examine the link between licker-in and flat speeds and how they affect the yarn and carded sliver quality. A thorough experimental examination on a carding machine was carried out to accomplish this. The carded sliver and yarn produced after experimenting with different licker-in and flat speed combinations were assessed for important quality factors including evenness, strength, and flaws. To account for changes in material qualities and machine settings, the study also took into consideration the impact of various fiber kinds and processing circumstances. The findings of the investigation showed a direct relationship between the quality of the carded sliver and yarn and the licker-in and flat speeds. Within a limited range, greater licker-in speeds were shown to increase carding efficiency and decrease fiber tangling. On the other hand, extremely high speeds led to more fiber breakage and neps. Higher flat speeds, on the other hand, helped to enhance fiber alignment, which increased the evenness and strength of the carded sliver and yarn. Additionally, it was discovered that the ideal blend of licker-in and flat rates varied based on the fiber type and processing circumstances. When being carded, various fibers displayed distinctive behaviors that necessitated adjusting the operating settings in order to provide the necessary quality results. The study also determined the crucial speed ratios between the licker-in and flat speeds that reduced fiber breakage and increased the caliber of the finished goods. The results of this study offer useful information for textile producers and process engineers to improve the quality of carded sliver and yarn while maximizing the performance of carding machines. Operators may choose machine settings and parameter adjustments wisely by knowing the impacts of licker-in and flat speeds, which will increase textile industry efficiency, productivity, and product quality.
Keywords: spinning process, carding machine, yarn count, flat, licker-in, sliver hank
16. Modal Frequency Prediction of Chladni Patterns Using Machine Learning
Authors: Atul Kumar, K. P. Wani. Open Journal of Acoustics, 2024, No. 1, pp. 1-16 (16 pages).
The introduction of machine learning (ML) into this research domain is a new-era technique. A machine learning algorithm is developed for frequency prediction of the patterns that form on a Chladni plate, with a focus on applying machine learning algorithms to image processing. On the Chladni plate, nodes and antinodes appear at various excitation frequencies: sand on the plate creates specific patterns when it is excited by vibrations from a mechanical oscillator. In the experimental setup, a rectangular aluminum plate of 16 cm x 16 cm and 0.61 mm thickness was placed over the mechanical oscillator, which was driven by a sine-wave signal generator. 14 Chladni patterns were obtained on the plate, and validation was done with modal analysis in Ansys. Machine learning requires a large number of data sets, so around 200 photos of each modal frequency, and around 3000 photos of all 14 Chladni patterns in total, were captured with a camera for supervised learning. The current model is written in Python and has one convolution layer; the main modules used are TensorFlow Keras, NumPy, CV2, and max pooling. The reference data are taken for 14 frequencies between 330 Hz and 3910 Hz. In the model, all images are converted to grayscale and Canny edge detected. All frequency patterns show an almost 80% - 99% correlation with the experimental test sample data. This approach builds a directory of Chladni patterns for future reference in real-life applications, and the machine learning algorithm can predict the resonant frequency from the patterns formed on the Chladni plate.
Keywords: Chladni pattern, modal analysis, machine learning, resonant frequency, image processing
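A minimal sketch in the spirit of the entry above: grayscale plus Canny edge preprocessing feeding a one-convolution-layer Keras classifier over 14 frequency classes. The image size, layer widths, and the random placeholder data are assumptions; the real inputs would be the photographed patterns.

```python
# Minimal sketch: Canny-edge preprocessing + one-convolution-layer Keras classifier
# for Chladni pattern images (14 frequency classes between 330 Hz and 3910 Hz).
import numpy as np
import cv2
import tensorflow as tf

NUM_CLASSES = 14
IMG_SIZE = 128

def preprocess(bgr_image):
    """Grayscale + Canny edge detection, as described in the abstract."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    edges = cv2.resize(edges, (IMG_SIZE, IMG_SIZE)) / 255.0
    return edges[..., np.newaxis].astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Placeholder training data; real inputs would be the ~3000 photographed patterns.
X = np.stack([preprocess(np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)) for _ in range(32)])
y = np.random.randint(0, NUM_CLASSES, 32)
model.fit(X, y, epochs=1, verbose=0)
```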
17. Study on Licker-In and Flat Speeds of Carding Machine and Its Effects on Quality of Cotton Spinning Process
Authors: Md. Mominul Motin, Ayub Nabi Khan, Md. Obaidur Rahman. Journal of Flow Control, Measurement & Visualization, 2023, No. 3, pp. 198-214 (17 pages).
Spinning has a significant influence on all textile processes. Combinations of all the capital equipment display the process' critical condition. By transforming unprocessed fibers into carded sliver and yarn, the carding machine serves a critical role in the textile industry. The carding machine's licker-in and flat speeds are crucial operational factors that have a big influence on the finished goods' quality. The purpose of this study is to examine the link between licker-in and flat speeds and how they affect the yarn and carded sliver quality. A thorough experimental examination on a carding machine was carried out to accomplish this. The carded sliver and yarn produced after experimenting with different licker-in and flat speed combinations were assessed for important quality factors including evenness, strength, and flaws. To account for changes in material qualities and machine settings, the study also took into consideration the impact of various fiber kinds and processing circumstances. The findings of the investigation showed a direct relationship between the quality of the carded sliver and yarn and the licker-in and flat speeds. Within a limited range, greater licker-in speeds were shown to increase carding efficiency and decrease fiber tangling. On the other hand, extremely high speeds led to more fiber breakage and neps. Higher flat speeds, on the other hand, helped to enhance fiber alignment, which increased the evenness and strength of the carded sliver and yarn. Additionally, it was discovered that the ideal blend of licker-in and flat rates varied based on the fiber type and processing circumstances. When being carded, various fibers displayed distinctive behaviors that necessitated adjusting the operating settings in order to provide the necessary quality results. The study also determined the crucial speed ratios between the licker-in and flat speeds that reduced fiber breakage and increased the caliber of the finished goods. The results of this study offer useful information for textile producers and process engineers to improve the quality of carded sliver and yarn while maximizing the performance of carding machines. Operators may choose machine settings and parameter adjustments wisely by knowing the impacts of licker-in and flat speeds, which will increase textile industry efficiency, productivity, and product quality.
Keywords: spinning process, carding machine, yarn count, flat, licker-in, sliver hank
18. Applications of advanced signal processing and machine learning in the neonatal hypoxic-ischemic electroencephalography
Authors: Hamid Abbasi, Charles P. Unsworth. Neural Regeneration Research (SCIE, CAS, CSCD), 2020, No. 2, pp. 222-231 (10 pages). Cited 5 times.
Perinatal hypoxic-ischemic encephalopathy significantly contributes to neonatal death and life-long disability such as cerebral palsy. Advances in signal processing and machine learning have provided the research community with an opportunity to develop automated real-time identification techniques to detect the signs of hypoxic-ischemic encephalopathy in larger electroencephalography/amplitude-integrated electroencephalography data sets more easily. This review details the recent achievements, performed by a number of prominent research groups across the world, in the automatic identification and classification of hypoxic-ischemic epileptiform neonatal seizures using advanced signal processing and machine learning techniques. This review also addresses the clinical challenges that current automated techniques face in order to be fully utilized by clinicians, and highlights the importance of upgrading the current clinical bedside sampling frequencies to higher sampling rates in order to provide better hypoxic-ischemic biomarker detection frameworks. Additionally, the article highlights that current clinical automated epileptiform detection strategies for human neonates have been concerned only with seizure detection after the therapeutic latent phase of injury, whereas recent animal studies have demonstrated that the latent phase of opportunity is critically important for early diagnosis of hypoxic-ischemic encephalopathy electroencephalography biomarkers; although difficult, detection strategies could utilize biomarkers in the latent phase to also predict the onset of future seizures.
Keywords: advanced signal processing, aEEG, automatic detection, classification, clinical EEG, fetal, HIE, hypoxic-ischemic encephalopathy, machine learning, neonatal seizure, real-time identification, review
19. The State-of-the-Art Review on Applications of Intrusive Sensing, Image Processing Techniques, and Machine Learning Methods in Pavement Monitoring and Analysis
Authors: Yue Hou, Qiuhan Li, Chen Zhang, Guoyang Lu, Zhoujing Ye, Yihan Chen, Linbing Wang, Dandan Cao. Engineering (SCIE, EI), 2021, No. 6, pp. 845-856 (12 pages). Cited 14 times.
In modern transportation, pavement is one of the most important civil infrastructures for the movement of vehicles and pedestrians. Pavement service quality and service life are of great importance for civil engineers, as they directly affect the regular service for the users. Therefore, monitoring the health status of pavement before irreversible damage occurs is essential for timely maintenance, which in turn ensures public transportation safety. Many pavement damages can be detected and analyzed by monitoring the structure dynamic responses and evaluating road surface conditions. Advanced technologies can be employed for the collection and analysis of such data, including various intrusive sensing techniques, image processing techniques, and machine learning methods. This review summarizes the state-of-the-art of these three technologies in pavement engineering in recent years and suggests possible developments for future pavement monitoring and analysis based on these approaches.
Keywords: pavement monitoring and analysis, state-of-the-art review, intrusive sensing, image processing techniques, machine learning methods
20. Recognition and Classification of Pomegranate Leaves Diseases by Image Processing and Machine Learning Techniques
Authors: Mangena Venu Madhavan, Dang Ngoc Hoang Thanh, Aditya Khamparia, Sagar Pande, Rahul Malik, Deepak Gupta. Computers, Materials & Continua (SCIE, EI), 2021, No. 3, pp. 2939-2955 (17 pages). Cited 1 time.
Disease recognition in plants is one of the essential problems in agricultural image processing. This article focuses on designing a framework that can recognize and classify diseases on pomegranate plants exactly. The framework utilizes image processing techniques such as image acquisition, image resizing, image enhancement, image segmentation, ROI (region of interest) extraction, and feature extraction. An image dataset related to pomegranate leaf disease is used to implement the framework, divided into a training set and a test set. In the implementation process, techniques such as image enhancement and image segmentation are primarily used for identifying the ROI and features. Image classification is then implemented by combining a supervised learning model with a support vector machine. The proposed framework is developed in MATLAB with a graphical user interface. According to the experimental results, the proposed framework can achieve 98.39% accuracy for classifying diseased and healthy leaves. Moreover, the framework can achieve an accuracy of 98.07% for classifying diseases on pomegranate leaves.
Keywords: image enhancement, image segmentation, image processing for agriculture, k-means, multi-class support vector machine
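A rough Python sketch of the pipeline in the entry above follows: k-means colour segmentation to isolate a candidate lesion region, a few simple colour features, and a multi-class SVM. The original framework is MATLAB-based, so everything here (segmentation heuristic, features, placeholder images, and class labels) is an assumption for illustration.

```python
# Minimal sketch: k-means colour segmentation -> simple colour features -> multi-class SVM.
import numpy as np
import cv2
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def segment_lesions(bgr_image, k=3):
    """Cluster pixel colours with k-means and return the mask of the darkest cluster,
    used here as a stand-in for the diseased region of interest."""
    pixels = bgr_image.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    darkest = np.argmin(km.cluster_centers_.sum(axis=1))
    return (km.labels_ == darkest).reshape(bgr_image.shape[:2])

def extract_features(bgr_image):
    mask = segment_lesions(bgr_image)
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lesion_ratio = mask.mean()  # fraction of the leaf covered by the candidate lesion region
    mean_hsv = hsv.reshape(-1, 3)[mask.reshape(-1)].mean(axis=0) if mask.any() else np.zeros(3)
    return np.concatenate([[lesion_ratio], mean_hsv])

# Placeholder images and labels; real data would be the pomegranate leaf dataset.
images = [np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8) for _ in range(20)]
labels = np.random.randint(0, 3, 20)  # e.g. healthy / bacterial blight / leaf spot (placeholder classes)
X = np.vstack([extract_features(img) for img in images])
clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, labels)
print(clf.predict(X[:5]))
```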