Journal articles: 1,359 results found (showing results 1-20)
1. Multiple Targets Localization Algorithm Based on Covariance Matrix Sparse Representation and Bayesian Learning
Authors: Jichuan Liu, Xiangzhi Meng, Shengjie Wang. Journal of Beijing Institute of Technology (EI, CAS), 2024, No. 2, pp. 119-129 (11 pages).
The multi-source passive localization problem is of great interest in signal processing and has many applications. In this paper, a sparse representation model based on the covariance matrix is constructed for the long-range localization scenario, and a sparse Bayesian learning algorithm based on a Laplace prior on the signal covariance is developed for the basis mismatch problem caused by target deviation from the initial grid points. An adaptive grid sparse Bayesian learning target localization (AGSBL) algorithm is proposed. The AGSBL algorithm implements covariance-based sparse signal reconstruction and grid-adaptive localization dictionary learning. Simulation results show that the AGSBL algorithm outperforms traditional compressed-sensing localization algorithms for different signal-to-noise ratios and different numbers of targets in long-range scenes.
Keywords: grid adaptive model; Bayesian learning; multi-source localization
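The AGSBL specifics (the Laplace prior on the signal covariance and the adaptive grid refinement) cannot be reproduced from the abstract alone, but the fixed-grid sparse Bayesian learning loop that such methods build on can be sketched as follows; the initializations, update order, and function name are illustrative assumptions, not the paper's implementation.
```python
import numpy as np

def sbl_recover(Phi, y, n_iter=100, tol=1e-6):
    """EM-style sparse Bayesian learning over a fixed localization grid.

    Phi : (M, G) dictionary whose columns correspond to candidate grid points.
    y   : (M,) observation vector (e.g., a vectorized covariance statistic).
    Returns the posterior mean and the per-grid-point power estimates;
    large entries of gamma indicate likely target positions.
    """
    M, G = Phi.shape
    gamma = np.ones(G)            # per-grid-point signal-power hyperparameters
    sigma2 = 0.1 * np.var(y)      # crude noise-variance initialization
    mu = np.zeros(G)
    for _ in range(n_iter):
        gamma_old = gamma.copy()
        # Posterior covariance and mean of the sparse grid coefficients.
        Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(1.0 / gamma))
        mu = Sigma @ Phi.T @ y / sigma2
        # EM updates of the hyperparameters and the noise variance.
        gamma = mu**2 + np.diag(Sigma)
        resid = y - Phi @ mu
        dof = M - np.sum(1.0 - np.diag(Sigma) / gamma_old)
        sigma2 = float(resid @ resid) / max(dof, 1e-12)
        if np.max(np.abs(gamma - gamma_old)) < tol:
            break
    return mu, gamma
```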
2. A Multi-Task Deep Learning Framework for Simultaneous Detection of Thoracic Pathology through Image Classification
Authors: Nada Al Zahrani, Ramdane Hedjar, Mohamed Mekhtiche, Mohamed Bencherif, Taha Al Fakih, Fattoh Al-Qershi, Muna Alrazghan. Journal of Computer and Communications, 2024, No. 4, pp. 153-170 (18 pages).
Thoracic diseases pose significant risks to an individual's chest health and are among the most perilous medical diseases. They can impact either one or both lungs, which leads to a severe impairment of a person's ability to breathe normally. Some notable examples of such diseases encompass pneumonia, lung cancer, coronavirus disease 2019 (COVID-19), tuberculosis, and chronic obstructive pulmonary disease (COPD). Consequently, early and precise detection of these diseases is paramount during the diagnostic process. Traditionally, the primary methods employed for detection involve the use of X-ray imaging or computed tomography (CT) scans. Nevertheless, due to the scarcity of proficient radiologists and the inherent similarities between these diseases, the accuracy of detection can be compromised, leading to imprecise or erroneous results. To address this challenge, scientists have turned to computer-based solutions, aiming for swift and accurate diagnoses. The primary objective of this study is to develop two machine learning models, utilizing single-task and multi-task learning frameworks, to enhance classification accuracy. Within the multi-task learning architecture, two principal approaches exist: soft parameter sharing and hard parameter sharing. Consequently, this research adopts a multi-task deep learning approach that leverages CNNs to achieve improved classification performance for the specified tasks. These tasks, focusing on pneumonia and COVID-19, are processed and learned simultaneously within a multi-task model. To assess the effectiveness of the trained model, it is rigorously validated using three different real-world datasets for training and testing.
Keywords: pneumonia; thoracic pathology; COVID-19; deep learning; multi-task learning
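The abstract contrasts soft and hard parameter sharing. A minimal hard-parameter-sharing setup, with one shared CNN trunk and one classification head per task (here pneumonia and COVID-19), can be sketched in PyTorch as below; the layer sizes, class counts, and dummy batch are placeholders rather than the paper's architecture.
```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Hard parameter sharing: one shared CNN encoder, one head per task."""
    def __init__(self, num_classes_per_task=(2, 2)):
        super().__init__()
        self.shared = nn.Sequential(                      # shared convolutional trunk
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
        )
        self.heads = nn.ModuleList(                       # task-specific heads
            [nn.Linear(32 * 4 * 4, c) for c in num_classes_per_task]
        )

    def forward(self, x):
        z = self.shared(x)
        return [head(z) for head in self.heads]

# Joint training step: sum the per-task losses computed on the shared features.
model = HardSharingMTL()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 1, 64, 64)                             # dummy chest-image batch
y_pneumonia = torch.randint(0, 2, (8,))
y_covid = torch.randint(0, 2, (8,))
optimizer.zero_grad()
logits_pneumonia, logits_covid = model(x)
loss = criterion(logits_pneumonia, y_pneumonia) + criterion(logits_covid, y_covid)
loss.backward()
optimizer.step()
```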
3. Application of Bayesian Analysis Based on Neural Network and Deep Learning in Data Visualization
Authors: Jiying Yang, Qi Long, Xiaoyun Zhu, Yuan Yang. Journal of Electronic Research and Application, 2024, No. 4, pp. 88-93 (6 pages).
This study aims to explore the application of Bayesian analysis based on neural networks and deep learning in data visualization. The background of the research is that, with the increasing amount and complexity of data, traditional data analysis methods can no longer meet these needs. The research methods include building neural network and deep learning models, optimizing and improving them through Bayesian analysis, and applying them to the visualization of large-scale data sets. The results show that neural networks combined with Bayesian analysis and deep learning can effectively improve the accuracy and efficiency of data visualization and enhance the intuitiveness and depth of data interpretation. The significance of the research is that it provides a new solution for data visualization in the big data environment and helps to further promote the development and application of data science.
Keywords: neural network; deep learning; Bayesian analysis; data visualization; big data environment
4. Machine learning with active pharmaceutical ingredient/polymer interaction mechanism: Prediction for complex phase behaviors of pharmaceuticals and formulations (cited 2 times)
Authors: Kai Ge, Yiping Huang, Yuanhui Ji. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2024, No. 2, pp. 263-272 (10 pages).
The high-throughput prediction of the thermodynamic phase behavior of active pharmaceutical ingredients (APIs) with pharmaceutically relevant excipients remains a major scientific challenge in the screening of pharmaceutical formulations. In this work, a machine-learning model is developed that efficiently predicts the solubility of APIs in polymers by learning the phase equilibrium principle and using a few molecular descriptors. Under the few-shot learning framework, thermodynamic theory (perturbed-chain statistical associating fluid theory) was used for data augmentation, and computational chemistry was applied for the screening of molecular descriptors. The results showed that the developed machine-learning model can predict the API-polymer phase diagram accurately, broaden the solubility data of APIs in polymers, and successfully reproduce the relationship between API solubility and the interaction mechanisms between API and polymer, providing efficient guidance for the development of pharmaceutical formulations.
Keywords: multi-task machine learning; density functional theory; hydrogen bond interaction; miscibility; solubility
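As a generic illustration of the descriptor-to-solubility mapping described above (with synthetic data and an off-the-shelf regressor, not the paper's descriptor set or its PC-SAFT-augmented training data):
```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Toy stand-in data: each row holds a few molecular descriptors for an
# API-polymer pair; the target is the API solubility in the polymer.
rng = np.random.default_rng(0)
descriptors = rng.normal(size=(200, 4))
solubility = 0.5 * descriptors[:, 0] - 0.3 * descriptors[:, 1] + 0.1 * rng.normal(size=200)

X_train, X_test, y_train, y_test = train_test_split(descriptors, solubility, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```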
5. Joint Multi-Domain Channel Estimation Based on Sparse Bayesian Learning for OTFS System (cited 7 times)
Authors: Yong Liao, Xue Li. China Communications (SCIE, CSCD), 2023, No. 1, pp. 14-23 (10 pages).
Since orthogonal time-frequency space (OTFS) can effectively handle the problems caused by the Doppler effect in high-mobility environments, it has gradually become a promising candidate modulation scheme for the next generation of mobile communication. However, the inter-Doppler interference (IDI) problem caused by fractional Doppler poses great challenges to channel estimation. To avoid this problem, this paper proposes a joint time and delay-Doppler (DD) domain channel estimation algorithm based on sparse Bayesian learning (SBL). Firstly, we derive the original channel response (OCR) from the time-domain channel impulse response (CIR), which can reflect the channel variation during one OTFS symbol. Compared with the traditional channel model, the OCR can avoid the IDI problem. After that, the dimension of the OCR is reduced by using the basis expansion model (BEM) and the relationship between the time and DD domain channel models, so that the underdetermined problem is turned into an overdetermined problem. Finally, exploiting the sparsity of the channel in the delay domain, the SBL algorithm is used to estimate the basis coefficients in the BEM without any prior channel information. The simulation results show the effectiveness and superiority of the proposed channel estimation algorithm.
Keywords: OTFS; sparse Bayesian learning; basis expansion model; channel estimation
6. Vector Approximate Message Passing with Sparse Bayesian Learning for Gaussian Mixture Prior (cited 2 times)
Authors: Chengyao Ruan, Zaichen Zhang, Hao Jiang, Jian Dang, Liang Wu, Hongming Zhang. China Communications (SCIE, CSCD), 2023, No. 5, pp. 57-69 (13 pages).
Compressed sensing (CS) seeks appropriate algorithms to recover a sparse vector from noisy linear observations. Currently, various Bayesian algorithms such as sparse Bayesian learning (SBL) and approximate message passing (AMP) based algorithms have been proposed. SBL is accurate and robust, but its computational complexity is high due to matrix inversion. The performance of AMP is guaranteed only under severe restrictions on the measurement matrix, which limits its application to CS problems. To overcome the drawbacks of the above algorithms, in this paper we present a low-complexity algorithm for the single linear model that incorporates vector AMP (VAMP) into the SBL structure with expectation maximization (EM). Specifically, we apply variance auto-tuning to the VAMP to implement the E-step in SBL, which decreases the number of iterations required to converge compared with the VAMP-EM algorithm when using a Gaussian mixture (GM) prior. Simulation results show that the proposed algorithm performs better, with high robustness, under various cases of difficult measurement matrices.
Keywords: sparse Bayesian learning; approximate message passing; compressed sensing; expectation propagation
7. Vision-based multi-level synthetical evaluation of seismic damage for RC structural components: a multi-task learning approach (cited 1 time)
Authors: Xu Yang, Qiao Weidong, Zhao Jin, Zhang Qiangqiang, Li Hui. Earthquake Engineering and Engineering Vibration (SCIE, EI, CSCD), 2023, No. 1, pp. 69-85 (17 pages).
Recent studies on computer vision and deep learning-based post-earthquake inspection of RC structures mainly perform well for specific tasks, but the trained models must be fine-tuned and re-trained when facing new tasks and datasets, which is inevitably time-consuming. This study proposes a multi-task learning approach that simultaneously accomplishes the semantic segmentation of seven types of structural components, three types of seismic damage, and four types of deterioration states. The proposed method contains a CNN-based encoder-decoder backbone subnetwork with skip-connection modules and a multi-head, task-specific recognition subnetwork. The backbone subnetwork is designed to extract multi-level features of post-earthquake RC structures. The multi-head, task-specific recognition subnetwork consists of three individual self-attention pipelines, each of which utilizes the extracted multi-level features from the backbone network as mutual guidance for its individual segmentation task. A synthetical loss function is designed with real-time adaptive coefficients to balance the multi-task losses and focus on the most unstably fluctuating one. Ablation experiments and comparative studies are further conducted to demonstrate their effectiveness and necessity. The results show that the proposed method can simultaneously recognize different structural components, seismic damage, and deterioration states, and that the overall performance of the three-task learning models gains general improvement when compared to all single-task and dual-task models.
Keywords: post-earthquake evaluation; multi-task learning; computer vision; structural component segmentation; seismic damage recognition; deterioration state assessment
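The paper's exact weighting rule is not given in the abstract. Purely as an illustration, one simple way to give the most unstably fluctuating task the largest coefficient is to weight each loss by the relative variability of its recent values; the window size and normalization below are assumptions.
```python
import numpy as np

def adaptive_task_weights(loss_history, window=10, eps=1e-8):
    """Weight each task by the coefficient of variation (std/mean) of its
    recent losses, so the most unstable task receives the largest coefficient.

    loss_history : list of per-task loss sequences, e.g. [[...], [...], [...]].
    Returns weights that sum to the number of tasks.
    """
    scores = []
    for losses in loss_history:
        recent = np.asarray(losses[-window:], dtype=float)
        scores.append(recent.std() / (recent.mean() + eps) + eps)
    scores = np.asarray(scores)
    return len(scores) * scores / scores.sum()

# Per-step usage:
# total_loss = sum(w * L for w, L in zip(adaptive_task_weights(history), task_losses))
```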
8. Convective Storm VIL and Lightning Nowcasting Using Satellite and Weather Radar Measurements Based on Multi-Task Learning Models
Authors: Yang LI, Yubao LIU, Rongfu SUN, Fengxia GUO, Xiaofeng XU, Haixiang XU. Advances in Atmospheric Sciences (SCIE, CAS, CSCD), 2023, No. 5, pp. 887-899 (13 pages).
Convective storms and lightning are among the most important weather phenomena that are challenging to forecast. In this study, a novel multi-task learning (MTL) encoder-decoder U-net neural network was developed to forecast convective storms and lightning with lead times of up to 90 min, using GOES-16 geostationary satellite infrared brightness temperatures (IRBTs), lightning flashes from the Geostationary Lightning Mapper (GLM), and vertically integrated liquid (VIL) from the Next Generation Weather Radar (NEXRAD). To cope with the heavily skewed distribution of the lightning data, a spatiotemporal exponent-weighted loss function and a log-transformed lightning normalization approach were developed. The effects of MTL, single-task learning (STL), and IRBTs as auxiliary input features on convection and lightning nowcasting were investigated. The results showed that normalizing the heavily skew-distributed lightning data with a log-transformation dramatically outperforms the min-max normalization method for nowcasting intense lightning events. The MTL model significantly outperformed the STL model for both lightning nowcasting and VIL nowcasting, particularly for intense lightning events. The MTL also helped delay the decay of lightning forecast performance with lead time. Furthermore, incorporating satellite IRBTs as auxiliary input features substantially improved lightning nowcasting, but produced little difference in VIL forecasting. Finally, the MTL model performed better for forecasting both lightning and the VIL of organized convective storms than for isolated cells.
Keywords: convection/lightning nowcasting; multi-task learning; geostationary satellite; weather radar; U-net model
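The precise transform used in the paper is not specified in the abstract; a minimal log1p-based normalization of heavily skewed flash counts, with its inverse, might look like this (the scaling cap is an assumed dataset-level choice):
```python
import numpy as np

def log_normalize(flash_counts, cap=None):
    """Log-transform non-negative, heavily skewed lightning flash counts and
    rescale them to [0, 1]; 'cap' is the maximum count used for scaling."""
    x = np.log1p(np.asarray(flash_counts, dtype=float))    # log(1 + count) keeps zeros at 0
    cap = np.log1p(cap) if cap is not None else x.max()
    return np.clip(x / cap, 0.0, 1.0), cap

def log_denormalize(x_scaled, cap):
    """Invert the scaling and the log1p transform back to flash counts."""
    return np.expm1(np.asarray(x_scaled) * cap)
```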
9. The Entity Relationship Extraction Method Using Improved RoBERTa and Multi-Task Learning
Author: Chaoyu Fan. Computers, Materials & Continua (SCIE, EI), 2023, No. 11, pp. 1719-1738 (20 pages).
A growing amount of data is uploaded to the internet every day, and it is important to understand the volume of these data in order to find better schemes to process them. However, the volume of internet data is beyond the processing capabilities of the current internet infrastructure. Therefore, engineering work that uses technology to organize and analyze information and extract useful information is of interest in both industry and academia. The goal of this paper is to explore entity relationships based on deep learning, introduce semantic knowledge by using a pretrained language model, and develop an advanced entity relationship information extraction method that combines the Robustly Optimized BERT Approach (RoBERTa) with multi-task learning and incorporates intelligent characteristics from the field of linguistics, called Robustly Optimized BERT Approach + Multi-Task Learning (RoBERTa+MTL). To improve the effectiveness of model interaction, multi-task learning is used to exploit the observation information of auxiliary tasks. Experimental results show that our method achieves an accuracy of 88.95% in entity relationship extraction, and further achieves an accuracy of 86.35% after being combined with multi-task learning.
Keywords: entity relationship extraction; multi-task learning; RoBERTa
10. Multi-Task Learning Model with Data Augmentation for Arabic Aspect-Based Sentiment Analysis
Authors: Arwa Saif Fadel, Osama Ahmed Abulnaja, Mostafa Elsayed Saleh. Computers, Materials & Continua (SCIE, EI), 2023, No. 5, pp. 4419-4444 (26 pages).
Aspect-based sentiment analysis (ABSA) is a fine-grained process. Its fundamental subtasks are aspect term extraction (ATE) and aspect polarity classification (APC), and these subtasks are dependent and closely related. However, most existing works on Arabic ABSA address them separately, assume that aspect terms are pre-identified, or use a pipeline model. Pipeline solutions design different models for each task, and the output of the ATE model is used as the input to the APC model, which may result in error propagation across the steps because APC is affected by ATE errors. These methods are impractical for real-world scenarios where the ATE task is the base task for APC and its result impacts the accuracy of APC. Thus, in this study, we focus on a multi-task learning model for Arabic ATE and APC in which the two subtasks are jointly trained simultaneously in a single model. This paper integrates the multi-task model, namely Local Context Focus-Aspect Term Extraction and Polarity Classification (LCF-ATEPC), with Arabic Bidirectional Encoder Representations from Transformers (AraBERT) as a shared layer for Arabic contextual text representation. The LCF-ATEPC model is based on multi-head self-attention and a local context focus (LCF) mechanism to capture the interactive information between an aspect and its context. Moreover, data augmentation techniques are proposed based on state-of-the-art augmentation techniques (word embedding substitution with constraints and contextual embedding (AraBERT)) to increase the diversity of the training dataset. This paper examines the effect of data augmentation on the multi-task model for Arabic ABSA. Extensive experiments were conducted on the original and combined datasets (merging the original and augmented datasets). Experimental results demonstrate that the proposed multi-task model outperformed existing APC techniques. Superior results were obtained by AraBERT and LCF-ATEPC with a fusion layer (AR-LCF-ATEPC-Fusion) and the proposed word embedding-based data augmentation method (FastText) on the combined dataset.
Keywords: Arabic aspect extraction; Arabic sentiment classification; AraBERT; multi-task learning; data augmentation
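For intuition, constrained word-embedding substitution can be sketched as below; the replacement probability, similarity threshold, and protected-term rule are assumptions rather than the paper's exact augmentation procedure.
```python
import random
import numpy as np

def embedding_substitute(tokens, vectors, protected, sim_threshold=0.7, p=0.15):
    """With probability p, replace each non-protected token by its most similar
    vocabulary word, provided the cosine similarity exceeds the threshold.

    tokens    : list of word strings (e.g., a tokenized review).
    vectors   : dict word -> np.ndarray embedding (e.g., FastText vectors).
    protected : set of words never substituted (e.g., labeled aspect terms).
    """
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    augmented = []
    for tok in tokens:
        if tok in protected or tok not in vectors or random.random() > p:
            augmented.append(tok)
            continue
        candidates = [(w, cosine(vectors[tok], v)) for w, v in vectors.items() if w != tok]
        best_word, best_sim = max(candidates, key=lambda t: t[1])
        augmented.append(best_word if best_sim >= sim_threshold else tok)
    return augmented
```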
11. Multi-Task Deep Learning with Task Attention for Post-Click Conversion Rate Prediction
Authors: Hongxin Luo, Xiaobing Zhou, Haiyan Ding, Liqing Wang. Intelligent Automation & Soft Computing (SCIE), 2023, No. 6, pp. 3583-3593 (11 pages).
Online advertising has gained much attention on various platforms as a hugely lucrative market. In promoting content and advertisements in real life, the acquisition of user target actions is usually a multi-step process, such as impression→click→conversion, that is, the process from the delivery of the recommended item to the user's click to the final conversion. Due to data sparsity or sample selection bias, it is difficult for the trained model to achieve the business goal of the target campaign. Multi-task learning, a classical solution to this problem, aims to generalize better on the original task given several related tasks by exploiting the knowledge between tasks that share the same feature and label space. Adaptively learned task relations bring better performance by making full use of the correlation between tasks. We train a general model capable of capturing the relationships between various tasks on all existing active tasks from a meta-learning perspective. In addition, this paper proposes a Multi-task Attention Network (MAN) to identify commonalities and differences between tasks in the feature space. The model performance is improved by explicitly learning the stacking of task relationships in the label space. To illustrate the effectiveness of our method, experiments are conducted on the Alibaba Click and Conversion Prediction (Ali-CCP) dataset. Experimental results show that the method outperforms state-of-the-art multi-task learning methods.
Keywords: multi-task learning; recommender system; attention; meta-learning
12. BN-GEPSO: Learning Bayesian Network Structure Using Generalized Particle Swarm Optimization
Authors: Muhammad Saad Salman, Ibrahim M. Almanjahie, AmanUllah Yasin, Ammara Nawaz Cheema. Computers, Materials & Continua (SCIE, EI), 2023, No. 5, pp. 4217-4229 (13 pages).
At present, Bayesian networks (BNs) are widely used for representing uncertain knowledge in many disciplines, including biology, computer science, risk analysis, service quality analysis, and business. However, they suffer from the problem that as the numbers of nodes and edges increase, structure learning becomes more difficult and algorithms become inefficient. To solve this problem, heuristic optimization algorithms are used, which tend to find a near-optimal answer rather than an exact one, with particle swarm optimization (PSO) being one of them. PSO is a swarm intelligence-based algorithm whose basic inspiration comes from flocks of birds (how they search for food). PSO is employed widely because it is easy to code, converges quickly, and can be parallelized easily. We use a recently proposed version of PSO called generalized particle swarm optimization (GEPSO) to learn Bayesian network structure. We construct an initial directed acyclic graph (DAG) by using the max-min parents and children (MMPC) algorithm and cross relative average entropy. This DAG is used to create a population for the GEPSO optimization procedure. Moreover, we propose a velocity update procedure to increase the efficiency of the algorithmic search process. Results of the experiments show that as the complexity of the dataset increases, our algorithm, Bayesian network generalized particle swarm optimization (BN-GEPSO), outperforms the PSO algorithm in terms of the Bayesian information criterion (BIC) score.
Keywords: Bayesian network; structure learning; particle swarm optimization
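The BIC score that BN-GEPSO optimizes can be computed directly from data for any candidate structure. A compact sketch for discrete variables follows; it counts only observed parent configurations in the penalty term, which is a simplification.
```python
import numpy as np
import pandas as pd

def bic_score(data: pd.DataFrame, parents: dict) -> float:
    """BIC of a discrete Bayesian network structure given complete data.

    data    : DataFrame of categorical columns, one row per sample.
    parents : dict mapping each variable name to a list of its parent names.
    BIC = log-likelihood - (log N / 2) * number of free parameters.
    """
    n = len(data)
    score = 0.0
    for var, pa in parents.items():
        r = data[var].nunique()                      # number of states of the child
        grouped = data.groupby(pa)[var] if pa else [(None, data[var])]
        q = 0                                        # observed parent configurations
        for _, child_vals in grouped:
            counts = child_vals.value_counts().to_numpy(dtype=float)
            score += float(np.sum(counts * np.log(counts / counts.sum())))
            q += 1
        score -= 0.5 * np.log(n) * (r - 1) * q
    return score
```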
13. Bayesian Deep Learning Enabled Sentiment Analysis on Web Intelligence Applications
Author: Abeer D. Algarni. Computers, Materials & Continua (SCIE, EI), 2023, No. 5, pp. 3399-3412 (14 pages).
In recent times, web intelligence (WI), which utilizes Artificial Intelligence (AI) and advanced information technologies on the Web and Internet, has become a hot research topic. Users post reviews on social media, and these reviews are employed for sentiment analysis (SA), which acts as feedback to businesses and government. Proper SA on the reviews helps to enhance the quality of services and products; however, web intelligence techniques are needed to raise company profit and user fulfillment. With this motivation, this article introduces a new modified pigeon-inspired optimization based feature selection (MPIO-FS) with Bayesian deep learning (BDL), named the MPIO-BDL model, for SA on WI applications. The presented MPIO-BDL model initially involves preprocessing and feature extraction using the Term Frequency-Inverse Document Frequency (TF-IDF) technique to derive a useful set of information from the user reviews. Besides, the MPIO-FS model is applied for the selection of optimal feature subsets, which helps to enhance classification accuracy and reduce computational complexity. Moreover, the BDL model is employed to assign the proper class labels to the user review data. A comprehensive analysis of the experimental results highlighted the improved classification efficiency of the presented model.
Keywords: social media; data classification; Bayesian deep learning; artificial intelligence; web intelligence; feature selection
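The front end of the pipeline described above (TF-IDF features followed by a feature-selection stage) can be illustrated with scikit-learn; the chi-squared selector below merely stands in for the paper's pigeon-inspired MPIO-FS step, and the toy reviews are invented.
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2

# Toy reviews standing in for the social-media data described above.
reviews = ["great service and fast delivery",
           "terrible product, very poor quality",
           "quality is fine but support was slow",
           "excellent value, would buy again"]
labels = [1, 0, 0, 1]

# Step 1: TF-IDF turns each review into a sparse weighted term vector.
vectorizer = TfidfVectorizer(lowercase=True, ngram_range=(1, 2))
X = vectorizer.fit_transform(reviews)

# Step 2: keep an informative feature subset (stand-in for MPIO-FS).
selector = SelectKBest(chi2, k=min(10, X.shape[1]))
X_selected = selector.fit_transform(X, labels)
print(X.shape, "->", X_selected.shape)
```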
14. A General Linguistic Steganalysis Framework Using Multi-Task Learning
Authors: Lingyun Xiang, Rong Wang, Yuhang Liu, Yangfan Liu, Lina Tan. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 8, pp. 2383-2399 (17 pages).
Prevailing linguistic steganalysis approaches focus on learning sensitive features to distinguish a particular category of steganographic texts from non-steganographic texts by performing binary classification. However, the problem remains unsolved, and it poses a significant threat to the security of cyberspace when various categories of non-steganographic or steganographic texts coexist. In this paper, we propose a general linguistic steganalysis framework named LS-MTL, which introduces the idea of multi-task learning to deal with the classification of various categories of steganographic and non-steganographic texts. LS-MTL captures sensitive linguistic features from multiple related linguistic steganalysis tasks and can concurrently handle diverse tasks with a single constructed model. In the proposed framework, convolutional neural networks (CNNs) are utilized as private base models to extract sensitive features for each steganalysis task. Besides, a shared CNN is built to capture potential interaction information and share linguistic features among all tasks. Finally, LS-MTL incorporates the private and shared sensitive features to identify the detected text as steganographic or non-steganographic. Experimental results demonstrate that the proposed LS-MTL framework outperforms the baseline in the multi-category linguistic steganalysis task, with average Acc, Pre, and Rec increased by 0.5%, 1.4%, and 0.4%, respectively. Further ablation experiments show that LS-MTL with the shared module has robust generalization capability and achieves good detection performance even in the case of sparse data.
Keywords: linguistic steganalysis; multi-task learning; convolutional neural network (CNN); feature extraction; detection performance
15. Hand Gesture Recognition for Disabled People Using Bayesian Optimization with Transfer Learning
Authors: Fadwa Alrowais, Radwa Marzouk, Fahd N. Al-Wesabi, Anwer Mustafa Hilal. Intelligent Automation & Soft Computing (SCIE), 2023, No. 6, pp. 3325-3342 (18 pages).
Sign language recognition can be treated as one of the efficient solutions for disabled people to communicate with others. It helps them to convey the required information through sign language with no issues. The latest developments in computer vision and image processing techniques can be accurately utilized for the sign recognition process by disabled people. American Sign Language (ASL) detection is challenging because of high intraclass similarity and high complexity. This article develops a new Bayesian Optimization with Deep Learning-Driven Hand Gesture Recognition Based Sign Language Communication (BODL-HGRSLC) technique for disabled people. The BODL-HGRSLC technique aims to recognize hand gestures for disabled people's communication. The presented BODL-HGRSLC technique integrates the concepts of computer vision (CV) and DL models. In the presented BODL-HGRSLC technique, a deep convolutional neural network-based residual network (ResNet) model is applied for feature extraction. Besides, the presented BODL-HGRSLC model uses Bayesian optimization for the hyperparameter tuning process. At last, a bidirectional gated recurrent unit (BiGRU) model is exploited for the HGR procedure. A wide range of experiments was conducted to demonstrate the enhanced performance of the presented BODL-HGRSLC model. The comprehensive comparison study reported the improvements of the BODL-HGRSLC model over other DL models, with a maximum accuracy of 99.75%.
Keywords: deep learning; hand gesture recognition; disabled people; computer vision; Bayesian optimization
16. Multi-tasking to Address Diversity in Language Learning
Author: 雷琨. 海外英语 (Overseas English), 2014, No. 21, pp. 98-99, 103 (3 pages).
With the focus now placed on the learner, more attention is given to his learning style, multiple intelligences, and developing learning strategies to enable him to make sense of and use the target language appropriately in varied contexts and for different uses of the language. To attain this, the teacher is tasked with designing, monitoring, and processing language learning activities for students to carry out, and in the process learn by doing and by reflecting on the learning process they went through as they interacted socially with each other. This paper describes a task named "The Fishbowl Technique" that was found to be effective in large ESL classes at the secondary level in the Philippines.
Keywords: multi-tasking; diversity; learning style; the fishbowl
17. Active Machine Learning for Chemical Engineers: A Bright Future Lies Ahead! (cited 1 time)
Authors: Yannick Ureel, Maarten R. Dobbelaere, Yi Ouyang, Kevin De Ras, Maarten K. Sabbe, Guy B. Marin, Kevin M. Van Geem. Engineering (SCIE, EI, CAS, CSCD), 2023, No. 8, pp. 23-30 (8 pages).
By combining machine learning with the design of experiments, thereby achieving so-called active machine learning, more efficient and cheaper research can be conducted. Machine learning algorithms are more flexible and are better than traditional design of experiments algorithms at investigating processes spanning all length scales of chemical engineering. While active machine learning algorithms are maturing, their applications are falling behind. In this article, three types of challenges presented by active machine learning are identified, namely, convincing the experimental researcher, the flexibility of data creation, and the robustness of active machine learning algorithms, and ways to overcome them are discussed. A bright future lies ahead for active machine learning in chemical engineering, thanks to increasing automation and more efficient algorithms that can drive novel discoveries.
Keywords: active machine learning; active learning; Bayesian optimization; chemical engineering; design of experiments
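As a generic sketch of the active-learning loop the article advocates (not a method taken from the article; the Gaussian-process surrogate, the uncertainty-based query rule, and the toy experiment are assumptions), one alternates between fitting a surrogate model and querying the most informative candidate experiment:
```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_experiment(x):
    """Placeholder for a real (costly) chemical-engineering experiment."""
    return float(np.sin(3 * x[0]) + 0.1 * np.random.randn())

# Candidate experimental conditions (the design space), here one-dimensional.
pool = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
X_done, y_done = [pool[0]], [run_experiment(pool[0])]

gp = GaussianProcessRegressor(normalize_y=True)
for _ in range(10):                                # budget of 10 sequential experiments
    gp.fit(np.array(X_done), np.array(y_done))
    _, std = gp.predict(pool, return_std=True)     # model uncertainty over the design space
    x_next = pool[int(np.argmax(std))]             # query the most uncertain condition
    X_done.append(x_next)
    y_done.append(run_experiment(x_next))
```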
18. Hyperparameter Optimization for Machine Learning Models Based on Bayesian Optimization (cited 31 times)
Authors: Jia Wu, Xiu-Yun Chen, Hao Zhang, Li-Dong Xiong, Hang Lei, Si-Hao Deng. Journal of Electronic Science and Technology (CAS, CSCD), 2019, No. 1, pp. 26-40 (15 pages).
Hyperparameters are important for machine learning algorithms since they directly control the behavior of training algorithms and have a significant effect on the performance of machine learning models. Several techniques have been developed and successfully applied in certain application domains. However, this work demands professional knowledge and expert experience, and sometimes it has to resort to brute-force search. Therefore, if an efficient hyperparameter optimization algorithm can be developed to optimize any given machine learning method, it will greatly improve the efficiency of machine learning. In this paper, we consider modeling the relationship between the performance of machine learning models and their hyperparameters by Gaussian processes. In this way, the hyperparameter tuning problem can be abstracted as an optimization problem, and Bayesian optimization is used to solve it. Bayesian optimization is based on Bayes' theorem: it sets a prior over the objective function, gathers information from previous samples to update the posterior over the objective function, and uses a utility function to select the next sample point with the aim of maximizing the objective function. Several experiments were conducted on standard test datasets. The experimental results show that the proposed method can find the best hyperparameters for widely used machine learning models, such as the random forest algorithm and neural networks, and even the multi-grained cascade forest, under the consideration of time cost.
Keywords: Bayesian optimization; Gaussian process; hyperparameter optimization; machine learning
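The workflow sketched in the abstract, fitting a Gaussian-process surrogate over hyperparameter performance and letting a utility function pick the next trial, can be illustrated as follows; the random-forest search space, dataset, and expected-improvement acquisition are assumed choices, not the paper's exact setup.
```python
import numpy as np
from scipy.stats import norm
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)

def objective(params):
    """Cross-validated accuracy of a random forest for given hyperparameters."""
    clf = RandomForestClassifier(n_estimators=int(params[0]), max_depth=int(params[1]), random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

# Candidate hyperparameter grid (an illustrative search space).
candidates = np.array([[n, d] for n in range(10, 210, 20) for d in range(2, 21, 2)], dtype=float)

# Start from a few random evaluations, then let the GP surrogate pick the rest.
rng = np.random.default_rng(0)
evaluated = {int(i): objective(candidates[i]) for i in rng.choice(len(candidates), 3, replace=False)}

gp = GaussianProcessRegressor(normalize_y=True)
for _ in range(10):
    tried = list(evaluated)
    gp.fit(candidates[tried], np.array([evaluated[i] for i in tried]))
    mu, std = gp.predict(candidates, return_std=True)
    best = max(evaluated.values())
    # Expected improvement: trade off exploring uncertain points against
    # exploiting points predicted to beat the current best score.
    z = (mu - best) / (std + 1e-9)
    ei = (mu - best) * norm.cdf(z) + std * norm.pdf(z)
    ei[tried] = -np.inf                             # never re-query evaluated points
    nxt = int(np.argmax(ei))
    evaluated[nxt] = objective(candidates[nxt])

best_i = max(evaluated, key=evaluated.get)
print("best (n_estimators, max_depth):", candidates[best_i], "accuracy:", round(evaluated[best_i], 4))
```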
19. Learning Bayesian network parameters under new monotonic constraints (cited 8 times)
Authors: Ruohai Di, Xiaoguang Gao, Zhigao Guo. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2017, No. 6, pp. 1248-1255 (8 pages).
When the training data are insufficient, especially when only a small sample of data is available, domain knowledge can be incorporated into the parameter learning process to improve the performance of Bayesian networks. In this paper, a new monotonic constraint model is proposed to represent a common type of domain knowledge. Then, a monotonic constraint estimation algorithm is proposed to learn the parameters with the monotonic constraint model. In order to demonstrate the superiority of the proposed algorithm, a series of experiments is carried out. The experimental results show that the proposed algorithm is able to obtain more accurate parameters compared to some existing algorithms, while its complexity is not the highest.
Keywords: Bayesian networks; parameter learning; new monotonic constraint
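The paper's monotonic constraint estimation algorithm is not spelled out in the abstract. As one illustrative way to impose such domain knowledge, the sketch below forces a conditional probability to be non-decreasing in an ordered parent state via count-weighted isotonic regression; this is an assumed stand-in, not the paper's estimator.
```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def monotone_cpt(counts_pos, counts_total):
    """Estimate P(X=1 | parent state) under the constraint that it is
    non-decreasing in the ordered parent state: isotonic regression of the
    raw relative frequencies, weighted by how much data each state has.
    """
    raw = counts_pos / np.maximum(counts_total, 1)
    iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
    states = np.arange(len(raw))
    return iso.fit_transform(states, raw, sample_weight=np.maximum(counts_total, 1e-6))

# Sparse data give a non-monotone raw estimate [0.2, 0.5, 0.4, 0.9];
# the constrained estimate pools the violating adjacent states.
print(monotone_cpt(np.array([2, 5, 4, 9]), np.array([10, 10, 10, 10])))
```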
20. Finding optimal Bayesian networks by a layered learning method (cited 4 times)
Authors: YANG Yu, GAO Xiaoguang, GUO Zhigao. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2019, No. 5, pp. 946-958 (13 pages).
It is impractical to learn the optimal structure of a big Bayesian network (BN) by exhausting the feasible structures, since the number of feasible structures is super-exponential in the number of nodes. This paper proposes an approach to layering the nodes of a BN by using conditional independence testing. The parents of a node in a layer only belong to that layer, or to layers that have priority over it. When a set of nodes has been layered, the number of feasible structures over the nodes can be remarkably reduced, which makes it possible to learn optimal BN structures for bigger numbers of nodes by exact algorithms. Integrating the dynamic programming (DP) algorithm with the layering approach, we propose a hybrid algorithm, layered optimal learning (LOL), to learn BN structures. Benefiting from the layering approach, the complexity of the DP algorithm is reduced from O(n·2^(n-1)) to O(ρ·2^(n-1)), where ρ < n. Meanwhile, the memory requirement for storing intermediate results is limited to O(C(k#, k#/2)) instead of O(C(n, n/2)), where k# < n. A case study on learning a standard BN with 50 nodes is conducted. The results demonstrate the superiority of the LOL algorithm, with respect to the Bayesian information criterion (BIC) score criterion, over the hill-climbing, max-min hill-climbing, PC, and three-phase dependency analysis algorithms.
Keywords: Bayesian network (BN); structure learning; layered optimal learning (LOL)