Journal Articles
233 articles found.

1. Attention-Based CNN Fusion Model for Emotion Recognition During Walking Using Discrete Wavelet Transform on EEG and Inertial Signals (Cited: 1)
Authors: Yan Zhao, Ming Guo, Xiangyong Chen, Jianqiang Sun, Jianlong Qiu. Big Data Mining and Analytics, EI, CSCD, 2024, Issue 1, pp. 188-204 (17 pages).
Abstract: Walking as a unique biometric tool conveys important information for emotion recognition. Individuals in different emotional states exhibit distinct walking patterns. For this purpose, this paper proposes a novel approach to recognizing emotion during walking using electroencephalogram (EEG) and inertial signals. Accurate recognition of emotion is achieved by training in an end-to-end deep learning fashion and taking multi-modal fusion into account. Subjects wear virtual reality head-mounted display (VR-HMD) equipment to immerse themselves in strong emotions during walking. The VR environment shows excellent imitation and experience ability, which plays an important role in awakening and changing emotions. In addition, the multi-modal signals acquired from EEG and inertial sensors are separately represented as virtual emotion images by the discrete wavelet transform (DWT). These serve as input to the attention-based convolutional neural network (CNN) fusion model. The designed network structure is simple and lightweight while integrating the channel attention mechanism to extract and enhance features. To effectively improve the performance of the recognition system, the proposed decision fusion algorithm combines the Critic method and a majority voting strategy to determine the weight values that affect the final decision results. An investigation is made into the effect of diverse mother wavelet types and wavelet decomposition levels on model performance, which indicates that the 2.2-order reverse biorthogonal (rbio2.2) wavelet with two-level decomposition has the best recognition performance. Comparative experiment results show that the proposed method outperforms other existing state-of-the-art works with an accuracy of 98.73%.
Keywords: walking, multi-modal fusion, virtual reality, emotion recognition, discrete wavelet transform, attention mechanism
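The best-performing configuration reported above (rbio2.2 wavelet, two-level decomposition) is straightforward to reproduce with PyWavelets. The sketch below is illustrative only: the sampling rate, the synthetic signal, and the way sub-bands are stacked into a "virtual emotion image" are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the DWT step: decompose a signal with the rbio2.2
# wavelet at level 2, the configuration the paper reports as best.
import numpy as np
import pywt

fs = 250                                   # assumed EEG sampling rate (Hz)
eeg_channel = np.random.randn(4 * fs)      # stand-in for one 4 s EEG channel

# Two-level decomposition with the 2.2-order reverse biorthogonal wavelet
coeffs = pywt.wavedec(eeg_channel, "rbio2.2", level=2)  # [cA2, cD2, cD1]

# One plausible way to form a 2-D representation: pad sub-bands to equal
# length and stack them as image rows (an assumption, not the paper's method).
width = max(len(c) for c in coeffs)
image = np.stack([np.pad(c, (0, width - len(c))) for c in coeffs])
print(image.shape)  # (3, width) -> input "image" for a CNN
```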
2. Gender-Based Analysis of User Reactions to Facebook Posts
Authors: Yassine El Moudene, Jaafar Idrais, Rida El Abassi, Abderrahim Sabour. Big Data Mining and Analytics, EI, CSCD, 2024, Issue 1, pp. 75-86 (12 pages).
Abstract: Online Social Networks (OSNs) are based on the sharing of different types of information and on various interactions (comments, reactions, and sharing). One of these important actions is the emotional reaction to content. The diversity of reaction types available on Facebook (FB) enables users to express their feelings, and its traceability creates and enriches the users' emotional identity in the virtual world. This paper is based on the analysis of 119,875,012 FB reactions (Like, Love, Haha, Wow, Sad, Angry, Thankful, and Pride) made at multiple levels (publications, comments, and sub-comments) to study and classify the users' emotional behavior, visualize the distribution of different types of reactions, and analyze the impact of gender on emotion generation. All of these can be achieved by addressing these research questions: who reacts the most? Which emotion is the most expressed?
Keywords: profiling, knowledge extraction, data mining, emotion mining, social media, data crawling, Facebook reactions, gender
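The gender-level aggregation described above reduces, at its core, to grouped counting. A minimal pandas sketch follows, with assumed column names and a handful of toy rows standing in for the 119,875,012-reaction dataset.

```python
# Illustrative sketch (not the authors' code): aggregating reaction counts
# by gender with pandas. Column names are assumptions.
import pandas as pd

reactions = pd.DataFrame({
    "user_gender": ["F", "M", "F", "M", "F"],
    "reaction":    ["Love", "Haha", "Like", "Angry", "Love"],
})

# Distribution of reaction types per gender, normalized per row
dist = pd.crosstab(reactions["user_gender"], reactions["reaction"],
                   normalize="index")
print(dist)

# "Who reacts the most?" -> total reactions per gender
print(reactions["user_gender"].value_counts())
```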
3. Identification of Proteins and Genes Associated with Hedgehog Signaling Pathway Involved in Neoplasm Formation Using Text-Mining Approach
Authors: Nadezhda Yu. Biziukova, Sergey M. Ivanov, Olga A. Tarasova. Big Data Mining and Analytics, EI, CSCD, 2024, Issue 1, pp. 107-130 (24 pages).
Abstract: Analysis of the molecular mechanisms that lead to the development of various types of tumors is essential for biology and medicine, because it may help to find new therapeutic opportunities for cancer treatment and cure, including personalized treatment approaches. One of the pathways known to be important for the development of neoplastic diseases and pathological processes is the Hedgehog signaling pathway, which normally controls human embryonic development. Systematic accumulation of various types of biological data, including interactions between proteins, regulation of gene transcription, and the results of proteomics and metabolomics experiments, allows the application of computational analysis of these big data for the identification of key molecular mechanisms of certain diseases and pathologies, and of promising therapeutic targets. The aim of this study is to develop a computational approach for revealing associations between human proteins and genes interacting with the Hedgehog pathway components, as well as for identifying their roles in the development of various types of tumors. We automatically collect sets of abstract texts from the NCBI PubMed bibliographic database. For recognition of the Hedgehog pathway proteins and genes and of neoplastic diseases, we use a dictionary-based named entity recognition approach, while for all other proteins and genes a machine learning method is used. For association extraction, we develop a set of semantic rules. We complement the results of the text analysis with gene set enrichment analysis. The identified key pathways that may influence the Hedgehog pathway, and their roles in tumor development, are then verified using the information in the literature.
Keywords: text mining, data mining, Hedgehog pathway, neoplastic processes, enrichment analysis, pathology, molecular mechanisms
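Dictionary-based named entity recognition, as used here for Hedgehog pathway genes, can be approximated by whole-word matching against a curated term list. A toy sketch follows; the dictionary entries and example sentence are assumptions, not the authors' resources.

```python
# Minimal sketch of dictionary-based named entity recognition over PubMed
# abstracts, assuming a small hand-built gene dictionary. Illustrative only.
import re

hedgehog_dict = {"SHH", "GLI1", "GLI2", "PTCH1", "SMO"}  # assumed entries

def find_entities(abstract, dictionary):
    """Return dictionary terms that occur as whole words in the text."""
    hits = []
    for term in dictionary:
        if re.search(rf"\b{re.escape(term)}\b", abstract, flags=re.IGNORECASE):
            hits.append(term)
    return hits

text = "We show that PTCH1 loss activates GLI1 in basal cell carcinoma."
print(find_entities(text, hedgehog_dict))  # ['GLI1', 'PTCH1'] (order may vary)
```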
4. Extending OpenStack Monasca for Predictive Elasticity Control
Authors: Giacomo Lanciano, Filippo Galli, Tommaso Cucinotta, Davide Bacciu, Andrea Passarella. Big Data Mining and Analytics, EI, CSCD, 2024, Issue 2, pp. 315-339 (25 pages).
Abstract: Traditional auto-scaling approaches are conceived as reactive automations, typically triggered when predefined thresholds are breached by resource consumption metrics. Managing such rules at scale is cumbersome, especially when resources require non-negligible time to be instantiated. This paper introduces an architecture for predictive cloud operations, which enables orchestrators to apply time-series forecasting techniques to estimate the evolution of relevant metrics and to take decisions based on the predicted state of the system. In this way, they can anticipate load peaks and trigger appropriate scaling actions in advance, such that new resources are available when needed. The proposed architecture is implemented in OpenStack, extending the monitoring capabilities of Monasca by injecting short-term forecasts of standard metrics. We use our architecture to implement predictive scaling policies leveraging linear regression, autoregressive integrated moving average, feed-forward, and recurrent neural networks (RNNs). Then, we evaluate their performance on a synthetic workload, comparing them to those of a traditional policy. To assess the ability of the different models to generalize to unseen patterns, we also evaluate them on traces from a real content delivery network (CDN) workload. In particular, the RNN model exhibits the best overall performance in terms of prediction error, observed client-side response latency, and forecasting overhead. The implementation of our architecture is open-source.
Keywords: OpenStack, monitoring, elasticity control, auto-scaling, predictive operations, Monasca
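The forecast-ahead-of-threshold idea is easy to prototype outside OpenStack. Below is a hedged sketch using statsmodels ARIMA on a synthetic metric series; the model order, threshold, and scaling rule are assumptions, not the Monasca extension itself.

```python
# Sketch of the forecasting idea: fit an ARIMA model to a CPU-utilization
# series and predict the next few samples, which a scaler could compare
# against its threshold before the breach actually happens.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Stand-in for a metric history sampled once per minute
history = 50 + 10 * np.sin(np.linspace(0, 6, 120)) + np.random.randn(120)

model = ARIMA(history, order=(2, 0, 1)).fit()
forecast = model.forecast(steps=5)          # next 5 minutes of the metric

THRESHOLD = 58.0                            # assumed scale-out threshold
if forecast.max() > THRESHOLD:
    print("predicted breach -> scale out ahead of the load peak")
```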
5. AI/ML Enabled Automation System for Software Defined Disaggregated Open Radio Access Networks: Transforming Telecommunication Business
Authors: Sunil Kumar. Big Data Mining and Analytics, EI, CSCD, 2024, Issue 2, pp. 271-293 (23 pages).
Abstract: The Open Air Interface (OAI) alliance recently introduced a new disaggregated Open Radio Access Network (O-RAN) framework for next-generation telecommunications and networks. This disaggregated architecture is open, automated, software-defined, and virtual, and it supports the latest advanced technologies such as Artificial Intelligence/Machine Learning (AI/ML). This novel intelligent architecture enables programmers to design and customize automated applications according to business needs and to improve quality of service in fifth generation (5G) and Beyond 5G (B5G) networks. Its disaggregated and multivendor nature gives new startups and small vendors the opportunity to participate and to provide cheap hardware and software solutions, keeping the market competitive. This paper presents the disaggregated and programmable O-RAN architecture focused on automation, AI/ML services, and applications with the Flexible Radio access network Intelligent Controller (FRIC). We schematically demonstrate the reinforcement learning, external applications (xApps), and automation steps needed to implement this disaggregated O-RAN architecture. The idea of this research paper is to implement an AI/ML enabled automation system for software-defined disaggregated O-RAN, which monitors, manages, and performs AI/ML-related services, including model deployment, optimization, inference, and training.
Keywords: Artificial Intelligence (AI), Reinforcement Learning (RL), Open Radio Access Networks (O-RAN), Flexible Radio access network Intelligent Controller (FRIC), external Applications (xApps), Machine Learning (ML), sixth generation (6G)
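The paper demonstrates reinforcement learning schematically rather than as code; as a generic stand-in, a tabular Q-learning loop of the kind an xApp might run is sketched below. The environment, state discretization, action set, and reward are entirely hypothetical.

```python
# Generic Q-learning sketch (hypothetical environment; the paper publishes
# no code). States discretize cell load; actions pick a resource setting.
import numpy as np

n_states, n_actions = 10, 3            # assumed discretization
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def env_step(state, action):
    """Toy stand-in for RAN feedback: reward matching the action to load."""
    reward = -abs(state // 4 - action)
    return rng.integers(n_states), reward

state = 0
for _ in range(5000):
    action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    nxt, r = env_step(state, action)
    Q[state, action] += alpha * (r + gamma * Q[nxt].max() - Q[state, action])
    state = nxt
print(Q.argmax(axis=1))  # learned action per load state
```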
6. ROBO-SPOT: Detecting Robocalls by Understanding User Engagement and Connectivity Graph
Authors: Muhammad Ajmal Azad, Junaid Arshad, Farhan Riaz. Big Data Mining and Analytics, EI, CSCD, 2024, Issue 2, pp. 340-356 (17 pages).
Abstract: Robocalls, or unsolicited calls, have become a persistent issue in telecommunication networks, posing significant challenges to individuals, businesses, and regulatory authorities. These calls not only trick users into disclosing their private and financial information, but also affect their productivity through unwanted phone ringing. A proactive approach to identifying and blocking such unsolicited calls is essential to protect users and service providers from potential harm. To this end, this paper proposes a solution to identify robo-callers in the telephony network utilising a set of novel features to evaluate the trustworthiness of callers in a network. The trust score of the callers is then used along with machine learning models to classify them as legitimate or robo-caller. We use a large anonymized dataset (call detail records) from a large telecommunication provider containing more than 1 billion records collected over 10 days. We have conducted an extensive evaluation demonstrating that the proposed approach achieves a high accuracy and detection rate whilst minimizing the error rate. Specifically, the proposed features, when used collectively, achieve a true-positive rate of around 97% with a false-positive rate of less than 0.01%.
Keywords: social network analysis, reputation, unwanted calls, robo-callers, telephone network, Spam Over Internet Technology (SPIT)
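The classification stage pairs per-caller trust features with a standard classifier. Below is a hedged sketch on synthetic data; the feature names are assumptions, and the paper's actual features, trust-score computation, and models are not reproduced here.

```python
# Illustration of the classification stage: engineered per-caller features
# fed to a random forest. Synthetic labels; not the paper's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.random(n),               # assumed: call-back ratio (engagement)
    rng.random(n),               # assumed: mean call duration, normalized
    rng.integers(1, 500, n),     # assumed: distinct callees per day (fan-out)
])
y = (X[:, 2] > 300) & (X[:, 1] < 0.2)   # synthetic "robo-caller" label

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[0.05, 0.1, 450]]))  # high fan-out, short calls -> flagged
```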
7. An Adaptive Scalable Data Pipeline for Multiclass Attack Classification in Large-Scale IoT Networks
Authors: Selvam Saravanan, Uma Maheswari Balasubramanian. Big Data Mining and Analytics, EI, CSCD, 2024, Issue 2, pp. 500-511 (12 pages).
Abstract: Current large-scale Internet of Things (IoT) networks typically generate high-velocity network traffic streams. Attackers use IoT devices to create botnets and launch attacks such as DDoS, spamming, cryptocurrency mining, phishing, etc. The service providers of large-scale IoT networks need to set up a data pipeline to collect the vast network traffic data from the IoT devices, store it, analyze it, and report the malicious IoT devices and types of attacks. Further, the attacks originating from IoT devices are dynamic, as attackers launch one kind of attack at one time and another kind at another time. The numbers of attack and benign instances also vary from time to time. This phenomenon of change in attack patterns is called concept drift. Hence, the attack detection system must learn continuously from the ever-changing real-time attack patterns in large-scale IoT network traffic. To meet this requirement, in this work we propose a data pipeline with Apache Kafka, Apache Spark structured streaming, and MongoDB that can adapt to the ever-changing attack patterns in real time and classify attacks in large-scale IoT networks. When concept drift is detected, the proposed system retrains the classifier with the instances that cause the drift and a representative subsample of instances from the previous training of the model. The proposed approach is evaluated with the latest dataset, IoT23, which consists of benign instances and several attack instances from various IoT devices. Attack classification accuracy is improved from 97.8% to 99.46% by the proposed system. The training time of the distributed random forest algorithm is also studied by varying the number of cores in the Apache Spark environment.
Keywords: Internet of Things (IoT), concept drift, Apache Spark, MongoDB, Apache Kafka, streaming
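The ingestion leg of such a pipeline can be skeletonized in a few lines of PySpark. The broker address, topic name, and schema below are assumptions; drift detection, retraining, and the MongoDB sink are elided.

```python
# Skeleton of the Kafka -> Spark structured streaming ingestion step.
# Requires the spark-sql-kafka connector package on the classpath (assumed).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("iot-attack-pipeline").getOrCreate()

flows = (spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
         .option("subscribe", "iot-flows")                     # assumed topic
         .load()
         .select(col("value").cast("string").alias("flow_json")))

# In the full system each micro-batch would be featurized, scored by the
# current model, checked for drift, and written to MongoDB.
query = flows.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```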
8. Enhancing Telemarketing Success Using Ensemble-Based Online Machine Learning
Authors: Shahriar Kaisar, Md Mamunur Rashid, Abdullahi Chowdhury, Sakib Shahriar Shafin, Joarder Kamruzzaman, Abebe Diro. Big Data Mining and Analytics, EI, CSCD, 2024, Issue 2, pp. 294-314 (21 pages).
Abstract: Telemarketing is a well-established marketing approach for offering products and services to prospective customers. The effectiveness of such an approach, however, is highly dependent on the selection of the appropriate consumer base, as reaching uninterested customers will induce annoyance and consume costly enterprise resources in vain while missing interested ones. The introduction of business intelligence and machine learning models can positively influence the decision-making process by predicting the potential customer base, and the existing literature in this direction shows promising results. However, the selection of influential features and the construction of effective learning models for improved performance remain a challenge. Furthermore, from the modelling perspective, the class-imbalanced nature of the training data, where samples with unsuccessful outcomes highly outnumber successful ones, further compounds the problem by creating biased and inaccurate models. Additionally, customer preferences are likely to change over time for various reasons, and/or a fresh group of customers may be targeted for a new product or service, necessitating model retraining, which is not addressed at all in existing works. A major challenge in model retraining is maintaining a balance between stability (retaining older knowledge) and plasticity (being receptive to new information). To address the above issues, this paper proposes an ensemble machine learning model with feature selection and oversampling techniques to identify potential customers more accurately. A novel online learning method is proposed for model retraining when new samples become available over time. This newly introduced method equips the proposed approach to deal with dynamic data, improving its readiness for practical adoption, and is a highly useful addition to the literature. Extensive experiments with real-world data show that the proposed approach achieves excellent results in all cases (e.g., 98.6% accuracy in classifying customers) and outperforms recent competing models by a considerable margin of 3% on a widely used dataset.
Keywords: machine learning, online learning, oversampling, telemarketing, imbalanced dataset, ensemble model
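Two of the core modelling ideas, minority-class oversampling and an ensemble classifier, can be sketched with scikit-learn and imbalanced-learn on synthetic data. This omits the paper's feature selection and online retraining scheme; the estimator choices are assumptions.

```python
# Hedged sketch: rebalance an imbalanced dataset with SMOTE, then train a
# soft-voting ensemble. Synthetic data; not the paper's features or models.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)   # rebalance classes

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=100))],
    voting="soft",
).fit(X_res, y_res)
print(ensemble.predict_proba(X[:3])[:, 1])  # subscription probabilities
```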
9. Limits of Depth: Over-Smoothing and Over-Squashing in GNNs
Authors: Aafaq Mohi ud din, Shaima Qureshi. Big Data Mining and Analytics, EI, CSCD, 2024, Issue 1, pp. 205-216 (12 pages).
Abstract: Graph Neural Networks (GNNs) have become a widely used tool for learning and analyzing data on graph structures, largely due to their ability to preserve graph structure and properties via graph representation learning. However, the effect of depth on the performance of GNNs, particularly isotropic and anisotropic models, remains an active area of research. This study presents a comprehensive exploration of the impact of depth on GNNs, with a focus on the phenomena of over-smoothing and the bottleneck effect in deep graph neural networks. Our research investigates the tradeoff between depth and performance, revealing that increasing depth can lead to over-smoothing and a decrease in performance due to the bottleneck effect. We also examine the impact of node degrees on classification accuracy, finding that nodes with low degrees can pose challenges for accurate classification. Our experiments use several benchmark datasets and a range of evaluation metrics to compare isotropic and anisotropic GNNs of varying depths, and also explore the scalability of these models. Our findings provide valuable insights into the design of deep GNNs and offer potential avenues for future research to improve their performance.
Keywords: Graph Neural Networks (GNNs), learning on graphs, over-smoothing, over-squashing, isotropic GNNs, anisotropic GNNs
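Over-smoothing itself can be demonstrated numerically in a few lines: repeatedly applying the normalized adjacency operator makes node features converge, so very deep stacks lose discriminative power. A toy illustration on a random graph (not from the paper):

```python
# Repeatedly applying D^-1/2 (A+I) D^-1/2 drives node features toward a
# degree-determined limit; the cross-node feature spread shrinks with depth.
import numpy as np

rng = np.random.default_rng(0)
A = (rng.random((20, 20)) < 0.2).astype(float)
A = np.maximum(A, A.T) + np.eye(20)          # symmetric, with self-loops
d = A.sum(1)
A_hat = A / np.sqrt(np.outer(d, d))          # symmetric normalization

H = rng.random((20, 8))                      # initial node features
for depth in [1, 2, 4, 8, 16, 32]:
    H_k = np.linalg.matrix_power(A_hat, depth) @ H
    spread = H_k.std(axis=0).mean()          # how distinguishable nodes are
    print(f"{depth:2d} layers -> feature spread {spread:.4f}")  # shrinks
```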
10. An Intelligent Big Data Security Framework Based on AEFS-KENN Algorithms for the Detection of Cyber-Attacks from Smart Grid Systems
Authors: Sankaramoorthy Muthubalaji, Naresh Kumar Muniyaraj, Sarvade Pedda Venkata Subba Rao, Kavitha Thandapani, Pasupuleti Rama Mohan, Thangam Somasundaram, Yousef Farhaoui. Big Data Mining and Analytics, EI, CSCD, 2024, Issue 2, pp. 399-418 (20 pages).
Abstract: Big data has the ability to open up innovative and ground-breaking prospects for the electrical grid, and it also helps to obtain a variety of technological, social, and financial benefits. There is an unprecedented amount of heterogeneous big data as a consequence of the growth of power grid technologies, along with data processing and advanced tools. The main obstacles in turning the heterogeneous large dataset into useful results are computational burden and information security. The original contribution of this paper is to develop a new big data framework for detecting various intrusions from smart grid systems with the use of AI mechanisms. Here, an AdaBelief Exponential Feature Selection (AEFS) technique is used to efficiently handle the huge input datasets from the smart grid for boosting security. Then, a Kernel based Extreme Neural Network (KENN) technique is used to anticipate security vulnerabilities more effectively. The Polar Bear Optimization (PBO) algorithm is used to efficiently determine the parameters for the estimation of the radial basis function. Moreover, several types of smart grid network datasets are employed during analysis in order to examine the outcomes and efficiency of the proposed AdaBelief Exponential Feature Selection-Kernel based Extreme Neural Network (AEFS-KENN) big data security framework. The results reveal that the accuracy of the proposed AEFS-KENN is increased up to 99.5%, with precision and AUC of 99% for all smart grid big datasets used in this study.
Keywords: smart grid, Machine Learning (ML), big data analytics, AdaBelief Exponential Feature Selection (AEFS), Polar Bear Optimization (PBO), Kernel Extreme Neural Network (KENN)
11. Molecular Generation and Optimization of Molecular Properties Using a Transformer Model
Authors: Zhongyin Xu, Xiujuan Lei, Mei Ma, Yi Pan. Big Data Mining and Analytics, EI, CSCD, 2024, Issue 1, pp. 142-155 (14 pages).
Abstract: Generating novel molecules that satisfy specific properties is a challenging task in modern drug discovery, which requires the optimization of a specific objective while satisfying chemical rules. Herein, we aim to optimize the properties of a source molecule so that the generated molecule satisfies specific target properties. Matched Molecular Pairs (MMPs), which contain source and target molecules, are used herein, and logD and solubility are selected as the optimization properties. The main innovative work lies in the calculation related to a specific transformer from the perspective of matrix dimensions. Threshold intervals and state changes are then used to encode logD and solubility for subsequent tests. During the experiments, we screen the data based on the proportion of heavy atoms to all atoms in the groups, and select 12,365, 1,503, and 1,570 MMPs as the training, validation, and test sets, respectively. Transformer models are compared with baseline models with respect to their ability to generate molecules with specific properties. Results show that the transformer model can accurately optimize the source molecules to satisfy specific properties.
Keywords: molecular optimization, transformer, Matched Molecular Pairs (MMPs), logD, solubility
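The threshold-interval encoding of property changes can be illustrated with a small helper that buckets a logD or solubility delta into a token for the model input. The interval edges and token names below are assumptions, not the paper's encoding.

```python
# Hypothetical sketch: bucket a property delta (source vs. target molecule)
# into a discrete token via threshold intervals.
def encode_delta(name, delta, edges=(-1.0, -0.1, 0.1, 1.0)):
    """Map a numeric property change onto an interval label token."""
    labels = ["large_drop", "drop", "no_change", "rise", "large_rise"]
    for edge, label in zip(edges, labels):
        if delta <= edge:
            return f"{name}_{label}"
    return f"{name}_{labels[-1]}"

print(encode_delta("logD", 0.05))   # logD_no_change
print(encode_delta("logD", 2.3))    # logD_large_rise
```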
12. Data Temperature Informed Streaming for Optimising Large-Scale Multi-Tiered Storage
Authors: Dominic Davies-Tagg, Ashiq Anjum, Ali Zahir, Lu Liu, Muhammad Usman Yaseen, Nick Antonopoulos. Big Data Mining and Analytics, EI, CSCD, 2024, Issue 2, pp. 371-398 (28 pages).
Abstract: Data temperature is a response to the ever-growing amount of data. These data have to be stored, but it has been observed that only a small portion of the data is accessed more frequently at any one time. This leads to the concept of hot and cold data. Cold data can be migrated away from high-performance nodes to free up performance for higher-priority data. Existing studies classify hot and cold data primarily on the basis of data age and usage frequency. We present this as a limitation of current implementations of data temperature, because age automatically assumes that all new data have priority and because usage is purely reactive. We propose new variables and conditions that enable smarter decision-making on which data are hot or cold and allow greater user control over data location and movement. We identify new metadata variables and user-defined variables to extend the current data temperature value. We further establish rules and conditions for limiting unnecessary movement of the data, which helps to prevent wasted input/output (I/O) costs. We also propose a hybrid algorithm that combines the existing variables with the new variables and conditions into a single data temperature. The proposed system provides higher accuracy, increases performance, and gives greater user control for optimal positioning of data within multi-tiered storage solutions.
Keywords: data temperature, hot and cold data, multi-tiered storage, metadata variables, multi-temperature system
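A hybrid temperature score of the kind proposed, combining recency, access frequency, and a user-defined priority, might look like the sketch below. The weights, decay constant, and threshold are assumptions, not the paper's algorithm.

```python
# Hedged sketch of a hybrid data-temperature score. All constants assumed.
import math, time

def temperature(last_access, accesses_per_day, user_priority, w=(0.4, 0.4, 0.2)):
    """Score in [0, 1]; higher = hotter = keep on the fast tier."""
    age_days = (time.time() - last_access) / 86400
    recency = math.exp(-age_days / 7)            # decays over about a week
    freq = min(accesses_per_day / 100, 1.0)      # saturate at 100 accesses/day
    return w[0] * recency + w[1] * freq + w[2] * user_priority

# A 3-day-old object, 20 accesses/day, high user priority:
score = temperature(time.time() - 3 * 86400, 20, user_priority=0.9)
print(f"{score:.2f}", "-> hot" if score > 0.5 else "-> cold")
```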
13. Discriminatively Constrained Semi-Supervised Multi-View Nonnegative Matrix Factorization with Graph Regularization
Authors: Guosheng Cui, Ye Li, Jianzhong Li, Jianping Fan. Big Data Mining and Analytics, EI, CSCD, 2024, Issue 1, pp. 55-74 (20 pages).
Abstract: Nonnegative Matrix Factorization (NMF) is one of the most popular feature learning technologies in the field of machine learning and pattern recognition. It has been widely used and studied in multi-view clustering tasks because of its effectiveness. This study proposes a general semi-supervised multi-view nonnegative matrix factorization algorithm. This algorithm incorporates discriminative and geometric information on the data to learn a better-fused representation, and adopts a feature normalizing strategy to align the different views. Two specific implementations of the algorithm are developed to validate the effectiveness of the proposed framework: Graph regularization based Discriminatively Constrained Multi-View Nonnegative Matrix Factorization (GDCMVNMF) and Extended Multi-View Constrained Nonnegative Matrix Factorization (ExMVCNMF). The intrinsic connection between these two implementations is discussed, and the optimization based on multiplicative update rules is presented. Experiments on six datasets show that GDCMVNMF and ExMVCNMF outperform several representative unsupervised and semi-supervised multi-view NMF approaches.
Keywords: multi-view, semi-supervised clustering, discriminative information, geometric information, feature normalizing strategy
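As background, the classic single-view NMF multiplicative update rules (Lee and Seung) that the proposed multi-view variants build on can be written in a dozen lines of NumPy; the graph regularization and label constraints of GDCMVNMF/ExMVCNMF are omitted here.

```python
# Classic NMF multiplicative updates: X ≈ W H with W, H >= 0.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((50, 40))            # nonnegative data matrix
k = 5                               # latent dimensionality
W, H = rng.random((50, k)), rng.random((k, 40))

eps = 1e-10                         # avoids division by zero
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + eps)   # update coefficients
    W *= (X @ H.T) / (W @ H @ H.T + eps)   # update basis
print(np.linalg.norm(X - W @ H))           # reconstruction error decreases
```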
14. Multi-Smart Meter Data Encryption Scheme Based on Distributed Differential Privacy
Authors: Renwu Yan, Yang Zheng, Ning Yu, Cen Liang. Big Data Mining and Analytics, EI, CSCD, 2024, Issue 1, pp. 131-141 (11 pages).
Abstract: Under the general trend of the rapid development of smart grids, data security and privacy face serious challenges; protecting the privacy of individual users while obtaining user-aggregated data has attracted widespread attention. In this study, we propose an encryption scheme based on differential privacy for the problem of user privacy leakage when aggregating data from multiple smart meters. First, we use an improved homomorphic encryption method to realize the encrypted aggregation of users' data. Second, we propose a double-blind noise addition protocol that generates distributed noise through interaction between users and a cloud platform, to prevent semi-honest participants from stealing data by colluding with one another. Finally, the simulation results show that the proposed scheme can encrypt the transmission of data from multiple smart meters while satisfying the differential privacy mechanism. Even if an attacker has sufficient background knowledge, the security of each user's electricity information can be ensured.
Keywords: smart grid, homomorphic encryption, data aggregation, differential privacy, cloud computing
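The combination of additively homomorphic aggregation with distributed noise can be approximated with the python-paillier (phe) library. The sketch below is a simplification, not the paper's double-blind protocol: each meter adds a small Laplace noise share before encrypting, and the DP parameters are assumptions.

```python
# Illustrative: Paillier (additively homomorphic) aggregation of noisy
# meter readings. The per-meter noise split is a crude stand-in for the
# paper's distributed-noise protocol; epsilon and sensitivity are assumed.
import numpy as np
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=1024)

readings = [3.2, 1.7, 4.5, 2.9]                  # per-meter consumption (kWh)
epsilon, sensitivity = 1.0, 5.0                  # assumed DP parameters

# Each meter perturbs its reading with a noise share, then encrypts it
noisy = [r + np.random.laplace(0, sensitivity / (epsilon * len(readings)))
         for r in readings]
ciphertexts = [pub.encrypt(v) for v in noisy]

aggregate = sum(ciphertexts[1:], ciphertexts[0]) # homomorphic addition
print(priv.decrypt(aggregate))                   # noisy total; individual
                                                 # readings stay hidden
```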
15. E-Commerce Fraud Detection Based on Machine Learning Techniques: Systematic Literature Review
Authors: Abed Mutemi, Fernando Bacao. Big Data Mining and Analytics, EI, CSCD, 2024, Issue 2, pp. 419-444 (26 pages).
Abstract: The e-commerce industry's rapid growth, accelerated by the COVID-19 pandemic, has led to an alarming increase in digital fraud and associated losses. To establish a healthy e-commerce ecosystem, robust cyber security and anti-fraud measures are crucial. However, research on fraud detection systems has struggled to keep pace due to limited real-world datasets. Advances in artificial intelligence, Machine Learning (ML), and cloud computing have revitalized research and applications in this domain. While ML and data mining techniques are popular in fraud detection, specific reviews focusing on their application in e-commerce platforms like eBay and Facebook lack depth. Existing reviews provide broad overviews but fail to grasp the intricacies of ML algorithms in the e-commerce context. To bridge this gap, our study conducts a systematic literature review using the Preferred Reporting Items for Systematic reviews and Meta-Analysis (PRISMA) methodology. We aim to explore the effectiveness of these techniques in fraud detection within digital marketplaces and the broader e-commerce landscape. Understanding the current state of the literature and emerging trends is crucial given the rising fraud incidents and associated costs. Through our investigation, we identify research opportunities and provide insights to industry stakeholders on key ML and data mining techniques for combating e-commerce fraud. Our paper examines the research on these techniques as published in the past decade. Employing the PRISMA approach, we conducted a content analysis of 101 publications, identifying research gaps and recent techniques, and highlighting the increasing utilization of artificial neural networks in fraud detection within the industry.
Keywords: e-commerce, Machine Learning (ML), systematic review, fraud detection, organized retail fraud
16. Predicting Energy Consumption Using Stacked LSTM Snapshot Ensemble
Authors: Mona Ahamd Alghamdi, Abdullah S. AL-Malaise AL-Ghamdi, Mahmoud Ragab. Big Data Mining and Analytics, EI, CSCD, 2024, Issue 2, pp. 247-270 (24 pages).
Abstract: The ability to make accurate energy predictions while considering all related energy factors allows production plants, regulatory bodies, and governments to meet energy demand and assess the effects of energy-saving initiatives. When energy consumption falls within normal parameters, the developed model can be used to predict energy consumption and to develop improvement and mitigation measures. The objective of this model is to accurately predict energy consumption without data limitations and to provide results that are easily interpretable. The proposed model is an implementation of a stacked Long Short-Term Memory (LSTM) snapshot ensemble combined with the Fast Fourier Transform (FFT) and a meta-learner. Hebrail and Berard's Individual Household Electric Power Consumption (IHEPC) dataset, incorporated with weather data, is used to analyse the model's accuracy in predicting energy consumption. The model is trained, and the results measured using the Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and coefficient of determination (R²) metrics are 0.020, 0.013, 0.017, and 0.999, respectively. The stacked LSTM snapshot ensemble performs better than the compared models in terms of prediction accuracy and minimized errors. The results of this study show that the prediction accuracy is high and that the model is highly stable. High accuracy demonstrates accurate predictive ability, and together with high stability the model also achieves good interpretability, a property that is not typically accounted for in models; this study shows that it can be inferred.
Keywords: energy consumption, prediction, Artificial Intelligence (AI), Deep Learning (DL), snapshot ensemble
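The snapshot-ensemble idea, one saved model per cosine-annealing cycle with the saved "snapshots" combined at prediction time, can be sketched in Keras on toy data. The hyperparameters and architecture are assumptions, and a simple mean stands in for the paper's meta-learner.

```python
# Hedged sketch: cyclic cosine-annealed learning rate; save weights at the
# end of each cycle; average the snapshots' predictions.
import math
import numpy as np
import tensorflow as tf

X = np.random.rand(256, 24, 1)           # 24 past steps -> next value (toy)
y = np.random.rand(256, 1)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(24, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

cycles, epochs, lr0 = 3, 12, 1e-2        # assumed schedule
per_cycle = epochs // cycles
snapshots = []

def cosine_lr(epoch, _):
    t = (epoch % per_cycle) / per_cycle
    return 0.5 * lr0 * (1 + math.cos(math.pi * t))   # restarts each cycle

class Snapshot(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        if (epoch + 1) % per_cycle == 0:
            snapshots.append(model.get_weights())    # one snapshot per cycle

model.fit(X, y, epochs=epochs, verbose=0,
          callbacks=[tf.keras.callbacks.LearningRateScheduler(cosine_lr),
                     Snapshot()])

preds = []
for w in snapshots:                      # ensemble the saved snapshots
    model.set_weights(w)
    preds.append(model.predict(X[:5], verbose=0))
print(np.mean(preds, axis=0).ravel())    # mean in place of the meta-learner
```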
17. A Novel Recommendation Algorithm Integrates Resource Allocation and Resource Transfer in Weighted Bipartite Network
Authors: Qiang Sun, Leilei Shi, Lu Liu, Zixuan Han, Liang Jiang, Yan Wu, Yeling Zhao. Big Data Mining and Analytics, EI, CSCD, 2024, Issue 2, pp. 357-370 (14 pages).
Abstract: Grid-based recommendation algorithms view users and items as abstract nodes, and the information utilised by these algorithms is hidden in the selection relationships between users and items. Although these relationships can be easily handled, much useful information is overlooked, resulting in less accurate recommendation algorithms. The aim of this paper is to propose improvements to the standard substance diffusion algorithm: taking into account the influence of users' ratings on the recommended items, adding a moderating factor, and optimising the initial resource allocation vector and the resource transfer matrix of the recommendation algorithm. An average ranking score evaluation index is introduced to quantify user satisfaction with the recommendation results. Experiments are conducted on the MovieLens training dataset, and the experimental results show that the proposed algorithm outperforms classical collaborative filtering systems and network-structure-based recommendation systems in terms of recommendation accuracy and hit rate.
Keywords: cloud computing, link prediction, bipartite graph network, recommendation algorithm, cold start problem
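The standard substance-diffusion (ProbS) step that the paper improves on spreads resource item → user → item over the user-item adjacency matrix. A toy unweighted NumPy sketch follows; the paper's rating weights and moderating factor are omitted.

```python
# Standard substance diffusion on a bipartite user-item network:
# resource flows from the target user's items to users (divided by item
# degree), then back to items (divided by user degree).
import numpy as np

# A[u, i] = 1 if user u collected item i
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)

ku = A.sum(axis=1)                  # user degrees
ki = A.sum(axis=0)                  # item degrees

target = 0                          # recommend for user 0
f = A[target].copy()                # initial resource on collected items

f = A.T @ (A @ (f / ki) / ku)       # item -> user -> item diffusion

f[A[target] == 1] = -np.inf         # mask already-collected items
print("recommended item:", int(f.argmax()))
```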
18. QAR Data Imputation Using Generative Adversarial Network with Self-Attention Mechanism
Authors: Jingqi Zhao, Chuitian Rong, Xin Dang, Huabo Sun. Big Data Mining and Analytics, EI, CSCD, 2024, Issue 1, pp. 12-28 (17 pages).
Abstract: The Quick Access Recorder (QAR), an important device for storing data from various flight parameters, contains a large amount of valuable data and comprehensively records the real state of an airline flight. However, the recorded data have certain missing values due to factors such as weather and equipment anomalies. These missing values seriously affect the analysis of QAR data by aeronautical engineers, for example for airline flight scenario reproduction and flight safety status assessment. Therefore, imputing the missing values in QAR data, which can further guarantee the flight safety of airlines, is crucial. QAR data also have multivariate, multiprocess, and temporal features. Therefore, we innovatively propose the imputation models A-AEGAN ("A" denotes attention mechanism, "AE" denotes autoencoder, and "GAN" denotes generative adversarial network) and SA-AEGAN ("SA" denotes self-attention mechanism) for the missing values of QAR data, which can be effectively applied to QAR data. Specifically, we apply an innovative generative adversarial network to impute missing values from QAR data. An improved gated recurrent unit is then introduced as the neural unit of the GAN, which can successfully capture the temporal relationships in QAR data. In addition, we modify the basic structure of the GAN by using an autoencoder as the generator and a recurrent neural network as the discriminator. The missing values in the QAR data are imputed by using the adversarial relationship between the generator and the discriminator. We introduce an attention mechanism in the autoencoder to further improve the capability of the proposed model to capture the features of QAR data. Attention mechanisms can maintain the correlations among QAR data and improve the capability of the model to impute missing data. Furthermore, we improve the proposed model by integrating a self-attention mechanism to further capture the relationships between different parameters within the QAR data. Experimental results on real datasets demonstrate that the model can reasonably impute the missing values in QAR data with excellent results.
Keywords: multivariate time series, data imputation, self-attention, Generative Adversarial Network (GAN)
19. Incremental Data Stream Classification with Adaptive Multi-Task Multi-View Learning
Authors: Jun Wang, Maiwang Shi, Xiao Zhang, Yan Li, Yunsheng Yuan, Chengei Yang, Dongxiao Yu. Big Data Mining and Analytics, EI, CSCD, 2024, Issue 1, pp. 87-106 (20 pages).
Abstract: With the enhancement of data collection capabilities, massive streaming data have been accumulated in numerous application scenarios. Specifically, the issue of classifying data streams based on mobile sensors can be formalized as a multi-task multi-view learning problem, in which a specific task comprises multiple views with shared features collected from multiple sensors. Existing incremental learning methods are often single-task single-view, and thus cannot learn shared representations between related tasks and views. An adaptive multi-task multi-view incremental learning framework for data stream classification called MTMVIS is proposed to address the above challenges, utilizing the idea of multi-task multi-view learning. Specifically, an attention mechanism is first used to align the sensor data from the different views. In addition, MTMVIS uses adaptive Fisher regularization from the perspective of multi-task multi-view learning to overcome catastrophic forgetting in incremental learning. Results reveal that the proposed framework outperforms state-of-the-art baselines in experiments on two different datasets.
Keywords: data stream classification, mobile sensors, multi-task multi-view learning, incremental learning
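The Fisher-regularization idea (as in elastic weight consolidation) penalizes movement of parameters that mattered for earlier data chunks. A toy PyTorch fragment follows; the model, data, and penalty strength are assumptions, and MTMVIS's adaptive multi-task multi-view version is not reproduced.

```python
# Toy EWC-style Fisher regularization for incremental training.
import torch
import torch.nn as nn

model = nn.Linear(4, 2)

# After finishing a chunk: store params and a diagonal Fisher estimate
old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
x, y = torch.randn(32, 4), torch.randint(0, 2, (32,))
nn.functional.cross_entropy(model(x), y).backward()
fisher = {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}
model.zero_grad()

def ewc_penalty(lam=100.0):
    """0.5 * lam * sum_i F_i (theta_i - theta_i_old)^2: anchors weights
    that were important for the previous chunk."""
    return 0.5 * lam * sum(
        (fisher[n] * (p - old_params[n]) ** 2).sum()
        for n, p in model.named_parameters()
    )

# On the next chunk, total loss = task loss + Fisher penalty
x2, y2 = torch.randn(32, 4), torch.randint(0, 2, (32,))
total = nn.functional.cross_entropy(model(x2), y2) + ewc_penalty()
total.backward()
```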
20. AI-Based Advanced Approaches and Dry Eye Disease Detection Based on Multi-Source Evidence: Cases, Applications, Issues, and Future Directions
Authors: Mini Han Wang, Lumin Xing, Yi Pan, Feng Gu, Junbin Fang, Xiangrong Yu, Chi Pui Pang, Kelvin Kam-Lung Chong, Carol Yim-Lui Cheung, Xulin Liao, Xiaoxiao Fang, Jie Yang, Ruoyu Zhou, Xiaoshu Zhou, Fengling Wang, Wenjian Liu. Big Data Mining and Analytics, EI, CSCD, 2024, Issue 2, pp. 445-484 (40 pages).
Abstract: This study explores the potential of Artificial Intelligence (AI) in the early screening and prognosis of Dry Eye Disease (DED), aiming to enhance the accuracy of therapeutic approaches for eye-care practitioners. Despite the promising opportunities, challenges such as diverse diagnostic evidence, complex etiology, and interdisciplinary knowledge integration impede the interpretability, reliability, and applicability of AI-based DED detection methods. The research conducts a comprehensive review of the datasets, diagnostic evidence, and standards, as well as the advanced algorithms in AI-based DED detection over the past five years. The DED diagnostic methods are categorized into three groups based on their relationship with AI techniques: (1) those with ground truth and/or comparable standards, (2) potential AI-based methods with significant advantages, and (3) supplementary methods for AI-based DED detection. The study proposes suggested DED detection standards, the combination of multiple diagnostic evidence, and future research directions to guide further investigations. Ultimately, the research contributes to the advancement of ophthalmic disease detection by providing insights into knowledge foundations, advanced methods, challenges, and potential future perspectives, emphasizing the significant role of AI in both academic and practical aspects of ophthalmology.
Keywords: Artificial Intelligence (AI), ophthalmology, Dry Eye Disease (DED) detection, multi-source evidence