Journal Articles
212 articles found
1. Information Freshness-Oriented Trajectory Planning and Resource Allocation for UAV-Assisted Vehicular Networks
Authors: Hao Gai, Haixia Zhang, Shuaishuai Guo, Dongfeng Yuan. China Communications (SCIE, CSCD), 2023, Issue 5, pp. 244-262 (19 pages)
In this paper, multi-UAV trajectory planning and resource allocation are jointly investigated to improve the information freshness for vehicular networks, where the vehicles collect time-critical traffic information by on-board sensors and upload it to the UAVs through their allocated spectrum resource. We adopt the expected sum age of information (ESAoI) to measure the network-wide information freshness. ESAoI is jointly affected by both the UAV trajectories and the resource allocation, which are coupled with each other and make the analysis of ESAoI challenging. To tackle this challenge, we introduce a joint trajectory planning and resource allocation procedure, where the UAVs first fly to their destinations and then hover to allocate resource blocks (RBs) during a time-slot. Based on this procedure, we formulate a trajectory planning and resource allocation problem for ESAoI minimization. To solve this mixed integer nonlinear programming (MINLP) problem with hybrid decision variables, we propose TD3 trajectory planning and Round-robin resource allocation (TTP-RRA). Specifically, we exploit the exploration and learning ability of the twin delayed deep deterministic policy gradient (TD3) algorithm for UAV trajectory planning, and utilize the Round-robin rule for the optimal resource allocation. With TTP-RRA, the UAVs obtain their flight velocities by sensing the locations and the age of information (AoI) of the vehicles, then allocate the RBs to the vehicles in descending order of AoI until the remaining RBs are not sufficient to support another successful uploading. Simulation results demonstrate that TTP-RRA outperforms the baseline approaches in terms of ESAoI and average AoI (AAoI).
Keywords: information freshness for vehicular networks; multi-UAV trajectory planning; resource allocation; deep reinforcement learning
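As a rough illustration of the RB allocation rule described in the abstract above (serve vehicles in descending order of AoI until the remaining RBs cannot support another upload), here is a minimal Python sketch. The vehicle IDs, AoI values, and per-upload RB cost are hypothetical placeholders, not values from the paper.

```python
def allocate_rbs(vehicle_aoi, rb_budget, rbs_per_upload):
    """Allocate RBs to vehicles in descending order of AoI, stopping when the
    remaining budget cannot support another successful upload."""
    allocation = {}
    # Serve the most "stale" vehicles (largest AoI) first.
    for vid in sorted(vehicle_aoi, key=vehicle_aoi.get, reverse=True):
        if rb_budget < rbs_per_upload:
            break  # not enough RBs left for one more upload
        allocation[vid] = rbs_per_upload
        rb_budget -= rbs_per_upload
    return allocation

# Hypothetical example: 5 vehicles with different AoI values, 10 RBs, 3 RBs per upload.
print(allocate_rbs({"v1": 7, "v2": 2, "v3": 9, "v4": 4, "v5": 6},
                   rb_budget=10, rbs_per_upload=3))
```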
2. Integrated Sensing, Computing and Communications Technologies in IoV and V2X
Authors: Shanzhi Chen, F. Richard Yu, Weisong Shi, Changle Li. China Communications (SCIE, CSCD), 2023, Issue 3, pp. I0002-I0004 (3 pages)
With the development of new generation of information and communication technology, the Internet of Vehicles (IoV)/Vehicle-to-Everything (V2X), which realizes the connection between vehicle and X (i.e., vehicles, pedestrians, infrastructures, clouds, etc.), is playing an increasingly important role in improving traffic operation efficiency and driving safety as well as enhancing the intelligence level of social traffic services.
Keywords: driving; TRAFFIC; operation
3. Securing Stock Transactions Using Blockchain Technology: Architecture for Identifying and Reducing Vulnerabilities Linked to the Web Applications Used (MAHV-BC)
Authors: Kpinna Tiekoura Coulibaly, Abdou Maïga, Jerome Diako, Moustapha Diaby. Open Journal of Applied Sciences, 2023, Issue 11, pp. 2080-2093 (14 pages)
This paper deals with the security of stock market transactions within financial markets, particularly that of the West African Economic and Monetary Union (UEMOA). The confidentiality and integrity of sensitive data in the stock market being crucial, the implementation of robust systems which guarantee trust between the different actors is essential. We therefore proposed, after analyzing the limits of several security approaches in the literature, an architecture based on blockchain technology making it possible to both identify and reduce the vulnerabilities linked to the design, implementation, or use of the web applications used for transactions. Our proposal makes it possible, thanks to two-factor authentication via the blockchain, to strengthen the security of investors' accounts and the automated recording of transactions in the blockchain while guaranteeing the integrity of stock market operations. It also provides an application vulnerability report. To validate our approach, we compared our results to those of three other security tools across different metrics. Our approach achieved the best performance in each case.
Keywords: Stock Market Transactions; Action; Smart Contracts; ARCHITECTURE; Security; Vulnerability; Web Applications; Blockchain and Finance; Cryptography; Authentication; Data Integrity; Transaction Confidentiality; Trust; Economy
4. Image Splicing Forgery Detection Using Feature-Based of Sonine Functions and Deep Features
Authors: Ala’a R. Al-Shamasneh, Rabha W. Ibrahim. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 795-810 (16 pages)
The growing prevalence of fake images on the Internet and social media makes image integrity verification a crucial research topic. One of the most popular methods for manipulating digital images is image splicing, which involves copying a specific area from one image and pasting it into another. Attempts were made to mitigate the effects of image splicing, which continues to be a significant research challenge. This study proposes a new splicing detection model, combining Sonine functions-derived convex-based features and deep features. Two stages make up the proposed method. The first step entails feature extraction, followed by classification using the support vector machine (SVM) to differentiate authentic and spliced images. The proposed Sonine functions-based feature extraction model reveals the spliced texture details by extracting some clues about the probability of image pixels. The proposed model achieved an accuracy of 98.93% when tested with the CASIA V2.0 dataset (Chinese Academy of Sciences, Institute of Automation), a publicly available dataset for forgery classification. The experimental results show that, for image splicing forgery detection, the proposed Sonine functions-derived convex-based features and deep features outperform state-of-the-art techniques in terms of accuracy, precision, and recall. Overall, the obtained detection accuracy attests to the benefit of using the Sonine functions alongside deep feature representations. The study is limited to detecting whether tampering occurred and does not locate the regions where image tampering has taken place. Future research will need to look into advanced image analysis techniques that can offer a higher degree of accuracy in identifying and localizing tampering regions.
Keywords: Image forgery; image splicing; deep learning; Sonine functions
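The two-stage pipeline sketched in the abstract above (handcrafted features concatenated with deep features, then an SVM separating authentic from spliced images) can be outlined as follows. This is only a hedged sketch: the Sonine-function feature extractor and the deep-feature backbone are placeholders, and the data arrays are synthetic.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

def sonine_like_features(img):
    # Placeholder for the convex Sonine-function texture features proposed in the paper.
    return np.array([img.mean(), img.std(),
                     np.abs(np.diff(img, axis=0)).mean(),
                     np.abs(np.diff(img, axis=1)).mean()])

def deep_features(img):
    # Placeholder for pooled activations from a pretrained CNN backbone.
    return np.histogram(img, bins=32, range=(0.0, 1.0), density=True)[0]

rng = np.random.default_rng(0)
images = rng.random((200, 64, 64))        # hypothetical grayscale patches
labels = rng.integers(0, 2, size=200)     # 0 = authentic, 1 = spliced

# Concatenate handcrafted and deep features, then classify with an RBF SVM.
X = np.array([np.concatenate([sonine_like_features(im), deep_features(im)]) for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print("toy accuracy:", clf.score(X_te, y_te))
```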
5. An Implementation of Multiscale Line Detection and Mathematical Morphology for Efficient and Precise Blood Vessel Segmentation in Fundus Images
Authors: Syed Ayaz Ali Shah, Aamir Shahzad, Musaed Alhussein, Chuan Meng Goh, Khursheed Aurangzeb, Tong Boon Tang, Muhammad Awais. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2565-2583 (19 pages)
Diagnosing various diseases such as glaucoma, age-related macular degeneration, cardiovascular conditions, and diabetic retinopathy involves segmenting retinal blood vessels. The task is particularly challenging when dealing with color fundus images due to issues like non-uniform illumination, low contrast, and variations in vessel appearance, especially in the presence of different pathologies. Furthermore, the speed of the retinal vessel segmentation system is of utmost importance. With the surge of now available big data, the speed of the algorithm becomes increasingly important, carrying almost equivalent weightage to the accuracy of the algorithm. To address these challenges, we present a novel approach for retinal vessel segmentation, leveraging efficient and robust techniques based on multiscale line detection and mathematical morphology. Our algorithm's performance is evaluated on two publicly available datasets, namely the Digital Retinal Images for Vessel Extraction (DRIVE) dataset and the Structured Analysis of the Retina (STARE) dataset. The experimental results demonstrate the effectiveness of our method, with mean accuracy values of 0.9467 for DRIVE and 0.9535 for STARE, as well as sensitivity values of 0.6952 for DRIVE and 0.6809 for STARE. Notably, our algorithm exhibits competitive performance with state-of-the-art methods. Importantly, it operates at an average speed of 3.73 s per image for DRIVE and 3.75 s for STARE. It is worth noting that these results were achieved using Matlab scripts containing multiple loops, which suggests that the processing time can be further reduced by replacing loops with vectorization. Thus, the proposed algorithm can be deployed in real-time applications. In summary, our proposed system strikes a fine balance between swift computation and accuracy that is on par with the best available methods in the field.
Keywords: Line detector; vessel detection; LOCALIZATION; mathematical morphology; image processing
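A compact sketch of the multiscale line-detection idea referred to above is given below: correlate the inverted green channel with line kernels of several lengths and orientations and subtract the local window mean. The window size, scales, angles, and the way responses are combined are illustrative assumptions; the published pipeline adds morphology-based post-processing not reproduced here.

```python
import numpy as np
from scipy import ndimage

def line_kernel(length, angle_deg, size):
    """Binary kernel containing a centred line of a given length and orientation."""
    k = np.zeros((size, size))
    c = size // 2
    t = np.deg2rad(angle_deg)
    for r in np.linspace(-(length - 1) / 2, (length - 1) / 2, length):
        y, x = int(round(c + r * np.sin(t))), int(round(c + r * np.cos(t)))
        k[y, x] = 1.0
    return k / k.sum()

def multiscale_line_response(green_channel, window=15, scales=(1, 3, 5, 7, 9, 11, 13, 15)):
    inverted = 1.0 - green_channel                 # vessels appear bright after inversion
    window_mean = ndimage.uniform_filter(inverted, size=window)
    responses = []
    for length in scales:
        best = None
        for angle in range(0, 180, 15):            # 12 orientations
            avg_along_line = ndimage.correlate(inverted, line_kernel(length, angle, window))
            best = avg_along_line if best is None else np.maximum(best, avg_along_line)
        responses.append(best - window_mean)       # line response at this scale
    return np.mean(responses, axis=0)              # combine scales; threshold/clean up afterwards

# Hypothetical usage: img = skimage.io.imread("fundus.png")[..., 1] / 255.0
# vessel_map = multiscale_line_response(img) > 0.02
```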
6. Intrusion Detection System for Smart Industrial Environments with Ensemble Feature Selection and Deep Convolutional Neural Networks
Authors: Asad Raza, Shahzad Memon, Muhammad Ali Nizamani, Mahmood Hussain Shah. Intelligent Automation & Soft Computing, 2024, Issue 3, pp. 545-566 (22 pages)
Smart industrial environments use the Industrial Internet of Things (IIoT) for their routine operations and transform their industrial operations with intelligent and driven approaches. However, IIoT devices are vulnerable to cyber threats and exploits due to their connectivity with the internet. Traditional signature-based IDS are effective in detecting known attacks, but they are unable to detect unknown emerging attacks. Therefore, there is a need for an IDS which can learn from data and detect new threats. Ensemble Machine Learning (ML) and individual Deep Learning (DL) based IDS have been developed, and these individual models achieved low accuracy; however, their performance can be improved with the ensemble stacking technique. In this paper, we have proposed a Deep Stacked Neural Network (DSNN) based IDS, which consists of two stacked Convolutional Neural Network (CNN) models as base learners and Extreme Gradient Boosting (XGB) as the meta learner. The proposed DSNN model was trained and evaluated with the next-generation dataset, TON_IoT. Several pre-processing techniques were applied to prepare the dataset for the model, including ensemble feature selection and the SMOTE technique. Accuracy, precision, recall, F1-score, and false positive rates were used to evaluate the performance of the proposed ensemble model. Our experimental results showed that the accuracy for binary classification is 99.61%, which is better than the baseline individual DL and ML models. In addition, the proposed IDS model has been compared with similar models. The proposed DSNN achieved better performance metrics than the other models. The proposed DSNN model will be used to develop enhanced IDS for threat mitigation in smart industrial environments.
Keywords: Industrial internet of things; smart industrial environment; cyber-attacks; convolutional neural network; ensemble learning
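The stacking idea described above (CNN base learners whose predicted probabilities feed an XGBoost meta-learner) can be sketched as follows. Layer sizes, the synthetic feature matrix, and the train/hold-out split are illustrative assumptions and do not reproduce the paper's TON_IoT preprocessing, ensemble feature selection, or SMOTE steps.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow import keras
from xgboost import XGBClassifier

def make_cnn(n_features, kernel_size):
    model = keras.Sequential([
        keras.layers.Input(shape=(n_features, 1)),
        keras.layers.Conv1D(32, kernel_size, activation="relu", padding="same"),
        keras.layers.GlobalAveragePooling1D(),
        keras.layers.Dense(16, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

rng = np.random.default_rng(0)
X = rng.random((2000, 20)).astype("float32")       # hypothetical flow features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)          # hypothetical binary labels
X_base, X_meta, y_base, y_meta = train_test_split(X, y, test_size=0.4, random_state=0)

base_learners = [make_cnn(20, k) for k in (3, 5)]  # two CNNs with different kernel sizes
for cnn in base_learners:
    cnn.fit(X_base[..., None], y_base, epochs=3, batch_size=64, verbose=0)

# Stack the base learners' probabilities and fit the XGBoost meta-learner on held-out data.
meta_features = np.hstack([cnn.predict(X_meta[..., None], verbose=0) for cnn in base_learners])
meta = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
meta.fit(meta_features, y_meta)
print("meta-learner training accuracy:", meta.score(meta_features, y_meta))
```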
7. Insights into Manipulation: Unveiling Tampered Images Using Modified ELA, Deep Learning, and Explainable AI
Authors: Md. Mehedi Hasan, Md. Masud Rana, Abu Sayed Md. Mostafizur Rahaman. Journal of Computer and Communications, 2024, Issue 6, pp. 135-151 (17 pages)
Digital image forgery (DIF) is a prevalent issue in the modern age, where malicious actors manipulate images for various purposes, including deception and misinformation. Detecting such forgeries is a critical task for maintaining the integrity of digital content. This thesis explores the use of Modified Error Level Analysis (ELA) in combination with a Convolutional Neural Network (CNN), as well as a Feedforward Neural Network (FNN) model, to detect digital image forgeries. Additionally, incorporating Explainable Artificial Intelligence (XAI) into this research provided insights into the models' decision-making process. The study trains and tests the models on the CASIA2 dataset, covering both authentic and forged images. The CNN model is trained and evaluated, and Explainable AI (SHapley Additive exPlanations, SHAP) is incorporated to explain the model's predictions. Similarly, the FNN model is trained and evaluated, and XAI (SHAP) is incorporated to explain its predictions. The results obtained from the analysis reveal that the proposed approach using the CNN model is most effective in detecting image forgeries and provides valuable explanations for decision interpretability.
Keywords: IFD; DIF; ELA; CNN; FNN; XAI; SHAP; CASIA2.0
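Classic Error Level Analysis, which this work builds on, re-saves a JPEG at a known quality and amplifies the pixel-wise difference so regions with a different compression history stand out. A minimal sketch is shown below; the authors' "modified" ELA variant is not detailed in the abstract, and the quality and scaling factors are common illustrative choices.

```python
from PIL import Image, ImageChops, ImageEnhance
import io

def error_level_analysis(path, quality=90, scale=15.0):
    original = Image.open(path).convert("RGB")
    # Re-compress in memory at the chosen JPEG quality.
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    # Difference image, then brighten so small error levels become visible.
    diff = ImageChops.difference(original, resaved)
    return ImageEnhance.Brightness(diff).enhance(scale)

# Hypothetical usage: error_level_analysis("suspect.jpg").save("suspect_ela.png")
# The resulting ELA map is what would then be fed to the CNN/FNN classifier.
```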
8. Multi-Zone-Wise Blockchain Based Intrusion Detection and Prevention System for IoT Environment
Authors: Salaheddine Kably, Tajeddine Benbarrad, Nabih Alaoui, Mounir Arioua. Computers, Materials & Continua (SCIE, EI), 2023, Issue 1, pp. 253-278 (26 pages)
Blockchain merges technology with the Internet of Things (IoT) to address security and privacy-related issues. However, conventional blockchain suffers from scalability issues due to its linear structure, which increases the storage overhead, and intrusion detection is limited by attack severity, leading to performance degradation. To overcome these issues, we propose the MZWB (Multi-Zone-Wise Blockchain) model. Initially, all IoT nodes in the network establish their legitimacy by using the Enhanced Blowfish Algorithm (EBA), considering several metrics. The legitimate nodes are then used for network construction and management using a Bayesian Directed Acyclic Graph (B-DAG), which also considers several metrics. Intrusion detection is performed in two tiers. In the first tier, a Deep Convolutional Neural Network (DCNN) analyzes the data packets by extracting packet flow features to classify the packets as normal, malicious, or suspicious. In the second tier, the suspicious packets are classified as normal or malicious using a Generative Adversarial Network (GAN). Finally, intrusion scenario reconstruction is performed to reduce the severity of attacks, in which Improved Monkey Optimization (IMO) is used for attack path discovery, considering several metrics, and a graph-cut algorithm is utilized for attack scenario reconstruction (ASR). The UNSW-NB15 and BoT-IoT datasets are used, and the MZWB method is simulated using a network simulator (NS-3.26). Its performance is compared with previous work using metrics such as energy consumption, storage overhead, accuracy, response time, attack detection rate, precision, recall, and F-measure. The simulation results show that the proposed MZWB method achieves higher performance than existing works.
Keywords: IoT; multi-zone-wise blockchain; intrusion detection and prevention system; edge computing; network graph construction; IDS; intrusion scenario reconstruction
9. Toward Secure Software-Defined Networks Using Machine Learning: A Review, Research Challenges, and Future Directions
Authors: Muhammad Waqas Nadeem, Hock Guan Goh, Yichiet Aun, Vasaki Ponnusamy. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 11, pp. 2201-2217 (17 pages)
Over the past few years, rapid advancements in the internet and communication technologies have led to increasingly intricate and diverse networking systems. As a result, greater intelligence is necessary to effectively manage, optimize, and maintain these systems. Due to their distributed nature, machine learning models are challenging to deploy in traditional networks. However, Software-Defined Networking (SDN) presents an opportunity to integrate intelligence into networks by offering a programmable architecture that separates the data and control planes. SDN provides a centralized network view and allows for dynamic updates of flow rules and software-based traffic analysis. While the programmable nature of SDN makes it easier to deploy machine learning techniques, the centralized control logic also makes it vulnerable to cyberattacks. To address these issues, recent research has focused on developing powerful machine-learning methods for detecting and mitigating attacks in SDN environments. This paper highlights the countermeasures for cyberattacks on SDN and how current machine learning-based solutions can overcome these emerging issues. We also discuss the pros and cons of using machine learning algorithms for detecting and mitigating these attacks. Finally, we highlight research issues, gaps, and challenges in developing machine learning-based solutions to secure the SDN controller, to help the research and network community develop more robust and reliable solutions.
Keywords: Botnet attack; deep learning; distributed denial of service; machine learning; network security; software-defined network
10. Combining Entropy Optimization and Sobel Operator for Medical Image Fusion
Authors: Nguyen Tu Trung, Tran Thi Ngan, Tran Manh Tuan, To Huu Nguyen. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 1, pp. 535-544 (10 pages)
Fusing medical images is a topic of interest in medical image processing. It is achieved through fusing information from multimodality images for the purpose of increasing clinical diagnosis accuracy. This fusion aims to improve the image quality and preserve the specific features. Methods of medical image fusion generally use knowledge from many different fields such as clinical medicine, computer vision, digital imaging, machine learning, and pattern recognition to fuse different medical images. There are two main approaches to fusing images: the spatial domain approach and the transform domain approach. This paper proposes a new algorithm to fuse multimodal images. The algorithm is based on entropy optimization and the Sobel operator. A wavelet transform is used to split the input images into components over the low and high frequency domains. Then, two fusion rules are used for obtaining the fused images. The first rule, based on the Sobel operator, is used for high frequency components. The second rule, based on entropy optimization using the Particle Swarm Optimization (PSO) algorithm, is used for low frequency components. The proposed algorithm is implemented on images related to central nervous system diseases. The experimental results show that the proposed algorithm is better than some recent methods in terms of brightness level, contrast, entropy, gradient, visual information fidelity for fusion (VIFF), and Feature Mutual Information (FMI) indices.
Keywords: Medical image fusion; WAVELET; entropy optimization; PSO; Sobel operator
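The two fusion rules described above can be sketched with a single-level wavelet decomposition: high-frequency sub-bands are selected by Sobel gradient strength, and the low-frequency sub-band is a weighted average whose weight maximizes the entropy of the fused approximation. For brevity the weight is found by a coarse grid search rather than PSO, and the wavelet family is an illustrative choice.

```python
import numpy as np
import pywt
from scipy import ndimage

def sobel_magnitude(band):
    return np.hypot(ndimage.sobel(band, axis=0), ndimage.sobel(band, axis=1))

def entropy(arr, bins=64):
    hist, _ = np.histogram(arr, bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return -np.sum(p * np.log2(p))

def fuse(img_a, img_b, wavelet="db2"):
    cA1, highs1 = pywt.dwt2(img_a, wavelet)
    cA2, highs2 = pywt.dwt2(img_b, wavelet)
    # Rule 1 (high frequency): keep the coefficient with the stronger Sobel response.
    fused_highs = tuple(np.where(sobel_magnitude(h1) >= sobel_magnitude(h2), h1, h2)
                        for h1, h2 in zip(highs1, highs2))
    # Rule 2 (low frequency): weighted average, weight chosen to maximize entropy.
    weights = np.linspace(0.0, 1.0, 21)
    best_w = max(weights, key=lambda w: entropy(w * cA1 + (1 - w) * cA2))
    fused_low = best_w * cA1 + (1 - best_w) * cA2
    return pywt.idwt2((fused_low, fused_highs), wavelet)

# Hypothetical usage with two registered modalities (e.g., MRI and CT slices) as 2-D arrays:
# fused = fuse(mri_slice.astype(float), ct_slice.astype(float))
```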
11. Music Genre Classification Using DenseNet and Data Augmentation
Authors: Dao Thi Le Thuy, Trinh Van Loan, Chu Ba Thanh, Nguyen Hieu Cuong. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 10, pp. 657-674 (18 pages)
It can be said that the automatic classification of musical genres plays a very important role in the current digital technology world, in which the creation, distribution, and enjoyment of musical works have undergone huge changes. As the number of music products increases daily and the music genres are extremely rich, storing, classifying, and searching these works manually becomes difficult, if not impossible. Automatic classification of musical genres will contribute to making this possible. The research presented in this paper proposes an appropriate deep learning model along with an effective data augmentation method to achieve high classification accuracy for music genre classification using the Small Free Music Archive (FMA) data set. For Small FMA, it is more efficient to augment the data by generating an echo rather than pitch shifting. The research results show that the DenseNet121 model and data augmentation methods, such as noise addition and echo generation, have a classification accuracy of 98.97% for the Small FMA data set, while this data set lowered the sampling frequency to 16000 Hz. The classification accuracy of this study outperforms that of the majority of the previous results on the same Small FMA data set.
Keywords: Music genre classification; Small FMA; DenseNet; CNN; GRU; data augmentation
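The two waveform-level augmentations mentioned above, noise addition and echo generation, can be sketched as follows for a mono signal before spectrogram features are computed for a DenseNet-style classifier. The delay, decay, and signal-to-noise ratio are illustrative assumptions.

```python
import numpy as np

def add_noise(signal, snr_db=20.0, rng=None):
    """Add white Gaussian noise at a target signal-to-noise ratio (in dB)."""
    rng = rng or np.random.default_rng()
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return signal + rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)

def add_echo(signal, sample_rate=16000, delay_s=0.25, decay=0.4):
    """Mix in a delayed, attenuated copy of the signal (a simple single-tap echo)."""
    delay = int(delay_s * sample_rate)
    echoed = np.copy(signal)
    echoed[delay:] += decay * signal[:-delay]
    return echoed / np.max(np.abs(echoed))   # renormalize to avoid clipping

# Hypothetical usage on a 16 kHz clip loaded e.g. with librosa.load(path, sr=16000):
# augmented_versions = [add_noise(clip), add_echo(clip)]
```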
12. Automatic Recognition of Construction Worker Activities Using Deep Learning Approaches and Wearable Inertial Sensors
Authors: Sakorn Mekruksavanich, Anuchit Jitpattanakul. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 5, pp. 2111-2128 (18 pages)
The automated evaluation and analysis of employee behavior in an Industry 4.0-compliant manufacturing firm are vital for the rapid and accurate diagnosis of work performance, particularly during the training of a new worker. Various techniques for identifying and detecting worker performance in industrial applications are based on computer vision techniques. Despite widespread computer vision-based approaches, it is challenging to develop technologies that assist the automated monitoring of worker actions at external working sites where camera deployment is problematic. Through the use of wearable inertial sensors, we propose a deep learning method for automatically recognizing the activities of construction workers. The suggested method incorporates a convolutional neural network, residual connection blocks, and multi-branch aggregate transformation modules for high-performance recognition of complicated activities such as construction worker tasks. The proposed approach has been evaluated using standard performance measures, such as precision, F1-score, and AUC, using a publicly available benchmark dataset known as VTT-ConIoT, which contains genuine construction work activities. In addition, standard deep learning models (CNNs, RNNs, and hybrid models) were developed in different empirical circumstances to compare them to the proposed model. With an average accuracy of 99.71% and an average F1-score of 99.71%, the experimental findings revealed that the suggested model could accurately recognize the actions of construction workers. Furthermore, we examined the impact of window size and sensor position on the identification efficiency of the proposed method.
Keywords: Complex human activity recognition; wearable inertial sensors; deep learning; construction workers; automatic recognition
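A sketch of a 1-D residual block with multi-branch aggregated transformation (grouped convolutions in the ResNeXt style), the kind of building block the abstract describes for windows of wearable inertial data, is shown below. Filter counts, cardinality, window length, and class count are illustrative assumptions rather than the paper's exact architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers

def aggregated_residual_block(x, filters=64, cardinality=8, kernel_size=5):
    shortcut = x
    # Project, then split the transformation across `cardinality` grouped branches.
    y = layers.Conv1D(filters, 1, padding="same", activation="relu")(x)
    y = layers.Conv1D(filters, kernel_size, padding="same", groups=cardinality,
                      activation="relu")(y)
    y = layers.Conv1D(filters, 1, padding="same")(y)
    if shortcut.shape[-1] != filters:                  # match channels for the residual add
        shortcut = layers.Conv1D(filters, 1, padding="same")(shortcut)
    return layers.ReLU()(layers.Add()([shortcut, y]))

# Hypothetical model for 128-sample windows of 6-axis accelerometer/gyroscope data,
# classifying among 16 construction activities (a VTT-ConIoT-like setting).
inputs = keras.Input(shape=(128, 6))
x = aggregated_residual_block(inputs)
x = aggregated_residual_block(x)
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(16, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```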
13. Cache in fog computing design, concepts, contributions, and security issues in machine learning prospective
Authors: Muhammad Ali Naeem, Yousaf Bin Zikria, Rashid Ali, Usman Tariq, Yahui Meng, Ali Kashif Bashir. Digital Communications and Networks (SCIE, CSCD), 2023, Issue 5, pp. 1033-1052 (20 pages)
The massive growth of diversified smart devices and continuous data generation poses a challenge to communication architectures. To deal with this problem, communication networks consider fog computing as one of promising technologies that can improve overall communication performance. It brings on-demand services proximate to the end devices and delivers the requested data in a short time. Fog computing faces several issues such as latency, bandwidth, and link utilization due to limited resources and the high processing demands of end devices. To this end, fog caching plays an imperative role in addressing data dissemination issues. This study provides a comprehensive discussion of fog computing, Internet of Things (IoTs) and the critical issues related to data security and dissemination in fog computing. Moreover, we determine the fog-based caching schemes and contribute to deal with the existing issues of fog computing. Besides, this paper presents a number of caching schemes with their contributions, benefits, and challenges to overcome the problems and limitations of fog computing. We also identify machine learning-based approaches for cache security and management in fog computing, as well as several prospective future research directions in caching, fog computing, and machine learning.
Keywords: Internet of things; Cloud computing; Fog computing; CACHING; LATENCY
14. Age and Gender Classification Using Backpropagation and Bagging Algorithms
Authors: Ammar Almomani, Mohammed Alweshah, Waleed Alomoush, Mohammad Alauthman, Aseel Jabai, Anwar Abbass, Ghufran Hamad, Meral Abdalla, Brij B. Gupta. Computers, Materials & Continua (SCIE, EI), 2023, Issue 2, pp. 3045-3062 (18 pages)
Voice classification is important in creating more intelligent systems that help with student exams, identifying criminals, and security systems. The main aim of the research is to develop a system able to predict and classify gender, age, and accent. So, a new system called Classifying Voice Gender, Age, and Accent (CVGAA) is proposed. Backpropagation and bagging algorithms are designed to improve voice recognition systems that incorporate sensory voice features such as rhythm-based features used to train the device to distinguish between the two gender categories. It has high precision compared to other algorithms used in this problem, as the adaptive backpropagation algorithm had an accuracy of 98% and the bagging algorithm had an accuracy of 98.10% on the gender identification data. Bagging has the best accuracy among all algorithms, with 55.39% accuracy on the common voice dataset for age classification, and an accent accuracy of 78.94% on a speech accent dataset.
Keywords: Classify voice gender; ACCENT; age; bagging algorithms; back propagation algorithms; AI classifiers
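The bagging side of the approach can be illustrated with scikit-learn: an ensemble of backpropagation-trained neural networks (MLPs) wrapped in a BaggingClassifier to predict gender from acoustic features. The feature matrix and labels below are synthetic stand-ins for the rhythm-based voice features mentioned in the abstract.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((1000, 20))                          # hypothetical acoustic features per clip
y = (X[:, 2] + 0.5 * X[:, 7] > 0.9).astype(int)     # hypothetical gender labels (0/1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# Each bag trains its own small backpropagation network on a bootstrap sample.
bagged_mlp = BaggingClassifier(
    estimator=MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    n_estimators=10,
    random_state=0,
)
bagged_mlp.fit(X_tr, y_tr)
print("toy gender-classification accuracy:", bagged_mlp.score(X_te, y_te))
```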
15. Multivariate Aggregated NOMA for Resource Aware Wireless Network Communication Security
Authors: V. Sridhar, K. V. Ranga Rao, Saddam Hussain, Syed Sajid Ullah, Roobaea Alroobaea, Maha Abdelhaq, Raed Alsaqour. Computers, Materials & Continua (SCIE, EI), 2023, Issue 1, pp. 1693-1708 (16 pages)
Nonorthogonal Multiple Access (NOMA) is incorporated into wireless network systems to achieve better connectivity, spectral and energy effectiveness, higher data transfer rates, and high quality of service (QoS). In order to improve throughput and minimize latency, a Multivariate Renkonen Regressive Weighted Preference Bootstrap Aggregation based Nonorthogonal Multiple Access (MRRWPBA-NOMA) technique is introduced for network communication. In the downlink transmission, each mobile device's resources and their characteristics like energy, bandwidth, and trust are measured. Then, Weighted Preference Bootstrap Aggregation is applied to recognize the resource-efficient mobile devices for aware data transmission by constructing different weak hypotheses, i.e., Multivariate Renkonen Regression functions. Based on the classification, resource and trust-aware devices are selected for transmission. Simulation of the proposed MRRWPBA-NOMA technique and existing methods is carried out with different metrics such as data delivery ratio, throughput, latency, packet loss rate, energy efficiency, and signaling overhead. The simulation results indicate that the proposed MRRWPBA-NOMA outperforms the conventional methods.
Keywords: Mobile network; multivariate renkonen regression; weighted preference bootstrap aggregation; resource-aware secure data communication; NOMA
16. Pre-Impact and Impact Fall Detection Based on a Multimodal Sensor Using a Deep Residual Network
Authors: Narit Hnoohom, Sakorn Mekruksavanich, Anuchit Jitpattanakul. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 6, pp. 3371-3385 (15 pages)
Falls are the contributing factor to both fatal and nonfatal injuries in the elderly. Therefore, pre-impact fall detection, which identifies a fall before the body collides with the floor, would be essential. Recently, researchers have turned their attention from post-impact fall detection to pre-impact fall detection. Pre-impact fall detection solutions typically use either a threshold-based or machine learning-based approach, although the threshold value would be difficult to accurately determine in threshold-based methods. Moreover, while additional features could sometimes assist in categorizing falls and non-falls more precisely, the estimated determination of the significant features would be too time-intensive, thus using a significant portion of the algorithm's operating time. In this work, we developed a deep residual network with aggregation transformation called FDSNeXt for a pre-impact fall detection approach employing wearable inertial sensors. The proposed network was introduced to address the limitations of feature extraction, threshold definition, and algorithm complexity. After training on a large-scale motion dataset, the KFall dataset, and straightforward evaluation with standard metrics, the proposed approach identified pre-impact and impact falls with high accuracy of 91.87% and 92.52%, respectively. In addition, we have investigated the fall detection performances of three state-of-the-art deep learning models, namely a convolutional neural network (CNN), a long short-term memory neural network (LSTM), and a hybrid model (CNN-LSTM). The experimental results showed that the proposed FDSNeXt model outperformed these deep learning models (CNN, LSTM, and CNN-LSTM) with significant improvements.
Keywords: Pre-impact fall detection; deep learning; wearable sensor; deep residual network
17. Considerations for a Planned Democratizing Data Framework for Valid and Trusted Data
Authors: Tambe Mariam Takang, Austin Oguejiofor Amaechi. Journal of Data Analysis and Information Processing, 2023, Issue 3, pp. 240-261 (22 pages)
A key requirement of today's fast changing business outcome and innovation environment is the ability of organizations to adapt dynamically in an effective and efficient manner. Becoming a data-driven decision-making organization plays a crucially important role in addressing such adaptation requirements. The notion of "data democratization" has emerged as a mechanism with which organizations can address data-driven decision-making process issues and cross-pollinate data in ways that uncover actionable insights. We define data democratization as an attitude focused on curiosity, learning, and experimentation for delivering trusted data for trusted insights to a broad range of authorized stakeholders. In this paper, we propose a general indicator framework for data democratization by highlighting success factors that should not be overlooked in today's data driven economy. In this practice-based research, these enablers are grouped into six broad building blocks: 1) "ethical guidelines, business context and value", 2) "data leadership and data culture", 3) "data literacy and business knowledge", 4) "data wrangling, trustworthy & standardization", 5) "sustainable data platform, access, & analytical tool", 6) "intelligent data governance and privacy". As an attitude, once it is planned and built, data democratization will need to be maintained. The utility of the approach is demonstrated through a case study for a Cameroon-based start-up company that has ongoing data analytics projects. Our findings advance the concepts of data democratization and contribute to data free flow with trust.
Keywords: Data Democratization; Trusted Data; Design Process; Digital Innovation; Literature Reviews
18. Evaluation of Linear Precoding Schemes for Cooperative Multi-Cell MU MIMO in Future Mobile Communication Systems
Authors: Juma Said Ally. Journal of Computer and Communications, 2023, Issue 6, pp. 28-42 (15 pages)
In mobile communication systems, inter-cell interference becomes one of the challenges that degrade the system's performance, especially in regions with massive numbers of mobile users. Precoding schemes were proposed to mitigate interference between the base stations (inter-cell). These schemes are categorized into linear and non-linear; this study focused on linear precoding schemes, which are grouped into three types, namely Zero Forcing (ZF), Block Diagonalization (BD), and Signal Leakage Noise Ratio (SLNR). The study considered the cooperative multi-cell Multiple Input Multiple Output (MIMO) system, whereby each base station serves more than one mobile station and all base stations in the system assist each other by sharing the Channel State Information (CSI). Based on the multi-cell multiuser MIMO system, each base station in the cell is intended to maximize the data transmission rate of its mobile users by increasing the Signal-to-Interference-plus-Noise Ratio after the interference has been mitigated by the linear precoding schemes at the transmitter. Moreover, these schemes use different approaches to mitigate interference. This study mainly concentrates on evaluating the performance of these schemes over channel distribution models such as Rayleigh and Rician, in the presence of noise errors. The results show that the SLNR scheme outperforms the ZF and BD schemes in all scenarios. This implies that when the value of SNR increased, the performance of SLNR increased by 21.4% and 45.7% compared with ZF and BD, respectively.
Keywords: Precoding Schemes; Cooperative Networks; Interference; Multi-Input Multi-Output (MIMO); Multi-Cell and Multiuser
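Two of the precoders compared above can be written out numerically for a single cell with a multi-antenna base station and single-antenna users: Zero Forcing (channel inversion) and SLNR-based beamforming via a principal generalized eigenvector. Antenna count, user count, and noise power are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_users, noise_power = 8, 4, 0.1
# Rayleigh-fading downlink channel: row k is user k's 1 x n_tx channel vector.
H = (rng.normal(size=(n_users, n_tx)) + 1j * rng.normal(size=(n_users, n_tx))) / np.sqrt(2)

# Zero Forcing: pseudo-inverse of the stacked channel, one normalized column per user.
W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W_zf /= np.linalg.norm(W_zf, axis=0, keepdims=True)

# SLNR beamforming: for each user k, maximize |h_k w|^2 / (sum_{j!=k} |h_j w|^2 + noise).
W_slnr = np.zeros((n_tx, n_users), dtype=complex)
for k in range(n_users):
    h_k = H[k:k + 1, :]
    H_others = np.delete(H, k, axis=0)
    A = h_k.conj().T @ h_k                                           # desired-signal term
    B = H_others.conj().T @ H_others + noise_power * np.eye(n_tx)    # leakage + noise term
    eigvals, eigvecs = np.linalg.eig(np.linalg.inv(B) @ A)           # generalized eigenproblem
    w = eigvecs[:, np.argmax(eigvals.real)]
    W_slnr[:, k] = w / np.linalg.norm(w)

# Per-user SINR for a given precoder (equal power allocation), used to compare schemes.
def sinr(H, W):
    gains = np.abs(H @ W) ** 2
    signal = np.diag(gains)
    interference = gains.sum(axis=1) - signal
    return signal / (interference + noise_power)

print("ZF   SINR per user:", np.round(sinr(H, W_zf), 2))
print("SLNR SINR per user:", np.round(sinr(H, W_slnr), 2))
```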
19. Machine Learning-Based Alarms Classification and Correlation in an SDH/WDM Optical Network to Improve Network Maintenance
Authors: Deussom Djomadji Eric Michel, Takembo Ntahkie Clovis, Tchapga Tchito Christian, Arabo Mamadou, Michael Ekonde Sone. Journal of Computer and Communications, 2023, Issue 2, pp. 122-141 (20 pages)
The evolution of telecommunications has allowed the development of broadband services based mainly on fiber optic backbone networks. The operation and maintenance of these optical networks is made possible by using supervision platforms that generate alarms that can be archived in the form of log files. But analyzing the alarms in the log files is a laborious and difficult task for the engineers, who need a degree of expertise. Identifying failures and their root cause can be time consuming and impact the quality of service, network availability, and the service level agreements signed between the operator and its customers. Therefore, it is more than important to study the different possibilities of alarm classification and to use machine learning algorithms for alarm correlation in order to determine the root causes of problems faster. We conducted a research case study on one of the operators in Cameroon that operates an optical backbone based on SDH and WDM technologies, with data collected from 2016-03-28 to 2022-09-01 comprising 7201 rows and 18 columns. In this paper, we classify alarms according to different criteria and use two unsupervised learning algorithms, namely the K-Means algorithm and DBSCAN, to establish correlations between alarms in order to identify root causes of problems and reduce the time to troubleshoot. To achieve this objective, log files were exploited in order to obtain the root causes of the alarms, and then the K-Means algorithm and DBSCAN were used to evaluate their performance and their capability to identify the root cause of alarms in the optical network.
Keywords: Optical Network; ALARMS; Log Files; Root Cause Analysis; Machine Learning
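The clustering step described above can be sketched with scikit-learn's K-Means and DBSCAN: encode a few alarm attributes numerically, scale them, and group alarms so that co-occurring ones point toward a common root cause. The column names, sample records, and parameter values are hypothetical; the real log files have 18 columns.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.cluster import KMeans, DBSCAN

# Hypothetical alarm-log extract.
alarms = pd.DataFrame({
    "severity":   [3, 3, 1, 2, 3, 1, 2, 3],
    "hour":       [2, 2, 14, 14, 2, 15, 14, 3],
    "ne_type":    ["WDM", "WDM", "SDH", "SDH", "WDM", "SDH", "SDH", "WDM"],
    "alarm_name": ["LOS", "LOS", "AIS", "LOF", "LOS", "AIS", "LOF", "LOS"],
})

# Scale numeric fields and one-hot encode categorical fields.
encode = ColumnTransformer([
    ("num", StandardScaler(), ["severity", "hour"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["ne_type", "alarm_name"]),
])
X = encode.fit_transform(alarms)

alarms["kmeans_cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
alarms["dbscan_cluster"] = DBSCAN(eps=1.5, min_samples=2).fit_predict(X)  # -1 marks noise
print(alarms)
```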
20. Correlation Analysis of Turbidity and Total Phosphorus in Water Quality Monitoring Data
Authors: Wenwu Tan, Jianjun Zhang, Xing Liu, Jiang Wu, Yifu Sheng, Ke Xiao, Li Wang, Haijun Lin, Guang Sun, Peng Guo. Journal on Big Data, 2023, Issue 1, pp. 85-97 (13 pages)
At present, water pollution has become an important factor affecting and restricting national and regional economic development. Total phosphorus is one of the main sources of water pollution and eutrophication, so the prediction of total phosphorus in water quality is of good research significance. This paper selects total phosphorus and turbidity data for analysis by crawling the data of a water quality monitoring platform. By constructing the attribute object mapping relationship, the correlation between the two indicators was analyzed and used to predict future data. Firstly, after cleaning outliers, the monthly mean and daily mean concentrations of total phosphorus and turbidity were calculated, and the correlation between them was analyzed. Secondly, the correlation coefficients at different times and frequencies were used to predict the values for the next five days, and the data trend was predicted with Python visualization. Finally, the real values were compared with the predicted values, and the results showed that the correlation between total phosphorus and turbidity is useful in predicting the water quality.
Keywords: Correlation analysis; CLUSTER; water quality predict; water quality monitoring data
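The correlation step described above can be sketched with pandas: resample the two indicators to daily and monthly means after dropping outliers, then compute their Pearson correlation and project a short trend forward. The synthetic data below stands in for the crawled monitoring records, and the five-day projection is only a naive stand-in for the paper's prediction method.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the crawled monitoring data (hourly total phosphorus and turbidity);
# in practice this frame would be loaded from the platform's exported records.
rng = np.random.default_rng(0)
idx = pd.date_range("2023-01-01", periods=24 * 120, freq="h")
turbidity = 5 + np.cumsum(rng.normal(0, 0.05, size=idx.size))
df = pd.DataFrame({
    "turbidity": turbidity,
    "total_phosphorus": 0.02 * turbidity + rng.normal(0, 0.02, size=idx.size),
}, index=idx)

# Simple outlier cleaning: drop points more than 3 standard deviations from the column mean.
cleaned = df[((df - df.mean()).abs() <= 3 * df.std()).all(axis=1)]

daily = cleaned.resample("D").mean()
monthly = cleaned.resample("MS").mean()
print("daily correlation:  ", daily["total_phosphorus"].corr(daily["turbidity"]))
print("monthly correlation:", monthly["total_phosphorus"].corr(monthly["turbidity"]))

# Naive five-day projection from the recent daily trend.
trend = daily["total_phosphorus"].diff().tail(30).mean()
forecast = daily["total_phosphorus"].iloc[-1] + trend * np.arange(1, 6)
print("next 5 days (total phosphorus):", np.round(forecast, 4))
```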