In this paper, multi-UAV trajectory planning and resource allocation are jointly investigated to improve the information freshness for vehicular networks, where the vehicles collect time-critical traffic information by on-board sensors and upload it to the UAVs through their allocated spectrum resources. We adopt the expected sum age of information (ESAoI) to measure the network-wide information freshness. ESAoI is jointly affected by both the UAV trajectories and the resource allocation, which are coupled with each other and make the analysis of ESAoI challenging. To tackle this challenge, we introduce a joint trajectory planning and resource allocation procedure, where the UAVs first fly to their destinations and then hover to allocate resource blocks (RBs) during a time slot. Based on this procedure, we formulate a trajectory planning and resource allocation problem for ESAoI minimization. To solve the mixed integer nonlinear programming (MINLP) problem with hybrid decision variables, we propose TD3 trajectory planning and Round-robin resource allocation (TTP-RRA). Specifically, we exploit the exploration and learning ability of the twin delayed deep deterministic policy gradient (TD3) algorithm for UAV trajectory planning, and utilize the Round-robin rule for optimal resource allocation. With TTP-RRA, the UAVs obtain their flight velocities by sensing the locations and the age of information (AoI) of the vehicles, then allocate the RBs to the vehicles in descending order of AoI until the remaining RBs are not sufficient to support another successful upload. Simulation results demonstrate that TTP-RRA outperforms the baseline approaches in terms of ESAoI and average AoI (AAoI).
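The AoI-descending allocation rule described in this abstract can be sketched as follows. This is a minimal illustration only: the vehicle IDs, AoI values, and per-upload RB cost are hypothetical, not the paper's simulation setup.

```python
def allocate_rbs(vehicle_aoi, total_rbs, rbs_per_upload):
    """Allocate resource blocks to vehicles in descending order of AoI,
    stopping once the remaining RBs cannot support another upload."""
    # Serve the stalest vehicle (highest AoI) first.
    order = sorted(vehicle_aoi, key=vehicle_aoi.get, reverse=True)
    allocation, remaining = {}, total_rbs
    for v in order:
        if remaining < rbs_per_upload:
            break  # not enough RBs left for another successful upload
        allocation[v] = rbs_per_upload
        remaining -= rbs_per_upload
    return allocation

# Example: 4 vehicles, 5 RBs in the slot, 2 RBs needed per upload.
aoi = {"v1": 3, "v2": 9, "v3": 6, "v4": 1}
print(allocate_rbs(aoi, total_rbs=5, rbs_per_upload=2))  # → {'v2': 2, 'v3': 2}
```

Only the two stalest vehicles are served; the last RB is left unused because it cannot support a successful upload, matching the stopping rule in the abstract.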
With the development of the new generation of information and communication technology, the Internet of Vehicles (IoV)/Vehicle-to-Everything (V2X), which realizes the connection between a vehicle and X (i.e., vehicles, pedestrians, infrastructure, clouds, etc.), is playing an increasingly important role in improving traffic operation efficiency and driving safety, as well as enhancing the intelligence level of social traffic services.
This paper deals with the security of stock market transactions within financial markets, particularly that of the West African Economic and Monetary Union (UEMOA). The confidentiality and integrity of sensitive data in the stock market are crucial, so the implementation of robust systems that guarantee trust between the different actors is essential. After analyzing the limits of several security approaches in the literature, we propose an architecture based on blockchain technology that makes it possible both to identify and to reduce the vulnerabilities linked to the design, implementation, or use of the web applications used for transactions. Our proposal, thanks to two-factor authentication via the blockchain, strengthens the security of investors' accounts and automates the recording of transactions in the blockchain while guaranteeing the integrity of stock market operations. It also provides an application vulnerability report. To validate our approach, we compared our results to those of three other security tools across different metrics. Our approach achieved the best performance in each case.
The growing prevalence of fake images on the Internet and social media makes image integrity verification a crucial research topic. One of the most popular methods for manipulating digital images is image splicing, which involves copying a specific area from one image and pasting it into another. Attempts have been made to mitigate the effects of image splicing, which continues to be a significant research challenge. This study proposes a new splicing detection model combining Sonine functions-derived convex-based features and deep features. The proposed method consists of two stages: feature extraction, followed by classification using a support vector machine (SVM) to differentiate authentic and spliced images. The proposed Sonine functions-based feature extraction model reveals the spliced texture details by extracting clues about the probability of image pixels. The proposed model achieved an accuracy of 98.93% when tested on the CASIA V2.0 dataset (Chinese Academy of Sciences, Institute of Automation), a publicly available dataset for forgery classification. The experimental results show that, for image splicing forgery detection, the proposed Sonine functions-derived convex-based features and deep features outperform state-of-the-art techniques in terms of accuracy, precision, and recall. Overall, the obtained detection accuracy attests to the benefit of using the Sonine functions alongside deep feature representations. The study is limited in that it does not find the regions or locations where image tampering has taken place. Future research will need to look into advanced image analysis techniques that can offer a higher degree of accuracy in identifying and localizing tampered regions.
Diagnosing various diseases such as glaucoma, age-related macular degeneration, cardiovascular conditions, and diabetic retinopathy involves segmenting retinal blood vessels. The task is particularly challenging when dealing with color fundus images due to issues like non-uniform illumination, low contrast, and variations in vessel appearance, especially in the presence of different pathologies. Furthermore, the speed of the retinal vessel segmentation system is of utmost importance. With the surge of now-available big data, the speed of the algorithm becomes increasingly important, carrying almost equivalent weight to its accuracy. To address these challenges, we present a novel approach for retinal vessel segmentation, leveraging efficient and robust techniques based on multiscale line detection and mathematical morphology. Our algorithm's performance is evaluated on two publicly available datasets, namely the Digital Retinal Images for Vessel Extraction (DRIVE) dataset and the Structured Analysis of the Retina (STARE) dataset. The experimental results demonstrate the effectiveness of our method, with mean accuracy values of 0.9467 for DRIVE and 0.9535 for STARE, as well as sensitivity values of 0.6952 for DRIVE and 0.6809 for STARE. Notably, our algorithm exhibits competitive performance with state-of-the-art methods. Importantly, it operates at an average speed of 3.73 s per image for DRIVE and 3.75 s for STARE. It is worth noting that these results were achieved using Matlab scripts containing multiple loops, which suggests that the processing time can be further reduced by replacing loops with vectorization; thus the proposed algorithm can be deployed in real-time applications. In summary, our proposed system strikes a fine balance between swift computation and accuracy that is on par with the best available methods in the field.
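The core of a line detector can be sketched in a few lines: for each pixel, take the maximum mean intensity along short line segments at several orientations and subtract the local window mean, so elongated bright structures (vessels) respond strongly. This is a single-scale toy version in NumPy; the paper's method uses multiple scales and combines the result with mathematical morphology.

```python
import numpy as np

def line_detector(img, length=5):
    """Single-scale line detector: max mean intensity along lines at
    0/45/90/135 degrees through each pixel, minus the window mean."""
    h, w = img.shape
    pad = length // 2
    padded = np.pad(img, pad, mode="reflect")
    ks = np.arange(-pad, pad + 1)
    # Line offsets for the four orientations through the centre pixel.
    lines = [[(0, k) for k in ks], [(k, k) for k in ks],
             [(k, 0) for k in ks], [(k, -k) for k in ks]]
    resp = np.full((h, w), -np.inf)
    win_mean = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win_mean[i, j] = padded[i:i + length, j:j + length].mean()
            for line in lines:
                m = np.mean([padded[i + pad + di, j + pad + dj]
                             for di, dj in line])
                resp[i, j] = max(resp[i, j], m)
    return resp - win_mean  # large along elongated bright structures

img = np.zeros((9, 9)); img[4, :] = 1.0   # a horizontal "vessel"
r = line_detector(img)
print(r[4, 4] > r[0, 0])  # vessel pixels respond more strongly → True
```

The double Python loop mirrors the abstract's remark about loop-heavy Matlab scripts; in practice the same computation would be vectorized with correlation filters.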
Smart industrial environments use the Industrial Internet of Things (IIoT) for their routine operations and transform their industrial operations with intelligent, data-driven approaches. However, IIoT devices are vulnerable to cyber threats and exploits due to their connectivity with the internet. Traditional signature-based IDSs are effective in detecting known attacks, but they are unable to detect unknown emerging attacks. Therefore, there is a need for an IDS that can learn from data and detect new threats. Ensemble machine learning (ML) and individual deep learning (DL) based IDSs have been developed, but these individual models achieved low accuracy; their performance can, however, be improved with the ensemble stacking technique. In this paper, we propose a Deep Stacked Neural Network (DSNN) based IDS, which consists of two stacked Convolutional Neural Network (CNN) models as base learners and Extreme Gradient Boosting (XGB) as the meta-learner. The proposed DSNN model was trained and evaluated with the next-generation dataset TON_IoT. Several pre-processing techniques were applied to prepare the dataset for the model, including ensemble feature selection and the SMOTE technique. Accuracy, precision, recall, F1-score, and false positive rate were used to evaluate the performance of the proposed ensemble model. Our experimental results showed that the accuracy for binary classification is 99.61%, which is better than the baseline individual DL and ML models. In addition, the proposed IDS model has been compared with similar models, and the proposed DSNN achieved better performance metrics than the other models. The proposed DSNN model will be used to develop enhanced IDSs for threat mitigation in smart industrial environments.
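The stacking idea behind the DSNN can be sketched with scikit-learn: base learners produce out-of-fold predictions that a meta-learner combines. For brevity this sketch substitutes decision trees for the paper's CNN base learners and logistic regression for the XGBoost meta-learner, and uses synthetic data in place of TON_IoT.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the (preprocessed) intrusion-detection features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("t1", DecisionTreeClassifier(max_depth=3, random_state=0)),
                ("t2", DecisionTreeClassifier(max_depth=5, random_state=1))],
    final_estimator=LogisticRegression(),  # stand-in for the XGB meta-learner
    cv=5,  # out-of-fold base predictions feed the meta-learner
)
stack.fit(X_tr, y_tr)
print(f"binary-classification test accuracy: {stack.score(X_te, y_te):.3f}")
```

The `cv=5` argument is what makes this stacking rather than simple blending: the meta-learner never sees base-model predictions on data those models were trained on.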
Digital image forgery (DIF) is a prevalent issue in the modern age, where malicious actors manipulate images for various purposes, including deception and misinformation. Detecting such forgeries is a critical task for maintaining the integrity of digital content. This thesis explores the use of Modified Error Level Analysis (ELA) in combination with a Convolutional Neural Network (CNN), as well as a Feedforward Neural Network (FNN) model, to detect digital image forgeries. Additionally, the incorporation of Explainable Artificial Intelligence (XAI) into this research provides insights into the models' decision-making process. The study trains and tests the models on the CASIA2 dataset, covering both authentic and forged images. The CNN model is trained and evaluated, and Explainable AI (SHapley Additive exPlanations, SHAP) is incorporated to explain the model's predictions. Similarly, the FNN model is trained and evaluated, and XAI (SHAP) is incorporated to explain its predictions. The results obtained from the analysis reveal that the proposed approach using the CNN model is the most effective in detecting image forgeries and provides valuable explanations for decision interpretability.
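Classic Error Level Analysis, the signal the thesis feeds its networks, can be sketched with Pillow: re-save the image as JPEG at a known quality and take the per-pixel difference. Pasted regions, compressed a different number of times than their surroundings, tend to stand out in the difference image. This is a sketch of plain ELA, not the thesis's modified variant, and the input image here is synthetic.

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(image, quality=90):
    """Re-compress the image as JPEG and return the pixel-wise difference."""
    buf = io.BytesIO()
    image.convert("RGB").save(buf, "JPEG", quality=quality)  # re-compress
    buf.seek(0)
    resaved = Image.open(buf)
    return ImageChops.difference(image.convert("RGB"), resaved)

img = Image.new("RGB", (64, 64), (128, 64, 32))  # stand-in for a test image
ela = error_level_analysis(img)
print(ela.size)  # same size as the input; bright pixels = high error level
```

In the thesis's pipeline, such ELA maps (rather than raw pixels) become the input tensors for the CNN and FNN classifiers.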
Blockchain merges technology with the Internet of Things (IoT) to address security and privacy-related issues. However, conventional blockchain suffers from scalability issues due to its linear structure, which increases the storage overhead, and intrusion detection has been limited in handling attack severity, leading to performance degradation. To overcome these issues, we propose the MZWB (Multi-Zone-Wise Blockchain) model. Initially, all the IoT nodes in the network ensure their legitimacy by authenticating with the Enhanced Blowfish Algorithm (EBA), considering several metrics. The legitimate nodes are then used to construct and manage the network using a Bayesian Directed Acyclic Graph (B-DAG), which also considers several metrics. Intrusion detection is performed in two tiers. In the first tier, a Deep Convolutional Neural Network (DCNN) analyzes the data packets by extracting packet-flow features to classify the packets as normal, malicious, or suspicious. In the second tier, the suspicious packets are classified as normal or malicious using a Generative Adversarial Network (GAN). Finally, attack scenario reconstruction (ASR) is performed to reduce the severity of attacks, in which Improved Monkey Optimization (IMO) is used for attack path discovery, considering several metrics, and a graph-cut algorithm is utilized for the reconstruction itself. The UNSW-NB15 and BoT-IoT datasets were used to simulate the MZWB method in a network simulator (NS-3.26), and it was compared with previous works on performance metrics such as energy consumption, storage overhead, accuracy, response time, attack detection rate, precision, recall, and F-measure. The simulation results show that the proposed MZWB method achieves higher performance than existing works.
Over the past few years, rapid advancements in internet and communication technologies have led to increasingly intricate and diverse networking systems. As a result, greater intelligence is necessary to effectively manage, optimize, and maintain these systems. Due to their distributed nature, traditional networks make machine learning models challenging to deploy. However, Software-Defined Networking (SDN) presents an opportunity to integrate intelligence into networks by offering a programmable architecture that separates the data and control planes. SDN provides a centralized network view and allows for dynamic updates of flow rules and software-based traffic analysis. While the programmable nature of SDN makes it easier to deploy machine learning techniques, the centralized control logic also makes it vulnerable to cyberattacks. To address these issues, recent research has focused on developing powerful machine-learning methods for detecting and mitigating attacks in SDN environments. This paper highlights the countermeasures for cyberattacks on SDN and how current machine learning-based solutions can overcome these emerging issues. We also discuss the pros and cons of using machine learning algorithms for detecting and mitigating these attacks. Finally, we highlight research issues, gaps, and challenges in developing machine learning-based solutions to secure the SDN controller, to help the research and network community develop more robust and reliable solutions.
Fusing medical images is a topic of interest in medical image processing. It is achieved by fusing information from multimodality images for the purpose of increasing clinical diagnosis accuracy. This fusion aims to improve the image quality and preserve the specific features. Methods of medical image fusion generally use knowledge from many different fields, such as clinical medicine, computer vision, digital imaging, machine learning, and pattern recognition, to fuse different medical images. There are two main approaches to image fusion: the spatial domain approach and the transform domain approach. This paper proposes a new algorithm to fuse multimodal images, based on entropy optimization and the Sobel operator. A wavelet transform is used to split the input images into components over the low and high frequency domains. Then, two fusion rules are used for obtaining the fused images. The first rule, based on the Sobel operator, is used for the high frequency components. The second rule, based on entropy optimization using the Particle Swarm Optimization (PSO) algorithm, is used for the low frequency components. The proposed algorithm is applied to images related to central nervous system diseases. The experimental results show that the proposed algorithm is better than some recent methods in terms of brightness level, contrast, entropy, gradient, visual information fidelity for fusion (VIFF), and Feature Mutual Information (FMI) indices.
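The wavelet-fusion skeleton can be sketched with a hand-rolled one-level Haar transform: decompose both images, fuse the subbands with per-band rules, and invert. The rules here are deliberately simplified stand-ins: low frequencies are averaged (the paper optimizes entropy with PSO) and high frequencies keep the larger-magnitude coefficient (the paper uses a Sobel-based rule); the test images are synthetic.

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar transform of an even-sized array."""
    a, b = x[0::2, :], x[1::2, :]
    lo, hi = (a + b) / 2, (a - b) / 2                  # rows
    la, lb = lo[:, 0::2], lo[:, 1::2]
    ha, hb = hi[:, 0::2], hi[:, 1::2]                  # columns
    return (la + lb) / 2, (la - lb) / 2, (ha + hb) / 2, (ha - hb) / 2

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d."""
    h, w = LL.shape
    lo = np.zeros((h, 2 * w)); hi = np.zeros((h, 2 * w))
    lo[:, 0::2], lo[:, 1::2] = LL + LH, LL - LH
    hi[:, 0::2], hi[:, 1::2] = HL + HH, HL - HH
    x = np.zeros((2 * h, 2 * w))
    x[0::2, :], x[1::2, :] = lo + hi, lo - hi
    return x

def fuse(img1, img2):
    c1, c2 = haar2d(img1), haar2d(img2)
    LL = (c1[0] + c2[0]) / 2                 # low-freq rule: average
    rest = [np.where(np.abs(a) >= np.abs(b), a, b)  # high-freq: max magnitude
            for a, b in zip(c1[1:], c2[1:])]
    return ihaar2d(LL, *rest)

a = np.zeros((8, 8)); a[:4] = 1.0    # detail in the top half
b = np.zeros((8, 8)); b[:, :4] = 1.0 # detail in the left half
print(fuse(a, b).shape)  # (8, 8): detail from both sources survives
```

Because the Haar pair is exactly invertible, fusing an image with itself returns the image unchanged, which is a useful sanity check for any fusion rule.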
It can be said that the automatic classification of musical genres plays a very important role in today's digital technology world, in which the creation, distribution, and enjoyment of musical works have undergone huge changes. As the number of music products increases daily and music genres are extremely rich, storing, classifying, and searching these works manually becomes difficult, if not impossible. Automatic classification of musical genres will contribute to making this possible. The research presented in this paper proposes an appropriate deep learning model along with an effective data augmentation method to achieve high classification accuracy for music genre classification using the Small Free Music Archive (FMA) data set. For Small FMA, it is more efficient to augment the data by generating an echo rather than by pitch shifting. The research results show that the DenseNet121 model with data augmentation methods such as noise addition and echo generation achieves a classification accuracy of 98.97% on the Small FMA data set when its sampling frequency is lowered to 16,000 Hz. The classification accuracy of this study outperforms the majority of previous results on the same Small FMA data set.
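Echo-based augmentation, which the abstract reports works better than pitch shifting here, amounts to mixing a delayed, attenuated copy of the waveform into itself. A minimal sketch in NumPy, with illustrative delay and decay values rather than the paper's exact settings:

```python
import numpy as np

def add_echo(signal, sr=16000, delay_s=0.25, decay=0.6):
    """Augment an audio clip by mixing in a delayed, attenuated copy."""
    d = int(delay_s * sr)
    out = signal.astype(float).copy()
    out[d:] += decay * signal[:-d]  # the echo arrives d samples later
    return out

sr = 16000                       # the 16 kHz rate mentioned in the abstract
t = np.arange(sr) / sr
clip = np.sin(2 * np.pi * 440 * t)   # one second of a 440 Hz tone
echoed = add_echo(clip, sr)
print(echoed.shape == clip.shape)    # augmentation preserves clip length
```

Noise addition, the other augmentation the abstract names, would be the one-liner `clip + sigma * np.random.randn(len(clip))` under the same framing.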
The automated evaluation and analysis of employee behavior in an Industry 4.0-compliant manufacturing firm are vital for the rapid and accurate diagnosis of work performance, particularly during the training of a new worker. Various techniques for identifying and detecting worker performance in industrial applications are based on computer vision. Despite widespread computer vision-based approaches, it is challenging to develop technologies that assist the automated monitoring of worker actions at external work sites where camera deployment is problematic. Using wearable inertial sensors, we propose a deep learning method for automatically recognizing the activities of construction workers. The suggested method incorporates a convolutional neural network, residual connection blocks, and multi-branch aggregate transformation modules for high-performance recognition of complicated activities such as construction worker tasks. The proposed approach has been evaluated using standard performance measures, such as precision, F1-score, and AUC, on a publicly available benchmark dataset known as VTT-ConIoT, which contains genuine construction work activities. In addition, standard deep learning models (CNNs, RNNs, and hybrid models) were developed in different empirical circumstances for comparison with the proposed model. With an average accuracy of 99.71% and an average F1-score of 99.71%, the experimental findings revealed that the suggested model can accurately recognize the actions of construction workers. Furthermore, we examined the impact of window size and sensor position on the identification efficiency of the proposed method.
The massive growth of diversified smart devices and continuous data generation poses a challenge to communication architectures. To deal with this problem, communication networks consider fog computing one of the promising technologies that can improve overall communication performance. It brings on-demand services proximate to the end devices and delivers requested data in a short time. Fog computing faces several issues, such as latency, bandwidth, and link utilization, due to limited resources and the high processing demands of end devices. To this end, fog caching plays an imperative role in addressing data dissemination issues. This study provides a comprehensive discussion of fog computing, the Internet of Things (IoT), and the critical issues related to data security and dissemination in fog computing. Moreover, we survey fog-based caching schemes and how they deal with the existing issues of fog computing. This paper presents a number of caching schemes with their contributions, benefits, and challenges in overcoming the problems and limitations of fog computing. We also identify machine learning-based approaches for cache security and management in fog computing, as well as several prospective future research directions in caching, fog computing, and machine learning.
Voice classification is important in creating more intelligent systems that help with student exams, identifying criminals, and security systems. The main aim of this research is to develop a system able to predict and classify gender, age, and accent. To this end, a new system called Classifying Voice Gender, Age, and Accent (CVGAA) is proposed. Backpropagation and bagging algorithms are designed to improve voice recognition systems, incorporating sensory voice features such as rhythm-based features used to train the system to distinguish between the two gender categories. The system has high precision compared to other algorithms used for this problem: the adaptive backpropagation algorithm had an accuracy of 98% and the bagging algorithm an accuracy of 98.10% on the gender identification data. Bagging had the best accuracy among all algorithms, with 55.39% accuracy for age classification on the common voice dataset, and 78.94% accent accuracy on a speech accent dataset.
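The bagging idea used by CVGAA, training several base models on bootstrap resamples and majority-voting their predictions, can be sketched with scikit-learn. The features and labels below are synthetic stand-ins for the rhythm-based voice features, not the study's data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for rhythm-based voice features with binary labels.
X, y = make_classification(n_samples=400, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                        bootstrap=True,  # each tree sees a bootstrap resample
                        random_state=42)
bag.fit(X_tr, y_tr)
print(f"gender-classification stand-in accuracy: {bag.score(X_te, y_te):.2f}")
```

Bagging mainly reduces variance, which is why it pairs well with unstable base learners such as unpruned decision trees or the backpropagation networks the abstract mentions.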
Nonorthogonal Multiple Access (NOMA) is incorporated into wireless network systems to achieve better connectivity, spectral and energy effectiveness, a higher data transfer rate, and high quality of service (QoS). In order to improve throughput and minimize latency, a Multivariate Renkonen Regressive Weighted Preference Bootstrap Aggregation based Nonorthogonal Multiple Access (MRRWPBA-NOMA) technique is introduced for network communication. In the downlink transmission, each mobile device's resources and characteristics, such as energy, bandwidth, and trust, are measured. Then, Weighted Preference Bootstrap Aggregation is applied to recognize the resource-efficient mobile devices for aware data transmission by constructing different weak hypotheses, i.e., Multivariate Renkonen regression functions. Based on the classification, resource- and trust-aware devices are selected for transmission. Simulations of the proposed MRRWPBA-NOMA technique and existing methods are carried out with different metrics, such as data delivery ratio, throughput, latency, packet loss rate, energy efficiency, and signaling overhead. The assessment of the simulation results indicates that the proposed MRRWPBA-NOMA outperforms the conventional methods.
Falls are a contributing factor to both fatal and nonfatal injuries in the elderly. Therefore, pre-impact fall detection, which identifies a fall before the body collides with the floor, is essential. Recently, researchers have turned their attention from post-impact fall detection to pre-impact fall detection. Pre-impact fall detection solutions typically use either a threshold-based or a machine learning-based approach, although the threshold value is difficult to determine accurately in threshold-based methods. Moreover, while additional features can sometimes assist in categorizing falls and non-falls more precisely, estimating the significant features is too time-intensive, consuming a significant portion of the algorithm's operating time. In this work, we developed a deep residual network with aggregation transformation, called FDSNeXt, for a pre-impact fall detection approach employing wearable inertial sensors. The proposed network was introduced to address the limitations of feature extraction, threshold definition, and algorithm complexity. After training on a large-scale motion dataset, the KFall dataset, and straightforward evaluation with standard metrics, the proposed approach identified pre-impact and impact falls with high accuracies of 91.87% and 92.52%, respectively. In addition, we investigated the fall-detection performance of three state-of-the-art deep learning models: a convolutional neural network (CNN), a long short-term memory neural network (LSTM), and a hybrid model (CNN-LSTM). The experimental results showed that the proposed FDSNeXt model outperformed these deep learning models (CNN, LSTM, and CNN-LSTM) with significant improvements.
A key requirement of today's fast-changing business outcome and innovation environment is the ability of organizations to adapt dynamically in an effective and efficient manner. Becoming a data-driven decision-making organization plays a crucially important role in addressing such adaptation requirements. The notion of "data democratization" has emerged as a mechanism with which organizations can address data-driven decision-making process issues and cross-pollinate data in ways that uncover actionable insights. We define data democratization as an attitude focused on curiosity, learning, and experimentation for delivering trusted data for trusted insights to a broad range of authorized stakeholders. In this paper, we propose a general indicator framework for data democratization by highlighting success factors that should not be overlooked in today's data-driven economy. In this practice-based research, these enablers are grouped into six broad building blocks: 1) "ethical guidelines, business context and value", 2) "data leadership and data culture", 3) "data literacy and business knowledge", 4) "data wrangling, trustworthiness & standardization", 5) "sustainable data platform, access, & analytical tools", 6) "intelligent data governance and privacy". As an attitude, once it is planned and built, data democratization will need to be maintained. The utility of the approach is demonstrated through a case study of a Cameroon-based start-up company with ongoing data analytics projects. Our findings advance the concept of data democratization and contribute to the free flow of data with trust.
In mobile communication systems, inter-cell interference is one of the challenges that degrade system performance, especially in regions with massive numbers of mobile users. Linear precoding schemes were proposed to mitigate interference between base stations (inter-cell interference). Precoding schemes are categorized into linear and non-linear; this study focuses on linear precoding schemes, which are grouped into three types, namely Zero Forcing (ZF), Block Diagonalization (BD), and Signal-to-Leakage-and-Noise Ratio (SLNR). The study considers a cooperative multi-cell Multiple-Input Multiple-Output (MIMO) system, whereby each base station serves more than one mobile station and all base stations in the system assist each other by sharing Channel State Information (CSI). In this multi-cell multiuser MIMO system, each base station aims to maximize the data transmission rate of its mobile users by increasing the signal-to-interference-plus-noise ratio once the interference has been mitigated by the linear precoding scheme at the transmitter. Moreover, these schemes use different approaches to mitigate interference. This study mainly concentrates on evaluating the performance of these schemes over channel distribution models such as Rayleigh and Rician, in the presence of noise errors. The results show that the SLNR scheme outperforms the ZF and BD schemes in all scenarios: as the SNR increased, the performance of SLNR exceeded that of ZF and BD by 21.4% and 45.7%, respectively.
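Of the three schemes, Zero Forcing is the simplest to sketch: the precoder is the right pseudo-inverse of the channel, so the effective channel becomes the identity and inter-user interference is cancelled. A minimal single-cell NumPy illustration with a random Rayleigh-style channel (dimensions are hypothetical; power normalization and the multi-cell CSI sharing are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
K, Nt = 3, 4   # 3 single-antenna users, 4 transmit antennas
# Rayleigh-fading channel: i.i.d. complex Gaussian entries, unit variance.
H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)

# Zero-Forcing precoder: right pseudo-inverse of H, so H @ W = I and each
# user receives only its own stream. (BD generalizes this to multi-antenna
# users; SLNR instead maximizes each user's signal-to-leakage-and-noise
# ratio rather than forcing interference exactly to zero.)
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
effective = H @ W
print(np.round(np.abs(effective), 6))  # ≈ identity: no inter-user interference
```

The inversion is what makes ZF noise-sensitive on ill-conditioned channels, which is one reason SLNR, which trades a little residual interference for better noise behavior, comes out ahead in the study's results.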
The evolution of telecommunications has allowed the development of broadband services based mainly on fiber-optic backbone networks. The operation and maintenance of these optical networks is made possible by supervision platforms that generate alarms, which can be archived in the form of log files. But analyzing the alarms in the log files is a laborious and difficult task for engineers, requiring a degree of expertise. Identifying failures and their root causes can be time-consuming and impact the quality of service, network availability, and the service level agreements signed between the operator and its customers. Therefore, it is more than important to study the different possibilities of alarm classification and to use machine learning algorithms for alarm correlation in order to determine the root causes of problems faster. We conducted a research case study on one of the operators in Cameroon, which operates an optical backbone based on SDH and WDM technologies, with data collected from 2016-03-28 to 2022-09-01, comprising 7201 rows and 18 columns. In this paper, we classify alarms according to different criteria and use two unsupervised learning algorithms, namely K-Means and DBSCAN, to establish correlations between alarms in order to identify the root causes of problems and reduce troubleshooting time. To achieve this objective, log files were exploited to obtain the root causes of the alarms, and K-Means and DBSCAN were then evaluated on their performance and their capability to identify the root cause of alarms in the optical network.
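The correlation step can be sketched by clustering alarms that occur close together in time (in practice one would also use features such as the affected network element), then inspecting the earliest alarm in each cluster as a root-cause candidate. The timestamps below are hypothetical, not the operator's data.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Alarm timestamps in seconds: two bursts plus one isolated alarm.
times = np.array([[0], [2], [3], [100], [101], [103], [500]], dtype=float)

# DBSCAN groups alarms within eps seconds of a neighbor; isolated alarms
# get the noise label -1, which suits sporadic, uncorrelated events.
labels = DBSCAN(eps=5, min_samples=2).fit_predict(times)
print(labels)  # e.g. [0 0 0 1 1 1 -1]: two correlated bursts, one outlier

for cluster in sorted(set(labels) - {-1}):
    root = times[labels == cluster].min()
    print(f"cluster {cluster}: earliest alarm at t={root:.0f}s (root-cause candidate)")
```

Unlike K-Means, DBSCAN does not need the number of clusters in advance, which matters here since the number of concurrent incidents in a log file is unknown.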
At present, water pollution has become an important factor affecting and restricting national and regional economic development. Total phosphorus is one of the main sources of water pollution and eutrophication, so the prediction of total phosphorus in water is of good research significance. This paper selects total phosphorus and turbidity data for analysis by crawling the data of a water quality monitoring platform. By constructing an attribute-object mapping relationship, the correlation between the two indicators was analyzed and used to predict future data. Firstly, the monthly mean and daily mean concentrations of total phosphorus and turbidity were calculated after cleaning outliers, and the correlation between them was analyzed. Secondly, the correlation coefficients at different times and frequencies were used to predict the values for the next five days, and the data trend was visualized with Python. Finally, the real values were compared with the predicted values, and the results showed that the correlation between total phosphorus and turbidity is useful in predicting water quality.
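The correlation-then-predict step can be sketched as follows: compute the Pearson correlation between the two cleaned series, then use the least-squares line it implies to predict total phosphorus from turbidity. The daily-mean values below are hypothetical stand-ins for the platform's data.

```python
import numpy as np

# Hypothetical daily-mean series after cleaning (turbidity in NTU,
# total phosphorus in mg/L); real data would come from the platform.
turbidity = np.array([2.1, 3.4, 5.0, 4.2, 6.1, 7.3, 6.8])
phosphorus = np.array([0.04, 0.06, 0.09, 0.08, 0.11, 0.13, 0.12])

# Pearson correlation coefficient between the two indicators.
r = np.corrcoef(turbidity, phosphorus)[0, 1]
print(f"Pearson correlation: {r:.3f}")

# A naive prediction of total phosphorus from a new turbidity reading via
# the least-squares line underlying that correlation.
slope, intercept = np.polyfit(turbidity, phosphorus, 1)
next_turbidity = 7.0
print(f"predicted TP for turbidity {next_turbidity}: "
      f"{slope * next_turbidity + intercept:.3f} mg/L")
```

A strong positive `r` is what justifies using turbidity, which is cheap to measure continuously, as a proxy predictor for total phosphorus.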
Funding: Supported in part by the Project of International Cooperation and Exchanges NSFC under Grant No. 61860206005, and in part by the Joint Funds of the NSFC under Grant No. U22A2003.
Abstract: In this paper, multi-UAV trajectory planning and resource allocation are jointly investigated to improve information freshness in vehicular networks, where vehicles collect time-critical traffic information with on-board sensors and upload it to the UAVs through their allocated spectrum resources. We adopt the expected sum age of information (ESAoI) to measure network-wide information freshness. ESAoI is jointly affected by both the UAV trajectories and the resource allocation, which are coupled with each other and make the analysis of ESAoI challenging. To tackle this challenge, we introduce a joint trajectory planning and resource allocation procedure in which the UAVs first fly to their destinations and then hover to allocate resource blocks (RBs) during a time-slot. Based on this procedure, we formulate a trajectory planning and resource allocation problem for ESAoI minimization. To solve this mixed-integer nonlinear programming (MINLP) problem with hybrid decision variables, we propose TD3 trajectory planning with round-robin resource allocation (TTP-RRA). Specifically, we exploit the exploration and learning ability of the twin delayed deep deterministic policy gradient (TD3) algorithm for UAV trajectory planning, and utilize a round-robin rule for the optimal resource allocation. With TTP-RRA, the UAVs obtain their flight velocities by sensing the locations and the age of information (AoI) of the vehicles, then allocate the RBs to the vehicles in descending order of AoI until the remaining RBs are not sufficient to support another successful upload. Simulation results demonstrate that TTP-RRA outperforms the baseline approaches in terms of ESAoI and average AoI (AAoI).
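The AoI-descending RB allocation step of TTP-RRA can be sketched in a few lines. This is an illustrative reading of the rule described above, not the authors' implementation; the function name `allocate_rbs` and the assumption that every upload costs a fixed `rb_need` RBs are ours:

```python
def allocate_rbs(aoi, rb_need, total_rbs):
    """Allocate resource blocks to vehicles in descending order of AoI.

    aoi: dict vehicle_id -> current age of information
    rb_need: RBs required for one successful upload (assumed constant here)
    total_rbs: RBs available at the hovering UAV during this time-slot
    Returns the list of vehicle ids scheduled in this slot.
    """
    scheduled = []
    remaining = total_rbs
    for v in sorted(aoi, key=aoi.get, reverse=True):
        if remaining < rb_need:  # not enough RBs for another upload: stop
            break
        scheduled.append(v)
        remaining -= rb_need
    return scheduled
```

With 7 RBs, a 3-RB upload cost, and AoI values {a: 5, b: 9, c: 2}, the rule schedules b then a and stops, since only one RB remains.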
Abstract: With the development of the new generation of information and communication technology, the Internet of Vehicles (IoV)/Vehicle-to-Everything (V2X), which realizes the connection between the vehicle and X (i.e., vehicles, pedestrians, infrastructures, clouds, etc.), is playing an increasingly important role in improving traffic operation efficiency and driving safety, as well as enhancing the intelligence level of social traffic services.
Abstract: This paper deals with the security of stock market transactions within financial markets, particularly that of the West African Economic and Monetary Union (UEMOA). Since the confidentiality and integrity of sensitive stock market data are crucial, the implementation of robust systems that guarantee trust between the different actors is essential. After analyzing the limits of several security approaches in the literature, we therefore proposed an architecture based on blockchain technology that makes it possible to both identify and reduce the vulnerabilities linked to the design, implementation, or use of the web applications used for transactions. Our proposal makes it possible, thanks to two-factor authentication via the blockchain, to strengthen the security of investors' accounts and to record transactions automatically in the blockchain while guaranteeing the integrity of stock market operations. It also provides an application vulnerability report. To validate our approach, we compared our results to those of three other security tools across different metrics. Our approach achieved the best performance in each case.
Abstract: The growing prevalence of fake images on the Internet and social media makes image integrity verification a crucial research topic. One of the most popular methods for manipulating digital images is image splicing, which involves copying a specific area from one image and pasting it into another. Attempts have been made to mitigate the effects of image splicing, which continues to be a significant research challenge. This study proposes a new splicing detection model combining Sonine functions-derived convex-based features and deep features. The proposed method consists of two stages. The first entails feature extraction, followed by classification using a support vector machine (SVM) to differentiate authentic and spliced images. The proposed Sonine functions-based feature extraction model reveals spliced texture details by extracting clues about the probability of image pixels. The proposed model achieved an accuracy of 98.93% when tested on the CASIA V2.0 dataset (Chinese Academy of Sciences, Institute of Automation), a publicly available dataset for forgery classification. The experimental results show that, for image splicing forgery detection, the proposed Sonine functions-derived convex-based features and deep features outperform state-of-the-art techniques in terms of accuracy, precision, and recall. Overall, the obtained detection accuracy attests to the benefit of using the Sonine functions alongside deep feature representations. The study is limited in that it does not locate the regions where image tampering has taken place. Future research will need to look into advanced image analysis techniques that can offer a higher degree of accuracy in identifying and localizing tampered regions.
Abstract: Diagnosing various diseases such as glaucoma, age-related macular degeneration, cardiovascular conditions, and diabetic retinopathy involves segmenting retinal blood vessels. The task is particularly challenging when dealing with color fundus images due to issues like non-uniform illumination, low contrast, and variations in vessel appearance, especially in the presence of different pathologies. Furthermore, the speed of the retinal vessel segmentation system is of utmost importance. With the surge of now-available big data, the speed of the algorithm becomes increasingly important, carrying almost equivalent weight to its accuracy. To address these challenges, we present a novel approach for retinal vessel segmentation, leveraging efficient and robust techniques based on multiscale line detection and mathematical morphology. Our algorithm's performance is evaluated on two publicly available datasets, namely the Digital Retinal Images for Vessel Extraction (DRIVE) dataset and the Structured Analysis of the Retina (STARE) dataset. The experimental results demonstrate the effectiveness of our method, with mean accuracy values of 0.9467 for the DRIVE and 0.9535 for the STARE dataset, as well as sensitivity values of 0.6952 for DRIVE and 0.6809 for STARE. Notably, our algorithm exhibits competitive performance with state-of-the-art methods. Importantly, it operates at an average speed of 3.73 s per image for the DRIVE and 3.75 s for the STARE dataset. It is worth noting that these results were achieved using Matlab scripts containing multiple loops, which suggests that the processing time can be further reduced by replacing the loops with vectorization. Thus the proposed algorithm can be deployed in real-time applications. In summary, our proposed system strikes a fine balance between swift computation and accuracy that is on par with the best available methods in the field.
Abstract: Smart industrial environments use the Industrial Internet of Things (IIoT) for their routine operations and transform their industrial operations with intelligent, data-driven approaches. However, IIoT devices are vulnerable to cyber threats and exploits due to their connectivity with the internet. Traditional signature-based IDSs are effective in detecting known attacks, but they are unable to detect unknown, emerging attacks. Therefore, there is a need for an IDS that can learn from data and detect new threats. Ensemble machine learning (ML) and individual deep learning (DL) based IDSs have been developed, but these individual models achieved low accuracy; their performance, however, can be improved with the ensemble stacking technique. In this paper, we propose a Deep Stacked Neural Network (DSNN) based IDS, which consists of two stacked convolutional neural network (CNN) models as base learners and Extreme Gradient Boosting (XGB) as the meta learner. The proposed DSNN model was trained and evaluated with the next-generation dataset TON_IoT. Several pre-processing techniques were applied to prepare the dataset for the model, including ensemble feature selection and the SMOTE technique. Accuracy, precision, recall, F1-score, and false positive rate were used to evaluate the performance of the proposed ensemble model. Our experimental results showed that the accuracy for binary classification is 99.61%, which is better than the baseline individual DL and ML models. In addition, the proposed IDS model has been compared with similar models and achieved better performance metrics. The proposed DSNN model will be used to develop enhanced IDSs for threat mitigation in smart industrial environments.
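As a rough illustration of the stacking idea behind the DSNN (base learners whose outputs feed a meta learner), here is a toy sketch. The lambda "CNNs" and the average-and-threshold "meta learner" are stand-ins for the paper's CNN base models and XGBoost, introduced only to show the data flow:

```python
def stack_features(base_models, x):
    """Concatenate each base learner's probability output into one meta-feature vector."""
    feats = []
    for model in base_models:
        feats.extend(model(x))
    return feats

def stacked_predict(base_models, meta_model, x):
    """Two-level stacking: base outputs feed the meta learner, which gives the final label."""
    return meta_model(stack_features(base_models, x))

# Toy stand-ins: each "CNN" returns [p_normal, p_attack]; the "meta learner"
# here is a simple average-and-threshold rule instead of XGBoost.
cnn_a = lambda x: [0.2, 0.8]
cnn_b = lambda x: [0.4, 0.6]
meta = lambda f: int((f[1] + f[3]) / 2 > 0.5)  # 1 = attack, 0 = normal
```

In the real model the meta learner is trained on the stacked base-model outputs rather than hand-written, but the wiring is the same.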
Abstract: Digital image forgery (DIF) is a prevalent issue in the modern age, where malicious actors manipulate images for various purposes, including deception and misinformation. Detecting such forgeries is a critical task for maintaining the integrity of digital content. This thesis explores the use of modified Error Level Analysis (ELA) in combination with a convolutional neural network (CNN), as well as a feedforward neural network (FNN) model, to detect digital image forgeries. Additionally, incorporating Explainable Artificial Intelligence (XAI) into this research provided insights into the models' decision-making process. The study trains and tests the models on the CASIA2 dataset, covering both authentic and forged images. The CNN model is trained and evaluated, and Explainable AI (SHapley Additive exPlanations, SHAP) is incorporated to explain the model's predictions; the FNN model is trained, evaluated, and explained in the same way. The results obtained from the analysis reveal that the proposed approach using the CNN model is the most effective in detecting image forgeries and provides valuable explanations for decision interpretability.
Abstract: Blockchain merges technology with the Internet of Things (IoT) to address security and privacy-related issues. However, conventional blockchain suffers from scalability issues due to its linear structure, which increases the storage overhead, and intrusion detection on it was limited with respect to attack severity, leading to performance degradation. To overcome these issues, we propose the MZWB (Multi-Zone-Wise Blockchain) model. Initially, all the authenticated IoT nodes in the network ensure their legitimacy by using the Enhanced Blowfish Algorithm (EBA), considering several metrics. The legitimate nodes are then used to construct and manage the network using a Bayesian Directed Acyclic Graph (B-DAG), which likewise considers several metrics. Intrusion detection is performed in two tiers. In the first tier, a deep convolutional neural network (DCNN) analyzes the data packets by extracting packet-flow features to classify the packets as normal, malicious, or suspicious. In the second tier, the suspicious packets are classified as normal or malicious using a Generative Adversarial Network (GAN). Finally, intrusion scenario reconstruction is performed to reduce the severity of attacks, in which Improved Monkey Optimization (IMO) is used for attack-path discovery, considering several metrics, and the graph-cut algorithm is utilized for attack scenario reconstruction (ASR). The UNSW-NB15 and BoT-IoT datasets were utilized for the MZWB method, simulated using a network simulator (NS-3.26) and compared with previous works on performance metrics such as energy consumption, storage overhead, accuracy, response time, attack detection rate, precision, recall, and F-measure. The simulation results show that the proposed MZWB method achieves higher performance than existing works.
Abstract: Over the past few years, rapid advancements in internet and communication technologies have led to increasingly intricate and diverse networking systems. As a result, greater intelligence is necessary to effectively manage, optimize, and maintain these systems. Due to their distributed nature, machine learning models are challenging to deploy in traditional networks. However, Software-Defined Networking (SDN) presents an opportunity to integrate intelligence into networks by offering a programmable architecture that separates the data and control planes. SDN provides a centralized network view and allows for dynamic updates of flow rules and software-based traffic analysis. While the programmable nature of SDN makes it easier to deploy machine learning techniques, the centralized control logic also makes it vulnerable to cyberattacks. To address these issues, recent research has focused on developing powerful machine-learning methods for detecting and mitigating attacks in SDN environments. This paper highlights the countermeasures for cyberattacks on SDN and how current machine learning-based solutions can overcome these emerging issues. We also discuss the pros and cons of using machine learning algorithms for detecting and mitigating these attacks. Finally, we highlight research issues, gaps, and challenges in developing machine learning-based solutions to secure the SDN controller, to help the research and network community develop more robust and reliable solutions.
Abstract: Fusing medical images is a topic of interest in medical image processing. It is achieved by fusing information from multimodality images for the purpose of increasing clinical diagnosis accuracy. This fusion aims to improve image quality and preserve specific features. Methods of medical image fusion generally draw on knowledge from many different fields, such as clinical medicine, computer vision, digital imaging, machine learning, and pattern recognition, to fuse different medical images. There are two main approaches to fusing images: the spatial domain approach and the transform domain approach. This paper proposes a new algorithm for fusing multimodal images, based on entropy optimization and the Sobel operator. The wavelet transform is used to split the input images into components over the low- and high-frequency domains. Then, two fusion rules are used to obtain the fused images. The first rule, based on the Sobel operator, is used for the high-frequency components. The second rule, based on entropy optimization using the Particle Swarm Optimization (PSO) algorithm, is used for the low-frequency components. The proposed algorithm is applied to images related to central nervous system diseases. The experimental results show that the proposed algorithm outperforms some recent methods in terms of brightness level, contrast, entropy, gradient, visual information fidelity for fusion (VIFF), and Feature Mutual Information (FMI) indices.
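The high-frequency fusion rule can be illustrated with a minimal sketch: at each coefficient, keep the input whose local Sobel gradient response is stronger. The function names and this choose-the-larger-response rule are our reading of the description above, not the authors' exact implementation:

```python
def sobel_magnitude(img, i, j):
    """Approximate gradient magnitude at interior pixel (i, j) via the 3x3 Sobel kernels."""
    gx = (img[i-1][j+1] + 2*img[i][j+1] + img[i+1][j+1]
          - img[i-1][j-1] - 2*img[i][j-1] - img[i+1][j-1])
    gy = (img[i+1][j-1] + 2*img[i+1][j] + img[i+1][j+1]
          - img[i-1][j-1] - 2*img[i-1][j] - img[i-1][j+1])
    return (gx * gx + gy * gy) ** 0.5

def fuse_high_freq(a, b):
    """High-frequency fusion rule: at each interior coefficient, keep the input
    whose local Sobel response is larger (border coefficients copied from `a`)."""
    h, w = len(a), len(a[0])
    out = [row[:] for row in a]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if sobel_magnitude(b, i, j) > sobel_magnitude(a, i, j):
                out[i][j] = b[i][j]
    return out
```

In the full algorithm this rule would run on the high-frequency wavelet sub-bands rather than raw pixels, with the PSO-tuned entropy rule handling the low-frequency sub-band.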
Funding: The authors received the research fund T2022-CN-006 for this study.
Abstract: It can be said that the automatic classification of musical genres plays a very important role in today's digital technology world, in which the creation, distribution, and enjoyment of musical works have undergone huge changes. As the number of music products increases daily and the music genres are extremely rich, storing, classifying, and searching these works manually becomes difficult, if not impossible. Automatic classification of musical genres will contribute to making this possible. The research presented in this paper proposes an appropriate deep learning model, along with an effective data augmentation method, to achieve high classification accuracy for music genre classification using the Small Free Music Archive (FMA) data set. For Small FMA, it is more efficient to augment the data by generating an echo rather than by pitch shifting. The research results show that the DenseNet121 model and data augmentation methods such as noise addition and echo generation achieve a classification accuracy of 98.97% for the Small FMA data set when its sampling frequency is lowered to 16000 Hz. The classification accuracy of this study outperforms the majority of previous results on the same Small FMA data set.
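Echo-based augmentation of the kind described above can be sketched as mixing a delayed, attenuated copy of the waveform into itself; the delay and decay values below are illustrative, not the parameters used in the paper:

```python
def add_echo(signal, delay, decay):
    """Augment a waveform by mixing in a delayed, attenuated copy of itself:
    y[n] = x[n] + decay * x[n - delay]  for n >= delay."""
    out = list(signal)
    for n in range(delay, len(signal)):
        out[n] += decay * signal[n - delay]
    return out
```

For a 16000 Hz signal, a `delay` of 1600 samples would correspond to a 100 ms echo.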
Funding: Supported by the University of Phayao (Grant No. FF66-UoE001), the Thailand Science Research and Innovation Fund, the National Science, Research and Innovation Fund (NSRF), and King Mongkut's University of Technology North Bangkok with Contract No. KMUTNB-FF-65-27.
Abstract: The automated evaluation and analysis of employee behavior in an Industry 4.0-compliant manufacturing firm are vital for the rapid and accurate diagnosis of work performance, particularly during the training of a new worker. Various techniques for identifying and detecting worker performance in industrial applications are based on computer vision. Despite widespread computer vision-based approaches, it is challenging to develop technologies that assist the automated monitoring of worker actions at external working sites where camera deployment is problematic. Using wearable inertial sensors, we propose a deep learning method for automatically recognizing the activities of construction workers. The suggested method incorporates a convolutional neural network, residual connection blocks, and multi-branch aggregate transformation modules for high-performance recognition of complicated activities such as construction worker tasks. The proposed approach has been evaluated using standard performance measures, such as precision, F1-score, and AUC, on a publicly available benchmark dataset known as VTT-ConIoT, which contains genuine construction work activities. In addition, standard deep learning models (CNNs, RNNs, and hybrid models) were developed under different empirical circumstances for comparison with the proposed model. With an average accuracy of 99.71% and an average F1-score of 99.71%, the experimental findings revealed that the suggested model can accurately recognize the actions of construction workers. Furthermore, we examined the impact of window size and sensor position on the identification efficiency of the proposed method.
Funding: Provincial key platforms and major scientific research projects of universities in Guangdong Province, People's Republic of China, under Grant No. 2017GXJK116.
Abstract: The massive growth of diversified smart devices and continuous data generation poses a challenge to communication architectures. To deal with this problem, communication networks consider fog computing one of the promising technologies that can improve overall communication performance. It brings on-demand services close to the end devices and delivers requested data in a short time. Fog computing faces several issues, such as latency, bandwidth, and link utilization, due to limited resources and the high processing demands of end devices. To this end, fog caching plays an imperative role in addressing data dissemination issues. This study provides a comprehensive discussion of fog computing, the Internet of Things (IoT), and the critical issues related to data security and dissemination in fog computing. Moreover, we examine fog-based caching schemes and their contribution to dealing with the existing issues of fog computing. This paper presents a number of caching schemes with their contributions, benefits, and challenges in overcoming the problems and limitations of fog computing. We also identify machine learning-based approaches for cache security and management in fog computing, as well as several prospective future research directions in caching, fog computing, and machine learning.
Abstract: Voice classification is important in creating more intelligent systems that help with student exams, identifying criminals, and security systems. The main aim of the research is to develop a system able to predict and classify gender, age, and accent. So, a new system called Classifying Voice Gender, Age, and Accent (CVGAA) is proposed. Backpropagation and bagging algorithms are designed to improve voice recognition systems that incorporate sensory voice features, such as rhythm-based features used to train the system to distinguish between the two gender categories. It has high precision compared to other algorithms used for this problem: the adaptive backpropagation algorithm had an accuracy of 98% and the bagging algorithm an accuracy of 98.10% on the gender identification data. Bagging has the best accuracy among all algorithms, with 55.39% accuracy for age classification on the common voice dataset and 78.94% accent accuracy on a speech accent dataset.
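Bagging itself reduces to training base classifiers on bootstrap resamples of the data and aggregating their predictions by majority vote. A minimal sketch follows; the stub classifiers are placeholders, not the paper's backpropagation networks:

```python
import random

def bootstrap_sample(data, rng):
    """Draw a bootstrap resample (same size, with replacement) of the training data."""
    return [rng.choice(data) for _ in data]

def bagging_predict(classifiers, x):
    """Aggregate the ensemble by majority vote over the base classifiers' labels."""
    votes = [clf(x) for clf in classifiers]
    return max(set(votes), key=votes.count)
```

Each base classifier would be trained on its own `bootstrap_sample`; at prediction time only `bagging_predict` is needed.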
Funding: Supported by the Taif University Researchers Supporting Project No. TURSP-2020/36, Taif University, Taif, Saudi Arabia, and funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project No. PNURSP2022R97, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Non-orthogonal Multiple Access (NOMA) is incorporated into wireless network systems to achieve better connectivity, spectral and energy effectiveness, a higher data transfer rate, and high quality of service (QoS). In order to improve throughput and minimize latency, a Multivariate Renkonen Regressive Weighted Preference Bootstrap Aggregation based Non-orthogonal Multiple Access (MRRWPBA-NOMA) technique is introduced for network communication. In the downlink transmission, each mobile device's resources and characteristics, such as energy, bandwidth, and trust, are measured. Then, Weighted Preference Bootstrap Aggregation is applied to recognize the resource-efficient mobile devices for aware data transmission by constructing different weak hypotheses, i.e., multivariate Renkonen regression functions. Based on this classification, resource- and trust-aware devices are selected for transmission. Simulation of the proposed MRRWPBA-NOMA technique and existing methods is carried out with different metrics, such as data delivery ratio, throughput, latency, packet loss rate, energy efficiency, and signaling overhead. The simulation results indicate that the proposed MRRWPBA-NOMA outperforms the conventional methods.
Funding: This research project was also supported by the Thailand Science Research and Innovation Fund, the University of Phayao (Grant No. FF66-UoE001), and King Mongkut's University of Technology North Bangkok under Contract No. KMUTNB-66-KNOW-05.
Abstract: Falls are a contributing factor to both fatal and nonfatal injuries in the elderly. Therefore, pre-impact fall detection, which identifies a fall before the body collides with the floor, is essential. Recently, researchers have turned their attention from post-impact fall detection to pre-impact fall detection. Pre-impact fall detection solutions typically use either a threshold-based or a machine learning-based approach, although the threshold value is difficult to determine accurately in threshold-based methods. Moreover, while additional features can sometimes assist in categorizing falls and non-falls more precisely, estimating the significant features is time-intensive and consumes a significant portion of the algorithm's operating time. In this work, we developed a deep residual network with aggregation transformation called FDSNeXt for a pre-impact fall detection approach employing wearable inertial sensors. The proposed network was introduced to address the limitations of feature extraction, threshold definition, and algorithm complexity. After training on a large-scale motion dataset, the KFall dataset, and straightforward evaluation with standard metrics, the proposed approach identified pre-impact and impact falls with high accuracies of 91.87% and 92.52%, respectively. In addition, we investigated the fall detection performance of three state-of-the-art deep learning models: a convolutional neural network (CNN), a long short-term memory neural network (LSTM), and a hybrid model (CNN-LSTM). The experimental results showed that the proposed FDSNeXt model outperformed these deep learning models (CNN, LSTM, and CNN-LSTM) with significant improvements.
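A threshold-based baseline of the kind the abstract contrasts with FDSNeXt can be sketched as flagging a near-free-fall window in the resultant acceleration; the threshold and window length here are illustrative assumptions, not values from the paper:

```python
def accel_magnitude(ax, ay, az):
    """Resultant acceleration (in g) from a tri-axial inertial sensor sample."""
    return (ax * ax + ay * ay + az * az) ** 0.5

def pre_impact_fall(samples, threshold=0.4, min_run=3):
    """Flag a possible pre-impact fall when the resultant acceleration stays
    below a near-free-fall threshold (in g) for `min_run` consecutive samples.
    Both parameters are illustrative and would need tuning per device."""
    run = 0
    for ax, ay, az in samples:
        run = run + 1 if accel_magnitude(ax, ay, az) < threshold else 0
        if run >= min_run:
            return True
    return False
```

The difficulty the abstract points at is exactly picking `threshold` and `min_run`, which is what the learned model avoids.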
Abstract: A key requirement of today's fast-changing business outcome and innovation environment is the ability of organizations to adapt dynamically in an effective and efficient manner. Becoming a data-driven decision-making organization plays a crucially important role in addressing such adaptation requirements. The notion of “data democratization” has emerged as a mechanism with which organizations can address data-driven decision-making process issues and cross-pollinate data in ways that uncover actionable insights. We define data democratization as an attitude focused on curiosity, learning, and experimentation for delivering trusted data for trusted insights to a broad range of authorized stakeholders. In this paper, we propose a general indicator framework for data democratization by highlighting success factors that should not be overlooked in today's data-driven economy. In this practice-based research, these enablers are grouped into six broad building blocks: 1) “ethical guidelines, business context and value”, 2) “data leadership and data culture”, 3) “data literacy and business knowledge”, 4) “data wrangling, trustworthiness & standardization”, 5) “sustainable data platform, access, & analytical tools”, 6) “intelligent data governance and privacy”. As an attitude, once it is planned and built, data democratization will need to be maintained. The utility of the approach is demonstrated through a case study of a Cameroon-based start-up company with ongoing data analytics projects. Our findings advance the concept of data democratization and contribute to the free flow of data with trust.
Abstract: In mobile communication systems, inter-cell interference is one of the challenges that degrade system performance, especially in regions with massive numbers of mobile users. Linear precoding schemes were proposed to mitigate interference between base stations (inter-cell interference). Precoding schemes are categorized into linear and non-linear; this study focused on linear precoding schemes, which are grouped into three types, namely Zero Forcing (ZF), Block Diagonalization (BD), and Signal-to-Leakage-plus-Noise Ratio (SLNR). The study considers a cooperative multi-cell multiple-input multiple-output (MIMO) system, whereby each base station serves more than one mobile station and all base stations in the system assist each other by sharing channel state information (CSI). In this multi-cell multiuser MIMO system, each base station aims to maximize the data transmission rate of its mobile users by increasing the signal-to-interference-plus-noise ratio after the interference has been mitigated by the linear precoding schemes at the transmitter. These schemes use different approaches to mitigate interference. This study mainly concentrates on evaluating the performance of these schemes over channel distribution models such as Rayleigh and Rician, in the presence of noise errors. The results show that the SLNR scheme outperforms the ZF and BD schemes in all scenarios: as the value of SNR increased, the performance of SLNR exceeded that of ZF and BD by 21.4% and 45.7%, respectively.
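For reference, the leakage-based criterion behind the SLNR scheme has a standard closed form; this is a hedged sketch in our own notation (not necessarily the study's symbols), where $\mathbf{H}_k$ is the channel matrix of user $k$, $\mathbf{w}_k$ its precoding vector, and $\sigma^2$ the noise variance:

```latex
% SLNR for user k: desired signal power over leakage to other users plus noise.
\mathrm{SLNR}_k
  = \frac{\lVert \mathbf{H}_k \mathbf{w}_k \rVert^2}
         {\sigma^2 + \sum_{j \neq k} \lVert \mathbf{H}_j \mathbf{w}_k \rVert^2}
% The maximizing precoder is the dominant generalized eigenvector:
\mathbf{w}_k \;\propto\;
  \text{dominant eigenvector of }
  \Bigl( \sigma^2 \mathbf{I} + \sum_{j \neq k} \mathbf{H}_j^{\mathsf{H}} \mathbf{H}_j \Bigr)^{-1}
  \mathbf{H}_k^{\mathsf{H}} \mathbf{H}_k
```

Unlike ZF and BD, this criterion does not require the number of transmit antennas to exceed the total number of receive antennas, which is one reason SLNR tends to compare favorably.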
Abstract: The evolution of telecommunications has allowed the development of broadband services based mainly on fiber optic backbone networks. The operation and maintenance of these optical networks is made possible by supervision platforms that generate alarms, which can be archived in the form of log files. But analyzing the alarms in the log files is a laborious and difficult task for engineers, requiring a degree of expertise. Identifying failures and their root cause can be time-consuming and impact the quality of service, network availability, and the service level agreements signed between the operator and its customers. It is therefore important to study the different possibilities of alarm classification and to use machine learning algorithms for alarm correlation in order to determine the root causes of problems faster. We conducted a case study on one of the operators in Cameroon, which operates an optical backbone based on SDH and WDM technologies, with data collected from 2016-03-28 to 2022-09-01 comprising 7201 rows and 18 columns. In this paper, we classify alarms according to different criteria and use two unsupervised learning algorithms, namely K-Means and DBSCAN, to establish correlations between alarms in order to identify the root causes of problems and reduce troubleshooting time. To achieve this objective, log files were exploited to obtain the root causes of the alarms, and then K-Means and DBSCAN were evaluated on their performance and their capability to identify the root cause of alarms in an optical network.
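The K-Means step can be illustrated with a minimal, dependency-free sketch; in practice each alarm row would first be encoded as a numeric feature vector, so the 2-D points and fixed initial centroids below are illustrative:

```python
def kmeans(points, centroids, iters=10):
    """Minimal K-Means on 2-D points: alternate assignment and centroid update.
    Initial centroids are passed in explicitly to keep the run deterministic."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            # assign each point to its nearest centroid (squared Euclidean distance)
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[d.index(min(d))].append(p)
        # recompute each centroid as the mean of its cluster (keep old one if empty)
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl)) if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    return centroids, clusters
```

A production run would use a library implementation (e.g. scikit-learn's `KMeans`, or DBSCAN for density-based grouping), but the assignment/update loop is the whole algorithm.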
Funding: Supported by the National Natural Science Foundation of China (No. 51775185), the Natural Science Foundation of Hunan Province (No. 2022JJ90013), the Intelligent Environmental Monitoring Technology Hunan Provincial Joint Training Base for Graduate Students in the Integration of Industry and Education, Hunan Normal University University-Industry Cooperation, and the 2011 Collaborative Innovation Center for Development and Utilization of Finance and Economics Big Data Property, Universities of Hunan Province, Open Project, Grant Number 20181901CRP04.
Abstract: At present, water pollution has become an important factor affecting and restricting national and regional economic development. Total phosphorus is one of the main sources of water pollution and eutrophication, so the prediction of total phosphorus in water quality is of clear research significance. This paper selects total phosphorus and turbidity data for analysis by crawling the data of a water quality monitoring platform. By constructing an attribute-object mapping relationship, the correlation between the two indicators was analyzed and used to predict future data. Firstly, the monthly mean and daily mean concentrations of total phosphorus and turbidity were calculated after cleaning outliers, and the correlation between them was analyzed. Secondly, the correlation coefficients at different times and frequencies were used to predict the values for the next five days, and the data trend was visualized with Python. Finally, the real values were compared with the predicted values, and the results showed that the correlation between total phosphorus and turbidity is useful in predicting water quality.
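The correlation-based prediction described above can be sketched with the Pearson coefficient and a correlation-scaled linear map from turbidity to total phosphorus. The helper names and the linear form are our illustrative assumptions, not the paper's exact procedure:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def predict_from_turbidity(turbidity, phos_hist, turb_hist):
    """Illustrative linear predictor: map a new turbidity reading to total
    phosphorus through the historical means and the correlation-scaled slope."""
    r = pearson(turb_hist, phos_hist)
    n = len(turb_hist)
    mt, mp = sum(turb_hist) / n, sum(phos_hist) / n
    st = (sum((t - mt) ** 2 for t in turb_hist) / n) ** 0.5
    sp = (sum((p - mp) ** 2 for p in phos_hist) / n) ** 0.5
    return mp + r * (sp / st) * (turbidity - mt)
```

With perfectly correlated histories the predictor reduces to the exact linear relationship; weaker correlation shrinks the prediction toward the historical phosphorus mean.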