Journal Articles
3,772 articles found
1. Exploring Deep Learning Methods for Computer Vision Applications across Multiple Sectors: Challenges and Future Trends
Authors: Narayanan Ganesh, Rajendran Shankar, Miroslav Mahdal, Janakiraman SenthilMurugan, Jasgurpreet Singh Chohan, Kanak Kalita. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 4, pp. 103-141 (39 pages)
Computer vision (CV) was developed for computers and other systems to act or make recommendations based on visual inputs, such as digital photos, movies, and other media. Deep learning (DL) methods are more successful than other traditional machine learning (ML) methods in CV. DL techniques can produce state-of-the-art results for difficult CV problems like picture categorization, object detection, and face recognition. In this review, a structured discussion on the history, methods, and applications of DL methods to CV problems is presented. The sector-wise presentation of applications in this paper may be particularly useful for researchers in niche fields who have limited or introductory knowledge of DL methods and CV. This review will provide readers with context and examples of how these techniques can be applied to specific areas. A curated list of popular datasets and a brief description of them are also included for the benefit of readers.
Keywords: neural network; machine vision; classification; object detection; deep learning
2. A Study on Outlier Detection and Feature Engineering Strategies in Machine Learning for Heart Disease Prediction
Authors: Varada Rajkumar Kukkala, Surapaneni Phani Praveen, Naga Satya Koti Mani Kumar Tirumanadham, Parvathaneni Naga Srinivasu. Computer Systems Science & Engineering, 2024, Issue 5, pp. 1085-1112 (28 pages)
This paper investigates the application of machine learning to develop a response model for cardiovascular problems, using AdaBoost combined with two outlier detection methodologies: Z-Score incorporated with Grey Wolf Optimization (GWO), and Interquartile Range (IQR) coupled with Ant Colony Optimization (ACO). Using a performance index, it is shown that, compared with Z-Score and GWO with AdaBoost (accuracy 89.0%, Area Under the Curve (AUC) score 93.0%), IQR and ACO with AdaBoost are less accurate (86.0%) and less discriminative (AUC score of 91.0%). The Z-Score and GWO methods also outperformed the others in terms of precision, scoring 89.0%, and the recall was also found to be satisfactory, scoring 90.0%. The paper thus reveals specific benefits and drawbacks associated with different outlier detection and feature selection techniques, which are important to consider in further improving various aspects of diagnostics in cardiovascular health. Collectively, these findings can enhance the knowledge of heart disease prediction and patient treatment using enhanced and innovative machine learning (ML) techniques. This work lays the groundwork for more precise diagnosis models by highlighting the benefits of combining multiple optimization methodologies. Future studies should focus on maximizing patient outcomes and model efficacy through research on these combinations.
Keywords: Grey Wolf Optimization; Ant Colony Optimization; Z-Score; Interquartile Range (IQR); AdaBoost; outlier
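The abstract above contrasts Z-Score and IQR outlier screening before feature selection. As a minimal sketch of the two rules only (the 3.0 and 1.5 thresholds are conventional defaults, not values from the paper, and the optimizer couplings are omitted):

```python
from statistics import mean, pstdev, quantiles

def zscore_outliers(x, threshold=3.0):
    # Flag values whose standardized distance from the mean exceeds the threshold.
    m, s = mean(x), pstdev(x)
    return [abs(v - m) / s > threshold for v in x]

def iqr_outliers(x, k=1.5):
    # Flag values outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR].
    q1, _, q3 = quantiles(x, n=4)
    spread = q3 - q1
    return [v < q1 - k * spread or v > q3 + k * spread for v in x]

readings = [50.0] * 20 + [500.0]  # one extreme measurement
print(sum(zscore_outliers(readings)), sum(iqr_outliers(readings)))  # 1 1
```

In the paper both rules feed a metaheuristic (GWO or ACO) that tunes the downstream feature subset; here they are shown standalone.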
3. Computer-Aided Diagnosis Model Using Machine Learning for Brain Tumor Detection and Classification (Cited by: 1)
Authors: M. Uvaneshwari, M. Baskar. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 8, pp. 1811-1826 (16 pages)
A Brain Tumor (BT) is created by an uncontrollable rise of anomalous cells in brain tissue, and it comes in two types: malignant and benign. A benign BT does not affect the neighbouring healthy and normal tissue; however, a malignant one can affect the adjacent brain tissues, which may result in death. Initial recognition of BT is highly significant to protecting the patient's life. Generally, BT can be identified through the magnetic resonance imaging (MRI) scanning technique. But radiotherapists do not achieve effective tumor segmentation in MRI images because of the position and unequal shape of the tumor in the brain. Recently, machine learning (ML) has prevailed against standard image processing techniques, and several studies denote the superiority of ML techniques over standard techniques. Therefore, this study develops a novel brain tumor detection and classification model using metaheuristic optimization with machine learning (BTDC-MOML). To accomplish the detection of brain tumors effectively, a Computer-Aided Diagnosis (CAD) model using an ML technique is proposed in this research manuscript. Initially, input image pre-processing is performed using Gabor filtering (GF) based noise removal, contrast enhancement, and skull stripping. Next, mayfly optimization with Kapur's thresholding based segmentation takes place. For feature extraction purposes, local diagonal extreme patterns (LDEP) are exploited. At last, the Extreme Gradient Boosting (XGBoost) model is used for the BT classification process. Accuracy analysis is performed in terms of learning accuracy, and validation accuracy is measured to determine the efficiency of the proposed work. The experimental validation of the proposed model demonstrates its promising performance over other existing methods.
Keywords: brain tumor; machine learning; segmentation; computer-aided diagnosis; skull stripping
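The segmentation step in the abstract above rests on Kapur's entropy thresholding (driven by mayfly optimization in the paper). As an illustrative sketch only — an exhaustive search over a gray-level histogram rather than the paper's optimizer — Kapur's criterion picks the threshold that maximizes the summed entropies of background and foreground:

```python
import math

def kapur_threshold(hist):
    # hist: pixel counts per gray level. Returns the threshold t that
    # maximizes H(background p[:t]) + H(foreground p[t:]).
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, float("-inf")
    for t in range(1, len(p)):
        w0 = sum(p[:t])
        w1 = 1.0 - w0
        if w0 <= 0 or w1 <= 0:
            continue  # one class is empty; entropy undefined
        h0 = -sum(q / w0 * math.log(q / w0) for q in p[:t] if q > 0)
        h1 = -sum(q / w1 * math.log(q / w1) for q in p[t:] if q > 0)
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# A bimodal 8-level histogram: the chosen threshold separates the two modes.
print(kapur_threshold([10, 10, 0, 0, 0, 0, 10, 10]))
```

For a 256-level MRI histogram the search space is tiny, which is why metaheuristics like the mayfly algorithm are mainly useful for the multi-threshold version of this problem.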
4. Computer Vision and Deep Learning-enabled Weed Detection Model for Precision Agriculture (Cited by: 1)
Authors: R. Punithavathi, A. Delphin Carolina Rani, K. R. Sughashinir, Chinnarao Kurangit, M. Nirmala, Hasmath Farhana Thariq Ahmed, S. P. Balamurugan. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 3, pp. 2759-2774 (16 pages)
Presently, precision agriculture processes like plant disease detection, crop yield prediction, species recognition, weed detection, and irrigation can be accomplished by the use of computer vision (CV) approaches. Weeds play a vital role in influencing crop productivity, and the wastage and pollution of farmland's natural atmosphere instigated by full-coverage chemical herbicide spraying are increasing. Since the proper identification of weeds among crops helps to reduce herbicide usage and improve productivity, this study presents a novel computer vision and deep learning based weed detection and classification (CVDL-WDC) model for precision agriculture. The proposed CVDL-WDC technique intends to properly discriminate between plants and weeds. It involves two processes, namely multiscale Faster RCNN based object detection and optimal extreme learning machine (ELM) based weed classification. The parameters of the ELM model are optimally adjusted by the use of the farmland fertility optimization (FFO) algorithm. A comprehensive simulation analysis of the CVDL-WDC technique against a benchmark dataset reported enhanced outcomes over its recent approaches in terms of several measures.
Keywords: precision agriculture; smart farming; weed detection; computer vision; deep learning
5. BDPartNet: Feature Decoupling and Reconstruction Fusion Network for Infrared and Visible Image (Cited by: 1)
Authors: Xuejie Wang, Jianxun Zhang, Ye Tao, Xiaoli Yuan, Yifan Guo. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 4621-4639 (19 pages)
While single-modal visible light images or infrared images provide limited information, infrared light captures significant thermal radiation data, whereas visible light excels in presenting detailed texture information. Combining images obtained from both modalities allows for leveraging their respective strengths and mitigating individual limitations, resulting in high-quality images with enhanced contrast and rich texture details. Such capabilities hold promising applications in advanced visual tasks including target detection, instance segmentation, military surveillance, and pedestrian detection, among others. This paper introduces a novel approach: a dual-branch decomposition fusion network based on an AutoEncoder (AE), which decomposes multi-modal features into intensity and texture information for enhanced fusion. A local contrast enhancement module (CEM) and a texture detail enhancement module (DEM) are devised to process the decomposed images, followed by image fusion through the decoder. The proposed loss function ensures effective retention of key information from the source images of both modalities. Extensive comparison and generalization experiments demonstrate the superior performance of our network in preserving pixel intensity distribution and retaining texture details. The qualitative results show the advantages of fusion details and local contrast, and in the quantitative experiments, entropy (EN), mutual information (MI), structural similarity (SSIM), and other results improved and exceeded the SOTA (state-of-the-art) models overall.
Keywords: deep learning; feature enhancement; computer vision
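Among the quantitative fusion metrics named above, entropy (EN) is the simplest: the Shannon entropy of the fused image's gray-level histogram. A minimal sketch, assuming 8-bit gray levels and a flattened pixel list rather than a real image array:

```python
import math
from collections import Counter

def image_entropy(pixels):
    # Shannon entropy (in bits) of the gray-level histogram.
    # A higher EN suggests the fused image carries more information.
    total = len(pixels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(pixels).values())

print(image_entropy([0, 0, 255, 255]))  # 1.0 — two equally likely gray levels
```

MI and SSIM compare the fused image against each source image and need the pixel grids aligned, so they are not reproduced here.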
6. Recognition of mortar pumpability via computer vision and deep learning
Authors: Hao-Zhe Feng, Hong-Yang Yu, Wen-Yong Wang, Wen-Xuan Wang, Ming-Qian Du. Journal of Electronic Science and Technology (EI, CAS, CSCD), 2023, Issue 3, pp. 73-81 (9 pages)
Mortar pumpability is essential in the construction industry; estimating it manually requires much labor and often causes material waste. This paper proposes an effective method that combines a 3-dimensional convolutional neural network (3D CNN) with a 2-dimensional convolutional long short-term memory network (ConvLSTM2D) to automatically classify mortar pumpability. Experimental results show that the proposed model reaches an accuracy rate of 100% with fast convergence speed on a dataset organized by collecting the corresponding mortar image sequences. This work demonstrates the feasibility of using computer vision and deep learning for mortar pumpability classification.
Keywords: classification; computer vision; deep learning; pumpability; 2-dimensional convolutional long short-term memory network (ConvLSTM2D); 3-dimensional convolutional neural network (3D CNN)
7. Defect Detection Model Using Time Series Data Augmentation and Transformation (Cited by: 1)
Authors: Gyu-Il Kim, Hyun Yoo, Han-Jin Cho, Kyungyong Chung. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 1713-1730 (18 pages)
Time-series data provide important information in many fields, and their processing and analysis have been the focus of much research. However, detecting anomalies is very difficult due to data imbalance, temporal dependence, and noise. Therefore, methodologies for data augmentation and for converting time series data into images for analysis have been studied. This paper proposes a defect detection model that uses time series data augmentation and transformation to address the problems of data imbalance, temporal dependence, and robustness to noise. The data augmentation method is the addition of noise: Gaussian noise, with the noise level set to 0.002, is added to maximize the generalization performance of the model. In addition, the Markov Transition Field (MTF) method is used to effectively visualize the dynamic transitions of the data while converting the time series data into images; it enables the identification of patterns in time series data and assists in capturing their sequential dependencies. For anomaly detection, the PatchCore model is applied and shows excellent performance, and the detected anomaly areas are represented as heat maps. By applying an anomaly map to the original image, it is possible to capture the areas where anomalies occur. The performance evaluation shows that both F1-score and accuracy are high when time series data are converted to images. Additionally, when processed as images rather than as time series data, there was a significant reduction in both the size of the data and the training time. The proposed method can provide an important springboard for research in the field of anomaly detection using time series data, and it helps solve problems such as analyzing complex patterns in data in a lightweight way.
Keywords: defect detection; time series; deep learning; data augmentation; data transformation
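Two concrete steps from the abstract above can be sketched in a few lines: the Gaussian-noise augmentation (the 0.002 noise level is the paper's) and a toy Markov Transition Field. Note this MTF uses equal-width bins for brevity, whereas MTF is usually defined with quantile bins, and real pipelines use a library such as pyts:

```python
import random

def add_gaussian_noise(series, sigma=0.002, seed=0):
    # Augmentation step from the abstract: add zero-mean Gaussian noise.
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, sigma) for v in series]

def markov_transition_field(series, n_bins=4):
    # Discretize into equal-width bins, estimate the first-order Markov
    # transition matrix W, then set MTF[i][j] = W[bin(x_i)][bin(x_j)],
    # turning a length-n series into an n x n image.
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_bins or 1.0
    bins = [min(int((v - lo) / width), n_bins - 1) for v in series]
    counts = [[0.0] * n_bins for _ in range(n_bins)]
    for a, b in zip(bins, bins[1:]):
        counts[a][b] += 1
    w = [[c / (sum(row) or 1.0) for c in row] for row in counts]
    n = len(series)
    return [[w[bins[i]][bins[j]] for j in range(n)] for i in range(n)]

mtf = markov_transition_field([0.0, 0.1, 0.9, 1.0, 0.1, 0.0])
print(len(mtf), len(mtf[0]))  # 6 6 — the series becomes a 6x6 image
```

The resulting image is what a vision-based detector like PatchCore consumes in the paper's pipeline.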
8. Refinement modeling and verification of secure operating systems for communication in digital twins
Authors: Zhenjiang Qian, Gaofei Sun, Xiaoshuang Xing, Gaurav Dhiman. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 2, pp. 304-314 (11 pages)
In traditional digital twin communication system testing, we can apply test cases as completely as possible in order to ensure the correctness of the system implementation, yet even then there is no guarantee that the implementation is completely correct. Formal verification is currently recognized as a method to ensure the correctness of software systems for communication in digital twins: it uses rigorous mathematical methods to verify correctness and can effectively help system designers determine whether the system is designed and implemented correctly. In this paper, we use the interactive theorem proving tool Isabelle/HOL to construct a formal model of the X86 architecture and to model the related assembly instructions. The verification result shows that the system states obtained after the operations of the relevant assembly instructions are consistent with the expected states, indicating that the system meets the design expectations.
Keywords: theorem proving; Isabelle/HOL; formal verification; system modeling; correctness verification
9. Correlation Composition Awareness Model with Pair Collaborative Localization for IoT Authentication and Localization
Authors: Kranthi Alluri, S. Gopikrishnan. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 943-961 (19 pages)
Secure authentication and accurate localization among Internet of Things (IoT) sensors are pivotal for the functionality and integrity of IoT networks. IoT authentication and localization are intricate and symbiotic, impacting both the security and operational functionality of IoT systems. Hence, accurate localization and lightweight authentication on resource-constrained IoT devices pose several challenges. To overcome these challenges, recent approaches have used encryption techniques with well-known key infrastructures. However, these methods are inefficient due to the increasing number of data breaches in their localization approaches. The proposed research efficiently integrates authentication and localization processes in such a way that they complement each other without compromising security or accuracy. The proposed framework aims to detect active attacks within IoT networks, precisely localize malicious IoT devices participating in these attacks, and establish dynamic implicit authentication mechanisms. The integrated framework proposes a Correlation Composition Awareness (CCA) model, which explores innovative approaches to device correlations, enhancing the accuracy of attack detection and localization. Additionally, the framework introduces the Pair Collaborative Localization (PCL) technique, facilitating precise identification of the exact locations of malicious IoT devices. To address device authentication, a Behavior and Performance Measurement (BPM) scheme is developed, ensuring that only trusted devices gain access to the network. The work has been evaluated across various environments and compared against existing models. The results prove that the proposed methodology attains 96% attack detection accuracy, 84% localization accuracy, and 98% device authentication accuracy.
Keywords: sensor localization; IoT authentication; network security; data accuracy; precise location; access control; security framework
10. Trusted Certified Auditor Using Cryptography for Secure Data Outsourcing and Privacy Preservation in Fog-Enabled VANETs
Authors: Nagaraju Pacharla, K. Srinivasa Reddy. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 3089-3110 (22 pages)
With the recent technological developments, massive vehicular ad hoc networks (VANETs) have been established, enabling numerous vehicles and their respective Road Side Unit (RSU) components to communicate with one another. The best way to enhance traffic flow for vehicles and traffic management departments is to share the data they receive, but the VANET systems need more protection. An effective and safe method of outsourcing is suggested, which reduces computation costs by achieving data security using a homomorphic mapping based on the conjugate operation of matrices. This research proposes a VANET-based data outsourcing system to fix these issues. To keep data outsourcing secure, the suggested model takes cryptography models into account; fog keeps the generated keys for the purpose of vehicle authentication. For controlling and overseeing the outsourced data while preserving privacy, the suggested approach considers a Trusted Certified Auditor (TCA). Using the secret key, the TCA can identify the genuine identity of VANETs when harmful messages are detected. The proposed model develops a TCA-based unique static vehicle labeling system using cryptography (TCA-USVLC) for secure data outsourcing and privacy preservation in VANETs. The proposed model calculates the trust of vehicles in 16 ms for an average of 180 vehicles and achieves 98.6% accuracy for data encryption to provide security. It achieved 98.5% accuracy in data outsourcing and 98.6% accuracy in privacy preservation in fog-enabled VANETs. Elliptical curve cryptography models can be applied in the future for better encryption and decryption rates with lightweight cryptographic operations.
Keywords: vehicular ad-hoc networks; data outsourcing; privacy preservation; cryptography keys; trusted certified auditors; data security
11. Hybrid Prairie Dog and Beluga Whale Optimization Algorithm for Multi-Objective Load Balanced-Task Scheduling in Cloud Computing Environments
Authors: K. Ramya, Senthilselvi Ayothi. China Communications (SCIE, CSCD), 2024, Issue 7, pp. 307-324 (18 pages)
Cloud computing technology is utilized for achieving resource utilization of remote-based virtual computers to facilitate consumers with rapid and accurate massive data services. It utilizes on-demand resource provisioning, but the necessitated constraints of rapid turnaround time, minimal execution cost, high rate of resource utilization, and limited makespan transform the Load Balancing (LB) process-based Task Scheduling (TS) problem into an NP-hard optimization issue. In this paper, the Hybrid Prairie Dog and Beluga Whale Optimization Algorithm (HPDBWOA) is propounded for precise mapping of tasks to virtual machines with the objective of addressing the dynamic nature of the cloud environment. This capability of HPDBWOA helps in decreasing SLA violations and makespan with optimal resource management. It is modelled as a scheduling strategy which utilizes the merits of PDOA and BWOA for attaining reactive decision making with respect to the process of assigning tasks to virtual resources by taking their priorities into account. It addresses the problem of premature convergence with well-balanced exploration and exploitation to attain the necessitated Quality of Service (QoS) and minimize the waiting time incurred during the TS process. It further balances exploration and exploitation rates to reduce the makespan during task allocation with complete awareness of VM state. The results of the proposed HPDBWOA confirmed minimized energy utilization of 32.18% and reduced cost of 28.94%, better than the approaches used for investigation. Statistical investigation of the proposed HPDBWOA conducted using ANOVA confirmed its efficacy over the benchmarked systems in terms of throughput, system, and response time.
Keywords: Beluga Whale Optimization Algorithm (BWOA); cloud computing; improved Hopcroft-Karp algorithm; Infrastructure as a Service (IaaS); Prairie Dog Optimization Algorithm (PDOA); Virtual Machine (VM)
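Makespan, the objective the abstract above repeatedly targets, is simply the finish time of the busiest virtual machine under a given task-to-VM mapping. A hedged sketch of that evaluation function (not the HPDBWOA metaheuristic itself, whose update rules the abstract does not give):

```python
def makespan(assignment, task_lengths, vm_speeds):
    # assignment[i] = index of the VM that runs task i.
    # Each VM's finish time is the sum of its tasks' execution times;
    # makespan is the maximum finish time across all VMs.
    finish = [0.0] * len(vm_speeds)
    for task, vm in enumerate(assignment):
        finish[vm] += task_lengths[task] / vm_speeds[vm]
    return max(finish)

# Three tasks on two VMs: a scheduler searches assignments minimizing this value.
print(makespan([0, 1, 0], task_lengths=[4.0, 6.0, 2.0], vm_speeds=[2.0, 1.0]))  # 6.0
```

A population-based optimizer like the one in the paper would evaluate many candidate assignments with a function of this shape, alongside cost and energy terms.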
12. MAIPFE: An Efficient Multimodal Approach Integrating Pre-Emptive Analysis, Personalized Feature Selection, and Explainable AI
Authors: Moshe Dayan Sirapangi, S. Gopikrishnan. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2229-2251 (23 pages)
Medical Internet of Things (IoT) devices are becoming more and more common in healthcare. This has created a huge need for advanced predictive health modeling strategies that can make good use of the growing amount of multimodal data to find potential health risks early and help individuals in a personalized way. Existing methods, while useful, have limitations in predictive accuracy, delay, personalization, and user interpretability, requiring a more comprehensive and efficient approach to harness modern medical IoT devices. MAIPFE is a multimodal approach integrating pre-emptive analysis, personalized feature selection, and explainable AI for real-time health monitoring and disease detection. By using AI for early disease detection, personalized health recommendations, and transparency, healthcare will be transformed. The Multimodal Approach Integrating Pre-emptive Analysis, Personalized Feature Selection, and Explainable AI (MAIPFE) framework, which combines the Firefly Optimizer, Recurrent Neural Network (RNN), Fuzzy C-Means (FCM), and explainable AI, improves disease detection precision over existing methods. Comprehensive metrics show the model's superiority in real-time health analysis. The proposed framework outperformed existing models by 8.3% in disease detection classification precision, 8.5% in accuracy, 5.5% in recall, 2.9% in specificity, 4.5% in AUC (Area Under the Curve), and 4.9% in delay reduction. Disease prediction precision increased by 4.5%, accuracy by 3.9%, recall by 2.5%, specificity by 3.5%, and AUC by 1.9%, and delay levels decreased by 9.4%. MAIPFE can revolutionize healthcare with pre-emptive analysis, personalized health insights, and actionable recommendations. The research shows that this innovative approach improves patient outcomes and healthcare efficiency in the real world.
Keywords: predictive health modeling; Medical Internet of Things; explainable artificial intelligence; personalized feature selection; pre-emptive analysis
13. Analysis and Modeling of Mobile Phone Activity Data Using Interactive Cyber-Physical Social System
Authors: Farhan Amin, Gyu Sang Choi. Computers, Materials & Continua (SCIE, EI), 2024, Issue 9, pp. 3507-3521 (15 pages)
Mobile networks possess significant information and thus are considered a gold mine for the research community. The call detail records (CDR) of a mobile network are used to identify the network's efficacy and mobile users' behavior. It is evident from the recent literature that cyber-physical systems (CPS) have been used in the analytics and modeling of telecom data; in addition, CPS is used to provide valuable services in smart cities. In general, a typical telecom company has millions of subscribers and thus generates massive amounts of data, making data storage, analysis, and processing the key concerns. To solve these issues, herein we propose a multilevel cyber-physical social system (CPSS) for the analysis and modeling of large internet data. Our proposed multilevel system has three levels, each with a specific functionality. At the first level, raw Call Detail Record (CDR) data are collected, and data preprocessing, cleaning, and error removal operations are performed. At the second level, data processing, cleaning, reduction, integration, and storage are performed, and the suggested internet activity record measures are applied. Our proposed system then constructs a graph and performs network analysis, accurately identifying different areas of peak internet usage in a city (Milan). Our research is helpful for network operators to plan effective network configuration, management, and optimization of resources.
Keywords: cyber-physical social systems; big data; cyber-physical systems; pervasive computing; smart city; big data management techniques
14. Leveraging User-Generated Comments and Fused BiLSTM Models to Detect and Predict Issues with Mobile Apps
Authors: Wael M. S. Yafooz, Abdullah Alsaeedi. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 735-759 (25 pages)
In the last decade, technical advancements and faster Internet speeds have also led to an increasing number of mobile devices and users. Thus, all contributors to society, whether young or old, can use these mobile apps. The use of these apps eases our daily lives, and all customers who need any type of service can access it easily, comfortably, and efficiently through mobile apps. Particularly, Saudi Arabia greatly depends on digital services to assist people and visitors. Such mobile devices are used in organizing daily work schedules and services, particularly during two large occasions, Umrah and Hajj. However, pilgrims encounter mobile app issues such as slowness, conflict, unreliability, or user-unfriendliness. Pilgrims comment on these issues on mobile app platforms through reviews of their experiences with these digital services. Scholars have made several attempts to solve such mobile issues by reporting bugs or non-functional requirements by utilizing user comments. However, solving such issues is a great challenge, and the issues still exist. Therefore, this study aims to propose a hybrid deep learning model to classify and predict mobile app software issues encountered by millions of pilgrims during the Hajj and Umrah periods from the user perspective. Firstly, a dataset was constructed using user-generated comments from relevant mobile apps using natural language processing methods, including information extraction, the annotation process, and pre-processing steps, considering a multi-class classification problem. Then, several experiments were conducted using common machine learning classifiers and Artificial Neural Network (ANN), Long Short-Term Memory (LSTM), and Convolutional Neural Network Long Short-Term Memory (CNN-LSTM) architectures to examine the performance of the proposed model. Results show 96% in F1-score and accuracy, and the proposed model outperformed the mentioned models.
Keywords: mobile app issues; Play Store; user comments; deep learning; LSTM; bidirectional LSTM
15. An Optimized Approach to Deep Learning for Botnet Detection and Classification for Cybersecurity in Internet of Things Environment
Author: Abdulrahman Alzahrani. Computers, Materials & Continua (SCIE, EI), 2024, Issue 8, pp. 2331-2349 (19 pages)
The recent development of the Internet of Things (IoT) has resulted in the growth of IoT-based DDoS attacks. The detection of botnets in IoT systems implements advanced cybersecurity measures to detect and reduce malevolent botnets in interconnected devices. Anomaly detection models evaluate transmission patterns, network traffic, and device behaviour to detect deviations from usual activities. Machine learning (ML) techniques detect patterns signalling botnet activity, namely sudden traffic increases, unusual command and control patterns, or irregular device behaviour. In addition, intrusion detection systems (IDSs) and signature-based techniques are applied to recognize known malware signatures related to botnets. Various ML and deep learning (DL) techniques have been developed to detect botnet attacks in IoT systems. To overcome security issues in an IoT environment, this article designs a gorilla troops optimizer with DL-enabled botnet attack detection and classification (GTODL-BADC) technique. The GTODL-BADC technique follows feature selection (FS) with optimal DL-based classification for accomplishing security in an IoT environment. For data preprocessing, the min-max data normalization approach is primarily used. The GTODL-BADC technique uses the GTO algorithm to select features and elect optimal feature subsets. Moreover, the multi-head attention-based long short-term memory (MHA-LSTM) technique is applied for botnet detection. Finally, the tree seed algorithm (TSA) is used to select the optimum hyperparameters for the MHA-LSTM method. The experimental validation of the GTODL-BADC technique was tested on a benchmark dataset, and the simulation results highlight that it demonstrates promising performance in the botnet detection process.
Keywords: botnet detection; Internet of Things; gorilla troops optimizer; hyperparameter tuning; intrusion detection system
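The preprocessing step named in the abstract above, min-max normalization, rescales every feature into [0, 1] before feature selection. A minimal per-feature sketch (a list-of-values column representation is assumed):

```python
def min_max_normalize(column):
    # Rescale values linearly so the minimum maps to 0.0 and the maximum to 1.0.
    # A constant column is mapped to all zeros to avoid division by zero.
    lo, hi = min(column), max(column)
    span = hi - lo
    if span == 0:
        return [0.0 for _ in column]
    return [(v - lo) / span for v in column]

print(min_max_normalize([10, 20, 40]))  # min maps to 0.0, max to 1.0
```

In practice the (lo, hi) pair is computed on the training split only and reused to transform validation and test traffic, so the detector never peeks at unseen data.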
16. ED-Ged: Nighttime Image Semantic Segmentation Based on Enhanced Detail and Bidirectional Guidance
Authors: Xiaoli Yuan, Jianxun Zhang, Xuejie Wang, Zhuhong Chu. Computers, Materials & Continua (SCIE, EI), 2024, Issue 8, pp. 2443-2462 (20 pages)
Semantic segmentation of driving scene images is crucial for autonomous driving. While deep learning technology has significantly improved daytime image semantic segmentation, nighttime images pose challenges due to factors like poor lighting and overexposure, making it difficult to recognize small objects. To address this, we propose an Image Adaptive Enhancement (IAEN) module comprising a parameter predictor (Edip), multiple image processing filters (Mdif), and a Detail Processing Module (DPM). Edip combines image processing filters to predict parameters like exposure and hue, optimizing image quality. We adopt a novel image encoder to enhance parameter prediction accuracy by enabling Edip to handle features at different scales. DPM strengthens overlooked image details, extending the IAEN module's functionality. After the segmentation network, we integrate a Depth Guided Filter (DGF) to refine segmentation outputs. The entire network is trained end-to-end, with segmentation results guiding the optimization of parameter prediction, promoting self-learning and network improvement. This lightweight and efficient network architecture is particularly suitable for addressing the challenges of nighttime image segmentation. Extensive experiments validate significant performance improvements of our approach on the ACDC-night and Nightcity datasets.
Keywords: night driving, semantic segmentation, nighttime image processing, adverse illumination, differentiable filters
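The Mdif component described above applies differentiable image-processing filters whose parameters (e.g., exposure, hue) are predicted per image by Edip. A minimal sketch of two such parametric filters; the formulas are common illustrative choices, not necessarily the paper's exact filter definitions:

```python
import numpy as np

def exposure_filter(img, ev):
    """Scale intensities by 2**ev (ev = exposure-value shift predicted per image)."""
    return np.clip(img * (2.0 ** ev), 0.0, 1.0)

def gamma_filter(img, gamma):
    """Apply a gamma curve; gamma < 1 brightens dark nighttime regions."""
    return np.clip(img, 1e-6, 1.0) ** gamma
```

Because both operations are differentiable in their parameters, gradients from the segmentation loss can flow back into the parameter predictor during end-to-end training.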
Prediction of Bandwidth of Metamaterial Antenna Using Pearson Kernel-Based Techniques
17
Authors: Sherly Alphonse, S. Abinaya, Sourabh Paul. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 3449-3467 (19 pages)
The use of metamaterials enhances the performance of a specific class of antennas known as metamaterial antennas. The radiation cost and quality factor of the antenna are influenced by the size of the antenna. Metamaterial antennas allow for the circumvention of the bandwidth restriction for small antennas. Antenna parameters have recently been predicted using machine learning algorithms in the existing literature. Machine learning can take the place of the manual process of experimenting to find the ideal simulated antenna parameters. The accuracy of the prediction will primarily depend on the model that is used. In this paper, a novel method for forecasting the bandwidth of a metamaterial antenna is proposed, based on using the Pearson kernel as a standard kernel. Along with this approach, this paper suggests a unique hypersphere-based normalization to normalize the values of the dataset attributes and a dimensionality reduction method based on the Pearson kernel. A novel algorithm for optimizing the parameters of a Convolutional Neural Network (CNN), based on an improved Bat Algorithm-based Optimization with Pearson Mutation (BAO-PM), is also presented. The prediction results of the proposed method are better than those of the existing models in the literature.
Keywords: antenna, Pearson, optimization, bandwidth, metamaterial
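A Pearson kernel, as used above, can be read as a kernel matrix whose (i, j) entry is the Pearson correlation between sample i and sample j. A minimal sketch under that reading; the paper's exact kernel definition may differ:

```python
import numpy as np

def pearson_kernel(X, Y=None):
    """K[i, j] = Pearson correlation between row i of X and row j of Y."""
    X = np.asarray(X, dtype=float)
    Y = X if Y is None else np.asarray(Y, dtype=float)
    # Center each sample (row) across its features.
    Xc = X - X.mean(axis=1, keepdims=True)
    Yc = Y - Y.mean(axis=1, keepdims=True)
    num = Xc @ Yc.T
    den = np.linalg.norm(Xc, axis=1, keepdims=True) * np.linalg.norm(Yc, axis=1)
    return num / den
```

Entries lie in [-1, 1]; identical or positively scaled samples correlate at 1, which is what makes the kernel usable both for similarity-based prediction and for the Pearson-based dimensionality reduction the abstract mentions.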
Preserving Data Secrecy and Integrity for Cloud Storage Using Smart Contracts and Cryptographic Primitives
18
Author: Maher Alharby. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2449-2463 (15 pages)
Cloud computing has emerged as a viable alternative to traditional computing infrastructures, offering various benefits. However, the adoption of cloud storage poses significant risks to data secrecy and integrity. This article presents an effective mechanism to preserve the secrecy and integrity of data stored on the public cloud by leveraging blockchain technology, smart contracts, and cryptographic primitives. The proposed approach utilizes a Solidity-based smart contract as an auditor for maintaining and verifying the integrity of outsourced data. To preserve data secrecy, symmetric encryption systems are employed to encrypt user data before outsourcing it. An extensive performance analysis is conducted to illustrate the efficiency of the proposed mechanism. Additionally, a rigorous assessment is conducted to ensure that the developed smart contract is free from vulnerabilities and to measure its associated running costs. The security analysis of the proposed system confirms that our approach can securely maintain the confidentiality and integrity of cloud storage, even in the presence of malicious entities. The proposed mechanism contributes to enhancing data security in cloud computing environments and can serve as a foundation for developing more secure cloud storage systems.
Keywords: cloud storage, data secrecy, data integrity, smart contracts, cryptography
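The auditing protocol described above boils down to: encrypt locally, outsource the ciphertext, record its digest with the on-chain auditor, and later re-hash what the cloud returns and compare. A toy Python stand-in for the Solidity auditor (the class and method names are illustrative, and encryption is elided; a real deployment would use an authenticated symmetric cipher before hashing):

```python
import hashlib

class AuditorContract:
    """Toy stand-in for the Solidity auditor: stores one digest per file id."""

    def __init__(self):
        self._digests = {}

    def record(self, file_id: str, digest: str) -> None:
        """Owner registers the digest of the outsourced ciphertext."""
        self._digests[file_id] = digest

    def verify(self, file_id: str, digest: str) -> bool:
        """Anyone can check retrieved data against the recorded digest."""
        return self._digests.get(file_id) == digest

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()
```

On-chain, only the fixed-size digest is stored, which keeps the contract's running costs independent of file size, in line with the cost assessment the abstract mentions.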
Japanese Sign Language Recognition by Combining Joint Skeleton-Based Handcrafted and Pixel-Based Deep Learning Features with Machine Learning Classification
19
Authors: Jungpil Shin, Md. Al Mehedi Hasan, Abu Saleh Musa Miah, Kota Suzuki, Koki Hirooka. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 6, pp. 2605-2625 (21 pages)
Sign language recognition is vital for enhancing communication accessibility among the Deaf and hard-of-hearing communities. In Japan, approximately 360,000 individuals with hearing and speech disabilities rely on Japanese Sign Language (JSL) for communication. However, existing JSL recognition systems have faced significant performance limitations due to inherent complexities. In response to these challenges, we present a novel JSL recognition system that employs a strategic fusion approach, combining joint skeleton-based handcrafted features and pixel-based deep learning features. Our system incorporates two distinct streams: the first stream extracts crucial handcrafted features, emphasizing the capture of hand and body movements within JSL gestures. Simultaneously, a deep learning-based transfer learning stream captures hierarchical representations of JSL gestures. We then concatenate the critical information of the first stream and the hierarchical features of the second stream to produce multi-level fusion features, aiming to create a comprehensive representation of the JSL gestures. After reducing the dimensionality of the features, a feature selection approach and a kernel-based support vector machine (SVM) were used for classification. To assess the effectiveness of our approach, we conducted extensive experiments on our lab JSL dataset and a publicly available Arabic sign language (ArSL) dataset. Our results unequivocally demonstrate that our fusion approach significantly enhances JSL recognition accuracy and robustness compared to individual feature sets or traditional recognition methods.
Keywords: Japanese Sign Language (JSL), hand gesture recognition, geometric feature, distance feature, angle feature, GoogleNet
Gates joint locally connected network for accurate and robust reconstruction in optical molecular tomography
20
Authors: Minghua Zhao, Yahui Xiao, Jiaqi Zhang, Xin Cao, Lin Wang. Journal of Innovative Optical Health Sciences (SCIE, EI, CSCD), 2024, Issue 3, pp. 11-22 (12 pages)
Optical molecular tomography (OMT) is a potential pre-clinical molecular imaging technique with applications in a variety of biomedical areas, which can provide non-invasive quantitative three-dimensional (3D) information regarding tumor distribution in living animals. The construction of optical transmission models and the application of reconstruction algorithms in traditional model-based reconstruction processes have affected the reconstruction results, leading to problems such as low accuracy, poor robustness, and long time consumption. Here, a gates joint locally connected network (GLCN) method is proposed that establishes the mapping between the internal source distribution and the photon density on the surface directly, thus avoiding the extra time consumption caused by iteration and the reconstruction errors caused by model inaccuracy. The gates module is composed of the concatenation and multiplication operators of three different gates. It is embedded into the network with the aim of remembering the input surface photon density over a period and allowing the network to selectively capture neurons connected to the true source by controlling the three gates. To evaluate the performance of the proposed method, numerical simulations were conducted, whose results demonstrated good performance in terms of reconstruction positioning accuracy and robustness.
Keywords: optical molecular tomography, gates module, positioning accuracy, robustness
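Read as an LSTM-style gating block, the "concatenation and multiplication operators of three different gates" described above might look like the following sketch. The weight shapes, the tanh candidate, and the update rule are illustrative assumptions, not the paper's exact GLCN design:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gates_module(x, state, Wf, Wi, Wo):
    """Three gates over the concatenated [input, state] vector.

    x     : current surface photon-density features, shape (d,)
    state : remembered features from earlier steps, shape (d,)
    Wf/Wi/Wo : gate weights, each shape (d, 2d)
    """
    z = np.concatenate([x, state])          # concatenation operator
    f = sigmoid(Wf @ z)                     # forget-style gate
    i = sigmoid(Wi @ z)                     # input-style gate
    o = sigmoid(Wo @ z)                     # output-style gate
    new_state = f * state + i * np.tanh(x)  # multiplication operators
    return o * new_state, new_state
```

The gated output would feed the locally connected layers that map surface measurements back to the internal source distribution.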