Journal Articles
76 articles found
1. Computing Challenges of UAV Networks: A Comprehensive Survey
Authors: Altaf Hussain, Shuaiyong Li, Tariq Hussain, Xianxuan Lin, Farman Ali, Ahmad Ali AlZubi. Computers, Materials & Continua (SCIE, EI), 2024, Issue 11, pp. 1999-2051 (53 pages).
Devices and networks constantly upgrade, leading to rapid technological evolution. Three-dimensional (3D) point cloud transmission plays a crucial role in aerial computing, facilitating information exchange. Various network types, including sensor networks and 5G mobile networks, support this transmission. Notably, Flying Ad hoc Networks (FANETs) utilize Unmanned Aerial Vehicles (UAVs) as nodes, operating in a 3D environment with Six Degrees of Freedom (6DoF). This study comprehensively surveys UAV networks, focusing on models for Light Detection and Ranging (LiDAR) 3D point cloud compression and transmission. Key topics covered include autonomous navigation, challenges in video streaming infrastructure, motivations for Quality of Experience (QoE) enhancement, and avenues for future research. Additionally, the paper conducts an extensive review of UAVs, encompassing current wireless technologies, applications across various sectors, routing protocols, design considerations, security measures, blockchain applications in UAVs, contributions to healthcare systems, and integration with the Internet of Things (IoT), Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL). Furthermore, the paper thoroughly discusses the core contributions of LiDAR 3D point clouds in UAV systems and their future prediction, along with mobility models. It also explores the prospects of UAV systems and presents state-of-the-art solutions.
Keywords: Autonomous vehicles; UAV systems; UAV transmission; UAV in healthcare; future of UAV system; security measures
2. Early Detection of Alzheimer's Disease Based on Laplacian Re-Decomposition and XGBoosting
Authors: Hala Ahmed, Hassan Soliman, Shaker El-Sappagh, Tamer Abuhmed, Mohammed Elmogy. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 9, pp. 2773-2795 (23 pages).
The precise diagnosis of Alzheimer's disease is critical for patient treatment, especially at the early stage, because awareness of the severity and progression risks lets patients take preventative actions before irreversible brain damage occurs. It is possible to gain a holistic view of Alzheimer's disease staging by combining multiple data modalities, known as image fusion. This paper proposes the early detection of Alzheimer's disease using different modalities of brain images. First, preprocessing was performed on the data. Then, data augmentation techniques were used to handle overfitting, and the skull was removed to improve classification. In the second phase, two fusion stages are used: pixel level (early fusion) and feature level (late fusion). We fused magnetic resonance imaging (MRI) and positron emission tomography (PET) images using early fusion (Laplacian Re-Decomposition) and late fusion (Canonical Correlation Analysis). The proposed system uses MRI and PET to take advantage of each: MRI's primary benefits are images with excellent spatial resolution and structural information for specific organs, while PET images provide functional information and the metabolism of particular tissues, which helps clinicians detect diseases and tumor progression at an early stage. Third, features are extracted from the fused images using a convolutional neural network; in the case of late fusion, the features are extracted first and then fused. Finally, the proposed system applies XGBoost to classify Alzheimer's disease. Performance was evaluated using accuracy, specificity, and sensitivity. All medical data were retrieved in 2D format at 256×256 pixels. The classifiers were optimized to achieve the final results: for the decision tree, the maximum tree depth was 2; the best number of trees for the random forest was 60; for the support vector machine, the maximum depth was 4 and the kernel gamma was 0.01. The system achieved an accuracy of 98.06%, specificity of 94.32%, and sensitivity of 97.02% with early fusion; with late fusion, accuracy was 99.22%, specificity 96.54%, and sensitivity 99.54%.
Keywords: Alzheimer's disease (AD); machine learning (ML); image fusion; Laplacian Re-Decomposition (LRD); XGBoosting
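The late-fusion path described in this abstract (per-modality CNN features fused with Canonical Correlation Analysis, then XGBoost classification) can be sketched as follows. This is a minimal illustration with synthetic stand-in features; the array sizes and hyperparameters are assumptions, not the paper's tuned configuration:

```python
# Hypothetical sketch: feature-level (late) fusion of MRI and PET descriptors
# with Canonical Correlation Analysis, followed by XGBoost classification.
# The feature arrays stand in for CNN embeddings of 256x256 scans.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 200
mri_feats = rng.normal(size=(n, 64))   # placeholder CNN features from MRI
pet_feats = rng.normal(size=(n, 64))   # placeholder CNN features from PET
labels = rng.integers(0, 2, size=n)    # 0 = control, 1 = Alzheimer's (toy labels)

# Project both modalities into a shared correlated subspace, then concatenate.
cca = CCA(n_components=16)
mri_c, pet_c = cca.fit_transform(mri_feats, pet_feats)
fused = np.hstack([mri_c, pet_c])

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.3, random_state=0)
clf = XGBClassifier(n_estimators=60, max_depth=2)   # tree counts/depths echo the abstract
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```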
3. Deep Learning Approach for Hand Gesture Recognition: Applications in Deaf Communication and Healthcare
Authors: Khursheed Aurangzeb, Khalid Javeed, Musaed Alhussein, Imad Rida, Syed Irtaza Haider, Anubha Parashar. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 127-144 (18 pages).
Hand gestures have been used as a significant mode of communication since the advent of human civilization. By facilitating human-computer interaction (HCI), hand gesture recognition (HGRoc) technology is crucial for seamless and error-free HCI. HGRoc technology is pivotal in healthcare and communication for the deaf community. Despite significant advancements in computer vision-based gesture recognition for language understanding, two considerable challenges persist in this field: (a) only limited and common gestures are considered, and (b) processing multiple channels of information across a network takes huge computational time during discriminative feature extraction. Therefore, a novel hand vision-based convolutional neural network (CNN) model, named HVCNNM, is proposed, offering several benefits: enhanced accuracy, robustness to variations, real-time performance, reduced channels, and scalability. Additionally, such models can be optimized for real-time performance, learn from large amounts of data, and scale to handle complex recognition tasks for efficient human-computer interaction. The proposed model was evaluated on two challenging datasets, namely the Massey University Dataset (MUD) and the American Sign Language (ASL) Alphabet Dataset (ASLAD), achieving scores of 99.23% and 99.00%, respectively. These results demonstrate the effectiveness of CNNs as a promising HGRoc approach. The findings suggest that the proposed model has potential roles in applications such as sign language recognition, human-computer interaction, and robotics.
Keywords: Computer vision; deep learning; gait recognition; sign language recognition; machine learning
4. BHJO: A Novel Hybrid Metaheuristic Algorithm Combining the Beluga Whale, Honey Badger, and Jellyfish Search Optimizers for Solving Engineering Design Problems
Authors: Farouq Zitouni, Saad Harous, Abdulaziz S. Almazyad, Ali Wagdy Mohamed, Guojiang Xiong, Fatima Zohra Khechiba, Khadidja Kherchouche. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 10, pp. 219-265 (47 pages).
Hybridizing metaheuristic algorithms involves synergistically combining different optimization techniques to effectively address complex and challenging optimization problems. This approach aims to leverage the strengths of multiple algorithms, enhancing solution quality, convergence speed, and robustness, thereby offering a more versatile and efficient means of solving intricate real-world optimization tasks. In this paper, we introduce a hybrid algorithm, referred to as BHJO, that amalgamates three distinct metaheuristics: the Beluga Whale Optimization (BWO), the Honey Badger Algorithm (HBA), and the Jellyfish Search (JS) optimizer. Through this fusion, the BHJO algorithm aims to leverage the strengths of each optimizer. Before this hybridization, we thoroughly examined the exploration and exploitation capabilities of the BWO, HBA, and JS metaheuristics, as well as their ability to strike a balance between exploration and exploitation. This analysis allowed us to identify the pros and cons of each algorithm, enabling us to combine them in a novel hybrid approach that capitalizes on their respective strengths for enhanced optimization performance. In addition, the BHJO algorithm incorporates Opposition-Based Learning (OBL), leveraging its diverse exploration, accelerated convergence, and improved solution quality to enhance the overall performance and effectiveness of the hybrid algorithm. Moreover, the performance of the BHJO algorithm was evaluated across a range of both unconstrained and constrained optimization problems, providing a comprehensive assessment of its efficacy and applicability in diverse problem domains. The BHJO algorithm was also subjected to a comparative analysis with several renowned algorithms, using mean and standard deviation values as evaluation metrics, to assess its performance against its counterparts and shed light on its effectiveness and reliability in solving optimization problems. Finally, the obtained numerical statistics underwent rigorous analysis using the Friedman test with Dunn's post hoc test. The resulting values revealed the BHJO algorithm's competitiveness in tackling intricate optimization problems, affirming its capability to deliver favorable outcomes in challenging scenarios.
Keywords: Global optimization; hybridization of metaheuristics; beluga whale optimization; honey badger algorithm; jellyfish search optimizer; chaotic maps; opposition-based learning
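The Opposition-Based Learning step that BHJO incorporates is simple to state in code. Below is a minimal sketch of one OBL iteration on a toy sphere objective; the bounds, population size, and objective are illustrative assumptions, not the paper's benchmark setup:

```python
# Minimal sketch of Opposition-Based Learning (OBL): for a population X in
# [lb, ub], the opposite population is lb + ub - X, and the better of each
# pair survives. The sphere function is a toy objective to minimize.
import numpy as np

def sphere(x):
    return np.sum(x**2, axis=1)

rng = np.random.default_rng(1)
lb, ub, dim, pop = -5.0, 5.0, 10, 30
X = rng.uniform(lb, ub, size=(pop, dim))   # current population
X_opp = lb + ub - X                        # opposite population

# Keep, position-wise, whichever of the pair scores better.
better = sphere(X_opp) < sphere(X)
X[better] = X_opp[better]
print("best fitness after OBL step:", sphere(X).min())
```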
5. Intelligent Solution System for Cloud Security Based on Equity Distribution: Model and Algorithms
Authors: Sarah Mustafa Eljack, Mahdi Jemmali, Mohsen Denden, Mutasim Al Sadig, Abdullah M. Algashami, Sadok Turki. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 1461-1479 (19 pages).
In the cloud environment, ensuring a high level of data security is in high demand. Data planning storage optimization is part of the whole security process in the cloud environment; it enables data security by avoiding the risk of data loss and data overlapping. The development of data flow scheduling approaches in the cloud environment that take security parameters into account is insufficient. In our work, we propose a data scheduling model for the cloud environment. The model is made up of three parts that together help dispatch user data flows to the appropriate cloud VMs. The first component is the collector agent, which must periodically collect information on the state of the network links. The second is the monitoring agent, which must analyze, classify, and make a decision on the state of the link and finally transmit this information to the scheduler. The third is the scheduler, which must consider previous information to transfer user data with fair distribution over reliable paths. It should be noted that each part of the proposed model requires the development of its own algorithms. In this article, we are interested in the development of data transfer algorithms, including fairness distribution with consideration of a stable link state. These algorithms are based on the grouping of transmitted files and an iterative method. The proposed algorithms obtain an approximate solution to the studied problem, which is NP-hard (non-deterministic polynomial-time hard). The experimental results show that the best algorithm is the half-grouped minimum excluding (HME), with a percentage of 91.3%, an average deviation of 0.042, and an execution time of 0.001 s.
Keywords: Cyber-security; cloud computing; cloud security; algorithms; heuristics
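The paper's HME heuristic is not specified in the abstract, so the sketch below illustrates only the general idea of fair, stability-aware dispatch with a generic greedy rule (largest file to the least-loaded VM behind a stable link); it is not the HME algorithm itself, and the file sizes and link verdicts are invented:

```python
# Generic illustration (not the paper's HME algorithm): greedy fair dispatch
# of user files to cloud VMs, always assigning the next-largest file to the
# currently least-loaded VM whose link the monitoring agent judged stable.
import heapq

files = [70, 55, 40, 40, 25, 10]                          # assumed file sizes
links_stable = {"vm1": True, "vm2": True, "vm3": False}   # monitoring agent verdicts

# Min-heap of (current load, vm) over VMs with stable links only.
heap = [(0, vm) for vm, ok in links_stable.items() if ok]
heapq.heapify(heap)

placement = {}
for size in sorted(files, reverse=True):   # largest first narrows the load gap
    load, vm = heapq.heappop(heap)
    placement.setdefault(vm, []).append(size)
    heapq.heappush(heap, (load + size, vm))

print(placement)   # e.g., {'vm1': [70, 40, 10], 'vm2': [55, 40, 25]}
```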
6. Identification of Software Bugs by Analyzing Natural Language-Based Requirements Using Optimized Deep Learning Features
Authors: Qazi Mazhar ul Haq, Fahim Arif, Khursheed Aurangzeb, Noor ul Ain, Javed Ali Khan, Saddaf Rubab, Muhammad Shahid Anwar. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 4379-4397 (19 pages).
Software project outcomes heavily depend on natural language requirements, which often cause diverse interpretations and issues like ambiguities and incomplete or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods. However, existing studies are not generalized and efficient when extended to other datasets. Therefore, this paper proposes a hybrid approach combining multiple techniques and explores their effectiveness on bug identification problems. The methods involve feature selection, used to reduce the dimensionality and redundancy of features and select only the relevant ones; transfer learning, used to train and test the model on different datasets to analyze how much of the learning carries over to other datasets; and an ensemble method, utilized to explore the increase in performance when multiple classifiers are combined in a model. Four National Aeronautics and Space Administration (NASA) and four Promise datasets are used in the study, showing an increase in the model's performance by providing better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers were combined. The results reveal that an amalgam of techniques such as those used in this study (feature selection, transfer learning, and ensemble methods) proves helpful in optimizing software bug prediction models and providing a high-performing, useful end model.
Keywords: Natural language processing; software bug prediction; transfer learning; ensemble learning; feature selection
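A minimal sketch of the general recipe, feature selection followed by a voting ensemble scored with AUC-ROC, is shown below. The synthetic data and classifier choices are stand-ins for the NASA/Promise datasets and the paper's exact ensemble:

```python
# Hedged sketch: select the most relevant features, then combine several
# classifiers into a soft-voting ensemble evaluated with AUC-ROC.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=40, n_informative=8, random_state=0)
X = SelectKBest(f_classif, k=10).fit_transform(X, y)   # drop redundant features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("nb", GaussianNB())],
    voting="soft")
ensemble.fit(X_tr, y_tr)
print("AUC-ROC:", roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1]))
```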
7. UNet Based on Multi-Object Segmentation and Convolution Neural Network for Object Recognition
Authors: Nouf Abdullah Almujally, Bisma Riaz Chughtai, Naif Al Mudawi, Abdulwahab Alazeb, Asaad Algarni, Hamdan A. Alzahrani, Jeongmin Park. Computers, Materials & Continua (SCIE, EI), 2024, Issue 7, pp. 1563-1580 (18 pages).
The recent advancements in vision technology have had a significant impact on our ability to identify multiple objects and understand complex scenes. Various technologies, such as augmented reality-driven scene integration, robotic navigation, autonomous driving, and guided tour systems, heavily rely on this type of scene comprehension. This paper presents a novel segmentation approach based on the UNet network model, aimed at recognizing multiple objects within an image. The methodology begins with the acquisition and preprocessing of the image, followed by segmentation using the fine-tuned UNet architecture. Afterward, an annotation tool is used to accurately label the segmented regions. Upon labeling, significant features are extracted from these segmented objects, encompassing KAZE features, energy-based edge detection, frequency-based features, and blob characteristics. For the classification stage, a convolutional neural network (CNN) is employed. This comprehensive methodology demonstrates a robust framework for achieving accurate and efficient recognition of multiple objects in images. Experimental results on complex object datasets, MSRC-v2 and PASCAL-VOC12, are documented: the PASCAL-VOC12 dataset achieved an accuracy rate of 95%, while the MSRC-v2 dataset achieved an accuracy of 89%. The evaluation performed on these diverse datasets highlights a notably impressive level of performance.
Keywords: UNet; segmentation; blob; Fourier transform; convolutional neural network
8. Big Data and Data Science: Opportunities and Challenges of iSchools (cited by 15)
Authors: Il-Yeol Song, Yongjun Zhu. Journal of Data and Information Science (CSCD), 2017, Issue 3, pp. 1-18 (18 pages).
Due to the recent explosion of big data, our society has been rapidly going through digital transformation and entering a new world with numerous eye-opening developments. These new trends impact society and future jobs, and thus student careers. At the heart of this digital transformation is data science, the discipline that makes sense of big data. With many rapidly emerging digital challenges ahead of us, this article discusses perspectives on iSchools' opportunities and suggestions in data science education. We argue that iSchools should empower their students with "information computing" disciplines, which we define as the ability to solve problems and create values, information, and knowledge using tools in application domains. As specific approaches to enforcing information computing disciplines in data science education, we suggest three foci: user-based, tool-based, and application-based. These three foci will serve to differentiate the data science education of iSchools from that of computer science or business schools. We present a layered Data Science Education Framework (DSEF) with building blocks that include the three pillars of data science (people, technology, and data), computational thinking, data-driven paradigms, and data science lifecycles. Data science courses built on top of this framework should thus be executed with user-based, tool-based, and application-based approaches. This framework will help students think about data science problems from a big-picture perspective and foster appropriate problem-solving skills in conjunction with broad perspectives on data science lifecycles. We hope the DSEF discussed in this article will help fellow iSchools in their design of new data science curricula.
Keywords: Big data; data science; information computing; the Fourth Industrial Revolution; iSchool; computational thinking; data-driven paradigm; data science lifecycle
9. Importance of Features Selection, Attributes Selection, Challenges and Future Directions for Medical Imaging Data: A Review (cited by 6)
Authors: Nazish Naheed, Muhammad Shaheen, Sajid Ali Khan, Mohammed Alawairdhi, Muhammad Attique Khan. Computer Modeling in Engineering & Sciences (SCIE, EI), 2020, Issue 10, pp. 315-344 (30 pages).
In the area of pattern recognition and machine learning, features play a key role in prediction. Famous applications of features include medical imaging and image classification, to name a few. With the exponential growth of information investments in medical data repositories and health service provision, medical institutions are collecting large volumes of data. These data repositories contain detailed information essential to support medical diagnostic decisions and to improve patient care quality. On the other hand, this growth has also made it difficult to comprehend and utilize data for various purposes, and the results of imaging data can become biased because of extraneous features present in larger datasets. Feature selection gives a chance to decrease the number of components in such large datasets: selection techniques oust the unimportant features and select a subset of components that produces superior characterization precision. The correct decision in finding a good attribute produces a precise grouping model, which enhances learning pace and forecast control. This paper presents a review of feature selection techniques and attribute selection measures for medical imaging. The review is meant to describe feature selection techniques in the medical domain with their pros and cons and to signify their application to imaging data and data mining algorithms. The review reveals the shortcomings of existing feature and attribute selection techniques for multi-sourced data. Moreover, it conveys the importance of feature selection for the correct classification of medical infections. In the end, critical analysis and future directions are provided.
Keywords: Medical imaging; imaging data; feature selection; data mining; attribute selection; medical challenges; future directions
10. Fusion of Infrared and Visible Images Using Fuzzy Based Siamese Convolutional Network (cited by 2)
Authors: Kanika Bhalla, Deepika Koundal, Surbhi Bhatia, Mohammad Khalid Imam Rahmani, Muhammad Tahir. Computers, Materials & Continua (SCIE, EI), 2022, Issue 3, pp. 5503-5518 (16 pages).
Traditional techniques based on image fusion are arduous in integrating complementary or heterogeneous infrared (IR)/visible (VS) images. Dissimilarities in various kinds of features in these images are vital to preserve in the single fused image; hence, simultaneous preservation of both aspects is a challenging task. Most of the existing methods rely on manual extraction of features, and manual, complicated design of fusion rules results in blurry artifacts in the fused image. Therefore, this study proposes a hybrid algorithm for the integration of multiple features from two heterogeneous images. Firstly, fuzzification of the two IR/VS images is done by feeding them to fuzzy sets to remove the uncertainty present in the background and the object of interest. Secondly, the images are learned by two parallel branches of a siamese convolutional neural network (CNN) to extract prominent features as well as high-frequency information, producing focus maps containing source image information. Finally, the obtained focus maps, which contain the detailed integrated information, are directly mapped with the source image via a pixel-wise strategy to produce the fused image. Different parameters have been used to evaluate the performance of the proposed image fusion, achieving 1.008 for mutual information (MI), 0.841 for entropy (EG), 0.655 for edge information (EI), 0.652 for human perception (HP), and 0.980 for image structural similarity (ISS). Experimental results show that the proposed technique attains the best qualitative and quantitative results on 78 publicly available images in comparison to existing discrete cosine transform (DCT), anisotropic diffusion & Karhunen-Loeve (ADKL), guided filter (GF), random walk (RW), principal component analysis (PCA), and convolutional neural network (CNN) methods.
Keywords: Convolutional neural network; fuzzy sets; infrared and visible image fusion; deep learning
11. Smart Bubble Sort: A Novel and Dynamic Variant of Bubble Sort Algorithm
Authors: Mohammad Khalid Imam Rahmani. Computers, Materials & Continua (SCIE, EI), 2022, Issue 6, pp. 4895-4913 (19 pages).
In the present era, a very huge volume of data is being stored in online and offline databases. Enterprise houses, research, medical and healthcare organizations, and academic institutions store data in databases, and subsequent retrievals are performed for further processing. Finding the required data from a given database within the minimum possible time is one of the key factors in achieving the best possible performance of any computer-based application. If the data is already sorted, finding or searching is comparatively faster. In real-life scenarios, the data collected from different sources may not be in sorted order, so sorting algorithms are required to arrange the data in some order in the least possible time. In this paper, I propose an intelligent approach towards designing a smart variant of the bubble sort algorithm, called Smart Bubble sort, which exhibits a dynamic footprint: the capability of adapting itself from the average-case to the best-case scenario. It is an in-place sorting algorithm, and its best-case time complexity is Ω(n), which is linear and better than bubble sort, selection sort, and merge sort. In the average-case and worst-case analyses, the complexity estimates are based on its static footprint: its worst-case complexity is O(n^2) and its average-case complexity is Θ(n^2). Smart Bubble sort is capable of adapting itself to the best-case scenario from the average-case scenario at any subsequent stage due to its dynamic and intelligent nature. Smart Bubble sort outperforms bubble sort, selection sort, and merge sort in the best-case scenario, and outperforms bubble sort in the average-case scenario.
Keywords: Sorting algorithms; smart bubble sort; footprint; dynamic footprint; time complexity; asymptotic analysis
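The adaptive behaviour described, detecting a sorted run and stopping early so the best case costs Ω(n), can be illustrated with a flagged, early-exit bubble sort. This sketch captures the dynamic-footprint idea rather than reproducing the paper's exact algorithm:

```python
# Early-exit bubble sort in the spirit of the "dynamic footprint" idea:
# the pass loop stops as soon as a full pass makes no swap, so input that is
# already (or becomes) sorted costs only one linear scan from that point on.
def smart_bubble_sort(a):
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:        # best case: a single pass, Omega(n) total
            break
    return a

print(smart_bubble_sort([3, 1, 2, 5, 4]))   # [1, 2, 3, 4, 5]
print(smart_bubble_sort([1, 2, 3, 4, 5]))   # already sorted: one pass only
```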
12. A Machine Learning-Based Attack Detection and Prevention System in Vehicular Named Data Networking (cited by 1)
Authors: Arif Hussain Magsi, Ali Ghulam, Saifullah Memon, Khalid Javeed, Musaed Alhussein, Imad Rida. Computers, Materials & Continua (SCIE, EI), 2023, Issue 11, pp. 1445-1465 (21 pages).
Named Data Networking (NDN) is gaining significant attention in Vehicular Ad-hoc Networks (VANETs) due to its in-network content caching, name-based routing, and mobility-supporting characteristics. Nevertheless, existing NDN faces three significant challenges: security, privacy, and routing. In particular, security attacks such as Content Poisoning Attacks (CPA) can jeopardize legitimate vehicles with malicious content. For instance, attacker host vehicles can serve consumers with invalid information, which has dire consequences, including road accidents. In such a situation, trust in the content-providing vehicles becomes a new challenge. On the other hand, ensuring privacy and preventing unauthorized access in vehicular NDN (VNDN) is another challenge. Moreover, NDN's pull-based content retrieval mechanism is inefficient for delivering emergency messages in VNDN. In this connection, our contribution is threefold. First, unlike existing rule-based reputation evaluation, we propose a Machine Learning (ML)-based reputation evaluation mechanism that identifies CPA attackers and legitimate nodes; based on the ML evaluation results, vehicles accept or discard served content. Secondly, we exploit a decentralized blockchain system to ensure vehicles' privacy by maintaining their information in a secure digital ledger. Finally, we improve the default routing mechanism of VNDN from pull-based to push-based content dissemination using a Publish-Subscribe (Pub-Sub) approach. We implemented and evaluated our ML-based classification model on the publicly accessible BurST Australian dataset for Misbehavior Detection (BurST-ADMA). We used five hybrid ML classifiers: Logistic Regression, Decision Tree, K-Nearest Neighbors, Random Forest, and Gaussian Naive Bayes. The results indicate that Random Forest achieved the highest average accuracy rate of 100%. Our proposed research offers an accurate solution to detect CPA in VNDN for safe, secure, and reliable vehicle communication.
Keywords: Named data networking; vehicular networks; reputation; caching; machine learning
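A hedged sketch of the reputation step follows: a Random Forest (the best performer reported) classifies vehicles as legitimate or CPA attackers, and consumers accept or discard content accordingly. The feature names and labels are illustrative assumptions, not the BurST-ADMA schema:

```python
# Sketch of ML-based reputation evaluation with a Random Forest. The three
# features and the labels below are invented stand-ins for misbehavior data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 400
X = np.column_stack([
    rng.uniform(0, 1, n),      # assumed feature: position plausibility score
    rng.uniform(0, 40, n),     # assumed feature: reported speed delta
    rng.integers(0, 2, n),     # assumed feature: signature check result
])
y = rng.integers(0, 2, n)      # 0 = legitimate, 1 = CPA attacker (toy labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# A consumer vehicle would accept or discard served content per prediction:
clf.fit(X, y)
verdict = clf.predict(X[:1])[0]
print("discard content" if verdict == 1 else "accept content")
```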
13. Edge Metric Dimension of Honeycomb and Hexagonal Networks for IoT
Authors: Sohail Abbas, Zahid Raza, Nida Siddiqui, Faheem Khan, Taegkeun Whangbo. Computers, Materials & Continua (SCIE, EI), 2022, Issue 5, pp. 2683-2695 (13 pages).
Wireless Sensor Networks (WSNs) are considered one of the fundamental technologies employed in the Internet of Things (IoT), enabling diverse applications for carrying out real-time observations. Robot navigation in such networks was the main motivation for the introduction of the concept of landmarks: a robot can identify its own location by sending signals to obtain the distances between itself and the landmarks. Considering networks to be a type of graph, this concept was redefined as the metric dimension of a graph, the minimum number of nodes needed to identify all the nodes of the graph. This idea was extended to the concept of the edge metric dimension of a graph G, which is the minimum number of nodes needed to uniquely identify each edge of the network. Regular plane networks can be easily constructed by repeating regular polygons. This design is of extreme importance as it yields high overall performance; hence, it can be used in various networking and IoT domains. The honeycomb and the hexagonal networks are two such popular mesh-derived parallel networks. In this paper, it is proved that the minimum numbers of landmarks required for the honeycomb network HC(n) and the hexagonal network HX(n) are 3 and 6, respectively. Bounds on the number of landmarks required for the hex-derived network HDN1(n) are also proposed.
Keywords: Edge metric dimension; Internet of Things; wireless sensor network; honeycomb network; hexagonal network; hex-derived networks
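The edge metric dimension can be checked by brute force on small graphs directly from the definition, where the distance from an edge e = {u, w} to a vertex v is min(d(u, v), d(w, v)). The sketch below uses a 6-cycle as a toy stand-in for the hexagon-based networks studied in the paper:

```python
# Brute-force edge metric dimension: find the smallest vertex set S such that
# every edge receives a distinct vector of distances to the vertices of S.
from itertools import combinations
import networkx as nx

def edge_metric_dimension(G):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    edges = list(G.edges())
    for k in range(1, G.number_of_nodes() + 1):
        for S in combinations(G.nodes(), k):
            codes = {tuple(min(dist[v][u], dist[v][w]) for v in S)
                     for (u, w) in edges}
            if len(codes) == len(edges):   # all edges distinguished by S
                return k, S
    return None

print(edge_metric_dimension(nx.cycle_graph(6)))   # expect dimension 2 for a cycle
```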
14. Simulation and Modelling of Water Injection for Reservoir Pressure Maintenance
Authors: Rishi Dewan, Adarsh Kumar, Mohammad Khalid Imam Rahmani, Surbhi Bhatia, Md Ezaz Ahmed. Computers, Materials & Continua (SCIE, EI), 2022, Issue 9, pp. 5761-5776 (16 pages).
Water injection has proven to be one of the most successful, efficient, and cost-effective reservoir management strategies. By re-injecting treated and filtered water into reservoirs, this approach can help maintain reservoir pressure, increase hydrocarbon output, and reduce environmental effects. The goal of this project is to create a water injection model utilizing the Eclipse reservoir simulation software to better understand water injection methods for reservoir pressure maintenance. A basic reservoir model is utilized in this investigation; for the simulation designs, the reservoir length, breadth, and thickness may be changed to different levels. The water-oil contact was located at 7000 feet, and the reservoir pressure was recorded at 3000 pounds per square inch at a depth of 6900 feet. The aquifer chosen was of the Fetkovich type and was linked to the reservoir in the j+ direction. The porosity was varied from 9% to 16%. The residual oil saturation was set to 25% and the irreducible water saturation was set to 20%. The vertical permeability was held constant at 50 md. Pressure-Volume-Temperature (PVT) data were used to estimate the gas and water characteristics.
Keywords: Eclipse; water injection; reservoir; modeling; simulation; oil saturation
15. Intelligent Identification and Resolution of Software Requirement Conflicts: Assessment and Evaluation
Authors: Maysoon Aldekhail, Marwah Almasri. Computer Systems Science & Engineering (SCIE, EI), 2022, Issue 2, pp. 469-489 (21 pages).
Considerable research has demonstrated that effective requirements engineering is critical for the success of software projects, and requirements engineering has lately been established and recognized as one of the most important aspects of software engineering. It is noteworthy that requirement consistency is a critical factor in project success, and conflicts in requirements lead to wasted cost, time, and effort. A considerable number of research studies have shown the risks and problems caused by working with requirements that conflict with other requirements. These risks include running over time or over budget, which may lead to project failure, or at the very least result in extra expended effort. Various studies have also stated that failure in managing requirement conflicts is one of the main reasons for unsuccessful software projects due to high cost and insufficient time. Many prior research studies have proposed manual techniques to detect conflicts, whereas other research recommends automated approaches based on human analysis. Moreover, there are different resolutions for conflicting requirements. Our previous work proposed a scheme for dealing with this problem using a novel intelligent method to detect and resolve conflicts: a rule-based system was proposed to identify conflicts in requirements, and a genetic algorithm (GA) was used to resolve them. The objective of this work is to assess and evaluate the implementation of this method for minimizing the number of conflicts in the requirements. The methodology comprises two stages. The first stage, detecting conflicts using the rule-based system, demonstrated a correct result with 100% accuracy. The evaluation of using the GA to resolve and reduce conflicts in the second stage also displayed a good result and achieved the desired goal as well as the main objective of the research.
Keywords: Requirement conflicts; genetic algorithm; rule-based system; software requirements; requirements engineering
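A minimal sketch of the GA resolution stage follows: a bitstring marks which requirements get revised, and fitness penalizes conflicts that survive. The conflict pairs and GA settings are illustrative assumptions, not the paper's encoding:

```python
# Toy GA for conflict resolution: a conflict remains if neither requirement
# in a conflicting pair was marked for revision; fitness minimizes remaining
# conflicts first, then the number of revisions.
import random

random.seed(0)
N_REQ = 10
conflicts = [(0, 3), (1, 4), (2, 3), (5, 7), (6, 9)]   # from the rule-based stage

def remaining_conflicts(bits):
    return sum(1 for a, b in conflicts if not bits[a] and not bits[b])

def fitness(bits):
    return remaining_conflicts(bits) * N_REQ + sum(bits)

pop = [[random.randint(0, 1) for _ in range(N_REQ)] for _ in range(30)]
for _ in range(50):                          # generations
    pop.sort(key=fitness)
    survivors = pop[:10]                     # truncation selection
    children = []
    while len(children) < 20:
        p1, p2 = random.sample(survivors, 2)
        cut = random.randrange(1, N_REQ)     # one-point crossover
        child = p1[:cut] + p2[cut:]
        child[random.randrange(N_REQ)] ^= 1  # point mutation
        children.append(child)
    pop = survivors + children

best = min(pop, key=fitness)
print(best, "conflicts left:", remaining_conflicts(best))
```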
16. Performance Evaluation of Deep Dense Layer Neural Network for Diabetes Prediction
Authors: Niharika Gupta, Baijnath Kaushik, Mohammad Khalid Imam Rahmani, Saima Anwar Lashari. Computers, Materials & Continua (SCIE, EI), 2023, Issue 7, pp. 347-366 (20 pages).
Diabetes is one of the fastest-growing human diseases worldwide and poses a significant threat to the population's longevity. Early prediction of diabetes is crucial to taking precautionary steps to avoid or delay its onset. In this study, we proposed a Deep Dense Layer Neural Network (DDLNN) for diabetes prediction using a dataset with 768 instances and nine variables. We also applied a combination of classical machine learning (ML) algorithms and ensemble learning algorithms for effective prediction of the disease. The classical ML algorithms used were Support Vector Machine (SVM), Logistic Regression (LR), Decision Tree (DT), K-Nearest Neighbor (KNN), and Naïve Bayes (NB). We also constructed ensemble models such as bagging (Random Forest) and boosting (AdaBoost and Extreme Gradient Boosting (XGBoost)) to evaluate the performance of the prediction models. The proposed DDLNN model and the ensemble learning models were trained and tested using hyperparameter tuning and K-fold cross-validation to determine the best parameters for predicting the disease. The combined ML models used majority voting to select the best outcomes among the models. The efficacy of the proposed and other models was evaluated for effective diabetes prediction. The investigation concluded that the proposed model, after hyperparameter tuning, outperformed the other learning models with an accuracy of 84.42%, a precision of 85.12%, a recall rate of 65.40%, and a specificity of 94.11%.
Keywords: Diabetes prediction; hyperparameter tuning; k-fold validation; machine learning; neural network
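A minimal sketch of a deep dense-layer network for the 768-instance dataset described above is shown below; the layer sizes, epochs, and synthetic data are assumptions, not the tuned DDLNN configuration:

```python
# Hedged sketch of a dense-layer network for a 768-sample, 8-feature
# Pima-style diabetes table (nine variables = 8 features + 1 outcome).
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(768, 8))          # placeholder for the real dataset
y = rng.integers(0, 2, size=768)       # placeholder outcome column

model = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=20, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))   # [loss, accuracy]
```

In practice the layer widths, learning rate, and epoch count would be chosen by the hyperparameter tuning and K-fold cross-validation the abstract describes.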
17. Intelligence COVID-19 Monitoring Framework Based on Deep Learning and Smart Wearable IoT Sensors
Authors: Fadhil Mukhlif, Norafida Ithnin, Roobaea Alroobaea, Sultan Algarni, Wael Y. Alghamdi, Ibrahim Hashem. Computers, Materials & Continua (SCIE, EI), 2023, Issue 10, pp. 583-599 (17 pages).
The World Health Organization (WHO) refers to the 2019 novel coronavirus epidemic as COVID-19, and it has caused an unprecedented global crisis for many nations. Nearly every country around the globe is now very concerned about the effects of the COVID-19 outbreaks, which were previously only experienced by Chinese residents. Most of these nations are now under a partial or complete state of lockdown due to the lack of resources needed to combat the COVID-19 epidemic and the concern about overstretched healthcare systems. Every time the pandemic surprises them with new values for various parameters, the connected research groups strive to understand the behavior of the pandemic to determine when it will stop. The prediction models in this research were created using deep neural networks and Decision Trees (DT). DT employs the support vector machine method, which predicts the transition from an initial dataset to actual figures using a function trained on a model. Long Short-Term Memory networks (LSTMs) are a special sort of recurrent neural network (RNN) that can pick up on long-term dependencies; the network can recall both recent and older events, resulting in accurate predictions for COVID-19. We provide a solid foundation for intelligent healthcare by devising an intelligent COVID-19 monitoring framework. We developed a data analysis methodology, including data preparation and dataset splitting, and examine two popular algorithms, LSTM and decision tree, on the official datasets. Moreover, we have analyzed the effectiveness of deep learning and machine learning methods in predicting the scale of the pandemic. Key issues and challenges are discussed for future improvement. It is expected that the results these methods provide for the health scenario will be reliable and credible.
Keywords: Healthcare framework; AI; COVID-19; machine & deep learning; LSTM; RNN; decision tree
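The LSTM forecasting setup can be sketched as a sliding-window regression over a daily case series. The synthetic series, window length, and layer sizes below are assumptions, not the paper's official-dataset configuration:

```python
# Minimal LSTM sketch: a window of the last 7 daily totals predicts the next
# day. The series is synthetic; real use would load official case counts.
import numpy as np
from tensorflow import keras

series = np.cumsum(np.random.default_rng(1).poisson(50, size=200)).astype("float32")
series /= series.max()                     # scale to [0, 1]

win = 7
X = np.stack([series[i:i + win] for i in range(len(series) - win)])[..., None]
y = series[win:]

model = keras.Sequential([
    keras.layers.Input(shape=(win, 1)),
    keras.layers.LSTM(32),                 # long-term dependency memory
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, verbose=0)
print("next-day prediction:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```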
18. IoMT-Based Smart Healthcare of Elderly People Using Deep Extreme Learning Machine
Authors: Muath Jarrah, Hussam Al Hamadi, Ahmed Abu-Khadrah, Taher M. Ghazal. Computers, Materials & Continua (SCIE, EI), 2023, Issue 7, pp. 19-33 (15 pages).
The Internet of Medical Things (IoMT) enables digital devices to gather, infer, and broadcast health data via the cloud platform. The phenomenal growth of the IoMT is fueled by many factors, including the widespread and growing availability of wearables and the ever-decreasing cost of sensor-based technology. There is a growing interest in providing solutions for elderly-people living assistance in a world where the population is rising rapidly. The IoMT is a novel reality transforming our daily lives; it can renovate modern healthcare by delivering a more personalized, protective, and collaborative approach to care. However, the current healthcare system for outdoor senior citizens faces new challenges: traditional healthcare systems are inefficient and lack user-friendly technologies and interfaces appropriate for elderly people in an outdoor environment. Hence, in this research work, an IoMT-based Smart Healthcare of Elderly People system using a Deep Extreme Learning Machine (SH-EDELM) is proposed to monitor senior citizens' healthcare. The proposed SH-EDELM technique gives better results, with an accuracy of 0.9301 and a miss rate of 0.0699.
Keywords: ICT; ML; FN; DELM; SH-EDELM
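The core extreme learning machine idea behind the named DELM, random fixed hidden weights with a closed-form output layer, fits in a few lines of NumPy. This sketches only the basic single-hidden-layer ELM, not the paper's deep SH-EDELM variant, and the sensor features and labels are invented:

```python
# Basic Extreme Learning Machine: hidden weights are random and never
# trained; only the output weights are solved via a pseudo-inverse.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))               # placeholder wearable-sensor features
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy health label

hidden = 50
W = rng.normal(size=(6, hidden))            # random, fixed input weights
b = rng.normal(size=hidden)
H = np.tanh(X @ W + b)                      # hidden-layer activations
beta = np.linalg.pinv(H) @ y                # closed-form output weights

pred = (H @ beta > 0.5).astype(float)
print("train accuracy:", (pred == y).mean())
```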
19. Classification of Gastric Lesions Using Gabor Block Local Binary Patterns
Authors: Muhammad Tahir, Farhan Riaz, Imran Usman, Mohamed Ibrahim Habib. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 9, pp. 4007-4022 (16 pages).
The identification of cancer tissues in gastroenterology imaging poses novel challenges to the computer vision community in designing generic decision support systems. This generic nature demands image descriptors that are invariant to illumination gradients, scaling, homogeneous illumination, and rotation. In this article, we devise a novel feature extraction methodology that explores the effectiveness of Gabor filters coupled with Block Local Binary Patterns in designing such descriptors. We effectively exploit the illumination-invariance properties of Block Local Binary Patterns and the inherent capability of convolutional neural networks to construct novel rotation, scale, and illumination invariant features. The invariance characteristics of the proposed Gabor Block Local Binary Patterns (GBLBP) are demonstrated using a publicly available texture dataset. We use the proposed feature extraction methodology to extract texture features from Chromoendoscopy (CH) images for the classification of cancer lesions. The proposed feature set is later used in conjunction with convolutional neural networks to classify the CH images. The proposed convolutional neural network is a shallow network comprising fewer parameters, in contrast to other state-of-the-art networks exhibiting the millions of parameters required for effective training. The obtained results reveal that the proposed GBLBP performs favorably against several other state-of-the-art methods, including both handcrafted and convolutional neural network-based features.
Keywords: Texture analysis; Gabor filters; gastroenterology imaging; convolutional neural networks; block local binary patterns
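The descriptor idea, Gabor filter responses summarized by block-wise Local Binary Pattern histograms, can be sketched as below. The filter bank, block size, and test image are illustrative assumptions, not the paper's GBLBP parameters:

```python
# Sketch of a Gabor + block LBP descriptor: filter the image with a small
# Gabor bank, then histogram uniform LBP codes block by block.
import numpy as np
from skimage.data import camera
from skimage.feature import local_binary_pattern
from skimage.filters import gabor

img = camera().astype(float)
descriptor = []
for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):   # 4 assumed orientations
    real, _ = gabor(img, frequency=0.2, theta=theta)
    lbp = local_binary_pattern(real, P=8, R=1, method="uniform")
    B = 128                                  # assumed block size in pixels
    for i in range(0, lbp.shape[0], B):      # block-wise histograms
        for j in range(0, lbp.shape[1], B):
            block = lbp[i:i + B, j:j + B]
            hist, _ = np.histogram(block, bins=10, range=(0, 10), density=True)
            descriptor.extend(hist)

print("GBLBP-style feature length:", len(descriptor))   # fed to the classifier
```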
20. Blockchain-Based Decentralized Authentication Model for IoT-Based E-Learning and Educational Environments
Authors: Osama A. Khashan, Sultan Alamri, Waleed Alomoush, Mutasem K. Alsmadi, Samer Atawneh, Usama Mir. Computers, Materials & Continua (SCIE, EI), 2023, Issue 5, pp. 3133-3158 (26 pages).
In recent times, technology has advanced significantly and is currently being integrated into educational environments to facilitate distance learning and interaction between learners. Integrating the Internet of Things (IoT) into education can facilitate the teaching and learning process and expand the context in which students learn. Nevertheless, learning data is very sensitive and must be protected when transmitted over the network or stored in data centers. Moreover, the identity and authenticity of interacting students, instructors, and staff need to be verified to mitigate the impact of attacks. However, most current security and authentication schemes are centralized, relying on trusted third-party cloud servers to facilitate continuous secure communication. In addition, most of these schemes are resource-intensive; thus, security and efficiency issues arise when heterogeneous and resource-limited IoT devices are being used. In this paper, we propose a blockchain-based architecture that accurately identifies and authenticates learners and their IoT devices in a decentralized manner and prevents the unauthorized modification of stored learning records in a distributed university network. It allows students and instructors to easily migrate to and join multiple universities within the network using their identity without the need for user re-authentication. The proposed architecture was tested and measured using a simulation tool to evaluate its performance. The simulation results demonstrate the ability of the proposed architecture to significantly increase the throughput of learning transactions (40%), reduce the communication overhead and response time (26%), improve authentication efficiency (27%), and reduce IoT power consumption (35%) compared to centralized authentication mechanisms. In addition, the security analysis proves the effectiveness of the proposed architecture in resisting various attacks and ensuring the security requirements of learning data in the university network.
Keywords: Blockchain; decentralized authentication; Internet of Things (IoT); e-learning; IoT security