Landmines continue to pose an ongoing threat in various regions around the world, with countless buried landmines affecting numerous human lives. The detonation of these landmines results in thousands of casualties reported worldwide annually. Therefore, there is a pressing need to employ diverse landmine detection techniques for their removal. One effective approach for landmine detection is UAV (Unmanned Aerial Vehicle) based Airborne Magnetometry, which identifies magnetic anomalies in the local terrestrial magnetic field. It can generate a contour plot or heat map that visually represents the magnetic field strength. Despite the effectiveness of this approach, landmine removal remains a challenging and resource-intensive task, fraught with risks. Edge computing, on the other hand, can play a crucial role in critical drone monitoring applications like landmine detection. By processing data locally on a nearby edge server, edge computing can reduce communication latency and bandwidth requirements, allowing real-time analysis of magnetic field data. It enables faster decision-making and more efficient landmine detection, potentially saving lives and minimizing the risks involved in the process. Furthermore, edge computing can provide enhanced security and privacy by keeping sensitive data close to the source, reducing the chances of data exposure during transmission. This paper introduces the MAGnetometry Imaging based Classification System (MAGICS), a fully automated UAV-based system designed for landmine and buried object detection and localization. We have developed an efficient deep learning-based strategy for automatic image classification using magnetometry dataset traces. By simulating the proposal in various network scenarios, we have successfully detected landmine signatures present in the magnetometry images. The trained models exhibit significant performance improvements, achieving a maximum mean average precision value of 97.8%.
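As a minimal illustration of the magnetometry heat-map idea mentioned above (not the MAGICS pipeline itself), the following sketch renders a contour plot of a synthetic, hypothetical anomaly on top of a uniform background field; all values and labels are placeholders.

```python
# Sketch only: synthetic magnetic-anomaly heat map of the kind described above.
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-5, 5, 200), np.linspace(-5, 5, 200))
# Hypothetical anomaly: uniform background field plus a localized Gaussian "signature" (nT)
field = 48_000 + 150 * np.exp(-((x - 1.0) ** 2 + (y + 0.5) ** 2) / 0.4)

plt.contourf(x, y, field, levels=30, cmap="viridis")
plt.colorbar(label="Magnetic field strength (nT)")
plt.xlabel("Easting (m)")
plt.ylabel("Northing (m)")
plt.title("Synthetic magnetometry heat map")
plt.show()
```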
Aim: This study aims to establish an artificial intelligence model, ThyroidNet, to accurately diagnose thyroid nodules using deep learning techniques. Methods: A novel method, ThyroidNet, is introduced and evaluated based on deep learning for the localization and classification of thyroid nodules. First, we propose the multitask TransUnet, which combines the TransUnet encoder and decoder with multitask learning. Second, we propose the DualLoss function, tailored to the thyroid nodule localization and classification tasks. It balances the learning of the localization and classification tasks to help improve the model's generalization ability. Third, we introduce strategies for augmenting the data. Finally, we submit a novel deep learning model, ThyroidNet, to accurately detect thyroid nodules. Results: ThyroidNet was evaluated on private datasets and compared with other existing methods, including U-Net and TransUnet. Experimental results show that ThyroidNet outperformed these methods in localizing and classifying thyroid nodules, improving accuracy by 3.9% and 1.5%, respectively. Conclusion: ThyroidNet significantly improves the clinical diagnosis of thyroid nodules and supports medical image analysis tasks. Future research directions include optimization of the model structure, expansion of the dataset size, reduction of computational complexity and memory requirements, and exploration of additional applications of ThyroidNet in medical image analysis.
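The exact form of the paper's DualLoss is not given in the abstract; the sketch below only illustrates the general idea of balancing a localization (segmentation) term and a classification term with a weighted sum, using a Dice-style loss and cross-entropy as assumed stand-ins.

```python
# Hedged sketch of a joint localization + classification loss; not the paper's DualLoss.
import torch
import torch.nn.functional as F

def dual_loss_sketch(seg_logits, seg_target, cls_logits, cls_target, alpha=0.5):
    """Weighted sum of a Dice-style localization loss and a cross-entropy classification loss."""
    probs = torch.sigmoid(seg_logits)
    inter = (probs * seg_target).sum()
    dice = 1 - (2 * inter + 1e-6) / (probs.sum() + seg_target.sum() + 1e-6)
    ce = F.cross_entropy(cls_logits, cls_target)
    return alpha * dice + (1 - alpha) * ce

# Toy shapes: one 64x64 nodule mask and a 3-class nodule label
seg_logits = torch.randn(1, 1, 64, 64)
seg_target = torch.randint(0, 2, (1, 1, 64, 64)).float()
cls_logits = torch.randn(1, 3)
cls_target = torch.tensor([1])
print(dual_loss_sketch(seg_logits, seg_target, cls_logits, cls_target))
```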
The concept of smart houses has grown in prominence in recent years. Major challenges linked to smart homes are identification theft, data safety, automated decision-making for IoT-based devices, and the security of the device itself. Current home automation systems try to address these issues, but there is still an urgent need for a dependable and secure smart home solution that includes automatic decision-making systems and methodical features. This paper proposes a smart home system based on ensemble learning of random forest (RF) and convolutional neural networks (CNN) for programmed decision-making tasks, such as categorizing gadgets as “OFF” or “ON” based on their normal routine in homes. We have integrated emerging blockchain technology to provide secure, decentralized, and trustworthy authentication and recognition of IoT devices. Our system consists of a 5 V relay circuit, various sensors, and a Raspberry Pi server and database for managing devices. We have also developed an Android app that communicates with the server interface through an HTTP web interface and an Apache server. The feasibility and efficacy of the proposed smart home automation system have been evaluated in both laboratory and real-time settings. It is essential to use inexpensive, scalable, and readily available components and technologies in smart home automation systems. Additionally, we must incorporate a comprehensive security- and privacy-centric design that emphasizes risk assessments, such as cyberattacks, hardware security, and other cyber threats. The trial results support the proposed system and demonstrate its potential for use in everyday life.
Spatial heterogeneity refers to the variation or differences in characteristics or features across different locations or areas in space. Spatial data refers to information that explicitly or indirectly belongs to a particular geographic region or location, also known as geo-spatial data or geographic information. Focusing on spatial heterogeneity, we present a hybrid machine learning model combining two competitive algorithms: the Random Forest Regressor and CNN. The model is fine-tuned using cross-validation for hyper-parameter adjustment and performance evaluation, ensuring robustness and generalization. Our approach integrates global Moran's I for examining global autocorrelation and local Moran's I for assessing local spatial autocorrelation in the residuals. To validate our approach, we implemented the hybrid model on a real-world dataset and compared its performance with that of traditional machine learning models. Results indicate superior performance with an R-squared of 0.90, outperforming RF (0.84) and CNN (0.74). This study contributes to a detailed understanding of spatial variations in data, considering the geographical information (longitude and latitude) present in the dataset. Our results, also assessed using the Root Mean Squared Error (RMSE), indicate that the hybrid model yielded lower errors, showing a deviation of 53.65% from the RF model and 63.24% from the CNN model. Additionally, the global Moran's I index was observed to be 0.10. This study underscores that the hybrid model was able to correctly predict house prices both in clusters and in dispersed areas.
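For readers unfamiliar with the autocorrelation statistic used above, the following sketch computes global Moran's I directly from its definition, with an assumed binary k-nearest-neighbour weight matrix built from (longitude, latitude) coordinates; it is not the paper's implementation, and the data are synthetic stand-ins.

```python
# Sketch: global Moran's I from its definition with a kNN binary spatial weight matrix.
import numpy as np

def global_morans_i(values, coords, k=5):
    values = np.asarray(values, dtype=float)
    coords = np.asarray(coords, dtype=float)
    n = len(values)
    # Binary k-nearest-neighbour spatial weights
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    w = np.zeros((n, n))
    for i in range(n):
        w[i, np.argsort(d[i])[:k]] = 1.0
    z = values - values.mean()
    # I = (n / sum of weights) * (z' W z) / (z' z)
    return (n / w.sum()) * (z @ w @ z) / (z @ z)

# Toy usage: residuals of a house-price model at random coordinates
rng = np.random.default_rng(0)
print(global_morans_i(rng.normal(size=100), rng.uniform(size=(100, 2))))
```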
The development of human-robot interaction has been continuously increasing over the last decades. Through this development, it has become simpler and safer to interact, using a remotely controlled telepresence robot, in insecure and hazardous environments. The stability of the audio-video communication connection or data transmission is already well handled by fast-growing technologies such as 5G and 6G. However, the design of the physical parameters, e.g., maneuverability, controllability, and stability, still needs attention. Therefore, this paper aims to present a systematic, controlled design and implementation of a telepresence mobile robot. The primary focus of this paper is to perform the computational analysis and experimental implementation design with sophisticated position control, which autonomously controls the robot's position and speed when reaching an obstacle. A system model and a position controller design are developed with root locus points. The robot design results are verified experimentally, showing the robot's agreement and control in the desired position. The robot was tested by considering various parameters: driving straight ahead, turning right, self-localization, and following a complex path. The results prove that the proposed approach is flexible and adaptable and provides a better alternative. The experimental results show that the proposed method significantly minimizes obstacle hits.
The main idea behind the present research is to design a state-feedback controller for an underactuated nonlinear rotary inverted pendulum module by employing the linear quadratic regulator (LQR) technique using local approximation. The LQR is an excellent method for developing a controller for nonlinear systems. It provides optimal feedback to make the closed-loop system robust and stable, rejecting external disturbances. A model-based optimal controller for a nonlinear system such as a rotary inverted pendulum has not previously been designed and implemented using the Newton-Euler and Lagrange methods with local approximation. Therefore, implementing LQR on an underactuated nonlinear system was vital to designing a stable controller. A mathematical model has been developed for the controller design by utilizing the Newton-Euler and Lagrange methods. The nonlinear model has been linearized around an equilibrium point. The linear and nonlinear models have been compared to find the range in which their behaviour is similar. The MATLAB LQR function and the system dynamics have been used to estimate the controller parameters. For the performance evaluation of the designed controller, Simulink has been used. The linear and nonlinear models have been simulated along with the designed controller. Simulations have been performed for the designed controller over the linear and nonlinear systems under different conditions by varying system variables. The results show that the system is stable and robust enough to act against external disturbances. The controller maintains the rotary inverted pendulum in an upright position and rejects disruptions, such as falling under gravitational force or any external disturbance, by adjusting the rotation of the horizontal link in both linear and nonlinear environments within a specific range. The controller has been practically designed and implemented. The results clearly show that the controller is robust enough to reject disturbances within milliseconds and keeps the pendulum arm deflection angle at zero degrees.
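A minimal sketch of the LQR design step described above, assuming a generic linearized state-space model: the gain is obtained from the continuous-time algebraic Riccati equation via SciPy rather than the MATLAB lqr function, and the A, B, Q, R values below are placeholders, not the paper's identified pendulum parameters.

```python
# Sketch: LQR state-feedback gain for a linearized (placeholder) pendulum-like model.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0, 1, 0, 0],
              [0, 0, 49.3, 0],
              [0, 0, 0, 1],
              [0, 0, 79.4, 0]])           # hypothetical linearized dynamics
B = np.array([[0], [49.7], [0], [49.1]])  # hypothetical input matrix
Q = np.diag([10, 1, 100, 1])              # state weighting
R = np.array([[1.0]])                     # input weighting

P = solve_continuous_are(A, B, Q, R)      # solve the Riccati equation
K = np.linalg.solve(R, B.T @ P)           # optimal gain, control law u = -K x
print("LQR gain K:", K)
```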
A document layout can be more informative than merely a document's visual and structural appearance. Thus, document layout analysis (DLA) is considered a necessary prerequisite for advanced processing and detailed document image analysis to be further used in several applications and for different objectives. This research extends the traditional approaches of DLA and introduces the concept of semantic document layout analysis (SDLA) by proposing a novel framework for semantic layout analysis and characterization of handwritten manuscripts. The proposed SDLA approach enables the derivation of implicit information and semantic characteristics, which can be effectively utilized in dozens of practical applications for various purposes, in a way bridging the semantic gap and providing more understandable high-level document image analysis and more invariant characterization via absolute and relative labeling. This approach is validated and evaluated on a large dataset of Arabic handwritten manuscripts comprising complex layouts. The experimental work shows promising results in terms of accurate and effective semantic characteristic-based clustering and retrieval of handwritten manuscripts. It also indicates the expected efficacy of using the capabilities of the proposed approach in automating and facilitating many functional, real-life tasks such as effort estimation and pricing of transcription or typing of such complex manuscripts.
The utilization of digital picture search and retrieval has grown substantially in numerous fields for different purposes during the last decade, owing to the continuing advances in image processing and computer vision approaches. In multiple real-life applications, for example social media, content-based face picture retrieval is a well-invested technique for large-scale databases, where there is a significant necessity for reliable retrieval capabilities enabling quick search in a vast number of pictures. Humans widely employ faces for recognizing and identifying people. Thus, face recognition through formal or personal pictures is increasingly used in various real-life applications, such as helping crime investigators retrieve matching images from face image databases to identify victims and criminals. However, such face image retrieval becomes more challenging in large-scale databases, where traditional vision-based face analysis requires ample additional storage space, beyond that already occupied by the raw face images, to store the extracted lengthy feature vectors, and takes much longer to process and match thousands of face images. This work mainly contributes to enhancing face image retrieval performance in large-scale databases using hash codes inferred by locality-sensitive hashing (LSH) for facial hard and soft biometrics, as Hard BioHash and Soft BioHash respectively, to be used as a search input for retrieving the top-k matching faces. Moreover, we propose the multi-biometric score-level fusion of both face hard and soft BioHashes (Hard-Soft BioHash Fusion) for further augmented face image retrieval. The experimental outcomes, obtained on the Labeled Faces in the Wild (LFW) dataset and the related attributes dataset (LFW-attributes), demonstrate that the suggested fusion approach (Hard-Soft BioHash Fusion) significantly improved the retrieval performance compared to solely using Hard BioHash or Soft BioHash in isolation, where the suggested method provides an augmented accuracy of 87% when executed on 1000 specimens and 77% on 5743 samples. These results remarkably outperform the Hard BioHash method (by 50% on the 1000 samples and 30% on the 5743 samples) and the Soft BioHash method (by 78% on the 1000 samples and 63% on the 5743 samples).
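The following sketch illustrates only the generic LSH idea behind BioHash-style codes: random-hyperplane hashing turns real-valued feature vectors into compact binary codes, and retrieval ranks gallery codes by Hamming distance. The actual hard/soft biometric features and the score-level fusion rule of the paper are not reproduced; all data below are synthetic stand-ins.

```python
# Sketch: random-hyperplane LSH codes and top-k retrieval by Hamming distance.
import numpy as np

rng = np.random.default_rng(42)
dim, n_bits, n_gallery = 128, 64, 1000

planes = rng.normal(size=(n_bits, dim))      # random hyperplanes (the LSH "key")
gallery = rng.normal(size=(n_gallery, dim))  # stand-in face feature vectors
query = rng.normal(size=dim)

def biohash(x):
    # Sign-of-projection bits: one bit per hyperplane
    return (planes @ np.atleast_2d(x).T > 0).T.astype(np.uint8)

gallery_codes = biohash(gallery)             # shape (1000, 64)
query_code = biohash(query)                  # shape (1, 64)

hamming = (gallery_codes != query_code).sum(axis=1)
top_k = np.argsort(hamming)[:5]              # indices of the 5 closest faces
print(top_k, hamming[top_k])
```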
One of the most recent developments in the field of graph theory is the analysis of networks such as Butterfly networks, Benes networks, Interconnection networks, and David-derived networks using graph theoretic parameters. The topological indices (TIs) have been widely used as graph invariants among various graph theoretic tools. Quantitative structure-activity relationships (QSAR) and quantitative structure-property relationships (QSPR) need the use of TIs. Different structure-based parameters, such as the degree and distance of vertices in graphs, contribute to the determination of the values of TIs. Among other recently introduced novelties, the classes of ev-degree and ve-degree dependent TIs have been extensively explored for various graph families. The current research focuses on the development of formulae for different ev-degree and ve-degree dependent TIs for the s-dimensional Benes network and certain networks derived from it. In the end, a comparison between the values of the TIs for these networks has been presented through graphical tools.
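As a small illustration, and assuming the commonly used conventions for these invariants (the ev-degree of an edge uv is the number of vertices in N(u) ∪ N(v), and the ve-degree of a vertex v is the number of edges incident to at least one vertex of the closed neighbourhood N[v]), the sketch below evaluates them with NetworkX on a small stand-in graph rather than on the Benes network studied in the paper.

```python
# Sketch: ev-degree, ve-degree, and one ve-degree based index on a placeholder graph.
import networkx as nx

G = nx.hypercube_graph(3)  # stand-in graph, not the s-dimensional Benes network

def ev_degree(G, u, v):
    # Number of vertices in the union of the neighbourhoods of the edge's endpoints
    return len(set(G[u]) | set(G[v]))

def ve_degree(G, v):
    # Number of distinct edges incident to the closed neighbourhood N[v]
    closed = set(G[v]) | {v}
    return len({frozenset(e) for e in G.edges(closed)})

first_ve_zagreb = sum(ve_degree(G, v) ** 2 for v in G)  # one ve-degree based index
print({e: ev_degree(G, *e) for e in list(G.edges())[:3]})
print("sum of squared ve-degrees:", first_ve_zagreb)
```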
Increasing renewable energy targets globally has raised the requirement for the efficient and profitable operation of solar photovoltaic (PV) systems. In light of this requirement, this paper provides a path for evaluating the operating condition and improving the power output of the PV system in a grid-integrated environment. To achieve this, different types of faults in grid-connected PV systems (GCPVs) and their impact on the energy loss associated with the electrical network are analyzed. A data-driven approach using neural networks (NNs) is proposed to achieve root cause analysis and localize the fault to the component level in the system. The localized fault condition is combined with a parallel operation of adaptive neuro-fuzzy inference units (ANFIUs) to develop a power mismatch-based control unit (PMCU) for improving the power output of the GCPV. To develop the proposed framework, a 10-kW single-phase GCPV is simulated for training the NN-based anomaly detection approach with 14 deviation signals. Further, the developed algorithm is combined with the PMCU implemented with the experimental setup of the GCPV. The results identified 98.2% training accuracy and a prediction speed of 43,000 observations/sec for the trained classifier, and improved power output with reduced voltage and current harmonics for the grid-connected PV operation.
The Data Encryption Standard (DES) is a symmetric key cryptosystem that is applied in different cryptosystems of recent times. However, researchers found defects in the main assembly of the DES and declared it insecure against linear and differential cryptanalysis. In this paper, we have studied these faults and made improvements to the internal structure, obtaining a new, improved DES algorithm. The improvement is made in the substitution step, which is the only nonlinear component of the algorithm. This alteration provided us with great outcomes and increased the strength of DES. Accordingly, a novel 6×6 good-quality S-box construction scheme has been employed in the substitution phase of the DES. The construction involves the Galois field method and generates robust S-boxes that are used to secure the scheme against linear and differential attacks. Furthermore, the key space of the improved DES has been enhanced against brute force attacks. The outcomes of different performance analyses depict the strength of our proposed substitution boxes, which also guarantees the strength of the overall DES.
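The paper's specific 6×6 S-box construction is not detailed in the abstract; the sketch below only shows the generic Galois-field route it alludes to, mapping each 6-bit value to its multiplicative inverse in GF(2^6) modulo an assumed irreducible polynomial x^6 + x + 1.

```python
# Sketch: a candidate 6x6 S-box from multiplicative inversion in GF(2^6).
IRRED = 0b1000011  # x^6 + x + 1, irreducible over GF(2) (an assumption, chosen for illustration)

def gf_mul(a, b, mod=IRRED, width=6):
    """Carry-less multiplication in GF(2^width) with reduction modulo `mod`."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << width):   # reduce whenever the degree reaches `width`
            a ^= mod
    return r

def gf_inv(a):
    # Brute-force search is fine for a 64-element field
    return next(x for x in range(1, 64) if gf_mul(a, x) == 1) if a else 0

sbox = [gf_inv(x) for x in range(64)]  # 6-bit in, 6-bit out; 0 maps to 0 by convention
print(sbox[:8])
```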
Rapid technological advancement has enabled modern healthcare systems to provide more sophisticated and real-time services on the Internet of Medical Things (IoMT). The existing cloud-based, centralized IoMT architectures are vulnerable to multiple security and privacy problems. The blockchain-enabled IoMT is an emerging paradigm that can ensure the security and trustworthiness of medical data sharing in IoMT networks. This article presents a private and easily expandable blockchain-based framework for the IoMT. The proposed framework contains several participants, including the private blockchain, hospital management systems, cloud service providers, doctors, and patients. Data security is ensured by incorporating an attribute-based encryption scheme. Furthermore, an IoT-friendly consensus algorithm is deployed to ensure fast block validation and high scalability in the IoMT network. The proposed framework can perform multiple healthcare-related services in a secure and trustworthy manner. The performance of blockchain read/write operations is evaluated in terms of transaction throughput and latency. Experimental outcomes indicate that the proposed scheme achieved an average throughput of 857 TPS and 151 TPS for read and write operations, respectively. The average latency is 61 ms and 16 ms for read and write operations, respectively.
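As a rough illustration of the throughput/latency metrics reported above, the sketch below appends records to a toy hash chain and measures transactions per second and per-operation latency; it is not the paper's IoMT framework, consensus algorithm, or attribute-based encryption scheme.

```python
# Sketch: toy append-only hash chain with a TPS and latency measurement.
import hashlib, json, time

chain = [{"index": 0, "data": "genesis", "prev": "0" * 64}]

def add_block(data):
    # Link the new block to the hash of the previous block
    prev = hashlib.sha256(json.dumps(chain[-1], sort_keys=True).encode()).hexdigest()
    chain.append({"index": len(chain), "data": data, "prev": prev})

n = 5000
start = time.perf_counter()
for i in range(n):
    add_block({"patient": i, "reading": 98.6})  # stand-in medical record
elapsed = time.perf_counter() - start
print(f"write throughput: {n / elapsed:.0f} TPS, avg latency: {1000 * elapsed / n:.3f} ms")
```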
Smart environments offer various services, including smart cities, e-healthcare, transportation, and wearable devices, generating multiple traffic flows with different Quality of Service (QoS) demands. Achieving the desired QoS with security in this heterogeneous environment can be challenging due to traffic flows and device management, unoptimized routing with resource awareness, and security threats. Software Defined Networks (SDN) can help manage these devices through centralized SDN controllers and address these challenges. Various schemes have been proposed to integrate SDN with emerging technologies for better resource utilization and security. Software Defined Wireless Body Area Networks (SDWBAN) and Software Defined Internet of Things (SDIoT) are the recently introduced frameworks to overcome these challenges. This study surveys the existing SDWBAN and SDIoT routing and security challenges. The paper discusses each solution in detail and analyses its weaknesses. It covers SDWBAN frameworks for efficient management of WBAN networks, management of IoT devices, and proposed security mechanisms for IoT and data security in WBAN. The survey provides insights into the state of the art in SDWBAN and SDIoT routing with resource awareness and security threats. Finally, this study highlights potential areas for future research.
Complex networks on the Internet of Things (IoT) and brain communication are the main focus of this paper. The benefits of complex networks may be applicable to future research directions in 6G, photonic, IoT, brain, and related communication technologies. Heavy data traffic, huge capacity, and a minimal level of dynamic latency are some of the future requirements of 5G+ and 6G communication systems. In emerging communication technologies, such as 5G+/6G-based photonic sensor communication, complex networks play an important role in meeting the future requirements of IoT and brain communication. In this paper, the state of the complex system, considered as a complex network (the connections between brain cells, neurons, etc.), needs to be measured in order to analyze the functions of the neurons during brain communication. Here, we measure the state of the complex system through observability. Using 5G+/6G-based photonic sensor nodes, finding observability, influenced by the concept of contraction, provides the stability of neurons. When IoT or other sensors fail to measure the state of the connectivity in 5G+ or 6G communication due to external noise and attacks, some information about the sensor nodes during the communication will be lost. Similarly, neurons, considered as neuron sensors in the brain under the complex-network concept, lose communication and connections. Therefore, affected sensor nodes within a contraction must be compensated for in order to maintain stability conditions. In this compensation, the loss of observability depends on the contraction size, which is a key factor for employing a complex network. To analyze observability recovery, we can use a contraction detection algorithm with complex network properties. Our survey shows that the contraction size will allow us to improve the performance of brain communication, the stability of neurons, etc., through the clustering coefficient considered in the contraction detection algorithm. In addition, we discuss the scalability of IoT communication using 5G+/6G-based photonic technology.
The electrocardiogram (ECG) signal is a measure of the heart's electrical activity. Recently, ECG detection and classification have benefited from the use of computer-aided systems by cardiologists. The goal of this paper is to improve the accuracy of ECG classification by combining the Dipper Throated Optimization (DTO) and the Differential Evolution Algorithm (DEA) into a unified algorithm to optimize the hyperparameters of a neural network (NN) for boosting the ECG classification accuracy. In addition, we propose a new feature selection method for selecting the significant features that can improve the overall performance. To prove the superiority of the proposed approach, several experiments were conducted to compare the results achieved by the proposed approach and other competing approaches. Moreover, statistical analysis is performed to study the significance and stability of the proposed approach using Wilcoxon and ANOVA tests. Experimental results confirmed the superiority and effectiveness of the proposed approach. The classification accuracy achieved by the proposed approach is 99.98%.
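As a hedged illustration of the hyperparameter-optimization idea only, the sketch below uses plain differential evolution via SciPy (not the paper's combined DTO + DEA algorithm or its ECG feature-selection step) to tune two neural-network hyperparameters on a synthetic stand-in dataset.

```python
# Sketch: evolutionary hyperparameter search for an NN classifier on synthetic data.
from scipy.optimize import differential_evolution
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=20, random_state=0)  # stand-in for ECG features

def objective(params):
    hidden, log_alpha = int(params[0]), params[1]
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), alpha=10.0 ** log_alpha,
                        max_iter=300, random_state=0)
    return -cross_val_score(clf, X, y, cv=3).mean()  # minimize negative CV accuracy

result = differential_evolution(objective, bounds=[(5, 100), (-5, -1)],
                                maxiter=3, popsize=8, seed=0)
print("best (hidden units, log10 alpha):", result.x, "CV accuracy:", -result.fun)
```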
Addressing classification and prediction challenges, tree ensemble models have gained significant importance. Boosting ensemble techniques are commonly employed for forecasting Type-II diabetes mellitus. Light Gradient Boosting Machine (LightGBM) is a widely used algorithm known for its leaf growth strategy, loss reduction, and enhanced training precision. However, LightGBM is prone to overfitting. In contrast, CatBoost utilizes balanced base predictors known as decision tables, which mitigate overfitting risks and significantly improve testing time efficiency. CatBoost's algorithm structure counteracts gradient boosting biases and incorporates an overfitting detector to stop training early. This study focuses on developing a hybrid model that combines LightGBM and CatBoost to minimize overfitting and improve accuracy by reducing variance. To find the best hyperparameters for the underlying learners, the Bayesian hyperparameter optimization method is used. By fine-tuning the regularization parameter values, the hybrid model effectively reduces variance (overfitting). Comparative evaluation against the LightGBM, CatBoost, XGBoost, Decision Tree, Random Forest, AdaBoost, and GBM algorithms demonstrates that the hybrid model has the best F1-score (99.37%), recall (99.25%), and accuracy (99.37%). Consequently, the proposed framework holds promise for early diabetes prediction in the healthcare industry and exhibits potential applicability to other datasets sharing similarities with diabetes.
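A minimal sketch of the hybrid idea, assuming a simple probability-averaging (soft-voting) combination of LightGBM and CatBoost on a synthetic stand-in dataset; the paper's Bayesian hyperparameter optimization and exact regularization settings are omitted.

```python
# Sketch: soft-voting hybrid of LightGBM and CatBoost on synthetic data.
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

lgbm = LGBMClassifier(n_estimators=200, reg_lambda=1.0, random_state=0)
cat = CatBoostClassifier(iterations=200, l2_leaf_reg=3.0, verbose=0, random_state=0)
lgbm.fit(X_tr, y_tr)
cat.fit(X_tr, y_tr)

# Average the predicted class probabilities of the two learners
proba = (lgbm.predict_proba(X_te) + cat.predict_proba(X_te)) / 2
pred = proba.argmax(axis=1)
print("hybrid F1:", f1_score(y_te, pred))
```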
Breast Arterial Calcification (BAC) is a mammographic finding distinct from cancer and is commonly observed in elderly women. Identifying BAC manually can therefore be expensive and inaccurate. Recently, Deep Learning (DL) methods have been introduced for automatic BAC detection and quantification with increased accuracy. Previously, classification with deep learning had reached higher efficiency, but designing the structure of DL models proved to be an extremely challenging task due to overfitting, and such models are not able to capture the patterns and irregularities presented in the images. To solve the overfitting problem, an optimal feature set has been formed by the Enhanced Wolf Pack Algorithm (EWPA), and the irregularities are identified by Dense-kUNet segmentation. In this paper, Dense-kUNet, which integrates DenseUNet and kU-Net, is introduced for segmentation, together with optimal features for classification (severe, mild, light). Longer bound links exist among adjacent modules, allowing relatively coarse data to be sent to the following component and assisting the system in finding higher-level qualities. The major contribution of the work is the design of the best features, selected by the Enhanced Wolf Pack Algorithm (EWPA), and Modified Support Vector Machine (MSVM)-based learning for classification. k-Dense-UNet is introduced, which combines the procedures of Dense-UNet and kU-Net for image segmentation. Longer bound associations occur among nearby sections, allowing relatively granular data to be sent to the next subsystem and benefiting the system in recognizing smaller characteristics. The proposed techniques and their performance are tested using several types of analysis on 826 digitized mammography images. The proposed method achieved the highest precision, recall, F-measure, and accuracy of 84.4333%, 84.5333%, 84.4833%, and 86.8667%, respectively, when compared to other methods on the Digital Database for Screening Mammography (DDSM).
The Message Passing Interface (MPI) is a widely accepted standard for parallel computing on distributed memory systems. However, MPI implementations can contain defects that impact the reliability and performance of parallel applications. Detecting and correcting these defects is crucial, yet there is a lack of published models specifically designed for correcting MPI defects. To address this, we propose a model for detecting and correcting MPI defects (DC_MPI), which aims to detect and correct defects in various types of MPI communication, including blocking point-to-point (BPTP), nonblocking point-to-point (NBPTP), and collective communication (CC). The defects addressed by the DC_MPI model include illegal MPI calls, deadlocks (DL), race conditions (RC), and message mismatches (MM). To assess the effectiveness of the DC_MPI model, we performed experiments on a dataset consisting of 40 MPI codes. The results indicate that the model achieved a detection rate of 37 out of 40 codes, resulting in an overall detection accuracy of 92.5%. Additionally, the execution duration of the DC_MPI model ranged from 0.81 to 1.36 s. These findings show that the DC_MPI model is useful in detecting and correcting defects in MPI implementations, thereby enhancing the reliability and performance of parallel applications. The DC_MPI model fills an important research gap and provides a valuable tool for improving the quality of MPI-based parallel computing systems.
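To make one of the defect classes concrete, the sketch below shows a classic blocking point-to-point deadlock pattern and a standard fix using mpi4py; it is purely illustrative and is not part of the DC_MPI model.

```python
# Sketch: a blocking point-to-point deadlock pattern and its fix (run with: mpiexec -n 2 python demo.py)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
peer = 1 - rank                         # assumes exactly two ranks
send_buf = np.full(1 << 20, rank, dtype=np.float64)
recv_buf = np.empty(1 << 20, dtype=np.float64)

# Defective pattern: both ranks call a blocking Send first; for large messages neither
# send can complete until the matching Recv is posted, so the program can deadlock.
#   comm.Send(send_buf, dest=peer)
#   comm.Recv(recv_buf, source=peer)

# Corrected pattern: the combined Sendrecv lets MPI pair the two operations safely.
comm.Sendrecv(sendbuf=send_buf, dest=peer, recvbuf=recv_buf, source=peer)
print(f"rank {rank} received data from rank {peer}")
```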
Object tracking is one of the major tasks for mobile robots in many real-world applications. Also, artificial intelligence and automatic control techniques play an important role in enhancing the performance of mobile robot navigation. In contrast to previous simulation studies, this paper presents a new intelligent mobile robot for accomplishing multiple tasks by tracking red-green-blue (RGB) colored objects in a real experimental field. Moreover, a practical smart controller is developed based on adaptive fuzzy logic and custom proportional-integral-derivative (PID) schemes to achieve accurate tracking results, considering robot command delay and tolerance errors. The design of the developed controllers embodies motion rules that mimic the knowledge of experienced operators. Twelve scenarios of three colored object combinations have been successfully tested and evaluated using the developed controlled image-based robot tracker. Classical PID control failed to handle some tracking scenarios in this study. The proposed adaptive fuzzy PID control achieved the most accurate results, with a minimum average final error of 13.8 cm in reaching the colored targets, while our designed custom PID control is efficient in saving both average time and traveling distance, at 6.6 s and 14.3 cm, respectively. These promising results demonstrate the feasibility of applying our developed image-based robotic system in a colored object-tracking environment to reduce human workloads.
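For context, the sketch below implements a plain discrete PID position controller of the kind the custom and adaptive fuzzy controllers above build on; the fuzzy gain-adaptation rules themselves are not shown, and the toy plant model and gains are assumptions.

```python
# Sketch: discrete PID loop driving a distance-to-target toward zero.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.8, ki=0.05, kd=0.1, dt=0.1)
distance = 100.0                        # hypothetical distance to the colored target (cm)
for _ in range(50):
    speed = pid.update(setpoint=0.0, measurement=distance)
    distance += speed * pid.dt          # crude plant model: speed directly changes distance
print(f"final distance: {distance:.1f} cm")
```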
The rapid growth of smart technologies and services has intensified the challenges surrounding identity authentication techniques. Biometric credentials are increasingly being used for verification due to their advantages over traditional methods, making it crucial to safeguard the privacy of people's biometric data in various scenarios. This paper offers an in-depth exploration of privacy-preserving techniques and potential threats to biometric systems. It proposes a novel and thorough taxonomy survey of privacy-preserving techniques, as well as a systematic framework for categorizing the field's existing literature. We review the state-of-the-art methods and address their advantages and limitations in the context of various biometric modalities, such as face, fingerprint, and eye detection. The survey encompasses various categories of privacy-preserving mechanisms and examines the trade-offs between security, privacy, and recognition performance, as well as open issues and future research directions. It aims to provide researchers, professionals, and decision-makers with a thorough understanding of the existing privacy-preserving solutions in biometric recognition systems and serves as a foundation for the development of more secure and privacy-preserving biometric technologies.
基金funded by Institutional Fund Projects under Grant No(IFPNC-001-611-2020).
文摘Landmines continue to pose an ongoing threat in various regions around the world,with countless buried landmines affecting numerous human lives.The detonation of these landmines results in thousands of casualties reported worldwide annually.Therefore,there is a pressing need to employ diverse landmine detection techniques for their removal.One effective approach for landmine detection is UAV(Unmanned Aerial Vehicle)based AirborneMagnetometry,which identifies magnetic anomalies in the local terrestrial magnetic field.It can generate a contour plot or heat map that visually represents the magnetic field strength.Despite the effectiveness of this approach,landmine removal remains a challenging and resource-intensive task,fraughtwith risks.Edge computing,on the other hand,can play a crucial role in critical drone monitoring applications like landmine detection.By processing data locally on a nearby edge server,edge computing can reduce communication latency and bandwidth requirements,allowing real-time analysis of magnetic field data.It enables faster decision-making and more efficient landmine detection,potentially saving lives and minimizing the risks involved in the process.Furthermore,edge computing can provide enhanced security and privacy by keeping sensitive data close to the source,reducing the chances of data exposure during transmission.This paper introduces the MAGnetometry Imaging based Classification System(MAGICS),a fully automated UAV-based system designed for landmine and buried object detection and localization.We have developed an efficient deep learning-based strategy for automatic image classification using magnetometry dataset traces.By simulating the proposal in various network scenarios,we have successfully detected landmine signatures present in themagnetometry images.The trained models exhibit significant performance improvements,achieving a maximum mean average precision value of 97.8%.
基金supported by MRC,UK (MC_PC_17171)Royal Society,UK (RP202G0230)+8 种基金BHF,UK (AA/18/3/34220)Hope Foundation for Cancer Research,UK (RM60G0680)GCRF,UK (P202PF11)Sino-UK Industrial Fund,UK (RP202G0289)LIAS,UK (P202ED10,P202RE969)Data Science Enhancement Fund,UK (P202RE237)Fight for Sight,UK (24NN201)Sino-UK Education Fund,UK (OP202006)BBSRC,UK (RM32G0178B8).
文摘Aim:This study aims to establish an artificial intelligence model,ThyroidNet,to diagnose thyroid nodules using deep learning techniques accurately.Methods:A novel method,ThyroidNet,is introduced and evaluated based on deep learning for the localization and classification of thyroid nodules.First,we propose the multitask TransUnet,which combines the TransUnet encoder and decoder with multitask learning.Second,we propose the DualLoss function,tailored to the thyroid nodule localization and classification tasks.It balances the learning of the localization and classification tasks to help improve the model’s generalization ability.Third,we introduce strategies for augmenting the data.Finally,we submit a novel deep learning model,ThyroidNet,to accurately detect thyroid nodules.Results:ThyroidNet was evaluated on private datasets and was comparable to other existing methods,including U-Net and TransUnet.Experimental results show that ThyroidNet outperformed these methods in localizing and classifying thyroid nodules.It achieved improved accuracy of 3.9%and 1.5%,respectively.Conclusion:ThyroidNet significantly improves the clinical diagnosis of thyroid nodules and supports medical image analysis tasks.Future research directions include optimization of the model structure,expansion of the dataset size,reduction of computational complexity and memory requirements,and exploration of additional applications of ThyroidNet in medical image analysis.
基金funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2024R333)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘The concept of smart houses has grown in prominence in recent years.Major challenges linked to smart homes are identification theft,data safety,automated decision-making for IoT-based devices,and the security of the device itself.Current home automation systems try to address these issues but there is still an urgent need for a dependable and secure smart home solution that includes automatic decision-making systems and methodical features.This paper proposes a smart home system based on ensemble learning of random forest(RF)and convolutional neural networks(CNN)for programmed decision-making tasks,such as categorizing gadgets as“OFF”or“ON”based on their normal routine in homes.We have integrated emerging blockchain technology to provide secure,decentralized,and trustworthy authentication and recognition of IoT devices.Our system consists of a 5V relay circuit,various sensors,and a Raspberry Pi server and database for managing devices.We have also developed an Android app that communicates with the server interface through an HTTP web interface and an Apache server.The feasibility and efficacy of the proposed smart home automation system have been evaluated in both laboratory and real-time settings.It is essential to use inexpensive,scalable,and readily available components and technologies in smart home automation systems.Additionally,we must incorporate a comprehensive security and privacy-centric design that emphasizes risk assessments,such as cyberattacks,hardware security,and other cyber threats.The trial results support the proposed system and demonstrate its potential for use in everyday life.
文摘Spatial heterogeneity refers to the variation or differences in characteristics or features across different locations or areas in space. Spatial data refers to information that explicitly or indirectly belongs to a particular geographic region or location, also known as geo-spatial data or geographic information. Focusing on spatial heterogeneity, we present a hybrid machine learning model combining two competitive algorithms: the Random Forest Regressor and CNN. The model is fine-tuned using cross validation for hyper-parameter adjustment and performance evaluation, ensuring robustness and generalization. Our approach integrates Global Moran’s I for examining global autocorrelation, and local Moran’s I for assessing local spatial autocorrelation in the residuals. To validate our approach, we implemented the hybrid model on a real-world dataset and compared its performance with that of the traditional machine learning models. Results indicate superior performance with an R-squared of 0.90, outperforming RF 0.84 and CNN 0.74. This study contributed to a detailed understanding of spatial variations in data considering the geographical information (Longitude & Latitude) present in the dataset. Our results, also assessed using the Root Mean Squared Error (RMSE), indicated that the hybrid yielded lower errors, showing a deviation of 53.65% from the RF model and 63.24% from the CNN model. Additionally, the global Moran’s I index was observed to be 0.10. This study underscores that the hybrid was able to predict correctly the house prices both in clusters and in dispersed areas.
基金supported by the Deanship of Scientific Research at Prince Sattam bin Abdulaziz University under the research project (PSAU/2023/01/23001).
文摘The development of human-robot interaction has been continu-ously increasing for the last decades.Through this development,it has become simpler and safe interactions using a remotely controlled telepresence robot in an insecure and hazardous environment.The audio-video communication connection or data transmission stability has already been well handled by fast-growing technologies such as 5G and 6G.However,the design of the phys-ical parameters,e.g.,maneuverability,controllability,and stability,still needs attention.Therefore,the paper aims to present a systematic,controlled design and implementation of a telepresence mobile robot.The primary focus of this paper is to perform the computational analysis and experimental implementa-tion design with sophisticated position control,which autonomously controls the robot’s position and speed when reaching an obstacle.A system model and a position controller design are developed with root locus points.The design robot results are verified experimentally,showing the robot’s agreement and control in the desired position.The robot was tested by considering various parameters:driving straight ahead,right turn,self-localization and complex path.The results prove that the proposed approach is flexible and adaptable and gives a better alternative.The experimental results show that the proposed method significantly minimizes the obstacle hits.
文摘The main idea behind the present research is to design a state-feedback controller for an underactuated nonlinear rotary inverted pendulum module by employing the linear quadratic regulator(LQR)technique using local approximation.The LQR is an excellent method for developing a controller for nonlinear systems.It provides optimal feedback to make the closed-loop system robust and stable,rejecting external disturbances.Model-based optimal controller for a nonlinear system such as a rotatory inverted pendulum has not been designed and implemented using Newton-Euler,Lagrange method,and local approximation.Therefore,implementing LQR to an underactuated nonlinear system was vital to design a stable controller.A mathematical model has been developed for the controller design by utilizing the Newton-Euler,Lagrange method.The nonlinear model has been linearized around an equilibrium point.Linear and nonlinear models have been compared to find the range in which linear and nonlinear models’behaviour is similar.MATLAB LQR function and system dynamics have been used to estimate the controller parameters.For the performance evaluation of the designed controller,Simulink has been used.Linear and nonlinear models have been simulated along with the designed controller.Simulations have been performed for the designed controller over the linear and nonlinear system under different conditions through varying system variables.The results show that the system is stable and robust enough to act against external disturbances.The controller maintains the rotary inverted pendulum in an upright position and rejects disruptions like falling under gravitational force or any external disturbance by adjusting the rotation of the horizontal link in both linear and nonlinear environments in a specific range.The controller has been practically designed and implemented.It is vivid from the results that the controller is robust enough to reject the disturbances in milliseconds and keeps the pendulum arm deflection angle to zero degrees.
基金This research was supported and funded by KAU Scientific Endowment,King Abdulaziz University,Jeddah,Saudi Arabia.
文摘A document layout can be more informative than merely a document’s visual and structural appearance.Thus,document layout analysis(DLA)is considered a necessary prerequisite for advanced processing and detailed document image analysis to be further used in several applications and different objectives.This research extends the traditional approaches of DLA and introduces the concept of semantic document layout analysis(SDLA)by proposing a novel framework for semantic layout analysis and characterization of handwritten manuscripts.The proposed SDLA approach enables the derivation of implicit information and semantic characteristics,which can be effectively utilized in dozens of practical applications for various purposes,in a way bridging the semantic gap and providingmore understandable high-level document image analysis and more invariant characterization via absolute and relative labeling.This approach is validated and evaluated on a large dataset ofArabic handwrittenmanuscripts comprising complex layouts.The experimental work shows promising results in terms of accurate and effective semantic characteristic-based clustering and retrieval of handwritten manuscripts.It also indicates the expected efficacy of using the capabilities of the proposed approach in automating and facilitating many functional,reallife tasks such as effort estimation and pricing of transcription or typing of such complex manuscripts.
基金supported and funded by KAU Scientific Endowment,King Abdulaziz University,Jeddah,Saudi Arabia,grant number 077416-04.
文摘The utilization of digital picture search and retrieval has grown substantially in numerous fields for different purposes during the last decade,owing to the continuing advances in image processing and computer vision approaches.In multiple real-life applications,for example,social media,content-based face picture retrieval is a well-invested technique for large-scale databases,where there is a significant necessity for reliable retrieval capabilities enabling quick search in a vast number of pictures.Humans widely employ faces for recognizing and identifying people.Thus,face recognition through formal or personal pictures is increasingly used in various real-life applications,such as helping crime investigators retrieve matching images from face image databases to identify victims and criminals.However,such face image retrieval becomes more challenging in large-scale databases,where traditional vision-based face analysis requires ample additional storage space than the raw face images already occupied to store extracted lengthy feature vectors and takes much longer to process and match thousands of face images.This work mainly contributes to enhancing face image retrieval performance in large-scale databases using hash codes inferred by locality-sensitive hashing(LSH)for facial hard and soft biometrics as(Hard BioHash)and(Soft BioHash),respectively,to be used as a search input for retrieving the top-k matching faces.Moreover,we propose the multi-biometric score-level fusion of both face hard and soft BioHashes(Hard-Soft BioHash Fusion)for further augmented face image retrieval.The experimental outcomes applied on the Labeled Faces in the Wild(LFW)dataset and the related attributes dataset(LFW-attributes),demonstrate that the retrieval performance of the suggested fusion approach(Hard-Soft BioHash Fusion)significantly improved the retrieval performance compared to solely using Hard BioHash or Soft BioHash in isolation,where the suggested method provides an augmented accuracy of 87%when executed on 1000 specimens and 77%on 5743 samples.These results remarkably outperform the results of the Hard BioHash method by(50%on the 1000 samples and 30%on the 5743 samples),and the Soft BioHash method by(78%on the 1000 samples and 63%on the 5743 samples).
基金supported by the National Natural Science Foundation of China (Grant No.61702291)China Henan International Joint Laboratory for Multidimensional Topology and Carcinogenic Characteristics Analysis of Atmospheric Particulate Matter PM2.5.
文摘One of the most recent developments in the field of graph theory is the analysis of networks such as Butterfly networks,Benes networks,Interconnection networks,and David-derived networks using graph theoretic parameters.The topological indices(TIs)have been widely used as graph invariants among various graph theoretic tools.Quantitative structure activity relationships(QSAR)and quantitative structure property relationships(QSPR)need the use of TIs.Different structure-based parameters,such as the degree and distance of vertices in graphs,contribute to the determination of the values of TIs.Among other recently introduced novelties,the classes of ev-degree and ve-degree dependent TIs have been extensively explored for various graph families.The current research focuses on the development of formulae for different ev-degree and ve-degree dependent TIs for s−dimensional Benes network and certain networks derived from it.In the end,a comparison between the values of the TIs for these networks has been presented through graphical tools.
基金Funding for this study was received from the Deputyship for Research&Innovation,Ministry of Education in Saudi Arabia through the project number“IFPHI-021–135–2020”and King Abdulaziz University,DSR,Jeddah,Saudi Arabia.
文摘Increasing renewable energy targets globally has raised the requirement for the efficient and profitable operation of solar photovoltaic(PV)systems.In light of this requirement,this paper provides a path for evaluating the operating condition and improving the power output of the PV system in a grid integrated environment.To achieve this,different types of faults in grid-connected PV systems(GCPVs)and their impact on the energy loss associated with the electrical network are analyzed.A data-driven approach using neural networks(NNs)is proposed to achieve root cause analysis and localize the fault to the component level in the system.The localized fault condition is combined with a parallel operation of adaptive neurofuzzy inference units(ANFIUs)to develop a power mismatch-based control unit(PMCU)for improving the power output of the GCPV.To develop the proposed framework,a 10-kW single-phase GCPV is simulated for training the NN-based anomaly detection approach with 14 deviation signals.Further,the developed algorithm is combined with the PMCU implemented with the experimental setup of GCPV.The results identified 98.2%training accuracy and 43000 observations/sec prediction speed for the trained classifier,and improved power output with reduced voltage and current harmonics for the grid-connected PV operation.
文摘Data Encryption Standard(DES)is a symmetric key cryptosystem that is applied in different cryptosystems of recent times.However,researchers found defects in the main assembling of the DES and declared it insecure against linear and differential cryptanalysis.In this paper,we have studied the faults and made improvements in their internal structure and get the new algorithm for Improved DES.The improvement is being made in the substitution step,which is the only nonlinear component of the algorithm.This alteration provided us with great outcomes and increase the strength of DES.Accordingly,a novel 6×6 good quality S-box construction scheme has been hired in the substitution phase of the DES.The construction involves the Galois field method and generates robust S-boxes that are used to secure the scheme against linear and differential attacks.Then again,the key space of the improved DES has been enhanced against the brute force attack.The out-comes of different performance analyses depict the strength of our proposed substitution boxes which also guarantees the strength of the overall DES.
基金The Deanship of Scientific Research(DSR)at King Abdulaziz University(KAU),Jeddah,Saudi Arabia has funded this project,under grant no.(RG-91-611-42).
文摘Rapid technological advancement has enabled modern healthcare systems to provide more sophisticated and real-time services on the Internet of Medical Things(IoMT).The existing cloud-based,centralized IoMT architectures are vulnerable to multiple security and privacy problems.The blockchain-enabled IoMT is an emerging paradigm that can ensure the security and trustworthiness of medical data sharing in the IoMT networks.This article presents a private and easily expandable blockchain-based framework for the IoMT.The proposed framework contains several participants,including private blockchain,hospitalmanagement systems,cloud service providers,doctors,and patients.Data security is ensured by incorporating an attributebased encryption scheme.Furthermore,an IoT-friendly consensus algorithm is deployed to ensure fast block validation and high scalability in the IoMT network.The proposed framework can perform multiple healthcare-related services in a secure and trustworthy manner.The performance of blockchain read/write operations is evaluated in terms of transaction throughput and latency.Experimental outcomes indicate that the proposed scheme achieved an average throughput of 857 TPS and 151 TPS for read and write operations.The average latency is 61 ms and 16 ms for read and write operations,respectively.
基金supporting this research through the Post-Doctoral Fellowship Scheme under Grant Q.J130000.21A2.06E03 and Q.J130000.2409.08G77.
文摘Smart environments offer various services,including smart cities,ehealthcare,transportation,and wearable devices,generating multiple traffic flows with different Quality of Service(QoS)demands.Achieving the desired QoS with security in this heterogeneous environment can be challenging due to traffic flows and device management,unoptimized routing with resource awareness,and security threats.Software Defined Networks(SDN)can help manage these devices through centralized SDN controllers and address these challenges.Various schemes have been proposed to integrate SDN with emerging technologies for better resource utilization and security.Software Defined Wireless Body Area Networks(SDWBAN)and Software Defined Internet of Things(SDIoT)are the recently introduced frameworks to overcome these challenges.This study surveys the existing SDWBAN and SDIoT routing and security challenges.The paper discusses each solution in detail and analyses its weaknesses.It covers SDWBAN frameworks for efficient management of WBAN networks,management of IoT devices,and proposed security mechanisms for IoT and data security in WBAN.The survey provides insights into the state-of-the-art in SDWBAN and SDIoT routing with resource awareness and security threats.Finally,this study highlights potential areas for future research.
基金support from the USA-based research group(Computing and Engineering,Indiana University)the KSA-based research group(Department of Computer Science,King Abdulaziz University).
Abstract: Complex networks in the Internet of Things (IoT) and brain communication are the main focus of this paper. The benefits of complex networks may apply to future research directions in 6G, photonic, IoT, brain, and related communication technologies. Heavy data traffic, large capacity, and minimal dynamic latency are among the future requirements of 5G+ and 6G communication systems. In emerging communication, technologies such as 5G+/6G-based photonic sensor communication and complex networks play an important role in meeting these requirements for IoT and brain communication. In this paper, the state of the complex system, modeled as a complex network (the connections between brain cells, neurons, etc.), must be measured to analyze the functions of neurons during brain communication. We measure the state of the complex system through observability. Using 5G+/6G-based photonic sensor nodes, assessing observability through the concept of contraction provides a measure of the stability of neurons. When IoT or other sensors fail to measure the state of connectivity in 5G+ or 6G communication due to external noise and attacks, some information about the sensor nodes is lost during communication. Similarly, neurons, treated as sensor nodes in the brain's complex network, lose communication and connections. Affected sensor nodes within a contraction must therefore be compensated to maintain stability conditions. In this compensation, the loss of observability depends on the contraction size, which is a key factor in employing a complex network. To analyze observability recovery, a contraction detection algorithm based on complex network properties can be used. Our survey shows that the contraction size, through the clustering coefficient considered in the contraction detection algorithm, allows us to improve the performance of brain communication and the stability of neurons. In addition, we discuss the scalability of IoT communication using 5G+/6G-based photonic technology.
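Since the abstract ties contraction detection to the clustering coefficient, the small sketch below computes the local clustering coefficient of each node in an undirected connectivity graph. The example graph is hypothetical and stands in for a neuron/sensor connectivity map; it is not data from the surveyed systems.

```python
# Local clustering coefficient of each node in an undirected graph: the
# fraction of a node's neighbour pairs that are themselves connected.

def clustering_coefficients(adj):
    """adj maps node -> set of neighbours (undirected graph)."""
    coeffs = {}
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs[node] = 0.0
            continue
        # Count the edges that actually exist among the node's neighbours.
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        coeffs[node] = 2.0 * links / (k * (k - 1))
    return coeffs

# Hypothetical neuron/sensor connectivity.
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}
print(clustering_coefficients(graph))  # {'a': 0.333..., 'b': 1.0, 'c': 1.0, 'd': 0.0}
```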
Abstract: The electrocardiogram (ECG) signal is a measure of the heart's electrical activity. Recently, ECG detection and classification by cardiologists have benefited from computer-aided systems. The goal of this paper is to improve the accuracy of ECG classification by combining the Dipper Throated Optimization (DTO) and Differential Evolution Algorithm (DEA) into a unified algorithm that optimizes the hyperparameters of a neural network (NN) to boost classification accuracy. In addition, we propose a new feature selection method for selecting the significant features that improve overall performance. To prove the superiority of the proposed approach, several experiments were conducted to compare its results with those of competing approaches. Moreover, statistical analysis was performed to study the significance and stability of the proposed approach using the Wilcoxon and ANOVA tests. Experimental results confirmed the superiority and effectiveness of the proposed approach, which achieved a classification accuracy of 99.98%.
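The exact DTO+DEA hybrid is not reproducible from the abstract alone, so the hedged sketch below shows the general idea of metaheuristic hyperparameter tuning, using SciPy's differential evolution to tune two hyperparameters of a small neural network on synthetic data. The search ranges and dataset are placeholders, not the paper's experimental setup.

```python
# Hedged sketch: tuning (log10 learning rate, hidden-layer size) of an MLP
# with differential evolution, standing in for the DTO+DEA hybrid above.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=20, n_classes=2, random_state=0)

def objective(params):
    """Return 1 - cross-validated accuracy for a candidate hyperparameter pair."""
    log_lr, hidden = params
    clf = MLPClassifier(hidden_layer_sizes=(int(hidden),),
                        learning_rate_init=10.0 ** log_lr,
                        max_iter=300, random_state=0)
    return 1.0 - cross_val_score(clf, X, y, cv=3).mean()

result = differential_evolution(objective,
                                bounds=[(-4.0, -1.0), (8, 64)],
                                maxiter=10, popsize=6, seed=0)
print("best hyperparameters:", result.x, "CV error:", result.fun)
```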
Abstract: Tree ensemble models have gained significant importance in addressing classification and prediction challenges, and boosting ensembles are commonly employed for forecasting Type-II diabetes mellitus. Light Gradient Boosting Machine (LightGBM) is a widely used algorithm known for its leaf-wise growth strategy, loss reduction, and enhanced training precision; however, it is prone to overfitting. In contrast, CatBoost utilizes balanced base predictors known as decision tables, which mitigate overfitting risks and significantly improve testing-time efficiency. CatBoost's algorithm structure counteracts gradient-boosting biases and incorporates an overfitting detector to stop training early. This study develops a hybrid model that combines LightGBM and CatBoost to minimize overfitting and improve accuracy by reducing variance. Bayesian hyperparameter optimization is used to find the best hyperparameters for the underlying learners, and fine-tuning the regularization parameters allows the hybrid model to effectively reduce variance (overfitting). Comparative evaluation against the LightGBM, CatBoost, XGBoost, Decision Tree, Random Forest, AdaBoost, and GBM algorithms demonstrates that the hybrid model achieves the best F1-score (99.37%), recall (99.25%), and accuracy (99.37%). Consequently, the proposed framework holds promise for early diabetes prediction in the healthcare industry and exhibits potential applicability to other datasets sharing similarities with diabetes.
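One plausible way to combine the two boosters is a soft-voting ensemble that averages their predicted probabilities, as sketched below. This is an illustrative stand-in, not the paper's exact hybrid; the hyperparameter values are placeholders rather than the Bayesian-optimized settings the abstract reports, and the data are synthetic rather than a diabetes dataset.

```python
# Hedged sketch: soft-voting ensemble of LightGBM and CatBoost.
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=1000, n_features=16, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

hybrid = VotingClassifier(
    estimators=[
        ("lgbm", LGBMClassifier(n_estimators=200, reg_lambda=1.0, random_state=0)),
        ("cat", CatBoostClassifier(iterations=200, l2_leaf_reg=3.0,
                                   verbose=0, random_state=0)),
    ],
    voting="soft",   # average class probabilities from both boosters
)
hybrid.fit(X_tr, y_tr)
print("F1 on held-out data:", f1_score(y_te, hybrid.predict(X_te)))
```

The regularization parameters (reg_lambda, l2_leaf_reg) are the kind of knobs a Bayesian optimizer would tune to trade variance against bias.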
Abstract: Breast Arterial Calcification (BAC) is a mammographic finding distinct from cancer and is commonly observed in elderly women; identifying it manually can be costly and inaccurate. Recently, Deep Learning (DL) methods have been introduced for automatic BAC detection and quantification with increased accuracy. Earlier deep-learning classifiers reached high efficiency, but designing the DL architecture proved extremely challenging because the models tended to overfit and failed to capture the patterns and irregularities present in the images. To address the overfitting problem, an optimal feature set is formed by the Enhanced Wolf Pack Algorithm (EWPA), and irregularities are identified by Dense-kUNet segmentation. This paper introduces Dense-kUNet, which integrates DenseUNet and kU-Net, for segmentation, together with the optimal feature set for three-class classification (severe, mild, light). Long skip connections between adjacent modules pass relatively coarse feature maps to the following component, helping the network recognize finer details. The major contributions of the work are the selection of the best features by the Enhanced Wolf Pack Algorithm (EWPA) and Modified Support Vector Machine (MSVM) based learning for classification. The proposed techniques are evaluated on 826 digitized mammograms using several types of analysis. The proposed method achieved the highest precision, recall, F-measure, and accuracy of 84.4333%, 84.5333%, 84.4833%, and 86.8667%, respectively, when compared to other methods on the Digital Database for Screening Mammography (DDSM).
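The EWPA selector and the Modified SVM are not specified in enough detail here to reproduce, so the sketch below uses a generic univariate feature selector and a class-weighted SVM as stand-ins, only to illustrate the shape of the final classification stage. The features and labels are synthetic placeholders, not mammography data.

```python
# Hedged sketch of the classification stage only: generic feature selection
# followed by an SVM stands in for EWPA-selected features and the MSVM.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
features = rng.normal(size=(300, 40))     # placeholder segmentation-derived features
labels = rng.integers(0, 3, size=300)     # 0 = light, 1 = mild, 2 = severe

pipeline = Pipeline([
    ("select", SelectKBest(score_func=f_classif, k=15)),
    ("svm", SVC(kernel="rbf", C=1.0, class_weight="balanced")),
])
print("CV accuracy:", cross_val_score(pipeline, features, labels, cv=5).mean())
```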
Funding: This research was funded by the Deanship of Scientific Research at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. RG-12-611-43.
Abstract: The Message Passing Interface (MPI) is a widely accepted standard for parallel computing on distributed memory systems. However, MPI implementations can contain defects that impact the reliability and performance of parallel applications. Detecting and correcting these defects is crucial, yet there is a lack of published models specifically designed for correcting MPI defects. To address this, we propose a model for detecting and correcting MPI defects (DC_MPI), which aims to detect and correct defects in various types of MPI communication, including blocking point-to-point (BPTP), nonblocking point-to-point (NBPTP), and collective communication (CC). The defects addressed by the DC_MPI model include illegal MPI calls, deadlocks (DL), race conditions (RC), and message mismatches (MM). To assess the effectiveness of the DC_MPI model, we performed experiments on a dataset consisting of 40 MPI codes. The results indicate that the model achieved a detection rate of 37 out of 40 codes, resulting in an overall detection accuracy of 92.5%. Additionally, the execution duration of the DC_MPI model ranged from 0.81 to 1.36 s. These findings show that the DC_MPI model is useful in detecting and correcting defects in MPI implementations, thereby enhancing the reliability and performance of parallel applications. The DC_MPI model fills an important research gap and provides a valuable tool for improving the quality of MPI-based parallel computing systems.
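As an example of one defect class the model targets, the sketch below (using mpi4py, assumed available) shows the classic blocking point-to-point deadlock pattern: both ranks post a blocking send before the matching receive. Whether it actually hangs depends on the MPI implementation's buffering, which is exactly why the standard treats the pattern as unsafe. This illustrates the defect itself, not the DC_MPI tool or its input format.

```python
# Unsafe blocking point-to-point exchange: potential deadlock on both ranks.
# Run with: mpiexec -n 2 python unsafe_exchange.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
peer = 1 - rank                      # assumes exactly two ranks

send_buf = np.full(1_000_000, rank, dtype=np.float64)
recv_buf = np.empty_like(send_buf)

# Defect: Send is posted before Recv on BOTH ranks, so each rank may block
# waiting for the other. A safe fix is to reverse the order on one rank or
# to use comm.Sendrecv for the exchange.
comm.Send(send_buf, dest=peer, tag=0)
comm.Recv(recv_buf, source=peer, tag=0)

print(f"rank {rank} received data from rank {peer}")
```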
Funding: The authors extend their appreciation to the Deanship of Scientific Research at Shaqra University for funding this research work through Project Number SU-ANN-2023016.
Abstract: Object tracking is one of the major tasks for mobile robots in many real-world applications, and artificial intelligence and automatic control techniques play an important role in enhancing the performance of mobile robot navigation. In contrast to previous simulation studies, this paper presents a new intelligent mobile robot that accomplishes multiple tasks by tracking red-green-blue (RGB) colored objects in a real experimental field. A practical smart controller is developed based on adaptive fuzzy logic and custom proportional-integral-derivative (PID) schemes to achieve accurate tracking results while accounting for robot command delay and tolerance errors. The developed controllers incorporate motion rules that mimic the knowledge of experienced operators. Twelve scenarios of three colored-object combinations were successfully tested and evaluated using the developed image-based robot tracker. Classical PID control failed to handle some tracking scenarios in this study. The proposed adaptive fuzzy PID control achieved the most accurate results, with a minimum average final error of 13.8 cm in reaching the colored targets, while our custom PID control is the most efficient, saving an average of 6.6 s in time and 14.3 cm in traveling distance. These promising results demonstrate the feasibility of applying the developed image-based robotic system in a colored object-tracking environment to reduce human workloads.
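For readers unfamiliar with the control layer such trackers build on, the sketch below is a minimal discrete PID controller driving a tracking error toward zero. The gains, timestep, and the crude plant model are placeholders, and the adaptive fuzzy layer that retunes the gains online is not reproduced here.

```python
# Minimal discrete PID controller sketch; values are illustrative placeholders.
class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error: float) -> float:
        """Return the control command for the current tracking error."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy usage: drive the distance-to-target error toward zero.
controller = PID(kp=0.8, ki=0.05, kd=0.1, dt=0.1)
distance_cm = 100.0
for _ in range(50):
    command = controller.update(distance_cm)
    distance_cm -= command * 0.1        # crude plant model, for illustration only
print(f"remaining error: {distance_cm:.1f} cm")
```

An adaptive fuzzy scheme of the kind the abstract describes would adjust kp, ki, and kd at each step based on rules derived from operator experience, rather than keeping them fixed.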
Funding: The research is supported by the Natural Science Foundation of Zhejiang Province (LQ20F020008) and the "Pioneer" and "Leading Goose" R&D Program of Zhejiang (Grant Nos. 2023C03203, 2023C01150).
Abstract: The rapid growth of smart technologies and services has intensified the challenges surrounding identity authentication techniques. Biometric credentials are increasingly being used for verification because of their advantages over traditional methods, making it crucial to safeguard the privacy of people's biometric data in various scenarios. This paper offers an in-depth exploration of privacy-preserving techniques and potential threats to biometric systems. It proposes a novel and thorough taxonomy of privacy-preserving techniques, as well as a systematic framework for categorizing the field's existing literature. We review the state-of-the-art methods and address their advantages and limitations in the context of various biometric modalities, such as face, fingerprint, and eye detection. The survey encompasses various categories of privacy-preserving mechanisms, examines the trade-offs between security, privacy, and recognition performance, and discusses open issues and future research directions. It aims to provide researchers, professionals, and decision-makers with a thorough understanding of the existing privacy-preserving solutions in biometric recognition systems and to serve as a foundation for the development of more secure and privacy-preserving biometric technologies.