Breast cancer detection heavily relies on medical imaging, particularly ultrasound, for early diagnosis and effective treatment. This research addresses the challenges associated with computer-aided diagnosis (CAD) of breast cancer from ultrasound images. The primary challenge is accurately distinguishing between malignant and benign tumors, complicated by factors such as speckle noise, variable image quality, and the need for precise segmentation and classification. The main objective of the research paper is to develop an advanced methodology for breast ultrasound image classification, focusing on speckle noise reduction, precise segmentation, feature extraction, and machine learning-based classification. A unique approach is introduced that combines Enhanced Speckle Reducing Anisotropic Diffusion (SRAD) filters for speckle noise reduction, U-Net-based segmentation, Genetic Algorithm (GA)-based feature selection, and Random Forest and Bagging Tree classifiers, resulting in a novel and efficient model. To test and validate the hybrid model, rigorous experiments were performed; the results show that the proposed hybrid model achieved an accuracy rate of 99.9%, outperforming other existing techniques while also significantly reducing computational time. This enhanced accuracy, along with improved sensitivity and specificity, makes the proposed hybrid model a valuable addition to CAD systems in breast cancer diagnosis, ultimately enhancing diagnostic accuracy in clinical applications.
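The GA feature-selection stage of such a pipeline can be illustrated with a small sketch: a GA evolves binary feature masks scored by Random Forest cross-validation accuracy. The population size, rates, and synthetic data below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=40, n_informative=8, random_state=0)

def fitness(mask):
    """Cross-validated Random Forest accuracy on the features the mask keeps."""
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))           # random bit-mask population
for gen in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]          # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])                 # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.02              # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```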
Internet of Vehicles (IoV) is an evolution of the Internet of Things (IoT) that improves the capabilities of vehicular ad-hoc networks (VANETs) in intelligent transport systems. The network topology in the IoV paradigm is highly dynamic. Clustering is one of the promising solutions for maintaining route stability in such a dynamic network. However, existing algorithms consume a considerable amount of time in the cluster head (CH) selection process. Thus, this study proposes a mobility-aware dynamic clustering-based routing (MADCR) protocol for IoV to maximize the lifespan of the network and reduce the end-to-end delay of vehicles. The MADCR protocol consists of cluster formation and CH selection processes. A cluster is formed on the basis of Euclidean distance. The CH is then chosen using the mayfly optimization algorithm (MOA). The CH subsequently receives vehicle data and forwards it to the Road Side Unit (RSU). The performance of the MADCR protocol is compared with that of Ant Colony Optimization (ACO), Comprehensive Learning Particle Swarm Optimization (CLPSO), and the Clustering Algorithm for Internet of Vehicles based on Dragonfly Optimizer (CAVDO). The proposed MADCR protocol decreases the end-to-end delay by 5–80 ms and increases the packet delivery ratio by 5%–15%.
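The Euclidean cluster-formation step can be sketched as below; the vehicle coordinates and communication radius are invented for illustration, and the mayfly-based CH selection is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
positions = rng.uniform(0, 1000, size=(30, 2))  # hypothetical vehicle coordinates in metres
RADIUS = 150.0                                  # assumed communication range, not the paper's value

unassigned = list(range(len(positions)))
clusters = []
while unassigned:
    seed = unassigned.pop(0)
    # every still-unassigned vehicle within the Euclidean radius joins the seed's cluster
    members = [seed] + [v for v in unassigned
                        if np.linalg.norm(positions[v] - positions[seed]) <= RADIUS]
    unassigned = [v for v in unassigned if v not in members]
    clusters.append(members)

for i, c in enumerate(clusters):
    print(f"cluster {i}: vehicles {c}")
```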
Wireless sensor network (WSN) is considered one of the fastest growing technology patterns in recent years because of its applicability in varied domains. Many sensor nodes with different sensing functionalities are deployed in the monitoring area to collect suitable data and transmit it to the gateway. Ensuring communication in heterogeneous WSNs is a critical issue that needs to be studied. In this research paper, we study the system performance of a heterogeneous WSN using LoRa–Zigbee hybrid communication. Specifically, two Zigbee sensor clusters and two LoRa sensor clusters are combined with two Zigbee-to-LoRa converters to communicate in a network managed by a LoRa gateway. The overall system integrates many different sensors in terms of types, communication protocols, and accuracy, and can be used in many realistic environments such as on land, under water, or in the air. In addition, synchronous management software is designed on the ThingSpeak web server and the Blynk app. In the proposed system, a token ring protocol is used in the Zigbee network and a polling mechanism in the LoRa network. The system can operate with a packet loss rate of less than 0.5% when the communication range of the Zigbee network is 630 m and the communication range of the LoRa network is 3.7 km. On the basis of the results collected by the management software, this study demonstrates substantial improvements in system performance.
Component-based software development is rapidly introducing numerous new paradigms and possibilities to deliver highly customized software in a distributed environment. Among the communication, teamwork, and coordination problems in global software development, the detection of faults is seen as the key challenge. Thus, there is a need to ensure the reliability of component-based applications' requirements. Distributed fault detection must be applied to components tracked from various sources, and existing approaches fail to keep track of the large number of components from different locations. In this study, we propose an approach for fault detection from component-based systems' requirements using fuzzy logic and historical information during acceptance testing. The approach identifies error-prone components for test case extraction and prioritizes test cases to validate components in acceptance testing. For the evaluation, we conducted an empirical study, and the results show that the proposed approach significantly outperforms the conventional procedures, i.e., requirement criteria and communication coverage criteria, in component selection and acceptance testing while avoiding irrelevancy and redundancy. Consequently, the F-measures of the proposed approach indicate accurate component selection, and fault identification with the proposed approach was higher (more than 80 percent) than with the requirement-criteria and code-coverage-criteria procedures (less than 80 percent). Similarly, the fault detection rate of the proposed approach rises to 92.80 percent, compared with less than 80 percent for existing methods. The proposed approach provides a comprehensive guideline and roadmap for practitioners and researchers.
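As a rough illustration of the fuzzy-logic idea, the sketch below scores a component's fault-proneness from two normalized history metrics with a tiny Mamdani-style rule base; the linguistic terms, rules, and output centroids are assumptions, since the paper's actual rule base is not given here.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership over [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fault_proneness(change_freq, past_faults):
    # illustrative linguistic terms on normalized [0, 1] metrics
    high_change = tri(change_freq, 0.3, 0.7, 1.0)
    high_history = tri(past_faults, 0.2, 0.6, 1.0)
    low_change = tri(change_freq, 0.0, 0.2, 0.5)
    # Mamdani-style rules: firing strength = min of antecedents
    risk_high = min(high_change, high_history)   # IF change high AND history high THEN risk high
    risk_low = low_change                        # IF change low THEN risk low
    # crisp score by weighted-average defuzzification (centroids 0.9 / 0.2 assumed)
    num = risk_high * 0.9 + risk_low * 0.2
    den = risk_high + risk_low
    return num / den if den else 0.5

print(fault_proneness(0.8, 0.7))  # a frequently changed, historically faulty component
```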
Cloud computing is a collection of disparate resources or services, a web of massive infrastructures, which is aimed at achieving maximum utilization with higher availability at a minimized cost. One of the most attractive applications for cloud computing is the concept of distributed information processing. Security, privacy, energy saving, reliability, and load balancing are the major challenges facing cloud computing and most information technology innovations. Load balancing, the process of redistributing workload among all nodes in a network to improve resource utilization and job response time while avoiding the overloading of some nodes when others are underloaded or idle, is a major challenge. Thus, this research aims to design a novel load balancing system in a cloud computing environment. The research is based on the modification of existing approaches, namely particle swarm optimization (PSO), honeybee, and ant colony optimization (ACO), combined with mathematical expressions to form a novel approach called P-ACOHONEYBEE. The experiments were conducted on response time and throughput. The response times of honeybee, PSO, SASOS, round-robin, PSO-ACO, and P-ACOHONEYBEE are 2791, 2780, 2784, 2767, 2727, and 2599 ms, respectively. The throughputs of honeybee, PSO, SASOS, round-robin, PSO-ACO, and P-ACOHONEYBEE are 7451, 7425, 7398, 7357, 7387, and 7482 bps, respectively. It is observed that the P-ACOHONEYBEE approach produces the lowest response time, the highest throughput, and overall improved performance for the 10 nodes. The research helps manage the imbalance drawback by maximizing throughput and reducing response time with scalability and reliability.
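The honeybee-inspired component of such schemes amounts to greedily recruiting each task to the currently least-loaded server, the way foragers are recruited to the most profitable source. The heap-based sketch below shows only that idea under assumed task costs; it is a simplification, not the P-ACOHONEYBEE algorithm itself.

```python
import heapq

def honeybee_style_assign(task_costs, n_nodes):
    """Greedy sketch: each task goes to the currently least-loaded node."""
    heap = [(0.0, node) for node in range(n_nodes)]  # (current load, node id)
    heapq.heapify(heap)
    placement = {}
    for task, cost in enumerate(task_costs):
        load, node = heapq.heappop(heap)             # least-loaded node first
        placement[task] = node
        heapq.heappush(heap, (load + cost, node))
    return placement

# ten nodes as in the experiment; task costs are made-up values
print(honeybee_style_assign([5, 3, 8, 2, 7, 4, 6, 1, 9, 2, 5, 3], 10))
```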
The simultaneous advances in the Internet of Things (IoT), Artificial Intelligence (AI), and robotics are going to revolutionize our world in the near future. In recent years, LoRa (Long Range) wireless, powered by the LoRaWAN (LoRa Wide Area Network) protocol, has attracted the attention of researchers for numerous applications in the IoT domain. LoRa is a low-power, unlicensed Industrial, Scientific, and Medical (ISM) band-equipped wireless technology that utilizes a wide area network protocol, i.e., LoRaWAN, to incorporate itself into the network infrastructure. In this paper, we have evaluated the LoRaWAN communication protocol for the implementation of IoT node communication in a forest scenario. The outdoor performance of LoRa wireless in LoRaWAN, i.e., the physical layer, has been evaluated in the forest area of Kashirampur, Uttarakhand, India. Hence, the present paper analyzes the performance of LoRaWAN technology by observing the changes in Signal to Noise Ratio (SNR), Packet Reception Ratio (PRR), and Received Signal Strength Indicator (RSSI) with respect to the distance between IoT nodes. The article focuses on estimating network lifetime for a specific set of LoRa configuration parameters, hardware selection, and power constraints. From the experimental results, it has been observed that transmissions can propagate to a distance of 300 m in the forest environment, while consuming approximately 63% less energy for spreading factor 7 at 2 dBm, without incurring significant packet loss (PRR greater than 80%).
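PRR, the metric the 80% threshold above refers to, is simply the fraction of transmitted packets that arrive. A minimal sketch with made-up field counts, not the paper's data:

```python
def packet_reception_ratio(received, sent):
    """PRR = packets correctly received / packets transmitted."""
    return received / sent if sent else 0.0

# hypothetical field log: (distance_m, sent, received) tuples
trials = [(100, 200, 198), (200, 200, 190), (300, 200, 165)]
for distance, sent, received in trials:
    prr = packet_reception_ratio(received, sent)
    print(f"{distance} m: PRR = {prr:.1%} ({'ok' if prr > 0.80 else 'below threshold'})")
```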
In mobile crowd computing (MCC), people's smart mobile devices (SMDs) are utilized as computing resources. Considering the ever-growing computing capabilities of today's SMDs, a collection of them can offer significantly high-performance computing services. In a local MCC, the SMDs are typically connected to a local Wi-Fi network. Organizations and institutions can leverage the SMDs available within the campus to form local MCCs to cater to their computing needs without any financial and operational burden. Though it offers an economical and sustainable computing solution, users' mobility poses a serious issue for the QoS of MCC. To address this, before submitting a job to an SMD, we suggest estimating that particular SMD's availability in the network until the job is finished. For this, we propose a convolutional GRU-based prediction model to assess how long an SMD is likely to be available in the network from any given point of time. For experimental purposes, we collected real users' mobility data (in-time and out-time) with respect to a Wi-Fi access point. To build the prediction model, we presented a novel feature extraction method to be applied to the time-series data. The experimental results prove that the proposed convolutional GRU model outperforms the conventional GRU model.
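A convolutional GRU regressor of the kind described can be sketched in Keras as below; the window length, feature set, layer widths, and synthetic data are assumptions for illustration, not the paper's architecture.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW, FEATURES = 24, 4  # e.g., 24 time steps of assumed mobility features per sample

model = models.Sequential([
    layers.Input(shape=(WINDOW, FEATURES)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),  # local temporal patterns
    layers.GRU(64),                                       # longer-range dynamics
    layers.Dense(1),                                      # remaining availability (hours)
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(128, WINDOW, FEATURES).astype("float32")  # synthetic stand-in data
y = np.random.rand(128, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
print(model.predict(X[:1], verbose=0))
```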
Energy conservation is a significant task in the Internet of Things (IoT) because IoT involves highly resource-constrained devices. Clustering is an effective technique for saving energy by reducing duplicate data. In a clustering protocol, the selection of a cluster head (CH) plays a key role in prolonging the lifetime of a network. However, most cluster-based protocols, including routing protocols for low-power and lossy networks (RPLs), have used fuzzy logic and probabilistic approaches to select the CH node. Consequently, early battery depletion occurs near the sink. To overcome this issue, a lion optimization algorithm (LOA) for selecting the CH in RPL is proposed in this study. LOA-RPL comprises three processes: cluster formation, CH selection, and route establishment. A cluster is formed using the Euclidean distance. CH selection is performed using LOA. Route establishment is implemented using residual energy information. An extensive simulation is conducted in the network simulator ns-3 on various performance metrics, namely network lifetime, power consumption, packet delivery ratio (PDR), and throughput. The performance of LOA-RPL is also compared with those of RPL, fuzzy rule-based energy-efficient clustering and immune-inspired routing (FEEC-IIR), and the routing scheme for IoT that uses the shuffled frog-leaping optimization algorithm (RISA-RPL). The proposed LOA-RPL increases network lifetime by 20% and PDR by 5%–10% compared with RPL, FEEC-IIR, and RISA-RPL, and is also highly energy-efficient compared with other similar routing protocols.
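The route-establishment rule, picking the next hop by residual energy, can be sketched as below; the neighbor table and the tie-break on hop count are invented for illustration.

```python
# Sketch of residual-energy-based next-hop choice; topology and energies are invented.
neighbors = {                       # candidate parents of a node and their state
    "A": {"residual_energy": 0.82, "hops_to_sink": 3},
    "B": {"residual_energy": 0.47, "hops_to_sink": 2},
    "C": {"residual_energy": 0.91, "hops_to_sink": 3},
}

def next_hop(candidates):
    # prefer the neighbor with the most residual energy, break ties on hop count
    return max(candidates,
               key=lambda n: (candidates[n]["residual_energy"],
                              -candidates[n]["hops_to_sink"]))

print(next_hop(neighbors))  # -> "C"
```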
A collaborative filtering-based recommendation system has been an integral part of e-commerce and e-servicing. To keep recommendation systems reliable, authentic, and superior, their security is crucial. Though the existing shilling attack detection methods in collaborative filtering are able to detect the standard attacks, in this paper, we prove that they fail to detect a new or unknown attack. We develop a new attack model, named the Obscure attack, with unknown features, and observe that it succeeds in biasing the overall top-N list of the target users as intended. The Obscure attack is able to push target items into the top-N list as well as remove actually rated items from the list. Our proposed attack is more effective at a smaller number of k in the top-k similar users as compared to other existing attacks. The effectiveness of the proposed attack model is tested on the MovieLens dataset, where various classifiers like SVM, J48, random forest, and naïve Bayes are utilized.
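The top-k similar-user neighborhood that such attacks manipulate is computed from rating-vector similarity. A toy cosine-similarity sketch with an invented rating matrix:

```python
import numpy as np

# toy user-item rating matrix (rows: users, cols: items); 0 means unrated
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return (u @ v) / denom if denom else 0.0

def top_k_similar(R, user, k):
    sims = [(cosine(R[user], R[other]), other)
            for other in range(len(R)) if other != user]
    return sorted(sims, reverse=True)[:k]

# an attacker injecting profiles similar to the target user skews this neighborhood,
# which is what pushes items into (or out of) the target's top-N list
print(top_k_similar(R, user=0, k=2))
```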
With the popularity of e-learning, personalization and ubiquity have become important aspects of online learning. To make learning more personalized and ubiquitous, we propose a learner model for a query-based personalized learning recommendation system. Several contextual attributes characterize a learner, but considering all of them is costly for a ubiquitous learning system. In this paper, a set of optimal intrinsic and extrinsic contexts of a learner are identified for learner modeling. A total of 208 students are surveyed. The DEMATEL (Decision Making Trial and Evaluation Laboratory) technique is used to establish the validity and importance of the identified contexts and to find the interdependencies among them. The acquisition methods for these contexts are also defined. On the basis of these contexts, the learner model is designed. A layered architecture is presented for interfacing the learner model with a query-based personalized learning recommendation system. In a ubiquitous learning scenario, the necessary adaptive decisions are identified to make a personalized recommendation to a learner.
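DEMATEL itself reduces to a small matrix computation: normalize the expert-scored direct-influence matrix and derive the total-relation matrix T = N(I - N)^-1, from which each factor's prominence (R + C) and net cause/effect role (R - C) follow. A sketch with an invented 3x3 matrix of context influences:

```python
import numpy as np

# toy 3x3 direct-influence matrix among contexts (expert scores 0-4); values invented
D = np.array([[0, 3, 2],
              [1, 0, 3],
              [2, 1, 0]], dtype=float)

N = D / D.sum(axis=1).max()                 # normalize by the largest row sum
T = N @ np.linalg.inv(np.eye(3) - N)        # total-relation matrix T = N(I - N)^-1

prominence = T.sum(axis=1) + T.sum(axis=0)  # R + C: overall importance of each context
relation = T.sum(axis=1) - T.sum(axis=0)    # R - C: net cause (+) or effect (-)
print(prominence, relation)
```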
Lightweight Cryptography (LWC) is widely used to provide integrity, secrecy, and authentication for sensitive applications. However, LWC is subject to various constraints, such as high power consumption, time consumption, and hardware utilization, and is susceptible to malicious attackers. To overcome this, a lightweight block cipher, namely the PRESENT architecture, is proposed to provide security against malicious attacks. True Random Number Generator-Pseudo Random Number Generator (TRNG-PRNG)-based key generation is proposed to generate unpredictable keys that are highly difficult for attackers to predict. Moreover, the hardware utilization of the PRESENT architecture is optimized using Dual-port Read Only Memory (DROM). The proposed PRESENT-TRNG-PRNG architecture supports a 64-bit input with an 80-bit key value. The performance of the PRESENT-TRNG-PRNG architecture is evaluated by means of the number of slice registers, flip-flops, number of slice Look Up Tables (LUTs), number of logical elements, slices, bonded input/output blocks (IOBs), frequency, power, and delay. The input retrieval performance of the PRESENT-TRNG-PRNG architecture is analyzed in terms of Peak Signal to Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Mean-Square Error (MSE). The PRESENT-TRNG-PRNG architecture is compared with three different existing PRESENT architectures: PRESENT On-The-Fly (PRESENT-OTF), PRESENT Self-Test Structure (PRESENT-STS), and PRESENT-Round Keys (PRESENT-RK). The operating frequency of PRESENT-TRNG-PRNG is 612.208 MHz on Virtex 5, which is high compared to PRESENT-RK.
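For reference, one round of the underlying PRESENT cipher is a round-key XOR, a 4-bit S-box layer, and a 64-bit bit permutation. The sketch below follows the published PRESENT specification for those two layers; the key schedule and the full 31-round loop are omitted for brevity.

```python
# PRESENT's 4-bit S-box (from the published specification)
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sbox_layer(state):
    """Apply the S-box to each of the 16 nibbles of the 64-bit state."""
    out = 0
    for nibble in range(16):
        out |= SBOX[(state >> (4 * nibble)) & 0xF] << (4 * nibble)
    return out

def p_layer(state):
    """Bit permutation: bit i of the input moves to position P(i) = 16*i mod 63."""
    out = 0
    for i in range(64):
        pos = 63 if i == 63 else (16 * i) % 63
        out |= ((state >> i) & 1) << pos
    return out

def present_round(state, round_key):
    return p_layer(sbox_layer(state ^ round_key))

print(hex(present_round(0x0123456789ABCDEF, 0x0)))
```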
Stock market trend forecasting is one of the most current topics and a significant research challenge due to its dynamic and unstable nature. Stock data are usually non-stationary, and attributes are non-correlative to each other. Several traditional Stock Technical Indicators (STIs) may incorrectly predict stock market trends. To study stock market characteristics using STIs and make efficient trading decisions, a robust model is built. This paper aims to build an Evolutionary Deep Learning Model (EDLM) to identify stock price trends by using STIs. The proposed model implements a Deep Learning (DL) model to establish the concept of a Correlation-Tensor. For the analysis of the dataset of the three most popular banking organizations, obtained from the live stock market based on the National Stock Exchange (NSE)-India, a Long Short Term Memory (LSTM) is used. The datasets encompass the trading days from the 17th of Nov 2008 to the 15th of Nov 2018. This work also conducted exhaustive experiments to study the correlation of various STIs with stock price trends. The model built with the EDLM has shown significant improvements over two benchmark ML models and a deep learning one. The proposed model aids investors in making profitable investment decisions as it presents trend-based forecasting and has achieved a prediction accuracy of 63.59%, 56.25%, and 57.95% on the datasets of HDFC, Yes Bank, and SBI, respectively. Results indicate that the proposed EDLM with a combination of STIs can often provide better results than the other state-of-the-art algorithms.
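STIs of the kind fed to such models are simple functions of the price series. As one example, here is a standard Relative Strength Index (RSI) with Wilder smoothing on synthetic prices; the 14-day period is the conventional default, not necessarily the paper's choice.

```python
import numpy as np

def rsi(closes, period=14):
    """Relative Strength Index, one of the standard STIs."""
    deltas = np.diff(closes)
    gains = np.where(deltas > 0, deltas, 0.0)
    losses = np.where(deltas < 0, -deltas, 0.0)
    avg_gain = gains[:period].mean()
    avg_loss = losses[:period].mean()
    out = []
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period   # Wilder smoothing
        avg_loss = (avg_loss * (period - 1) + l) / period
        rs = avg_gain / avg_loss if avg_loss else np.inf
        out.append(100 - 100 / (1 + rs))
    return np.array(out)

closes = np.cumsum(np.random.default_rng(2).normal(0, 1, 200)) + 100  # synthetic prices
print(rsi(closes)[-3:])
```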
In today's information technology (IT) world, multi-hop wireless sensor networks (MHWSNs) are considered the building block for Internet of Things (IoT)-enabled communication systems, controlling everyday tasks of organizations and industry to provide quality of service (QoS) in a stipulated time slot to end-users over the Internet. A smart city (SC) is an example of one such application, which can automate a group of civil services, like automatic control of traffic lights, weather prediction, and surveillance, in our daily life. These IoT-based networks with multi-hop communication and multiple sink nodes provide efficient communication in terms of performance parameters such as throughput, energy efficiency, and end-to-end delay, wherein low latency is considered a challenging issue in next-generation networks (NGN). This paper introduces single- and parallel-server stable queuing models with multiple classes of packets and native and coded packet flows to illustrate a simple chain topology and a complex multiway relay (MWR) node with a specific neighbor topology. Further, to improve data transmission capacity in MHWSNs, an analytical framework for packet transmission using network coding at the MWR node in the network layer with opportunistic listening is developed by considering bi-directional network flow at the MWR node. Finally, the accuracy of the proposed multi-server multi-class queuing model is evaluated with and without network coding at the network layer by transmitting data packets. The results of the proposed analytical framework are validated and proved effective by comparing the analytical results to simulation results.
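The payoff of network coding at a two-way relay is that one XOR-coded broadcast replaces two separate forwards, and each endpoint decodes using the packet it already holds. A minimal sketch with arbitrary payloads:

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

pkt_a = b"hello-from-A"        # packet sent by node A toward node B via the relay
pkt_b = b"hello-from-B"        # packet sent by node B toward node A via the relay

coded = xor_bytes(pkt_a, pkt_b)           # the relay broadcasts A XOR B once

recovered_at_b = xor_bytes(coded, pkt_b)  # B cancels its own pkt_b to get pkt_a
recovered_at_a = xor_bytes(coded, pkt_a)  # A cancels its own pkt_a to get pkt_b
assert recovered_at_b == pkt_a and recovered_at_a == pkt_b
```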
The study of viruses and their genetics has been an opportunity as well as a challenge for the scientific community. The recent ongoing SARS-CoV-2 (Severe Acute Respiratory Syndrome) pandemic proved our unpreparedness for these situations. Not only do the effects caused by the virus need to be countered, but the mutations taking place in the very genome of the virus need to be checked frequently. One major way to find out more about such pathogens is by extracting their genetic data. Though the genetic data of viruses have been cultured, stored, and isolated in the form of genome sequences, there are still limited methods for predicting what new variants may be generated in the future due to mutation. This research proposes a deep learning model to predict the genome sequences of the SARS-CoV-2 virus using only the previous viruses of the coronaviridae family, with the help of RNN-LSTM (Recurrent Neural Network-Long Short-Term Memory) and RNN-GRU (Gated Recurrent Unit), so that in the future several countermeasures can be taken by predicting possible changes in the genome with the help of existing mutations in the virus. After testing the model, the F1-recall came out to be more than 0.95. The mutation detection accuracy of both models is about 98.5%, which shows the capability of recurrent neural networks to predict future changes in the genome of a virus.
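A genome-sequence predictor of this family can be framed as next-nucleotide classification over a sliding window. The Keras sketch below uses a random stand-in sequence and an assumed window and layer size, not the paper's corpus or architecture.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

BASES = "ACGT"
seq = "".join(np.random.default_rng(3).choice(list(BASES), 2000))  # stand-in genome
idx = {b: i for i, b in enumerate(BASES)}

WINDOW = 20
X = np.array([[idx[b] for b in seq[i:i + WINDOW]] for i in range(len(seq) - WINDOW)])
y = np.array([idx[seq[i + WINDOW]] for i in range(len(seq) - WINDOW)])
X = np.eye(4, dtype="float32")[X]            # one-hot encode each base

model = models.Sequential([
    layers.Input(shape=(WINDOW, 4)),
    layers.LSTM(64),
    layers.Dense(4, activation="softmax"),   # probability of the next base
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=1, batch_size=64, verbose=0)
print(model.predict(X[:1], verbose=0))
```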
In the past few decades, climatic changes led by environmental pollution, the emission of greenhouse gases, and the emergence of brown energy utilization have led to global warming. Global warming increases the Earth's temperature, thereby causing severe effects on human and environmental conditions and threatening the livelihoods of millions of people. Global warming issues include the increase in global temperatures that leads to heat strokes and high-temperature-related diseases during the summer, causing the untimely death of thousands of people. To forecast weather conditions, researchers have utilized machine learning algorithms, such as the autoregressive integrated moving average, ensemble learning, and the long short-term memory network. These techniques have been widely used for the prediction of temperature. In this paper, we present a swarm-based approach called Cauchy particle swarm optimization (CPSO) to find the hyperparameters of the long short-term memory (LSTM) network. The hyperparameters were determined by minimizing the LSTM validation mean square error rate. The optimized hyperparameters of the LSTM were used to forecast the temperature of Chennai City. The proposed CPSO-LSTM model was tested on the openly available 25-year Chennai temperature dataset. The experimental evaluation on MATLAB R2020a analyzed the root mean square error rate and mean absolute error of the forecasted output. The proposed CPSO-LSTM outperforms the traditional LSTM algorithm, reducing its computational time to 25 min under 200 epochs and 150 hidden neurons during training. The proposed hyperparameter-based LSTM can predict the temperature accurately, with a root mean square error (RMSE) of 0.250 compared with 0.35 RMSE for the traditional LSTM.
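The Cauchy twist on PSO is a heavy-tailed perturbation added to the usual velocity update so particles can jump out of local minima. Below is a sketch on a toy objective standing in for the LSTM validation MSE; the swarm size and coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def fitness(x):                      # stand-in for the LSTM validation MSE
    return np.sum((x - 3.0) ** 2)

n, dim = 15, 2
pos = rng.uniform(-10, 10, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(50):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    pos += rng.standard_cauchy((n, dim)) * 0.1   # heavy-tailed Cauchy jump aids escape
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(gbest)   # should approach [3, 3]
```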
Artificial intelligence plays an essential role in the medical and health industries. Deep convolution networks offer valuable services and help create automated systems to perform medical image analysis. However, although convolution networks examine medical images effectively, such systems incur high computational complexity when recognizing the same disease-affected region. Therefore, an optimized deep convolution network is utilized for analyzing disease-affected regions in this work. Different disease-related medical images are selected and examined pixel by pixel; this analysis uses the gray wolf optimized deep learning network. The method identifies affected pixels through the gray wolf hunting process. The convolution network uses an automatic learning function that predicts the disease from previous imaging analysis. The regions selected by the optimized algorithm are further examined using a distribution pattern-matching rule. The pattern-matching process recognizes the disease effectively, and the system's efficiency is evaluated using a MATLAB implementation. This process ensures high accuracy of 99.02% to 99.37% and reduces computational complexity.
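The grey wolf optimizer at the core of such methods moves a pack of candidate solutions toward the three current best wolves (alpha, beta, delta). The sketch below shows the standard GWO position update on a toy objective; in the paper, the search space would instead be candidate disease-affected pixel regions.

```python
import numpy as np

rng = np.random.default_rng(5)

def fitness(x):
    return np.sum(x ** 2)            # stand-in objective (minimize)

wolves = rng.uniform(-5, 5, (20, 3))
for t in range(100):
    a = 2 - 2 * t / 100              # coefficient a decreases linearly from 2 to 0
    order = np.argsort([fitness(w) for w in wolves])
    alpha, beta, delta = wolves[order[:3]]   # the three best wolves lead the hunt
    for i, w in enumerate(wolves):
        guided = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(3), rng.random(3)
            A, C = 2 * a * r1 - a, 2 * r2
            guided.append(leader - A * np.abs(C * leader - w))
        wolves[i] = np.mean(guided, axis=0)  # average of the three guided positions

print(wolves[np.argsort([fitness(w) for w in wolves])[0]])  # near the origin
```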
Cardiovascular disease is among the top five fatal diseases that affect lives worldwide. Therefore, its early prediction and detection are crucial, allowing one to take proper and necessary measures at earlier stages. Machine learning (ML) techniques are used to assist healthcare providers in better diagnosing heart disease. This study employed three boosting algorithms, namely gradient boost, XGBoost, and AdaBoost, to predict heart disease. The dataset contained heart disease-related clinical features and was sourced from the publicly available UCI ML repository. Exploratory data analysis was performed to find the characteristics of the data samples in terms of descriptive and inferential statistics. Specifically, it was carried out to identify and replace outliers using the interquartile range and to detect and replace missing values using the imputation method. Results were recorded before and after the data preprocessing techniques were applied. Out of all the algorithms, gradient boosting achieved the highest accuracy rate of 92.20% for the proposed model. The proposed model yielded better results with gradient boosting in terms of precision, recall, and f1-score. It attained better prediction performance than existing works and can be applied to other diseases that share common features by using transfer learning.
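A minimal sketch of the described preprocessing plus gradient boosting pipeline, using synthetic data in place of the UCI heart-disease set; the 1.5x IQR fences and default model parameters are conventional choices, not the paper's tuned values.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=13, random_state=0)

def clip_outliers_iqr(col):
    q1, q3 = np.percentile(col, [25, 75])
    iqr = q3 - q1
    # replace values beyond the 1.5x IQR fences with the fence values
    return np.clip(col, q1 - 1.5 * iqr, q3 + 1.5 * iqr)

X = np.apply_along_axis(clip_outliers_iqr, 0, X)   # per-feature outlier handling

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2%}")
```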
Internet of Things (IoT) devices are becoming increasingly ubiquitous, and their adoption is growing at an exponential rate. However, they are vulnerable to security breaches, and traditional security mechanisms are not enough to protect them. The massive amounts of data generated by IoT devices can be easily manipulated or stolen, posing significant privacy concerns. This paper provides a comprehensive overview of the integration of blockchain and IoT technologies and their potential to enhance the security and privacy of IoT systems. The paper examines various security issues and vulnerabilities in IoT and explores how blockchain-based solutions can be used to address them. It also discusses the potential applications of blockchain-based IoT (B-IoT) systems in various sectors, such as healthcare, transportation, and supply chain management. The paper reveals that the integration of blockchain and IoT has the potential to enhance the security, privacy, and trustworthiness of IoT systems. The multi-layered architecture of B-IoT, consisting of perception, network, data processing, and application layers, provides a comprehensive framework for the integration of blockchain and IoT technologies. The study identifies various security solutions for B-IoT, including smart contracts, decentralized control, immutable data storage, identity and access management (IAM), and consensus mechanisms, and it discusses the challenges and future research directions in the field of B-IoT.