The early implementation of treatment therapies necessitates the swift and precise identification of COVID-19 pneumonia through the analysis of chest CT scans. This study addresses the need for precise and interpretable diagnostic tools to improve clinical decision-making in COVID-19 diagnosis. The paper proposes a novel deep learning approach, called Conformer Network, for explainable discrimination of viral pneumonia based on the lung Region of Infection (ROI) within a single-modality radiographic CT scan. First, an efficient U-shaped transformer network is integrated for lung image segmentation. Then, a robust transfer learning technique is introduced to design a feature extractor based on the pre-trained lightweight Big Transfer (BiT-L) model, fine-tuned on medical data to effectively learn the patterns of infection in the input image. Second, this work presents a visual explanation method to guarantee clinical explainability for decisions made by the Conformer Network. Experimental evaluation on real-world CT data demonstrates that the diagnostic accuracy of the model outperforms state-of-the-art studies with statistical significance. The Conformer Network achieves 97.40% detection accuracy under cross-validation settings. The model not only achieves high sensitivity and specificity but also affords visualizations of the salient features contributing to each classification decision, enhancing overall transparency and trustworthiness. The findings indicate that the model can empower clinical staff by providing transparent intuition about the features driving diagnostic decisions.
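A minimal sketch of the transfer-learning step described above. A torchvision ResNet stands in for the pre-trained BiT-L backbone, and the frozen-backbone / trainable-head setup, learning rate, and class count are illustrative assumptions, not the authors' implementation.

```python
# Transfer-learning sketch: ImageNet-pretrained backbone, new classification head,
# fine-tuned on segmented lung-ROI CT slices. All names/values are illustrative.
import torch
import torch.nn as nn
from torchvision import models

def build_feature_extractor(num_classes: int = 2) -> nn.Module:
    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    for param in backbone.parameters():
        param.requires_grad = False                       # freeze pretrained layers
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # trainable head
    return backbone

model = build_feature_extractor()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```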
This study proposes a hybridization of two efficient algorithms: the Multi-objective Ant Lion Optimizer Algorithm (MOALO), a multi-objective enhanced version of the Ant Lion Optimizer Algorithm (ALO), and the Genetic Algorithm (GA). The MOALO version is employed to address problems containing many objectives, and an archive is employed to retain the non-dominated solutions. The uniqueness of the hybrid is that the GA operators of mutation and crossover are applied in the archive to update the solutions, and those solutions then go through the MOALO process. This is the first time these algorithms have been hybridized to solve multi-objective problems. The hybrid algorithm overcomes ALO's limitation of getting caught in local optima and GA's requirement of more computational effort to converge. To evaluate the hybrid algorithm's performance, a set of constrained and unconstrained test problems and engineering design problems were employed and compared with five well-known computational algorithms: MOALO, the Multi-objective Crystal Structure Algorithm (MOCryStAl), Multi-objective Particle Swarm Optimization (MOPSO), the Multi-objective Multiverse Optimization Algorithm (MOMVO), and the Multi-objective Salp Swarm Algorithm (MSSA). The outcomes of five performance metrics are statistically analyzed, and the most efficient Pareto fronts are compared. The proposed hybrid surpasses MOALO based on the results for hypervolume (HV), Spread, and Spacing, so the primary objective of developing this hybrid approach has been achieved. The proposed approach demonstrates superior performance on the test functions, showcasing robust convergence and comprehensive coverage that surpass other existing algorithms.
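A minimal sketch of the hybrid's key idea: GA crossover and mutation are applied to the archived non-dominated solutions before they re-enter the MOALO update. The operator details (single-point crossover, Gaussian mutation) and rates are assumptions for illustration only.

```python
import random

def ga_refresh_archive(archive, crossover_rate=0.9, mutation_rate=0.1):
    """archive: list of real-valued decision vectors (non-dominated solutions).
    Returns offspring produced by GA operators, to be fed back into MOALO."""
    offspring = []
    for _ in range(len(archive) // 2):
        p1, p2 = random.sample(archive, 2)
        if random.random() < crossover_rate:
            cut = random.randrange(1, len(p1))            # single-point crossover
            c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
        else:
            c1, c2 = p1[:], p2[:]
        for child in (c1, c2):
            if random.random() < mutation_rate:
                i = random.randrange(len(child))
                child[i] += random.gauss(0.0, 0.1)        # small Gaussian mutation
            offspring.append(child)
    return offspring

print(ga_refresh_archive([[0.1, 0.4, 0.9], [0.7, 0.2, 0.3], [0.5, 0.8, 0.6]]))
```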
Deep learning (DL) plays a critical role in processing and converting data into knowledge and decisions. DL technologies have been applied in a variety of applications, including image, video, and genome sequence analysis. The most widely utilized deep learning architecture is the Convolutional Neural Network (CNN), which is trained to learn discriminative features in a supervised setting. In comparison to other classic neural networks, a CNN makes use of a limited number of artificial neurons, so it is well suited to the recognition and processing of wheat gene sequences. Wheat is an essential cereal crop for people around the world, and wheat genotype identification affects the potential development of many countries' agricultural sectors. In quantitative genetics, the prediction of genetic values is a central issue. Wheat is an allohexaploid (AABBDD) with three distinct genomes. The wheat genome is quite large compared to many other species, and a diversity of genetic knowledge and normal structure is available for wheat breeding lines; therefore, genome sequence approaches based on Artificial Intelligence (AI) techniques are necessary. This paper focuses on using the wheat genome sequence to help wheat producers make better use of their genetic resources and manage genetic variation in their breeding programs, and it proposes a novel deep learning model that offers a fundamental overview of genomic prediction theory and current constraints. In this paper, the hyperparameters of the CNN are optimized to decrease the requirement for manual search and enhance network performance, using a new model built on an optimization algorithm and Convolutional Neural Networks (CNN).
Image segmentation is vital when analyzing medical images, especially magnetic resonance (MR) images of the brain. Recently, several image segmentation techniques based on multilevel thresholding have been proposed for medical image segmentation; however, the algorithms become trapped in local minima and have low convergence speeds, particularly as the number of threshold levels increases. Consequently, this paper develops a new multilevel thresholding image segmentation technique based on the jellyfish search algorithm (JSA), an optimizer. We modify the JSA to prevent descents into local minima and to accelerate convergence toward optimal solutions. The improvement is achieved by applying two novel strategies: ranking-based updating and an adaptive method. Ranking-based updating replaces undesirable solutions with solutions generated by a novel updating scheme that improves the quality of the removed solutions. We develop a new adaptive strategy to exploit the ability of the JSA to find a best-so-far solution while allowing a small amount of exploration to avoid descents into local minima. The two strategies are integrated with the JSA to produce an improved JSA (IJSA) that optimally thresholds brain MR images. To compare the performance of the IJSA and JSA, seven brain MR images were segmented at threshold levels of 3, 4, 5, 6, 7, 8, 10, 15, 20, 25, and 30. The IJSA was compared with several other recent image segmentation algorithms, including the improved and standard marine predator algorithms, the modified salp and standard salp swarm algorithms, the equilibrium optimizer, and the standard JSA, in terms of fitness, the Structural Similarity Index Metric (SSIM), the peak signal-to-noise ratio (PSNR), the standard deviation (SD), and the Feature Similarity Index Metric (FSIM). The experimental outcomes and the Wilcoxon rank-sum test demonstrate the superiority of the proposed algorithm in terms of the FSIM, the PSNR, the objective values, and the SD; in terms of the SSIM, the IJSA was competitive with the others.
Like the Covid-19 pandemic, the smallpox virus broke out in the last century, with about 500 million deaths reported along with enormous economic loss. But unlike smallpox, Covid-19 recorded a lower exponential infection rate and mortality rate due to advances in medical aid and diagnostics. Data analytics, machine learning, and automation techniques can help with early diagnostics and supporting the treatment of many reported patients. This paper proposes a robust and efficient methodology for the early detection of COVID-19 from chest X-ray scans utilizing enhanced deep learning techniques. Our study suggests that using the Prediction and Deconvolutional Modules in combination with the SSD architecture can improve the performance of a model trained on this task. We used a publicly open CXR image dataset and implemented the detection model with task-specific pre-processing and a near 80:20 train/test split. This achieved a competitive specificity of 0.9474 and a sensitivity/accuracy of 0.9597, which should support better decision-making for various aspects of identifying and treating the infection.
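For reference, the two metrics quoted above follow the standard binary-classification definitions; the small sketch below computes them from confusion-matrix counts (the example counts are made up, not the paper's data).

```python
def specificity_sensitivity(tp, tn, fp, fn):
    """Specificity = TN/(TN+FP); sensitivity (recall) = TP/(TP+FN)."""
    specificity = tn / (tn + fp)   # true-negative rate
    sensitivity = tp / (tp + fn)   # true-positive rate
    return specificity, sensitivity

# Illustrative counts from a hypothetical held-out 20% split:
print(specificity_sensitivity(tp=238, tn=180, fp=10, fn=10))
```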
Location information plays an important role in most applications of Wireless Sensor Networks (WSNs). Recently, many localization techniques have been proposed, but most of these deal with two-dimensional applications, whereas in three-dimensional applications the task is complex and there are large variations in altitude levels. In these 3D environments, sensors are placed on mountains for tracking and deployed in the air for monitoring pollution levels. For such applications, 2D localization models are not reliable, so the design of 3D localization systems in WSNs faces new challenges. In this paper, only a single anchor node is used to locate unknown nodes in a three-dimensional environment. In the simulation-based environment, the nodes with unknown locations move in the middle and lower layers, whereas the top layer is equipped with the single anchor node. A novel soft computing technique, the Adaptive Plant Propagation Algorithm (APPA), is introduced to obtain the optimized locations of these mobile nodes. These mobile target nodes are heterogeneous and deployed in an anisotropic environment with the Degree of Irregularity (DOI) set to 0.01. The simulation results show that the proposed APPA algorithm outperforms other meta-heuristic optimization techniques in terms of localization error, computational time, and the number of located sensor nodes.
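The localization-error metric used to compare such algorithms is typically the average 3-D Euclidean distance between estimated and true node positions; a minimal sketch with illustrative coordinates follows.

```python
import math

def mean_localization_error(estimated, actual):
    """Average 3-D Euclidean distance between estimated and true positions."""
    return sum(math.dist(e, a) for e, a in zip(estimated, actual)) / len(actual)

true_pos = [(10.0, 20.0, 5.0), (30.0, 12.0, 8.0)]   # illustrative ground truth
est_pos  = [(10.4, 19.7, 5.2), (29.5, 12.6, 7.9)]   # illustrative APPA output
print(mean_localization_error(est_pos, true_pos))
```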
Autism Spectrum Disorder (ASD) is a developmental disorder whose symptoms become noticeable in the early years of age, though it can be present in any age group. ASD is a mental disorder that affects communicational, social, and non-verbal behaviors. It cannot be cured completely, but its impact can be reduced if it is detected early. An early diagnosis is hampered by the variation and severity of ASD symptoms, as well as by symptoms that are also commonly seen in other mental disorders. Nowadays, with the emergence of deep learning approaches in various fields, medical experts can be assisted in the early diagnosis of ASD. It is very difficult for a practitioner to identify and concentrate on the major features leading to an accurate prediction of ASD, which creates the need for an automated approach. Also, the presence of different ASD traits among toddlers leads to the creation of a large feature dataset. In this study, we propose a hybrid approach comprising both deep learning and Explainable Artificial Intelligence (XAI) to find the most contributing features for the early and precise prediction of ASD. The proposed framework gives more accurate predictions along with recommendations based on the predicted results, which will be a vital clinical aid for better and earlier prediction of ASD traits among toddlers.
Air pollution is one of the major concerns regarding detriments to human health. This type of pollution leads to several health problems for humans, such as asthma, heart issues, skin diseases, bronchitis, lung cancer, and throat and eye infections. Air pollution also poses serious issues for the planet; pollution from the vehicle industry contributes to the greenhouse effect and CO2 emissions. Thus, real-time monitoring of air pollution in these areas helps local authorities analyze the current situation of the city and take necessary actions. The monitoring process has become efficient and dynamic with the advancement of the Internet of Things and wireless sensor networks. Localization is the main issue in WSNs; if the sensor node location is unknown, then coverage, power, and routing are not optimal. This study concentrates on a localization-based air pollution prediction system for real-time monitoring of smart cities. The system comprises two phases. First, an area is predicted as heavy- or light-traffic using the Gaussian support vector machine algorithm based on air pollutants such as particulate matter (PM2.5 and PM10), nitrogen dioxide (NO2), carbon monoxide (CO), ozone (O3), and sulfur dioxide (SO2). Then, the sensor nodes are localized on the basis of the predicted area using a meta-heuristic algorithm called fast correlation-based elephant herding optimization. The dataset is divided into training and testing parts based on 10-fold cross-validation, and the evaluation of predicting the air pollutant for localization is performed with the training dataset. The mean error in localizing nodes is 9.83, which is lower than existing solutions, and the accuracy is 95%.
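A minimal sketch of the first (prediction) phase, assuming a Gaussian SVM means an RBF-kernel support vector classifier over the six listed pollutants. The tiny feature matrix, labels, and fold count here are placeholders, not the study's data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# columns: PM2.5, PM10, NO2, CO, O3, SO2 (illustrative readings)
X = np.array([[35.0, 60.0, 40.0, 0.9, 30.0, 12.0],
              [12.0, 25.0, 15.0, 0.3, 55.0,  4.0],
              [48.0, 80.0, 55.0, 1.2, 22.0, 18.0],
              [ 9.0, 18.0, 10.0, 0.2, 60.0,  3.0]])
y = np.array([1, 0, 1, 0])                 # 1 = heavy traffic, 0 = light traffic

clf = SVC(kernel="rbf", gamma="scale")     # "Gaussian SVM" = RBF kernel
scores = cross_val_score(clf, X, y, cv=2)  # the study uses 10-fold on real data
print(scores.mean())
```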
Today, social media has become a communication channel through which people share their happiness, sadness, and anger with other users. Knowing people's emotions is very important for identifying depressed people from their messages; early depression detection helps save people's lives and prevent other dangerous mental diseases. There are many intelligent algorithms for predicting depression with high accuracy, but they lack the definition of such cases. Several machine learning methods help to identify depressed people, but the accuracy of existing methods has not been satisfactory. To overcome this issue, a deep learning method is used in the proposed approach for depression detection. In this paper, a Deep Learning Multi-Aspect Depression Detection with Hierarchical Attention Network (MDHAN) is used for classifying the depression data. Initially, the Twitter data is preprocessed by tokenization, punctuation mark removal, stop word removal, stemming, and lemmatization. The Adaptive Particle and Grey Wolf optimization methods are used for feature selection. The MDHAN classifies the Twitter data and predicts the depressed and non-depressed users. Finally, the proposed method is compared with existing methods such as the Convolutional Neural Network (CNN), Support Vector Machine (SVM), Minimum Description Length (MDL), and MDHAN. The suggested MDH-PWO architecture attains 99.86% accuracy, higher than frequency-based deep learning models, with a lower false-positive rate. The experimental results show that the proposed method achieves better accuracy, precision, recall, and F1-measure, and it also minimizes the execution time.
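An illustrative version of the preprocessing pipeline listed above (tokenization, punctuation removal, stop-word removal, stemming, lemmatization), assuming NLTK for the stop-word list, stemmer, and lemmatizer; the exact steps and resources in the paper may differ.

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download("stopwords", quiet=True)   # one-time corpus downloads
nltk.download("wordnet", quiet=True)

def preprocess(tweet: str) -> list:
    tokens = re.findall(r"[a-z']+", tweet.lower())                  # tokenize, drop punctuation
    stop = set(stopwords.words("english"))
    tokens = [t for t in tokens if t not in stop]                   # stop-word removal
    stemmer, lemmatizer = PorterStemmer(), WordNetLemmatizer()
    return [lemmatizer.lemmatize(stemmer.stem(t)) for t in tokens]  # stem + lemmatize

print(preprocess("I feel so hopeless and tired of everything today..."))
```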
Cognitive Radio (CR) has been developed as an enabling technology that allows the unused or underused spectrum to be used dynamically to increase spectral efficiency. To improve the overall performance of the CR system, it is extremely important to adapt or reconfigure the system parameters. The Decision Engine is a major module in a CR-based system that not only includes radio monitoring and cognition functions but is also responsible for parameter adaptation. As meta-heuristic algorithms offer numerous advantages compared to traditional mathematical approaches, the performance of these algorithms is investigated in order to design an efficient CR system that can adapt the transmitting parameters to effectively reduce power consumption, bit error rate, and adjacent-channel interference while maximizing secondary-user throughput. The Self-Learning Salp Swarm Algorithm (SLSSA) is a recent meta-heuristic algorithm, an enhanced version of the SSA inspired by the swarming behavior of salps. In this work, the parametric adaptation of the CR system is performed by SLSSA, and the simulation results show that SLSSA has high accuracy and stability and outperforms other competitive algorithms for maximizing the throughput of secondary users. The results obtained with SLSSA are also extremely satisfactory and need fewer iterations to converge compared to the competitive methods.
The present study reports the magnetization and magneto-transport properties of PrFe1-xNixO3 thin films grown by the pulsed laser ablation technique on LaAlO3 substrates. From the DC M/H plots of these films, weak ferromagnetic or ferrimagnetic behavior is observed. With Ni substitution, a reduction in saturation magnetization is also seen. With Ni doping, variations in the saturation field (Hs), coercive field (Hc), Weiss temperature (θ), and effective magnetic moment (μeff) are seen. A small change of magnetoresistance with the application of a higher field is observed. Various essential parameters, such as the density of states (Nf) at the Fermi level, Mott's characteristic temperature (T0), and the activation energy (Ea) in the presence and absence of a magnetic field, are calculated. The observed magnetic properties are related to the change of the Fe-O bond length (causing an overlap between the oxygen p orbital and the iron d orbital) and the deviation of the Fe-O-Fe angle from 180°. The reduction of magnetic domains after Ni doping is also explored to explain the observed magnetic behavior of the system. The influence of doping on various transport properties in these thin films indicates a distortion in the lattice structure and single-particle bandwidth, owing to a stress-induced reduction in unit cell volume.
Sleep apnea syndrome (SAS) is a breathing disorder that occurs while a person is asleep. The traditional method for examining SAS is polysomnography (PSG). The standard PSG procedure requires complete overnight observation in a laboratory. PSG typically provides accurate results, but it is expensive and time consuming, and for people with sleep apnea (SA), available beds and laboratories are limited; as a result, it may produce inaccurate diagnoses. Thus, this paper proposes an Internet of Medical Things (IoMT) framework with a machine learning concept of a fully connected neural network (FCNN) with a k-nearest neighbor (k-NN) classifier. The paper describes smart monitoring of a patient's sleeping habits and diagnosis of SA using FCNN-KNN plus the average square error (ASE). For diagnosing SA, an oxygen saturation (SpO2) sensor device is commonly used to monitor the heart rate and blood oxygen level. This diagnostic information is securely stored in the IoMT fog computing network. Doctors can carefully monitor the SA patient remotely on the basis of the sensor values, which are efficiently stored in the fog computing network. The proposed technique takes less than 0.2 s with an accuracy of 95%, which is higher than existing models.
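A minimal sketch of the k-NN stage only, classifying apnea events from SpO2 and heart-rate readings; the tiny dataset, features, and k value are illustrative assumptions, and the FCNN and average-square-error components of the paper's pipeline are not shown.

```python
from sklearn.neighbors import KNeighborsClassifier

# features: [SpO2 (%), heart rate (bpm)] -- illustrative readings
X_train = [[97, 68], [96, 72], [88, 95], [85, 102], [95, 75], [84, 110]]
y_train = [0, 0, 1, 1, 0, 1]              # 1 = apnea event, 0 = normal breathing

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
print(knn.predict([[87, 98]]))            # low SpO2 + elevated HR -> likely apnea
```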
Evaluation of commercial banks' (CBs) performance has been a significant issue in the financial world and is treated as a multi-criteria decision making (MCDM) problem. Numerous studies assess CB performance according to different metrics and standards. Because of uncertainty in decision-making problems and large economic variations in Egypt, this research proposes a plithogenic-based model to evaluate Egyptian commercial banks' performance based on a set of criteria. The proposed model evaluates the top ten Egyptian commercial banks based on three main metrics, including financial, customer satisfaction, and qualitative evaluation, and 19 sub-criteria. The proportional importance of the selected criteria is evaluated by the Analytic Hierarchy Process (AHP). Furthermore, the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), Vlse Kriterijumska Optimizacija Kompromisno Resenje (VIKOR), and COmplex PRoportional ASsessment (COPRAS) are adopted to rank the top ten Egyptian banks based on their performance, comparatively. The main contribution of this research is applying the proposed integrated MCDM framework under the plithogenic environment to measure the performance of the CBs under uncertainty. All results show that CIB has the best performance while Faisal Islamic Bank and Bank Audi have the least performance among the top 10 CBs in Egypt.
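A sketch of the crisp TOPSIS step used to rank alternatives once criterion weights are known (for example, from AHP). The decision matrix, weights, and benefit flags are illustrative, and the plithogenic aggregation that precedes this step in the paper is not reproduced.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j] True for benefit criteria."""
    norm = matrix / np.linalg.norm(matrix, axis=0)           # vector normalisation
    v = norm * weights                                       # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))  # positive ideal solution
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))  # negative ideal solution
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti,  axis=1)
    return d_neg / (d_pos + d_neg)                           # closeness coefficients

scores = topsis(np.array([[0.8, 0.6, 0.7],
                          [0.6, 0.9, 0.5],
                          [0.7, 0.7, 0.9]]),
                weights=np.array([0.5, 0.3, 0.2]),
                benefit=np.array([True, True, True]))
print(scores.argsort()[::-1])   # bank indices ranked best-first
```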
The critical path method is one of the oldest and most important techniques used for planning and scheduling projects. The main objective of project management science is to determine the critical path through a network representation of a project. The critical path through a network can be determined by many algorithms and is useful for managing, monitoring, and controlling the time and cost of an entire project. The essential problem in this case is that activity durations are uncertain; time presents considerable uncertainty because the duration of an activity is not always easily or accurately estimated. This issue increases the need to use neutrosophic theory to solve the critical path problem. Real-world problems are characterized by a lack of precision, consistency, and completeness. The concept of neutrosophic sets has been introduced as a generalization of fuzzy, intuitionistic fuzzy, and crisp sets to overcome the ambiguity surrounding real-world problems. Truth-, falsity-, and indeterminacy-membership functions are used to express neutrosophic elements. This study examines a neutrosophic event-oriented algorithm for determining the critical path in activity-on-arc networks. The activity time estimates are presented as trapezoidal neutrosophic numbers, and score and accuracy functions are used to obtain a crisp model of the problem. An appropriate numerical example is then used to explain the proposed method.
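An illustrative pipeline for this idea: each trapezoidal neutrosophic duration is defuzzified with a score function, and a standard CPM forward pass is run on the resulting crisp durations. The score-function form below is one commonly cited variant and, like the tiny network, is an assumption; the paper's exact functions may differ.

```python
def score(a1, a2, a3, a4, T, I, F):
    # one widely cited score function for trapezoidal neutrosophic numbers (assumed form)
    return (a1 + a2 + a3 + a4) / 16.0 * (2 + T - I - F)

def earliest_times(activities):
    """activities: {name: (crisp_duration, [predecessors])}, topologically ordered.
    Returns earliest finish time of each activity (CPM forward pass)."""
    finish = {}
    for name, (dur, preds) in activities.items():
        start = max((finish[p] for p in preds), default=0.0)
        finish[name] = start + dur
    return finish

net = {"A": (score(3, 4, 5, 6, 0.8, 0.2, 0.1), []),
       "B": (score(2, 3, 4, 5, 0.7, 0.3, 0.2), ["A"]),
       "C": (score(4, 5, 6, 7, 0.9, 0.1, 0.1), ["A"]),
       "D": (score(1, 2, 3, 4, 0.8, 0.2, 0.2), ["B", "C"])}
print(earliest_times(net))   # project duration = finish time of "D"
```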
Project scheduling is a key objective of many models and is the proposed method for project planning and management. Project scheduling problems depend on precedence relationships and resource constraints, in addition to other limitations for achieving a subset of goals. Deterministic project scheduling models consider all information about the scheduling problem, such as activity durations, precedence relationships, and the resources available and required, to be known and stable during the implementation process. The concept of deterministic project scheduling conflicts with real situations, in which data on the project's activity durations and the degree of resource availability often change or may take different modes and strategies during project implementation; this motivates dealing with the multi-mode conditions surrounding projects and their activity durations. Scheduling the multi-mode resource-constrained project problem is an optimization problem in which minimizing the project duration subject to resource availability is of particular interest. We use a multi-mode resource allocation and scheduling model that takes into account the dynamic features of all parameters; that is, the scheduling process must be flexible with respect to dynamic environment features. In this paper, we propose five priority heuristic rules for scheduling multi-mode resource-constrained projects under dynamic features for more realistic situations, and we apply the proposed heuristic rules (PHR) to schedule multi-mode resource-constrained projects. Five projects are considered as test problems for the PHR. The results rendered by these priority rules for the test problems are compared with the results obtained from 10 well-known heuristic rules on the same test problems. In many cases the results of the proposed priority rules are very promising: they achieve better scheduling dates in many test case problems and the same results for the others. The proposed model is based on the dynamic features of the project topography.
Wireless Sensor Network (WSN) technology is a real-time application area that is growing rapidly as a result of smart environments. Battery power is one of the most significant resources in a WSN, and clustering techniques are used to enhance the power factor, since forwarding data in a WSN consumes considerable power. The existing system works with the Load Balanced Clustering Method (LBCM) and provides network lifespan with scalability and reliability, but it does not deal with end-to-end delay and the delivery of packets. To overcome these issues in WSNs, the proposed Genetic Algorithm based on Chicken Swarm Optimization (GA-CSO) with the Load Balanced Clustering Method (LBCM) is used. The Genetic Algorithm generates chromosomes in an arbitrary manner, and then the chromosome values are evaluated using a fitness function. Chicken Swarm Optimization (CSO) helps to solve complex optimization problems; the swarm consists of roosters, hens, and chicks and is divided into clusters. The Load Balanced Clustering Method (LBCM) maintains the energy during communication among the sensor nodes and also balances the load on the gateways. The proposed GA-CSO with LBCM improves the lifespan of the network; moreover, it minimizes energy consumption and balances the load over the network. The proposed method outperforms others on the following metrics: energy efficiency, packet delivery ratio, network throughput, and sensor node lifetime. The evaluation results show that the achieved energy efficiency is 83.56% and the packet delivery ratio reaches 99.12%. It also attains a linear standard deviation and reduces the end-to-end delay to 97.32 ms.
Currently, relational database management systems (RDBMSs) face different challenges in application development due to the massive growth of unstructured and semi-structured data. This has introduced new DBMS categories, known as not only structured query language (NoSQL) DBMSs, which do not adhere to the relational model. The migration from relational databases to NoSQL databases is challenging due to data complexity. This study aims to enhance the storage performance of RDBMSs in handling a variety of data. The paper presents two approaches. The first approach proposes a convenient representation for unstructured data storage; several extensive experiments were implemented to assess the efficiency of this approach, which could result in substantial improvements in RDBMS storage. The second approach proposes using the JavaScript Object Notation (JSON) format to represent multivalued attributes and many-to-many (M:N) relationships in relational databases to create a flexible schema and store semi-structured data. The results indicate that the proposed approaches outperform similar approaches and improve data storage performance, which helps preserve software stability in large organizations by improving existing software packages whose replacement may be highly costly.
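A minimal sketch of the second approach's idea: a multivalued attribute is stored as a JSON document inside a single relational column instead of a separate junction table. SQLite is used here only to keep the demo self-contained, and the table and column names are illustrative, not the paper's schema.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, phones TEXT)")

# the multivalued attribute "phones" is serialised as JSON in one column
conn.execute("INSERT INTO employee VALUES (?, ?, ?)",
             (1, "Aya", json.dumps(["+20-100-111", "+20-122-222"])))

row = conn.execute("SELECT name, phones FROM employee WHERE id = 1").fetchone()
print(row[0], json.loads(row[1]))   # -> Aya ['+20-100-111', '+20-122-222']
```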
The high-efficiency video coder (HEVC) is one of the most advanced techniques used in today's growing real-time multimedia applications. However, these applications require large bandwidth for transmission, and the required bandwidth varies with different video sequences/formats. This paper proposes an adaptive information-based variable quantization matrix (AIVQM) developed for different video formats having variable energy levels. The quantization method is adapted to the video sequence using statistical analysis, improving the bit budget and quality while reducing complexity. Further, to have precise control over bit rate and quality, a multi-constraint prune algorithm is proposed in the second stage of the AIVQM technique for pre-calculating K paths, so the coder can self-adapt and automatically choose one of the K paths as the available bandwidth changes dynamically. After extensive testing of the proposed algorithm in the multi-constraint environment for multiple paths and evaluating the performance based on the peak signal-to-noise ratio (PSNR), bit budget, and time complexity for different videos, a noticeable improvement in rate-distortion (RD) performance is achieved. Using the proposed AIVQM technique, more feasible and efficient video sequences are achieved with less loss in PSNR than with the variable quantization method (VQM) algorithm, a gain of approximately 10%-20% depending on the video sequence/format.
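For reference, PSNR, the quality metric quoted above, is computed as 10·log10(peak²/MSE); the sketch below evaluates it for 8-bit frames, with random arrays standing in for original and decoded video frames.

```python
import numpy as np

def psnr(original, decoded, peak=255.0):
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)              # "original"
noisy = np.clip(frame + rng.normal(0, 4, frame.shape), 0, 255).astype(np.uint8)  # "decoded"
print(round(psnr(frame, noisy), 2), "dB")
```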
Wireless sensor networks (WSNs) are made up of several sensors located in a specific area and powered by a finite amount of energy to gather environmental data. WSNs use sensor nodes (SNs) to collect and transmit data. However, the power supplied to the sensor network is restricted; thus, SNs must conserve energy as much as possible to extend the lifespan of the network. In the proposed study, effective clustering and longer network lifetimes are achieved using multi-swarm optimization (MSO) and game theory based on locust search (LS-II). In this research, MSO is used to improve the optimum routing, while the LS-II approach is employed to specify the number of cluster heads (CHs) and select the best ones. After the CHs are identified, the other sensor nodes are allocated to the CHs closest to them. A game theory-based energy-efficient clustering approach is applied to the WSN, in which each SN is considered a player in the game. An SN can adopt strategies that benefit itself, depending on the length of the idle listening time in the active phase, and then decide whether or not to rest. The proposed multi-swarm with energy-efficient game theory on locust search (MSGE-LS) efficiently selects CHs, minimizes energy consumption, and improves the lifetime of networks. The findings of this study indicate that the proposed MSGE-LS is an effective method, with improvements in the number of clusters, average energy consumption, lifespan extension, average packet loss, and end-to-end delay.
基金funded by King Saud University,Riyadh,Saudi Arabia.Researchers Supporting Project Number(RSP2024R167),King Saud University,Riyadh,Saudi Arabia.
文摘The early implementation of treatment therapies necessitates the swift and precise identification of COVID-19 pneumonia by the analysis of chest CT scans.This study aims to investigate the indispensable need for precise and interpretable diagnostic tools for improving clinical decision-making for COVID-19 diagnosis.This paper proposes a novel deep learning approach,called Conformer Network,for explainable discrimination of viral pneumonia depending on the lung Region of Infections(ROI)within a single modality radiographic CT scan.Firstly,an efficient U-shaped transformer network is integrated for lung image segmentation.Then,a robust transfer learning technique is introduced to design a robust feature extractor based on pre-trained lightweight Big Transfer(BiT-L)and finetuned on medical data to effectively learn the patterns of infection in the input image.Secondly,this work presents a visual explanation method to guarantee clinical explainability for decisions made by Conformer Network.Experimental evaluation of real-world CT data demonstrated that the diagnostic accuracy of ourmodel outperforms cutting-edge studies with statistical significance.The Conformer Network achieves 97.40% of detection accuracy under cross-validation settings.Our model not only achieves high sensitivity and specificity but also affords visualizations of salient features contributing to each classification decision,enhancing the overall transparency and trustworthiness of our model.The findings provide obvious implications for the ability of our model to empower clinical staff by generating transparent intuitions about the features driving diagnostic decisions.
基金supported by the National Research Foundation of Korea(NRF)Grant funded by the Korea government(MSIT)(No.RS-2023-00218176)the Soonchunhyang University Research Fund.
文摘This study proposes a hybridization of two efficient algorithm’s Multi-objective Ant Lion Optimizer Algorithm(MOALO)which is a multi-objective enhanced version of the Ant Lion Optimizer Algorithm(ALO)and the Genetic Algorithm(GA).MOALO version has been employed to address those problems containing many objectives and an archive has been employed for retaining the non-dominated solutions.The uniqueness of the hybrid is that the operators like mutation and crossover of GA are employed in the archive to update the solutions and later those solutions go through the process of MOALO.A first-time hybrid of these algorithms is employed to solve multi-objective problems.The hybrid algorithm overcomes the limitation of ALO of getting caught in the local optimum and the requirement of more computational effort to converge GA.To evaluate the hybridized algorithm’s performance,a set of constrained,unconstrained test problems and engineering design problems were employed and compared with five well-known computational algorithms-MOALO,Multi-objective Crystal Structure Algorithm(MOCryStAl),Multi-objective Particle Swarm Optimization(MOPSO),Multi-objective Multiverse Optimization Algorithm(MOMVO),Multi-objective Salp Swarm Algorithm(MSSA).The outcomes of five performance metrics are statistically analyzed and the most efficient Pareto fronts comparison has been obtained.The proposed hybrid surpasses MOALO based on the results of hypervolume(HV),Spread,and Spacing.So primary objective of developing this hybrid approach has been achieved successfully.The proposed approach demonstrates superior performance on the test functions,showcasing robust convergence and comprehensive coverage that surpasses other existing algorithms.
基金This research was supported by Korea Institute for Advancement of Technology(KIAT)grant funded by the Korea Government(MOTIE)(P0012724,The Competency Development Program for Industry Specialist)the National Research Foundation of Korea(NRF)grant funded by theKorea government(MSIT)(No.RS-2023-00218176)the Soonchunhyang University Research Fund.
文摘Deep learning(DL)plays a critical role in processing and converting data into knowledge and decisions.DL technologies have been applied in a variety of applications,including image,video,and genome sequence analysis.In deep learning the most widely utilized architecture is Convolutional Neural Networks(CNN)are taught discriminatory traits in a supervised environment.In comparison to other classic neural networks,CNN makes use of a limited number of artificial neurons,therefore it is ideal for the recognition and processing of wheat gene sequences.Wheat is an essential crop of cereals for people around the world.Wheat Genotypes identification has an impact on the possible development of many countries in the agricultural sector.In quantitative genetics prediction of genetic values is a central issue.Wheat is an allohexaploid(AABBDD)with three distinct genomes.The sizes of the wheat genome are quite large compared to many other kinds and the availability of a diversity of genetic knowledge and normal structure at breeding lines of wheat,Therefore,genome sequence approaches based on techniques of Artificial Intelligence(AI)are necessary.This paper focuses on using the Wheat genome sequence will assist wheat producers in making better use of their genetic resources and managing genetic variation in their breeding program,as well as propose a novel model based on deep learning for offering a fundamental overview of genomic prediction theory and current constraints.In this paper,the hyperparameters of the network are optimized in the CNN to decrease the requirement for manual search and enhance network performance using a new proposed model built on an optimization algorithm and Convolutional Neural Networks(CNN).
基金This research was supported by the Korea Institute for Advancement of Technology(KIAT)grant funded by the Korea Government(MOTIE)(P0012724,The Competency Development Program for Industry Specialist)and the Soonchunhyang University Research Fund.
文摘Image segmentation is vital when analyzing medical images,especially magnetic resonance(MR)images of the brain.Recently,several image segmentation techniques based on multilevel thresholding have been proposed for medical image segmentation;however,the algorithms become trapped in local minima and have low convergence speeds,particularly as the number of threshold levels increases.Consequently,in this paper,we develop a new multilevel thresholding image segmentation technique based on the jellyfish search algorithm(JSA)(an optimizer).We modify the JSA to prevent descents into local minima,and we accelerate convergence toward optimal solutions.The improvement is achieved by applying two novel strategies:Rankingbased updating and an adaptive method.Ranking-based updating is used to replace undesirable solutions with other solutions generated by a novel updating scheme that improves the qualities of the removed solutions.We develop a new adaptive strategy to exploit the ability of the JSA to find a best-so-far solution;we allow a small amount of exploration to avoid descents into local minima.The two strategies are integrated with the JSA to produce an improved JSA(IJSA)that optimally thresholds brain MR images.To compare the performances of the IJSA and JSA,seven brain MR images were segmented at threshold levels of 3,4,5,6,7,8,10,15,20,25,and 30.IJSA was compared with several other recent image segmentation algorithms,including the improved and standard marine predator algorithms,the modified salp and standard salp swarm algorithms,the equilibrium optimizer,and the standard JSA in terms of fitness,the Structured Similarity Index Metric(SSIM),the peak signal-to-noise ratio(PSNR),the standard deviation(SD),and the Features Similarity Index Metric(FSIM).The experimental outcomes and the Wilcoxon rank-sum test demonstrate the superiority of the proposed algorithm in terms of the FSIM,the PSNR,the objective values,and the SD;in terms of the SSIM,IJSA was competitive with the others.
文摘Like the Covid-19 pandemic,smallpox virus infection broke out in the last century,wherein 500 million deaths were reported along with enormous economic loss.But unlike smallpox,the Covid-19 recorded a low exponential infection rate and mortality rate due to advancement inmedical aid and diagnostics.Data analytics,machine learning,and automation techniques can help in early diagnostics and supporting treatments of many reported patients.This paper proposes a robust and efficient methodology for the early detection of COVID-19 from Chest X-Ray scans utilizing enhanced deep learning techniques.Our study suggests that using the Prediction and Deconvolutional Modules in combination with the SSD architecture can improve the performance of the model trained at this task.We used a publicly open CXR image dataset and implemented the detectionmodelwith task-specific pre-processing and near 80:20 split.This achieved a competitive specificity of 0.9474 and a sensibility/accuracy of 0.9597,which shall help better decision-making for various aspects of identification and treat the infection.
基金This research was supported by X-mind Corps program of National Research Foundation of Korea(NRF)funded by the Ministry of Science,ICT(No.2019H1D8A1105622)and the Soonchunhyang University Research Fund.
文摘Location information plays an important role in most of the applications in Wireless Sensor Network(WSN).Recently,many localization techniques have been proposed,while most of these deals with two Dimensional applications.Whereas,in Three Dimensional applications the task is complex and there are large variations in the altitude levels.In these 3D environments,the sensors are placed in mountains for tracking and deployed in air for monitoring pollution level.For such applications,2D localization models are not reliable.Due to this,the design of 3D localization systems in WSNs faces new challenges.In this paper,in order to find unknown nodes in Three-Dimensional environment,only single anchor node is used.In the simulation-based environment,the nodes with unknown locations are moving at middle&lower layers whereas the top layer is equipped with single anchor node.A novel soft computing technique namely Adaptive Plant Propagation Algorithm(APPA)is introduced to obtain the optimized locations of these mobile nodes.Thesemobile target nodes are heterogeneous and deployed in an anisotropic environment having an Irregularity(Degree of Irregularity(DOI))value set to 0.01.The simulation results present that proposed APPAalgorithm outperforms as tested among other meta-heuristic optimization techniques in terms of localization error,computational time,and the located sensor nodes.
基金Authors would like to thank for the support of Taif University Researchers Supporting Project Number(TURSP−2020/10),Taif University,Taif,Saudi Arabia.
文摘Autism Spectrum Disorder (ASD) is a developmental disorderwhose symptoms become noticeable in early years of the age though it canbe present in any age group. ASD is a mental disorder which affects the communicational, social and non-verbal behaviors. It cannot be cured completelybut can be reduced if detected early. An early diagnosis is hampered by thevariation and severity of ASD symptoms as well as having symptoms commonly seen in other mental disorders as well. Nowadays, with the emergenceof deep learning approaches in various fields, medical experts can be assistedin early diagnosis of ASD. It is very difficult for a practitioner to identifyand concentrate on the major feature’s leading to the accurate prediction ofthe ASD and this arises the need for having an automated approach. Also,presence of different symptoms of ASD traits amongst toddlers directs tothe creation of a large feature dataset. In this study, we propose a hybridapproach comprising of both, deep learning and Explainable Artificial Intelligence (XAI) to find the most contributing features for the early and preciseprediction of ASD. The proposed framework gives more accurate predictionalong with the recommendations of predicted results which will be a vital aidclinically for better and early prediction of ASD traits amongst toddlers.
基金The authors would like to acknowledge the support of Taif UniversityResearchers Supporting Project number (TURSP-2020/10), Taif University, Taif, Saudi Arabia.
文摘Air pollution is one of the major concerns considering detriments to human health.This type of pollution leads to several health problems for humans,such as asthma,heart issues,skin diseases,bronchitis,lung cancer,and throat and eye infections.Air pollution also poses serious issues to the planet.Pollution from the vehicle industry is the cause of greenhouse effect and CO2 emissions.Thus,real-time monitoring of air pollution in these areas will help local authorities to analyze the current situation of the city and take necessary actions.The monitoring process has become efficient and dynamic with the advancement of the Internet of things and wireless sensor networks.Localization is the main issue in WSNs;if the sensor node location is unknown,then coverage and power and routing are not optimal.This study concentrates on localization-based air pollution prediction systems for real-time monitoring of smart cities.These systems comprise two phases considering the prediction as heavy or light traffic area using the Gaussian support vector machine algorithm based on the air pollutants,such as PM2.5 particulate matter,PM10,nitrogen dioxide(NO2),carbon monoxide(CO),ozone(O3),and sulfur dioxide(SO2).The sensor nodes are localized on the basis of the predicted area using the meta-heuristic algorithms called fast correlation-based elephant herding optimization.The dataset is divided into training and testing parts based on 10 cross-validations.The evaluation on predicting the air pollutant for localization is performed with the training dataset.Mean error prediction in localizing nodes is 9.83 which is lesser than existing solutions and accuracy is 95%.
基金funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2022R300),Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Today social media became a communication line among people to share their happiness,sadness,and anger with their end-users.It is necessary to know people’s emotions are very important to identify depressed people from their messages.Early depression detection helps to save people’s lives and other dangerous mental diseases.There are many intelligent algorithms for predicting depression with high accuracy,but they lack the definition of such cases.Several machine learning methods help to identify depressed people.But the accuracy of existing methods was not satisfactory.To overcome this issue,the deep learning method is used in the proposed method for depression detection.In this paper,a novel Deep Learning Multi-Aspect Depression Detection with Hierarchical Atten-tion Network(MDHAN)is used for classifying the depression data.Initially,the Twitter data was preprocessed by tokenization,punctuation mark removal,stop word removal,stemming,and lemmatization.The Adaptive Particle and grey Wolf optimization methods are used for feature selection.The MDHAN classifies the Twitter data and predicts the depressed and non-depressed users.Finally,the proposed method is compared with existing methods such as Convolutional Neur-al Network(CNN),Support Vector Machine(SVM),Minimum Description Length(MDL),and MDHAN.The suggested MDH-PWO architecture gains 99.86%accuracy,more significant than frequency-based deep learning models,with a lower false-positive rate.The experimental result shows that the proposed method achieves better accuracy,precision,recall,and F1-measure.It also mini-mizes the execution time.
基金The authors would like to thank for the support from Taif University Researchers Supporting Project Number(TURSP-2020/239),Taif University,Taif,Saudi Arabia。
文摘CognitiveRadio(CR)has been developed as an enabling technology that allows the unused or underused spectrum to be used dynamically to increase spectral efficiency.To improve the overall performance of the CR systemit is extremely important to adapt or reconfigure the systemparameters.The Decision Engine is a major module in the CR-based system that not only includes radio monitoring and cognition functions but also responsible for parameter adaptation.As meta-heuristic algorithms offer numerous advantages compared to traditional mathematical approaches,the performance of these algorithms is investigated in order to design an efficient CR system that is able to adapt the transmitting parameters to effectively reduce power consumption,bit error rate and adjacent interference of the channel,while maximized secondary user throughput.Self-Learning Salp Swarm Algorithm(SLSSA)is a recent meta-heuristic algorithm that is the enhanced version of SSA inspired by the swarming behavior of salps.In this work,the parametric adaption of CR system is performed by SLSSA and the simulation results show that SLSSA has high accuracy,stability and outperforms other competitive algorithms formaximizing the throughput of secondary users.The results obtained with SLSSA are also shown to be extremely satisfactory and need fewer iterations to converge compared to the competitive methods.
文摘The present study reports the magnetizations and magneto-transport properties of PrFel_xNixO3 thin films grown by pulsed laser ablation technique on LaA103 snbstrates. From DC M/H plots of these films, weak ferromagnetism or ferrimagnetism behaviors are observed. With Ni substitution, reduction in saturation magnetization is also seen. With Ni doping, variations in saturation field (Hs), coercive field (Hc), Weiss temperature (0), and effective magnetic moment (Pelf) are seen. A small change of magnetoresitance with application of higher field is observed. Various essential parameters like density of state (Nf) at Fermi level, Mott's characteristic temperature (To), and activation energy (Ea) in the presence of and in the absence of magnetic field are calculated. The present observed magnetic properties are related to the change of Fe-O bond length (causing an overlap between the oxygen p orbital and iron d orbital) and the deviation of the Fe-O-Fe angle from 180~. Reduction of magnetic domain after Ni doping is also explored to explain the present observed magnetic behavior of the system. The influence of doping on various transport properties in these thin films indicates a distortion in the lattice structure and single particle band width, owing to stress-induced reduction in unit cell volume.
基金Taif University Researchers Supporting Project Number(TURSP-2020/98),Taif University,Taif,Saudi Arabia.
文摘Sleep apnea syndrome(SAS)is a breathing disorder while a person is asleep.The traditional method for examining SAS is Polysomnography(PSG).The standard procedure of PSG requires complete overnight observation in a laboratory.PSG typically provides accurate results,but it is expensive and time consuming.However,for people with Sleep apnea(SA),available beds and laboratories are limited.Resultantly,it may produce inaccurate diagnosis.Thus,this paper proposes the Internet of Medical Things(IoMT)framework with a machine learning concept of fully connected neural network(FCNN)with k-near-est neighbor(k-NN)classifier.This paper describes smart monitoring of a patient’s sleeping habit and diagnosis of SA using FCNN-KNN+average square error(ASE).For diagnosing SA,the Oxygen saturation(SpO2)sensor device is popularly used for monitoring the heart rate and blood oxygen level.This diagnosis information is securely stored in the IoMT fog computing network.Doctors can care-fully monitor the SA patient remotely on the basis of sensor values,which are efficiently stored in the fog computing network.The proposed technique takes less than 0.2 s with an accuracy of 95%,which is higher than existing models.
基金supported by Korea Institute for Advancement of Technology(KIAT)grant funded by the Korea Government(MOTIE)(P0012724,The Competency Development Program for Industry Specialist)the Soonchunhyang University Research Fund。
文摘Evaluation of commercial banks(CBs)performance has been a signicant issue in the nancial world and deemed as a multi-criteria decision making(MCDM)model.Numerous research assesses CB performance according to different metrics and standers.As a result of uncertainty in decision-making problems and large economic variations in Egypt,this research proposes a plithogenic based model to evaluate Egyptian commercial banks’performance based on a set of criteria.The proposed model evaluates the top ten Egyptian commercial banks based on three main metrics including nancial,customer satisfaction,and qualitative evaluation,and 19 subcriteria.The proportional importance of the selected criteria is evaluated by the Analytic Hierarchy Process(AHP).Furthermore,the Technique for Order of Preference by Similarity to Ideal Solution(TOPSIS),Vlse Kriterijumska Optimizacija Kompro-misno Resenje(VIKOR),and COmplex PRoportional ASsessment(COPRAS)are adopted to rank the top ten Egyptian banks based on their performance,comparatively.The main role of this research is to apply the proposed integrated MCDM framework under the plithogenic environment to measure the performance of the CBs under uncertainty.All results show that CIB has the best performance while Faisal Islamic Bank and Bank Audi have the least performance among the top 10 CBs in Egypt.
基金This work was supported by the Soonchunhyang University Research Fund.
文摘The critical path method is one of the oldest and most important techniques used for planning and scheduling projects.The main objective of project management science is to determine the critical path through a network representation of projects.The critical path through a network can be determined by many algorithms and is useful for managing,monitoring,and controlling the time and cost of an entire project.The essential problem in this case is that activity durations are uncertain;time presents considerable uncertainty because the time of an activity is not always easily or accurately estimated.This issue increases the need to use neutrosophic theory to solve the critical path problem.Real-world problems are characterized by a lack of precision,consistency,and completeness.The concept of neutrosophic sets has been introduced as a generalization of fuzzy,intuitionistic fuzzy,and crisp sets to overcome the ambiguity surrounding real-world problems.Truth-,falsity-,and indeterminacy-membership functions are used to express neutrosophic elements.This study was performed to examine a neutrosophic event-oriented algorithm for determining the critical path in activity-on-arc networks.The activity time estimates are presented as trapezoidal neutrosophic numbers,and score and accuracy functions are used to obtain a crisp model of the problem.An appropriate numerical example is then used to explain the proposed method.
文摘Project scheduling is a key objective of many models and is the proposed method for project planning and management.Project scheduling problems depend on precedence relationships and resource constraints,in addition to some other limitations for achieving a subset of goals.Project scheduling problems are dependent on many limitations,including limitations of precedence relationships,resource constraints,and some other limitations for achieving a subset of goals.Deterministic project scheduling models consider all information about the scheduling problem such as activity durations and precedence relationships information resources available and required,which are known and stable during the implementation process.The concept of deterministic project scheduling conflicts with real situations,in which in many cases,some data on the activity’s durations of the project and the degree of availability of resources change or may have different modes and strategies during the process of project implementation for dealing with multi-mode conditions surrounded by projects and their activity durations.Scheduling the multi-mode resource-constrained project problem is an optimization problem whose minimum project duration subject to the availability of resources is of particular interest to us.We use the multi-mode resource allocation and schedulingmodel that takes into account the dynamicity features of all parameters,that is,the scheduling process must be flexible to dynamic environment features.In this paper,we propose five priority heuristic rules for scheduling multi-mode resource-constrained projects under dynamicity features for more realistic situations,in which we apply the proposed heuristic rules(PHR)for scheduling multi-mode resource-constrained projects.Five projects are considered test problems for the PHR.The obtained results rendered by these priority rules for the test problems are compared by the results obtained from 10 well-known heuristics rules rendered for the same test problems.The results in many cases of the proposed priority rules are very promising,where they achieve better scheduling dates in many test case problems and the same results for the others.The proposed model is based on the dynamic features for project topography.
Funding: Supported by a grant of the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (Grant Number: HI21C1831), and the Soonchunhyang University Research Fund.
Abstract: Wireless Sensor Network (WSN) technology is a real-time application area that is growing rapidly as a result of smart environments. Battery power is one of the most significant resources in a WSN, and clustering techniques are used to improve the power factor, since forwarding data in a WSN consumes considerable power. The existing system works with the Load Balanced Clustering Method (LBCM) and provides network lifespan with scalability and reliability, but it does not address end-to-end delay or packet delivery. To overcome these issues in WSNs, the proposed Genetic Algorithm based on Chicken Swarm Optimization (GA-CSO) with the Load Balanced Clustering Method (LBCM) is used. The Genetic Algorithm generates chromosomes in an arbitrary manner, and the chromosome values are evaluated using a fitness function. Chicken Swarm Optimization (CSO) helps solve complex optimization problems; it models a swarm of roosters, hens, and chicks and divides the chicken swarm into clusters. The Load Balanced Clustering Method (LBCM) maintains energy during communication among the sensor nodes and balances the load across the gateways. The proposed GA-CSO with LBCM improves the lifespan of the network, minimizes energy consumption, and balances the load over the network. The proposed method outperforms others on metrics such as energy efficiency, packet delivery ratio, network throughput, and sensor node lifetime. The evaluation results show that the energy efficiency achieved is 83.56% and the packet delivery ratio reaches 99.12%; the method also attains a linear standard deviation and reduces the end-to-end delay to 97.32 ms.
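The sketch below illustrates, in Python, the general idea of encoding cluster-head selection as a chromosome and scoring it with a fitness function that rewards high residual energy at the heads and short member-to-head distances. The field names, weights, and the random-search stand-in for the GA/CSO loop are assumptions for illustration, not the paper's GA-CSO/LBCM formulation.

```python
import math
import random

random.seed(1)
# hypothetical field: 30 nodes with a position and a residual energy level
nodes = [{"pos": (random.uniform(0, 100), random.uniform(0, 100)),
          "energy": random.uniform(0.2, 1.0)} for _ in range(30)]

def fitness(chromosome):
    """chromosome: list of 0/1 flags, where 1 marks a node as a cluster head."""
    heads = [i for i, g in enumerate(chromosome) if g]
    if not heads:
        return 0.0
    energy_term = sum(nodes[i]["energy"] for i in heads) / len(heads)
    dist_term = 0.0
    for i, n in enumerate(nodes):
        if i in heads:
            continue
        dist_term += min(math.dist(n["pos"], nodes[h]["pos"]) for h in heads)
    dist_term /= max(len(nodes) - len(heads), 1)
    return energy_term - 0.01 * dist_term   # assumed weighting of the two objectives

# Random-search stand-in for the GA/CSO optimization loop: evaluate 200 candidates.
best = max(([1 if random.random() < 0.15 else 0 for _ in nodes] for _ in range(200)),
           key=fitness)
print("selected cluster heads:", [i for i, g in enumerate(best) if g])
```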
Funding: This research was supported by a grant of the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (Grant Number: HI21C1831), and the Soonchunhyang University Research Fund.
Abstract: Currently, relational database management systems (RDBMSs) face different challenges in application development due to the massive growth of unstructured and semi-structured data. This has introduced new DBMS categories, known as not only structured query language (NoSQL) DBMSs, which do not adhere to the relational model. Migrating from relational databases to NoSQL databases is challenging due to data complexity. This study aims to enhance the storage performance of RDBMSs in handling a variety of data. The paper presents two approaches. The first approach proposes a convenient representation of unstructured data storage; several extensive experiments were implemented to assess its efficiency, and the results could translate into substantial improvements in RDBMS storage. The second approach proposes using the JavaScript Object Notation (JSON) format to represent multivalued attributes and many-to-many (M:N) relationships in relational databases, creating a flexible schema and storing semi-structured data. The results indicate that the proposed approaches outperform similar approaches and improve data storage performance, which helps preserve software stability in large organizations by improving existing software packages whose replacement may be highly costly.
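A minimal Python/SQLite sketch of the second approach's idea follows: a multivalued attribute (phone numbers) and one side of an M:N relationship (student enrollments) are stored as JSON text in single columns rather than in separate child and junction tables. The schema and column names are illustrative, not taken from the paper.

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE student (
                   id INTEGER PRIMARY KEY,
                   name TEXT,
                   phones TEXT,           -- JSON array: multivalued attribute
                   enrollments TEXT)""")  -- JSON array of course records: M:N side
""")
con.execute("INSERT INTO student VALUES (?, ?, ?, ?)",
            (1, "Lina",
             json.dumps(["+20-100-111", "+20-100-222"]),
             json.dumps([{"course": "DB101", "grade": "A"},
                         {"course": "AI202", "grade": "B"}])))

row = con.execute("SELECT name, phones, enrollments FROM student WHERE id = 1").fetchone()
name, phones, enrollments = row[0], json.loads(row[1]), json.loads(row[2])
print(name, phones, [e["course"] for e in enrollments])
```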
Abstract: The high-efficiency video coder (HEVC) is one of the most advanced techniques used in growing real-time multimedia applications today. However, such applications require large transmission bandwidth, and the required bandwidth varies with different video sequences and formats. This paper proposes an adaptive information-based variable quantization matrix (AIVQM) developed for different video formats with variable energy levels. The quantization method is adapted to the video sequence using statistical analysis, improving the bit budget, quality, and complexity reduction. Further, to gain precise control over bit rate and quality, a multi-constraint prune algorithm is proposed in the second stage of the AIVQM technique for pre-calculating K candidate paths, allowing the encoder to self-adapt and automatically choose one of the K paths as bandwidth availability changes dynamically. After extensive testing of the proposed algorithm in a multi-constraint environment with multiple paths, and evaluating performance in terms of peak signal-to-noise ratio (PSNR), bit budget, and time complexity for different videos, a noticeable improvement in rate-distortion (RD) performance is achieved. Using the proposed AIVQM technique, video sequences are encoded more feasibly and efficiently, with less loss in PSNR than the variable quantization method (VQM) algorithm, amounting to an improvement of approximately 10%–20% depending on the video sequence/format.
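The sketch below illustrates only the adaptation idea in Python: scale a base quantization matrix according to the statistical energy (variance) of each transform block, so low-energy blocks are quantized more coarsely and detailed blocks are preserved. The base matrix, thresholds, and scaling factors are placeholders, not the AIVQM values from the paper.

```python
import numpy as np

BASE_QM = np.full((8, 8), 16, dtype=float)   # flat base quantization matrix as a stand-in

def adaptive_qm(block, low=100.0, high=1000.0):
    """Return a quantization matrix scaled by the block's energy class."""
    energy = float(np.var(block))
    if energy < low:        # smooth block: quantize harder, spend fewer bits
        scale = 1.5
    elif energy > high:     # detailed block: preserve more information
        scale = 0.75
    else:
        scale = 1.0
    return BASE_QM * scale

rng = np.random.default_rng(0)
smooth = np.full((8, 8), 128.0) + rng.normal(0, 2, (8, 8))      # low-variance block
detailed = rng.integers(0, 255, (8, 8)).astype(float)           # high-variance block
print(adaptive_qm(smooth)[0, 0], adaptive_qm(detailed)[0, 0])   # 24.0 vs 12.0
```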
Funding: This work was supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and the Soonchunhyang University Research Fund.
Abstract: Wireless sensor networks (WSNs) are made up of several sensors located in a specific area and powered by a finite amount of energy to gather environmental data. WSNs use sensor nodes (SNs) to collect and transmit data. However, the power supplied to the sensor network is restricted, so SNs must conserve energy as much as possible to extend the lifespan of the network. In the proposed study, effective clustering and longer network lifetimes are achieved using multi-swarm optimization (MSO) and game theory based on locust search (LS-II). MSO is used to improve the optimal routing, while the LS-II approach is employed to specify the number of cluster heads (CHs) and select the best ones. After the CHs are identified, the remaining sensor nodes are allocated to the CHs closest to them. A game theory-based energy-efficient clustering approach is applied to the WSN, in which each SN is considered a player in the game. Depending on the length of the idle listening time in the active phase, an SN can adopt the strategy most beneficial to itself and decide whether or not to rest. The proposed multi-swarm with energy-efficient game theory on locust search (MSGE-LS) efficiently selects CHs, minimizes energy consumption, and improves the lifetime of the network. The findings of this study indicate that the proposed MSGE-LS is an effective method: its results show improvements in the number of clusters, average energy consumption, lifespan extension, average packet loss, and end-to-end delay.
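To illustrate the game-theoretic sleep/wake decision described above, the Python sketch below has each node weigh the payoff of staying active (the chance of relaying a packet) against the energy cost of idle listening for one round of the game. The payoff values, parameter names, and the decision threshold are assumptions for illustration, not the MSGE-LS utilities from the paper.

```python
def decide_state(idle_listen_time, p_packet, relay_gain=1.0,
                 listen_cost_per_s=0.02, sleep_gain=0.3):
    """Return 'active' or 'sleep' for one round of the game.
    idle_listen_time: seconds spent idle-listening in the current active phase.
    p_packet: estimated probability that a packet arrives for this node to relay."""
    active_payoff = p_packet * relay_gain - idle_listen_time * listen_cost_per_s
    return "active" if active_payoff >= sleep_gain else "sleep"

# A node that has been idle for a long time with little traffic chooses to rest,
# while a busy node with short idle periods stays awake.
print(decide_state(idle_listen_time=20.0, p_packet=0.1))   # sleep
print(decide_state(idle_listen_time=2.0,  p_packet=0.8))   # active
```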