Liver cancer remains a leading cause of mortality worldwide, and precise diagnostic tools are essential for effective treatment planning. Liver Tumors (LTs) vary significantly in size, shape, and location, and can present with tissues of similar intensities, making the automatic segmentation and classification of LTs from abdominal tomography images both crucial and challenging. This review examines recent advancements in Liver Segmentation (LS) and Tumor Segmentation (TS) algorithms, highlighting their strengths and limitations regarding precision, automation, and resilience. Performance metrics are utilized to assess key detection algorithms and analytical methods, emphasizing their effectiveness and relevance in clinical contexts. The review also addresses ongoing challenges in liver tumor segmentation and identification, such as managing high variability in patient data and ensuring robustness across different imaging conditions. It suggests directions for future research, with insights into technological advancements that can enhance surgical planning and diagnostic accuracy by comparing popular methods. By integrating recent progress with remaining challenges, this paper contributes to a comprehensive understanding of current liver tumor detection techniques, provides a roadmap for future innovations, and supports improved diagnostic and therapeutic outcomes for liver cancer.
The increasing focus on electrocatalysis for sustainable hydrogen (H₂) production has prompted significant interest in MXenes, a class of two-dimensional (2D) materials comprising metal carbides, carbonitrides, and nitrides. These materials exhibit intriguing chemical and physical properties, including excellent electrical conductivity and a large surface area, making them attractive candidates for the hydrogen evolution reaction (HER). This scientific review explores recent advancements in MXene-based electrocatalysts for HER kinetics. It discusses various compositions, functionalities, and explicit design principles while providing a comprehensive overview of synthesis methods, exceptional properties, and electrocatalytic approaches for H₂ production via electrochemical reactions. Furthermore, challenges and future prospects in designing MXene-based electrocatalysts with enhanced kinetics are highlighted, emphasizing the potential of incorporating different metals to expand the scope of electrochemical reactions. This review suggests possible efforts for developing advanced MXene-based electrocatalysts, particularly for efficient H₂ generation through electrochemical water-splitting reactions.
Taurine is a sulfur-containing, semi-essential amino acid that occurs naturally in the body. It attenuates inflammation- and oxidative stress-mediated injury in various disease models. As part of its limiting functions, taurine also modulates endoplasmic reticulum stress, Ca²⁺ homeostasis, and neuronal activity at the molecular level. Taurine effectively protects against a number of neurological disorders, including stroke, epilepsy, cerebral ischemia, memory dysfunction, and spinal cord injury. Although various therapies are available, effective management of these disorders remains a global challenge; approximately 30 million people are affected worldwide. The design of taurine formulations could lead to potential drugs/supplements for the health maintenance and treatment of central nervous system disorders. The general neuroprotective effects of taurine and the various possible underlying mechanisms are discussed in this review. This article is a good resource for understanding the general effects of taurine on various diseases. Given the strong evidence for the neuropharmacological efficacy of taurine in various experimental paradigms, it is concluded that this molecule should be considered and further investigated as a potential candidate for neurotherapeutics, with emphasis on mechanistic and clinical studies to determine efficacy.
The distinction and precise identification of tumor nodules are crucial for timely lung cancer diagnosis and planning intervention. This research work addresses the major issues pertaining to the field of medical image processing while focusing on lung cancer Computed Tomography (CT) images. In this context, the paper proposes an improved lung cancer segmentation technique based on the strengths of nature-inspired approaches. The better resolution of CT is exploited to distinguish healthy subjects from those who have lung cancer. In this process, the visual challenges of K-means are addressed with the integration of four nature-inspired swarm intelligent techniques. The techniques experimented with in this paper are K-means with Artificial Bee Colony (ABC), K-means with Cuckoo Search Algorithm (CSA), K-means with Particle Swarm Optimization (PSO), and K-means with Firefly Algorithm (FFA). The testing and evaluation are performed on the Early Lung Cancer Action Program (ELCAP) database. The simulation analysis is performed using lung cancer image sets against the metrics: precision, sensitivity, specificity, F-measure, accuracy, Matthews Correlation Coefficient (MCC), Jaccard, and Dice. The detailed evaluation shows that K-means with Cuckoo Search Algorithm (CSA) significantly improved the quality of lung cancer segmentation in comparison to the other optimization approaches utilized for lung cancer images. The results exhibit that the proposed approach (K-means with CSA) achieves precision, sensitivity, and F-measure of 0.942, 0.964, and 0.953, respectively, and an average accuracy of 93%.
The experimental results prove that K-means with ABC, K-means with PSO, K-means with FFA, and K-means with CSA have achieved an improvement of 10.8%, 13.38%, 13.93%, and 15.7%, respectively, on the accuracy measure in comparison to K-means segmentation for lung cancer images. Further, it is highlighted that the proposed K-means with CSA has achieved a significant improvement in accuracy, and hence can be utilized by researchers for improved segmentation of medical image datasets for identifying the targeted region of interest.
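The abstract does not give implementation details, but the hybrid it describes, a cuckoo-search pass choosing initial centroids followed by standard K-means refinement, can be sketched in pure Python. All names, parameters, and the 1-D toy data below are illustrative assumptions, and the Lévy flight is simplified to a Gaussian random walk:

```python
import random

def sse(points, centroids):
    """Within-cluster sum of squared errors for a candidate centroid set."""
    return sum(min((p - c) ** 2 for c in centroids) for p in points)

def cuckoo_init(points, k, n_nests=15, iters=50, pa=0.25, seed=0):
    """Cuckoo-search over candidate centroid sets (1-D data for brevity)."""
    rng = random.Random(seed)
    lo, hi = min(points), max(points)
    nests = [[rng.uniform(lo, hi) for _ in range(k)] for _ in range(n_nests)]
    for _ in range(iters):
        for i in range(n_nests):
            # Simplified Levy flight: Gaussian perturbation of an existing nest
            cand = [c + rng.gauss(0, 0.1 * (hi - lo)) for c in nests[i]]
            if sse(points, cand) < sse(points, nests[i]):
                nests[i] = cand  # keep the new egg when it is better
        nests.sort(key=lambda n: sse(points, n))
        for i in range(int((1 - pa) * n_nests), n_nests):
            nests[i] = [rng.uniform(lo, hi) for _ in range(k)]  # abandon worst nests
    return min(nests, key=lambda n: sse(points, n))

def kmeans(points, centroids, iters=20):
    """Standard 1-D K-means refinement from the CSA-chosen seeds."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            clusters[min(range(len(centroids)),
                         key=lambda j: (p - centroids[j]) ** 2)].append(p)
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids

data = [1.0, 1.2, 0.9, 5.0, 5.2, 4.8]
print(sorted(kmeans(data, cuckoo_init(data, k=2))))  # one centroid per cluster
```

The point of the hybrid is that the search stage supplies well-spread seeds, so the subsequent K-means pass is far less sensitive to initialization.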
In numerous real-world healthcare applications, handling incomplete medical data poses significant challenges for missing value imputation and subsequent clustering or classification tasks. Traditional approaches often rely on statistical methods for imputation, which may yield suboptimal results and be computationally intensive. This paper aims to integrate imputation and clustering techniques to enhance the classification of incomplete medical data with improved accuracy. Conventional classification methods are ill-suited for incomplete medical data. To enhance efficiency without compromising accuracy, this paper introduces a novel approach that combines imputation and clustering for the classification of incomplete data. Initially, the linear interpolation imputation method is applied alongside an iterative fuzzy c-means clustering method, followed by a classification algorithm. The effectiveness of the proposed approach is evaluated using multiple performance metrics, including accuracy, precision, specificity, and sensitivity. The encouraging results demonstrate that the proposed method surpasses classical approaches across various performance criteria.
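The two preprocessing stages named above can be illustrated with a minimal sketch (the paper's actual pipeline and parameters are not given in the abstract, so everything here is assumed): linear interpolation fills missing values from their nearest known neighbours, and fuzzy c-means then assigns soft cluster memberships:

```python
def interpolate_missing(series):
    """Fill None gaps by linear interpolation between known neighbours."""
    filled = list(series)
    known = [i for i, v in enumerate(filled) if v is not None]
    for i, v in enumerate(filled):
        if v is None:
            left = max((k for k in known if k < i), default=None)
            right = min((k for k in known if k > i), default=None)
            if left is None:
                filled[i] = filled[right]   # extend first known value backward
            elif right is None:
                filled[i] = filled[left]    # extend last known value forward
            else:
                t = (i - left) / (right - left)
                filled[i] = filled[left] + t * (filled[right] - filled[left])
    return filled

def fuzzy_c_means(xs, c=2, m=2.0, iters=30):
    """1-D fuzzy c-means: soft memberships instead of hard assignments."""
    centers = [min(xs), max(xs)][:c]  # simple init, works for c=2
    for _ in range(iters):
        u = []
        for x in xs:
            d = [abs(x - ck) + 1e-9 for ck in centers]
            # Standard FCM membership: inverse distance ratios, fuzzifier m
            u.append([1.0 / sum((d[j] / d[l]) ** (2 / (m - 1)) for l in range(c))
                      for j in range(c)])
        centers = [sum((u[i][j] ** m) * xs[i] for i in range(len(xs))) /
                   sum(u[i][j] ** m for i in range(len(xs))) for j in range(c)]
    return centers

raw = [1.0, None, 3.0, None, None, 9.0]
clean = interpolate_missing(raw)  # approximately [1.0, 2.0, 3.0, 5.0, 7.0, 9.0]
print(clean, sorted(fuzzy_c_means(clean)))
```

The soft memberships are what let downstream classification weigh borderline records instead of committing them to a single cluster.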
Multimodal medical image fusion has attained immense popularity in recent years due to its robust technology for clinical diagnosis. It fuses multiple images into a single image to improve image quality by retaining significant information, aiding diagnostic practitioners in diagnosing and treating many diseases. However, recent image fusion techniques have encountered several challenges, including fusion artifacts, algorithm complexity, and high computing costs. To solve these problems, this study presents a novel medical image fusion strategy that combines the benefits of pixel significance with edge-preserving processing to achieve the best fusion performance. First, the method employs a cross-bilateral filter (CBF) that utilizes one image to determine the kernel and the other for filtering, and vice versa, by considering both the geometric closeness and the gray-level similarities of neighboring pixels without smoothing edges. The outputs of the CBF are then subtracted from the original images to obtain detail images. The method further uses edge-preserving processing that combines linear low-pass filtering with a non-linear technique enabling the selection of relevant regions in the detail images while maintaining structural properties. These regions are selected using morphologically processed linear filter residuals to identify significant regions with high-amplitude edges and adequate size. The outputs of low-pass filtering are fused with the meaningfully restored regions to reconstruct the original shape of the edges. In addition, weight computations are performed using these reconstructed images, and these weights are then fused with the original input images to produce the final fusion result by estimating the strength of horizontal and vertical details. Numerous standard quality evaluation metrics with complementary properties are used for objective comparison with existing, well-known algorithms to validate the fusion results. Experimental results exhibit superior performance compared to other competing techniques in both qualitative and quantitative evaluation. In addition, the proposed method requires less computational complexity and execution time while improving diagnostic computing accuracy; owing to this lower complexity, the fusion method is efficient in practical applications. The results reveal that the proposed method exceeds the latest state-of-the-art methods in terms of providing detailed information, edge contours, and overall contrast.
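The CBF step described above can be illustrated on 1-D signals. This is a deliberately simplified, assumed sketch: the real filter operates on 2-D images, and the kernel parameters below are arbitrary. The range weight comes from one image (the guide) while the other image is filtered, so edges present in the guide survive smoothing:

```python
import math

def cross_bilateral_1d(guide, target, radius=2, sigma_s=1.0, sigma_r=0.3):
    """Filter `target` with range weights computed from `guide` (and vice
    versa in the full method): edges present in the guide are preserved."""
    out = []
    for i in range(len(target)):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(len(target), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) *              # geometric closeness
                 math.exp(-((guide[i] - guide[j]) ** 2) / (2 * sigma_r ** 2)))  # gray-level similarity in the guide
            wsum += w
            vsum += w * target[j]
        out.append(vsum / wsum)
    return out

# A step edge in the guide keeps the target's edge from being smoothed away
guide  = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
target = [0.1, 0.0, 0.1, 0.9, 1.0, 0.9]
smooth = cross_bilateral_1d(guide, target)
detail = [t - s for t, s in zip(target, smooth)]  # "detail image" = input minus CBF output
print([round(s, 2) for s in smooth])
```

Subtracting the smoothed output from the input, as in the last line, is exactly the step the abstract uses to obtain the detail images fed into the later fusion stages.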
Skin cancer has been recognized as one of the most lethal and complex types of cancer for over a decade. The diagnosis of skin cancer is of paramount importance, yet the process is intricate and challenging. The analysis and modeling of human skin pose significant difficulties due to its asymmetrical nature, the visibility of dense hair, and the presence of various substitute characteristics. The texture of the epidermis is notably different from that of normal skin, and these differences are often evident in cases of unhealthy skin. As a consequence, the development of an effective method for monitoring skin cancer has seen little progress. Moreover, the task of diagnosing skin cancer from dermoscopic images is particularly challenging. It is crucial to diagnose skin cancer at an early stage, despite the high cost associated with the procedure. Unfortunately, the advancement of diagnostic techniques for skin cancer has been limited. To address this issue, there is a need for a more accurate and efficient method for identifying and categorizing skin cancer cases, involving the evaluation of specific characteristics to distinguish between benign and malignant occurrences. We present and evaluate several segmentation techniques, categorized into three main types: thresholding, edge-based, and region-based. These techniques are applied to a dataset of 200 benign and melanoma lesions from the Hospital Pedro Hispano (PH2) collection. The evaluation is based on twelve distinct metrics designed to measure various types of errors with particular clinical significance. Additionally, we assess the effectiveness of these techniques independently for three different types of lesions: melanocytic nevi, atypical nevi, and melanomas. The first technique classifies lesions into two categories, atypical nevi and melanoma, achieving the highest accuracy score of 90.00% with the Otsu (3-level) method. The second technique also classifies lesions into two categories, common nevi and melanoma, achieving a score of 90.80% with the Binarized Sauvola method.
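The multi-level Otsu thresholding mentioned for the first technique can be sketched as an exhaustive search over threshold pairs that maximize between-class variance. This is a toy 8-gray-level version; the paper's actual implementation details are not given in the abstract:

```python
from itertools import combinations

def otsu_multilevel(pixels, levels=3, lmax=8):
    """Exhaustive multi-level Otsu on a small gray range [0, lmax):
    pick thresholds maximizing between-class variance."""
    n = len(pixels)
    mu_total = sum(pixels) / n
    best, best_var = None, -1.0
    for ths in combinations(range(1, lmax), levels - 1):
        bounds = (0,) + ths + (lmax,)
        var, ok = 0.0, True
        for lo, hi in zip(bounds, bounds[1:]):
            cls = [p for p in pixels if lo <= p < hi]
            if not cls:
                ok = False  # reject partitions with an empty class
                break
            w = len(cls) / n
            var += w * (sum(cls) / n * n / len(cls) - mu_total) ** 2 if False else \
                   w * (sum(cls) / len(cls) - mu_total) ** 2
        if ok and var > best_var:
            best, best_var = ths, var
    return best

# Three intensity populations should yield two separating thresholds
img = [0, 1, 1, 0, 4, 4, 3, 4, 7, 7, 6, 7]
print(otsu_multilevel(img))
```

With full-range 8-bit images the same search is usually done on the 256-bin histogram rather than the raw pixels, but the variance criterion is identical.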
The widespread adoption of the Internet of Things (IoT) has transformed various sectors globally, making them more intelligent and connected. However, this advancement comes with challenges related to the effectiveness of IoT devices. These devices, present in offices, homes, industries, and more, need constant monitoring to ensure their proper functionality. The success of smart systems relies on their seamless operation and ability to handle faults. Sensors, crucial components of these systems, gather data and contribute to their functionality. Therefore, sensor faults can compromise a system's reliability and undermine the trustworthiness of smart environments. To address these concerns, various techniques and algorithms can be employed to enhance the performance of IoT devices through effective fault detection. This paper conducted a thorough review of the existing literature and a detailed analysis. This analysis effectively links sensor errors with a prominent fault detection technique capable of addressing them. This study is innovative because it paves the way for future researchers to explore errors that have not yet been tackled by existing fault detection methods. Significantly, the paper also highlights essential factors for selecting and adopting fault detection techniques, as well as the characteristics of datasets and their corresponding recommended techniques. Additionally, the paper presents a methodical overview of fault detection techniques employed in smart devices, including the metrics used for evaluation. Furthermore, the paper examines the body of academic work related to sensor faults and fault detection techniques within the domain. This reflects the growing inclination and scholarly attention of researchers and academicians toward strategies for fault detection within the realm of the Internet of Things.
The proliferation of IoT devices requires innovative approaches to gaining insights while preserving privacy and resources amid unprecedented data generation. However, Federated Learning (FL) development for IoT is still in its infancy and needs to be explored in various areas to understand the key challenges for deployment in real-world scenarios. The paper systematically reviewed the available literature using the PRISMA guiding principle. The study aims to provide a detailed overview of the increasing use of FL in IoT networks, including the architecture and challenges. A systematic review approach is used to collect, categorize, and analyze FL-IoT-based articles. A search was performed in the IEEE, Elsevier, arXiv, ACM, and WOS databases, and 92 articles were finally examined. The inclusion criteria were publication in English with the keywords "FL" and "IoT". The methodology begins with an overview of recent advances in FL and the IoT, followed by a discussion of how these two technologies can be integrated. To be more specific, we examine and evaluate the capabilities of FL by discussing communication protocols, frameworks, and architecture. We then present a comprehensive analysis of the use of FL in a number of key IoT applications, including smart healthcare, smart transportation, smart cities, smart industry, smart finance, and smart agriculture. The key findings from this analysis of FL IoT services and applications are also presented. Finally, we performed a comparative analysis of FL with IID (independent and identically distributed) and non-IID data against traditional centralized deep learning (DL) approaches. We concluded that FL has better performance, especially in terms of privacy protection and resource utilization. FL is excellent for preserving privacy because model training takes place on individual devices or edge nodes, eliminating the need for centralized data aggregation, which poses significant privacy risks. To facilitate development in this rapidly evolving field, the insights presented are intended to help practitioners and researchers navigate the complex terrain of FL and IoT.
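At its core, the FL scheme contrasted above with centralized DL aggregates model updates rather than raw data. A minimal sketch of federated averaging, the canonical FedAvg aggregation, follows; the two-node setup and sample counts are illustrative assumptions, not from any reviewed paper:

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: aggregate locally trained model weights,
    weighted by each client's sample count; raw data never leaves the device."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n_params)]

# Two IoT edge nodes train locally; only their weight vectors are shared
node_a = [0.2, 1.0]   # trained on 30 samples
node_b = [0.6, 3.0]   # trained on 10 samples
global_model = fed_avg([node_a, node_b], [30, 10])
print(global_model)  # approximately [0.3, 1.5]
```

In a full round, the server broadcasts `global_model` back to the clients, each client trains locally for a few epochs, and the cycle repeats; this is precisely why centralized data aggregation is unnecessary.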
Enhancing the interconnection of devices and systems, the Internet of Things (IoT) is a paradigm-shifting technology. Despite its extraordinary advantages, IoT security remains a substantial concern. This paper offers an extensive review of IoT security, emphasizing the technology's architecture, important security elements, and common attacks. It highlights how important artificial intelligence (AI) is to bolstering IoT security, especially when it comes to addressing risks at different IoT architecture layers. We systematically examined current mitigation strategies and their effectiveness, highlighting contemporary challenges with practical solutions and case studies from a range of industries, such as healthcare, smart homes, and industrial IoT. Our results highlight the importance of AI methods that are lightweight and improve security without compromising devices' limited resources and computational capability. IoT networks can ensure operational efficiency and resilience by proactively identifying and countering security risks using machine learning capabilities. This study provides a comprehensive guide for practitioners and researchers aiming to understand the intricate connection between IoT, security challenges, and AI-driven solutions.
Brain tumors are a global health issue from which many people suffer, and early diagnosis can make treatment more efficient. Identifying different types of brain tumors, including gliomas, meningiomas, and pituitary tumors, as well as confirming the absence of tumors, poses a significant challenge using MRI images. Current approaches predominantly rely on traditional machine learning and basic deep learning methods for image classification. These methods often rely on manual feature extraction and basic convolutional neural networks (CNNs). The limitations include inadequate accuracy, poor generalization to new data, and limited ability to manage the high variability in MRI images. Utilizing the EfficientNetB3 architecture, this study presents a groundbreaking approach in the computational engineering domain, enhancing MRI-based brain tumor classification. Our approach highlights a major advancement in employing sophisticated machine learning techniques within Computer Science and Engineering, showcasing a highly accurate framework with significant potential for healthcare technologies. The model achieves an outstanding 99% accuracy, exhibiting balanced precision, recall, and F1-scores across all tumor types, as detailed in the classification report. This successful implementation demonstrates the model's potential as an essential tool for diagnosing and classifying brain tumors, marking a notable improvement over current methods. The integration of such advanced computational techniques in medical diagnostics can significantly enhance accuracy and efficiency, paving the way for wider application. This research highlights the revolutionary impact of deep learning technologies in improving diagnostic processes and patient outcomes in neuro-oncology.
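The "balanced precision, recall, and F1-scores across all tumor types" come from per-class counts in a classification report. A small self-contained sketch of how such a report is computed follows; the class names and toy labels are illustrative, not the study's data:

```python
def per_class_metrics(y_true, y_pred, classes):
    """Precision, recall, and F1 per class from paired label lists."""
    report = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        report[c] = (round(prec, 2), round(rec, 2), round(f1, 2))
    return report

truth = ["glioma", "glioma", "meningioma", "pituitary", "no_tumor"]
preds = ["glioma", "meningioma", "meningioma", "pituitary", "no_tumor"]
print(per_class_metrics(truth, preds,
                        ["glioma", "meningioma", "pituitary", "no_tumor"]))
```

"Balanced" in the abstract's sense means these per-class triples stay close to each other across all four categories rather than one class dominating the average.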
Breast cancer is a type of cancer responsible for higher mortality rates among women. The aggressiveness of breast cancer demands a promising approach for its earlier detection. In light of this, the proposed research leverages the representation ability of a pretrained EfficientNet-B0 model and the classification ability of the XGBoost model for the binary classification of breast tumors. In addition, the above transfer learning model is modified in such a way that it focuses more on tumor cells in the input mammogram. Accordingly, the work proposes an EfficientNet-B0 having a Spatial Attention Layer with XGBoost (ESA-XGBNet) for binary classification of mammograms. For this, the work is trained, tested, and validated using original and augmented mammogram images of three public datasets, namely the CBIS-DDSM, INbreast, and MIAS databases. Maximum classification accuracies of 97.585% (CBIS-DDSM), 98.255% (INbreast), and 98.91% (MIAS) are obtained using the proposed ESA-XGBNet architecture as compared with the existing models. Furthermore, the decision-making of the proposed ESA-XGBNet architecture is visualized and validated using the Attention-Guided GradCAM-based Explainable AI technique.
In Agriculture Sciences, detection of diseases is one of the most challenging tasks. The misinterpretation of plant diseases often leads to wrong pesticide selection, resulting in damage to crops. Hence, the automatic recognition of diseases at earlier stages is important as well as economical for better quality and quantity of fruits. Computer-aided detection (CAD) has proven to be a supportive tool for disease detection and classification, allowing the identification of diseases and reducing the rate of degradation of fruit quality. In this research work, a model based on a convolutional neural network with 19 convolutional layers has been proposed for effective and accurate classification of Marsonina Coronaria and Apple Scab diseases from apple leaves. For this, a database of 50,000 images has been acquired by collecting images of leaves from apple farms of Himachal Pradesh (H.P.) and Uttarakhand (India). An augmentation technique has been performed on the dataset to increase the number of images for increasing the accuracy. The performance of the proposed model has been compared with two new Convolutional Neural Network (CNN) models having 8 and 9 layers, respectively. The proposed model has also been compared with standard machine learning classifiers like Support Vector Machine, k-Nearest Neighbour, Random Forest, and Logistic Regression models. From experimental results, it has been observed that the proposed model has outperformed the other CNN-based models and machine learning models with an accuracy of 99.2%.
Novel Coronavirus Disease (COVID-19) is a communicable disease that originated during December 2019, when China officially informed the World Health Organization (WHO) regarding the constellation of cases of the disease in the city of Wuhan. Subsequently, the disease started spreading to the rest of the world. Until this point in time, no specific vaccine or medicine is available for the prevention and cure of the disease. Several research works are being carried out in the fields of medicinal and pharmaceutical sciences, aided by data analytics and machine learning, in the direction of treatment and early detection of this viral disease. The present report describes the use of machine learning algorithms [Linear and Logistic Regression, Decision Tree (DT), K-Nearest Neighbor (KNN), Support Vector Machine (SVM), and SVM with Grid Search] for the prediction and classification in relation to COVID-19. The data used for experimentation was the COVID-19 dataset acquired from the Center for Systems Science and Engineering (CSSE), Johns Hopkins University (JHU). The assimilated results indicated that the risk period for patients is 12–14 days, beyond which the probability of survival of the patient may increase. In addition, it was also indicated that the probability of death in COVID cases increases with age. The death probability was found to be higher in males as compared to females. SVM with Grid Search demonstrated the highest accuracy of approximately 95%, followed by the Decision Tree algorithm with an accuracy of approximately 94%. The present study and analysis pave the way in the direction of attribute correlation, estimation of survival days, and the prediction of death probability. The findings of the present study clearly indicate that machine learning algorithms have strong capabilities of prediction and classification in relation to COVID-19 as well.
Change detection is a standard tool to extract and analyze the earth's surface features from remotely sensed data. Among the different change detection techniques, change vector analysis (CVA) has an exceptional advantage of discriminating change in terms of change magnitude and vector direction from multispectral bands. The estimation of a precise threshold is one of the most crucial tasks in CVA to separate change pixels from unchanged pixels, because the overall assessment of a change detection method is highly dependent on the selected threshold value. In recent years, the integration of fuzzy clustering and remotely sensed data has become an appropriate and realistic choice for change detection applications. The novelty of the proposed model lies in the use of fuzzy maximum likelihood classification (FMLC) as fuzzy clustering in CVA. The FMLC-based CVA is implemented using diverse threshold determination algorithms such as double-window flexible pace search (DFPS), interactive trial and error (T&E), and 3x3-pixel kernel window (PKW). Unlike existing CVA techniques, the addition of fuzzy clustering in CVA permits each pixel to have multiple class categories and offers ease in the threshold determination process. In the present work, the comparative analysis has highlighted the performance of FMLC-based CVA over improved SCVA both in terms of accuracy assessment and operational complexity. Among all the examined threshold searching algorithms, FMLC-based CVA using the DFPS algorithm is found to be the most efficient method.
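The magnitude/direction decomposition at the heart of CVA, plus magnitude thresholding, can be sketched as follows. The two-band toy pixels and the fixed threshold are assumptions for illustration; the paper selects the threshold with DFPS, T&E, or PKW rather than fixing it by hand:

```python
import math

def change_vector(band1_t1, band1_t2, band2_t1, band2_t2):
    """Per-pixel change magnitude and direction from two spectral bands
    observed at two dates."""
    out = []
    for a1, a2, b1, b2 in zip(band1_t1, band1_t2, band2_t1, band2_t2):
        dx, dy = a2 - a1, b2 - b1
        magnitude = math.hypot(dx, dy)                    # length of the change vector
        direction = math.degrees(math.atan2(dy, dx)) % 360  # angle hints at change type
        out.append((magnitude, direction))
    return out

def threshold_change(vectors, threshold):
    """Label a pixel 'changed' when its magnitude exceeds the threshold."""
    return [m > threshold for m, _ in vectors]

v = change_vector([10, 10], [13, 10], [20, 20], [24, 20])
print(threshold_change(v, threshold=2.0))  # [True, False]
```

The direction component is what lets CVA distinguish kinds of change (e.g. vegetation loss versus gain) once the magnitude threshold has flagged a pixel as changed.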
COVID-19, being the virus of fear and anxiety, is one of the most recent and emergent of various respiratory disorders. It is similar to MERS-CoV and SARS-CoV, the viruses that affected a large population of different countries in the years 2012 and 2002, respectively. Various standard models have been used for COVID-19 epidemic prediction, but they suffered from low accuracy due to lesser data availability and a high level of uncertainty. The proposed approach used a machine learning-based time-series Facebook NeuralProphet model for prediction of the number of deaths as well as confirmed cases, and compared it with the Poisson Distribution and Random Forest models. The analysis of the dataset was performed considering the time duration from January 1st, 2020 to July 16th, 2021. The model was developed to obtain forecast values till September 2021. This study aimed to determine the pandemic prediction of COVID-19 in the second wave of coronavirus in India using the latest time-series model to observe and predict the coronavirus pandemic situation across the country. In India, cases increased rapidly day by day from mid-February 2021. The prediction of the death rate using the proposed model has a good ability to forecast the COVID-19 dataset, essentially in the second wave. The proposed model works effectively to empower the prediction for future validation.
Biomedical image analysis has been advanced considerably by recent technological developments, bringing about a paradigm shift towards 'automation' and 'error-free diagnosis' classification methods with markedly improved diagnostic accuracy, productivity, and cost-effectiveness. This paper proposes an automated deep learning model to diagnose skin disease at an early stage using dermoscopy images. The proposed model has four convolutional layers, two maxpool layers, one fully connected layer, and three dense layers. All the convolutional layers use a kernel size of 3×3, whereas the maxpool layers use a kernel size of 2×2. The dermoscopy images are taken from the HAM10000 dataset. The proposed model is compared with three different ResNet models: ResNet18, ResNet50, and ResNet101. The models are simulated with a batch size of 32 and the Adadelta optimizer. The proposed model obtained the best accuracy value of 0.96, whereas the ResNet101 model obtained 0.90, ResNet50 obtained 0.89, and ResNet18 obtained 0.86. Therefore, features obtained from the proposed model are more capable of improving the classification performance of multiple skin disease classes. This model can be used for early diagnosis of skin disease and can also act as a second-opinion tool for dermatologists.
This study aims to empirically analyze teaching-learning-based optimization (TLBO) and machine learning algorithms using k-means and fuzzy c-means (FCM) algorithms for their individual performance evaluation in terms of clustering and classification. In the first phase, the clustering (k-means and FCM) algorithms were employed independently and the clustering accuracy was evaluated using different computational measures. During the second phase, the non-clustered data obtained from the first phase were preprocessed with TLBO. TLBO was performed using k-means (TLBO-KM) and FCM (TLBO-FCM) algorithms. The objective function was determined by considering both minimization and maximization criteria. Non-clustered data obtained from the first phase were further utilized and fed as input for threshold optimization. Five benchmark datasets were considered from the University of California, Irvine (UCI) Machine Learning Repository for comparative study and experimentation: the Breast Cancer Wisconsin (BCW), Pima Indians Diabetes, Heart-Statlog, Hepatitis, and Cleveland Heart Disease datasets. The combined average accuracy obtained collectively is approximately 99.4% in the case of TLBO-KM and 98.6% in the case of TLBO-FCM. This approach is also capable of finding the dominating attributes. The findings indicate that TLBO-KM/FCM, considering different computational measures, performs well on the non-clustered data where k-means and FCM, if employed independently, fail to provide significant results. Evaluating different feature sets, TLBO-KM/FCM and SVM (GS) clearly outperformed all other classifiers in terms of sensitivity, specificity, and accuracy. TLBO-KM/FCM attained the highest average sensitivity (98.7%), highest average specificity (98.4%), and highest average accuracy (99.4%) for 10-fold cross-validation with different test data.
In Software-Defined Networks (SDN), the separation of the control plane from the data plane provides a unique platform for developing a programmable and flexible network. A single controller cannot handle the heavy traffic load triggered by diverse intelligent devices because of its restricted capacity. To manage this, multiple controllers must be deployed on the control plane to achieve quality network performance and robustness. The flow of data through the multiple controllers also varies, resulting in an unequal distribution of load between different controllers. One major drawback of multiple controllers is the static configuration of the switch-to-controller mapping, which quickly leads to an unequal load distribution between controllers. To overcome this drawback, Software-Defined Vehicular Networking (SDVN) has evolved as a configurable and scalable network that has quickly attracted attention in wireless communications from research groups, businesses, and industry. In this paper, we propose a latency-based load balancing algorithm for multiple SDN controllers. It acknowledges the evolving relationship between real-time latency and controller load. By choosing the required latency and resolving multiple overloads simultaneously, the proposed algorithm solves the load-balancing problem of multiple overloaded controllers in the SDN control plane. In addition to the migration, our algorithm improves latency by 25% compared to existing algorithms.
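The latency-aware migration idea in the abstract above can be illustrated with a toy scheduler. This is only a sketch under assumed data structures (controller names, unit loads, and per-controller latencies are hypothetical, not from the paper): each overloaded controller hands one unit of load to the underloaded controller with the lowest latency.

```python
def pick_target(latency, load, capacity, overloaded):
    """Best migration target for a switch leaving `overloaded`:
    a controller with spare capacity and the lowest latency."""
    candidates = [c for c in load if c != overloaded and load[c] < capacity[c]]
    if not candidates:
        return None
    return min(candidates, key=lambda c: latency[c])

def rebalance(latency, load, capacity):
    """Greedily move one unit of load at a time off each overloaded controller."""
    moves = []
    for c in sorted(load):
        while load[c] > capacity[c]:
            target = pick_target(latency, load, capacity, c)
            if target is None:
                break  # no controller has spare capacity
            load[c] -= 1
            load[target] += 1
            moves.append((c, target))
    return moves
```

With controllers c1 (overloaded) and c2/c3 (spare capacity), the switch migrates to the lower-latency c2; the real algorithm would additionally track real-time latency as loads evolve.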
The synthesis of visual information from multiple medical imaging inputs into a single fused image without loss of detail or distortion is known as multimodal medical image fusion. It improves the quality of biomedical images by preserving detailed features, advancing the clinical utility of medical imaging for the analysis and treatment of medical disorders. This study develops a novel approach to fusing multimodal medical images using anisotropic diffusion (AD) and the non-subsampled contourlet transform (NSCT). First, the method employs anisotropic diffusion to decompose the input images into base and detail layers, coarsely separating two kinds of features of the input images: structural and textural information. The detail and base layers are then combined using a sum-based fusion rule that maximizes the noise-filtering contrast level while effectively preserving most of the structural and textural details. NSCT is used to further decompose these images into their low- and high-frequency coefficients. These coefficients are then combined independently using a principal component analysis/Karhunen-Loeve (PCA/KL) based fusion rule, substantiating eigenfeature reinforcement in the fusion results. An NSCT-based multiresolution analysis is performed on the combined salient feature information and the contrast-enhanced fusion coefficients. Finally, an inverse NSCT is applied to each coefficient to produce the final fusion result. Experimental results on a publicly accessible dataset, with comparative studies on three pairs of medical images from different modalities, demonstrate the advantage of the proposed technique. Our approach offers better visual and robust performance with better objective measurements for research and development, since it excellently preserves significant salient features and precision without producing abnormal information in both qualitative and quantitative analysis.
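The base/detail decomposition and sum-based fusion rule described above can be sketched in a heavily simplified form. Assumptions: a moving-average filter stands in for anisotropic diffusion, signals are 1D lists instead of images, and the NSCT/PCA stages are omitted entirely.

```python
def smooth(signal, radius=1):
    """Crude base-layer extractor: a moving average standing in for
    anisotropic diffusion (boundary windows are shortened)."""
    n = len(signal)
    base = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        base.append(sum(signal[lo:hi]) / (hi - lo))
    return base

def fuse(a, b):
    """Split each input into base + detail layers; average the bases,
    combine the details with a sum rule (as in the abstract's fusion step)."""
    base_a, base_b = smooth(a), smooth(b)
    detail_a = [x - y for x, y in zip(a, base_a)]
    detail_b = [x - y for x, y in zip(b, base_b)]
    return [(ba + bb) / 2 + da + db
            for ba, bb, da, db in zip(base_a, base_b, detail_a, detail_b)]
```

Constant inputs fuse to their mean, while a sharp feature in either input (a large detail-layer value) is carried through to the fused output.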
Funding: the "Intelligent Recognition Industry Service Center" as part of the Featured Areas Research Center Program under the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan, and the National Science and Technology Council, Taiwan, under grants 113-2221-E-224-041 and 113-2622-E-224-002. Additionally, partial support was provided by Isuzu Optics Corporation.
Abstract: Liver cancer remains a leading cause of mortality worldwide, and precise diagnostic tools are essential for effective treatment planning. Liver Tumors (LTs) vary significantly in size, shape, and location, and can present with tissues of similar intensities, making automatically segmenting and classifying LTs from abdominal tomography images crucial and challenging. This review examines recent advancements in Liver Segmentation (LS) and Tumor Segmentation (TS) algorithms, highlighting their strengths and limitations regarding precision, automation, and resilience. Performance metrics are utilized to assess key detection algorithms and analytical methods, emphasizing their effectiveness and relevance in clinical contexts. The review also addresses ongoing challenges in liver tumor segmentation and identification, such as managing high variability in patient data and ensuring robustness across different imaging conditions. It suggests directions for future research, with insights into technological advancements that can enhance surgical planning and diagnostic accuracy by comparing popular methods. This paper contributes to a comprehensive understanding of current liver tumor detection techniques, provides a roadmap for future innovations, and improves diagnostic and therapeutic outcomes for liver cancer by integrating recent progress with remaining challenges.
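The segmentation performance metrics this review refers to typically include the Dice similarity coefficient. A minimal implementation over flattened binary masks (the toy masks below are illustrative, not from any cited study):

```python
def dice(pred, truth):
    """Dice similarity coefficient for two equal-length binary masks:
    2*|A∩B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    assert len(pred) == len(truth)
    intersection = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * intersection / total
```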
Funding: the financial support from the Sunway University International Research Network Grant Scheme (STR-IRNGSSET-GAMRG-01-2022) and the Universiti Kebangsaan Malaysia Grant (GUP-2022-080).
Abstract: The increasing focus on electrocatalysis for sustainable hydrogen (H2) production has prompted significant interest in MXenes, a class of two-dimensional (2D) materials comprising metal carbides, carbonitrides, and nitrides. These materials exhibit intriguing chemical and physical properties, including excellent electrical conductivity and a large surface area, making them attractive candidates for the hydrogen evolution reaction (HER). This scientific review explores recent advancements in MXene-based electrocatalysts for HER kinetics. It discusses various compositions, functionalities, and explicit design principles while providing a comprehensive overview of synthesis methods, exceptional properties, and electrocatalytic approaches for H2 production via electrochemical reactions. Furthermore, challenges and future prospects in designing MXene-based electrocatalysts with enhanced kinetics are highlighted, emphasizing the potential of incorporating different metals to expand the scope of electrochemical reactions. This review suggests possible efforts for developing advanced MXene-based electrocatalysts, particularly for efficient H2 generation through electrochemical water-splitting reactions.
Abstract: Taurine is a sulfur-containing, semi-essential amino acid that occurs naturally in the body. It attenuates inflammation and oxidative stress-mediated injury in various disease models. As part of its limiting functions, taurine also modulates endoplasmic reticulum stress, Ca2+ homeostasis, and neuronal activity at the molecular level. Taurine effectively protects against a number of neurological disorders, including stroke, epilepsy, cerebral ischemia, memory dysfunction, and spinal cord injury. Although various therapies are available, effective management of these disorders remains a global challenge. Approximately 30 million people are affected worldwide. The design of taurine formulations could lead to potential drugs/supplements for the health maintenance and treatment of central nervous system disorders. The general neuroprotective effects of taurine and the various possible underlying mechanisms are discussed in this review. This article is a good resource for understanding the general effects of taurine on various diseases. Given the strong evidence for the neuropharmacological efficacy of taurine in various experimental paradigms, it is concluded that this molecule should be considered and further investigated as a potential candidate for neurotherapeutics, with emphasis on mechanism and clinical studies to determine efficacy.
Funding: the Researchers Supporting Project (RSP2023R395), King Saud University, Riyadh, Saudi Arabia.
Abstract: The distinction and precise identification of tumor nodules are crucial for timely lung cancer diagnosis and planning intervention. This research work addresses the major issues pertaining to the field of medical image processing while focusing on lung cancer Computed Tomography (CT) images. In this context, the paper proposes an improved lung cancer segmentation technique based on the strengths of nature-inspired approaches. The better resolution of CT is exploited to distinguish healthy subjects from those who have lung cancer. In this process, the visual challenges of K-means are addressed by integrating four nature-inspired swarm-intelligence techniques. The techniques experimented with in this paper are K-means with Artificial Bee Colony (ABC), K-means with Cuckoo Search Algorithm (CSA), K-means with Particle Swarm Optimization (PSO), and K-means with Firefly Algorithm (FFA). Testing and evaluation are performed on the Early Lung Cancer Action Program (ELCAP) database. The simulation analysis is performed on the lung cancer image set against the metrics: precision, sensitivity, specificity, f-measure, accuracy, Matthews Correlation Coefficient (MCC), Jaccard, and Dice. The detailed evaluation shows that K-means with Cuckoo Search Algorithm (CSA) significantly improved the quality of lung cancer segmentation in comparison to the other optimization approaches. The results show that the proposed approach (K-means with CSA) achieves precision, sensitivity, and F-measure of 0.942, 0.964, and 0.953, respectively, and an average accuracy of 93%. The experimental results prove that K-means with ABC, K-means with PSO, K-means with FFA, and K-means with CSA achieved improvements of 10.8%, 13.38%, 13.93%, and 15.7%, respectively, in the accuracy measure in comparison to K-means segmentation of lung cancer images. Further, the proposed K-means with CSA achieved a significant improvement in accuracy, and hence can be utilized by researchers for improved segmentation of medical image datasets and identification of the targeted region of interest.
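The K-means core shared by all four hybrid pipelines above, stripped down to 1D intensity clustering in pure Python. This is only the baseline the paper improves on: the swarm-based seeding (ABC/CSA/PSO/FFA) is replaced here by fixed, assumed initial centroids.

```python
def kmeans_1d(values, centroids, iters=10):
    """Plain k-means on scalar intensities.
    Returns (labels, centroids) after `iters` assignment/update rounds."""
    cents = list(centroids)
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: nearest centroid per intensity value.
        labels = [min(range(len(cents)), key=lambda k: abs(v - cents[k]))
                  for v in values]
        # Update step: centroid = mean of its assigned members.
        for k in range(len(cents)):
            members = [v for v, l in zip(values, labels) if l == k]
            if members:
                cents[k] = sum(members) / len(members)
    return labels, cents
```

In the paper's setting the swarm optimizer searches for centroid seeds that avoid the poor local optima this fixed initialization can fall into.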
Funding: supported by the Researchers Supporting Project number (RSP2024R 34), King Saud University, Riyadh, Saudi Arabia.
Abstract: In numerous real-world healthcare applications, handling incomplete medical data poses significant challenges for missing value imputation and subsequent clustering or classification tasks. Traditional approaches often rely on statistical methods for imputation, which may yield suboptimal results and be computationally intensive. This paper aims to integrate imputation and clustering techniques to enhance the classification of incomplete medical data with improved accuracy. Conventional classification methods are ill-suited for incomplete medical data. To enhance efficiency without compromising accuracy, this paper introduces a novel approach that combines imputation and clustering for the classification of incomplete data. Initially, the linear interpolation imputation method is applied alongside an iterative Fuzzy c-means clustering method, followed by a classification algorithm. The effectiveness of the proposed approach is evaluated using multiple performance metrics, including accuracy, precision, specificity, and sensitivity. The encouraging results demonstrate that our proposed method surpasses classical approaches across various performance criteria.
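The first stage of the pipeline above, linear-interpolation imputation, can be sketched in pure Python. Assumptions: `None` marks a missing value, edge gaps are filled by copying the nearest known value, and the iterative fuzzy c-means and classifier stages are omitted.

```python
def interpolate_missing(xs):
    """Fill None gaps linearly between the nearest known neighbours."""
    xs = list(xs)
    known = [i for i, v in enumerate(xs) if v is not None]
    if not known:
        return xs  # nothing to interpolate from
    for i, v in enumerate(xs):
        if v is not None:
            continue
        left = max((k for k in known if k < i), default=None)
        right = min((k for k in known if k > i), default=None)
        if left is None:
            xs[i] = xs[right]        # leading gap: copy first known value
        elif right is None:
            xs[i] = xs[left]         # trailing gap: copy last known value
        else:
            frac = (i - left) / (right - left)
            xs[i] = xs[left] + frac * (xs[right] - xs[left])
    return xs
```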
Abstract: Multimodal medical image fusion has attained immense popularity in recent years due to its robust technology for clinical diagnosis. It fuses multiple images into a single image to improve image quality by retaining significant information, aiding diagnostic practitioners in diagnosing and treating many diseases. However, recent image fusion techniques have encountered several challenges, including fusion artifacts, algorithm complexity, and high computing costs. To solve these problems, this study presents a novel medical image fusion strategy that combines the benefits of pixel significance with edge-preserving processing to achieve the best fusion performance. First, the method employs a cross-bilateral filter (CBF) that utilizes one image to determine the kernel and the other for filtering, and vice versa, by considering both the geometric closeness and the gray-level similarities of neighboring pixels without smoothing edges. The outputs of the CBF are then subtracted from the original images to obtain detail images. The method further uses edge-preserving processing that combines linear low-pass filtering with a non-linear technique enabling the selection of relevant regions in the detail images while maintaining structural properties. These regions are selected using morphologically processed linear filter residuals to identify the significant regions with high-amplitude edges and adequate size. The outputs of low-pass filtering are fused with the meaningfully restored regions to reconstruct the original shape of the edges. In addition, weights are computed from these reconstructed images and fused with the original input images to produce the final fusion result by estimating the strength of horizontal and vertical details. Numerous standard quality evaluation metrics with complementary properties are used for objective comparison with existing, well-known algorithms to validate the fusion results. Experimental results from the proposed research exhibit superior performance compared to other competing techniques in both qualitative and quantitative evaluation. Moreover, the proposed method has lower computational complexity and execution time while improving diagnostic computing accuracy; owing to this lower complexity, the efficiency of the fusion method is high in practical applications. The results reveal that the proposed method exceeds the latest state-of-the-art methods in terms of providing detailed information, edge contour, and overall contrast.
Abstract: Skin cancer has been recognized as one of the most lethal and complex types of cancer for over a decade. The diagnosis of skin cancer is of paramount importance, yet the process is intricate and challenging. The analysis and modeling of human skin pose significant difficulties due to its asymmetrical nature, the visibility of dense hair, and the presence of various substitute characteristics. The texture of the epidermis is notably different from that of normal skin, and these differences are often evident in cases of unhealthy skin. As a consequence, the development of an effective method for monitoring skin cancer has seen little progress. Moreover, the task of diagnosing skin cancer from dermoscopic images is particularly challenging. It is crucial to diagnose skin cancer at an early stage, even though the procedure is expensive. Unfortunately, the advancement of diagnostic techniques for skin cancer has been limited. To address this issue, there is a need for a more accurate and efficient method for identifying and categorizing skin cancer cases. This involves the evaluation of specific characteristics to distinguish between benign and malignant skin cancer occurrences. We present and evaluate several segmentation techniques, categorized into three main types: thresholding, edge-based, and region-based. These techniques are applied to a dataset of 200 benign and melanoma lesions from the Hospital Pedro Hispano (PH2) collection. The evaluation is based on twelve distinct metrics, designed to measure various types of errors with particular clinical significance. Additionally, we assess the effectiveness of these techniques independently for three different types of lesions: melanocytic nevi, atypical nevi, and melanomas. The first technique classifies lesions into two categories, atypical nevi and melanoma, achieving the highest accuracy score of 90.00% with the Otsu (3-level) method. The second technique also classifies lesions into two categories, common nevi and melanoma, achieving a score of 90.80% with the Binarized Sauvola method.
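The best-scoring technique above is based on Otsu thresholding. The classic single-level version is shown below on a toy intensity list (the 3-level extension and Sauvola's local method are not reproduced here).

```python
def otsu_threshold(pixels, levels=256):
    """Return the intensity threshold that maximizes between-class
    variance (Otsu's method) over a list of integer pixel values."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = s0 = 0  # running count and intensity sum of the background class
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        s0 += t * hist[t]
        m0 = s0 / w0                    # background mean
        m1 = (total_sum - s0) / w1      # foreground mean
        var = w0 * w1 * (m0 - m1) ** 2  # between-class variance (unnormalized)
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Pixels at or below the returned threshold form one class (e.g. lesion vs. background in a dermoscopic image).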
Abstract: The widespread adoption of the Internet of Things (IoT) has transformed various sectors globally, making them more intelligent and connected. However, this advancement comes with challenges related to the effectiveness of IoT devices. These devices, present in offices, homes, industries, and more, need constant monitoring to ensure their proper functionality. The success of smart systems relies on their seamless operation and ability to handle faults. Sensors, crucial components of these systems, gather data and contribute to their functionality. Therefore, sensor faults can compromise the system's reliability and undermine the trustworthiness of smart environments. To address these concerns, various techniques and algorithms can be employed to enhance the performance of IoT devices through effective fault detection. This paper conducted a thorough review of the existing literature and a detailed analysis. This analysis effectively links sensor errors with prominent fault detection techniques capable of addressing them. This study is innovative because it paves the way for future researchers to explore errors that have not yet been tackled by existing fault detection methods. Significantly, the paper also highlights essential factors for selecting and adopting fault detection techniques, as well as the characteristics of datasets and their corresponding recommended techniques. Additionally, the paper presents a methodical overview of fault detection techniques employed in smart devices, including the metrics used for evaluation. Furthermore, the paper examines the body of academic work related to sensor faults and fault detection techniques within the domain. This reflects the growing inclination and scholarly attention of researchers and academicians toward strategies for fault detection within the realm of the Internet of Things.
Abstract: The proliferation of IoT devices requires innovative approaches to gaining insights while preserving privacy and resources amid unprecedented data generation. However, FL development for IoT is still in its infancy and needs to be explored in various areas to understand the key challenges for deployment in real-world scenarios. This paper systematically reviewed the available literature using the PRISMA guiding principle. The study aims to provide a detailed overview of the increasing use of FL in IoT networks, including the architecture and challenges. A systematic review approach is used to collect, categorize, and analyze FL-IoT-based articles. A search was performed in the IEEE, Elsevier, Arxiv, ACM, and WOS databases, and 92 articles were finally examined. The inclusion criteria were publication in English with the keywords "FL" and "IoT". The methodology begins with an overview of recent advances in FL and the IoT, followed by a discussion of how these two technologies can be integrated. To be more specific, we examine and evaluate the capabilities of FL by discussing communication protocols, frameworks, and architecture. We then present a comprehensive analysis of the use of FL in a number of key IoT applications, including smart healthcare, smart transportation, smart cities, smart industry, smart finance, and smart agriculture. The key findings from this analysis of FL-IoT services and applications are also presented. Finally, we performed a comparative analysis of FL with IID (independent and identically distributed) and non-IID data against traditional centralized deep learning (DL) approaches. We concluded that FL has better performance, especially in terms of privacy protection and resource utilization. FL is excellent for preserving privacy because model training takes place on individual devices or edge nodes, eliminating the need for centralized data aggregation, which poses significant privacy risks. To facilitate development in this rapidly evolving field, the insights presented are intended to help practitioners and researchers navigate the complex terrain of FL and IoT.
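The central aggregation step that lets FL train without pooling raw data is typically federated averaging (FedAvg). A minimal sketch with hypothetical client parameter vectors, weighted by local sample counts:

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine client parameter vectors, weighting
    each client by its share of the total training samples."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]
```

Only model parameters leave each device; the server never sees the local (possibly non-IID) data, which is the privacy property the review highlights.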
Abstract: Enhancing the interconnection of devices and systems, the Internet of Things (IoT) is a paradigm-shifting technology. IoT security concerns remain substantial despite its extraordinary advantages. This paper offers an extensive review of IoT security, emphasizing the technology's architecture, important security elements, and common attacks. It highlights how important artificial intelligence (AI) is to bolstering IoT security, especially when it comes to addressing risks at different IoT architecture layers. We systematically examined current mitigation strategies and their effectiveness, highlighting contemporary challenges with practical solutions and case studies from a range of industries, such as healthcare, smart homes, and industrial IoT. Our results highlight the importance of AI methods that are lightweight and improve security without compromising devices' limited resources and computational capability. IoT networks can ensure operational efficiency and resilience by proactively identifying and countering security risks using machine learning capabilities. This study provides a comprehensive guide for practitioners and researchers aiming to understand the intricate connection between IoT, security challenges, and AI-driven solutions.
Funding: supported by the Researchers Supporting Program at King Saud University, Researchers Supporting Project number (RSPD2024R867), King Saud University, Riyadh, Saudi Arabia.
Abstract: Brain tumor is a global issue from which many people suffer, and its early diagnosis can make treatment more efficient. Identifying different types of brain tumors, including gliomas, meningiomas, and pituitary tumors, as well as confirming the absence of tumors, poses a significant challenge using MRI images. Current approaches predominantly rely on traditional machine learning and basic deep learning methods for image classification. These methods often rely on manual feature extraction and basic convolutional neural networks (CNNs). Their limitations include inadequate accuracy, poor generalization to new data, and limited ability to manage the high variability in MRI images. Utilizing the EfficientNetB3 architecture, this study presents a groundbreaking approach in the computational engineering domain, enhancing MRI-based brain tumor classification. Our approach highlights a major advancement in employing sophisticated machine learning techniques within Computer Science and Engineering, showcasing a highly accurate framework with significant potential for healthcare technologies. The model achieves an outstanding 99% accuracy, exhibiting balanced precision, recall, and F1-scores across all tumor types, as detailed in the classification report. This successful implementation demonstrates the model's potential as an essential tool for diagnosing and classifying brain tumors, marking a notable improvement over current methods. The integration of such advanced computational techniques in medical diagnostics can significantly enhance accuracy and efficiency, paving the way for wider application. This research highlights the revolutionary impact of deep learning technologies in improving diagnostic processes and patient outcomes in neuro-oncology.
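The per-class precision, recall, and F1 figures reported in such a classification report can be derived from a confusion matrix as follows (the 2-class toy counts below are illustrative, not the paper's data):

```python
def per_class_metrics(confusion):
    """confusion[i][j] = number of samples of true class i predicted as class j.
    Returns (precision, recall, f1) lists, one entry per class."""
    n = len(confusion)
    precision, recall, f1 = [], [], []
    for k in range(n):
        tp = confusion[k][k]
        fp = sum(confusion[i][k] for i in range(n)) - tp  # predicted k, wrongly
        fn = sum(confusion[k]) - tp                       # true k, missed
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        precision.append(p)
        recall.append(r)
        f1.append(2 * p * r / (p + r) if p + r else 0.0)
    return precision, recall, f1
```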
Funding: supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R432), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Breast cancer is a type of cancer responsible for higher mortality rates among women. The cruelty of breast cancer always requires a promising approach for its earlier detection. In light of this, the proposed research leverages the representation ability of the pretrained EfficientNet-B0 model and the classification ability of the XGBoost model for the binary classification of breast tumors. In addition, the above transfer learning model is modified in such a way that it focuses more on tumor cells in the input mammogram. Accordingly, the work proposes an EfficientNet-B0 with a Spatial Attention Layer and XGBoost (ESA-XGBNet) for the binary classification of mammograms. For this, the work is trained, tested, and validated using original and augmented mammogram images of three public datasets, namely the CBIS-DDSM, INbreast, and MIAS databases. Maximum classification accuracies of 97.585% (CBIS-DDSM), 98.255% (INbreast), and 98.91% (MIAS) are obtained using the proposed ESA-XGBNet architecture as compared with existing models. Furthermore, the decision-making of the proposed ESA-XGBNet architecture is visualized and validated using the Attention-Guided GradCAM-based Explainable AI technique.
Funding: this work was supported by the Taif University Researchers Supporting Project (TURSP) under number (TURSP-2020/73), Taif University, Taif, Saudi Arabia.
Abstract: In the agricultural sciences, detection of diseases is one of the most challenging tasks. The misinterpretation of plant diseases often leads to wrong pesticide selection, resulting in damage to crops. Hence, the automatic recognition of diseases at earlier stages is important as well as economical for better quality and quantity of fruits. Computer-aided detection (CAD) has proven to be a supportive tool for disease detection and classification, allowing the identification of diseases and reducing the rate of degradation of fruit quality. In this research work, a model based on a convolutional neural network with 19 convolutional layers has been proposed for effective and accurate classification of Marsonina Coronaria and Apple Scab diseases from apple leaves. For this, a database of 50,000 images has been acquired by collecting images of leaves from apple farms of Himachal Pradesh (H.P.) and Uttarakhand (India). An augmentation technique has been performed on the dataset to increase the number of images and improve accuracy. The performance of the proposed model has been compared with two new Convolutional Neural Network (CNN) models having 8 and 9 layers, respectively. The proposed model has also been compared with standard machine learning classifiers, namely Support Vector Machine, k-Nearest Neighbour, Random Forest, and Logistic Regression. From the experimental results, it has been observed that the proposed model outperformed the other CNN-based models and machine learning models with an accuracy of 99.2%.
Abstract: Novel Coronavirus Disease (COVID-19) is a communicable disease that originated during December 2019, when China officially informed the World Health Organization (WHO) regarding the constellation of cases of the disease in the city of Wuhan. Subsequently, the disease started spreading to the rest of the world. At that point in time, no specific vaccine or medicine was available for the prevention and cure of the disease. Several research works were being carried out in the fields of medicinal and pharmaceutical sciences, aided by data analytics and machine learning, in the direction of treatment and early detection of this viral disease. The present report describes the use of machine learning algorithms [Linear and Logistic Regression, Decision Tree (DT), K-Nearest Neighbor (KNN), Support Vector Machine (SVM), and SVM with Grid Search] for the prediction and classification in relation to COVID-19. The data used for experimentation was the COVID-19 dataset acquired from the Center for Systems Science and Engineering (CSSE), Johns Hopkins University (JHU). The assimilated results indicated that the risk period for the patients is 12-14 days, beyond which the probability of survival of the patient may increase. In addition, it was also indicated that the probability of death in COVID cases increases with age. The death probability was found to be higher in males as compared to females. SVM with Grid Search demonstrated the highest accuracy of approximately 95%, followed by the decision tree algorithm with an accuracy of approximately 94%. The present study and analysis pave a way in the direction of attribute correlation, estimation of survival days, and the prediction of death probability. The findings of the present study clearly indicate that machine learning algorithms have strong capabilities of prediction and classification in relation to COVID-19 as well.
Abstract: Change detection is a standard tool to extract and analyze the earth's surface features from remotely sensed data. Among the different change detection techniques, change vector analysis (CVA) has an exceptional advantage of discriminating change in terms of change magnitude and vector direction from multispectral bands. The estimation of a precise threshold is one of the most crucial tasks in CVA to separate the change pixels from unchanged pixels, because the overall assessment of a change detection method is highly dependent on the selected threshold value. In recent years, the integration of fuzzy clustering and remotely sensed data has become an appropriate and realistic choice for change detection applications. The novelty of the proposed model lies in the use of fuzzy maximum likelihood classification (FMLC) as fuzzy clustering in CVA. The FMLC-based CVA is implemented using diverse threshold determination algorithms such as double-window flexible pace search (DFPS), interactive trial and error (T&E), and 3x3-pixel kernel window (PKW). Unlike existing CVA techniques, the addition of fuzzy clustering in CVA permits each pixel to have multiple class categories and offers ease in the threshold determination process. In the present work, the comparative analysis has highlighted the performance of FMLC-based CVA over improved SCVA both in terms of accuracy assessment and operational complexity. Among all the examined threshold searching algorithms, FMLC-based CVA using the DFPS algorithm is found to be the most efficient method.
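The core of CVA, the per-pixel change magnitude across spectral bands compared against a threshold, fits in a few lines of pure Python (the FMLC clustering and DFPS threshold search are beyond this sketch; pixels here are tuples of band values, and the threshold is assumed given):

```python
import math

def change_magnitude(before, after):
    """Euclidean norm of the spectral difference vector for one pixel."""
    return math.sqrt(sum((b2 - b1) ** 2 for b1, b2 in zip(before, after)))

def change_map(img_t1, img_t2, threshold):
    """Label each pixel 1 (changed) or 0 (unchanged) by magnitude thresholding."""
    return [1 if change_magnitude(p1, p2) > threshold else 0
            for p1, p2 in zip(img_t1, img_t2)]
```

The threshold-search algorithms the abstract compares (DFPS, T&E, PKW) all aim to pick the `threshold` value that best separates these two populations of magnitudes.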
Funding: this work was supported by the Taif University Researchers Supporting Project Number (TURSP-2020/254).
Abstract: COVID-19, being the virus of fear and anxiety, is one of the most recent and emergent of various respiratory disorders. It is similar to MERS-CoV and SARS-CoV, the viruses that affected a large population of different countries in the years 2012 and 2002, respectively. Various standard models have been used for COVID-19 epidemic prediction, but they suffered from low accuracy due to lesser data availability and a high level of uncertainty. The proposed approach used a machine-learning-based time-series Facebook NeuralProphet model for predicting the number of deaths as well as confirmed cases, and compared it with the Poisson Distribution and Random Forest models. The analysis of the dataset was performed considering the time duration from January 1st, 2020 to July 16th, 2021. The model was developed to obtain forecast values until September 2021. This study aimed to determine the pandemic prediction of COVID-19 in the second wave of coronavirus in India using the latest time-series model, to observe and predict the coronavirus pandemic situation across the country. In India, cases have been rapidly increasing day by day since mid-February 2021. The prediction of the death rate using the proposed model has a good ability to forecast the COVID-19 dataset, essentially in the second wave. To empower the prediction for future validation, the proposed model works effectively.
Funding: this work was supported by Taif University Researchers Supporting Project Number (TURPS-2020/114), Taif University, Taif, Saudi Arabia.
Abstract: Biomedical image analysis has benefited considerably from recent technological advances, bringing about a paradigm shift towards 'automation' and 'error-free diagnosis' classification methods with markedly improved diagnostic productivity and cost effectiveness. This paper proposes an automated deep learning model to diagnose skin disease at an early stage using dermoscopy images. The proposed model has four convolutional layers, two maxpool layers, one fully connected layer, and three dense layers. All the convolutional layers use a kernel size of 3x3, whereas the maxpool layers use a kernel size of 2x2. The dermoscopy images are taken from the HAM10000 dataset. The proposed model is compared with three different ResNet models: ResNet18, ResNet50, and ResNet101. The models are simulated with a batch size of 32 and the Adadelta optimizer. The proposed model obtained the best accuracy value of 0.96, whereas the ResNet101 model obtained 0.90, ResNet50 obtained 0.89, and ResNet18 obtained 0.86. Therefore, features obtained from the proposed model are more capable of improving the classification performance of multiple skin disease classes. This model can be used for early diagnosis of skin disease and can also act as a second-opinion tool for dermatologists.
Abstract: This study empirically analyzes teaching-learning-based optimization (TLBO) combined with machine learning algorithms, using k-means and fuzzy c-means (FCM) for individual performance evaluation in clustering and classification. In the first phase, the clustering algorithms (k-means and FCM) were employed independently and clustering accuracy was evaluated using different computational measures. During the second phase, the non-clustered data obtained from the first phase were preprocessed with TLBO, performed with k-means (TLBO-KM) and with FCM (TLBO-FCM). The objective function was determined by considering both minimization and maximization criteria, and the non-clustered data from the first phase were further fed as input for threshold optimization. Five benchmark datasets from the University of California, Irvine (UCI) Machine Learning Repository were considered for comparative study and experimentation: Breast Cancer Wisconsin (BCW), Pima Indians Diabetes, Heart-Statlog, Hepatitis, and Cleveland Heart Disease. The combined average accuracy is approximately 99.4% for TLBO-KM and 98.6% for TLBO-FCM, and the approach is also capable of finding the dominating attributes. The findings indicate that TLBO-KM/FCM, evaluated across different computational measures, performs well on non-clustered data where k-means and FCM, employed independently, fail to provide significant results. Across different feature sets, TLBO-KM/FCM and SVM (GS) clearly outperformed all other classifiers in terms of sensitivity, specificity, and accuracy. TLBO-KM/FCM attained the highest average sensitivity (98.7%), specificity (98.4%), and accuracy (99.4%) under 10-fold cross-validation with different test data.
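The first-phase clustering stage can be sketched as plain Lloyd's k-means on toy data (an illustrative baseline, not the paper's TLBO-enhanced pipeline; the two Gaussian blobs stand in for a UCI dataset):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's k-means: alternate nearest-center assignment and
    center recomputation until the centers stop moving."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean).
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        new_centers = np.array([X[labels == c].mean(axis=0) if np.any(labels == c)
                                else centers[c] for c in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Two well-separated toy blobs stand in for a benchmark dataset.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
labels, centers = kmeans(X, k=2)
print(sorted(np.bincount(labels).tolist()))  # both 50-point blobs recovered
```

In the paper's second phase, the cases this baseline fails to cluster well are what TLBO preprocesses before re-clustering (TLBO-KM/TLBO-FCM).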
Funding: The authors are thankful for the support of Taif University Researchers Supporting Project No. (TURSP-2020/10), Taif University, Taif, Saudi Arabia.
Abstract: In Software-Defined Networks (SDN), the decoupling of the control plane from the data plane provides a unique platform for developing a programmable and flexible network. A single controller cannot handle the heavy traffic load generated by diverse intelligent devices because of its restricted capacity. To manage this, multiple controllers must be deployed on the control plane to achieve quality network performance and robustness. The flow of data through the multiple controllers also varies, resulting in an unequal distribution of load between different controllers. One major drawback of multiple controllers is the static configuration of the switch-to-controller mapping, which quickly leads to an unequal load distribution between controllers. Against this background, Software-Defined Vehicular Networking (SDVN) has evolved as a configurable and scalable network that has quickly attracted interest in wireless communications from research groups, businesses, and industry administrations. In this paper, we propose a latency-based load balancing algorithm for multiple SDN controllers. It acknowledges the evolving relationship between real-time latency and controller load. By choosing the required latency and resolving multiple overloads simultaneously, the proposed algorithm solves the load-balancing problem of multiple overloaded controllers in the SDN control plane. In addition to migration, our algorithm improves latency by 25% compared with existing algorithms.
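A latency-bounded migration policy of the kind described above can be sketched as follows. This is a hypothetical reading, not the paper's exact algorithm: an overloaded controller hands switches to the least-loaded peer whose inter-controller latency stays under a threshold. The controller names, loads, and latency matrix are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Controller:
    name: str
    load: float       # current flow-setup load, requests/s
    capacity: float   # maximum sustainable load

def migrate_targets(controllers, latency_ms, max_latency_ms=10.0):
    """For each overloaded controller, pick the least-utilized peer that is
    both under capacity and reachable within the latency budget."""
    plan = {}
    for src in controllers:
        if src.load <= src.capacity:
            continue  # not overloaded, nothing to migrate
        candidates = [c for c in controllers
                      if c is not src
                      and c.load < c.capacity
                      and latency_ms[(src.name, c.name)] <= max_latency_ms]
        if candidates:
            # Lowest utilization ratio wins the migrated switches.
            plan[src.name] = min(candidates, key=lambda c: c.load / c.capacity).name
    return plan

ctrls = [Controller("c1", 120, 100), Controller("c2", 40, 100), Controller("c3", 70, 100)]
lat = {("c1", "c2"): 8.0, ("c1", "c3"): 3.0, ("c2", "c3"): 5.0,
       ("c2", "c1"): 8.0, ("c3", "c1"): 3.0, ("c3", "c2"): 5.0}
print(migrate_targets(ctrls, lat))  # {'c1': 'c2'}
```

Note how the latency constraint can override pure load balancing: if the budget were tightened to 5 ms, c1 would have to migrate to the more heavily loaded c3 instead.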
Abstract: The synthesis of visual information from multiple medical imaging inputs into a single fused image, without loss of detail or distortion, is known as multimodal medical image fusion. It improves the quality of biomedical images by preserving detailed features, advancing the clinical utility of medical imaging for the analysis and treatment of medical disorders. This study develops a novel approach to fusing multimodal medical images utilizing anisotropic diffusion (AD) and the non-subsampled contourlet transform (NSCT). First, the method employs anisotropic diffusion to decompose the input images into base and detail layers, coarsely separating the structural and textural information of the inputs. The detail and base layers are then combined using a sum-based fusion rule that maximizes the contrast level of noise filtering while effectively preserving most of the structural and textural details. NSCT is used to further decompose these images into low- and high-frequency coefficients, which are then combined independently using a principal component analysis/Karhunen-Loeve (PCA/KL) based fusion rule that reinforces eigenfeatures in the fusion results. An NSCT-based multiresolution analysis is performed on the combined salient feature information and the contrast-enhanced fusion coefficients, and finally an inverse NSCT is applied to each coefficient to produce the final fusion result. Experimental results on a publicly accessible dataset, including comparative studies on three pairs of medical images from different modalities and health conditions, demonstrate the advantage of the proposed technique. Our approach offers better visual and more robust performance with better objective measurements, since it preserves significant salient features precisely without producing abnormal information in both qualitative and quantitative analysis.
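The PCA/KL fusion rule mentioned above can be sketched in a few lines: the principal eigenvector of the 2×2 covariance of the two flattened coefficient maps supplies each source's contribution weight. This is an illustrative reading of that rule under assumed normalization, not the paper's exact implementation, and the random arrays merely stand in for NSCT subband coefficients.

```python
import numpy as np

def pca_fusion_weights(a, b):
    """Convex fusion weights for two coefficient maps via the principal
    eigenvector of their 2x2 covariance matrix (PCA/KL fusion rule)."""
    data = np.vstack([a.ravel(), b.ravel()])
    cov = np.cov(data)
    eigvals, eigvecs = np.linalg.eigh(cov)
    principal = np.abs(eigvecs[:, np.argmax(eigvals)])
    return principal / principal.sum()  # normalize so the weights sum to 1

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8))                    # e.g. one modality's subband
B = 0.5 * A + 0.1 * rng.normal(size=(8, 8))    # correlated second modality
wa, wb = pca_fusion_weights(A, B)
fused = wa * A + wb * B                        # weighted fused subband
print(round(float(wa + wb), 6))                # 1.0: weights are convex
```

The source with larger variance (more salient detail) receives the larger weight, which is why PCA-based rules tend to preserve the dominant structural content of each subband.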