Journal Articles
10 articles found
1. Refined Anam-Net: Lightweight Deep Learning Model for Improved Segmentation Performance of Optic Cup and Disc for Glaucoma Diagnosis
Authors: Khursheed Aurangzeb, Syed Irtaza Haider, Musaed Alhussein. Computers, Materials & Continua (SCIE, EI), 2024, Issue 7, pp. 1381-1405 (25 pages)
In this work, we introduce modifications to the Anam-Net deep neural network (DNN) model for segmenting the optic cup (OC) and optic disc (OD) in retinal fundus images to estimate the cup-to-disc ratio (CDR), a reliable measure for the early diagnosis of glaucoma. We developed a lightweight DNN model for OC and OD segmentation based on Anam-Net, incorporating an anamorphic depth embedding block. To reduce computational complexity, we employ a fixed filter size for all convolution layers in the encoder and decoder stages as the network deepens. This modification significantly reduces the number of trainable parameters, making the model lightweight and suitable for resource-constrained applications. We evaluated the model on two publicly available retinal image databases, RIM-ONE and Drishti-GS, which contain 159 and 101 images, respectively. The results demonstrate promising OC segmentation performance across most standard evaluation metrics, with comparable results for OD segmentation. For OD segmentation on RIM-ONE, we obtain an F1-score (F1), Jaccard coefficient (JC), and overlapping error (OE) of 0.950, 0.9219, and 0.0781, respectively; for OC segmentation on the same databases, the corresponding scores are 0.8481 (F1), 0.7428 (JC), and 0.2572 (OE). Based on these experimental results and the significantly lower number of trainable parameters, we conclude that the developed model is highly suitable for the early diagnosis of glaucoma through accurate CDR estimation.
Keywords: Refined Anam-Net, parameter tuning, deep learning, optic cup, optic disc, cup-to-disc ratio, glaucoma diagnosis
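The reported F1, Jaccard coefficient, and overlapping error, as well as the CDR itself, can all be derived from binary cup and disc masks. A minimal sketch follows; the vertical-diameter definition of the CDR is an assumption for illustration, since the abstract does not specify which diameter is used:

```python
import numpy as np

def dice_f1(pred, gt):
    """F1/Dice score between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum())

def jaccard(pred, gt):
    """Jaccard coefficient (IoU); the overlapping error is OE = 1 - JC."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

def cup_to_disc_ratio(cup_mask, disc_mask):
    """Vertical CDR: ratio of the vertical extents of the cup and disc masks."""
    cup_h = np.ptp(np.where(cup_mask)[0]) + 1
    disc_h = np.ptp(np.where(disc_mask)[0]) + 1
    return cup_h / disc_h

# Toy masks: disc spans rows 2..7 (height 6), cup spans rows 3..5 (height 3).
disc = np.zeros((10, 10), bool); disc[2:8, 2:8] = True
cup = np.zeros((10, 10), bool); cup[3:6, 3:7] = True
print(cup_to_disc_ratio(cup, disc))   # 3 / 6 = 0.5
print(1 - jaccard(cup, disc))         # OE between the two masks
```

In the paper's evaluation these metrics would be computed between a predicted mask and the ground-truth annotation for the same structure; the toy call above only demonstrates the arithmetic.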
2. An Implementation of Multiscale Line Detection and Mathematical Morphology for Efficient and Precise Blood Vessel Segmentation in Fundus Images
Authors: Syed Ayaz Ali Shah, Aamir Shahzad, Musaed Alhussein, Chuan Meng Goh, Khursheed Aurangzeb, Tong Boon Tang, Muhammad Awais. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2565-2583 (19 pages)
Diagnosing diseases such as glaucoma, age-related macular degeneration, cardiovascular conditions, and diabetic retinopathy involves segmenting retinal blood vessels. The task is particularly challenging for color fundus images due to non-uniform illumination, low contrast, and variations in vessel appearance, especially in the presence of different pathologies. Furthermore, the speed of the retinal vessel segmentation system is of utmost importance: with the surge of available big data, algorithm speed carries almost as much weight as accuracy. To address these challenges, we present a novel approach for retinal vessel segmentation, leveraging efficient and robust techniques based on multiscale line detection and mathematical morphology. The algorithm's performance is evaluated on two publicly available datasets, the Digital Retinal Images for Vessel Extraction (DRIVE) dataset and the Structured Analysis of the Retina (STARE) dataset. The experimental results demonstrate the effectiveness of the method, with mean accuracy values of 0.9467 on DRIVE and 0.9535 on STARE, and sensitivity values of 0.6952 on DRIVE and 0.6809 on STARE. Notably, the algorithm is competitive with state-of-the-art methods while operating at an average speed of 3.73 s per image on DRIVE and 3.75 s on STARE. These results were achieved using Matlab scripts containing multiple loops, which suggests the processing time can be further reduced by replacing loops with vectorization, allowing deployment in real-time applications. In summary, the proposed system strikes a fine balance between swift computation and accuracy on par with the best available methods in the field.
Keywords: line detector, vessel detection, localization, mathematical morphology, image processing
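The basic line detector underlying this family of methods responds strongly where the mean intensity along some oriented line segment exceeds the mean of the surrounding window; the multiscale variant combines responses at several line lengths. The sketch below follows the standard formulation and is illustrative, not the authors' exact implementation (in practice it is applied to the inverted green channel, so vessels appear bright):

```python
import numpy as np

def line_detector_response(img, length, n_angles=12):
    """For each pixel: max mean intensity along an oriented line segment of
    the given length, minus the mean of the surrounding square window."""
    h, w = img.shape
    pad = length // 2
    padded = np.pad(img, pad, mode='reflect')
    best = np.full((h, w), -np.inf)
    offsets = np.arange(length) - pad
    for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
        dy = np.rint(offsets * np.sin(theta)).astype(int)
        dx = np.rint(offsets * np.cos(theta)).astype(int)
        line_mean = np.zeros((h, w))
        for oy, ox in zip(dy, dx):
            line_mean += padded[pad + oy:pad + oy + h, pad + ox:pad + ox + w]
        best = np.maximum(best, line_mean / length)
    # Mean over the square window of the same size (plain loops for clarity;
    # a box filter would vectorize this, echoing the abstract's speed remark).
    win_mean = np.zeros((h, w))
    for oy in range(-pad, pad + 1):
        for ox in range(-pad, pad + 1):
            win_mean += padded[pad + oy:pad + oy + h, pad + ox:pad + ox + w]
    win_mean /= (2 * pad + 1) ** 2
    return best - win_mean  # large response along thin bright lines

def multiscale_response(img, lengths=(5, 9, 13)):
    """Average the single-scale responses over several line lengths."""
    return np.mean([line_detector_response(img, L) for L in lengths], axis=0)
```

The morphological post-processing the paper pairs with this detector (e.g. removing small spurious components) is omitted here.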
3. Model Agnostic Meta-Learning (MAML)-Based Ensemble Model for Accurate Detection of Wheat Diseases Using Vision Transformer and Graph Neural Networks
Authors: Yasir Maqsood, Syed Muhammad Usman, Musaed Alhussein, Khursheed Aurangzeb, Shehzad Khalid, Muhammad Zubair. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2795-2811 (17 pages)
Wheat is a critical crop, extensively consumed worldwide, and enhancing its production is essential to meet escalating demand. Diseases like stem rust, leaf rust, yellow rust, and tan spot significantly diminish wheat yield, making the early and precise identification of these diseases vital for effective disease management. With advancements in deep learning algorithms, researchers have proposed many methods for the automated detection of disease pathogens; however, accurately detecting multiple disease pathogens simultaneously remains a challenge. This challenge arises from the scarcity of RGB images covering multiple diseases, class imbalance in existing public datasets, and the difficulty of extracting features that discriminate between multiple classes of disease pathogens. In this research, a novel method is proposed based on Transfer Generative Adversarial Networks for augmenting existing data, thereby overcoming the problems of class imbalance and data scarcity. This study proposes a customized Vision Transformer (ViT) architecture in which the feature vector is obtained by concatenating features extracted from the custom ViT and from Graph Neural Networks. This paper also proposes a Model Agnostic Meta-Learning (MAML)-based ensemble classifier for accurate classification. The proposed model, validated on public datasets for wheat disease pathogen classification, achieved a test accuracy of 99.20% and an F1-score of 97.95%. Compared with existing state-of-the-art methods, the proposed model performs better in terms of accuracy, F1-score, and the number of disease pathogens detected. In the future, more diseases can be included, along with other threats such as pests and weeds.
Keywords: wheat disease detection, deep learning, vision transformer, graph neural network, model agnostic meta learning
4. Identification of Software Bugs by Analyzing Natural Language-Based Requirements Using Optimized Deep Learning Features
Authors: Qazi Mazhar ul Haq, Fahim Arif, Khursheed Aurangzeb, Noor ul Ain, Javed Ali Khan, Saddaf Rubab, Muhammad Shahid Anwar. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 4379-4397 (19 pages)
Software project outcomes heavily depend on natural language requirements, which often cause diverse interpretations and issues like ambiguities and incomplete or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods; however, prior studies do not generalize efficiently to other datasets. This paper therefore proposes a hybrid approach combining multiple techniques and explores their effectiveness on the bug identification problem. The methods involve feature selection, which reduces the dimensionality and redundancy of features and retains only the relevant ones; transfer learning, which trains and tests the model on different datasets to analyze how much of the learning carries over; and an ensemble method, which explores the performance gain from combining multiple classifiers in a model. Four National Aeronautics and Space Administration (NASA) and four Promise datasets are used in the study, showing an increase in the model's performance through better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers were combined. The results reveal that an amalgam of techniques such as those used in this study, namely feature selection, transfer learning, and ensemble methods, helps optimize software bug prediction models and yields high-performing, useful end models.
Keywords: natural language processing, software bug prediction, transfer learning, ensemble learning, feature selection
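The AUC-ROC improvement from combining classifiers can be illustrated with the rank-sum (Mann-Whitney) form of the AUC and simple score averaging (soft voting). The two classifiers and their scores below are made up for illustration and are not the paper's models:

```python
import numpy as np

def auc_roc(y_true, scores):
    """AUC-ROC via the Mann-Whitney rank-sum formulation (no tie handling)."""
    y_true = np.asarray(y_true)
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Two hypothetical classifiers that each misrank one example differently;
# averaging their scores (soft voting) corrects both mistakes.
y     = np.array([0, 0, 1, 1])
clf_a = np.array([0.1, 0.8, 0.6, 0.9])
clf_b = np.array([0.7, 0.2, 0.5, 0.9])
print(auc_roc(y, clf_a), auc_roc(y, clf_b))  # 0.75 each
print(auc_roc(y, (clf_a + clf_b) / 2))       # 1.0 for the averaged ensemble
```

This is the mechanism behind the abstract's observation that AUC-ROC rises when classifiers are combined: errors that are uncorrelated across base models cancel in the ensemble score.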
5. Deep Learning Approach for Hand Gesture Recognition: Applications in Deaf Communication and Healthcare
Authors: Khursheed Aurangzeb, Khalid Javeed, Musaed Alhussein, Imad Rida, Syed Irtaza Haider, Anubha Parashar. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 127-144 (18 pages)
Hand gestures have been a significant mode of communication since the advent of human civilization. By facilitating human-computer interaction (HCI), hand gesture recognition (HGRoc) technology is crucial for seamless and error-free HCI, and it is pivotal in healthcare and in communication for the deaf community. Despite significant advancements in computer vision-based gesture recognition for language understanding, two considerable challenges persist in this field: (a) only limited and common gestures are considered, and (b) processing multiple channels of information across a network takes substantial computational time during discriminative feature extraction. Therefore, a novel hand vision-based convolutional neural network (CNN) model named HVCNNM is proposed, offering several benefits: enhanced accuracy, robustness to variations, real-time performance, reduced channels, and scalability. Such models can be optimized for real-time performance, learn from large amounts of data, and scale to handle complex recognition tasks for efficient human-computer interaction. The proposed model was evaluated on two challenging datasets, the Massey University Dataset (MUD) and the American Sign Language (ASL) Alphabet Dataset (ASLAD), on which HVCNNM achieved scores of 99.23% and 99.00%, respectively. These results demonstrate the effectiveness of CNNs as a promising HGRoc approach, and the findings suggest that the proposed model has potential roles in applications such as sign language recognition, human-computer interaction, and robotics.
Keywords: computer vision, deep learning, gait recognition, sign language recognition, machine learning
6. Modified Anam-Net Based Lightweight Deep Learning Model for Retinal Vessel Segmentation (Cited by 1)
Authors: Syed Irtaza Haider, Khursheed Aurangzeb, Musaed Alhussein. Computers, Materials & Continua (SCIE, EI), 2022, Issue 10, pp. 1501-1526 (26 pages)
The accurate segmentation of retinal vessels is a challenging task due to the presence of various pathologies as well as the low contrast of thin vessels and non-uniform illumination. In recent years, encoder-decoder networks have achieved outstanding performance in retinal vessel segmentation, but at the cost of high computational complexity. To address these challenges and reduce computational complexity, we propose a lightweight convolutional neural network (CNN)-based encoder-decoder deep learning model for accurate retinal vessel segmentation. The proposed model consists of an encoder-decoder architecture with bottleneck layers that perform depth-wise squeezing, followed by full convolution, and finally depth-wise stretching. The inspiration for the proposed model is the recently developed Anam-Net model, which was tested on CT images for COVID-19 identification. For our lightweight model, we used a stack of two 3 × 3 convolution layers (without spatial pooling in between) instead of the single 3 × 3 convolution layer proposed in Anam-Net, to increase the receptive field and reduce the trainable parameters. The proposed method includes fewer filters in all convolutional layers than the original Anam-Net and does not increase the number of filters as resolution decreases. These modifications do not compromise segmentation accuracy, but they do make the architecture significantly lighter in terms of trainable parameters and computation time. The proposed architecture has comparatively fewer parameters (1.01 M) than Anam-Net (4.47 M), U-Net (31.05 M), SegNet (29.50 M), and most other recent works. The proposed model requires no problem-specific pre- or post-processing, nor does it rely on handcrafted features. In addition, being efficient in segmentation accuracy as well as lightweight makes the proposed method a suitable candidate for screening platforms at the point of care. We evaluated the proposed model on the open-access DRIVE, STARE, and CHASE_DB datasets. The experimental results show that the proposed model outperforms several state-of-the-art methods, such as U-Net and its variants, the fully convolutional network (FCN), SegNet, CCNet, ResWNet, the residual connection-based encoder-decoder network (RCED-Net), and the scale-space approximation network (SSANet), in terms of {dice coefficient, sensitivity (SN), accuracy (ACC), area under the ROC curve (AUC)}, with scores of {0.8184, 0.8561, 0.9669, 0.9868} on the DRIVE dataset, {0.8233, 0.8581, 0.9726, 0.9901} on the STARE dataset, and {0.8138, 0.8604, 0.9752, 0.9906} on the CHASE_DB dataset. Additionally, we performed cross-training experiments on the DRIVE and STARE datasets; the results indicate the generalization ability and robustness of the proposed model.
Keywords: Anam-Net, convolutional neural network, cross-database training, data augmentation, deep learning, fundus images, retinal vessel segmentation, semantic segmentation
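The two modifications described (stacking two 3 × 3 convolutions with no pooling in between, and keeping the filter count fixed as the network deepens) can be checked with a back-of-the-envelope parameter count. The channel width of 32 below is an assumption for illustration, not the paper's configuration:

```python
def conv_params(c_in, c_out, k=3, bias=True):
    """Trainable parameters of a 2-D convolution layer: c_out filters,
    each with c_in * k * k weights plus an optional bias."""
    return c_out * (c_in * k * k + (1 if bias else 0))

c = 32  # hypothetical fixed channel width

# Two stacked 3x3 convolutions see a 5x5 receptive field, like a single
# 5x5 convolution, but with fewer parameters:
stacked_3x3 = conv_params(c, c) + conv_params(c, c)
single_5x5 = conv_params(c, c, k=5)
print(stacked_3x3, single_5x5)  # 18496 vs 25632

# Keeping the filter count fixed across four stages, instead of doubling
# it at each downsampling as U-Net-style encoders do, shrinks the count:
fixed = sum(conv_params(c, c) for _ in range(4))
doubled = sum(conv_params(c * 2**i, c * 2**(i + 1)) for i in range(4))
print(fixed, doubled)  # 36992 vs 1567680
```

The same arithmetic, applied across the full architecture, is what drives the gap between the reported 1.01 M parameters and the 4.47 M to 31.05 M of the baselines.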
7. A Saliency Based Image Fusion Framework for Skin Lesion Segmentation and Classification (Cited by 1)
Authors: Javaria Tahir, Syed Rameez Naqvi, Khursheed Aurangzeb, Musaed Alhussein. Computers, Materials & Continua (SCIE, EI), 2022, Issue 2, pp. 3235-3250 (16 pages)
Melanoma, due to its high mortality rate, is considered one of the most pernicious types of skin cancer, mostly affecting white populations. It is now widely accepted that early detection of melanoma increases the chances of the subject's survival. Computer-aided diagnostic systems help experts diagnose skin lesions at earlier stages using machine learning techniques. In this work, we propose a framework that accurately segments, and subsequently classifies, the lesion using improved image segmentation and fusion methods. The proposed technique passes an image through two methods simultaneously: a weighted visual saliency-based method and improved HDCT-based saliency estimation. The resulting image maps are then fused using the proposed image fusion technique to generate a localized lesion region. The resulting binary image is mapped back to the RGB image and fed into the Inception-ResNet-V2 pre-trained model, trained by applying transfer learning. The simulation results show improved performance compared to several existing methods.
Keywords: skin lesion segmentation, image fusion, saliency detection, skin lesion classification, deep neural networks, transfer learning
8. Gastric Tract Disease Recognition Using Optimized Deep Learning Features (Cited by 1)
Authors: Zainab Nayyar, Muhammad Attique Khan, Musaed Alhussein, Muhammad Nazir, Khursheed Aurangzeb, Yunyoung Nam, Seifedine Kadry, Syed Irtaza Haider. Computers, Materials & Continua (SCIE, EI), 2021, Issue 8, pp. 2041-2056 (16 pages)
Artificial intelligence aids for healthcare have received a great deal of attention. Approximately one million patients with gastrointestinal diseases have been diagnosed via wireless capsule endoscopy (WCE), and early diagnosis facilitates appropriate treatment and saves lives. Deep learning-based techniques have been used to identify gastrointestinal ulcers, bleeding sites, and polyps; however, small lesions may be misclassified. We developed a deep learning-based best-feature method to classify various stomach diseases evident in WCE images. Initially, we use hybrid contrast enhancement to distinguish diseased from normal regions. Then, a pretrained model is fine-tuned, and further training is done via transfer learning. Deep features are extracted from the last two layers and fused using a vector length-based approach. We improve the genetic algorithm with a fitness function based on kurtosis to select optimal features, which are then graded by a classifier. We evaluated a database containing 24,000 WCE images of ulcers, bleeding sites, polyps, and healthy tissue. The cubic support vector machine classifier was optimal, with an average accuracy of 99%.
Keywords: stomach cancer, contrast enhancement, deep learning, optimization, feature fusion
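The fusion and selection steps can be sketched as follows. Both the zero-padding fusion and the kurtosis ranking are plausible readings of the abstract's "vector length-based" fusion and kurtosis fitness, not the paper's exact formulation:

```python
import numpy as np

def serial_fusion(f1, f2):
    """Concatenate two deep feature vectors, zero-padding the shorter one so
    both contribute equal length (one interpretation of length-based fusion)."""
    n = max(len(f1), len(f2))
    pad = lambda f: np.pad(f, (0, n - len(f)))
    return np.concatenate([pad(f1), pad(f2)])

def kurtosis(x):
    """Excess kurtosis of a feature column, used inside a selection fitness."""
    x = np.asarray(x, float)
    m, s = x.mean(), x.std()
    return ((x - m) ** 4).mean() / s**4 - 3

def select_by_kurtosis(features, k):
    """Keep the k feature columns with the highest absolute excess kurtosis,
    a simple stand-in for the paper's GA-plus-kurtosis optimization."""
    scores = np.array([abs(kurtosis(col)) for col in features.T])
    return np.argsort(scores)[::-1][:k]
```

In the paper the selection is driven by a genetic algorithm whose fitness incorporates kurtosis; the greedy ranking above only shows how kurtosis can score individual features.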
9. Analysis and Characterization of Normally-Off Gallium Nitride High Electron Mobility Transistors
Authors: Shahzaib Anwar, Sardar Muhammad Gulfam, Bilal Muhammad, Syed Junaid Nawaz, Khursheed Aurangzeb, Mohammad Kaleem. Computers, Materials & Continua (SCIE, EI), 2021, Issue 10, pp. 1021-1037 (17 pages)
The high electron mobility transistor (HEMT) based on gallium nitride (GaN) is one of the most promising candidates for future generations of high-frequency and high-power electronic applications. This work aims at the design and characterization of an enhancement-mode, or normally-off, GaN HEMT. The impact of variations in gate length, mole concentration, barrier composition, and other important design parameters on the performance of the normally-off GaN HEMT is thoroughly investigated. Increasing the gate length decreases the drain current and transconductance, while increasing the aluminium (Al) concentration increases both. For Al mole fractions of 23%, 25%, and 27% within the aluminium gallium nitride (AlGaN) barrier, the GaN HEMT devices provide maximum drain currents of 347, 408, and 474 mA/μm and transconductances of 19, 20.2, and 21.5 mS/μm, respectively, whereas for Al mole fractions of 10% and 15% within the AlGaN buffer, the devices provide drain currents of 329 and 283 mA/μm, respectively. Furthermore, for gate lengths of 2.4, 3.4, and 4.4 μm, the device exhibits maximum drain currents of 272, 235, and 221 mA/μm and transconductances of 16.2, 14, and 12.3 mS/μm, respectively. It is established that a maximum drain current of 997 mA/μm can be achieved with an Al concentration of 23%, and the device exhibits a steady drain current with enhanced transconductance. These observations demonstrate the tremendous potential of the two-dimensional electron gas (2DEG) for securing normally-off mode operation. A suitable choice of gate length and other design parameters is critical for preserving normally-off operation while simultaneously enhancing the critical performance parameters. Due to its normally-on, depletion-mode nature, the conventional GaN HEMT is usually not considered suitable for high power levels, frequencies, and temperatures, since a negative bias is required to enter the blocking condition; in the normally-off devices described here, the negative bias can be avoided and the channel is depleted without applying a negative bias.
Keywords: high electron mobility, GaN HEMT, bipolar transistors, gallium nitride, heterojunctions, MOS devices
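Transconductance, the figure of merit reported alongside drain current above, is the derivative of drain current with respect to gate-source voltage, and a positive threshold voltage in the transfer sweep is precisely what marks a device as normally-off (enhancement-mode). The sweep values below are illustrative only, not the paper's data:

```python
import numpy as np

# Illustrative I_D-V_GS transfer sweep for an enhancement-mode (normally-off)
# HEMT: the device conducts only above a positive threshold voltage.
v_gs = np.linspace(0.0, 4.0, 9)                    # gate-source voltage (V)
i_d = np.array([0.0, 0.0, 0.5, 5.0, 40.0, 120.0,   # drain current, made-up
                230.0, 310.0, 347.0])              # values in mA per unit width

g_m = np.gradient(i_d, v_gs)   # transconductance g_m = dI_D/dV_GS
v_th = v_gs[np.argmax(i_d > 1.0)]  # crude threshold: first bin above 1 unit

print(f"V_th ~ {v_th:.1f} V (positive, hence normally-off)")
print(f"peak g_m = {g_m.max():.0f} (same units as I_D per volt)")
```

A depletion-mode device would show nonzero current at zero gate bias (negative threshold), requiring the negative blocking bias the abstract says these normally-off designs avoid.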
10. NPBMT: A Novel and Proficient Buffer Management Technique for Internet of Vehicle-Based DTNs
Authors: Sikandar Khan, Khalid Saeed, Muhammad Faran Majeed, Salman A. AlQahtani, Khursheed Aurangzeb, Muhammad Shahid Anwar. Computers, Materials & Continua (SCIE, EI), 2023, Issue 10, pp. 1303-1323 (21 pages)
Delay Tolerant Networks (DTNs) suffer from message delay due to the lack of end-to-end connectivity between nodes, especially when the nodes are mobile. Nodes in DTNs have limited buffer storage for delayed messages, and the instantaneous sharing of data creates a buffer shortage problem: buffer congestion occurs and no space remains for incoming messages. To address this problem, a buffer management policy named A Novel and Proficient Buffer Management Technique (NPBMT) for Internet of Vehicle-based DTNs is proposed. NPBMT combines appropriately sized messages with the lowest time-to-live (TTL) and then drops that combination of messages to accommodate newly arrived ones. To evaluate the performance of the proposed technique, it is compared with Drop Oldest (DOL), Size Aware Drop (SAD), and Drop Largest (DLA). The technique is implemented in the Opportunistic Network Environment (ONE) simulator, using the shortest-path map-based movement model with the epidemic routing protocol. The simulation results show a significant improvement in delivery probability: the proposed policy delivered 380 messages, DOL delivered 186, SAD delivered 190, and DLA delivered only 95. A significant decrease is also observed in the overhead ratio: SAD's overhead ratio is 324.37, DLA's is 266.74, and DOL's and NPBMT's are 141.89 and 52.85, respectively, revealing a substantial reduction for NPBMT compared with existing policies. The average network latency of DOL is 7785.5, of DLA 5898.42, and of SAD 5789.43, whereas NPBMT's latency average is 3909.4. This reveals that the proposed policy keeps messages in the network for a shorter time, which reduces the overhead ratio.
Keywords: delay tolerant networks, buffer management, message drop policy, ONE simulator, NPBMT
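The core idea, dropping a combination of lowest-TTL messages just large enough to admit a new arrival, can be sketched as below. This is an interpretation of the abstract, not the authors' ONE-simulator implementation:

```python
def make_room(buffer, capacity, used, new_size):
    """Sketch of a TTL-based drop policy in the spirit of NPBMT: free just
    enough space for an incoming message by dropping the messages closest
    to expiry (lowest remaining TTL) until the deficit is covered.
    buffer: list of dicts with 'id', 'size', and remaining 'ttl'."""
    need = used + new_size - capacity
    if need <= 0:
        return []  # enough free space already, drop nothing
    dropped = []
    for msg in sorted(buffer, key=lambda m: m['ttl']):  # soonest-to-expire first
        if need <= 0:
            break
        dropped.append(msg['id'])
        need -= msg['size']
    return dropped if need <= 0 else None  # None: cannot fit even after drops

buf = [{'id': 'a', 'size': 40, 'ttl': 120},
       {'id': 'b', 'size': 30, 'ttl': 10},
       {'id': 'c', 'size': 50, 'ttl': 45}]
print(make_room(buf, capacity=130, used=120, new_size=60))  # ['b', 'c']
```

Dropping near-expiry messages first is what drives the reported gains: those messages are the least likely to be delivered anyway, so sacrificing them raises delivery probability and lowers the overhead ratio.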