Speech recognition systems have become a unique human-computer interaction(HCI)family.Speech is one of the most naturally developed human abilities;speech signal processing opens up a transparent and hand-free computa...Speech recognition systems have become a unique human-computer interaction(HCI)family.Speech is one of the most naturally developed human abilities;speech signal processing opens up a transparent and hand-free computation experience.This paper aims to present a retrospective yet modern approach to the world of speech recognition systems.The development journey of ASR(Automatic Speech Recognition)has seen quite a few milestones and breakthrough technologies that have been highlighted in this paper.A step-by-step rundown of the fundamental stages in developing speech recognition systems has been presented,along with a brief discussion of various modern-day developments and applications in this domain.This review paper aims to summarize and provide a beginning point for those starting in the vast field of speech signal processing.Since speech recognition has a vast potential in various industries like telecommunication,emotion recognition,healthcare,etc.,this review would be helpful to researchers who aim at exploring more applications that society can quickly adopt in future years of evolution.展开更多
Organizations are adopting the Bring Your Own Device(BYOD)concept to enhance productivity and reduce expenses.However,this trend introduces security challenges,such as unauthorized access.Traditional access control sy...Organizations are adopting the Bring Your Own Device(BYOD)concept to enhance productivity and reduce expenses.However,this trend introduces security challenges,such as unauthorized access.Traditional access control systems,such as Attribute-Based Access Control(ABAC)and Role-Based Access Control(RBAC),are limited in their ability to enforce access decisions due to the variability and dynamism of attributes related to users and resources.This paper proposes a method for enforcing access decisions that is adaptable and dynamic,based on multilayer hybrid deep learning techniques,particularly the Tabular Deep Neural Network Tabular DNN method.This technique transforms all input attributes in an access request into a binary classification(allow or deny)using multiple layers,ensuring accurate and efficient access decision-making.The proposed solution was evaluated using the Kaggle Amazon access control policy dataset and demonstrated its effectiveness by achieving a 94%accuracy rate.Additionally,the proposed solution enhances the implementation of access decisions based on a variety of resource and user attributes while ensuring privacy through indirect communication with the Policy Administration Point(PAP).This solution significantly improves the flexibility of access control systems,making themmore dynamic and adaptable to the evolving needs ofmodern organizations.Furthermore,it offers a scalable approach to manage the complexities associated with the BYOD environment,providing a robust framework for secure and efficient access management.展开更多
Background:Sepsis,a potentially fatal inflammatory disease triggered by infection,carries significant healthimplications worldwide.Timely detection is crucial as sepsis can rapidly escalate if left undetected.Recentad...Background:Sepsis,a potentially fatal inflammatory disease triggered by infection,carries significant healthimplications worldwide.Timely detection is crucial as sepsis can rapidly escalate if left undetected.Recentadvancements in deep learning(DL)offer powerful tools to address this challenge.Aim:Thus,this study proposeda hybrid CNNBDLSTM,a combination of a convolutional neural network(CNN)with a bi-directional long shorttermmemory(BDLSTM)model to predict sepsis onset.Implementing the proposed model provides a robustframework that capitalizes on the complementary strengths of both architectures,resulting in more accurate andtimelier predictions.Method:The sepsis prediction method proposed here utilizes temporal feature extraction todelineate six distinct time frames before the onset of sepsis.These time frames adhere to the sepsis-3 standardrequirement,which incorporates 12-h observation windows preceding sepsis onset.All models were trained usingthe Medical Information Mart for Intensive Care III(MIMIC-III)dataset,which sourced 61,522 patients with 40clinical variables obtained from the IoT medical environment.The confusion matrix,the area under the receiveroperating characteristic curve(AUCROC)curve,the accuracy,the precision,the F1-score,and the recall weredeployed to evaluate themodels.Result:The CNNBDLSTMmodel demonstrated superior performance comparedto the benchmark and other models,achieving an AUCROC of 99.74%and an accuracy of 99.15%one hour beforesepsis onset.These results indicate that the CNNBDLSTM model is highly effective in predicting sepsis onset,particularly within a close proximity of one hour.Implication:The results could assist practitioners in increasingthe potential survival of the patient one hour before sepsis onset.展开更多
Automatic modulation classification(AMC) technology is one of the cutting-edge technologies in cognitive radio communications. AMC based on deep learning has recently attracted much attention due to its superior perfo...Automatic modulation classification(AMC) technology is one of the cutting-edge technologies in cognitive radio communications. AMC based on deep learning has recently attracted much attention due to its superior performances in classification accuracy and robustness. In this paper, we propose a novel, high resolution and multi-scale feature fusion convolutional neural network model with a squeeze-excitation block, referred to as HRSENet,to classify different kinds of modulation signals.The proposed model establishes a parallel computing mechanism of multi-resolution feature maps through the multi-layer convolution operation, which effectively reduces the information loss caused by downsampling convolution. Moreover, through dense skipconnecting at the same resolution and up-sampling or down-sampling connection at different resolutions, the low resolution representation of the deep feature maps and the high resolution representation of the shallow feature maps are simultaneously extracted and fully integrated, which is benificial to mine signal multilevel features. Finally, the feature squeeze and excitation module embedded in the decoder is used to adjust the response weights between channels, further improving classification accuracy of proposed model.The proposed HRSENet significantly outperforms existing methods in terms of classification accuracy on the public dataset “Over the Air” in signal-to-noise(SNR) ranging from-2dB to 20dB. The classification accuracy in the proposed model achieves 85.36% and97.30% at 4dB and 10dB, respectively, with the improvement by 9.71% and 5.82% compared to LWNet.Furthermore, the model also has a moderate computation complexity compared with several state-of-the-art methods.展开更多
Maritime transportation,a cornerstone of global trade,faces increasing safety challenges due to growing sea traffic volumes.This study proposes a novel approach to vessel trajectory prediction utilizing Automatic Iden...Maritime transportation,a cornerstone of global trade,faces increasing safety challenges due to growing sea traffic volumes.This study proposes a novel approach to vessel trajectory prediction utilizing Automatic Identification System(AIS)data and advanced deep learning models,including Long Short-Term Memory(LSTM),Gated Recurrent Unit(GRU),Bidirectional LSTM(DBLSTM),Simple Recurrent Neural Network(SimpleRNN),and Kalman Filtering.The research implemented rigorous AIS data preprocessing,encompassing record deduplication,noise elimination,stationary simplification,and removal of insignificant trajectories.Models were trained using key navigational parameters:latitude,longitude,speed,and heading.Spatiotemporal aware processing through trajectory segmentation and topological data analysis(TDA)was employed to capture dynamic patterns.Validation using a three-month AIS dataset demonstrated significant improvements in prediction accuracy.The GRU model exhibited superior performance,achieving training losses of 0.0020(Mean Squared Error,MSE)and 0.0334(Mean Absolute Error,MAE),with validation losses of 0.0708(MSE)and 0.1720(MAE).The LSTM model showed comparable efficacy,with training losses of 0.0011(MSE)and 0.0258(MAE),and validation losses of 0.2290(MSE)and 0.2652(MAE).Both models demonstrated reductions in training and validation losses,measured by MAE,MSE,Average Displacement Error(ADE),and Final Displacement Error(FDE).This research underscores the potential of advanced deep learning models in enhancing maritime safety through more accurate trajectory predictions,contributing significantly to the development of robust,intelligent navigation systems for the maritime industry.展开更多
Satellite communication systems are facing serious electromagnetic interference,and interference signal recognition is a crucial foundation for targeted anti-interference.In this paper,we propose a novel interference ...Satellite communication systems are facing serious electromagnetic interference,and interference signal recognition is a crucial foundation for targeted anti-interference.In this paper,we propose a novel interference recognition algorithm called HDCGD-CBAM,which adopts the time-frequency images(TFIs)of signals to effectively extract the temporal and spectral characteristics.In the proposed method,we improve the Convolutional Long Short-Term Memory Deep Neural Network(CLDNN)in two ways.First,the simpler Gate Recurrent Unit(GRU)is used instead of the Long Short-Term Memory(LSTM),reducing model parameters while maintaining the recognition accuracy.Second,we replace convolutional layers with hybrid dilated convolution(HDC)to expand the receptive field of feature maps,which captures the correlation of time-frequency data on a larger spatial scale.Additionally,Convolutional Block Attention Module(CBAM)is introduced before and after the HDC layers to strengthen the extraction of critical features and improve the recognition performance.The experiment results show that the HDCGD-CBAM model significantly outper-forms existing methods in terms of recognition accuracy and complexity.When Jamming-to-Signal Ratio(JSR)varies from-30dB to 10dB,it achieves an average accuracy of 78.7%and outperforms the CLDNN by 7.29%while reducing the Floating Point Operations(FLOPs)by 79.8%to 114.75M.Moreover,the proposed model has fewer parameters with 301k compared to several state-of-the-art methods.展开更多
Forecasting river flow is crucial for optimal planning,management,and sustainability using freshwater resources.Many machine learning(ML)approaches have been enhanced to improve streamflow prediction.Hybrid techniques...Forecasting river flow is crucial for optimal planning,management,and sustainability using freshwater resources.Many machine learning(ML)approaches have been enhanced to improve streamflow prediction.Hybrid techniques have been viewed as a viable method for enhancing the accuracy of univariate streamflow estimation when compared to standalone approaches.Current researchers have also emphasised using hybrid models to improve forecast accuracy.Accordingly,this paper conducts an updated literature review of applications of hybrid models in estimating streamflow over the last five years,summarising data preprocessing,univariate machine learning modelling strategy,advantages and disadvantages of standalone ML techniques,hybrid models,and performance metrics.This study focuses on two types of hybrid models:parameter optimisation-based hybrid models(OBH)and hybridisation of parameter optimisation-based and preprocessing-based hybridmodels(HOPH).Overall,this research supports the idea thatmeta-heuristic approaches precisely improveML techniques.It’s also one of the first efforts to comprehensively examine the efficiency of various meta-heuristic approaches(classified into four primary classes)hybridised with ML techniques.This study revealed that previous research applied swarm,evolutionary,physics,and hybrid metaheuristics with 77%,61%,12%,and 12%,respectively.Finally,there is still room for improving OBH and HOPH models by examining different data pre-processing techniques and metaheuristic algorithms.展开更多
The increased adoption of Internet of Medical Things (IoMT) technologies has resulted in the widespread use ofBody Area Networks (BANs) in medical and non-medical domains. However, the performance of IEEE 802.15.4-bas...The increased adoption of Internet of Medical Things (IoMT) technologies has resulted in the widespread use ofBody Area Networks (BANs) in medical and non-medical domains. However, the performance of IEEE 802.15.4-based BANs is impacted by challenges related to heterogeneous data traffic requirements among nodes, includingcontention during finite backoff periods, association delays, and traffic channel access through clear channelassessment (CCA) algorithms. These challenges lead to increased packet collisions, queuing delays, retransmissions,and the neglect of critical traffic, thereby hindering performance indicators such as throughput, packet deliveryratio, packet drop rate, and packet delay. Therefore, we propose Dynamic Next Backoff Period and Clear ChannelAssessment (DNBP-CCA) schemes to address these issues. The DNBP-CCA schemes leverage a combination ofthe Dynamic Next Backoff Period (DNBP) scheme and the Dynamic Next Clear Channel Assessment (DNCCA)scheme. The DNBP scheme employs a fuzzy Takagi, Sugeno, and Kang (TSK) model’s inference system toquantitatively analyze backoff exponent, channel clearance, collision ratio, and data rate as input parameters. Onthe other hand, the DNCCA scheme dynamically adapts the CCA process based on requested data transmission tothe coordinator, considering input parameters such as buffer status ratio and acknowledgement ratio. As a result,simulations demonstrate that our proposed schemes are better than some existing representative approaches andenhance data transmission, reduce node collisions, improve average throughput, and packet delivery ratio, anddecrease average packet drop rate and packet delay.展开更多
The increasing adoption of solar photovoltaic systems necessitates accurate forecasting of solar energy production to enhance grid stability,reliability,and economic benefits.This study explores advanced machine learn...The increasing adoption of solar photovoltaic systems necessitates accurate forecasting of solar energy production to enhance grid stability,reliability,and economic benefits.This study explores advanced machine learning(ML)and deep learning(DL)techniques for predicting solar energy generation,emphasizing the significant impact of meteorological data.A comprehensive dataset,encompassing detailed weather conditions and solar energy metrics,was collected and preprocessed to improve model accuracy.Various models were developed and trained with different preprocessing stages.Finally,three datasets were prepared.A novel hour-based prediction wrapper was introduced,utilizing external sunrise and sunset data to restrict predictions to daylight hours,thereby enhancing model performance.A cascaded stacking model incorporating association rules,weak predictors,and a modified stacking aggregation procedure was proposed,demonstrating enhanced generalization and reduced prediction errors.Results indicated that models trained on raw data generally performed better than those on stripped data.The Long Short-Term Memory(LSTM)with Inception layers’model was the most effective,achieving significant performance improvements through feature selection,data preprocessing,and innovative modeling techniques.The study underscores the potential to combine detailed meteorological data with advanced ML and DL methods to improve the accuracy of solar energy forecasting,thereby optimizing energy management and planning.展开更多
Network embedding aspires to learn a low-dimensional vector of each node in networks,which can apply to diverse data mining tasks.In real-life,many networks include rich attributes and temporal information.However,mos...Network embedding aspires to learn a low-dimensional vector of each node in networks,which can apply to diverse data mining tasks.In real-life,many networks include rich attributes and temporal information.However,most existing embedding approaches ignore either temporal information or network attributes.A self-attention based architecture using higher-order weights and node attributes for both static and temporal attributed network embedding is presented in this article.A random walk sampling algorithm based on higher-order weights and node attributes to capture network topological features is presented.For static attributed networks,the algorithm incorporates first-order to k-order weights,and node attribute similarities into one weighted graph to preserve topological features of networks.For temporal attribute networks,the algorithm incorporates previous snapshots of networks containing first-order to k-order weights,and nodes attribute similarities into one weighted graph.In addition,the algorithm utilises a damping factor to ensure that the more recent snapshots allocate a greater weight.Attribute features are then incorporated into topological features.Next,the authors adopt the most advanced architecture,Self-Attention Networks,to learn node representations.Experimental results on node classification of static attributed networks and link prediction of temporal attributed networks reveal that our proposed approach is competitive against diverse state-of-the-art baseline approaches.展开更多
Hybridizing metaheuristic algorithms involves synergistically combining different optimization techniques to effectively address complex and challenging optimization problems.This approach aims to leverage the strengt...Hybridizing metaheuristic algorithms involves synergistically combining different optimization techniques to effectively address complex and challenging optimization problems.This approach aims to leverage the strengths of multiple algorithms,enhancing solution quality,convergence speed,and robustness,thereby offering a more versatile and efficient means of solving intricate real-world optimization tasks.In this paper,we introduce a hybrid algorithm that amalgamates three distinct metaheuristics:the Beluga Whale Optimization(BWO),the Honey Badger Algorithm(HBA),and the Jellyfish Search(JS)optimizer.The proposed hybrid algorithm will be referred to as BHJO.Through this fusion,the BHJO algorithm aims to leverage the strengths of each optimizer.Before this hybridization,we thoroughly examined the exploration and exploitation capabilities of the BWO,HBA,and JS metaheuristics,as well as their ability to strike a balance between exploration and exploitation.This meticulous analysis allowed us to identify the pros and cons of each algorithm,enabling us to combine them in a novel hybrid approach that capitalizes on their respective strengths for enhanced optimization performance.In addition,the BHJO algorithm incorporates Opposition-Based Learning(OBL)to harness the advantages offered by this technique,leveraging its diverse exploration,accelerated convergence,and improved solution quality to enhance the overall performance and effectiveness of the hybrid algorithm.Moreover,the performance of the BHJO algorithm was evaluated across a range of both unconstrained and constrained optimization problems,providing a comprehensive assessment of its efficacy and applicability in diverse problem domains.Similarly,the BHJO algorithm was subjected to a comparative analysis with several renowned algorithms,where mean and standard deviation values were utilized as evaluation metrics.This rigorous comparison aimed to assess the performance of the BHJOalgorithmabout its counterparts,shedding light on its effectiveness and reliability in solving optimization problems.Finally,the obtained numerical statistics underwent rigorous analysis using the Friedman post hoc Dunn’s test.The resulting numerical values revealed the BHJO algorithm’s competitiveness in tackling intricate optimization problems,affirming its capability to deliver favorable outcomes in challenging scenarios.展开更多
Algorithms for steganography are methods of hiding data transfers in media files.Several machine learning architectures have been presented recently to improve stego image identification performance by using spatial i...Algorithms for steganography are methods of hiding data transfers in media files.Several machine learning architectures have been presented recently to improve stego image identification performance by using spatial information,and these methods have made it feasible to handle a wide range of problems associated with image analysis.Images with little information or low payload are used by information embedding methods,but the goal of all contemporary research is to employ high-payload images for classification.To address the need for both low-and high-payload images,this work provides a machine-learning approach to steganography image classification that uses Curvelet transformation to efficiently extract characteristics from both type of images.Support Vector Machine(SVM),a commonplace classification technique,has been employed to determine whether the image is a stego or cover.The Wavelet Obtained Weights(WOW),Spatial Universal Wavelet Relative Distortion(S-UNIWARD),Highly Undetectable Steganography(HUGO),and Minimizing the Power of Optimal Detector(MiPOD)steganography techniques are used in a variety of experimental scenarios to evaluate the performance of the proposedmethod.Using WOW at several payloads,the proposed approach proves its classification accuracy of 98.60%.It exhibits its superiority over SOTA methods.展开更多
Cookies are considered a fundamental means of web application services for authenticating various Hypertext Transfer Protocol(HTTP)requests andmaintains the states of clients’information over the Internet.HTTP cookie...Cookies are considered a fundamental means of web application services for authenticating various Hypertext Transfer Protocol(HTTP)requests andmaintains the states of clients’information over the Internet.HTTP cookies are exploited to carry client patterns observed by a website.These client patterns facilitate the particular client’s future visit to the corresponding website.However,security and privacy are the primary concerns owing to the value of information over public channels and the storage of client information on the browser.Several protocols have been introduced that maintain HTTP cookies,but many of those fail to achieve the required security,or require a lot of resource overheads.In this article,we have introduced a lightweight Elliptic Curve Cryptographic(ECC)based protocol for authenticating client and server transactions to maintain the privacy and security of HTTP cookies.Our proposed protocol uses a secret key embedded within a cookie.The proposed protocol ismore efficient and lightweight than related protocols because of its reduced computation,storage,and communication costs.Moreover,the analysis presented in this paper confirms that proposed protocol resists various known attacks.展开更多
In this review paper,we present a thorough investigation into the role of pavement technologies in advancing urban sustainability.Our analysis traverses the historical evolution of these technologies,meticulously eval...In this review paper,we present a thorough investigation into the role of pavement technologies in advancing urban sustainability.Our analysis traverses the historical evolution of these technologies,meticulously evaluating their socio-economic and environmental impacts,with a particular emphasis on their role in mitigating the urban heat island effect.The evaluation of pavement types and variables influencing pavement performance to be used in the multi-criteria decision-making(MCDM)framework to choose the optimal pavement application are at the heart of our research.Which serves to assess a spectrum of pavement options,revealing insights into the most effective and sustainable practices.By highlighting both the existing challenges and potential innovative solutions within thefield,this paper aims to offer a directional compass for future urban planning and infrastructural advancements.This review not only synthesizes the current state of knowledge but also aims to chart a course for future exploration,emphasizing the critical need for innovative and environmentally sensitive pavement tech-nologies in the creation of resilient and sustainable urban environments.展开更多
Contingent self-esteem captures the fragile nature of self-esteem and is often regarded as suboptimal to psychological functioning.Self-compassion is another important self-related concept assumed to promote mental he...Contingent self-esteem captures the fragile nature of self-esteem and is often regarded as suboptimal to psychological functioning.Self-compassion is another important self-related concept assumed to promote mental health and well-being.However,research on the relation of self-compassion to contingent self-esteem is lacking.Two studies were conducted to explore the role of selfcompassion,either as a personal characteristic or an induced mindset,in influencing the effects of contingent self-esteem on well-being.Study 1 recruited 256 Chinese college students(30.4%male,mean age=21.72 years)who filled out measures of contingent self-esteem,self-compassion,and well-being.The results found that self-compassion moderated the effect of contingent self-esteem on well-being.In Study 2,a sample of 90 Chinese college students(34%male,mean age=18.39 years)were randomly assigned to either a control or self-compassion group.They completed baseline trait measures of contingent self-esteem,self-compassion,and self-esteem.Then,they were led to have a 12-min break(control group)or listen to a 12-min self-compassion audio(self-compassion group),followed by a social stress task and outcome measures.The results demonstrated the effectiveness of the brief self-compassion training and its moderating role in influencing the effects of contingent self-esteem on negative affects after the social stress task.This research provides implications that to equip with a self-compassionate mindset could lower the risk of the impairment of well-being associated with elements of contingent selfesteem,which involves a fragile sense of self-worth.It may also provide insights into the development of an“optimal selfesteem”and the improvement of well-being.展开更多
Serial remote sensing images offer a valuable means of tracking the evolutionary changes and growth of a specific geographical area over time.Although the original images may provide limited insights,they harbor consi...Serial remote sensing images offer a valuable means of tracking the evolutionary changes and growth of a specific geographical area over time.Although the original images may provide limited insights,they harbor considerable potential for identifying clusters and patterns.The aggregation of these serial remote sensing images(SRSI)becomes increasingly viable as distinct patterns emerge in diverse scenarios,such as suburbanization,the expansion of native flora,and agricultural activities.In a novel approach,we propose an innovative method for extracting sequential patterns by combining Ant Colony Optimization(ACD)and Empirical Mode Decomposition(EMD).This integration of the newly developed EMD and ACO techniques proves remarkably effective in identifying the most significant characteristic features within serial remote sensing images,guided by specific criteria.Our findings highlight a substantial improvement in the efficiency of sequential pattern mining through the application of this unique hybrid method,seamlessly integrating EMD and ACO for feature selection.This study exposes the potential of our innovative methodology,particularly in the realms of urbanization,native vegetation expansion,and agricultural activities.展开更多
This paper proposes one method of feature selection by using Bayes' theorem. The purpose of the proposed method is to reduce the computational complexity and increase the classification accuracy of the selected featu...This paper proposes one method of feature selection by using Bayes' theorem. The purpose of the proposed method is to reduce the computational complexity and increase the classification accuracy of the selected feature subsets. The dependence between two attributes (binary) is determined based on the probabilities of their joint values that contribute to positive and negative classification decisions. If opposing sets of attribute values do not lead to opposing classification decisions (zero probability), then the two attributes are considered independent of each other, otherwise dependent, and one of them can be removed and thus the number of attributes is reduced. The process must be repeated on all combinations of attributes. The paper also evaluates the approach by comparing it with existing feature selection algorithms over 8 datasets from University of California, Irvine (UCI) machine learning databases. The proposed method shows better results in terms of number of selected features, classification accuracy, and running time than most existing algorithms.展开更多
The quick spread of the CoronavirusDisease(COVID-19)infection around the world considered a real danger for global health.The biological structure and symptoms of COVID-19 are similar to other viral chest maladies,whi...The quick spread of the CoronavirusDisease(COVID-19)infection around the world considered a real danger for global health.The biological structure and symptoms of COVID-19 are similar to other viral chest maladies,which makes it challenging and a big issue to improve approaches for efficient identification of COVID-19 disease.In this study,an automatic prediction of COVID-19 identification is proposed to automatically discriminate between healthy and COVID-19 infected subjects in X-ray images using two successful moderns are traditional machine learning methods(e.g.,artificial neural network(ANN),support vector machine(SVM),linear kernel and radial basis function(RBF),k-nearest neighbor(k-NN),Decision Tree(DT),andCN2 rule inducer techniques)and deep learningmodels(e.g.,MobileNets V2,ResNet50,GoogleNet,DarkNet andXception).A largeX-ray dataset has been created and developed,namely the COVID-19 vs.Normal(400 healthy cases,and 400 COVID cases).To the best of our knowledge,it is currently the largest publicly accessible COVID-19 dataset with the largest number of X-ray images of confirmed COVID-19 infection cases.Based on the results obtained from the experiments,it can be concluded that all the models performed well,deep learning models had achieved the optimum accuracy of 98.8%in ResNet50 model.In comparison,in traditional machine learning techniques, the SVM demonstrated the best result for an accuracy of 95% and RBFaccuracy 94% for the prediction of coronavirus disease 2019.展开更多
Aortic dissection(AD)is a kind of acute and rapidly progressing cardiovascular disease.In this work,we build a CTA image library with 88 CT cases,43 cases of aortic dissection and 45 cases of health.An aortic dissecti...Aortic dissection(AD)is a kind of acute and rapidly progressing cardiovascular disease.In this work,we build a CTA image library with 88 CT cases,43 cases of aortic dissection and 45 cases of health.An aortic dissection detection method based on CTA images is proposed.ROI is extracted based on binarization and morphology opening operation.The deep learning networks(InceptionV3,ResNet50,and DenseNet)are applied after the preprocessing of the datasets.Recall,F1-score,Matthews correlation coefficient(MCC)and other performance indexes are investigated.It is shown that the deep learning methods have much better performance than the traditional method.And among those deep learning methods,DenseNet121 can exceed other networks such as ResNet50 and InceptionV3.展开更多
Medical image segmentation is an important application field of computer vision in medical image processing.Due to the close location and high similarity of different organs in medical images,the current segmentation ...Medical image segmentation is an important application field of computer vision in medical image processing.Due to the close location and high similarity of different organs in medical images,the current segmentation algorithms have problems with mis-segmentation and poor edge segmentation.To address these challenges,we propose a medical image segmentation network(AF-Net)based on attention mechanism and feature fusion,which can effectively capture global information while focusing the network on the object area.In this approach,we add dual attention blocks(DA-block)to the backbone network,which comprises parallel channels and spatial attention branches,to adaptively calibrate and weigh features.Secondly,the multi-scale feature fusion block(MFF-block)is proposed to obtain feature maps of different receptive domains and get multi-scale information with less computational consumption.Finally,to restore the locations and shapes of organs,we adopt the global feature fusion blocks(GFF-block)to fuse high-level and low-level information,which can obtain accurate pixel positioning.We evaluate our method on multiple datasets(the aorta and lungs dataset),and the experimental results achieve 94.0%in mIoU and 96.3%in DICE,showing that our approach performs better than U-Net and other state-of-art methods.展开更多
文摘Speech recognition systems have become a unique human-computer interaction(HCI)family.Speech is one of the most naturally developed human abilities;speech signal processing opens up a transparent and hand-free computation experience.This paper aims to present a retrospective yet modern approach to the world of speech recognition systems.The development journey of ASR(Automatic Speech Recognition)has seen quite a few milestones and breakthrough technologies that have been highlighted in this paper.A step-by-step rundown of the fundamental stages in developing speech recognition systems has been presented,along with a brief discussion of various modern-day developments and applications in this domain.This review paper aims to summarize and provide a beginning point for those starting in the vast field of speech signal processing.Since speech recognition has a vast potential in various industries like telecommunication,emotion recognition,healthcare,etc.,this review would be helpful to researchers who aim at exploring more applications that society can quickly adopt in future years of evolution.
基金partly supported by the University of Malaya Impact Oriented Interdisci-plinary Research Grant under Grant IIRG008(A,B,C)-19IISS.
文摘Organizations are adopting the Bring Your Own Device(BYOD)concept to enhance productivity and reduce expenses.However,this trend introduces security challenges,such as unauthorized access.Traditional access control systems,such as Attribute-Based Access Control(ABAC)and Role-Based Access Control(RBAC),are limited in their ability to enforce access decisions due to the variability and dynamism of attributes related to users and resources.This paper proposes a method for enforcing access decisions that is adaptable and dynamic,based on multilayer hybrid deep learning techniques,particularly the Tabular Deep Neural Network Tabular DNN method.This technique transforms all input attributes in an access request into a binary classification(allow or deny)using multiple layers,ensuring accurate and efficient access decision-making.The proposed solution was evaluated using the Kaggle Amazon access control policy dataset and demonstrated its effectiveness by achieving a 94%accuracy rate.Additionally,the proposed solution enhances the implementation of access decisions based on a variety of resource and user attributes while ensuring privacy through indirect communication with the Policy Administration Point(PAP).This solution significantly improves the flexibility of access control systems,making themmore dynamic and adaptable to the evolving needs ofmodern organizations.Furthermore,it offers a scalable approach to manage the complexities associated with the BYOD environment,providing a robust framework for secure and efficient access management.
基金the Deputyship for Research&Innovation,Ministry of Education in Saudi Arabia,for funding this research work through Project Number RI-44-0214.
文摘Background:Sepsis,a potentially fatal inflammatory disease triggered by infection,carries significant healthimplications worldwide.Timely detection is crucial as sepsis can rapidly escalate if left undetected.Recentadvancements in deep learning(DL)offer powerful tools to address this challenge.Aim:Thus,this study proposeda hybrid CNNBDLSTM,a combination of a convolutional neural network(CNN)with a bi-directional long shorttermmemory(BDLSTM)model to predict sepsis onset.Implementing the proposed model provides a robustframework that capitalizes on the complementary strengths of both architectures,resulting in more accurate andtimelier predictions.Method:The sepsis prediction method proposed here utilizes temporal feature extraction todelineate six distinct time frames before the onset of sepsis.These time frames adhere to the sepsis-3 standardrequirement,which incorporates 12-h observation windows preceding sepsis onset.All models were trained usingthe Medical Information Mart for Intensive Care III(MIMIC-III)dataset,which sourced 61,522 patients with 40clinical variables obtained from the IoT medical environment.The confusion matrix,the area under the receiveroperating characteristic curve(AUCROC)curve,the accuracy,the precision,the F1-score,and the recall weredeployed to evaluate themodels.Result:The CNNBDLSTMmodel demonstrated superior performance comparedto the benchmark and other models,achieving an AUCROC of 99.74%and an accuracy of 99.15%one hour beforesepsis onset.These results indicate that the CNNBDLSTM model is highly effective in predicting sepsis onset,particularly within a close proximity of one hour.Implication:The results could assist practitioners in increasingthe potential survival of the patient one hour before sepsis onset.
基金supported by the Beijing Natural Science Foundation (L202003)National Natural Science Foundation of China (No. 31700479)。
文摘Automatic modulation classification(AMC) technology is one of the cutting-edge technologies in cognitive radio communications. AMC based on deep learning has recently attracted much attention due to its superior performances in classification accuracy and robustness. In this paper, we propose a novel, high resolution and multi-scale feature fusion convolutional neural network model with a squeeze-excitation block, referred to as HRSENet,to classify different kinds of modulation signals.The proposed model establishes a parallel computing mechanism of multi-resolution feature maps through the multi-layer convolution operation, which effectively reduces the information loss caused by downsampling convolution. Moreover, through dense skipconnecting at the same resolution and up-sampling or down-sampling connection at different resolutions, the low resolution representation of the deep feature maps and the high resolution representation of the shallow feature maps are simultaneously extracted and fully integrated, which is benificial to mine signal multilevel features. Finally, the feature squeeze and excitation module embedded in the decoder is used to adjust the response weights between channels, further improving classification accuracy of proposed model.The proposed HRSENet significantly outperforms existing methods in terms of classification accuracy on the public dataset “Over the Air” in signal-to-noise(SNR) ranging from-2dB to 20dB. The classification accuracy in the proposed model achieves 85.36% and97.30% at 4dB and 10dB, respectively, with the improvement by 9.71% and 5.82% compared to LWNet.Furthermore, the model also has a moderate computation complexity compared with several state-of-the-art methods.
基金the“Regional Innovation Strategy(RIS)”through the National Research Foundation of Korea(NRF)funded by the Ministry of Education(MOE)(2021RIS-004)Institute of Information and Communications Technology Planning and Evaluation(IITP)grant funded by the Korean government(MSIT)(No.RS-2022-00155857,Artificial Intelligence Convergence Innovation Human Resources Development(Chungnam National University)).
文摘Maritime transportation,a cornerstone of global trade,faces increasing safety challenges due to growing sea traffic volumes.This study proposes a novel approach to vessel trajectory prediction utilizing Automatic Identification System(AIS)data and advanced deep learning models,including Long Short-Term Memory(LSTM),Gated Recurrent Unit(GRU),Bidirectional LSTM(DBLSTM),Simple Recurrent Neural Network(SimpleRNN),and Kalman Filtering.The research implemented rigorous AIS data preprocessing,encompassing record deduplication,noise elimination,stationary simplification,and removal of insignificant trajectories.Models were trained using key navigational parameters:latitude,longitude,speed,and heading.Spatiotemporal aware processing through trajectory segmentation and topological data analysis(TDA)was employed to capture dynamic patterns.Validation using a three-month AIS dataset demonstrated significant improvements in prediction accuracy.The GRU model exhibited superior performance,achieving training losses of 0.0020(Mean Squared Error,MSE)and 0.0334(Mean Absolute Error,MAE),with validation losses of 0.0708(MSE)and 0.1720(MAE).The LSTM model showed comparable efficacy,with training losses of 0.0011(MSE)and 0.0258(MAE),and validation losses of 0.2290(MSE)and 0.2652(MAE).Both models demonstrated reductions in training and validation losses,measured by MAE,MSE,Average Displacement Error(ADE),and Final Displacement Error(FDE).This research underscores the potential of advanced deep learning models in enhancing maritime safety through more accurate trajectory predictions,contributing significantly to the development of robust,intelligent navigation systems for the maritime industry.
基金This work was supported by the Beijing Natural Science Foundation(L202003).
文摘Satellite communication systems are facing serious electromagnetic interference,and interference signal recognition is a crucial foundation for targeted anti-interference.In this paper,we propose a novel interference recognition algorithm called HDCGD-CBAM,which adopts the time-frequency images(TFIs)of signals to effectively extract the temporal and spectral characteristics.In the proposed method,we improve the Convolutional Long Short-Term Memory Deep Neural Network(CLDNN)in two ways.First,the simpler Gate Recurrent Unit(GRU)is used instead of the Long Short-Term Memory(LSTM),reducing model parameters while maintaining the recognition accuracy.Second,we replace convolutional layers with hybrid dilated convolution(HDC)to expand the receptive field of feature maps,which captures the correlation of time-frequency data on a larger spatial scale.Additionally,Convolutional Block Attention Module(CBAM)is introduced before and after the HDC layers to strengthen the extraction of critical features and improve the recognition performance.The experiment results show that the HDCGD-CBAM model significantly outper-forms existing methods in terms of recognition accuracy and complexity.When Jamming-to-Signal Ratio(JSR)varies from-30dB to 10dB,it achieves an average accuracy of 78.7%and outperforms the CLDNN by 7.29%while reducing the Floating Point Operations(FLOPs)by 79.8%to 114.75M.Moreover,the proposed model has fewer parameters with 301k compared to several state-of-the-art methods.
基金This paper’s logical organisation and content quality have been enhanced,so the authors thank anonymous reviewers and journal editors for assistance.
文摘Forecasting river flow is crucial for optimal planning,management,and sustainability using freshwater resources.Many machine learning(ML)approaches have been enhanced to improve streamflow prediction.Hybrid techniques have been viewed as a viable method for enhancing the accuracy of univariate streamflow estimation when compared to standalone approaches.Current researchers have also emphasised using hybrid models to improve forecast accuracy.Accordingly,this paper conducts an updated literature review of applications of hybrid models in estimating streamflow over the last five years,summarising data preprocessing,univariate machine learning modelling strategy,advantages and disadvantages of standalone ML techniques,hybrid models,and performance metrics.This study focuses on two types of hybrid models:parameter optimisation-based hybrid models(OBH)and hybridisation of parameter optimisation-based and preprocessing-based hybridmodels(HOPH).Overall,this research supports the idea thatmeta-heuristic approaches precisely improveML techniques.It’s also one of the first efforts to comprehensively examine the efficiency of various meta-heuristic approaches(classified into four primary classes)hybridised with ML techniques.This study revealed that previous research applied swarm,evolutionary,physics,and hybrid metaheuristics with 77%,61%,12%,and 12%,respectively.Finally,there is still room for improving OBH and HOPH models by examining different data pre-processing techniques and metaheuristic algorithms.
基金Research Supporting Project Number(RSP2024R421),King Saud University,Riyadh,Saudi Arabia。
文摘The increased adoption of Internet of Medical Things (IoMT) technologies has resulted in the widespread use ofBody Area Networks (BANs) in medical and non-medical domains. However, the performance of IEEE 802.15.4-based BANs is impacted by challenges related to heterogeneous data traffic requirements among nodes, includingcontention during finite backoff periods, association delays, and traffic channel access through clear channelassessment (CCA) algorithms. These challenges lead to increased packet collisions, queuing delays, retransmissions,and the neglect of critical traffic, thereby hindering performance indicators such as throughput, packet deliveryratio, packet drop rate, and packet delay. Therefore, we propose Dynamic Next Backoff Period and Clear ChannelAssessment (DNBP-CCA) schemes to address these issues. The DNBP-CCA schemes leverage a combination ofthe Dynamic Next Backoff Period (DNBP) scheme and the Dynamic Next Clear Channel Assessment (DNCCA)scheme. The DNBP scheme employs a fuzzy Takagi, Sugeno, and Kang (TSK) model’s inference system toquantitatively analyze backoff exponent, channel clearance, collision ratio, and data rate as input parameters. Onthe other hand, the DNCCA scheme dynamically adapts the CCA process based on requested data transmission tothe coordinator, considering input parameters such as buffer status ratio and acknowledgement ratio. As a result,simulations demonstrate that our proposed schemes are better than some existing representative approaches andenhance data transmission, reduce node collisions, improve average throughput, and packet delivery ratio, anddecrease average packet drop rate and packet delay.
文摘The increasing adoption of solar photovoltaic systems necessitates accurate forecasting of solar energy production to enhance grid stability,reliability,and economic benefits.This study explores advanced machine learning(ML)and deep learning(DL)techniques for predicting solar energy generation,emphasizing the significant impact of meteorological data.A comprehensive dataset,encompassing detailed weather conditions and solar energy metrics,was collected and preprocessed to improve model accuracy.Various models were developed and trained with different preprocessing stages.Finally,three datasets were prepared.A novel hour-based prediction wrapper was introduced,utilizing external sunrise and sunset data to restrict predictions to daylight hours,thereby enhancing model performance.A cascaded stacking model incorporating association rules,weak predictors,and a modified stacking aggregation procedure was proposed,demonstrating enhanced generalization and reduced prediction errors.Results indicated that models trained on raw data generally performed better than those on stripped data.The Long Short-Term Memory(LSTM)with Inception layers’model was the most effective,achieving significant performance improvements through feature selection,data preprocessing,and innovative modeling techniques.The study underscores the potential to combine detailed meteorological data with advanced ML and DL methods to improve the accuracy of solar energy forecasting,thereby optimizing energy management and planning.
基金Key research and development projects of Ningxia,Grant/Award Number:2022BDE03007Natural Science Foundation of Ningxia Province,Grant/Award Numbers:2023A0367,2021A0966,2022AAC05010,2022AAC03004,2021AAC03068。
文摘Network embedding aspires to learn a low-dimensional vector of each node in networks,which can apply to diverse data mining tasks.In real-life,many networks include rich attributes and temporal information.However,most existing embedding approaches ignore either temporal information or network attributes.A self-attention based architecture using higher-order weights and node attributes for both static and temporal attributed network embedding is presented in this article.A random walk sampling algorithm based on higher-order weights and node attributes to capture network topological features is presented.For static attributed networks,the algorithm incorporates first-order to k-order weights,and node attribute similarities into one weighted graph to preserve topological features of networks.For temporal attribute networks,the algorithm incorporates previous snapshots of networks containing first-order to k-order weights,and nodes attribute similarities into one weighted graph.In addition,the algorithm utilises a damping factor to ensure that the more recent snapshots allocate a greater weight.Attribute features are then incorporated into topological features.Next,the authors adopt the most advanced architecture,Self-Attention Networks,to learn node representations.Experimental results on node classification of static attributed networks and link prediction of temporal attributed networks reveal that our proposed approach is competitive against diverse state-of-the-art baseline approaches.
基金funded by the Researchers Supporting Program at King Saud University(RSPD2024R809).
文摘Hybridizing metaheuristic algorithms involves synergistically combining different optimization techniques to effectively address complex and challenging optimization problems.This approach aims to leverage the strengths of multiple algorithms,enhancing solution quality,convergence speed,and robustness,thereby offering a more versatile and efficient means of solving intricate real-world optimization tasks.In this paper,we introduce a hybrid algorithm that amalgamates three distinct metaheuristics:the Beluga Whale Optimization(BWO),the Honey Badger Algorithm(HBA),and the Jellyfish Search(JS)optimizer.The proposed hybrid algorithm will be referred to as BHJO.Through this fusion,the BHJO algorithm aims to leverage the strengths of each optimizer.Before this hybridization,we thoroughly examined the exploration and exploitation capabilities of the BWO,HBA,and JS metaheuristics,as well as their ability to strike a balance between exploration and exploitation.This meticulous analysis allowed us to identify the pros and cons of each algorithm,enabling us to combine them in a novel hybrid approach that capitalizes on their respective strengths for enhanced optimization performance.In addition,the BHJO algorithm incorporates Opposition-Based Learning(OBL)to harness the advantages offered by this technique,leveraging its diverse exploration,accelerated convergence,and improved solution quality to enhance the overall performance and effectiveness of the hybrid algorithm.Moreover,the performance of the BHJO algorithm was evaluated across a range of both unconstrained and constrained optimization problems,providing a comprehensive assessment of its efficacy and applicability in diverse problem domains.Similarly,the BHJO algorithm was subjected to a comparative analysis with several renowned algorithms,where mean and standard deviation values were utilized as evaluation metrics.This rigorous comparison aimed to assess the performance of the BHJOalgorithmabout its counterparts,shedding light on its effectiveness and reliability in solving optimization problems.Finally,the obtained numerical statistics underwent rigorous analysis using the Friedman post hoc Dunn’s test.The resulting numerical values revealed the BHJO algorithm’s competitiveness in tackling intricate optimization problems,affirming its capability to deliver favorable outcomes in challenging scenarios.
基金financially supported by the Deanship of Scientific Research at King Khalid University under Research Grant Number(R.G.P.2/549/44).
文摘Algorithms for steganography are methods of hiding data transfers in media files.Several machine learning architectures have been presented recently to improve stego image identification performance by using spatial information,and these methods have made it feasible to handle a wide range of problems associated with image analysis.Images with little information or low payload are used by information embedding methods,but the goal of all contemporary research is to employ high-payload images for classification.To address the need for both low-and high-payload images,this work provides a machine-learning approach to steganography image classification that uses Curvelet transformation to efficiently extract characteristics from both type of images.Support Vector Machine(SVM),a commonplace classification technique,has been employed to determine whether the image is a stego or cover.The Wavelet Obtained Weights(WOW),Spatial Universal Wavelet Relative Distortion(S-UNIWARD),Highly Undetectable Steganography(HUGO),and Minimizing the Power of Optimal Detector(MiPOD)steganography techniques are used in a variety of experimental scenarios to evaluate the performance of the proposedmethod.Using WOW at several payloads,the proposed approach proves its classification accuracy of 98.60%.It exhibits its superiority over SOTA methods.
基金support from Abu Dhabi University’s Office of Research and Sponsored Programs Grant Number:19300810.
文摘Cookies are considered a fundamental means of web application services for authenticating various Hypertext Transfer Protocol(HTTP)requests andmaintains the states of clients’information over the Internet.HTTP cookies are exploited to carry client patterns observed by a website.These client patterns facilitate the particular client’s future visit to the corresponding website.However,security and privacy are the primary concerns owing to the value of information over public channels and the storage of client information on the browser.Several protocols have been introduced that maintain HTTP cookies,but many of those fail to achieve the required security,or require a lot of resource overheads.In this article,we have introduced a lightweight Elliptic Curve Cryptographic(ECC)based protocol for authenticating client and server transactions to maintain the privacy and security of HTTP cookies.Our proposed protocol uses a secret key embedded within a cookie.The proposed protocol ismore efficient and lightweight than related protocols because of its reduced computation,storage,and communication costs.Moreover,the analysis presented in this paper confirms that proposed protocol resists various known attacks.
文摘In this review paper,we present a thorough investigation into the role of pavement technologies in advancing urban sustainability.Our analysis traverses the historical evolution of these technologies,meticulously evaluating their socio-economic and environmental impacts,with a particular emphasis on their role in mitigating the urban heat island effect.The evaluation of pavement types and variables influencing pavement performance to be used in the multi-criteria decision-making(MCDM)framework to choose the optimal pavement application are at the heart of our research.Which serves to assess a spectrum of pavement options,revealing insights into the most effective and sustainable practices.By highlighting both the existing challenges and potential innovative solutions within thefield,this paper aims to offer a directional compass for future urban planning and infrastructural advancements.This review not only synthesizes the current state of knowledge but also aims to chart a course for future exploration,emphasizing the critical need for innovative and environmentally sensitive pavement tech-nologies in the creation of resilient and sustainable urban environments.
基金the Jilin Science and Technology Department 20200201280JC,and Shanghai special fund for ideological and political work in Shanghai University of International Business and Economics.
文摘Contingent self-esteem captures the fragile nature of self-esteem and is often regarded as suboptimal to psychological functioning.Self-compassion is another important self-related concept assumed to promote mental health and well-being.However,research on the relation of self-compassion to contingent self-esteem is lacking.Two studies were conducted to explore the role of selfcompassion,either as a personal characteristic or an induced mindset,in influencing the effects of contingent self-esteem on well-being.Study 1 recruited 256 Chinese college students(30.4%male,mean age=21.72 years)who filled out measures of contingent self-esteem,self-compassion,and well-being.The results found that self-compassion moderated the effect of contingent self-esteem on well-being.In Study 2,a sample of 90 Chinese college students(34%male,mean age=18.39 years)were randomly assigned to either a control or self-compassion group.They completed baseline trait measures of contingent self-esteem,self-compassion,and self-esteem.Then,they were led to have a 12-min break(control group)or listen to a 12-min self-compassion audio(self-compassion group),followed by a social stress task and outcome measures.The results demonstrated the effectiveness of the brief self-compassion training and its moderating role in influencing the effects of contingent self-esteem on negative affects after the social stress task.This research provides implications that to equip with a self-compassionate mindset could lower the risk of the impairment of well-being associated with elements of contingent selfesteem,which involves a fragile sense of self-worth.It may also provide insights into the development of an“optimal selfesteem”and the improvement of well-being.
文摘Serial remote sensing images offer a valuable means of tracking the evolutionary changes and growth of a specific geographical area over time.Although the original images may provide limited insights,they harbor considerable potential for identifying clusters and patterns.The aggregation of these serial remote sensing images(SRSI)becomes increasingly viable as distinct patterns emerge in diverse scenarios,such as suburbanization,the expansion of native flora,and agricultural activities.In a novel approach,we propose an innovative method for extracting sequential patterns by combining Ant Colony Optimization(ACD)and Empirical Mode Decomposition(EMD).This integration of the newly developed EMD and ACO techniques proves remarkably effective in identifying the most significant characteristic features within serial remote sensing images,guided by specific criteria.Our findings highlight a substantial improvement in the efficiency of sequential pattern mining through the application of this unique hybrid method,seamlessly integrating EMD and ACO for feature selection.This study exposes the potential of our innovative methodology,particularly in the realms of urbanization,native vegetation expansion,and agricultural activities.
文摘This paper proposes one method of feature selection by using Bayes' theorem. The purpose of the proposed method is to reduce the computational complexity and increase the classification accuracy of the selected feature subsets. The dependence between two attributes (binary) is determined based on the probabilities of their joint values that contribute to positive and negative classification decisions. If opposing sets of attribute values do not lead to opposing classification decisions (zero probability), then the two attributes are considered independent of each other, otherwise dependent, and one of them can be removed and thus the number of attributes is reduced. The process must be repeated on all combinations of attributes. The paper also evaluates the approach by comparing it with existing feature selection algorithms over 8 datasets from University of California, Irvine (UCI) machine learning databases. The proposed method shows better results in terms of number of selected features, classification accuracy, and running time than most existing algorithms.
文摘The quick spread of the CoronavirusDisease(COVID-19)infection around the world considered a real danger for global health.The biological structure and symptoms of COVID-19 are similar to other viral chest maladies,which makes it challenging and a big issue to improve approaches for efficient identification of COVID-19 disease.In this study,an automatic prediction of COVID-19 identification is proposed to automatically discriminate between healthy and COVID-19 infected subjects in X-ray images using two successful moderns are traditional machine learning methods(e.g.,artificial neural network(ANN),support vector machine(SVM),linear kernel and radial basis function(RBF),k-nearest neighbor(k-NN),Decision Tree(DT),andCN2 rule inducer techniques)and deep learningmodels(e.g.,MobileNets V2,ResNet50,GoogleNet,DarkNet andXception).A largeX-ray dataset has been created and developed,namely the COVID-19 vs.Normal(400 healthy cases,and 400 COVID cases).To the best of our knowledge,it is currently the largest publicly accessible COVID-19 dataset with the largest number of X-ray images of confirmed COVID-19 infection cases.Based on the results obtained from the experiments,it can be concluded that all the models performed well,deep learning models had achieved the optimum accuracy of 98.8%in ResNet50 model.In comparison,in traditional machine learning techniques, the SVM demonstrated the best result for an accuracy of 95% and RBFaccuracy 94% for the prediction of coronavirus disease 2019.
基金This work is supported by the National Natural Science Foundation of China(No.61772561)the National Natural Science Foundation of Hunan(No.2019JJ50866)+1 种基金the Key Research&Development Plan of Hunan Province(No.2018NK2012)the Postgraduate Science and Technology Innovation Foundation of Central South University of Forestry and Technology(No.20183034).
文摘Aortic dissection(AD)is a kind of acute and rapidly progressing cardiovascular disease.In this work,we build a CTA image library with 88 CT cases,43 cases of aortic dissection and 45 cases of health.An aortic dissection detection method based on CTA images is proposed.ROI is extracted based on binarization and morphology opening operation.The deep learning networks(InceptionV3,ResNet50,and DenseNet)are applied after the preprocessing of the datasets.Recall,F1-score,Matthews correlation coefficient(MCC)and other performance indexes are investigated.It is shown that the deep learning methods have much better performance than the traditional method.And among those deep learning methods,DenseNet121 can exceed other networks such as ResNet50 and InceptionV3.
基金This work was supported in part by the National Natural Science Foundation of China under Grant 61772561,author J.Q,http://www.nsfc.gov.cn/in part by the Science Research Projects of Hunan Provincial Education Department under Grant 18A174,author X.X,http://kxjsc.gov.hnedu.cn/+5 种基金in part by the Science Research Projects of Hunan Provincial Education Department under Grant 19B584,author Y.T,http://kxjsc.gov.hnedu.cn/in part by the Natural Science Foundation of Hunan Province(No.2020JJ4140),author Y.T,http://kjt.hunan.gov.cn/in part by the Natural Science Foundation of Hunan Province(No.2020JJ4141),author X.X,http://kjt.hunan.gov.cn/in part by the Key Research and Development Plan of Hunan Province under Grant 2019SK2022,author Y.T,http://kjt.hunan.gov.cn/in part by the Key Research and Development Plan of Hunan Province under Grant CX20200730,author G.H,http://kjt.hunan.gov.cn/in part by the Graduate Science and Technology Innovation Fund Project of Central South University of Forestry and Technology under Grant CX20202038,author G.H,http://jwc.csuft.edu.cn/.
文摘Medical image segmentation is an important application field of computer vision in medical image processing.Due to the close location and high similarity of different organs in medical images,the current segmentation algorithms have problems with mis-segmentation and poor edge segmentation.To address these challenges,we propose a medical image segmentation network(AF-Net)based on attention mechanism and feature fusion,which can effectively capture global information while focusing the network on the object area.In this approach,we add dual attention blocks(DA-block)to the backbone network,which comprises parallel channels and spatial attention branches,to adaptively calibrate and weigh features.Secondly,the multi-scale feature fusion block(MFF-block)is proposed to obtain feature maps of different receptive domains and get multi-scale information with less computational consumption.Finally,to restore the locations and shapes of organs,we adopt the global feature fusion blocks(GFF-block)to fuse high-level and low-level information,which can obtain accurate pixel positioning.We evaluate our method on multiple datasets(the aorta and lungs dataset),and the experimental results achieve 94.0%in mIoU and 96.3%in DICE,showing that our approach performs better than U-Net and other state-of-art methods.