Journal Articles
212 articles found
Social Media-Based Surveillance Systems for Health Informatics Using Machine and Deep Learning Techniques: A Comprehensive Review and Open Challenges
1
Authors: Samina Amin, Muhammad Ali Zeb, Hani Alshahrani, Mohammed Hamdi, Mohammad Alsulami, Asadullah Shaikh. Computer Modeling in Engineering & Sciences, SCIE EI, 2024, Issue 5, pp. 1167-1202 (36 pages)
Social media (SM)-based surveillance systems, combined with machine learning (ML) and deep learning (DL) techniques, have shown potential for early detection of epidemic outbreaks. This review discusses the current state of SM-based surveillance methods for early epidemic outbreaks and the role of ML and DL in enhancing their performance. Every year, SM generates a large amount of data related to epidemic outbreaks, particularly Twitter data. This paper outlines the theme of SM analysis for tracking health-related issues and detecting epidemic outbreaks in SM, along with the ML and DL techniques that have been configured for the detection of epidemic outbreaks. DL has emerged as a promising ML technique that adapts multiple layers of representations or features of the data and yields state-of-the-art extrapolation results. In recent years, along with their success in many other application domains, both ML and DL have become popular in SM analysis. This paper aims to provide an overview of epidemic outbreaks in SM and then outlines a comprehensive analysis of ML and DL approaches and their existing applications in SM analysis. Finally, this review offers suggestions, ideas, and proposals, and highlights the ongoing challenges in the field of early outbreak detection that still need to be addressed.
Keywords: social media, epidemic, machine learning, deep learning, health informatics, pandemic
Machine Learning Empowered Security and Privacy Architecture for IoT Networks with the Integration of Blockchain
2
Authors: Sohaib Latif, M. Saad Bin Ilyas, Azhar Imran, Hamad Ali Abosaq, Abdulaziz Alzubaidi, Vincent Karovic Jr. Intelligent Automation & Soft Computing, 2024, Issue 2, pp. 353-379 (27 pages)
The Internet of Things (IoT) is growing rapidly and impacting almost every aspect of our lives, from wearables and healthcare to security, traffic management, and fleet management systems. This has generated massive volumes of data, and security and data privacy risks are increasing with the advancement of technology and network connections. Traditional access control solutions are inadequate for establishing access control in IoT systems to provide data protection, owing to their vulnerability to single points of failure. Additionally, conventional privacy preservation methods have high latency costs and overhead for resource-constrained devices. Previous machine learning approaches were also unable to detect denial-of-service (DoS) attacks. This study introduces a novel decentralized and secure framework for blockchain integration. To avoid a single point of failure, an accredited access control scheme is incorporated, combining blockchain with local peers to record each transaction and verify the signature for access. Blockchain-based attribute-based cryptography is implemented to protect data storage privacy by generating threshold parameters, managing keys, and revoking users on the blockchain. An innovative contract-based DoS attack mitigation method is also incorporated to validate devices through smart contracts as trusted or untrusted, preventing the server from becoming overwhelmed. The proposed framework effectively controls access, safeguards data privacy, and reduces the risk of cyberattacks. The results show that the suggested framework outperforms existing approaches in terms of accuracy, precision, sensitivity, recall, and F-measure at 96.9%, 98.43%, 98.8%, 98.43%, and 98.4%, respectively.
Keywords: machine learning, Internet of Things, blockchain, data privacy, security, Industry 4.0
A Review and Analysis of Localization Techniques in Underwater Wireless Sensor Networks (cited: 1)
3
Authors: Seema Rani, Anju, Anupma Sangwan, Krishna Kumar, Kashif Nisar, Tariq Rahim Soomro, Ag. Asri Ag. Ibrahim, Manoj Gupta, Laxmi Chandand, Sadiq Ali Khan. Computers, Materials & Continua, SCIE EI, 2023, Issue 6, pp. 5697-5715 (19 pages)
In recent years, there has been rapid growth in Underwater Wireless Sensor Networks (UWSNs). The focus of research in this area is now on solving the problems associated with large-scale UWSNs. One of the major issues in such a network is the localization of underwater nodes. Localization is required for tracking objects and detecting targets. It is also essential for tagging data: sensed content is of little use to an application until the position where it was sensed is confirmed. The major goal of this article is to review and analyze underwater node localization to solve the localization issues in UWSNs. The paper describes various existing localization schemes and broadly categorizes them as centralized and distributed underwater localization schemes, together with a detailed subdivision of each category. Further, these localization schemes are compared from different perspectives, and their detailed analysis in terms of certain performance metrics is discussed. At the end, the paper addresses several future directions for potential research in improving localization in UWSNs.
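Range-based localization schemes of the kind surveyed above estimate a node's position from measured distances to anchor nodes. As a rough illustration (not taken from any of the surveyed papers), planar trilateration can be linearized and solved directly; the anchor layout and distances below are invented for the example:

```python
import math

def trilaterate(anchors, dists):
    """Estimate a 2-D position from three anchor positions and measured
    distances by linearizing the range equations and solving the
    resulting 2x2 linear system with Cramer's rule."""
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = dists
    # Subtract the first range equation from the other two.
    a1 = 2 * (x1 - x0); b1 = 2 * (y1 - y0)
    c1 = d0**2 - d1**2 + x1**2 - x0**2 + y1**2 - y0**2
    a2 = 2 * (x2 - x0); b2 = 2 * (y2 - y0)
    c2 = d0**2 - d2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("anchors are collinear")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(a, true_pos) for a in anchors]
print(trilaterate(anchors, dists))  # close to (3.0, 4.0)
```

With noisy real ranges, more than three anchors and a least-squares solve would be used instead of this exact 2x2 system.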
Keywords: underwater wireless sensor networks, localization schemes, node localization, ranging algorithms, estimation based, prediction based
Proposed Biometric Security System Based on Deep Learning and Chaos Algorithms
4
Authors: Iman Almomani, Walid El-Shafai, Aala AlKhayer, Albandari Alsumayt, Sumayh S. Aljameel, Khalid Alissa. Computers, Materials & Continua, SCIE EI, 2023, Issue 2, pp. 3515-3537 (23 pages)
Nowadays, there is tremendous growth in biometric authentication and cybersecurity applications. Thus, an efficient way of storing and securing personal biometric patterns is mandatory in most governmental and private sectors, and designing and implementing robust security algorithms for users' biometrics is still a hot research area. This work presents a powerful biometric security system (BSS) to protect different biometric modalities such as faces, irises, and fingerprints. The proposed BSS model is based on hybridizing an auto-encoder (AE) network and a chaos-based ciphering algorithm to cipher the details of the stored biometric patterns and ensure their secrecy. The employed AE network is an unsupervised deep learning (DL) structure used in the proposed BSS model to extract the main biometric features. These features are utilized to generate two random chaos matrices. The first random chaos matrix is used to permute the pixels of the biometric images, while the second is used to further cipher and confuse the resulting permuted pixels using a two-dimensional (2D) chaotic logistic map (CLM) algorithm. To assess the efficiency of the proposed BSS, (1) different standardized color and grayscale images of the examined fingerprint, face, and iris biometrics were used, and (2) comprehensive security and recognition evaluation metrics were measured. The assessment results have proven the authentication and robustness superiority of the proposed BSS model compared to other existing BSS models. For example, the proposed BSS succeeds in achieving a high area under the receiver operating characteristic (AROC) value that reached 99.97%, and low rates of 0.00137, 0.00148, and 0.00157 for equal error rate (EER), false reject rate (FRR), and false accept rate (FAR), respectively.
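The permutation stage described above can be sketched with a one-dimensional logistic map; note the paper uses a 2D CLM on real biometric images, so this is only a simplified, hypothetical illustration of chaos-driven pixel scrambling:

```python
def logistic_sequence(x0, r, n):
    """Iterate the logistic map x <- r*x*(1-x) to get a chaotic sequence."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def chaos_permutation(key_x0, n, r=3.99):
    """Derive a pixel permutation by ranking the chaotic sequence."""
    seq = logistic_sequence(key_x0, r, n)
    return sorted(range(n), key=lambda i: seq[i])

def permute(pixels, perm):
    return [pixels[p] for p in perm]

def inverse_permute(pixels, perm):
    out = [0] * len(pixels)
    for i, p in enumerate(perm):
        out[p] = pixels[i]
    return out

pixels = [10, 20, 30, 40, 50, 60, 70, 80]
perm = chaos_permutation(0.3571, len(pixels))  # key_x0 acts as the secret key
scrambled = permute(pixels, perm)
print(inverse_permute(scrambled, perm) == pixels)  # True
```

Because the logistic map is extremely sensitive to `key_x0`, a slightly different key yields a completely different permutation, which is the property such ciphers rely on.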
Keywords: biometric security, deep learning, AE network, 2D CLM, cybersecurity and authentication applications, feature extraction, unsupervised learning
An Automated System for Early Prediction of Miscarriage in the First Trimester Using Machine Learning
5
Authors: Sumayh S. Aljameel, Malak Aljabri, Nida Aslam, Dorieh M. Alomari, Arwa Alyahya, Shaykhah Alfaris, Maha Balharith, Hiessa Abahussain, Dana Boujlea, Eman S. Alsulmi. Computers, Materials & Continua, SCIE EI, 2023, Issue 4, pp. 1291-1304 (14 pages)
Currently, the risk factors of pregnancy loss are increasing and are considered a major challenge because they vary between cases. The early prediction of miscarriage can help pregnant women take the needed care and avoid any danger. Therefore, an intelligent automated solution must be developed to predict the risk factors for pregnancy loss at an early stage to assist with accurate and effective diagnosis. Machine learning (ML)-based decision support systems are increasingly used in the healthcare sector and have achieved notable performance and objectiveness in disease prediction and prognosis. Thus, we developed a model to help obstetricians predict the probability of miscarriage using ML, supporting their decisions and expectations about pregnancy status by providing an easy, automated way to predict miscarriage at early stages. Although many published papers have proposed similar models, none of them used Saudi clinical data. Our proposed solution used ML classification algorithms to build a miscarriage prediction model. Four classifiers were used in this study: decision tree (DT), random forest (RF), k-nearest neighbor (KNN), and gradient boosting (GB). Accuracy, precision, recall, F1-score, and receiver operating characteristic area under the curve (ROC-AUC) were used to evaluate the proposed model. The results showed that GB outperformed the other classifiers with an accuracy of 93.4% and ROC-AUC of 97%. This proposed model can assist in the early identification of at-risk pregnant women to avoid miscarriage in the first trimester and will improve the healthcare sector in Saudi Arabia.
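The evaluation metrics named above follow directly from confusion-matrix counts; a quick illustrative sketch (not the authors' code, and the sample labels are invented):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0, 1, 0]  # hypothetical ground-truth labels
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]  # hypothetical classifier output
print(binary_metrics(y_true, y_pred))  # (0.75, 0.75, 0.75, 0.75)
```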
Keywords: miscarriage, pregnancy, abortion, machine learning, gradient boosting
Effectiveness of Deep Learning Models for Brain Tumor Classification and Segmentation
6
Authors: Muhammad Irfan, Ahmad Shaf, Tariq Ali, Umar Farooq, Saifur Rahman, Salim Nasar Faraj Mursal, Mohammed Jalalah, Samar M. Alqhtani, Omar AlShorman. Computers, Materials & Continua, SCIE EI, 2023, Issue 7, pp. 711-729 (19 pages)
A brain tumor is a mass or growth of abnormal cells in the brain. In children and adults, brain tumors are considered one of the leading causes of death. There are several types of brain tumors, including benign (non-cancerous) and malignant (cancerous) tumors. Diagnosing brain tumors as early as possible is essential, as this can improve the chances of successful treatment and survival. Considering this problem, we bring forth a hybrid intelligent deep learning technique that uses several pre-trained models (ResNet50, VGG16, VGG19, U-Net) and their integration for computer-aided detection and localization of brain tumors. These pre-trained and integrated deep learning models have been applied to the publicly available dataset from The Cancer Genome Atlas, which consists of 120 patients. The pre-trained models have been used to classify tumor or no-tumor images, while the integrated models are applied to segment the tumor region. We have evaluated their performance in terms of loss, accuracy, intersection over union, Jaccard distance, dice coefficient, and dice coefficient loss. Among the pre-trained models, the U-Net model achieves higher performance than the others, obtaining 95% accuracy. In contrast, U-Net with ResNet-50 outperforms all other integrated pre-trained models and correctly classifies and segments the tumor region.
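The dice coefficient and intersection over union used above to score segmentations reduce to simple overlap ratios on binary masks; a minimal sketch with invented masks:

```python
def dice_and_iou(mask_a, mask_b):
    """Dice coefficient and intersection-over-union for flat binary masks."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    size_a, size_b = sum(mask_a), sum(mask_b)
    union = size_a + size_b - inter
    dice = 2 * inter / (size_a + size_b) if size_a + size_b else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

pred  = [1, 1, 1, 0, 0, 0]  # hypothetical predicted mask (flattened)
truth = [0, 1, 1, 1, 0, 0]  # hypothetical ground-truth mask
print(dice_and_iou(pred, truth))  # ≈ (0.667, 0.5)
```

Dice is always at least as large as IoU on the same pair of masks, which is why the two scores in segmentation papers differ in the direction seen here.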
Keywords: brain tumor, deep learning, ensemble, detection, healthcare
A U-Net-Based CNN Model for Detection and Segmentation of Brain Tumor
7
Authors: Rehana Ghulam, Sammar Fatima, Tariq Ali, Nazir Ahmad Zafar, Abdullah A. Asiri, Hassan A. Alshamrani, Samar M. Alqhtani, Khlood M. Mehdar. Computers, Materials & Continua, SCIE EI, 2023, Issue 1, pp. 1333-1349 (17 pages)
The human brain consists of millions of cells that control the overall structure of the human body. When these cells start behaving abnormally, brain tumors can occur. Precise, early-stage brain tumor detection has always been an issue for medical experts. To handle this issue, various deep learning techniques for brain tumor detection and segmentation have been developed and applied to different datasets with fruitful results, but the problem of detecting brain tumors at an initial stage, to save human lives, still exists. For this purpose, we propose a novel U-Net-based Convolutional Neural Network (CNN) technique to detect and segment brain tumors in Magnetic Resonance Imaging (MRI). A 2-dimensional, publicly available Multimodal Brain Tumor Image Segmentation (BRATS2020) dataset with 1840 MRI images of brain tumors, each of size 240×240 pixels, has been used. After initial dataset preprocessing, the proposed model is trained by dividing the dataset into three parts: training, validation, and testing. Our model attained an accuracy of 0.98 on the BRATS2020 dataset, which is the highest compared to existing techniques.
Keywords: U-Net, brain tumor, magnetic resonance images, convolutional neural network, segmentation
Vehicle kinematics modeling and design of vehicle trajectory generator system (cited: 2)
8
Authors: 李昭, 蔡自兴, 任孝平, 陈爱斌, 薛志超. Journal of Central South University, SCIE EI CAS, 2012, Issue 10, pp. 2860-2865 (6 pages)
A trajectory generator based on a vehicle kinematics model was presented and an integrated navigation simulation system was designed. Considering the tight relation between vehicle motion and topography, a new trajectory generator was proposed for more realistic simulation. Firstly, a vehicle kinematics model was built based on conversion of the attitude vector between different coordinate systems. Then, the principle of common trajectory generators was analyzed. Combining the vehicle kinematics model with the principle of dead reckoning, a new vehicle trajectory generator was presented, which can provide process parameters of the carrier at any time and simulate typical actions of a running vehicle. Moreover, IMU (inertial measurement unit) elements were simulated, including the accelerometer and gyroscope. After setting up the simulation conditions, the integrated navigation simulation system was verified by a final performance test. The result proves the validity and flexibility of this design.
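Dead reckoning, which the generator combines with the kinematics model, integrates heading and speed samples into a position track. A minimal planar sketch (my own illustration; the paper works with full 3D attitude vectors):

```python
import math

def dead_reckon(start, headings_speeds, dt):
    """Integrate (heading, speed) samples into a position track.
    heading is in radians from the x-axis; speed in m/s; dt in s."""
    x, y = start
    track = [(x, y)]
    for heading, speed in headings_speeds:
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        track.append((x, y))
    return track

# Hypothetical drive: east for 2 s, then north for 2 s, at 1 m/s, dt = 1 s.
samples = [(0.0, 1.0), (0.0, 1.0), (math.pi / 2, 1.0), (math.pi / 2, 1.0)]
print(dead_reckon((0.0, 0.0), samples, 1.0)[-1])  # ≈ (2.0, 2.0)
```

Real dead reckoning accumulates sensor error over time, which is why the paper fuses it with other navigation components.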
Keywords: kinematics modeling, trajectory generation system, vehicle, design, trajectory generator, kinematics model, automobile, simulation system
Detection and Classification of Hemorrhages in Retinal Images
9
Authors: Ghassan Ahmed Ali, Thamer Mitib Ahmad Al Sariera, Muhammad Akram, Adel Sulaiman, Fekry Olayah. Computer Systems Science & Engineering, SCIE EI, 2023, Issue 2, pp. 1601-1616 (16 pages)
Damage to the blood vessels in the retina due to diabetes is called diabetic retinopathy (DR). Hemorrhages are the first clinically visible symptom of DR. This paper presents a new technique to extract and classify hemorrhages in fundus images. Normal objects inside retinal images, such as blood vessels, the fovea, and the optic disc, are masked to distinguish them from hemorrhages. For masking blood vessels, thresholding that separates blood vessels from background intensity is followed by a new filter that extracts the borders of vessels based on their orientations. For masking the optic disc, the image is divided into sub-images and the brightest window with maximum variance in intensity is selected. Then the candidate dark regions are extracted based on adaptive thresholding and top-hat morphological techniques. Features are extracted from each candidate region based on ophthalmologist-selected cues such as color and size, and on pattern recognition techniques such as texture and wavelet features. Three types of Support Vector Machine (SVM) classifiers, linear SVM, quadratic SVM, and cubic SVM, are applied to classify the candidate dark regions as either hemorrhage or healthy. The efficacy of the proposed method is demonstrated using the standard benchmark DIARETDB1 database and by comparing the results with existing methods. Performance is measured in terms of average sensitivity, specificity, F-score, and accuracy. Experimental results show that the linear SVM classifier gives better results than cubic SVM and quadratic SVM with respect to sensitivity and accuracy, while with respect to specificity, quadratic SVM gives better results than the other SVMs.
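The top-hat transform mentioned above subtracts a morphological opening from the image, keeping features narrower than the structuring element. A one-dimensional sketch with a flat structuring element (illustrative only; the paper applies 2D morphology to fundus images and extracts dark regions, for which the dual black top-hat, closing minus signal, would be the natural choice):

```python
def erode(sig, k):
    """Grayscale erosion: minimum over a window of width k."""
    r = k // 2
    return [min(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def dilate(sig, k):
    """Grayscale dilation: maximum over a window of width k."""
    r = k // 2
    return [max(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def white_top_hat(sig, k=3):
    """Signal minus its opening: keeps bright features narrower than k."""
    opening = dilate(erode(sig, k), k)
    return [s - o for s, o in zip(sig, opening)]

# Invented profile: a narrow spike (width 1) and a wide plateau (width 4).
sig = [0, 0, 9, 0, 0, 5, 5, 5, 5, 0]
print(white_top_hat(sig))  # the spike survives, the plateau is removed
```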
Keywords: diabetic retinopathy, hemorrhages, adaptive thresholding, support vector machine
Sustainable Learning of Computer Programming Languages Using Mind Mapping
10
Authors: Shahla Gul, Muhammad Asif, Zubair Nawaz, Muhammad Haris Aziz, Shahzada Khurram, Muhammad Qaiser Saleem, Elturabi Osman Ahmed Habib, Muhammad Shafiq, Osama E. Sheta. Intelligent Automation & Soft Computing, SCIE, 2023, Issue 5, pp. 1687-1697 (11 pages)
In the current era of information technology, students need to learn modern programming languages efficiently. The art of teaching and learning programming requires many logical and conceptual skills, so it is a challenging task for instructors and learners to teach and learn these programming languages effectively and efficiently. Mind mapping is a useful visual tool for establishing ideas and connecting them to solve problems. This research proposes an effective way to teach programming languages through visual tools. This experimental study uses a mind mapping tool to teach two programming environments: text-based programming and blocks-based programming. We performed the experiments with one hundred and sixty undergraduate students from two public sector universities in the Asia Pacific region. Four different instructional approaches were used: block-based language (BBL), text-based language (TBL), mind mapping with text-based language (MMTBL), and mind mapping with block-based language (MMBBL). The results show that instructional approaches that use a mind mapping tool to help students solve given tasks are more effective for critical thinking than the other instructional techniques.
Keywords: text programming, blocks programming, novice programmer
Road Traffic Monitoring from Aerial Images Using Template Matching and Invariant Features
11
Authors: Asifa Mehmood Qureshi, Naif Al Mudawi, Mohammed Alonazi, Samia Allaoua Chelloug, Jeongmin Park. Computers, Materials & Continua, SCIE EI, 2024, Issue 3, pp. 3683-3701 (19 pages)
Road traffic monitoring is an imperative topic widely discussed among researchers. Systems used to monitor traffic frequently rely on cameras mounted on bridges or roadsides. However, aerial images provide the flexibility to use mobile platforms to detect the location and motion of vehicles over a larger area. To this end, different models have shown the ability to recognize and track vehicles. However, these methods are not mature enough to produce accurate results in complex road scenes. Therefore, this paper presents an algorithm that combines state-of-the-art techniques for identifying and tracking vehicles in conjunction with image bursts. The extracted frames were converted to grayscale, followed by the application of a georeferencing algorithm to embed coordinate information into the images. A masking technique eliminated irrelevant data and reduced the computational cost of the overall monitoring system. Next, Sobel edge detection combined with Canny edge detection and the Hough line transform was applied for noise reduction. After preprocessing, a blob detection algorithm helped detect the vehicles; vehicles of varying sizes were detected by implementing a dynamic thresholding scheme. Detection was done on the first image of every burst. Then, to track vehicles, a model of each vehicle was matched in the succeeding images using a template matching algorithm. To further improve tracking accuracy by incorporating motion information, Scale Invariant Feature Transform (SIFT) features were used to find the best possible match among multiple candidates. An accuracy rate of 87% for detection and 80% for tracking was achieved on the A1 Motorway Netherlands dataset. For the Vehicle Aerial Imaging from Drone (VAID) dataset, an accuracy rate of 86% for detection and 78% for tracking was achieved.
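Template matching, used above to track each detected vehicle across a burst, slides the template over a search window and scores every offset. The paper's exact matcher is not specified here, so this sum-of-squared-differences sketch on a toy grayscale array is only illustrative:

```python
def match_template(image, template):
    """Return (row, col) of the best SSD match of template in image.
    Both are 2-D lists of grayscale values; lower SSD is better."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = sum(
                (image[r + i][c + j] - template[i][j]) ** 2
                for i in range(th) for j in range(tw)
            )
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

image = [            # invented 5x4 grayscale patch
    [0, 0, 0, 0, 0],
    [0, 9, 8, 0, 0],
    [0, 7, 9, 0, 0],
    [0, 0, 0, 0, 0],
]
template = [[9, 8], [7, 9]]  # the "vehicle model" to find
print(match_template(image, template))  # (1, 1)
```

Production trackers restrict the search window around the last known position and use normalized correlation to be robust to lighting changes.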
Keywords: unmanned aerial vehicles (UAV), aerial images, dataset, object detection, object tracking, data elimination, template matching, blob detection, SIFT, VAID
Efficient Object Segmentation and Recognition Using Multi-Layer Perceptron Networks
12
Authors: Aysha Naseer, Nouf Abdullah Almujally, Saud S. Alotaibi, Abdulwahab Alazeb, Jeongmin Park. Computers, Materials & Continua, SCIE EI, 2024, Issue 1, pp. 1381-1398 (18 pages)
Object segmentation and recognition is an imperative area of computer vision and machine learning that identifies and separates individual objects within an image or video and determines classes or categories based on their features. The proposed system presents a distinctive approach to object segmentation and recognition using Artificial Neural Networks (ANNs). The system takes RGB images as input and uses a k-means clustering-based segmentation technique to fragment the intended parts of the images into different regions and label them based on their characteristics. Then, two distinct kinds of features are obtained from the segmented images to help identify the objects of interest. An ANN is then used to recognize the objects based on their features. Experiments were carried out with three standard datasets extensively used in object recognition research, MSRC, MS COCO, and Caltech 101, to measure the productivity of the suggested approach. The findings support the suggested system's validity, as it achieved class recognition accuracies of 89%, 83%, and 90.30% on the MSRC, MS COCO, and Caltech 101 datasets, respectively.
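The k-means segmentation step above groups pixels by feature similarity before labeling. A minimal Lloyd's-algorithm sketch on one-dimensional grayscale intensities (the paper clusters RGB images; the pixel values and initial centers here are invented):

```python
def kmeans_1d(values, centers, iters=20):
    """Lloyd's algorithm on scalar intensities with given initial centers."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            # Assign each value to its nearest center.
            nearest = min(range(len(centers)), key=lambda k: abs(v - centers[k]))
            clusters[nearest].append(v)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [sum(c) / len(c) if c else centers[k]
                   for k, c in enumerate(clusters)]
    return centers

pixels = [12, 10, 11, 200, 205, 198, 90, 95]  # three intensity groups
print(sorted(kmeans_1d(pixels, [0.0, 100.0, 255.0])))  # [11.0, 92.5, 201.0]
```

For RGB segmentation the same loop runs over 3-vectors with Euclidean distance; each resulting cluster becomes one image region.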
Keywords: K-region fusion, segmentation, recognition, feature extraction, artificial neural network, computer vision
Development of Social Media Analytics System for Emergency Event Detection and Crisis Management
13
Authors: Shaheen Khatoon, Majed AAlshamari, Amna Asif, Md Maruf Hasan, Sherif Abdou, Khaled Mostafa Elsayed, Mohsen Rashwan. Computers, Materials & Continua, SCIE EI, 2021, Issue 9, pp. 3079-3100 (22 pages)
Social media platforms have proven to be effective for information gathering during emergency events caused by natural or human-made disasters. Emergency response authorities, law enforcement agencies, and the public can use this information to gain situational awareness and improve disaster response. In case of emergencies, rapid responses are needed to address victims' requests for help. The research community has developed many social media platforms and used them effectively for emergency response and coordination in the past. However, most present deployments of platforms in crisis management are not automated, and their operational success largely depends on experts who analyze the information manually and coordinate with relevant humanitarian agencies or law enforcement authorities to initiate emergency response operations. The seamless integration of automatically identifying types of urgent needs from millions of posts and delivering relevant information to the appropriate agency for timely response has become essential. This research project aims to develop a generalized Information Technology (IT) solution for emergency response and disaster management by integrating social media data as its core component. In this paper, we focus on text analysis techniques that can help emergency response authorities filter the sheer amount of automatically gathered information to support their relief efforts. More specifically, we applied state-of-the-art Natural Language Processing (NLP), Machine Learning (ML), and Deep Learning (DL) techniques, ranging from unsupervised to supervised learning, for an in-depth analysis of social media data to extract real-time information on a critical event and facilitate emergency response in a crisis. As a proof of concept, a case study on the COVID-19 pandemic using data collected from Twitter is presented, providing evidence that the scientific and operational goals have been achieved.
Keywords: crisis management, social media analytics, machine learning, natural language processing, deep learning
Security Analysis and Enhanced Design of a Dynamic Block Cipher (cited: 3)
14
Authors: ZHAO Guosheng, WANG Jian. China Communications, SCIE CSCD, 2016, Issue 1, pp. 150-160 (11 pages)
There are many security issues in block cipher algorithms. A security analysis and enhanced design of a dynamic block cipher is proposed. Firstly, the safety of the ciphertext is enhanced based on confusion substitution of the S-box, disordering the internal structure of data blocks through four steps of matrix transformation. Then, the diffusivity of the ciphertext is obtained by cyclic displacement of bytes using a column ambiguity function. The dynamic key is finally generated using an LFSR, which improves the stochastic character of the secret key in each round of iteration. The safety performance of the proposed algorithm was analyzed by simulation tests. The results show that the proposed algorithm has little effect on the speed of encryption and decryption while enhancing security. Meanwhile, the proposed algorithm is highly scalable: the dimension of the S-box and the number of registers can be dynamically extended according to the security requirement.
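The dynamic key stage relies on an LFSR. A minimal Fibonacci LFSR sketch (the register width and tap positions are chosen for illustration, not taken from the paper):

```python
def lfsr_stream(state, taps, nbits):
    """Fibonacci LFSR: XOR the tapped bits to form the feedback bit.
    `state` is a list of 0/1 bits; `taps` are indices into it."""
    out = []
    for _ in range(nbits):
        out.append(state[-1])            # output the last register bit
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]        # shift right, insert feedback
    return out

# 4-bit register with taps at positions 0 and 3: maximal period of 15.
bits = lfsr_stream([1, 0, 0, 1], [0, 3], 15)
print(bits)  # period-15 keystream segment
```

A maximal-length n-bit LFSR cycles through all 2^n - 1 nonzero states, so its keystream is balanced (here: eight 1s per 15-bit period); real ciphers combine several LFSRs or add nonlinearity, since a single LFSR is linearly predictable.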
Keywords: block cipher algorithm, security analysis, enhanced design, security issues, cipher design, dynamic block, transformation matrix, internal structure
Modeling and Global Conflict Analysis of Firewall Policy (cited: 2)
15
Authors: LIANG Xiaoyan, XIA Chunhe, JIAO Jian, HU Junshun, LI Xiaojian. China Communications, SCIE CSCD, 2014, Issue 5, pp. 124-135 (12 pages)
A global view of firewall policy conflicts is important for administrators to optimize the policy. Appropriate global conflict analysis of firewall policy has been lacking; existing methods focus on local conflict detection. We study the global conflict detection algorithm in this paper. We present a semantic model that captures more complete classifications of the policy using the knowledge concept in rough set theory. Based on this model, we present a formal model of global conflict and represent it with an OBDD (Ordered Binary Decision Diagram). We then develop the GFPCDA (Global Firewall Policy Conflict Detection Algorithm) to detect global conflicts. In experiments, we evaluated the usability of our semantic model by eliminating the false positives and false negatives that an incomplete policy semantic model causes in a classical algorithm, and compared that algorithm with GFPCDA. The results show that GFPCDA detects conflicts more precisely and independently, and has better performance.
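A pairwise policy conflict arises when two rules match overlapping traffic but prescribe different actions. The paper's OBDD-based global algorithm is far more complete; this toy sketch over port ranges only illustrates the local overlap test (rule fields and values are invented):

```python
def ranges_overlap(a, b):
    """True if closed integer ranges a=(lo, hi) and b=(lo, hi) intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

def find_conflicts(rules):
    """Report index pairs of rules whose (src, dst) port ranges overlap
    but whose actions disagree."""
    conflicts = []
    for i in range(len(rules)):
        for j in range(i + 1, len(rules)):
            r, s = rules[i], rules[j]
            if (ranges_overlap(r["src"], s["src"])
                    and ranges_overlap(r["dst"], s["dst"])
                    and r["action"] != s["action"]):
                conflicts.append((i, j))
    return conflicts

rules = [
    {"src": (1024, 65535), "dst": (80, 80),  "action": "allow"},
    {"src": (2000, 3000),  "dst": (80, 443), "action": "deny"},
    {"src": (1, 1023),     "dst": (22, 22),  "action": "deny"},
]
print(find_conflicts(rules))  # [(0, 1)]
```

Real policies have five or more match fields and first-match semantics, which is why pairwise checks miss global effects and motivate whole-policy representations such as OBDDs.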
Keywords: firewall, conflict analysis, modeling, conflict detection, semantic model, policy conflict, detection algorithm, formal model
Automatic Detection of Aortic Dissection Based on Morphology and Deep Learning (cited: 2)
16
Authors: Yun Tan, Ling Tan, Xuyu Xiang, Hao Tang, Jiaohua Qin, Wenyan Pan. Computers, Materials & Continua, SCIE EI, 2020, Issue 3, pp. 1201-1215 (15 pages)
Aortic dissection (AD) is an acute and rapidly progressing cardiovascular disease. In this work, we build a CTA image library with 88 CT cases: 43 cases of aortic dissection and 45 healthy cases. An aortic dissection detection method based on CTA images is proposed. The ROI is extracted based on binarization and a morphological opening operation. Deep learning networks (InceptionV3, ResNet50, and DenseNet) are applied after preprocessing of the datasets. Recall, F1-score, Matthews correlation coefficient (MCC), and other performance indexes are investigated. It is shown that the deep learning methods perform much better than the traditional method, and among them, DenseNet121 exceeds networks such as ResNet50 and InceptionV3.
Keywords: aortic dissection, detection, morphology, DenseNet
An Automated and Real-time Approach of Depression Detection from Facial Micro-expressions (cited: 2)
17
Authors: Ghulam Gilanie, Mahmood ul Hassan, Mutyyba Asghar, Ali Mustafa Qamar, Hafeez Ullah, Rehan Ullah Khan, Nida Aslam, Irfan Ullah Khan. Computers, Materials & Continua, SCIE EI, 2022, Issue 11, pp. 2513-2528 (16 pages)
Depression is a mental psychological disorder that may cause a physical disorder or lead to death. It is highly impactful on the socioeconomic life of a person; therefore, its effective and timely detection is needed. Beyond speech and gait, facial expressions hold valuable clues to depression. This study proposes a depression detection system based on facial expression analysis. Facial features have been used for depression detection using a Support Vector Machine (SVM) and a Convolutional Neural Network (CNN). We extracted micro-expressions using the Facial Action Coding System (FACS) as Action Units (AUs) correlated with the sad, disgust, and contempt features for depression detection. A CNN-based model is also proposed in this study to automatically classify depressed subjects from images or videos in real time. Experiments have been performed on a dataset obtained from Bahawal Victoria Hospital, Bahawalpur, Pakistan, labeled per the patient health questionnaire depression scale (PHQ-8) for inferring the mental condition of a patient. The experiments revealed 99.9% validation accuracy with the proposed CNN model, while the extracted features obtained 100% accuracy with SVM. Moreover, the results proved the superiority of the reported approach over state-of-the-art methods.
Keywords: depression detection, facial micro-expressions, facial landmarked images
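As a rough sketch of the SVM branch of the approach above, the snippet below trains an RBF-kernel SVM on action-unit intensity features. The feature dimensionality, class means, and separation are invented for illustration and stand in for the clinical PHQ-8-labeled data, which is not publicly available.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical AU intensity features (e.g. AUs associated with sadness,
# disgust, and contempt); synthetic stand-in for the clinical dataset.
n = 200
X_depressed = rng.normal(loc=0.7, scale=0.15, size=(n, 3))
X_control = rng.normal(loc=0.3, scale=0.15, size=(n, 3))
X = np.vstack([X_depressed, X_control])
y = np.array([1] * n + [0] * n)  # 1 = depressed, 0 = control

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

On real AU features the separation would of course be far less clean than on this synthetic data.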
CSMCCVA: Framework of cross-modal semantic mapping based on cognitive computing of visual and auditory sensations (cited 1 time)
18
Authors: Liu Yang, Zheng Fengbin, Zuo Xianyu 《High Technology Letters》 EI CAS, 2016, No. 1, pp. 90-98 (9 pages)
Cross-modal semantic mapping and cross-media retrieval are key problems for multimedia search engines. This study analyzes the hierarchy, functionality, and structure of the visual and auditory sensations in the cognitive system, and establishes a brain-like cross-modal semantic mapping framework based on cognitive computing of visual and auditory sensations. The framework takes into account the mechanisms of visual-auditory multisensory integration, selective attention in the thalamo-cortical circuit, emotional control in the limbic system, and memory enhancement in the hippocampus. The algorithms of cross-modal semantic mapping are then given. Experimental results show that the framework can be effectively applied to cross-modal semantic mapping, and it also offers guidance for brain-like computing with non-von Neumann architectures.
Keywords: computing framework, semantic mapping, modality, auditory, visual, multimedia search engine, hierarchical structure, cognitive system
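A common way to realize cross-modal semantic mapping, sketched here in minimal form, is to project each modality into a shared semantic space and rank items by cosine similarity. The random projection matrices below are placeholders for learned mappings; the dimensions are assumptions and the example is not the CSMCCVA framework itself.

```python
import numpy as np

rng = np.random.default_rng(1)
d_vis, d_aud, d_sem = 64, 32, 16

# Placeholder projections into a shared semantic space
# (in a trained system these would be learned, not random).
W_v = rng.normal(size=(d_sem, d_vis)) / np.sqrt(d_vis)  # visual -> semantic
W_a = rng.normal(size=(d_sem, d_aud)) / np.sqrt(d_aud)  # auditory -> semantic

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vis: np.ndarray, audio_bank: list) -> int:
    """Map a visual query into the shared space and return the index
    of the most similar audio item (cross-media retrieval)."""
    q = W_v @ query_vis
    scores = [cosine(q, W_a @ a) for a in audio_bank]
    return int(np.argmax(scores))
```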
Development of a particle swarm optimization based support vector regression model for titanium dioxide band gap characterization
19
Author: Taoreed O. Owolabi 《Journal of Semiconductors》 EI CAS CSCD, 2019, No. 2, pp. 49-55 (7 pages)
The energy band gap of the titanium dioxide (TiO_2) semiconductor plays a significant role in many practical applications of the semiconductor and determines its appropriateness for technological and industrial applications such as UV absorption, pigments, photo-catalysis, pollution control systems, and solar cells, among others. Substitution of impurities into the crystal lattice structure is the most commonly used method of tuning the band gap of TiO_2 for a specific application, and it eventually leads to lattice distortion. This work utilizes the distortion in the lattice structure to estimate the band gap of doped TiO_2, for the first time, through hybridization of a particle swarm optimization (PSO) algorithm with a support vector regression (SVR) algorithm, yielding a PSO-SVR model. The precision and accuracy of the developed PSO-SVR model were further justified by applying the model to estimate the effect of cobalt-sulfur co-doping, nickel-iodine co-doping, and tungsten and indium doping on the band gap of TiO_2; excellent agreement with the experimentally reported values was achieved. Practical implementation of the proposed PSO-SVR model would further widen the applications of the semiconductor and reduce the experimental effort involved in band gap determination of TiO_2.
Keywords: band gap, lattice distortion, crystal lattice parameters, particle swarm optimization, support vector regression, titanium dioxide
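A minimal sketch of the PSO-SVR hybridization: a small particle swarm searches the SVR hyperparameters (C, gamma) by cross-validated error. The synthetic lattice-parameter-to-band-gap data, the search bounds, and all PSO constants are assumptions for illustration, not the paper's dataset or settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in: tetragonal lattice parameters (a, c) -> band gap (eV).
X = rng.uniform([4.5, 2.9], [4.7, 3.1], size=(80, 2))
y = 3.2 - 2.0 * (X[:, 0] - 4.59) + 1.5 * (X[:, 1] - 2.96) + rng.normal(0, 0.01, 80)

def fitness(params: np.ndarray) -> float:
    """Higher is better: negative cross-validated MSE of an SVR."""
    C, gamma = params
    return cross_val_score(SVR(C=C, gamma=gamma), X, y,
                           cv=3, scoring="neg_mean_squared_error").mean()

# Minimal PSO over (C, gamma); bounds and coefficients are arbitrary choices.
n_particles, n_iter = 8, 15
lo, hi = np.array([0.1, 0.01]), np.array([100.0, 10.0])
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    # Inertia + cognitive pull toward personal best + social pull toward global best.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()
```

After the loop, `gbest` holds the best (C, gamma) found, and an SVR refit with it on the full training set would serve as the final regressor.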
Text Simplification Using Transformer and BERT (cited 1 time)
20
Authors: Sarah Alissa, Mike Wald 《Computers, Materials & Continua》 SCIE EI, 2023, No. 5, pp. 3479-3495 (17 pages)
Reading and writing are the main methods of interacting with web content. Text simplification tools are helpful for people with cognitive impairments, new language learners, and children, as they might find it difficult to understand complex web content. Text simplification is the process of changing complex text into more readable and understandable text. Recent approaches to text simplification have adopted the machine translation paradigm to learn simplification rules from a parallel corpus of complex and simple sentences. In this paper, we propose two models based on the transformer, an encoder-decoder architecture that achieves state-of-the-art (SOTA) results in machine translation. The training process for our models includes three steps: preprocessing the data with a subword tokenizer, training the model and optimizing it with the Adam optimizer, and then using the model to decode the output. The first model uses the transformer only, while the second integrates Bidirectional Encoder Representations from Transformers (BERT) as the encoder to improve training time and results. The performance of the transformer-only model, evaluated with the Bilingual Evaluation Understudy (BLEU) score, reached 53.78 on the WikiSmall dataset. The experiment on the second model, which integrates BERT, shows that the validation loss decreased much faster than for the model without BERT. However, its BLEU score was lower (44.54), which could be due to the size of the dataset: the model was overfitting and unable to generalize well. In future work, the second model could therefore be evaluated on a larger dataset such as WikiLarge. In addition, further analysis of the models' results and the dataset has been carried out using different evaluation metrics to understand their performance.
Keywords: text simplification, neural machine translation, transformer
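The BLEU metric used above to score simplifications can be computed for a single sentence pair as follows. This is a simplified uniform-weight implementation with naive smoothing, shown only to make the metric concrete; it is not the exact evaluation script used in the paper.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference: str, candidate: str, max_n: int = 4) -> float:
    """BLEU with uniform n-gram weights, clipped counts, and a
    brevity penalty; tiny-epsilon smoothing avoids log(0)."""
    ref, cand = reference.split(), candidate.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        overlap = sum((cand_counts & ref_counts).values())  # clipped matches
        total = max(sum(cand_counts.values()), 1)
        log_prec += math.log((overlap + 1e-9) / total) / max_n
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_prec)
```

For corpus-level scores as reported in the paper (53.78 and 44.54), the clipped counts and lengths are aggregated over all sentence pairs before taking the logs, rather than averaging per-sentence scores.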