Journal Articles
8 articles found
1. A weighted block cooperative sparse representation algorithm based on visual saliency dictionary
Authors: Rui Chen, Fei Li, Ying Tong, Minghu Wu, Yang Jiao. CAAI Transactions on Intelligence Technology (SCIE, EI), 2023, No. 1, pp. 235-246.
Unconstrained face images are affected by many factors such as illumination, pose, expression, occlusion, age, and accessories, resulting in random noise contamination of the original samples. To improve sample quality, a weighted block cooperative sparse representation algorithm based on a visual saliency dictionary is proposed. First, the algorithm uses the biological visual attention mechanism to quickly and accurately locate the salient facial regions and constructs the visual saliency dictionary. Then, a block cooperation framework is presented to perform sparse coding for the different local structures of the face, and a weighted regularization term is introduced into the sparse representation process to enhance the discriminative information hidden in the coding coefficients. Finally, by synthesizing the sparse representation results of all visually salient block dictionaries, the global coding residual is obtained and the class label is assigned. Experimental results on four databases (AR, Extended Yale B, LFW, and PubFig) indicate that the combination of a visual saliency dictionary, block cooperative sparse representation, and weighted constraint coding can effectively enhance the accuracy of sparse representation of the test samples and improve the performance of unconstrained face recognition.
Keywords: cooperative sparse representation, dictionary learning, face recognition, feature extraction, noise dictionary, visual saliency
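As a rough illustration of the coding-and-residual idea behind sparse/collaborative representation classifiers like the one above: the sketch below uses a plain ℓ2-regularized (collaborative) coding step with an optional per-atom weight matrix, then assigns the label of the class whose atoms best reconstruct the test sample. The paper's block partitioning and saliency dictionary are omitted; `crc_classify` and its weighting scheme are illustrative, not the authors' algorithm.

```python
import numpy as np

def crc_classify(D, labels, y, lam=0.01, w=None):
    """Classify y by regularized collaborative coding over dictionary D.

    D: (d, n) matrix whose columns are training samples,
    labels: (n,) class label per column, y: (d,) test sample,
    w: optional (n,) per-atom weights (a stand-in for the weighted
    regularization term mentioned in the abstract).
    """
    n = D.shape[1]
    W = np.diag(np.ones(n) if w is None else np.asarray(w, float))
    # Ridge-style coding: x = argmin ||y - Dx||^2 + lam * ||Wx||^2
    x = np.linalg.solve(D.T @ D + lam * (W.T @ W), D.T @ y)
    # Class-wise reconstruction residuals; the smallest residual wins
    residuals = {c: np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get), residuals
```

On a toy two-class dictionary, the class whose columns resemble the test vector yields the smaller residual and therefore the predicted label.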
2. Vehicle Detection Based on Visual Saliency and Deep Sparse Convolution Hierarchical Model (cited by 4)
Authors: CAI Yingfeng, WANG Hai, CHEN Xiaobo, GAO Li, CHEN Long. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2016, No. 4, pp. 765-772.
Traditional vehicle detection algorithms use traverse-search-based candidate generation and hand-crafted-feature-based classifier training for candidate verification. These methods generally suffer from long processing times and low detection performance. To address this, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual saliency calculation is first used to generate a small set of vehicle candidate regions. The candidate sub-images are then fed into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. Experimental results on existing datasets and on real road images captured by our group show a 94.81% detection rate and a 0.78% false detection rate, outperforming existing state-of-the-art algorithms. More importantly, the deep sparse convolution network generates highly discriminative multi-scale features, which have broad application prospects for target recognition in the field of intelligent vehicles.
Keywords: vehicle detection, visual saliency, deep model, convolutional neural network
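The candidate-generation step described above (cheap saliency pass first, heavy classifier second) can be illustrated with a minimal sliding-window filter that keeps only windows whose mean saliency passes a threshold. `saliency_candidates` is a hypothetical helper sketching the idea, not the paper's method.

```python
import numpy as np

def saliency_candidates(sal, win=8, stride=4, thresh=0.5):
    """Keep windows whose mean saliency exceeds a threshold, shrinking the
    search space before a heavier CNN + SVM verification stage.
    Returns (row, col, height, width) boxes."""
    boxes = []
    h, w = sal.shape
    for i in range(0, h - win + 1, stride):
        for j in range(0, w - win + 1, stride):
            if sal[i:i + win, j:j + win].mean() > thresh:
                boxes.append((i, j, win, win))
    return boxes
```

Only the windows surviving this filter would be passed to the expensive verification model.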
3. Ship detection and extraction using visual saliency and histogram of oriented gradient (cited by 6)
Authors: 徐芳, 刘晶红. Optoelectronics Letters (EI), 2016, No. 6, pp. 473-477.
A novel unsupervised ship detection and extraction method is proposed. A combination model based on visual saliency is constructed to search for ship target regions and suppress false alarms. The salient target regions are extracted and marked through segmentation, and the Radon transform is applied to confirm suspected ship targets with symmetric profiles. Then a new descriptor, an improved histogram of oriented gradient (HOG), is introduced to discriminate the real ships. Experimental results on real optical remote sensing images demonstrate that ships can be extracted and located successfully and that their number can be accurately determined. Furthermore, the proposed method is superior to the comparison methods in terms of both accuracy rate and false alarm rate.
Keywords: HOG, ship detection and extraction, visual saliency
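The abstract does not specify which saliency model the combination uses; one common lightweight choice for this kind of target search is the spectral-residual model, sketched below under that assumption (the usual Gaussian post-smoothing of the map is omitted for brevity).

```python
import numpy as np

def spectral_residual_saliency(img):
    """Spectral-residual saliency map for a 2-D float image.

    The log-amplitude spectrum minus its local (3x3 box) average is taken
    as the 'residual'; combining it with the original phase and inverting
    highlights unexpected, i.e. salient, structure.
    """
    F = np.fft.fft2(img)
    log_amp = np.log(np.abs(F) + 1e-8)
    phase = np.angle(F)
    # 3x3 box average of the log-amplitude spectrum
    pad = np.pad(log_amp, 1, mode='edge')
    h, w = log_amp.shape
    avg = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    residual = log_amp - avg
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()
```

The normalized map can then be thresholded and segmented to obtain candidate target regions.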
4. Saliency detection and edge feature matching approach for crater extraction (cited by 2)
Authors: An Liu, Donghua Zhou, Lixin Chen, Maoyin Chen. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2015, No. 6, pp. 1291-1300.
Craters are salient terrain features on planetary surfaces and provide useful information about the relative dating of planetary geological units. They are also ideal landmarks for spacecraft navigation. Due to low contrast and uneven illumination, automatic crater extraction remains a challenging task. This paper presents a saliency detection method for crater edges and a feature matching algorithm based on edge information. Craters are extracted through salient edge detection, edge extraction and selection, feature matching of edges belonging to the same crater, and robust ellipse fitting. In the edge matching algorithm, a crater feature model is proposed by analyzing the relationship between highlight-region edges and shadow-region edges; crater edges are then paired by the matching algorithm. Experiments on real planetary images show that the proposed approach is robust to different illumination conditions and topographies, with a detection rate above 90%.
Keywords: crater, automatic extraction, visual saliency, feature matching, edge detection
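The final "robust ellipse fitting" step above can be illustrated by its simplest non-robust core: a least-squares conic fit to the paired edge points. A real crater detector would replace this with a constrained fit (e.g. a direct ellipse fit) plus outlier rejection; this sketch shows only the basic algebraic step.

```python
import numpy as np

def fit_ellipse(x, y):
    """Least-squares conic fit  a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1.

    x, y: 1-D arrays of edge-point coordinates.
    Returns the coefficient vector (a, b, c, d, e).
    """
    A = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    return coef
```

For example, points sampled on the circle x² + y² = 4 recover the coefficients (1/4, 0, 1/4, 0, 0).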
5. Effective Video Summarization Approach Based on Visual Attention
Authors: Hilal Ahmad, Habib Ullah Khan, Sikandar Ali, Syed Ijaz Ur Rahman, Fazli Wahid, Hizbullah Khattak. Computers, Materials & Continua (SCIE, EI), 2022, No. 4, pp. 1427-1442.
Video summarization is applied to reduce redundancy and develop a concise representation of the key frames in a video; more recently, video summaries have been produced through visual attention modeling. In these schemes, the frames that stand out visually are extracted as key frames based on human attention modeling theories. Such visual attention schemes have proven effective for video summaries, but their high computational cost restricts their usability in everyday situations. In this context, we propose a key frame extraction (KFE) method built on an efficient and accurate visual attention model. The computational effort is minimized by deriving dynamic visual saliency from the temporal gradient instead of traditional optical flow techniques, and an efficient discrete cosine transform technique is utilized for static visual saliency. The dynamic and static visual attention measures are merged by means of a non-linear weighted fusion technique. The results are compared with existing state-of-the-art techniques, and the experiments indicate the efficiency and high quality of the extracted key frames.
Keywords: key frame extraction (KFE), video summarization, visual saliency, visual attention model
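A minimal sketch of the static/dynamic split described above: static saliency via a DCT "image signature" (keeping only the sign of the DCT and inverting), dynamic saliency via the temporal gradient (frame difference), and a non-linear fusion. The abstract does not give the exact fusion rule, so the `gamma` weighting here is an assumption.

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix (C @ C.T == I)."""
    n = np.arange(N)
    C = np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)
    C[0] *= np.sqrt(1.0 / N)
    C[1:] *= np.sqrt(2.0 / N)
    return C

def static_saliency(frame):
    """'Image signature' static saliency: reconstruct from sign of the DCT."""
    C, R = dct_matrix(frame.shape[0]), dct_matrix(frame.shape[1])
    sig = C.T @ np.sign(C @ frame @ R.T) @ R  # IDCT(sign(DCT(frame)))
    return sig ** 2

def dynamic_saliency(prev, curr):
    """Temporal-gradient saliency: absolute frame difference."""
    return np.abs(curr - prev)

def fuse(stat, dyn, gamma=2.0):
    """Non-linear weighted fusion of normalized maps (gamma is assumed)."""
    s = stat / (stat.max() + 1e-8)
    d = dyn / (dyn.max() + 1e-8)
    return (s ** gamma + d ** gamma) ** (1.0 / gamma)
```

Frames whose fused map carries unusually high total attention would then be retained as key frames.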
6. Validity, reliability, and psychometric properties of a computerized cognitive assessment test (Cognivue®) (cited by 2)
Authors: Diego Cahn-Hidalgo, Paul W. Estes, Reina Benabou. World Journal of Psychiatry (SCIE), 2020, No. 1, pp. 1-11.
BACKGROUND: Cognitive disorders such as Alzheimer's disease and other dementias confer a substantial negative impact. Problems relating to sensitivity, subjectivity, and inherent bias can limit the usefulness of many traditional methods of assessing cognitive impairment.
AIM: To determine cut-off scores for classification of cognitive impairment and to assess Cognivue® safety and efficacy in a large validation study.
METHODS: Adults (age 55-95 years) at risk for age-related cognitive decline or dementia were invited via posters and email to participate in two cohort studies conducted at various outpatient clinics and assisted- and independent-living facilities. In the cut-off score determination study (n = 92), optimization analyses by positive percent agreement (PPA) and negative percent agreement (NPA), and by accuracy and error bias, were conducted. In the clinical validation study (n = 401), regression, rank linear regression, and factor analyses were conducted; participants in this study also completed other neuropsychological tests.
RESULTS: In the cut-off score determination study, 92 participants completed the St. Louis University Mental Status (SLUMS, reference standard) and Cognivue® tests. SLUMS cut-off scores of <21 (impairment) and >26 (no impairment) corresponded to Cognivue® scores of 54.5 (NPA = 0.92; PPA = 0.64) and 78.5 (NPA = 0.5; PPA = 0.79), respectively. Conservatively, therefore, Cognivue® scores of 55-64 corresponded to impairment and 74-79 to no impairment. In the clinical validation study, 401 participants completed at least one testing session, and 358 completed two sessions 1-2 wk apart. Cognivue® classification scores were validated, demonstrating good agreement with SLUMS scores (weighted κ = 0.57; 95%CI: 0.50-0.63). Reliability analyses showed similar scores across repeated testing for Cognivue® (R² = 0.81; r = 0.90) and SLUMS (R² = 0.67; r = 0.82). Psychometric validity of Cognivue® was demonstrated against traditional neuropsychological tests; scores correlated most closely with measures of verbal processing, manual dexterity/speed, visual contrast sensitivity, visuospatial/executive function, and speed/sequencing.
CONCLUSION: Cognivue® scores ≤50 avoid misclassification of impairment, and scores ≥75 avoid misclassification of non-impairment. The validation study demonstrates good agreement between Cognivue® and SLUMS, superior reliability, and good psychometric validity.
Keywords: cognitive screening test, dementia, memory, motor control, perceptual processing, visual salience
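The PPA/NPA figures used for the cut-off optimization above reduce to simple agreement ratios between the test's impairment calls and the reference standard's, as this small sketch shows:

```python
def percent_agreement(test_impaired, ref_impaired):
    """Positive/negative percent agreement of a test classification against
    a reference standard (here: Cognivue vs. SLUMS impairment calls).

    Both arguments are sequences of booleans, one entry per participant.
    """
    pairs = list(zip(test_impaired, ref_impaired))
    tp = sum(1 for t, r in pairs if t and r)          # both call impairment
    fn = sum(1 for t, r in pairs if not t and r)      # reference-only call
    tn = sum(1 for t, r in pairs if not t and not r)  # both call no impairment
    fp = sum(1 for t, r in pairs if t and not r)      # test-only call
    ppa = tp / (tp + fn) if tp + fn else float('nan')
    npa = tn / (tn + fp) if tn + fp else float('nan')
    return ppa, npa
```

Sweeping a candidate Cognivue® cut-off and recomputing these two ratios against the SLUMS classification is the essence of the optimization the abstract describes.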
7. Saliency-Based Fidelity Adaptation Preprocessing for Video Coding
Authors: 卢少平, 张松海. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2011, No. 1, pp. 195-202.
In this paper, we present a video coding scheme which applies visual saliency computation to adjust image fidelity before compression. To extract visually salient features, we construct a spatio-temporal saliency map by analyzing the video with a combined bottom-up and top-down visual saliency model. We then use an extended bilateral filter, in which the local intensity and spatial scales are adjusted according to visual saliency, to adaptively alter the image fidelity. Our implementation is based on the H.264 video encoder JM12.0. Besides evaluating our scheme against the H.264 reference software, we also compare it to a more traditional foreground-background segmentation-based method and to a foveation-based approach which employs Gaussian blurring. Our results show that the proposed algorithm improves the compression ratio significantly while effectively preserving perceptual visual quality.
Keywords: visual saliency, bilateral filter, fidelity adjustment, region of interest, encoder
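A toy version of the saliency-adjusted bilateral filter idea: here only the range scale is modulated (low saliency → stronger smoothing, discarding detail the encoder need not spend bits on), which is one plausible reading of the scheme; the paper also adjusts the spatial scale, and the exact mapping from saliency to filter scales is not given in the abstract.

```python
import numpy as np

def adaptive_bilateral(img, sal, sigma_s=2.0, sigma_r=0.2, radius=2):
    """Bilateral filter whose range scale grows where saliency is low.

    img: 2-D float image; sal: saliency map assumed normalized to [0, 1].
    """
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    sr = sigma_r * (2.0 - sal)  # low saliency -> doubled range scale
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pad = np.pad(img, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - img[i, j]) ** 2 / (2 * sr[i, j] ** 2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

A step edge marked non-salient is smoothed harder than the same edge marked salient, which is exactly the fidelity trade the scheme relies on.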
8. Down image recognition based on deep convolutional neural network
Authors: Wenzhu Yang, Qing Liu, Sile Wang, Zhenchao Cui, Xiangyang Chen, Liping Chen, Ningyu Zhang. Information Processing in Agriculture (EI), 2018, No. 2, pp. 246-252.
Owing to the scale and the various shapes of down in the image, it is difficult for traditional image recognition methods, and even a Traditional Convolutional Neural Network (TCNN), to correctly recognize the type of a down image at the required accuracy. To deal with these problems, a Deep Convolutional Neural Network (DCNN) for down image classification is constructed and a new weight initialization method is proposed. First, the salient regions of a down image are cut out using a visual saliency model. These salient regions are then used to train a sparse autoencoder, yielding a collection of convolutional filters that accord with the statistical characteristics of the dataset. Finally, a DCNN with Inception modules and their variants is constructed, and the network is deepened to improve recognition accuracy. Experimental results indicate that the constructed DCNN increases recognition accuracy by 2.7% compared with the TCNN when recognizing the down in images, and that the new weight initialization method improves the convergence rate of the proposed DCNN by 25.5% compared with the TCNN.
Keywords: deep convolutional neural network, weight initialization, sparse autoencoder, visual saliency model, image recognition