Journal articles: 7 articles found
1. "融视": Maximizing Media Value
Author: Zhao Ting. 《视听界》, 2014, No. 3, pp. 71-73 (3 pages)
In the process of deep media convergence, a powerful all-media, full-industry-chain conglomerate, "融视", may emerge. "融视" would integrate all resources in the field of media technology and, through organizational restructuring and improved institutional mechanisms, maximize media value. The industry chain held by "融视" would have the core advantages of unified command, close interconnection and mutual collaboration, so that it could coordinate and respond collectively in major news coverage, achieving a win-win outcome for media companies, advertisers and audiences.
Keywords: "融视"; all-media, full-industry-chain conglomerate; value maximization
2. Design of a road vehicle detection system based on monocular vision (Cited by 5)
Authors: Wang Hai, Zhang Weigong, Cai Yingfeng. Journal of Southeast University (English Edition) (EI, CAS), 2011, No. 2, pp. 169-173 (5 pages)
In order to decrease vehicle crashes, a new rear-view vehicle detection system based on monocular vision is designed. First, a small and flexible hardware platform based on a DM642 digital signal processor (DSP) micro-controller is built. Then, a two-step vehicle detection algorithm is proposed. In the first step, a fast vehicle edge and symmetry fusion algorithm is used with a low threshold, so that all possible vehicles are detected at a nearly 100% rate (TP) while non-vehicles produce a high false detection rate (FP); i.e., all the possible vehicle candidates are obtained. In the second step, a classifier using a probabilistic neural network (PNN) based on multi-scale, multi-orientation Gabor features is trained to classify the candidates and eliminate the falsely detected vehicles generated in the first step. Experimental results demonstrate that the proposed system maintains a high detection rate and a low false detection rate under different road, weather and lighting conditions.
Keywords: vehicle detection; monocular vision; edge and symmetry fusion; Gabor feature; PNN
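
The second stage described in this abstract, multi-scale, multi-orientation Gabor responses feeding a probabilistic neural network, can be sketched as follows. This is a minimal illustration, not the authors' code: the helper names (gabor_kernel, gabor_features, pnn_predict), filter-bank parameters and smoothing width are assumptions.

import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(sigma, theta, lambd, gamma=0.5, psi=0.0, size=21):
    # Real-valued Gabor filter: Gaussian envelope modulated by a cosine carrier.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lambd + psi)

def gabor_features(patch, scales=(2.0, 4.0), n_orient=4):
    # Mean/std of the responses of a small multi-scale, multi-orientation filter bank;
    # patch is a grayscale candidate window (e.g. 64x64 float array).
    feats = []
    for s in scales:
        for k in range(n_orient):
            resp = convolve2d(patch, gabor_kernel(s, k * np.pi / n_orient, lambd=4 * s), mode="same")
            feats += [resp.mean(), resp.std()]
    return np.array(feats)

def pnn_predict(train_X, train_y, test_X, smoothing=0.5):
    # Probabilistic neural network: a Parzen-window estimate of each class density;
    # the class (vehicle vs. non-vehicle) with the larger kernel sum wins.
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        Xc = train_X[train_y == c]
        d2 = ((test_X[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        scores.append(np.exp(-d2 / (2.0 * smoothing ** 2)).mean(axis=1))
    return classes[np.argmax(np.stack(scores, axis=1), axis=1)]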
3. MRI and PET images fusion based on human retina model (Cited by 2)
Authors: DANESHVAR Sabalan, GHASSEMIAN Hassan. Journal of Zhejiang University-Science A (Applied Physics & Engineering) (SCIE, EI, CAS, CSCD), 2007, No. 10, pp. 1624-1632 (9 pages)
The diagnostic potential of brain positron emission tomography (PET) imaging is limited by low spatial resolution. To address this problem, we propose a technique for the fusion of PET and MRI images. This fusion is a trade-off between the spectral information extracted from PET images and the spatial information extracted from high-resolution MRI, and the proposed method can control this trade-off. To achieve this goal, a multiscale fusion model is built, based on the retinal cell photoreceptor model. This paper introduces the general principles of this model and its application in multispectral medical image fusion. Results show that the proposed method preserves more spectral features with less spatial distortion. Compared with the hue-intensity-saturation (HIS), discrete wavelet transform (DWT), wavelet-based sharpening and wavelet à trous transform methods, the best spectral and spatial quality is achieved simultaneously only with the proposed feature-based data fusion method. The method does not require resampling of the images, which is an advantage over the other methods, and it can operate with any aspect ratio between the pixels of the MRI and PET images.
Keywords: image fusion; retina-based; multiresolution; magnetic resonance imaging (MRI); positron emission tomography (PET)
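
The retinal photoreceptor model itself cannot be reconstructed from the abstract; the sketch below only illustrates the spectral/spatial trade-off the paper describes, injecting high-frequency MRI detail into a co-registered PET intensity image. The function name, blending weight alpha and smoothing width sigma are assumptions, not values from the paper.

import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_pet_mri(pet_intensity, mri, alpha=0.6, sigma=2.0):
    # Generic multiscale-style fusion: MRI supplies fine spatial structure (high-pass),
    # PET supplies the functional (spectral) signal; alpha controls the trade-off.
    mri = (mri - mri.min()) / (mri.max() - mri.min() + 1e-9)   # normalize to [0, 1]
    detail = mri - gaussian_filter(mri, sigma)                 # high-frequency anatomy from MRI
    fused = pet_intensity + alpha * detail
    return np.clip(fused, 0.0, 1.0)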
4. Multi-view feature fusion for rolling bearing fault diagnosis using random forest and autoencoder (Cited by 6)
Authors: Sun Wenqing, Deng Aidong, Deng Minqiang, Zhu Jing, Zhai Yimeng, Cheng Qiang, Liu Yang. Journal of Southeast University (English Edition) (EI, CAS), 2019, No. 3, pp. 302-309 (8 pages)
To improve the accuracy and robustness of rolling bearing fault diagnosis under complex conditions, a novel method based on multi-view feature fusion is proposed. First, multi-view features from the time domain, frequency domain and time-frequency domain are extracted through the Fourier transform, Hilbert transform and empirical mode decomposition (EMD). Then, the random forest (RF) model is applied to select the features that are highly correlated with the bearing operating state. Subsequently, the selected features are fused via an autoencoder (AE) to further reduce redundancy. Finally, the effectiveness of the fused features is evaluated with a support vector machine (SVM). The experimental results indicate that the proposed multi-view feature fusion method can effectively reflect differences in the state of the rolling bearing and improve the accuracy of fault diagnosis.
Keywords: multi-view features; feature fusion; fault diagnosis; rolling bearing; machine learning
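
A minimal end-to-end sketch of the pipeline this abstract outlines (RF-based feature selection, autoencoder fusion, SVM evaluation), assuming the multi-view features have already been extracted into a matrix X with fault labels y. A single-hidden-layer MLPRegressor trained to reconstruct its input stands in for the autoencoder, and all hyper-parameters are placeholders rather than the paper's settings.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def diagnose(X, y, n_keep=20, code_dim=8):
    # 1) Random forest ranks features by importance (proxy for correlation with bearing state).
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    keep = np.argsort(rf.feature_importances_)[::-1][:n_keep]

    scaler = StandardScaler().fit(X[:, keep])
    Xs = scaler.transform(X[:, keep])

    # 2) A one-hidden-layer network trained to reconstruct its input acts as the autoencoder;
    #    the hidden activations are the fused, low-redundancy representation.
    ae = MLPRegressor(hidden_layer_sizes=(code_dim,), activation="relu",
                      max_iter=3000, random_state=0).fit(Xs, Xs)
    code = np.maximum(0.0, Xs @ ae.coefs_[0] + ae.intercepts_[0])

    # 3) SVM evaluates how separable the fused features are.
    svm = SVC(kernel="rbf").fit(code, y)
    return keep, scaler, ae, svm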
5. COMBINING SCENE MODEL AND FUSION FOR NIGHT VIDEO ENHANCEMENT (Cited by 1)
Authors: Li Jing, Yang Tao, Pan Quan, Cheng Yongmei. Journal of Electronics (China), 2009, No. 1, pp. 88-93 (6 pages)
This paper presents a video context enhancement method for night surveillance. The basic idea is to extract and fuse the meaningful information of video sequences captured from a fixed camera under different illuminations. A unique characteristic of the algorithm is that it separates the image context into two classes and estimates them in different ways. One class contains the basic surrounding scene information and the scene model, which are obtained via background modeling and object tracking in the daytime video sequence. The other class is extracted from the nighttime video and includes frequently moving regions, high-illumination regions and high-gradient regions. The scene model and a pixel-wise difference method are used to segment these three regions. A shift-invariant discrete wavelet based image fusion technique is then used to integrate all of this context information into the final result. Experimental results demonstrate that the proposed approach provides much more detail and meaningful information for nighttime video.
Keywords: night video enhancement; image fusion; background modeling; object tracking
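
The shift-invariant wavelet fusion step mentioned above can be sketched with PyWavelets' stationary wavelet transform. The function name, wavelet choice and fusion rule (average the approximations, keep the larger-magnitude detail coefficient) are assumptions, and the scene-model segmentation stage is omitted.

import numpy as np
import pywt

def fuse_frames(day, night, wavelet="db2", level=2):
    # day / night: float grayscale frames whose sides are multiples of 2**level,
    # a requirement of the stationary (undecimated) wavelet transform.
    cd = pywt.swt2(day, wavelet, level=level)
    cn = pywt.swt2(night, wavelet, level=level)
    fused = []
    for (a_day, d_day), (a_night, d_night) in zip(cd, cn):
        approx = 0.5 * (a_day + a_night)                          # average coarse scene content
        details = tuple(np.where(np.abs(x) >= np.abs(y), x, y)    # keep the stronger edges
                        for x, y in zip(d_day, d_night))
        fused.append((approx, details))
    return pywt.iswt2(fused, wavelet)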
6. Exploration of the problem and measures of financing of Small and Medium Enterprises under the sight of economic restructuring
Author: Jinqiang LU. International Journal of Technology Management, 2015, No. 1, pp. 67-69 (3 pages)
This paper first analyzes the status and financing behavior characteristics of SMEs, pointing out that the government has the responsibility and obligation to solve the financing problems of SMEs. It then analyzes the current situation and causes of SME financing difficulties in the context of the present economic conditions. Finally, it proposes specific measures to ease SME financing difficulties, including establishing an enterprise credit system, setting up small and medium-sized financial institutions, expanding financing channels, increasing government support and accelerating the pace of restructuring.
Keywords: economic transformation; SMEs; financing; countermeasures
7. Video Concept Detection Based on Multiple Features and Classifiers Fusion (Cited by 1)
Authors: Dong Yuan, Zhang Jiwei, Zhao Nan, Chang Xiaofu, Liu Wei. China Communications (SCIE, CSCD), 2012, No. 8, pp. 105-121 (17 pages)
The rapid growth of multimedia content necessitates powerful technologies to filter, classify, index and retrieve video documents more efficiently. However, the essential bottleneck of image and video analysis is the semantic gap: low-level features extracted by computers often fail to coincide with the high-level concepts interpreted by humans. In this paper, we present a generic scheme for the detection of video semantic concepts based on machine learning over multiple visual features. Various global and local low-level visual features are systematically investigated, and a kernel-based learning method equips the concept detection system to exploit the potential of these features. We then combine the different features and sub-systems through both classifier-level and kernel-level fusion, which contributes to a more robust system. The proposed system is tested on the TRECVID dataset. The resulting Mean Average Precision (MAP) score is much better than the benchmark performance, which shows that our concept detection engine develops a generic model and performs well on both object-type and scene-type concepts.
Keywords: concept detection; visual feature extraction; kernel-based learning; classifier fusion
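
The two fusion levels named in this abstract can be illustrated briefly: kernel-level fusion sums per-feature RBF kernels into a single precomputed-kernel SVM, while classifier-level (late) fusion averages the confidence scores of independently trained detectors. The feature matrices, weights and gamma below are assumptions, not values from the paper.

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def kernel_level_fusion(train_views, test_views, y, weights=None, gamma=0.1):
    # train_views / test_views: lists of feature matrices, one per visual feature type.
    k = len(train_views)
    weights = weights if weights is not None else [1.0 / k] * k
    K_train = sum(w * rbf_kernel(Xtr, Xtr, gamma=gamma)
                  for w, Xtr in zip(weights, train_views))
    K_test = sum(w * rbf_kernel(Xte, Xtr, gamma=gamma)
                 for w, Xtr, Xte in zip(weights, train_views, test_views))
    clf = SVC(kernel="precomputed").fit(K_train, y)
    return clf.decision_function(K_test)      # one confidence per test shot

def classifier_level_fusion(score_lists):
    # Late fusion: average the confidences produced by per-feature detectors.
    return np.mean(np.stack(score_lists, axis=0), axis=0)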