Journal Articles
34 articles found
Disparity estimation for multi-scale multi-sensor fusion
1
Authors: SUN Guoliang, PEI Shanshan, LONG Qian, ZHENG Sifa, YANG Rui. Journal of Systems Engineering and Electronics, SCIE CSCD, 2024, Issue 2, pp. 259-274 (16 pages)
The perception module of advanced driver assistance systems plays a vital role. Perception schemes often use a single sensor for data processing and environmental perception, or adopt the information processing results of various sensors for fusion at the detection layer. This paper proposes a multi-scale and multi-sensor data fusion strategy at the front end of perception and accomplishes a multi-sensor fused disparity map generation scheme. A binocular stereo vision sensor composed of two cameras and a light detection and ranging (LiDAR) sensor are used to jointly perceive the environment, and a multi-scale fusion scheme is employed to improve the accuracy of the disparity map. This solution not only has the advantage of the dense perception of binocular stereo vision sensors but also takes into account the perception accuracy of LiDAR sensors. Experiments demonstrate that the multi-scale multi-sensor scheme proposed in this paper significantly improves disparity map estimation.
Keywords: stereo vision; light detection and ranging (LiDAR); multi-sensor fusion; multi-scale fusion; disparity map
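As a rough illustration of combining dense stereo disparity with sparse but accurate LiDAR measurements (not the authors' multi-scale pipeline), the following Python sketch assumes rectified 8-bit grayscale image pairs, a LiDAR depth map already projected into the left camera frame, and known focal length and baseline; it relies on OpenCV's semi-global matching.

```python
import cv2
import numpy as np

def fused_disparity(left_gray, right_gray, lidar_depth, focal_px, baseline_m):
    """Blend a dense stereo disparity map with sparse LiDAR-derived disparity.

    lidar_depth: depth in meters projected into the left image (0 where no return).
    """
    # Dense but noisy disparity from semi-global block matching (fixed-point, x16).
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    stereo_disp = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Sparse but accurate disparity from LiDAR: d = f * B / Z.
    lidar_disp = np.zeros_like(stereo_disp)
    valid = lidar_depth > 0
    lidar_disp[valid] = focal_px * baseline_m / lidar_depth[valid]

    # Trust LiDAR where it has returns, stereo elsewhere.
    return np.where(valid, lidar_disp, stereo_disp)
```

A multi-scale variant in the spirit of the paper would repeat such a blend across an image pyramid rather than at a single resolution.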
A Lightweight Convolutional Neural Network with Hierarchical Multi-Scale Feature Fusion for Image Classification
2
Authors: Adama Dembele, Ronald Waweru Mwangi, Ananda Omutokoh Kube. Journal of Computer and Communications, 2024, Issue 2, pp. 173-200 (28 pages)
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 employs a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn imaging features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, the depthwise dilated convolution in the DDSC layer effectively expands the field of view of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map in order to extract the multi-scale feature information of the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining the network performance compared to the MobileNetV1 baseline.
Keywords: MobileNet; image classification; lightweight convolutional neural network; depthwise dilated separable convolution; hierarchical multi-scale feature fusion
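A minimal PyTorch sketch of a depthwise dilated separable convolution block of the kind the abstract describes; the channel counts, dilation rate, and BatchNorm/ReLU placement are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DepthwiseDilatedSeparableConv(nn.Module):
    """Dilated depthwise 3x3 followed by a pointwise 1x1: a low-parameter block
    whose dilation enlarges the receptive field without extra weights."""

    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Example: y = DepthwiseDilatedSeparableConv(32, 64)(torch.randn(1, 32, 56, 56))
```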
Ship recognition based on HRRP via multi-scale sparse preserving method
3
Authors: YANG Xueling, ZHANG Gong, SONG Hu. Journal of Systems Engineering and Electronics, SCIE CSCD, 2024, Issue 3, pp. 599-608 (10 pages)
In order to extract richer feature information of ship targets from sea clutter and to address the high-dimensional data problem, a method termed multi-scale fusion kernel sparse preserving projection (MSFKSPP) based on the maximum margin criterion (MMC) is proposed for recognizing the class of ship targets using the high-resolution range profile (HRRP). Multi-scale fusion is introduced to capture the local and detailed information in small-scale features and the global and contour information in large-scale features, helping to extract the edge information from sea clutter and further improving the target recognition accuracy. The proposed method maximally preserves the multi-scale fusion sparsity of the data and maximizes the class separability in the reduced dimensionality through the reproducing kernel Hilbert space. Experimental results on measured radar data show that the proposed method can effectively extract the features of ship targets from sea clutter, further reduce the feature dimensionality, and improve target recognition performance.
Keywords: ship target recognition; high-resolution range profile (HRRP); multi-scale fusion kernel sparse preserving projection (MSFKSPP); feature extraction; dimensionality reduction
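For reference, the maximum margin criterion mentioned in the abstract is commonly written as below (a standard textbook formulation, not transcribed from the paper), where S_b and S_w are the between-class and within-class scatter matrices and W is the projection being sought.

```latex
% Maximum margin criterion (standard form): choose the projection W that
% maximizes between-class scatter minus within-class scatter.
J(W) = \operatorname{tr}\left( W^{\top} (S_b - S_w) W \right), \qquad
S_b = \sum_{c=1}^{C} n_c (\mu_c - \mu)(\mu_c - \mu)^{\top}, \qquad
S_w = \sum_{c=1}^{C} \sum_{x_i \in \mathcal{C}_c} (x_i - \mu_c)(x_i - \mu_c)^{\top}
```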
Clothing Parsing Based on Multi-Scale Fusion and Improved Self-Attention Mechanism
4
Authors: 陈诺, 王绍宇, 陆然, 李文萱, 覃志东, 石秀金. Journal of Donghua University (English Edition), CAS, 2023, Issue 6, pp. 661-666 (6 pages)
Due to the lack of long-range association and spatial location information, fine details and accurate boundaries of complex clothing images cannot always be obtained by the existing deep learning-based methods. This paper presents a convolutional structure with multi-scale fusion to optimize the clothing feature extraction step and a self-attention module to capture long-range association information. The structure enables the self-attention mechanism to participate directly in the process of information exchange through the down-scaling projection operation of the multi-scale framework. In addition, the improved self-attention module introduces the extraction of 2-dimensional relative position information to make up for its lack of ability to extract spatial position features from clothing images. Experimental results on the colorful fashion parsing dataset (CFPD) show that the proposed network structure achieves 53.68% mean intersection over union (mIoU) and performs better on the clothing parsing task.
Keywords: clothing parsing; convolutional neural network; multi-scale fusion; self-attention mechanism; vision Transformer
Sub-Regional Infrared-Visible Image Fusion Using Multi-Scale Transformation (cited by 1)
5
Authors: Yexin Liu, Ben Xu, Mengmeng Zhang, Wei Li, Ran Tao. Journal of Beijing Institute of Technology, EI CAS, 2022, Issue 6, pp. 535-550 (16 pages)
Infrared-visible image fusion plays an important role in multi-source data fusion, which has the advantage of integrating useful information from multi-source sensors. However, there are still challenges in target enhancement and visual improvement. To deal with these problems, a sub-regional infrared-visible image fusion method (SRF) is proposed. First, morphology and threshold segmentation are applied to extract targets of interest from infrared images. Second, the infrared background is reconstructed based on the extracted targets and the visible image. Finally, target and background regions are fused using a multi-scale transform. Experimental results are obtained using public data for comparison and evaluation, which demonstrate that the proposed SRF has potential benefits over other methods.
Keywords: image fusion; infrared image; visible image; multi-scale transform
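A simplified Python/OpenCV sketch of the sub-regional idea, assuming registered single-channel 8-bit infrared and visible images: Otsu thresholding plus morphology extracts hot targets, and a soft mask composites the target region over the visible background. The actual SRF method fuses the regions with a multi-scale transform rather than this simple alpha blend.

```python
import cv2
import numpy as np

def sub_regional_fusion(ir, vis):
    """Extract bright IR targets and composite them over the visible image."""
    # 1. Segment salient (hot) targets in the infrared image.
    _, mask = cv2.threshold(ir, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove small speckles
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes

    # 2. Soft blend: target region from IR, background from the visible image.
    alpha = cv2.GaussianBlur(mask.astype(np.float32) / 255.0, (15, 15), 0)
    fused = alpha * ir.astype(np.float32) + (1.0 - alpha) * vis.astype(np.float32)
    return fused.astype(np.uint8)
```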
An infrared and visible image fusion method based upon multi-scale and top-hat transforms (cited by 1)
6
Authors: 何贵青, 张琪琦, 纪佳琪, 董丹丹, 张海曦, 王珺. Chinese Physics B, SCIE EI CAS CSCD, 2018, Issue 11, pp. 340-348 (9 pages)
The high-frequency components in the traditional multi-scale transform method are approximately sparse and can represent different detail information. But in the low-frequency component, the coefficients around the zero value are very few, so the low-frequency image information cannot be sparsely represented. The low-frequency component contains the main energy of the image and depicts its profile, and directly fusing it is not conducive to obtaining a highly accurate fusion result. Therefore, this paper presents an infrared and visible image fusion method combining the multi-scale and top-hat transforms. On one hand, the new top-hat transform can effectively extract the salient features of the low-frequency component. On the other hand, the multi-scale transform can extract high-frequency detailed information at multiple scales and from diverse directions. The combination of the two methods is conducive to acquiring more characteristics and more accurate fusion results. For the low-frequency component, a new type of top-hat transform is used to extract low-frequency features, and then different fusion rules are applied to fuse the low-frequency features and the low-frequency background; for the high-frequency components, the product-of-characteristics method is used to integrate the detailed information. Experimental results show that the proposed algorithm can obtain more detailed information and clearer infrared target fusion results than the traditional multi-scale transform methods. Compared with the state-of-the-art fusion methods based on sparse representation, the proposed algorithm is simple and efficacious, and the time consumption is significantly reduced.
Keywords: infrared and visible image fusion; multi-scale transform; mathematical morphology; top-hat transform
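The top-hat transform used for the low-frequency features is standard mathematical morphology; a brief OpenCV sketch is shown below (the structuring-element shape and size are assumptions, and this is not the paper's modified top-hat transform).

```python
import cv2

def tophat_features(img, ksize=15):
    """White top-hat (original minus opening) isolates small bright structures;
    black top-hat isolates small dark structures; the remainder approximates
    the smooth background."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    bright = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel)    # bright details
    dark = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, kernel)    # dark details
    background = cv2.subtract(img, bright)                      # residual background
    return bright, dark, background
```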
Attention Guided Multi Scale Feature Fusion Network for Automatic Prostate Segmentation
7
Authors: Yuchun Li, Mengxing Huang, Yu Zhang, Zhiming Bai. Computers, Materials & Continua, SCIE EI, 2024, Issue 2, pp. 1649-1668 (20 pages)
The precise and automatic segmentation of prostate magnetic resonance imaging (MRI) images is vital for assisting doctors in diagnosing prostate diseases. In recent years, many advanced methods have been applied to prostate segmentation, but due to the variability caused by prostate diseases, automatic segmentation of the prostate presents significant challenges. In this paper, we propose an attention-guided multi-scale feature fusion network (AGMSF-Net) to segment prostate MRI images. We propose an attention mechanism for extracting multi-scale features and introduce a 3D transformer module during the transition phase from encoder to decoder to enhance global feature representation. In the decoder stage, a feature fusion module is proposed to obtain global context information. We evaluate our model on prostate MRI images acquired from a local hospital. The relative volume difference (RVD) and Dice similarity coefficient (DSC) between the automatic segmentation results and the ground truth were 1.21% and 93.68%, respectively. To quantitatively evaluate prostate volume on MRI, which is clinically significant, we propose this unique AGMSF-Net. The performance evaluation and validation experiments demonstrate the effectiveness of our method in automatic prostate segmentation.
Keywords: prostate segmentation; multi-scale attention; 3D Transformer; feature fusion; MRI
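The two reported metrics can be computed from binary masks as in the following NumPy sketch (the sign convention for RVD is an assumption; implementations differ).

```python
import numpy as np

def dice_and_rvd(pred, gt):
    """Dice similarity coefficient and relative volume difference for binary
    3D masks (1 = prostate voxel), the two metrics quoted in the abstract."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    dsc = 2.0 * intersection / (pred.sum() + gt.sum() + 1e-8)
    rvd = (pred.sum() - gt.sum()) / (gt.sum() + 1e-8)   # signed volume error
    return dsc, rvd
```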
Grasp Detection with Hierarchical Multi-Scale Feature Fusion and Inverted Shuffle Residual
8
Authors: Wenjie Geng, Zhiqiang Cao, Peiyu Guan, Fengshui Jing, Min Tan, Junzhi Yu. Tsinghua Science and Technology, SCIE EI CAS CSCD, 2024, Issue 1, pp. 244-256 (13 pages)
Grasp detection plays a critical role in robot manipulation. Mainstream pixel-wise grasp detection networks with encoder-decoder structure receive much attention due to good accuracy and efficiency. However, they usually transmit only the high-level feature in the encoder to the decoder, and low-level features are neglected. Low-level features contain abundant detail information, and how to fully exploit them remains unsolved. Meanwhile, the channel information in the high-level feature is also not well mined. Inevitably, the performance of grasp detection is degraded. To solve these problems, we propose a grasp detection network with hierarchical multi-scale feature fusion and inverted shuffle residual. Both low-level and high-level features in the encoder are first fused by the designed skip connections with attention module, and the fused information is then propagated to the corresponding layers of the decoder for in-depth feature fusion. Such hierarchical fusion guarantees the quality of grasp prediction. Furthermore, an inverted shuffle residual module is created, where the high-level feature from the encoder is split along the channel dimension and the resulting split features are processed in their respective branches. Through such differentiated processing, more high-dimensional channel information is kept, which enhances the representation ability of the network. Besides, an information enhancement module is added before the encoder to reinforce the input information. The proposed method attains 98.9% and 97.8% image-wise and object-wise accuracy on the Cornell grasping dataset, respectively, and the experimental results verify the effectiveness of the method.
Keywords: grasp detection; hierarchical multi-scale feature fusion; skip connections with attention; inverted shuffle residual
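A toy PyTorch sketch of the split-and-shuffle idea behind the inverted shuffle residual, assuming an even channel count; the real module's branch designs differ, and this only illustrates channel splitting, per-branch processing, concatenation, and channel shuffle.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups=2):
    # Interleave channels across groups so the split branches exchange information.
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w).transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

class SplitBranchResidual(nn.Module):
    """Channels are split in half, handled by separate branches, then
    concatenated, shuffled, and added back to the input."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.branch_a = nn.Conv2d(half, half, 3, padding=1, bias=False)
        self.branch_b = nn.Conv2d(half, half, 1, bias=False)

    def forward(self, x):
        a, b = torch.chunk(x, 2, dim=1)
        out = torch.cat([self.branch_a(a), self.branch_b(b)], dim=1)
        return channel_shuffle(out) + x
```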
Multi-Scale Feature Fusion Model for Bridge Appearance Defect Detection
9
Authors: Rong Pang, Yan Yang, Aiguo Huang, Yan Liu, Peng Zhang, Guangwu Tang. Big Data Mining and Analytics, EI CSCD, 2024, Issue 1, pp. 1-11 (11 pages)
Although the Faster Region-based Convolutional Neural Network (Faster R-CNN) model has obvious advantages in defect recognition, it still cannot overcome challenging problems in bridge defect detection, such as long processing time, small targets, irregular shapes, and strong noise interference. To deal with these issues, this paper proposes a novel multi-scale feature fusion (MFF) model for bridge appearance defect detection. First, the Faster R-CNN model adopts Region of Interest (ROI) pooling, which omits the edge information of the target area, resulting in some missed detections and inaccuracies in both detecting and localizing bridge defects. Therefore, this paper proposes an MFF based on regional feature aggregation (MFF-A), which reduces the missed detection rate of bridge defect detection and improves the positioning accuracy of the target area. Second, the Faster R-CNN model is insensitive to small targets, irregular shapes, and strong noise in bridge defect detection, which results in long training time and low recognition accuracy. Accordingly, a novel lightweight MFF model (MFF-L) for bridge appearance defect detection is proposed, which uses the lightweight network EfficientNetV2 and a feature pyramid network and fuses multi-scale features to shorten training time and improve recognition accuracy. Finally, the effectiveness of the proposed method is evaluated on a bridge defect dataset and a public computational fluid dynamics dataset.
Keywords: defect detection; multi-scale feature fusion (MFF); Region of Interest (ROI) alignment; lightweight network
Multi-Scale Fusion Model Based on Gated Recurrent Unit for Enhancing Prediction Accuracy of State-of-Charge in Battery Energy Storage Systems
10
Authors: Hao Liu, Fengwei Liang, Tianyu Hu, Jichao Hong, Huimin Ma. Journal of Modern Power Systems and Clean Energy, SCIE EI CSCD, 2024, Issue 2, pp. 405-414 (10 pages)
Accurate prediction of the state-of-charge (SOC) of a battery energy storage system (BESS) is critical for its safety and lifespan in electric vehicles. To overcome the imbalance of existing methods between multi-scale feature fusion and global feature extraction, this paper introduces a novel multi-scale fusion (MSF) model based on the gated recurrent unit (GRU), which is specifically designed for complex multi-step SOC prediction in practical BESSs. Pearson correlation analysis is first employed to identify SOC-related parameters. These parameters are then input into a multi-layer GRU for point-wise feature extraction. Concurrently, the parameters undergo patching before entering a dual-stage multi-layer GRU, enabling the model to capture nuanced information across varying time intervals. Ultimately, multi-step SOC predictions are produced by means of adaptive weight fusion and a fully connected network. Following extensive validation over multiple days, it is shown that the proposed model achieves an absolute error of less than 1.5% in real-time SOC prediction.
Keywords: electric vehicle; battery energy storage system (BESS); state-of-charge (SOC) prediction; gated recurrent unit (GRU); multi-scale fusion (MSF)
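An illustrative PyTorch sketch of a dual-branch GRU mirroring the described structure (a point-wise branch plus a patched branch, fused with adaptive weights before a fully connected head); the layer sizes, patch length, and patch-averaging step are assumptions rather than the paper's exact MSF design.

```python
import torch
import torch.nn as nn

class DualScaleGRU(nn.Module):
    """Two GRU branches at different temporal scales, one over raw samples and
    one over non-overlapping patch means, fused by learnable weights."""
    def __init__(self, n_features, hidden=64, patch=10, horizon=5):
        super().__init__()
        self.patch = patch
        self.fine = nn.GRU(n_features, hidden, num_layers=2, batch_first=True)
        self.coarse = nn.GRU(n_features, hidden, num_layers=2, batch_first=True)
        self.w = nn.Parameter(torch.tensor([0.5, 0.5]))  # adaptive fusion weights
        self.head = nn.Linear(hidden, horizon)            # multi-step SOC output

    def forward(self, x):                 # x: (batch, time, n_features)
        _, h_fine = self.fine(x)
        # Coarse branch: average each group of `patch` consecutive samples.
        b, t, f = x.shape
        xc = x[:, : t - t % self.patch].reshape(b, -1, self.patch, f).mean(dim=2)
        _, h_coarse = self.coarse(xc)
        w = torch.softmax(self.w, dim=0)
        fused = w[0] * h_fine[-1] + w[1] * h_coarse[-1]
        return self.head(fused)           # predicted SOC for `horizon` steps
```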
Feature Fusion-Based Deep Learning Network to Recognize Table Tennis Actions
11
Authors: Chih-Ta Yen, Tz-Yun Chen, Un-Hung Chen, Guo-Chang Wang, Zong-Xian Chen. Computers, Materials & Continua, SCIE EI, 2023, Issue 1, pp. 83-99 (17 pages)
A system for classifying four basic table tennis strokes using wearable devices and deep learning networks is proposed in this study. The wearable device consisted of a six-axis sensor, a Raspberry Pi 3, and a power bank. Multiple kernel sizes were used in a convolutional neural network (CNN) to evaluate their performance for extracting features. Moreover, a multi-scale CNN with two kernel sizes was used to perform feature fusion at different scales in a concatenated manner. The CNN achieved recognition of the four table tennis strokes. Experimental data were obtained from 20 research participants who wore sensors on the back of their hands while performing the four table tennis strokes in a laboratory environment. The data were collected to verify the performance of the proposed models for wearable devices. Finally, the sensor and multi-scale CNN designed in this study achieved accuracy and F1 scores of 99.58% and 99.16%, respectively, for the four strokes. The accuracy for five-fold cross-validation was 99.87%. This result also shows that the multi-scale convolutional neural network has better robustness after five-fold cross-validation.
Keywords: wearable devices; deep learning; six-axis sensor; feature fusion; multi-scale convolutional neural networks; action recognition
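A compact PyTorch sketch of a two-kernel-size 1D CNN over six-axis sensor windows with concatenated feature fusion, in the spirit of the described multi-scale CNN; kernel sizes 3 and 7 and the channel widths are assumptions.

```python
import torch
import torch.nn as nn

class TwoKernelCNN(nn.Module):
    """Two parallel 1D conv branches with different kernel sizes over IMU
    windows, concatenated before classifying the four strokes."""
    def __init__(self, n_channels=6, n_classes=4):
        super().__init__()
        self.branch_small = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.branch_large = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (batch, 6, window_length)
        feats = torch.cat([self.branch_small(x), self.branch_large(x)], dim=1)
        return self.classifier(feats.flatten(1))
```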
A dermoscopic image segmentation algorithm based on multi-scale information extraction and feature fusion
12
Authors: 唐嘉男, 孟祥瑞. 湖北民族大学学报(自然科学版) (Journal of Hubei Minzu University, Natural Science Edition), CAS, 2024, Issue 2, pp. 226-232 (7 pages)
To address the limited segmentation accuracy of existing dermoscopic image segmentation techniques, a multi-scale information extraction and feature fusion U-shaped network (MF-UNet) model is proposed. Building on U-Net, batch normalization layers are added after the convolutional layers, and the original skip connections are replaced with a four-level feature fusion module to make full use of semantic and positional information. A multi-scale dilated convolution module and a multi-scale pooling module are added at the end of the feature extraction path to enlarge the receptive field, and a dual-path concatenation upsampling module is used for upsampling to reduce information loss during image restoration. Experiments show that, compared with U-Net, MF-UNet improves the mean intersection over union (MIoU) by 14.32% and the Dice similarity coefficient (DSC) by 13.18%, achieving good results. This work provides a reference for computer-aided diagnosis of skin diseases.
Keywords: semantic segmentation; dermoscopic images; feature fusion; attention mechanism; MF-UNet; deep learning; lesion boundary
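A small PyTorch sketch of a multi-scale dilated convolution block of the kind added at the end of the encoder; the dilation rates and projection layer are assumptions, not the paper's exact module.

```python
import torch
import torch.nn as nn

class MultiScaleDilatedConv(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates, concatenated and
    then projected back, a common way to enlarge the receptive field."""
    def __init__(self, channels, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r, bias=False)
            for r in rates])
        self.project = nn.Conv2d(channels * len(rates), channels, 1, bias=False)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))
```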
A maize silk detection method based on the MF-SSD convolutional neural network (cited by 4)
13
Authors: 朱德利, 林智健. 华南农业大学学报 (Journal of South China Agricultural University), CAS CSCD, Peking University Core, 2020, Issue 6, pp. 109-118 (10 pages)
[Objective] Maize silk is the pollination organ of maize, and its growth and development affect yield. To recognize maize silk accurately and in real time for growth monitoring and yield prediction, a maize silk detection model based on a multi-feature fusion SSD (MF-SSD) convolutional neural network is proposed. [Method] Detection is performed on feature maps: starting from VGG16-SSD, the feature extractor is replaced with MobileNet and a multi-layer feature fusion structure is added, yielding the MF-SSD network. Through network optimization and tuning, three structures (MF-SSD-cut-3, MF-SSD, and MF-SSD-add-3) were tested, and the best-performing one was selected for maize silk detection. On a maize silk image dataset, two data augmentation techniques, random rotation of the original images by 0-180° and horizontal flipping plus translation, were applied to improve training. Experiments were also conducted on whether to use a two-stage training strategy and whether to use focal loss to address sample imbalance, and the corresponding loss curves were compared. [Results] Adding the multi-layer feature fusion structure improved the detection capability and recognition speed of the SSD model. Compared with VGG16-SSD, MF-SSD improved the average precision under the intersection-over-union metric by 7.2%, improved the average recall for small maize silk targets by 19.6%, and increased detection speed by up to 18.7%. In embedded environments with strict storage and runtime requirements, MF-SSD-cut-3 achieved relatively short runtime at a small space cost while still meeting detection requirements; when space and time are not constraints, MF-SSD achieved better detection results. The two-stage training strategy improved convergence speed and model stability, and focal loss effectively resolved the imbalance between positive and negative samples in SSD, making training easier to converge. [Conclusion] The MF-SSD model's ability to detect small targets meets the needs of real-time maize silk detection in agricultural production and can be used for automatic monitoring of maize growth and accurate yield prediction.
Keywords: maize silk; object detection; convolutional neural network; feature fusion; MF-SSD
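The focal loss used to counter the positive/negative sample imbalance is standard; a hedged PyTorch sketch of its binary form with typical alpha and gamma values is given below.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss, FL = -alpha_t * (1 - p_t)^gamma * log(p_t), which
    down-weights easy negatives so the many background anchors in SSD-style
    detectors do not dominate training."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```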
Image Inpainting Technique Incorporating Edge Prior and Attention Mechanism
14
Authors: Jinxian Bai, Yao Fan, Zhiwei Zhao, Lizhi Zheng. Computers, Materials & Continua, SCIE EI, 2024, Issue 1, pp. 999-1025 (27 pages)
Recently, deep learning-based image inpainting methods have made great strides in reconstructing damaged regions. However, these methods often struggle to produce satisfactory results when dealing with missing images with large holes, leading to distortions in structure and blurring of textures. To address these problems, we combine the advantages of transformers and convolutions and propose an image inpainting method that incorporates edge priors and attention mechanisms. The proposed method aims to improve the inpainting of large holes by enhancing the accuracy of structure restoration and the ability to recover texture details. The method divides the inpainting task into two phases: edge prediction and image inpainting. Specifically, in the edge prediction phase, a transformer architecture is designed to combine axial attention with standard self-attention. This design enhances the extraction of global structural features and location awareness while balancing the complexity of self-attention operations, resulting in accurate prediction of the edge structure in the defective region. In the image inpainting phase, a multi-scale fusion attention module is introduced. This module makes full use of multi-level distant features and enhances local pixel continuity, thereby significantly improving the quality of image inpainting. To evaluate the performance of our method, comparative experiments are conducted on several datasets, including CelebA, Places2, and Facade. Quantitative experiments show that our method outperforms the other mainstream methods: it improves peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) by 1.141-3.234 dB and 0.083-0.235, respectively, and reduces learned perceptual image patch similarity (LPIPS) and mean absolute error (MAE) by 0.0347-0.1753 and 0.0104-0.0402, respectively. Qualitative experiments reveal that our method excels at reconstructing images with complete structural information and clear texture details. Furthermore, our model exhibits impressive performance in terms of the number of parameters, memory cost, and testing time.
Keywords: image inpainting; transformer; edge prior; axial attention; multi-scale fusion attention
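The reported PSNR and MAE metrics can be computed as in the following NumPy sketch; 8-bit images are assumed, and MAE is taken on values normalized to [0, 1], one common convention that matches the magnitude of the quoted numbers.

```python
import numpy as np

def psnr(result, reference, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a result and a reference image;
    higher is better (the abstract reports gains of roughly 1-3 dB)."""
    diff = result.astype(np.float64) - reference.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def mae(result, reference, max_val=255.0):
    """Mean absolute error on pixel values normalized to [0, 1]."""
    diff = result.astype(np.float64) - reference.astype(np.float64)
    return np.mean(np.abs(diff)) / max_val
```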
Vehicle color recognition based on smooth modulation neural network with multi-scale feature fusion
15
Authors: Mingdi HU, Long BAI, Jiulun FAN, Sirui ZHAO, Enhong CHEN. Frontiers of Computer Science, SCIE EI CSCD, 2023, Issue 3, pp. 91-102 (12 pages)
Vehicle color recognition (VCR) plays a vital role in intelligent traffic management and criminal investigation assistance. However, the existing vehicle color datasets only cover 13 classes, which cannot meet the current actual demand. Besides, although many efforts have been devoted to VCR, they suffer from the problem of class imbalance in datasets. To address these challenges, in this paper we propose a novel VCR method based on a smooth modulation neural network with multi-scale feature fusion (SMNN-MSFF). Specifically, to construct the benchmark for model training and evaluation, we first present a new VCR dataset with 24 vehicle classes, Vehicle Color-24, consisting of 10091 vehicle images from 100 hours of urban road surveillance video. Then, to tackle the problem of long-tail distribution and improve recognition performance, we propose the SMNN-MSFF model with multi-scale feature fusion and smooth modulation. The former extracts feature information from local to global, and the latter increases the loss of tail-class instances for training with class imbalance. Finally, comprehensive experimental evaluation on Vehicle Color-24 and three previous representative datasets demonstrates that our proposed SMNN-MSFF outperforms state-of-the-art VCR methods. Extensive ablation studies also demonstrate that each module of our method is effective; in particular, the smooth modulation efficiently helps feature learning of the minority or tail classes. Vehicle Color-24 and the code of SMNN-MSFF are publicly available and can be obtained by contacting the authors.
Keywords: vehicle color recognition; benchmark dataset; multi-scale feature fusion; long-tail distribution; improved smooth L1 loss
Highly maneuvering target tracking using multi-parameter fusion Singer model (cited by 2)
16
Authors: Shuyi Jia, Yun Zhang, Guohong Wang. Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2017, Issue 5, pp. 841-850 (10 pages)
An algorithm for highly maneuvering target tracking is proposed to solve the problem of large tracking error caused by strong maneuvers. In this algorithm, a new estimator, named the multi-parameter fusion Singer (MF-Singer) model, is derived from the Singer model and the fuzzy reasoning method by using the radial acceleration and velocity of the target, and is applied to the problem of maneuvering target tracking in strong maneuvering and operating environments. The tracking performance of the MF-Singer model is evaluated and compared with other maneuvering tracking models. It is shown that the MF-Singer model outperforms these algorithms in several examples.
Keywords: maneuvering target; multi-parameter fusion Singer (MF-Singer); fuzzy reasoning; Singer model
Controlling Fusion of Majorana Fermions in One-Dimensional Systems by Zeeman Field
17
Authors: 邵陆兵, 汪子丹, 沈瑞, 盛利, 王伯根, 邢定钰. Chinese Physics Letters, SCIE CAS CSCD, 2017, Issue 6, pp. 96-99 (4 pages)
We propose the realization of Majorana fermions (MFs) on the edges of a two-dimensional topological insulator in proximity to s-wave superconductors and in the presence of a transverse exchange field h. It is shown that a pair of MFs appears, localized at the two junctions, and that a reversal in the direction of h can lead to permutation of the two MFs. With decreasing h, the MF states can either be fused or form one Dirac fermion on the π-junctions, exhibiting a topological phase transition. This characteristic can be used to detect the physical states of MFs when they are transformed into Dirac fermions localized on the π-junction. A condition for decoupling the two MFs is also given.
Keywords: Majorana fermion (MF)
A crowd counting network based on attention mechanism and encoder-decoder structure
18
Authors: 黄友文, 肖贵光, 豆恒. 传感器与微系统 (Transducer and Microsystem Technologies), CSCD, Peking University Core, 2023, Issue 5, pp. 78-82, 86 (6 pages)
To address the impact of background interference and scale variation on counting accuracy in crowd counting, a crowd counting network, CAENet, based on an attention mechanism and an encoder-decoder structure is proposed. With an encoder-decoder backbone, a multi-scale fusion (MF) module is designed on top of a feature pyramid to fuse encoder features carrying semantic information at different scales. A channel attention mechanism is introduced, and an attention module (AM) is built on a separate decoding path; the attention maps it generates are fed back to each stage of the decoder to suppress background interference. The network is trained with stage-wise supervision, and the density map output by the last layer is taken as the final prediction. Tests on multiple public datasets show that the network achieves high accuracy in crowd counting for fixed scenes, with strong robustness and good generalization.
Keywords: crowd counting; background interference; encoder-decoder; multi-scale fusion; channel attention mechanism
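A minimal PyTorch sketch of a squeeze-and-excitation style channel attention block, one common way to realize the channel attention described in the abstract; the reduction ratio is an assumption and this is not CAENet's exact attention module.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Global average pooling followed by a small bottleneck MLP produces
    per-channel weights that can suppress background-dominated channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (batch, C, H, W)
        w = self.fc(self.pool(x).flatten(1))    # (batch, C) attention weights
        return x * w.view(x.size(0), -1, 1, 1)
```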
Modulation recognition network of multi-scale analysis with deep threshold noise elimination
19
Authors: Xiang LI, Yibing LI, Chunrui TANG, Yingsong LI. Frontiers of Information Technology & Electronic Engineering, SCIE EI CSCD, 2023, Issue 5, pp. 742-758 (17 pages)
To improve the accuracy of modulated signal recognition in variable environments and reduce the impact of factors such as the lack of prior knowledge on recognition results, researchers have gradually adopted deep learning techniques to replace traditional modulated signal processing techniques. To address the problem of low recognition accuracy of modulated signals at low signal-to-noise ratios, we have designed a novel modulation recognition network of multi-scale analysis with deep threshold noise elimination to recognize actually collected modulated signals under a symmetric cross-entropy function with label smoothing. The network consists of a denoising encoder with deep adaptive threshold learning and a decoder with multi-scale feature fusion. The two modules are skip-connected to work together to improve the robustness of the overall network. Experimental results show that this method has better recognition accuracy at low signal-to-noise ratios than previous methods. The network demonstrates a flexible self-learning capability for different noise thresholds and the effectiveness of the designed feature fusion module in multi-scale feature acquisition for various modulation types.
Keywords: signal noise elimination; deep adaptive threshold learning network; multi-scale feature fusion; modulation recognition
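One common way to realize deep adaptive threshold denoising is learned soft-thresholding in the style of deep residual shrinkage networks; the PyTorch sketch below is an illustrative stand-in rather than the paper's module, and the way the threshold is predicted from the feature is an assumption.

```python
import torch
import torch.nn as nn

class AdaptiveSoftThreshold(nn.Module):
    """Channel-wise learned soft-thresholding for 1D signal features:
    soft_threshold(x, t) = sign(x) * max(|x| - t, 0), which zeroes
    small, noise-like activations."""
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, x):                                    # x: (batch, C, length)
        mean_abs = self.pool(x.abs()).squeeze(-1)            # (batch, C)
        scale = self.fc(mean_abs)                            # (batch, C) in (0, 1)
        tau = (mean_abs * scale).unsqueeze(-1)               # per-channel thresholds
        return torch.sign(x) * torch.relu(x.abs() - tau)
```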
Nighttime image dehazing using color cast removal and dual path multi-scale fusion strategy (cited by 1)
20
Authors: Bo Wang, Li Hu, Bowen Wei, Zitong Kang, Chongyi Li. Frontiers of Computer Science, SCIE EI CSCD, 2022, Issue 4, pp. 147-159 (13 pages)
Nighttime image dehazing aims to remove the effect of haze from images captured at night, which raises new challenges such as severe color distortion, more complex lighting conditions, and lower contrast. Instead of estimating the transmission map and atmospheric light, which are difficult to acquire accurately at night, we propose a nighttime image dehazing method composed of a color cast removal and a dual-path multi-scale fusion algorithm. We first propose a human visual system (HVS) inspired color correction model, which is effective for removing the color deviation of nighttime hazy images. Then, we propose a dual-path strategy that includes an underexposure path and a contrast enhancement path for multi-scale fusion, where the weight maps are obtained by selecting appropriately exposed areas under Gaussian pyramids. Extensive experiments demonstrate that the visual effect of hazy nighttime images in real-world datasets can be significantly improved by our method regarding contrast, color fidelity, and visibility. In addition, our method outperforms the state-of-the-art methods qualitatively and quantitatively.
Keywords: nighttime image dehazing; color cast removal; dual-path multi-scale fusion