Remote sensing imagery, acquired at high altitude, presents inherent challenges characterized by multiple scales, limited target areas, and intricate backgrounds. These traits often lead to increased miss and false detection rates when applying object recognition algorithms tailored for remote sensing imagery, and they also contribute to inaccuracies in target localization and hinder precise target categorization. This paper addresses these challenges by proposing the YOLO-MFD model (YOLO-MFD: Remote Sensing Image Object Detection with Multi-scale Fusion Dynamic Head). Before presenting our method, we review the prevalent issues in remote sensing imagery analysis, emphasizing the struggle of existing object recognition algorithms to comprehensively capture critical image features amid varying scales and complex backgrounds. To resolve these issues, we introduce a novel approach. First, we propose a lightweight multi-scale module called CEF, which significantly improves the model's ability to capture important image features by merging multi-scale feature information, effectively addressing the missed detections and false alarms that are common in remote sensing imagery. Second, an additional small-target detection head is added, and a residual link is established with the higher-level feature extraction module in the backbone, allowing the model to incorporate shallower information and significantly improving the accuracy of target localization in remotely sensed images. Finally, a dynamic head attention mechanism is introduced, allowing the model to recognize shapes and targets of different sizes with greater flexibility and accuracy, so the precision of object detection is significantly improved. Experimental results show that the YOLO-MFD model improves on the original YOLOv8 model by 6.3%, 3.5%, and 2.5% in Precision, mAP@0.5, and mAP@0.5:0.95, respectively. These results illustrate the clear advantages of the method.
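The abstract does not give the internal design of the CEF module, so the following is only a minimal PyTorch sketch of a generic lightweight multi-scale fusion block of the kind described: parallel depthwise convolutions at several kernel sizes whose outputs are concatenated, projected, and added back residually. The class name, kernel sizes, and layer choices are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MultiScaleFusionBlock(nn.Module):
    """Minimal sketch of a lightweight multi-scale fusion block:
    parallel depthwise convolutions with different kernel sizes,
    concatenated and projected back to the input width."""
    def __init__(self, channels: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
            for k in (1, 3, 5)
        ])
        self.project = nn.Conv2d(3 * channels, channels, 1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.project(torch.cat(feats, dim=1)) + x  # residual fusion

x = torch.randn(1, 64, 80, 80)
print(MultiScaleFusionBlock(64)(x).shape)  # torch.Size([1, 64, 80, 80])
```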
Due to the lack of long-range association and spatial location information, fine details and accurate boundaries of complex clothing images cannot always be obtained by existing deep learning-based methods. This paper presents a convolutional structure with multi-scale fusion to optimize the clothing feature extraction step, together with a self-attention module to capture long-range association information. The structure enables the self-attention mechanism to participate directly in the information exchange through the down-scaling projection operation of the multi-scale framework. In addition, the improved self-attention module introduces the extraction of 2-dimensional relative position information to compensate for its limited ability to extract spatial position features from clothing images. Experimental results on the colorful fashion parsing dataset (CFPD) show that the proposed network structure achieves 53.68% mean intersection over union (mIoU) and performs better on the clothing parsing task.
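As an illustration of the 2-D relative position idea mentioned above, the sketch below adds a learned bias, indexed by the relative (dy, dx) offset of every token pair, to a single-head self-attention over an H x W feature map. It is a generic construction in the spirit of relative position biases used in vision transformers, not the paper's exact module; all names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class RelPos2dSelfAttention(nn.Module):
    """Sketch: single-head self-attention over an H x W token grid with a
    learned 2-D relative position bias (one bias per relative offset)."""
    def __init__(self, dim: int, height: int, width: int):
        super().__init__()
        self.scale = dim ** -0.5
        self.qkv = nn.Linear(dim, 3 * dim)
        # one learnable bias for every possible (dy, dx) offset
        self.bias = nn.Parameter(torch.zeros((2 * height - 1) * (2 * width - 1)))
        ys, xs = torch.meshgrid(torch.arange(height), torch.arange(width), indexing="ij")
        coords = torch.stack([ys.flatten(), xs.flatten()])           # 2 x N
        rel = coords[:, :, None] - coords[:, None, :]                # 2 x N x N offsets
        rel[0] += height - 1
        rel[1] += width - 1
        self.register_buffer("index", rel[0] * (2 * width - 1) + rel[1])  # N x N

    def forward(self, x):                                            # x: B x N x dim
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale + self.bias[self.index]
        return attn.softmax(dim=-1) @ v

tokens = torch.randn(2, 7 * 7, 32)
print(RelPos2dSelfAttention(32, 7, 7)(tokens).shape)  # torch.Size([2, 49, 32])
```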
The demand for the exploration of ocean resources is increasing exponentially, and underwater image data plays a significant role in many research areas. However, the visual quality of underwater images is degraded by two main factors, backscattering and attenuation, so visual enhancement has become an essential step for recovering the required information from the images. Many algorithms have been proposed over the past decade for improving image quality. This paper proposes a single-image enhancement technique that does not require any external datasets. The degraded images are subjected to two main processes, color correction and image fusion. First, the veiling light and the transmission light are estimated to find the color correction required; veiling light refers to unwanted light, whereas transmission light refers to the light required for color correction. These estimates are applied in the scene recovery equation. The color-corrected image is then subjected to a fusion process in which two versions of it are produced by white balance and contrast enhancement. For each version, three weight maps, namely luminance, saliency, and chromaticity, are computed, and the versions are fused using the Laplacian pyramid. The results are compared graphically with the input data using RGB histogram plots. Finally, image quality is measured and tabulated using underwater image quality measures.
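A minimal NumPy/OpenCV sketch of the multi-scale fusion step described above: each enhanced version of the image is decomposed into a Laplacian pyramid, weighted by a Gaussian pyramid of its per-pixel weight map, summed, and collapsed back. The function names, the number of pyramid levels, and the use of a single combined weight map per version are assumptions for illustration, not the paper's exact procedure.

```python
import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1]) for i in range(levels - 1)]
    lp.append(gp[-1])
    return lp

def fuse(inputs, weights, levels=4):
    """Weighted Laplacian-pyramid fusion of several H x W x 3 versions of one image,
    each with an H x W combined weight map."""
    weights = [w / (sum(weights) + 1e-8) for w in weights]        # per-pixel normalisation
    fused = None
    for img, w in zip(inputs, weights):
        lp = laplacian_pyramid(img.astype(np.float32), levels)
        gw = gaussian_pyramid(w.astype(np.float32), levels)
        blended = [l * g[..., None] for l, g in zip(lp, gw)]
        fused = blended if fused is None else [f + b for f, b in zip(fused, blended)]
    out = fused[-1]
    for lev in range(levels - 2, -1, -1):                          # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=fused[lev].shape[1::-1]) + fused[lev]
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.random.randint(0, 256, (256, 256, 3), np.uint8)
versions = [img, 255 - img]                    # stand-ins for the two enhanced versions
maps = [np.random.rand(256, 256), np.random.rand(256, 256)]
print(fuse(versions, maps).shape)              # (256, 256, 3)
```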
To deal with the low location accuracy of existing underwater navigation technologies for autonomous underwater vehicles (AUVs), a distributed fusion algorithm that combines a model analysis method with a multi-scale transformation method is proposed for an AUV-based integrated navigation system. First, integrated navigation system theory and the system error sources are introduced in detail. Second, the navigation system's observation equation on the original scale is decomposed into different scales by the discrete wavelet transform, and noise reduction is performed by setting a wavelet de-noising threshold. Finally, the dynamic equation and the observation equations are fused at different scales by the wavelet transform and a Kalman filter. The results show that the proposed algorithm yields smaller navigation error and higher navigation accuracy.
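The wavelet thresholding step can be sketched as follows with PyWavelets; the wavelet family, decomposition level, and the universal soft threshold are common defaults chosen here for illustration, not values taken from the paper.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=3):
    """Sketch of the pre-filtering step: decompose the observation sequence with a
    discrete wavelet transform, soft-threshold the detail coefficients, and
    reconstruct a de-noised observation before it enters the Kalman filter."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745         # robust noise estimate
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))      # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

noisy = np.sin(np.linspace(0, 8 * np.pi, 512)) + 0.3 * np.random.randn(512)
print(wavelet_denoise(noisy).shape)  # (512,)
```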
Accurate prediction of the state of charge (SOC) of a battery energy storage system (BESS) is critical for its safety and lifespan in electric vehicles. To overcome the imbalance of existing methods between multi-scale feature fusion and global feature extraction, this paper introduces a novel multi-scale fusion (MSF) model based on the gated recurrent unit (GRU), specifically designed for complex multi-step SOC prediction in practical BESSs. Pearson correlation analysis is first employed to identify SOC-related parameters. These parameters are then input into a multi-layer GRU for point-wise feature extraction. Concurrently, the parameters undergo patching before entering a dual-stage multi-layer GRU, enabling the model to capture nuanced information across varying time intervals. Finally, by means of adaptive weight fusion and a fully connected network, multi-step SOC predictions are produced. Following extensive validation over multiple days, the proposed model is shown to achieve an absolute error of less than 1.5% in real-time SOC prediction.
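A compact PyTorch sketch of the two-branch idea, one GRU over the raw point-wise sequence and one over a patched view, fused by an adaptive weight before a fully connected multi-step head, is shown below. Hidden sizes, patch length, horizon, and the scalar fusion weight are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiScaleGRUSOC(nn.Module):
    """Sketch: one GRU reads the raw (point-wise) sequence, a second GRU reads a
    patched/coarser view, and their summaries are fused by a learned weight
    before a fully connected head emits multi-step SOC predictions."""
    def __init__(self, n_feat: int, hidden: int = 64, patch: int = 4, horizon: int = 6):
        super().__init__()
        self.patch = patch
        self.fine = nn.GRU(n_feat, hidden, num_layers=2, batch_first=True)
        self.coarse = nn.GRU(n_feat * patch, hidden, num_layers=2, batch_first=True)
        self.alpha = nn.Parameter(torch.tensor(0.5))       # adaptive fusion weight
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x):                                   # x: B x T x n_feat
        _, h_fine = self.fine(x)
        b, t, f = x.shape
        patched = x[:, : t - t % self.patch].reshape(b, -1, f * self.patch)
        _, h_coarse = self.coarse(patched)
        fused = self.alpha * h_fine[-1] + (1 - self.alpha) * h_coarse[-1]
        return self.head(fused)                             # B x horizon SOC steps

x = torch.randn(8, 48, 5)                                   # 8 batteries, 48 samples, 5 signals
print(MultiScaleGRUSOC(n_feat=5)(x).shape)                  # torch.Size([8, 6])
```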
The perception module of advanced driver assistance systems plays a vital role. Perception schemes often use a single sensor for data processing and environmental perception, or fuse the detection-level results of several sensors. This paper proposes a multi-scale, multi-sensor data fusion strategy at the front end of perception and implements a multi-sensor disparity map generation scheme. A binocular stereo vision sensor composed of two cameras and a light detection and ranging (LiDAR) sensor are used to jointly perceive the environment, and a multi-scale fusion scheme is employed to improve the accuracy of the disparity map. This solution not only retains the dense perception of the binocular stereo vision sensor but also benefits from the perception accuracy of the LiDAR sensor. Experiments demonstrate that the proposed multi-scale, multi-sensor scheme significantly improves disparity map estimation.
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn image features with fewer parameters, which results in a lightweight and computationally inexpensive network; the dilated depthwise convolution in the DDSC layer also effectively expands the field of view of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that processes the input feature map with a parallel multi-resolution branch architecture to extract multi-scale feature information from the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining network performance compared to the MobileNetV1 baseline.
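The depthwise dilated separable convolution at the heart of the first subnetwork can be sketched in PyTorch as below: a per-channel dilated 3x3 convolution followed by a 1x1 pointwise convolution. The specific dilation rate and the BatchNorm/ReLU arrangement are assumptions for illustration, not the paper's exact layer.

```python
import torch
import torch.nn as nn

class DepthwiseDilatedSeparableConv(nn.Module):
    """Sketch of a depthwise *dilated* separable convolution: a per-channel
    dilated 3x3 convolution (enlarged receptive field, few parameters)
    followed by a 1x1 pointwise convolution that mixes channels."""
    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 56, 56)
print(DepthwiseDilatedSeparableConv(32, 64)(x).shape)  # torch.Size([1, 64, 56, 56])
```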
Chinese Clinical Named Entity Recognition (CNER) is a crucial step in extracting medical information and is of great significance in promoting medical informatization. However, CNER poses challenges due to the specificity of clinical terminology, the complexity of Chinese text semantics, and the uncertainty of Chinese entity boundaries. To address these issues, we propose an improved CNER model based on multi-feature fusion and multi-scale local context enhancement. The model fuses multi-feature representations of pinyin, radical, part of speech (POS), and word boundary with BERT deep contextual representations to enhance the semantic representation of the text for more effective entity recognition. Furthermore, to address the model's limitation of focusing only on global features, we incorporate convolutional neural networks (CNNs) with various kernel sizes to capture multi-scale local features of the text and enhance the model's comprehension of it. Finally, we integrate the global and local features and employ a multi-head attention mechanism (MHA) to strengthen the model's focus on characters associated with medical entities, boosting its performance. We obtained F1 scores of 92.74% and 87.80% on the two CNER benchmark datasets, CCKS2017 and CCKS2019, respectively. The results demonstrate that our model outperforms the latest CNER models, showcasing its outstanding overall performance. The proposed CNER model therefore has important application value in constructing clinical medical knowledge graphs and intelligent Q&A systems.
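The multi-scale local-context part of the model can be illustrated with the following PyTorch sketch, which runs parallel 1-D convolutions with different kernel sizes over (for example) BERT token representations and projects the concatenation back to the original width. Kernel sizes and the projection layer are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MultiScaleLocalContext(nn.Module):
    """Sketch of multi-scale local context: parallel 1-D convolutions with
    different kernel sizes over token representations, concatenated so that
    each character carries local context at several scales."""
    def __init__(self, dim: int, kernels=(3, 5, 7)):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv1d(dim, dim, k, padding=k // 2) for k in kernels
        ])
        self.project = nn.Linear(dim * len(kernels), dim)

    def forward(self, h):                         # h: B x T x dim (e.g. BERT outputs)
        x = h.transpose(1, 2)                     # B x dim x T for Conv1d
        local = torch.cat([torch.relu(c(x)) for c in self.convs], dim=1)
        return self.project(local.transpose(1, 2))    # back to B x T x dim

h = torch.randn(2, 50, 768)
print(MultiScaleLocalContext(768)(h).shape)       # torch.Size([2, 50, 768])
```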
Gliomas have the highest mortality rate of all brain tumors. Correctly classifying the glioma risk period can help doctors make reasonable treatment plans and improve patients' survival rates. This paper proposes a hierarchical multi-scale attention feature fusion medical image classification network (HMAC-Net), which effectively combines global and local features. The network framework consists of three parallel layers: a global feature extraction layer, a local feature extraction layer, and a multi-scale feature fusion layer. A linear sparse attention mechanism is designed in the global feature extraction layer to reduce information redundancy. In the local feature extraction layer, a bilateral local attention mechanism is introduced to improve the extraction of relevant information between adjacent slices. In the multi-scale feature fusion layer, a channel fusion block combining a convolutional attention mechanism and a residual inverse multi-layer perceptron is proposed to prevent gradient vanishing and network degradation and to improve feature representation capability. A double-branch iterative multi-scale classification block is used to improve classification performance. On the brain glioma risk grading dataset, ablation and comparison experiments show that the proposed HMAC-Net performs best in both the qualitative analysis of heat maps and the quantitative analysis of evaluation indicators. On a skin cancer classification dataset, generalization experiments show that HMAC-Net generalizes well.
The degradation of optical remote sensing images by atmospheric haze is a significant obstacle that impedes their effective use across various domains. Dehazing has therefore become a pivotal step of image preprocessing: it improves the quality of remote sensing imagery and, in turn, the accuracy of target identification. Conventional defogging techniques based on simplistic atmospheric degradation models have proven inadequate for mitigating non-uniform haze in remotely sensed images. In response to this challenge, a novel UNet Residual Attention Network (URA-Net) is proposed. The approach is an end-to-end convolutional neural network distinguished by its multi-scale dense feature fusion clusters and gated jump connections. The core of the methodology is local feature fusion within dense residual clusters, which extracts pertinent features from both preceding and current local data as the context demands. The gated structures propagate these features to the decoder, yielding superior haze removal. Extensive experiments substantiate the efficacy of URA-Net, demonstrating its superior performance compared with existing methods on established remote sensing defogging datasets. On the RICE-1 dataset, URA-Net achieves a peak signal-to-noise ratio (PSNR) of 29.07 dB, surpassing the Dark Channel Prior (DCP) by 11.17 dB, the All-in-One Network for Dehazing (AOD) by 7.82 dB, the Optimal Transmission Map and Adaptive Atmospheric Light for Dehazing (OTM-AAL) by 5.37 dB, Unsupervised Single Image Dehazing (USID) by 8.0 dB, and Superpixel-based Remote Sensing Image Dehazing (SRD) by 8.5 dB. Notably, on the SateHaze1k dataset URA-Net attains the best overall performance, yielding defogged images of consistent visual quality. The work thus contributes to the advancement of remote sensing technology, providing a robust and efficient solution for alleviating the adverse effects of haze on image quality.
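A gated skip (jump) connection of the kind described can be sketched as follows: a small convolution produces a per-pixel gate that balances encoder detail against decoder features. This is a generic formulation chosen for illustration; URA-Net's actual gate design is not specified in the abstract.

```python
import torch
import torch.nn as nn

class GatedSkipConnection(nn.Module):
    """Sketch of a gated skip connection: a learned gate decides, pixel by pixel,
    how much encoder detail to pass to the decoder alongside its own features."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, encoder_feat, decoder_feat):
        g = self.gate(torch.cat([encoder_feat, decoder_feat], dim=1))
        return g * encoder_feat + (1 - g) * decoder_feat

e = torch.randn(1, 64, 128, 128)
d = torch.randn(1, 64, 128, 128)
print(GatedSkipConnection(64)(e, d).shape)   # torch.Size([1, 64, 128, 128])
```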
The laser powder bed fusion (LPBF) process can integrally form geometrically complex, high-performance metallic parts and has attracted much interest, especially in the mold industry. LPBF makes it possible to design and produce complex conformal cooling channel systems in molds, so LPBF-processed tool steels have attracted increasing attention. The complex thermal history of the LPBF process makes their microstructural characteristics and properties different from those of conventionally manufactured tool steels. This paper provides an overview of LPBF-processed tool steels by describing the physical phenomena, the microstructural characteristics, and the mechanical/thermal properties, including tensile properties, wear resistance, and thermal properties. The microstructural characteristics are presented from a multiscale perspective, ranging from densification, meso-structure, microstructure, and substructure in grains to nanoprecipitates. Finally, a summary of these tool steels and their challenges and outlooks is given.
The widespread availability of digital multimedia data has led to new challenges in digital forensics. Traditional source camera identification algorithms usually rely on various traces left by the capturing process. However, these traces have become increasingly difficult to extract due to the wide availability of image processing algorithms. Convolutional neural network (CNN)-based algorithms have demonstrated good discriminative capability for different brands and even different models of camera devices. However, their performance is not ideal when distinguishing between individual devices of the same model, because cameras of the same model typically use the same optical lens, image sensor, and image processing algorithms, resulting in minimal overall differences. In this paper, we propose a camera forensics algorithm based on multi-scale feature fusion to address these issues. The proposed algorithm extracts different local features from feature maps at different scales and then fuses them to obtain a comprehensive feature representation, which is fed into a subsequent camera fingerprint classification network. Building upon the Swin-T network, we utilize Transformer blocks and Graph Convolutional Network (GCN) modules to fuse multi-scale features from different stages of the backbone network. Furthermore, we conduct experiments on established datasets to demonstrate the feasibility and effectiveness of the proposed approach.
In order to extract richer feature information of ship targets from sea clutter and to address the high-dimensional data problem, a method termed multi-scale fusion kernel sparse preserving projection (MSFKSPP) based on the maximum margin criterion (MMC) is proposed for recognizing the class of ship targets using the high-resolution range profile (HRRP). Multi-scale fusion is introduced to capture the local and detailed information in small-scale features and the global and contour information in large-scale features, helping to extract the edge information from sea clutter and further improving target recognition accuracy. The proposed method maximally preserves the multi-scale fusion sparsity of the data and maximizes class separability in the reduced dimensionality through a reproducing kernel Hilbert space. Experimental results on measured radar data show that the proposed method can effectively extract the features of ship targets from sea clutter, further reduce the feature dimensionality, and improve target recognition performance.
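For background, the maximum margin criterion referred to above is commonly written as the trace difference between the between-class and within-class scatter matrices; the generic linear form is shown below (the paper's objective additionally involves kernelization and multi-scale sparse preservation, which are not reproduced here):

```latex
% Maximum margin criterion: maximize the trace difference of scatter matrices
J(\mathbf{W}) = \operatorname{tr}\!\left[\mathbf{W}^{\mathsf{T}}\bigl(\mathbf{S}_b-\mathbf{S}_w\bigr)\mathbf{W}\right],
\qquad
\mathbf{S}_b=\sum_{c} n_c\,(\boldsymbol{\mu}_c-\boldsymbol{\mu})(\boldsymbol{\mu}_c-\boldsymbol{\mu})^{\mathsf{T}},
\qquad
\mathbf{S}_w=\sum_{c}\sum_{i\in c}(\mathbf{x}_i-\boldsymbol{\mu}_c)(\mathbf{x}_i-\boldsymbol{\mu}_c)^{\mathsf{T}} .
```

The projection W is chosen to maximize J(W), so that classes are pushed apart while each class stays compact after dimensionality reduction.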
The large number of nanopores and complex fracture structures in shale reservoirs results in multi-scale flow of oil. As shale oil reservoirs are developed, the permeability of the multi-scale media changes due to stress sensitivity, which plays a crucial role in controlling pressure propagation and oil flow. This paper proposes a multi-scale coupled flow mathematical model of matrix nanopores, induced fractures, and hydraulic fractures. The model considers the micro-scale effects of shale oil flow in fractal nanopores, the fractal induced-fracture network, and the stress sensitivity of the multi-scale media. We solve the model iteratively using the Pedrosa transform, a semi-analytic segmented Bessel function approach, and the Laplace transform. The results of the model agree well with the numerical solution and with field production data, confirming its high accuracy. The influence of stress sensitivity on permeability, pressure, and production is also analyzed. It is shown that permeability and production decrease significantly when induced fractures are weakly supported, and that closed induced fractures can inhibit interporosity flow in the stimulated reservoir volume (SRV). The sensitivity analysis shows that hydraulic fractures benefit early production, whereas induced fractures in the SRV benefit middle-term production. The model characterizes the multi-scale flow of shale oil and provides theoretical guidance for rapid productivity evaluation.
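For reference, the stress-sensitivity law and the Pedrosa substitution mentioned above are usually written in the following generic form (our notation, not necessarily the paper's; the exact dimensionless definitions may differ):

```latex
% Stress-sensitive permeability with permeability modulus \gamma:
k = k_i\, e^{-\gamma\,(p_i - p)} ,
% Pedrosa substitution introducing the perturbation variable \xi_D:
p_D = -\frac{1}{\gamma_D}\,\ln\!\bigl(1-\gamma_D\,\xi_D\bigr) .
```

Here gamma is the permeability modulus and xi_D is the perturbation variable that approximately linearizes the stress-sensitive flow equation before the Laplace transform is applied.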
The hands and face are the most important parts for expressing sign language morphemes in sign language videos. However, we find that existing Continuous Sign Language Recognition (CSLR) methods either lack the mining of hand and face information in their visual backbones or use expensive and time-consuming external extractors to explore this information. In addition, signs have different lengths, whereas previous CSLR methods typically use a fixed-length window to segment the video to capture sequential features and then perform global temporal modeling, which disturbs the perception of complete signs. In this study, we propose a Multi-Scale Context-Aware network (MSCA-Net) to solve these problems. MSCA-Net contains two main modules: (1) Multi-Scale Motion Attention (MSMA), which uses the differences among frames to perceive information of the hands and face at multiple spatial scales, replacing heavy feature extractors; and (2) Multi-Scale Temporal Modeling (MSTM), which explores crucial temporal information in the sign language video at different temporal scales. We conduct extensive experiments on three widely used sign language datasets, RWTH-PHOENIX-Weather-2014, RWTH-PHOENIX-Weather-2014T, and CSL-Daily. The proposed MSCA-Net achieves state-of-the-art performance, demonstrating the effectiveness of our approach.
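The frame-difference motion cue behind MSMA can be sketched in PyTorch as follows: temporal differences highlight the moving hands and face, and a small convolution turns them into a spatial gate that re-weights the visual features. The gating form and layer sizes are illustrative assumptions, not the paper's module.

```python
import torch
import torch.nn as nn

class FrameDifferenceAttention(nn.Module):
    """Sketch of motion attention from frame differences: moving regions
    (hands, face) are emphasized by a spatial gate derived from temporal
    differences, avoiding external keypoint or face extractors."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, frames):                       # frames: B x T x C x H x W
        diff = frames[:, 1:] - frames[:, :-1]        # temporal differences
        diff = torch.cat([diff, diff[:, -1:]], 1)    # pad to keep T steps
        b, t, c, h, w = frames.shape
        g = self.gate(diff.reshape(b * t, c, h, w))  # B*T x 1 x H x W gate
        return (frames.reshape(b * t, c, h, w) * (1 + g)).reshape(b, t, c, h, w)

x = torch.randn(2, 16, 3, 56, 56)
print(FrameDifferenceAttention(3)(x).shape)   # torch.Size([2, 16, 3, 56, 56])
```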
Computer-aided diagnosis of pneumonia based on deep learning is a research hotspot. However, existing methods do not sufficiently extract features of different sizes and different directions from lung X-ray images. A pneumonia classification model based on multi-scale directional feature enhancement, MSD-Net, is proposed in this paper. The main innovations are as follows. First, the Multi-scale Residual Feature Extraction Module (MRFEM) is designed to extract multi-scale features effectively; it uses dilated convolutions with different expansion rates to enlarge the receptive field. Second, the Multi-scale Directional Feature Perception Module (MDFPM) is designed, which uses a three-branch structure of convolutions of different sizes to transmit directional features layer by layer and focuses on the target region to enhance the feature information. Third, the Axial Compression Former Module (ACFM) is designed to perform global calculations and enhance the perception of global features in different directions. To verify the effectiveness of MSD-Net, comparative and ablation experiments are carried out. On the COVID-19 RADIOGRAPHY DATABASE, the Accuracy, Recall, Precision, F1 Score, and Specificity of MSD-Net are 97.76%, 95.57%, 95.52%, 95.52%, and 98.51%, respectively. On the chest X-ray dataset, the Accuracy, Recall, Precision, F1 Score, and Specificity of MSD-Net are 97.78%, 95.22%, 96.49%, 95.58%, and 98.11%, respectively. The model effectively improves the accuracy of lung image recognition and provides an important clinical reference for computer-aided diagnosis of pneumonia.
Rock fracture mechanisms can be inferred from moment tensors (MT) inverted from microseismic events. However, MT can only be inverted for events whose waveforms are acquired across a network of sensors, which is limiting for underground mines where the microseismic stations often lack azimuthal coverage. Thus, there is a need for a method to invert fracture mechanisms from waveforms acquired by a sparse microseismic network. Here, we present a novel multi-scale framework to classify whether a rock crack contracts or dilates based on a single waveform. The framework consists of a deep learning model that is initially trained on more than 2,400,000 manually labelled field-scale seismic and microseismic waveforms acquired across 692 stations. Transfer learning is then applied to fine-tune the model on more than 300,000 MT-labelled lab-scale acoustic emission waveforms from 39 individual experiments with different sensor layouts, loading, and rock types in training. The optimal model achieves over 86% F-score on unseen waveforms at both the lab and field scales and outperforms existing empirical methods in classifying rock fracture mechanisms monitored by a sparse microseismic network. This facilitates rapid assessment of, and early warning against, rock engineering hazards such as induced earthquakes and rock bursts.
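The transfer-learning step can be illustrated with the minimal PyTorch sketch below: a stand-in 1-D CNN plays the role of the model pretrained on field-scale waveforms, its feature layers are frozen, and only a new two-class head (contraction vs. dilation) is fine-tuned on the lab-scale data. The architecture here is hypothetical; the paper's actual network is not described in the abstract.

```python
import torch
import torch.nn as nn

# Stand-in feature extractor for the model pretrained on field-scale waveforms.
features = nn.Sequential(
    nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(16, 32, 7, padding=3), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
)
head = nn.Linear(32, 2)                      # new head: contraction vs. dilation

for p in features.parameters():              # freeze the field-scale features
    p.requires_grad = False                  # only the new head will be updated

waveform = torch.randn(8, 1, 4096)           # batch of single-station waveforms
logits = head(features(waveform).flatten(1))
print(logits.shape)                          # torch.Size([8, 2])
```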
Accurately identifying small objects in high-resolution aerial images is a complex and crucial task in the field of small object detection on unmanned aerial vehicles (UAVs). The task is challenging due to variations in UAV flight altitude, differences in object scales, and factors such as flight speed and motion blur. To enhance the detection of small targets in drone aerial imagery, we propose an enhanced You Only Look Once version 7 (YOLOv7) algorithm based on multi-scale spatial context. We build the MSC-YOLO model, which incorporates an additional prediction head, denoted P2, to improve adaptability to small objects. We replace conventional downsampling with a Spatial-to-Depth Convolutional Combination (CSPDC) module to mitigate the loss of fine feature details of small objects. Furthermore, we propose a Spatial Context Pyramid with Multi-Scale Attention (SCPMA) module, which captures the spatial and channel-dependent features of small targets across multiple scales, enhancing the perception of spatial contextual features and the use of multi-scale feature information. On the Visdrone2023 and UAVDT datasets, MSC-YOLO achieves remarkable results, outperforming the baseline YOLOv7 by 3.0% in mean average precision (mAP). The proposed MSC-YOLO algorithm demonstrates satisfactory performance in detecting small targets in UAV aerial photography, providing strong support for practical applications.
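The space-to-depth replacement for strided downsampling can be sketched as below: a PixelUnshuffle rearranges each 2x2 spatial block into channels, so no pixels are discarded, and a 1x1 convolution then mixes the channels. This is a generic sketch of the idea, not the CSPDC module itself; channel sizes are assumptions.

```python
import torch
import torch.nn as nn

class SpaceToDepthConv(nn.Module):
    """Sketch of a space-to-depth downsampling step: instead of a strided
    convolution, rearrange each 2x2 spatial block into channels and then
    mix channels with a 1x1 convolution."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.unshuffle = nn.PixelUnshuffle(2)          # H,W -> H/2,W/2; C -> 4C
        self.conv = nn.Conv2d(4 * in_ch, out_ch, 1)

    def forward(self, x):
        return self.conv(self.unshuffle(x))

x = torch.randn(1, 64, 160, 160)
print(SpaceToDepthConv(64, 128)(x).shape)   # torch.Size([1, 128, 80, 80])
```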
Multi-scale systems remain a classical scientific problem in fluid dynamics, biology, and other fields. In the present study, a scheme of multi-scale physics-informed neural networks (msPINNs) is proposed to solve boundary layer flow at high Reynolds numbers without any data. The flow is divided into several regions with different scales based on Prandtl's boundary layer theory, and each region is solved with governing equations at its own scale. The method of matched asymptotic expansions is used to make the flow field continuous across the regions. Flow over a semi-infinite flat plate at a high Reynolds number is considered a multi-scale problem because the boundary layer scale is much smaller than the outer flow scale. The results are compared with reference numerical solutions, which show that the msPINNs can solve the multi-scale boundary layer problem in high Reynolds number flows. This scheme can be extended to more multi-scale problems in the future.
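For background, the two scales and the matching condition for the flat-plate problem take roughly the following nondimensional textbook form (this is standard Prandtl/matched-asymptotics notation, not the paper's exact loss formulation):

```latex
% Outer (Euler) region, O(1) scale:
u\,u_x + v\,u_y = -p_x ,
% Inner (boundary-layer) region, with Y = \sqrt{Re}\, y and V = \sqrt{Re}\, v:
u\,u_x + V\,u_Y = -p_x + u_{YY} ,
% Matching condition between the two expansions:
\lim_{Y\to\infty} u_{\text{inner}}(x,Y) \;=\; \lim_{y\to 0} u_{\text{outer}}(x,y).
```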
Thermal conductivity is one of the most significant criteria of three-dimensional carbon fiber-reinforced SiC matrix composites (3D C/SiC). Representative volume element (RVE) models at the microscale, void/matrix, and mesoscale proposed in this work are used to simulate the thermal conductivity behavior of the 3D C/SiC composites. An entirely new process is introduced to weave the preform with a three-dimensional orthogonal architecture. A 3D steady-state analysis step is created to assess the thermal conductivity of the composites by applying periodic temperature boundary conditions. Three RVE models, with cuboid, hexagonal, and random fiber distributions, are developed to comparatively study the influence of the fiber packing pattern on the thermal conductivities at the microscale. In addition, the effect of void morphology on the thermal conductivity of the matrix is analyzed with the void/matrix models. The prediction results at the mesoscale correspond closely to the experimental values. The effect of porosity and fiber volume fraction on the thermal conductivities is also considered. The multi-scale models presented in this paper can be used to predict the thermal conductivity behavior of other composites with complex structures.
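The periodic temperature boundary condition and the extraction of the effective conductivity tensor from an RVE are commonly expressed as follows (a generic homogenization statement with symbols of our choosing, not the paper's exact formulation):

```latex
% Periodic temperature boundary condition imposing a macroscopic gradient G:
T(\mathbf{x}+\mathbf{L}_j) - T(\mathbf{x}) = \mathbf{G}\cdot\mathbf{L}_j ,
% Volume-averaged heat flux over the RVE:
\langle q_i \rangle = \frac{1}{V}\int_V q_i \,\mathrm{d}V ,
% Effective conductivity tensor identified from the averaged response:
\langle q_i \rangle = -\,k^{\mathrm{eff}}_{ij}\, G_j .
```

In practice the tensor is recovered column by column by imposing a unit macroscopic gradient along each axis in turn and averaging the resulting heat flux.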