The emergence of new media in various fields has continuously strengthened the social aspect of social media. Netizens tend to express emotions in social interactions, and many people even use satire, metaphor, and other techniques to express negative emotions, so it is necessary to detect sarcasm in social comment data. For sarcasm detection, the more data modalities referenced, the better the experimental results. This paper studies sarcasm detection technology based on image-text fusion data. To effectively utilize the features of each modality, a feature reconstruction output algorithm is proposed. Based on the attention mechanism, this algorithm learns the low-rank features of the other modality through cross-modal attention and reconstructs the feature vectors of the corresponding modality through weighted averaging. When only the image modality of the dataset is used, the preprocessed data performs outstandingly in the reconstruction output model, with an accuracy of 87.6%; when only the text modality is used, the reconstruction output model is optimal, with an accuracy of 85.2%. To improve feature fusion between modalities for effective classification, a weight-adaptive learning algorithm is used: a neural network combined with an attention mechanism calculates the attention weight of each modality, achieving weight-adaptive learning with an accuracy of 87.9%. Extensive experiments on a benchmark dataset demonstrate the superiority of the proposed model. Funding: National Key Research and Development Program of China (No. 2022YFC3302103).
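To make the weight-adaptive fusion step concrete, the following is a minimal PyTorch sketch, not the paper's implementation: a small attention network scores each modality's feature vector, the scores are softmax-normalized into fusion weights, and the fused representation is their weighted sum. Module names and the 256-dimensional feature size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class WeightAdaptiveFusion(nn.Module):
    """Learns a scalar attention weight per modality and fuses by weighted sum."""
    def __init__(self, dim: int = 256):
        super().__init__()
        # One small scorer shared across modalities maps each feature to a logit.
        self.scorer = nn.Sequential(nn.Linear(dim, dim // 4), nn.Tanh(), nn.Linear(dim // 4, 1))

    def forward(self, text_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        stacked = torch.stack([text_feat, image_feat], dim=1)   # (batch, 2, dim)
        weights = torch.softmax(self.scorer(stacked), dim=1)    # (batch, 2, 1), sums to 1
        return (weights * stacked).sum(dim=1)                   # (batch, dim) fused feature

fusion = WeightAdaptiveFusion(dim=256)
fused = fusion(torch.randn(8, 256), torch.randn(8, 256))
print(fused.shape)  # torch.Size([8, 256])
```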
Accurately identifying small objects in high-resolution aerial images is a complex and crucial task in the field of small object detection on unmanned aerial vehicles (UAVs). The task is challenging due to variations in UAV flight altitude, differences in object scales, and factors such as flight speed and motion blur. To enhance the detection of small targets in drone aerial imagery, we propose an enhanced You Only Look Once version 7 (YOLOv7) algorithm based on multi-scale spatial context. We build the MSC-YOLO model, which incorporates an additional prediction head, denoted P2, to improve adaptability to small objects. We replace conventional downsampling with a Spatial-to-Depth Convolutional Combination (CSPDC) module to mitigate the loss of intricate feature details of small objects. Furthermore, we propose a Spatial Context Pyramid with Multi-Scale Attention (SCPMA) module, which captures the spatial and channel-dependent features of small targets across multiple scales, enhancing the perception of spatial context and the utilization of multi-scale feature information. On the VisDrone2023 and UAVDT datasets, MSC-YOLO achieves remarkable results, outperforming the baseline YOLOv7 by 3.0% in mean average precision (mAP). The proposed MSC-YOLO algorithm demonstrates satisfactory performance in detecting small targets in UAV aerial photography, providing strong support for practical applications. Funding: Key Research and Development Program of Hainan Province (Grant Nos. ZDYF2023GXJS163, ZDYF2024GXJS014); National Natural Science Foundation of China (Grant Nos. 62162022, 62162024); Major Science and Technology Project of Hainan Province (Grant No. ZDKJ2020012); Hainan Provincial Natural Science Foundation of China (Grant No. 620MS021); Youth Foundation Project of Hainan Natural Science Foundation (Grant No. 621QN211).
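The space-to-depth idea behind the CSPDC downsampling replacement can be sketched briefly. The paper's exact CSPDC module is not reproduced here; this minimal PyTorch block shows the generic mechanism it is assumed to build on: each 2×2 spatial block is rearranged into channels, so a subsequent non-strided convolution downsamples without discarding pixels.

```python
import torch
import torch.nn as nn

class SpaceToDepthConv(nn.Module):
    """Space-to-depth followed by a non-strided conv: a 2x downsampling that
    keeps all pixel information in channels instead of discarding it."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch * 4, out_ch, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split each 2x2 spatial block into 4 channel groups: (B, 4C, H/2, W/2)
        x = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                       x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)
        return self.conv(x)

m = SpaceToDepthConv(64, 128)
print(m(torch.randn(1, 64, 80, 80)).shape)  # torch.Size([1, 128, 40, 40])
```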
Owing to the rapid increase in the interchange of text information through internet networks, the reliability and security of digital content have become a major research problem. Tampering detection, content authentication, and integrity verification of digital content interchanged through the Internet are used to address these concerns in information and communication technologies. This study develops an Automated Data Mining based Digital Text Document Watermarking for Tampering Attack Detection (ADMDTW-TAD) technique for content transmitted via the Internet. The data mining (DM) concept is exploited in the presented ADMDTW-TAD technique to identify the document's appropriate characteristics for embedding larger watermark information. The presented secure watermarking scheme is intended to transmit digital text documents over the Internet securely: once the watermark is embedded with no damage to the original document, the document is shared with the destination, and the watermark extraction process is then performed to recover the original document securely. The experimental validation of the ADMDTW-TAD technique is carried out under varying attack volumes, and the outcomes are inspected in terms of different measures. The simulation values indicate that the ADMDTW-TAD technique offers improved performance over other models. Funding: Deanship of Scientific Research at Princess Nourah bint Abdulrahman University, Research Groups Program (Grant No. RGP-1443-0051).
In recent years, efficiently and accurately identifying multi-modal fake news has become more challenging. First, multi-modal data provides more evidence, but not all of it is equally important. Second, social structure information has proven effective in fake news detection, and how to incorporate it while reducing noise is critical. Existing approaches fail to handle these problems. This paper proposes a multi-modal fake news detection framework based on Text-modal Dominance and fusing Multiple Multi-modal Cues (TD-MMC), which utilizes three valuable multi-modal clues: text-modal importance, text-image complementarity, and text-image inconsistency. TD-MMC is dominated by textual content and assisted by image information, and uses social network information to enhance the text representation. To reduce interference from irrelevant social structure information, a unidirectional cross-modal attention mechanism selectively learns social structure features. A second cross-modal attention mechanism obtains text-image cross-modal features while retaining the textual features, reducing the loss of important information. In addition, TD-MMC employs a new multi-modal loss to improve the model's generalization ability. Extensive experiments on two public real-world English and Chinese datasets show that the proposed model outperforms state-of-the-art methods on classification evaluation metrics. Funding: General Project of Philosophy and Social Science of Heilongjiang Province (Grant No. 20SHB080).
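A unidirectional cross-modal attention step of the kind described, where text queries attend to social-structure features so information flows only into the text stream, can be sketched as follows in PyTorch; dimensions and names are illustrative assumptions, not TD-MMC's actual code.

```python
import torch
import torch.nn as nn

class UnidirectionalCrossAttention(nn.Module):
    """Text features act as queries; social-structure features act only as
    keys/values, so information flows in one direction into the text stream."""
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text: torch.Tensor, social: torch.Tensor) -> torch.Tensor:
        enhanced, _ = self.attn(query=text, key=social, value=social)
        return self.norm(text + enhanced)  # residual keeps original text features

xattn = UnidirectionalCrossAttention()
out = xattn(torch.randn(2, 32, 256), torch.randn(2, 16, 256))
print(out.shape)  # torch.Size([2, 32, 256])
```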
Background: Document images such as statistical reports and scientific journals are widely used in information technology, and accurate detection of table areas in document images is an essential prerequisite for tasks such as information extraction. However, because of the diversity in the shapes and sizes of tables, existing table detection methods adapted from general object detection algorithms have not yet achieved satisfactory results, and incorrect detections can lead to the loss of critical information. Methods: We propose a novel end-to-end trainable deep network combined with a self-supervised pretrained transformer for feature extraction to minimize incorrect detections. To better handle table areas of different shapes and sizes, we add a dual-branch context content attention module (DCCAM) on high-dimensional features to extract context content information, thereby enhancing the network's ability to learn shape features. For feature fusion at different scales, we replace the original 3×3 convolution with a multilayer residual module, which provides enhanced gradient flow to improve feature representation and extraction capability. Results: We evaluated our method on public document datasets and compared it with previous methods, achieving state-of-the-art results in terms of evaluation metrics such as recall and F1-score.
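Replacing a single 3×3 convolution with a residual multi-layer stack can be illustrated with a generic stand-in; the paper's exact module is not public, so the sketch below only shows the underlying pattern, in which an identity shortcut improves gradient flow.

```python
import torch
import torch.nn as nn

class MultilayerResidualBlock(nn.Module):
    """Illustrative stand-in for replacing a single 3x3 conv with a stack of
    convs joined by a residual connection to improve gradient flow."""
    def __init__(self, ch: int):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, bias=False), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1, bias=False), nn.BatchNorm2d(ch),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(x + self.branch(x))  # identity path carries the gradient

block = MultilayerResidualBlock(64)
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```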
Synthetic Aperture Radar (SAR) image target detection has widespread applications in both military and civil domains. However, SAR images pose challenges due to strong scattering, indistinct edge contours, multi-scale representation, sparsity, and severe background interference, which leave existing target detection methods with low accuracy. To address this, this paper proposes a multi-scale fusion framework (Swin-PAFF) for SAR target detection that utilizes the global context perception capability of the Transformer and the multi-layer feature fusion learning ability of the feature pyramid network (FPN). First, to tackle the inadequate perception of image context in SAR target detection, we propose an end-to-end SAR target detection network with a Transformer structure as the backbone. Furthermore, we enhance the ability of the Swin Transformer to acquire contextual features and cross-information through a Swin-CC backbone network model that combines a Spatial Depthwise Pooling (SDP) module with the self-attention mechanism. Finally, we design a cross-layer fusion neck module (PAFF) that better handles multi-scale variations and complex situations such as sparsity and background interference. The devised approach yields an AP@0.5:0.95 of 91.3% on the HRSID dataset, an improvement of 8% over the baseline. Funding: National Key Research and Development Program of China (Grant No. 2021YFC2801001); Natural Science Foundation of Shanghai (Grant No. 21ZR1426500); 2022 Graduate Top Innovative Talents Training Program at Shanghai Maritime University (Grant No. 2022YBR004).
Detecting and recognizing text in natural scene images is challenging because image quality depends on the conditions under which the image is captured, such as viewing angle, blurring, and sensor noise. In this paper, a prototype for text detection and recognition from natural scene images is proposed. The prototype is based on the Raspberry Pi 4 and a Universal Serial Bus (USB) camera, and it embeds our text detection and recognition model, developed in Python. The model combines the deep-learning-based Efficient and Accurate Scene Text Detector (EAST) for text localization and detection with Tesseract-OCR as the Optical Character Recognition (OCR) engine for text recognition. The prototype is controlled from a computer over a wireless connection through the Virtual Network Computing (VNC) tool. Experimental results show that the recognition rate for images captured through the prototype's camera can reach 99.75% with low computational complexity. Furthermore, the prototype outperforms the Tesseract software in recognition rate, and it matches the recognition rate of the EasyOCR software on the Raspberry Pi 4 board while reducing execution time by an average of 89%. Funding: Deanship of Scientific Research at Jouf University, Kingdom of Saudi Arabia (Grant No. DSR-2021-02-0392).
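The detection-then-recognition pipeline can be condensed into a short script using OpenCV's DNN module and pytesseract. This is a simplified sketch, not the prototype's code: the model file path is a local assumption, and the rotated-box decoding of EAST's geometry map is reduced to axis-aligned boxes for brevity.

```python
import cv2
import numpy as np
import pytesseract

# Load the frozen EAST model (path is a local assumption) and a test image,
# resized so the coordinates below line up with the 320x320 network input.
net = cv2.dnn.readNet("frozen_east_text_detection.pb")
image = cv2.resize(cv2.imread("scene.jpg"), (320, 320))

blob = cv2.dnn.blobFromImage(image, 1.0, (320, 320),
                             (123.68, 116.78, 103.94), swapRB=True, crop=False)
net.setInput(blob)
scores, geometry = net.forward(["feature_fusion/Conv_7/Sigmoid",
                                "feature_fusion/concat_3"])

# Decode axis-aligned boxes from the score/geometry maps; each cell covers a
# 4x4 pixel region, and the per-cell rotation angle is ignored for brevity.
rects, confs = [], []
for y in range(scores.shape[2]):
    for x in range(scores.shape[3]):
        score = float(scores[0, 0, y, x])
        if score < 0.5:
            continue
        top, right, bottom, left = (float(geometry[0, i, y, x]) for i in range(4))
        cx, cy = x * 4.0, y * 4.0
        rects.append([int(cx - left), int(cy - top), int(left + right), int(top + bottom)])
        confs.append(score)

# Non-maximum suppression, then OCR on each surviving region.
for i in np.array(cv2.dnn.NMSBoxes(rects, confs, 0.5, 0.4)).flatten():
    bx, by, bw, bh = rects[i]
    roi = image[max(by, 0):by + bh, max(bx, 0):bx + bw]
    print(pytesseract.image_to_string(roi, config="--psm 7").strip())
```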
With the continuous development and utilization of marine resources, underwater target detection has gradually become a popular research topic in the field of underwater robot operations. However, the complex underwater environment makes it difficult for detection algorithms to combine environmental semantic information with the semantic information of targets at different scales. In this paper, a cascade model based on the UGC-YOLO network structure with high detection accuracy is proposed, with the YOLOv3 convolutional neural network as the baseline structure. By fusing global semantic information between two residual stages in the parallel structure of the feature extraction network, the perception of underwater targets is improved and the detection rate of hard-to-detect underwater objects is raised. Furthermore, deformable convolution is applied to capture long-range semantic dependencies, and PPM pooling is introduced in the highest layer of the network to aggregate semantic information. Finally, a multi-scale weighted fusion approach is presented for learning semantic information at different scales. Experiments on an underwater test dataset demonstrate that the proposed algorithm can detect aquatic targets in complex degraded underwater images; compared with the baseline network, the Common Objects in Context (COCO) evaluation metric is improved by 4.34%. Funding: National Natural Science Foundation of China (No. 62271199); Natural Science Foundation of Hunan Province, China (No. 2020JJ5170); Scientific Research Fund of Hunan Provincial Education Department (No. 18C0299).
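The multi-scale weighted fusion step can be sketched generically: resize the feature maps from each scale to a common resolution and combine them with learnable, normalized non-negative weights. The PyTorch block below is an illustrative stand-in for this idea, not UGC-YOLO's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedMultiScaleFusion(nn.Module):
    """Fuses feature maps from several scales with learnable non-negative
    weights, after resizing everything to the finest resolution."""
    def __init__(self, num_inputs: int = 3):
        super().__init__()
        self.w = nn.Parameter(torch.ones(num_inputs))

    def forward(self, feats):
        target = feats[0].shape[-2:]
        feats = [F.interpolate(f, size=target, mode="nearest") for f in feats]
        w = F.relu(self.w)
        w = w / (w.sum() + 1e-4)  # normalized fusion weights
        return sum(wi * fi for wi, fi in zip(w, feats))

fuse = WeightedMultiScaleFusion(3)
out = fuse([torch.randn(1, 256, 52, 52), torch.randn(1, 256, 26, 26),
            torch.randn(1, 256, 13, 13)])
print(out.shape)  # torch.Size([1, 256, 52, 52])
```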
Text perception is crucial for understanding the semantics of outdoor scenes, making it a key requirement for building intelligent systems for driver assistance or autonomous driving. Text information in car-mounted videos can assist drivers in making decisions. However, car-mounted video text images pose challenges such as complex backgrounds, small fonts, and the need for real-time detection. We propose a robust Car-mounted Video Text Detector (CVTD), a lightweight text detection model that uses ResNet18 for feature extraction and can detect text of arbitrary shapes. The model efficiently extracts global text positions through Coordinate Attention Threshold Activation (CATA) and enhances its representation capability by stacking two Feature Pyramid Enhancement Fusion Modules (FPEFM), strengthening feature representation and integrating local text features with global position information. The enhanced feature maps, acted upon by Text Activation Maps (TAM), effectively distinguish text foreground from non-text regions. Additionally, we collected and annotated a dataset of 2200 images of Car-mounted Video Text (CVT) under various road conditions for training and evaluating the model. We further tested the model on four other challenging public natural scene text detection benchmarks, demonstrating its strong generalization ability and real-time detection speed. The model holds potential for practical applications in real-world scenarios. Funding: National Natural Science Foundation of China (Grant Number 61971078), which provided domain expertise and computational power; Chongqing Municipal Education Commission Grants for Major Science and Technology Project (KJZD-M202301901); Science and Technology Research Project of Jiangxi Department of Education (GJJ2201049).
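The CATA module builds on coordinate attention; as a point of reference, the sketch below implements the standard coordinate attention block (Hou et al., 2021) that such a module typically extends, with the paper-specific threshold activation omitted. Channel sizes are illustrative.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Standard coordinate attention: encodes position by pooling along each
    spatial axis separately, then reweights channels per row and per column."""
    def __init__(self, ch: int, reduction: int = 8):
        super().__init__()
        mid = max(8, ch // reduction)
        self.conv1 = nn.Conv2d(ch, mid, 1)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, ch, 1)
        self.conv_w = nn.Conv2d(mid, ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        xh = x.mean(dim=3, keepdim=True)                       # (b, c, h, 1)
        xw = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (b, c, w, 1)
        y = self.act(self.conv1(torch.cat([xh, xw], dim=2)))   # shared transform
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.conv_h(yh))                          # (b, c, h, 1)
        aw = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))      # (b, c, 1, w)
        return x * ah * aw

ca = CoordinateAttention(64)
print(ca(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```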
Scene text detection is an important task in computer vision. In this paper, we present YOLOv5 Scene Text (YOLOv5ST), an optimized architecture based on YOLOv5 v6.0 tailored for fast scene text detection. Our primary goal is to enhance inference speed without sacrificing significant detection accuracy, enabling robust performance on resource-constrained devices such as drones, closed-circuit television cameras, and other embedded systems. To achieve this, we propose key modifications to the network architecture that lighten the original backbone and improve feature aggregation: replacing standard convolution with depth-wise convolution, adopting the C2 sequence module in place of C3, employing Spatial Pyramid Pooling Global (SPPG) instead of Spatial Pyramid Pooling Fast (SPPF), and integrating a Bi-directional Feature Pyramid Network (BiFPN) into the neck. Experimental results demonstrate a remarkable 26% improvement in inference speed compared to the baseline, with only marginal reductions of 1.6% and 4.2% in mean average precision (mAP) at intersection over union (IoU) thresholds of 0.5 and 0.5:0.95, respectively. Our work represents a significant advancement in scene text detection, striking a balance between speed and accuracy and making it well-suited for performance-constrained environments. Funding: National Natural Science Foundation of P.R. China (42075130); Nari Technology Co., Ltd. (4561655965).
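The depth-wise replacement for standard convolution is easy to show concretely. The sketch below is a generic depthwise-separable block, assumed to be representative of the paper's substitution rather than its exact layer, and prints a parameter-count comparison against a standard 3×3 convolution.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Drop-in replacement for a standard 3x3 conv: a per-channel (depth-wise)
    conv followed by a 1x1 point-wise conv, cutting parameters and FLOPs."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()  # YOLOv5's default activation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Parameter comparison against a standard conv of the same shape.
std = nn.Conv2d(128, 256, 3, padding=1, bias=False)
dws = DepthwiseSeparableConv(128, 256)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), count(dws))  # 294912 vs 34432 parameters
```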
Esophageal cancer ranks among the most prevalent malignant tumors globally, primarily due to its highly aggressive nature and poor survival rates. According to the 2020 global cancer statistics, there were approximately 604,000 new cases of esophageal cancer, resulting in 544,000 deaths, and the 5-year survival rate hovers around a mere 15%-25%. Notably, distinct variations exist in the risk factors associated with the two primary histological types, influencing their worldwide incidence and distribution. Squamous cell carcinoma displays a high incidence in specific regions, such as certain areas in China, where it meets the cost-effectiveness criteria for widespread endoscopy-based early diagnosis within the local population. Conversely, adenocarcinoma (EAC) represents the most common histological subtype of esophageal cancer in Europe and the United States. The effectiveness of early detection for EAC, particularly cases arising from Barrett's esophagus (BE), remains a subject of controversy. The variations in how early-stage esophageal carcinoma is treated in different regions are largely due to differing rates of early-stage diagnosis. In areas with higher incidence, such as China and Japan, early diagnosis is more common, which has driven the advancement of endoscopic methods as definitive treatments; these techniques have demonstrated remarkable efficacy with minimal complications while preserving esophageal functionality. Early screening, prompt diagnosis, and timely treatment are key strategies that can significantly lower both the occurrence and death rates associated with esophageal cancer. Funding: Shandong Province Medical and Health Science and Technology Development Plan Project, No. 202203030713; Clinical Research Funding of Shandong Medical Association-Qilu Specialization, No. YXH2022ZX02031; Science and Technology Program of Yantai Affiliated Hospital of Binzhou Medical University, No. YTFY2022KYQD06.
As the risks associated with air turbulence are intensified by climate change and the growth of the aviation industry, it has become imperative to monitor and mitigate these threats to ensure civil aviation safety. The eddy dissipation rate (EDR) is the established standard metric for quantifying turbulence in civil aviation. This study explores a universally applicable symbolic classification approach based on genetic programming that detects turbulence anomalies from quick access recorder (QAR) data, framing atmospheric turbulence detection as an anomaly detection problem. Comparative evaluations demonstrate that this approach performs on par with direct EDR calculation methods in identifying turbulence events, and comparisons with alternative machine learning techniques indicate that it is the strongest of the methodologies evaluated. In summary, symbolic classification via genetic programming enables accurate turbulence detection from QAR data, comparable to established EDR approaches and surpassing the machine learning algorithms tested. This finding highlights the potential of integrating symbolic classifiers into turbulence monitoring systems to enhance civil aviation safety amidst rising environmental and operational hazards. Funding: Meteorological Soft Science Project (Grant No. 2023ZZXM29); Natural Science Fund Project of Tianjin, China (Grant No. 21JCYBJC00740); Key Research and Development-Social Development Program of Jiangsu Province, China (Grant No. BE2021685).
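As an illustration of symbolic classification by genetic programming, the sketch below uses the gplearn library on synthetic stand-in features; the feature definitions, labels, and thresholds are assumptions for demonstration only, not the study's QAR feature set.

```python
import numpy as np
from gplearn.genetic import SymbolicClassifier

rng = np.random.default_rng(0)
# Stand-in features, e.g. vertical-acceleration variance and airspeed change.
X = rng.normal(size=(500, 2))
y = (X[:, 0] ** 2 + 0.5 * X[:, 1] > 1.0).astype(int)  # synthetic "turbulence" label

# Evolve a symbolic expression that separates the two classes; the parsimony
# coefficient penalizes overly long formulas.
clf = SymbolicClassifier(population_size=1000, generations=20,
                         function_set=("add", "sub", "mul", "div"),
                         parsimony_coefficient=0.01, random_state=0)
clf.fit(X[:400], y[:400])
print("held-out accuracy:", clf.score(X[400:], y[400:]))
print("evolved expression:", clf._program)  # gplearn's fitted program, human-readable
```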
Time-series data provide important information in many fields, and their processing and analysis have been the focus of much research. However, detecting anomalies is very difficult due to data imbalance, temporal dependence, and noise. Therefore, methodologies for data augmentation and for converting time-series data into images for analysis have been studied. This paper proposes a fault detection model that uses time-series data augmentation and transformation to address the problems of data imbalance, temporal dependence, and robustness to noise. Data augmentation is performed by adding Gaussian noise, with the noise level set to 0.002, to maximize the generalization performance of the model. In addition, the Markov Transition Field (MTF) method is used to visualize the dynamic transitions of the data while converting the time series into images; this enables the identification of patterns in the data and helps capture its sequential dependencies. For anomaly detection, the PatchCore model is applied and shows excellent performance, with detected anomaly areas represented as heat maps: by applying an anomaly map to the original image, the areas where anomalies occur can be captured. The performance evaluation shows that both F1-score and accuracy are high when the time-series data are converted into images. Additionally, processing the data as images rather than as time series significantly reduces both the data size and the training time. The proposed method can provide an important springboard for research on anomaly detection using time-series data, and it helps make the analysis of complex patterns in data lightweight. Funding: This research was financially supported by the Ministry of Trade, Industry, and Energy (MOTIE), Korea, under the "Project for Research and Development with Middle Markets Enterprises and DNA (Data, Network, AI) Universities" (AI-based Safety Assessment and Management System for Concrete Structures) (Reference Number P0024559), supervised by the Korea Institute for Advancement of Technology (KIAT).
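The augmentation-and-transformation front end can be sketched in a few lines: Gaussian noise at the stated level (0.002) is added to a series, and both the original and augmented series are converted to images with the Markov Transition Field from the pyts library. The signal itself is synthetic stand-in data.

```python
import numpy as np
from pyts.image import MarkovTransitionField

rng = np.random.default_rng(42)
series = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * rng.normal(size=256)

# Augmentation: additive Gaussian noise at the level stated above (0.002).
augmented = series + rng.normal(loc=0.0, scale=0.002, size=series.shape)

# Convert both series to images; MTF expects shape (n_samples, n_timestamps).
mtf = MarkovTransitionField(image_size=64, n_bins=8)
images = mtf.fit_transform(np.stack([series, augmented]))
print(images.shape)  # (2, 64, 64) -- ready for an image-based detector such as PatchCore
```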
For underwater robots performing target detection tasks, color distortion and the uneven quality of underwater images cause great difficulties in the model's feature extraction process, leading to issues such as false detections, missed detections, and poor accuracy. Therefore, this paper proposes the CER-YOLOv7 (CBAM-EIoU-RepVGG-YOLOv7) underwater target detection algorithm. To improve the algorithm's ability to retain valid features from both spatial and channel perspectives during feature extraction, a Convolutional Block Attention Module (CBAM) is added to the backbone network. A Reparameterization Visual Geometry Group (RepVGG) module is inserted into the backbone to improve training and inference capabilities. The Efficient Intersection over Union (EIoU) loss is used as the localization loss function, which reduces the false detection and missed detection rates of the algorithm. Experimental results on the URPC (Underwater Robot Prototype Competition) dataset show that the mAP (mean Average Precision) of CER-YOLOv7 is 86.1%, a 2.2% improvement over YOLOv7. The feasibility and validity of CER-YOLOv7 are demonstrated through ablation and comparison experiments, showing that it is well-suited for underwater target detection. Funding: Scientific Research Fund of Liaoning Provincial Education Department (No. JGLX2021030): Research on Vision-Based Intelligent Perception Technology for the Survival of Benthic Organisms.
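The EIoU localization loss has a standard published form: an IoU term plus separate penalties on center distance and on width/height differences, each normalized by the smallest enclosing box. The sketch below implements that standard formulation; it is assumed, not verified, to match the paper's exact variant.

```python
import torch

def eiou_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Efficient IoU loss for boxes in (x1, y1, x2, y2) format."""
    # Intersection and union
    inter_w = (torch.min(pred[:, 2], target[:, 2]) - torch.max(pred[:, 0], target[:, 0])).clamp(0)
    inter_h = (torch.min(pred[:, 3], target[:, 3]) - torch.max(pred[:, 1], target[:, 1])).clamp(0)
    inter = inter_w * inter_h
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Smallest enclosing box
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])

    # Center-distance penalty, normalized by the enclosing box diagonal
    dx = (pred[:, 0] + pred[:, 2] - target[:, 0] - target[:, 2]) / 2
    dy = (pred[:, 1] + pred[:, 3] - target[:, 1] - target[:, 3]) / 2
    center_term = (dx ** 2 + dy ** 2) / (cw ** 2 + ch ** 2 + eps)

    # Width/height penalties, each normalized by the enclosing box side
    wp, hp = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    wt, ht = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    wh_term = (wp - wt) ** 2 / (cw ** 2 + eps) + (hp - ht) ** 2 / (ch ** 2 + eps)

    return (1 - iou + center_term + wh_term).mean()

loss = eiou_loss(torch.tensor([[0., 0., 10., 10.]]), torch.tensor([[1., 1., 11., 11.]]))
print(loss)
```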
As social networks become increasingly complex, contemporary fake news often includes textual descriptions of events accompanied by corresponding images or videos, and fake news in multiple modalities is more likely to create a misleading perception among users. While early research primarily focused on text-based features for fake news detection, there has been relatively limited exploration of learning shared representations in multimodal (text and visual) contexts. To address these limitations, this paper introduces a multimodal model for detecting fake news that relies on similarity reasoning and adversarial networks. The model employs Bidirectional Encoder Representations from Transformers (BERT) and a Text Convolutional Neural Network (Text-CNN) to extract textual features, and the pre-trained Visual Geometry Group 19-layer network (VGG-19) to extract visual features. The model then establishes similarity representations between the textual features extracted by Text-CNN and the visual features through similarity learning and reasoning. Finally, these features are fused to enhance the accuracy of fake news detection, and adversarial networks are employed to investigate the relationship between fake news and events. The proposed model is validated on publicly available multimodal datasets from Weibo and Twitter. Experimental results demonstrate that the approach achieves superior performance on Twitter, with an accuracy of 86%, surpassing traditional unimodal models and existing multimodal models, while on the Weibo dataset it surpasses the benchmark models across multiple metrics. The application of similarity reasoning and adversarial networks significantly enhances multimodal fake news detection. However, the current research is limited to the fusion of text and image modalities; future work should integrate features from additional modalities to comprehensively represent the multifaceted information of fake news. Funding: National Natural Science Foundation of China (No. 62302540); Open Foundation of Henan Key Laboratory of Cyberspace Situation Awareness (No. HNTS2022020); Natural Science Foundation of Henan Province Youth Science Fund Project (No. 232300420422); Natural Science Foundation of Zhongyuan University of Technology (No. K2023QN018).
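The textual branch and the similarity step can be sketched compactly. The block below is an illustrative PyTorch stand-in, not the paper's model: a small Text-CNN produces text features, a random tensor stands in for projected VGG-19 visual features, and a cosine-similarity score is concatenated into the fused representation.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """A small Text-CNN over word embeddings, as in the textual branch above."""
    def __init__(self, vocab: int = 5000, emb: int = 128, out: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        # Parallel convolutions over 3-, 4-, and 5-gram windows.
        self.convs = nn.ModuleList(nn.Conv1d(emb, 100, k) for k in (3, 4, 5))
        self.fc = nn.Linear(300, out)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:  # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)                 # (batch, emb, seq_len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))               # (batch, out)

text_net = TextCNN()
text_feat = text_net(torch.randint(0, 5000, (4, 50)))
image_feat = torch.randn(4, 256)  # stand-in for projected VGG-19 features
similarity = torch.cosine_similarity(text_feat, image_feat, dim=1)
fused = torch.cat([text_feat, image_feat, similarity.unsqueeze(1)], dim=1)
print(fused.shape)  # torch.Size([4, 513])
```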