Camouflaged people are highly skilled at concealing themselves by effectively utilizing cover and the surrounding environment. Despite advances in optical detection capabilities through imaging systems, including spectral, polarization, and infrared technologies, there is still a lack of effective real-time methods for accurately detecting small, well-camouflaged people in complex real-world scenes. This study proposes a snapshot multispectral image-based camouflaged-people detection model, multispectral YOLO (MS-YOLO), which utilizes the SPD-Conv and SimAM modules to effectively represent targets and suppress background interference by exploiting spatial-spectral target information. In addition, the study constructs the first real-shot multispectral camouflaged people dataset (MSCPD), which encompasses diverse scenes, target scales, and poses. To minimize information redundancy, MS-YOLO selects an optimal subset of 12 bands with strong feature representation and minimal inter-band correlation as input. In experiments on MSCPD, MS-YOLO achieves a mean Average Precision of 94.31% and real-time detection at 65 frames per second, confirming the effectiveness and efficiency of the method for detecting camouflaged people in typical desert and forest scenes. This approach offers valuable support for improving the perception capabilities of unmanned aerial vehicles in detecting enemy forces and rescuing personnel on the battlefield.
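The band-selection step can be illustrated with a small, hypothetical sketch: greedily keep the highest-scoring bands while rejecting any band too correlated with those already chosen. The scores, the correlation threshold, and the greedy strategy here are illustrative assumptions, not the paper's actual selection criterion.

```python
import numpy as np

def select_bands(score, corr, k, max_corr=0.95):
    """Greedy band-subset selection: favour high feature score,
    reject bands too correlated with those already chosen."""
    order = np.argsort(score)[::-1]  # best-scoring bands first
    chosen = []
    for b in order:
        if all(abs(corr[b, c]) < max_corr for c in chosen):
            chosen.append(int(b))
        if len(chosen) == k:
            break
    return chosen

# toy example: 5 bands, bands 0 and 1 nearly identical
score = np.array([0.9, 0.88, 0.7, 0.6, 0.5])
corr = np.eye(5)
corr[0, 1] = corr[1, 0] = 0.99
print(select_bands(score, corr, 3))  # band 1 skipped: too correlated with band 0
```

A real pipeline would derive `score` from per-band feature statistics and `corr` from the empirical inter-band correlation matrix of the multispectral cube.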
Automatic control technology is the basis of road-construction robot development. Based on the characteristics and functions of construction equipment, the research addresses input-side perception, from positioning acquisition to real-world monitoring, using RTK-GNSS positional perception: coordinates are projected with the Gauss-Krüger projection method and then converted to Cartesian coordinates suited to the construction drawings. The steering control system is the core of the electric-drive unmanned module. Based on an analysis of the composition of the steering system of unmanned engineering vehicles, models of its key components, such as the steering column, torque sensor, and drive motor, are established, a joint simulation model of the unmanned engineering vehicle is built, and the steering controller is designed using the PID method; simulation results show that this control method meets the automatic-steering demands of the construction path. Path planning first defines the construction area with preset values and corrects the steering angle during driving with the PID algorithm, thereby realizing construction-based path planning; results show the method keeps straight-path error within 10 cm and curve error within 20 cm. With the collaboration of the various modules, automatic construction simulations of this robot show that the designed path and control method are effective.
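The steering-angle correction loop described above can be sketched as a minimal discrete PID controller driving a toy first-order plant. The gains, time step, and plant model below are illustrative assumptions, not the paper's tuned controller.

```python
class PID:
    """Minimal discrete PID controller: proportional, accumulated
    integral, and finite-difference derivative terms."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# drive a lateral path error toward zero in a toy first-order plant
pid = PID(kp=1.2, ki=0.1, kd=0.05, dt=0.1)
err = 0.5  # 50 cm initial lateral offset
for _ in range(100):
    err -= 0.1 * pid.step(err)  # plant: correction proportional to command
print(abs(err) < 0.10)  # within the 10 cm straight-path tolerance
```

On the real vehicle the error would come from the RTK-GNSS cross-track deviation and the output would command the steering drive motor.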
Detecting highly-overlapped objects in crowded scenes remains a challenging problem, especially for one-stage detectors. In this paper, we extricate YOLOv4 from this dilemma by fine-tuning its detection scheme, yielding YOLO-CS. Specifically, we give YOLOv4 the power to detect multiple objects in one cell. Central to our method is a carefully designed joint prediction scheme, executed through an assignment of bounding boxes and a joint loss. Equipped with the derived joint-object augmentation (DJA), refined regression loss (RL), and Score-NMS (SN), YOLO-CS achieves detection performance competitive with state-of-the-art detectors on the CrowdHuman and CityPersons benchmarks at little extra time cost. Furthermore, on the widely used general benchmark COCO, YOLO-CS still performs well, indicating its robustness across various scenes.
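Standard greedy non-maximum suppression illustrates why crowded scenes are hard for one-stage detectors: two genuinely distinct but heavily-overlapped people exceed the IoU threshold, and one detection is discarded. The sketch below shows plain greedy NMS, not the paper's Score-NMS; the boxes and scores are made up for illustration.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def greedy_nms(boxes, scores, thr=0.5):
    """Standard greedy NMS: keep the best box, drop neighbours above thr."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order):
        i = order[0]
        keep.append(int(i))
        mask = np.array([iou(boxes[i], boxes[j]) < thr for j in order[1:]],
                        dtype=bool)
        order = order[1:][mask]
    return keep

# two heavily-overlapped "people" plus one isolated person:
# plain NMS suppresses the second overlapped person entirely
boxes = np.array([[0, 0, 10, 20], [2, 0, 12, 20], [50, 0, 60, 20]])
scores = np.array([0.9, 0.85, 0.8])
print(greedy_nms(boxes, scores))  # keeps boxes 0 and 2, drops box 1
```

Crowd-aware variants such as the paper's Score-NMS modify this suppression step rather than removing it.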
The analysis of overcrowded areas is essential for flow monitoring, assembly control, and security. Crowd counting's primary goal is to calculate the population in a given region, which requires real-time analysis of congested scenes for prompt reactionary actions. Crowds are always unpredictable, and the available benchmark datasets vary widely, which limits trained models' performance on unseen test data. In this paper, we propose an end-to-end deep neural network that takes an input image and generates a density map of the crowd scene. The proposed model consists of encoder and decoder networks comprising batch-free normalization layers known as evolving normalization (EvoNorm). This allows our network to generalize to unseen data, because EvoNorm does not use statistics from the training samples. The decoder network uses dilated 2D convolutional layers to provide large receptive fields with fewer parameters, which enables real-time processing and, thanks to the large receptive field, mitigates the density drift problem. Five benchmark datasets are used in this study to assess the proposed model, leading to the conclusion that it outperforms conventional models.
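The receptive-field gain from dilation can be checked with a small calculation: each stride-1 convolution with kernel size k and dilation d adds (k − 1) · d to the receptive field. The particular layer counts and dilation rates below are assumptions for illustration, not the paper's architecture.

```python
def receptive_field(kernels_dilations):
    """Receptive field of a stack of stride-1 convolutions: each layer
    with kernel k and dilation d adds (k - 1) * d to the field."""
    rf = 1
    for k, d in kernels_dilations:
        rf += (k - 1) * d
    return rf

# three 3x3 layers: plain vs. dilated (dilations 1, 2, 4)
print(receptive_field([(3, 1)] * 3))              # 7
print(receptive_field([(3, 1), (3, 2), (3, 4)]))  # 15
```

The dilated stack more than doubles the receptive field with the exact same parameter count, which is the trade-off the decoder exploits.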
Weather is a key factor affecting the control of air traffic. Accurate recognition and classification of similar weather scenes in the terminal area is helpful for rapid decision-making in air traffic flow management. Current research mostly uses traditional machine learning methods to extract features of weather scenes and clustering algorithms to group similar scenes. Inspired by the excellent performance of deep learning in image recognition, this paper proposes a terminal-area similar-weather-scene classification method based on improved deep convolutional embedded clustering (IDCEC), which uses the combination of an encoding layer and a decoding layer to reduce the dimensionality of the weather image, retaining useful information to the greatest extent, and then uses the combination of the pre-trained encoding layer and a clustering layer to train the clustering model of similar scenes in the terminal area. Finally, the terminal area of Guangzhou Airport is selected as the research object, the method proposed in this article is used to classify historical weather data into similar scenes, and its performance is compared with other state-of-the-art methods. The experimental results show that the proposed IDCEC method identifies similar scenes more accurately based on the spatial distribution characteristics and severity of weather; at the same time, compared with the actual flight volume in the Guangzhou terminal area, IDCEC's recognition of similar weather scenes is consistent with the recognition of experts in the field.
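Deep embedded clustering methods of this family typically attach a clustering layer that soft-assigns each encoded sample to cluster centroids with a Student's t kernel. The sketch below shows that standard soft-assignment step; the kernel and α = 1 follow the original DEC formulation and are an assumption about IDCEC's clustering layer.

```python
import numpy as np

def soft_assign(z, centroids, alpha=1.0):
    """DEC-style soft cluster assignment: Student's t similarity
    between embedded points z (n, d) and centroids (k, d)."""
    d2 = ((z[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1) / 2)
    return q / q.sum(axis=1, keepdims=True)

# two embedded weather images, each sitting on one of two centroids
z = np.array([[0.0, 0.0], [5.0, 5.0]])
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
q = soft_assign(z, centroids)
print(q)  # each row sums to 1; mass concentrates on the nearest centroid
```

During training, the encoder and centroids are refined jointly by matching this soft distribution to a sharpened target distribution.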
In European thought and culture, there exists a group of passionate artists fascinated by the intention, passion, and richness of artistic expression. They strive to establish connections between different art forms. Musicians not only attempt to represent masterpieces through the language of music but also aim to convey subjective emotional experiences and personal imagination to listeners by adding titles to their musical works. This study examines two pieces, "Scenes of Childhood" and "Children's Garden", and analyzes the different approaches employed by the composers in portraying similar content.
Scene text detection is an important task in computer vision. In this paper, we present YOLOv5 Scene Text (YOLOv5ST), an optimized architecture based on YOLOv5 v6.0 tailored for fast scene text detection. Our primary goal is to enhance inference speed without sacrificing significant detection accuracy, thereby enabling robust performance on resource-constrained devices like drones, closed-circuit television cameras, and other embedded systems. To achieve this, we propose key modifications to the network architecture to lighten the original backbone and improve feature aggregation, including replacing standard convolution with depth-wise convolution, adopting the C2 sequence module in place of C3, employing Spatial Pyramid Pooling Global (SPPG) instead of Spatial Pyramid Pooling Fast (SPPF), and integrating a Bi-directional Feature Pyramid Network (BiFPN) into the neck. Experimental results demonstrate a remarkable 26% improvement in inference speed compared to the baseline, with only marginal reductions of 1.6% and 4.2% in mean average precision (mAP) at intersection-over-union (IoU) thresholds of 0.5 and 0.5:0.95, respectively. Our work represents a significant advancement in scene text detection, striking a balance between speed and accuracy, making it well-suited for performance-constrained environments.
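The parameter saving from swapping a standard convolution for a depth-wise separable one (depth-wise spatial filter followed by a 1x1 point-wise mix) can be counted directly. The channel sizes below are illustrative, not taken from the YOLOv5ST backbone.

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    """Depth-wise k x k filter per channel, then 1x1 point-wise mix."""
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 128, 128, 3
std = conv_params(c_in, c_out, k)          # 147456 weights
dws = dw_separable_params(c_in, c_out, k)  # 17536 weights
print(std / dws)  # roughly 8.4x fewer parameters
```

This roughly k²-fold reduction (for large channel counts) is the main source of the backbone lightening and the inference-speed gain.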
The proposed robust reversible watermarking algorithm addresses the compatibility challenge between robustness and reversibility in existing video watermarking techniques by leveraging scene smoothness to group video frames. Grounded in the H.264 video coding standard, the algorithm first employs traditional robust watermark embedding to place watermark information in the low-frequency coefficient domain of the U channel. Subsequently, it applies histogram-shifting techniques in the high-frequency coefficient domain of the U channel to embed auxiliary information, enabling successful watermark extraction and lossless recovery of the original video content. Experimental results demonstrate the algorithm's strong imperceptibility, with each embedded frame in the experimental videos achieving a mean peak signal-to-noise ratio of 49.3830 dB and a mean structural similarity of 0.9996; compared with the three comparison algorithms, these two indexes improve by 7.59% and 0.4% on average. At the same time, the proposed algorithm is robust to both offline and online attacks: against offline attacks, the average normalized correlation coefficient between the extracted and original watermarks is 0.9989 and the average bit error rate is 0.0089; against online attacks, the normalized correlation coefficient is 0.8840 and the mean bit error rate is 0.2269. Compared with the three comparison algorithms, these two indexes improve by 1.27% and 18.16% on average, highlighting the algorithm's robustness. Furthermore, the algorithm exhibits low computational complexity, with mean encoding and decoding time differentials during experimental video processing of 3.934 and 2.273 s, respectively, underscoring its practical utility.
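Histogram-shift reversible embedding, the general technique behind the auxiliary-information stage, can be sketched on a toy integer coefficient array. The single-peak scheme below is a textbook simplification, not the paper's exact construction in the U-channel coefficient domain.

```python
import numpy as np

def hs_embed(coeffs, bits, peak):
    """Histogram-shift embedding sketch: values above `peak` shift up
    by 1 to open a gap; each coefficient equal to `peak` carries one
    payload bit (0 -> stays at peak, 1 -> moves into the gap)."""
    out = coeffs.copy()
    out[out > peak] += 1
    idx = np.flatnonzero(coeffs == peak)
    out[idx[:len(bits)]] += np.asarray(bits[:len(idx)])
    return out

def hs_extract(marked, peak, n):
    """Recover the bits and losslessly restore the original coefficients."""
    idx = np.flatnonzero((marked == peak) | (marked == peak + 1))
    bits = [int(marked[i] == peak + 1) for i in idx[:n]]
    rest = marked.copy()
    rest[rest > peak] -= 1
    return bits, rest

c = np.array([3, 5, 5, 7, 5, 9])
m = hs_embed(c, [1, 0, 1], peak=5)
bits, restored = hs_extract(m, peak=5, n=3)
print(bits, np.array_equal(restored, c))  # payload recovered, array restored
```

Reversibility holds because the shift is invertible and every marked value maps back to exactly one original value, which is what allows lossless recovery of the cover video.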
In digital video analysis, browsing, retrieval, and querying, the shot is incapable of meeting all needs. A scene, a cluster of a series of shots, partially meets the above demands. In this paper, an algorithm for clustering video scenes based on shot key-frame sets is proposed. We use χ² histogram matching and twin-histogram comparison for shot detection. A method is presented for key-frame-set extraction based on the distance between non-adjacent frames; furthermore, the minimum distance between key-frame sets is computed as the distance between shots, and scenes are finally clustered according to the distance between shots. Experiments with this algorithm show satisfactory performance in correctness and computing speed.
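The χ² histogram comparison used for shot detection can be sketched on synthetic frames: identical frames score zero, while frames with disjoint gray-level distributions (a hard cut) score near the maximum. The bin count, frame sizes, and threshold interpretation are illustrative assumptions.

```python
import numpy as np

def chi2_dist(h1, h2, eps=1e-9):
    """Chi-square distance between two normalized gray-level histograms,
    a common cut indicator for shot-boundary detection."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def hist(frame, bins=16):
    """Normalized gray-level histogram of an 8-bit frame."""
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()

rng = np.random.default_rng(0)
a = rng.integers(0, 80, (32, 32))     # dark "shot"
b = rng.integers(170, 256, (32, 32))  # bright "shot"
same = chi2_dist(hist(a), hist(a))    # identical frames -> 0
cut = chi2_dist(hist(a), hist(b))     # disjoint histograms -> near 1
print(same, cut)
```

A shot boundary would be declared wherever the inter-frame distance crosses a chosen threshold between these two extremes.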
From virtual technology and the Internet to the metaverse, around the creation of virtual-real isomorphism and immersive experience venues, an innovative integration model and design concept spanning the three major fields of "science and technology + art + culture" has emerged, and it has also shaped the management models of today's design industry. The author, grounded in design originality and experience, draws feasible exhibition-design tactics from a "presence" design and implementation project that integrates immersive display and performance. From this, effort is invested in sorting out specific innovative design patterns with reference to ongoing metaverse cases, thereby charting a course for prospective design projects.
With the release of The Catcher in the Rye, J. D. Salinger's writing career gained immediate popularity. His symbolic use of language has been thoroughly researched, but the symbolic scenes that make up Holden's life stage, especially the symbolic connotations of the ironic resting places in the novel, such as beds, couches, and bedrooms, have not received much attention. This paper analyses four scenes: on Holden's history teacher's bed, on the hotel bed with a prostitute, in his sister's bedroom, and on his English teacher's couch. It aims to uncover his spiritual chaos as well as his adolescent desires in the real world, demonstrating that there is no place for the adolescent Holden to rest after he chooses his own stage of scenes in his life.
Background: Virtual-reality (VR) fusion techniques have become increasingly popular in recent years, and several previous studies have applied them to laboratory education. However, without a basis for evaluating the effects of virtual-real fusion on VR in education, many developers have chosen to abandon this expensive and complex set of techniques. Methods: In this study, we experimentally investigate the effects of virtual-real fusion on immersion, presence, and learning performance. Each participant was randomly assigned to one of three conditions: a PC environment (PCE) operated by mouse; a VR environment (VRE) operated by controllers; or a VR environment running virtual-real fusion (VR-VRFE), operated by real hands. Results: The analysis of variance (ANOVA) and t-test results for presence and self-efficacy show significant differences for the PCE*VR-VRFE condition pair. Furthermore, the results show significant differences in the intrinsic value of learning performance for the pairs PCE*VR-VRFE and VRE*VR-VRFE, and a marginally significant difference was found for immersion. Conclusions: The results suggest that virtual-real fusion can offer improved immersion, presence, and self-efficacy compared to traditional PC environments, as well as a better intrinsic value of learning performance compared to both PC and VR environments. The results also suggest that virtual-real fusion offers a lower sense of presence compared to traditional VR environments.
An automatic approach is presented to track a wide screen in a multipurpose-hall video scene. Once the screen is located, the system also generates the temporal rate of change using an edge-detection-based method. Our approach adopts a scene segmentation algorithm that exploits visual features (texture) and depth information to perform efficient screen localization. The cropped region corresponding to the wide screen undergoes salient-visual-cue extraction to retrieve the emphasized changes required in the rate-of-change computation. In addition to video document indexing and retrieval, this work can improve machine-vision capability in behavior analysis and pattern recognition.
An encryption and decryption method for three-dimensional objects uses computer-generated holograms and proposes an encoding stage. The amplitude and phase of a three-dimensional object are obtained and mathematically transformed into overlapping data stored on a digital computer. Different three-dimensional images are restored, and a system is developed for expanding three-dimensional scenes and recovering camera movement parameters. This article discusses digital image processing algorithms of this kind, namely the reconstruction of a three-dimensional model of a scene. In their present state, many such algorithms need improvement; this paper proposes one option for improving the accuracy of such reconstruction.
In this paper, we study autonomous landing-scene recognition with knowledge transfer for drones. Considering the difficulties of aerial remote sensing, especially that some scenes are extremely similar or that the same scene has different appearances at different altitudes, we employ a deep convolutional neural network (CNN) based on knowledge transfer and fine-tuning to solve the problem. A LandingScenes-7 dataset is established and divided into seven classes. Moreover, the classifier still faces a novelty-detection problem, which we address by rejecting non-landing scenes via thresholding in the prediction stage. We employ a transfer-learning method based on a ResNeXt-50 backbone with the adaptive momentum (ADAM) optimization algorithm, and compare it against a ResNet-50 backbone and the momentum stochastic gradient descent (SGD) optimizer. Experimental results show that ResNeXt-50 with the ADAM optimizer performs better. With a pre-trained model and fine-tuning, it achieves 97.8450% top-1 accuracy on the LandingScenes-7 dataset, paving the way for drones to autonomously learn landing scenes.
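The prediction-stage thresholding for novelty detection can be sketched simply: softmax the classifier's logits and reject any input whose top probability falls below a threshold. The class names, logits, and the 0.8 threshold are made-up illustrations, not the paper's values.

```python
import numpy as np

def predict_with_rejection(logits, threshold=0.8, classes=None):
    """Softmax the logits; return the top class, or reject the input
    as an unknown scene when the top probability is below threshold."""
    p = np.exp(logits - logits.max())  # stable softmax
    p /= p.sum()
    k = int(np.argmax(p))
    if p[k] < threshold:
        return "unknown-scene"
    return classes[k] if classes else k

# hypothetical 7-class landing-scene logits
classes = ["grass", "runway", "water", "roof", "road", "sand", "forest"]
confident = np.array([0.1, 9.0, 0.2, 0.1, 0.0, 0.3, 0.1])
ambiguous = np.array([1.0, 1.1, 1.0, 0.9, 1.0, 1.1, 1.0])
print(predict_with_rejection(confident, classes=classes))  # runway
print(predict_with_rejection(ambiguous, classes=classes))  # unknown-scene
```

In practice the threshold would be calibrated on a validation set so that held-out non-landing scenes fall below it.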
In today's real world, an important research area in image processing is scene text detection and recognition. Scene text can appear in different languages, fonts, sizes, colours, orientations, and structures; moreover, the aspect ratios and layouts of scene text may differ significantly. All these variations pose significant challenges for detection and recognition algorithms aimed at text in natural scenes. In this paper, a new intelligent method is proposed for detecting text in natural scenes and recognizing it by applying the newly proposed Conditional Random Field-based fuzzy-rules-incorporated Convolutional Neural Network (CR-CNN). Moreover, we recommend a new text detection method for detecting the exact text from the input natural-scene images. To enhance the edge-detection process, image pre-processing activities such as edge detection and color modeling are applied in this work. In addition, we generate new fuzzy rules for making effective decisions in the text detection and recognition processes. Experiments were conducted on standard benchmark datasets, namely ICDAR 2003, ICDAR 2011, ICDAR 2005, and SVT, achieving better accuracy in text detection and recognition. Using these datasets, five different experiments were conducted to evaluate the proposed model. We also compared the proposed system with other classifiers such as the SVM, the MLP, and the CNN; in these comparisons, the proposed model achieved better classification accuracy than the other existing works.
This is an attempt to explain mRNA-dependent non-stationary semantic values of codons (triplets) and of the nucleotides (letters) in codon composition during protein biosynthesis. The explanation proceeds by comparing the protein codes of various biosystem taxa and by comparing the mitochondrial code with the standard code. An initial mRNA transcriptional virtuality (virtual reality) is transformed into material reality at the level of translation of virtual triplets into real (material) amino acids or into a real stop command of protein biosynthesis. The transformation of virtuality into reality occurs de facto when the linguistic sign functions of codon synonyms are realized in the 3' nucleotide (the wobbling nucleotide, per F. Crick) during protein biosynthesis, consistent with the authors' earlier theoretical work. Despite the illusory appearance of semantic arbitrariness when ribosomes operate in this mode of codon semantic non-stationarity, the phenomenon probably provides biosystems with an unusually high level of adaptability to changes in the external environment, as well as to the internal (mental) dynamics of the neuronal genome in the cerebral cortex. The genome's non-stationarity at the nucleotide, codon, gene, and mental levels has a fractal structure with corresponding dimensions. The highest form of such fractality (with maximum dimension) is probably realized in the genomic continuum of neurons in the human cerebral cortex through this semantic virtual-to-real (VR) codon transcoding, with the biosynthesis of short-lived semantic proteins as the equivalents of material thinking-consciousness. In fact, this is the language of the brain's genome, that is, our own language. The same thing happens in natural, primarily mental (non-verbal) languages: their materialization is recorded in vocables (sounded words) and in writing, and such writing is the amino acid sequence in the semantic proteins of the human cerebral cortex. Rapidly decaying, such proteins may nevertheless leave a long-lasting, so-called Schrödinger-wave holographic memory in the cerebral cortex. The study presented below is purely theoretical and based on a logical approach; the topic is very complex and subject to further development.
Traffic scene captioning technology automatically generates one or more sentences describing the content of a traffic scene by analyzing the input traffic-scene images, ensuring road safety while providing an important decision-making function for sustainable transportation. In order to provide a comprehensive and reasonable description of complex traffic scenes, a traffic-scene semantic captioning model with multi-stage feature enhancement is proposed in this paper. Overall, the model follows an encoder-decoder structure. First, multi-level-granularity visual features are used for feature enhancement during encoding, which enables the model to learn more detailed content in the traffic-scene image. Second, a scene knowledge graph is applied to the decoding process, and the semantic features it provides are used to enhance the features learned by the decoder once more, so that the model can learn the attributes of objects in the traffic scene and the relationships between objects to generate more reasonable captions. This paper reports extensive experiments on the challenging MS-COCO dataset, evaluated with five standard automatic evaluation metrics; the results show that the proposed model improves significantly over state-of-the-art methods on all metrics, notably achieving a score of 129.0 on the CIDEr-D metric, which also indicates that the proposed model can effectively provide a more reasonable and comprehensive description of traffic scenes.
In view of the phased development of college education, military training, and new-equipment combat training, this paper proposes a "five-in-one, step-by-step" virtual-real fusion training model. The five training modes, namely virtual panel training, immersive virtual training, physical (semi-physical) simulation training, training with equipped training equipment, and installation drill, are organically combined in the practical training of new equipment, improving students' innovation awareness and practical competence.
Funding: supported by the National Natural Science Foundation of China (Grant No. 62005049); the Natural Science Foundation of Fujian Province (Grant Nos. 2020J01451, 2022J05113); and the Education and Scientific Research Program for Young and Middle-aged Teachers in Fujian Province (Grant No. JAT210035).
Funding: the China National Key Research and Development Program (No. 2016YFC0802904); the National Natural Science Foundation of China (61671470); and the 62nd batch of funded projects of the China Postdoctoral Science Foundation (No. 2017M623423).
Funding: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (No. 2021R1I1A1A01055652).
Funding: Supported by the Fundamental Research Funds for the Central Universities under Grant NS2020045. Y.L.G. received the grant.
Abstract: Weather is a key factor affecting the control of air traffic. Accurate recognition and classification of similar weather scenes in the terminal area is helpful for rapid decision-making in air traffic flow management. Current research mostly uses traditional machine learning methods to extract features of weather scenes and clustering algorithms to divide similar scenes. Inspired by the excellent performance of deep learning in image recognition, this paper proposes a terminal-area similar weather scene classification method based on improved deep convolutional embedded clustering (IDCEC), which uses the combination of an encoding layer and a decoding layer to reduce the dimensionality of the weather image, retaining useful information to the greatest extent, and then uses the combination of the pre-trained encoding layer and a clustering layer to train the clustering model of similar scenes in the terminal area. Finally, the terminal area of Guangzhou Airport is selected as the research object, the method proposed in this article is used to classify historical weather data into similar scenes, and the performance is compared with other state-of-the-art methods. The experimental results show that the proposed IDCEC method can identify similar scenes more accurately based on the spatial distribution characteristics and severity of weather; at the same time, compared with the actual flight volume in the Guangzhou terminal area, IDCEC's recognition results of similar weather scenes are consistent with the recognition of experts in the field.
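The abstract does not specify IDCEC's clustering layer, but deep convolutional embedded clustering methods commonly use a Student's t soft assignment between an embedded point and the cluster centers. A minimal sketch of that standard formulation (assumed here, not taken from the paper):

```python
def soft_assign(z, centers):
    """Student's t (one degree of freedom) soft assignment of an embedded
    point z to a list of cluster centers; returns a probability vector."""
    sims = [1.0 / (1.0 + sum((a - b) ** 2 for a, b in zip(z, c)))
            for c in centers]
    total = sum(sims)
    return [s / total for s in sims]
```

During training, such soft assignments are typically sharpened into a target distribution, and the encoder is updated to match it, so embeddings drift toward well-separated clusters.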
Abstract: In European thought and culture, there exists a group of passionate artists who are fascinated by the intention, passion, and richness of artistic expression. They strive to establish connections between different art forms. Musicians not only attempt to represent masterpieces through the language of music but also aim to convey subjective experiences of emotions and personal imagination to listeners by adding titles to their musical works. This study examines two pieces, “Scenes of Childhood” and “Children's Garden”, and analyzes the different approaches employed by the composers in portraying similar content.
Funding: The National Natural Science Foundation of P.R. China (42075130) and Nari Technology Co., Ltd. (4561655965).
Abstract: Scene text detection is an important task in computer vision. In this paper, we present YOLOv5 Scene Text (YOLOv5ST), an optimized architecture based on YOLOv5 v6.0 tailored for fast scene text detection. Our primary goal is to enhance inference speed without sacrificing significant detection accuracy, thereby enabling robust performance on resource-constrained devices like drones, closed-circuit television cameras, and other embedded systems. To achieve this, we propose key modifications to the network architecture to lighten the original backbone and improve feature aggregation, including replacing standard convolution with depth-wise convolution, adopting the C2 sequence module in place of C3, employing Spatial Pyramid Pooling Global (SPPG) instead of Spatial Pyramid Pooling Fast (SPPF), and integrating a Bi-directional Feature Pyramid Network (BiFPN) into the neck. Experimental results demonstrate a remarkable 26% improvement in inference speed compared to the baseline, with only marginal reductions of 1.6% and 4.2% in mean average precision (mAP) at intersection-over-union (IoU) thresholds of 0.5 and 0.5:0.95, respectively. Our work represents a significant advancement in scene text detection, striking a balance between speed and accuracy and making it well suited for performance-constrained environments.
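The speed gain from swapping standard convolution for depth-wise (separable) convolution comes largely from the parameter and FLOP reduction, which a quick arithmetic sketch makes concrete (layer sizes below are illustrative, not from the paper):

```python
def standard_conv_params(k, c_in, c_out):
    """Weights of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depth-wise k x k filter per input channel, followed by a
    1 x 1 point-wise convolution that mixes channels."""
    return k * k * c_in + c_in * c_out
```

For a 3x3 layer with 64 input and 128 output channels this is 73,728 versus 8,768 parameters, roughly an 8x reduction, which is why the lightened backbone trades so little mAP for so much speed.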
基金supported in part by the National Natural Science Foundation of China under Grants 62202496,62272478the Basic Frontier Innovation Project of Engineering university of People Armed Police under Grants WJY202314,WJY202221.
Abstract: The proposed robust reversible watermarking algorithm addresses the compatibility challenges between robustness and reversibility in existing video watermarking techniques by leveraging scene smoothness to group video frames. Grounded in the H.264 video coding standard, the algorithm first employs traditional robust watermark stitching technology to embed watermark information in the low-frequency coefficient domain of the U channel. Subsequently, it utilizes histogram migration techniques in the high-frequency coefficient domain of the U channel to embed auxiliary information, enabling successful watermark extraction and lossless recovery of the original video content. Experimental results demonstrate the algorithm's strong imperceptibility, with each embedded frame in the experimental videos achieving a mean peak signal-to-noise ratio of 49.3830 dB and a mean structural similarity of 0.9996; compared with the three comparison algorithms, these two indexes improve by 7.59% and 0.4% on average. At the same time, the proposed algorithm is robust to both offline and online attacks: against offline attacks, the average normalized correlation coefficient between the extracted watermark and the original watermark is 0.9989 and the average bit error rate is 0.0089; against online attacks, the normalized correlation coefficient is 0.8840 and the mean bit error rate is 0.2269. Compared with the three comparison algorithms, these two indexes improve by 1.27% and 18.16% on average, highlighting the algorithm's robustness. Furthermore, the algorithm exhibits low computational complexity, with mean encoding and decoding time differentials during experimental video processing of 3.934 and 2.273 s, respectively, underscoring its practical utility.
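The imperceptibility figures above are PSNR values; for 8-bit frames, PSNR is 10·log10(255²/MSE) over the original and watermarked pixels. A small sketch with hypothetical pixel sequences (flattened frames):

```python
import math

def psnr(original, modified, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(original, modified)) / len(original)
    return float('inf') if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)
```

An embedding that shifts every pixel by one grey level already yields about 48 dB, so the reported 49.38 dB mean corresponds to sub-level average distortion per frame.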
Funding: Supported by the Natural Science Foundation of Hubei Province (2004ABA174).
Abstract: In digital video analysis, browsing, retrieval, and query, the shot alone is incapable of meeting these needs. A scene is a cluster of a series of shots, which partially meets the above demands. In this paper, an algorithm for clustering video scenes based on shot key-frame sets is proposed. We use X² histogram matching and twin histogram comparison for shot detection. A method is presented for key-frame set extraction based on the distance between non-adjacent frames; furthermore, the minimum distance between key-frame sets is computed as the distance between shots, and eventually scenes are clustered according to the distance between shots. Experiments with this algorithm show satisfactory performance in correctness and computing speed.
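The X² histogram match used for shot detection compares the colour histograms of consecutive frames; a large distance signals a cut. A minimal sketch (the fixed threshold is an illustrative assumption; the paper pairs this with twin histogram comparison):

```python
def chi2_distance(h1, h2):
    """X^2 distance between two histograms; large values signal a shot change."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)

def detect_shot_boundaries(histograms, threshold):
    """Indices i where frame i starts a new shot relative to frame i-1."""
    return [i for i in range(1, len(histograms))
            if chi2_distance(histograms[i - 1], histograms[i]) > threshold]
```

Normalizing by the bin sum (a + b) makes the measure sensitive to changes in sparsely populated bins, which is why X² tends to separate cuts from ordinary motion better than a plain L1 histogram difference.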
Abstract: From virtual technology and the Internet to the metaverse, around the creation of virtual-real isomorphism and immersive experience venues, an innovative integration model and design concept uniting the three major fields of "science and technology + art + culture" has emerged, which has also shaped the management models of today's design industry. The author, grounded in design originality and experience, seeks feasible exhibition design tactics from a "presence" design and implementation project that integrates immersive display and performance. On this basis, effort is invested in sorting out specific innovative design patterns with reference to ongoing metaverse cases, thereby charting a course for prospective design projects.
Abstract: By releasing the book The Catcher in the Rye, J.D. Salinger won immediate popularity in his writing career. His symbolic use of language has been thoroughly researched, but the symbolic scenes that make up Holden's life stage, especially the symbolic connotations of the ironic resting places in the novel, such as beds, couches, and bedrooms, have not received much attention. This paper analyses four scenes: on Holden's history teacher's bed, on a hotel bed with a prostitute, in his sister's bedroom, and on his English teacher's couch. It aims to uncover his spiritual chaos as well as his adolescent desires in the real world, demonstrating that there is no place for the adolescent Holden to rest after he chooses his own stage of scenes in his life.
Funding: The National Key Research and Development Program of China (2018YFB1004902) and the National Natural Science Foundation of China (61772329, 61373085).
Abstract: Background: Virtual-reality (VR) fusion techniques have become increasingly popular in recent years, and several previous studies have applied them to laboratory education. However, without a basis for evaluating the effects of virtual-real fusion on VR in education, many developers have chosen to abandon this expensive and complex set of techniques. Methods: In this study, we experimentally investigate the effects of virtual-real fusion on immersion, presence, and learning performance. Each participant was randomly assigned to one of three conditions: a PC environment (PCE) operated by mouse; a VR environment (VRE) operated by controllers; or a VR environment running virtual-real fusion (VR-VRFE), operated by real hands. Results: The analysis of variance (ANOVA) and t-test results for presence and self-efficacy show significant differences for the PCE*VR-VRFE condition pair. Furthermore, the results show significant differences in the intrinsic value of learning performance for the pairs PCE*VR-VRFE and VRE*VR-VRFE, and a marginally significant difference was found for the immersion group. Conclusions: The results suggest that virtual-real fusion can offer improved immersion, presence, and self-efficacy compared to traditional PC environments, as well as a better intrinsic value of learning performance compared to both PC and VR environments. The results also suggest that virtual-real fusion offers a lower sense of presence compared to traditional VR environments.
Abstract: An automatic approach is presented to track a wide screen in a multipurpose-hall video scene. Once the screen is located, the system also generates the temporal rate of change using an edge-detection-based method. Our approach adopts a scene segmentation algorithm that exploits visual features (texture) and depth information to perform efficient screen localization. The cropped region corresponding to the wide screen undergoes salient visual cue extraction to retrieve the emphasized changes required in the rate-of-change computation. In addition to video document indexing and retrieval, this work can improve machine vision capability in behavior analysis and pattern recognition.
Abstract: An encryption and decryption method for three-dimensional objects uses computer-generated holograms and introduces an encoding stage. The amplitude and phase information of a three-dimensional object is obtained, mathematically transformed, overlapped, and stored on a digital computer. Different three-dimensional images are then restored, and a system is developed for expanding three-dimensional scenes and estimating camera movement parameters. This article discusses such digital image processing algorithms as the reconstruction of a three-dimensional model of a scene. In their present state, many such algorithms need improvement; this paper proposes one option for improving the accuracy of such reconstruction.
Funding: Supported by the National Natural Science Foundation of China (62103104) and the China Postdoctoral Science Foundation (2021M690615).
Abstract: In this paper, we study autonomous landing scene recognition with knowledge transfer for drones. Considering the difficulties in aerial remote sensing, especially that some scenes are extremely similar or the same scene has different appearances at different altitudes, we employ a deep convolutional neural network (CNN) based on knowledge transfer and fine-tuning to solve the problem. The LandingScenes-7 dataset is established and divided into seven classes. Moreover, there is still a novelty detection problem in the classifier, and we address this by excluding other landing scenes using a thresholding approach in the prediction stage. We employ a transfer learning method based on a ResNeXt-50 backbone with the adaptive momentum (ADAM) optimization algorithm. We also compare a ResNet-50 backbone and the momentum stochastic gradient descent (SGD) optimizer. Experimental results show that ResNeXt-50 with the ADAM optimization algorithm performs better. With a pre-trained model and fine-tuning, it achieves 97.8450% top-1 accuracy on the LandingScenes-7 dataset, paving the way for drones to autonomously learn landing scenes.
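The thresholding approach to novelty detection described above can be sketched as rejecting any prediction whose top softmax probability falls below a cutoff; the class names and the 0.9 threshold below are illustrative assumptions, not values from the paper:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_with_rejection(logits, classes, threshold=0.9):
    """Return the top class, or 'unknown' if the classifier is not
    confident enough -- the novelty-detection step at prediction time."""
    probs = softmax(logits)
    top = max(range(len(probs)), key=lambda i: probs[i])
    return classes[top] if probs[top] >= threshold else "unknown"
```

A confident logit vector maps to one of the seven landing classes, while an ambiguous one (e.g. a scene type absent from training) is rejected instead of being forced into the nearest class.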
Abstract: In today's real world, an important research area in image processing is scene text detection and recognition. Scene text can appear in different languages, fonts, sizes, colours, orientations, and structures. Moreover, the aspect ratios and layouts of scene text may differ significantly. All these variations pose significant challenges for the detection and recognition algorithms applied to text in natural scenes. In this paper, a new intelligent text detection and recognition method is proposed that detects text in natural scenes and recognizes it by applying the newly proposed Conditional Random Field-based fuzzy-rules-incorporated Convolutional Neural Network (CR-CNN). Moreover, we recommend a new text detection method for detecting the exact text in input natural scene images. To enhance the edge detection process, image pre-processing activities such as edge detection and color modeling have been applied in this work. In addition, we generate new fuzzy rules for making effective decisions in the text detection and recognition processes. Experiments were conducted using the standard benchmark datasets ICDAR 2003, ICDAR 2011, ICDAR 2005, and SVT, achieving better accuracy in text detection and recognition. Using these datasets, five different experiments were conducted to evaluate the proposed model. We also compared the proposed system with other classifiers such as the SVM, the MLP, and the CNN. In these comparisons, the proposed model achieved better classification accuracy than the other existing works.
Abstract: This is an attempt to explain mRNA-dependent non-stationary semantic values of codons (triplets) and nucleotides (letters) in codon composition during protein biosynthesis. This explanation is realized by comparing the different protein codes of various biosystem taxa, and by comparing the mitochondrial code with the standard code. An initial mRNA transcriptional virtuality (virtual reality) is transformed into material reality at the level of translation of virtual triplets into real (material) amino acids or into a real stop command of protein biosynthesis. The transformation of virtuality into reality occurs de facto when the linguistic sign functions of the codon synonyms are realized in the 3' nucleotide (the wobbling nucleotide according to F. Crick) in the process of protein biosynthesis. This corresponds to the theoretical works of the authors of this article. Despite the illusory appearance of semantic arbitrariness during the operation of ribosomes in the mode of codon semantic non-stationarity, this phenomenon probably provides biosystems with an unusually high level of adaptability to changes in the external environment as well as to the internal (mental) dynamics of the neuronal genome in the cerebral cortex. The genome's non-stationarity properties at the nucleotide, codon, gene, and mental levels have a fractal structure and corresponding dimensions. The highest form of such fractality (with maximum dimension) is probably realized in the genomic continuum of neurons in the human cerebral cortex through this semantic virtual-to-real (VR) codon transcoding with the biosynthesis of short-lived semantic proteins as the equivalents of material thinking-consciousness. In fact, this is the language of the brain's genome, that is, our own language. In this case, the same thing happens in natural, primarily mental (non-verbal) languages. Their materialization is recorded in vocables (sounding words) and in writing. Such writing is the amino acid sequence in the semantic proteins of the human cerebral cortex. Rapidly decaying, such proteins can leave a long-lasting so-called Schrödinger wave holographic memory in the cerebral cortex. The study presented below is purely theoretical and based on a logical approach. The topic of the study is very complex and is subject to further development.
Funding: Funded by (i) the National Natural Science Foundation of China (NSFC) under Grant Nos. 61402397, 61263043, 61562093, and 61663046; (ii) the Open Foundation of the Key Laboratory in Software Engineering of Yunnan Province, No. 2020SE304; and (iii) the Practical Innovation Project of Yunnan University, Project Nos. 2021z34, 2021y128, and 2021y129.
Abstract: Traffic scene captioning technology automatically generates one or more sentences to describe the content of traffic scenes by analyzing the content of input traffic scene images, ensuring road safety while providing an important decision-making function for sustainable transportation. In order to provide a comprehensive and reasonable description of complex traffic scenes, a traffic scene semantic captioning model with multi-stage feature enhancement is proposed in this paper. In general, the model follows an encoder-decoder structure. First, multilevel-granularity visual features are used for feature enhancement during the encoding process, which enables the model to learn more detailed content in the traffic scene image. Second, a scene knowledge graph is applied to the decoding process, and the semantic features provided by the scene knowledge graph are used to enhance the features learned by the decoder once more, so that the model can learn the attributes of objects in the traffic scene and the relationships between objects to generate more reasonable captions. This paper reports extensive experiments on the challenging MS-COCO dataset, evaluated by five standard automatic evaluation metrics. The results show that the proposed model improves significantly on all metrics compared with state-of-the-art methods, especially achieving a score of 129.0 on the CIDEr-D evaluation metric, which also indicates that the proposed model can effectively provide a more reasonable and comprehensive description of the traffic scene.
Funding: The 2019 Ministry of Education Industry-University Cooperation Collaborative Education Project "Research on the Construction of Economics and Management Professional Data Analysis Laboratory" (Grant Number: 201902077020).
Abstract: In view of the phased development of college education, military training, and new-equipment combat training, this paper proposes a "five-in-one, step-by-step" virtual-real fusion training model. The five training modes, namely virtual panel training, immersive virtual training, physical (semi-physical) simulation training, training with equipped training equipment, and installation drill, are organically combined in the practical training of new equipment, which improves students' innovation consciousness and serviceability.