Leveraging the extraordinary phenomena of quantum superposition and quantum correlation, quantum computing offers unprecedented potential for addressing challenges beyond the reach of classical computers. This paper tackles two pivotal challenges in the realm of quantum computing: firstly, the development of an effective encoding protocol for translating classical data into quantum states, a critical step for any quantum computation; different encoding strategies can significantly influence quantum computer performance. Secondly, we address the need to counteract the inevitable noise that can hinder quantum acceleration. Our primary contribution is the introduction of a novel variational data encoding method, grounded in quantum regression algorithm models. By adapting the learning concept from machine learning, we render data encoding a learnable process. This allowed us to study the role of quantum correlation in data encoding. Through numerical simulations of various regression tasks, we demonstrate the efficacy of our variational data encoding, particularly post-learning from instructional data. Moreover, we delve into the role of quantum correlation in enhancing task performance, especially in noisy environments. Our findings underscore the critical role of quantum correlation in not only bolstering performance but also in mitigating noise interference, thus advancing the frontier of quantum computing.
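The encoding step this abstract centers on can be illustrated with a generic single-qubit angle-encoding sketch. This is a common baseline rather than the paper's variational scheme, and the function names are illustrative:

```python
import math

def angle_encode(x):
    """Encode a scalar feature x as a single-qubit state via an RY rotation.

    A minimal illustration of angle encoding (not the paper's learnable
    variational method): |psi> = cos(x/2)|0> + sin(x/2)|1>.
    """
    return (math.cos(x / 2), math.sin(x / 2))

def encode_vector(features):
    """Encode each feature on its own qubit; return per-qubit amplitudes."""
    return [angle_encode(x) for x in features]

state = encode_vector([0.0, math.pi])
# feature 0.0 leaves its qubit in |0>; feature pi rotates its qubit to |1>
```

Each pair of amplitudes is normalized by construction, which is what makes the map from classical data to valid quantum states well defined.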
This study proposes a novel particle encoding mechanism that seamlessly incorporates the quantum properties of particles, with a specific emphasis on constituent quarks. The primary objective of this mechanism is to facilitate the digital registration and identification of a wide range of particle information. Its design ensures easy integration with different event generators and digital simulations commonly used in high-energy experiments. Moreover, this innovative framework can be easily expanded to encode complex multi-quark states comprising up to nine valence quarks and accommodating an angular momentum of up to 99/2. This versatility and scalability make it a valuable tool.
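One way such an encoding could work is decimal digit-packing, sketched below under loudly hypothetical assumptions: the layout (one digit per quark flavor, two trailing digits for 2J) is invented for illustration and is not the paper's actual scheme.

```python
def encode_state(quarks, two_j):
    """Pack a multi-quark state into one integer (hypothetical layout).

    quarks: list of up to nine PDG-style flavor digits (1=d, 2=u, 3=s, ...).
    two_j:  twice the total angular momentum, 0..99 (so J up to 99/2).
    """
    if not 1 <= len(quarks) <= 9:
        raise ValueError("supports 1 to 9 valence quarks")
    if not 0 <= two_j <= 99:
        raise ValueError("2J must be in 0..99")
    code = 0
    for q in quarks:
        code = code * 10 + q      # one decimal digit per quark
    return code * 100 + two_j     # low two digits carry 2J

def decode_two_j(code):
    """Recover 2J from the low two digits."""
    return code % 100

# a uud proton-like state with J = 1/2, i.e. 2J = 1
code = encode_state([2, 2, 1], 1)   # -> 22101
```

Carrying 2J rather than J keeps half-integer spins in integer arithmetic, which is why the stated ceiling is 99/2.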
Increasing research has focused on semantic communication, the goal of which is to accurately convey the meaning rather than merely transmitting symbols from the sender to the receiver. In this paper, we design a novel encoding and decoding semantic communication framework, which adopts semantic information and the contextual correlations between items to optimize the performance of a communication system over various channels. On the sender side, the average semantic loss caused by wrong detection is defined, and a semantic source encoding strategy is developed to minimize this average semantic loss. To further improve communication reliability, a decoding strategy that utilizes the semantic and context information to recover messages is proposed at the receiver. Extensive simulation results validate the superior performance of our strategies over state-of-the-art semantic coding and decoding policies on different communication channels.
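As a toy illustration of choosing a source code to minimize average semantic loss (a simplified model constructed here, not the paper's strategy): four meanings receive 2-bit codewords over a binary symmetric channel, and we brute-force the assignment with the lowest expected semantic distance under single-bit errors.

```python
from itertools import permutations

def expected_semantic_loss(assign, probs, dist, p_flip):
    """Expected semantic loss of a meaning -> 2-bit-codeword assignment
    over a binary symmetric channel (toy model, single-bit errors only)."""
    inv = {c: m for m, c in enumerate(assign)}   # codeword -> meaning
    loss = 0.0
    for m, c in enumerate(assign):
        for bit in (0, 1):                       # flip one of the two bits
            r = c ^ (1 << bit)
            loss += probs[m] * p_flip * dist[m][inv[r]]
    return loss

def best_assignment(probs, dist, p_flip=0.1):
    """Brute-force the codeword assignment with minimum expected loss."""
    return min(permutations(range(4)),
               key=lambda a: expected_semantic_loss(a, probs, dist, p_flip))
```

The point of the toy: meanings that are semantically close should end up on codewords that are close in Hamming distance, so channel errors cause small semantic loss rather than arbitrary meaning swaps.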
Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs), where most decision variables are zero. As a result, many algorithms use a two-layer encoding approach to optimize the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating non-zero variable positions to optimize the binary variable Mask. However, approximating the sparse distribution of real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets appearing together in a dataset to reveal the correlation between data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address these SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find Mask combinations that can obtain better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
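The frequent-itemset idea can be sketched as follows (a simplified illustration, not the full TELSO optimizer): mine pairs of non-zero Mask positions that co-occur across the masks of elite particles.

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(masks, min_support):
    """Mine pairs of non-zero positions that frequently co-occur in the
    masks of top-performing particles (a simplified sketch of the idea)."""
    counts = Counter()
    for mask in masks:
        ones = [i for i, bit in enumerate(mask) if bit]
        counts.update(combinations(ones, 2))
    return {pair for pair, c in counts.items() if c >= min_support}

# masks of three elite particles over 6 decision variables
elite = [
    [1, 1, 0, 0, 1, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
]
pairs = frequent_pairs(elite, min_support=3)
# only positions (0, 1) co-occur in all three elite masks
```

A full optimizer would then bias new Mask candidates toward these frequent position combinations, rather than toward individually frequent positions.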
Information security has emerged as a key problem because of the rapid evolution of the internet and networks, so the progress of image encryption techniques has become an increasingly pressing concern. Existing image encryption techniques that integrate chaotic systems and DNA computing suffer from a small key space, low confidentiality, low key sensitivity, and easily exploitable weaknesses; these are the main problems motivating the new encryption technique proposed in this study. In our proposed scheme, a three-dimensional Chen's map and a one-dimensional Logistic map are employed to construct a double-layer image encryption scheme. In the confusion stage, different scrambling operations related to the original plain-image pixels are designed using Chen's map: a stream pixel scrambling operation related to the plain image is constructed, and then a block scrambling of the stream-scrambled image is designed. In the diffusion stage, two rounds of pixel diffusion are applied to the confused image for intra-image diffusion. Chen's map, the Logistic map, and DNA computing are employed to construct the diffusion operations: a reverse complementary rule is applied to obtain a new form of DNA, Chen's map is used to produce a pseudorandom DNA sequence, and another DNA form is constructed from the reversed pseudorandom DNA sequence. Finally, the XOR operation is performed multiple times to obtain the encrypted image. According to simulation experiments and security analysis, this approach extends the key space, has great key sensitivity, and is able to withstand various typical attacks. The proposed algorithm achieves an adequate encryption effect: it decreases the correlation between adjacent pixels to near zero while increasing the information entropy, and both the number of pixels change rate (NPCR) and the unified average change intensity (UACI) are very near their optimal values.
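The core mechanism of the diffusion stage, a chaotic keystream XORed with the pixel stream, can be sketched as follows. This shows the Logistic-map component only; Chen's map, the scrambling stages, and the DNA coding layers are omitted, and the parameters are illustrative:

```python
def logistic_keystream(x0, n, r=3.99):
    """Generate n pseudo-random bytes from the logistic map x -> r*x*(1-x).

    A minimal sketch of the diffusion idea only; the paper's scheme also
    uses Chen's map and DNA coding, which are not reproduced here.
    """
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_diffuse(pixels, key):
    """XOR each pixel with the corresponding keystream byte."""
    return [p ^ k for p, k in zip(pixels, key)]

plain = [12, 200, 45, 45, 45]
key = logistic_keystream(0.4, len(plain))
cipher = xor_diffuse(plain, key)
# XOR is an involution: applying the same keystream again restores the image
restored = xor_diffuse(cipher, key)
```

Key sensitivity comes from the chaotic map: a tiny change in `x0` yields a completely different keystream after a few iterations.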
Water exchange between the different compartments of a heterogeneous specimen can be characterized via diffusion magnetic resonance imaging (dMRI). Many analysis frameworks using dMRI data have been proposed to describe exchange, often using a double diffusion encoding (DDE) stimulated echo sequence. Techniques such as diffusion exchange weighted imaging (DEWI) and the filter exchange and rapid exchange models use a specific subset of the full-space DDE signal. In this work, a general representation of the DDE signal was employed with different sampling schemes (namely constant b1, diagonal, and anti-diagonal) from the data reduction models to estimate exchange. A near-uniform sampling scheme was proposed and compared with the other sampling schemes. The filter exchange and rapid exchange models were also applied to estimate exchange with their own subsampling schemes. These subsampling schemes and models were compared on both simulated data and experimental data acquired with a benchtop MR scanner. In synthetic data, the diagonal and near-uniform sampling schemes performed best, their estimates being most consistent with the ground truth. In experimental data, the shifted diagonal and near-uniform sampling schemes outperformed the others, yielding estimates most consistent with the full-space estimation. The results suggest the feasibility of measuring exchange using a general representation of the DDE signal along with variable sampling schemes. In future studies, algorithms could be further developed to optimize sampling schemes and to incorporate additional properties, such as geometry and diffusion anisotropy, into exchange frameworks.
Tea has a history of thousands of years in China, and it plays an important role in people's working and daily lives. Tea culture, rich in connotation, is an important part of Chinese traditional culture, and its existence and development are also of great significance to the diversified development of world culture. Based on Stuart Hall's encoding/decoding theory, this paper analyzes the problems in the spread of Chinese tea culture inside and outside the country and provides solutions from the perspectives of encoding, communication, and decoding. It is expected to provide a reference for the domestic and international dissemination of Chinese tea culture.
The Beijing-Hangzhou Grand Canal carries a wealth of Chinese cultural symbols, showing the lifestyle and wisdom of working people through the ages. The preservation and inheritance of its intangible cultural heritage can help to evoke cultural memories and cultural identification with the Canal and build cultural confidence. This paper applies Stuart Hall's encoding/decoding theory to analyze the dissemination of intangible heritage tourism culture. On the basis of a practical study of the villages along the Beijing-Hangzhou Grand Canal, this paper analyses the problems in the transmission of the Canal's intangible cultural heritage and proposes specific methods to solve them across four processes: encoding, decoding, communication, and secondary encoding, in order to provide references for the transmission of intangible heritage culture at home and abroad.
Automatic crack detection of cement pavement chiefly benefits from the rapid development of deep learning, with convolutional neural networks (CNNs) playing an important role in this field. However, as the performance of crack detection in cement pavement improves, the depth and width of the network structure are significantly increased, which necessitates more computing power and storage space. This limitation hampers the practical implementation of crack detection models on various platforms, particularly portable devices like small mobile devices. To solve these problems, we propose a dual-encoder-based network architecture that focuses on extracting more comprehensive fracture feature information and combines cross-fusion modules and coordinate attention mechanisms for more efficient feature fusion. Firstly, we use small-channel convolution to construct a shallow feature extraction module (SFEM) that extracts low-level feature information of cracks in cement pavement images, in order to obtain more information about cracks from the shallow features of images. In addition, we construct a large kernel atrous convolution (LKAC) module to enhance crack information; it incorporates a coordinate attention mechanism for filtering out non-crack information and large kernel atrous convolutions with different kernel sizes, using different receptive fields to extract more detailed edge and context information. Finally, the three-stage feature map outputs from the shallow feature extraction module are cross-fused with the two-stage feature map outputs from the large kernel atrous convolution module, and the shallow features and detailed edge features are fully fused to obtain the final crack prediction map. We evaluate our method on three public crack datasets: DeepCrack, CFD, and Crack500. Experimental results on the DeepCrack dataset demonstrate the effectiveness of our proposed method compared to state-of-the-art crack detection methods: it achieves a Precision (P) of 87.2%, Recall (R) of 87.7%, and F-score (F1) of 87.4%. Thanks to our lightweight crack detection model, the parameter count of the model in real-world detection scenarios has been significantly reduced to less than 2M. This advancement also facilitates technical support for portable scene detection.
In order to prevent possible casualties and economic loss, accurate prediction of the Remaining Useful Life (RUL) is critical in rail prognostics and health management. However, traditional neural networks have difficulty capturing the long-term dependency relationships of the time series when modeling long sequences of rail damage, due to the coupling relationships among multi-channel data from multiple sensors. In this paper, a novel RUL prediction model with an enhanced pulse separable convolution is used to solve this issue. Firstly, a coding module based on the improved pulse separable convolutional network is established to effectively model the relationships within the data. To enhance the network, an alternate gradient back-propagation method is implemented, and an efficient channel attention (ECA) mechanism is developed to better emphasize the useful pulse characteristics. Secondly, an optimized Transformer encoder is designed to serve as the backbone of the model. It can efficiently capture the relationships within and between the data at each time step of a long time series covering a full life cycle. More importantly, the Transformer encoder is improved by integrating pulse maximum pooling to retain more pulse timing characteristics. Finally, based on the features of the preceding layers, the final predicted RUL value is provided as an end-to-end solution. The empirical findings validate the efficacy of the suggested approach in forecasting rail RUL, surpassing various existing data-driven prognostication techniques. Meanwhile, the proposed method also shows good generalization performance on the PHM2012 bearing dataset.
The visual features of continuous pseudocolor encoding are discussed, and an optimizing design algorithm for a continuous pseudocolor scale is derived. The algorithm restricts the varying range and direction of lightness, hue, and saturation according to correlation and naturalness, automatically calculates the chromaticity coordinates of nodes in a uniform color space to obtain the longest scale path, and then interpolates points between nodes at equal color differences to obtain a continuous pseudocolor scale with visual uniformity. When applied to the pseudocolor encoding of thermal image displays, the results showed that the correlation and naturalness of the original images and the cognitive characteristics of target patterns were well preserved; the dynamic range of visual perception and the amount of visual information increased obviously; the contrast sensitivity of target identification improved; and the blindness of scale design was avoided.
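The final interpolation step, placing points between scale nodes at equal color differences, can be sketched as follows. This uses straight-line interpolation between illustrative node coordinates in an (L, a, b)-like space; the paper's node-placement optimization is not reproduced:

```python
def interpolate_scale(nodes, steps_per_segment):
    """Linearly interpolate between scale nodes so that adjacent scale
    entries have equal color difference within each segment.

    nodes: list of (L, a, b)-like tuples (illustrative stand-ins for the
    optimized node coordinates in a uniform color space).
    """
    scale = []
    for start, end in zip(nodes, nodes[1:]):
        for k in range(steps_per_segment):
            t = k / steps_per_segment
            scale.append(tuple(s + t * (e - s) for s, e in zip(start, end)))
    scale.append(nodes[-1])      # close the path at the last node
    return scale

# a two-segment scale: dark blue -> gray -> warm yellow (illustrative values)
scale = interpolate_scale([(20, 10, -40), (55, 0, 0), (90, 5, 60)], 4)
```

Because the interpolation is linear within a (nominally) uniform color space, equal parameter steps correspond to approximately equal perceived color differences, which is the visual-uniformity property the scale is after.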
On-chip global buses in deep sub-micron designs consume significant amounts of energy and have large propagation delays. Thus, minimizing energy dissipation and propagation delay is an important design objective. In this paper, we propose a new spatial and temporal encoding approach for generic on-chip global buses with repeaters that enables higher performance while reducing peak energy and average energy. The proposed encoding approach exploits the benefits of a temporal encoding circuit and spatial bus-invert coding techniques to simultaneously eliminate opposite transitions on adjacent wires and reduce the number of self-transitions and coupling-transitions. In the design process of applying encoding techniques for reduced bus delay and energy, we present a repeater insertion design methodology to determine the repeater size and inter-repeater bus length, which minimizes the total bus energy dissipation while satisfying target delay and slew-rate constraints. This methodology is employed to obtain optimal energy versus delay trade-offs under slew-rate constraints for various encoding techniques.
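The spatial bus-invert component referenced above is a classic technique and can be sketched directly (the generic textbook form, not the paper's combined spatio-temporal encoder):

```python
def bus_invert(prev, data, width=8):
    """Classic bus-invert coding: if more than half the wires would
    toggle, transmit the inverted word plus a set invert bit.

    prev, data: current bus state and next word, as width-bit integers.
    Returns (word_on_bus, invert_bit).
    """
    toggles = bin(prev ^ data).count("1")    # Hamming distance
    if toggles > width // 2:
        mask = (1 << width) - 1
        return data ^ mask, 1                # send inverted word
    return data, 0

# 0x00 -> 0xFF would toggle all 8 wires; inverting sends 0x00 + invert bit
word, inv = bus_invert(0x00, 0xFF)
```

The scheme bounds self-transitions per transfer to at most half the bus width (plus the invert wire), which is where the peak-energy reduction comes from.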
In this paper, a 3-D video encoding scheme suitable for digital TV/HDTV (high definition television) is studied through computer simulation. The encoding scheme is designed to provide a good match to human vision. Basically, this involves transmission of low frequency luminance information at full frame rate for good motion rendition and transmission of high frequency luminance signal at reduced frame rate for good detail in static images.
The translation activity is a process of interlinguistic transmission of information realized through encoding and decoding. Encoding and decoding, as cognitive practices operated in objective contexts, are inevitably selective owing to contextual constraints. The translator, as the intermediary agent, connects the original author (encoder) and the target readers (decoders), shouldering the dual duties of decoder and encoder, and the translator's subjectivity is thus inevitably shaped by the selectivity of encoding and decoding.
As a high-quality seismic imaging method, full waveform inversion (FWI) can accurately reconstruct the physical parameter model of the subsurface medium. However, application of FWI in seismic data processing is computationally expensive, especially for three-dimensional complex medium inversion. Introducing blended-source technology into frequency-domain FWI can greatly reduce the computational burden and improve the efficiency of the inversion. However, this method has two issues: first, crosstalk noise is caused by interference between the sources involved in the encoding, resulting in an inversion result with some artifacts; second, it is more sensitive to ambient noise than conventional FWI, so noisy data results in a poor inversion. This paper introduces a frequency-group encoding method to suppress crosstalk noise, and presents a frequency-domain auto-adapting FWI based on source-encoding technology. The conventional FWI method and the source-encoding-based FWI method are combined using an auto-adapting mechanism. This improvement can both guarantee the quality of the inversion result and maximize the inversion efficiency.
Based on a detailed analysis of the advantages and disadvantages of existing connected-component labeling (CCL) algorithms, a new algorithm for binary connected-component labeling based on run-length encoding (RLE) and union-find sets is put forward. The new algorithm uses the RLE run as the basic processing unit, converts the label merging of connected runs into set grouping in accordance with an equivalence relation, and uses union-find sets, the realization method of set grouping, to solve the label merging of connected runs. The label-merging procedure has been optimized: the union operation is modified by adding a "weighted rule" to avoid producing a degenerate tree, and "path compression" is adopted when implementing the find operation, so the time complexity of label merging is O(nα(n)). Experiments show that the new algorithm can label connected components of any shape quickly and exactly, saves memory, and facilitates subsequent image analysis.
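The union-find structure with the "weighted rule" and path compression described above can be sketched as follows (a generic implementation of the cited technique, with run labels as set elements):

```python
class UnionFind:
    """Union-find with union by size (the "weighted rule") and path
    compression, giving near-constant amortized operations, O(alpha(n))."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:          # path compression pass
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:      # weighted rule: hang the
            ra, rb = rb, ra                    # smaller tree under the larger
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

# merging equivalent run labels: runs 0, 1, 2 touch; run 3 stays separate
uf = UnionFind(4)
uf.union(0, 1)
uf.union(1, 2)
```

The weighted rule keeps trees shallow (height O(log n) even without compression), which is exactly the degenerate-tree case the abstract says the plain union operation must avoid.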
Brain encoding and decoding via functional magnetic resonance imaging (fMRI) are two important aspects of visual perception neuroscience. Although previous researchers have made significant advances in brain encoding and decoding models, existing methods still require improvement using advanced machine learning techniques. For example, traditional methods usually build the encoding and decoding models separately, and are prone to overfitting on a small dataset. In fact, effectively unifying the encoding and decoding procedures may allow for more accurate predictions. In this paper, we first review the existing encoding and decoding methods and discuss the potential advantages of a "bidirectional" modeling strategy. Next, we show that there are correspondences between deep neural networks and human visual streams in terms of architecture and computational rules. Furthermore, deep generative models (e.g., variational autoencoders (VAEs) and generative adversarial networks (GANs)) have produced promising results in studies on brain encoding and decoding. Finally, we propose that the dual learning method, which was originally designed for machine translation tasks, could help to improve the performance of encoding and decoding models by leveraging large-scale unpaired data.
Funding: the National Natural Science Foundation of China (Grant Nos. 12105090 and 12175057).
Funding: the Department of Education of Hunan Province, China (No. 21A0541) and the U.S. Department of Energy (No. DE-FG03-93ER40773). H.Z. acknowledges financial support from the Key Laboratory of Quark and Lepton Physics at Central China Normal University (No. QLPL2024P01).
Funding: supported in part by the National Natural Science Foundation of China under Grant Nos. 61931020, U19B2024, 62171449, and 62001483, and in part by the Science and Technology Innovation Program of Hunan Province under Grant No. 2021JJ40690.
Funding: supported by the Scientific Research Project of Xiang Jiang Lab (22XJ02003), the University Fundamental Research Fund (23-ZZCX-JDZ-28), the National Science Fund for Outstanding Young Scholars (62122093), the National Natural Science Foundation of China (72071205), the Hunan Graduate Research Innovation Project (ZC23112101-10), the Hunan Natural Science Foundation Regional Joint Project (2023JJ50490), the Science and Technology Project for Young and Middle-aged Talents of Hunan (2023TJ-Z03), and the Science and Technology Innovation Program of Hunan Province (2023RC1002).
Funding: the Deanship for Research & Innovation, Ministry of Education in Saudi Arabia, through Project Number IFP22UQU4400257DSR031.
Funding: the Swedish Foundation for International Cooperation in Research and Higher Education (STINT) and the Swedish Research Council (Dnr 2022e04715).
文摘Water exchange between the different compartments of a heterogeneous specimen can be characterized via diffusion magnetic resonance imaging(dMRI).Many analysis frameworks using dMRI data have been proposed to describe exchange,often using a double diffusion encoding(DDE)stimulated echo sequence.Techniques such as diffusion exchange weighted imaging(DEWI)and the filter exchange and rapid exchange models,use a specific subset of the full space DDE signal.In this work,a general representation of the DDE signal was employed with different sampling schemes(namely constant b1,diagonal and anti-diagonal)from the data reduction models to estimate exchange.A near-uniform sampling scheme was proposed and compared with the other sampling schemes.The filter exchange and rapid exchange models were also applied to estimate exchange with their own subsampling schemes.These subsampling schemes and models were compared on both simulated data and experimental data acquired with a benchtop MR scanner.In synthetic data,the diagonal and near-uniform sampling schemes performed the best due to the consistency of their estimates with the ground truth.In experimental data,the shifted diagonal and near-uniform sampling schemes outperformed the others,yielding the most consistent estimates with the full space estimation.The results suggest the feasibility of measuring exchange using a general representation of the DDE signal along with variable sampling schemes.In future studies,algorithms could be further developed for the optimization of sampling schemes,as well as incorporating additional properties,such as geometry and diffusion anisotropy,into exchange frameworks.
Abstract: Tea has a history of thousands of years in China, and it plays an important role in people's working and daily lives. Tea culture, rich in connotation, is an important part of Chinese traditional culture, and its existence and development are also of great significance to the diversified development of world culture. Based on Stuart Hall's encoding/decoding theory, this paper analyzes the problems in the spread of Chinese tea at home and abroad and provides solutions from the perspectives of encoding, communication, and decoding. It is expected to provide a reference for the domestic and international dissemination of Chinese tea culture.
Funding: Supported by the National Social Science Fund Project (No. 20BH151).
Abstract: The Beijing-Hangzhou Grand Canal carries a wealth of Chinese cultural symbols, showing the lifestyle and wisdom of working people through the ages. The preservation and inheritance of its intangible cultural heritage can help to evoke cultural memories of the Canal, foster cultural identification, and build cultural confidence. This paper applies Stuart Hall's encoding/decoding theory to analyze the dissemination of intangible heritage tourism culture. On the basis of a practical study of the villages along the Beijing-Hangzhou Grand Canal, this paper analyzes the problems in the transmission of the Canal's intangible cultural heritage and proposes specific solutions across four processes: encoding, decoding, communication, and secondary encoding, in order to offer references for the transmission of intangible heritage culture at home and abroad.
Funding: Supported by the National Natural Science Foundation of China (No. 62176034), the Science and Technology Research Program of Chongqing Municipal Education Commission (No. KJZD-M202300604), and the Natural Science Foundation of Chongqing (Nos. cstc2021jcyj-msxmX0518 and 2023NSCQ-MSX1781).
Abstract: Automatic crack detection for cement pavement chiefly benefits from the rapid development of deep learning, with convolutional neural networks (CNNs) playing an important role in this field. However, as the performance of crack detection in cement pavement improves, the depth and width of network structures increase significantly, which demands more computing power and storage space. This limitation hampers the practical deployment of crack detection models on various platforms, particularly portable devices such as small mobile devices. To solve these problems, we propose a dual-encoder network architecture that focuses on extracting more comprehensive crack feature information and combines cross-fusion modules with coordinate attention mechanisms for more efficient feature fusion. Firstly, we use small-channel convolutions to construct a shallow feature extraction module (SFEM) that extracts low-level crack feature information from cement pavement images, in order to obtain more information about cracks in the shallow features of images. In addition, we construct a large kernel atrous convolution (LKAC) module to enhance crack information; it incorporates a coordinate attention mechanism to filter out non-crack information and applies large kernel atrous convolutions with different kernel sizes, using different receptive fields to extract more detailed edge and context information. Finally, the three-stage feature map output from the shallow feature extraction module is cross-fused with the two-stage feature map output from the large kernel atrous convolution module, fully fusing the shallow features and detailed edge features to obtain the final crack prediction map. We evaluate our method on three public crack datasets: DeepCrack, CFD, and Crack500. Experimental results on the DeepCrack dataset demonstrate the effectiveness of our proposed method compared with state-of-the-art crack detection methods, achieving Precision (P) of 87.2%, Recall (R) of 87.7%, and F-score (F1) of 87.4%. Thanks to our lightweight crack detection model, the parameter count in real-world detection scenarios has been reduced to less than 2M. This advancement also provides technical support for portable scene detection.
Abstract: To prevent possible casualties and economic loss, accurate prediction of the Remaining Useful Life (RUL) is critical in rail prognostics and health management. However, traditional neural networks struggle to capture the long-term dependencies of the time series when modeling long sequences of rail damage, owing to the coupling of multi-channel data from multiple sensors. In this paper, a novel RUL prediction model with an enhanced pulse separable convolution is proposed to solve this issue. Firstly, an encoding module based on an improved pulse separable convolutional network is established to effectively model the relationships within the data. To enhance the network, an alternating gradient back-propagation method is implemented, and an efficient channel attention (ECA) mechanism is developed to better emphasize useful pulse characteristics. Secondly, an optimized Transformer encoder is designed to serve as the backbone of the model. It can efficiently capture the relationships within and between the data at each time step of a long, full-life-cycle time series. More importantly, the Transformer encoder is improved by integrating pulse maximum pooling to retain more pulse timing characteristics. Finally, based on the features of the preceding layers, the final predicted RUL value is produced as an end-to-end solution. The empirical findings validate the efficacy of the proposed approach in forecasting rail RUL, surpassing various existing data-driven prognostic techniques. The proposed method also shows good generalization performance on the PHM2012 bearing dataset.
Abstract: The visual features of continuous pseudocolor encoding are discussed, and an optimizing design algorithm for continuous pseudocolor scales is derived. The algorithm restricts the varying range and direction of lightness, hue, and saturation according to correlation and naturalness, automatically calculates the chromaticity coordinates of the nodes in a uniform color space to obtain the longest scale path, and then interpolates points between nodes at equal color differences to obtain a continuous pseudocolor scale with visual uniformity. When applied to the pseudocolor encoding of thermal image displays, the results showed that the correlation and naturalness of the original images and the cognitive characteristics of target patterns were well preserved; the dynamic range of visual perception and the amount of visual information increased obviously; the contrast sensitivity of target identification improved; and the blindness of scale design was avoided.
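The equal-color-difference interpolation step mentioned in this abstract can be sketched as follows. This is an illustrative assumption-laden sketch, not the paper's algorithm: it assumes the scale nodes are already given as coordinates in a uniform color space such as CIELAB, where straight-line interpolation yields equal ΔE steps within each segment; node optimization and gamut handling are omitted.

```python
import numpy as np

def interpolate_scale(nodes, steps_per_segment):
    """Insert points between consecutive scale nodes at equal color differences.

    `nodes` are coordinates in a uniform color space (e.g. CIELAB), so equal
    parameter steps along each straight segment give equal Delta-E spacing.
    """
    nodes = np.asarray(nodes, dtype=float)
    scale = [nodes[0]]
    for a, b in zip(nodes[:-1], nodes[1:]):
        for t in range(1, steps_per_segment + 1):
            # Linear interpolation between node a and node b
            scale.append(a + (b - a) * t / steps_per_segment)
    return np.array(scale)
```

In a uniform color space the Euclidean distance approximates perceived color difference, so equal parameter steps along a segment produce a visually uniform scale.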
Abstract: On-chip global buses in deep sub-micron designs consume significant amounts of energy and have large propagation delays. Thus, minimizing energy dissipation and propagation delay is an important design objective. In this paper, we propose a new spatial and temporal encoding approach for generic on-chip global buses with repeaters that enables higher performance while reducing peak and average energy. The proposed encoding approach exploits the benefits of a temporal encoding circuit and spatial bus-invert coding techniques to simultaneously eliminate opposite transitions on adjacent wires and reduce the number of self-transitions and coupling transitions. In applying encoding techniques for reduced bus delay and energy, we present a repeater insertion design methodology to determine the repeater size and inter-repeater bus length that minimize total bus energy dissipation while satisfying target delay and slew-rate constraints. This methodology is employed to obtain optimal energy versus delay trade-offs under slew-rate constraints for various encoding techniques.
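The spatial bus-invert component referenced in this abstract follows a well-known rule: if more than half of the bus wires would toggle relative to the previously transmitted word, send the complement and assert an extra invert line. A minimal behavioral sketch (the temporal encoding circuit and repeater sizing are hardware concerns beyond a software illustration):

```python
def bus_invert_encode(prev_bus, data):
    """Bus-invert coding: transmit the complement of `data` (with invert flag = 1)
    when more than half the wires would otherwise toggle versus `prev_bus`."""
    transitions = sum(a != b for a, b in zip(prev_bus, data))
    if transitions > len(data) // 2:
        return [1 - b for b in data], 1   # inverted word, invert line asserted
    return list(data), 0                  # word unchanged, invert line low

def bus_invert_decode(word, invert_flag):
    """Receiver side: undo the complement when the invert line is asserted."""
    return [1 - b for b in word] if invert_flag else list(word)
```

With this rule the number of toggling wires per transfer is bounded by half the bus width (plus the invert line), which caps peak switching energy.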
Abstract: In this paper, a 3-D video encoding scheme suitable for digital TV/HDTV (high-definition television) is studied through computer simulation. The encoding scheme is designed to provide a good match to human vision. Basically, this involves transmitting low-frequency luminance information at the full frame rate for good motion rendition and transmitting the high-frequency luminance signal at a reduced frame rate for good detail in static images.
Abstract: Translation is a process of interlinguistic transmission of information realized through encoding and decoding. Encoding and decoding, cognitive practices operating in objective contexts, are inevitably selective owing to contextual constraints. The translator, as intermediary agent, connects the original author (encoder) and the target readers (decoders), shouldering the dual duties of decoder and encoder, and the translator's subjectivity is therefore inescapably shaped by the selectivity of encoding and decoding.
Funding: Financially supported by the National Natural Science Foundation of China (No. 41074075/D0409) and the National Science and Technology Major Project (No. 2011ZX05025-001-04).
Abstract: As a high-quality seismic imaging method, full waveform inversion (FWI) can accurately reconstruct the physical parameter model of the subsurface medium. However, applying FWI in seismic data processing is computationally expensive, especially for three-dimensional complex-medium inversion. Introducing blended-source technology into frequency-domain FWI can greatly reduce the computational burden and improve inversion efficiency. However, this method has two issues: first, crosstalk noise is caused by interference between the sources involved in the encoding, leaving artifacts in the inversion result; second, it is more sensitive to ambient noise than conventional FWI, so noisy data yield a poor inversion. This paper introduces a frequency-group encoding method to suppress crosstalk noise and presents a frequency-domain auto-adapting FWI based on source-encoding technology. The conventional FWI method and the source-encoding-based FWI method are combined through an auto-adapting mechanism. This improvement both guarantees the quality of the inversion result and maximizes inversion efficiency.
Abstract: Based on a detailed analysis of the advantages and disadvantages of existing connected-component labeling (CCL) algorithms, a new algorithm for labeling connected components in binary images based on run-length encoding (RLE) and union-find sets is put forward. The new algorithm uses RLE as the basic processing unit, converts the label merging of connected runs into set grouping under an equivalence relation, and uses union-find sets, the standard realization of set grouping, to merge the labels of connected runs. The label merging procedure has been optimized: the union operation is modified with a "weighted rule" to avoid degenerate trees, and "path compression" is adopted in the find operation, so the time complexity of label merging is O(nα(n)). Experiments show that the new algorithm labels connected components of any shape quickly and exactly, saves memory, and facilitates subsequent image analysis.
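The run-based labeling idea can be sketched as follows: each row is run-length encoded, runs in adjacent rows that overlap in columns are merged with a union-find structure using union by rank (the "weighted rule") and path compression, and compact labels are assigned from the set roots. This is an illustrative reimplementation of the general technique, not the paper's exact code.

```python
def runs_of_row(row):
    """Run-length encode one row: list of (start, end) column spans of foreground pixels."""
    runs, start = [], None
    for j, v in enumerate(row):
        if v and start is None:
            start = j
        elif not v and start is not None:
            runs.append((start, j - 1))
            start = None
    if start is not None:
        runs.append((start, len(row) - 1))
    return runs

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path halving, a standard form of path compression
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        # Weighted rule (union by rank): attach the shallower tree under the deeper
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

def label_components(image):
    """4-connected component labeling of a binary image via RLE runs and union-find."""
    all_runs = [runs_of_row(row) for row in image]
    ids, idx = [], 0
    for runs in all_runs:
        ids.append(list(range(idx, idx + len(runs))))
        idx += len(runs)
    uf = UnionFind(idx)
    for i in range(1, len(image)):
        for k, (s, e) in enumerate(all_runs[i]):
            for k2, (s2, e2) in enumerate(all_runs[i - 1]):
                if s <= e2 and s2 <= e:   # column overlap => 4-connected
                    uf.union(ids[i][k], ids[i - 1][k2])
    # Assign compact labels (1, 2, ...) per union-find root; background stays 0
    labels, out = {}, [[0] * len(image[0]) for _ in image]
    for i, runs in enumerate(all_runs):
        for k, (s, e) in enumerate(runs):
            lab = labels.setdefault(uf.find(ids[i][k]), len(labels) + 1)
            for j in range(s, e + 1):
                out[i][j] = lab
    return out
```

Because whole runs are merged instead of individual pixels, the number of union-find operations scales with the number of runs rather than with the image area.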
Funding: This work was supported by the National Key Research and Development Program of China (2018YFC2001302), the National Natural Science Foundation of China (91520202), the Chinese Academy of Sciences Scientific Equipment Development Project (YJKYYQ20170050), the Beijing Municipal Science and Technology Commission (Z181100008918010), the Youth Innovation Promotion Association of the Chinese Academy of Sciences, and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB32040200).
Abstract: Brain encoding and decoding via functional magnetic resonance imaging (fMRI) are two important aspects of visual perception neuroscience. Although previous researchers have made significant advances in brain encoding and decoding models, existing methods still require improvement using advanced machine learning techniques. For example, traditional methods usually build the encoding and decoding models separately and are prone to overfitting on small datasets. In fact, effectively unifying the encoding and decoding procedures may allow for more accurate predictions. In this paper, we first review the existing encoding and decoding methods and discuss the potential advantages of a "bidirectional" modeling strategy. Next, we show that there are correspondences between deep neural networks and human visual streams in terms of architecture and computational rules. Furthermore, deep generative models (e.g., variational autoencoders (VAEs) and generative adversarial networks (GANs)) have produced promising results in studies on brain encoding and decoding. Finally, we propose that the dual learning method, originally designed for machine translation tasks, could help improve the performance of encoding and decoding models by leveraging large-scale unpaired data.