Automatic crack detection of cement pavement chiefly benefits from the rapid development of deep learning, with convolutional neural networks (CNN) playing an important role in this field. However, as the performance of crack detection in cement pavement improves, the depth and width of the network structure are significantly increased, which necessitates more computing power and storage space. This limitation hampers the practical deployment of crack detection models on various platforms, particularly portable devices such as small mobile devices. To solve these problems, we propose a dual-encoder-based network architecture that focuses on extracting more comprehensive crack feature information and combines cross-fusion modules and coordinate attention mechanisms for more efficient feature fusion. Firstly, we use small-channel convolutions to construct a shallow feature extraction module (SFEM) that extracts low-level feature information of cracks in cement pavement images, in order to obtain more crack information from the shallow features of the images. In addition, we construct a large kernel atrous convolution (LKAC) module to enhance crack information; it incorporates a coordinate attention mechanism to filter out non-crack information and uses large kernel atrous convolutions with different kernel sizes, whose different receptive fields extract more detailed edge and context information. Finally, the three-stage feature maps output by the shallow feature extraction module are cross-fused with the two-stage feature maps output by the large kernel atrous convolution module, and the shallow features and detailed edge features are fully fused to obtain the final crack prediction map. We evaluate our method on three public crack datasets: DeepCrack, CFD, and Crack500. Experimental results on the DeepCrack dataset demonstrate the effectiveness of our proposed method compared to state-of-the-art crack detection methods, achieving a Precision (P) of 87.2%, Recall (R) of 87.7%, and F-score (F1) of 87.4%. Thanks to our lightweight crack detection model, the parameter count in real-world detection scenarios is reduced to less than 2M, which also provides technical support for portable scene detection.
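To make the coordinate-attention filtering step concrete, here is a minimal PyTorch sketch of a coordinate attention block as commonly formulated (pooling along height and width, a shared 1x1 convolution, then per-direction gating). The layer names and reduction ratio are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate attention block: encodes position along H and W into channel
    attention, so non-crack responses can be suppressed."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # keep H, squeeze W
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # keep W, squeeze H
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x_h = self.pool_h(x)                       # (b, c, h, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)   # (b, c, w, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (b, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (b, c, 1, w)
        return x * a_h * a_w

# Example: filter a 64-channel crack feature map.
feats = torch.randn(2, 64, 128, 128)
print(CoordinateAttention(64)(feats).shape)  # torch.Size([2, 64, 128, 128])
```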
Linear minimum mean square error (MMSE) detection has been shown to achieve near-optimal performance for massive multiple-input multiple-output (MIMO) systems, but it inevitably involves complicated matrix inversion, which entails high complexity. To avoid the exact matrix inversion, a considerable number of implicit and explicit approximate-matrix-inversion-based detection methods have been proposed. By combining the advantages of both explicit and implicit matrix inversion, this paper introduces a new low-complexity signal detection algorithm. Firstly, the relationship between implicit and explicit techniques is analyzed. Then, an enhanced Newton iteration method is introduced to realize approximate MMSE detection for massive MIMO uplink systems. The proposed improved Newton iteration significantly reduces the complexity of the conventional Newton iteration; however, its complexity is still high for later iterations, so it is applied only for the first two iterations. For subsequent iterations, we propose a novel trace iterative method (TIM) based low-complexity algorithm, which has significantly lower complexity than higher-order Newton iterations. Convergence guarantees of the proposed detector are also provided. Numerical simulations verify that the proposed detector exhibits significant performance enhancement over recently reported iterative detectors and achieves close-to-MMSE performance while retaining its low-complexity advantage for systems with hundreds of antennas.
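As a concrete reference point, the sketch below approximates the MMSE filter W = (H^H H + sigma^2 I)^(-1) H^H with a diagonally initialized Newton iteration X_{k+1} = X_k (2I - A X_k). The initialization and iteration count are illustrative assumptions; the paper's enhanced Newton and TIM refinements are not reproduced here.

```python
import numpy as np

def approx_mmse_detect(H, y, sigma2, newton_iters=2):
    """Approximate MMSE detection for the massive MIMO uplink.

    Avoids the exact inverse of A = H^H H + sigma2*I by running a few Newton
    iterations X_{k+1} = X_k (2I - A X_k), started from the inverse of the
    diagonal of A (a common choice when A is diagonally dominant).
    """
    A = H.conj().T @ H + sigma2 * np.eye(H.shape[1])
    b = H.conj().T @ y                       # matched-filter output
    X = np.diag(1.0 / np.diag(A).real)       # initial approximate inverse
    I = np.eye(A.shape[0])
    for _ in range(newton_iters):
        X = X @ (2 * I - A @ X)              # quadratic convergence to A^{-1}
    return X @ b

# Toy example: 128 base-station antennas, 16 single-antenna users, QPSK symbols.
rng = np.random.default_rng(0)
H = (rng.standard_normal((128, 16)) + 1j * rng.standard_normal((128, 16))) / np.sqrt(2)
x = (rng.choice([-1, 1], 16) + 1j * rng.choice([-1, 1], 16)) / np.sqrt(2)
sigma2 = 0.01
y = H @ x + np.sqrt(sigma2 / 2) * (rng.standard_normal(128) + 1j * rng.standard_normal(128))
x_hat = approx_mmse_detect(H, y, sigma2)
print(np.round(x_hat[:4], 2))  # should be close to the transmitted symbols
```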
In order to prevent possible casualties and economic loss, accurate prediction of the Remaining Useful Life (RUL) is critical in rail prognostics and health management. However, traditional neural networks struggle to capture the long-term dependencies of the time series when modeling long sequences of rail damage, due to the coupling of multi-channel data from multiple sensors. In this paper, a novel RUL prediction model with an enhanced pulse separable convolution is used to solve this issue. Firstly, a coding module based on the improved pulse separable convolutional network is established to effectively model the relationships within the data. To enhance the network, an alternate gradient back-propagation method is implemented, and an efficient channel attention (ECA) mechanism is developed to better emphasize the useful pulse characteristics. Secondly, an optimized Transformer encoder is designed to serve as the backbone of the model; it can efficiently capture the relationships within and across the data at each time step of a long, full-life-cycle time series. More importantly, the Transformer encoder is improved by integrating pulse maximum pooling to retain more pulse timing characteristics. Finally, based on the features of the preceding layers, the final predicted RUL value is produced, yielding an end-to-end solution. The empirical findings validate the efficacy of the suggested approach in forecasting the rail RUL, surpassing various existing data-driven prognostication techniques. Meanwhile, the proposed method also shows good generalization performance on the PHM2012 bearing dataset.
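For reference, here is a minimal PyTorch sketch of the efficient channel attention (ECA) idea mentioned above: global average pooling over time, a 1-D convolution across channels, and a sigmoid gate. The kernel size of 3 and the module layout are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ECA1d(nn.Module):
    """Efficient channel attention for multi-channel time-series features:
    squeeze the time axis, model local cross-channel interaction with a 1-D
    convolution, and re-weight the channels."""
    def __init__(self, k_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        y = x.mean(dim=-1, keepdim=True)      # (b, c, 1) global average pool
        y = self.conv(y.transpose(1, 2))      # treat channels as a sequence: (b, 1, c)
        w = torch.sigmoid(y.transpose(1, 2))  # (b, c, 1) channel weights
        return x * w

# Example: re-weight 32 sensor channels of a vibration window.
signal = torch.randn(4, 32, 1024)
print(ECA1d()(signal).shape)  # torch.Size([4, 32, 1024])
```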
The goal of street-to-aerial cross-view image geo-localization is to determine the location of a query street-view image by retrieving the aerial-view image of the same place. The drastic viewpoint and appearance gap between aerial-view and street-view images poses a huge challenge for this task. In this paper, we propose a novel multiscale attention encoder to capture the multiscale contextual information of the aerial/street-view images. To bridge the domain gap between these two views, we first use an inverse polar transform to make the street-view images approximately aligned with the aerial-view images. Then, the proposed multiscale attention encoder is applied to convert each image into a feature representation under the guidance of the learnt multiscale information. Finally, we propose a novel global mining strategy that enables the network to pay more attention to hard negative exemplars. Experiments on standard benchmark datasets show that our approach obtains an 81.39% top-1 recall rate on the CVUSA dataset and 71.52% on the CVACT dataset, achieving state-of-the-art performance and outperforming most of the existing methods significantly.
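A minimal NumPy sketch of the inverse-polar-style alignment step described above: each pixel of a square, aerial-like output is mapped to a (radius, azimuth) pair and sampled from the street-view panorama, so the panorama is rearranged into an overhead layout. The exact mapping, output size, and interpolation used in the paper are assumptions here.

```python
import numpy as np

def inverse_polar_transform(pano: np.ndarray, out_size: int = 256) -> np.ndarray:
    """Rearrange a street-view panorama (H, W, C) into an aerial-like square layout.

    Each output pixel is converted to polar coordinates around the image centre;
    the azimuth selects a panorama column and the radius selects a panorama row
    (near the camera -> bottom of the panorama). Nearest-neighbour sampling keeps
    the sketch short; the paper's exact mapping may differ.
    """
    h, w, _ = pano.shape
    c = (out_size - 1) / 2.0
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    dx, dy = xs - c, ys - c
    radius = np.sqrt(dx ** 2 + dy ** 2)
    azimuth = np.mod(np.arctan2(dx, -dy), 2 * np.pi)   # 0 = "north", clockwise
    cols = np.clip((azimuth / (2 * np.pi) * w).astype(int), 0, w - 1)
    rows = np.clip(((1.0 - radius / c) * (h - 1)).astype(int), 0, h - 1)
    return pano[rows, cols]

# Example: a 128x512 RGB panorama rearranged into a 256x256 overhead-style image.
pano = np.random.rand(128, 512, 3)
print(inverse_polar_transform(pano).shape)  # (256, 256, 3)
```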
The topological connectivity information derived from the brain functional network can bring new insights for diagnosing and analyzing dementia disorders. The brain functional network is suitable for bridging the correlation between abnormal connectivities and dementia disorders. However, it is challenging to access considerable amounts of brain functional network data, which hinders the widespread application of data-driven models in dementia diagnosis. In this study, a novel distribution-regularized adversarial graph auto-encoder (DAGAE) with a transformer is proposed to generate new synthetic brain functional networks to augment the brain functional network dataset, improving the dementia diagnosis accuracy of data-driven models. Specifically, the label distribution is estimated to regularize the latent space learned by the graph encoder, which can make the learning process stable and the learned representation robust. Also, the transformer generator is devised to map the node representations into node-to-node connections by exploring the long-term dependence of highly correlated distant brain regions. The typical topological properties and discriminative features can be preserved entirely. Furthermore, the generated brain functional networks improve the prediction performance of different classifiers, which can be applied to analyze other cognitive diseases. Experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that the proposed model can generate good brain functional networks. The classification results show that adding generated data achieves the best accuracy of 85.33%, sensitivity of 84.00%, and specificity of 86.67%. The proposed model also achieves superior performance compared with other related augmented models. Overall, the proposed model effectively improves cognitive disease diagnosis by generating diverse brain functional networks.
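To ground the encoder/generator split, here is a minimal PyTorch sketch of a graph auto-encoder in this spirit: a two-layer GCN-style encoder produces node embeddings, and a small Transformer layer mixes them before node-to-node connection scores are read out. The layer sizes, the use of nn.TransformerEncoderLayer, and the inner-product readout are illustrative assumptions, not the DAGAE architecture itself.

```python
import torch
import torch.nn as nn

class GraphAutoEncoder(nn.Module):
    """Encode a brain functional network (node features + adjacency) into node
    embeddings, then generate a reconstructed adjacency matrix from them."""
    def __init__(self, in_dim: int, hid_dim: int = 64, emb_dim: int = 32):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, emb_dim)
        self.mixer = nn.TransformerEncoderLayer(
            d_model=emb_dim, nhead=4, dim_feedforward=64, batch_first=True)

    def encode(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Simple GCN-style propagation: A_hat @ X @ W with row-normalized adjacency.
        a_hat = adj + torch.eye(adj.shape[-1])
        a_hat = a_hat / a_hat.sum(dim=-1, keepdim=True)
        h = torch.relu(a_hat @ self.lin1(x))
        return a_hat @ self.lin2(h)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        z = self.mixer(self.encode(x, adj))            # long-range mixing across regions
        return torch.sigmoid(z @ z.transpose(-1, -2))  # node-to-node connection scores

# Example: 90 brain regions with 16-dimensional node features.
x, adj = torch.randn(1, 90, 16), (torch.rand(1, 90, 90) > 0.8).float()
print(GraphAutoEncoder(16)(x, adj).shape)  # torch.Size([1, 90, 90])
```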
The detection of brain disease is an essential issue in medical and research areas. Deep learning techniques have shown promising results in detecting and diagnosing brain diseases using magnetic resonance imaging (MRI) images. These techniques involve training neural networks on large datasets of MRI images, allowing the networks to learn patterns and features indicative of different brain diseases. However, several challenges and limitations still need to be addressed to further improve the accuracy and effectiveness of these techniques. This paper implements a Feature Enhanced Stacked Auto Encoder (FESAE) model to detect brain diseases. The standard stacked auto encoder's results are trivial and not robust enough to boost the system's accuracy. Therefore, the standard Stacked Auto Encoder (SAE) is replaced with a Stacked Feature Enhanced Auto Encoder, whose feature enhancement function efficiently and effectively extracts non-trivial features with less activation energy from an image. The proposed model consists of four stages. First, pre-processing is performed to remove noise, and the greyscale image is converted to Red, Green, and Blue (RGB) to enhance feature details for discriminative feature extraction. Second, feature extraction is performed to extract significant features for classification using the Discrete Wavelet Transform (DWT) and channelization. Third, classification is performed to classify MRI images into four major classes: Normal, Tumor, Brain Stroke, and Alzheimer's. Finally, the FESAE model outperforms state-of-the-art machine learning and deep learning methods such as Artificial Neural Network (ANN), SAE, Random Forest (RF), and Logistic Regression (LR) by achieving a high accuracy of 98.61% on a dataset of 2000 MRI images. The proposed model has significant potential for assisting radiologists in diagnosing brain diseases more accurately and improving patient outcomes.
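As an illustration of the DWT-based feature extraction stage, the sketch below decomposes each RGB channel with a single-level 2-D Haar wavelet transform and flattens the sub-band coefficients into one feature vector. The wavelet family, decomposition level, and flattening are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_features(rgb_image: np.ndarray) -> np.ndarray:
    """Single-level 2-D DWT per colour channel; the approximation and detail
    sub-bands (cA, cH, cV, cD) are flattened into one feature vector."""
    features = []
    for ch in range(rgb_image.shape[-1]):
        cA, (cH, cV, cD) = pywt.dwt2(rgb_image[..., ch], "haar")
        features.extend(band.ravel() for band in (cA, cH, cV, cD))
    return np.concatenate(features)

# Example: a 64x64 RGB MRI slice -> one feature vector for the auto-encoder stage.
img = np.random.rand(64, 64, 3)
print(dwt_features(img).shape)  # (12288,) = 3 channels x 4 sub-bands x 32 x 32
```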
Latent information is difficult to obtain from text alone in speech synthesis. Studies show that features extracted from speech can provide additional information to aid text encoding. In the field of speech encoding, work has been conducted on two aspects: encoding speech frame by frame, and encoding the whole utterance into a single vector. But the scale in both cases is fixed, so encoding speech at an adjustable scale to capture more latent information is worth investigating. However, current alignment approaches only support frame-by-frame encoding and speech-to-vector encoding; proposing a new alignment approach that supports adjustable-scale speech encoding remains a challenge. This paper presents a dynamic speech encoder with a new alignment approach that works in conjunction with frame-by-frame encoding and speech-to-vector encoding. The speech feature from our model achieves three functions. First, the speech feature can reconstruct the original speech while its length is equal to the text length. Second, our model can obtain text embeddings from speech, and the encoded speech feature is similar to the text embedding result. Finally, it can transfer the style of the synthesized speech and make it more similar to a given reference speech.
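One way to picture an adjustable-scale encoding is to pool frame-level speech features into one vector per aligned text token, so the encoded sequence has the text length. The sketch below does exactly that from a given duration alignment; duration-based average pooling is an illustrative assumption, not the alignment approach proposed in the paper.

```python
import torch

def pool_frames_to_tokens(frames: torch.Tensor, durations: torch.Tensor) -> torch.Tensor:
    """Average frame-level features over each token's aligned span.

    frames:    (n_frames, dim) acoustic features (e.g. mel-encoder outputs)
    durations: (n_tokens,) number of frames aligned to each text token
    returns:   (n_tokens, dim) token-scale speech features
    """
    assert int(durations.sum()) == frames.shape[0], "alignment must cover all frames"
    spans = torch.split(frames, durations.tolist(), dim=0)
    return torch.stack([span.mean(dim=0) for span in spans])

# Example: 100 frames aligned to 4 text tokens.
frames = torch.randn(100, 256)
durations = torch.tensor([30, 20, 25, 25])
print(pool_frames_to_tokens(frames, durations).shape)  # torch.Size([4, 256])
```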
This study proposes a novel particle encoding mechanism that seamlessly incorporates the quantum properties of particles, with a specific emphasis on constituent quarks. The primary objective of this mechanism is to facilitate the digital registration and identification of a wide range of particle information. Its design ensures easy integration with different event generators and digital simulations commonly used in high-energy experiments. Moreover, this innovative framework can be easily expanded to encode complex multi-quark states comprising up to nine valence quarks and accommodating an angular momentum of up to 99/2. This versatility and scalability make it a valuable tool.
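Purely as an illustration of digit-based particle encoding (in the spirit of PDG-style Monte Carlo numbering, not the scheme proposed in this paper), the sketch below packs up to nine valence-quark flavour digits and twice the total angular momentum (2J up to 99) into a single integer ID and decodes it back. The layout is hypothetical.

```python
def encode_state(quarks, two_j: int) -> int:
    """Pack up to nine valence-quark flavour digits (1=d ... 6=t) and 2J (<= 99)
    into one integer: [q1..q9 as digits][2J as two digits].
    Hypothetical layout for illustration only."""
    assert 1 <= len(quarks) <= 9 and all(1 <= q <= 6 for q in quarks)
    assert 0 <= two_j <= 99
    digits = "".join(str(q) for q in quarks)
    return int(digits) * 100 + two_j

def decode_state(code: int):
    """Invert encode_state: return (quark flavour digits, 2J)."""
    two_j = code % 100
    quarks = [int(d) for d in str(code // 100)]
    return quarks, two_j

# Example: a proton-like uud state with J = 1/2 (2J = 1).
code = encode_state([2, 2, 1], two_j=1)
print(code, decode_state(code))  # 22101 ([2, 2, 1], 1)
```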
Increasing research has focused on semantic communication, the goal of which is to accurately convey the meaning rather than merely transmitting symbols from the sender to the receiver. In this paper, we design a novel encoding and decoding semantic communication framework, which adopts the semantic information and the contextual correlations between items to optimize the performance of a communication system over various channels. On the sender side, the average semantic loss caused by wrong detection is defined, and a semantic source encoding strategy is developed to minimize the average semantic loss. To further improve communication reliability, a decoding strategy that utilizes the semantic and context information to recover messages is proposed at the receiver. Extensive simulation results validate the superior performance of our strategies over state-of-the-art semantic coding and decoding policies on different communication channels.
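To make the "average semantic loss" objective concrete, the sketch below computes the expected semantic loss of an encoding/channel pair as the sum over messages m of p(m) times the sum over detected messages m' of P(m' | m) times a semantic distance d(m, m'). The specific definitions of p, P, and d here are illustrative assumptions.

```python
import numpy as np

def average_semantic_loss(p_msg: np.ndarray,
                          p_detect: np.ndarray,
                          sem_dist: np.ndarray) -> float:
    """Expected semantic loss of a source-encoding / channel pair.

    p_msg:    (M,)   prior probability of each message
    p_detect: (M, M) p_detect[i, j] = P(receiver decodes message j | message i sent)
    sem_dist: (M, M) semantic distance d(i, j), 0 when the meaning is preserved
    """
    return float(np.sum(p_msg[:, None] * p_detect * sem_dist))

# Toy example: 3 messages; messages 0 and 1 are near-synonyms (small semantic loss
# if confused), message 2 means something different.
p_msg = np.array([0.5, 0.3, 0.2])
p_detect = np.array([[0.90, 0.08, 0.02],
                     [0.08, 0.90, 0.02],
                     [0.03, 0.03, 0.94]])
sem_dist = np.array([[0.0, 0.1, 1.0],
                     [0.1, 0.0, 1.0],
                     [1.0, 1.0, 0.0]])
print(round(average_semantic_loss(p_msg, p_detect, sem_dist), 4))
```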
Leveraging the extraordinary phenomena of quantum superposition and quantum correlation, quantum computing offers unprecedented potential for addressing challenges beyond the reach of classical computers. This paper tackles two pivotal challenges in the realm of quantum computing: firstly, the development of an effective encoding protocol for translating classical data into quantum states, a critical step for any quantum computation, since different encoding strategies can significantly influence quantum computer performance; secondly, the need to counteract the inevitable noise that can hinder quantum acceleration. Our primary contribution is the introduction of a novel variational data encoding method, grounded in quantum regression algorithm models. By adapting the learning concept from machine learning, we render data encoding a learnable process, which allows us to study the role of quantum correlation in data encoding. Through numerical simulations of various regression tasks, we demonstrate the efficacy of our variational data encoding, particularly after learning from instructional data. Moreover, we delve into the role of quantum correlation in enhancing task performance, especially in noisy environments. Our findings underscore the critical role of quantum correlation in not only bolstering performance but also mitigating noise interference, thus advancing the frontier of quantum computing.
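A minimal NumPy sketch of a learnable (variational) angle-encoding circuit in the spirit described above: two features are scaled by trainable weights, encoded as single-qubit RY rotations, entangled with a CNOT, and read out as a Z-Z expectation value. The two-qubit layout and readout are illustrative assumptions rather than the paper's exact encoding.

```python
import numpy as np

def ry(theta: float) -> np.ndarray:
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
ZZ = np.diag([1.0, -1.0, -1.0, 1.0])  # Z (x) Z observable

def encode_and_measure(x: np.ndarray, w: np.ndarray, b: np.ndarray) -> float:
    """Variational angle encoding: each feature x_i is mapped to angle w_i*x_i + b_i,
    applied as RY on its own qubit, then a CNOT creates correlation between the two
    encoded qubits. Returns the <Z(x)Z> expectation used as the model output."""
    state = np.zeros(4); state[0] = 1.0               # |00>
    u = np.kron(ry(w[0] * x[0] + b[0]), ry(w[1] * x[1] + b[1]))
    state = CNOT @ (u @ state)
    return float(state @ ZZ @ state)

# Example: encode a 2-feature sample with (initially untrained) parameters.
x = np.array([0.4, -1.2])
w, b = np.array([1.0, 1.0]), np.array([0.0, 0.0])
print(round(encode_and_measure(x, w, b), 4))
```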
Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs), where most decision variables are zero. As a result, many algorithms use a two-layer encoding approach to optimize the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating non-zero variable positions to optimize the binary variable Mask. However, approximating the sparse distribution of real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets appearing together in a dataset to reveal the correlation between data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address these SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find Mask combinations that can obtain better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
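To illustrate the two-layer encoding that TELSO builds on, the sketch below composes a candidate solution as the element-wise product of a binary Mask (which variables are non-zero) and a real-valued Dec (their magnitudes), and evaluates it on a toy sparse objective. The objective and sampling here are illustrative assumptions, not the paper's benchmark problems.

```python
import numpy as np

def compose_solution(mask: np.ndarray, dec: np.ndarray) -> np.ndarray:
    """Two-layer encoding: the binary Mask selects the non-zero positions,
    the real vector Dec supplies their values."""
    return mask * dec

def toy_sparse_objective(x: np.ndarray, target: np.ndarray) -> float:
    """A toy objective whose optimum is sparse: squared error to a sparse target
    plus a penalty on the number of non-zero variables."""
    return float(np.sum((x - target) ** 2) + 0.01 * np.count_nonzero(x))

rng = np.random.default_rng(1)
n = 1000                                   # large-scale decision space
target = np.zeros(n); target[:5] = 1.0     # only 5 non-zero variables matter

# One particle of the swarm: a sparse binary Mask plus a real Dec vector.
mask = (rng.random(n) < 0.01).astype(float)
dec = rng.standard_normal(n)
x = compose_solution(mask, dec)
print(np.count_nonzero(x), round(toy_sparse_objective(x, target), 3))
```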
To address the difficulties in fusing multi-mode sensor data for complex industrial machinery, an adaptive deep coupling convolutional auto-encoder (ADCCAE) fusion method was proposed. First, the multi-mode features extracted synchronously by the CCAE were stacked and fed to the multi-channel convolution layers for fusion. Then, the fused data was passed to the fully connected layers for compression and fed to the Softmax module for classification. Finally, the coupling loss function coefficients and the network parameters were optimized through an adaptive approach using the gray wolf optimization (GWO) algorithm. Experimental comparisons showed that the proposed ADCCAE fusion model was superior to existing models for multi-mode data fusion.
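As a reference for the GWO-based tuning step, below is a compact grey wolf optimizer sketch that minimizes a generic objective (a placeholder sphere function stands in for the coupling-loss/validation criterion). The swarm size, iteration count, and objective are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def grey_wolf_optimize(objective, dim: int, bounds=(-1.0, 1.0),
                       n_wolves: int = 20, n_iters: int = 100, seed: int = 0):
    """Minimize `objective` with the grey wolf optimizer: the alpha, beta and
    delta wolves (three best solutions) guide how the rest of the pack moves."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iters):
        fitness = np.apply_along_axis(objective, 1, wolves)
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2.0 - 2.0 * t / n_iters                 # exploration factor decays to 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new_pos += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new_pos / 3.0, lo, hi)
    fitness = np.apply_along_axis(objective, 1, wolves)
    return wolves[np.argmin(fitness)]

# Example: tune 4 coefficients by minimizing a placeholder criterion.
best = grey_wolf_optimize(lambda v: np.sum(v ** 2), dim=4)
print(np.round(best, 3))
```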