Cemented paste backfill (CPB) is a key technology for green mining in metal mines, and tailings thickening is the primary stage of the CPB process. However, poor flocculation and substandard concentrations of thickened tailings often occur, and the rheological properties and concentration evolution of the thickened tailings remain unclear. Moreover, traditional laboratory thickening experiments have yet to characterize these rheological properties quantitatively. In this study, a flocculation-condition optimization experiment based on the Box-Behnken design (BBD) was performed, and two response values were investigated: concentration and the mean weighted chord length (MWCL) of flocs. Optimal flocculation conditions were thus obtained. In addition, the rheological properties and concentration evolution for different flocculant dosages and ultrafine-tailings contents were tested and compared under shear, compression, and compression-shear coupling experimental conditions. The results show that the shear yield stress under compression and under compression-shear coupling increases with the compressive yield stress, whereas the shear yield stress increases only slightly under shear alone. Ranked from low to high, the shear yield stress under the different thickening conditions follows the order shear, compression, compression-shear coupling. Under compression and compression-shear coupling, the concentration first increases rapidly with the compressive yield stress and then increases slowly, whereas the concentration increases only slightly under shear. Ranked from low to high, the concentration under the different thickening conditions follows the same order: shear, compression, compression-shear coupling. Finally, the evolution mechanism of the flocs and drainage channels during the thickening of the thickened tailings under the different experimental conditions was revealed.
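The abstract does not give the BBD run table, but the structure of a Box-Behnken design is standard: every pair of factors is run at its four ±1 corner combinations while the remaining factors sit at their center level, plus replicated center runs. A minimal sketch (not the authors' code; factor names such as flocculant dosage are only illustrative) that generates the coded design matrix:

```python
from itertools import combinations

def box_behnken(k, center_runs=3):
    """Generate a Box-Behnken design matrix in coded units (-1, 0, +1).

    Each pair of the k factors is run at the four (+/-1, +/-1) corner
    combinations while all remaining factors are held at 0; center_runs
    all-zero runs are appended to estimate pure error.
    """
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs.extend([[0] * k for _ in range(center_runs)])
    return runs

# e.g. 3 factors (say, flocculant dosage, solids content, feed rate):
# 3 pairs x 4 corners = 12 edge runs + 3 center runs = 15 runs
design = box_behnken(3)
```

For three factors this yields the usual 15-run BBD; each response (concentration, MWCL) would then be fitted with a quadratic response-surface model over these runs.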
Pre-trained models using self-supervised learned representations have achieved major breakthroughs in non-parallel-corpus voice conversion (VC). With the wide use of self-supervised pre-trained representations (SSPR), the features extracted by pre-trained models have been shown to carry richer content information. This work proposes a VC model based on SSPR that combines vector quantization (VQ) with connectionist temporal classification (CTC). The SSPR extracted by the pre-trained model is fed to an end-to-end model to improve one-shot voice-conversion quality. Effectively disentangling the content representation from the speaker representation is a key problem in voice conversion. Taking SSPR as the preliminary content information, VQ is used to disentangle content and speaker representations from speech. However, VQ alone can only discretize the content information and can hardly separate a pure content representation from speech. To further remove residual speaker information from the content representation, a CTC loss is proposed to guide the content encoder. CTC not only acts as an auxiliary network to speed up model convergence; its extra text supervision can also be optimized jointly with VQ for complementary performance, learning a pure content representation. The speaker representation is learned through style embedding, and the two representations are fed into the system for voice conversion. The proposed method was evaluated on the open-source CMU dataset and the VCTK corpus. Experimental results show that it achieves a mel-cepstral distortion (MCD) of 8.896 dB objectively, and mean opinion scores (MOS) of 3.29 for speech naturalness and 3.22 for speaker similarity subjectively, all better than the baseline model; the method thus achieves the best performance in both conversion quality and speaker similarity.
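The VQ step described above discretizes each content frame by snapping it to its nearest codebook vector. A minimal NumPy sketch of that lookup (an illustration only, not the paper's model; the codebook and frame shapes are assumptions):

```python
import numpy as np

def vector_quantize(features, codebook):
    """Map each feature frame to its nearest codebook vector (L2 distance).

    features: (T, D) array of content frames; codebook: (K, D) array.
    Returns the (T,) index sequence and the (T, D) quantized frames.
    """
    # squared L2 distance between every frame and every code vector
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)
    return idx, codebook[idx]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))          # 8 codes of dimension 4
# frames built near codes 2, 5, 2 should map back to those indices
frames = codebook[[2, 5, 2]] + 0.01 * rng.normal(size=(3, 4))
idx, quantized = vector_quantize(frames, codebook)
```

In VQ-VAE-style training the non-differentiable `argmin` is typically bypassed with a straight-through gradient estimator; the CTC loss in the paper then supervises these discrete content indices with text.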
With the advent of the information-security era, it is necessary to guarantee the privacy, accuracy, and dependable transfer of images. This study presents a new approach to the encryption and compression of color images, predicated on 2D compressed sensing (CS) and a hyperchaotic system. First, an optimized Arnold scrambling algorithm is applied to the initial color images to ensure strong security. Then, the processed images are concurrently encrypted and compressed using 2D CS; here, chaotic sequences replace traditional random measurement matrices to increase the system's security. Third, the processed images are re-encrypted using a combination of permutation and diffusion algorithms. In addition, the 2D projected gradient with embedding decryption (2DPG-ED) algorithm is used to reconstruct the images. Compared with the traditional reconstruction algorithm, the 2DPG-ED algorithm improves security, reduces computational complexity, and offers better robustness. The experimental results and performance analysis indicate that the algorithm can withstand malicious attacks and that the method is effective.
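The abstract's "optimized" Arnold scrambling is a variant of the classical 2D Arnold cat map, which permutes the pixels of a square image via (x, y) → (x + y, x + 2y) mod N and is reversible because the map is a bijection with a finite period. A sketch of the classical map (not the paper's optimized version) on one image channel:

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Scramble a square image channel with the Arnold cat map:
    pixel (x, y) moves to ((x + y) mod N, (x + 2y) mod N)."""
    n = img.shape[0]
    assert img.shape[1] == n, "Arnold map needs a square image"
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nx, ny = (x + y) % n, (x + 2 * y) % n
        scrambled = np.empty_like(out)
        scrambled[nx, ny] = out[x, y]   # bijective: each target hit once
        out = scrambled
    return out

img = np.array([[1, 2], [3, 4]])
once = arnold_scramble(img, 1)       # pixels permuted, values preserved
```

Because the map is periodic (for a 2x2 grid the period is 3), descrambling can be done by simply iterating the map until the period completes; practical schemes instead apply the inverse map directly.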
As conventional communication systems based on classic information theory closely approach the Shannon capacity, semantic communication is emerging as a key enabling technology for further improving communication performance. However, it remains unsettled how to represent semantic information and how to characterise the theoretical limits of semantic-oriented compression and transmission. In this paper, we consider a semantic source characterised by a set of correlated random variables whose joint probability distribution can be described by a Bayesian network. We give the information-theoretic limit on lossless compression of the semantic source and introduce a low-complexity encoding method that exploits the conditional independence. We further characterise the limits on lossy compression of the semantic source and the upper and lower bounds of the rate-distortion function. We also investigate lossy compression of the semantic source with two-sided information at the encoder and decoder, and obtain the corresponding rate-distortion function. We prove that the optimal code for the semantic source is the combination of the optimal codes for each conditionally independent set given the side information.
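The lossless limit exploited here is the chain rule of entropy: for a Bayesian network, H(X1, …, Xn) = Σi H(Xi | Pa(Xi)), so encoding each variable conditioned on its parents attains the joint-entropy limit. A toy numerical check (my own illustrative probabilities, not from the paper) for a two-node network X → Y:

```python
from math import log2

def H(dist):
    """Shannon entropy in bits of a probability dictionary."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Toy semantic source described by the Bayesian network X -> Y
pX = {0: 0.5, 1: 0.5}
pY_given_X = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}

# Joint distribution p(x, y) = p(x) * p(y | x)
joint = {(x, y): pX[x] * pY_given_X[x][y] for x in pX for y in (0, 1)}

# Factorized rate H(X) + H(Y|X): encode X, then Y given its parent X
HY_given_X = sum(pX[x] * H(pY_given_X[x]) for x in pX)
rate_factorized = H(pX) + HY_given_X

# Chain rule guarantees this equals the joint-entropy limit H(X, Y)
rate_joint = H(joint)
```

The factorized encoder is low-complexity because each conditional distribution is small, while a direct joint code would range over the full product alphabet.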
The massive computational complexity and memory requirements of artificial-intelligence models impede their deployability on edge computing devices of the Internet of Things (IoT). While Power-of-Two (PoT) quantization has been proposed to improve the efficiency of edge inference for Deep Neural Networks (DNNs), existing PoT schemes require a huge amount of bit-wise manipulation and have large memory overhead, and their efficiency is bounded by the bottlenecks of computation latency and memory footprint. To tackle this challenge, we present an efficient inference approach based on PoT quantization and model compression. An integer-only scalar PoT quantization (IOS-PoT) is designed jointly with a distribution-loss regularizer, wherein the regularizer minimizes quantization errors and training disturbances. Additionally, a two-stage model compression is developed to effectively reduce the memory requirement and alleviate bandwidth usage in communications of networked heterogeneous learning systems. A product look-up table (P-LUT) inference scheme replaces bit-shifting with only indexing and addition operations, achieving low-latency computation and enabling efficient edge accelerators. Finally, comprehensive experiments on Residual Networks (ResNets) and efficient architectures with the Canadian Institute for Advanced Research (CIFAR), ImageNet, and Real-world Affective Faces Database (RAF-DB) datasets indicate that our approach achieves a 2x to 10x reduction in both weight size and computation cost compared to state-of-the-art methods. A P-LUT accelerator prototype is implemented on the Xilinx KV260 Field Programmable Gate Array (FPGA) platform for accelerating convolution operations; performance results show that, compared to the conventional bit-shifting scheme, P-LUT reduces memory footprint by 1.45x and achieves more than 3x power efficiency and 2x resource efficiency.
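The core of any PoT scheme is rounding each weight to the nearest power of two, sign(w) * 2^e, so multiplications reduce to shifts (or, in the P-LUT scheme, table lookups). A rough illustration of that rounding step only (not the paper's IOS-PoT, which adds integer-only scaling and a distribution-loss regularizer):

```python
import numpy as np

def pot_quantize(w, n_bits=4):
    """Round each weight to the nearest power of two: sign(w) * 2**e.

    The exponent is clipped to the n_bits-wide range
    [1 - 2**(n_bits - 1), 0], so every nonzero magnitude is a
    right-shift of 1.0 and zero stays zero.
    """
    w = np.asarray(w, dtype=np.float64)
    sign = np.sign(w)                 # 0 for zero weights
    mag = np.abs(w)
    lo = 1 - 2 ** (n_bits - 1)        # smallest representable exponent
    e = np.full(w.shape, lo, dtype=np.int64)
    nz = mag > 0
    # nearest exponent in log domain, clipped to the representable range
    e[nz] = np.clip(np.round(np.log2(mag[nz])), lo, 0)
    return sign * np.power(2.0, e)

q = pot_quantize([0.26, -0.5, 0.9, 0.0])
```

With 4-bit exponents the representable magnitudes are {2^-7, …, 2^-1, 1}; a per-layer scale (as in integer-only schemes) would map real weight ranges onto this grid.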
Funding (cemented paste backfill study): financially supported by the National Natural Science Foundation of China (Nos. 52130404 and 52304121); the Fundamental Research Funds for the Central Universities (No. FRF-TP-22-112A1); the Guangdong Basic and Applied Basic Research Foundation (No. 2021A1515110161); ANID (Chile) through Fondecyt project 1210610; the Centro de Modelamiento Matemático (BASAL funds for Centers of Excellence FB210005); the CRHIAM project ANID/FONDAP/15130015 and ANID/FONDAP/1523A0001; and the Anillo project ANID/ACT210030.
Funding (color-image encryption study): this work was supported in part by the National Natural Science Foundation of China under Grants 71571091 and 71771112; the State Key Laboratory of Synthetical Automation for Process Industries Fundamental Research Funds under Grant PAL-N201801; and the Excellent Talent Training Project of University of Science and Technology Liaoning under Grant 2019RC05.
Funding (semantic communication study): partly supported by NSFC under Grants No. 62293481 and No. 62201505, and partly by the SUTD-ZJU IDEA Grant (SUTD-ZJU (VP) 202102).
Funding (PoT quantization study): this work was supported by the Open Fund Project of the State Key Laboratory of Intelligent Vehicle Safety Technology under Grant No. IVSTSKL-202311; the Key Projects of the Science and Technology Research Programme of the Chongqing Municipal Education Commission under Grant No. KJZD-K202301505; the Cooperation Project between Chongqing Municipal Undergraduate Universities and Institutes Affiliated to the Chinese Academy of Sciences in 2021 under Grant No. HZ2021015; and the Chongqing Graduate Student Research Innovation Program under Grant No. CYS240801.