A notable portion of cachelines in real-world workloads exhibits inner non-uniform access behaviors. However, modern cache management rarely considers this fine-grained feature, which impacts the effective cache capacity of contemporary high-performance spacecraft processors. To harness these non-uniform access behaviors, an efficient cache replacement framework featuring an auxiliary cache specifically designed to retain evicted hot data was proposed. This framework reconstructs the cache replacement policy, facilitating data migration between the main cache and the auxiliary cache. Unlike traditional cacheline-granularity policies, the approach excels at identifying and evicting infrequently used data, thereby optimizing cache utilization. The evaluation shows impressive performance improvement, especially on workloads with irregular access patterns. Benefiting from fine granularity, the proposal achieves superior storage efficiency compared with commonly used cache management schemes, providing a potential optimization opportunity for modern resource-constrained processors, such as spacecraft processors. Furthermore, the framework complements existing modern cache replacement policies and can be seamlessly integrated with minimal modifications, enhancing their overall efficacy.
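As a rough illustration of the kind of mechanism described above, the following is a minimal sketch, assuming an LRU main cache, a per-line access counter as the notion of "hot", and illustrative capacities and thresholds; the paper's actual granularity, structure, and migration policy are not specified here.

```python
# Minimal sketch (not the paper's implementation): an LRU main cache backed by a
# small auxiliary cache that retains lines evicted while still "hot". Capacities,
# the hotness threshold, and the promotion rule are illustrative assumptions.
from collections import OrderedDict

class AuxiliaryCacheSim:
    def __init__(self, main_lines=64, aux_lines=8, hot_threshold=4):
        self.main = OrderedDict()   # line address -> access count (LRU order)
        self.aux = OrderedDict()    # evicted-but-hot lines
        self.main_lines, self.aux_lines = main_lines, aux_lines
        self.hot_threshold = hot_threshold
        self.hits = self.misses = 0

    def access(self, addr):
        if addr in self.main:                 # main-cache hit
            self.main[addr] += 1
            self.main.move_to_end(addr)
            self.hits += 1
        elif addr in self.aux:                # auxiliary-cache hit:
            count = self.aux.pop(addr)        # migrate the line back into the main cache
            self.hits += 1
            self._insert_main(addr, count + 1)
        else:                                 # miss in both structures
            self.misses += 1
            self._insert_main(addr, 1)

    def _insert_main(self, addr, count):
        if len(self.main) >= self.main_lines:
            victim, vcount = self.main.popitem(last=False)   # evict the LRU line
            if vcount >= self.hot_threshold:                 # park hot victims in aux
                if len(self.aux) >= self.aux_lines:
                    self.aux.popitem(last=False)
                self.aux[victim] = vcount
        self.main[addr] = count

# Example usage: replay a mixed trace (a small reused set plus streaming addresses)
# and report hit/miss counts.
import random
random.seed(0)
sim = AuxiliaryCacheSim()
for step in range(5000):
    sim.access(random.randrange(16))   # frequently reused addresses
    sim.access(10_000 + step)          # streaming, never-reused addresses
print("hits:", sim.hits, "misses:", sim.misses)
```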
The historical significance of the Stern–Gerlach (SG) experiment lies in its provision of the initial evidence for space quantization. Over time, its sequential form has evolved into an elegant paradigm that effectively illustrates the fundamental principles of quantum theory. To date, the practical implementation of the sequential SG experiment has not been fully achieved. In this study, we demonstrate the capability of programmable quantum processors to simulate the sequential SG experiment. Specific parametric shallow quantum circuits, suited to the limitations of current noisy quantum hardware, are given to replicate the functionality of SG devices with the ability to perform measurements in different directions. Surprisingly, Wigner's SG interferometer can be readily implemented in our sequential quantum circuit. With the identical circuits, it is also feasible to implement Wheeler's delayed-choice experiment. We propose the use of cross-shaped programmable quantum processors to showcase sequential experiments, and the simulation results show strong agreement with theoretical predictions. With the rapid advancement of cloud-based quantum computing, such as BAQIS Quafu, we believe the proposed solution is well suited for deployment on the cloud, allowing public accessibility. Our findings not only expand the potential applications of quantum computers but also contribute to a deeper comprehension of the fundamental principles underlying quantum theory.
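To make the sequential-measurement picture concrete, here is a minimal state-vector sketch in NumPy; it is not the paper's parametric hardware circuits, but it models each ideal SG device as a projective spin-1/2 measurement along an arbitrary direction and chains two of them to reproduce the textbook statistics.

```python
# Minimal state-vector sketch (not the paper's hardware circuits): a sequential
# chain of ideal Stern-Gerlach measurements on a single spin-1/2, each along an
# arbitrary direction n = (sin(theta)cos(phi), sin(theta)sin(phi), cos(theta)).
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def sg_measure(state, theta, phi, rng):
    """Projectively measure the spin along direction (theta, phi).
    Returns (outcome +1 or -1, post-measurement state)."""
    n_sigma = (np.sin(theta) * np.cos(phi) * SX
               + np.sin(theta) * np.sin(phi) * SY
               + np.cos(theta) * SZ)
    vals, vecs = np.linalg.eigh(n_sigma)          # eigenvalues -1 and +1
    probs = np.abs(vecs.conj().T @ state) ** 2    # Born rule
    k = rng.choice(2, p=probs / probs.sum())
    return vals[k], vecs[:, k]                    # collapse onto the chosen eigenstate

# Example: prepare spin-up along z, measure along x, then along z again.
# The intermediate x measurement erases the z information, so the final z outcome
# is +/-1 with ~50/50 statistics, the textbook sequential-SG signature.
rng = np.random.default_rng(0)
counts = {+1: 0, -1: 0}
for _ in range(10000):
    state = np.array([1, 0], dtype=complex)              # |z,+>
    _, state = sg_measure(state, np.pi / 2, 0.0, rng)    # SG oriented along x
    out, _ = sg_measure(state, 0.0, 0.0, rng)            # SG oriented along z
    counts[int(round(out.real))] += 1
print(counts)
```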
Reentrant manufacturing systems generally feature multi-variety, large-scale, mixed-flow production. To address these characteristics, a model of the reentrant hybrid flow shop scheduling problem with batch processors (BPRHFSP) is constructed, and an improved multi-objective mayfly algorithm (MOMA) is proposed to solve it. Decoding rules are proposed for the single-piece processing stage and the batch-processing stage; an opposition-based learning initialization strategy built on the Logistic chaotic map and improved mayfly mating and mutation strategies are designed, improving the quality of the algorithm's initial solutions and its local search capability; and a mayfly movement strategy based on variable neighborhood descent search is designed according to the encoding rules, steering the population in better directions. Simulation experiments on a large set of test instances of different scales verify that MOMA is more effective than and superior to traditional algorithms for solving the BPRHFSP. The proposed model reflects the basic characteristics of production and achieves the goals of reducing makespan, machine load, and carbon emissions.
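One of the components named above, the chaotic-map-plus-opposition population initialization, can be sketched as follows. This is an illustrative continuous-encoding toy (logistic map with mu = 4, simple box bounds, a single sphere fitness), not the paper's permutation/batch encoding or its multi-objective selection.

```python
# Minimal sketch (illustrative, not the paper's exact formulation): population
# initialization with a Logistic chaotic map plus opposition-based learning.
import numpy as np

def logistic_opposition_init(pop_size, dim, lo, hi, fitness, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.01, 0.99, size=(pop_size, dim))   # chaotic seeds in (0, 1)
    for _ in range(20):                                  # iterate the logistic map
        x = 4.0 * x * (1.0 - x)                          # x_{k+1} = mu * x_k * (1 - x_k), mu = 4
    cand = lo + (hi - lo) * x                            # map chaos onto the search box
    opp = lo + hi - cand                                 # opposition-based counterparts
    both = np.concatenate([cand, opp])                   # evaluate candidates and opposites ...
    scores = np.array([fitness(ind) for ind in both])
    best = np.argsort(scores)[:pop_size]                 # ... and keep the better half
    return both[best]

# Example with a toy single-objective fitness (sphere function); the real BPRHFSP
# uses a permutation/batch encoding and multiple objectives instead.
pop = logistic_opposition_init(pop_size=30, dim=10, lo=-5.0, hi=5.0,
                               fitness=lambda v: float(np.sum(v * v)))
print(pop.shape)   # (30, 10)
```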
When it comes to decreasing margins and increasing energy efficiency in near-threshold and sub-threshold processors, timing error resilience may be viewed as a potentially attractive alternative worth examining. On the other hand, the currently employed approaches have certain restrictions, including high levels of design complexity, severe time constraints on error consolidation and propagation, and uncontaminated architectural registers (ARs). The design of near-threshold (NT) circuits is becoming the approach of choice for the construction of energy-efficient digital circuits; one of its downsides is a reduction in performance caused by the exponentially decreased driving current. Numerous studies have advised applying NT techniques to chip multiprocessors as a means to preserve outstanding energy efficiency while minimising performance loss. Over the past several years, there has been clear growth in interest in the development of low-energy artificial intelligence (AI) hardware, with both large corporations and start-ups producing products that compete on varying degrees of performance and energy use. The ultimate goal of this technology is to provide levels of efficiency and performance that cannot be achieved with graphics processing units or general-purpose CPUs; to achieve this, several processing units are integrated into a single chip and the hardware is designed with a number of unique properties. In this study, an Energy Efficient Hyperparameter Tuned Deep Neural Network (EEHPT-DNN) model for a variation-tolerant near-threshold processor was developed. To improve the energy efficiency of AI, the EEHPT-DNN model employs several AI techniques, focusing mostly on embedded technologies positioned at the network's edge. The presented model employs a deep stacked sparse autoencoder (DSSAE) with the objective of creating a variation-tolerant NT processor. The time-consuming method of tuning hyperparameters by trial and error is replaced with the marine predators optimization algorithm (MPO), which is used to adjust the hyperparameters of the DSSAE model. To validate that the proposed EEHPT-DNN model offers a higher degree of functionality, a full simulation study is conducted and the results are analysed from a variety of perspectives. According to the results of the study, which compared numerous DL models, the EEHPT-DNN model performed significantly better than the others.
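The tune-then-train idea can be illustrated with a heavily simplified sketch. A toy quadratic "validation loss" stands in for training and evaluating the deep stacked sparse autoencoder, and a simple elite-perturbation population search stands in for the full marine predators optimization algorithm; the hyperparameters, bounds, and population settings are assumptions for the example.

```python
# Minimal sketch (assumptions throughout): metaheuristic hyperparameter search in
# the spirit of an MPO-tuned DSSAE. The surrogate loss and the elite-perturbation
# update are stand-ins, not the paper's optimizer or network.
import numpy as np

rng = np.random.default_rng(0)

def validation_loss(hp):
    """Toy surrogate: pretend the best hyperparameters are lr=1e-3, sparsity=0.05."""
    lr, sparsity = hp
    return (np.log10(lr) + 3.0) ** 2 + (sparsity - 0.05) ** 2

lo = np.array([1e-5, 0.0])     # search bounds: [learning rate, sparsity weight]
hi = np.array([1e-1, 0.5])

pop = rng.uniform(lo, hi, size=(20, 2))            # initial candidate hyperparameters
scores = np.array([validation_loss(p) for p in pop])

for it in range(50):
    elite = pop[np.argmin(scores)]                 # best candidate so far ("top predator")
    step = rng.normal(size=pop.shape) * (hi - lo) * 0.1 * (1 - it / 50)
    trial = np.clip(elite + step, lo, hi)          # perturb around the elite, shrinking steps
    trial_scores = np.array([validation_loss(p) for p in trial])
    improved = trial_scores < scores               # greedy, element-wise replacement
    pop = np.where(improved[:, None], trial, pop)
    scores = np.where(improved, trial_scores, scores)

best = pop[np.argmin(scores)]
print("chosen hyperparameters (lr, sparsity):", best)
```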
The automation process is a very important pillar of Industry 4.0. One of the first steps is the control of motors to improve production efficiency and generate energy savings. In mass production industries, techniques such as digital signal processing (DSP) systems are implemented to control motors. These systems are efficient but very expensive for certain applications. From this arises the need for a controller capable of handling AC and DC motors that improves efficiency and maintains low energy consumption. This project presents the design of an adaptive control system for AC induction and brushless DC motors, which is applicable to any type of plant in the industry. The design was made possible by using Matlab and tools such as the digital signal processor (DSP) and Simulink. Through an extensive investigation of the state of the art, three models needed to represent the control system were specified: the first for the AC motor, the second for the DC motor, and the third for the DSP control; splitting the system this way keeps the probability of failure low. These models were then programmed in Simulink, integrating the three main models into one. In this way, a controller for AC induction motors, specifically squirrel-cage motors, and for brushless DC motors has been designed. The final model achieves a response time of 0.25 seconds, which is optimal for this type of application, where response times of 2e-3 to 3 seconds are expected.
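For a feel of the control loop involved, here is a minimal discrete-time sketch. The paper's adaptive scheme and motor models are not reproduced; a fixed-gain PI speed loop closed around a first-order motor approximation stands in for them, and the plant time constant, gains, and sample rate are assumptions chosen so the step response settles in roughly 0.2 s, broadly consistent with the reported 0.25 s response time.

```python
# Minimal sketch (illustrative only): a discrete PI speed loop around a first-order
# motor approximation, run at a DSP-like 1 kHz control rate.
import numpy as np

dt = 1e-3            # 1 kHz control loop
tau, K = 0.05, 1.0   # first-order motor approximation: tau*dw/dt = -w + K*u
kp, ki = 1.0, 20.0   # PI gains (hand-tuned for this toy plant)

w, integ = 0.0, 0.0          # motor speed (normalized) and integrator state
target = 1.0                 # speed setpoint (normalized)
settle_time = None

for k in range(int(1.0 / dt)):                            # simulate 1 second
    err = target - w
    integ += err * dt
    u = np.clip(kp * err + ki * integ, -10.0, 10.0)       # PI output with saturation
    w += dt * (-w + K * u) / tau                          # Euler step of the plant
    if settle_time is None and abs(err) < 0.02:           # within 2% of the setpoint
        settle_time = (k + 1) * dt

print("approximate 2% settling time [s]:", settle_time)
```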
The ever-increasing depth of models and the inconsistency of processed sequence lengths pose great challenges to optimizing the performance of recurrent neural networks on different processors. For the independently developed long-vector processor FT-M7032, an efficient recurrent neural network acceleration engine was implemented. The engine adopts a row-major matrix-vector multiplication algorithm and a data-aware multi-core parallelization scheme to improve the computational efficiency of matrix-vector multiplication; a two-level kernel fusion optimization method to reduce the overhead of transferring temporary data; and hand-written assembly to optimize a variety of operators, further tapping the performance potential of the long-vector processor. Experiments show that the recurrent neural network inference engine on the long-vector processor achieves high performance: compared with a multi-core ARM CPU and an Intel Golden CPU, the long short-term memory network, a representative recurrent model, achieves speedups of up to 62.68x and 3.12x, respectively.
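The row-major matrix-vector multiplication idea can be illustrated with a short sketch; this is not the FT-M7032 engine, only a Python analogue in which the weight matrix is kept row-major and its rows are partitioned across workers so each worker streams contiguous rows and writes a disjoint slice of the output, the property a data-aware multi-core partitioning relies on. The LSTM-sized dimensions in the example are assumptions.

```python
# Minimal sketch of row-blocked, row-major GEMV (illustrative, not the paper's kernels).
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def gemv_row_blocked(W, x, num_workers=4):
    """y = W @ x computed as independent row blocks (NumPy's BLAS dot releases the GIL)."""
    W = np.ascontiguousarray(W)                 # row-major storage
    y = np.empty(W.shape[0], dtype=W.dtype)
    bounds = np.linspace(0, W.shape[0], num_workers + 1, dtype=int)

    def worker(i):
        lo, hi = bounds[i], bounds[i + 1]
        y[lo:hi] = W[lo:hi] @ x                 # contiguous rows -> sequential reads

    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        list(pool.map(worker, range(num_workers)))
    return y

# Example: one LSTM-sized gate projection (hidden size 512, four gates stacked).
rng = np.random.default_rng(0)
W = rng.standard_normal((4 * 512, 512)).astype(np.float32)
x = rng.standard_normal(512).astype(np.float32)
assert np.allclose(gemv_row_blocked(W, x), W @ x, atol=1e-3)
```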
Funding (Stern–Gerlach quantum processor study): supported by the Beijing Academy of Quantum Information Sciences, the State Key Laboratory of Low Dimensional Quantum Physics, the Start-up Fund provided by Tsinghua University, the National Natural Science Foundation of China (Grant No. 92065113), and the Anhui Initiative in Quantum Information Technologies.