Funding: Supported by the Hong Kong Polytechnic University through Projects of RCRE (Project No. 1-BBEG), sponsored by the Research Grants Council of Hong Kong, and the National Natural Science Foundation of China (Project No. N_PolyU513/18).
Abstract: Thermoelectric generators (TEGs) play a critical role in collecting renewable energy from the sun and deep space to generate clean electricity. With their environmentally friendly, reliable, and noise-free operation, TEGs offer diverse applications, including areas with limited power infrastructure, microelectronic devices, and wearable technology. The review thoroughly analyses TEG system configurations, performance, and applications driven by solar energy and/or radiative cooling, covering non-concentrating, concentrating, radiative-cooling-driven, and dual-mode TEGs. Materials for solar absorbers and radiative coolers, simulation techniques, energy storage management, and thermal management strategies are explored. The integration of TEGs with combined heat and power systems is identified as a promising application. Additionally, TEGs hold potential as charging sources for electronic devices. This comprehensive review provides valuable insights into this energy collection approach, facilitating improved efficiency, reduced costs, and expanded applications. It also highlights current limitations and knowledge gaps, emphasizing the importance of further research and development in unlocking the full potential of TEGs for a sustainable and efficient energy future.
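For orientation, the TEG performance the review discusses is conventionally summarised by the standard figure-of-merit efficiency expression from thermoelectric theory (included here as general background, not as a result of the review itself): a generator operating between a hot side at temperature $T_h$ and a cold side at $T_c$ has a maximum conversion efficiency of

$$
\eta_{\max} = \frac{T_h - T_c}{T_h}\cdot\frac{\sqrt{1 + Z\bar{T}} - 1}{\sqrt{1 + Z\bar{T}} + T_c/T_h},
\qquad
\bar{T} = \frac{T_h + T_c}{2},
\qquad
Z = \frac{S^2 \sigma}{\kappa},
$$

where $S$ is the Seebeck coefficient, $\sigma$ the electrical conductivity, and $\kappa$ the thermal conductivity of the thermoelectric material. The first factor is the Carnot limit, which is why both solar absorbers (raising $T_h$) and radiative coolers (lowering $T_c$) improve output.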
Funding: This work is partially supported by the National Natural Science Foundation of China [Grant Numbers 11671096, 11690013, 11731011, 11871376] and the Natural Science Foundation of Shanghai [Grant Number 21ZR1420700].
Abstract: The two-parameter Waring distribution is an important heavy-tailed discrete distribution; it extends the famous Yule-Simon distribution and provides more flexibility when modelling data. The commonly used EFF (Expectation-First Frequency) method for parameter estimation can only be applied when the first moment exists, and it uses only the information in the expectation and the first frequency, so it is not as efficient as the maximum likelihood estimator (MLE). However, the MLE may not exist for some sample data. We apply the profile method to the log-likelihood function and derive necessary and sufficient conditions for the existence of the MLE of the Waring parameters. We use extensive simulation studies to compare the MLE and EFF methods, and to compare goodness of fit with the Yule-Simon distribution. We also apply the Waring distribution to fit an insurance dataset.
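The paper establishes MLE existence conditions via the profile likelihood; as a rough illustration of the estimation problem only, the sketch below instead fits both parameters by direct numerical maximization. It assumes one common Pochhammer parameterization of the Waring pmf, P(X=k) = ((c-a)/c)·(a)_k/(c+1)_k for k = 0, 1, 2, ... with c > a > 0 (the paper's own parameterization may differ), and all function names are illustrative rather than the authors' code.

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

def waring_logpmf(k, a, c):
    """Log-pmf of the Waring distribution under the assumed parameterization:
    P(X=k) = ((c-a)/c) * (a)_k / (c+1)_k, k = 0, 1, 2, ..., with c > a > 0,
    where (x)_k is the ascending factorial, computed via log-gamma."""
    k = np.asarray(k)
    return (np.log(c - a) - np.log(c)
            + gammaln(a + k) - gammaln(a)
            - gammaln(c + 1 + k) + gammaln(c + 1))

def waring_mle(data):
    """Maximize the log-likelihood over (a, c); the constraint c > a > 0 is
    enforced by the reparameterization a = exp(t0), c = a + exp(t1)."""
    data = np.asarray(data)

    def nll(theta):
        a = np.exp(theta[0])
        c = a + np.exp(theta[1])
        return -np.sum(waring_logpmf(data, a, c))

    res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    a_hat = np.exp(res.x[0])
    return a_hat, a_hat + np.exp(res.x[1])

# Example: fit a small heavy-tailed sample of non-negative counts.
sample = [0, 0, 1, 0, 2, 5, 1, 0, 0, 13, 1, 0, 3, 0, 1]
a_hat, c_hat = waring_mle(sample)
print(f"a = {a_hat:.3f}, c = {c_hat:.3f}, tail parameter c - a = {c_hat - a_hat:.3f}")
```

Under this parameterization the mean a/(c-a-1) exists only when c - a > 1, which is exactly the situation where the EFF method discussed in the abstract breaks down while a direct likelihood fit may still go through (when the MLE exists).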
Funding: Natural Science Foundation of Zhejiang Province, Grant/Award Number: LQ15F030006; Key Research and Development Program of Zhejiang Province, Grant/Award Number: 2018C01085.
Abstract: The asynchronous advantage actor-critic (A3C) algorithm is a commonly used policy optimization algorithm in reinforcement learning, in which "asynchronous" refers to parallel interactive sampling and training, and "advantage" refers to a multi-step reward estimation method used to weight policy updates. To address the low efficiency and insufficient convergence caused by the traditional heuristic exploration of the A3C algorithm, an improved A3C algorithm is proposed in this paper. In this algorithm, a noisy network function, which updates the noise tensor in an explicit way, is constructed to train the agent. Generalised advantage estimation (GAE) is also adopted to estimate the advantage function. Finally, a new mean-gradient parallelisation method is designed to update the parameters in both the primary and secondary networks by summing and averaging the gradients passed from all the sub-processes to the main process. Simulation experiments were conducted in a Gym environment using the PyTorch Agent Net (PTAN) reinforcement learning library, and the results show that the method enables the agent to complete learning and training faster and to converge better during training. The improved A3C algorithm outperforms the original algorithm and can provide new ideas for subsequent research on reinforcement learning algorithms.
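The paper's implementation is not reproduced here; the following is a minimal NumPy sketch of the standard GAE recursion (Schulman et al.) that the abstract adopts, with illustrative names and hyperparameter values.

```python
import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    """Generalised advantage estimation over one rollout.

    rewards: r_0 .. r_{T-1}; values: V(s_0) .. V(s_T), i.e. one extra
    bootstrap value for the state after the last step. Returns A_0 .. A_{T-1}:
        delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
        A_t     = sum_l (gamma * lam)^l * delta_{t+l}
    computed via the backward recursion A_t = delta_t + gamma*lam*A_{t+1}."""
    rewards = np.asarray(rewards, dtype=np.float64)
    values = np.asarray(values, dtype=np.float64)
    deltas = rewards + gamma * values[1:] - values[:-1]
    advantages = np.zeros_like(deltas)
    running = 0.0
    for t in reversed(range(len(deltas))):
        running = deltas[t] + gamma * lam * running
        advantages[t] = running
    return advantages

# Example: a 4-step rollout with critic estimates for 5 states.
adv = gae(rewards=[1.0, 0.0, 0.0, 1.0], values=[0.5, 0.4, 0.3, 0.2, 0.0])
print(adv)
```

The lambda parameter interpolates between the high-variance Monte Carlo return (lam = 1) and the high-bias one-step TD error (lam = 0), which is what makes GAE a useful drop-in replacement for the fixed multi-step advantage estimate in vanilla A3C.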