Concrete subjected to fire loads is susceptible to explosive spalling, which can lead to the exposure of reinforcing steel bars to the fire, substantially jeopardizing structural safety and stability. The spalling of fire-loaded concrete is closely related to the evolution of pore pressure and temperature. Conventional analytical methods involve the resolution of complex, strongly coupled multifield equations, necessitating significant computational effort. To rapidly and accurately obtain the distributions of pore pressure and temperature, the Pix2Pix model, celebrated for its capabilities in image generation, is adopted in this work. The open-source dataset used herein features RGB images we generated using a sophisticated coupled model, while the grayscale images encapsulate the 15 principal variables influencing spalling. After a series of tests with different layer configurations, activation functions, and loss functions, a Pix2Pix model suitable for assessing the spalling risk of fire-loaded concrete was designed and trained. The applicability and reliability of the Pix2Pix model in concrete parameter prediction are verified by comparing its outcomes with those derived from the strongly coupled THC model. Notably, for practical engineering applications, our findings indicate that using monochrome images as the initial target for analysis yields more dependable results. This work not only offers valuable insights for civil engineers specializing in concrete structures but also establishes a robust methodological approach for researchers seeking to create similar predictive models.
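As a sketch of how such a model is trained, the Pix2Pix generator minimizes a weighted sum of an adversarial term and a pixel-wise L1 term. The snippet below illustrates that composite objective in plain Python on flattened images; the function and argument names are illustrative, not the authors' code.

```python
import math

def pix2pix_generator_loss(disc_fake, fake_img, target_img, lam=100.0):
    """Composite Pix2Pix generator objective (illustrative sketch).

    disc_fake  -- discriminator scores in (0, 1) for the generated images
    fake_img   -- generated field map, flattened to a list of pixel values
                  (e.g. a pore-pressure distribution rendered as an image)
    target_img -- ground-truth map from the coupled simulation
    lam        -- weight of the L1 term (100 in the original Pix2Pix paper)
    """
    eps = 1e-12
    # Non-saturating adversarial term: reward fooling the discriminator.
    adv = -sum(math.log(d + eps) for d in disc_fake) / len(disc_fake)
    # Pixel-wise L1 term: keep the output close to the ground truth.
    l1 = sum(abs(f - t) for f, t in zip(fake_img, target_img)) / len(fake_img)
    return adv + lam * l1

# A generator that perfectly fools the discriminator and matches the
# target incurs (almost) zero loss.
loss = pix2pix_generator_loss([1.0], [0.0] * 16, [0.0] * 16)
```

The large L1 weight is what makes the adversarial model usable for quantitative field prediction: the GAN term sharpens the output while the L1 term keeps it numerically close to the simulated ground truth.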
In recent years, Pix2Pix, a model within the domain of GANs, has found widespread application in image-to-image translation. However, traditional Pix2Pix models suffer from significant drawbacks in image generation, such as the loss of important information features during the encoding and decoding processes, as well as a lack of constraints during training. To address these issues and improve the quality of Pix2Pix-generated images, this paper introduces two key enhancements. First, to reduce information loss during encoding and decoding, we use the U-Net++ network as the generator for the Pix2Pix model, whose denser skip connections minimize information loss. Second, to strengthen constraints during image generation, we introduce a specialized discriminator designed to distinguish differential images, further enhancing the quality of the generated images. We conducted experiments on the facades dataset and the sketch portrait dataset from the Chinese University of Hong Kong to validate the proposed model. The experimental results demonstrate that our improved Pix2Pix model significantly enhances image quality and outperforms other models on the selected metrics. Notably, the Pix2Pix model incorporating the differential image discriminator exhibits the most substantial improvements across all metrics. An analysis of the results reveals that the U-Net++ generator effectively reduces the loss of information features, while the differential image discriminator strengthens the supervision of the generator during training. Together, these enhancements improve the quality of Pix2Pix-generated images.
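The differential image fed to the auxiliary discriminator can be pictured as a per-pixel residual between the generated image and its ground truth. The abstract does not publish the exact formulation, so the absolute-difference form below is an assumption used only to convey the idea:

```python
def differential_image(generated, target):
    """Per-pixel absolute difference between a generated image and its
    ground truth (an assumed form -- the paper's exact definition may
    differ). A perfect generation yields an all-zero difference image,
    so a discriminator trained on such images learns to penalise any
    residual structure the generator leaves behind.
    """
    return [[abs(g - t) for g, t in zip(g_row, t_row)]
            for g_row, t_row in zip(generated, target)]

target = [[10.0, 20.0], [30.0, 40.0]]
perfect = differential_image(target, target)    # all zeros
shifted = [[v + 5.0 for v in row] for row in target]
flawed = differential_image(shifted, target)    # uniform residual of 5.0
```

Under this reading, discriminating the residual rather than the image itself gives the generator an extra, explicit signal about where it still deviates from the target.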
Sky clouds affect solar observations significantly. Their shadows obscure the details of solar features in observed images, and cloud-covered solar images are difficult to use for further research without pre-processing. In this paper, the solar-image cloud-removal problem is converted into an image-to-image translation problem and solved with the Pixel to Pixel Network (Pix2Pix), which generates a cloudless solar image without relying on a physical scattering model. Pix2Pix consists of a generator and a discriminator. The generator is a well-designed U-Net. The discriminator uses the PatchGAN structure to improve the details of the generated solar image, guiding the generator to create a pseudo-realistic solar image. The image generation model and the training process are optimized, and the generator is trained jointly with the discriminator, yielding a generation model that stably produces cloudless solar images. Extensive experimental results on datasets from the Huairou Solar Observing Station, National Astronomical Observatories, Chinese Academy of Sciences (HSOS, NAOC, CAS) show that Pix2Pix is superior to traditional methods based on physical prior knowledge in peak signal-to-noise ratio (PSNR), structural similarity (SSIM), perceptual index (PI), and subjective visual effect. The resulting PSNR, SSIM, and PI are 27.2121 dB, 0.8601, and 3.3341, respectively.
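The PSNR figure quoted above follows from the standard definition over the mean squared error of the two images. A minimal sketch in plain Python (function name and list-based image representation are illustrative), assuming pixel values normalised to [0, peak]:

```python
import math

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB for two equal-length pixel lists
    with values in [0, peak]; one of the metrics reported above."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0.0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * math.log10(peak ** 2 / mse)

# A uniform error of 0.1 on a unit-range image gives an MSE of 0.01,
# i.e. 10 * log10(1 / 0.01) = 20 dB.
score = psnr([0.5] * 64, [0.6] * 64)
```

Higher is better: the reported 27.2121 dB means the restored image's squared error is small relative to the signal's peak amplitude.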
In ground-based observations of the Sun, solar images are often affected by thin clouds, which contaminate the images and affect the scientific results of data analysis. In this paper, an improved Pixel to Pixel Network (Pix2Pix) is used to convert polluted images into clear images, removing the cloud shadow from solar images. By adding an attention module to the model, the hidden layers of Pix2Pix can infer an attention map from the input feature vector. The attention map is then multiplied by the input feature map to assign different weights to the hidden features, adaptively refining the input feature map so that the model attends to important feature information and achieves a better recovery effect. To further enhance the model's ability to recover detailed features, a perceptual loss is added to the loss function. The model was tested on the full-disk H-alpha image datasets provided by the Huairou Solar Observing Station, National Astronomical Observatories. The experimental results show that the model effectively removes the influence of thin clouds and restores the details of solar activity. The peak signal-to-noise ratio (PSNR) reaches 27.3012 and the learned perceptual image patch similarity (LPIPS) reaches 0.330, which is superior to existing dehazing algorithms.
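The attention-gating step described above, in which an inferred map element-wise rescales the hidden features, can be sketched as follows. The sigmoid activation is an assumption (a common choice for producing weights in (0, 1)); the paper's exact module may differ:

```python
import math

def attention_refine(features, attention_logits):
    """Element-wise attention gating (a sketch of the idea, assuming a
    sigmoid-activated attention map). Weights near 1 keep a hidden
    feature; weights near 0 suppress it; a zero logit halves it.
    """
    return [f * (1.0 / (1.0 + math.exp(-a)))
            for f, a in zip(features, attention_logits)]

# Strongly positive logits preserve features, strongly negative logits
# suppress them, and a zero logit passes half the feature through.
refined = attention_refine([1.0, 1.0, 1.0], [10.0, -10.0, 0.0])
```

The point of the mechanism is that the weights are computed from the input itself, so the network can learn to emphasise solar-activity detail while down-weighting features dominated by cloud shadow.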
Funding: the National Natural Science Foundation of China (NSFC) (52178324).
Funding: supported in part by the Xinjiang Natural Science Foundation of China (2021D01C078).
Funding: this study was supported by the open project of the CAS Key Laboratory of Solar Activity (Grant No. KLSA202114) and the cross-discipline research project of Minzu University of China (2020MDJC08).
Funding: this study was supported by the open project of the CAS Key Laboratory of Solar Activity (Grant No. KLSA202114) and the cross-discipline research project of Minzu University of China (2020MDJC08).