Funding: Funded by the China National Space Administration (CNSA) and the Macao Foundation; supported by the Science and Technology Development Fund (FDCT) of Macao (Grant Nos. 0014/2022/A1, SKL-LPS(MUST)-20212023, 0034/2024/AMJ).
Abstract: The Solar X-ray Detector (SXD) on board the Macao Science Satellite-1B (MSS-1B) was successfully launched on a Chinese Long March-2C rocket on 21 May 2023 and commenced operations in early June of the same year. The MSS-1B Soft X-ray Detection Units (SXDUs) employ two silicon drift detectors (SDDs), providing energy spectra spanning 0.7 to 24 keV. Notably, the SXDUs deliver a high energy resolution of 0.14 keV at 5.9 keV and operate with a time cadence of 1 s. Here, we perform a thorough calibration of the MSS-1B/SXDUs using a combination of ground experiments and simulations. In addition, a quantitative comparison of the flux measurements obtained by the MSS-1B/SXDUs with data collected by the Geostationary Operational Environmental Satellite (GOES) provides compelling evidence of their consistency. Furthermore, preliminary spectral analysis demonstrates the robustness and expected performance of the MSS-1B/SXDUs, unlocking their potential for studying the dynamic evolution of solar flares. Moreover, the MSS-1B Solar X-ray Detector enables concurrent observations of solar soft and hard X-rays, contributing to advances in solar research.
Funding: Supported by the National Key Research and Development Program of China (No. 2016YFC1401001) and the National Natural Science Foundation of China (Nos. 41501417, 41406204).
Abstract: HY-2A is the first satellite of the Chinese HY-2 ocean series to carry a microwave radiometer (RM) for measuring sea surface temperature, sea surface wind speed, atmospheric water vapor, cloud liquid water content, and rain rate. We verified the RM level 1B brightness temperature (TB) used to retrieve environmental parameters. In the verification, TB simulated with an ocean-atmosphere radiative transfer model (RTM) was used as the reference. Relative to the RTM simulation, the total bias and total standard deviation (SD) of the RM level 1B TB ranged from -20.6 to 4.38 K and from 0.7 to 2.93 K, respectively. We found that both the total bias and the total SD depend on frequency and polarization, and that the values for ascending and descending passes differ. In addition, substantial seasonal variation of the bias was found in all channels. The verification results indicate that the RM has calibration problems, e.g., in the correction of antenna spillover and antenna physical emission, especially for the 18.7-GHz channel. Based on the error analyses, a statistical recalibration algorithm was designed and applied to the RM level 1B TB. Validation of the recalibrated TB indicated that its quality improved significantly: the bias at all channels was reduced to <0.4 K, the seasonal variation was almost eradicated, and the SD was diminished (e.g., the SD of the 18.7-GHz channel was reduced by more than 0.5 K).
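To make the verification statistics concrete, the following minimal sketch computes per-channel total bias and SD of observed TB against an RTM-simulated reference, then applies a bias-removal recalibration in the same spirit; the arrays, shapes, and error magnitudes are illustrative assumptions, not the study's data or code.

```python
import numpy as np

# hypothetical collocated data: (n_obs, n_channels) brightness temperatures
rng = np.random.default_rng(0)
tb_rtm = 180 + 60 * rng.random((5000, 5))             # RTM-simulated reference, K
tb_rm = tb_rtm + rng.normal(-2.0, 1.5, tb_rtm.shape)  # RM level 1B observations, K

diff = tb_rm - tb_rtm
bias = diff.mean(axis=0)   # per-channel total bias, K
sd = diff.std(axis=0)      # per-channel total standard deviation, K

for ch, (b, s) in enumerate(zip(bias, sd)):
    print(f"channel {ch}: bias {b:+.2f} K, SD {s:.2f} K")

# a simple statistical recalibration in the same spirit: remove the fitted bias
tb_recal = tb_rm - bias
```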
Funding: The authors would like to thank the National Natural Science Foundation of China (Grant Nos. 51879184 and 52079091) for funding this work.
Abstract: Since its introduction, discontinuous deformation analysis (DDA) has been widely used in different areas of rock mechanics. By dividing large blocks into subblocks and introducing artificial joints, DDA can be applied to rock fracture simulation. However, parameter calibration, a fundamental issue in discontinuum methods, has not received enough attention in DDA. In this study, the parameter calibration of DDA for intact rock is carefully examined. To this end, a subblock DDA with Voronoi tessellation is presented first. Then, a modified contact constitutive law is introduced, in which the tensile and shear meso-strengths are made independent of the bond lengths. This improvement prevents the unjustified preferential failure of short edges. A method for imposing confining pressure is also introduced. Thereafter, a sensitivity analysis is performed to investigate the influence of the computational parameters and meso-parameters on the mechanical properties of the modeled rock. Based on the sensitivity analysis, a unified calibration procedure is suggested for cases both with and without confining pressure. Finally, the calibration procedure is applied to two examples, including a biaxial compression test. The results show that, after careful parameter calibration, the proposed Voronoi-based DDA simulates rock fracture with and without confining pressure very well.
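To illustrate the bond-length fix: in bonded-contact models, a contact's failure thresholds typically scale with the bonded edge length, so short Voronoi edges carry the lowest thresholds and fail first. Below is a minimal sketch of such a failure check; the function names, reference length, and exact form are hypothetical, not the paper's formulation.

```python
def contact_fails(force_n, force_s, edge_len, sigma_t, tau_s,
                  use_modified=True, ref_len=1.0):
    """Bond failure check for one contact (illustrative only).

    Original form: thresholds scale with the actual bonded edge length,
    so short edges fail preferentially. Modified form: thresholds use a
    fixed reference length, making meso-strengths edge-length independent."""
    length = ref_len if use_modified else edge_len
    tensile_cap = sigma_t * length   # max tensile force before the bond breaks
    shear_cap = tau_s * length       # max shear force before the bond breaks
    return force_n > tensile_cap or abs(force_s) > shear_cap

# a short edge under identical loads: fails under the original law only
print(contact_fails(4.0, 0.0, edge_len=0.2, sigma_t=10.0, tau_s=20.0,
                    use_modified=False))  # True: short edge fails preferentially
print(contact_fails(4.0, 0.0, edge_len=0.2, sigma_t=10.0, tau_s=20.0,
                    use_modified=True))   # False: length-independent strength
```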
Abstract: The General Administration of Quality Supervision, Inspection and Quarantine of P.R. China has approved the following 8 national measuring verification regulations in 2010 and now publicizes them.
Abstract: The General Administration of Quality Supervision, Inspection and Quarantine of P.R. China has approved the following 11 national measuring verification regulations in 2008 and now publicizes them.
Abstract: The General Administration of Quality Supervision, Inspection and Quarantine of P.R. China has approved the following 24 national measuring verification regulations in 2008 and now publicizes them.
Abstract: The General Administration of Quality Supervision, Inspection and Quarantine of P.R. China has approved the following 10 national measuring verification regulations in 2008 and now publicizes them.
Funding: Funded by the Humanities and Social Science Fund of the Ministry of Education of China (21YJAZH077).
Abstract: In crowd density estimation datasets, annotating crowd locations is an extremely laborious task, and the locations are not used in the evaluation metrics. In this paper, we aim to reduce the annotation cost of crowd datasets and propose a crowd density estimation method based on weakly supervised learning that, in the absence of crowd position supervision, directly regresses the crowd count using the number of pedestrians in the image as the supervision signal. To this end, we design a new training method that exploits the correlation between global and local image features through incremental learning. Specifically, we design a parent-child network (PC-Net) focusing on the global and local image respectively, and propose a linear feature calibration structure to train the PC-Net simultaneously: the child network learns feature transfer factors and feature bias weights and uses them to linearly calibrate the features extracted by the parent network, improving the convergence of the network by exploiting local features hidden in the crowd images. In addition, we use the pyramid vision transformer as the backbone of the PC-Net to extract crowd features at different levels, and design a global-local feature loss function (L2), combined with a crowd counting loss (LC), to enhance the sensitivity of the network to crowd features during training, which effectively improves the accuracy of crowd density estimation. The experimental results show that the PC-Net significantly reduces the gap between fully supervised and weakly supervised crowd density estimation and outperforms the comparison methods on five datasets: ShanghaiTech Part A, ShanghaiTech Part B, UCF_CC_50, UCF_QNRF, and JHU-CROWD++.
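As an illustration of the described linear feature calibration, the sketch below has a child branch predict per-channel transfer factors and bias weights that recalibrate the parent branch's feature maps; the head shapes and layer choices are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

class LinearFeatureCalibration(nn.Module):
    """Sketch: the child branch predicts a transfer factor (gamma) and a
    bias weight (beta) per channel, which linearly recalibrate the parent
    branch's features. Illustrative only, not the paper's exact heads."""
    def __init__(self, channels: int):
        super().__init__()
        self.gamma_head = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                        nn.Conv2d(channels, channels, 1))
        self.beta_head = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                       nn.Conv2d(channels, channels, 1))

    def forward(self, parent_feat, child_feat):
        gamma = self.gamma_head(child_feat)  # (B, C, 1, 1) transfer factors
        beta = self.beta_head(child_feat)    # (B, C, 1, 1) bias weights
        return gamma * parent_feat + beta    # linear feature calibration

calib = LinearFeatureCalibration(64)
parent = torch.randn(2, 64, 32, 32)   # global-branch feature maps
child = torch.randn(2, 64, 16, 16)    # local-branch feature maps
print(calib(parent, child).shape)     # torch.Size([2, 64, 32, 32])
```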
Funding: Supported by the National Natural Science Foundation of China (Project No. 42375192), the China Meteorological Administration Climate Change Special Program (CMA-CCSP, Project No. QBZ202315), and the Vector Stiftung through the Young Investigator Group "Artificial Intelligence for Probabilistic Weather Forecasting".
Abstract: Despite the maturity of ensemble numerical weather prediction (NWP), the resulting forecasts are still, more often than not, under-dispersed. As such, forecast calibration tools have become popular. Among those tools, quantile regression (QR) is highly competitive in terms of both flexibility and predictive performance. Nevertheless, a long-standing problem of QR is quantile crossing, which greatly limits the interpretability of QR-calibrated forecasts. On this point, this study proposes a non-crossing quantile regression neural network (NCQRNN) for calibrating ensemble NWP forecasts into a set of reliable quantile forecasts without crossing. The overarching design principle of NCQRNN is to add, on top of the conventional QRNN structure, another hidden layer that imposes a non-decreasing mapping from the combined output of the nodes of the last hidden layer to the nodes of the output layer, through a triangular weight matrix with positive entries. The empirical part of the work considers a solar irradiance case study, in which four years of ensemble irradiance forecasts at seven locations, issued by the European Centre for Medium-Range Weather Forecasts, are calibrated via NCQRNN, as well as via an eclectic mix of benchmarking models ranging from the naïve climatology to state-of-the-art deep-learning and other non-crossing models. Formal and stringent forecast verification suggests that the forecasts post-processed via NCQRNN attain the maximum sharpness subject to calibration among all competitors. Furthermore, the proposed conception to resolve quantile crossing is remarkably simple yet general, and thus has broad applicability, as it can be integrated with many shallow- and deep-learning-based neural networks.
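The non-crossing construction is easy to sketch. Multiplying a vector of nonnegative increments by an all-ones lower-triangular matrix is a cumulative sum, so the cumsum below stands in for the triangular weight matrix with positive entries; a nonnegative hidden activation (here ReLU) is assumed so that outputs are non-decreasing across quantile levels. This is a minimal sketch under those assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonCrossingHead(nn.Module):
    """Output-layer sketch: hidden features -> K non-crossing quantiles."""
    def __init__(self, hidden_dim: int, n_quantiles: int):
        super().__init__()
        # raw weights; softplus makes them strictly positive at use time
        self.raw = nn.Parameter(0.01 * torch.randn(n_quantiles, hidden_dim))
        self.base = nn.Linear(hidden_dim, 1)  # unconstrained lowest quantile

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        h = F.relu(h)  # nonnegative activations, required for monotonicity
        increments = F.linear(h, F.softplus(self.raw))  # (B, K), entries >= 0
        # cumulative sum of nonnegative increments => non-decreasing over K
        return self.base(h) + torch.cumsum(increments, dim=1)

head = NonCrossingHead(hidden_dim=32, n_quantiles=9)
q = head(torch.randn(4, 32))
assert (q[:, 1:] >= q[:, :-1]).all()  # quantiles never cross
```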
Funding: Supported by the Multidisciplinary Center for Earthquake Engineering Research, Contract No. R271883.
Abstract: Bridges are one of the most vulnerable components of a highway transportation network system subjected to earthquake ground motions. Prediction of the resilience and sustainability of bridge performance in a probabilistic manner provides valuable information for pre-event system upgrading and post-event functional recovery of the network. The current study integrates bridge seismic damageability information obtained through empirical, analytical and experimental procedures and quantifies threshold limits of bridge damage states consistent with the physical damage descriptions given in HAZUS. Experimental data from a large-scale shaking table test are utilized for this purpose. This experiment was conducted at the University of Nevada, Reno, with the participation of a research team from the University of California, Irvine. Observed experimental damage data are processed to identify and quantify bridge damage states in terms of rotational ductility at bridge column ends. In parallel, a mechanistic model for fragility curves is developed in such a way that it can be calibrated against empirical fragility curves constructed from damage data obtained during the 1994 Northridge earthquake. This calibration quantifies threshold values of bridge damage states and makes the analytical study consistent with damage data observed in past earthquakes. The mechanistic model is transportable and applicable to most types and sizes of bridges. Finally, the calibrated damage state definitions are compared with those obtained from the experimental findings. The comparison shows excellent consistency among results from analytical, empirical and experimental observations.
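For context, a HAZUS-style fragility curve expresses the probability of reaching or exceeding a damage state as a lognormal CDF of ground-motion intensity. The sketch below fits its median capacity and dispersion to exceedance fractions; the data points are placeholders, not the study's results.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

def fragility(pga, theta, beta):
    """Lognormal fragility: P(damage state >= ds | PGA = pga)."""
    return norm.cdf(np.log(pga / theta) / beta)

# hypothetical empirical points: PGA bins vs observed exceedance fraction
pga_bins = np.array([0.1, 0.2, 0.3, 0.5, 0.8])          # g
frac_exceed = np.array([0.02, 0.10, 0.28, 0.55, 0.80])

(theta_hat, beta_hat), _ = curve_fit(fragility, pga_bins, frac_exceed,
                                     p0=[0.5, 0.6])
print(f"median capacity = {theta_hat:.2f} g, dispersion = {beta_hat:.2f}")
```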
Funding: Supported in part by the Natural Science Foundation of Jiangsu Province, China, under Grant No. BK20191475; the fifth phase of the "333 Project" scientific research funding project of Jiangsu Province under Grant No. BRA2020306; and the Qing Lan Project of Jiangsu Province under Grant No. 2019.
Abstract: In traditional digital twin communication system testing, we can apply test cases as completely as possible to ensure the correctness of the system implementation, and even then there is no guarantee that the implementation is completely correct. Formal verification is currently recognized as a method to ensure the correctness of software for communication in digital twins, because it uses rigorous mathematical methods to verify such systems and can effectively help system designers determine whether a system is designed and implemented correctly. In this paper, we use the interactive theorem prover Isabelle/HOL to construct a formal model of the x86 architecture and to model the related assembly instructions. The verification result shows that the system states obtained after executing the relevant assembly instructions are consistent with the expected states, indicating that the system meets its design expectations.
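Isabelle/HOL proofs cannot be reproduced in a few lines, but the shape of the check, executing an instruction model and asserting that the post-state matches the specification, can be sketched in Python under assumed, highly simplified x86 semantics (register set, flag handling, and instruction model here are all illustrative).

```python
from dataclasses import dataclass, field

@dataclass
class X86State:
    """Toy machine state: a few registers and one flag (illustration only)."""
    regs: dict = field(default_factory=lambda: {"rax": 0, "rbx": 0})
    zf: bool = False  # zero flag

def exec_add(state: X86State, dst: str, src: str) -> X86State:
    """Model of `add dst, src`: dst := dst + src (mod 2^64), set zero flag."""
    result = (state.regs[dst] + state.regs[src]) % 2**64
    new = X86State(dict(state.regs), zf=(result == 0))
    new.regs[dst] = result
    return new

# verification-style check: the post-state must match the specification
s0 = X86State({"rax": 3, "rbx": 4})
s1 = exec_add(s0, "rax", "rbx")
assert s1.regs["rax"] == 7 and not s1.zf, "model deviates from expected state"
```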
Abstract: The consensus of the automotive industry and traffic management authorities is that autonomous vehicles must follow the same traffic laws as human drivers. Using formal or digital methods, natural language traffic rules can be translated into machine language and used by autonomous vehicles. In this paper, a translation flow is designed. Beyond the translation, a deeper examination is required, because the semantics of natural languages are rich and complex and frequently contain hidden assumptions. The issue of how to ensure that digital rules are accurate and consistent with the original intent of the traffic rules they represent is both significant and unresolved. In response, we propose a method of formal verification that combines equivalence verification with model checking. Reasonable and reassuring digital traffic rules can be obtained by utilizing the proposed traffic rule digitization flow and verification method. In addition, we offer a number of simulation applications that employ digital traffic rules to assess vehicle violations. The experimental findings indicate that our digital rules, expressed in metric temporal logic (MTL), can be easily incorporated into simulation platforms and autonomous driving systems (ADS).
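As a flavor of how a digitized rule can be checked against a trajectory, the sketch below monitors a bounded-response MTL pattern, G(premise -> F_[0,w] conclusion), over a finite sampled trace; the rule, trace, and helper names are hypothetical illustrations, not the paper's digitization flow.

```python
from typing import List, Tuple

Trace = List[Tuple[float, dict]]  # (timestamp, state) samples

def always_implies_eventually(trace: Trace, premise, conclusion,
                              within: float) -> bool:
    """Checks G(premise -> F_[0,within] conclusion) on a finite trace."""
    for i, (t_i, s_i) in enumerate(trace):
        if premise(s_i):
            ok = any(conclusion(s_j)
                     for t_j, s_j in trace[i:] if t_j - t_i <= within)
            if not ok:
                return False
    return True

# hypothetical rule: after a red light is detected, stop within 5 seconds
trace = [(0.0, {"red": False, "v": 12.0}), (1.0, {"red": True, "v": 8.0}),
         (3.0, {"red": True, "v": 2.0}), (5.0, {"red": True, "v": 0.0})]
print(always_implies_eventually(trace, lambda s: s["red"],
                                lambda s: s["v"] == 0.0, within=5.0))  # True
```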
Funding: Supported by the High-Intensity Heavy-Ion Accelerator Facility (HIAF) project, approved by the National Development and Reform Commission of China (2017-000052-73-01-002107).
Abstract: The High-Intensity Heavy-Ion Accelerator Facility (HIAF) is a scientific research complex composed of multiple cascaded accelerators of different types, which poses a scheduling problem for more than a hundred devices distributed over a range of about 2 km. White Rabbit, a technology enhancing Gigabit Ethernet, has shown the capability of scheduling distributed timing devices but still faces the challenge of obtaining real-time synchronization calibration parameters with high precision. This study presents a calibration system based on a time-to-digital converter (TDC) implemented on an ARM-based system-on-chip (SoC). The system consists of four multi-sample delay lines, a bubble-proof encoder, an edge controller for managing data from different channels, and a highly effective calibration module that benefits from the SoC architecture. The performance was evaluated by measuring time intervals from 0 to 24,000 ps, with 120,000 samples per test, yielding an average RMS precision of 5.51 ps. The design presented in this study refines the calibration precision of the HIAF timing system, eliminates the errors caused by manual calibration without loss of efficiency, and provides data support for fault diagnosis. It can also be easily tailored or ported to other devices for specific applications, and it leaves more room for developing timing systems for particle accelerators, such as White Rabbit on HIAF.
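The reported evaluation protocol (repeated interval measurements, RMS error over 120,000 samples per nominal interval) can be mimicked as follows; the jitter model and readings are simulated placeholders, not measured data.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical repeated TDC readings: 120,000 samples per nominal interval,
# with jitter standard deviation near the reported 5.5 ps figure
nominal_ps = np.arange(0, 24001, 2000)   # tested time intervals, ps
rms_per_interval = []
for t in nominal_ps:
    samples = t + rng.normal(0.0, 5.5, size=120_000)  # simulated readings
    rms_per_interval.append(np.sqrt(np.mean((samples - t) ** 2)))

print(f"average RMS precision: {np.mean(rms_per_interval):.2f} ps")
```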
Funding: Supported by the National Natural Science Foundation of China (62032003).
Abstract: Recent advancements in satellite technologies and the declining cost of access to space have led to the emergence of large satellite constellations in Low Earth Orbit (LEO). However, these constellations often rely on a bent-pipe architecture, resulting in high communication costs. Existing onboard inference architectures suffer from low accuracy and inflexibility in the deployment and management of in-orbit applications. To address these challenges, we propose a cloud-native satellite design specifically tailored for Earth observation tasks, enabling diverse computing paradigms. In this work, we present a case study of a satellite-ground collaborative inference system deployed in the Tiansuan constellation, demonstrating a remarkable 50% accuracy improvement and a substantial 90% data reduction. Our work also sheds light on in-orbit energy use, where in-orbit computing accounts for 17% of the total onboard energy consumption. Our approach represents a significant advancement in cloud-native satellites, aiming to enhance the accuracy of in-orbit computing while simultaneously reducing communication costs.
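The abstract does not detail how the satellite-ground split works; one common collaborative-inference pattern, sketched here purely under that assumption, is onboard screening with a lightweight model so that only high-value frames are downlinked for full ground-side inference, which is how such systems cut transmitted data.

```python
import numpy as np

def onboard_screen(frames: np.ndarray, score_fn, threshold: float = 0.5):
    """Toy satellite-ground split: a lightweight onboard model scores each
    frame; only frames at or above the threshold are downlinked for full
    ground-side inference. Returns (downlinked frames, data reduction)."""
    scores = np.array([score_fn(f) for f in frames])
    keep = scores >= threshold
    return frames[keep], 1.0 - keep.mean()

# hypothetical: 100 image tiles, a cheap brightness score as the onboard model
frames = np.random.default_rng(1).random((100, 64, 64))
downlinked, reduction = onboard_screen(frames,
                                       score_fn=lambda f: 1.0 - f.mean(),
                                       threshold=0.5)
print(f"downlinked {len(downlinked)}/100 tiles, data reduction {reduction:.0%}")
```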
Funding: Supported by the National Natural Science Foundation of China under Grant No. 61972360 and the Shandong Provincial Natural Science Foundation of China under Grant Nos. ZR2020MF148 and ZR2020QF108.
Abstract: The maturity of 5G technology has enabled crowd-sensing services to collect multimedia data over wireless networks, promoting the application of crowd-sensing services in different fields, but it also brings more privacy and security challenges, the most common of which is privacy leakage. As a privacy protection technology combining data integrity checking with identity anonymity, the ring signature is widely used in the field of privacy protection. However, introducing signature technology adds signature verification overhead, and in the crowd-sensing scenario the existing signature schemes are inefficient at multi-signature verification. It is therefore necessary to design an efficient multi-signature verification scheme that preserves security. In this paper, a batch-verifiable signature scheme is proposed for the crowd-sensing setting, which allows the sensing platform to verify multiple uploaded signatures efficiently, overcoming the defects of traditional signature schemes in multi-signature verification. We also present a method for linking homologous data, which is valuable for incentive mechanisms and data analysis. Simulation results show that the proposed scheme performs well in terms of security and efficiency in crowd-sensing applications with large numbers of users and data.
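The abstract omits the scheme's equations, so the sketch below shows only the generic randomized-batching idea that batch-verifiable schemes build on: check one random linear combination instead of n separate verification equations, which fails with overwhelming probability if any single equation is false. Toy modular relations stand in for the real group or pairing checks; nothing here is the paper's construction.

```python
import secrets

P = 2**127 - 1  # prime modulus for the toy relations

def verify_one(lhs: int, rhs: int) -> bool:
    """Baseline: one check per 'signature' (a toy linear relation)."""
    return lhs % P == rhs % P

def verify_batch(lhs: list, rhs: list) -> bool:
    """Randomized batch check: one comparison of random linear combinations.
    If any pair differs, this fails except with probability ~1/P."""
    coeffs = [secrets.randbelow(P - 1) + 1 for _ in lhs]  # nonzero coefficients
    left = sum(c * a for c, a in zip(coeffs, lhs)) % P
    right = sum(c * b for c, b in zip(coeffs, rhs)) % P
    return left == right

pairs = [(i * 7, i * 7) for i in range(1, 1001)]   # 1000 valid toy "signatures"
lhs, rhs = map(list, zip(*pairs))
assert all(verify_one(a, b) for a, b in pairs)     # n individual checks
assert verify_batch(lhs, rhs)                      # one batched check
rhs[500] += 1                                      # tamper with one entry
assert not verify_batch(lhs, rhs)                  # batch check catches it
```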
Abstract: The dynamic signature is a biometric modality that recognizes an individual's anatomical and behavioural characteristics when signing their name. The rampant incidence of signature falsification (identity theft) was the key motivating factor for embarking on this study, which was necessitated by the damage and danger posed by signature forgery coupled with the intractable nature of the problem. The aim and objectives of this study are to design a proactive and responsive system that can compare two signature samples and distinguish the correct signature from the forged one. Dynamic signature verification is an important biometric technique that aims to detect whether a given signature is genuine or forged. In this research work, convolutional neural networks (CNNs, or ConvNets), a class of deep feed-forward artificial neural networks that has been successfully applied to analysing visual imagery, were used to train the model. The signature images are stored in a file directory structure that the Keras Python library can work with, and the CNN was implemented in Python using Keras with the TensorFlow backend to learn the patterns associated with each signature. The results showed that, for the same CNN-based network, the larger the training dataset, the higher the average test accuracy, although usable results can still be obtained when the training data are insufficient. The paper concludes that training on these datasets with the CNN achieved 98% accuracy, and in the experimental part the model achieved a high degree of accuracy in classifying the biometric parameters used.
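A minimal sketch of the described pipeline (directory-structured signature images, Keras with the TensorFlow backend, binary genuine/forged classification); the layer sizes, image size, and the "signatures" directory path are assumptions, not the paper's architecture.

```python
from tensorflow import keras

# directory layout assumed: signatures/{genuine,forged}/*.png
train_ds = keras.utils.image_dataset_from_directory(
    "signatures", image_size=(128, 128), color_mode="grayscale",
    validation_split=0.2, subset="training", seed=42)
val_ds = keras.utils.image_dataset_from_directory(
    "signatures", image_size=(128, 128), color_mode="grayscale",
    validation_split=0.2, subset="validation", seed=42)

model = keras.Sequential([
    keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 1)),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # genuine vs forged
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```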
Funding: Partially supported by the National Natural Science Foundation of China (Nos. U23A2077, 12175278, 12205072), the National Magnetic Confinement Fusion Science Program of China (Nos. 2019YFE0304002, 2018YFE0303103), the Comprehensive Research Facility for Fusion Technology Program of China (No. 2018-000052-73-01-001228), the Major Science and Technology Infrastructure Maintenance and Reconstruction Projects of the Chinese Academy of Sciences (2021), and the University Synergy Innovation Program of Anhui Province (No. GXXT2021-029).
Abstract: A vacuum ultraviolet (VUV) spectrometer with a focal length of 1 m has been engineered specifically for observing edge impurity emissions in the Experimental Advanced Superconducting Tokamak (EAST). In this study, wavelength calibration of the VUV spectrometer is achieved using a zinc lamp. The grating angle and charge-coupled device (CCD) position are carefully calibrated for different wavelength positions. The wavelength calibration is crucial for improving the accuracy of impurity spectral data and is required to identify more impurity spectral lines for impurity transport research. Impurity spectra of EAST plasmas have also been obtained in the wavelength range of 50-300 nm with relatively high spectral resolution. It is found that the impurity emissions in the edge region are still dominated by low-Z impurities such as carbon, oxygen, and nitrogen, albeit with the application of full-tungsten divertors on the EAST tokamak.
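Wavelength calibration of this kind typically amounts to fitting a dispersion model that maps the CCD pixel positions of known lamp lines to their wavelengths; the sketch below does this with a quadratic fit, using placeholder pixel positions and line wavelengths rather than the instrument's actual values.

```python
import numpy as np

# hypothetical zinc-lamp reference lines: known wavelengths (nm) and the
# CCD pixel columns where their peaks were located for one grating angle
ref_wavelength_nm = np.array([202.5, 206.2, 213.9, 250.2, 255.8])
ref_pixel = np.array([112.4, 298.7, 641.0, 1433.1, 1690.6])

# quadratic dispersion model: wavelength = c0 + c1*pixel + c2*pixel^2
coeffs = np.polyfit(ref_pixel, ref_wavelength_nm, deg=2)
dispersion = np.poly1d(coeffs)

residuals = ref_wavelength_nm - dispersion(ref_pixel)
print(f"fit rms residual: {np.sqrt(np.mean(residuals**2)):.4f} nm")
print(f"pixel 800 -> {dispersion(800):.3f} nm")  # wavelength of any pixel
```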
Funding: Funded by the National Natural Science Foundation of China (62072056, 62172058); the Researchers Supporting Project (No. RSP2023R102), King Saud University, Riyadh, Saudi Arabia; the Hunan Provincial Key Research and Development Program (2022SK2107, 2022GK2019); the Natural Science Foundation of Hunan Province (2023JJ30054); the Foundation of the State Key Laboratory of Public Big Data (PBD2021-15); the Young Doctor Innovation Program of Zhejiang Shuren University (2019QC30); and the Postgraduate Scientific Research Innovation Project of Hunan Province (CX20220940, CX20220941).
Abstract: Blockchain can realize the reliable storage of large amounts of chronologically related data that are verifiable within the system. The technology has been widely used and has developed rapidly in big data systems across various fields, and an increasing number of users participate in application systems that use blockchain as their underlying architecture. As the number of transactions and the capital involved in blockchain grow, ensuring information security becomes imperative, and verifying the security and privacy of transactional information has emerged as a critical challenge. Blockchain-based verification methods can effectively eliminate the need for centralized third-party organizations; however, the efficiency with which nodes store and verify blockchain data faces unprecedented challenges. To address this issue, this paper introduces an efficient verification scheme for transaction security. It first presents a node evaluation module that estimates the activity level of user nodes participating in transactions, accompanied by a probabilistic analysis of all transactions. It then optimizes the conventional transaction organization, introduces a heterogeneous Merkle tree storage structure, and designs algorithms for constructing these heterogeneous trees. Theoretical analyses and simulation experiments demonstrate the superior performance of this scheme: when verifying the same number of transactions, the heterogeneous Merkle tree transmits less data and is more efficient than traditional methods. The findings indicate that the heterogeneous Merkle tree structure is suitable for various blockchain applications, including the Internet of Things, and can markedly enhance the efficiency of information verification and bolster the security of distributed systems.
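For reference, the standard binary Merkle tree that the heterogeneous structure modifies can be built in a few lines; the abstract does not give the heterogeneous construction itself, so only this baseline is sketched.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Standard binary Merkle root; the paper's heterogeneous tree reshapes
    this structure based on estimated node activity."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node if the level is odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

txs = [f"tx-{i}".encode() for i in range(7)]
print(merkle_root(txs).hex())
```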
Funding: National Natural Science Foundation of China (Grant No. 62073227) and Liaoning Provincial Science and Technology Department Foundation (Grant No. 2023JH2/101300212).
Abstract: Online signature verification (OSV), as a personal identification technology, is widely used in various industries. However, it faces challenges such as incomplete feature extraction, low accuracy, and computational heaviness. To address these issues, we propose a novel approach for online signature verification using a one-dimensional Ghost-ACmix Residual Network (1D-ACGRNet), a Ghost-ACmix residual network that combines convolution with a self-attention mechanism and is further improved using the Ghost method. The Ghost-ACmix residual structure is introduced to leverage both self-attention and convolution mechanisms for capturing global feature information and extracting local information, effectively complementing whole and local signature features and mitigating the problem of insufficient feature extraction. Then, the Ghost-based Convolution and Self-Attention (ACG) block is proposed to simplify the common parts between convolution and self-attention using the Ghost module and to employ feature transformation to obtain intermediate features, thus reducing computational cost. Additionally, feature selection is performed using the random forest method, and the data are dimensionally reduced using principal component analysis (PCA). Finally, tests are implemented on the MCYT-100 and SVC-2004 Task2 datasets: the equal error rates (EERs) for small-sample training using five genuine and forged signatures are 3.07% and 4.17%, respectively, and the EERs for training with ten genuine and forged signatures are 0.91% and 2.12% on the respective datasets. The experimental results illustrate that the proposed approach effectively enhances the accuracy of online signature verification.
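The reported EERs can be computed from matcher scores as the operating point where the false acceptance and false rejection rates meet; a minimal sketch with hypothetical score distributions standing in for the verifier's outputs.

```python
import numpy as np

def equal_error_rate(genuine_scores, forged_scores):
    """EER: threshold where false acceptance ~= false rejection.
    Convention: higher score means more likely genuine."""
    thresholds = np.sort(np.concatenate([genuine_scores, forged_scores]))
    far = np.array([(forged_scores >= t).mean() for t in thresholds])
    frr = np.array([(genuine_scores < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))  # closest crossing of the two curves
    return (far[i] + frr[i]) / 2

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.10, 200)   # hypothetical genuine-signature scores
forged = rng.normal(0.5, 0.15, 200)    # hypothetical forgery scores
print(f"EER = {equal_error_rate(genuine, forged):.2%}")
```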