We discuss the problem of the ever-growing list of spent coins in anonymous E-cash systems, which greatly reduces the efficiency of the whole system. Although several methods have been suggested, none solves the problem with both high efficiency and flexibility. Here, we use the technique of adding information to blind signatures to deal with this problem. By embedding a timestamp in each signature, we can divide the validity period of all spent coins into stages; only the coins of the last stage need to be recorded, so the size of the coin list is kept under control. We also analyze the anonymity of the added data and impose some indispensable restrictions on it, which ensure that the embedded data do not break the anonymity of the customers. To fulfill these requirements, we introduce the concept of restricted common data (RCD). Furthermore, we propose two schemes for adding RCD to a blind signature: the simple one is easy to implement, while the more elaborate one can also encode the value of the coin. Using RCD incurs little additional cost while preserving customer anonymity, and the method fits most kinds of anonymous E-cash systems.
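As a rough illustration of the pruning idea in the abstract above (not the paper's actual RCD construction), the following Python sketch keeps a spent-coin registry partitioned by a stage index that a signature-embedded timestamp would encode, so records for expired stages can be discarded. The stage length, the Coin fields, and the registry interface are all hypothetical.

```python
from dataclasses import dataclass
import time

STAGE_SECONDS = 30 * 24 * 3600  # hypothetical stage length (one month)

@dataclass(frozen=True)
class Coin:
    serial: str   # coin serial number, revealed when the coin is deposited
    stage: int    # stage index that a timestamp embedded in the signature would encode

def current_stage(now: float | None = None) -> int:
    """Map wall-clock time to a stage index."""
    return int((now if now is not None else time.time()) // STAGE_SECONDS)

class SpentCoinRegistry:
    """Keeps only coins whose embedded stage is still valid, so the
    double-spending list cannot grow without bound."""

    def __init__(self) -> None:
        self._spent: dict[int, set[str]] = {}

    def deposit(self, coin: Coin) -> bool:
        """Return True if the coin is accepted, False if expired or double-spent."""
        stage = current_stage()
        if coin.stage < stage:          # coin from an earlier stage: no longer redeemable
            return False
        seen = self._spent.setdefault(coin.stage, set())
        if coin.serial in seen:         # double-spending attempt
            return False
        seen.add(coin.serial)
        return True

    def prune(self) -> None:
        """Drop records of stages that can no longer contain valid coins."""
        stage = current_stage()
        for old in [s for s in self._spent if s < stage]:
            del self._spent[old]
```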
Cooperative spectrum monitoring with multiple sensors has been deemed an efficient mechanism for improving monitoring accuracy and enlarging the monitoring area in wireless sensor networks. However, there is redundancy among the spectrum data collected by a sensor node within a data collection period, which may reduce the data-uploading efficiency. In this paper, we investigate inter-data commonality detection, which describes how much two pieces of data have in common. We first define the common segment set and divide it into six categories, and then develop a method for measuring a common segment set by extracting the commonality between two files. Moreover, because existing algorithms fail to find a good common segment set, we propose the Common Data Measurement (CDM) algorithm, which identifies a good common segment set based on inter-data commonality detection. Theoretical analysis proves that the CDM algorithm achieves a good measurement of the commonality between two strings. In addition, we construct a randomly generated synthetic dataset. Numerical results show that the CDM algorithm measures the commonality between two binary files better than the Greedy-String-Tiling (GST) algorithm and a simple greedy algorithm.
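For concreteness, here is a minimal Python sketch of the simple greedy baseline mentioned in the abstract, which repeatedly tiles the longest remaining common segment of two byte strings. It is not the authors' CDM algorithm, and the function names and minimum segment length are illustrative assumptions.

```python
def longest_common_run(a: bytes, b: bytes, used_a: set[int], used_b: set[int]) -> tuple[int, int, int]:
    """Longest common substring of a and b that avoids already-tiled positions.
    Returns (start_a, start_b, length); (-1, -1, 0) if none exists."""
    best = (-1, -1, 0)
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):               # O(len(a)*len(b)) run-length DP
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1] and (i - 1) not in used_a and (j - 1) not in used_b:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best[2]:
                    best = (i - cur[j], j - cur[j], cur[j])
        prev = cur
    return best

def greedy_common_segments(a: bytes, b: bytes, min_len: int = 4) -> list[tuple[int, int, int]]:
    """Greedily tile the longest untiled common segments between two byte strings."""
    used_a: set[int] = set()
    used_b: set[int] = set()
    tiles = []
    while True:
        ia, ib, n = longest_common_run(a, b, used_a, used_b)
        if n < min_len:
            break
        tiles.append((ia, ib, n))
        used_a.update(range(ia, ia + n))
        used_b.update(range(ib, ib + n))
    return tiles

# Example: common segments shared by two small binary blobs
print(greedy_common_segments(b"spectrum-data-0xAB12", b"header|spectrum-data|0xAB12"))
```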
Multidatabase systems are designed to achieve schema integration and data interoperation among distributed and heterogeneous database systems, but data model heterogeneity and schema heterogeneity make this a challenging task. A multidatabase common data model based on XML, named the XML-based Integration Data Model (XIDM), is first introduced; it is suitable for integrating different types of schemas. An approach to schema mapping based on XIDM in multidatabase systems is then presented. The mappings include global mappings, which deal with horizontal and vertical partitioning between global schemas and export schemas, and local mappings, which handle the transformation between export schemas and local schemas. Finally, the illustration and implementation of schema mappings in a multidatabase prototype, the Panorama system, are discussed. The implementation results demonstrate that XIDM is an efficient model for managing multiple heterogeneous data sources and that the XIDM-based schema-mapping approach performs very well when integrating relational and object-oriented database systems as well as file systems.
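The abstract does not give the XIDM syntax, so the sketch below only illustrates the general idea of exposing a relational local schema through a generic XML export view; the element and attribute names are invented for illustration and are not the actual XIDM specification.

```python
import sqlite3
import xml.etree.ElementTree as ET

def relational_table_to_xml(conn: sqlite3.Connection, table: str) -> ET.Element:
    """Export one relational table as a generic XML 'export schema' element."""
    cur = conn.execute(f"SELECT * FROM {table}")
    columns = [d[0] for d in cur.description]
    root = ET.Element("exportSchema", attrib={"source": table, "model": "relational"})
    for row in cur:
        rec = ET.SubElement(root, "record")
        for name, value in zip(columns, row):
            field = ET.SubElement(rec, "field", attrib={"name": name})
            field.text = "" if value is None else str(value)
    return root

# Usage: build a tiny local schema and print its XML view
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER, name TEXT, dept TEXT)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, "Liu", "Sales"), (2, "Chen", "R&D")])
print(ET.tostring(relational_table_to_xml(conn, "employee"), encoding="unicode"))
```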
Differences among the imaging subgroups of cerebral small vessel disease (CSVD) need to be further explored. First, we use propensity score matching to obtain balanced datasets. Then random forest (RF) is adopted to classify the subgroups, compared with support vector machine (SVM) and extreme gradient boosting (XGBoost), and to select the features. The top 10 important features are included in a stepwise logistic regression, and the odds ratio (OR) and 95% confidence interval (CI) are obtained. There are 41290 adult inpatient records diagnosed with CSVD. The accuracy and area under the curve (AUC) of RF are close to 0.7, the best classification performance among RF, SVM, and XGBoost. The OR and 95% CI of hematocrit for white matter lesions (WMLs), lacunes, microbleeds, atrophy, and enlarged perivascular space (EPVS) are 0.9875 (0.9857−0.9893), 0.9728 (0.9705−0.9752), 0.9782 (0.9740−0.9824), 1.0093 (1.0081−1.0106), and 0.9716 (0.9597−0.9832), respectively. The OR and 95% CI of red cell distribution width for WMLs, lacunes, atrophy, and EPVS are 0.9600 (0.9538−0.9662), 0.9630 (0.9559−0.9702), 1.0751 (1.0686−1.0817), and 0.9304 (0.8864−0.9755). The OR and 95% CI of platelet distribution width for WMLs, lacunes, and microbleeds are 1.1796 (1.1636−1.1958), 1.1663 (1.1476−1.1853), and 1.0416 (1.0152−1.0687). This study proposes a new analytical framework for selecting important clinical markers for CSVD with machine learning based on a common data model, which offers low cost, fast speed, a large sample size, and continuous data sources.
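A minimal sketch of the kind of pipeline described above, assuming a pandas dataframe of inpatient records with a binary outcome column: a random forest ranks the features, and a plain (not stepwise) logistic regression on the top features yields odds ratios with 95% confidence intervals. The column names, outcome label, and parameters are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.ensemble import RandomForestClassifier

def top_features_and_odds_ratios(df: pd.DataFrame, outcome: str, k: int = 10) -> pd.DataFrame:
    """Rank features with RF, then report OR and 95% CI from a logistic regression."""
    X, y = df.drop(columns=[outcome]), df[outcome]
    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
    ranked = pd.Series(rf.feature_importances_, index=X.columns).nlargest(k).index

    logit = sm.Logit(y, sm.add_constant(X[ranked])).fit(disp=0)
    ci = logit.conf_int()                      # columns 0 and 1 hold the CI bounds
    return pd.DataFrame({
        "OR": np.exp(logit.params),
        "CI_lower": np.exp(ci[0]),
        "CI_upper": np.exp(ci[1]),
    }).drop(index="const")

# e.g. top_features_and_odds_ratios(records, outcome="WML")  # hypothetical dataframe
```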
The Energization and Radiation in Geospace (ERG) mission seeks to explore the dynamics of the radiation belts in the Earth's inner magnetosphere with a space-borne probe (the ERG satellite) in coordination with related ground observations and simulation/modeling studies. For this mission, the Science Center of the ERG project (ERG-SC) will provide a useful data analysis platform based on the THEMIS Data Analysis software Suite (TDAS), which has been widely used by researchers in many conjunction studies of Time History of Events and Macroscale Interactions during Substorms (THEMIS) spacecraft and ground data. To import SuperDARN data into this highly useful platform, ERG-SC, in close collaboration with SuperDARN groups, developed a Common Data Format (CDF) design suitable for fitacf data and has prepared an open database of SuperDARN data archived in CDF. ERG-SC has also been developing programs written in Interactive Data Language (IDL) to load fitacf CDF files and to generate various kinds of plots, not only range-time-intensity-type plots but also two-dimensional map plots that can be superposed with other data, such as all-sky images of THEMIS-GBO and orbital footprints of various satellites. The CDF-TDAS scheme developed by ERG-SC will make it easier for researchers who are not familiar with SuperDARN data to access and analyze them, thereby facilitating collaborative studies with satellite data, such as the inner magnetosphere data provided by the ERG (Japan)-RBSP (USA)-THEMIS (USA) fleet.
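As a simple illustration of the range-time-intensity-type plots mentioned above, the following Python sketch draws an RTI panel with matplotlib from placeholder arrays; in practice the time, range-gate, and backscatter values would be read from the fitacf CDF files rather than generated synthetically.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic placeholders standing in for values loaded from a fitacf CDF file
n_times, n_gates = 200, 75
times = np.arange(n_times)                               # e.g. minutes since scan start
gates = np.arange(n_gates)                               # range-gate index
power = np.random.gamma(2.0, 3.0, (n_gates, n_times))    # fake backscatter power [dB]

fig, ax = plt.subplots(figsize=(8, 3))
mesh = ax.pcolormesh(times, gates, power, shading="auto", cmap="viridis")
fig.colorbar(mesh, ax=ax, label="power (dB)")
ax.set_xlabel("time (min)")
ax.set_ylabel("range gate")
ax.set_title("Range-time-intensity plot (synthetic data)")
plt.show()
```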
Data envelopment analysis (DEA) is a mathematical programming approach for appraising the relative efficiencies of peer decision-making units (DMUs), and it is widely used for ranking DMUs. However, almost all DEA-related ranking approaches are based on self-evaluation efficiencies; in other words, each DMU chooses the weights it prefers most, so the resulting efficiencies are not suitable as ranking criteria. This paper therefore proposes a new approach that determines a bundle of common weights in the DEA efficiency evaluation model by introducing a multi-objective integer program. The paper also gives the solving process of this multi-objective integer program, and the solution is proven to be Pareto efficient. The solving process ensures that the obtained common weight bundle is acceptable to a large number of DMUs. Finally, a numerical example is given to demonstrate the approach.
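For context, the sketch below solves the standard self-evaluation CCR model (multiplier form, input-oriented) with SciPy's linear-programming routine, i.e. the per-DMU weighting the paper argues against rather than the proposed common-weight multi-objective integer program; the toy data are made up.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X: np.ndarray, Y: np.ndarray, o: int) -> float:
    """Self-evaluation efficiency of DMU o given inputs X (n x m) and outputs Y (n x s)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])              # maximize u·y_o  (minimize -u·y_o)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]   # v·x_o = 1
    b_eq = [1.0]
    A_ub = np.hstack([Y, -X])                             # u·y_j - v·x_j <= 0 for every DMU j
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun

# Toy example: 4 DMUs, 2 inputs, 1 output
X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [5.0, 4.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(4)])
```

Because each DMU is free to pick its own (u, v) in this model, several DMUs typically come out with efficiency 1, which is exactly why a common weight bundle is needed for ranking.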
Towards a better understanding of the hydrological interactions between the land surface and the atmosphere, land surface models are routinely used to simulate hydro-meteorological fluxes. However, there is a lack of observations available for model forcing to estimate the hydro-meteorological fluxes in East Asia. In this study, the Common Land Model (CLM) was run in offline mode over East Asia during the summer monsoon period of 2006, with different forcings from Asiaflux, the Korea Land Data Assimilation System (KLDAS), and the Global Land Data Assimilation System (GLDAS), at point and regional scales separately. The CLM results were compared with observations from Asiaflux sites. The estimated net radiation showed good agreement, with r = 0.99 at the point scale and 0.85 at the regional scale. The sensible and latent heat fluxes estimated using Asiaflux and KLDAS data indicated reasonable agreement, with r = 0.70. The estimated soil moisture and soil temperature showed patterns similar to the observations, although the water fluxes estimated using KLDAS showed larger discrepancies than those of Asiaflux because of scale mismatch. The spatial distributions of hydro-meteorological fluxes according to KLDAS for East Asia were compared with the CLM results driven by GLDAS and with the GLDAS product itself. The spatial distributions of CLM with KLDAS were analogous to those of CLM with GLDAS and the standalone GLDAS data. The results indicate that KLDAS is a good potential source of high-spatial-resolution forcing data. Therefore, KLDAS is a promising alternative product, capable of compensating for the lack of observations and the low-resolution grid data for East Asia.
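A minimal sketch of the point-scale evaluation described above, assuming the simulated and observed flux series are already available as NumPy arrays: it computes the correlation coefficient r together with bias and RMSE. The synthetic series in the usage example are placeholders, not CLM output or Asiaflux data.

```python
import numpy as np

def evaluate(sim: np.ndarray, obs: np.ndarray) -> dict:
    """Correlation, bias, and RMSE between simulated and observed flux series."""
    mask = ~(np.isnan(sim) | np.isnan(obs))   # ignore missing observations
    sim, obs = sim[mask], obs[mask]
    r = np.corrcoef(sim, obs)[0, 1]
    bias = np.mean(sim - obs)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    return {"r": r, "bias": bias, "rmse": rmse}

# Toy usage with synthetic daily net-radiation values (W m^-2)
rng = np.random.default_rng(0)
obs = 150 + 40 * np.sin(np.linspace(0, 6, 120)) + rng.normal(0, 10, 120)
sim = obs + rng.normal(5, 15, 120)            # model with a small bias and noise
print(evaluate(sim, obs))
```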
Funding (Common Data Measurement study): supported in part by the National Natural Science Foundation of China (No. 61901328), the China Postdoctoral Science Foundation (No. 2019M653558), the Fundamental Research Funds for the Central Universities (No. CJT150101), and the Key Project of the National Natural Science Foundation of China (No. 61631015).
Funding (CSVD imaging-subgroup study): supported by the National Natural Science Foundation of China (Nos. 72204169 and 81825007), the Beijing Outstanding Young Scientist Program (No. BJJWZYJH01201910025030), Capital's Funds for Health Improvement and Research (No. 2022-2-2045), the National Key R&D Program of China (Nos. 2022YFF1501500, 2022YFF1501501, 2022YFF1501502, 2022YFF1501503, 2022YFF1501504, and 2022YFF1501505), the Youth Beijing Scholar Program (No. 010), the Beijing Laboratory of Oral Health (No. PXM2021_014226_000041), the Beijing Talent Project-Class A: Innovation and Development (No. 2018A12), the National Ten-Thousand Talent Plan Leadership of Scientific and Technological Innovation, and the National Key R&D Program of China (Nos. 2017YFC1307900 and 2017YFC1307905).
Funding (DEA common-weights study): supported by the National Natural Science Foundation of China for Innovative Research Groups (70821001) and the National Natural Science Foundation of China (70801056).
Funding (CLM/KLDAS study): supported by the Space Core Technology Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (NRF-2014M1A3A3A02034789), the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2013R1A1A2A10004743), and the Korea Meteorological Administration Research and Development Program under the Weather Information Service Engine (WISE) project, KMA-2012-0001-A.