Funding: This work was supported by the National High Technology Research and Development 863 Program of China under Grant No. 2013AA01A603, the Pilot Project of Chinese Academy of Sciences under Grant No. XDA06010600, and the National Natural Science Foundation of China under Grant No. 61402312.
Abstract: To support the large amounts of GPS data generated by various moving objects, back-end servers usually store low-sampling-rate trajectories. Therefore, no precise position information can be obtained directly from the back-end servers, and uncertainty is an inherent characteristic of the spatio-temporal data. How to deal with this uncertainty thus becomes a basic and challenging problem. Much research has been conducted on the uncertainty of a moving object in isolation, detached from the context in which its trajectory was derived. However, we find that the uncertainty of moving objects can be efficiently reduced and effectively ranked using context-aware information. In this paper, we focus on context-aware information and propose an integrated framework, Context-Based Uncertainty Reduction and Ranking (CURR), to reduce and rank the uncertainty of trajectories. Specifically, given two consecutive samples, we aim to infer and rank the possible trajectories in accordance with the information extracted from the context. Since some context-aware information can be used to reduce the uncertainty while other context-aware information can be used to rank it, CURR naturally consists of two complementary stages: a reduction stage and a ranking stage. We also implement a prototype system to validate the effectiveness of our solution. Extensive experiments were conducted, and the evaluation results demonstrate the efficiency and high accuracy of CURR.
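To make the two-stage idea concrete, here is a minimal sketch assuming a road network modeled as a weighted graph; the edge attributes (travel_time, popularity), the pruning rule, and the scoring function are illustrative assumptions, not the authors' implementation:

```python
# A minimal sketch of a reduction-then-ranking pipeline between two
# consecutive GPS samples, assuming a road network as a weighted graph.
# Attribute names (travel_time, popularity) are hypothetical.
import networkx as nx

def candidate_paths(G, a, b, interval, slack=1.2):
    """Reduction stage: keep only paths traversable within the
    sampling interval (a hard context constraint, with some slack)."""
    kept = []
    for path in nx.all_simple_paths(G, a, b):
        t = sum(G[u][v]["travel_time"] for u, v in zip(path, path[1:]))
        if t <= interval * slack:
            kept.append((path, t))
    return kept

def rank_paths(G, candidates):
    """Ranking stage: order surviving paths by a soft context score,
    here a hypothetical edge 'popularity' from historical traffic."""
    def score(item):
        path, _ = item
        return sum(G[u][v].get("popularity", 0.0) for u, v in zip(path, path[1:]))
    return sorted(candidates, key=score, reverse=True)

# Toy usage: samples observed at nodes "A" and "D", 10 minutes apart.
G = nx.Graph()
G.add_edge("A", "B", travel_time=4, popularity=0.9)
G.add_edge("B", "D", travel_time=4, popularity=0.8)
G.add_edge("A", "C", travel_time=7, popularity=0.2)
G.add_edge("C", "D", travel_time=7, popularity=0.1)
for path, t in rank_paths(G, candidate_paths(G, "A", "D", interval=10)):
    print(path, f"travel time {t}")
```

The sketch mirrors the division the abstract describes: the reduction stage applies a hard context constraint (the sampling interval bounds feasible travel time), while the ranking stage applies a soft one (historical popularity) to order whatever uncertainty remains.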
Funding: The Florida Office of Insurance Regulation (FLOIR) provided financial support.
Abstract: Catastrophe models estimate risk at the intersection of hazard, exposure, and vulnerability. Each of these areas requires diverse sources of data, which are very often incomplete, inconsistent, or missing altogether. The poor quality of the data is a source of epistemic uncertainty, which affects the vulnerability models as well as the output of the catastrophe models. This article identifies the different sources of epistemic uncertainty in the data and elaborates on strategies to reduce this uncertainty, in particular through identification, augmentation, and integration of the different types of data. The challenges are illustrated through the Florida Public Hurricane Loss Model (FPHLM), which estimates insured losses on residential buildings caused by hurricane events in Florida. To define the input exposure, and for model development, calibration, and validation purposes, the FPHLM teams accessed three main sources of data: county tax appraiser databases, National Flood Insurance Program (NFIP) portfolios, and wind insurance portfolios. The data from these different sources were reformatted and processed, and the insurance databases were separately cross-referenced at the county level with the tax appraiser databases. The FPHLM hazard teams assigned estimates of natural hazard intensity measures to each insurance claim. These efforts produced an integrated and more complete set of building descriptors for each policy in the NFIP and wind portfolios. The article describes the impact of these uncertainty reductions on the development and validation of the vulnerability models, and suggests avenues for data improvement. Lessons learned should be of interest to professionals involved in disaster risk assessment and management.
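As an illustration of the kind of county-level cross-referencing described above, here is a minimal sketch using pandas; all column names (parcel_id, year_built, roof_shape) are hypothetical stand-ins for the building descriptors the FPHLM teams integrated:

```python
# A minimal sketch of enriching an insurance portfolio with building
# descriptors from a tax appraiser database, cross-referenced at the
# county level. Column names are hypothetical.
import pandas as pd

policies = pd.DataFrame({
    "policy_id": [101, 102, 103],
    "county": ["Miami-Dade", "Miami-Dade", "Broward"],
    "parcel_id": ["P1", "P2", "P9"],
})
appraiser = pd.DataFrame({
    "county": ["Miami-Dade", "Miami-Dade", "Broward"],
    "parcel_id": ["P1", "P2", "P9"],
    "year_built": [1978, 2004, 1995],
    "roof_shape": ["gable", "hip", None],  # appraiser data is often incomplete
})

# Cross-reference within each county, keeping every policy even when
# no appraiser record matches, so epistemic gaps stay visible as NaN.
merged = policies.merge(appraiser, on=["county", "parcel_id"], how="left")

# Flag remaining gaps as candidates for augmentation from other sources.
print(merged[merged["roof_shape"].isna()])
```

The left join is the key design choice: dropping unmatched policies would silently shrink the exposure, whereas keeping them with missing descriptors makes the remaining uncertainty explicit and targetable.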
Funding: This work was supported by the Guangdong Basic and Applied Basic Research Foundation (2023A1515011244).
Abstract: The state of in situ stress is a crucial parameter in subsurface engineering, especially for critical projects such as nuclear waste repositories. As one of the two ISRM-suggested methods, the overcoring (OC) method is widely used to estimate full stress tensors in rock by independent regression analysis of the data from each OC test. However, such customary independent analysis of individual OC tests, known as no pooling, is liable to yield unreliable test-specific stress estimates due to the various sources of uncertainty involved in the OC method. To address this problem, a practical and no-cost solution is considered: incorporating into the OC data analysis the additional information implied within adjacent OC tests, which are usually available in OC measurement campaigns. Hence, this paper presents a Bayesian partial pooling (hierarchical) model for the combined analysis of adjacent OC tests. We performed five case studies using OC test data collected at a nuclear waste repository research site in Sweden. The results demonstrate that partial pooling of adjacent OC tests indeed allows borrowing of information across adjacent tests and yields improved stress tensor estimates with reduced uncertainties, simultaneously for all individual tests, compared with independent (no pooling) analysis, particularly for the unreliable no-pooling stress estimates. A further model comparison shows that the partial pooling model also gives better predictive performance, confirming that the information borrowed across adjacent OC tests is relevant and effective.
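To illustrate partial pooling in miniature, the following sketch fits a hierarchical model for a single scalar stress quantity across several adjacent tests using PyMC; the paper's actual model pools full stress tensors, and all priors and data here are invented for illustration:

```python
# A minimal sketch of Bayesian partial pooling across adjacent OC tests,
# for one scalar stress quantity (e.g. a single tensor component).
# Priors, noise levels, and data are illustrative assumptions only.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_tests, n_obs_per_test = 5, 8
test_idx = np.repeat(np.arange(n_tests), n_obs_per_test)
true_theta = rng.normal(20.0, 2.0, n_tests)      # per-test "true" values (MPa)
obs = rng.normal(true_theta[test_idx], 3.0)      # noisy per-test observations

with pm.Model() as partial_pooling:
    mu = pm.Normal("mu", 20.0, 10.0)             # shared site-level mean
    tau = pm.HalfNormal("tau", 5.0)              # between-test variability
    theta = pm.Normal("theta", mu, tau, shape=n_tests)  # per-test estimates,
                                                        # shrunk toward mu
    sigma = pm.HalfNormal("sigma", 5.0)          # within-test measurement noise
    pm.Normal("y", theta[test_idx], sigma, observed=obs)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

# Each test "borrows strength" from its neighbours: the posterior means
# of theta lie between the per-test sample means and the pooled mean.
print(idata.posterior["theta"].mean(dim=("chain", "draw")).values)
```

The hierarchical prior is what distinguishes partial pooling from the two extremes: no pooling (a separate, unconnected estimate per test) and complete pooling (a single estimate for all tests). Tests with noisy or sparse data are shrunk most strongly toward the site-level mean, which is why the least reliable no-pooling estimates benefit the most.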
Abstract: This paper is concerned with the contribution of forensic science to the legal process by helping reduce uncertainty. Although it is now widely accepted that uncertainty should be handled by probability because it is a safeguard against incoherent proceedings, there remain diverging and conflicting views on how probability ought to be interpreted. This is exemplified by proposals in the scientific literature that call for procedures of probability computation referred to as "objective," suggesting that scientists ought to use them in their reporting to recipients of expert information. I find such proposals objectionable. They need to be viewed cautiously, essentially because the ensuing probabilistic statements can be perceived as making forensic science prescriptive. A motivating example from the context of forensic DNA analysis is chosen to illustrate this. As a main point, it is argued that such constraining suggestions can be avoided by interpreting probability as a measure of personal belief, that is, subjective probability. Invoking references to foundational literature from mathematical statistics and the philosophy of science, the discussion explores the consequences of this interdisciplinary viewpoint for the practice of forensic expert reporting. It is emphasized that, as an operational interpretation of probability, the subjectivist perspective enables forensic science to add value to the legal process, in particular by avoiding inferential impasses to which other interpretations of probability may lead. Moreover, understanding probability from a subjective perspective can encourage participants in the legal process to take on more responsibility in matters regarding the coherent handling of uncertainty. This would assure more balanced interactions at the interface between science and the law. This, in turn, provides support for ongoing developments that can be called the "probabilization" of forensic science.
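For readers less familiar with the odds form of Bayes' theorem that underlies such reporting, a small worked example (with invented numbers) shows how a likelihood ratio offered by an expert combines with prior odds that, on the subjectivist account, belong to the recipient of the expert information rather than to the scientist:

```python
# A minimal illustration of the odds form of Bayes' theorem in a
# forensic setting. All numbers are invented for the example and
# carry no evidential meaning; under a subjectivist reading, the
# prior odds express the recipient's personal belief.
def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

prior = 1 / 1000   # hypothetical prior odds that the suspect is the source
lr = 1e6           # hypothetical DNA likelihood ratio reported by the expert
post = posterior_odds(prior, lr)
print(f"posterior odds: {post:.0f} (probability {post / (1 + post):.4f})")
```

The division of labour is the point: the expert reports the likelihood ratio for the findings under the competing propositions, while the prior and posterior odds remain the responsibility of the court, which is consistent with the paper's argument against prescriptive "objective" probability assignments.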