Fund: Supported by the National Major Scientific and Technological Special Project for "Significant New Drugs Development" (No. 2018ZX09201008) and the Special Fund Project for Information Development from Shanghai Municipal Commission of Economy and Information (No. 201701013)
Abstract: Regional healthcare platforms collect clinical data from hospitals in specific areas for the purpose of healthcare management. It is a common requirement to reuse these data for clinical research. However, we face challenges such as inconsistent terminology in electronic health records (EHR) and the complexity of data quality and data formats on regional healthcare platforms. In this paper, we propose a methodology and process for constructing large-scale cohorts, which form the basis of causality and comparative-effectiveness studies in epidemiology. We first constructed a Chinese terminology knowledge graph to deal with the diversity of vocabularies on the regional platform. Second, we built special disease case repositories (e.g., a heart failure repository) that use the graph to find related patients and to normalize the data. Based on the requirements of a clinical study that aimed to explore the effect of statin use on 180-day readmission in patients with heart failure, we built a large-scale retrospective cohort of 29,647 heart failure cases from the heart failure repository. After propensity score matching, a study group (n=6,346) and a control group (n=6,346) with comparable clinical characteristics were obtained. Logistic regression analysis showed that taking statins was negatively correlated with 180-day readmission in heart failure patients. This paper presents a workflow and an application example of big data mining based on regional EHR data.
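The matching-then-regression workflow described in this abstract can be sketched end to end. The example below is a minimal illustration on synthetic data (the two covariates, coefficients, and all variable names are hypothetical, not from the study): it estimates propensity scores with a hand-rolled logistic regression, then performs greedy 1:1 nearest-neighbour matching on the score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort (illustrative only): two covariates, a binary "statin"
# exposure whose probability depends on the covariates, and nothing else.
n = 1200
age = rng.normal(70, 10, n)
ef = rng.normal(40, 8, n)          # ejection fraction, purely illustrative
X = np.column_stack([age, ef])
Xs = (X - X.mean(0)) / X.std(0)    # standardize covariates
logit = -0.5 + 0.8 * Xs[:, 0] - 0.3 * Xs[:, 1]
statin = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression (no regularization)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

# 1) Propensity scores: estimated P(statin | covariates).
w = fit_logistic(Xs, statin)
ps = 1 / (1 + np.exp(-np.column_stack([np.ones(n), Xs]) @ w))

# 2) Greedy 1:1 nearest-neighbour matching on the propensity score.
treated = np.where(statin == 1)[0]
control = list(np.where(statin == 0)[0])
pairs = []
for t in treated:
    if not control:                 # guard: no unmatched controls left
        break
    j = min(control, key=lambda c: abs(ps[c] - ps[t]))
    pairs.append((t, j))
    control.remove(j)               # match without replacement

matched = [i for pair in pairs for i in pair]
# After matching, covariate means of the two groups should be close.
bal = np.abs(Xs[statin == 1].mean(0) - Xs[[j for _, j in pairs]].mean(0))
print(len(pairs), bal.round(2))
```

In the real study, a logistic regression on the matched pairs would then estimate the association between statin use and 180-day readmission.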
Fund: Supported by the National Natural Science Foundation of China (Nos. 60533090 and 60603096), the National Hi-Tech Research and Development Program (863) of China (No. 2006AA010107), the Key Technology R&D Program of China (No. 2006BAH02A13-4), the Program for Changjiang Scholars and Innovative Research Team in University of China (No. IRT0652), and the Cultivation Fund of the Key Scientific and Technical Innovation Project of MOE, China (No. 706033)
Abstract: Recently a new clustering algorithm called 'affinity propagation' (AP) has been proposed, which efficiently clusters sparsely related data by passing messages between data points. In many cases, however, we want to cluster large-scale data whose similarities are not sparse. This paper presents two variants of AP for grouping large-scale data with a dense similarity matrix: a local approach, partition affinity propagation (PAP), and a global approach, landmark affinity propagation (LAP). PAP passes messages within subsets of the data first and then merges them after a number of initial iterations; it can effectively reduce the number of clustering iterations. LAP passes messages between the landmark data points first and then clusters the non-landmark data points; it is a global approximation method that speeds up clustering. Experiments were conducted on many datasets, including random data points, manifold subspaces, images of faces, and Chinese calligraphy, and the results demonstrate that the two approaches are feasible and practical.
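A minimal sketch of the landmark idea (not the authors' implementation): run standard affinity propagation on a small set of landmark points only, then assign every non-landmark point to the cluster of its most similar landmark. The AP update rules below are the standard vectorized ones; the blob data, landmark count, damping value, and median preference are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def affinity_propagation(S, damping=0.9, iters=200):
    """Standard dense affinity propagation on a similarity matrix S (n x n)."""
    n = S.shape[0]
    A = np.zeros((n, n)); R = np.zeros((n, n))
    for _ in range(iters):
        # Responsibilities: r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        M = A + S
        idx = M.argmax(1)
        first = M[np.arange(n), idx].copy()
        M[np.arange(n), idx] = -np.inf
        second = M.max(1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # Availabilities: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, np.diag(R))
        Anew = Rp.sum(0)[None, :] - Rp
        dA = np.diag(Anew).copy()
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, dA)
        A = damping * A + (1 - damping) * Anew
    exemplars = np.where(np.diag(A + R) > 0)[0]
    labels = exemplars[S[:, exemplars].argmax(1)]
    labels[exemplars] = exemplars
    return exemplars, labels

# Two well-separated 2-D blobs; landmarks are a uniform subsample.
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(8, 0.5, (100, 2))])
landmarks = np.arange(0, 200, 5)                      # every 5th point
L = X[landmarks]
S = -((L[:, None, :] - L[None, :, :]) ** 2).sum(-1)   # negative squared distance
np.fill_diagonal(S, np.median(S))                     # preference = median similarity
ex, lab = affinity_propagation(S)

# LAP step: each non-landmark point takes the cluster of its nearest landmark.
d = ((X[:, None, :] - L[None, :, :]) ** 2).sum(-1)
full_labels = lab[d.argmin(1)]
print(len(ex))
```

Running AP on 40 landmarks instead of all 200 points shrinks the message matrices by a factor of 25, which is the source of the speed-up.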
Fund: Supported by the National Basic Research Program of China (973 Program) (No. 2009CB320601), the National Natural Science Foundation of China (Nos. 60774048 and 60821063), the Program for Cheung Kong Scholars, and the Research Fund for the Doctoral Program of China Higher Education (No. 20070145015)
Abstract: This paper studies the sampled-data reliable H∞ control problem for uncertain continuous-time fuzzy large-scale systems with time-varying delays. First, the fuzzy hyperbolic model (FHM) is used to model certain complex large-scale systems. Then, based on the Lyapunov direct method and the decentralized control theory of large-scale systems, linear matrix inequality (LMI)-based conditions are derived to guarantee H∞ performance not only when all control components are operating well, but also in the presence of some possible actuator failures. Moreover, the exact failure parameters of the actuators are not required; only the lower and upper bounds of the failure parameters are needed. The conditions depend on the upper bound of the time delays and not on the derivatives of the time-varying delays, so the obtained results are less conservative. Finally, two examples are provided to illustrate the design procedure and its effectiveness.
Abstract: This paper makes a study of interactive digital generalization, in which map generalization is divided into an intellective reasoning procedure and an operational procedure, carried out by the human and the computer, respectively. An interactive map generalization environment for large-scale topographic maps is then designed and realized. This research focuses on: ① the significance of researching an interactive map generalization environment, ② the features of large-scale topographic maps and interactive map generalization, ③ the construction of a map generalization-oriented database platform.
Abstract: Social media data have created a paradigm shift in assessing situational awareness during natural disasters or emergencies such as wildfires, hurricanes, and tropical storms. Twitter, as an emerging data source, is an effective and innovative digital platform for observing trends from the perspective of social media users who are direct or indirect witnesses of the calamitous event. This paper aims to collect and analyze Twitter data related to the recent wildfire in California and to perform a trend analysis by classifying firsthand and credible information from Twitter users. This work investigates tweets on the recent wildfire in California and classifies them by witness type into two classes: 1) direct witnesses and 2) indirect witnesses. The collected and analyzed information can be useful to law enforcement agencies and humanitarian organizations for communication and verification of situational awareness during wildfire hazards. Trend analysis is an aggregated approach that includes sentiment analysis and topic modeling performed through domain-expert manual annotation and machine learning. Trend analysis ultimately builds a fine-grained analysis to assess evacuation routes and provide valuable information to firsthand emergency responders.
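As a toy illustration of the direct/indirect witness split, a hand-written keyword heuristic is shown below. This is a deliberate simplification, not the paper's annotated machine-learning classifier: the two vocabulary sets and both example tweets are made up for the sketch.

```python
# First-person, on-the-ground vocabulary suggests a direct witness;
# relayed-news vocabulary suggests an indirect one. (Illustrative word lists.)
DIRECT = {"i", "my", "we", "our", "evacuating", "smoke", "here"}
INDIRECT = {"reported", "news", "according", "officials", "via"}

def witness_type(tweet: str) -> str:
    """Score a tweet against the two vocabularies and pick the larger overlap."""
    words = set(tweet.lower().replace(",", " ").split())
    direct_hits = len(words & DIRECT)
    indirect_hits = len(words & INDIRECT)
    return "direct" if direct_hits > indirect_hits else "indirect"

a = witness_type("I can see smoke from my backyard, we are evacuating now")
b = witness_type("Officials reported the fire has crossed the highway, via local news")
print(a, b)
```

A production system would replace the word lists with features learned from the expert-annotated tweets described in the abstract.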
Fund: Supported by the National Natural Science Foundation of China (No. 41874134)
Abstract: Processing large-scale 3-D gravity data is an important topic in the field of geophysics. Many existing inversion methods lack the capacity to process massive data and are of limited practical applicability. This study applies GPU parallel processing technology to the focusing inversion method, aiming to improve inversion accuracy while speeding up computation and reducing memory consumption, thus obtaining fast and reliable inversion results for large, complex models. In this paper, equivalent storage of the geometric trellis is used to calculate the sensitivity matrix, and the inversion is based on GPU parallel computing technology. The parallel computing program, optimized by reducing data transfer, access restrictions, and instruction restrictions as well as by latency hiding, greatly reduces memory usage, speeds up the computation, and makes fast inversion of large models possible. By comparing and analyzing the computing speed of the traditional single-threaded CPU method and CUDA-based GPU parallel technology, the excellent acceleration performance of GPU parallel computing is verified, which provides ideas for the practical application of theoretical inversion methods otherwise restricted by computing speed and computer memory. The model test verifies that the focusing inversion method can overcome the severe skin effect and the ambiguity of geological body boundaries. Moreover, increasing the number of model cells and inversion data can more clearly depict the boundary position of the abnormal body and delineate its specific shape.
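The memory pressure in such inversions comes from the sensitivity matrix, whose rows couple every observation point to every mesh cell. The CPU sketch below shows the same tiling idea a GPU kernel exploits, building the matrix chunk by chunk so only one tile is in memory at a time. It uses a point-mass approximation for each cell; the paper's equivalent-storage prism kernel is more elaborate, and the grid sizes here are arbitrary.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

# Hypothetical setup: a 4x4 observation grid on the surface (z = 0) and a
# 4x4x2 mesh of cells below it, each cell reduced to a point mass at its centre.
obs = np.array([(x, y, 0.0) for x in range(4) for y in range(4)], float)
cells = np.array([(x + 0.5, y + 0.5, z + 1.0)
                  for x in range(4) for y in range(4) for z in range(2)], float)

def sensitivity(obs, cells, chunk=8):
    """Vertical-gravity kernel per unit mass, built chunk by chunk to bound
    peak memory -- the same tiling a GPU implementation maps to thread blocks."""
    rows = []
    for i in range(0, len(obs), chunk):
        d = cells[None, :, :] - obs[i:i + chunk, None, :]  # (chunk, m, 3)
        r = np.linalg.norm(d, axis=2)                      # source-receiver distance
        rows.append(G * d[:, :, 2] / r**3)                 # g_z contribution
    return np.vstack(rows)

A = sensitivity(obs, cells)
print(A.shape)
```

On a GPU each tile becomes a kernel launch (or thread block), and the chunk size trades memory footprint against transfer overhead, which is exactly the balance the abstract's optimizations address.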
Fund: Supported by the National Natural Science Foundation of China (No. 60603098)
Abstract: Local diversity AdaBoost support vector machine (LDAB-SVM) is proposed for large-scale dataset classification problems. The training dataset is first split into several blocks, and models are built on these blocks. To obtain better performance, AdaBoost is used in building each model. In the boosting iteration step, component learners with higher diversity and accuracy are collected by adjusting the kernel parameters. The local models are then integrated via a voting method. The experimental study shows that LDAB-SVM can deal with large-scale datasets efficiently without reducing the performance of the classifier.
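The block-then-boost-then-vote pipeline can be sketched as follows. To keep the example dependency-free, a weighted decision stump stands in for the SVM component learner (a deliberate substitution; the paper boosts SVMs and diversifies them by adjusting kernel parameters), and the dataset, block count, and round count are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two Gaussian classes in 2-D, labels in {-1, +1}.
X = np.vstack([rng.normal(-1, 1, (300, 2)), rng.normal(1, 1, (300, 2))])
y = np.array([-1] * 300 + [1] * 300)
perm = rng.permutation(len(y)); X, y = X[perm], y[perm]

def fit_stump(X, y, w):
    """Best weighted threshold classifier (stand-in for the SVM learner)."""
    best = (0, 0.0, 1, np.inf)
    for f in range(X.shape[1]):
        for t in np.unique(np.round(X[:, f], 1)):
            for s in (1, -1):
                pred = s * np.sign(X[:, f] - t + 1e-12)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (f, t, s, err)
    return best

def adaboost(X, y, rounds=10):
    """Standard AdaBoost: reweight samples, accumulate weighted learners."""
    w = np.full(len(y), 1 / len(y)); learners = []
    for _ in range(rounds):
        f, t, s, err = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = s * np.sign(X[:, f] - t + 1e-12)
        w *= np.exp(-alpha * y * pred); w /= w.sum()
        learners.append((f, t, s, alpha))
    return learners

def predict(learners, X):
    score = sum(a * s * np.sign(X[:, f] - t + 1e-12) for f, t, s, a in learners)
    return np.sign(score)

# Split the training set into blocks, boost one model per block, then vote.
blocks = np.array_split(np.arange(len(y)), 3)
models = [adaboost(X[b], y[b]) for b in blocks]
votes = sum(predict(m, X) for m in models)
acc = float((np.sign(votes) == y).mean())
print(round(acc, 2))
```

Each block is small enough to train on independently, which is what makes the scheme attractive for datasets too large for a single SVM.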
Abstract: The main constituent parts of unmanned aerial vehicle (UAV) aerial photogrammetry systems are discussed. The key issues, including the division of regional networks, the layout of regional networks, the correction of lens distortion, the optimization of exterior orientation elements, aerial triangulation, image matching and fusion, and the production of digital elevation models, digital orthoimages, real-scene 3D tilt models, and digital line drawings, are analyzed. The advantages of UAV aerial photogrammetry are also compared. This study provides a reference for measuring large-scale topographic maps with UAV photogrammetry systems.
Abstract: This paper presents preliminary work on system dynamics and data mining tools. It tries to understand the dynamics of carrying out large-scale events, such as the Hajj. The study treats a large, recurring problem as a set of variables to consider, such as how the flow of people changes over time and how location interacts with placement. The predicted data are analyzed using Vensim PLE 32 modeling software, GIS ArcMap 10.2.1, and AnyLogic 7.3.1 to assess the potential placement of temporary service points, taking into consideration three dynamic constraints and behavioral aspects: a large population, limited time, and limited space. This research proposes appropriate data analyses to ensure the optimal positioning of service points under the time and space limits of large-scale events. The conceptual framework is the main output of this study, and the technique may add further knowledge to these insights.