Abstract: In order to improve the accuracy and completeness of mining data records from the web, the concepts of isomorphic page and directory page and three algorithms are proposed. An isomorphic web page is one of a set of web pages that share a uniform structure and differ only in their main information. A web page that contains many links to isomorphic web pages is called a directory page. Algorithm 1 finds directory pages on a website by analyzing the similarity of adjacent links: it first sorts the links, then counts the links in each directory; if the count exceeds a given threshold, it finds the similar sub-page links in the directory and outputs the results. A function for judging whether two pages are isomorphic is also proposed. Algorithm 2 mines data records from an isomorphic page using a noise-information filter, based on the fact that the noise information in two isomorphic pages is identical and only the main information differs. Algorithm 3 mines data records from an entire website using a web spider. Experiments show that the proposed algorithms mine data records more completely than existing algorithms. Mining data records from isomorphic pages is an efficient method.
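A minimal sketch of the noise-filtering idea behind Algorithm 2: lines that appear in both of two isomorphic pages are treated as noise, and only the differing lines are kept as data records. The page strings below are illustrative, not from the paper.

```python
def extract_records(page_a, page_b):
    """Keep only the lines of page_a that do not also appear in page_b.

    Two isomorphic pages share identical noise (navigation, boilerplate),
    so the lines unique to each page carry its main information.
    """
    noise = set(page_b.splitlines())
    return [line for line in page_a.splitlines() if line and line not in noise]

page_a = "Home | About\nItem: laptop $999\nCopyright 2009"
page_b = "Home | About\nItem: camera $450\nCopyright 2009"
records = extract_records(page_a, page_b)  # only the main information survives
```

Running the filter in the other direction (`extract_records(page_b, page_a)`) would likewise recover the second page's record.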
Funding: supported by the Meteorological Soft Science Project (Grant No. 2023ZZXM29), the Natural Science Fund Project of Tianjin, China (Grant No. 21JCYBJC00740), and the Key Research and Development-Social Development Program of Jiangsu Province, China (Grant No. BE2021685).
Abstract: As the risks associated with air turbulence are intensified by climate change and the growth of the aviation industry, it has become imperative to monitor and mitigate these threats to ensure civil aviation safety. The eddy dissipation rate (EDR) has been established as the standard metric for quantifying turbulence in civil aviation. This study explores a universally applicable symbolic classification approach based on genetic programming to detect turbulence anomalies using quick access recorder (QAR) data. The detection of atmospheric turbulence is approached as an anomaly detection problem. Comparative evaluations demonstrate that this approach performs on par with direct EDR calculation methods in identifying turbulence events. Moreover, comparisons with alternative machine learning techniques indicate that the proposed technique is the optimal methodology currently available. In summary, symbolic classification via genetic programming enables accurate turbulence detection from QAR data, comparable to that of established EDR approaches and surpassing that achieved with machine learning algorithms. This finding highlights the potential of integrating symbolic classifiers into turbulence monitoring systems to enhance civil aviation safety amid rising environmental and operational hazards.
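The symbolic-classification idea can be illustrated with a toy random search over arithmetic expressions, standing in for the genetic-programming evolution the abstract describes. The three-element feature vectors, labels, and search budget are illustrative assumptions, not the paper's QAR variables.

```python
import random

# Candidate classifiers are symbolic expressions over a feature vector;
# we search for the expression that best separates turbulent records.
OPS = [lambda a, b: a + b, lambda a, b: a - b, lambda a, b: a * b]

def random_expr():
    """Draw a random symbolic classifier: op(x[i], x[j]) > t."""
    op = random.choice(OPS)
    i, j = random.randrange(3), random.randrange(3)
    t = random.uniform(-1, 1)
    return lambda x, op=op, i=i, j=j, t=t: op(x[i], x[j]) > t

def fitness(expr, data):
    """Fraction of records the expression classifies correctly."""
    return sum(expr(x) == label for x, label in data) / len(data)

random.seed(0)
# Synthetic records: (features, is_turbulent); a large third feature
# marks turbulence in this made-up example.
data = [((0.1, 0.2, 2.0), True), ((0.1, 0.1, 0.0), False),
        ((0.3, 0.2, 1.8), True), ((0.2, 0.3, 0.1), False)]
best = max((random_expr() for _ in range(500)), key=lambda e: fitness(e, data))
best_score = fitness(best, data)
```

A real genetic-programming system would evolve a population of such expression trees with crossover and mutation rather than drawing them independently; the selection-by-fitness step shown here is the common core.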
Abstract: In data management software, records of different lengths must be stored in an array, and the number of records often grows as the software is used. A universal data structure is presented that provides a unified interface for dynamically storing records of different lengths, so that developers can call the unified interface directly for data storage, simplifying the design of the data management system.
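One way to realize such a unified interface, sketched under the assumption that records are byte strings: append every record to a single growable buffer and keep an (offset, length) index, so records of any size are added and fetched through the same two calls. The class and method names are illustrative, not the paper's.

```python
class RecordStore:
    """Unified interface for variable-length records in one flat array."""

    def __init__(self):
        self._buf = bytearray()   # one contiguous storage array
        self._index = []          # (offset, length) per record

    def add(self, record: bytes) -> int:
        """Append a record of any length; return its record id."""
        self._index.append((len(self._buf), len(record)))
        self._buf.extend(record)
        return len(self._index) - 1

    def get(self, rid: int) -> bytes:
        """Fetch a record by id, regardless of its length."""
        off, ln = self._index[rid]
        return bytes(self._buf[off:off + ln])

store = RecordStore()
rid1 = store.add(b"short")
rid2 = store.add(b"a much longer record")
```

Because the buffer only grows at the end, adding records stays cheap even as their number increases, which matches the growth scenario the abstract describes.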
Funding: supported by the National Natural Science Foundation of China (21701112, 21504074 and 51573151), the Hong Kong Research Grants Council (HKBU12317216, PolyU153062/18P and PolyU153015/14P), the Areas of Excellence Scheme, University Grants Committee of HKSAR (AoE/P-03/08), the Hong Kong Polytechnic University (1-ZE1C and 1-ZE25), and the Science, Technology and Innovation Committee of Shenzhen Municipality (JCYJ20160531193836532).
Abstract: Patterning of L10 FePt nanoparticles (NPs) with high coercivity offers a promising route to develop bit-patterned media (BPM) for next-generation magnetic data recording systems, but the synthesis of monodisperse FePt NPs and the mass production of their nanopatterns have been a longstanding challenge. Here, highly efficient nanoimprint lithography was applied for large-scale universal patterning, achieved by imprinting a solution of a single-source bimetallic precursor. The rigid coplanar metallic cores and the surrounding flexible tails of the bimetallic complex permit spontaneous molecular arrangement into the highly ordered negative morphology replicated from the soft template. In-situ pyrolysis was then studied by one-pot pyrolysis of the precursor under an Ar/H2 atmosphere, and the resultant NPs were fully characterized to identify their phase, morphology and magnetic properties. Finally, highly ordered patterns on certain substrates were preserved perfectly after pyrolysis and could potentially be utilized in magnetic data recording media.
Funding: National Basic Research Program of China (973 Program) (2005CD312904).
Abstract: To solve the problem of workflow data consistency in a distributed environment, an invalidation strategy based on a timely updated record list is put forward. The strategy improves the classical invalidation strategy with a method of updating the record list and a recovery mechanism for update messages. When the request cycle of a replica is too long, the strategy uses the record-list update method to pause the sending of update messages to it; when the long-cycle replica is requested again, the recovery mechanism resumes the update messages. This strategy not only ensures the consistency of the workflow data but also reduces unnecessary network traffic. Theoretical comparison with common strategies shows that the unnecessary network traffic of this strategy is lower and more stable, and simulation results validate this conclusion.
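The pause/resume mechanism described in the abstract can be sketched as follows, assuming a logical tick clock and a fixed request-cycle limit; the class and attribute names are illustrative, not the paper's.

```python
import itertools

class UpdateList:
    """Sketch of a timely-updating record list for replica invalidation.

    Replicas that have not requested a copy for more than `cycle_limit`
    ticks are paused and stop receiving update messages; when such a
    replica requests again, the recovery step resumes its updates.
    """
    def __init__(self, cycle_limit):
        self.cycle_limit = cycle_limit
        self.last_request = {}          # replica -> tick of last request
        self.paused = set()
        self.clock = itertools.count()  # logical time source

    def request(self, replica):
        self.last_request[replica] = next(self.clock)
        self.paused.discard(replica)    # recovery: resume updating

    def broadcast_update(self):
        now = next(self.clock)
        for r, t in self.last_request.items():
            if now - t > self.cycle_limit:
                self.paused.add(r)      # pause long-cycle replicas
        return [r for r in self.last_request if r not in self.paused]

u = UpdateList(cycle_limit=2)
u.request("A"); u.request("B")   # ticks 0 and 1
first = u.broadcast_update()     # tick 2: both still within the cycle limit
second = u.broadcast_update()    # tick 3: "A" exceeds the limit, paused
u.request("A")                   # tick 4: "A" resumes via recovery
third = u.broadcast_update()     # tick 5: "A" active again, "B" now paused
```

Skipping paused replicas is exactly where the strategy saves the unnecessary network traffic the abstract mentions.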
Abstract: This article studies the fault recorder in power systems and introduces the COMTRADE format. It then uses C++ programming to read the recorded fault data, and adopts Fourier analysis and the symmetrical-component method to filter the data and extract the fundamental waves. Finally, the effectiveness of the data processing method introduced in this paper is verified with CAAP software.
Abstract: This paper presents a rule merging and simplifying method and an improved deviation analysis algorithm. Fuzzy equivalence theory avoids the rigid either/or judgment of traditional equivalence theory. In a data cleaning task, some rules stand in inclusion relations with each other: the equivalence degree of the included rule is smaller than that of the including rule, so a rule merging and simplifying method is introduced to reduce the total computing time. This inclusion relation also affects the deviation of the fuzzy equivalence degree, so an improved deviation analysis algorithm is presented that omits the influence of the included rules' equivalence degrees. Normally, duplicate records are logged to a file and users have to check and verify them one by one, which is time-consuming; the proposed algorithm saves users' labor during duplicate-record checking. Finally, an experiment is presented that demonstrates the feasibility of the approach.
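A minimal sketch of the rule-merging step, under the simplifying assumption that a matching rule can be modelled as the set of fields it compares: every rule whose field set is strictly included in another rule's is dropped, since (per the abstract) its equivalence degree is dominated by the including rule's.

```python
def merge_rules(rules):
    """Drop every rule strictly included in another rule.

    Rules are modelled as frozensets of field names; a rule included in
    another contributes a smaller equivalence degree, so only the
    including rules need to be evaluated, cutting total computing time.
    """
    return [r for r in rules if not any(r < other for other in rules)]

rules = [frozenset({"name"}),                 # included in the next rule
         frozenset({"name", "address"}),
         frozenset({"phone"})]
merged = merge_rules(rules)
```

After merging, deviation analysis only needs to run over `merged`, which is how the method reduces the work of duplicate-record checking.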
文摘The capability of accurately predicting the mineralogical brittleness index (BI) from basic suites of well logs is desirable, as it provides a useful indicator of the fracability of tight formations. Measuring mineralogical components in rocks is expensive and time consuming. However, the basic well-log curves are not well correlated with BI, so correlation-based machine-learning methods are not able to derive highly accurate BI predictions from such data. A correlation-free, optimized data-matching algorithm is configured to predict BI on a supervised basis from well-log and core data available from two published wells in the Lower Barnett Shale Formation (Texas). This transparent open box (TOB) algorithm matches data records by calculating the sum of squared errors between their variables and selecting the best matches as those with the minimum squared errors. It then applies optimizers to adjust the weights applied to individual variable errors to minimize the root mean square error (RMSE) between calculated and predicted BI. The prediction accuracy achieved by TOB using just five well logs (Gr, ρb, Ns, Rs, Dt) to predict BI depends on the density of data records sampled. At a sampling density of about one sample per 0.5 ft, BI is predicted with RMSE ~0.056 and R^(2) ~0.790; at about one sample per 0.1 ft, BI is predicted with RMSE ~0.008 and R^(2) ~0.995. Adding a stratigraphic height index as an additional (sixth) input variable improves BI prediction accuracy to RMSE ~0.003 and R^(2) ~0.999 for the two wells, with only 1 record in 10,000 yielding a BI prediction error of >±0.1. The model has the potential to be applied on an unsupervised basis to predict BI from basic well-log data in surrounding wells that lack mineralogical measurements but have similar lithofacies and burial histories. The method could also be extended to predict elastic rock properties and seismic attributes from well and seismic data to improve the precision of brittleness index and fracability mapping spatially.
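The TOB matching step described above can be sketched as a weighted nearest-record lookup; the optimizer stage that tunes the weights to minimize RMSE is omitted here, and the log readings, weights, and BI values below are illustrative, not the paper's data.

```python
def tob_predict(train, query, weights, k=1):
    """Minimal sketch of the transparent-open-box matching step.

    Each training record is (log_values, BI).  The query record is
    matched by the weighted sum of squared errors over its variables,
    and the BI of the k best matches is averaged.
    """
    def wsse(logs):
        return sum(w * (a - b) ** 2 for w, a, b in zip(weights, logs, query))
    best = sorted(train, key=lambda rec: wsse(rec[0]))[:k]
    return sum(bi for _, bi in best) / k

# Illustrative records: (five normalized log readings, brittleness index)
train = [((0.1, 0.2, 0.3, 0.4, 0.5), 0.40),
         ((0.9, 0.8, 0.7, 0.6, 0.5), 0.80),
         ((0.2, 0.2, 0.3, 0.4, 0.4), 0.45)]
pred = tob_predict(train, query=(0.15, 0.2, 0.3, 0.4, 0.45),
                   weights=(1, 1, 1, 1, 1), k=2)
```

In the full algorithm an optimizer would adjust `weights` so that predictions over a tuning set minimize RMSE against measured BI, which is what makes the method correlation-free yet accurate.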
Abstract: Routinely collected health data are playing an increasingly important role in pharmacoepidemiologic research. The RECORD statement, developed as an extension of the STROBE statement, is currently the main reporting guideline for research using routinely collected health data, but it is not fully applicable to pharmacoepidemiologic studies. The RECORD-PE (RECORD for Pharmacoepidemiology) statement therefore extends the RECORD statement with 15 items specific to pharmacoepidemiologic research. This article gives a detailed interpretation of the RECORD-PE statement and provides corresponding examples, to help researchers in China better understand and apply it.
Abstract: In terms of the temporal-spatial distribution features of earthquakes, we study the completeness of historical data in North China, the region with the most abundant historical data and the longest record history, using several methods of analysis and comparison. The results show that events with Ms≥4 are largely complete since 1484 in North China (except the Huanghai Sea region and remote districts such as the Nei Mongol Autonomous Region), while events with Ms≥6 are largely complete since 1291 in the middle and lower reaches of the Yellow River.
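One simple comparison of the kind such completeness studies rest on: if the annual rate of recorded events above a magnitude threshold is far lower before a candidate year than after it, the earlier interval is likely incomplete rather than quiet. The catalogue below is synthetic and only illustrates the calculation.

```python
def annual_rate(catalog, start, end, mag_min):
    """Events per year with magnitude >= mag_min in the interval [start, end)."""
    n = sum(1 for yr, m in catalog if start <= yr < end and m >= mag_min)
    return n / (end - start)

# Synthetic catalogue (year, Ms): sparse before 1484 and steady after,
# mimicking a record-keeping gap; the numbers are illustrative only.
catalog = [(1300, 4.5), (1400, 5.0)] + [(1484 + 10 * k, 4.2) for k in range(40)]
early = annual_rate(catalog, 1284, 1484, 4.0)   # 2 events over 200 years
late = annual_rate(catalog, 1484, 1884, 4.0)    # 40 events over 400 years
```

The order-of-magnitude jump in rate at the candidate year is the signature of incomplete earlier records, assuming the true seismicity rate is roughly stationary.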
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 61178059 and 61137002) and the Key Program of the Science and Technology Commission of Shanghai Municipality, China (Grant No. 11jc1413300).
Abstract: Four different states of Si15Sb85 and Ge2Sb2Te5 phase-change memory thin films are obtained by modulating the degree of crystallization through laser initialization at different powers or annealing at different temperatures. The polarization characteristics of these two four-level phase-change recording media are analyzed systematically. A simple and effective readout scheme is then proposed, and the readout signal is numerically simulated. The results show that a high-contrast polarization readout can be obtained over a wide wavelength range for four-level phase-change recording media using common phase-change materials. This study will aid in-depth understanding of the physical mechanisms and provides technical approaches to multilevel phase-change recording.
Funding: supported by the Dongguan City Medical and Health Research Project under Grant No. 200910515018, the Guangdong Provincial Department Cooperation Project under Grant No. 2009B090300362, the Sichuan Province Science & Technology Pillar Program under Grant No. 2010SZ0062, and the Fundamental Research Funds for the Central Universities under Grant No. ZYGX2009X016.
Abstract: Effective storage of healthcare information is the foundation of the rapid development of the electronic health record (EHR). This paper presents research on the data model for EHR storage, focusing on the complex and abstract information model and data types of HL7 V3 (Health Level 7 Version 3.0), as well as localized HL7 storage. Using healthcare information exchange and sharing standards can settle the problem of interoperability between heterogeneous systems, and HL7 is the most widely accepted and used such standard. HL7 standardizes the information format during transmission; nevertheless, it cannot directly guide the storage of healthcare data. HL7 V3 defines an abstract information model, the reference information model (RIM), whose data types are complex. In this paper, we propose an approach for converting the abstract HL7 V3 information model into a relational data model. Our approach resolves the RIM's complex relationships and data types and localizes HL7 V3.
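A toy illustration of mapping a heavily simplified slice of the HL7 V3 RIM into relations: the core classes Entity, Role, and Act become tables, and the Participation association becomes a link table with foreign keys. The column choices and sample codes are illustrative assumptions; the real RIM and the paper's mapping are far richer.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entity        (id INTEGER PRIMARY KEY, class_code TEXT, name TEXT);
CREATE TABLE role          (id INTEGER PRIMARY KEY, class_code TEXT,
                            player_id INTEGER REFERENCES entity(id));
CREATE TABLE act           (id INTEGER PRIMARY KEY, class_code TEXT, code TEXT);
CREATE TABLE participation (act_id  INTEGER REFERENCES act(id),
                            role_id INTEGER REFERENCES role(id),
                            type_code TEXT);
""")
conn.execute("INSERT INTO entity VALUES (1, 'PSN', 'Alice')")           # a person
conn.execute("INSERT INTO role VALUES (1, 'PAT', 1)")                   # as patient
conn.execute("INSERT INTO act VALUES (1, 'OBS', 'blood-pressure')")     # an observation
conn.execute("INSERT INTO participation VALUES (1, 1, 'SBJ')")          # subject of the act
row = conn.execute("""
    SELECT e.name, a.code FROM participation p
    JOIN role r ON r.id = p.role_id
    JOIN entity e ON e.id = r.player_id
    JOIN act a ON a.id = p.act_id
""").fetchone()
```

The join reconstructs the RIM traversal "which entity, in which role, participated in which act", which is the navigation pattern a relational EHR store has to support.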
Abstract: Importance: Health information technology has been used to improve diabetes care and outcomes. With the implementation of our diabetes registry, we discovered several flaws in the data. Objective: The aim of this paper is to demonstrate whether improving diabetes templates in electronic medical records, combined with data feedback, improves process and quality outcomes for patients with diabetes. Methods: We redesigned our chronic-disease templates and clinical flow, built a diabetes registry, and used the data for feedback to educate providers and staff and to address inconsistencies. A total of 724 active diabetic patients were identified in October 2009 (pre-implementation) and 731 in June 2011 (post-implementation). Results: The results showed an improvement in the process outcomes of ordering a hemoglobin A1C every 6 months and a microalbumin every 12 months (p-value 0.05). Discussion: Data feedback and the lessons learned were instrumental to our practice change.
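A registry process metric like the ones above can be computed by flagging patients whose last test is missing or older than the target interval. The 6-month A1C and 12-month microalbumin intervals follow the abstract, but the patient data layout and values below are illustrative assumptions.

```python
from datetime import date, timedelta

def overdue(patients, test, interval_days, today):
    """Return patient ids whose last `test` is missing or past the interval.

    `patients` maps patient id -> {test name: date of last result}.
    """
    cutoff = today - timedelta(days=interval_days)
    return sorted(pid for pid, tests in patients.items()
                  if tests.get(test) is None or tests[test] < cutoff)

patients = {
    "p1": {"a1c": date(2011, 5, 1), "microalbumin": date(2010, 9, 1)},
    "p2": {"a1c": date(2010, 11, 1), "microalbumin": date(2011, 2, 1)},
    "p3": {},   # no tests on record
}
late_a1c = overdue(patients, "a1c", 182, today=date(2011, 6, 1))
late_micro = overdue(patients, "microalbumin", 365, today=date(2011, 6, 1))
```

Feeding lists like `late_a1c` back to providers is the data-feedback loop the paper credits for its process improvement.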
Abstract: Personalized medicine is the development of "tailored" therapies that combine traditional medical approaches with the patient's unique genetic profile and the environmental basis of the disease. These individualized strategies encompass disease prevention and diagnosis, as well as treatment. Today's healthcare workforce is faced with massive amounts of patient- and disease-related data. When mined effectively, these data will help produce more efficient and effective diagnoses and treatment, leading to better prognoses for patients at both the individual and population level. Designing preventive and therapeutic interventions for those patients who will benefit most, while minimizing side effects and controlling healthcare costs, requires bringing diverse data sources together in an analytic paradigm. Clinicians' development and application of personalized medicine is largely facilitated, perhaps even driven, by the analysis of "big data". For example, the availability of clinical data warehouses is a significant resource for clinicians practicing personalized medicine. These "big data" repositories can be queried by clinicians with specific questions, and the data used to understand challenges in patient care and treatment. Health informaticians are critical partners in data analytics, including the use of technological infrastructures and predictive data-mining strategies to access data from multiple sources, assisting clinicians in interpreting the data and developing personalized, targeted therapy recommendations.
In this paper, we look at the concept of personalized medicine, offering perspectives on four important, influential topics: 1) the availability of "big data" and the role of biomedical informatics in personalized medicine, 2) the need for interdisciplinary teams in the development and evaluation of personalized therapeutic approaches, and 3) the impact of electronic medical record systems and clinical data warehouses on the field of personalized medicine. In closing, we present our fourth perspective, an overview of some of the ethical concerns related to personalized medicine and health equity.