This study is an exploratory analysis applying natural language processing techniques, namely Term Frequency-Inverse Document Frequency (TF-IDF) and sentiment analysis, to Twitter data. The uniqueness of this work lies in determining the overall sentiment of a politician’s tweets based on the TF-IDF values of the terms used in their published tweets. By calculating the TF-IDF value of terms in the corpus, this work demonstrates the correlation between TF-IDF score and polarity. The results show that weighting terms by TF-IDF yields a more accurate representation of the overall polarity, since terms are weighted by their uniqueness and relevance rather than merely by the frequency with which they appear in the corpus.
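The core idea above can be sketched in a few lines of plain Python: compute TF-IDF weights by hand and use them to weight term polarities instead of raw counts. The tiny `POLARITY` lexicon and the example tweets are hypothetical stand-ins; the study's actual lexicon and corpus are not described here.

```python
import math
from collections import Counter

# Hypothetical toy polarity lexicon; a real study would use a full
# sentiment lexicon or a trained sentiment model.
POLARITY = {"great": 1.0, "win": 0.5, "failure": -1.0, "bad": -0.5}

def tfidf(docs):
    """Per-document TF-IDF weights: tf(t, d) * log(N / df(t))."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    weights = []
    for toks in tokenized:
        tf = Counter(toks)
        weights.append({t: (c / len(toks)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

def weighted_polarity(docs):
    """Overall polarity with each lexicon term weighted by its TF-IDF score."""
    total, norm = 0.0, 0.0
    for doc_weights in tfidf(docs):
        for term, weight in doc_weights.items():
            if term in POLARITY:
                total += weight * POLARITY[term]
                norm += weight
    return total / norm if norm else 0.0
```

Because the result is a TF-IDF-weighted average of lexicon values in [-1, 1], a common term that appears in every document gets an IDF of zero and contributes nothing, which is exactly the de-emphasis of frequent-but-uninformative terms the abstract describes.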
In recent years, machine learning algorithms, and deep learning in particular, have shown promising results in the legal domain. The legal field is strongly affected by information overload, owing to the large amount of legal material stored in textual form. With an increasing number of digitally available documents, legal text processing is essential for analyzing the texts of court proceedings, supporting automated decision prediction, and automating various legal-domain tasks. Legal document classification is a valuable tool in legal services for enhancing the quality and efficiency of legal document review. In this paper, we propose the Sammon Keyword Mapping-based Quadratic Discriminant Recurrent Multilayer Perceptive Deep Neural Classifier (SKM-QDRMPDNC), a system that applies deep neural methods to the problem of legal document classification. The SKM-QDRMPDNC technique consists of several layers that perform keyword extraction and classification. First, a set of legal documents is collected from the dataset. Then keyword extraction is performed using the Sammon mapping technique based on a distance measure. With the extracted features, quadratic discriminant analysis is applied to classify the documents using a likelihood-ratio test. Finally, the classified legal documents are obtained at the output layer. This process is repeated until the minimum error is attained. The experimental assessment is carried out using several performance metrics (accuracy, precision, recall, F-measure, and computation time) on legal documents collected from the dataset. The observed results validate that the proposed SKM-QDRMPDNC technique provides improved performance, achieving higher accuracy, precision, recall, and F-measure with minimum computation time compared to existing methods.
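The keyword-extraction-then-discriminant pipeline can be illustrated with a deliberately simplified sketch: frequency-based keyword extraction stands in for the Sammon-mapping step, and a diagonal-covariance Gaussian likelihood-ratio decision stands in for full quadratic discriminant analysis. The training documents and class names are invented for illustration; this is not the authors' implementation.

```python
import math
from collections import Counter

def top_keywords(docs, k=8):
    # Frequency-based keyword extraction (a stand-in for the Sammon-mapping step).
    counts = Counter(w for d in docs for w in d.lower().split())
    return [w for w, _ in counts.most_common(k)]

def featurize(doc, vocab):
    # Term-count feature vector over the extracted keyword vocabulary.
    c = Counter(doc.lower().split())
    return [c[w] for w in vocab]

def fit_gaussian(rows):
    # Per-feature mean and variance: a diagonal-covariance class model.
    cols = list(zip(*rows))
    n = len(rows)
    means = [sum(col) / n for col in cols]
    variances = [sum((x - m) ** 2 for x in col) / n + 1e-3  # smoothing
                 for col, m in zip(cols, means)]
    return means, variances

def log_likelihood(x, means, variances):
    # Gaussian log-likelihood of a feature vector under one class model.
    return sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
               for xi, m, v in zip(x, means, variances))

def classify(doc, vocab, models):
    # Likelihood-ratio decision: pick the class with the highest log-likelihood.
    x = featurize(doc, vocab)
    return max(models, key=lambda c: log_likelihood(x, *models[c]))
```

Training then reduces to fitting one Gaussian per class over the keyword features, e.g. `models = {c: fit_gaussian([featurize(d, vocab) for d in docs]) for c, docs in train.items()}`.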
<strong>Background:</strong> Patients’ medical records document care processes and support communication among healthcare workers for continued patient management. Incomplete or inaccurate documentation can adversely affect the quality of patient care, leading to medication and treatment errors and increased morbidity and mortality. Quality documentation in medical records is therefore an essential component of optimal healthcare and facilitates an individual’s continuity of care. This study aimed to assess the quality of documentation of clinical data by reviewing the accuracy and completeness of clinical records of newly diagnosed HIV-positive persons. The study is a sub-analysis of a prospective longitudinal study that followed a cohort of 12,413 persons newly diagnosed with HIV infection. Severe limitations in retrieving reliable information and data became an obstacle to our research and led the study team to conduct a medical-records documentation and data audit to verify the accuracy and completeness of the data for newly diagnosed HIV-positive persons. <strong>Methods: </strong>A cross-sectional study was conducted using routine data from 75 randomly selected newly diagnosed HIV-positive persons aged 12 years and above, enrolled between June 1, 2014 and March 31, 2015 in 36 purposively selected primary health care (PHC) clinics in South Africa. The facilities were selected from three high HIV-burden districts of South Africa (Gert Sibande, uThukela and City of Johannesburg).
<strong>Results: </strong>Significant differences in the accuracy and completeness of clinical records were observed between data generated through self-assessment by the facility managers and data collected through review of the patients’ clinical stationery and facility registers. Eighty percent of the newly diagnosed HIV-positive persons were not documented as screened for tuberculosis (TB) on the clinical chart, and 69% were not clinically staged (WHO staging). Furthermore, 80% of newly diagnosed HIV-positive persons’ follow-up visit dates were not documented in the patient’s clinical chart. Completeness of the data elements on the case record forms ranged from as low as 26% to a maximum of 66%. Notably, the clients’ information documented in HIV counselling and testing (HCT) registers, continuum-of-care registers and clinical charts was only partially completed. <strong>Conclusion:</strong> Each of the healthcare facilities under study had significant gaps in medical-records documentation of clinical data on newly diagnosed HIV-positive persons. Data and information accuracy and completeness were a serious challenge in most facilities during the period under investigation. Of particular interest was the inconsistency of data recorded in the HCT registers, continuum-of-care registers and clinical charts of individual patients. <strong>This is a major impediment to comprehensive HIV/AIDS care.</strong>
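The completeness percentages reported above come from a straightforward field-level audit: for each audited data element, count the records with a non-missing value. A minimal sketch of that computation, with hypothetical field names and records, might look like:

```python
def field_completeness(records, fields):
    """Percentage of records with a non-missing value for each audited field."""
    n = len(records)
    return {f: round(100 * sum(1 for r in records if r.get(f) not in (None, "")) / n, 1)
            for f in fields}

# Hypothetical clinical-chart records; field names are illustrative only.
charts = [
    {"tb_screen": "done", "who_stage": "",        "follow_up": None},
    {"tb_screen": "",     "who_stage": "stage 2", "follow_up": "2014-07-01"},
]
completeness = field_completeness(charts, ["tb_screen", "who_stage", "follow_up"])
# each field is present in 1 of 2 records, i.e. 50.0% complete
```

In a real audit the same function would run over the HCT register, continuum-of-care register and clinical chart separately, making cross-source inconsistencies like those reported above directly comparable.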
A field-programmable gate array (FPGA) based high-speed broadband data acquisition system is designed. The system supports simultaneous dual-channel acquisition, with a maximum sampling rate of 500 MSa/s and a bandwidth of 200 MHz, which addresses the problems of acquiring and processing large-bandwidth, high-speed signals. At present, the data acquisition system is successfully used in broadband receiver test systems.
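The quoted figures are self-consistent: sampling at 500 MSa/s gives a Nyquist frequency of 250 MHz, comfortably above the 200 MHz analog bandwidth. A one-line check of that criterion:

```python
def nyquist_ok(sample_rate_msps, bandwidth_mhz):
    """True when the analog bandwidth stays at or below the Nyquist frequency (fs/2)."""
    return bandwidth_mhz <= sample_rate_msps / 2.0

# The reported figures: 500 MSa/s sampling, 200 MHz bandwidth.
nyquist_ok(500, 200)  # True: 200 MHz <= 250 MHz Nyquist frequency
```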
Since the end of the previous decade, hypertext techniques have been applied in many areas. This paper first presents a hypertext data model with version control, applied to the digital delivery of engineering documents in the Optical Disk based Electronic Archives Management System (ODEAMS), which has successfully solved several problems in engineering data management. After introducing the requirements and characteristics of engineering data management, the paper then describes the details of implementing the hypertext network in ODEAMS.
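The paper does not spell out its data model, but a hypertext node with version control can be sketched minimally as a node holding a linear version history, with links pinned to a specific version of their target so later revisions do not silently change what a link points at. All names here are hypothetical.

```python
class VersionedNode:
    """Minimal hypothetical sketch of a hypertext node with a linear version history."""

    def __init__(self, node_id, content):
        self.node_id = node_id
        self.versions = [content]   # version 0 is the original content
        self.links = []             # (target_node_id, target_version) pairs

    def revise(self, content):
        """Append a new version and return its version number."""
        self.versions.append(content)
        return len(self.versions) - 1

    def current(self):
        return self.versions[-1]

    def link_to(self, target, version=None):
        # Pin the link to a specific version (default: the target's current one),
        # so revising the target later does not break this link.
        v = len(target.versions) - 1 if version is None else version
        self.links.append((target.node_id, v))
```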
Existing data mining methods focus mostly on relational databases and structured data, not on complex structured data such as that found in the extensible markup language (XML). By converting the XML document type description to a relational schema that records the relations in the XML data, and by using an XML data mining language, the XML data mining system presents a strategy for mining information from XML.
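The essential first step described above, flattening XML into relational rows so that conventional mining methods apply, can be sketched with the standard library. The sample document and `record_tag` parameter are illustrative assumptions, not the paper's actual schema.

```python
import xml.etree.ElementTree as ET

def xml_to_rows(xml_text, record_tag):
    """Flatten each <record_tag> element into a tag-to-text dict, i.e. one relational row."""
    root = ET.fromstring(xml_text)
    return [{child.tag: (child.text or "").strip() for child in rec}
            for rec in root.iter(record_tag)]

sample = """<library>
  <book><title>XML Mining</title><year>2001</year></book>
  <book><title>Data Systems</title><year>1999</year></book>
</library>"""
rows = xml_to_rows(sample, "book")
# rows is now a list of flat dicts, ready for a relational mining backend
```

This sketch handles only one level of nesting; a full conversion of a document type description would also map nested and repeated elements to separate tables with foreign keys.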