E-learning produces data on the learners' use of the software, which helps the teacher perceive the learners' mental state and learning efficiency, so it is of great value to make full use of these data. Taking the Speexx foreign language learning system as a case study, this thesis introduces the function of such data and the ways they can be used to facilitate blended teaching and learning.
To comprehensively understand the Arctic and Antarctic upper atmosphere, it is often crucial to analyze various data that are obtained from many regions. Infrastructure that promotes such interdisciplinary studies of the upper atmosphere has been developed by a Japanese inter-university project called the Inter-university Upper atmosphere Global Observation Network (IUGONET). The objective of this paper is to describe the infrastructure and tools developed by IUGONET. We focus on the data analysis software, which is written in Interactive Data Language (IDL) and is a plug-in for the THEMIS Data Analysis Software suite (TDAS), a set of IDL libraries used to visualize and analyze satellite- and ground-based data. We present plots of upper atmospheric data provided by IUGONET as example applications and verify the usefulness of the software in the study of polar science. We discuss IUGONET's new and unique developments, i.e., an executable file of TDAS that can run on the IDL Virtual Machine, IDL routines to retrieve metadata from the IUGONET database, and an archive of 3-D simulation data that uses the Common Data Format (CDF) so that it can easily be used with TDAS.
Funding: supported by the Special Educational Research Budget (Research Promotion) [FY2009] and the Special Budget (Project) [FY2010 and later years] from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan, and by the GRENE Arctic Climate Change Research Project, Japan.
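For readers outside the IDL ecosystem, the sketch below shows the kind of access a CDF archive allows from Python, using the third-party cdflib package; the file and variable names are hypothetical placeholders, and the IUGONET tools themselves are the IDL/TDAS routines described above, not this code.

```python
# A minimal sketch (not the IUGONET IDL tools): load one variable from a CDF
# file with cdflib and plot it. File and variable names are hypothetical.
import cdflib
import matplotlib.pyplot as plt

cdf = cdflib.CDF("iugonet_example_20100101.cdf")  # hypothetical file
epochs = cdf.varget("Epoch")                      # hypothetical variable name
values = cdf.varget("geomagnetic_field_H")        # hypothetical variable name

# CDF epochs use a CDF-specific time representation; convert before plotting.
times = cdflib.cdfepoch.to_datetime(epochs)

plt.plot(times, values)
plt.xlabel("UT")
plt.ylabel("H component (nT)")
plt.title("Example ground magnetometer plot")
plt.show()
```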
Entity recognition and extraction are the foundations of knowledge graph construction. Entity data in the field of software engineering come from different platforms and communities and have different formats. This paper divides multi-source software knowledge entities into unstructured data, semi-structured data, and code data. For these different types of data, Bi-directional Long Short-Term Memory (Bi-LSTM) with a Conditional Random Field (CRF), template matching, and abstract syntax trees are used and integrated into a multi-source software knowledge entity extraction integration model (MEIM) to extract software entities. The model can be updated continuously based on users' feedback to improve its accuracy. To deal with the shortage of entity annotation datasets, keyword extraction methods based on Term Frequency-Inverse Document Frequency (TF-IDF), TextRank, and K-Means are applied to the annotation tasks. The proposed MEIM model is applied to the Spring Boot framework, where it demonstrates good adaptability. The extracted entities are used to construct a knowledge graph, which is applied to association retrieval and association visualization.
Funding: Zhifang Liao: Ministry of Science and Technology, Key Research and Development Project (2018YFB003800); Hunan Provincial Key Laboratory of Finance & Economics Big Data Science and Technology (Hunan University of Finance and Economics) 2017TP1025; HNNSF 2018JJ2535. Shengzong Liu: NSF 61802120.
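To make the annotation step concrete, here is a minimal sketch of TF-IDF keyword extraction, one of the three annotation aids the abstract names; the corpus, parameters, and use of scikit-learn are illustrative assumptions, not the authors' setup.

```python
# Sketch of TF-IDF keyword extraction for candidate entity annotation.
# The tiny corpus and top-k choice are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "Spring Boot auto-configuration registers beans based on the classpath.",
    "The RestController annotation combines Controller and ResponseBody.",
    "Bi-LSTM with a CRF layer labels tokens in unstructured documentation.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)
terms = vectorizer.get_feature_names_out()

# Take the three highest-weighted terms per document as keyword candidates.
for i, row in enumerate(tfidf.toarray()):
    top = sorted(zip(terms, row), key=lambda t: t[1], reverse=True)[:3]
    print(f"doc {i}: {[term for term, _ in top]}")
```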
Prediction and diagnosis of cardiovascular diseases (CVDs), based among other things on medical examinations and patient symptoms, are among the biggest challenges in medicine. About 17.9 million people die from CVDs annually, accounting for 31% of all deaths worldwide. With a timely prognosis and thorough consideration of the patient's medical history and lifestyle, it is possible to predict CVDs and take preventive measures to eliminate or control this life-threatening disease. In this study, we used various patient datasets from a major hospital in the United States as prognostic factors for CVD. The data were obtained by monitoring a total of 918 adult patients aged 28-77 years. We present a data mining modeling approach to analyze the performance, classification accuracy, and number of clusters on the Cardiovascular Disease Prognostic datasets in unsupervised machine learning (ML) using the Orange data mining software. Various techniques are then used to classify the model parameters, such as k-nearest neighbors, support vector machine, random forest, artificial neural network (ANN), naïve Bayes, logistic regression, stochastic gradient descent (SGD), and AdaBoost. To determine the number of clusters, various unsupervised ML clustering methods were used, such as k-means, hierarchical, and density-based spatial clustering of applications with noise (DBSCAN) clustering. The results showed that the best-performing models in the performance and classification-accuracy analysis were SGD and ANN, both of which achieved a high score of 0.900 on the Cardiovascular Disease Prognostic datasets. Based on the results of most clustering methods, such as k-means and hierarchical clustering, the Cardiovascular Disease Prognostic datasets can be divided into two clusters. The prognostic accuracy for CVD depends on the accuracy of the proposed model in determining the diagnostic model: the more accurate the model, the better it can predict which patients are at risk for CVD.
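The following scikit-learn sketch mirrors the two analyses described, supervised classification with SGD and k-means clustering with k = 2; the paper itself used the Orange software, and the synthetic data and feature count here are placeholders, not the hospital dataset.

```python
# Sketch of the abstract's two analyses on placeholder data:
# (1) supervised classification with SGD, (2) k-means clustering with k=2.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(918, 11))    # 918 patients, 11 clinical features (placeholder)
y = rng.integers(0, 2, size=918)  # 1 = heart disease, 0 = normal (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

clf = SGDClassifier(random_state=0).fit(scaler.transform(X_train), y_train)
pred = clf.predict(scaler.transform(X_test))
print("classification accuracy:", accuracy_score(y_test, pred))

# Unsupervised side: two clusters, matching what most clustering methods found.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scaler.transform(X))
print("cluster sizes:", np.bincount(km.labels_))
```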
The Energization and Radiation in Geospace (ERG) mission seeks to explore the dynamics of the radiation belts in the Earth's inner magnetosphere with a space-borne probe (the ERG satellite) in coordination with related ground observations and simulation/modeling studies. For this mission, the Science Center of the ERG project (ERG-SC) will provide a useful data analysis platform based on the THEMIS Data Analysis Software suite (TDAS), which has been widely used by researchers in many conjunction studies of the Time History of Events and Macroscale Interactions during Substorms (THEMIS) spacecraft and ground data. To import SuperDARN data to this highly useful platform, ERG-SC, in close collaboration with SuperDARN groups, developed a Common Data Format (CDF) design suitable for fitacf data and has prepared an open database of SuperDARN data archived in CDF. ERG-SC has also been developing programs written in Interactive Data Language (IDL) to load fitacf CDF files and to generate various kinds of plots: not only range-time-intensity-type plots but also two-dimensional map plots that can be superposed with other data, such as all-sky images of THEMIS-GBO and orbital footprints of various satellites. The CDF-TDAS scheme developed by ERG-SC will make it easier for researchers who are not familiar with SuperDARN data to access and analyze them, and will thereby facilitate collaborative studies with satellite data, such as the inner magnetosphere data provided by the ERG (Japan)-RBSP (USA)-THEMIS (USA) fleet.
Network updates have become increasingly prevalent since the broad adoption of software-defined networks (SDNs) in data centers. Modern TCP designs, including the cutting-edge TCP variants DCTCP, CUBIC, and BBR, however, are not resilient to network updates that provoke flow rerouting. In this paper, we first demonstrate that popular TCP implementations perform inadequately in the presence of frequent and inconsistent network updates, because such updates result in out-of-order packets and packet drops induced by transitory congestion, leading to serious performance deterioration. We look into the causes and propose a network update-friendly TCP (NUFTCP), an extension of the DCTCP variant, as a solution. Simulations are used to assess the proposed NUFTCP. Our findings reveal that NUFTCP manages the out-of-order packets and packet drops triggered by network updates more effectively, and that it outperforms DCTCP considerably.
Funding: supported by King Khalid University through the Large Group Project (No. RGP.2/312/44).
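As a rough illustration of the underlying failure mode (not the authors' NUFTCP algorithm), the sketch below mimics post-reroute packet reordering and counts how often the classic three-duplicate-ACK rule would fire even though no packet was lost; the reorder window and trace are invented.

```python
# Illustration only: reordering after a reroute can trip TCP's fast-retransmit
# heuristic (3 duplicate ACKs) even though every packet eventually arrives.
import random

def deliver_with_reroute(seq_numbers, reorder_window=5):
    """Shuffle packets within small windows to mimic path-change reordering."""
    out = list(seq_numbers)
    for i in range(0, len(out), reorder_window):
        segment = out[i:i + reorder_window]
        random.shuffle(segment)
        out[i:i + reorder_window] = segment
    return out

random.seed(1)
packets = deliver_with_reroute(range(1, 101))

received, cum, dup_acks, spurious = set(), 0, 0, 0
for seq in packets:
    received.add(seq)
    if seq == cum + 1:
        while cum + 1 in received:   # cumulative ACK advances over filled holes
            cum += 1
        dup_acks = 0
    else:
        dup_acks += 1                # receiver re-ACKs cum: a duplicate ACK
        if dup_acks == 3:            # classic fast-retransmit trigger
            spurious += 1            # retransmission fires, yet nothing was lost
print("spurious fast retransmits (no packet was actually dropped):", spurious)
```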
With the increasing demand for and wide application of high-performance commodity multi-core processors, both the quantity and the scale of data centers grow dramatically, bringing heavy energy consumption. Researchers and engineers have applied much effort to reducing hardware energy consumption, but software is the true consumer of power and another key to making better use of energy. System software is critical to better energy utilization, because it is not only the manager of hardware but also the bridge and platform between applications and hardware. In this paper, we summarize some trends that can affect the efficiency of data centers. Meanwhile, we investigate the causes of software inefficiency. Based on these studies, major technical challenges and corresponding possible solutions for attaining green system software in programmability, scalability, efficiency, and software architecture are discussed. Finally, some of our research progress on trusted energy-efficient system software is briefly introduced.
Software intelligent development has become one of the most important research trends in software engineering. In this paper, we put forward, for the first time, two key concepts: the intelligent development environment (IntelliDE) and the software knowledge graph. IntelliDE is an ecosystem in which software big data are aggregated, mined, and analyzed to provide intelligent assistance throughout the life cycle of software development. We present its architecture and discuss its key research issues and challenges. The software knowledge graph is a software knowledge representation and management framework, which plays an important role in IntelliDE. We study its concept and introduce some concrete details and examples to show how it can be constructed and leveraged.
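To make the software-knowledge-graph notion concrete, here is a toy sketch that stores a few invented code entities and relations in a labeled directed graph with networkx and answers a simple association-retrieval query; the paper's framework is far richer than this.

```python
# Toy software knowledge graph: code entities and their relations as a labeled
# directed graph. All entities and relations below are invented examples.
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("UserService", kind="class")
g.add_node("UserRepository", kind="class")
g.add_node("findById", kind="method")
g.add_node("Issue #1024", kind="bug report")

g.add_edge("UserService", "UserRepository", relation="depends on")
g.add_edge("UserRepository", "findById", relation="declares")
g.add_edge("Issue #1024", "findById", relation="mentions")

# Association retrieval: which entities are connected to a given method?
for src, dst, data in g.in_edges("findById", data=True):
    print(f"{src} --{data['relation']}--> {dst}")
```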
This paper presents a literature review in the field of summarizing software artifacts, focusing on bug reports, source code, mailing lists, and developer discussions. From Jan. 2010 to Apr. 2016, numerous summarization techniques, approaches, and tools were proposed to satisfy the ongoing demand for improving software performance and quality and for helping developers understand the problems at hand. Since the aforementioned artifacts contain both structured and unstructured data at the same time, researchers have applied different machine learning and data mining techniques to generate summaries. This paper therefore first provides a general perspective on the state of the art, describing the types of artifacts, the approaches to summarization, and the common portions of experimental procedures shared among these artifacts. Moreover, we discuss the applications of summarization, i.e., which tasks have been accomplished through summarization. Next, this paper presents tools that were built for summarization tasks or employed during them. In addition, we present the different summarization evaluation methods employed in the selected studies, as well as other important factors used for evaluating generated summaries, such as adequacy and quality. Moreover, we briefly present modern communication channels and the complementarities and commonalities among different software artifacts. Finally, some thoughts about the challenges applicable to the existing studies in general, as well as future research directions, are discussed. This survey of existing studies will give future researchers a broad and useful background on the main and important aspects of this research field.
Funding: supported in part by the National Basic Research 973 Program of China under Grant No. 2013CB035906, the Fundamental Research Funds for the Central Universities of China under Grant No. DUT13RC(3)53, the New Century Excellent Talents in University of China program under Grant No. NCET-13-0073, and the National Natural Science Foundation of China under Grant No. 61300017.
Cloud monitoring is a source of big data that are constantly produced from traces of infrastructures, platforms, and applications. Analysis of monitoring data delivers insights into the system's workload and usage patterns and ensures that workloads are operating at optimum levels. The analysis process involves data query and extraction, data analysis, and result visualization. Since the volume of monitoring data is large, these operations require a scalable and reliable architecture to extract, aggregate, and analyze data at an arbitrary range of granularity. Ultimately, the results of analysis become knowledge of the system and should be shared and communicated. This paper presents our cloud service architecture, which exploits a search cluster for data indexing and query. We develop REST APIs through which the data can be accessed by different analysis modules. This architecture enables extensions that integrate with software frameworks for both batch processing (such as Hadoop) and stream processing (such as Spark) of big data. The analysis results are structured in Semantic MediaWiki pages in the context of the monitoring data source and the analysis process. This cloud architecture is empirically assessed to evaluate its responsiveness when processing a large set of data records under node failures.
Funding: supported by Discovery grant No. RGPIN 2014-05254 from the Natural Sciences & Engineering Research Council (NSERC), Canada.
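The sketch below illustrates the style of interaction such REST APIs enable, querying a time range of monitoring records and aggregating them client-side; the endpoint, parameters, and response shape are hypothetical stand-ins, not the paper's actual API.

```python
# Hedged sketch of a REST query against a monitoring-data service.
# The base URL, query parameters, and JSON fields are hypothetical.
import requests

BASE = "http://monitor.example.org/api/v1"  # hypothetical service endpoint

resp = requests.get(
    f"{BASE}/metrics",
    params={
        "metric": "cpu_usage",
        "from": "2016-01-01T00:00:00Z",
        "to": "2016-01-02T00:00:00Z",
        "granularity": "5m",
    },
    timeout=10,
)
resp.raise_for_status()
records = resp.json()["records"]            # hypothetical response field

# Client-side aggregation of the extracted records.
avg = sum(r["value"] for r in records) / len(records)
print(f"mean CPU usage over {len(records)} samples: {avg:.2f}%")
```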
In this paper we propose a service-oriented architecture for spatial data integration (SOA-SDI) in the context of a large number of available spatial data sources that physically sit at different places, and we develop web-based GIS systems based on SOA-SDI, allowing client applications to pull in, analyze, and present spatial data from those available sources. The proposed architecture logically comprises four layers or components: a layer of multiple data provider services, a data integration layer, a layer of backend services, and a front-end graphical user interface (GUI) for spatial data presentation. On the basis of the four-layered SOA-SDI framework, WebGIS applications can be deployed quickly, which shows that SOA-SDI has the potential to reduce software development effort and shorten the development period.
Funding: supported by the Research Fund of the Key GIS Lab of the Education Ministry (No. 200610).
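A toy sketch of the layered idea follows: several data-provider services behind one common interface, an integration layer that merges their results, and a consumer standing in for the GUI; all names, types, and the bounding-box convention are illustrative assumptions, not the paper's implementation.

```python
# Toy sketch of the SOA-SDI layering: provider services behind one interface,
# an integration layer that merges features, and a stand-in for the front end.
from typing import Dict, List, Protocol, Tuple

BBox = Tuple[float, float, float, float]  # (min_lat, min_lon, max_lat, max_lon)

class SpatialDataProvider(Protocol):
    """Contract implemented by every service in the data-provider layer."""
    def get_features(self, bbox: BBox) -> List[Dict]: ...

class RoadsProvider:
    def get_features(self, bbox: BBox) -> List[Dict]:
        return [{"layer": "roads", "geometry": "LINESTRING(...)"}]

class RiversProvider:
    def get_features(self, bbox: BBox) -> List[Dict]:
        return [{"layer": "rivers", "geometry": "LINESTRING(...)"}]

def integrate(providers: List[SpatialDataProvider], bbox: BBox) -> List[Dict]:
    """Data-integration layer: pull features from every provider and merge."""
    merged: List[Dict] = []
    for provider in providers:
        merged.extend(provider.get_features(bbox))
    return merged

# The backend-services and GUI layers would serve and render this; we print it.
bbox = (30.0, 114.0, 31.0, 115.0)
for feature in integrate([RoadsProvider(), RiversProvider()], bbox):
    print(feature["layer"], feature["geometry"])
```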