Journal Articles (2 results found)
1. Classification and quantification of timestamp data quality issues and its impact on data quality outcome
Author: Rex Ambe. Data Intelligence (EI), 2024, Issue 3, pp. 812-833 (22 pages).
Timestamps play a key role in process mining because they determine the chronology in which events occurred and, subsequently, how those events are ordered in process modelling. The timestamp in process mining gives insight into process performance, conformance, and modelling. Problems with timestamps therefore result in misrepresentations of the mined process. A few articles have been published on the quantification of data quality problems, but at the time of this paper only one of them addresses the quantification of timestamp quality problems. This article evaluates the quality of timestamps in an event log across two axes, using eleven quality dimensions and four levels of potential data quality problems. The eleven data quality dimensions were obtained through a thorough literature review of more than fifty process mining articles that focus on quality dimensions. The evaluation resulted in twelve data quality quantification metrics, which were applied to the MIMIC-II dataset as an illustration. The outcome of the timestamp quality quantification using the proposed typology enabled the user to appreciate the quality of the event log, and thus makes it possible to evaluate the risk of carrying out specific data cleaning measures to improve the process mining outcome.
Keywords: Timestamp; Process mining; Data quality dimensions; Event log; Quality metrics; Business process
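The article's twelve quantification metrics are not reproduced here. As a minimal sketch of the general idea, the Python example below (all field names and the event-log layout are assumptions, not the authors') computes two toy timestamp quality indicators: the share of events with missing timestamps and the share of cases whose events appear out of chronological order.

```python
# Illustrative sketch (not taken from the paper): two simple timestamp quality
# indicators over an event log -- the share of events with missing timestamps
# and the share of cases whose events are not in chronological order.
from collections import defaultdict
from datetime import datetime

def timestamp_quality(event_log):
    """event_log: list of dicts with 'case_id', 'activity', 'timestamp' (datetime or None)."""
    missing = sum(1 for e in event_log if e["timestamp"] is None)

    cases = defaultdict(list)
    for e in event_log:
        if e["timestamp"] is not None:
            cases[e["case_id"]].append(e["timestamp"])

    # A case counts as "disordered" if its recorded order disagrees with timestamp order.
    disordered = sum(1 for ts in cases.values() if ts != sorted(ts))

    return {
        "missing_timestamp_ratio": missing / len(event_log) if event_log else 0.0,
        "disordered_case_ratio": disordered / len(cases) if cases else 0.0,
    }

if __name__ == "__main__":
    log = [
        {"case_id": "c1", "activity": "admit",     "timestamp": datetime(2024, 3, 1, 9, 0)},
        {"case_id": "c1", "activity": "triage",    "timestamp": datetime(2024, 3, 1, 8, 45)},  # out of order
        {"case_id": "c2", "activity": "admit",     "timestamp": None},                          # missing
        {"case_id": "c2", "activity": "discharge", "timestamp": datetime(2024, 3, 2, 12, 0)},
    ]
    print(timestamp_quality(log))
```

Such ratios are only a starting point; the paper's typology spans eleven dimensions and four levels of potential problems.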
2. UiLog: Improving Log-Based Fault Diagnosis by Log Analysis (Cited: 4)
Author: De-Qing Zou. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2016, Issue 5, pp. 1038-1052 (15 pages).
In modern computer systems, system event logs have always been the primary source for checking system status. As computer systems become more and more complex, software and hardware interact more frequently, and the components generate enormous amounts of log information, including running reports and fault information. The sheer quantity of data is a great challenge for manual analysis. In this paper, we implement a management and analysis system for log information that can assist system administrators in understanding the real-time status of the entire system, classifying logs into different fault types, and determining the root cause of faults. In addition, we improve the existing fault correlation analysis method based on the results of system log classification. We apply the system in a cloud computing environment for evaluation. The results show that our system can classify fault logs automatically and effectively. With the proposed system, administrators can easily detect the root cause of faults.
Keywords: Fault diagnosis; System event log; Log classification; Fault correlation analysis
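UiLog's own classification and correlation methods are defined in the paper itself. The sketch below is only a hedged illustration (the categories, patterns, and sample lines are invented) of the basic preprocessing step it builds on: mapping raw system log lines to coarse fault types before correlating them.

```python
# Illustrative sketch (not UiLog's actual algorithm): a minimal keyword-based
# classifier that assigns raw log lines to coarse fault categories, the kind of
# labelling step on which fault correlation analysis can then be built.
import re

# Hypothetical category -> keyword patterns; a real system would learn these
# from labelled logs or extract message templates automatically.
FAULT_PATTERNS = {
    "disk":    re.compile(r"\b(i/o error|disk failure|bad sector)\b", re.IGNORECASE),
    "memory":  re.compile(r"\b(out of memory|oom-killer|page allocation failure)\b", re.IGNORECASE),
    "network": re.compile(r"\b(connection (refused|reset)|timeout|unreachable)\b", re.IGNORECASE),
}

def classify_log_line(line: str) -> str:
    """Return the first matching fault category, or 'other' if none matches."""
    for category, pattern in FAULT_PATTERNS.items():
        if pattern.search(line):
            return category
    return "other"

if __name__ == "__main__":
    sample = [
        "kernel: sda1: I/O error, dev sda, sector 123456",
        "kernel: Out of memory: Kill process 4321 (java)",
        "sshd[999]: Connection reset by 10.0.0.5",
    ]
    for line in sample:
        print(classify_log_line(line), "<-", line)
```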