Femtosecond laser direct inscription is a technique especially useful for prototyping due to its distinctive advantages, such as high fabrication accuracy, true 3D processing flexibility, and no need for a mold or photomask. In this paper, we demonstrate the design and fabrication of a planar lightwave circuit (PLC) power splitter encoded with waveguide Bragg gratings (WBGs) using a femtosecond laser inscription technique for passive optical network (PON) fault localization. Both the reflected wavelengths and the wavelength intervals of the WBGs can be conveniently tuned. In the experiment, we succeeded in directly inscribing WBGs in 1×4 PLC splitter chips with a wavelength interval of about 4 nm and an adjustable reflectivity of up to 70% in the C-band. The proposed method is suitable for prototyping PLC splitters encoded with WBGs for PON fault localization applications.
Software debugging accounts for the vast majority of the financial and time costs of software development and maintenance. Thus, software fault localization approaches that can help automate the debugging process have become a hot topic in software engineering. Given this demand, an approach based on the artificial bee colony (ABC) algorithm is proposed and integrated with other related techniques. In this process, the source program is first instrumented after analyzing its dependence information. The test case sets are then compiled and run on the instrumented program, and the execution results are input to the ABC algorithm. The algorithm determines the largest fitness value and the best food source by calculating the average fitness of the employed bees in the iterative process. The program unit with the highest suspicion score corresponding to the best test case set is regarded as the final fault location. Experiments are conducted with the TCAS program from the Siemens suite. The results demonstrate that the proposed fault localization method is effective and efficient. The ABC algorithm efficiently avoids local optima and, to a large extent, ensures the validity of the fault location.
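The abstract gives only the outline of the ABC search; as an illustrative sketch (the objective, bounds, and every parameter below are assumptions, not the paper's setup), a minimal ABC loop with employed-bee, onlooker, and scout phases looks like this:

```python
import random

def abc_minimize(f, bounds, n_bees=10, n_iter=100, limit=10, seed=1):
    """Minimal artificial-bee-colony minimizer for a 1-D objective.

    Employed bees locally perturb food sources, onlookers resample
    sources in proportion to fitness, and scouts replace sources that
    have stalled for more than `limit` rounds.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    sources = [rng.uniform(lo, hi) for _ in range(n_bees)]
    trials = [0] * n_bees

    def fitness(x):                       # higher is better
        return 1.0 / (1.0 + f(x))         # assumes f(x) >= 0

    def neighbour(i):
        k = rng.randrange(n_bees)
        cand = sources[i] + rng.uniform(-1.0, 1.0) * (sources[i] - sources[k])
        return min(max(cand, lo), hi)

    best = min(sources, key=f)
    for _ in range(n_iter):
        for i in range(n_bees):           # employed-bee phase
            cand = neighbour(i)
            if fitness(cand) > fitness(sources[i]):
                sources[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        fits = [fitness(s) for s in sources]
        total = sum(fits)
        for _ in range(n_bees):           # onlooker phase: fitness-proportional pick
            r, acc, i = rng.uniform(0.0, total), 0.0, 0
            for j, ft in enumerate(fits):
                acc += ft
                if acc >= r:
                    i = j
                    break
            cand = neighbour(i)
            if fitness(cand) > fitness(sources[i]):
                sources[i], trials[i] = cand, 0
        for i in range(n_bees):           # scout phase: abandon stalled sources
            if trials[i] > limit:
                sources[i], trials[i] = rng.uniform(lo, hi), 0
        best = min(sources + [best], key=f)
    return best

best = abc_minimize(lambda x: (x - 3.0) ** 2, (-10.0, 10.0))
```

In the paper the "food sources" are test case sets and the fitness comes from execution results; the toy objective above only shows the mechanics of the search.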
Fault localization is an important topic in software testing, as it enables developers to pinpoint the location of faults in their code. One of the dynamic fault localization techniques is statistical debugging. In this study, two statistical debugging algorithms, SOBER and Cause Isolation, are implemented, and experiments are conducted on five programs coded in Python as an example of a well-known dynamic language. The results show that for programs containing only a single bug, the two statistical debugging algorithms are very effective at localizing the bug. For programs with more than one bug, the SOBER algorithm has limitations related to nested predicates, rarely observed predicates, and complement predicates, while Cause Isolation has limitations related to sorting predicates by importance and detecting bugs in predicate conditions. The accuracy of both SOBER and Cause Isolation is affected by program size. A quality comparison shows that the SOBER algorithm requires more code examination than Cause Isolation to discover the bugs.
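SOBER's core quantity is the per-run evaluation bias of each predicate, compared between passing and failing runs. The sketch below uses a simple mean-difference statistic as a stand-in for SOBER's actual hypothesis-test statistic (the stand-in and the data layout are assumptions):

```python
def evaluation_bias(true_count, false_count):
    """Per-run probability that the predicate evaluates to true (SOBER's pi)."""
    total = true_count + false_count
    return true_count / total if total else 0.0

def rank_predicates(passing_runs, failing_runs):
    """Rank predicates by how much their mean evaluation bias shifts
    between failing and passing runs (a simplified stand-in for
    SOBER's similarity statistic).

    Each run maps predicate name -> (times_true, times_false).
    """
    preds = set()
    for run in passing_runs + failing_runs:
        preds.update(run)
    scores = {}
    for p in preds:
        biases = lambda runs: [evaluation_bias(*r.get(p, (0, 0))) for r in runs]
        mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
        scores[p] = abs(mean(biases(failing_runs)) - mean(biases(passing_runs)))
    return sorted(scores, key=scores.get, reverse=True)

# toy data: the predicate "x > 0" behaves very differently in failing runs
passing = [{"x > 0": (5, 5)}, {"x > 0": (4, 6)}]
failing = [{"x > 0": (10, 0)}, {"x > 0": (9, 1)}, {"y == 0": (1, 9)}]
ranking = rank_predicates(passing, failing)
```

The predicate whose bias distribution shifts most ends up first in the ranking.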
The most resource-intensive and laborious part of debugging is finding the exact location of the fault among a large number of code snippets. Many machine intelligence models have offered effective localization of defects; some can locate the fault with more than 95% accuracy, which creates demand for trustworthy models in fault localization. Confidence and trustworthiness in machine intelligence-based software models can only be achieved via explainable artificial intelligence in fault localization (XFL). The current study presents a model for generating counterfactual interpretations of a fault localization model's decisions. Neural system approximation and distributed representation of the input information are achieved by building a nonlinear neural network model, which demonstrates a high level of proficiency in transfer learning even with minimal training data. The proposed XFL makes the decision-making transparent without impacting the model's performance. It ranks the software program statements based on a possible vulnerability score approximated from the training data. The model's performance is further evaluated using various metrics such as the number of assessed statements, the confidence level of fault localization, and Top-N evaluation strategies.
For a long-distance GIL under certain conditions, this paper investigates and realizes the detection of partial discharge (PD) characteristics and accurate fault localization through UHF coupling sensors at different positions along the GIL pipeline. The main detection methods are the UHF signal amplitude difference (DOA) and time difference (TOF) methods. We analyze the localization error using the TE and TEM components and higher-order TE mode components from electromagnetic coaxial waveguide theory. Research and field tests prove that the DOA detection error meets the requirements of real-time online diagnosis and historical tracking analysis. The error of the TOF detection method can be controlled within 3%, so it can be applied on site.
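The TOF method reduces to a one-line calculation once the arrival-time difference between two end sensors is known: with sensors at the ends of a section of length L and propagation speed v, x/v − (L − x)/v = Δt gives x = (L + vΔt)/2. A small sketch (the near-light propagation speed and the two-sensor geometry are the usual idealizations, not values from the paper):

```python
def tof_locate(length_m, delta_t_s, wave_speed=3.0e8):
    """Locate a PD source on a GIL section from the arrival-time
    difference between sensors at the two ends.

    delta_t_s = t_at_sensor_A - t_at_sensor_B; solving
    x / v - (L - x) / v = delta_t gives x = (L + v * delta_t) / 2,
    measured from sensor A.
    """
    x = (length_m + wave_speed * delta_t_s) / 2.0
    if not 0.0 <= x <= length_m:
        raise ValueError("time difference inconsistent with section length")
    return x

# fault 30 m from sensor A on a 120 m section:
# delta_t = (30 - 90) / 3e8 = -2e-7 s
pos = tof_locate(120.0, -2e-7)   # about 30 m from sensor A
```

A zero time difference places the source at the midpoint, as expected.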
Fault localization is an important and challenging task during software testing. Among the techniques studied in this field, program spectrum based fault localization is a promising approach. To perform spectrum based fault localization, a set of test oracles should be provided, and the effectiveness of fault localization depends highly on the quality of those test oracles. Moreover, effectiveness is usually affected when multiple simultaneous faults are present. Faced with multiple faults, it is difficult for developers to determine when to stop the fault localization process. To address these issues, we propose an iterative fault localization process, i.e., an iterative process of selecting test cases for effective fault localization (IPSETFUL), to identify as many faults as possible in the program until the stopping criterion is satisfied. It is performed based on a concept lattice of program spectrum (CLPS) proposed in our previous work. Based on the labeling approach of the CLPS, program statements are categorized as dangerous statements, safe statements, and sensitive statements. To identify the faults, developers need to check the dangerous statements. Meanwhile, developers need to select a set of test cases covering the dangerous or sensitive statements from the original test suite, and a new CLPS is generated for the next iteration. The process then repeats in the same way, and ends when there are no failing tests in the test suite and all statements on the CLPS have become safe statements. We conduct an empirical study on several subject programs, and the results show that IPSETFUL can help identify most of the faults in the program with the given test suite.
Moreover, it can save much effort in inspecting unfaulty program statements compared with existing spectrum based fault localization techniques and the relevant state-of-the-art technique.
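For background, spectrum-based suspiciousness can be computed from per-test coverage with a standard metric such as Ochiai; the paper's CLPS labeling is a different mechanism, so the sketch below is purely an illustration of the underlying program-spectrum idea:

```python
from math import sqrt

def ochiai_rank(coverage, results):
    """Rank statements by Ochiai suspiciousness from a program spectrum.

    coverage[t] is the set of statements executed by test t;
    results[t] is True for a passing test, False for a failing one.
    """
    total_failed = sum(1 for ok in results if not ok)
    stmts = set().union(*coverage)
    scores = {}
    for s in stmts:
        ef = sum(1 for cov, ok in zip(coverage, results) if s in cov and not ok)
        ep = sum(1 for cov, ok in zip(coverage, results) if s in cov and ok)
        denom = sqrt(total_failed * (ef + ep))
        scores[s] = ef / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# statement 3 is covered by every failing test and no passing test
cov = [{1, 2, 3}, {1, 3}, {1, 2}, {1}]
res = [False, False, True, True]
ranked = ochiai_rank(cov, res)
```

A statement covered by all failing tests and no passing test gets the maximum score of 1.0 and tops the list.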
Debugging is a time-consuming task in software development. Although various automated approaches have been proposed, they are not effective enough. On the other hand, in manual debugging, developers have difficulty in choosing breakpoints. To address these problems and help developers locate faults effectively, we propose an interactive fault-localization framework that combines the benefits of automated approaches and manual debugging. Until the fault is found, this framework continuously recommends checking points based on statements' suspicions, which are calculated from the execution information of the test cases and the feedback the developer gave at earlier checking points. We first propose a naive approach as an initial implementation of this framework. However, with this naive approach, as with manual debugging, a developer's wrong estimation of whether the faulty statement is executed before the checking point (breakpoint) may make the debugging process fail. So we propose a second, robust approach based on the same framework that handles cases where developers make mistakes during the fault-localization process. We performed two experimental studies, and the results show that the two interactive approaches are quite effective compared with existing fault-localization approaches. Moreover, the robust approach can help developers find faults even when they give wrong estimations at some checking points.
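A minimal sketch of the framework's feedback loop (the halving/boosting update rule here is an illustrative assumption, not the paper's formula): suspicions are adjusted from the developer's verdict at a checking point, and the next recommended point is the most suspicious unchecked statement:

```python
def update_suspicions(suspicions, executed_before, state_correct):
    """Adjust statement suspicions after developer feedback at a checking
    point. Naive rule (assumed): if the program state is still correct,
    statements already executed are less likely to be faulty; if the
    state is wrong, they are more likely to be faulty.
    """
    updated = dict(suspicions)
    for stmt in executed_before:
        if state_correct:
            updated[stmt] *= 0.5   # discount statements that ran cleanly
        else:
            updated[stmt] *= 1.5   # boost statements run before the bad state
    return updated

def next_checking_point(suspicions, checked):
    """Recommend the most suspicious statement not yet checked."""
    remaining = {s: v for s, v in suspicions.items() if s not in checked}
    return max(remaining, key=remaining.get)

susp = {"s1": 0.6, "s2": 0.5, "s3": 0.4}
susp = update_suspicions(susp, executed_before={"s1"}, state_correct=True)
point = next_checking_point(susp, checked={"s1"})
```

After a "state correct" verdict at s1, its suspicion drops and the framework moves on to the next candidate.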
Fault localization techniques were originally proposed to assist in manual debugging, generally by producing a ranked list of suspicious locations. With the increasing popularity of automated program repair, fault localization techniques have been introduced to effectively reduce its search space. Unlike developers, who mainly focus on the rank information, current automated program repair has two strategies for using fault localization information: the suspiciousness-first algorithm (SFA), based on suspiciousness accuracy, and the rank-first algorithm (RFA), relying on rank accuracy. However, despite the fact that the two usages are widely adopted by current automated program repair and may lead to different repair results, little is known about their impact on automated program repair. In this paper, we empirically compare the performance of SFA and RFA in the context of automated program repair. Specifically, we implement the two strategies and six well-studied fault localization techniques in four state-of-the-art automated program repair tools, and then use these tools to perform repair experiments on 60 real-world bugs from Defects4J. Our study presents a number of interesting findings: RFA outperforms SFA in 70.02% of cases when measured by the number of candidate patches generated before a valid patch is found (NCP), while SFA performs better in parallel repair and patch diversity; the performance of SFA can be improved by increasing the suspiciousness accuracy of the fault localization techniques; finally, we use SimFix, which deploys SFA, to successfully repair four extra Defects4J bugs that cannot be repaired by SimFix originally using RFA. These observations provide a new perspective for future research on the usage and improvement of fault localization in automated program repair.
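The abstract does not define SFA and RFA precisely; under one plausible reading (an assumption for illustration only), RFA consumes the rank list strictly in order, while SFA spends repair effort in proportion to the raw suspiciousness values, so large score gaps matter and rank gaps alone do not:

```python
def rank_first_order(scores):
    """RFA (assumed reading): visit candidate locations strictly in
    descending rank order, ignoring the size of score gaps."""
    return [s for s, _ in sorted(scores.items(), key=lambda kv: -kv[1])]

def suspiciousness_first_weights(scores):
    """SFA (assumed reading): allocate repair attempts in proportion to
    each location's raw suspiciousness value, so a location with twice
    the score gets twice the attempts regardless of rank position."""
    total = sum(scores.values())
    return {s: v / total for s, v in scores.items()}

scores = {"a": 0.9, "b": 0.85, "c": 0.1}
order = rank_first_order(scores)
weights = suspiciousness_first_weights(scores)
```

With these scores, RFA treats a, b, and c as equally spaced steps, while SFA gives a and b nearly equal effort and c almost none.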
The delay fault induced by the crosstalk effect is one of the difficult problems in the fault diagnosis of digital circuits. An intelligent fault diagnosis method based on IDDT testing and a support vector machine (SVM) classifier is proposed in this paper. First, the fault model induced by the crosstalk effect and the IDDT testing method are analyzed, and then a delay fault localization method based on the SVM is presented. The fault features of the sampled signals are extracted by wavelet packet decomposition and serve as the input parameters of the SVM classifier, which classifies the different fault types. The simulation results illustrate that the presented method is accurate and effective, reaching a diagnosis rate above 95%.
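The feature-extraction-plus-classification pipeline can be sketched with a hand-rolled Haar wavelet packet split and, in place of the SVM, a nearest-centroid stand-in (both simplifications are assumptions; a real implementation would use a wavelet library and an SVM):

```python
from math import sqrt

def haar_split(signal):
    """One Haar decomposition step: low-pass and high-pass half-band outputs."""
    a = [(signal[i] + signal[i + 1]) / sqrt(2) for i in range(0, len(signal) - 1, 2)]
    d = [(signal[i] - signal[i + 1]) / sqrt(2) for i in range(0, len(signal) - 1, 2)]
    return a, d

def band_energies(signal, levels=2):
    """Energy of each wavelet-packet node after `levels` Haar splits,
    used here as the fault-feature vector."""
    bands = [signal]
    for _ in range(levels):
        nxt = []
        for b in bands:
            a, d = haar_split(b)
            nxt += [a, d]
        bands = nxt
    return [sum(x * x for x in b) for b in bands]

def nearest_centroid(features, centroids):
    """Stand-in for the SVM classifier: pick the closest class centroid."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# a smooth signal concentrates energy in the low band, an oscillating
# (fault-like) signal in a high band
feat_smooth = band_energies([1.0] * 8)
feat_alt = band_energies([1.0, -1.0] * 4)
centroids = {"normal": feat_smooth, "fault": feat_alt}
label = nearest_centroid(band_energies([0.9, -0.9] * 4), centroids)
```

The band-energy vector separates the two signal classes, so even the crude centroid classifier labels the oscillating test signal correctly.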
The acceleration grid power supply (AGPS) is a crucial part of the negative-ion neutral beam injection system in the China Fusion Engineering Test Reactor, and it includes a 3-phase passive (diode) rectifier. To diagnose and localize faults in the rectifier, this paper proposes a frequency-domain analysis-based fault diagnosis algorithm for the rectifier in the AGPS. First, time-domain expressions and spectral characteristics of the output voltage of the TPTL-NPC inverter-based power supply are analyzed. Then, frequency-domain analysis-based fault diagnosis and sub-fault diagnosis algorithms are proposed to diagnose open-circuit (OC) faults of the diode(s), which benefit from analysis of the harmonic magnitudes and phase angles of the output voltage. Only one fundamental period is needed to diagnose and localize the exact fault, and a robust variable-duration fault detection method is proposed to distinguish acceptable ripple from OC faults. Detailed simulations and experimental results demonstrate the effectiveness, speed, and robustness of the proposed algorithms, which also provide a significant method for the fault diagnosis of other rectifiers and converters.
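The diagnostic quantities (harmonic magnitude and phase angle over one fundamental period) come straight from a DFT bin. In the toy model below, an open diode is imitated by half-wave clipping, which exactly halves the fundamental magnitude (the clipping model is an assumption for illustration, not the paper's circuit):

```python
import cmath
import math

def harmonic(signal, k):
    """Magnitude and phase angle of the k-th harmonic over one
    fundamental period, via a direct DFT bin."""
    n = len(signal)
    coeff = sum(signal[i] * cmath.exp(-2j * cmath.pi * k * i / n) for i in range(n))
    coeff *= 2.0 / n
    return abs(coeff), cmath.phase(coeff)

n = 64
healthy = [math.sin(2 * math.pi * i / n) for i in range(n)]
# toy open-circuit fault: half of the waveform is clipped to zero
faulted = [max(v, 0.0) for v in healthy]
mag_h, _ = harmonic(healthy, 1)
mag_f, _ = harmonic(faulted, 1)
```

Comparing the fundamental magnitude (and, in the paper, the phase angles of several harmonics) against the healthy pattern flags and localizes the faulted device.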
Currently, some fault prognosis technologies show relatively unsatisfactory performance, especially for incipient faults in nonlinear processes, due to their large time delays and complex internal connections. To overcome this deficiency, multivariate time delay analysis is incorporated into highly sensitive local kernel principal component analysis. In this approach, mutual information estimation and the Bayesian information criterion (BIC) are used separately to acquire the correlation degree and time delay of the process variables. Moreover, to achieve prediction, time series prediction by a back propagation (BP) network is applied, whose input is the multivariate correlated time series rather than the original time series. The multivariate time-delayed series and the future values obtained by time series prediction are then combined to construct the input of the local kernel principal component analysis (LKPCA) model for incipient fault prognosis. The new method has been exemplified on a simple nonlinear process and the complicated Tennessee Eastman (TE) benchmark process. The results indicate that the new method is superior in fault prognosis sensitivity to other traditional fault prognosis methods.
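Estimating the time delay between two process variables can be sketched with cross-correlation as a simple stand-in for the paper's mutual-information estimate (the stand-in, and the toy signals, are assumptions):

```python
def best_delay(x, y, max_delay):
    """Estimate the lag of y relative to x by maximizing the absolute
    cross-correlation over candidate delays (a simple stand-in for a
    mutual-information-based delay estimate)."""
    def corr(a, b):
        n = min(len(a), len(b))
        ma = sum(a[:n]) / n
        mb = sum(b[:n]) / n
        num = sum((a[i] - ma) * (b[i] - mb) for i in range(n))
        da = sum((a[i] - ma) ** 2 for i in range(n)) ** 0.5
        db = sum((b[i] - mb) ** 2 for i in range(n)) ** 0.5
        return num / (da * db) if da and db else 0.0
    return max(range(max_delay + 1),
               key=lambda d: abs(corr(x[:len(x) - d], y[d:])))

x = [0, 1, 2, 3, 2, 1, 0, 1, 2, 3, 2, 1, 0]
y = [9, 9] + x[:-2]          # y lags x by two samples
delay = best_delay(x, y, max_delay=4)
```

The recovered delay is what lets the time-delayed series be aligned before they feed the LKPCA model.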
Purified terephthalic acid (PTA) is an important chemical raw material. P-xylene (PX) is transformed into terephthalic acid (TA) through an oxidation process, and TA is refined to produce PTA. The PX oxidation reaction is a complex process involving a three-phase reaction of gas, liquid, and solid. To monitor the process, improve product quality, and visualize the fault type clearly, a fault diagnosis method based on the self-organizing map (SOM) and a high-dimensional feature extraction method, local tangent space alignment (LTSA), is proposed. In this method, LTSA reduces the dimension while keeping the topology information, and the SOM distinguishes the various states on the output map. Monitoring results of the PX oxidation reaction process indicate that LTSA-SOM can well detect and visualize the fault type.
Recently, advanced sensing techniques have ensured a large volume of multivariate sensing data for the intelligent fault diagnosis of machines. Given its advantage in obtaining accurate diagnosis results, multi-sensor fusion has long been studied in the fault diagnosis field. However, existing studies suffer from two weaknesses. First, the relations of multiple sensors are either neglected or calculated only to improve the diagnostic accuracy of fault types. Second, the localization of multi-source faults is seldom investigated, although locating the anomalous variable over multivariate sensing data for certain types of faults is desirable. This article attempts to overcome these weaknesses by proposing a global method to recognize fault types and localize fault sources with the help of multi-sensor relations (MSRs). First, an MSR model is developed to learn MSRs automatically and further obtain fault recognition results. Second, centrality measures are employed to analyze the MSR graphs learned by the MSR model, and the fault sources are thereby determined. The proposed method is demonstrated by experiments on an induction motor and a centrifugal pump. The results show the proposed method's validity in diagnosing fault types and sources.
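Centrality analysis of a learned sensor-relation graph can be sketched with degree centrality; flagging the sensor whose centrality shifts most under the fault is an assumed criterion for illustration, not necessarily the paper's exact measure:

```python
def degree_centrality(graph):
    """Degree centrality of each sensor node in a relation graph
    (graph maps node -> set of related nodes)."""
    n = len(graph)
    return {v: len(nbrs) / (n - 1) for v, nbrs in graph.items()}

def likely_fault_source(healthy, faulty):
    """Flag the sensor whose centrality changes most between the relation
    graphs learned from healthy and faulty data (assumed criterion)."""
    ch, cf = degree_centrality(healthy), degree_centrality(faulty)
    return max(ch, key=lambda v: abs(ch[v] - cf.get(v, 0.0)))

healthy = {"s1": {"s2", "s3"}, "s2": {"s1", "s3"}, "s3": {"s1", "s2"}}
# under the fault, sensor s3 decouples from the other sensors
faulty = {"s1": {"s2"}, "s2": {"s1"}, "s3": set()}
source = likely_fault_source(healthy, faulty)
```

The sensor that loses (or gains) the most relations stands out as the probable fault source.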
Accurate fault area localization is a challenging problem in resonant grounding systems (RGSs). Accordingly, this paper proposes a novel two-stage localization method for single-phase earth faults in RGSs. First, a faulty feeder identification algorithm based on a Bayesian classifier is proposed. Three characteristic parameters of the RGS (the energy ratio, impedance factor, and energy spectrum entropy) are calculated from the zero-sequence current (ZSC) of each feeder using wavelet packet transformations. The values of the three parameters are then sent to a pre-trained Bayesian classifier to recognize the exact fault mode, and with this result the faulty feeder can finally be identified. To find the exact fault area on the faulty feeder, a localization method based on the similarity comparison of dominant frequency-band waveforms is proposed for an RGS equipped with feeder terminal units (FTUs), which provide information on the ZSC at their locations. Through wavelet packet transformation, the ZSC dominant frequency-band waveform can be obtained at every FTU point. The similarities of the characteristic waveforms at all FTU points are calculated and compared; the section between the neighboring FTU points with the maximum diversity is finally determined to be the faulty section. The proposed method exhibits higher accuracy in both faulty feeder identification and fault area localization than previous methods. Finally, its effectiveness is validated by comparing simulation and experimental results.
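The second stage's "maximum diversity" rule is easy to sketch: compare the dominant-frequency-band waveforms of neighbouring FTUs and pick the pair that agrees least (cosine similarity is used here as the similarity measure, which is an assumption):

```python
def waveform_similarity(u, v):
    """Cosine similarity between two dominant-frequency-band waveforms."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def faulty_section(ftu_waveforms):
    """Return the pair of neighbouring FTU points whose waveforms differ
    most; the fault lies between them ("maximum diversity").

    ftu_waveforms is an ordered list of (ftu_name, waveform) pairs
    along the feeder.
    """
    worst = None
    for (n1, w1), (n2, w2) in zip(ftu_waveforms, ftu_waveforms[1:]):
        sim = waveform_similarity(w1, w2)
        if worst is None or sim < worst[0]:
            worst = (sim, n1, n2)
    return worst[1], worst[2]

wave = [0.0, 1.0, 0.0, -1.0]
flipped = [0.0, -1.0, 0.0, 1.0]          # toy polarity reversal past the fault
ftus = [("FTU1", wave), ("FTU2", wave), ("FTU3", flipped), ("FTU4", flipped)]
section = faulty_section(ftus)
```

In this toy feeder the waveform polarity reverses between FTU2 and FTU3, so that section is flagged as faulty.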
Complex processes often operate in multiple operating regions, so it is critical to develop effective monitoring approaches to ensure the safety of chemical processes. In this work, a discriminant local consistency Gaussian mixture model (DLCGMM) for multimode process monitoring is proposed by integrating LCGMM with modified local Fisher discriminant analysis (MLFDA). Different from Fisher discriminant analysis (FDA), which aims to discover the globally optimal discriminant directions, MLFDA is capable of uncovering the multimodality and local structure of the data by exploiting the posterior probabilities of observations within clusters calculated from the results of LCGMM. This may enable MLFDA to capture more meaningful discriminant information hidden in high-dimensional multimode observations compared with FDA. Contrary to most existing multimode process monitoring approaches, DLCGMM performs LCGMM and MLFDA iteratively, and the optimal subspaces with multi-Gaussianity and the optimal discriminant projection vectors are achieved simultaneously in a framework of combined supervised and unsupervised learning. Furthermore, monitoring statistics are established on each cluster, each of which represents a specific operating condition, and two global Bayesian inference-based fault monitoring indexes are established by combining the monitoring results of all clusters. The efficiency and effectiveness of the proposed method are evaluated on UCI datasets, a simulated multimode model, and the Tennessee Eastman benchmark process.
In the process of software development, the ability to localize faults is crucial for improving the efficiency of debugging. Generally speaking, detecting and repairing errant behavior at an early stage of the development cycle considerably reduces costs and development time. Researchers have tried various methods to locate faulty code. However, failing test cases usually account for a small portion of the test suite, which inevitably leads to a class-imbalance phenomenon and hampers the effectiveness of fault localization. Accordingly, in this work we propose a new fault localization approach named ContextAug. After obtaining dynamic executions through test cases, ContextAug traces these executions to build an information model; subsequently, it constructs a failure context with propagation dependencies to intersect with new model-domain failing test samples synthesized by the minimum variability of the minority feature space. In contrast to traditional test generation directly from the input domain, ContextAug takes a new perspective and synthesizes failing test samples from the model domain, which makes it much easier to augment test suites. In empirical research on real large-sized programs with 13 state-of-the-art fault localization approaches, ContextAug improved fault localization effectiveness by up to 54.53%. Thus, ContextAug is verified as able to improve fault localization effectiveness.
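Synthesizing failing samples in the model domain by small interpolations within the minority feature space can be sketched SMOTE-style (this stand-in, and the toy feature vectors, are illustrative assumptions, not ContextAug's exact synthesis procedure):

```python
import random

def synthesize_failing_samples(failing, n_new, seed=0):
    """Synthesize new failing test samples in the model domain by small
    interpolations between existing minority (failing) feature vectors,
    a SMOTE-like stand-in for minimum-variability synthesis."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_new):
        a, b = rng.sample(failing, 2)       # pick two minority samples
        t = rng.random()                    # interpolation factor in [0, 1)
        out.append([x + t * (y - x) for x, y in zip(a, b)])
    return out

# three failing feature vectors; the synthesized ones stay inside their hull
failing = [[1.0, 0.0, 1.0], [0.9, 0.1, 1.0], [1.0, 0.2, 0.8]]
augmented = failing + synthesize_failing_samples(failing, n_new=5)
```

Because each new sample lies on a segment between two existing failing samples, the augmented minority class stays within the original feature bounds while rebalancing the suite.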
A sedimentological investigation was carried out to reconstruct the paleogeography of the Zagros Foreland Basin. Based on the study of more than 1000 rock samples, nine carbonate microfacies and three terrigenous facies were identified. The study reveals that the Maastrichtian succession was deposited on a widespread homoclinal ramp in the High Zagros Basin. Three (Gandom Kar area), two (Ardal area), seven (Gardbishe area), five (Shirmard area), two (Kuh-e-Kamaneh area), three (Kuh-e-Balghar area), and six (Murak area) third-order depositional sequences were identified. Owing to the activity of local faults and subsidence, the lowstand systems tract (LST) was thicker in the southeast than in the central and northwestern parts of the High Zagros Basin during the Early and early Middle Maastrichtian. During the Middle Maastrichtian, shallow and deep marine deposits formed during the transgressive systems tract (TST) and highstand systems tract (HST) in this basin; the rate of subsidence in the center of the basin (Gardbishe area) was higher than in other areas, and the platform was drowned there. A fall in relative sea level caused by local fault activity meant that marine deposits were absent in all parts of the High Zagros Basin (except the southern part) during the Late Maastrichtian. Paleogeographical studies of the Zagros Basin during the Late Campanian-Maastrichtian showed the following results: shallow marine environments developed in the southeast of the basin, and turbidite, delta, and fluvial environments were more developed in the northwest than in other areas.
Funding: supported by the ZTE Industry-University-Institute Fund Project under Grant No. IA20221202011.
Funding: Supported by the National Basic Research Program of China under Grant No. 2009CB320703, the National High-Tech Research and Development 863 Program of China under Grant No. 2007AA010301, the Science Fund for Creative Research Groups of China under Grant No. 60821003, the National Natural Science Foundation of China under Grant No. 60803012, and the China Postdoctoral Science Foundation Project under Grant No. 20080440254
Abstract: Debugging is a time-consuming task in software development. Although various automated approaches have been proposed, they are not effective enough. In manual debugging, on the other hand, developers have difficulty choosing breakpoints. To address these problems and help developers locate faults effectively, we propose an interactive fault-localization framework that combines the benefits of automated approaches and manual debugging. Until the fault is found, this framework continuously recommends checking points based on statements' suspicion scores, which are calculated from the execution information of test cases and the developer's feedback at earlier checking points. We first propose a naive approach as an initial implementation of this framework. However, with this naive approach, as with manual debugging, a developer's wrong estimation of whether the faulty statement is executed before the checking point (breakpoint) may cause the debugging process to fail. We therefore propose a second, robust approach based on this framework, which handles cases where developers make mistakes during the fault-localization process. We performed two experimental studies, and the results show that the two interactive approaches are quite effective compared with existing fault-localization approaches. Moreover, the robust approach can help developers find faults even when they make wrong estimations at some checking points.
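The core of such an interactive loop is re-weighting suspicion scores from the developer's answer at a checking point. A toy sketch of that idea (the scaling rule and factor are entirely our illustration; the paper's actual update is more involved):

```python
def update_suspicion(scores, executed_before, state_correct, factor=0.5):
    """Re-weight statement suspicion after developer feedback at a
    checking point: if the program state there looks correct, the
    statements already executed become less suspicious; if it looks
    wrong, they become more suspicious."""
    updated = dict(scores)
    for stmt in executed_before:
        if state_correct:
            updated[stmt] *= factor
        else:
            updated[stmt] /= factor
    return updated

# Developer confirms the state is correct after s1 ran: s1 drops, s2 keeps.
s = update_suspicion({"s1": 0.8, "s2": 0.6}, ["s1"], state_correct=True)
```

The next checking point is then recommended at the statement with the highest updated score, and the loop repeats until the fault is found.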
Funding: This research was supported in part by the National Natural Science Foundation of China (Grant Nos. 61672529, 61379054, 61602504, 61502015), the Fundamental Research Funds for the Central Universities (2019CDXYRJ0011), and the National Defense Science Foundation of China (3001010).
Abstract: Fault localization techniques were originally proposed to assist manual debugging, generally by producing a ranked list of suspicious locations. With the increasing popularity of automated program repair, fault localization techniques have been introduced to effectively reduce its search space. Unlike developers, who mainly focus on the rank information, current automated program repair uses fault localization information in one of two ways: the suspiciousness-first algorithm (SFA), based on suspiciousness accuracy, and the rank-first algorithm (RFA), relying on rank accuracy. Although both usages are widely adopted by current automated program repair and may lead to different repair results, little is known about the impact of the two strategies on automated program repair. In this paper, we empirically compare the performance of SFA and RFA in the context of automated program repair. Specifically, we implement the two strategies and six well-studied fault localization techniques in four state-of-the-art automated program repair tools, and then use these tools to perform repair experiments on 60 real-world bugs from Defects4J. Our study presents a number of interesting findings: RFA outperforms SFA in 70.02% of cases when measured by the number of candidate patches generated before a valid patch is found (NCP), while SFA performs better in parallel repair and patch diversity; the performance of SFA can be improved by increasing the suspiciousness accuracy of the fault localization techniques; finally, we use SimFix deploying SFA to successfully repair four extra Defects4J bugs that cannot be repaired by SimFix originally using RFA. These observations provide a new perspective for future research on the usage and improvement of fault localization in automated program repair.
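The difference between the two strategies can be made concrete: RFA consumes only the ordering of locations, while SFA also uses the raw suspiciousness values, e.g. to apportion repair effort across locations. A simplified sketch of our reading of the two usages (not the repair tools' actual implementations):

```python
def rfa_order(scores):
    """Rank-first: only the ordering of locations matters; repair
    attempts proceed down this list."""
    return [loc for loc, _ in sorted(scores.items(), key=lambda kv: -kv[1])]

def sfa_weights(scores):
    """Suspiciousness-first: the raw scores themselves weight how much
    repair effort (e.g. candidate-patch budget) each location receives."""
    total = sum(scores.values()) or 1.0
    return {loc: s / total for loc, s in scores.items()}

scores = {"a": 0.9, "b": 0.3, "c": 0.6}
order = rfa_order(scores)      # RFA: try a, then c, then b
weights = sfa_weights(scores)  # SFA: a gets half the budget
```

Under this reading, improving suspiciousness accuracy changes SFA's budget allocation even when the rank order, and hence RFA's behavior, stays the same.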
Funding: Supported by the National Natural Science Foundation of China (60374008, 60501022)
Abstract: The delay fault induced by the cross-talk effect is one of the difficult problems in digital-circuit fault diagnosis. An intelligent fault diagnosis method based on IDDT testing and a support vector machine (SVM) classifier is proposed in this paper. First, the fault model induced by the cross-talk effect and the IDDT testing method are analyzed, and then a delay fault localization method based on SVM is presented. The fault features of the sampled signals are extracted by wavelet packet decomposition and serve as input parameters of the SVM classifier to distinguish the different fault types. The simulation results illustrate that the presented method is accurate and effective, achieving a diagnosis rate above 95%.
Funding: Supported by the National Key R&D Program of China (No. 2017YFE0300104) and the National Natural Science Foundation of China (No. 51821005)
Abstract: The acceleration grid power supply (AGPS) is a crucial part of the negative-ion neutral beam injection system in the China Fusion Engineering Test Reactor, and it includes a 3-phase passive (diode) rectifier. To diagnose and localize faults in the rectifier, this paper proposes a frequency-domain-analysis-based fault diagnosis algorithm for the rectifier in the AGPS. First, time-domain expressions and spectral characteristics of the output voltage of the TPTL-NPC inverter-based power supply are analyzed. Then, frequency-domain-analysis-based fault diagnosis and sub-fault diagnosis algorithms are proposed to diagnose open-circuit (OC) faults of the diode(s), benefiting from analysis of the harmonic magnitudes and phase angles of the output voltage. Only one fundamental period is needed to diagnose and localize the exact faults, and a robust variable-duration fault detection method is proposed to distinguish acceptable ripple from OC faults. Detailed simulations and experimental results demonstrate the effectiveness, speed, and robustness of the proposed algorithms, which also provide a useful approach for the fault diagnosis of other rectifiers and converters.
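The harmonic magnitudes and phase angles that the diagnosis relies on can be extracted with a single-bin DFT over one fundamental period. A self-contained sketch (the sampling parameters are illustrative; the paper's actual feature set and thresholds are richer):

```python
import cmath
import math

def harmonic(v, fs, f0, k):
    """Magnitude and phase of the k-th harmonic of fundamental f0 in the
    samples v (sampling rate fs), via a single-bin DFT over the window."""
    n = len(v)
    acc = sum(x * cmath.exp(-2j * math.pi * k * f0 * i / fs)
              for i, x in enumerate(v))
    acc *= 2.0 / n  # scale so a unit-amplitude sinusoid reads as 1.0
    return abs(acc), cmath.phase(acc)

# One 50 Hz period sampled at 5 kHz: the fundamental magnitude is 1.0.
wave = [math.sin(2 * math.pi * 50 * i / 5000) for i in range(100)]
mag, phase = harmonic(wave, 5000, 50, 1)
```

An OC fault distorts the rectifier output, shifting these magnitudes and phase angles away from their healthy signature within a single fundamental period.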
Funding: Supported by the National Natural Science Foundation of China (61573051, 61472021), the Natural Science Foundation of Beijing (4142039), the Open Fund of the State Key Laboratory of Software Development Environment (SKLSDE-2015KF-01), and the Fundamental Research Funds for the Central Universities (PT1613-05)
Abstract: Current fault prognosis technology occasionally exhibits unsatisfactory performance, especially for incipient faults in nonlinear processes, due to their large time delays and complex internal connections. To overcome this deficiency, multivariate time-delay analysis is incorporated into highly sensitive local kernel principal component analysis. In this approach, mutual information estimation and the Bayesian information criterion (BIC) are used separately to acquire the correlation degree and the time delay of the process variables. Moreover, to achieve prediction, time-series prediction by a back-propagation (BP) network is applied, whose input is the multivariate correlated time series rather than the original time series. The multivariate time-delayed series and the future values obtained by time-series prediction are then combined to construct the input of the local kernel principal component analysis (LKPCA) model for incipient fault prognosis. The new method is exemplified on a simple nonlinear process and the complicated Tennessee Eastman (TE) benchmark process. The results indicate that the new method is superior in fault prognosis sensitivity to other traditional fault prognosis methods.
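The time-delay estimation step can be illustrated by scanning candidate lags and keeping the one that maximizes the dependence between the delayed and target series. The sketch below uses plain correlation as a stand-in for the paper's mutual-information estimate (the criterion swap and all names are our simplification):

```python
import math

def estimate_delay(x, y, max_lag):
    """Return the lag d in [0, max_lag] maximizing |corr(x[t-d], y[t])|,
    a plain-correlation stand-in for a mutual-information criterion."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
        var = (sum((ai - ma) ** 2 for ai in a)
               * sum((bi - mb) ** 2 for bi in b)) ** 0.5
        return cov / var if var else 0.0
    return max(range(max_lag + 1),
               key=lambda d: abs(corr(x[:len(x) - d], y[d:])))

# y is x delayed by 3 samples; the scan recovers that delay.
x = [math.sin(0.7 * i) for i in range(60)]
y = [0.0] * 3 + x[:-3]
lag = estimate_delay(x, y, 6)
```

Mutual information additionally captures nonlinear dependence, which is why the paper prefers it over linear correlation for nonlinear processes.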
Funding: Supported by the Major State Basic Research Development Program of China (2012CB720500), the National Natural Science Foundation of China (61333010, 21276078), the National Science Fund for Outstanding Young Scholars (61222303), the Fundamental Research Funds for the Central Universities, the Shanghai Rising-Star Program (13QH1401200), the Program for New Century Excellent Talents in University (NCET-10-0885), and the Shanghai R&D Platform Construction Program (13DZ2295300)
Abstract: Purified terephthalic acid (PTA) is an important chemical raw material. P-xylene (PX) is transformed into terephthalic acid (TA) through an oxidation process, and TA is refined to produce PTA. The PX oxidation reaction is a complex process involving a three-phase reaction of gas, liquid, and solid. To monitor the process, improve product quality, and visualize the fault type clearly, a fault diagnosis method is proposed based on the self-organizing map (SOM) and a high-dimensional feature extraction method, local tangent space alignment (LTSA). In this method, LTSA reduces the dimension while preserving the topology information, and the SOM distinguishes the various states on the output map. Monitoring results of the PX oxidation reaction process indicate that LTSA-SOM can well detect and visualize the fault type.
基金supported by the National Natural Science Foundation of China(Grant No.52025056)the Fundamental Research Funds for the Central Universities.
Abstract: Recently, advanced sensing techniques have ensured large volumes of multivariate sensing data for the intelligent fault diagnosis of machines. Given its advantage of obtaining accurate diagnosis results, multi-sensor fusion has long been studied in the fault diagnosis field. However, existing studies suffer from two weaknesses. First, the relations of multiple sensors are either neglected or calculated only to improve the diagnostic accuracy of fault types. Second, localization of multi-source faults is seldom investigated, although locating the anomalous variable over multivariate sensing data for certain types of faults is desirable. This article attempts to overcome these weaknesses by proposing a global method to recognize fault types and localize fault sources with the help of multi-sensor relations (MSRs). First, an MSR model is developed to learn MSRs automatically and further obtain fault recognition results. Second, centrality measures are employed to analyze the MSR graphs learned by the MSR model, and fault sources are thereby determined. The proposed method is demonstrated by experiments on an induction motor and a centrifugal pump. Results show the proposed method's validity in diagnosing fault types and sources.
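The second step, determining the fault source with centrality measures, can be sketched with the simplest such measure, weighted degree centrality, over a learned relation graph (the graph encoding and the choice of degree centrality are our illustrative assumptions; the paper does not specify which centrality measure it uses):

```python
def fault_source(relations):
    """Given relations[u][v] = learned relation strength between sensors
    u and v, return the sensor with the highest weighted degree
    centrality, taken here as the most likely fault source."""
    degree = {u: sum(nbrs.values()) for u, nbrs in relations.items()}
    return max(degree, key=degree.get)

# Toy MSR graph: the motor node has the strongest total relations, so it
# is flagged as the fault source.
g = {"motor": {"pump": 0.9, "bearing": 0.8},
     "pump": {"motor": 0.9},
     "bearing": {"motor": 0.8}}
source = fault_source(g)
```

Other centrality measures (betweenness, eigenvector) plug into the same scheme by replacing the `degree` computation.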
Abstract: Accurate fault area localization is a challenging problem in resonant grounding systems (RGSs). Accordingly, this paper proposes a novel two-stage localization method for single-phase earth faults in RGSs. First, a faulty-feeder identification algorithm based on a Bayesian classifier is proposed. Three characteristic parameters of the RGS (the energy ratio, impedance factor, and energy spectrum entropy) are calculated from the zero-sequence current (ZSC) of each feeder using wavelet packet transformations. The values of the three parameters are then sent to a pre-trained Bayesian classifier to recognize the exact fault mode, from which the faulty feeder is finally identified. To find the exact fault area on the faulty feeder, a localization method based on similarity comparison of dominant frequency-band waveforms is proposed for an RGS equipped with feeder terminal units (FTUs). The FTUs provide information on the ZSC at their locations. Through wavelet packet transformation, ZSC dominant frequency-band waveforms can be obtained at all FTU points. The similarities of the characteristic waveforms at all FTU points are calculated and compared, and the section between the neighboring FTU points with the maximum diversity is finally determined to be the faulty section. The proposed method exhibits higher accuracy in both faulty-feeder identification and fault area localization than previous methods. Finally, the effectiveness of the proposed method is validated by comparing simulation and experimental results.
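The first-stage classifier can be sketched as a Gaussian naive Bayes over the characteristic parameters (the Gaussian independence assumption and all numbers below are illustrative, not taken from the paper):

```python
import math

def nb_predict(stats, priors, features):
    """stats[c] = list of (mean, variance) per feature for class c;
    return the class maximizing the log-posterior under a Gaussian
    naive Bayes model."""
    best_cls, best_lp = None, -math.inf
    for cls, params in stats.items():
        lp = math.log(priors[cls])  # log prior
        for x, (m, v) in zip(features, params):
            # log of the Gaussian likelihood of feature x under class cls
            lp += -0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
        if lp > best_lp:
            best_cls, best_lp = cls, lp
    return best_cls

# Hypothetical per-class statistics for two features (e.g. energy ratio
# and energy spectrum entropy), estimated from labeled training records.
stats = {"faulty": [(0.8, 0.01), (2.0, 0.25)],
         "healthy": [(0.2, 0.01), (0.5, 0.25)]}
priors = {"faulty": 0.5, "healthy": 0.5}
```

The feeder whose parameter vector is classified into a fault mode is flagged, and stage two then narrows the fault down to a section between FTUs.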
Funding: Supported by the National Natural Science Foundation of China (61273167)
Abstract: Complex processes often operate in multiple operation regions, so it is critical to develop effective monitoring approaches to ensure the safety of chemical processes. In this work, a discriminant local consistency Gaussian mixture model (DLCGMM) for multimode process monitoring is proposed by integrating LCGMM with modified local Fisher discriminant analysis (MLFDA). Different from Fisher discriminant analysis (FDA), which aims to discover the globally optimal discriminant directions, MLFDA is capable of uncovering the multimodality and local structure of the data by exploiting the posterior probabilities of observations within clusters calculated from the results of LCGMM. This may enable MLFDA to capture more meaningful discriminant information hidden in high-dimensional multimode observations than FDA. Contrary to most existing multimode process monitoring approaches, DLCGMM performs LCGMM and MLFDA iteratively, and the optimal subspaces with multi-Gaussianity and the optimal discriminant projection vectors are simultaneously achieved in a framework of supervised and unsupervised learning. Furthermore, monitoring statistics are established on each cluster, which represents a specific operating condition, and two global Bayesian-inference-based fault monitoring indexes are established by combining the monitoring results of all clusters. The efficiency and effectiveness of the proposed method are evaluated on UCI datasets, a simulated multimode model, and the Tennessee Eastman benchmark process.
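The posterior probabilities that MLFDA exploits are the standard GMM responsibilities. A one-dimensional sketch (the component parameters are illustrative; the paper works with multivariate clusters):

```python
import math

def gmm_posteriors(x, components):
    """components: list of (weight, mean, variance); return the posterior
    probability (responsibility) of each component for observation x,
    via Bayes' rule over the weighted Gaussian densities."""
    dens = [w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
            for w, m, v in components]
    total = sum(dens)
    return [d / total for d in dens]

# Two operating modes centered at 0 and 10: a point near 0 is assigned
# to the first mode with probability close to 1.
post = gmm_posteriors(0.3, [(0.5, 0.0, 1.0), (0.5, 10.0, 1.0)])
```

These soft assignments let MLFDA weight each observation by how strongly it belongs to each operating mode, instead of forcing hard cluster labels.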
Abstract: In the process of software development, the ability to localize faults is crucial for improving the efficiency of debugging. Generally speaking, detecting and repairing errant behavior at an early stage of the development cycle considerably reduces costs and development time. Researchers have tried various methods to locate faulty code. However, failing test cases usually account for a small portion of the test suite, which inevitably leads to the class-imbalance phenomenon and hampers the effectiveness of fault localization. Accordingly, in this work, we propose a new fault localization approach named ContextAug. After obtaining dynamic executions through test cases, ContextAug traces these executions to build an information model; subsequently, it constructs a failure context with propagation dependencies to intersect with new model-domain failing test samples synthesized from the minimum variability of the minority feature space. In contrast to traditional test generation directly from the input domain, ContextAug takes a new perspective and synthesizes failing test samples from the model domain, which makes it much easier to augment test suites. In an empirical study on real large-sized programs against 13 state-of-the-art fault localization approaches, ContextAug improved fault localization effectiveness by up to 54.53%.
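The idea of synthesizing minority (failing) samples can be illustrated with a SMOTE-style interpolation between existing failing feature vectors (this is a generic stand-in for the class-imbalance remedy, not ContextAug's actual "minimum variability" construction):

```python
import random

def synthesize_failing(samples, n_new, alpha=0.5, seed=0):
    """Create n_new synthetic minority samples by interpolating between
    random pairs of existing failing feature vectors; alpha bounds how
    far a synthetic point moves from its base sample."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(samples, 2)
        t = rng.uniform(0.0, alpha)
        synthetic.append([ai + t * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

# Three failing samples in a 2-D feature space yield five synthetic ones
# that stay inside the minority region.
fails = [[0.1, 0.9], [0.3, 0.7], [0.2, 0.8]]
new = synthesize_failing(fails, 5)
```

Because the synthetic points stay within the minority feature region, they rebalance the class distribution without drifting into behavior the failing tests never exhibited.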
Funding: The authors thank the University of Isfahan for financial support.
Abstract: A sedimentological investigation was carried out to reconstruct the paleogeography of the Zagros Foreland Basin. Based on the study of more than 1000 rock samples, nine carbonate microfacies and three terrigenous facies were identified. The study reveals that the Maastrichtian succession was deposited on a widespread homoclinal ramp in the High Zagros Basin. Three (Gandom Kar area), two (Ardal area), seven (Gardbishe area), five (Shirmard area), two (Kuh-e-Kamaneh area), three (Kuh-e-Balghar area), and six (Murak area) third-order depositional sequences were identified. Owing to the activity of local faults and subsidence, the lowstand systems tract (LST) was thicker in the southeast than in the central and northwestern High Zagros Basin during the Early and early Middle Maastrichtian. During the Middle Maastrichtian, shallow- and deep-marine deposits formed during the transgressive systems tract (TST) and highstand systems tract (HST) in this basin; the rate of subsidence in the center of the basin (Gardbishe area) was higher than in other areas, and the platform was drowned there. The fall in relative sea level caused by the activity of local faults meant that marine deposits were absent in all parts of the High Zagros Basin (except the southern part) during the Late Maastrichtian. Paleogeographical studies of the Zagros Basin during the Late Campanian-Maastrichtian show that shallow-marine environments developed in the southeast of the basin, while turbidite, delta, and fluvial environments were more developed in the northwest than in other areas.