Forecasting the success or failure of software has become an interesting and, in fact, essential task in the software development industry. To explore the latest data on successes and failures, this research focused on questions such as: Is the early phase of the software development life cycle better than later phases at predicting software success and avoiding costly rework? What human factors contribute to the success or failure of software? What practices do industry practitioners use to achieve high software quality in their day-to-day work? To conduct this empirical analysis, a total of 104 practitioners were recruited to determine how human factors, misinterpretation and miscommunication of requirements, and decision-making processes affect software success forecasting. We discussed a potential relationship between forecasting of software success or failure and the development processes. We observed that the experienced participants who responded to the questionnaire had more confidence in their practices and were more likely to link software success forecasting to the development processes. Our analysis also shows that cognitive bias is the central human factor that negatively affects forecasting of the software success rate. The results of this empirical study further confirmed that misinterpretation and miscommunication of requirements were the main causes behind software system failures. Reliable, relevant, and trustworthy sources of information were found to support decision-making when predicting the success of software systems in the industry. This empirical study highlights the need for software practitioners to avoid such biases while working on software projects. Future investigations can identify other human factors that may impact the success of software systems.
Artificial intelligence based dialog systems are attracting attention from both business and academic communities. The key components of such intelligent chatbot systems are domain classification, intent detection, and named entity recognition. Various supervised, unsupervised, and hybrid approaches are used to detect each field. Such intelligent systems, also called natural language understanding systems, analyze user requests in sequential order: domain classification, then intent and entity recognition based on the semantic rules of the classified domain. This sequential approach propagates errors downstream; i.e., if the domain classification model fails to classify the domain, intent and entity recognition also fail. Furthermore, training such a system requires a large user-annotated dataset for each domain. This study proposes a single joint predictive deep neural network framework based on long short-term memory that uses only a small user-annotated dataset to address these issues. It investigates the value added by incorporating unlabeled data from user chat logs into multi-domain spoken language understanding systems. A systematic experimental analysis of the proposed joint frameworks, along with the semi-supervised multi-domain model, using open-source annotated and unannotated utterances shows a robust improvement in the predictive performance of the proposed multi-domain intelligent chatbot over a base joint model and a joint model based on adversarial learning.
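The downstream-error argument above can be illustrated with a toy calculation. This is not the study's model; the per-stage accuracies below are hypothetical, and the sketch only shows why errors compound in a sequential pipeline (a joint model avoids gating intent and entity prediction on a prior domain decision):

```python
def pipeline_accuracy(stage_accuracies):
    """End-to-end accuracy of a sequential pipeline in which each stage
    (e.g., domain -> intent -> entity) only succeeds if all earlier stages did.
    Assumes stage errors are independent, so accuracies multiply."""
    acc = 1.0
    for a in stage_accuracies:
        acc *= a
    return acc

# Hypothetical per-stage accuracies for domain, intent, and entity recognition:
# three individually strong stages still lose over 20 points end to end.
print(round(pipeline_accuracy([0.95, 0.90, 0.92]), 4))  # 0.7866
```

This compounding is the motivation the abstract gives for predicting all three fields jointly rather than sequentially.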
Performance anomaly detection is the process of identifying occurrences that do not conform to expected behavior or that correlate with other incidents or events in time series data. Anomaly detection has been applied to areas such as fraud detection, intrusion detection systems, and network systems. In this paper, we propose an anomaly detection framework that uses dynamic quality-of-service features collected in a simulated setup. Three variants of recurrent neural networks (SimpleRNN, long short-term memory, and gated recurrent unit) are evaluated. The results reveal that the proposed method effectively detects anomalies in web services with high accuracy. Measured by the maximum accuracy and detection rate metrics, the proposed anomaly detection framework outperforms existing approaches.
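As a rough illustration of the flagging step in such a pipeline, the sketch below uses a simple trailing-window statistical threshold in place of an RNN forecaster. It is not the paper's method, and the QoS values are invented; it only shows the generic idea of marking points that deviate sharply from recent expected behavior:

```python
import statistics

def detect_anomalies(series, window=5, k=3.0):
    """Flag indices whose value deviates from the trailing-window mean by more
    than k standard deviations. A statistical stand-in for thresholding the
    residual between observed values and a learned forecast."""
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.mean(hist)
        sigma = statistics.stdev(hist)
        if sigma > 0 and abs(series[i] - mu) > k * sigma:
            anomalies.append(i)
    return anomalies

# Hypothetical response-time series (ms) with one latency spike.
qos = [100, 102, 101, 99, 100, 101, 250, 100, 99]
print(detect_anomalies(qos))  # [6]
```

An RNN-based detector replaces the trailing mean with a learned prediction, which lets it handle trends and seasonality that a fixed-window statistic cannot.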
Regression testing is a widely studied research area aimed at meeting the quality challenges of software systems. Achieving good software quality, however, entails high resource consumption during testing. To overcome this challenge, test case prioritization (TCP), a sub-type of regression testing, is continuously investigated to achieve the testing objectives. This study proposes an ontology-based TCP (OTCP) approach aimed at reducing resource consumption for the quality improvement and maintenance of software systems. The proposed approach uses software metrics to examine the behavior of the classes of software systems. It uses Binary Logistic Regression (BLR) and AdaBoostM1 classifiers to verify correct predictions of the faulty and non-faulty classes. A reference ontology is used to match the code metrics with class attributes. We investigated five Java programs, from which the code metrics were obtained, to evaluate the proposed approach. The study achieved an average percentage of faults detected (APFD) value of 94.80%, which is higher than that of other TCP approaches. In future work, larger programs in different languages can be used to evaluate the scalability of the proposed OTCP approach.
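The APFD value reported above is the standard metric for evaluating a prioritized test ordering. A minimal implementation of the standard formula is sketched below; the test positions in the example are hypothetical, not data from the study:

```python
def apfd(fault_first_positions, num_tests):
    """Average Percentage of Faults Detected (APFD) for a prioritized suite.

    fault_first_positions: for each fault, the 1-based position of the first
    test case in the prioritized order that reveals it (TF_i).
    num_tests: total number of test cases n.

    APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2n)
    """
    m = len(fault_first_positions)
    n = num_tests
    return 1 - sum(fault_first_positions) / (n * m) + 1 / (2 * n)

# Hypothetical ordering of 10 tests where 4 faults are first exposed
# at positions 1, 2, 2, and 4: faults surface early, so APFD is high.
print(apfd([1, 2, 2, 4], 10))  # 0.825
```

Higher APFD (closer to 1) means faults are detected earlier in the prioritized order, which is exactly the objective a TCP approach optimizes.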
This paper presents a review of the ensemble learning models proposed for web service classification, selection, and composition. Web services are an evolving research area, and ensemble learning has become a popular means of assessing the aspects of web services mentioned above. This research reviews the state-of-the-art approaches in the web services area. The literature on the topic is examined using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method. The study reveals an increasing trend of ensemble learning in the selected papers over the last ten years. Naïve Bayes (NB), Support Vector Machine (SVM), and other classifiers were identified as widely explored in the selected studies. A core analysis of web service classification suggests that the performance aspects of web services can be investigated in future work. This paper also identifies the performance metrics widely used in the literature, including accuracy, precision, recall, and F-measure.
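The four metrics named at the end of the abstract all derive from the confusion-matrix counts of a classifier. A minimal sketch of their standard definitions, with hypothetical counts for illustration:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F-measure from confusion-matrix counts
    (true/false positives and negatives) of a binary classifier."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of predicted positives, fraction correct
    recall = tp / (tp + fn)             # of actual positives, fraction found
    f_measure = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f_measure

# Hypothetical counts for a binary web-service classifier.
acc, p, r, f1 = classification_metrics(tp=40, fp=10, fn=5, tn=45)
print(round(acc, 3), round(p, 3), round(r, 3), round(f1, 3))  # 0.85 0.8 0.889 0.842
```

Reporting precision and recall alongside accuracy matters in this setting because web-service datasets are often class-imbalanced, where accuracy alone can be misleading.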
Enterprise architecture (EA) efforts focus on business, technology, data, and application architecture, and their integration. However, less attention has been given to one of the most critical EA elements: the users (EA audiences). As a result, existing EA management systems (EAMS) have become old, large, content-centric document repositories that cannot provide enterprise users with meaningful information aligned with their needs and functional scope. We argue that a semantic-technology-based mechanism focusing on enterprise information and user-centricity has the potential to solve this problem. In this context, we present a novel ontology-based strategy named the user-centric semantics-oriented EA (U-SEA) model. Based on this model, we have developed a user-centric semantics-oriented enterprise architecture management (U-SEAM) system. Our approach is generic enough to be used in a wide variety of user-centric EAM applications. The results show that it is computationally feasible to integrate and govern enterprise information and to reduce the complexity of interoperability between enterprise information and users.
Funding: Supported by the BK21 FOUR (Fostering Outstanding Universities for Research) funded by the Ministry of Education and the National Research Foundation of Korea.
Funding: This research was supported by the BK21 FOUR (Fostering Outstanding Universities for Research) funded by the Ministry of Education (MOE, Korea) and the National Research Foundation of Korea (NRF).
Funding: This research was supported by the BK21 FOUR (Fostering Outstanding Universities for Research) funded by the Ministry of Education (MOE, Korea) and the National Research Foundation of Korea (NRF).