π/4-DQPSK has been adopted as the modulation scheme in many mobile communication systems. This paper discusses a new signal processing technique for a π/4-DQPSK modem based on software radio. Unlike many other software radio solutions, we choose a universal digital radio baseband processor operating as a co-processor of the DSP, so that only the core signal processing algorithms are implemented on the DSP. The computational burden on the DSP is thus reduced significantly. Compared with traditional approaches, the technique is extremely compact and power-efficient, properties often required by a mobile communication system. The implementation of the baseband signal processing for the π/4-DQPSK modem on this platform is illustrated in detail, with special emphasis on the system architecture and the algorithms used in the baseband signal processing. Finally, experimental results are presented, and the performance of the signal processing and compensation algorithms is evaluated through computer simulations.
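As a concrete illustration of the modulation core, here is a minimal Python sketch of π/4-DQPSK baseband symbol mapping and differential demodulation. The Gray-coded dibit-to-phase-shift table is a common convention assumed here, not necessarily the paper's exact design, and no pulse shaping or compensation stage is included.

```python
import numpy as np

# Dibit -> differential phase shift (one common Gray-coded convention).
PHASE_SHIFT = {(0, 0): np.pi / 4, (0, 1): 3 * np.pi / 4,
               (1, 1): -3 * np.pi / 4, (1, 0): -np.pi / 4}

def pi4_dqpsk_modulate(bits):
    """Map an even-length bit list to complex baseband symbols."""
    symbol, out = 1.0 + 0.0j, []          # arbitrary reference phase
    for dibit in zip(bits[0::2], bits[1::2]):
        symbol *= np.exp(1j * PHASE_SHIFT[dibit])
        out.append(symbol)
    return np.array(out)

def pi4_dqpsk_demodulate(symbols):
    """Recover dibits from the phase difference of consecutive symbols."""
    inv = {shift: dibit for dibit, shift in PHASE_SHIFT.items()}
    prev, bits = 1.0 + 0.0j, []
    for s in symbols:
        dphi = np.angle(s / prev)
        # choose the nearest legal phase shift (robust to small noise)
        nearest = min(inv, key=lambda p: abs(np.angle(np.exp(1j * (dphi - p)))))
        bits.extend(inv[nearest])
        prev = s
    return bits

tx = [0, 0, 1, 1, 0, 1, 1, 0]
assert pi4_dqpsk_demodulate(pi4_dqpsk_modulate(tx)) == tx
```

Because information is carried in phase differences, the demodulator needs no absolute carrier phase reference, which is one reason π/4-DQPSK suits mobile channels.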
Entity recognition and extraction are the foundations of knowledge graph construction. Entity data in the field of software engineering come from different platforms and communities and have different formats. This paper divides multi-source software knowledge entities into unstructured data, semi-structured data, and code data. For these different types of data, Bi-directional Long Short-Term Memory (Bi-LSTM) with a Conditional Random Field (CRF), template matching, and abstract syntax trees are used and integrated into a multi-source software knowledge entity extraction integration model (MEIM) to extract software entities. The model can be updated continuously based on user feedback to improve its accuracy. To deal with the shortage of entity annotation datasets, keyword extraction methods based on Term Frequency-Inverse Document Frequency (TF-IDF), TextRank, and K-Means are applied to the annotation tasks. The proposed MEIM model is applied to the Spring Boot framework, where it demonstrates good adaptability. The extracted entities are used to construct a knowledge graph, which is applied to association retrieval and association visualization. Funding: Zhifang Liao: Ministry of Science and Technology Key Research and Development Project (2018YFB003800), Hunan Provincial Key Laboratory of Finance & Economics Big Data Science and Technology (Hunan University of Finance and Economics) 2017TP1025, HNNSF 2018JJ2535; Shengzong Liu: NSF 61802120.
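To illustrate the TF-IDF annotation step, the following sketch extracts candidate entity keywords with scikit-learn; the corpus and the top-k cutoff are invented for illustration and are not from the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "Spring Boot auto-configuration simplifies bean wiring",
    "The embedded Tomcat server starts with the Spring Boot application",
    "Conditional annotations control which beans are registered",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)
terms = vectorizer.get_feature_names_out()

for i, doc in enumerate(docs):
    row = tfidf[i].toarray().ravel()
    top = row.argsort()[::-1][:3]          # top-3 candidate entity keywords
    print(doc, "->", [terms[j] for j in top if row[j] > 0])
```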
Purpose: This work aims to normalize the NLPCONTRIBUTIONS scheme (henceforth NLPCONTRIBUTIONGRAPH) to structure, directly from article sentences, the contributions information in Natural Language Processing (NLP) scholarly articles via a two-stage annotation methodology: 1) a pilot stage to define the scheme (described in prior work); and 2) an adjudication stage to normalize the graphing model (the focus of this paper). Design/methodology/approach: We re-annotate, a second time, the contributions-pertinent information across 50 prior-annotated NLP scholarly articles in terms of a data pipeline comprising contribution-centered sentences, phrases, and triple statements. Specifically, care was taken in the adjudication annotation stage to reduce annotation noise while formulating the guidelines for our proposed novel NLP contributions structuring and graphing scheme. Findings: The application of NLPCONTRIBUTIONGRAPH on the 50 articles finally resulted in a dataset of 900 contribution-focused sentences, 4,702 contribution-information-centered phrases, and 2,980 surface-structured triples. The intra-annotation agreement between the first and second stages, in terms of F1-score, was 67.92% for sentences, 41.82% for phrases, and 22.31% for triple statements, indicating that as the granularity of the information increases, so does the annotation decision variance. Research limitations: NLPCONTRIBUTIONGRAPH has limited scope for structuring scholarly contributions compared with STEM (Science, Technology, Engineering, and Medicine) scholarly knowledge at large. Further, the annotation scheme in this work is designed by an intra-annotator consensus only: a single annotator first annotated the data to propose the initial scheme, and the same annotator then re-annotated the data to normalize the annotations in an adjudication stage. The expected goal of this work, however, is a standardized retrospective model for capturing NLP contributions from scholarly articles, which would entail a larger initiative enlisting multiple annotators to accommodate different worldviews in a single final set of structures and relationships. Given that the initial scheme is newly proposed and the annotation task is complex within a realistic timeframe, our intra-annotation procedure is well suited; nevertheless, the model proposed in this work is presently limited since it does not incorporate multiple annotator worldviews, which is planned as future work toward a robust model. Practical implications: We demonstrate NLPCONTRIBUTIONGRAPH data integrated into the Open Research Knowledge Graph (ORKG), a next-generation KG-based digital library with intelligent computations enabled over structured scholarly knowledge, as a viable aid to assist researchers in their day-to-day tasks. Originality/value: NLPCONTRIBUTIONGRAPH is a novel scheme to annotate research contributions from NLP articles and integrate them in a knowledge graph, which to the best of our knowledge does not exist in the community. Furthermore, our quantitative evaluations of the two-stage annotation tasks offer insights into task difficulty. Funding: This work was co-funded by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536) and by the TIB Leibniz Information Centre for Science and Technology.
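The intra-annotation agreement figures above are exact-match F1 scores between two annotation rounds. A minimal sketch of such a computation, with invented triples, might look like this:

```python
def f1_agreement(round1, round2):
    """Exact-match F1 between two sets of annotations."""
    r1, r2 = set(round1), set(round2)
    if not r1 or not r2:
        return 0.0
    overlap = len(r1 & r2)
    precision = overlap / len(r2)   # round 1 taken as the reference
    recall = overlap / len(r1)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

pilot = {("model", "achieves", "state of the art"), ("task", "is", "NER")}
adjudicated = {("model", "achieves", "state of the art")}
print(f"triple agreement F1 = {f1_agreement(pilot, adjudicated):.2%}")
```

Exact matching is unforgiving at fine granularity, which is consistent with the reported drop from sentence-level to triple-level agreement.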
This paper points out various relationships between design knowledge and Software Engineering. After an introduction to human design, the relationship between design knowledge and industrial Software Engineering is discussed; further details of human design knowledge are then revealed in a discussion of the humanistic aspects of design.
Software defects are managed through a knowledge base, upgrading defect management from the data level to the knowledge level. Rule knowledge is mined from bug data with a rule-based knowledge extraction model, and an appropriate strategy is configured in the strategy layer to predict software defects. The model extracts both direct association rules and extended association rules, which improve the prediction rate of related defects and the efficiency of software testing.
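As an illustration of the direct-rule mining step, here is a small sketch that extracts module-to-module association rules from bug records using support and confidence thresholds; the bug data and thresholds are invented, and the paper's extended-rule extraction is not reproduced.

```python
from itertools import combinations
from collections import Counter

# Each record lists the modules touched by one bug fix.
bugs = [{"auth", "session"}, {"auth", "session"}, {"auth", "db"},
        {"ui", "session"}, {"auth", "session", "db"}]

MIN_SUPPORT, MIN_CONFIDENCE = 0.4, 0.7
n = len(bugs)
item_count = Counter(m for bug in bugs for m in bug)
pair_count = Counter(p for bug in bugs for p in combinations(sorted(bug), 2))

for (a, b), c in pair_count.items():
    support = c / n
    if support < MIN_SUPPORT:
        continue
    for lhs, rhs in ((a, b), (b, a)):
        confidence = c / item_count[lhs]
        if confidence >= MIN_CONFIDENCE:
            print(f"{lhs} -> {rhs}  (support={support:.2f}, conf={confidence:.2f})")
```

A rule such as "db -> auth" would then feed the strategy layer as a hint that defects in one module predict related defects in another.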
This research develops a knowledge model for Software Process Improvement (SPI) projects based on knowledge creation theory and its twenty-four measurement items, and proposes two hypotheses about the interaction of explicit and tacit knowledge in SPI. Eleven factors are extracted through statistical analysis. Three knowledge-creation practices for capturing tacit knowledge contribute greatly to SPI: communication among members, crossover collaboration in practical work, and pair programming. Two knowledge-creation practices for capturing explicit knowledge have a significant positive impact on SPI: integrating project documents and on-the-job training. Finally, suggestions for improvement are put forward, namely encouraging communication among staff and integrating documents in real time, and future research directions are outlined.
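For readers unfamiliar with the factor-extraction step, the following sketch runs a factor analysis over synthetic Likert-style responses with scikit-learn; the real study used 24 measurement items and extracted 11 factors, whereas the data here are random placeholders.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(120, 24)).astype(float)  # 120 respondents, 24 items

fa = FactorAnalysis(n_components=11, random_state=0)
scores = fa.fit_transform(responses)
print(scores.shape)            # (120, 11) factor scores per respondent
print(fa.components_.shape)    # (11, 24) item loadings per factor
```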
A knowledge transfer model of software process improvement (SPI) and a conceptual framework of influencing factors are established. The model includes five elements: the knowledge to transfer, sources of knowledge, recipients of knowledge, the relationship between the transfer parties, and the transfer environment. The conceptual framework includes ten key factors: ambiguity, systematism, transfer willingness, capacity of impartation, capacity of absorption, incentive mechanism, culture, technical support, trust, and knowledge distance. Research hypotheses are put forward. The empirical study concludes that the trust relationship among SPI staff has the greatest influence on knowledge transfer, and that an organizational incentive mechanism has a positive effect on knowledge transfer in SPI. Finally, some suggestions are put forward to improve knowledge transfer in SPI: establishing a rational incentive mechanism, providing necessary training to the transfer parties, and using software benchmarking.
Requirements engineering is the most important phase of any software development effort, determining the success or failure of the software. Knowledge modeling and management are tools that help software organizations learn. Traditional requirements engineering practices are based on stakeholder interaction, which causes iterative requirement changes and difficulties in communication and in understanding the problem domain. To resolve such issues, we use knowledge-based techniques to support RE practices as well as the software development process. Our technique rests on two perspectives: a theoretical and a practical implementation. In this paper, we describe the need for knowledge management in software engineering and then propose a knowledge-management-based model to support the software development process. To verify our results, we used a controlled experiment: we implemented our model and compared outcomes with and without the proposed knowledge-based RE process. The proposed model can reduce the overall cost and time of the requirements engineering process as well as of software development.
New theories, methodologies, and technologies are continuously invented and widely applied in modern software development, along with many new tools and best practices of remarkable significance to the software industry. In university Software Engineering (SE) programs, it is quite difficult for curricula to keep pace with this fast-evolving technology trend; consequently, designing an evolvable SE curriculum poses significant challenges. In this paper, we present a knowledge graph based curriculum design method for SE programs. Knowledge Points (KPs) are organized into a multi-layer, multi-dimensionally annotated knowledge graph called SEKG, and five principles are applied to partition the SEKG into a set of inter-related courses. Metrics for evaluating the quality of an SE curriculum are briefly discussed. This method can not only help design a systematic curriculum from existing software engineering KPs but also facilitate curriculum evolution to adapt to technology trends.
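The paper's five partition principles cannot be reconstructed from the abstract, but as a toy illustration of partitioning a KP prerequisite graph into ordered course groups, one could group KPs by topological depth, as sketched below with invented KPs.

```python
from collections import defaultdict
from functools import lru_cache

prereqs = {  # KP -> prerequisite KPs (illustrative)
    "requirements": [], "design": ["requirements"],
    "coding": ["design"], "testing": ["coding"],
    "version control": [], "CI/CD": ["version control", "testing"],
}

@lru_cache(maxsize=None)
def depth(kp):
    """Longest prerequisite chain below a knowledge point."""
    return 0 if not prereqs[kp] else 1 + max(depth(p) for p in prereqs[kp])

courses = defaultdict(list)
for kp in prereqs:
    courses[depth(kp)].append(kp)
for level in sorted(courses):
    print(f"course {level + 1}: {courses[level]}")
```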
This paper emphasizes that accounting for the imperfection of the design/development team's knowledge about the problem domains and environments is essential for developing robust software metrics and systems. To this end, the various possible types of imperfection in knowledge are first discussed, followed by the available formal/mathematical models for representing and handling these imperfections. The discussion of knowledge classification and representation is from a computational perspective, within the context of the software development enterprise, and not necessarily from the perspectives of organizational management, library and information science, or psychology.
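As one small example of the formal models such surveys typically cover, vague knowledge such as "this module is complex" can be represented as a fuzzy set; the membership function shape below is an invented illustration, not a model from the paper.

```python
def mu_complex(cyclomatic: float) -> float:
    """Piecewise-linear membership of 'complex' over cyclomatic complexity."""
    if cyclomatic <= 5:
        return 0.0
    if cyclomatic >= 20:
        return 1.0
    return (cyclomatic - 5) / 15.0

for cc in (3, 10, 25):
    print(f"cyclomatic={cc:>2} -> membership in 'complex' = {mu_complex(cc):.2f}")
```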
The factors affecting knowledge creation were identified among both the elements necessary to stimulate knowledge creation and the key factors influencing software project success. The research draws on the specific successful practices of Microsoft Corporation and William Johnson's analysis of R&D project knowledge creation. The factors affecting knowledge creation in requirements development projects were clarified through in-depth interviews with software enterprises in Guangdong province as well as other corporate information departments. Based on literature research and the enterprise interviews, the factors are divided, following the R&D project knowledge creation model, across four levels: organizational, team, personal, and technical. An empirical study was then conducted via a questionnaire and exploratory analysis.
Software requirements engineering deals with the elicitation, specification, and validation of software requirements. There is also a need to facilitate collaboration among stakeholders and analysts, yet few efforts have been made to support them in performing their day-to-day work. To solve this problem, we apply knowledge management to software requirements engineering. This paper proposes a knowledge management framework, based on the SECI model of knowledge creation, aimed at exploiting tacit and explicit knowledge related to software requirements within a given software project. The core of the proposed framework is a set of four subsystems, "Socializer", "Externalizer", "Combiner", and "Internalizer", attached to a pair of domain ontologies and a set of knowledge assets. We aim to facilitate a semantics-based interpretation of knowledge assets related to software requirements by restricting their interpretation through the application domain and software requirements ontologies. We anticipate that this framework will be very helpful for stakeholders as well as analysts in exchanging and managing their knowledge within a given software project. In a case study of a virtual payroll project using a two-step approach (domain-level requirements plus design-level requirements), we show how the key SRE elicitation techniques are used during the first phase of domain requirements elicitation through the four subsystems of our framework.
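A purely structural sketch of the four subsystems as a pipeline over knowledge assets follows; the method bodies are placeholders, since the paper defines the subsystems conceptually rather than as a concrete API, and the payroll asset is invented.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeAsset:
    content: str
    tacit: bool = True
    tags: list = field(default_factory=list)   # links into domain ontologies

class Socializer:      # tacit -> tacit: capture stakeholder discussions
    def run(self, a): return a

class Externalizer:    # tacit -> explicit: write discussions up as requirements
    def run(self, a): a.tacit = False; return a

class Combiner:        # explicit -> explicit: merge/classify via the ontologies
    def run(self, a): a.tags.append("payroll-domain"); return a

class Internalizer:    # explicit -> tacit: analysts absorb documented knowledge
    def run(self, a): return a

asset = KnowledgeAsset("overtime is paid at 1.5x after 40 hours")
for stage in (Socializer(), Externalizer(), Combiner(), Internalizer()):
    asset = stage.run(asset)
print(asset)
```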
Software testing courses are characterized by strong practicality, comprehensiveness, and diversity. Owing to the differences among students and the need to design personalized solutions for their specific requirements, existing software testing course designs fail to meet the demand for personalized learning. Knowledge graphs, with their rich semantics and good visualization effects, have a wide range of applications in the field of education. In response to this problem, this paper offers a knowledge-graph-based learning path recommendation that provides personalized learning paths for students. Funding: This work was supported by the Special Funds for Basic Research of Central Universities (D5000220240) and the Special Funds for Education and Teaching Reform in 2023 (06410-23GZ230102).
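One simple way to derive a personalized path from such a knowledge graph is to topologically sort the prerequisites reachable from a student's target concept. The sketch below assumes invented course concepts and edges, not the paper's actual graph.

```python
from graphlib import TopologicalSorter   # Python 3.9+

prereq = {                       # concept -> prerequisites
    "boundary value analysis": {"black-box testing"},
    "black-box testing": {"testing basics"},
    "branch coverage": {"white-box testing"},
    "white-box testing": {"testing basics"},
    "testing basics": set(),
}

def learning_path(target):
    """All prerequisites of `target`, in a valid study order."""
    needed, stack = set(), [target]
    while stack:
        c = stack.pop()
        if c not in needed:
            needed.add(c)
            stack.extend(prereq[c])
    order = TopologicalSorter({c: prereq[c] & needed for c in needed})
    return list(order.static_order())

print(learning_path("boundary value analysis"))
# e.g. ['testing basics', 'black-box testing', 'boundary value analysis']
```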
In project-based organizations, knowledge is a critical resource used to develop and deliver products and services with a high level of quality. A systematic and sustainable process is therefore necessary to coordinate knowledge management, project management, and the product lifecycle. This scenario predominates in companies focused on the creation and maintenance of information systems. This article presents an exploratory study based on a framework that integrates cognitive, managerial, and operational processes in a public Brazilian organization that provides information and communications technology services, focusing on the construction and maintenance of information systems. Those processes are operationalized by three management models covering knowledge, project, and software development processes. Our proposal aims to understand the relationships among those three management models and their influence on the software development process in the organization under study. Our premise is that cognitive management, project management, and software development management must be integrated to fulfill the demands of product development and service provision. The research data comprised registers of working hours spent on software development and maintenance projects, involving 244 people allocated to 5,064 projects from 2007 to 2013. The study identified the relationships among the three management models adopted by the organization, with emphasis on knowledge management activities, which were not directly identified, making them difficult to account for and measure. We established a set of activities connected to each phase of the knowledge management model. Since those activities were not visible before, our approach contributed to building a systematic process to register and relate activities linked to the dimensions of cognitive processes, project management, and software construction.
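The kind of aggregation performed over the working-hour registers can be sketched with pandas on invented records tagged by management dimension; the fields and values are illustrative stand-ins for the organization's actual registers.

```python
import pandas as pd

records = pd.DataFrame({
    "project": ["P1", "P1", "P2", "P2", "P3"],
    "activity": ["design review", "coding", "lessons learned",
                 "planning", "coding"],
    "dimension": ["knowledge", "software", "knowledge",
                  "project", "software"],
    "hours": [4.0, 30.0, 2.0, 6.0, 25.0],
})

# Total and count of logged hours per management dimension.
print(records.groupby("dimension")["hours"].agg(["sum", "count"]))
```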
In the first editorial of this two-part special issue, we pointed out that one of the biggest trends in wireless broadband, radar, sonar, and broadcasting technology is software RF processing and digital front-end [1]. This trend encompasses signal processing algorithms and integrated circuit design and includes digital pre-distortion (DPD), conversions between digital and analog signals, digital up-conversion (DUC), digital down-conversion (DDC), DC offset, ...
One of the biggest technology trends in wireless broadband, radar, sonar, and broadcasting systems is software radio frequency processing and digital front-end. This trend encompasses a broad range of topics, from circuit design and signal processing to system integration. It includes digital up-conversion (DUC) and down-conversion (DDC), digital predistortion (DPD), ...
Software project outcomes depend heavily on natural language requirements, which often cause diverse interpretations and issues such as ambiguities and incomplete or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods. Existing studies, however, do not generalize efficiently when extended to other datasets. This paper therefore proposes a hybrid approach combining multiple techniques and explores their effectiveness on bug identification problems. The methods involve feature selection, used to reduce the dimensionality and redundancy of features and select only the relevant ones; transfer learning, used to train and test the model on different datasets to analyze how much of the learning carries over; and an ensemble method, used to explore the performance gain from combining multiple classifiers in one model. Four National Aeronautics and Space Administration (NASA) and four Promise datasets are used in the study. The model's performance improves, yielding better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers are combined. This reveals that an amalgam of techniques such as those used in this study (feature selection, transfer learning, and ensemble methods) helps optimize software bug prediction models and produce high-performing, useful end models. Funding: This research is funded by Researchers Supporting Project Number (RSPD2024R947), King Saud University, Riyadh, Saudi Arabia.
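A compact sketch of the hybrid idea on synthetic data follows: filter-style feature selection feeding a soft-voting ensemble, scored by AUC-ROC. The NASA/Promise datasets and the paper's transfer learning step are not reproduced here; the classifier choices and parameters are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Imbalanced synthetic data mimicking defect datasets (few buggy modules).
X, y = make_classification(n_samples=600, n_features=40, n_informative=8,
                           weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = make_pipeline(
    SelectKBest(f_classif, k=10),                       # feature selection
    VotingClassifier([("lr", LogisticRegression(max_iter=1000)),
                      ("rf", RandomForestClassifier(random_state=0))],
                     voting="soft"),                    # ensemble
)
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC-ROC = {auc:.3f}")
```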
The Software Development Life Cycle (SDLC) is one of the major ingredients for developing efficient software systems within a time frame and at low cost. From the literature, it is evident that software industries use various kinds of process models to develop small, medium, and long-term software projects, but many of these models do not cover risk management. Improper selection of the software development process model leads to failure of software products, since development is a time-bound activity. In the present work, a new software development process model is proposed that covers risks at any stage of software product development. The model is named the Hemant-Vipin (HV) process model and may help software industries develop efficient software products and deliver them to clients on time. The efficiency of the HV process model is assessed against factors such as requirement clarity, user feedback, change agility, predictability, risk identification, practical implementation, customer satisfaction, incremental development, use of ready-made components, quick design, and resource organization; a case study found that the presented approach covers more of these parameters than existing process models.
Based on the practical needs of the citation search services widely offered by libraries in China, and building on extensive working practice, a piece of software was designed and implemented to produce citation statistics reports from ISI Web of Knowledge search results. It can quickly compile statistics over the search results according to different statistical indicators. Practice has shown that the software improves work efficiency while maintaining accuracy.
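The core statistics step can be sketched as counting citations per indicator over exported records; the record fields below are invented stand-ins for the actual export format of ISI Web of Knowledge.

```python
from collections import Counter

records = [  # one dict per exported search hit
    {"year": 2008, "journal": "J. Informetrics", "times_cited": 12},
    {"year": 2008, "journal": "Scientometrics", "times_cited": 5},
    {"year": 2009, "journal": "Scientometrics", "times_cited": 7},
]

by_year, by_journal = Counter(), Counter()
for r in records:
    by_year[r["year"]] += r["times_cited"]
    by_journal[r["journal"]] += r["times_cited"]

print("citations by year:   ", dict(by_year))
print("citations by journal:", dict(by_journal))
```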