Funding: Supported in part by the Sichuan Science and Technology Program (Grant No. 2023YFG0316), the Industry-University Research Innovation Fund of China University (Grant No. 2021ITA10016), the Key Scientific Research Fund of Xihua University (Grant No. Z1320929), and the Special Funds of Industry Development of Sichuan Province (Grant No. zyf-2018-056).
Abstract: Due to the interdependency of frame synchronization (FS) and channel estimation (CE), joint FS and CE (JFSCE) schemes are proposed to enhance their functionalities and therefore boost the overall performance of wireless communication systems. Although traditional JFSCE schemes alleviate the influence between FS and CE, they show deficiencies in dealing with hardware imperfection (HI) and the deterministic line-of-sight (LOS) path. To tackle this challenge, we propose a cascaded ELM-based JFSCE to alleviate the influence of HI in the scenario of the Rician fading channel. Specifically, the conventional JFSCE method is first employed to extract the initial features, and thus forms the non-neural-network (NN) solutions for FS and CE, respectively. Then, the ELM-based networks, named FS-NET and CE-NET, are cascaded to capture the NN solutions of FS and CE. Simulation and analysis results show that, compared with the conventional JFSCE methods, the proposed cascaded ELM-based JFSCE significantly reduces the error probability of FS and the normalized mean square error (NMSE) of CE, even against the impacts of parameter variations.
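As an illustrative sketch only (not the paper's FS-NET/CE-NET architecture, whose layer sizes and training details are not given here), the snippet below shows the basic extreme learning machine (ELM) regression step that such cascaded networks build on: hidden-layer weights are drawn at random and only the output weights are solved by least squares.

```python
import numpy as np

def train_elm(X, Y, n_hidden=128, seed=0):
    """Fit a basic ELM regressor: random hidden layer + least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input-to-hidden weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # output weights via least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Hypothetical usage: refine coarse (non-NN) JFSCE features into channel estimates.
X = np.random.randn(1000, 16)          # stand-in for initial features
Y = np.random.randn(1000, 4)           # stand-in for target channel coefficients
W, b, beta = train_elm(X, Y)
Y_hat = elm_predict(X, W, b, beta)
```

Because only `beta` is learned, training reduces to a single linear solve, which is why ELM-based blocks can be cascaded cheaply.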
Funding: Supported by Universiti Kebangsaan Malaysia under Dana Impak Perdana 2.0 (Ref: DIP-2022-020).
Abstract: Software Defined Networking (SDN) is programmable by separation of forwarding control through the centralization of the controller. The controller plays the role of the 'brain' that dictates the intelligent part of SDN technology. Various versions of SDN controllers exist as a response to the diverse demands and functions expected of them. There are several SDN controllers available in the open market besides a large number of commercial controllers; some are developed to meet carrier-grade service levels, and one of the recent trends in open-source SDN controllers is the Open Network Operating System (ONOS). This paper presents a comparative study of open-source SDN controllers, namely the Network Controller Platform (NOX), the Python-based Network Controller (POX), the component-based SDN framework (Ryu), the Java-based OpenFlow controller (Floodlight), OpenDayLight (ODL), and ONOS. The discussion is further extended into the ONOS architecture as well as the evolution of ONOS controllers. This article reviews use cases based on ONOS controllers in several application deployments. Moreover, the opportunities and challenges of open-source SDN controllers are discussed, exploring carrier-grade ONOS for future real-world deployments and ONOS's unique features, and identifying the suitable choice of SDN controller for service providers. In addition, we attempt to answer several critical questions relating to the implications of the open-source nature of SDN controllers for vendor lock-in, interoperability, and standards compliance. Similarly, real-world use cases of organizations using open-source SDN are highlighted, along with how the open-source community contributes to the development of SDN controllers. Furthermore, challenges faced by open-source projects and considerations when choosing an open-source SDN controller are underscored. The role of Artificial Intelligence (AI) and Machine Learning (ML) in the evolution of open-source SDN controllers in light of recent research is then indicated. In addition, the challenges and limitations associated with deploying open-source SDN controllers in production networks, how they can be mitigated, and how open-source SDN controllers handle network security and ensure that network configurations and policies are robust and resilient are presented. Potential opportunities and challenges for future open SDN deployment are outlined to conclude the article.
Abstract: Software testing is a critical phase due to misconceptions about ambiguities in the requirements during specification, which affect the testing process. Therefore, it is difficult to identify all faults in software. As requirements change continuously, irrelevancy and redundancy during testing increase. Due to these challenges, fault detection capability decreases, and there arises a need to improve the testing process based on changes in the requirements specification. In this research, we have developed a model to resolve testing challenges through requirement prioritization and prediction in an agile-based environment. The research objective is to identify the most relevant and meaningful requirements through semantic analysis for correct change analysis. We then compute the similarity of requirements through case-based reasoning, which predicts the requirements for reuse and restricts them to error-based requirements. Afterward, the apriori algorithm maps out requirement frequency to select relevant test cases based on frequently reused or not reused test cases to increase the fault detection rate. Furthermore, the proposed model was evaluated by conducting experiments. The results showed that requirement redundancy and irrelevancy improved due to semantic analysis, which correctly predicted the requirements, increasing the fault detection rate and resulting in high user satisfaction. The predicted requirements are mapped into test cases, increasing the fault detection rate after changes to achieve higher user satisfaction. Therefore, the model improves the redundancy and irrelevancy of requirements by more than 90% compared to other clustering methods and the analytical hierarchical process, achieving an 80% fault detection rate at an earlier stage. Hence, it provides guidelines for practitioners and researchers in the modern era. In the future, we will provide a working prototype of this model as a proof of concept.
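A minimal sketch of the retrieval step behind such case-based reasoning, assuming a plain TF-IDF cosine similarity over requirement texts (the paper's own similarity measure and corpus are not reproduced here; the requirement strings are hypothetical):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical past requirements with already-linked test cases.
past_requirements = [
    "The system shall authenticate users with a password",
    "The system shall export reports as PDF",
    "The system shall lock an account after failed logins",
]
new_requirement = "Users must log in with a password before access"

vec = TfidfVectorizer(stop_words="english")
corpus = vec.fit_transform(past_requirements)          # index the case base
query = vec.transform([new_requirement])               # encode the changed requirement
scores = cosine_similarity(query, corpus).ravel()

# Reuse the test cases attached to the most similar past requirement.
best = scores.argmax()
print(f"closest match: {past_requirements[best]!r} (similarity {scores[best]:.2f})")
```

The retrieved match would then feed the frequency analysis that decides which mapped test cases to prioritize.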
Funding: This work received funding from the European Commission for the Ruralities Project (grant agreement no. 101060876).
Abstract: Agile Transformations are challenging processes for organizations that look to extend the benefits of Agile philosophy and methods beyond software engineering. Despite the impact of these transformations on organizations, they have not been extensively studied in academia. We conducted a study grounded in workshops and interviews with 99 participants from 30 organizations, including organizations undergoing transformations ("final organizations") and companies supporting these processes ("consultants"). The study aims to understand the motivations, objectives, and factors driving and challenging these transformations. Over 700 responses were collected and categorized into 32 objectives. The findings show that organizations primarily aim to achieve customer centricity and adaptability, each with 8% of the mentions. Other primary objectives, each with above 4% of mentions, include alignment of goals, lean delivery, sustainable processes, and a flatter, more team-based organizational structure. We also detect discrepancies in perspective between the objectives identified by the two kinds of organizations and the existing agile literature and models. This misalignment highlights the need for practitioners to understand the practical realities the organizations face.
Funding: This work was supported by the Open Fund Project of the State Key Laboratory of Intelligent Vehicle Safety Technology (Grant No. IVSTSKL-202311), the Key Projects of the Science and Technology Research Programme of the Chongqing Municipal Education Commission (Grant No. KJZD-K202301505), the Cooperation Project between Chongqing Municipal Undergraduate Universities and Institutes Affiliated to the Chinese Academy of Sciences in 2021 (Grant No. HZ2021015), and the Chongqing Graduate Student Research Innovation Program (Grant No. CYS240801).
Abstract: The massive computational complexity and memory requirements of artificial intelligence models impede their deployability on edge computing devices of the Internet of Things (IoT). While Power-of-Two (PoT) quantization has been proposed to improve the efficiency of edge inference for Deep Neural Networks (DNNs), existing PoT schemes require a huge amount of bit-wise manipulation, have large memory overhead, and their efficiency is bounded by the bottleneck of computation latency and memory footprint. To tackle this challenge, we present an efficient inference approach on the basis of PoT quantization and model compression. An integer-only scalar PoT quantization (IOS-PoT) is designed jointly with a distribution loss regularizer, wherein the regularizer minimizes quantization errors and training disturbances. Additionally, two-stage model compression is developed to effectively reduce the memory requirement and alleviate bandwidth usage in communications of networked heterogeneous learning systems. The product look-up table (P-LUT) inference scheme is leveraged to replace bit-shifting with only indexing and addition operations, achieving low-latency computation and enabling efficient edge accelerators. Finally, comprehensive experiments on Residual Networks (ResNets) and efficient architectures with the Canadian Institute for Advanced Research (CIFAR), ImageNet, and Real-world Affective Faces Database (RAF-DB) datasets indicate that our approach achieves a 2× to 10× improvement in the reduction of both weight size and computation cost in comparison to state-of-the-art methods. A P-LUT accelerator prototype is implemented on the Xilinx KV260 Field Programmable Gate Array (FPGA) platform for accelerating convolution operations, with performance results showing that P-LUT reduces the memory footprint by 1.45×, and achieves more than 3× power efficiency and 2× resource efficiency compared to the conventional bit-shifting scheme.
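The following is a minimal sketch of plain power-of-two weight quantization and a product-look-up-table multiply, to make the "indexing and addition instead of bit-shifting" idea concrete; it is not the paper's IOS-PoT scheme, regularizer, or accelerator design, and the bit-width and exponent range are illustrative assumptions.

```python
import numpy as np

def pot_quantize(w, n_bits=4):
    """Round weights to signed powers of two: w ≈ sign(w) * 2**e."""
    sign = np.sign(w)
    e = np.clip(np.round(np.log2(np.abs(w) + 1e-12)), -2**(n_bits - 1), 0)
    return sign, e.astype(int)

def plut_multiply(x, sign, e):
    """Multiply activations by PoT weights via table lookup instead of shifts."""
    exponents = np.arange(-8, 1)                      # assumed exponent range
    table = np.outer(x, 2.0 ** exponents)             # precomputed products per activation
    idx = e - exponents[0]                            # map each exponent to a table column
    return sign * table[np.arange(x.size), idx]

w = np.array([0.31, -0.12, 0.06, -0.5])
x = np.array([1.5, 2.0, -0.7, 0.25])
sign, e = pot_quantize(w)
print(plut_multiply(x, sign, e))                      # ≈ x * quantized(w)
```

In hardware, the table rows would be built once per activation, so each multiply collapses to an index plus an accumulate.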
Abstract: Purpose: To clarify the effectiveness of 3-D delivery animation software for the mother's and husband's satisfaction with delivery. Subjects and Method: We independently developed a software application used to display the pelvic region and explain the labor process. The study involved a collaboration with hospital staff who recruited 18 primiparous and 18 multiparous mothers who were hospitalized for delivery at Facility A. The midwife explained the process of delivery using the "Delivery Animation Software". A self-administered, anonymous questionnaire was distributed and analyzed separately for primiparous and multiparous mothers and their husbands. Results: 1) For both primiparous and multiparous couples, both mothers and their husbands gained a significantly higher level of understanding after delivery than during pregnancy. 2) The results of the Self-Evaluation Scale for Experience of Delivery were as follows: "I did my best for the baby even if it was painful" was selected more often for "birth coping skills"; "reliable medical staff" was selected more often for "physiological birth process"; "the birth progressed as I expected" was selected frequently by primiparous mothers; and "the birth progressed smoothly" was selected often by multiparous mothers. 3) In terms of husbands' satisfaction with the delivery, "I was satisfied with the delivery", "I was given an easy-to-understand explanation", and "They explained the process to me" were selected by both primiparous and multiparous fathers. 4) All primiparous and multiparous mothers positively evaluated whether the delivery animation was helpful in understanding the process of delivery. Conclusion: The delivery animation was effective in improving the understanding and satisfaction of both the mothers and their husbands.
Funding: Supported by the R&D&I, Spain, grants PID2020-119478GB-I00 and PID2020-115832GB-I00, funded by MCIN/AEI/10.13039/501100011033. N. Rodríguez-Barroso was supported by grant FPU18/04475 funded by MCIN/AEI/10.13039/501100011033 and by "ESF Investing in your future", Spain. J. Moyano was supported by a postdoctoral Juan de la Cierva Formación grant FJC2020-043823-I funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR. J. Del Ser acknowledges funding support from the Spanish Centro para el Desarrollo Tecnológico Industrial (CDTI) through the AI4ES project and from the Department of Education of the Basque Government (consolidated research group MATHMODE, IT1456-22).
Abstract: When data privacy is imposed as a necessity, Federated Learning (FL) emerges as a relevant artificial intelligence field for developing machine learning (ML) models in a distributed and decentralized environment. FL allows ML models to be trained on local devices without any need for centralized data transfer, thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties. This paradigm has gained momentum in the last few years, spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources. By virtue of FL, models can be learned from all such distributed data sources while preserving data privacy. The aim of this paper is to provide a practical tutorial on FL, including a short methodology and a systematic analysis of existing software frameworks. Furthermore, our tutorial provides exemplary case studies from three complementary perspectives: i) foundations of FL, describing the main components of FL, from key elements to FL categories; ii) implementation guidelines and exemplary case studies, systematically examining the functionalities provided by existing software frameworks for FL deployment, devising a methodology to design an FL scenario, and providing exemplary case studies with source code for different ML approaches; and iii) trends, briefly reviewing a non-exhaustive list of research directions that are under active investigation in the current FL landscape. The ultimate purpose of this work is to establish itself as a referential work for researchers, developers, and data scientists willing to explore the capabilities of FL in practical applications.
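For readers new to the paradigm, here is a minimal, framework-agnostic sketch of one federated averaging (FedAvg) round; it is illustrative only and does not reflect any specific framework the tutorial surveys, and the client data and model (a tiny logistic regression) are synthetic.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain logistic-regression gradient steps."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def fedavg_round(global_w, clients):
    """Aggregate client updates weighted by local dataset size (FedAvg)."""
    updates = [(local_update(global_w, X, y), len(y)) for X, y in clients]
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

# Hypothetical clients whose raw data never leaves the device.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50)) for _ in range(4)]
global_w = np.zeros(3)
for _ in range(10):
    global_w = fedavg_round(global_w, clients)
print(global_w)
```

Only model parameters cross the network, which is the privacy property the abstract emphasizes.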
Funding: This work is supported by the Provincial Key Science and Technology Special Project of Henan (No. 221100240100).
Abstract: In recent years, the rapid development of computer software has led to numerous security problems, particularly software vulnerabilities. These flaws can cause significant harm to users' privacy and property. Current security defect detection technology relies on manual or professional reasoning, leading to missed detections and high false detection rates. Artificial intelligence technology has led to the development of neural network models based on machine learning or deep learning to intelligently mine vulnerabilities, reducing missed alarms and false alarms. This project therefore studies Java source code defect detection methods for defects such as null pointer reference exceptions, cross-site scripting (XSS), and Structured Query Language (SQL) injection. The project uses the open-source Javalang library to parse the Java source code, conducts a deep search on the AST to obtain the empty-syntax feature library, and converts the Java source code into a dependency graph. The feature vector is then used as the learning target for the neural network. Four types of networks, Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), Bi-directional Long Short-Term Memory (BiLSTM), and Attention Mechanism + Bidirectional LSTM, are used to investigate the various code defects, including null pointer reference exceptions, XSS, and SQL injection defects. Experimental results show that the attention mechanism combined with the bidirectional LSTM is the most effective for defect recognition, verifying the correctness of the method.
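A compact sketch of the kind of attention-over-BiLSTM classifier described above, written in PyTorch with hypothetical vocabulary and hidden sizes; the AST traversal and feature extraction that produce the token IDs are not reproduced here.

```python
import torch
import torch.nn as nn

class AttnBiLSTM(nn.Module):
    """Toy BiLSTM + additive attention classifier over token-ID sequences."""
    def __init__(self, vocab=5000, emb=64, hidden=64, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.head = nn.Linear(2 * hidden, classes)

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))          # (batch, time, 2*hidden)
        a = torch.softmax(self.attn(h), dim=1)        # attention weights over time
        ctx = (a * h).sum(dim=1)                      # weighted context vector
        return self.head(ctx)                         # defect / no-defect logits

# Hypothetical batch of tokenized code snippets (IDs produced elsewhere, e.g. from an AST walk).
model = AttnBiLSTM()
logits = model(torch.randint(0, 5000, (8, 120)))
print(logits.shape)  # torch.Size([8, 2])
```

The attention weights also give a rough indication of which code tokens drove a defect prediction.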
Abstract: Redundancy, correlation, feature irrelevance, and missing samples are just a few problems that make it difficult to analyze software defect data. Additionally, it might be challenging to maintain an even distribution of data relating to both defective and non-defective software; the latter class's data are predominately present in the dataset in the majority of experimental situations. The objective of this review study is to demonstrate the effectiveness of combining ensemble learning and feature selection in improving the performance of defect classification. Besides the successful feature selection approach, a novel variant of the ensemble learning technique is analyzed to address the challenges of feature redundancy and data imbalance, providing robustness in the classification process. To overcome these problems and lessen their impact on fault classification performance, the authors carefully integrate effective feature selection with ensemble learning models. Forward selection demonstrates that a significant area under the receiver operating characteristic curve (ROC) can be attributed to only a small subset of features. The Greedy forward selection (GFS) technique outperformed Pearson's correlation method when evaluating feature selection techniques on the datasets. Ensemble learners, such as random forests (RF) and the proposed average probability ensemble (APE), demonstrate greater resistance to the impact of weak features when compared to weighted support vector machines (W-SVMs) and extreme learning machines (ELM). Furthermore, in the case of the NASA and Java datasets, the enhanced average probability ensemble model, which incorporates the Greedy forward selection technique, achieved remarkably high accuracy for the area under the ROC, approaching a value of 1.0 and indicating exceptional performance. This review emphasizes the importance of meticulously selecting attributes in a software dataset to accurately classify defective components. In addition, the suggested ensemble learning model successfully addressed the aforementioned problems with software data and produced outstanding classification performance.
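A minimal sketch of greedy forward selection driven by ROC AUC followed by an average-probability ensemble, using scikit-learn on synthetic data; the member models, stopping rule, and datasets are illustrative assumptions, not the paper's APE configuration, and for brevity the same held-out split is reused for selection and reporting.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, n_informative=5, random_state=0)
Xtr, Xval, ytr, yval = train_test_split(X, y, random_state=0)

# Greedy forward selection: repeatedly add the feature that most improves validation AUC.
selected, best_auc = [], 0.0
for _ in range(5):
    scores = {}
    for f in range(X.shape[1]):
        if f in selected:
            continue
        cols = selected + [f]
        clf = LogisticRegression(max_iter=1000).fit(Xtr[:, cols], ytr)
        scores[f] = roc_auc_score(yval, clf.predict_proba(Xval[:, cols])[:, 1])
    f, auc = max(scores.items(), key=lambda kv: kv[1])
    if auc <= best_auc:
        break                                  # stop when no feature helps
    selected, best_auc = selected + [f], auc

# Average-probability ensemble over the selected features.
models = [RandomForestClassifier(random_state=0), LogisticRegression(max_iter=1000)]
probs = np.mean([m.fit(Xtr[:, selected], ytr).predict_proba(Xval[:, selected])[:, 1]
                 for m in models], axis=0)
print(f"selected features: {selected}, ensemble AUC: {roc_auc_score(yval, probs):.3f}")
```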
Funding: Supported by the NSF grant DMS-2111383, the Air Force Office of Scientific Research grant FA9550-18-1-0257, and the NSF grant DMS-2011838.
Abstract: This paper reviews the adaptive sparse grid discontinuous Galerkin (aSG-DG) method for computing high dimensional partial differential equations (PDEs) and its software implementation. The C++ software package called AdaM-DG, implementing the aSG-DG method, is available on GitHub at https://github.com/JuntaoHuang/adaptive-multiresolution-DG. The package is capable of treating a large class of high dimensional linear and nonlinear PDEs. We review the essential components of the algorithm and the functionality of the software, including the multiwavelets used, the assembling of bilinear operators, and the fast matrix-vector product for data with hierarchical structures. We further demonstrate the performance of the package by reporting the numerical error and the CPU cost for several benchmark tests, including linear transport equations, wave equations, and Hamilton-Jacobi (HJ) equations.
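For orientation, two of the benchmark equation classes named above admit the generic forms below; these are illustrative textbook forms, not the specific test cases or notation used in the paper.

```latex
% A d-dimensional linear transport equation and a Hamilton-Jacobi equation:
\begin{align}
  u_t + \sum_{i=1}^{d} a_i\, u_{x_i} &= 0,
  &
  \phi_t + H\!\left(\nabla_{\mathbf{x}} \phi\right) &= 0 .
\end{align}
```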
Funding: The authors extend their appreciation to Researcher Supporting Project Number (RSPD2023R582), King Saud University, Riyadh, Saudi Arabia.
Abstract: The healthcare sector holds valuable and sensitive data. The amount of this data, and the need to handle, exchange, and protect it, has been increasing at a fast pace. Due to their nature, software-defined networks (SDNs) are widely used in healthcare systems, as they ensure effective resource utilization, safety, great network management, and monitoring. In this sector, due to the value of the data, SDNs face a major challenge posed by a wide range of attacks, such as distributed denial of service (DDoS) and probe attacks. These attacks reduce network performance, causing the degradation of different key performance indicators (KPIs) or, in the worst cases, a network failure which can threaten human lives. This can be significant, especially with the current expansion of portable healthcare that supports mobile and wireless devices for what is called mobile health, or m-health. In this study, we examine the effectiveness of using SDNs for defense against DDoS, as well as their effects on different network KPIs under various scenarios. We propose a threshold-based DDoS classifier (TBDC) technique to classify DDoS attacks in healthcare SDNs, aiming to block traffic considered a hazard in the form of a DDoS attack. We then evaluate the accuracy and performance of the proposed TBDC approach. Our technique shows outstanding performance, increasing the mean throughput by 190.3%, reducing the mean delay by 95%, and reducing packet loss by 99.7% relative to normal, under DDoS attack traffic.
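To illustrate the general shape of threshold-based detection, the sketch below flags sources whose packet rate exceeds a fixed threshold in a sliding window; the threshold value, window size, and traffic are hypothetical and do not reflect the paper's TBDC rules or how the decision is installed as an SDN flow rule.

```python
from collections import defaultdict

# Hypothetical threshold: flag a source exceeding this many packets per second.
PKT_RATE_THRESHOLD = 500

def classify_window(packets, window_s=1.0):
    """packets: iterable of (timestamp, src_ip). Returns the set of sources to block."""
    counts = defaultdict(int)
    for _, src in packets:
        counts[src] += 1
    return {src for src, n in counts.items() if n / window_s > PKT_RATE_THRESHOLD}

# Toy traffic window: one flooding source and one normal source.
window = ([(0.001 * i, "10.0.0.9") for i in range(800)]
          + [(0.01 * i, "10.0.0.7") for i in range(40)])
print(classify_window(window))  # {'10.0.0.9'}
```

In an SDN, the controller would translate the returned set into drop rules pushed to the switches.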
Abstract: Effort estimation plays a crucial role in software development projects, aiding in resource allocation, project planning, and risk management. Traditional estimation techniques often struggle to provide accurate estimates due to the complex nature of software projects. In recent years, machine learning approaches have shown promise in improving the accuracy of effort estimation models. This study proposes a hybrid model that combines Long Short-Term Memory (LSTM) and Random Forest (RF) algorithms to enhance software effort estimation. The proposed hybrid model takes advantage of the strengths of both LSTM and RF algorithms. To evaluate the performance of the hybrid model, an extensive set of software development projects is used as the experimental dataset. The experimental results demonstrate that the proposed hybrid model outperforms traditional estimation techniques in terms of accuracy and reliability. The integration of LSTM and RF enables the model to efficiently capture temporal dependencies and non-linear interactions in the software development data. The hybrid model enhances estimation accuracy, enabling project managers and stakeholders to make more precise predictions of the effort needed for upcoming software projects.
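One plausible way to combine the two models, sketched below under stated assumptions: an LSTM summarizes a per-project sequence of sprint-level metrics into an embedding, and a random forest regresses effort from that embedding plus static features. The data, feature counts, and the fact that the encoder is left untrained are all hypothetical; the paper's actual architecture is not specified here.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestRegressor

# Hypothetical data: 200 projects, 8 sprints x 12 metrics each, plus 5 static features.
rng = np.random.default_rng(0)
seq = torch.tensor(rng.normal(size=(200, 8, 12)), dtype=torch.float32)
static = rng.normal(size=(200, 5))
effort = rng.gamma(shape=2.0, scale=100.0, size=200)   # stand-in effort targets

class SeqEncoder(nn.Module):
    """LSTM that summarizes a project's sprint history into a fixed vector."""
    def __init__(self, n_in=12, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(n_in, hidden, batch_first=True)
    def forward(self, x):
        _, (h, _) = self.lstm(x)
        return h[-1]                                   # (batch, hidden)

with torch.no_grad():                                  # untrained encoder, for illustration only
    emb = SeqEncoder()(seq).numpy()

# Random forest regresses effort from the LSTM embedding plus static features.
X = np.hstack([emb, static])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, effort)
print("train R^2:", round(rf.score(X, effort), 3))
```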
Abstract: The construction industry, known for its low productivity, is increasingly utilising software and mobile apps to enhance efficiency. However, more comprehensive research is needed to understand the effectiveness of these technology applications. A scoping review methodology following the PRISMA principles was utilised to ascertain pertinent studies and extract significant findings. From 2013 onwards, articles containing data on mobile applications or software designed to enhance productivity in the construction sector were obtained from multiple databases, including Emerald Insight, Science Direct, IEEE Xplore, and Google Scholar. After evaluating 2604 articles, 30 were determined to be pertinent to the study and were subsequently analysed for the review. The review identified five key themes: effectiveness, benefits, successful implementation examples, obstacles and limitations, and a comprehensive list of software and mobile apps. In addition, 71 software packages and mobile apps show how these technologies can potentially improve communication, collaboration, project management, real-time collaboration, document management, and on-the-go project information and estimating processes in the construction industry, increasing efficiency and productivity. The findings highlight the potential of technologies such as Automation, Radio-Frequency Identification (RFID), Building Information Modeling (BIM), Augmented Reality (AR), Virtual Reality (VR), and the Internet of Things (IoT) to improve efficiency and communication in the construction industry. Despite challenges such as cost, lack of awareness, resistance to change, compatibility concerns, human resources, technological and security concerns, and licensing issues, the study identifies specific mobile applications and software with the potential to significantly enhance efficiency, improve productivity, and streamline workflows. The broader societal impacts of construction software and mobile app development include increased efficiency, job creation, and sustainability.
Abstract: The developed auxiliary software serves to simplify, standardize, and facilitate the software loading of the structural organization of a complex technological system, as well as its further manipulation within the process of solving the considered technological system. It can be especially useful in the case of a complex structural organization of a technological system with a large number of different functional elements grouped into several technological subsystems. This paper presents the results of its application to a special complex technological system related to the reference steam block for the combined production of heat and electricity.
Abstract: In recent years, the domain of machine translation has experienced remarkable growth, particularly with the emergence of neural machine translation, which has significantly enhanced both the accuracy and fluency of translation. At the same time, AI has also shown tremendous advancement, with its capabilities now extending to assisting users in a multitude of tasks, including translation, garnering attention across various sectors. In this paper, the author selects representative sentences from both literary and scientific texts and translates them using two translation software tools and two AI tools for comparison. The results show that all four translation tools are very efficient and can help with simple translation tasks. However, the accuracy of terminology needs to be improved, and it is difficult for them to make adjustments based on the characteristics of the target language. It is worth mentioning that one of the advantages of AI is its interactivity, which allows it to modify the translation according to the translator's needs.
Abstract: The SubBytes (S-box) transformation is the most crucial operation in the AES algorithm, significantly impacting the implementation performance of AES chips. To design a high-performance S-box, a segmented optimization implementation of the S-box based on the composite field inverse operation is proposed in this paper. The proposed S-box implementation is modeled in Verilog and synthesized using the Design Compiler software, under the premise of ensuring the correctness of the simulation result. The synthesis results show that, compared to several current S-box implementation schemes, the proposed implementation of the S-box significantly reduces the area overhead and critical path delay, and thus achieves higher hardware efficiency. This provides strong support for realizing efficient and compact S-box ASIC designs.
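For reference, the standard S-box value is the GF(2^8) multiplicative inverse followed by an affine transformation; the straightforward software construction below reproduces it and can serve as a golden model for verifying a hardware design. It is not the paper's composite-field decomposition, which re-expresses the inverse over GF((2^4)^2) to save area.

```python
def gf_mul(a, b):
    """Multiply in GF(2^8) modulo the AES polynomial x^8 + x^4 + x^3 + x + 1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B
        b >>= 1
    return p

def gf_inv(a):
    """Multiplicative inverse via a^254 (0 maps to 0 by convention)."""
    if a == 0:
        return 0
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

def rotl8(x, n):
    return ((x << n) | (x >> (8 - n))) & 0xFF

def sbox(x):
    """AES SubBytes: GF(2^8) inverse followed by the affine transformation."""
    b = gf_inv(x)
    return b ^ rotl8(b, 1) ^ rotl8(b, 2) ^ rotl8(b, 3) ^ rotl8(b, 4) ^ 0x63

print(hex(sbox(0x00)), hex(sbox(0x53)))  # expected: 0x63 0xed (FIPS-197 values)
```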
Abstract: This paper delves into Agile Development Methods in Software Engineering, contrasting them with the traditional Waterfall model and analyzing their efficiency. Agile methods, known for their adaptability and customer-centric approach, have gained prominence in the fast-paced software development industry. These methods, including Scrum, Kanban, and Extreme Programming (XP), are characterized by iterative cycles, collaborative efforts, and a focus on rapid delivery and quality improvement. The paper compares these agile methodologies to the sequential and rigid Waterfall model, highlighting agile's superior flexibility, adaptability, and responsiveness to changing requirements. It emphasizes the importance of customer involvement in agile processes, which leads to higher satisfaction and better alignment with user expectations. The analysis reveals that agile methods not only enhance the speed of delivery but also improve the overall quality of the software product. The paper concludes that agile methodologies are more effective in today's dynamic software development environment, providing a robust framework for managing complex projects and ensuring the delivery of high-quality, relevant software solutions.
Funding: Digital Twin and Acoustic Perception Research Team (2021XJTD06).
Abstract: With the rapid development of information technology, the demand for talent in the field of software engineering is growing. In order to cultivate high-quality software engineering talent that meets market demand, universities have continuously carried out the construction of software engineering majors. Accreditation Board for Engineering and Technology (ABET) certification, as an internationally recognized higher education quality assurance system, provides important reference and guidance for the construction of software engineering majors. Guided by student learning outcomes and core competencies, and combined with the characteristics of software engineering talent cultivation, the innovation of the talent cultivation mode takes industry-education integration and school-enterprise cooperation as the main development paths and explores comprehensive reform of the major in terms of professional positioning and goals, curriculum system, teaching conditions, and teachers. This comprehensive reform model has effectively promoted the development of major construction and improved the quality of talent cultivation.
Abstract: Under the background of "new engineering" construction, software engineering teaching pays more attention to cultivating students' engineering practice and innovation ability. In view of the problems in traditional practice teaching, such as inconsistency between development and demand design, team division of labor, difficult measurement of individual contributions, and a single assessment method, this paper proposes that, under the guidance of agile development methods, software engineering courses should adopt the Scrum framework to organize course project practice and use an agile collaboration platform to implement individual work, follow up on experiment progress, and ensure effective project advancement. The statistical data from the course's "diversity" assessment show an obvious improvement in students' software engineering ability and quality.
Funding: Funded by the Start-Up Funds for Scientific Research of Shenzhen University, Grant No. 000002112313.
Abstract: With the development of the digital city, data and data analysis have become more and more important. The database is the foundation of data analysis. In this paper, the software system of the urban land planning database of Shanghai, China, is developed based on MySQL. The conceptual model of the urban land planning database is proposed, and the entities, attributes, and connections of this model are discussed. Then the E-R conceptual model is transformed into a logical structure, which is supported by the relational database management system (DBMS). Based on the conceptual and logical structures, using Spring Boot as the back-end framework and MySQL and the Java API as the development tools, a platform with data management, information sharing, map assistance, and other functions is established. The functional modules in this platform are designed. The results of the JMeter test show that the DBMS can add, store, and retrieve information data stably, and it has the advantages of fast response and low error rate. The software system of the urban land planning database developed in this paper can improve the efficiency of storing and managing land data, eliminating redundant data and sharing data.
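To make the E-R-to-relational step concrete, here is a deliberately simplified sketch of how such a conceptual model maps to tables with keys and foreign keys; the entities and columns are hypothetical (the paper's Shanghai schema is not reproduced), and the in-memory SQLite engine stands in for MySQL so the snippet runs without a database server.

```python
import sqlite3

# Hypothetical fragment of a land-planning E-R model: District (1) -- (N) Parcel.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE district (
    district_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL
);
CREATE TABLE parcel (
    parcel_id   INTEGER PRIMARY KEY,
    district_id INTEGER NOT NULL REFERENCES district(district_id),
    land_use    TEXT NOT NULL,          -- e.g. residential, industrial, green space
    area_m2     REAL NOT NULL
);
""")
conn.execute("INSERT INTO district VALUES (1, 'Pudong')")
conn.execute("INSERT INTO parcel VALUES (101, 1, 'residential', 25000.0)")
row = conn.execute(
    "SELECT d.name, SUM(p.area_m2) FROM parcel p "
    "JOIN district d USING(district_id) GROUP BY d.name"
).fetchone()
print(row)  # ('Pudong', 25000.0)
```

In the described platform, equivalent DDL would live in MySQL and be accessed from the Spring Boot back end rather than directly as shown here.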