The purpose of software defect prediction is to identify defect-prone code modules to assist software quality assurance teams with the appropriate allocation of resources and labor. In previous software defect prediction studies, transfer learning was effective in solving the problem of inconsistent project data distribution. However, target projects often lack sufficient data, which affects the performance of the transfer learning model. In addition, the presence of uncorrelated features between projects can decrease the prediction accuracy of the transfer learning model. To address these problems, this article proposes a software defect prediction method based on stable learning (SDP-SL) that combines code visualization techniques and residual networks. The method first transforms code files into code images using code visualization techniques and then constructs a defect prediction model based on these code images. During model training, target project data are not required as prior knowledge. Following the principles of stable learning, the method dynamically adjusts the weights of source project samples to eliminate dependencies between features, thereby capturing the "invariance mechanism" within the data. This approach explores the genuine relationship between code defect features and labels, thereby enhancing defect prediction performance. To evaluate the performance of SDP-SL, comparative experiments were conducted on 10 open-source projects in the PROMISE dataset. The experimental results demonstrated that, in terms of the F-measure, the proposed SDP-SL method outperformed other within-project defect prediction methods by 2.11%-44.03%. In cross-project defect prediction, SDP-SL improved prediction performance by 5.89%-25.46% compared with other cross-project defect prediction methods. Therefore, SDP-SL can effectively enhance both within- and cross-project defect prediction. Funding: supported by the National Natural Science Foundation of China (Grant No. 61867004) and the Youth Fund of the National Natural Science Foundation of China (Grant No. 41801288).
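The reweighting step at the heart of the stable-learning stage can be illustrated with a small sketch. The paper does not publish its implementation, so everything below, including the function name, the softmax parameterization of the weights, and the optimizer settings, is our own assumption; the code-image ResNet and the weighted classifier training are omitted.

```python
import torch

def decorrelation_weights(X: torch.Tensor, epochs: int = 200, lr: float = 0.05):
    """Learn per-sample weights that shrink pairwise feature correlations in X
    (a float tensor, n_samples x n_features): the core idea behind
    stable-learning sample reweighting."""
    n, _ = X.shape
    theta = torch.zeros(n, requires_grad=True)   # unconstrained parameters
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(epochs):
        w = torch.softmax(theta, dim=0) * n      # positive weights, mean 1
        mu = (w[:, None] * X).mean(dim=0)        # weighted feature means
        Xc = X - mu
        cov = (w[:, None] * Xc).T @ Xc / n       # weighted covariance matrix
        off_diag = cov - torch.diag(torch.diag(cov))
        loss = (off_diag ** 2).sum()             # penalize cross-feature dependence
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (torch.softmax(theta, dim=0) * n).detach()
```

The returned weights would then multiply each source-project sample's loss when fitting the defect classifier.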
This research investigates the ecological importance, changes, and status of mangrove wetlands along China's coastline. Visual interpretation, geological surveys, and ISO clustering unsupervised classification methods are employed to interpret mangrove distribution from 2021 remote sensing images, using the ArcGIS software platform. Furthermore, the carbon storage capacity of mangrove wetlands is quantified using the carbon storage module of the InVEST model. Results show that the mangrove wetlands in China covered an area of 278.85 km^2 in 2021, predominantly distributed in Hainan, Guangxi, Guangdong, Fujian, Zhejiang, Taiwan, Hong Kong, and Macao. The total carbon storage is assessed at 2.11×10^6 t, with specific regional data provided. Trends since the 1950s reveal periods of increase, decrease, sharp decrease, and slight but steady increase in mangrove areas in China. An important finding is the predominant replacement of natural coastlines adjacent to mangrove wetlands by artificial ones, highlighting the need to create suitable spaces for mangrove restoration. This study is poised to guide future mangrove-related investigations and conservation strategies. Funding: supported by the China Geological Survey (DD20211301).
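The InVEST carbon module referenced here is, at its core, simple bookkeeping: for each land-cover class, the area is multiplied by the summed density of four carbon pools (aboveground, belowground, soil, and dead organic matter). The sketch below illustrates that calculation; the pool densities are placeholders of our own, not values from the study.

```python
# Hypothetical per-pool carbon densities (t/km^2): aboveground, belowground,
# soil, and dead organic matter. Illustrative values, not from the study.
POOLS_T_PER_KM2 = {
    "mangrove": (4.0e3, 2.5e3, 9.0e3, 0.5e3),
}

def carbon_storage(areas_km2: dict) -> float:
    """Total carbon storage (t): sum over classes of area x pooled density."""
    return sum(area * sum(POOLS_T_PER_KM2[lc]) for lc, area in areas_km2.items())

print(carbon_storage({"mangrove": 278.85}))  # 2021 mangrove area reported above
```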
The Message Passing Interface (MPI) is a widely accepted standard for parallel computing on distributed memory systems. However, MPI implementations can contain defects that impact the reliability and performance of parallel applications. Detecting and correcting these defects is crucial, yet there is a lack of published models specifically designed for correcting MPI defects. To address this, we propose a model for detecting and correcting MPI defects (DC_MPI), which aims to detect and correct defects in various types of MPI communication, including blocking point-to-point (BPTP), nonblocking point-to-point (NBPTP), and collective communication (CC). The defects addressed by the DC_MPI model include illegal MPI calls, deadlocks (DL), race conditions (RC), and message mismatches (MM). To assess the effectiveness of the DC_MPI model, we performed experiments on a dataset consisting of 40 MPI codes. The results indicate that the model achieved a detection rate of 37 out of 40 codes, an overall detection accuracy of 92.5%. Additionally, the execution duration of the DC_MPI model ranged from 0.81 to 1.36 s. These findings show that the DC_MPI model is useful in detecting and correcting defects in MPI implementations, thereby enhancing the reliability and performance of parallel applications. The DC_MPI model fills an important research gap and provides a valuable tool for improving the quality of MPI-based parallel computing systems. Funding: supported by the Deanship of Scientific Research at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. RG-12-611-43.
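One building block of any such detector is matching point-to-point operations across ranks: a blocking send whose (source, destination, tag) triple has no counterpart receive signals a potential deadlock or message mismatch. The toy checker below illustrates this matching on an execution trace; the trace format and function name are our own assumptions, not the DC_MPI interface.

```python
from collections import Counter

# Toy matcher for blocking point-to-point traces: every MPI_Send should have
# a matching MPI_Recv on the same (source, destination, tag) triple.
# Unmatched entries flag potential deadlocks or message mismatches.
def unmatched_messages(trace):
    sends, recvs = Counter(), Counter()
    for op, src, dst, tag in trace:          # e.g., ("send", 0, 1, 7)
        (sends if op == "send" else recvs)[(src, dst, tag)] += 1
    return {"unmatched_sends": sends - recvs, "unmatched_recvs": recvs - sends}

trace = [("send", 0, 1, 7), ("recv", 0, 1, 7), ("send", 1, 0, 9)]
print(unmatched_messages(trace))  # the (1, 0, 9) send has no receiver
```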
Agile Transformations are challenging processes for organizations that look to extend the benefits of Agile philosophy and methods beyond software engineering. Despite the impact of these transformations on organizations, they have not been extensively studied in academia. We conducted a study grounded in workshops and interviews with 99 participants from 30 organizations, including organizations undergoing transformations ("final organizations") and companies supporting these processes ("consultants"). The study aims to understand the motivations, objectives, and factors driving and challenging these transformations. Over 700 responses were collected and categorized into 32 objectives. The findings show that organizations primarily aim to achieve customer centricity and adaptability, each with 8% of the mentions. Other important objectives, each with above 4% of mentions, include alignment of goals, lean delivery, sustainable processes, and a flatter, more team-based organizational structure. We also detect discrepancies between the objectives identified by the two kinds of organizations and those in the existing agile literature and models. This misalignment highlights the need for practitioners to engage with the practical realities organizations face. Funding: the European Commission through the Ruralities Project (grant agreement no. 101060876).
Software testing is a critical phase due to misconceptions about ambiguities in the requirements during specification, which affect the testing process. Therefore, it is difficult to identify all faults in software. As requirements change continuously, irrelevancy and redundancy increase during testing. These challenges decrease fault detection capability and create a need to improve the testing process based on changes in the requirements specification. In this research, we developed a model to resolve testing challenges through requirement prioritization and prediction in an agile-based environment. The research objective is to identify the most relevant and meaningful requirements through semantic analysis for correct change analysis. The similarity of requirements is then computed through case-based reasoning, which predicts requirements for reuse and restricts attention to error-based requirements; a retrieval sketch is given after this abstract. Afterward, the Apriori algorithm maps requirement frequency to select relevant test cases, based on frequently reused or not-reused test cases, to increase the fault detection rate. The proposed model was evaluated by conducting experiments. The results showed that requirement redundancy and irrelevancy improved due to semantic analysis, which correctly predicted the requirements, increasing the fault detection rate and resulting in high user satisfaction. The predicted requirements are mapped into test cases, increasing the fault detection rate after changes to achieve higher user satisfaction. The model reduces requirement redundancy and irrelevancy by more than 90% compared with other clustering methods and the analytical hierarchical process, achieving an 80% fault detection rate at an earlier stage. Hence, it provides guidelines for practitioners and researchers. In the future, we will provide a working prototype of this model as a proof of concept.
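The case-based reasoning step retrieves stored requirements similar to an incoming one. The abstract does not specify the text representation, so the sketch below assumes a common choice, TF-IDF vectors compared by cosine similarity; the example requirements are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Retrieve the stored requirement most similar to a new one, as a case-based
# reasoning step might. TF-IDF + cosine similarity is our assumption.
case_base = [
    "user shall reset password via email link",
    "system shall log failed login attempts",
]
new_req = ["user can recover password through email"]

vec = TfidfVectorizer().fit(case_base + new_req)
sims = cosine_similarity(vec.transform(new_req), vec.transform(case_base))[0]
best = sims.argmax()
print(case_base[best], sims[best])   # nearest stored case and its similarity
```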
Software Defined Networking (SDN) is programmable through the separation of forwarding control and the centralization of the controller. The controller plays the role of the 'brain' that provides the intelligence of SDN technology. Various versions of SDN controllers exist in response to the diverse demands and functions expected of them. Several SDN controllers are available in the open market besides a large number of commercial controllers; some are developed to meet carrier-grade service levels, and one of the recent trends in open-source SDN controllers is the Open Network Operating System (ONOS). This paper presents a comparative study of open-source SDN controllers: the Network Controller Platform (NOX), the Python-based Network Controller (POX), the component-based SDN framework Ryu, the Java-based OpenFlow controller Floodlight, OpenDayLight (ODL), and ONOS. The discussion is further extended to the ONOS architecture and the evolution of ONOS controllers. The article reviews use cases based on ONOS controllers in several application deployments. Moreover, the opportunities and challenges of open-source SDN controllers are discussed, exploring carrier-grade ONOS for future real-world deployments, ONOS's unique features, and the choice of SDN controller suitable for service providers. In addition, we attempt to answer several critical questions on the implications of the open-source nature of SDN controllers for vendor lock-in, interoperability, and standards compliance. Similarly, real-world use cases of organizations using open-source SDN are highlighted, along with how the open-source community contributes to the development of SDN controllers. Furthermore, challenges faced by open-source projects and considerations when choosing an open-source SDN controller are underscored. The role of Artificial Intelligence (AI) and Machine Learning (ML) in the evolution of open-source SDN controllers is then indicated in light of recent research. The article also presents the challenges and limitations associated with deploying open-source SDN controllers in production networks and how they can be mitigated, and finally how open-source SDN controllers handle network security and ensure that network configurations and policies are robust and resilient. Potential opportunities and challenges for future open SDN deployment are outlined to conclude the article. Funding: supported by Universiti Kebangsaan Malaysia under Dana Impak Perdana 2.0 (Ref: DIP–2022–020).
As one of the most effective techniques for finding software vulnerabilities, fuzzing has become a hot topic in software security. It feeds potentially syntactically or semantically malformed test data to a target program to mine vulnerabilities and crash the system. In recent years, considerable effort has been dedicated by researchers and practitioners to improving fuzzing, so its methods and forms have multiplied, making it difficult to gain a comprehensive understanding of the technique. This paper conducts a thorough survey of fuzzing, focusing on its general process, classification, common application scenarios, and some state-of-the-art techniques that have been introduced to improve its performance. Finally, the paper puts forward key research challenges and proposes possible future research directions that may provide new insights for researchers. Funding: supported in part by the National Natural Science Foundation of China under Grants 62273272, 62303375, and 61873277; the Key Research and Development Program of Shaanxi Province under Grant 2023-YBGY-243; the Natural Science Foundation of Shaanxi Province under Grant 2020JQ-758; the Youth Innovation Team of Shaanxi Universities; and the Special Fund for Scientific and Technological Innovation Strategy of Guangdong Province under Grant 2022A0505030025.
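The general process the survey describes reduces to a loop: pick a seed input, mutate it, run the target, and keep inputs that crash. A toy mutation fuzzer makes this concrete; the `target` function is a stand-in with a planted bug, not a real harness.

```python
import random

# A toy mutation fuzzer illustrating the generic fuzzing loop: pick a seed,
# randomly flip bytes, run the target, and keep crashing inputs.
def target(data: bytes) -> None:
    if 0xFF in data:                            # planted bug for demonstration
        raise RuntimeError("simulated crash")

def mutate(seed: bytes) -> bytes:
    buf = bytearray(seed)
    for _ in range(random.randint(1, 4)):       # flip up to four random bytes
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

seeds, crashes = [b"HELLOWORLD"], []
for _ in range(10_000):
    candidate = mutate(random.choice(seeds))
    try:
        target(candidate)
    except Exception:
        crashes.append(candidate)               # save the crashing input
print(f"{len(crashes)} crashing inputs found")
```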
We investigate skyrmion motion driven by spin waves on magnetic nanotubes through micromagnetic simulations. Our key results include demonstrating the stability and enhanced mobility of skyrmions on the edgeless nanotube geometry, which prevents destruction at boundaries, a common issue in planar geometries. We explore the influence of the damping coefficient and of the amplitude and frequency of microwaves on skyrmion dynamics, revealing a non-uniform velocity profile characterized by acceleration and deceleration phases. Our results show that the skyrmion Hall effect is significantly modulated on nanotubes compared to planar models, with specific dependencies on the spin-wave parameters. These findings provide insights into skyrmion manipulation for spintronic applications, highlighting the potential for high-speed and efficient information transport in magnonic devices. Funding: supported by the National Key R&D Program of China (Grant No. 2022YFA1402802) and the National Natural Science Foundation of China (Grant Nos. 12434003, 12374103, and 12074057).
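For context, micromagnetic simulations of this kind typically integrate the Landau–Lifshitz–Gilbert equation, in which the damping coefficient α mentioned above enters explicitly:

```latex
% Landau-Lifshitz-Gilbert equation underlying micromagnetic simulations;
% \alpha is the damping coefficient varied in the study.
\frac{\partial \mathbf{m}}{\partial t}
  = -\gamma\, \mathbf{m} \times \mathbf{H}_{\mathrm{eff}}
  + \alpha\, \mathbf{m} \times \frac{\partial \mathbf{m}}{\partial t}
```

Here m is the unit magnetization, γ the gyromagnetic ratio, and H_eff the effective field, which includes the exchange, anisotropy, and microwave driving contributions.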
Accurate software cost estimation in Global Software Development (GSD) remains challenging due to reliance on historical data and expert judgment. Traditional models, such as the Constructive Cost Model (COCOMO II), rely heavily on historical and accurate data. In addition, expert judgment is required to set many input parameters, which can introduce subjectivity and variability into the estimation process. Consequently, current GSD models need improvement to mitigate reliance on historical data, subjectivity in expert judgment, inadequate consideration of GSD-based cost drivers, and limited integration of modern technologies with cost overruns. This study introduces a novel hybrid model that synergizes COCOMO II with Artificial Neural Networks (ANN) to address these challenges. The proposed hybrid model integrates additional GSD-based cost drivers identified through a systematic literature review and further vetted by industry experts. The article compares the effectiveness of the proposed model with state-of-the-art machine-learning-based models for software cost estimation. Evaluation on the NASA 93 dataset, adopting twenty-six GSD-based cost drivers, reveals that our hybrid model achieves superior accuracy, outperforming existing state-of-the-art models. The findings indicate the potential of combining COCOMO II, ANN, and additional GSD-based cost drivers to transform cost estimation in GSD.
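The COCOMO II base that the hybrid model extends is itself a closed-form equation, shown below with the published COCOMO II.2000 calibration constants (A = 2.94, B = 0.91). The multiplier and scale-factor values in the example call are illustrative only, and the ANN correction stage is not shown.

```python
def cocomo_ii_effort(ksloc, effort_multipliers, scale_factors, A=2.94, B=0.91):
    """COCOMO II post-architecture effort in person-months:
    PM = A * Size^E * prod(EM), with E = B + 0.01 * sum(SF)."""
    E = B + 0.01 * sum(scale_factors)
    em_product = 1.0
    for em in effort_multipliers:
        em_product *= em
    return A * ksloc ** E * em_product

# 50 KSLOC with all 17 effort multipliers nominal (1.0) and five mid-range
# scale factors; these inputs are illustrative, not calibrated values.
print(cocomo_ii_effort(50, [1.0] * 17, [3.72] * 5))
```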
Within the magnonics community, there has been a great deal of interest in the magnon–skyrmion interaction. Magnons and skyrmions are two intriguing phenomena in condensed matter physics, and magnetic nanotubes have emerged as a suitable platform to study their complex interactions. We show that magnon frequency combs can be induced in magnetic nanotubes by three-wave mixing between propagating magnons and a skyrmion. This study enriches our fundamental comprehension of magnon–skyrmion interactions and holds promise for developing innovative spintronic devices and applications. The comb's tunability and unique spectral features offer a rich platform for exploring novel avenues in magnetic nanotechnology. Funding: supported by the National Key R&D Program of China (Grant No. 2022YFA1402802) and the National Natural Science Foundation of China (Grant Nos. 12374103 and 12074057).
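Schematically, three-wave mixing of this kind produces comb teeth spaced by the skyrmion's internal (breathing) mode: a magnon driven at ω_m scatters off the mode at ω_r, generating sidebands. This is the generic structure reported for magnon frequency combs; the notation is ours, not the paper's:

```latex
% Generic comb-tooth structure from magnon-skyrmion three-wave mixing:
% a driven magnon at \omega_m acquires sidebands spaced by the skyrmion
% breathing-mode frequency \omega_r.
\omega_n = \omega_m + n\,\omega_r, \qquad n \in \mathbb{Z}
```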
DD4hep serves as a generic detector description toolkit recommended for offline software development in next-generation high-energy physics (HEP) experiments. Conversely, Filmbox (FBX) stands out as a widely used 3D modeling file format within the 3D software industry. In this paper, we introduce a novel method that can automatically convert complex HEP detector geometries from a DD4hep description into 3D models in the FBX format. The feasibility of this method was demonstrated by its application to the DD4hep description of the Compact Linear Collider detector and several sub-detectors of the Super Tau-Charm Facility and Circular Electron-Positron Collider experiments. The automatic DD4hep–FBX detector conversion interface facilitates the further development of applications such as detector design, simulation, visualization, data monitoring, and outreach in HEP experiments. Funding: supported by the National Natural Science Foundation of China (Nos. 12175321, 11975021, 11675275, and U1932101), the National Key Research and Development Program of China (Nos. 2023YFA1606000 and 2020YFA0406400), the State Key Laboratory of Nuclear Physics and Technology, Peking University (Nos. NPT2020KFY04 and NPT2020KFY05), the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDA10010900), the National College Students Science and Technology Innovation Project, and the Undergraduate Base Scientific Research Project of Sun Yat-sen University.
Forest habitats are critical for biodiversity, ecosystem services, human livelihoods, and well-being. Capacity to conduct theoretical and applied forest ecology research addressing direct (e.g., deforestation) and indirect (e.g., climate change) anthropogenic pressures has benefited considerably from new field and statistical techniques. We used machine learning and bibliometric structural topic modelling to identify 20 latent topics comprising four principal fields from a corpus of 16,952 forest ecology/forestry articles published in eight ecology and five forestry journals between 2010 and 2022. Articles published per year increased from 820 in 2010 to 2,354 in 2021, shifting toward more applied topics. Publications from China and some countries in North America and Europe dominated, with relatively fewer articles from some countries in West and Central Africa and West Asia, despite their globally important forest resources. Most study sites were in countries in North America, Central Asia, and South America, and in Australia. Articles utilizing R statistical software predominated, increasing from 29.5% in 2010 to 71.4% in 2022. The most frequently used packages included lme4, vegan, nlme, MuMIn, ggplot2, car, MASS, mgcv, multcomp, and raster. R was more often used in forest ecology than in applied forestry articles. R software offers advantages in script and workflow sharing compared with other statistical packages. Our findings demonstrate that the disciplines of forest ecology/forestry are expanding both in number and scope, aided by more sophisticated statistical tools, to tackle the challenges of redressing forest habitat loss and the socio-economic impacts of deforestation. Funding: financially supported by the National Natural Science Foundation of China (31971541).
In the early stages of oilfield development, insufficient production data and an unclear understanding of oil production present a challenge to reservoir engineers devising effective development plans. To address this challenge, this study proposes a method that uses data mining technology to search for similar oil fields and predict well productivity. A query system of 135 analogy parameters is established based on geological and reservoir engineering research, and the weight values of these parameters are calculated using a data algorithm to establish an analogy system. The fuzzy matter-element algorithm is then used to calculate the similarity between oil fields, with fields having a similarity greater than 70% identified as similar oil fields. Using similar oil fields as sample data, 8 important factors affecting well productivity are identified using the Pearson coefficient and mean decrease impurity (MDI) method. To establish productivity prediction models, linear regression (LR), random forest regression (RF), support vector regression (SVR), backpropagation (BP), extreme gradient boosting (XGBoost), and light gradient boosting machine (LightGBM) algorithms are used. Their performance is evaluated using the coefficient of determination (R^2), explained variance score (EV), mean squared error (MSE), and mean absolute error (MAE) metrics. The LightGBM model is selected to predict the productivity of 30 wells in the PL field, with an average error of only 6.31%, which significantly improves the accuracy of productivity prediction and meets application requirements in the field. Finally, a software platform integrating data query, oilfield analogy, productivity prediction, and a knowledge base is established to identify patterns in massive reservoir development data and provide valuable technical references for new reservoir development. Funding: supported by the National Natural Science Fund of China (No. 52104049) and the Science Foundation of China University of Petroleum, Beijing (No. 2462022BJRC004).
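The final prediction step pairs a gradient-boosting regressor with the reported metrics. A minimal sketch with the LightGBM and scikit-learn APIs follows; the data are random placeholders standing in for the 8 selected factors and the wells of the PL field.

```python
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

# Fit LightGBM on placeholder data shaped like the study's inputs
# (8 influencing factors per well) and score with R^2 and MAE.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LGBMRegressor(n_estimators=200).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(r2_score(y_te, pred), mean_absolute_error(y_te, pred))
```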
When data privacy is imposed as a necessity, federated learning (FL) emerges as a relevant artificial intelligence field for developing machine learning (ML) models in a distributed and decentralized environment. FL allows ML models to be trained on local devices without any need for centralized data transfer, thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties. This paradigm has gained momentum in the last few years, spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources. By virtue of FL, models can be learned from all such distributed data sources while preserving data privacy. The aim of this paper is to provide a practical tutorial on FL, including a short methodology and a systematic analysis of existing software frameworks. The tutorial provides exemplary cases of study from three complementary perspectives: i) foundations of FL, describing the main components of FL, from key elements to FL categories; ii) implementation guidelines and exemplary cases of study, systematically examining the functionalities provided by existing software frameworks for FL deployment, devising a methodology to design an FL scenario, and providing exemplary cases of study with source code for different ML approaches; and iii) trends, shortly reviewing a non-exhaustive list of research directions that are under active investigation in the current FL landscape. The ultimate purpose of this work is to establish itself as a referential work for researchers, developers, and data scientists willing to explore the capabilities of FL in practical applications. Funding: supported by the R&D&I, Spain, grants PID2020-119478GB-I00 and PID2020-115832GB-I00 funded by MCIN/AEI/10.13039/501100011033. N. Rodríguez-Barroso was supported by grant FPU18/04475 funded by MCIN/AEI/10.13039/501100011033 and by "ESF Investing in your future", Spain. J. Moyano was supported by a postdoctoral Juan de la Cierva Formación grant FJC2020-043823-I funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR. J. Del Ser acknowledges funding support from the Spanish Centro para el Desarrollo Tecnológico Industrial (CDTI) through the AI4ES project and from the Department of Education of the Basque Government (consolidated research group MATHMODE, IT1456-22).
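The aggregation step common to most FL frameworks is federated averaging (FedAvg): the server combines client parameters weighted by local dataset size. A minimal sketch, independent of any specific framework and with local training elided:

```python
import numpy as np

# FedAvg aggregation: average each layer's parameters across clients,
# weighted by local dataset size. Client-side training is not shown.
def fedavg(client_weights, client_sizes):
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Two toy clients, each holding a 2x2 weight matrix and a bias vector.
clients = [[np.ones((2, 2)), np.zeros(2)], [3 * np.ones((2, 2)), np.ones(2)]]
print(fedavg(clients, client_sizes=[100, 300]))  # size-weighted average
```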
The Internet of Things (IoT) has characteristics such as node mobility, node heterogeneity, link heterogeneity, and topology heterogeneity. In the face of these characteristics and the explosive growth of IoT nodes, which brings large-scale data processing requirements, edge computing has become an emerging architecture to support IoT applications thanks to its powerful computing capabilities and good service functions. However, the defense mechanism of Edge Computing-enabled IoT Nodes (ECIoTNs) is still weak due to their limited resources, leaving them susceptible to the spread of malicious software, which can compromise data confidentiality and network service availability. Facing this situation, we put forward an epidemiology-based susceptible-curb-infectious-removed-dead (SCIRD) model. We then analyze the dynamics of ECIoTNs with different infection levels under different initial conditions to obtain the dynamic differential equations. Additionally, we establish the presence of equilibrium states in the SCIRD model. Furthermore, we analyze the model's stability and examine the conditions under which malicious software will either spread or disappear within Edge Computing-enabled IoT (ECIoT) networks. Lastly, we validate the efficacy and superiority of the SCIRD model through MATLAB simulations. These research findings offer a theoretical foundation for suppressing the propagation of malicious software in ECIoT networks. The experimental results indicate that the theoretical SCIRD model has instructive significance, deeply revealing the principles of malicious software propagation in ECIoT networks. This study solves a challenging security problem of ECIoT networks by determining the malicious software propagation threshold, which lays the foundation for building more secure and reliable ECIoT networks. Funding: supported in part by the National Undergraduate Innovation and Entrepreneurship Training Program under Grant No. 202310347039, the Zhejiang Provincial Natural Science Foundation of China under Grant No. LZ22F020002, and the Huzhou Science and Technology Planning Foundation under Grant No. 2023GZ04.
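The abstract does not reproduce the SCIRD differential equations, so the sketch below is a generic five-compartment system of our own construction (Susceptible, Curbed, Infectious, Removed, Dead) with arbitrary rate constants, intended only to show the modelling style the paper follows.

```python
import numpy as np
from scipy.integrate import odeint

# Generic SCIRD-style compartmental ODEs; the equations and rates here are
# our assumptions, not the paper's published model.
def scird(y, t, beta, kappa, gamma, mu):
    S, C, I, R, D = y
    dS = -beta * S * I - kappa * S      # infection and curbing of nodes
    dC = kappa * S                      # nodes protected by countermeasures
    dI = beta * S * I - gamma * I - mu * I
    dR = gamma * I                      # cleaned / patched nodes
    dD = mu * I                         # nodes taken offline
    return [dS, dC, dI, dR, dD]

t = np.linspace(0, 50, 500)
sol = odeint(scird, y0=[0.95, 0.0, 0.05, 0.0, 0.0], t=t,
             args=(0.6, 0.02, 0.15, 0.05))
print(sol[-1])  # compartment fractions at the end of the run
```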
Sentiment analysis is becoming increasingly important in today's digital age, with social media being a significant source of user-generated content. The development of sentiment lexicons that can support languages other than English is a challenging task, especially for analyzing sentiment in social media reviews. Most existing sentiment analysis systems focus on English, leaving a significant research gap in other languages due to limited resources and tools. This research aims to address this gap by building a sentiment lexicon for local languages, which is then used with a machine learning algorithm for efficient sentiment analysis. In the first step, a lexicon is developed that includes five languages: Urdu, Roman Urdu, Pashto, Roman Pashto, and English. The sentiment scores from SentiWordNet are associated with each word in the lexicon to produce an effective sentiment score. In the second step, a naive Bayesian algorithm is applied to the developed lexicon for efficient sentiment analysis of Roman Pashto. Both the sentiment lexicon and the sentiment analysis steps were evaluated using information retrieval metrics, with an accuracy score of 0.89 for the sentiment lexicon and 0.83 for the sentiment analysis. The results showcase the potential for improving software engineering tasks related to user feedback analysis and product development. Funding: Researchers Supporting Project Number (RSPD2024R576), King Saud University, Riyadh, Saudi Arabia.
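The second step amounts to a standard naive Bayes text classifier. The sketch below trains one on a tiny invented Roman Pashto-style corpus; the study's actual lexicon, SentiWordNet scores, and evaluation data are not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Naive Bayes sentiment classification over bag-of-words counts; the short
# Roman Pashto-style phrases below are made up for illustration.
train_texts = ["dera kha", "der kharab", "kha service", "kharab tajruba"]
train_labels = ["pos", "neg", "pos", "neg"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_texts, train_labels)
print(clf.predict(["kha"]))  # -> ['pos']
```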
In recent years, the rapid development of computer software has led to numerous security problems, particularly software vulnerabilities. These flaws can cause significant harm to users' privacy and property. Current security defect detection relies on manual or professional reasoning, leading to missed detections and high false detection rates. Artificial intelligence has enabled neural network models based on machine learning or deep learning to mine vulnerabilities intelligently, reducing both missed and false alarms. This project therefore studies Java source code defect detection methods for defects such as null pointer reference exceptions, cross-site scripting (XSS), and Structured Query Language (SQL) injection. The project uses the open-source javalang library to parse Java source code, conducts a deep search on the AST to obtain the empty-syntax feature library, and converts the Java source code into a dependency graph. The feature vector is then used as the learning target for the neural network. Four types of neural networks, namely Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), Bi-directional Long Short-Term Memory (BiLSTM), and Attention Mechanism + Bidirectional LSTM, are used to investigate various code defects, including null pointer reference exceptions, XSS, and SQL injection defects. Experimental results show that the Attention Mechanism + Bidirectional LSTM is the most effective for defect recognition, verifying the correctness of the method.
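The parsing step uses the open-source javalang library named above. A minimal sketch of extracting method invocations from the AST, a plausible starting point for null-dereference heuristics, follows; the actual defect-detection logic and the neural models are not shown.

```python
import javalang

# Parse Java source with javalang and walk the AST for method invocations,
# e.g., calls on a variable that was assigned null earlier in the method.
src = """
class Demo {
    String f() { String s = null; return s.trim(); }
}
"""
tree = javalang.parse.parse(src)
for path, node in tree.filter(javalang.tree.MethodInvocation):
    print(node.qualifier, node.member)   # -> s trim
```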
The settling flux of biodeposition affects the environmental quality of cage culture areas and determines their environmental carrying capacity. Simple and effective simulation of the settling flux of biodeposition is therefore extremely important for determining the spatial distribution of biodeposition. Theoretically, biodeposition in cage culture areas without specific emission rules can be simplified as point source pollution. Fluent is fluid simulation software that can simulate the dispersion of particulate matter simply and efficiently. Based on the simplification of pollution sources and bays, the settling flux of biodeposition can be simulated simply and effectively with Fluent. In the present work, the feasibility of this method was evaluated by simulating the settling flux of biodeposition in Maniao Bay, Hainan Province, China, and 20 sampling sites were selected for determining the settling fluxes. At sampling sites P1, P2, P3, P4, P5, Z1, Z2, Z3, Z4, A1, A2, A3, A4, B1, B2, C1, C2, C3, and C4, the measured settling fluxes of biodeposition were 26.02, 15.78, 10.77, 58.16, 6.57, 72.17, 12.37, 12.11, 106.64, 150.96, 22.59, 11.41, 18.03, 7.90, 19.23, 7.06, 11.84, 5.19, and 2.57 g d^-1 m^-2, respectively. The simulated settling fluxes at the corresponding sites were 16.03, 23.98, 8.87, 46.90, 4.52, 104.77, 16.03, 8.35, 180.83, 213.06, 39.10, 17.47, 20.98, 9.78, 23.25, 7.84, 15.90, 6.06, and 1.65 g d^-1 m^-2, respectively. There was a positive correlation between the simulated and measured settling fluxes (R = 0.94, P = 2.22×10^-9 < 0.05), which implies that the spatial differentiation of the biodeposition flux was well simulated. Moreover, the posterior difference ratio of the simulation was 0.38 and the small error probability was 0.94, which means the simulated results reached an acceptable level in terms of relative error. Thus, if nonpoint source pollution is simplified to point source pollution and open waters are simplified based on similarity theory, the settling flux of biodeposition in open waters can be simply and effectively simulated with the fluid simulation software Fluent.
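The reported agreement can be checked directly from the fluxes listed above; the snippet below computes the Pearson correlation of the measured and simulated values with SciPy.

```python
from scipy.stats import pearsonr

# Measured and simulated settling fluxes (g d^-1 m^-2) as listed in the
# abstract; site order follows the text.
measured = [26.02, 15.78, 10.77, 58.16, 6.57, 72.17, 12.37, 12.11, 106.64,
            150.96, 22.59, 11.41, 18.03, 7.90, 19.23, 7.06, 11.84, 5.19, 2.57]
simulated = [16.03, 23.98, 8.87, 46.90, 4.52, 104.77, 16.03, 8.35, 180.83,
             213.06, 39.10, 17.47, 20.98, 9.78, 23.25, 7.84, 15.90, 6.06, 1.65]

r, p = pearsonr(measured, simulated)
print(r, p)  # the abstract reports R = 0.94, P = 2.22e-9
```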
We introduce Quafu-Qcover, an open-source cloud-based software package developed for solving combinatorial optimization problems using quantum simulators and hardware backends. Quafu-Qcover provides a standardized and comprehensive workflow that utilizes the quantum approximate optimization algorithm (QAOA). It facilitates the automatic conversion of the original problem into a quadratic unconstrained binary optimization (QUBO) model and its corresponding Ising model, which can subsequently be transformed into a weight graph. The core of Qcover relies on a graph-decomposition-based classical algorithm, which efficiently derives the optimal parameters for the shallow QAOA circuit. Quafu-Qcover incorporates a dedicated compiler capable of translating QAOA circuits into physical quantum circuits that can be executed on Quafu cloud quantum computers. Compared with a general-purpose compiler, our compiler generates shorter circuit depths while also exhibiting superior speed. Additionally, the Qcover compiler can dynamically create a library of qubit-coupling substructures in real time, utilizing the most recent calibration data from the superconducting quantum devices; this ensures that computational tasks can be assigned to connected physical qubits with the highest fidelity. Quafu-Qcover allows us to retrieve quantum computing sampling results using a task ID at any time, enabling asynchronous processing. Moreover, it incorporates modules for results preprocessing and visualization, facilitating an intuitive display of solutions to combinatorial optimization problems. We hope that Quafu-Qcover can serve as an instructive illustration of how to explore application problems on the Quafu cloud quantum computers.
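The QUBO-to-Ising conversion that the workflow automates is the standard change of variables from binary x_i to spin z_i:

```latex
% Standard QUBO -> Ising mapping used in QAOA workflows.
x_i = \frac{1 - z_i}{2}, \qquad
\min_{x \in \{0,1\}^n} x^{\top} Q\, x
\;\;\Longrightarrow\;\;
H(z) = \sum_{i<j} J_{ij}\, z_i z_j + \sum_i h_i\, z_i + \text{const},
\quad z_i \in \{-1, +1\}
```

The resulting couplings J_ij and fields h_i define the weight graph mentioned above.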
Object Constraint Language (OCL) is a lightweight formal specification language widely used for software verification and validation in NASA and Object Management Group projects. Although OCL provides a simple expressive syntax, it is hard for developers to write correctly due to a lack of knowledge of the mathematical foundations of first-order logic; OCL written at the first stage of development is only approximately half accurate. A deep neural network named DeepOCL is proposed, which takes unrestricted natural language as input and automatically outputs the best-scored OCL candidates without requiring a domain conceptual model, which is compulsory in existing rule-based generation approaches. To demonstrate the validity of the proposed approach, ablation experiments were conducted on a new sentence-aligned dataset named OCLPairs. The experiments show that the proposed DeepOCL achieves the state of the art for OCL statement generation, scoring 74.30 on BLEU and outperforming experienced developers by 35.19%. The proposed approach is the first deep learning approach to generate OCL expressions from natural language. It can be further developed as a CASE tool for the software industry.
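BLEU scores such as the reported 74.30 compare generated and reference token sequences by n-gram overlap. A sketch with NLTK follows; the OCL pair is an invented example, not an item from OCLPairs.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Score a generated OCL constraint against its reference, token by token.
reference = "context Account inv : self . balance >= 0".split()
candidate = "context Account inv : self . balance > 0".split()

score = sentence_bleu([reference], candidate,
                      smoothing_function=SmoothingFunction().method1)
print(round(100 * score, 2))  # BLEU is often reported scaled to 0-100
```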
基金supported by the NationalNatural Science Foundation of China(Grant No.61867004)the Youth Fund of the National Natural Science Foundation of China(Grant No.41801288).
文摘The purpose of software defect prediction is to identify defect-prone code modules to assist software quality assurance teams with the appropriate allocation of resources and labor.In previous software defect prediction studies,transfer learning was effective in solving the problem of inconsistent project data distribution.However,target projects often lack sufficient data,which affects the performance of the transfer learning model.In addition,the presence of uncorrelated features between projects can decrease the prediction accuracy of the transfer learning model.To address these problems,this article propose a software defect prediction method based on stable learning(SDP-SL)that combines code visualization techniques and residual networks.This method first transforms code files into code images using code visualization techniques and then constructs a defect prediction model based on these code images.During the model training process,target project data are not required as prior knowledge.Following the principles of stable learning,this paper dynamically adjusted the weights of source project samples to eliminate dependencies between features,thereby capturing the“invariance mechanism”within the data.This approach explores the genuine relationship between code defect features and labels,thereby enhancing defect prediction performance.To evaluate the performance of SDP-SL,this article conducted comparative experiments on 10 open-source projects in the PROMISE dataset.The experimental results demonstrated that in terms of the F-measure,the proposed SDP-SL method outperformed other within-project defect prediction methods by 2.11%-44.03%.In cross-project defect prediction,the SDP-SL method provided an improvement of 5.89%-25.46% in prediction performance compared to other cross-project defect prediction methods.Therefore,SDP-SL can effectively enhance within-and cross-project defect predictions.
基金supported by China Geological Survey(DD20211301).
文摘This research investigates the ecological importance,changes,and status of mangrove wetlands along China’s coastline.Visual interpretation,geological surveys,and ISO clustering unsupervised classification methods are employed to interpret mangrove distribution from remote sensing images from 2021,utilizing ArcGIS software platform.Furthermore,the carbon storage capacity of mangrove wetlands is quantified using the carbon storage module of InVEST model.Results show that the mangrove wetlands in China covered an area of 278.85 km2 in 2021,predominantly distributed in Hainan,Guangxi,Guangdong,Fujian,Zhejiang,Taiwan,Hong Kong,and Macao.The total carbon storage is assessed at 2.11×10^(6) t,with specific regional data provided.Trends since the 1950s reveal periods of increase,decrease,sharp decrease,and slight-steady increases in mangrove areas in China.An important finding is the predominant replacement of natural coastlines adjacent to mangrove wetlands by artificial ones,highlighting the need for creating suitable spaces for mangrove restoration.This study is poised to guide future mangroverelated investigations and conservation strategies.
基金the Deanship of Scientific Research at King Abdulaziz University,Jeddah,Saudi Arabia under the Grant No.RG-12-611-43.
文摘The Message Passing Interface (MPI) is a widely accepted standard for parallel computing on distributed memorysystems.However, MPI implementations can contain defects that impact the reliability and performance of parallelapplications. Detecting and correcting these defects is crucial, yet there is a lack of published models specificallydesigned for correctingMPI defects. To address this, we propose a model for detecting and correcting MPI defects(DC_MPI), which aims to detect and correct defects in various types of MPI communication, including blockingpoint-to-point (BPTP), nonblocking point-to-point (NBPTP), and collective communication (CC). The defectsaddressed by the DC_MPI model include illegal MPI calls, deadlocks (DL), race conditions (RC), and messagemismatches (MM). To assess the effectiveness of the DC_MPI model, we performed experiments on a datasetconsisting of 40 MPI codes. The results indicate that the model achieved a detection rate of 37 out of 40 codes,resulting in an overall detection accuracy of 92.5%. Additionally, the execution duration of the DC_MPI modelranged from 0.81 to 1.36 s. These findings show that the DC_MPI model is useful in detecting and correctingdefects in MPI implementations, thereby enhancing the reliability and performance of parallel applications. TheDC_MPImodel fills an important research gap and provides a valuable tool for improving the quality ofMPI-basedparallel computing systems.
基金funding from the European Commission for the Ruralities Project(grant agreement no.101060876).
文摘Agile Transformations are challenging processes for organizations that look to extend the benefits of Agile philosophy and methods beyond software engineering.Despite the impact of these transformations on orga-nizations,they have not been extensively studied in academia.We conducted a study grounded in workshops and interviews with 99 participants from 30 organizations,including organizations undergoing transformations(“final organizations”)and companies supporting these processes(“consultants”).The study aims to understand the motivations,objectives,and factors driving and challenging these transformations.Over 700 responses were collected to the question and categorized into 32 objectives.The findings show that organizations primarily aim to achieve customer centricity and adaptability,both with 8%of the mentions.Other primary important objectives,with above 4%of mentions,include alignment of goals,lean delivery,sustainable processes,and a flatter,more team-based organizational structure.We also detect discrepancies in perspectives between the objectives identified by the two kinds of organizations and the existing agile literature and models.This misalignment highlights the need for practitioners to understand with the practical realities the organizations face.
文摘Software testing is a critical phase due to misconceptions about ambiguities in the requirements during specification,which affect the testing process.Therefore,it is difficult to identify all faults in software.As requirement changes continuously,it increases the irrelevancy and redundancy during testing.Due to these challenges;fault detection capability decreases and there arises a need to improve the testing process,which is based on changes in requirements specification.In this research,we have developed a model to resolve testing challenges through requirement prioritization and prediction in an agile-based environment.The research objective is to identify the most relevant and meaningful requirements through semantic analysis for correct change analysis.Then compute the similarity of requirements through case-based reasoning,which predicted the requirements for reuse and restricted to error-based requirements.Afterward,the apriori algorithm mapped out requirement frequency to select relevant test cases based on frequently reused or not reused test cases to increase the fault detection rate.Furthermore,the proposed model was evaluated by conducting experiments.The results showed that requirement redundancy and irrelevancy improved due to semantic analysis,which correctly predicted the requirements,increasing the fault detection rate and resulting in high user satisfaction.The predicted requirements are mapped into test cases,increasing the fault detection rate after changes to achieve higher user satisfaction.Therefore,the model improves the redundancy and irrelevancy of requirements by more than 90%compared to other clustering methods and the analytical hierarchical process,achieving an 80%fault detection rate at an earlier stage.Hence,it provides guidelines for practitioners and researchers in the modern era.In the future,we will provide the working prototype of this model for proof of concept.
基金supported by UniversitiKebangsaan Malaysia,under Dana Impak Perdana 2.0.(Ref:DIP–2022–020).
文摘Software Defined Networking(SDN)is programmable by separation of forwarding control through the centralization of the controller.The controller plays the role of the‘brain’that dictates the intelligent part of SDN technology.Various versions of SDN controllers exist as a response to the diverse demands and functions expected of them.There are several SDN controllers available in the open market besides a large number of commercial controllers;some are developed tomeet carrier-grade service levels and one of the recent trends in open-source SDN controllers is the Open Network Operating System(ONOS).This paper presents a comparative study between open source SDN controllers,which are known as Network Controller Platform(NOX),Python-based Network Controller(POX),component-based SDN framework(Ryu),Java-based OpenFlow controller(Floodlight),OpenDayLight(ODL)and ONOS.The discussion is further extended into ONOS architecture,as well as,the evolution of ONOS controllers.This article will review use cases based on ONOS controllers in several application deployments.Moreover,the opportunities and challenges of open source SDN controllers will be discussed,exploring carriergrade ONOS for future real-world deployments,ONOS unique features and identifying the suitable choice of SDN controller for service providers.In addition,we attempt to provide answers to several critical questions relating to the implications of the open-source nature of SDN controllers regarding vendor lock-in,interoperability,and standards compliance,Similarly,real-world use cases of organizations using open-source SDN are highlighted and how the open-source community contributes to the development of SDN controllers.Furthermore,challenges faced by open-source projects,and considerations when choosing an open-source SDN controller are underscored.Then the role of Artificial Intelligence(AI)and Machine Learning(ML)in the evolution of open-source SDN controllers in light of recent research is indicated.In addition,the challenges and limitations associated with deploying open-source SDN controllers in production networks,how can they be mitigated,and finally how opensource SDN controllers handle network security and ensure that network configurations and policies are robust and resilient are presented.Potential opportunities and challenges for future Open SDN deployment are outlined to conclude the article.
基金supported in part by the National Natural Science Foundation of China under Grants 62273272,62303375,and 61873277in part by the Key Research and Development Program of Shaanxi Province under Grant 2023-YBGY-243+1 种基金in part by the Natural Science Foundation of Shaanxi Province under Grant 2020JQ-758in part by the Youth Innovation Team of Shaanxi Universities,and in part by the Special Fund for Scientific and Technological Innovation Strategy of Guangdong Province under Grant 2022A0505030025.
文摘As one of the most effective techniques for finding software vulnerabilities,fuzzing has become a hot topic in software security.It feeds potentially syntactically or semantically malformed test data to a target program to mine vulnerabilities and crash the system.In recent years,considerable efforts have been dedicated by researchers and practitioners towards improving fuzzing,so there aremore and more methods and forms,whichmake it difficult to have a comprehensive understanding of the technique.This paper conducts a thorough survey of fuzzing,focusing on its general process,classification,common application scenarios,and some state-of-the-art techniques that have been introduced to improve its performance.Finally,this paper puts forward key research challenges and proposes possible future research directions that may provide new insights for researchers.
基金supported by the National Key R&D Program of China(Grant No.2022YFA1402802)the National Natural Science Foundation of China(Grant Nos.12434003,12374103,and 12074057).
文摘We investigate the skyrmion motion driven by spin waves on magnetic nanotubes through micromagnetic simulations.Our key results include demonstrating the stability and enhanced mobility of skyrmions on the edgeless nanotube geometry,which prevents destruction at boundaries—a common issue in planar geometries.We explore the influence of the damping coefficient,amplitude,and frequency of microwaves on skyrmion dynamics,revealing a non-uniform velocity profile characterized by acceleration and deceleration phases.Our results show that the skyrmion Hall effect is significantly modulated on nanotubes compared to planar models,with specific dependencies on the spin-wave parameters.These findings provide insights into skyrmion manipulation for spintronic applications,highlighting the potential for high-speed and efficient information transport in magnonic devices.
文摘Accurate software cost estimation in Global Software Development(GSD)remains challenging due to reliance on historical data and expert judgments.Traditional models,such as the Constructive Cost Model(COCOMO II),rely heavily on historical and accurate data.In addition,expert judgment is required to set many input parameters,which can introduce subjectivity and variability in the estimation process.Consequently,there is a need to improve the current GSD models to mitigate reliance on historical data,subjectivity in expert judgment,inadequate consideration of GSD-based cost drivers and limited integration of modern technologies with cost overruns.This study introduces a novel hybrid model that synergizes the COCOMO II with Artificial Neural Networks(ANN)to address these challenges.The proposed hybrid model integrates additional GSD-based cost drivers identified through a systematic literature review and further vetted by industry experts.This article compares the effectiveness of the proposedmodelwith state-of-the-artmachine learning-basedmodels for software cost estimation.Evaluating the NASA 93 dataset by adopting twenty-six GSD-based cost drivers reveals that our hybrid model achieves superior accuracy,outperforming existing state-of-the-artmodels.The findings indicate the potential of combining COCOMO II,ANN,and additional GSD-based cost drivers to transform cost estimation in GSD.
基金supported by the National Key R&D Program China (Grant No.2022YFA1402802)the National Natural Science Foundation of China (Grant Nos.12374103 and 12074057)。
文摘Within the magnonics community,there has been a lot of interests in the magnon–skyrmion interaction.Magnons and skyrmions are two intriguing phenomena in condensed matter physics,and magnetic nanotubes have emerged as a suitable platform to study their complex interactions.We show that magnon frequency combs can be induced in magnetic nanotubes by three-wave mixing between the propagating magnons and skyrmion.This study enriches our fundamental comprehension of magnon–skyrmion interactions and holds promise for developing innovative spintronic devices and applications.This frequency comb tunability and unique spectral features offer a rich platform for exploring novel avenues in magnetic nanotechnology.
基金supported by the National Natural Science Foundation of China(Nos.12175321,11975021,11675275,and U1932101)National Key Research and Development Program of China(Nos.2023YFA1606000 and 2020YFA0406400)+2 种基金State Key Laboratory of Nuclear Physics and Technology,Peking University(Nos.NPT2020KFY04 and NPT2020KFY05)Strategic Priority Research Program of the Chinese Academy of Sciences(No.XDA10010900)National College Students Science and Technology Innovation Project,and Undergraduate Base Scientific Research Project of Sun Yat-sen University。
文摘DD4hep serves as a generic detector description toolkit recommended for offline software development in next-generation high-energy physics(HEP)experiments.Conversely,Filmbox(FBX)stands out as a widely used 3D modeling file format within the 3D software industry.In this paper,we introduce a novel method that can automatically convert complex HEP detector geometries from DD4hep description into 3D models in the FBX format.The feasibility of this method was dem-onstrated by its application to the DD4hep description of the Compact Linear Collider detector and several sub-detectors of the super Tau-Charm facility and circular electron-positron collider experiments.The automatic DD4hep–FBX detector conversion interface provides convenience for further development of applications,such as detector design,simulation,visualization,data monitoring,and outreach,in HEP experiments.
基金financially supported by the National Natural Science Foundation of China(31971541).
文摘Forest habitats are critical for biodiversity,ecosystem services,human livelihoods,and well-being.Capacity to conduct theoretical and applied forest ecology research addressing direct(e.g.,deforestation)and indirect(e.g.,climate change)anthropogenic pressures has benefited considerably from new field-and statistical-techniques.We used machine learning and bibliometric structural topic modelling to identify 20 latent topics comprising four principal fields from a corpus of 16,952 forest ecology/forestry articles published in eight ecology and five forestry journals between 2010 and 2022.Articles published per year increased from 820 in 2010 to 2,354 in 2021,shifting toward more applied topics.Publications from China and some countries in North America and Europe dominated,with relatively fewer articles from some countries in West and Central Africa and West Asia,despite globally important forest resources.Most study sites were in some countries in North America,Central Asia,and South America,and Australia.Articles utilizing R statistical software predominated,increasing from 29.5%in 2010 to 71.4%in 2022.The most frequently used packages included lme4,vegan,nlme,MuMIn,ggplot2,car,MASS,mgcv,multcomp and raster.R was more often used in forest ecology than applied forestry articles.R software offers advantages in script and workflow-sharing compared to other statistical packages.Our findings demonstrate that the disciplines of forest ecology/forestry are expanding both in number and scope,aided by more sophisticated statistical tools,to tackle the challenges of redressing forest habitat loss and the socio-economic impacts of deforestation.
基金supported by the National Natural Science Fund of China (No.52104049)the Science Foundation of China University of Petroleum,Beijing (No.2462022BJRC004)。
文摘In the early time of oilfield development, insufficient production data and unclear understanding of oil production presented a challenge to reservoir engineers in devising effective development plans. To address this challenge, this study proposes a method using data mining technology to search for similar oil fields and predict well productivity. A query system of 135 analogy parameters is established based on geological and reservoir engineering research, and the weight values of these parameters are calculated using a data algorithm to establish an analogy system. The fuzzy matter-element algorithm is then used to calculate the similarity between oil fields, with fields having similarity greater than 70% identified as similar oil fields. Using similar oil fields as sample data, 8 important factors affecting well productivity are identified using the Pearson coefficient and mean decrease impurity(MDI) method. To establish productivity prediction models, linear regression(LR), random forest regression(RF), support vector regression(SVR), backpropagation(BP), extreme gradient boosting(XGBoost), and light gradient boosting machine(Light GBM) algorithms are used. Their performance is evaluated using the coefficient of determination(R^(2)), explained variance score(EV), mean squared error(MSE), and mean absolute error(MAE) metrics. The Light GBM model is selected to predict the productivity of 30 wells in the PL field with an average error of only 6.31%, which significantly improves the accuracy of the productivity prediction and meets the application requirements in the field. Finally, a software platform integrating data query,oil field analogy, productivity prediction, and knowledge base is established to identify patterns in massive reservoir development data and provide valuable technical references for new reservoir development.
基金the R&D&I,Spain grants PID2020-119478GB-I00 and,PID2020-115832GB-I00 funded by MCIN/AEI/10.13039/501100011033.N.Rodríguez-Barroso was supported by the grant FPU18/04475 funded by MCIN/AEI/10.13039/501100011033 and by“ESF Investing in your future”Spain.J.Moyano was supported by a postdoctoral Juan de la Cierva Formación grant FJC2020-043823-I funded by MCIN/AEI/10.13039/501100011033 and by European Union NextGenerationEU/PRTR.J.Del Ser acknowledges funding support from the Spanish Centro para el Desarrollo Tecnológico Industrial(CDTI)through the AI4ES projectthe Department of Education of the Basque Government(consolidated research group MATHMODE,IT1456-22)。
文摘When data privacy is imposed as a necessity,Federated learning(FL)emerges as a relevant artificial intelligence field for developing machine learning(ML)models in a distributed and decentralized environment.FL allows ML models to be trained on local devices without any need for centralized data transfer,thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties.This paradigm has gained momentum in the last few years,spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources.By virtue of FL,models can be learned from all such distributed data sources while preserving data privacy.The aim of this paper is to provide a practical tutorial on FL,including a short methodology and a systematic analysis of existing software frameworks.Furthermore,our tutorial provides exemplary cases of study from three complementary perspectives:i)Foundations of FL,describing the main components of FL,from key elements to FL categories;ii)Implementation guidelines and exemplary cases of study,by systematically examining the functionalities provided by existing software frameworks for FL deployment,devising a methodology to design a FL scenario,and providing exemplary cases of study with source code for different ML approaches;and iii)Trends,shortly reviewing a non-exhaustive list of research directions that are under active investigation in the current FL landscape.The ultimate purpose of this work is to establish itself as a referential work for researchers,developers,and data scientists willing to explore the capabilities of FL in practical applications.
Funding: Supported in part by the National Undergraduate Innovation and Entrepreneurship Training Program under Grant No. 202310347039, the Zhejiang Provincial Natural Science Foundation of China under Grant No. LZ22F020002, and the Huzhou Science and Technology Planning Foundation under Grant No. 2023GZ04.
Abstract: The Internet of Things (IoT) is characterized by node mobility, node heterogeneity, link heterogeneity, and topology heterogeneity. Faced with these characteristics and the explosive growth of IoT nodes, which brings large-scale data processing requirements, edge computing has become an emerging network architecture for supporting IoT applications thanks to its powerful computing capabilities and good service functions. However, the defense mechanisms of Edge Computing-enabled IoT Nodes (ECIoTNs) remain weak due to their limited resources, leaving them susceptible to the spread of malicious software, which can compromise data confidentiality and network service availability. Facing this situation, we put forward an epidemiology-based susceptible-curb-infectious-removed-dead (SCIRD) model. We then analyze the dynamics of ECIoTNs with different infection levels under different initial conditions to obtain the dynamic differential equations, and we establish the presence of equilibrium states in the SCIRD model. Furthermore, we analyze the model's stability and examine the conditions under which malicious software will either spread or disappear within Edge Computing-enabled IoT (ECIoT) networks. Lastly, we validate the efficacy and superiority of the SCIRD model through MATLAB simulations. These findings offer a theoretical foundation for suppressing the propagation of malicious software in ECIoT networks. The experimental results indicate that the theoretical SCIRD model is instructive, revealing in depth the principles of malicious software propagation in ECIoT networks. This study solves a challenging security problem of ECIoT networks by determining the malicious software propagation threshold, which lays the foundation for building more secure and reliable ECIoT networks.
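The abstract does not reproduce the SCIRD differential equations, so the sketch below only illustrates the general approach: write the compartmental transition rates as an ODE system and integrate it numerically. Every rate term and parameter here is a hypothetical placeholder chosen for illustration, not the paper's actual model.

    # Hypothetical SCIRD-style compartments: susceptible (S), curbed (C),
    # infectious (I), removed (R), dead (D). The rate structure and all
    # parameter values below are placeholders, not the paper's equations.
    from scipy.integrate import solve_ivp

    beta, kappa, gamma, mu = 0.4, 0.1, 0.08, 0.02  # hypothetical rates

    def scird(t, y):
        S, C, I, R, D = y
        N = S + C + I + R + D
        dS = -beta * S * I / N - kappa * S       # infection + curbing measures
        dC = kappa * S - 0.5 * beta * C * I / N  # curbed nodes, partly protected
        dI = beta * (S + 0.5 * C) * I / N - (gamma + mu) * I
        dR = gamma * I                           # removed (cleaned) nodes
        dD = mu * I                              # dead (failed) nodes
        return [dS, dC, dI, dR, dD]

    sol = solve_ivp(scird, (0, 200), [990, 0, 10, 0, 0])
    S, C, I, R, D = sol.y[:, -1]
    print(f"final state: S={S:.0f} C={C:.0f} I={I:.1f} R={R:.0f} D={D:.0f}")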
Funding: Researchers Supporting Project Number (RSPD2024R576), King Saud University, Riyadh, Saudi Arabia.
Abstract: Sentiment analysis is becoming increasingly important in today's digital age, with social media being a significant source of user-generated content. Developing sentiment lexicons that support languages other than English is a challenging task, especially for analyzing sentiment in social media reviews. Most existing sentiment analysis systems focus on English, leaving a significant research gap in other languages due to limited resources and tools. This research addresses this gap by building a sentiment lexicon for local languages, which is then used with a machine learning algorithm for efficient sentiment analysis. In the first step, a lexicon is developed that covers five languages: Urdu, Roman Urdu, Pashto, Roman Pashto, and English. Sentiment scores from SentiWordNet are associated with each word in the lexicon to produce an effective sentiment score. In the second step, a naive Bayesian algorithm is applied to the developed lexicon for efficient sentiment analysis of Roman Pashto. Both the sentiment lexicon and the sentiment analysis step were evaluated using information retrieval metrics, with an accuracy score of 0.89 for the sentiment lexicon and 0.83 for the sentiment analysis. The results showcase the potential for improving software engineering tasks related to user feedback analysis and product development.
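The two-step design (lexicon-derived sentiment scores feeding a naive Bayes classifier) can be sketched compactly. The toy lexicon entries and example reviews below are invented for illustration and are not from the paper's resources.

    # Step 1: score each review against a (toy) sentiment lexicon.
    # Step 2: train a naive Bayes classifier on the lexicon scores.
    from sklearn.naive_bayes import GaussianNB

    lexicon = {"kha": 1.0, "khkulai": 0.8, "kharab": -1.0, "bekar": -0.8}  # toy

    def features(text):
        # aggregate lexicon score of all tokens (unknown tokens score 0)
        return [sum(lexicon.get(tok, 0.0) for tok in text.lower().split())]

    train = [("kha film da", 1), ("der khkulai film", 1),
             ("kharab film", 0), ("bekar da", 0)]        # 1 = positive
    X = [features(t) for t, _ in train]
    y = [label for _, label in train]

    clf = GaussianNB().fit(X, y)
    print(clf.predict([features("da film kha da")]))     # -> [1] (positive)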
Funding: This work is supported by the Provincial Key Science and Technology Special Project of Henan (No. 221100240100).
Abstract: In recent years, the rapid development of computer software has led to numerous security problems, particularly software vulnerabilities, and these flaws can cause significant harm to users' privacy and property. Current security defect detection relies on manual or expert reasoning, leading to missed detections and high false detection rates. Artificial intelligence has enabled neural network models based on machine learning or deep learning to mine vulnerabilities intelligently, reducing both missed alarms and false alarms. This project therefore studies Java source code defect detection methods for defects such as null pointer reference exceptions, cross-site scripting (XSS), and Structured Query Language (SQL) injection. The project uses the open-source Javalang library to parse Java source code, conducts a deep search on the abstract syntax tree (AST) to obtain a null-pointer syntax feature library, and converts the Java source code into a dependency graph. The resulting feature vectors are then used as the learning target for the neural network. Four types of networks, Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), Bi-directional Long Short-Term Memory (BiLSTM), and attention mechanism + bidirectional LSTM, are used to investigate the three code defect types: null pointer reference exceptions, XSS, and SQL injection. Experimental results show that the attention-based bidirectional LSTM is the most effective at recognizing defects, verifying the correctness of the method.
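As a minimal illustration of the parsing step, the sketch below uses the open-source javalang package to build an AST from a small Java snippet and flag method calls on variables initialized to null. This is a simplified stand-in for the feature extraction described above, not the paper's actual pipeline.

    # Parse Java source with javalang, then walk the AST: collect variables
    # initialized to the null literal and flag method invocations on them.
    import javalang

    source = """
    class Demo {
        String greet(String name) {
            String s = null;
            if (name != null) { s = "hi " + name; }
            return s.trim();
        }
    }
    """

    tree = javalang.parse.parse(source)

    # variables whose declarator initializer is the null literal
    null_vars = {decl.name
                 for _, decl in tree.filter(javalang.tree.VariableDeclarator)
                 if isinstance(decl.initializer, javalang.tree.Literal)
                 and decl.initializer.value == "null"}

    # method calls qualified by such a variable are candidate defects
    for _, call in tree.filter(javalang.tree.MethodInvocation):
        if call.qualifier in null_vars:
            print(f"possible null dereference: {call.qualifier}.{call.member}()")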
Funding: Supported by the National Key Research and Development Program of China (No. 2018YFD0900704) and the National Natural Science Foundation of China (No. 31972796).
Abstract: The settling flux of biodeposition affects the environmental quality of cage culture areas and determines their environmental carrying capacity. Simple and effective simulation of the settling flux of biodeposition is therefore extremely important for determining its spatial distribution. Theoretically, biodeposition in cage culture areas without specific emission rules can be simplified as point source pollution. Fluent is fluid simulation software that can simulate the dispersion of particulate matter simply and efficiently, so, based on this simplification of pollution sources and bays, the settling flux of biodeposition can be simulated easily and effectively with Fluent. In the present work, the feasibility of this method was evaluated by simulating the settling flux of biodeposition in Maniao Bay, Hainan Province, China, where 20 sampling sites were selected for determining the settling fluxes. At sampling sites P1, P2, P3, P4, P5, Z1, Z2, Z3, Z4, A1, A2, A3, A4, B1, B2, C1, C2, C3, and C4, the measured settling fluxes of biodeposition were 26.02, 15.78, 10.77, 58.16, 6.57, 72.17, 12.37, 12.11, 106.64, 150.96, 22.59, 11.41, 18.03, 7.90, 19.23, 7.06, 11.84, 5.19, and 2.57 g d⁻¹ m⁻², respectively. The simulated settling fluxes at the corresponding sites were 16.03, 23.98, 8.87, 46.90, 4.52, 104.77, 16.03, 8.35, 180.83, 213.06, 39.10, 17.47, 20.98, 9.78, 23.25, 7.84, 15.90, 6.06, and 1.65 g d⁻¹ m⁻², respectively. There was a positive correlation between the simulated and measured settling fluxes (R = 0.94, P = 2.22×10⁻⁹ < 0.05), which implies that the spatial differentiation of biodeposition flux was well simulated. Moreover, the posterior difference ratio of the simulation was 0.38 and the small error probability was 0.94, which means that the simulated results reached an acceptable level in terms of relative error. Thus, if nonpoint source pollution is simplified to point source pollution and open waters are simplified based on similarity theory, the settling flux of biodeposition in open waters can be simulated simply and effectively with the fluid simulation software Fluent.
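Because the abstract lists both the measured and simulated fluxes, the reported correlation can be checked directly; the short sketch below computes the Pearson correlation from those published values.

    # Recompute the simulated-vs-measured correlation from the flux values
    # listed in the abstract (units: g d^-1 m^-2).
    from scipy.stats import pearsonr

    measured  = [26.02, 15.78, 10.77, 58.16, 6.57, 72.17, 12.37, 12.11, 106.64,
                 150.96, 22.59, 11.41, 18.03, 7.90, 19.23, 7.06, 11.84, 5.19, 2.57]
    simulated = [16.03, 23.98, 8.87, 46.90, 4.52, 104.77, 16.03, 8.35, 180.83,
                 213.06, 39.10, 17.47, 20.98, 9.78, 23.25, 7.84, 15.90, 6.06, 1.65]

    r, p = pearsonr(simulated, measured)
    print(f"R = {r:.2f}, P = {p:.2e}")   # abstract reports R = 0.94, P = 2.22e-9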
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 92365206 and 12247168) and the China Postdoctoral Science Foundation (Certificate Numbers 2023M740272 and 2022TQ0036).
Abstract: We introduce Quafu-Qcover, an open-source, cloud-based software package developed for solving combinatorial optimization problems using quantum simulators and hardware backends. Quafu-Qcover provides a standardized and comprehensive workflow built on the quantum approximate optimization algorithm (QAOA). It automatically converts the original problem into a quadratic unconstrained binary optimization (QUBO) model and its corresponding Ising model, which can subsequently be transformed into a weight graph. The core of Qcover relies on a graph-decomposition-based classical algorithm that efficiently derives the optimal parameters for the shallow QAOA circuit. Quafu-Qcover incorporates a dedicated compiler capable of translating QAOA circuits into physical quantum circuits that can be executed on Quafu cloud quantum computers; compared with a general-purpose compiler, it generates shorter circuit depths and runs faster. Additionally, the Qcover compiler can dynamically create a library of qubit coupling substructures in real time, using the most recent calibration data from the superconducting quantum devices, so that computational tasks are assigned to connected physical qubits with the highest fidelity. Quafu-Qcover allows users to retrieve quantum computing sampling results with a task ID at any time, enabling asynchronous processing, and it incorporates modules for result preprocessing and visualization, facilitating an intuitive display of solutions to combinatorial optimization problems. We hope that Quafu-Qcover can serve as an instructive illustration of how to explore application problems on Quafu cloud quantum computers.
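The QUBO/Ising conversion that Quafu-Qcover automates can be illustrated with textbook math on a toy MaxCut instance; the sketch below is generic Python and does not use the Quafu-Qcover API. For MaxCut, an edge (i, j) is cut exactly when the spins differ, so maximizing the cut is equivalent to minimizing the Ising energy Σ s_i·s_j over the edges, with s_i in {-1, +1}.

    # Toy MaxCut -> Ising illustration: brute-force the minimum-energy spin
    # assignment of a 4-node graph (feasible only at this tiny scale; QAOA
    # targets the same objective on hardware).
    import itertools

    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # unit-weight graph
    n = 4

    def ising_energy(s):                 # s[i] in {-1, +1}
        return sum(s[i] * s[j] for i, j in edges)

    best = min(itertools.product((-1, 1), repeat=n), key=ising_energy)
    cut = sum(1 for i, j in edges if best[i] != best[j])
    print("best spin assignment:", best, "-> cut size", cut)

    # equivalent QUBO form via x_i = (1 - s_i) / 2 in {0, 1}:
    # maximize sum over edges of (x_i + x_j - 2 * x_i * x_j)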
Funding: The National Key Research and Development Program of China, Grant/Award Number: 2021YFB2501301.
Abstract: The Object Constraint Language (OCL) is a lightweight formal specification language widely used for software verification and validation in NASA and Object Management Group projects. Although OCL provides a simple, expressive syntax, it is hard for developers to write correctly because doing so requires knowledge of the mathematical foundations of first-order logic; OCL written at the first stage of development is only approximately half accurate. A deep neural network named DeepOCL is proposed, which takes unrestricted natural language as input and automatically outputs the best-scored OCL candidates, without requiring the domain conceptual model that existing rule-based generation approaches compulsorily require. To demonstrate the validity of the proposed approach, ablation experiments were conducted on a new sentence-aligned dataset named OCLPairs. The experiments show that DeepOCL achieves the state of the art for OCL statement generation, scoring 74.30 on BLEU and outperforming experienced developers by 35.19%. The proposed approach is the first deep learning approach to generate OCL expressions from natural language, and it can be further developed into a CASE tool for the software industry.
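Since DeepOCL is evaluated with BLEU, the sketch below shows how a generated OCL candidate can be scored against a reference using NLTK's BLEU implementation. The natural-language sentence and OCL pair are hypothetical illustrations, not drawn from the OCLPairs dataset.

    # Score a generated OCL statement against a reference with sentence-level
    # BLEU; smoothing avoids degenerate zero scores on short token sequences.
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    # NL (hypothetical): "every account balance must be non-negative"
    reference = "context Account inv : self . balance >= 0".split()
    candidate = "context Account inv : balance >= 0".split()   # imperfect output

    smooth = SmoothingFunction().method1
    score = sentence_bleu([reference], candidate, smoothing_function=smooth)
    print(f"BLEU = {100 * score:.2f}")   # < 100: candidate misses "self ."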