Recently, researchers have shown increasing interest in combining more than one programming model in systems running on high-performance computing (HPC) systems to achieve exascale performance by applying parallelism at multiple levels. Combining different programming paradigms, such as the Message Passing Interface (MPI), Open Multi-Processing (OpenMP), and Open Accelerators (OpenACC), can increase computation speed and improve performance. When multiple models are integrated, however, the probability of runtime errors increases, and detecting them is difficult, especially in the absence of testing techniques designed for such errors. Numerous studies have been conducted to identify these errors, but no technique exists for detecting errors in tri-level programming models. Despite the growing body of research integrating the three programming models MPI, OpenMP, and OpenACC, no testing technique has been developed to detect the runtime errors, such as deadlocks and race conditions, that can arise from this integration. Therefore, this paper begins by defining and explaining the runtime errors, undetectable by compilers, that result from integrating the three programming models. For the first time, it presents a classification of the runtime errors that can result from this integration. The paper also proposes a parallel hybrid testing technique for detecting runtime errors in C++ systems that use the triple programming models MPI, OpenMP, and OpenACC. The hybrid technique combines static and dynamic analysis, given that some errors can be detected statically whereas others can only be detected dynamically, and it therefore detects more errors than either approach alone. The static analysis detects a wide range of error types in less time, while the potential errors whose occurrence depends on the operating environment are left to the dynamic analysis, which completes the validation.
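As an illustration of the deadlock class of runtime error discussed above, the sketch below (ours, not code from the paper; the paper targets C++ with all three models, whereas this minimal example uses Python with mpi4py and MPI alone) shows a cyclic wait that a static checker can flag from call order alone:

```python
# Minimal sketch of an MPI deadlock: each rank posts a blocking receive
# before its send, so neither receive can ever be satisfied.
# Run with: mpiexec -n 2 python deadlock_demo.py   (requires mpi4py)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
peer = 1 - rank  # assumes exactly two ranks

# Deadlock: both ranks block here waiting for a message that is never sent.
data = comm.recv(source=peer, tag=0)
comm.send(rank, dest=peer, tag=0)

# A safe ordering breaks the cycle by having one rank send first:
#   if rank == 0:
#       comm.send(rank, dest=peer, tag=0)
#       data = comm.recv(source=peer, tag=0)
#   else:
#       data = comm.recv(source=peer, tag=0)
#       comm.send(rank, dest=peer, tag=0)
```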
Sentiment analysis is becoming increasingly important in today's digital age, with social media being a significant source of user-generated content. Developing sentiment lexicons that support languages other than English is a challenging task, especially for analyzing sentiment in social media reviews. Most existing sentiment analysis systems focus on English, leaving a significant research gap in other languages due to limited resources and tools. This research aims to address this gap by building a sentiment lexicon for local languages, which is then used with a machine learning algorithm for efficient sentiment analysis. In the first step, a lexicon is developed that covers five languages: Urdu, Roman Urdu, Pashto, Roman Pashto, and English. Sentiment scores from SentiWordNet are associated with each word in the lexicon to produce an effective sentiment score. In the second step, a naive Bayes algorithm is applied to the developed lexicon for efficient sentiment analysis of Roman Pashto. Both the sentiment lexicon and the sentiment analysis step were evaluated using information retrieval metrics, with an accuracy of 0.89 for the sentiment lexicon and 0.83 for the sentiment analysis. The results showcase the potential for improving software engineering tasks related to user feedback analysis and product development.
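A toy sketch of this two-step pipeline follows: lexicon scores feed a naive Bayes classifier. The lexicon entries and reviews below are invented placeholders, not the paper's data, and GaussianNB is one reasonable choice for the continuous score features.

```python
# Step 1: hypothetical lexicon -- word -> (positive score, negative score),
# in the style of SentiWordNet polarity scores.
from sklearn.naive_bayes import GaussianNB

lexicon = {
    "kha": (0.8, 0.0),      # placeholder Roman Pashto word
    "kharab": (0.0, 0.7),   # placeholder Roman Pashto word
    "good": (0.75, 0.0),
    "bad": (0.0, 0.8),
}

def sentiment_features(review):
    """Sum the lexicon's positive and negative scores over the review's words."""
    words = review.split()
    pos = sum(lexicon.get(w, (0.0, 0.0))[0] for w in words)
    neg = sum(lexicon.get(w, (0.0, 0.0))[1] for w in words)
    return [pos, neg]

# Step 2: naive Bayes over the lexicon-derived features (toy training set).
reviews = ["kha good", "kharab bad", "good kha kha", "bad kharab"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative
X = [sentiment_features(r) for r in reviews]

clf = GaussianNB().fit(X, labels)
print(clf.predict([sentiment_features("kha bad good")]))
```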
The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before testing and to minimize time and cost. Software with defects negatively impacts operational costs and ultimately affects customer satisfaction. Numerous approaches exist to predict software defects; however, predicting software bugs in a timely and accurate manner remains a major challenge. To improve timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique comprises two major processes, namely metric (feature) selection and classification. First, SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique to identify the relevant software metrics, measuring similarity with the dice coefficient. The feature selection process minimizes the time complexity of software fault prediction. With the selected metrics, software faults are then predicted using Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient, and the softstep activation function provides the final fault prediction results. To minimize the error, the Nelder–Mead method is applied to solve the nonlinear least-squares problem, so that accurate classification results with minimum error are obtained at the output layer. Experimental evaluation is carried out with quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The results demonstrate the superior performance of the proposed SQADEN technique, with maximum accuracy, sensitivity, and specificity (by 3%, 3%, 2%, and 3%) and minimum time and space (by 13% and 15%) when compared with the two state-of-the-art methods.
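Of the components named above, the Nelder–Mead error-minimization step is the easiest to illustrate in isolation. The sketch below (our toy example, not the SQADEN implementation) minimizes a nonlinear least-squares residual with SciPy's Nelder–Mead solver:

```python
# Toy illustration of the Nelder-Mead step: fit the parameters of a
# nonlinear model y = a * exp(b * x) by minimizing the sum of squared
# residuals over synthetic data.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0, 1, 20)
y = 2.0 * np.exp(1.5 * x) + np.random.default_rng(0).normal(0, 0.05, x.size)

def sse(params):
    a, b = params
    return np.sum((y - a * np.exp(b * x)) ** 2)

result = minimize(sse, x0=[1.0, 1.0], method="Nelder-Mead")
print(result.x)  # recovered (a, b), close to (2.0, 1.5)
```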
Maintaining software reliability is the key idea for conducting quality research. This can be achieved by building less complex applications. While developers and other experts have made significant efforts in this context, the achieved level of reliability is not yet what it should be. Therefore, further research into more detailed mechanisms for evaluating and increasing software reliability is essential. A significant aspect of raising the degree of application reliability is the quantitative assessment of reliability. Multiple statistical as well as soft computing methods are available in the literature for predicting software reliability; however, none of these mechanisms works well for all kinds of failure datasets and applications. Hence, finding the optimal model for reliability prediction is an important concern. This paper suggests a novel method for selecting the best reliability prediction model, combining the analytic hierarchy process (AHP), hesitant fuzzy (HF) sets, and the technique for order of preference by similarity to ideal solution (TOPSIS). In addition, procedural sensitivity analysis was performed over different iterations of the process to validate the findings. The resulting prioritization of software reliability prediction models will help developers choose a reliability prediction model based on the software type.
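The TOPSIS step of the proposed AHP-HF-TOPSIS combination can be sketched directly. The example below ranks candidate reliability-prediction models on toy criterion scores and weights (all values invented for illustration, and all criteria treated as benefit criteria):

```python
# Minimal TOPSIS sketch: rank alternatives by closeness to the ideal
# solution. Rows = candidate reliability-prediction models, columns =
# criteria; scores and weights are invented placeholders.
import numpy as np

scores = np.array([
    [0.7, 0.8, 0.6],   # model A
    [0.9, 0.5, 0.7],   # model B
    [0.6, 0.9, 0.8],   # model C
])
weights = np.array([0.5, 0.3, 0.2])

# 1. Vector-normalize each criterion column, then apply the weights.
v = weights * scores / np.linalg.norm(scores, axis=0)

# 2. Ideal and anti-ideal solutions (all criteria are benefits here).
ideal, anti = v.max(axis=0), v.min(axis=0)

# 3. Closeness coefficient: larger = nearer the ideal solution.
d_pos = np.linalg.norm(v - ideal, axis=1)
d_neg = np.linalg.norm(v - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
print(np.argsort(closeness)[::-1])  # ranking of models, best first
```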
The ability to accurately estimate the cost needed to complete a specific project has been a challenge over the past decades. For a successful software project, accurate prediction of cost, time, and effort is an essential task. This paper presents a systematic review of the models used for software cost estimation, covering algorithmic, non-algorithmic, and learning-oriented methods. The models considered include both traditional and recent approaches to software cost estimation. The main objective of this paper is to provide an overview of software cost estimation models and to summarize their strengths, weaknesses, accuracy, the amount of data needed, and the validation techniques used. Our findings show that, in general, neural network-based models outperform other cost estimation techniques. However, no single technique fits every problem, and we recommend that practitioners search for the model that best fits their needs.
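As a concrete instance of the algorithmic methods covered by such reviews, basic COCOMO estimates effort and schedule from size alone; the sketch below uses Boehm's published organic-mode coefficients:

```python
# Basic COCOMO (organic mode), a classic algorithmic cost model:
# effort (person-months) = 2.4 * KLOC^1.05
# schedule (elapsed months) = 2.5 * effort^0.38
def cocomo_basic_organic(kloc: float) -> tuple[float, float]:
    effort = 2.4 * kloc ** 1.05
    schedule = 2.5 * effort ** 0.38
    return effort, schedule

effort, schedule = cocomo_basic_organic(32)  # a 32 KLOC project
print(f"effort ~= {effort:.1f} person-months, schedule ~= {schedule:.1f} months")
```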
The key to software reliability is the fault-tolerant design of application software. New fault-tolerant strategies, and methods for designing them, for application software on various computer systems are introduced. These strategies offer advantages such as a simple hardware platform, independence from the application, and stable reliability. Lastly, some technical problems are discussed in detail.
Based on a new algorithm for computing GIS image-pixel topographic factors in remote sensing monitoring of soil losses, microcomputer software was developed to carry out the computation for a medium-sized river basin (county). This paper emphasizes the algorithmic skills and programming techniques involved, as well as the application of the software.
A user-oriented computer software package consisting of three modeling codes, named DRAD, DRAA, and FDPAT, is introduced. It can be used to design three types of Cassegrain system: classical, with a shaped subreflector, and with dual shaped reflectors, and to analyse the radiation patterns of the antennas. Several mathematical models and numerical techniques are presented.
Requirements elicitation is a fundamental phase of software development in which an analyst discovers the needs of different stakeholders and transforms them into requirements. This phase is cost- and time-intensive, and a project may fail if costs and schedule overruns are excessive. COVID-19 has affected the software industry by reducing interactions between developers and customers, and such a lack of interaction is a key reason for the failure of software projects. Projects can also fail when customers do not know precisely what they want, and selecting an unsuitable elicitation technique can likewise cause project failure. The present study therefore aimed to identify which requirements elicitation technique is the most cost-effective for large-scale projects when time to market is critical or when the customer is unavailable. To that end, we conducted a systematic literature review of requirements elicitation techniques. Most primary studies identified introspection as the best technique, followed by survey and brainstorming. This finding suggests that introspection should be the first choice of elicitation technique, especially when the customer is unavailable or the project has strict time and cost constraints. Moreover, introspection should also be used as the starting point of the elicitation process in a large-scale project, with all known requirements elicited using this technique.
In this paper, we identify a set of factors that may be used to forecast software productivity and software development time. Software productivity was measured in function points per person-hour, and software development time was measured in number of elapsed days. Using field data on over 130 software projects from various industries, we empirically test the impact of team size, integrated computer-aided software engineering (ICASE) tools, software development type, software development platform, and programming language type on software development productivity and development time. Our results indicate that team size, software development type, software development platform, and programming language type significantly impact software development productivity; however, only team size significantly impacts software development time. These results suggest that effectively managing software development teams, and using different management strategies for different software development types, may improve software development productivity.
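A minimal sketch of this kind of empirical test follows, regressing productivity on a few of the factors named above. The data are invented placeholders, not the study's 130-project field dataset:

```python
# Toy regression in the spirit of the study: productivity (function points
# per person-hour) regressed on team size and dummy-coded factors.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: team size, ICASE tool used (0/1), fourth-generation language (0/1).
X = np.array([
    [4, 1, 1],
    [8, 0, 0],
    [12, 1, 0],
    [6, 0, 1],
    [10, 1, 1],
])
y = np.array([0.35, 0.18, 0.15, 0.28, 0.22])  # productivity, FP per person-hour

model = LinearRegression().fit(X, y)
print(model.coef_)  # sign and magnitude of each factor's estimated impact
```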
A mathematical model that makes use of data mining and soft computing techniques is proposed to estimate software development effort. The proposed model works as follows. The parameters that affect development effort are divided into groups based on the distribution of their values in the available dataset. Linguistic terms are identified for the divided groups using fuzzy functions, and the parameters are fuzzified. Associative classification is then applied to the fuzzified parameters to generate association rules, which depict the parameters influencing software development effort. Because many parameters influence the effort, a large number of rules is generated; to reduce this complexity, the generated rules are filtered with respect to the support and confidence metrics, which measure the strength of a rule. A genetic algorithm is then employed to select a set of high-quality rules and improve the accuracy of the model. The Nasa93, Cocomo81, Desharnais, Maxwell, and Finnish-v2 datasets are used for evaluating the proposed model, and evaluation metrics such as Mean Magnitude of Relative Error, Mean Absolute Residuals, Shepperd and MacDonell's Standardized Accuracy, Enhanced Standardized Accuracy, and Effect Size are adopted to substantiate the effectiveness of the proposed methods. The results show that the accuracy of the model is influenced by the support and confidence metrics and by the number of association rules considered for effort prediction.
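The rule-filtering step is easy to make concrete. The sketch below (toy records and thresholds, not the paper's datasets) computes support and confidence for one candidate rule over fuzzified project records and keeps the rule only if both exceed their thresholds:

```python
# Support = P(antecedent and consequent); confidence = P(consequent | antecedent).
records = [
    {"team_size": "small", "complexity": "low", "effort": "low"},
    {"team_size": "small", "complexity": "low", "effort": "low"},
    {"team_size": "large", "complexity": "high", "effort": "high"},
    {"team_size": "small", "complexity": "high", "effort": "high"},
]

def rule_metrics(records, antecedent, consequent):
    """Compute (support, confidence) of the rule antecedent -> consequent."""
    match_a = [r for r in records if all(r[k] == v for k, v in antecedent.items())]
    match_both = [r for r in match_a if all(r[k] == v for k, v in consequent.items())]
    support = len(match_both) / len(records)
    confidence = len(match_both) / len(match_a) if match_a else 0.0
    return support, confidence

s, c = rule_metrics(records,
                    {"team_size": "small", "complexity": "low"},
                    {"effort": "low"})
if s >= 0.25 and c >= 0.8:  # example thresholds
    print(f"keep rule: support={s:.2f}, confidence={c:.2f}")
```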
In the early stages of oilfield development, insufficient production data and an unclear understanding of oil production present a challenge to reservoir engineers devising effective development plans. To address this challenge, this study proposes a method that uses data mining technology to search for similar oil fields and predict well productivity. A query system of 135 analogy parameters is established based on geological and reservoir engineering research, and the weight values of these parameters are calculated with a data algorithm to establish an analogy system. The fuzzy matter-element algorithm is then used to calculate the similarity between oil fields, with fields having a similarity greater than 70% identified as similar. Using similar oil fields as sample data, 8 important factors affecting well productivity are identified using the Pearson coefficient and the mean decrease impurity (MDI) method. To establish productivity prediction models, linear regression (LR), random forest regression (RF), support vector regression (SVR), backpropagation (BP), extreme gradient boosting (XGBoost), and light gradient boosting machine (LightGBM) algorithms are used. Their performance is evaluated using the coefficient of determination (R^(2)), explained variance score (EV), mean squared error (MSE), and mean absolute error (MAE) metrics. The LightGBM model is selected to predict the productivity of 30 wells in the PL field, with an average error of only 6.31%, which significantly improves the accuracy of productivity prediction and meets the application requirements in the field. Finally, a software platform integrating data query, oilfield analogy, productivity prediction, and a knowledge base is established to identify patterns in massive reservoir development data and provide valuable technical references for new reservoir development.
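A toy sketch of the productivity-prediction step follows: a LightGBM regressor trained on invented well features and scored with some of the same metrics the study reports. This is synthetic data, not the PL-field dataset:

```python
# Train a LightGBM regressor on synthetic "wells" with 8 influencing
# factors each, then evaluate with R^2, MSE, and MAE. Requires lightgbm.
import numpy as np
import lightgbm as lgb
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 8))                                   # 8 factors per well
y = 50 * X[:, 0] + 30 * X[:, 1] + rng.normal(0, 2, 200)    # synthetic productivity

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = lgb.LGBMRegressor(n_estimators=200).fit(X_tr, y_tr)

pred = model.predict(X_te)
print(r2_score(y_te, pred), mean_squared_error(y_te, pred),
      mean_absolute_error(y_te, pred))
```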
To explore the effects of freeze‒thaw cycles on the mechanical properties and crack evolution of fissured sandstone, biaxial compression experiments were carried out on sandstone subjected to freeze‒thaw cycles to characterize the resulting changes in its physical and mechanical properties. The crack evolution and crack change process on the surface of the fissured sandstone were recorded and analysed in detail via digital image correlation (DIC). Numerical simulation was used to reveal the propagation process and damage mode of fine-scale cracks under freeze‒thaw cycling, and the simulation results were compared with the experimental data to verify the reliability of the numerical model. The results show that the mass loss, porosity, peak stress, and elastic modulus all increase with an increasing number of freeze‒thaw cycles. With an increase in the number of freeze‒thaw cycles, a substantial change in displacement occurs around the prefabricated cracks, and a stress concentration appears at the crack tips. As new cracks continue to sprout at the tips of the prefabricated cracks until the microcracks gradually coalesce into the main cracks, the displacement cloud becomes markedly discontinuous, and the contours of the displacement field in the crack-fracture damage area intersect with the prefabricated cracks to form an obvious fracture. The damage patterns of the fractured sandstone clearly differ with the number of freeze‒thaw cycles: a symmetrical "L"-shaped damage pattern forms at zero freeze‒thaw cycles, a symmetrical "V"-shaped damage pattern at 10 cycles, a combination of "V"-shaped and "L"-shaped damage patterns after 20 cycles, and an "N"-shaped damage pattern after 30 cycles. This shows that the failure mode of fractured sandstone gradually becomes more complicated as the number of freeze‒thaw cycles increases. The effects of freeze‒thaw cycles on the direction and rate of crack propagation are revealed through a temperature‒load coupled model, which provides an important reference for an in-depth understanding of the freeze‒thaw failure mechanisms of fractured rock masses.