Reservoir heterogeneities play a crucial role in governing reservoir performance and management. Traditionally, detailed inter-well heterogeneity analyses are performed by mapping seismic facies changes in the seismic data, which is a time-intensive task. Many researchers have utilized robust Grey-level co-occurrence matrix (GLCM)-based texture attributes to map reservoir heterogeneity. However, these attributes take seismic data as input and might not be sensitive to lateral lithology variation. To incorporate lithology information, we have developed an impedance-based texture approach using the GLCM workflow, integrating a 3D acoustic impedance volume (a rock-property-based attribute) obtained from a deep convolutional network-based impedance inversion. Our proposed workflow is anticipated to be more sensitive to lateral changes than the conventional amplitude-based texture approach, wherein seismic data are used as input. To evaluate the improvement, we applied the proposed workflow to full-stack 3D seismic data from the Poseidon field, NW Shelf, Australia. This study demonstrates that better demarcation of reservoir gas sands with improved lateral continuity is achievable with the presented approach compared to the conventional one. In addition, we assess the implications of multi-stage faulting on facies distribution for effective reservoir characterization. This study also suggests a well-bounded potential reservoir facies distribution along the parallel fault lines. Thus, the proposed approach provides an efficient strategy for integrating impedance information with texture attributes to improve inference on reservoir heterogeneity, and can serve as a promising tool for identifying potential reservoir zones for both production benefits and fluid storage.
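A minimal sketch of the GLCM texture computation this abstract builds on: quantized pixel (or voxel) pairs at a fixed offset are tallied into a co-occurrence matrix, from which attributes such as contrast are derived. The impedance-based variant would simply substitute a quantized impedance volume for seismic amplitudes as input; the offset choice and the contrast attribute here are illustrative, not the authors' exact parameterization.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Symmetric, normalized gray-level co-occurrence matrix for one offset."""
    g = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            a, b = img[y, x], img[y + dy, x + dx]
            g[a, b] += 1
            g[b, a] += 1  # count both directions (symmetric GLCM)
    return g / g.sum()

def contrast(g):
    """GLCM contrast attribute: sum of p(i,j) * (i-j)^2."""
    i, j = np.indices(g.shape)
    return float((g * (i - j) ** 2).sum())
```

A homogeneous region yields zero contrast, while a rapidly alternating one yields high contrast, which is the property used to delineate facies changes.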
Due to the presence of a large amount of personal sensitive information in social networks, privacy preservation issues in social networks have attracted the attention of many scholars. Inspired by the self-nonself discrimination paradigm in the biological immune system, the negative representation of information offers features such as simplicity and efficiency, making it well suited to preserving social network privacy. Therefore, we propose a method, called AttNetNRI, to preserve the topology privacy and node attribute privacy of attributed social networks. Specifically, a negative survey-based method is developed to disturb the relationships between nodes in the social network so that the topology structure can be kept private. Moreover, a negative database-based method is proposed to hide node attributes, so that the privacy of node attributes can be preserved while still supporting similarity estimation between different node attributes, which is crucial to the analysis of social networks. To evaluate the performance of AttNetNRI, empirical studies have been conducted on various attributed social networks and compared with several state-of-the-art methods tailored to preserving the privacy of social networks. The experimental results show the superiority of the developed method in preserving the privacy of attributed social networks and demonstrate the effectiveness of the topology-disturbing and attribute-hiding components.
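The negative survey idea underlying the topology disturbance can be sketched as follows: each respondent reports one category they do not belong to, and the aggregate true counts are recovered from the negative tallies via the standard estimator t̂ᵢ = N − (c−1)·rᵢ. This is a generic illustration of the mechanism, not AttNetNRI's exact perturbation scheme; the test uses expected (noise-free) tallies.

```python
def expected_negative_tallies(true_counts):
    """Expected negative-survey responses when each respondent picks
    uniformly at random among the c-1 categories they do NOT belong to."""
    N, c = sum(true_counts), len(true_counts)
    return [(N - t) / (c - 1) for t in true_counts]

def reconstruct(neg_tallies):
    """Recover true counts from negative tallies: t_i = N - (c-1) * r_i."""
    N, c = sum(neg_tallies), len(neg_tallies)
    return [N - (c - 1) * r for r in neg_tallies]
```

Because respondents never reveal their true category, the raw data stays private while aggregate statistics remain estimable.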
This paper studies the target controllability of multi-layer complex networked systems in which the nodes are high-dimensional linear time-invariant (LTI) dynamical systems and the network topology is directed and weighted. The influence of inter-layer couplings on the target controllability of multi-layer networks is discussed. It is found that even if one layer is not target controllable, the entire multi-layer network can still be target controllable owing to the inter-layer couplings. For multi-layer networks with general structure, a necessary and sufficient condition for target controllability is given by establishing the relationship between the uncontrollable subspace and the output matrix. From this condition, it can be seen that a system may be target controllable even if it is not state controllable. On this basis, two corollaries are derived that clarify the relationship between target controllability, state controllability, and output controllability. For multi-layer networks whose inter-layer couplings are directed chains or directed stars, sufficient conditions for target controllability are given; these conditions are easier to verify than the classic criterion.
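The classic criterion referred to above can be illustrated by a rank test on the output controllability matrix C·[B, AB, …, Aⁿ⁻¹B]: the target nodes (rows of C) are controllable when this matrix has full row rank. A toy single-layer sketch under that assumption (the paper's multi-layer condition is more refined and is not reproduced here):

```python
import numpy as np

def target_controllable(A, B, C):
    """Rank test: targets (rows of C) are controllable iff
    C @ [B, AB, ..., A^(n-1)B] has full row rank."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])  # build the Kalman controllability blocks
    K = C @ np.hstack(blocks)
    return np.linalg.matrix_rank(K) == C.shape[0]
```

For a two-node chain where node 1 drives node 2, an input at node 1 reaches the target node 2, but an input at node 2 cannot reach node 1.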
Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm performs multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use information gain and the Fisher score to sort the features extracted from the signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them; features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which improves the diversity of solutions and avoids falling into local traps. Using random forest and K-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) as well as other multi-objective emotion identification techniques. The results illustrate that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
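The filter stage described above (ranking features before the wrapper search) can be sketched with the Fisher score; information gain would be computed analogously. The toy data below is illustrative: feature 0 separates the two classes, feature 1 is noise, so feature 0 should rank higher.

```python
import numpy as np

def fisher_score(X, y):
    """Per-feature Fisher score: between-class scatter over within-class scatter."""
    scores = np.zeros(X.shape[1])
    mu = X.mean(axis=0)
    for j in range(X.shape[1]):
        num = den = 0.0
        for c in np.unique(y):
            Xc = X[y == c, j]
            num += len(Xc) * (Xc.mean() - mu[j]) ** 2   # between-class scatter
            den += len(Xc) * Xc.var()                    # within-class scatter
        scores[j] = num / den if den > 0 else 0.0
    return scores
```

High-scoring features are then given a larger selection probability in the wrapper search, which is the importance-weighting step the abstract describes.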
Attribute reduction is a research hotspot in rough set theory. Traditional heuristic attribute reduction methods add the most important attribute to the decision attribute set at each step, resulting in many redundant attribute calculations, high time consumption, and low reduction efficiency. In this paper, based on the idea of the sequential three-way decision classification domain, attributes are treated as objects of a three-way division and are divided into core attributes, relatively necessary attributes, and unnecessary attributes using attribute importance and thresholds. Core attributes are added to the decision attribute set, unnecessary attributes are rejected, and relatively necessary attributes are repeatedly divided until the reduction result is obtained. Experiments were conducted on 8 UCI datasets, and the results show that, compared to traditional reduction methods, the proposed method can effectively reduce time consumption while maintaining classification performance.
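The three-way division of attributes described above can be sketched as a simple thresholding step. The importance measure and the thresholds α and β are placeholders for whatever the reduction algorithm supplies; only the boundary (relatively necessary) region is re-examined in later rounds.

```python
def three_way_triage(importance, alpha, beta):
    """Split attributes into core (>= alpha), relatively necessary
    ([beta, alpha)), and unnecessary (< beta) by importance score."""
    assert beta < alpha
    core = {a for a, s in importance.items() if s >= alpha}
    boundary = {a for a, s in importance.items() if beta <= s < alpha}
    unnecessary = {a for a, s in importance.items() if s < beta}
    return core, boundary, unnecessary
```

Shrinking the candidate set to the boundary region on each pass is what avoids the repeated whole-set importance computations of the traditional heuristic.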
Due to the complexity and variability of carbonate formation leakage zones, lost circulation prediction and control is one of the major challenges of carbonate drilling; it raises well-control risks and production expenses. This research takes the H oilfield as an example, employs seismic features for mud loss prediction, and produces a complete set of pre-drilling mud loss prediction solutions. First, 16 seismic attributes are calculated from the post-stack seismic data, and the mud loss rate per unit footage is specified. The sample set is constructed by extracting each attribute from the seismic traces surrounding 15 typical wells, with an 8:2 ratio between the training set and the test set. With the calibration results for mud loss rate per unit footage, the nonlinear mapping relationship between seismic attributes and mud loss rate per unit footage is established using a mixture density network (MDN) model. Then, the influence of the number of sub-Gaussians and the uncertainty coefficient on the model's predictions is evaluated. Finally, the model is used in conjunction with downhole drilling conditions to assess the risk of mud loss in various layers and along the wellbore trajectory. The study demonstrates that the mean relative errors of the model on the training and test data are 6.9% and 7.5%, respectively, and that R² is 90% and 88%, respectively. The accuracy and efficacy of mud loss prediction can be greatly enhanced by combining the 16 seismic attributes with the mud loss rate per unit footage and applying machine learning methods. The mud loss prediction model based on the MDN can not only predict the mud loss rate but also objectively evaluate the prediction based on the quality of the data and the model.
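A mixture density network outputs, for each input, the parameters (π, μ, σ) of a Gaussian mixture over the target; training minimizes the mixture negative log-likelihood, and the spread of the mixture provides the uncertainty estimate the abstract refers to. A sketch of just the 1-D loss (the network producing the parameters is omitted, and the parameter values below are illustrative):

```python
import math

def mixture_nll(y, pis, mus, sigmas):
    """Negative log-likelihood of scalar y under a 1-D Gaussian mixture."""
    density = sum(
        pi * math.exp(-0.5 * ((y - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
        for pi, mu, s in zip(pis, mus, sigmas)
    )
    return -math.log(density)
```

A mixture component centered on the observed loss rate yields a lower loss than a mis-centered one, which is what drives the parameters toward the data during training.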
The objective of reliability-based design optimization (RBDO) is to minimize the optimization objective while satisfying the corresponding reliability requirements. However, the nested-loop characteristic reduces the efficiency of RBDO algorithms, which hinders their application to high-dimensional engineering problems. To address these issues, this paper proposes an efficient decoupled RBDO method combining high-dimensional model representation (HDMR) and the weight-point estimation method (WPEM). First, we decouple the RBDO model using HDMR and WPEM. Second, Lagrange interpolation is used to approximate each univariate function. Finally, based on the results of the first two steps, the original nested-loop reliability optimization model is completely transformed into a deterministic design optimization model that can be solved by a series of mature constrained optimization methods without any additional calculations. Two numerical examples, a planar 10-bar structure and an aviation hydraulic piping system with 28 design variables, are analyzed to illustrate the performance and practicality of the proposed method.
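The Lagrange interpolation step in the decoupling above approximates each univariate component function from a handful of sample points. A generic sketch (node placement in the actual method follows WPEM, which is not reproduced here); a degree-2 interpolant reproduces x² exactly, as the test checks.

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)  # Lagrange basis polynomial l_i(x)
        total += yi * basis
    return total
```

Replacing each univariate function with such a polynomial is what turns the inner reliability loop into cheap deterministic evaluations.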
Forests, the largest terrestrial carbon sinks, play an important role in carbon sequestration and climate change mitigation. Although forest attributes and environmental factors have been shown to affect aboveground biomass, their influence on biomass stocks in species-rich forests in southern China, a biodiversity hotspot, has rarely been investigated. In this study, we characterized the effects of environmental factors, forest structure, and species diversity on the aboveground biomass stocks of 30 plots (1 ha each) in natural forests located within seven nature reserves distributed across the subtropical and marginal tropical zones of Guangxi, China. Our results indicate that forest aboveground biomass stocks in this region are lower than those in mature tropical and subtropical forests in other regions. Furthermore, we found that aboveground biomass was positively correlated with stand age, mean annual precipitation, elevation, structural attributes, and species richness, although not with species evenness. When we compared stands with the same basal area, we found that the aboveground biomass stock was higher in communities with a higher coefficient of variation of diameter at breast height. These findings highlight the importance of maintaining forest structural diversity and species richness to promote aboveground biomass accumulation, and they reveal the potential impacts of precipitation changes resulting from climate warming on the ecosystem services of subtropical and northern tropical forests in China. Notably, many natural forests in southern China are not fully stocked; their continued growth will therefore increase their carbon storage over time.
The technological revolution has spawned a new generation of industrial systems, but it has also imposed higher requirements on the accuracy, timeliness, and systematic scope of safety management. Risk assessment needs to evolve to address existing and future challenges by considering new demands and advancements in safety management. This study proposes a systematic and comprehensive risk assessment method to meet the needs of process system safety management. The methodology first incorporates possibility, severity, and dynamicity (PSD) to structure the “51X” evaluation indicator system, covering inherent, management, and disturbance risk factors. Subsequently, a four-tier (risk point-unit-enterprise-region) risk assessment (RA) mathematical model is established to account for supervision needs. Finally, the application of the PSD-RA method to ammonia refrigeration workshop cases and safety risk monitoring systems is presented to illustrate its feasibility and effectiveness in safety management. The findings show that the PSD-RA method integrates well with the informatization of safety work and is also helpful for implementing the enterprise's safety responsibilities and the government's safety supervision responsibilities.
The presence of numerous uncertainties in hybrid decision information systems (HDISs) renders attribute reduction a formidable task. Currently available attribute reduction algorithms, including those based on Pawlak attribute importance, the Skowron discernibility matrix, and information entropy, struggle to effectively manage multiple uncertainties simultaneously in HDISs, such as the precise measurement of disparities between nominal attribute values, and attributes with fuzzy boundaries and abnormal values. To address these issues, this paper studies attribute reduction within HDISs. First, a novel metric based on the decision attribute, termed the supervised distance, is introduced to accurately quantify the differences between nominal attribute values. Then, based on this metric, a novel fuzzy relationship is defined from the perspective of “feedback on parity of attribute values to attribute sets”; this fuzzy relationship serves as a valuable tool in addressing the challenges posed by abnormal attribute values. Furthermore, leveraging the new fuzzy relationship, a fuzzy conditional information entropy is defined to address the challenges posed by fuzzy attributes; it effectively quantifies the uncertainty associated with fuzzy attribute values, thereby providing a robust framework for handling fuzzy information in hybrid information systems. Finally, an attribute reduction algorithm utilizing the fuzzy conditional information entropy is presented. Experimental results on 12 datasets show that the average reduction rate of our algorithm reaches 84.04%, and that classification accuracy is improved by 3.91% compared to the original datasets and by an average of 11.25% compared to 9 other state-of-the-art reduction algorithms. These results clearly indicate that our algorithm is highly effective in managing the intricate uncertainties inherent in hybrid data.
The estimation of covariance matrices is very important in many fields, such as statistics. In real applications, data are frequently affected by high dimensionality and noise, yet most relevant studies assume complete data. This paper studies the optimal estimation of high-dimensional covariance matrices from missing and noisy samples under the norm. First, a model with sub-Gaussian additive noise is presented. The generalized sample covariance is then modified to define a hard thresholding estimator, and the minimax upper bound is derived. After that, the minimax lower bound is derived, and it is concluded that the proposed estimator is rate-optimal. Finally, numerical simulations are performed. The results show that for missing samples with sub-Gaussian noise, if the true covariance matrix is sparse, the hard thresholding estimator outperforms the traditional estimation method.
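The hard thresholding step can be sketched as entrywise truncation of the (generalized) sample covariance: small off-diagonal entries are zeroed, exploiting sparsity of the true matrix. The threshold τ below is a free parameter; in theory it would scale like √(log p / n), which is assumed here rather than derived from the paper's bounds.

```python
import numpy as np

def hard_threshold(S, tau):
    """Zero out small off-diagonal entries of a sample covariance matrix,
    keeping the diagonal (variances) intact."""
    T = np.where(np.abs(S) >= tau, S, 0.0)
    np.fill_diagonal(T, np.diag(S))  # variances are always retained
    return T
```

Truncating entries dominated by noise is what lets the estimator beat the raw sample covariance when the truth is sparse.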
This resolution V (2^(5−1) fractional factorial) study aimed to develop an understanding of the interactions between different geometries and the resulting Radar Cross Section (RCS) of a target. The results are in line with the general understanding of the impact different geometries have on RCS, but they show that geometries can also influence the variance of the measured RCS: the typical attributes that reduce RCS increase the variance of the measured RCS. Notably, an increased angle between the front face of a plate and the direction of the radar signal decreased the RCS but increased its variance.
The domestic space can be defined as the sphere that articulates the needs for subjective containment and contextual stimuli. In this sense, questions arise about the indispensable attributes that spaces must possess for this articulation to take place adequately. Architecture, as the discipline in charge of satisfying the specific spatial needs of those who inhabit these spaces and, in a broader sense, as a concrete contribution to society, must address this relationship in all its complexity and generate concrete responses that incorporate the appropriate spatial attributes during the design process. The design processes that shape living spaces confront this dialectic, and the manner in which they do so gives those spaces identity and character. It is believed that the more variables are contemplated and weighted, the better spaces can accommodate the changing dynamics of the people who inhabit them. This article presents a thorough analysis of these spatial attributes, in parallel with the definition of each as a particular condition for design, based on their conceptualization, breakdown, and articulation. Conceptually, the following attributes are addressed: flexibility, adaptability, variability, versatility, multiplicity, plurality, integrality, gradualness, incrementality, progressiveness, independence, connectivity, intimacy, and privacy. Each of these attributes is valued as a contribution to creating adequate habitability in contextual terms, with consideration of possible integrations and combinations.
For neighborhood rough set attribute reduction algorithms based on the dependency degree, a neighborhood computation method incorporating attribute weights and a neighborhood rough set attribute reduction algorithm using discernibility as the heuristic information are proposed. The reduction algorithm comprehensively considers the dependency degree and the neighborhood granulation degree of attributes, allowing a more accurate measurement of attribute importance. Example analyses and experimental results demonstrate the feasibility and effectiveness of the algorithm.
AVO (amplitude variation with offset) technology is widely used in gas hydrate research. The BSR (bottom-simulating reflector), caused by the large difference in wave impedance between the hydrate reservoir and the underlying free gas reservoir, marks the bottom boundary of the hydrate reservoir, and analyzing the AVO attributes of the BSR can help evaluate hydrate reservoirs. However, the Zoeppritz equation, the theoretical basis of conventional AVO technology, has inherent limitations: it does not consider the influence of thin-layer thickness on reflection coefficients, and its approximations assume that the wave-impedance contrast across the interface is small. These assumptions are inconsistent with the occurrence characteristics of natural gas hydrate. The Brekhovskikh equation, which is better suited to thin-layer reflection coefficient calculation, is therefore used as the theoretical basis for AVO analysis. The reflection coefficients calculated by the Brekhovskikh equation are complex numbers with phase angles, so attributes describing how the reflection coefficient and its phase angle change with offset are used to analyze the hydrate reservoir's porosity, saturation, and thickness. Finally, the random forest algorithm is used to predict the porosity, hydrate saturation, and thickness of the hydrate reservoir. On synthetic data, the inversion results based on the four Brekhovskikh-equation attributes are better than the conventional results based on the two Zoeppritz attributes, and the thickness can be accurately predicted. The proposed method also achieves good results when applied to the Blake Ridge data. According to this method, the hydrate reservoir in the area has high porosity (more than 50%) and medium saturation (between 10% and 20%), with thickness mainly between 200 m and 300 m, consistent with previous results obtained by velocity analysis.
In this paper, an Observation Points Classifier Ensemble (OPCE) algorithm is proposed to deal with High-Dimensional Imbalanced Classification (HDIC) problems, based on data processed using the Multi-Dimensional Scaling (MDS) feature extraction technique. First, the dimensionality of the original imbalanced data is reduced using MDS so that distances between any two different samples are preserved as well as possible. Second, a novel OPCE algorithm is applied to classify imbalanced samples by placing optimised observation points in a low-dimensional data space. Third, optimisation of the observation point mappings is carried out to obtain a reliable assessment of unknown samples. Exhaustive experiments have been conducted to evaluate the feasibility, rationality, and effectiveness of the proposed OPCE algorithm using seven benchmark HDIC data sets. The experimental results show that: (1) the OPCE algorithm can be trained faster on low-dimensional imbalanced data than on high-dimensional data; (2) the OPCE algorithm can correctly identify samples as the number of optimised observation points is increased; and (3) statistical analysis reveals that OPCE yields better HDIC performance on the selected data sets than eight other HDIC algorithms. This demonstrates that OPCE is a viable algorithm for HDIC problems.
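The distance-preserving MDS step above can be sketched with classical (Torgerson) scaling: double-center the squared distance matrix and keep the top-k eigenpairs. The abstract does not specify this exact variant, so treat it as one concrete instance; for points that truly lie in a k-dimensional space, pairwise distances are recovered exactly, as the collinear-points test shows.

```python
import numpy as np

def classical_mds(D, k):
    """Embed an n x n distance matrix D into k dimensions (classical MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]             # top-k eigenpairs
    L = np.sqrt(np.clip(w[idx], 0.0, None))
    return V[:, idx] * L                      # n x k coordinates
```

The classifier then places its observation points in this low-dimensional space, which is what makes training faster than on the raw high-dimensional data.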
Due to the mobility of users in an organization, the inclusion of dynamic attributes such as time and location is a major challenge in Ciphertext-Policy Attribute-Based Encryption (CP-ABE). Considering this challenge, we present dynamic time and location information in CP-ABE with multi-authorization. At first, along with the users' attribute sets, their corresponding locations are also embedded; Geohash is used to encode the latitude and longitude of the user's position. Then, the decryption time period and access time period of users are defined using a new time tree (NTT) structure. The NTT sets the encrypted duration of the encrypted data and the valid access time of the private key on the data user's private key. Besides, the single authorization of the attribute authority (AA) is extended to multi-authorization to enhance the effectiveness of key generation. Simulation results show that the proposed CP-ABE achieves better encryption time, decryption time, security level, and memory usage; specifically, its encryption and decryption times are reduced by 19% and 16%, respectively, compared with the existing CP-ABE scheme.
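Geohash encoding, used above to fold user location into the attribute set, interleaves longitude and latitude bisection bits and maps each 5-bit group to a base-32 character; a shared prefix implies spatial proximity. A standard sketch of the encoder (the scheme's precision parameter is not specified in the abstract, so 11 characters is an assumption):

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # geohash base-32 alphabet

def geohash_encode(lat, lon, precision=11):
    """Encode a latitude/longitude pair as a geohash string."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    even, bit_count, ch, out = True, 0, 0, []
    while len(out) < precision:
        # alternate bisecting longitude (even bits) and latitude (odd bits)
        rng, val = (lon_rng, lon) if even else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2
        ch <<= 1
        if val >= mid:
            ch |= 1
            rng[0] = mid
        else:
            rng[1] = mid
        even = not even
        bit_count += 1
        if bit_count == 5:           # 5 bits -> one base-32 character
            out.append(BASE32[ch])
            bit_count, ch = 0, 0
    return "".join(out)
```

Truncating the hash coarsens the location, which is convenient for expressing location policies at different granularities.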
Decision implication is a form of decision knowledge representation that avoids generating attribute implications between condition attributes or between decision attributes. Compared with other forms of decision knowledge representation, decision implication has a stronger knowledge representation capability. Attribute granularization can facilitate knowledge extraction at different attribute granularity layers and is thus of practical significance. The decision implication canonical basis (DICB) is the most compact set of decision implications, which can efficiently represent all knowledge in a decision context. In order to mine all decision information in a decision context under attribute granulation, this paper proposes a method for updating the DICB. To this end, the paper reduces the update of the DICB to the updates of decision premises after deleting an attribute and after adding granulation attributes of some attributes. On this basis, the paper analyzes the changes of decision premises, examines their properties, designs an algorithm for incrementally generating the DICB, and verifies its effectiveness through experiments. In practice, with the DICB update algorithm, users can obtain all decision knowledge in a decision context after attribute granularization.
There is a constant search for biomaterials from natural products such as plants for food and industrial applications. The work embodied in this report investigated the effects of microwave-assisted and Soxhlet extraction (MAE and SE) techniques on the functional and physicochemical quality characteristics of Moringa oleifera seed oil and protein extracts. M. oleifera seeds were ground to fine powders, and oil was extracted by microwave-assisted and Soxhlet extraction using petroleum ether. Quality attributes including yield percentage, moisture content, iodine value, saponification value, specific gravity, viscosity, pH, thiobarbituric acid value, acid value, and peroxide value were measured. Mineral and vitamin contents, chemical/functional groups, fatty acid (FA) composition, and the reducing power of the oil were evaluated. The metabolome of protein extracted from the defatted powders was analyzed by nuclear magnetic resonance (NMR). M. oleifera oil from the MAE and SE methods had good yields (34.25 ± 0.0% and 28.75 ± 0.0%) and low moisture content (0.008 ± 0.0% and 0.011 ± 0.0%); was non-drying, unsaturated, moderately saponified, and less dense (0.91 ± 0.01 and 0.92 ± 0.02 g mL^(-1)); exhibited Newtonian flow; was weakly acidic; showed a good content of FAs; displayed strong potential for a long shelf life; was stable against oxidative rancidity and enzymatic hydrolysis; had very rich deposits of micro- and macro-nutrients as well as water-soluble and lipid-soluble vitamins; and its functional groups reflected its content of long- and medium-chain triglycerides (LCT and MCT). Monounsaturated and saturated fatty acids (MUFA and SFA) were detected, and the oil has excellent ferric ion reducing power. NMR metabolomic assay revealed the presence of nine essential amino acids (EAAs) in the protein extract. The MAE technique is a feasible and acceptable alternative for high-throughput extraction of M. oleifera oil with high yield and excellent quality attributes. The study revealed that MAE did not impart any remarkable advantage on the physicochemical properties of M. oleifera seed oil and protein compared to the SE technique.
As a crucial data preprocessing method in data mining, feature selection (FS) can be regarded as a bi-objective optimization problem that aims to maximize classification accuracy and minimize the number of selected features. Evolutionary computing (EC) is promising for FS owing to its powerful search capability. However, in traditional EC-based methods, feature subsets are represented via a fixed-length individual encoding, which is ineffective for high-dimensional data because it results in a huge search space and prohibitive training time. This work proposes a length-adaptive non-dominated sorting genetic algorithm (LA-NSGA) with a variable-length individual encoding and a length-adaptive evolution mechanism for bi-objective high-dimensional FS. In LA-NSGA, an initialization method based on correlation and redundancy is devised to initialize individuals of diverse lengths, and a Pareto dominance-based length change operator is introduced to guide individuals to adaptively explore the promising search space. Moreover, a dominance-based local search method is employed for further improvement. Experimental results on 12 high-dimensional gene datasets show that the Pareto front of feature subsets produced by LA-NSGA is superior to those of existing algorithms.
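The Pareto dominance relation that drives both NSGA-style sorting and LA-NSGA's length-change operator can be sketched as follows for minimizing (classification error, subset size); the objective values in the test are illustrative.

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(points):
    """Non-dominated (Pareto) front of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

The front returned here is the kind of set the paper compares across algorithms: subsets that cannot be improved in one objective without worsening the other.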
Abstract: Reservoir heterogeneities play a crucial role in governing reservoir performance and management. Traditionally, detailed inter-well heterogeneity analyses are performed by mapping seismic facies changes in the seismic data, which is a time-intensive task. Many researchers have utilized robust grey-level co-occurrence matrix (GLCM)-based texture attributes to map reservoir heterogeneity. However, these attributes take seismic data as input and might not be sensitive to lateral lithology variation. To incorporate lithology information, we have developed an impedance-based texture approach using the GLCM workflow, integrating a 3D acoustic impedance volume (a rock-property-based attribute) obtained from a deep convolutional network-based impedance inversion. The proposed workflow is anticipated to be more sensitive to lateral changes than the conventional amplitude-based texture approach, in which seismic data is used as input. To evaluate the improvement, we applied the proposed workflow to full-stack 3D seismic data from the Poseidon field, NW Shelf, Australia. This study demonstrates that a better demarcation of reservoir gas sands with improved lateral continuity is achievable with the presented approach compared to the conventional one. In addition, we assess the implications of multi-stage faulting on facies distribution for effective reservoir characterization. The study also suggests a well-bounded potential reservoir facies distribution along the parallel fault lines. Thus, the proposed approach provides an efficient strategy for integrating impedance information with texture attributes to improve inferences on reservoir heterogeneity, and it can serve as a promising tool for identifying potential reservoir zones for both production benefits and fluid storage.
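As a rough intuition for the texture attributes used above, here is a minimal pure-Python sketch of a GLCM and two classic GLCM statistics (contrast and energy) computed on a small quantized 2D slice. The grid values and quantization level are illustrative, not from the paper.

```python
from collections import Counter

def glcm(img, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one pixel offset (dx, dy).
    img is a 2D list of integer grey levels (e.g. quantized impedance)."""
    counts = Counter()
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[(img[r][c], img[r2][c2])] += 1
    total = sum(counts.values())
    return {pair: n / total for pair, n in counts.items()}

def contrast(p):   # high where neighbouring levels differ -> heterogeneous texture
    return sum(prob * (i - j) ** 2 for (i, j), prob in p.items())

def energy(p):     # high for uniform texture
    return sum(prob ** 2 for prob in p.values())

# Illustrative quantized slice: left half uniform, right half alternating.
slice_ = [[0, 0, 3, 0], [0, 0, 0, 3], [0, 0, 3, 0], [0, 0, 0, 3]]
p = glcm(slice_)
print(round(contrast(p), 6))  # -> 4.5
```

In the workflow above, the same statistics would be computed in a sliding window over the inverted impedance volume rather than over seismic amplitudes.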
Funding: Supported by the National Natural Science Foundation of China (Nos. 62006001, 62372001) and the Natural Science Foundation of Chongqing (Grant No. CSTC2021JCYJ-MSXMX0002).
Abstract: Because social networks contain large amounts of personally sensitive information, privacy preservation in social networks has attracted the attention of many scholars. Inspired by the self-nonself discrimination paradigm of the biological immune system, the negative representation of information offers simplicity and efficiency, making it well suited to preserving social network privacy. We therefore propose a method, called AttNetNRI, to preserve both the topology privacy and the node-attribute privacy of attributed social networks. Specifically, a negative-survey-based method is developed to disturb the relationships between nodes in the social network so that the topology can be kept private. Moreover, a negative-database-based method is proposed to hide node attributes, so that their privacy is preserved while still supporting similarity estimation between different node attributes, which is crucial to the analysis of social networks. To evaluate AttNetNRI, empirical studies were conducted on various attributed social networks, comparing against several state-of-the-art methods tailored to preserving the privacy of social networks. The experimental results show the superiority of the developed method in preserving the privacy of attributed social networks and demonstrate the effectiveness of the topology-disturbing and attribute-hiding components.
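To illustrate the negative-survey idea the topology-disturbing part builds on, here is a minimal sketch of the standard negative-survey reconstruction: each participant reports one category they do *not* belong to, and the true distribution is recovered in expectation. The counts are invented for illustration; the paper's actual perturbation of edges is more elaborate.

```python
def reconstruct(negative_counts):
    """Estimate true category counts from a negative survey.

    Each participant reports one category they do NOT belong to, chosen
    uniformly among the other c-1 categories, so in expectation
    r_i = (n - t_i) / (c - 1)  =>  t_i = n - (c - 1) * r_i.
    """
    n = sum(negative_counts)
    c = len(negative_counts)
    return [n - (c - 1) * r for r in negative_counts]

# 60 users, 3 categories, true split (30, 20, 10); the ideal negative
# counts are r_i = (n - t_i)/(c - 1) = (15, 20, 25).
print(reconstruct([15, 20, 25]))  # -> [30, 20, 10]
```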
Funding: Supported by the National Natural Science Foundation of China (U1808205) and the Hebei Natural Science Foundation (F2000501005).
Abstract: This paper studies the target controllability of multilayer complex networked systems in which the nodes are high-dimensional linear time-invariant (LTI) dynamical systems and the network topology is directed and weighted. The influence of inter-layer couplings on the target controllability of multilayer networks is discussed. It is found that even if one layer is not target controllable, the entire multilayer network can still be target controllable thanks to the inter-layer couplings. For multilayer networks with general structure, a necessary and sufficient condition for target controllability is given by establishing the relationship between the uncontrollable subspace and the output matrix. This condition shows that a system may be target controllable even if it is not state controllable. On this basis, two corollaries are derived that clarify the relationships among target controllability, state controllability, and output controllability. For multilayer networks whose inter-layer couplings are directed chains or directed stars, sufficient conditions for target controllability are given; these conditions are easier to verify than the classic criterion.
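The claim that a system can be target (output) controllable without being state controllable can be checked with the classical rank tests. This is a generic textbook sketch with an invented 2-state example, not the paper's multilayer construction: the controllability matrix [B, AB, ..., A^(n-1)B] is rank-deficient, yet C times that matrix has full row rank.

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def rank(M):
    """Exact rank via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^(n-1)B]."""
    n = len(A)
    cols, P = [row[:] for row in B], [row[:] for row in B]
    for _ in range(n - 1):
        P = matmul(A, P)
        cols = [c + p for c, p in zip(cols, P)]
    return cols

A = [[1, 0], [0, 2]]
B = [[1], [0]]
C = [[1, 0]]               # only the first state is a "target" node
Q = ctrb(A, B)
print(rank(Q))             # -> 1 (< 2: NOT state controllable)
print(rank(matmul(C, Q)))  # -> 1 (= number of outputs: output controllable)
```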
Abstract: Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice caused by emotions. The number of features acquired through acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm performs multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use information gain and the Fisher score to sort the features extracted from the signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different levels of importance to them; features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which improves the diversity of solutions and avoids local traps. Using random forest and k-nearest-neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) against other multi-objective emotion identification techniques. The results show that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
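As a sketch of the filter step, the following computes the Fisher score (between-class scatter over within-class scatter) for two hypothetical features and ranks them; the feature values and labels are invented, and the paper additionally combines this with information gain.

```python
def fisher_score(values, labels):
    """Fisher score of one feature: between-class scatter / within-class scatter."""
    mu = sum(values) / len(values)
    num = den = 0.0
    for cls in set(labels):
        xs = [v for v, l in zip(values, labels) if l == cls]
        m = sum(xs) / len(xs)
        var = sum((x - m) ** 2 for x in xs) / len(xs)
        num += len(xs) * (m - mu) ** 2
        den += len(xs) * var
    return num / den if den else float("inf")

# Feature A separates the two emotion classes; feature B is noise.
labels = [0, 0, 0, 1, 1, 1]
feat_a = [1.0, 1.1, 0.9, 3.0, 3.1, 2.9]
feat_b = [1.0, 3.0, 2.0, 1.0, 3.0, 2.0]
scores = {"A": fisher_score(feat_a, labels), "B": fisher_score(feat_b, labels)}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # -> ['A', 'B']
```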
Abstract: Attribute reduction is a research hotspot in rough set theory. Traditional heuristic attribute reduction methods add the most important attribute to the decision attribute set at each step, resulting in many redundant attribute calculations, high time consumption, and low reduction efficiency. In this paper, following the idea of sequential three-way decision classification domains, attributes are treated as the objects of a three-way division: using attribute importance and thresholds, they are divided into core attributes, relatively necessary attributes, and unnecessary attributes. Core attributes are added to the decision attribute set, unnecessary attributes are discarded, and the relatively necessary attributes are repeatedly re-divided until the reduction result is obtained. Experiments were conducted on 8 UCI datasets, and the results show that, compared to traditional reduction methods, the proposed method effectively reduces time consumption while maintaining classification performance.
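One round of the three-way split described above can be sketched as a simple threshold partition. The importance values and the thresholds alpha/beta are hypothetical placeholders; the paper derives them from attribute importance measures, and the boundary (relatively necessary) region would be re-divided in later rounds.

```python
def triage(importance, alpha=0.6, beta=0.2):
    """Three-way split of attributes by importance score (assumes alpha > beta).
    Returns (core, relatively_necessary, unnecessary)."""
    core = [a for a, s in importance.items() if s >= alpha]
    boundary = [a for a, s in importance.items() if beta <= s < alpha]
    unnecessary = [a for a, s in importance.items() if s < beta]
    return core, boundary, unnecessary

imp = {"a1": 0.9, "a2": 0.4, "a3": 0.05, "a4": 0.7}
core, boundary, unnecessary = triage(imp)
print(core, boundary, unnecessary)  # -> ['a1', 'a4'] ['a2'] ['a3']
```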
Funding: Financially supported by the National Natural Science Foundation of China (Grant No. 52104013) and the China Postdoctoral Science Foundation (Grant No. 2022T150724).
Abstract: Due to the complexity and variability of leakage zones in carbonate formations, lost-circulation prediction and control is one of the major challenges of carbonate drilling, raising well-control risks and production expenses. Taking the H oilfield as an example, this research employs seismic attributes for mud-loss prediction and produces a complete set of pre-drilling mud-loss prediction solutions. First, 16 seismic attributes are calculated from the post-stack seismic data, and the mud-loss rate per unit footage is specified. The sample set is constructed by extracting each attribute from the seismic traces surrounding 15 typical wells, with an 8:2 ratio between the training set and the test set. With the calibrated mud-loss rates per unit footage, the nonlinear mapping between seismic attributes and mud-loss rate per unit footage is established using a mixture density network (MDN) model. Then, the influence of the number of sub-Gaussians and of the uncertainty coefficient on the model's predictions is evaluated. Finally, the model is used in conjunction with downhole drilling conditions to assess the risk of mud loss in various layers and along the wellbore trajectory. The study demonstrates that the mean relative errors of the model on the training and test data are 6.9% and 7.5%, respectively, with R² of 90% and 88%. The accuracy and efficacy of mud-loss prediction can be greatly enhanced by combining the 16 seismic attributes with the mud-loss rate per unit footage and applying machine learning methods. The MDN-based mud-loss prediction model can not only predict the mud-loss rate but also objectively evaluate the prediction based on the quality of the data and the model.
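For intuition on what a mixture density network outputs, here is a sketch of evaluating the Gaussian-mixture density p(y|x) = Σ_k α_k N(y; μ_k, σ_k) that an MDN head parameterizes. The weights, means, and sigmas are invented stand-ins for what the network would predict for one input.

```python
import math

def mdn_pdf(y, weights, means, sigmas):
    """Gaussian-mixture density for one input x:
    p(y|x) = sum_k alpha_k * N(y; mu_k, sigma_k)."""
    return sum(w * math.exp(-0.5 * ((y - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
               for w, m, s in zip(weights, means, sigmas))

# Two hypothetical sub-Gaussians: a "no loss" mode near 0 and a
# "severe loss" mode near 5 (arbitrary mud-loss-rate units).
w, mu, sig = [0.7, 0.3], [0.0, 5.0], [0.5, 1.0]
print(mdn_pdf(0.0, w, mu, sig) > mdn_pdf(2.5, w, mu, sig))  # True
```

The multimodal density is what lets the model express uncertainty about the loss rate instead of a single point estimate.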
Funding: Supported by the Innovation Fund Project of the Gansu Education Department (Grant No. 2021B-099).
Abstract: The objective of reliability-based design optimization (RBDO) is to minimize the optimization objective while satisfying the corresponding reliability requirements. However, the nested-loop structure reduces the efficiency of RBDO algorithms, which hinders their application to high-dimensional engineering problems. To address these issues, this paper proposes an efficient decoupled RBDO method combining high-dimensional model representation (HDMR) and the weight-point estimation method (WPEM). First, we decouple the RBDO model using HDMR and WPEM. Second, Lagrange interpolation is used to approximate each univariate function. Finally, based on the results of the first two steps, the original nested-loop reliability optimization model is completely transformed into a deterministic design optimization model that can be solved by a series of mature constrained optimization methods without any additional calculations. Two numerical examples, a planar 10-bar structure and an aviation hydraulic piping system with 28 design variables, are analyzed to illustrate the performance and practicability of the proposed method.
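The Lagrange-interpolation step mentioned above can be sketched as follows: given a few samples of a univariate component function, build the interpolating polynomial. The sample points and target function (x²) are illustrative only.

```python
def lagrange(xs, ys):
    """Return the Lagrange interpolating polynomial through (xs[i], ys[i])."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# Recover a univariate quadratic exactly from three sample points of x^2.
f = lagrange([-1.0, 0.0, 2.0], [1.0, 0.0, 4.0])
print(f(1.5))  # ~ 2.25, up to float rounding
```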
Funding: Supported by the Guangxi Key R&D Program (Project No. AB16380254), a research project of the Guangxi Forestry Department (Guilinkezi [2015] No. 5), and a grant for Bagui Senior Fellows (C33600992001).
Abstract: Forests, the largest terrestrial carbon sinks, play an important role in carbon sequestration and climate change mitigation. Although forest attributes and environmental factors have been shown to affect aboveground biomass, their influence on biomass stocks in the species-rich forests of southern China, a biodiversity hotspot, has rarely been investigated. In this study, we characterized the effects of environmental factors, forest structure, and species diversity on the aboveground biomass stocks of 30 plots (1 ha each) in natural forests located within seven nature reserves distributed across the subtropical and marginal tropical zones of Guangxi, China. Our results indicate that forest aboveground biomass stocks in this region are lower than those of mature tropical and subtropical forests in other regions. Furthermore, we found that aboveground biomass was positively correlated with stand age, mean annual precipitation, elevation, structural attributes, and species richness, although not with species evenness. When comparing stands with the same basal area, aboveground biomass stock was higher in communities with a higher coefficient of variation of diameter at breast height. These findings highlight the importance of maintaining forest structural diversity and species richness to promote aboveground biomass accumulation, and they reveal the potential impacts of precipitation changes resulting from climate warming on the ecosystem services of subtropical and northern tropical forests in China. Notably, many natural forests in southern China are not fully stocked, so their continued growth will increase their carbon storage over time.
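The structural-diversity measure referenced above, the coefficient of variation (CV) of diameter at breast height (DBH), is simply the standard deviation divided by the mean. A minimal sketch with invented DBH values:

```python
def cv(xs):
    """Coefficient of variation: population standard deviation / mean."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return var ** 0.5 / mean

# Illustrative DBH lists (cm): a structurally uniform stand vs a diverse one.
even_dbh = [20.0, 20.0, 20.0, 20.0]
mixed_dbh = [5.0, 10.0, 25.0, 34.0]
print(cv(even_dbh) < cv(mixed_dbh))  # True: the mixed stand has higher DBH variation
```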
Funding: Supported by a 2017 key technology project for the prevention and control of major workplace safety accidents from the State Administration of Work Safety of China (research on identification and assessment technology and a control system for major enterprise risks in the prevention and control of severe accidents, Hubei-0002-2017AQ), and by the Department of Emergency Management of Hubei Province, Wuhan 430064, China.
Abstract: The technological revolution has spawned a new generation of industrial systems, but it has also raised the requirements for the accuracy, timeliness, and systematic coverage of safety management. Risk assessment must evolve to address existing and future challenges, considering new demands and advancements in safety management. This study proposes a systematic and comprehensive risk assessment method to meet the needs of process-system safety management. The methodology first incorporates possibility, severity, and dynamicity (PSD) to structure the "51X" evaluation indicator system, covering inherent, management, and disturbance risk factors. Subsequently, a four-tier (risk point-unit-enterprise-region) risk assessment (RA) mathematical model is established to account for supervision needs. Finally, the application of the PSD-RA method to ammonia refrigeration workshop cases and safety risk monitoring systems is presented to illustrate its feasibility and effectiveness in safety management. The findings show that the PSD-RA method integrates well with the informatization of safety work and helps implement both the enterprise's safety responsibilities and the government's safety supervision responsibilities.
Funding: Anhui Province Natural Science Research Project of Colleges and Universities (2023AH040321) and Excellent Scientific Research and Innovation Team of Anhui Colleges (2022AH010098).
Abstract: The presence of numerous uncertainties in hybrid decision information systems (HDISs) renders attribute reduction a formidable task. Currently available attribute reduction algorithms, including those based on Pawlak attribute importance, the Skowron discernibility matrix, and information entropy, struggle to manage multiple uncertainties in HDISs simultaneously, such as the precise measurement of disparities between nominal attribute values, attributes with fuzzy boundaries, and abnormal values. To address these issues, this paper studies attribute reduction within HDISs. First, a novel metric based on the decision attribute, named the supervised distance, is introduced to solve the problem of accurately measuring the differences between nominal attribute values. Then, based on this metric, a novel fuzzy relationship is defined from the perspective of "feedback on the parity of attribute values to attribute sets"; this relationship serves as a valuable tool for handling abnormal attribute values. Furthermore, leveraging the new fuzzy relationship, a fuzzy conditional information entropy is defined to address the challenges posed by fuzzy attributes: it quantifies the uncertainty associated with fuzzy attribute values, providing a robust framework for handling fuzzy information in hybrid information systems. Finally, an attribute reduction algorithm utilizing the fuzzy conditional information entropy is presented. Experimental results on 12 datasets show that the average reduction rate of our algorithm reaches 84.04%, and that classification accuracy is improved by 3.91% compared to the original datasets and by an average of 11.25% compared to 9 other state-of-the-art reduction algorithms. A comprehensive analysis of these results clearly indicates that our algorithm is highly effective at managing the intricate uncertainties inherent in hybrid data.
Abstract: The estimation of covariance matrices is very important in many fields, such as statistics. In real applications, data are frequently affected by high dimensionality and noise; however, most relevant studies are based on complete data. This paper studies the optimal estimation of high-dimensional covariance matrices based on missing and noisy samples under the norm. First, the model with sub-Gaussian additive noise is presented. The generalized sample covariance is then modified to define a hard thresholding estimator, and its minimax upper bound is derived. After that, the minimax lower bound is derived, and it is concluded that the proposed estimator is rate-optimal. Finally, numerical simulations are performed. The results show that, for missing samples with sub-Gaussian noise, if the true covariance matrix is sparse, the hard thresholding estimator outperforms the traditional estimation method.
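The hard-thresholding step itself is simple to illustrate: off-diagonal entries of the (generalized) sample covariance whose magnitude falls below a threshold are set to zero, exploiting sparsity of the true covariance. The matrix and threshold below are invented; the paper's estimator additionally corrects for missingness and noise before thresholding.

```python
def hard_threshold(cov, tau):
    """Zero out off-diagonal entries with |value| <= tau; keep the diagonal."""
    return [[v if (i == j or abs(v) > tau) else 0.0 for j, v in enumerate(row)]
            for i, row in enumerate(cov)]

# Noisy sample covariance of a sparse truth: only the (0, 1) entry is signal.
S = [[1.00, 0.45, 0.03],
     [0.45, 1.00, -0.02],
     [0.03, -0.02, 1.00]]
print(hard_threshold(S, tau=0.1))
# small spurious entries are zeroed; the large (0, 1) entry survives
```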
Abstract: This resolution V (2^(5-1) fractional factorial) study aimed to develop an understanding of the interactions between different geometries and their effect on the resulting radar cross section (RCS) of a target. The results are in line with the general understanding of how different geometries affect RCS, but they also show that geometry can influence the variance of the measured RCS: attributes that typically reduce RCS tend to increase the variance of the measured RCS. Notably, an increased angle between the front face of a plate and the direction of the radar signal decreased the RCS but increased its variance.
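A 2^(5-1) resolution V design like the one above can be generated by running a full factorial in four factors and aliasing the fifth with their four-way interaction (generator E = ABCD, defining relation I = ABCDE). A minimal sketch, with the generator chosen as the standard one (the study may have used a different labeling):

```python
from itertools import product

def fractional_factorial_2_5_1():
    """16-run 2^(5-1) design with generator E = ABCD (resolution V)."""
    return [(a, b, c, d, a * b * c * d) for a, b, c, d in product((-1, 1), repeat=4)]

runs = fractional_factorial_2_5_1()
print(len(runs))  # -> 16
print(all(a * b * c * d * e == 1 for a, b, c, d, e in runs))  # True: I = ABCDE
```

Resolution V means main effects and two-factor interactions are not aliased with each other, which is what allows the geometry-interaction conclusions above.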
Abstract: The domestic space can be defined as the sphere that articulates the need for subjective containment with contextual stimuli. In this sense, questions arise about the indispensable attributes that spaces must possess for this articulation to take place adequately. Architecture, as the discipline charged with satisfying the specific spatial needs of those who inhabit these spaces and, in a broader sense, with making a concrete contribution to society, must address this relationship in all its complexity and generate concrete responses that incorporate the appropriate spatial attributes during the design process. The design processes that shape living spaces confront this dialectic, and the manner in which they do so gives those spaces identity and character. It is believed that the more variables are contemplated and weighted, the better spaces can accommodate the changing dynamics of the people who inhabit them. This article presents a thorough analysis of these spatial attributes, defining each one as a particular condition for design on the basis of its conceptualization, breakdown, and articulation. Conceptually, the following attributes are addressed: flexibility, adaptability, variability, versatility, multiplicity, plurality, integrality, gradualness, incrementality, progressiveness, independence, connectivity, intimacy, and privacy. Each of these attributes is valued as a contribution to creating adequate habitability in contextual terms, with consideration of possible integrations and combinations.
Funding: Anhui Provincial University Research Project (Project No. 2023AH051659), Tongling University Talent Research Initiation Fund Project (Project No. 2022tlxyrc31), Tongling University School-Level Scientific Research Project (Project No. 2021tlxytwh05), and Tongling University Horizontal Project (Project No. 2023tlxyxdz237).
Abstract: For neighborhood rough set attribute reduction algorithms based on the dependency degree, a neighborhood computation method incorporating attribute weights and a neighborhood rough set attribute reduction algorithm using discernibility as the heuristic information are proposed. The reduction algorithm comprehensively considers the dependency degree and the neighborhood granulation degree of attributes, allowing a more accurate measurement of attribute importance. Example analyses and experimental results demonstrate the feasibility and effectiveness of the algorithm.
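For readers unfamiliar with the dependency degree in neighborhood rough sets, here is a minimal unweighted sketch: a sample lies in the positive region when every sample in its delta-neighborhood shares its decision label, and the dependency degree is the fraction of such samples. The data, radius, and Chebyshev metric are illustrative; the proposed algorithm additionally weights attributes.

```python
def dependency_degree(samples, labels, delta):
    """Fraction of samples whose delta-neighborhood is pure in the decision label."""
    def dist(x, y):
        return max(abs(a - b) for a, b in zip(x, y))  # Chebyshev neighborhood

    pos = 0
    for x in samples:
        nbr = {labels[j] for j, y in enumerate(samples) if dist(x, y) <= delta}
        if len(nbr) == 1:
            pos += 1
    return pos / len(samples)

X = [(0.0,), (0.1,), (0.5,), (0.9,), (1.0,)]
y = ["a", "a", "a", "b", "b"]
print(dependency_degree(X, y, delta=0.15))  # -> 1.0: consistent at this radius
print(dependency_degree(X, y, delta=0.5))   # -> 0.4: larger granules mix classes
```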
Funding: The research is funded by the National Natural Science Foundation of China (No. 12171455), the Original Innovation Research Program of the Chinese Academy of Sciences (CAS) under grant number ZDBS-LY-DQC003, and the Key Research Program IGGCAS-2019031.
Abstract: AVO (amplitude variation with offset) technology is widely used in gas hydrate research. The BSR (bottom-simulating reflector), caused by the large wave-impedance contrast between the hydrate reservoir and the underlying free-gas reservoir, marks the bottom boundary of the hydrate reservoir, and analyzing the AVO attributes of the BSR can help evaluate hydrate reservoirs. However, the Zoeppritz equation, the theoretical basis of conventional AVO technology, has inherent limitations: it does not consider the influence of thin-layer thickness on reflection coefficients, and its approximations assume that the wave-impedance contrast across the interface is small. These assumptions are inconsistent with the occurrence characteristics of natural gas hydrate. The Brekhovskikh equation, which is better suited to computing thin-layer reflection coefficients, is therefore used as the theoretical basis for AVO analysis. The reflection coefficients calculated by the Brekhovskikh equation are complex numbers with phase angles, so attributes describing how the reflection coefficient and its phase angle change with offset are used to analyze the hydrate reservoir's porosity, saturation, and thickness. Finally, a random forest algorithm is used to predict the reservoir porosity, hydrate saturation, and thickness. On synthetic data, inversion based on the four attributes of the Brekhovskikh equation outperforms conventional inversion based on the two Zoeppritz attributes, and thickness can be predicted accurately. The proposed method also achieves good results when applied to Blake Ridge data: the hydrate reservoir in the area has high porosity (more than 50%) and medium saturation (between 10% and 20%), with thickness mainly between 200 m and 300 m, consistent with previous results obtained by velocity analysis.
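The impedance-contrast intuition behind the BSR can be shown with the simplest case, the normal-incidence reflection coefficient at a single interface (the paper itself uses the Brekhovskikh thin-layer formulation, which this sketch deliberately omits). The impedance values are illustrative, not measured:

```python
def reflection_coefficient(z1, z2):
    """Normal-incidence reflection coefficient at an impedance contrast
    (z1 above the interface, z2 below)."""
    return (z2 - z1) / (z2 + z1)

# Hydrate-bearing sediment over free gas: impedance drops sharply, giving
# a strong negative-polarity BSR (illustrative values, 10^6 kg m^-2 s^-1).
z_hydrate, z_free_gas = 4.2, 2.4
r = reflection_coefficient(z_hydrate, z_free_gas)
print(round(r, 3))  # -> -0.273
```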
Funding: National Natural Science Foundation of China, Grant/Award Number: 61972261; Basic Research Foundations of Shenzhen, Grant/Award Numbers: JCYJ20210324093609026, JCYJ20200813091134001.
Abstract: In this paper, an Observation Points Classifier Ensemble (OPCE) algorithm is proposed to deal with high-dimensional imbalanced classification (HDIC) problems, operating on data processed with the multi-dimensional scaling (MDS) feature extraction technique. First, the dimensionality of the original imbalanced data is reduced using MDS so that distances between any two different samples are preserved as well as possible. Second, the novel OPCE algorithm is applied to classify imbalanced samples by placing optimized observation points in the low-dimensional data space. Third, the observation-point mappings are optimized to obtain a reliable assessment of unknown samples. Exhaustive experiments have been conducted to evaluate the feasibility, rationality, and effectiveness of the proposed OPCE algorithm using seven benchmark HDIC datasets. The experimental results show that (1) OPCE can be trained faster on low-dimensional imbalanced data than on high-dimensional data; (2) OPCE can correctly identify samples as the number of optimized observation points is increased; and (3) statistical analysis reveals that OPCE yields better HDIC performance on the selected datasets than eight other HDIC algorithms. This demonstrates that OPCE is a viable algorithm for HDIC problems.
Abstract: Due to the mobility of users within an organization, the inclusion of dynamic attributes such as time and location is a major challenge in ciphertext-policy attribute-based encryption (CP-ABE). Considering this challenge, we present dynamic time and location information in CP-ABE with multi-authorization. First, the user's location is embedded along with their set of attributes; Geohash is used to encode the latitude and longitude of the user's position. Then, the decryption time period and access time period of users are defined using a new time tree (NTT) structure. The NTT sets the encrypted duration of the encrypted data and the valid access time on the data user's private key. In addition, the single authorization of the attribute authority (AA) is extended to multi-authorization to enhance the effectiveness of key generation. Simulation results show that the proposed CP-ABE achieves better encryption time, decryption time, security level, and memory usage; specifically, its encryption and decryption times are 19% and 16% lower, respectively, than those of the existing CP-ABE scheme.
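The Geohash encoding used for the location attribute interleaves bisection bits of longitude and latitude and packs them into a base-32 string; nearby positions share a prefix, which is what makes the code usable as a hierarchical location attribute. A minimal sketch of the standard algorithm (independent of the paper's scheme):

```python
_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lon, precision=7):
    """Standard Geohash: interleave longitude/latitude bisection bits."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits, bit_count, even, code = 0, 0, True, []
    while len(code) < precision:
        if even:  # longitude bit
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                bits, lon_lo = (bits << 1) | 1, mid
            else:
                bits, lon_hi = bits << 1, mid
        else:     # latitude bit
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                bits, lat_lo = (bits << 1) | 1, mid
            else:
                bits, lat_hi = bits << 1, mid
        even = not even
        bit_count += 1
        if bit_count == 5:  # 5 bits -> one base-32 character
            code.append(_BASE32[bits])
            bits, bit_count = 0, 0
    return "".join(code)

print(geohash(57.64911, 10.40744, precision=11))  # -> 'u4pruydqqvj'
```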
Funding: Supported by the National Natural Science Foundation of China (Nos. 61972238, 62072294).
Abstract: Decision implication is a form of decision knowledge representation that avoids generating attribute implications between condition attributes or between decision attributes. Compared with other forms of decision knowledge representation, decision implication has a stronger knowledge representation capability. Attribute granularization can facilitate knowledge extraction at different attribute granularity layers and is thus of practical significance. The decision implication canonical basis (DICB) is the most compact set of decision implications, which can efficiently represent all knowledge in a decision context. In order to mine all decision information in a decision context under attribute granulation, this paper proposes an update method for the DICB. To this end, the paper reduces the update of the DICB to updates of the decision premises after deleting an attribute and after adding granulation attributes of some attributes. On this basis, the paper analyzes the changes in decision premises, examines their properties, designs an algorithm for incrementally generating the DICB, and verifies its effectiveness through experiments. In practice, with the DICB update algorithm, users can obtain all decision knowledge of a decision context after attribute granularization.
Funding: Funded by an International Foundation for Science (IFS) and Organisation for the Prohibition of Chemical Weapons (OPCW) research grant awarded to Dr. Chukwuebuka Emmanuel Umeyor in 2019 (Grant No. I-2-F-6448-1).
Abstract: There is a constant search for biomaterials from natural products such as plants for food and industrial applications. The work reported here investigated the effects of microwave-assisted and Soxhlet extraction (MAE and SE) techniques on the functional and physicochemical quality characteristics of Moringa oleifera seed oil and protein extracts. M. oleifera seeds were ground to fine powders, and oil was extracted by the MAE and SE techniques using petroleum ether. Quality attributes including yield percentage, moisture content, iodine value, saponification value, specific gravity, viscosity, pH, thiobarbituric acid value, acid value, and peroxide value were measured. Mineral and vitamin contents, chemical/functional groups, fatty acid (FA) composition, and reducing power of the oil were evaluated. Metabolomics of the protein extracted from the defatted powders was analyzed by nuclear magnetic resonance (NMR). M. oleifera oil from the MAE and SE methods had good yields (34.25 ± 0.0% and 28.75 ± 0.0%) and low moisture contents (0.008 ± 0.0% and 0.011 ± 0.0%); it was non-drying and unsaturated, moderately saponified, and less dense (0.91 ± 0.01 and 0.92 ± 0.02 g mL^(-1)); it showed Newtonian flow, was weakly acidic, had a good content of FAs, showed strong potential for a long shelf-life, and was stable against oxidative rancidity and enzymatic hydrolysis; it was very rich in micro- and macro-nutrients as well as water-soluble and lipid-soluble vitamins; and the functional groups in the oil reflected its content of long- and medium-chain triglycerides (LCT and MCT). Monounsaturated and saturated fatty acids (MUFA and SFA) were detected, and the oil had excellent ferric-ion reducing power. NMR metabolomic assay revealed the presence of nine essential amino acids (EAAs) in the protein extract. The MAE technique is a feasible and acceptable alternative for high-throughput extraction of M. oleifera oil with high yield and excellent quality attributes. The study revealed that MAE did not impart any remarkable advantage on the physicochemical properties of M. oleifera seed oil and protein compared to the SE technique.
Funding: Supported in part by the National Natural Science Foundation of China (62172065, 62072060).
Abstract: As a crucial data preprocessing step in data mining, feature selection (FS) can be regarded as a bi-objective optimization problem that aims to maximize classification accuracy and minimize the number of selected features. Evolutionary computation (EC) is promising for FS owing to its powerful search capability. However, traditional EC-based methods represent feature subsets via a length-fixed individual encoding, which is ineffective for high-dimensional data because it results in a huge search space and prohibitive training time. This work proposes a length-adaptive non-dominated sorting genetic algorithm (LA-NSGA) with a length-variable individual encoding and a length-adaptive evolution mechanism for bi-objective high-dimensional FS. In LA-NSGA, an initialization method based on correlation and redundancy is devised to initialize individuals of diverse lengths, and a Pareto-dominance-based length change operator is introduced to guide individuals to adaptively explore promising regions of the search space. Moreover, a dominance-based local search method is employed for further improvement. Experimental results on 12 high-dimensional gene datasets show that the Pareto fronts of feature subsets produced by LA-NSGA are superior to those of existing algorithms.
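The Pareto-dominance relation underlying both the length change operator and the reported Pareto fronts can be sketched in a few lines. The candidate subsets below are invented (error rate, number of selected features), with both objectives minimized:

```python
def dominates(p, q):
    """p dominates q: no worse in every objective, strictly better in one.
    Objectives here (both minimized): classification error, #features."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(points):
    """Non-dominated subset of a list of distinct objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (error rate, number of selected features) for hypothetical feature subsets
subsets = [(0.10, 40), (0.12, 15), (0.10, 25), (0.20, 10), (0.25, 60)]
print(pareto_front(subsets))  # -> [(0.12, 15), (0.10, 25), (0.20, 10)]
```

(0.10, 40) is dominated by (0.10, 25), which reaches the same error with fewer features, which is exactly the trade-off a length-adaptive encoding is designed to exploit.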