As social networks become increasingly complex, contemporary fake news often includes textual descriptions of events accompanied by corresponding images or videos. Fake news in multiple modalities is more likely to create a misleading perception among users. While early research primarily focused on text-based features for fake news detection mechanisms, there has been relatively limited exploration of learning shared representations in multimodal (text and visual) contexts. To address these limitations, this paper introduces a multimodal model for detecting fake news, which relies on similarity reasoning and adversarial networks. The model employs Bidirectional Encoder Representations from Transformers (BERT) and a Text Convolutional Neural Network (Text-CNN) to extract textual features, while utilizing the pre-trained Visual Geometry Group 19-layer network (VGG-19) to extract visual features. Subsequently, the model establishes similarity representations between the textual features extracted by Text-CNN and the visual features through similarity learning and reasoning. Finally, these features are fused to enhance the accuracy of fake news detection, and adversarial networks are employed to investigate the relationship between fake news and events. This paper validates the proposed model using publicly available multimodal datasets from Weibo and Twitter. Experimental results demonstrate that the proposed approach achieves superior performance on Twitter, with an accuracy of 86%, surpassing traditional unimodal models and existing multimodal models. On the Weibo dataset, the model likewise performs better overall, surpassing the benchmark models across multiple metrics. The application of similarity reasoning and adversarial networks in multimodal fake news detection significantly enhances detection effectiveness. However, the current research is limited to the fusion of only text and image modalities. Future research should aim to further integrate features from additional modalities to comprehensively represent the multifaceted information of fake news.
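A minimal sketch of the similarity-based fusion idea described above, assuming the 768-d textual vector (from BERT/Text-CNN) and the 4096-d visual vector (from VGG-19) have already been extracted. The projection sizes, classifier, and variable names are illustrative, and the adversarial event discriminator is not shown.

```python
# Minimal sketch of similarity-based fusion of text and image features.
# Layer sizes and the classifier are illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class SimilarityFusion(nn.Module):
    def __init__(self, text_dim=768, img_dim=4096, shared_dim=256):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, shared_dim)    # map text features to a shared space
        self.img_proj = nn.Linear(img_dim, shared_dim)      # map visual features to the same space
        self.classifier = nn.Linear(2 * shared_dim + 1, 2)  # fused features + similarity -> real/fake

    def forward(self, text_feat, img_feat):
        t = torch.tanh(self.text_proj(text_feat))
        v = torch.tanh(self.img_proj(img_feat))
        sim = nn.functional.cosine_similarity(t, v, dim=-1, eps=1e-8)  # cross-modal similarity score
        fused = torch.cat([t, v, sim.unsqueeze(-1)], dim=-1)
        return self.classifier(fused), sim

model = SimilarityFusion()
logits, sim = model(torch.randn(4, 768), torch.randn(4, 4096))
print(logits.shape, sim.shape)  # torch.Size([4, 2]) torch.Size([4])
```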
3D printing is widely adopted to quickly produce rock mass models with complex structures in batches, improving the consistency and repeatability of physical modeling. It is necessary to regulate the mechanical properties of 3D-printed specimens to make them proportionally similar to natural rocks. This study investigates the mechanical properties of 3D-printed rock analogues prepared from furan resin-bonded silica sand particles. The mechanical property regulation of 3D-printed specimens is realized by quantifying their similarity to sandstone, so that analogous deformation characteristics and failure modes are acquired. Considering similarity conversion, the uniaxial compressive strength, cohesion, and stress–strain curve of the 3D-printed specimen are similar to those of sandstone. Within the study ranges, the strength of the 3D-printed specimen is positively correlated with the additive content, negatively correlated with the sand particle size, and first increases and then decreases with increasing curing temperature. The regulation scheme with the optimal similarity quantification index, that is, a sand type of 70/140, an additive content of 2.5‰, and a curing temperature of 81.6 °C, is determined for preparing 3D-printed sandstone analogues and models. The effectiveness of the mechanical property regulation is proved through uniaxial compression contrast tests. This study provides a reference for preparing rock-like specimens and engineering models using 3D printing technology.
For accurately identifying the distribution characteristics of Gaussian-like noises in unmanned aerial vehicle (UAV) state estimation, this paper proposes a non-parametric scheme based on curve similarity matching. In the framework of the proposed scheme, a Parzen window (kernel density estimation, KDE) method with sliding window technology is applied to roughly estimate the sample probability density, a precise data probability density function (PDF) model is constructed with the least squares method under K-fold cross validation, and the testing result of the evaluation method is obtained from analyses of the curve shape, abruptness, and symmetry of the data. Comparison simulations with classical methods and a UAV flight experiment show that the proposed scheme has higher recognition accuracy than classical methods for several kinds of Gaussian-like data, which provides a better reference for the design of Kalman filters (KF) in complex water environments.
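A rough sketch of the scheme's core idea under simplifying assumptions: estimate the sample PDF with a Parzen window (KDE), fit a Gaussian PDF model to it by least squares, and score the match with a simple curve-similarity measure. The sliding-window, K-fold validation, and shape/abruptness/symmetry analyses of the paper are not reproduced.

```python
# Hedged sketch: KDE estimate -> least-squares Gaussian fit -> curve similarity score.
import numpy as np
from scipy.stats import gaussian_kde, norm
from scipy.optimize import curve_fit

samples = np.random.default_rng(0).normal(0.0, 1.0, 2000)   # noise window under test
grid = np.linspace(samples.min(), samples.max(), 200)
kde_pdf = gaussian_kde(samples)(grid)                        # Parzen-window estimate

gauss = lambda x, mu, sigma: norm.pdf(x, mu, sigma)
(mu, sigma), _ = curve_fit(gauss, grid, kde_pdf, p0=[0.0, 1.0])  # least-squares PDF model
fit_pdf = gauss(grid, mu, sigma)

# Curve similarity: normalized L2 distance between the empirical and fitted curves.
similarity = 1.0 - np.linalg.norm(kde_pdf - fit_pdf) / np.linalg.norm(kde_pdf)
print(f"mu={mu:.3f}, sigma={sigma:.3f}, curve similarity={similarity:.3f}")
```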
Existing lithospheric velocity models exhibit similar structures typically associated with first-order tectonic features, with dissimilarities due to the different data and methods used in model generation. Quantifying model structural similarity can help in interpreting the geophysical properties of Earth's interior and establishing unified models crucial for natural hazard assessment and resource exploration. Here we employ the complex wavelet structural similarity index measure (CW-SSIM), used in computer image processing, to analyze the structural similarity of four lithospheric velocity models of the Chinese mainland published in the past decade. We take advantage of this method's multiscale definition and its insensitivity to slight geometrical distortions such as translation and scaling, which is particularly crucial in the structural similarity analysis of velocity models accounting for uncertainty and resolution. Our results show that the CW-SSIM values vary across model pairs, horizontal locations, and depths. While variations in the inter-model CW-SSIM are partly owing to the different databases used in model generation, differences in tomography methods may significantly impact the similar structural features of models, such as the low similarities between the full-wave-based FWEA18 and the other three models in northeastern China. We finally suggest potential solutions for the next generation of tomographic modeling in different areas according to the corresponding structural similarities of existing models.
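An illustrative sketch of the comparison idea, assuming two models are resampled onto the same grid. Plain SSIM is used here as a simplified stand-in: the paper's CW-SSIM additionally operates on complex wavelet coefficients, which is what gives it insensitivity to small translations and scalings.

```python
# Simplified stand-in: plain (non-wavelet) SSIM between two depth slices of velocity models
# sampled on the same grid. The synthetic values below are only for demonstration.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(1)
slice_a = rng.normal(3.5, 0.2, (64, 64))             # Vs slice of model A (km/s), synthetic
slice_b = slice_a + rng.normal(0.0, 0.05, (64, 64))   # model B: similar structure plus noise

score = structural_similarity(
    slice_a, slice_b,
    data_range=slice_a.max() - slice_a.min())         # SSIM in [-1, 1], higher = more similar
print(f"SSIM = {score:.3f}")
```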
The settling flux of biodeposition affects the environmental quality of cage culture areas and determines their environmental carrying capacity. Simple and effective simulation of the settling flux of biodeposition is therefore extremely important for determining the spatial distribution of biodeposition. Theoretically, biodeposition in cage culture areas without specific emission rules can be simplified as point source pollution. Fluent is fluid simulation software that can simulate the dispersion of particulate matter simply and efficiently. Based on the simplification of pollution sources and bays, the settling flux of biodeposition can be easily and effectively simulated with Fluent. In the present work, the feasibility of this method was evaluated by simulating the settling flux of biodeposition in Maniao Bay, Hainan Province, China, and 20 sampling sites were selected for determining the settling fluxes. At sampling sites P1, P2, P3, P4, P5, Z1, Z2, Z3, Z4, A1, A2, A3, A4, B1, B2, C1, C2, C3 and C4, the measured settling fluxes of biodeposition were 26.02, 15.78, 10.77, 58.16, 6.57, 72.17, 12.37, 12.11, 106.64, 150.96, 22.59, 11.41, 18.03, 7.90, 19.23, 7.06, 11.84, 5.19 and 2.57 g d^(−1) m^(−2), respectively. The simulated settling fluxes at the corresponding sites were 16.03, 23.98, 8.87, 46.90, 4.52, 104.77, 16.03, 8.35, 180.83, 213.06, 39.10, 17.47, 20.98, 9.78, 23.25, 7.84, 15.90, 6.06 and 1.65 g d^(−1) m^(−2), respectively. There was a positive correlation between the simulated and measured settling fluxes (R=0.94, P=2.22×10^(−9)<0.05), which implies that the spatial differentiation of biodeposition flux was well simulated. Moreover, the posterior difference ratio of the simulation was 0.38 and the small error probability was 0.94, which means that the simulated results reached an acceptable level from the perspective of relative error. Thus, if nonpoint source pollution is simplified to point source pollution and open waters are simplified based on similarity theory, the settling flux of biodeposition in open waters can be simply and effectively simulated with the fluid simulation software Fluent.
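The abstract quotes a correlation coefficient, a posterior difference ratio, and a small error probability. Assuming these follow the standard posterior-variance test of grey system theory (the abstract does not spell out the formulas), a minimal check could look like the sketch below; only the first five sites' fluxes from the lists above are used, so the printed numbers are not the paper's.

```python
# Hedged sketch of the accuracy checks, assuming the grey-system posterior-variance test:
# C = std(residuals) / std(measured), p = share of residuals within 0.6745 * std(measured)
# of the mean residual. R is the ordinary correlation coefficient.
import numpy as np

measured = np.array([26.02, 15.78, 10.77, 58.16, 6.57])   # first few measured fluxes (g d^-1 m^-2)
simulated = np.array([16.03, 23.98, 8.87, 46.90, 4.52])   # corresponding simulated fluxes

residual = measured - simulated
C = residual.std() / measured.std()                        # posterior difference ratio
p = np.mean(np.abs(residual - residual.mean()) < 0.6745 * measured.std())
r = np.corrcoef(measured, simulated)[0, 1]                 # correlation between series
print(f"R={r:.2f}, C={C:.2f}, p={p:.2f}")
```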
An internal defect meter is an instrument for detecting internal inclusion defects in cold-rolled strip steel. The detection accuracy of the equipment can be evaluated based on the similarity of multiple detection data obtained for the same steel coil. Based on a cosine similarity model and an eigenvalue matrix model, a comprehensive evaluation method that calculates the weighted average of similarity is proposed. Results show that the new method is consistent with, and can even replace, artificial evaluation to realize the automatic evaluation of strip defect detection results.
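A minimal sketch of the evaluation idea: compute cosine similarities between repeated detection vectors for the same coil and combine them into a weighted average score. The scan vectors and pair weights are illustrative; the paper derives its weights from an eigenvalue matrix model.

```python
# Sketch: pairwise cosine similarity of repeated scans, combined as a weighted average.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

scan_1 = np.array([3, 0, 1, 5, 2, 0, 4], dtype=float)   # defect counts per strip segment, pass 1
scan_2 = np.array([2, 0, 1, 6, 2, 1, 4], dtype=float)   # same coil, pass 2
scan_3 = np.array([3, 1, 0, 5, 3, 0, 3], dtype=float)   # same coil, pass 3

pairs = [(scan_1, scan_2), (scan_1, scan_3), (scan_2, scan_3)]
sims = [cosine_similarity(a, b) for a, b in pairs]
weights = np.array([0.4, 0.3, 0.3])                      # hypothetical pair weights
overall = float(np.dot(weights, sims))
print(f"pairwise similarities: {np.round(sims, 3)}, weighted score: {overall:.3f}")
```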
Based on ERA5 reanalysis data, the present study analyzed the thermal energy development mechanism and kinetic energy conversion characteristics of two extreme rainstorm processes, in relation to the shallow southwest vortex in the warm sector during a "rain-generated vortex" process and the deep southwest vortex in a "vortex-generated rain" process. The findings were as follows: (1) During the extreme rainstorm on August 11, 2020 (hereinafter the "8·11" process), intense surface heating and a high-energy unstable environment were observed. The mesoscale convergence system triggered convection that produced heavy rainfall, and the release of latent condensation heat generated by the rainfall promoted the formation of a southwest vortex. The significant increase (decrease) in atmospheric diabatic heating and kinetic energy preceded the increase (decrease) in vorticity. By contrast, the extreme rainstorm on August 16, 2020 (hereinafter the "8·16" process) involved the generation of a southwest vortex in a low-energy, high-humidity environment. The dynamic uplift of the southwest vortex triggered rainfall, and the release of condensation latent heat from the rainfall further strengthened the development of the southwest vortex. The significant increase (decrease) in atmospheric diabatic heating and kinetic energy lagged behind the increase (decrease) in vorticity. (2) The heating effect around the southwest vortex region was non-uniform, and the heating intensity varied among stages. In the "8·11" process, the heating effect was strongest in the initial stage but weakened during the vortex's development; on the contrary, the heating effect was initially weak in the "8·16" process and intensified during the development stage. (3) In the "8·11" process, available potential energy was significantly converted through baroclinic action into the kinetic energy of rotational and divergent winds, and divergent wind energy continued to convert into rotational wind energy. By contrast, the "8·16" process involved the conversion of rotational wind energy into divergent wind energy, which in turn converted kinetic energy back into available potential energy, thereby impeding the further development and maintenance of the southwest vortex.
Similarity has long played an important role in computer science, artificial intelligence (AI), and data science. However, similarity intelligence has been ignored in these disciplines. Similarity intelligence is a process of discovering intelligence through similarity. This article explores similarity intelligence, similarity-based reasoning, and similarity computing and analytics. More specifically, it looks at similarity as a form of intelligence and its impact on a few areas in the real world. It explores how similarity intelligence accompanies experience-based intelligence, knowledge-based intelligence, and data-based intelligence in playing an important role in computer science, AI, and data science. The article examines similarity-based reasoning (SBR) and proposes three similarity-based inference rules. It then examines similarity computing and analytics and a multiagent SBR system. The main contributions of this article are: 1) Similarity intelligence is discovered from experience-based intelligence, which consists of data-based intelligence and knowledge-based intelligence. 2) Similarity-based reasoning, computing, and analytics can be used to create similarity intelligence. The proposed approach will facilitate research and development in similarity intelligence, similarity computing and analytics, machine learning, and case-based reasoning.
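A toy sketch of one similarity-based inference rule in the spirit described above, not the article's exact formalization: if a new case is sufficiently similar to a stored case whose conclusion is known, the conclusion is transferred to the new case. The case base, similarity function, and threshold are assumptions for illustration.

```python
# Toy similarity-based reasoning (SBR) rule: similar case -> reuse its conclusion.
from math import dist

case_base = {
    "case_A": {"features": (0.9, 0.2, 0.4), "conclusion": "approve"},
    "case_B": {"features": (0.1, 0.8, 0.7), "conclusion": "reject"},
}

def similarity(x, y):
    return 1.0 / (1.0 + dist(x, y))          # simple distance-based similarity in (0, 1]

def infer(new_features, threshold=0.6):
    name, case = max(case_base.items(),
                     key=lambda kv: similarity(new_features, kv[1]["features"]))
    score = similarity(new_features, case["features"])
    return (case["conclusion"], name, score) if score >= threshold else (None, name, score)

print(infer((0.85, 0.25, 0.35)))              # ('approve', 'case_A', ...)
```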
Photovoltaic (PV) power generation is characterized by randomness and intermittency due to weather changes. Consequently, large-scale PV power connections to the grid can threaten the stable operation of the power system. An effective method to resolve this problem is to accurately predict PV power. In this study, an innovative short-term hybrid prediction model of PV power (HKSL) is established. The model combines K-means++, an optimal similar day approach, and a long short-term memory (LSTM) network, utilizing historical power data and meteorological factors. The model searches for the best similar day based on the results of classifying weather types; the data of the similar day are then input into the LSTM network to predict PV power. The validity of the hybrid model is verified on datasets from a PV power station in Shandong Province, China. Four evaluation indices, mean absolute error, root mean square error (RMSE), normalized RMSE, and mean absolute deviation, are employed to assess the performance of the HKSL model. The RMSE of the proposed model decreases by 66.73%, 70.22%, 65.59%, 70.51%, and 18.40% compared with those of Elman, LSTM, HSE (hybrid model combining the similar day approach and Elman), HSL (hybrid model combining the similar day approach and LSTM), and HKSE (hybrid model combining K-means++, the similar day approach, and Elman), respectively. This proves the reliability and excellent performance of the proposed hybrid model in predicting PV power.
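A simplified sketch of the similar-day step in this kind of pipeline: cluster historical days by weather features with K-means++ and, inside the forecast day's cluster, pick the historical day whose meteorology is closest. That day's power series would then be fed to the LSTM (not shown). The feature choices and cluster count are assumptions, not the paper's configuration.

```python
# Sketch of weather clustering (k-means++ init) plus similar-day selection.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# columns: [mean irradiance, mean temperature, humidity] per historical day
weather = rng.uniform([100, 5, 0.2], [900, 35, 0.9], size=(60, 3))

kmeans = KMeans(n_clusters=3, init="k-means++", n_init=10, random_state=0).fit(weather)

forecast_day = np.array([[620.0, 28.0, 0.45]])            # forecast-day weather
cluster = kmeans.predict(forecast_day)[0]
idx = np.where(kmeans.labels_ == cluster)[0]               # candidate similar days
best = idx[np.argmin(np.linalg.norm(weather[idx] - forecast_day, axis=1))]
print(f"cluster {cluster}, most similar historical day index: {best}")
```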
To fully exploit the rich characteristic variation laws of an integrated energy system (IES) and further improve short-term load-forecasting accuracy, a load-forecasting method is proposed for an IES based on LSTM and dynamic similar days with multiple features. Feature expansion was performed to construct a comprehensive load day covering load and meteorological information at coarse and fine time granularity and over far and near time periods. A Gaussian mixture model (GMM) was used to divide the comprehensive load days into scenes, and gray correlation analysis was used to match a scene with the coarse-granularity characteristics of the day to be forecasted. The five typical days in the scene with the highest correlation with the day to be predicted were selected and weighted to construct a "dynamic similar day." The key features of adjacent days and dynamic similar days were then used to forecast multiple loads at fine time granularity using LSTM. Compared with using static features as input and with selecting similar days based on non-extended single features, the effectiveness of the proposed prediction method was verified.
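A hedged sketch of the "dynamic similar day" construction: score candidate days against the forecast day's coarse-granularity features with grey relational analysis, then build a weighted composite of the most correlated days. The feature dimensions, resolving coefficient, and load curves below are illustrative assumptions.

```python
# Sketch: grey relational grades -> top-5 days -> weighted "dynamic similar day" curve.
import numpy as np

def grey_relational_grade(reference, candidates, rho=0.5):
    diff = np.abs(candidates - reference)                   # |x0(k) - xi(k)|
    coeff = (diff.min() + rho * diff.max()) / (diff + rho * diff.max())
    return coeff.mean(axis=1)                               # one grade per candidate day

rng = np.random.default_rng(3)
reference = rng.uniform(0, 1, 8)                            # normalized features of day to forecast
candidates = rng.uniform(0, 1, (30, 8))                     # normalized features of historical days

grades = grey_relational_grade(reference, candidates)
top5 = np.argsort(grades)[-5:]                              # five most correlated days
weights = grades[top5] / grades[top5].sum()

daily_load = rng.uniform(50, 120, (30, 96))                 # 15-min load curves of historical days
dynamic_similar_day = weights @ daily_load[top5]            # weighted composite curve
print(top5, dynamic_similar_day.shape)
```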
Understanding the mechanisms and risks of forest fires by building a spatial prediction model is an important means of controlling forest fires. Non-fire point data are important training data for constructing such a model, and their quality significantly impacts its prediction performance. However, non-fire point data obtained using existing sampling methods generally suffer from low representativeness. Therefore, this study proposes a non-fire point data sampling method based on geographical similarity to improve the quality of non-fire point samples. The method is based on the idea that the less similar the geographical environment between a sample point and an already occurred fire point, the greater the confidence that it is a non-fire point sample. Yunnan Province, China, with a high frequency of forest fires, was used as the study area. We compared the prediction performance of traditional sampling methods and the proposed method using three commonly used forest fire risk prediction models: logistic regression (LR), support vector machine (SVM), and random forest (RF). The results show that the modeling and prediction accuracies of the forest fire prediction models established based on the proposed sampling method are significantly improved compared with those of the traditional sampling method. Specifically, in 2010, the modeling and prediction accuracies improved by 19.1% and 32.8%, respectively, and in 2020, they improved by 13.1% and 24.3%, respectively. Therefore, we believe that collecting non-fire point samples based on the principle of geographical similarity is an effective way to improve the quality of forest fire samples and thus enhance the prediction of forest fire risk.
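A sketch of the sampling idea: for each candidate point, compute its maximum environmental similarity to the known fire points; candidates with the lowest maximum similarity are the most confident non-fire samples. The similarity function and feature set here are illustrative, not the paper's exact formulation.

```python
# Sketch: least-similar-to-fire candidates become non-fire training samples.
import numpy as np

rng = np.random.default_rng(4)
# standardized environmental features, e.g. temperature, slope, NDVI, distance to road
fire_points = rng.normal(0.8, 0.3, (200, 4))
candidates = rng.normal(0.0, 1.0, (1000, 4))

# Gaussian similarity between each candidate and each fire point; keep the maximum per candidate
dists = np.linalg.norm(candidates[:, None, :] - fire_points[None, :, :], axis=2)
similarity_to_fire = np.exp(-dists**2).max(axis=1)

n_samples = 200
non_fire_idx = np.argsort(similarity_to_fire)[:n_samples]   # least similar -> non-fire samples
print(non_fire_idx[:10], similarity_to_fire[non_fire_idx].max())
```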
The municipality of Boa Nova, in northeastern Brazil, lies in an ecotone zone between the Caatinga and Atlantic Forest phytogeographic domains. The transitional phytophysiognomy is seasonal forest, known locally as mata de cipó. Within these phytophysiognomies there are lajedos, rock outcrops colonized by vegetation well adapted to extreme microclimatic variation, whose vegetation diversity is affected by the vegetation types of the surrounding areas. Given the singularity of these environments and the relevance of floristic studies for conservation, this work aimed to identify the species richness and compare the floristic similarity of four rock outcrops in Boa Nova. The flora was surveyed during exploratory walks along the lajedos between 2016 and 2019. In total, 162 species were identified on the Boa Nova outcrops. The flora has a composition and structure similar to those of semiarid outcrops, as well as endemic species that also occur in the surrounding phytophysiognomies. Despite their proximity, a similarity index revealed floristic dissimilarity between the areas. Nine new occurrences were recorded for the region, five species are threatened with extinction (Aosa gilgiana, Ficus cyclophylla, Hippeastrum stigmovittatum, Pleroma caatingae and Trixis pruskii), and 43 species are common in anthropogenic areas. This reinforces the importance of actions to conserve these areas.
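A tiny illustration of a floristic similarity index between two outcrops' species lists. The Sørensen index is used here only as an example; the study does not state which index it applied, and the species sets below are arbitrary subsets of the names cited above.

```python
# Sørensen similarity between two species lists: 2*shared / (|A| + |B|).
def sorensen(set_a, set_b):
    shared = len(set_a & set_b)
    return 2 * shared / (len(set_a) + len(set_b)) if (set_a or set_b) else 0.0

lajedo_1 = {"Aosa gilgiana", "Ficus cyclophylla", "Pleroma caatingae"}
lajedo_2 = {"Ficus cyclophylla", "Trixis pruskii", "Hippeastrum stigmovittatum"}
print(f"Sorensen similarity: {sorensen(lajedo_1, lajedo_2):.2f}")  # low value = dissimilar floras
```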
Neurodegeneration is attributable to metabolic disturbances in the various cell types responsible for this condition, in respect of glucose utilisation and dysfunctional mitochondrial oxidative mechanisms. The properties of neurotoxins and antagonists that limit their action are well documented in disease models, whereas effective therapy is very limited. Cell apoptosis, a general marker of neurodegeneration, is also of therapeutic interest in the treatment of cancer. The cGMP nucleotide influences apoptosis and has a role in maintaining equilibrium within cell redox parameters. The chemical structure of cGMP provides a comparative template for demonstrating relative molecular similarity within the structures of natural and synthetic compounds influencing tumour cell apoptosis. The present study uses computational software to investigate molecular similarity within the structures of cGMP and compounds that modulate cell apoptosis in experimental models of diabetic peripheral neuropathy (DPN), Parkinson's disease, and multiple sclerosis. Differential molecular similarity demonstrated in neurotoxin and antagonist structures implicates metabolite impairment of cGMP signaling function as a common mechanism in the initial phases of these neurodegenerative conditions.
Given one specific image, it would be quite significant if one could simply retrieve all the pictures that fall into a similar category. However, traditional methods tend to achieve high-quality retrieval by utilizing adequate learning instances, ignoring the extraction of the image's essential information, which makes it difficult to retrieve similar-category images using just one reference image. To solve this problem, we propose in this paper a refined sparse-representation-based similar-category image retrieval model. On the one hand, saliency detection and multi-level decomposition help take salient and spatial information into consideration more fully. On the other hand, the cross mutual sparse coding model aims to extract the image's essential features to the maximum extent possible. Finally, we set up a database comprising a large number of multi-source images. Extensive comparative experiments show that our method retrieves similar-category images effectively. Moreover, ablation experiments show that nearly all procedures play their respective roles.
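A one-reference retrieval sketch in the general spirit of sparse representation: encode the query's feature vector as a sparse combination of database feature vectors (the "dictionary") and rank database images by their coefficient magnitudes. This is a strong simplification and not the paper's cross mutual sparse coding model; all dimensions and data are synthetic.

```python
# Sketch: sparse-code the query over a dictionary of database features, rank by coefficients.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(5)
feature_dim, n_database = 128, 500
dictionary = rng.normal(size=(feature_dim, n_database))     # one column per database image
query = 0.7 * dictionary[:, 42] + 0.3 * dictionary[:, 77]   # query resembles images 42 and 77

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(dictionary, query)
ranking = np.argsort(-np.abs(omp.coef_))[:5]                 # most relevant database images
print(ranking)                                               # should contain 42 and 77
```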
Operation control of power systems has become challenging with the increasing scale and complexity of power distribution systems and the extensive access of renewable energy. Therefore, improving the capabilities of data-driven operation management, intelligent analysis, and mining is urgently required. To investigate and explore similar regularities in the historical operating sections of the power distribution system and to assist the power grid in systematically obtaining high-value historical operation and maintenance experience and knowledge, a neural information retrieval model with an attention mechanism is proposed based on graph data computing technology. Based on the processing flow of the operating data of the power distribution system, a technical framework for neural information retrieval is established. Combined with the natural graph characteristics of the power distribution system, a unified graph data structure and a data fusion method covering data access, data complementation, and multi-source data are constructed. Further, a graph node feature-embedding representation learning algorithm and a neural information retrieval algorithm model are constructed. The neural information retrieval model is trained and tested using the generated set of graph node feature representation vectors. The model is verified on the operating sections of the power distribution system of a provincial grid area. The results show that the proposed method achieves high accuracy in the similarity matching of historical operating characteristics and effectively supports intelligent fault diagnosis and elimination in power distribution systems.
Recently, security issues of smart contracts have attracted great attention due to the enormous financial losses caused by vulnerability attacks. With the increase of critical security issues in smart contracts, there is a growing need to detect similar code for vulnerability hunting. Binary similarity detection, which quantitatively measures the diffing between given pieces of code, has been widely adopted to facilitate critical security analysis. However, due to the differences between common programs and smart contracts, such as the diversity of bytecode generation and the high homogeneity of code, directly applying existing graph matching and machine learning based techniques to smart contracts suffers from low accuracy, poor scalability, and the limitation of binary similarity at the function level. Therefore, this paper investigates graph neural networks for detecting smart contract binary code similarity at the program level, where we conduct instruction-level normalization to reduce noise code during smart contract pre-processing and construct contract control flow graphs to represent smart contracts. In particular, two improved models, a Graph Convolutional Network (GCN) and a Message Passing Neural Network (MPNN), are explored to encode the contract graphs into quantitative vectors, which can capture the semantic information and the program-wide control flow information with temporal orders. We can then efficiently accomplish similarity detection by measuring the distance between two targeted contract embeddings. To evaluate the effectiveness and efficiency of the proposed method, extensive experiments are performed on two real-world datasets, i.e., smart contracts from the Ethereum and Enterprise Operation System (EOS) blockchain platforms. The results show that our approach outperforms three state-of-the-art methods by a large margin, achieving improvements of up to 6.1% and 17.06% in accuracy.
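A minimal numpy sketch of the core idea rather than the paper's improved GCN/MPNN: propagate node features over a contract's control flow graph with one GCN-style layer, mean-pool into a graph embedding, and measure similarity as the distance between two embeddings. Graph sizes, features, and the random weight matrix are illustrative only.

```python
# Sketch: GCN-style propagation -> graph embedding -> embedding distance as (dis)similarity.
import numpy as np

def gcn_embed(adj, feats, weight):
    a_hat = adj + np.eye(adj.shape[0])                        # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    h = np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight, 0)  # ReLU
    return h.mean(axis=0)                                     # graph-level embedding

rng = np.random.default_rng(6)
w = rng.normal(size=(16, 8))
adj_a, feats_a = (rng.random((5, 5)) > 0.6).astype(float), rng.normal(size=(5, 16))
adj_b, feats_b = adj_a.copy(), feats_a + 0.01 * rng.normal(size=(5, 16))  # near-clone contract

emb_a, emb_b = gcn_embed(adj_a, feats_a, w), gcn_embed(adj_b, feats_b, w)
print("distance:", np.linalg.norm(emb_a - emb_b))             # small distance = similar contracts
```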
An information system is a type of knowledge representation, and attribute reduction is crucial in big data, machine learning, data mining, and intelligent systems. There are several ways of solving attribute reduction problems, but they all require a common categorization. The selection of features in most scientific studies is a challenge for the researcher. When working with huge datasets, selecting all available attributes is not an option, because it frequently complicates the study and decreases performance. On the other hand, neglecting some attributes might jeopardize data accuracy. In this case, rough set theory provides a useful approach for identifying superfluous attributes that may be ignored without sacrificing any significant information; nonetheless, investigating all available combinations of attributes results in its own problems. Furthermore, because attribute reduction is primarily a mathematical issue, technical progress in reduction depends on the advancement of mathematical models. Because the focus of this study is on the mathematical side of attribute reduction, we propose methods for computing reductions of information systems based on classical rough set theory, the strength of rules, and the similarity matrix; we applied the proposed methods to several examples and calculated the reduction in each case. These methods expand the options for attribute reduction available to researchers.
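A toy sketch of a reduct check in classical rough set terms: an attribute is dispensable if dropping it leaves the decision-relative consistency of the indiscernibility classes unchanged. The table and attribute names are made up for illustration and do not reproduce the paper's strength-of-rules or similarity-matrix methods.

```python
# Toy dispensability check: drop one attribute and test whether indiscernible objects
# still always share the same decision.
from itertools import groupby

table = [  # (condition attributes a1, a2, a3, decision d)
    ({"a1": 1, "a2": 0, "a3": 1}, "yes"),
    ({"a1": 1, "a2": 0, "a3": 0}, "yes"),
    ({"a1": 0, "a2": 1, "a3": 1}, "no"),
    ({"a1": 0, "a2": 1, "a3": 0}, "no"),
]

def consistent(attrs):
    """True if objects indiscernible on `attrs` always share the same decision."""
    key = lambda row: tuple(row[0][a] for a in attrs)
    rows = sorted(table, key=key)
    return all(len({d for _, d in grp}) == 1 for _, grp in groupby(rows, key=key))

full = ["a1", "a2", "a3"]
for a in full:
    reduced = [x for x in full if x != a]
    print(a, "dispensable" if consistent(reduced) else "indispensable")
```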
Joint inversion is one of the most effective methods for reducing non-uniqueness in geophysical inversion. Current joint inversion methods can be divided into structural consistency constraint methods and petrophysical consistency constraint methods, which are mutually independent. There is therefore a need for joint inversion methods that can comprehensively consider both structural consistency constraints and petrophysical consistency constraints. This paper develops the structural similarity index (SSIM) as a new structural and petrophysical consistency constraint for the joint inversion of gravity and vertical gradient data. The SSIM constraint is in the form of a fraction, which may have analytical singularities. Converting the fractional form into a subtractive form solves the problem of analytical singularity and yields a modified structural consistency index for the joint inversion, which enhances the stability of the SSIM constraint applied to joint inversion. Compared with the results reconstructed by cross-gradient inversion, the proposed method shows good performance and stability. The SSIM algorithm is a new joint inversion method with petrophysical and structural constraints: it promotes the consistency of the recovered models in both the distribution of physical property values and their structure. Applications to synthetic data illustrate that the proposed algorithm processes the synthetic data well and acquires good reconstruction results.
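The abstract states that the fractional SSIM form is rewritten as a subtraction to avoid analytical singularities. The sketch below is only a guess at what such a subtractive term could look like for two recovered model grids; the paper's actual windowing, weighting, and modified index are not reproduced.

```python
# Hedged sketch of a subtractive SSIM-style consistency term between two model grids.
import numpy as np

def subtractive_ssim_term(m1, m2):
    mu1, mu2 = m1.mean(), m2.mean()
    var1, var2 = m1.var(), m2.var()
    cov = ((m1 - mu1) * (m2 - mu2)).mean()
    # Fractional SSIM divides covariance/mean products by variance/mean sums; the
    # subtractive form instead penalizes their differences, so no denominator can vanish.
    luminance = (mu1**2 + mu2**2) - 2 * mu1 * mu2        # = (mu1 - mu2)^2 >= 0
    structure = (var1 + var2) - 2 * cov                   # >= 0, zero when the fields co-vary perfectly
    return luminance + structure                          # misfit term to minimize

rng = np.random.default_rng(7)
m1 = rng.normal(2.6, 0.1, (32, 32))                       # e.g. recovered density model A
m2 = m1 + rng.normal(0.0, 0.01, (32, 32))                 # model B with similar structure
print(f"structural/petrophysical misfit: {subtractive_ssim_term(m1, m2):.5f}")
```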
Cholesterol and cholesterol oxides impact the functional properties of cells, in respect of the intracellular and extracellular distribution of compounds across cell membranes, carcinogenesis, and drug resistance. Abnormal levels of cholesterol oxides and steroids in cancerous tissues promote interest in steroid receptor cross-talk during cell signalling and the steroid metabolome of cancer patients. The research literature links the cytotoxic properties of oxysterols to interference with the NO/cGMP pathway. cGMP participates in cell signalling and has a molecular structure that relates to cancer-inducing and cancer-preventing agents. This study uses a molecular modelling approach to compare the structures of cholesterol oxides to cGMP. Cholesterol and cholesterol oxide structures fit to a cGMP structural template in several ways, some of which are replicated by corticosteroids and gonadal steroid hormones. The results of this study support the concept that cholesterol oxides modulate cell apoptosis and autophagy via the NO/cGMP pathway and, in conjunction with steroid hormones, participate in modulating the regulation of cell function by cGMP.
Text summarization models help biomedical clinicians and researchers acquire informative data from the enormous domain-specific literature with less time and effort. Evaluating and selecting the most informative sentences from biomedical articles is always challenging. This study aims to develop a dual-mode biomedical text summarization model that achieves enhanced coverage and information content. The research also examines the fitness of appropriate graph ranking techniques for improving the performance of the summarization model. The input biomedical text is mapped to a graph in which meaningful sentences are evaluated as central nodes together with the critical associations between them. The proposed framework utilizes a top-k similarity technique in combination with UMLS and a sampled probability-based clustering method, which helps unearth the relevant meanings of biomedical domain-specific word vectors and find the best possible associations between crucial sentences. The quality of the framework is assessed via different parameters, such as information retention, coverage, readability, cohesion, and ROUGE scores, in clustering and non-clustering modes. The significant benefits of the suggested technique are the capture of crucial biomedical information with increased coverage and reasonable memory consumption. Configurable settings of the combined parameters reduce execution time, enhance memory utilization, and extract relevant information, outperforming other biomedical baseline models. An improvement of 17% is achieved when the proposed model is checked against similar biomedical text summarizers.
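A compact sketch of the graph side of such a summarizer: sentences become nodes, each node keeps edges only to its top-k most similar sentences, and a graph ranking method (PageRank here, as one candidate technique) picks the central sentences. TF-IDF stands in for the paper's UMLS-aware representations, and the example sentences are invented.

```python
# Sketch: sentence graph from top-k cosine similarities, ranked with PageRank.
import networkx as nx
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "Insulin resistance is a hallmark of type 2 diabetes.",
    "Type 2 diabetes is characterized by impaired insulin signalling.",
    "Metformin lowers hepatic glucose production.",
    "Exercise improves insulin sensitivity in patients with diabetes.",
]
sim = cosine_similarity(TfidfVectorizer().fit_transform(sentences))

k, graph = 2, nx.Graph()
for i in range(len(sentences)):
    for j in np.argsort(-sim[i])[1:k + 1]:            # skip self, keep top-k neighbours
        graph.add_edge(i, int(j), weight=float(sim[i, j]))

scores = nx.pagerank(graph, weight="weight")
summary = [sentences[i] for i in sorted(scores, key=scores.get, reverse=True)[:2]]
print(summary)
```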
基金the National Natural Science Foundation of China(No.62302540)with author F.F.S.For more information,please visit their website at https://www.nsfc.gov.cn/.Additionally,it is also funded by the Open Foundation of Henan Key Laboratory of Cyberspace Situation Awareness(No.HNTS2022020)+1 种基金where F.F.S is an author.Further details can be found at http://xt.hnkjt.gov.cn/data/pingtai/.The research is also supported by the Natural Science Foundation of Henan Province Youth Science Fund Project(No.232300420422)for more information,you can visit https://kjt.henan.gov.cn/2022/09-02/2599082.html.Lastly,it receives funding from the Natural Science Foundation of Zhongyuan University of Technology(No.K2023QN018),where F.F.S is an author.You can find more information at https://www.zut.edu.cn/.
文摘As social networks become increasingly complex, contemporary fake news often includes textual descriptionsof events accompanied by corresponding images or videos. Fake news in multiple modalities is more likely tocreate a misleading perception among users. While early research primarily focused on text-based features forfake news detection mechanisms, there has been relatively limited exploration of learning shared representationsin multimodal (text and visual) contexts. To address these limitations, this paper introduces a multimodal modelfor detecting fake news, which relies on similarity reasoning and adversarial networks. The model employsBidirectional Encoder Representation from Transformers (BERT) and Text Convolutional Neural Network (Text-CNN) for extracting textual features while utilizing the pre-trained Visual Geometry Group 19-layer (VGG-19) toextract visual features. Subsequently, the model establishes similarity representations between the textual featuresextracted by Text-CNN and visual features through similarity learning and reasoning. Finally, these features arefused to enhance the accuracy of fake news detection, and adversarial networks have been employed to investigatethe relationship between fake news and events. This paper validates the proposed model using publicly availablemultimodal datasets from Weibo and Twitter. Experimental results demonstrate that our proposed approachachieves superior performance on Twitter, with an accuracy of 86%, surpassing traditional unimodalmodalmodelsand existing multimodal models. In contrast, the overall better performance of our model on the Weibo datasetsurpasses the benchmark models across multiple metrics. The application of similarity reasoning and adversarialnetworks in multimodal fake news detection significantly enhances detection effectiveness in this paper. However,current research is limited to the fusion of only text and image modalities. Future research directions should aimto further integrate features fromadditionalmodalities to comprehensively represent themultifaceted informationof fake news.
基金the National Natural Science Foundation of China(Nos.51988101 and 42007262).
文摘3D printing is widely adopted to quickly produce rock mass models with complex structures in batches,improving the consistency and repeatability of physical modeling.It is necessary to regulate the mechanical properties of 3D-printed specimens to make them proportionally similar to natural rocks.This study investigates mechanical properties of 3D-printed rock analogues prepared by furan resin-bonded silica sand particles.The mechanical property regulation of 3D-printed specimens is realized through quantifying its similarity to sandstone,so that analogous deformation characteristics and failure mode are acquired.Considering similarity conversion,uniaxial compressive strength,cohesion and stress–strain relationship curve of 3D-printed specimen are similar to those of sandstone.In the study ranges,the strength of 3D-printed specimen is positively correlated with the additive content,negatively correlated with the sand particle size,and first increases then decreases with the increase of curing temperature.The regulation scheme with optimal similarity quantification index,that is the sand type of 70/140,additive content of 2.5‰and curing temperature of 81.6℃,is determined for preparing 3D-printed sandstone analogues and models.The effectiveness of mechanical property regulation is proved through uniaxial compression contrast tests.This study provides a reference for preparing rock-like specimens and engineering models using 3D printing technology.
基金supported by the National Natural Science Foundation of China(62033010)Qing Lan Project of Jiangsu Province(R2023Q07)。
文摘For accurately identifying the distribution charac-teristic of Gaussian-like noises in unmanned aerial vehicle(UAV)state estimation,this paper proposes a non-parametric scheme based on curve similarity matching.In the framework of the pro-posed scheme,a Parzen window(kernel density estimation,KDE)method on sliding window technology is applied for roughly esti-mating the sample probability density,a precise data probability density function(PDF)model is constructed with the least square method on K-fold cross validation,and the testing result based on evaluation method is obtained based on some data characteristic analyses of curve shape,abruptness and symmetry.Some com-parison simulations with classical methods and UAV flight exper-iment shows that the proposed scheme has higher recognition accuracy than classical methods for some kinds of Gaussian-like data,which provides better reference for the design of Kalman filter(KF)in complex water environment.
基金supported by the National Natural Science Foundation of China(Nos.42174063,92155307,41976046)Guangdong Provincial Key Laboratory of Geophysical High-resolution Imaging Technology under(No.2022B1212010002)Project for introduced Talents Team of Southern Marine Science and Engineering Guangdong(Guangzhou)(No.GML2019ZD0203)。
文摘Existing lithospheric velocity models exhibit similar structures typically associated with the first-order tectonic features,with dissimilarities due to different data and methods used in model generation.The quantification of model structural similarity can help in interpreting the geophysical properties of Earth's interior and establishing unified models crucial in natural hazard assessment and resource exploration.Here we employ the complex wavelet structural similarity index measure(CW-SSIM)active in computer image processing to analyze the structural similarity of four lithospheric velocity models of Chinese mainland published in the past decade.We take advantage of this method in its multiscale definition and insensitivity to slight geometrical distortion like translation and scaling,which is particularly crucial in the structural similarity analysis of velocity models accounting for uncertainty and resolution.Our results show that the CW-SSIM values vary in different model pairs,horizontal locations,and depths.While variations in the inter-model CW-SSIM are partly owing to different databases in the model generation,the difference of tomography methods may significantly impact the similar structural features of models,such as the low similarities between the full-wave based FWEA18 and other three models in northeastern China.We finally suggest potential solutions for the next generation of tomographic modeling in different areas according to corresponding structural similarities of existing models.
基金support from the National Key Research and Development Program of China(No.2018YFD0900704)the National Natural Science Foundation of China(No.31972796).
文摘The settling flux of biodeposition affects the environmental quality of cage culture areas and determines their environmental carrying capacity.Simple and effective simulation of the settling flux of biodeposition is extremely important for determining the spatial distribution of biodeposition.Theoretically,biodeposition in cage culture areas without specific emission rules can be simplified as point source pollution.Fluent is a fluid simulation software that can simulate the dispersion of particulate matter simply and efficiently.Based on the simplification of pollution sources and bays,the settling flux of biodeposition can be easily and effectively simulated by Fluent fluid software.In the present work,the feasibility of this method was evaluated by simulation of the settling flux of biodeposition in Maniao Bay,Hainan Province,China,and 20 sampling sites were selected for determining the settling fluxes.At sampling sites P1,P2,P3,P4,P5,Z1,Z2,Z3,Z4,A1,A2,A3,A4,B1,B2,C1,C2,C3 and C4,the measured settling fluxes of biodeposition were 26.02,15.78,10.77,58.16,6.57,72.17,12.37,12.11,106.64,150.96,22.59,11.41,18.03,7.90,19.23,7.06,11.84,5.19 and 2.57 g d^(−1)m^(−2),respectively.The simulated settling fluxes of biodeposition at the corresponding sites were 16.03,23.98,8.87,46.90,4.52,104.77,16.03,8.35,180.83,213.06,39.10,17.47,20.98,9.78,23.25,7.84,15.90,6.06 and 1.65 g d^(−1)m^(−2),respectively.There was a positive correlation between the simulated settling fluxes and measured ones(R=0.94,P=2.22×10^(−9)<0.05),which implies that the spatial differentiation of biodeposition flux was well simulated.Moreover,the posterior difference ratio of the simulation was 0.38,and the small error probability was 0.94,which means that the simulated results reached an acceptable level from the perspective of relative error.Thus,if nonpoint source pollution is simplified to point source pollution and open waters are simplified based on similarity theory,the setting flux of biodeposition in the open waters can be simply and effectively simulated by the fluid simulation software Fluent.
文摘An internal defect meter is an instrument to detect the internal inclusion defects of cold-rolled strip steel.The detection accuracy of the equipment can be evaluated based on the similarity of the multiple detection data obtained for the same steel coil.Based on the cosine similarity model and eigenvalue matrix model,a comprehensive evaluation method to calculate the weighted average of similarity is proposed.Results show that the new method is consistent with and can even replace artificial evaluation to realize the automatic evaluation of strip defect detection results.
基金Key Project of Joint Meteorological Fund of the National Natural Science Foundation of China (U2242202)Key Project of the National Natural Science Foundation of China (42030611)+1 种基金Innovative Development Special Project of China Meteorological Administration (CXFZ2023J016)Innovation Team Fund of Sichuan Provincial Meteorological Service (SCQXCX7D-202201)。
文摘Based on ERA5 reanalysis data,the present study analyzed the thermal energy development mechanism and kinetic energy conversion characteristics of two extreme rainstorm processes in relation to the shallow southwest vortex in the warm-sector during a“rain-generated vortex”process and the deep southwest vortex in a“vortex-generated rain”process.The findings were as follows:(1)During the extreme rainstorm on August 11,2020(hereinafter referred to as the“8·11”process),intense surface heating and a high-energy unstable environment were observed.The mesoscale convergence system triggered convection to produce heavy rainfall,and the release of latent condensation heat generated by the rainfall promoted the formation of a southwest vortex.The significant increase(decrease)in atmospheric diabatic heating and kinetic energy preceded the increase(decrease)in vorticity.By contrast,the extreme rainstorm on August 16,2020(hereinafter referred to as the“8·16”process)involved the generation of southwest vortex in a low-energy and highhumidity environment.The dynamic uplift of the southwest vortex triggered rainfall,and the release of condensation latent heat from rainfall further strengthened the development of the southwest vortex.The significant increase(decrease)in atmospheric diabatic heating and kinetic energy exhibited a delayed progression compared to the increase(decrease)in vorticity.(2)The heating effect around the southwest vortex region was non-uniform,and the heating intensity varied in different stages.In the“8·11”process,the heating effect was the strongest in the initial stage,but weakened during the vortex's development.On the contrary,the heating effect was initially weak in the“8·16”process,and intensified during the development stage.(3)The available potential energy of the“8·11”process significantly increased in kinetic energy converted from rotational and divergent winds through baroclinic action,and the divergent wind energy continued to convert into rotational wind energy.By contrast,the“8·16”process involved the conversion of rotational wind energy into divergent wind energy,which in turn converted kinetic energy back into available potential energy,thereby impeding the further development and maintenance of the southwest vortex.
文摘Similarity has been playing an important role in computer science,artificial intelligence(AI)and data science.However,similarity intelligence has been ignored in these disciplines.Similarity intelligence is a process of discovering intelligence through similarity.This article will explore similarity intelligence,similarity-based reasoning,similarity computing and analytics.More specifically,this article looks at the similarity as an intelligence and its impact on a few areas in the real world.It explores similarity intelligence accompanying experience-based intelligence,knowledge-based intelligence,and data-based intelligence to play an important role in computer science,AI,and data science.This article explores similarity-based reasoning(SBR)and proposes three similarity-based inference rules.It then examines similarity computing and analytics,and a multiagent SBR system.The main contributions of this article are:1)Similarity intelligence is discovered from experience-based intelligence consisting of data-based intelligence and knowledge-based intelligence.2)Similarity-based reasoning,computing and analytics can be used to create similarity intelligence.The proposed approach will facilitate research and development of similarity intelligence,similarity computing and analytics,machine learning and case-based reasoning.
基金supported by the No. 4 National Project in 2022 of the Ministry of Emergency Response (2022YJBG04)the International Clean Energy Talent Program (201904100014)。
文摘Photovoltaic(PV) power generation is characterized by randomness and intermittency due to weather changes.Consequently, large-scale PV power connections to the grid can threaten the stable operation of the power system. An effective method to resolve this problem is to accurately predict PV power. In this study, an innovative short-term hybrid prediction model(i.e., HKSL) of PV power is established. The model combines K-means++, optimal similar day approach,and long short-term memory(LSTM) network. Historical power data and meteorological factors are utilized. This model searches for the best similar day based on the results of classifying weather types. Then, the data of similar day are inputted into the LSTM network to predict PV power. The validity of the hybrid model is verified based on the datasets from a PV power station in Shandong Province, China. Four evaluation indices, mean absolute error, root mean square error(RMSE),normalized RMSE, and mean absolute deviation, are employed to assess the performance of the HKSL model. The RMSE of the proposed model compared with those of Elman, LSTM, HSE(hybrid model combining similar day approach and Elman), HSL(hybrid model combining similar day approach and LSTM), and HKSE(hybrid model combining K-means++,similar day approach, and LSTM) decreases by 66.73%, 70.22%, 65.59%, 70.51%, and 18.40%, respectively. This proves the reliability and excellent performance of the proposed hybrid model in predicting power.
基金supported by National Natural Science Foundation of China(NSFC)(62103126).
文摘To fully exploit the rich characteristic variation laws of an integrated energy system(IES)and further improve the short-term load-forecasting accuracy,a load-forecasting method is proposed for an IES based on LSTM and dynamic similar days with multi-features.Feature expansion was performed to construct a comprehensive load day covering the load and meteorological information with coarse and fine time granularity,far and near time periods.The Gaussian mixture model(GMM)was used to divide the scene of the comprehensive load day,and gray correlation analysis was used to match the scene with the coarse time granularity characteristics of the day to be forecasted.Five typical days with the highest correlation with the day to be predicted in the scene were selected to construct a“dynamic similar day”by weighting.The key features of adjacent days and dynamic similar days were used to forecast multi-loads with fine time granularity using LSTM.Comparing the static features as input and the selection method of similar days based on non-extended single features,the effectiveness of the proposed prediction method was verified.
基金financially supported by the National Natural Science Fundation of China(Grant Nos.42161065 and 41461038)。
文摘Understanding the mechanisms and risks of forest fires by building a spatial prediction model is an important means of controlling forest fires.Non-fire point data are important training data for constructing a model,and their quality significantly impacts the prediction performance of the model.However,non-fire point data obtained using existing sampling methods generally suffer from low representativeness.Therefore,this study proposes a non-fire point data sampling method based on geographical similarity to improve the quality of non-fire point samples.The method is based on the idea that the less similar the geographical environment between a sample point and an already occurred fire point,the greater the confidence in being a non-fire point sample.Yunnan Province,China,with a high frequency of forest fires,was used as the study area.We compared the prediction performance of traditional sampling methods and the proposed method using three commonly used forest fire risk prediction models:logistic regression(LR),support vector machine(SVM),and random forest(RF).The results show that the modeling and prediction accuracies of the forest fire prediction models established based on the proposed sampling method are significantly improved compared with those of the traditional sampling method.Specifically,in 2010,the modeling and prediction accuracies improved by 19.1%and 32.8%,respectively,and in 2020,they improved by 13.1%and 24.3%,respectively.Therefore,we believe that collecting non-fire point samples based on the principle of geographical similarity is an effective way to improve the quality of forest fire samples,and thus enhance the prediction of forest fire risk.
基金the Coordena??o de Aperfei?oamento de Pessoal de Nível Superior(CAPES)(process number 88882.451229/2019-01)for the scholarship granted to the first authorto Programa de ApoioàPós-Gradua??o(PROAP)for the financial support provided for data collection。
文摘The municipality of Boa Nova,in northeastern Brazil,is in an ecotone zone between the Caatinga and Atlantic Forest phytogeographic domains.The transition phytophysiognomy is seasonal forest and known locally as mata de cipó.In these phytophysiognomies there are lajedos,which are rock outcrops colonized by vegetation welladapted to extreme microclimatic variation and vegetation diversity is affected by the vegetation types of the surrounding areas.Due to the singularity of these environments and the relevance of floristic studies for conservation,this work aimed to identify the species richness and compare the similarity of the flora on four rock outcrops in Boa Nova.The flora was surveyed during exploratory walks along lajedos between 2016 and 2019.In total,162 species were identified on the Boa Nova outcrops.The flora has a composition and structure similar to semiarid outcrops,as well as endemic species that also occur in surrounding phytophysiomies.Despite the proximity,a similarity index revealed there is floristic dissimilarity between the areas.Nine new occurrences were recorded for the region,five species are threatened with extinction(Aosa gilgiana,Ficus cyclophylla,Hippeastrum stigmovittatum,Pleroma caatingae and Trixis pruskii),and 43 species are common in anthropogenic areas.This reinforces the importance of actions to conserve these areas.
文摘Neurodegeneration is attributable to metabolic disturbances in the various cell types responsible for this condition, in respect of glucose utilisation and dysfunctional mitochondrial oxidative mechanisms. The properties of neurotoxins and antagonists that limit their action are well documented in disease models, whereas effective therapy is very limited. Cell apoptosis, a general marker of neurodegeneration, is also of therapeutic interest in the treatment of cancer. cGMP nucleotide influences apoptosis and has a role in maintaining equilibrium within cell redox parameters. The chemical structure of cGMP provides a comparative template for demonstrating relative molecular similarity within the structures of natural and synthetic compounds influencing tumour cell apoptosis. The present study uses computational software to investigate molecular similarity within the structures of cGMP and compounds that modulate cell apoptosis in experimental models of diabetic peripheral neuropathy (DPN), Parkinson’s and multiple sclerosis. Differential molecular similarity demonstrated in neurotoxin and antagonist structures implicate metabolite impairment of cGMP signaling function as a common mechanism in the initial phases of these neurodegenerative conditions.
基金sponsored by the National Natural Science Foundation of China(Grants:62002200,61772319)Shandong Natural Science Foundation of China(Grant:ZR2020QF012).
文摘Given one specific image,it would be quite significant if humanity could simply retrieve all those pictures that fall into a similar category of images.However,traditional methods are inclined to achieve high-quality retrieval by utilizing adequate learning instances,ignoring the extraction of the image’s essential information which leads to difficulty in the retrieval of similar category images just using one reference image.Aiming to solve this problem above,we proposed in this paper one refined sparse representation based similar category image retrieval model.On the one hand,saliency detection and multi-level decomposition could contribute to taking salient and spatial information into consideration more fully in the future.On the other hand,the cross mutual sparse coding model aims to extract the image’s essential feature to the maximumextent possible.At last,we set up a database concluding a large number of multi-source images.Adequate groups of comparative experiments show that our method could contribute to retrieving similar category images effectively.Moreover,adequate groups of ablation experiments show that nearly all procedures play their roles,respectively.
Funding: Supported by the National Key R&D Program of China (2020YFB0905900).
Abstract: Operation control of power systems has become challenging with the increasing scale and complexity of power distribution systems and the extensive integration of renewable energy. Improving the capability of data-driven operation management, intelligent analysis, and mining is therefore urgently required. To investigate similar regularities in historical operating sections of the power distribution system and to help the power grid systematically accumulate high-value historical operation and maintenance experience and knowledge, a neural information retrieval model with an attention mechanism is proposed based on graph data computing technology. Based on the processing flow of power distribution system operating data, a technical framework for neural information retrieval is established. Combined with the natural graph characteristics of the power distribution system, a unified graph data structure and a data fusion method covering data access, data complement, and multi-source data are constructed. Furthermore, a graph node feature-embedding representation learning algorithm and a neural information retrieval algorithm model are developed. The retrieval model is trained and tested using the generated graph node feature representation vectors and is verified on operating sections of the power distribution system of a provincial grid area. The results show that the proposed method achieves high accuracy in the similarity matching of historical operating characteristics and effectively supports intelligent fault diagnosis and elimination in power distribution systems.
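As a simplified illustration only (not the attention-based retrieval model itself), the sketch below embeds each historical operating section by mean-aggregating the feature vectors of its grid nodes and matches a query section by cosine similarity; all names and numbers are hypothetical placeholders.

```python
# Retrieval of similar historical operating sections from graph-derived
# node features, using a mean-pooled section embedding and cosine similarity.
import numpy as np

rng = np.random.default_rng(42)

def embed_section(node_features: np.ndarray) -> np.ndarray:
    """Collapse the per-node features of one operating section into one vector."""
    return node_features.mean(axis=0)

# Hypothetical library of historical operating sections (nodes x features).
history = {f"section_{i}": rng.standard_normal((30, 16)) for i in range(200)}
library = {name: embed_section(feats) for name, feats in history.items()}

query = embed_section(rng.standard_normal((30, 16)))   # current operating section

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

ranked = sorted(library.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
print("Most similar historical sections:", [name for name, _ in ranked[:5]])
```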
Funding: Supported by the Basic Research Program (No. JCKY2019210B029) and the Network Threat Depth Analysis Software project (KY10800210013).
Abstract: Security issues of smart contracts have recently attracted great attention due to the enormous financial losses caused by vulnerability attacks. With the increase of critical security issues in smart contracts, there is a growing need to detect similar code for vulnerability hunting. Binary similarity detection, which quantitatively measures the difference between given pieces of code, has been widely adopted to facilitate critical security analysis. However, because of the differences between ordinary programs and smart contracts, such as the diversity of bytecode generation and high code homogeneity, directly applying existing graph matching and machine learning based techniques to smart contracts suffers from low accuracy, poor scalability, and the limitation of function-level binary similarity. This paper therefore investigates graph neural networks for detecting smart contract binary code similarity at the program level: we perform instruction-level normalization to reduce noisy code during smart contract pre-processing and construct contract control flow graphs to represent smart contracts. In particular, two improved models, a Graph Convolutional Network (GCN) and a Message Passing Neural Network (MPNN), are explored to encode the contract graphs into quantitative vectors that capture semantic information and program-wide control flow information with temporal order. Similarity detection is then accomplished efficiently by measuring the distance between two contract embeddings. To evaluate the effectiveness and efficiency of the proposed method, extensive experiments are performed on two real-world datasets, i.e., smart contracts from the Ethereum and Enterprise Operation System (EOS) blockchain platforms. The results show that the proposed approach outperforms three state-of-the-art methods by a large margin, achieving improvements of up to 6.1% and 17.06% in accuracy.
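A toy sketch of the overall idea, assuming one symmetric-normalized GCN-style propagation step followed by mean pooling and a cosine similarity between the two contract embeddings; the adjacency matrices and node features are random placeholders, and the code is a simplification of the improved GCN/MPNN models described above.

```python
# Graph-level embedding of two contract control flow graphs and their similarity,
# using one GCN-style layer (A_hat @ X @ W with ReLU) and mean-pool readout.
import numpy as np

rng = np.random.default_rng(7)

def gcn_embed(adj: np.ndarray, feats: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """One symmetric-normalized GCN layer, then mean-pool to a graph vector."""
    a_hat = adj + np.eye(adj.shape[0])                       # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))   # D^{-1/2}
    h = np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight, 0.0)
    return h.mean(axis=0)                                    # graph-level readout

n_nodes, in_dim, out_dim = 12, 8, 16
weight = rng.standard_normal((in_dim, out_dim))              # shared layer weights

adj_1 = (rng.random((n_nodes, n_nodes)) < 0.2).astype(float)
adj_1 = np.maximum(adj_1, adj_1.T)                           # toy undirected CFG
adj_2 = (rng.random((n_nodes, n_nodes)) < 0.2).astype(float)
adj_2 = np.maximum(adj_2, adj_2.T)

emb_1 = gcn_embed(adj_1, rng.standard_normal((n_nodes, in_dim)), weight)
emb_2 = gcn_embed(adj_2, rng.standard_normal((n_nodes, in_dim)), weight)

similarity = emb_1 @ emb_2 / (np.linalg.norm(emb_1) * np.linalg.norm(emb_2))
print(f"Contract embedding cosine similarity: {similarity:.3f}")
```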
Abstract: An information system is a type of knowledge representation, and attribute reduction is crucial in big data, machine learning, data mining, and intelligent systems. There are several ways to solve attribute reduction problems, but they all require a common categorization. The selection of features is a challenge for researchers in most scientific studies: when working with huge datasets, selecting all available attributes is not an option because it frequently complicates the study and decreases performance, while neglecting some attributes might jeopardize accuracy. In this case, rough set theory provides a useful approach for identifying superfluous attributes that may be ignored without sacrificing any significant information; nonetheless, investigating all available combinations of attributes creates problems of its own. Furthermore, because attribute reduction is primarily a mathematical issue, technical progress in reduction depends on the advancement of mathematical models. Since the focus of this study is the mathematical side of attribute reduction, we propose methods for reducing information systems based on classical rough set theory, the strength of rules, and the similarity matrix. We apply the proposed methods to several examples and calculate the reduction in each case. These methods expand the attribute reduction options available to researchers.
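A minimal sketch of rough-set-style attribute reduction on a toy decision table, assuming an attribute subset is acceptable when objects that are indiscernible on it still share the same decision value; the data, attribute names, and decision column are hypothetical.

```python
# Toy rough-set attribute reduction: find the smallest condition-attribute
# subsets that keep the decision table consistent (indiscernible objects agree
# on the decision). Data below are placeholders.
from itertools import combinations

objects = [
    {"a": 1, "b": 0, "c": 1, "d": "yes"},
    {"a": 1, "b": 1, "c": 0, "d": "no"},
    {"a": 0, "b": 0, "c": 1, "d": "yes"},
    {"a": 0, "b": 1, "c": 0, "d": "no"},
]
conditions, decision = ["a", "b", "c"], "d"

def consistent(attrs) -> bool:
    """True if objects indiscernible on `attrs` all share the same decision."""
    classes = {}
    for obj in objects:
        key = tuple(obj[a] for a in attrs)
        classes.setdefault(key, set()).add(obj[decision])
    return all(len(vals) == 1 for vals in classes.values())

# Minimal consistent attribute subsets are reducts of this decision table.
consistent_subsets = [set(s) for r in range(1, len(conditions) + 1)
                      for s in combinations(conditions, r) if consistent(s)]
min_size = min(len(s) for s in consistent_subsets)
print("Reducts:", [s for s in consistent_subsets if len(s) == min_size])
```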
Funding: Supported by the National Key Research and Development Program (Grant No. 2021YFA0716100), the National Key Research and Development Program of China Project (Grant No. 2018YFC0603502), the Henan Youth Science Fund Program (Grant No. 212300410105), and the Provincial Key R&D and Promotion Special Project of Henan Province (Grant No. 222102320279).
Abstract: Joint inversion is one of the most effective methods for reducing non-uniqueness in geophysical inversion. Current joint inversion methods can be divided into structural consistency constraint methods and petrophysical consistency constraint methods, which are mutually independent; joint inversion methods that comprehensively consider both types of constraint are therefore needed. This paper develops the structural similarity index (SSIM) as a new structural and petrophysical consistency constraint for the joint inversion of gravity and vertical gradient data. The SSIM constraint takes the form of a fraction, which may have analytical singularities. Converting the fractional form into a subtractive form solves the problem of analytical singularity and yields a modified structural consistency index for the joint inversion, which enhances the stability of the SSIM constraint. Compared with results reconstructed by cross-gradient inversion, the proposed method shows good performance and stability. The SSIM algorithm is a new joint inversion method with combined petrophysical and structural constraints: it promotes consistency between the recovered models in both the distribution and the structure of the physical property values. Applications to synthetic data illustrate that the proposed algorithm processes the data well and acquires good reconstructed results.
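For reference, the standard SSIM between two model vectors m1 and m2 (with local means μ1, μ2, variances σ1², σ2², covariance σ12, and stabilizing constants c1, c2) is the fractional form referred to above. One plausible subtractive reformulation, shown here as an assumption since the abstract does not reproduce the paper's exact modified index, drives the difference between numerator and denominator to zero and thereby avoids division by a near-zero denominator:

$$\mathrm{SSIM}(m_1,m_2)=\frac{\left(2\mu_1\mu_2+c_1\right)\left(2\sigma_{12}+c_2\right)}{\left(\mu_1^2+\mu_2^2+c_1\right)\left(\sigma_1^2+\sigma_2^2+c_2\right)},$$

$$\phi_{\mathrm{SSIM}}(m_1,m_2)=\left(\mu_1^2+\mu_2^2+c_1\right)\left(\sigma_1^2+\sigma_2^2+c_2\right)-\left(2\mu_1\mu_2+c_1\right)\left(2\sigma_{12}+c_2\right),$$

with perfect structural agreement (SSIM = 1) attained exactly when the subtractive form equals zero.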
Abstract: Cholesterol and cholesterol oxides affect the functional properties of cells with respect to the intracellular and extracellular distribution of compounds across cell membranes, carcinogenesis, and drug resistance. Abnormal levels of cholesterol oxides and steroids in cancerous tissues have prompted interest in steroid receptor cross-talk during cell signalling and in the steroid metabolome of cancer patients. The research literature links the cytotoxic properties of oxysterols to interference with the NO/cGMP pathway. cGMP participates in cell signalling and has a molecular structure related to both cancer-inducing and cancer-preventing agents. This study uses a molecular modelling approach to compare the structures of cholesterol oxides with cGMP. Cholesterol and cholesterol oxide structures fit a cGMP structural template in several ways, some of which are replicated by corticosteroids and gonadal steroid hormones. The results support the concept that cholesterol oxides modulate cell apoptosis and autophagy via the NO/cGMP pathway and, in conjunction with steroid hormones, participate in the regulation of cell function by cGMP.
Abstract: Text summarization models help biomedical clinicians and researchers acquire informative data from the enormous domain-specific literature with less time and effort. Evaluating and selecting the most informative sentences from biomedical articles remains challenging. This study aims to develop a dual-mode biomedical text summarization model with enhanced coverage and information content, and also examines the fitness of appropriate graph ranking techniques for improving the summarization model's performance. The input biomedical text is mapped to a graph in which meaningful sentences form the nodes and the critical associations between them form the edges. The proposed framework combines a top-k similarity technique with UMLS and a sampled probability-based clustering method, which helps uncover relevant meanings of biomedical domain-specific word vectors and find the best possible associations between crucial sentences. The quality of the framework is assessed via parameters such as information retention, coverage, readability, cohesion, and ROUGE scores in both clustering and non-clustering modes. The main benefits of the suggested technique are that it captures crucial biomedical information with increased coverage and reasonable memory consumption. Its configurable parameter settings reduce execution time, enhance memory utilization, and extract relevant information, outperforming other biomedical baseline models. An improvement of 17% is achieved when the proposed model is compared against similar biomedical text summarizers.
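The sketch below is an illustrative extractive pipeline, not the paper's UMLS-based framework: sentences become graph nodes, edges keep only each sentence's top-k cosine similarities over TF-IDF vectors, and PageRank scores select the summary sentences; the input sentences are hypothetical.

```python
# Graph-ranking extractive summarization sketch: sentence graph from top-k
# TF-IDF cosine similarities, scored with PageRank.
import networkx as nx
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "Insulin regulates glucose uptake in peripheral tissues.",
    "Glucose metabolism is impaired in type 2 diabetes.",
    "Regular exercise improves insulin sensitivity and glucose control.",
    "The study enrolled 120 participants over two years.",
]  # hypothetical input sentences

tfidf = TfidfVectorizer().fit_transform(sentences)
sim = cosine_similarity(tfidf)
np.fill_diagonal(sim, 0.0)                      # ignore self-similarity

k = 2                                           # keep the top-k neighbours per sentence
graph = nx.Graph()
graph.add_nodes_from(range(len(sentences)))
for i, row in enumerate(sim):
    for j in np.argsort(row)[::-1][:k]:
        if row[j] > 0:
            graph.add_edge(i, int(j), weight=float(row[j]))

scores = nx.pagerank(graph, weight="weight")
summary = [sentences[i] for i in sorted(scores, key=scores.get, reverse=True)[:2]]
print(summary)
```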