Purpose: The quantitative rankings of over 55,000 institutions and their institutional programs are based on the individual rankings of approximately 30 million scholars determined by their productivity, impact, and quality.
Design/methodology/approach: The institutional ranking process developed here considers all institutions in all countries and regions, thereby including those that are established as well as those that are emerging in scholarly prowess. Rankings of individual scholars worldwide are first generated using the recently introduced, fully indexed ScholarGPS database. The rankings of individual scholars are extended here to determine the lifetime and last-five-year Top 20 rankings of academic institutions over all Fields of scholarly endeavor, in 14 individual Fields, in 177 Disciplines, and in approximately 350,000 unique Specialties. Rankings associated with five specific Fields (Medicine, Engineering & Computer Science, Life Sciences, Physical Sciences & Mathematics, and Social Sciences) and two Disciplines (Chemistry, and Electrical & Computer Engineering) are presented as examples, and changes in the rankings over time are discussed.
Findings: For the Fields considered here, the Top 20 institutional rankings in Medicine have undergone the least change (lifetime versus last five years), while the rankings in Engineering & Computer Science have exhibited significant change. The evolution of institutional rankings over time is largely attributed to the recent emergence of Chinese academic institutions, although this emergence is shown to be highly Field- and Discipline-dependent.
Practical implications: Existing rankings of academic institutions have (i) often been restricted to pre-selected institutions, clouding the potential discovery of scholarly activity in emerging institutions and countries; (ii) considered only broad areas of research, limiting the ability of university leadership to act on the assessments in a concrete manner; or, in contrast, (iii) considered only a narrow area of research, diminishing the broader applicability and impact of the assessment. In general, existing institutional rankings depend on which institutions are included in the ranking process, which areas of research are considered, the breadth (or granularity) of the research areas of interest, and the methodologies used to define and quantify research performance. In contrast, the methods presented here can provide important data over a broad range of granularity, allowing responsible individuals to gauge the performance of any institution from the Overall (all Fields) level down to the level of the Specialty. The methods may also assist in identifying the root causes of shifts in institution rankings, and how these shifts vary across the hundreds of thousands of Fields, Disciplines, and Specialties of scholarly endeavor.
Originality/value: This study provides the first ranking of all academic institutions worldwide over Fields, Disciplines, and Specialties based on a unique methodology that quantifies the productivity, impact, and quality of individual scholars.
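As a rough illustration of ranking institutions from per-scholar metrics, the sketch below combines each scholar's productivity, impact, and quality into a composite score and sums the scores per institution. The weights and the aggregation rule are assumptions for illustration only, not the actual ScholarGPS methodology.

```python
# Hypothetical sketch: rank institutions by aggregating per-scholar scores.
# The composite weights and the sum-based aggregation are assumptions.

def scholar_score(productivity, impact, quality, weights=(1.0, 1.0, 1.0)):
    """Combine the three per-scholar metrics into one composite score."""
    w_p, w_i, w_q = weights
    return w_p * productivity + w_i * impact + w_q * quality

def rank_institutions(scholars):
    """scholars: list of (institution, productivity, impact, quality)."""
    totals = {}
    for inst, p, i, q in scholars:
        totals[inst] = totals.get(inst, 0.0) + scholar_score(p, i, q)
    # Highest aggregate score first.
    return sorted(totals, key=totals.get, reverse=True)

scholars = [
    ("Univ A", 10, 5, 2), ("Univ A", 4, 3, 1),
    ("Univ B", 20, 1, 1), ("Univ C", 2, 2, 2),
]
print(rank_institutions(scholars))  # ['Univ A', 'Univ B', 'Univ C']
```

The same aggregation can be restricted to scholars within one Field, Discipline, or Specialty to produce the granular rankings the abstract describes.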
Based on the characteristics of high-end products, crowd-sourcing user stories can be an effective means of gathering requirements, involving a large user base and generating a substantial amount of unstructured feedback. The key challenge lies in transforming abstract user needs into specific ones, which requires integration and analysis. We therefore propose a topic-mining-based approach to categorize, summarize, and rank product requirements from user stories. Specifically, after determining the number of story categories with pyLDAvis, we first classify the "I want to" phrases within the user stories. Classic topic models are then applied to each category to generate its name, and each post-classification user story category is defined as a requirement. Furthermore, a weighted ranking function is devised to calculate the importance of each requirement. Finally, we validate the effectiveness and feasibility of the proposed method using 2,966 crowd-sourced user stories related to smart home systems.
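To make the final ranking step concrete, here is a minimal sketch of one possible weighted ranking function over already-classified requirement categories. The weighting scheme (relative frequency blended with a mean engagement score) and the toy categories are assumptions; the paper's actual function and its pyLDAvis/LDA classification steps are not reproduced here.

```python
# Hypothetical weighted ranking of requirement categories derived from
# user stories. Each story is (category, engagement in [0, 1]).
from collections import Counter

def rank_requirements(stories, alpha=0.7):
    """Score = alpha * relative frequency + (1 - alpha) * mean engagement."""
    freq = Counter(cat for cat, _ in stories)
    total = len(stories)
    scores = {}
    for cat in freq:
        engagements = [e for c, e in stories if c == cat]
        mean_engagement = sum(engagements) / len(engagements)
        scores[cat] = alpha * freq[cat] / total + (1 - alpha) * mean_engagement
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

stories = [("lighting control", 0.9), ("lighting control", 0.8),
           ("energy report", 0.6), ("voice assistant", 0.7)]
print(rank_requirements(stories))  # 'lighting control' ranks first
```

With `alpha` closer to 1 the ranking favors frequently requested features; closer to 0 it favors the most intensely wanted ones.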
This study develops a procedure to rank agencies based on their incident responses, using roadway clearance times for crashes. The analysis is not intended to grade agencies but to help identify agencies requiring more training or resources for incident management. Previous NCHRP reports discussed using different factors, including incident severity, roadway characteristics, number of lanes involved, and time of incident, separately for estimating performance; however, they do not indicate how to incorporate all of these factors at the same time. This study therefore aims to account for multiple factors to ensure fair comparisons. It uses 149,174 crashes from Iowa that occurred from 2018 to 2021. A Tobit regression model was used to estimate the effect of different variables on roadway clearance time. Variables that cannot be controlled directly by agencies, such as crash severity, roadway type, weather conditions, and lighting conditions, were included in the analysis, as this helps to reduce bias in the ranking procedure. The clearance time of each crash is then normalized to a base condition using the regression coefficients. The normalization makes the comparison more equitable, as the effect of uncontrollable factors has already been mitigated. Finally, the agencies were ranked by their average normalized roadway clearance time. This ranking process allows agencies to track their performance on previous crashes, can be used to identify low-performing agencies that could use additional resources and training, and can be used to identify high-performing agencies to recognize for their efforts and performance.
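The normalization step can be sketched as follows: subtract from each observed clearance time the modeled contribution of factors the agency cannot control, then average per agency. The coefficient values and factor names below are made up for illustration; in the study they come from the fitted Tobit model.

```python
# Sketch: normalize clearance times to a base condition, then rank agencies.
# BETA holds hypothetical regression coefficients (minutes added per factor).

BASE = {"severe": 0, "night": 0, "snow": 0}          # assumed base condition
BETA = {"severe": 35.0, "night": 8.0, "snow": 12.0}  # hypothetical coefficients

def normalize_clearance(observed_minutes, factors):
    """Remove the estimated effect of factors the agency cannot control."""
    adjustment = sum(BETA[k] * (factors[k] - BASE[k]) for k in BETA)
    return observed_minutes - adjustment

def rank_agencies(crashes):
    """crashes: (agency, observed_minutes, factors). Lower mean is better."""
    sums, counts = {}, {}
    for agency, minutes, factors in crashes:
        sums[agency] = sums.get(agency, 0.0) + normalize_clearance(minutes, factors)
        counts[agency] = counts.get(agency, 0) + 1
    return sorted(sums, key=lambda a: sums[a] / counts[a])

crashes = [
    ("Agency X", 90, {"severe": 1, "night": 1, "snow": 0}),  # normalized to 47
    ("Agency Y", 50, {"severe": 0, "night": 0, "snow": 0}),  # stays 50
]
print(rank_agencies(crashes))  # ['Agency X', 'Agency Y']
```

Note how Agency X, despite a longer raw clearance time, ranks first once the severe nighttime conditions are accounted for, which is exactly the bias the normalization is meant to remove.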
Through the internet and cloud computing, users may access their data as well as the programs they have installed. It is now more challenging than ever to choose which cloud service provider to use. When it comes to the dependability of cloud infrastructure services, those who supply cloud services and those who seek them have an equal responsibility to exercise the utmost care. Further caution is therefore required to ensure that appropriate values are reached in light of the ever-increasing need for correct decision-making. The purpose of this study is to provide an updated computational ranking approach for multi-criteria decision-making that uses fuzzy logic in the context of a public cloud scenario. This improved computational ranking system is referred to as the improvised VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method. It gives users access to a trustworthy assortment of cloud services that fit their needs. The proposed technique is broken down into nine discrete steps. To verify these steps, a numerical example has been evaluated for each of six different scenarios, and the outcomes have been simulated.
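For readers unfamiliar with VIKOR, the sketch below implements the classical crisp method on benefit criteria: per-criterion best/worst values, the group utility S and individual regret R, and the compromise measure Q. This is not the paper's nine-step improvised fuzzy variant; the provider scores and weights are hypothetical.

```python
# Minimal classical VIKOR (benefit criteria only); an illustrative sketch,
# not the improvised fuzzy VIKOR proposed in the paper.

def vikor(matrix, weights, v=0.5):
    """matrix[i][j]: score of alternative i on benefit criterion j.
    Returns alternative indices ordered best-first by the Q measure."""
    m, n = len(matrix), len(matrix[0])
    f_best = [max(row[j] for row in matrix) for j in range(n)]
    f_worst = [min(row[j] for row in matrix) for j in range(n)]
    S, R = [], []
    for row in matrix:
        terms = [weights[j] * (f_best[j] - row[j]) / ((f_best[j] - f_worst[j]) or 1)
                 for j in range(n)]
        S.append(sum(terms))   # group utility
        R.append(max(terms))   # individual regret
    s_best, s_worst, r_best, r_worst = min(S), max(S), min(R), max(R)
    Q = [v * (S[i] - s_best) / ((s_worst - s_best) or 1)
         + (1 - v) * (R[i] - r_best) / ((r_worst - r_best) or 1)
         for i in range(m)]
    return sorted(range(m), key=lambda i: Q[i])

# Three cloud providers scored on reliability, cost-efficiency, support.
providers = [[0.9, 0.6, 0.8], [0.7, 0.9, 0.6], [0.5, 0.5, 0.9]]
print(vikor(providers, weights=[0.5, 0.3, 0.2]))  # [0, 1, 2]: provider 0 best
```

The parameter `v` balances majority utility against worst-case regret; the fuzzy extension replaces the crisp scores with fuzzy numbers before this computation.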
Expanding internet-connected services have increased cyberattacks, many of which have grave and disastrous repercussions. An Intrusion Detection System (IDS) plays an essential role in network security, since it helps protect the network from vulnerabilities and attacks. Although extensive research has been reported on IDS, detecting novel intrusions with optimal features and reducing false alarm rates are still challenging. We therefore developed a novel fusion-based feature importance method that reduces the high-dimensional feature space, helping to identify attacks accurately with a lower false alarm rate. Initially, various preprocessing techniques are applied to improve training data quality, and the Adaptive Synthetic oversampling technique generates synthetic samples for the minority classes. The proposed fusion-based feature importance combines filter, wrapper, and embedded approaches, such as mutual information, random forest importance, permutation importance, and Shapley Additive exPlanations (SHAP)-based feature importance, together with statistical measures such as the difference of mean and median and the standard deviation, to rank each feature according to its importance. The most relevant features are then retrieved by simple plurality voting. These optimal features are fed to various models: Extra Tree (ET), Logistic Regression (LR), Support Vector Machine (SVM), Decision Tree (DT), and Extreme Gradient Boosting Machine (XGBM). The hyperparameters of the classification models are tuned with Halving Random Search cross-validation to enhance performance. The experiments were carried out on both the original imbalanced data and the balanced data, and the outcomes demonstrate that the balanced scenario outperformed the imbalanced one. Finally, the experimental analysis shows that the proposed fusion-based feature importance performed well with XGBM, giving accuracies of 99.86%, 99.68%, and 92.4% with 9, 7, and 8 features and training times of 1.5, 4.5, and 5.5 s on the Network Security Laboratory-Knowledge Discovery in Databases (NSL-KDD), Canadian Institute for Cybersecurity (CIC-IDS 2017), and UNSW-NB15 datasets, respectively. In addition, the suggested technique has been examined and contrasted with state-of-the-art methods on the three datasets.
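The fusion step, combining several per-method feature rankings by plurality voting, can be sketched as below. The three toy rankings stand in for outputs of methods like mutual information, permutation importance, and SHAP; the feature names and the specific top-k voting rule are illustrative assumptions.

```python
# Sketch: each importance method votes for its top-k features, and the
# features with the most votes are retained. Rankings here are hypothetical.
from collections import Counter

def fuse_by_plurality(rankings, k=3, keep=3):
    """rankings: per-method lists of feature names, best first."""
    votes = Counter()
    for ranking in rankings:
        votes.update(ranking[:k])          # each method votes for its top-k
    return [f for f, _ in votes.most_common(keep)]

rankings = [
    ["dur", "bytes", "flags", "ttl"],      # e.g. mutual information
    ["bytes", "dur", "ttl", "flags"],      # e.g. permutation importance
    ["bytes", "flags", "dur", "ttl"],      # e.g. SHAP values
]
print(fuse_by_plurality(rankings))
```

Features that several heterogeneous methods agree on survive the vote, which is the intuition behind fusing filter, wrapper, and embedded importances rather than trusting any single one.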
In the past decade, blockchain has evolved as a promising solution for developing secure distributed ledgers and has gained massive attention. However, current blockchain systems face the problems of limited throughput, poor scalability, and high latency. Because consensus algorithms fail to manage nodes' identities, blockchain technology is considered inappropriate for many applications, e.g., in IoT environments, because of poor scalability. This paper proposes a blockchain consensus mechanism called the Advanced DAG-based Ranking (ADR) protocol to improve blockchain scalability and throughput. The ADR protocol uses a directed acyclic graph ledger in which nodes are placed according to their ranking positions in the graph. It allows honest nodes to use the Directed Acyclic Graph (DAG) topology to write blocks and verify transactions instead of a chain of blocks. Using a three-step strategy, the protocol ensures that the system is secured against double-spending attacks and allows for higher throughput and scalability. The first step involves the safe entry of nodes into the system by verifying their private and public keys. The next step involves developing an advanced DAG ledger so that nodes can start block production and verify transactions. In the third step, a ranking algorithm separates out the nodes created by attackers. After eliminating attacker nodes, the remaining nodes are ranked according to their performance in the system, and true nodes are arranged in blocks in topological order. As a result, the ADR protocol is suitable for applications in the Internet of Things (IoT). We evaluated ADR on EC2 clusters with more than 100 nodes and achieved better transaction throughput and liveness of the network while adding malicious nodes. Based on the simulation results, this research determined that transaction performance was significantly improved over blockchains like IOTA and ByteBall.
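The third step, arranging the surviving nodes in topological order of the DAG while respecting a performance ranking, can be sketched with Kahn's algorithm. The node names, edges, and scores below are hypothetical; the real protocol's ranking criteria are not reproduced here.

```python
# Sketch: topologically order DAG nodes, preferring higher-performing nodes
# among those that are ready. Toy data; not the actual ADR ranking rule.

def topo_rank(edges, scores):
    """edges: (parent, child) pairs; scores: performance per node."""
    nodes = set(scores)
    indeg = {n: 0 for n in nodes}
    children = {n: [] for n in nodes}
    for a, b in edges:
        children[a].append(b)
        indeg[b] += 1
    # Kahn's algorithm; among ready nodes, pick the best-performing first.
    ready = sorted((n for n in nodes if indeg[n] == 0),
                   key=scores.get, reverse=True)
    order = []
    while ready:
        n = ready.pop(0)
        order.append(n)
        for c in children[n]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
        ready.sort(key=scores.get, reverse=True)
    return order

edges = [("g", "a"), ("g", "b"), ("a", "c"), ("b", "c")]
scores = {"g": 9, "a": 5, "b": 7, "c": 6}
print(topo_rank(edges, scores))  # ['g', 'b', 'a', 'c']
```

The topological constraint guarantees every block is ordered after the blocks it references, while the score-based tie-breaking lets better-performing nodes contribute earlier.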
The basic idea behind a personalized web search is to deliver search results tailored to meet user needs, one of the growing concepts in web technologies. The personalized web search presented in this paper exploits implicit feedback on user satisfaction during the user's web browsing history to construct a user profile storing the web pages the user is highly interested in. A weight is assigned to each page stored in the user's profile; this weight reflects the user's interest in the page. We call this weight the relative rank of the page, since it depends on the user issuing the query. The ranking algorithm provided in this paper is therefore based on the principle that the rank assigned to a page is the sum of two rank values, R_rank and A_rank. A_rank is an absolute rank: it is fixed for all users issuing the same query, since it depends only on the link structure of the web and on the keywords of the query. Thus, it can be calculated by the PageRank algorithm suggested by Brin and Page in 1998 and used by the Google search engine. R_rank, the relative rank, is calculated by the methods given in this paper, which depend mainly on recording implicit measures of user satisfaction during the user's previous browsing history.
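The combination rule described above can be sketched in a few lines: the final score of a page is the sum of a query-dependent absolute rank (such as a PageRank-derived score) and a user-dependent relative rank from the profile. All page names and numbers below are hypothetical.

```python
# Sketch of rank = A_rank + R_rank. The profile weights would come from
# implicit satisfaction measures; here they are made-up values.

def final_rank(page, a_rank, profile):
    """a_rank: absolute score for this query; profile: page -> R_rank."""
    return a_rank + profile.get(page, 0.0)

profile = {"python.org/docs": 0.6}   # pages the user showed interest in
results = {"python.org/docs": 0.4, "example.com/python": 0.7}

ranked = sorted(results, key=lambda p: final_rank(p, results[p], profile),
                reverse=True)
print(ranked)  # ['python.org/docs', 'example.com/python']
```

Two users issuing the same query share the same A_rank values but get different orderings because their profiles contribute different R_rank terms.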
Background: Cause-of-death rankings are often used for planning or evaluating health policy measures. In the European Union, some countries produce cause-of-death statistics by manual coding of death certificates, while other countries use an automated coding system. The outcome of these two methods, in terms of the underlying cause of death selected for statistics, may vary considerably. This study therefore explores the effect of the coding method on the ranking of countries by major causes of death. Method: Age- and sex-standardized rates were extracted for 33 European (related) countries from the cause-of-death registry of the European Statistical Office (Eurostat). Wilcoxon's rank sum test was applied to the ranking of countries by major causes of death. Results: Statistically significant differences due to coding method were identified for dementia, stroke, and pneumonia. These differences could be explained by a different selection of dementia or pneumonia as the underlying cause of death and by a different certification practice for stroke. Conclusion: The coding method should be taken into account when constructing or interpreting rankings of countries by cause of death.
In the defuzzification procedure, the output fuzzy set is reduced to a single value; defuzzification is employed to provide a comprehensible outcome from a fuzzy inference process. This paper provides further information about the defuzzification approach for quadrilateral fuzzy numbers, which may be used to convert them into discrete values, and demonstrates how useful fuzzy ranking systems can be. Our major purpose is to develop a new ranking method for generalized quadrilateral fuzzy numbers; the primary objective of the research is to provide a novel approach to the accurate evaluation of various kinds of fuzzy numbers. Fuzzy ranking properties are examined, and the counterexamples of Lee and Chen are used to demonstrate the fallacy of an existing ranking technique. A new approach has therefore been developed, based on the generalized quadrilateral-form fuzzy number and a centroid methodology, for dealing with problems in fuzzy risk analysis, risk management, industrial engineering and optimization, medicine, and artificial intelligence. All of these scenarios are amenable to the solution provided by the generalized quadrilateral-shape fuzzy number using the centroid methodology, and the method is presented in a straightforward manner that is easy to grasp. The ranking method is explained in detail, along with numerical examples to illustrate it. Finally, stability evaluations clarify why the generalized quadrilateral-shape fuzzy number obtained by the centroid methodology outperforms other ranking methods.
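As one concrete instance of a centroid-based ranking, the sketch below represents a generalized quadrilateral (trapezoidal) fuzzy number (a, b, c, d; w) as the polygon with vertices (a, 0), (b, w), (c, w), (d, 0), computes its geometric centroid with the shoelace formula, and ranks by a centroid-derived score. The choice of x̄ · ȳ as the score is an assumption for illustration, not necessarily the paper's exact index.

```python
# Sketch: centroid of a generalized trapezoidal fuzzy number via the
# shoelace (polygon centroid) formula, and a hypothetical ranking score.

def centroid(a, b, c, d, w):
    pts = [(a, 0.0), (b, w), (c, w), (d, 0.0)]
    area2 = cx = cy = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
        cross = x0 * y1 - x1 * y0
        area2 += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    return cx / (3 * area2), cy / (3 * area2)

def rank_score(fuzzy_number):
    x_bar, y_bar = centroid(*fuzzy_number)
    return x_bar * y_bar        # assumed ranking index

A = (1.0, 2.0, 3.0, 4.0, 1.0)   # symmetric trapezoid centred at 2.5
B = (2.0, 3.0, 4.0, 5.0, 1.0)   # same shape shifted right
print(rank_score(B) > rank_score(A))  # True: B ranks higher
```

Because B's membership function sits entirely to the right of A's, any reasonable centroid-based index should rank B higher, which this sketch reproduces.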
Education quality has undoubtedly become an important local and international benchmark for education, and an institute's ranking is assessed based on the quality of its education, research projects, theses, and dissertations, which has always been controversial. This research paper is therefore motivated by the ranking of institutes all over the world. Data on institutes are obtained through Google Scholar (GS) as input to investigate the United Kingdom's Research Excellence Framework (UK-REF) process. For this purpose, the current research used a bespoke program to evaluate the institutes' ranking based on their sources. The bespoke program requires changes to improve the results by addressing three methodological issues: first, redundant profiles, which inflate citation counts and ranks and produce false results; second, the exclusion of theses and dissertations, so that only actual publications count toward citations; and third, the elimination of falsely owned articles from scholars' profiles. To accomplish this task, the experimental design involved collecting data from 120 UK-REF institutes and GS for the present year to enhance the correlation analysis in this new evaluation. The data extracted from GS are processed into structured data and then used to generate statistical computations of citation analysis that contribute to the citation-based ranking. The research adopted the predictive approach of correlational research. Furthermore, the experimental evaluation reported encouraging results in comparison to the previous modification made by the proposed taxonomy. The paper discusses the limitations of the current evaluation and suggests potential paths to improve the research impact algorithm.
Using improved prospect theory with linear transformations of rewarding good and punishing bad (RGPBIT), a new investment ranking model for power grid construction projects (PGCPs) is proposed. Given the uncertainty of each index value in the market environment, fuzzy numbers are used to describe qualitative indicators and interval numbers are used to describe quantitative ones. Taking into account the decision-maker's subjective risk attitudes, a multi-criteria decision-making (MCDM) method based on improved prospect theory is proposed. First, the [−1, 1] RGPBIT operator is proposed to normalize the original data and obtain the best and worst schemes of the PGCPs. Furthermore, correlation coefficients between the interval/fuzzy numbers and the best/worst schemes are defined and introduced into prospect theory to improve its value and loss functions, yielding the positive and negative prospect value matrices of the project. Then, an optimization model with maximum comprehensive prospect value is constructed, the optimal attribute weights are determined, and the PGCPs are ranked accordingly. Taking four PGCPs of the IEEE RTS-79 node system as examples, the feasibility and effectiveness of the proposed method are illustrated.
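For readers new to prospect theory, the sketch below shows the classical value function that this family of MCDM methods builds on: outcomes are evaluated relative to a reference point, with diminishing sensitivity to gains and amplified sensitivity to losses. The parameter values are the common Tversky-Kahneman defaults, not necessarily those of the improved model in the paper.

```python
# Classical prospect-theory value function; a baseline sketch, not the
# paper's improved value/loss functions with correlation coefficients.

def prospect_value(outcome, reference, alpha=0.88, beta=0.88, lam=2.25):
    x = outcome - reference
    if x >= 0:
        return x ** alpha               # diminishing sensitivity to gains
    return -lam * ((-x) ** beta)        # losses loom larger than gains

# Loss aversion: a loss of 10 hurts more than a gain of 10 helps.
print(prospect_value(110, 100))         # ≈ 7.59
print(prospect_value(90, 100))          # ≈ -17.07
```

Ranking schemes by prospect value thus penalizes projects whose indicators fall below the reference scheme more heavily than it rewards symmetric gains, capturing the decision-maker's risk attitude.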
In conventional techniques for evaluating the severity index, clustering and loading require many iterations, leading to considerable computational delay. This research article therefore presents a novel procedure for quickly predicting line severity and clustering by incorporating machine learning. Polynomial (ZIP) load modelling, with constant impedance (Z), constant current (I), and constant active power (P) components, is developed for the IEEE 14-bus and Indian 118-bus systems considered in the analysis of power system security. The severity of each line is found using a Hybrid Line Stability Ranking Index (HLSRI), which assists machine learning with the J48 algorithm in inferring the most severely affected lines, adopting IEEE standards for the compensation needed to maintain power system stability. The simulation is performed in the WEKA environment and employs supervised learning, ordered by severity, to ensure the safety of the power system. The Unified Power Flow Controller (UPFC), a FACTS device, is used to compensate for the losses by maintaining the voltage characteristics. The finite element analysis findings are compared with existing procedures and numerical equations for authentication.
Deep neural networks (DNNs) have achieved great success in many data processing applications. However, high computational complexity and storage costs make deep learning difficult to use on resource-constrained devices, and its large power cost is not environmentally friendly. In this paper, we focus on low-rank optimization for efficient deep learning techniques. In the space domain, DNNs are compressed by low-rank approximation of the network parameters, which directly reduces the storage requirement through a smaller number of network parameters. In the time domain, the network parameters can be trained in a few subspaces, which enables efficient training with fast convergence. Model compression in the spatial domain is summarized into three categories: pre-train, pre-set, and compression-aware methods. With a series of integrable techniques discussed, such as sparse pruning, quantization, and entropy coding, these can be assembled into an integrated framework with lower computational complexity and storage. In addition to this summary of recent technical advances, we have two findings that motivate future work. One is that the effective rank, derived from the Shannon entropy of the normalized singular values, outperforms other conventional sparsity measures such as the ℓ1 norm for network compression. The other is a spatial and temporal balance for tensorized neural networks: to accelerate the training of tensorized neural networks, it is crucial to leverage redundancy for both model compression and subspace training.
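The effective rank mentioned in the first finding can be computed directly: normalize the singular values into a distribution, take its Shannon entropy, and exponentiate (the Roy-Vetterli definition). The sketch below applies it to a random full-rank matrix and a rank-2 matrix; the matrices are illustrative, not network weights from the paper.

```python
# Sketch: effective rank = exp(Shannon entropy of normalized singular values).
import numpy as np

def effective_rank(matrix, eps=1e-12):
    s = np.linalg.svd(matrix, compute_uv=False)
    p = s / s.sum()                       # singular values as a distribution
    entropy = -np.sum(p * np.log(p + eps))
    return float(np.exp(entropy))

rng = np.random.default_rng(0)
full = rng.standard_normal((8, 8))        # generically full-rank
low = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))  # rank 2
print(effective_rank(full) > effective_rank(low))  # True
```

Unlike the integer matrix rank, this measure is continuous in the singular values, which makes it usable as a differentiable-style target when choosing how aggressively to truncate a layer's low-rank factorization.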
On the basis of ESI data, all universities are ranked in 92 out of 105 world-class disciplines. There are no ESI data (either publications or citations) for the remaining 13 world-class disciplines.
文摘Based on the characteristics of high-end products,crowd-sourcing user stories can be seen as an effective means of gathering requirements,involving a large user base and generating a substantial amount of unstructured feedback.The key challenge lies in transforming abstract user needs into specific ones,requiring integration and analysis.Therefore,we propose a topic mining-based approach to categorize,summarize,and rank product requirements from user stories.Specifically,after determining the number of story categories based on py LDAvis,we initially classify“I want to”phrases within user stories.Subsequently,classic topic models are applied to each category to generate their names,defining each post-classification user story category as a requirement.Furthermore,a weighted ranking function is devised to calculate the importance of each requirement.Finally,we validate the effectiveness and feasibility of the proposed method using 2966 crowd-sourced user stories related to smart home systems.
文摘This study develops a procedure to rank agencies based on their incident responses using roadway clearance times for crashes. This analysis is not intended to grade agencies but to assist in identifying agencies requiring more training or resources for incident management. Previous NCHRP reports discussed usage of different factors including incident severity, roadway characteristics, number of lanes involved and time of incident separately for estimating the performance. However, it does not tell us how to incorporate all the factors at the same time. Thus, this study aims to account for multiple factors to ensure fair comparisons. This study used 149,174 crashes from Iowa that occurred from 2018 to 2021. A Tobit regression model was used to find the effect of different variables on roadway clearance time. Variables that cannot be controlled directly by agencies such as crash severity, roadway type, weather conditions, lighting conditions, etc., were included in the analysis as it helps to reduce bias in the ranking procedure. Then clearance time of each crash is normalized into a base condition using the regression coefficients. The normalization makes the process more efficient as the effect of uncontrollable factors has already been mitigated. Finally, the agencies were ranked by their average normalized roadway clearance time. This ranking process allows agencies to track their performance of previous crashes, can be used in identifying low performing agencies that could use additional resources and training, and can be used to identify high performing agencies to recognize for their efforts and performance.
文摘Through the use of the internet and cloud computing,users may access their data as well as the programmes they have installed.It is now more challenging than ever before to choose which cloud service providers to take advantage of.When it comes to the dependability of the cloud infrastructure service,those who supply cloud services,as well as those who seek cloud services,have an equal responsibility to exercise utmost care.Because of this,further caution is required to ensure that the appropriate values are reached in light of the ever-increasing need for correct decision-making.The purpose of this study is to provide an updated computational ranking approach for decision-making in an environment with many criteria by using fuzzy logic in the context of a public cloud scenario.This improved computational ranking system is also sometimes referred to as the improvised VlseKriterijumska Optimizacija I Kompromisno Resenje(VIKOR)method.It gives users access to a trustworthy assortment of cloud services that fit their needs.The activity that is part of the suggested technique has been broken down into nine discrete parts for your convenience.To verify these stages,a numerical example has been evaluated for each of the six different scenarios,and the outcomes have been simulated.
Abstract: The expansion of internet-connected services has increased cyberattacks, many of which have grave and disastrous repercussions. An Intrusion Detection System (IDS) plays an essential role in network security, since it helps protect the network from vulnerabilities and attacks. Although extensive research has been reported on IDS, detecting novel intrusions with optimal features and reducing false alarm rates remain challenging. We therefore developed a novel fusion-based feature importance method to reduce the high-dimensional feature space, which helps to identify attacks accurately with a lower false alarm rate. Initially, various preprocessing techniques are applied to improve training data quality, and the Adaptive Synthetic oversampling technique generates synthetic samples for minority classes. The proposed fusion combines filter, wrapper, and embedded approaches, namely mutual information, random forest importance, permutation importance, Shapley Additive exPlanations (SHAP)-based feature importance, and statistical measures such as the difference of mean and median and the standard deviation, each producing a ranking of the features. Simple plurality voting over these rankings then retrieves the most optimal features, which are fed to several models: Extra Tree (ET), Logistic Regression (LR), Support Vector Machine (SVM), Decision Tree (DT), and Extreme Gradient Boosting Machine (XGBM). The hyperparameters of the classification models are tuned with Halving Random Search cross-validation to enhance performance. Experiments were carried out on both the original imbalanced data and the balanced data, and the balanced scenario clearly outperformed the imbalanced one. The experimental analysis showed that the proposed fusion-based feature importance performed well with XGBM, giving accuracies of 99.86%, 99.68%, and 92.4% with 9, 7, and 8 features and training times of 1.5, 4.5, and 5.5 s on the Network Security Laboratory-Knowledge Discovery in Databases (NSL-KDD), Canadian Institute for Cybersecurity (CIC-IDS 2017), and UNSW-NB15 datasets, respectively. In addition, the suggested technique has been examined and contrasted with state-of-the-art methods on the three datasets.
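The plurality-voting fusion step described above can be sketched as follows. Each importance method contributes one ranking of the features; a feature earns a vote for every method that places it in that method's top-k, and the highest-voted features are retained. The feature names and rankings are hypothetical, and the paper's specific tie-breaking rule is not stated, so alphabetical order is assumed here purely for determinism.

```python
from collections import Counter

def fuse_by_plurality(rankings, top_k):
    """Fuse several feature rankings by plurality voting.

    rankings: list of lists of feature names, each ordered best-first.
    A feature gets one vote per method that ranks it in the top_k.
    Returns features sorted by votes (ties broken alphabetically)."""
    votes = Counter()
    for ranking in rankings:
        votes.update(ranking[:top_k])
    return sorted(votes, key=lambda f: (-votes[f], f))

# Hypothetical rankings from three of the importance methods
mi = ["dur", "bytes", "flags", "proto"]      # mutual information
rf = ["bytes", "dur", "proto", "flags"]      # random forest importance
shap = ["bytes", "flags", "dur", "proto"]    # SHAP-based importance
selected = fuse_by_plurality([mi, rf, shap], top_k=2)
```

In the paper's pipeline the retained features would then be passed to the ET/LR/SVM/DT/XGBM classifiers.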
Abstract: In the past decade, blockchain has evolved as a promising solution for developing secure distributed ledgers and has gained massive attention. However, current blockchain systems face limited throughput, poor scalability, and high latency. Because consensus algorithms fail to manage node identities, blockchain technology is considered inappropriate for many applications, e.g., IoT environments, owing to poor scalability. This paper proposes a blockchain consensus mechanism called the Advanced DAG-based Ranking (ADR) protocol to improve blockchain scalability and throughput. The ADR protocol uses a directed acyclic graph ledger in which nodes are placed according to their ranking positions, allowing honest nodes to use the Directed Acyclic Graph (DAG) topology to write blocks and verify transactions instead of a chain of blocks. A three-step strategy secures the system against double-spending attacks and allows higher throughput and scalability. The first step is the safe entry of nodes into the system by verifying their private and public keys. The second step develops the advanced DAG ledger so that nodes can start block production and verify transactions. In the third step, a ranking algorithm separates the nodes created by attackers; after these are eliminated, the remaining nodes are ranked according to their performance in the system, and the true nodes are arranged in blocks in topological order. As a result, the ADR protocol is suitable for Internet of Things (IoT) applications. We evaluated ADR on EC2 clusters with more than 100 nodes and achieved better transaction throughput and network liveness while adding malicious nodes. Based on the simulation results, transaction performance was significantly improved over blockchains such as IOTA and ByteBall.
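The third step above, pruning attacker nodes and arranging the survivors in topological order, can be illustrated with a toy DAG ledger. This sketch assumes a made-up ledger in which each entry references its predecessors; the paper's actual ranking algorithm for flagging malicious nodes is not reproduced, so the `malicious` set is simply taken as given.

```python
from graphlib import TopologicalSorter

def order_true_nodes(dag, malicious):
    """Drop flagged nodes from a DAG ledger and topologically order the rest.

    dag: maps each node to the set of predecessor nodes it references.
    Returns the surviving nodes so every node follows its predecessors."""
    pruned = {node: {p for p in preds if p not in malicious}
              for node, preds in dag.items() if node not in malicious}
    return list(TopologicalSorter(pruned).static_order())

# Hypothetical ledger: "g" is the genesis entry, "x" was created by an attacker
ledger = {"g": set(), "a": {"g"}, "b": {"g"}, "c": {"a", "b"}, "x": {"a"}}
ordered = order_true_nodes(ledger, malicious={"x"})
```

`graphlib.TopologicalSorter` (standard library since Python 3.9) takes exactly this predecessor mapping, so the DAG ordering needs no hand-rolled traversal.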
Abstract: The basic idea behind a personalized web search is to deliver search results tailored to user needs, one of the growing concepts in web technologies. The personalized web search presented in this paper exploits implicit feedback on user satisfaction during the user's web browsing history to construct a profile storing the web pages the user is highly interested in. A weight is assigned to each page stored in the user's profile, reflecting the user's interest in that page. We call this weight the relative rank of the page, since it depends on the user issuing the query. The ranking algorithm provided in this paper is therefore based on the principle that the rank assigned to a page is the sum of two rank values, R_rank and A_rank. A_rank is an absolute rank: it is fixed for all users issuing the same query and depends only on the link structure of the web and the keywords of the query, so it can be calculated by the PageRank algorithm suggested by Brin and Page in 1998 and used by the Google search engine. R_rank is the relative rank, calculated by the methods given in this paper, which depend mainly on recording implicit measures of user satisfaction during the user's previous browsing history.
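The additive scheme above, final score = A_rank + R_rank, can be sketched with a minimal power-iteration PageRank supplying A_rank and a per-user profile weight standing in for R_rank. The link graph and profile weights are hypothetical, and the paper's actual method for deriving R_rank from implicit satisfaction measures is not reproduced.

```python
def pagerank(links, d=0.85, iters=50):
    """Minimal power-iteration PageRank; links maps page -> outgoing pages."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = d * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

def personalized_rank(links, profile_weights):
    """Final score per page = A_rank (PageRank) + R_rank (profile weight)."""
    a_rank = pagerank(links)
    return {p: a_rank[p] + profile_weights.get(p, 0.0) for p in links}

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
scores = personalized_rank(web, {"b": 0.5})  # this user favors page "b"
```

Because R_rank is user-specific, two users issuing the same query share A_rank but can end up with different final orderings.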
Abstract: Background: Cause-of-death rankings are often used for planning or evaluating health policy measures. In the European Union, some countries produce cause-of-death statistics by manual coding of death certificates, while others use an automated coding system. The underlying cause of death selected for statistics may vary considerably between these two methods, so this study explores the effect of coding method on the ranking of countries by major causes of death. Method: Age- and sex-standardized rates were extracted for 33 European (related) countries from the cause-of-death registry of the European Statistical Office (Eurostat). Wilcoxon's rank sum test was applied to the ranking of countries by major causes of death. Results: Statistically significant differences due to coding method were identified for dementia, stroke and pneumonia. These differences could be explained by a different selection of dementia or pneumonia as underlying cause of death and by a different certification practice for stroke. Conclusion: Coding method should be taken into account when constructing or interpreting rankings of countries by cause of death.
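The Wilcoxon rank-sum test used above compares the ranks of one group (say, manual-coding countries) within the pooled data. A minimal computation of the rank-sum statistic W, with average ranks for ties, is sketched below; the rates are invented for illustration and the normal-approximation p-value step is omitted.

```python
def rank_sum(sample_a, sample_b):
    """Wilcoxon rank-sum statistic W for sample_a: the sum of its 1-based
    ranks in the pooled data, with tied values sharing their average rank."""
    pooled = sorted((value, idx) for idx, value in enumerate(sample_a + sample_b))
    ranks = {}
    j = 0
    while j < len(pooled):
        k = j
        while k + 1 < len(pooled) and pooled[k + 1][0] == pooled[j][0]:
            k += 1  # extend over a group of tied values
        avg = (j + k) / 2 + 1  # average 1-based rank of the tie group
        for m in range(j, k + 1):
            ranks[pooled[m][1]] = avg
        j = k + 1
    return sum(ranks[i] for i in range(len(sample_a)))

# Hypothetical standardized stroke rates per 100,000
manual = [61.0, 74.5, 58.2]          # manual-coding countries
automated = [52.1, 49.9, 55.4, 50.7] # automated-coding countries
w = rank_sum(manual, automated)
```

An unusually high or low W relative to its expected value n_a(n_a + n_b + 1)/2 signals a systematic rank shift between the two coding methods.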
Abstract: Defuzzification reduces the output of a fuzzy set to a single crisp value and is employed to provide a comprehensible outcome from a fuzzy inference process. This paper presents a defuzzification approach for quadrilateral fuzzy numbers that converts them into discrete values, demonstrating how useful fuzzy ranking systems can be. Our major purpose is to develop a new ranking method for generalized quadrilateral fuzzy numbers, and the primary objective of the research is to provide a novel approach to the accurate evaluation of various kinds of fuzzy numbers. Fuzzy ranking properties are examined, and the counterexamples of Lee and Chen demonstrate the fallacy of the existing ranking technique. A new approach, the generalized quadrilateral fuzzy number using the centroid methodology, has therefore been developed for dealing with problems in fuzzy risk analysis, risk management, industrial engineering and optimization, medicine, and artificial intelligence, all of which are amenable to this solution. The ranking method is explained in detail, with numerical examples to illustrate it, and is laid out in a straightforward, accessible manner. Finally, stability evaluations clarify why the generalized quadrilateral fuzzy number obtained by the centroid methodology outperforms other ranking methods.
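A centroid-based ranking of quadrilateral (trapezoidal) fuzzy numbers can be illustrated numerically: compute the x-coordinate of each number's centroid and rank by that crisp value. The sketch below assumes a generalized trapezoidal number (a, b, c, d; w) with a < b <= c < d and approximates the centroid by midpoint integration; the paper's exact centroid formula and its stability analysis are not reproduced.

```python
def centroid_x(a, b, c, d, w=1.0, steps=100_000):
    """Approximate the x-centroid of a generalized trapezoidal fuzzy
    number (a, b, c, d; w) via midpoint-rule integration of x*mu(x)/mu(x)."""
    def mu(x):
        # membership function, assuming a < b <= c < d
        if a <= x < b:
            return w * (x - a) / (b - a)
        if b <= x <= c:
            return w
        if c < x <= d:
            return w * (d - x) / (d - c)
        return 0.0

    h = (d - a) / steps
    num = den = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * h
        m = mu(x)
        num += x * m * h
        den += m * h
    return num / den

# Two hypothetical fuzzy numbers: the larger centroid ranks higher
lo = centroid_x(1, 2, 3, 4)  # symmetric, centroid at 2.5
hi = centroid_x(2, 3, 4, 5)
```

For a symmetric trapezoid such as (1, 2, 3, 4) the centroid sits at the axis of symmetry, which gives a quick sanity check on the integration.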
Abstract: Education quality has undoubtedly become an important local and international benchmark for education, and an institute's ranking is assessed based on the quality of education, research projects, theses, and dissertations, which has always been controversial. This research is therefore motivated by institute rankings worldwide. Data on institutes are obtained through Google Scholar (GS) as input to investigate the United Kingdom's Research Excellence Framework (UK-REF) process. For this purpose, the current research used a bespoke program to evaluate the institutes' ranking based on their source. The bespoke program requires changes to improve the results by addressing three methodological issues: first, redundant profiles, which inflate citation counts and ranks and produce false results; second, the exclusion of theses and dissertations so that only actual publications count toward citations; and third, the elimination of falsely owned articles from scholars' profiles. To accomplish this, the experimental design collected data from 120 UK-REF institutes and GS for the present year to enhance the correlation analysis in this new evaluation. The data extracted from GS are processed into structured form and then used to generate statistical computations of citation analysis that contribute to the citation-based ranking. The research adopted the predictive approach of correlational research. Experimental evaluation reported encouraging results in comparison with the previous modification made by the proposed taxonomy. This paper discusses the limitations of the current evaluation and suggests potential paths to improve the research impact algorithm.
Abstract: Using improved prospect theory with linear transformations of rewarding good and punishing bad (RGPBIT), a new investment ranking model for power grid construction projects (PGCPs) is proposed. Given the uncertainty of each index value under the market environment, fuzzy numbers are used to describe qualitative indicators and interval numbers to describe quantitative ones. Taking into account the decision-maker's subjective risk attitudes, a multi-criteria decision-making (MCDM) method based on improved prospect theory is proposed. First, the [-1, 1] RGPBIT operator is proposed to normalize the original data and obtain the best and worst schemes of the PGCPs. The correlation coefficients between interval/fuzzy numbers and the best/worst schemes are then defined and introduced into prospect theory to improve its value and loss functions, yielding the positive and negative prospect value matrices of the project. An optimization model maximizing the comprehensive prospect value is then constructed, the optimal attribute weights are determined, and the PGCPs are ranked accordingly. Taking four PGCPs of the IEEE RTS-79 node system as examples, the feasibility and effectiveness of the proposed method are illustrated.
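The improved value and loss functions above build on the classical Tversky-Kahneman prospect value function, which is concave for gains and convex but steeper for losses relative to a reference point. A minimal sketch of that classical function follows, with the textbook parameter estimates (alpha = beta = 0.88, lambda = 2.25); the paper's correlation-coefficient modifications and the hypothetical project deviations are not from the source.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Classical prospect-theory value function with reference point 0:
    concave power curve for gains, loss-averse (factor lam) power curve
    for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

# Hypothetical deviations of one project's indicators from the best scheme
gains_losses = [0.2, -0.1, 0.05]
total = sum(prospect_value(x) for x in gains_losses)
```

The loss-aversion factor is what lets the ranking model penalize a project's shortfalls against the best scheme more heavily than it credits equal-sized advantages.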
Abstract: In the conventional technique, evaluating the severity index, clustering, and loading requires many iterations, leading to long computational delays. This article therefore presents a novel procedure for rapidly predicting line severity and clustering by incorporating machine learning. Polynomial or ZIP load modelling (constant impedance (Z), constant current (I), and constant active power (P)) is developed in the IEEE 14-bus and Indian 118-bus systems considered for the analysis of power system security. A Hybrid Line Stability Ranking Index (HLSRI) is used to find the severity of the lines, and machine learning with the J48 algorithm infers the most affected lines, following IEEE standards, to be compensated for maintaining power system stability. The simulation is performed in the WEKA environment using supervised learning ordered by severity to ensure the safety of the power system. A Unified Power Flow Controller (UPFC), a FACTS device, compensates for the losses by maintaining the voltage characteristics. The analysis findings are compared with existing procedures and numerical equations for authentication.
Funding: Supported by the National Natural Science Foundation of China (62171088, U19A2052, 62020106011) and the Medico-Engineering Cooperation Funds from the University of Electronic Science and Technology of China (ZYGX2021YGLH215, ZYGX2022YGRH005).
Abstract: Deep neural networks (DNNs) have achieved great success in many data processing applications. However, high computational complexity and storage cost make deep learning difficult to use on resource-constrained devices, and the large power cost is not environmentally friendly. This paper focuses on low-rank optimization for efficient deep learning. In the space domain, DNNs are compressed by low-rank approximation of the network parameters, which directly reduces the storage requirement through a smaller number of parameters. In the time domain, the network parameters can be trained in a few subspaces, enabling efficient training with fast convergence. Model compression in the spatial domain is summarized into three categories: pre-train, pre-set, and compression-aware methods. With a series of integrable techniques, such as sparse pruning, quantization, and entropy coding, these can be assembled into an integrated framework with lower computational complexity and storage. Beyond summarizing recent technical advances, we report two findings that motivate future work. One is that the effective rank, derived from the Shannon entropy of the normalized singular values, outperforms conventional sparse measures such as the ℓ_1 norm for network compression. The other is a spatial and temporal balance for tensorized neural networks: to accelerate their training, it is crucial to leverage redundancy for both model compression and subspace training.
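The effective rank mentioned in the first finding has a compact definition: normalize the singular values to a probability distribution, take its Shannon entropy, and exponentiate. A minimal sketch follows; the example spectra are invented, and in practice the singular values would come from an SVD of a weight matrix.

```python
import math

def effective_rank(singular_values):
    """Effective rank of a spectrum: exp of the Shannon entropy of the
    normalized singular values. Equals r when the r nonzero values are
    equal, and approaches 1 as one value dominates."""
    total = sum(singular_values)
    probs = [s / total for s in singular_values if s > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return math.exp(entropy)

flat = effective_rank([1.0, 1.0, 1.0, 1.0])    # equal spectrum -> 4.0
skewed = effective_rank([10.0, 0.1, 0.1, 0.1]) # dominated spectrum, near 1
```

Unlike the integer matrix rank, this measure is continuous in the singular values, which is what makes it usable as a compression criterion.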
Abstract: On the basis of ESI data, all universities are ranked in 92 of the 105 world-class disciplines; there are no ESI data (either publications or citations) for the remaining 13 world-class disciplines.