Journal Articles
37,065 articles found
Accurate method based on data filtering for quantitative multi-element analysis of soils using CF-LIBS
1
Authors: 韩伟伟, 孙对兄, 张国鼎, 董光辉, 崔小娜, 申金成, 王浩亮, 张登红, 董晨钟, 苏茂根 《Plasma Science and Technology》 SCIE EI CAS CSCD, 2024, No. 6, pp. 149-158 (10 pages)
To obtain more stable spectral data for accurate quantitative multi-element analysis, especially for large-area in-situ element detection in soils, we propose a method for multi-element quantitative analysis of soils using calibration-free laser-induced breakdown spectroscopy (CF-LIBS) based on data filtering. In this study, we analyze a standard soil sample doped with two heavy metal elements, Cu and Cd, with a specific focus on the Cu I 324.75 nm line for filtering the experimental data of multiple sample sets. After data filtering, the relative standard deviation for Cu decreased from 30% to 10%, and the limits of detection (LOD) for Cu and Cd decreased by 5% and 4%, respectively. Through CF-LIBS, a quantitative analysis was conducted to determine the relative content of elements in the soils. Using Cu as a reference, the concentration of Cd was accurately calculated. The results show that, after data filtering, the average relative error for Cd decreases from 11% to 5%, indicating the effectiveness of data filtering in improving the accuracy of quantitative analysis. Moreover, the content of Si, Fe and other elements can be accurately calculated with this method, and the Cd results were in turn used to further refine the calculation. This approach is of great importance for large-area in-situ detection of heavy metals and trace elements in soil, as well as for rapid and accurate quantitative analysis.
Keywords: laser-induced breakdown spectroscopy; soil; data filtering; quantitative analysis; multi-element
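As a rough illustration of the reference-line data filtering described in the abstract above, the following sketch keeps only the laser shots whose Cu I 324.75 nm peak intensity stays close to the median and then compares the relative standard deviation before and after filtering. The function names, spectral window, threshold, and synthetic spectra are all illustrative assumptions, not values from the paper.

```python
import numpy as np

def filter_by_reference_line(spectra, wavelengths, line_nm=324.75,
                             window_nm=0.15, n_sigma=1.5):
    """Keep spectra whose reference-line peak intensity lies within n_sigma
    standard deviations of the median (a simple, hypothetical stand-in for
    the paper's filtering criterion)."""
    band = (wavelengths > line_nm - window_nm) & (wavelengths < line_nm + window_nm)
    intensity = spectra[:, band].max(axis=1)              # peak height per shot
    keep = np.abs(intensity - np.median(intensity)) < n_sigma * intensity.std()
    return keep, intensity

def rsd(x):
    """Relative standard deviation in percent."""
    return 100.0 * x.std() / x.mean()

# Synthetic demo: 200 laser shots, 4096-channel spectra around 324.75 nm
rng = np.random.default_rng(0)
wavelengths = np.linspace(320.0, 330.0, 4096)
line = np.exp(-0.5 * ((wavelengths - 324.75) / 0.05) ** 2)   # Cu I line profile
shot_scale = rng.normal(1.0, 0.3, size=200).clip(0.1)        # shot-to-shot fluctuation
spectra = shot_scale[:, None] * line + rng.normal(0, 0.01, (200, 4096))

keep, intensity = filter_by_reference_line(spectra, wavelengths)
print(f"RSD before filtering: {rsd(intensity):.1f}%")
print(f"RSD after  filtering: {rsd(intensity[keep]):.1f}%  ({keep.sum()} shots kept)")
```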
Quantitative Analysis of Seeing with Height and Time at Muztagh-Ata Site Based on ERA5 Database
2
Authors: Xiao-Qi Wu, Cun-Ying Xiao, Ali Esamdin, Jing Xu, Ze-Wei Wang, Luo Xiao 《Research in Astronomy and Astrophysics》 SCIE CAS CSCD, 2024, No. 1, pp. 87-95 (9 pages)
Seeing is an important index for evaluating the quality of an astronomical site. To estimate seeing at the Muztagh-Ata site quantitatively as a function of height and time, the European Centre for Medium-Range Weather Forecasts reanalysis database (ERA5) is used. Seeing calculated from ERA5 is consistent with the Differential Image Motion Monitor seeing at a height of 12 m. Results show that seeing decays exponentially with height at the Muztagh-Ata site; in 2021 it decayed fastest with height in fall and most slowly in summer. The seeing condition is better in fall than in summer. The median seeing at 12 m is 0.89 arcsec, with a maximum of 1.21 arcsec in August and a minimum of 0.66 arcsec in October; the median at 12 m is 0.72 arcsec in the nighttime and 1.08 arcsec in the daytime. Seeing is a combination of annual and roughly biannual variations in phase with temperature and wind speed, indicating that its variation with time is influenced by temperature and wind speed. The Richardson number Ri is used to analyze atmospheric stability, and the variations of seeing are consistent with Ri between layers. These quantitative results can provide an important reference for telescope observation strategies.
Keywords: site testing; atmospheric effects; methods: data analysis; telescopes; Earth
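The exponential decay of seeing with height reported above can be illustrated with a simple curve fit; here is a minimal sketch on synthetic profile values (the surface seeing, scale height, and noise level are invented for the example, not ERA5 results).

```python
import numpy as np
from scipy.optimize import curve_fit

def seeing_profile(h, s0, h0):
    """Exponential decay of seeing with height: s(h) = s0 * exp(-h / h0)."""
    return s0 * np.exp(-h / h0)

# Synthetic seeing estimates at several heights (arcsec), illustrative only
rng = np.random.default_rng(1)
heights = np.linspace(12.0, 1000.0, 30)                 # metres above the site
true_s0, true_h0 = 0.95, 350.0
seeing = seeing_profile(heights, true_s0, true_h0) + rng.normal(0.0, 0.02, heights.size)

(s0, h0), _ = curve_fit(seeing_profile, heights, seeing, p0=(1.0, 300.0))
print(f"fitted: s0 = {s0:.2f} arcsec, scale height h0 = {h0:.0f} m")
print(f"seeing at 12 m from the fit: {seeing_profile(12.0, s0, h0):.2f} arcsec")
```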
Block Incremental Dense Tucker Decomposition with Application to Spatial and Temporal Analysis of Air Quality Data
3
Authors: SangSeok Lee, HaeWon Moon, Lee Sael 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2024, No. 4, pp. 319-336 (18 pages)
How can we efficiently store and mine dynamically generated dense tensors for modeling the behavior of multidimensional dynamic data? Much of the multidimensional dynamic data in the real world is generated in the form of time-growing tensors. For example, air quality tensor data consists of multiple sensory values gathered from wide locations over a long time. Such data, accumulated over time, is redundant and consumes a lot of memory in its raw form. We need a way to efficiently store dynamically generated tensor data that grows over time and to model its behavior on demand between arbitrary time blocks. To this end, we propose a Block Incremental Dense Tucker Decomposition (BID-Tucker) method for efficient storage and on-demand modeling of multidimensional spatiotemporal data. Assuming that tensors come in unit blocks where only the time domain changes, our proposed BID-Tucker first slices the blocks into matrices and decomposes them via singular value decomposition (SVD). The SVDs of the time×space sliced matrices are stored instead of the raw tensor blocks to save space. When modeling is required for particular time blocks, the SVDs of the corresponding time blocks are retrieved and incremented to be used for Tucker decomposition. The factor matrices and core tensor of the decomposed results can then be used for further data analysis. We compared our proposed BID-Tucker with D-Tucker, which our method extends, and with vanilla Tucker decomposition. We show that BID-Tucker is faster than both D-Tucker and vanilla Tucker decomposition and uses less memory for storage with a comparable reconstruction error. We applied BID-Tucker to model the spatial and temporal trends of air quality data collected in South Korea from 2018 to 2022. We were able to model the spatial and temporal air quality trends and to verify unusual events, such as chronic ozone alerts and large fire events.
Keywords: dynamic decomposition; Tucker tensor; tensor factorization; spatiotemporal data; tensor analysis; air quality
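A minimal numpy sketch of the storage idea described above: each incoming time block is unfolded into a time × (space·feature) matrix and only its truncated SVD factors are kept, to be reassembled on demand. The later incremental Tucker step is omitted, and the ranks and block sizes are illustrative.

```python
import numpy as np

def compress_block(block, rank):
    """Unfold a (time x space x feature) block along time and keep only the
    rank-truncated SVD factors instead of the raw block."""
    t, s, f = block.shape
    U, sing, Vt = np.linalg.svd(block.reshape(t, s * f), full_matrices=False)
    return U[:, :rank], sing[:rank], Vt[:rank], (t, s, f)

def restore_block(U, sing, Vt, shape):
    """Rebuild an approximate block from the stored factors on demand."""
    return ((U * sing) @ Vt).reshape(shape)

# Toy air-quality-like block: 24 hours x 100 stations x 6 pollutants
rng = np.random.default_rng(0)
t, s, f, rank = 24, 100, 6, 5
block = (rng.normal(size=(t, rank)) @ rng.normal(size=(rank, s * f))).reshape(t, s, f)
block += 0.01 * rng.normal(size=block.shape)

U, sing, Vt, shape = compress_block(block, rank)
approx = restore_block(U, sing, Vt, shape)
stored = U.size + sing.size + Vt.size
err = np.linalg.norm(approx - block) / np.linalg.norm(block)
print(f"stored {stored} of {block.size} values ({stored / block.size:.1%}), rel. error {err:.3f}")
```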
Data-driven analysis of chemicals, proteins and pathways associated with peanut allergy: from molecular networking to biological interpretation
4
Authors: Emmanuel Kemmler, Julian Braun, Florent Fauchère, Sabine Dölle-Bierke, Kirsten Beyer, Robert Preissner, Margitta Worm, Priyanka Banerjee 《Food Science and Human Wellness》 SCIE CSCD, 2024, No. 3, pp. 1322-1335 (14 pages)
Peanut allergy is a major cause of severe food-induced allergic reactions. Several foods, including cow's milk, hen's eggs, soy, wheat, peanuts, tree nuts (walnuts, hazelnuts, almonds, cashews, pecans and pistachios), fish and shellfish, are responsible for more than 90% of food allergies. Here, we provide promising insights from a large-scale data-driven analysis comparing the mechanistic features and biological relevance of different ingredients present in peanuts, tree nuts (walnuts, almonds, cashews, pecans and pistachios) and soybean. Additionally, we have analysed the chemical compositions of peanuts in different processed forms: raw, boiled and dry-roasted. Using the data-driven approach, we are able to generate new hypotheses to explain why nuclear receptors such as the peroxisome proliferator-activated receptors (PPARs) and their isoforms, and their interaction with dietary lipids, may have a significant effect on allergic response. The results obtained from this study will direct future experimental and clinical studies to understand the role of dietary lipids and PPAR isoforms in exerting pro-inflammatory or anti-inflammatory functions on cells of the innate immunity and in influencing antigen presentation to the cells of the adaptive immunity.
Keywords: allergy informatics; knowledge-graph data analysis; food allergy; peroxisome proliferator-activated receptors; fatty acids
A Robust Framework for Multimodal Sentiment Analysis with Noisy Labels Generated from Distributed Data Annotation
5
Authors: Kai Jiang, Bin Cao, Jing Fan 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2024, No. 6, pp. 2965-2984 (20 pages)
Multimodal sentiment analysis utilizes multimodal data such as text, facial expressions and voice to detect people's attitudes. With the advent of distributed data collection and annotation, we can easily obtain and share such multimodal data. However, due to professional discrepancies among annotators and lax quality control, noisy labels might be introduced. Recent research suggests that deep neural networks (DNNs) will overfit noisy labels, leading to poor performance. To address this challenging problem, we present a Multimodal Robust Meta Learning framework (MRML) for multimodal sentiment analysis that resists noisy labels and correlates distinct modalities simultaneously. Specifically, we propose a two-layer fusion net to deeply fuse different modalities and improve the quality of the multimodal data features for label correction and network training. Besides, a multiple meta-learner (label corrector) strategy is proposed to enhance the label correction approach and prevent models from overfitting to noisy labels. We conducted experiments on three popular multimodal datasets to verify the superiority of our method by comparing it with four baselines.
Keywords: distributed data collection; multimodal sentiment analysis; meta learning; learning with noisy labels
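For illustration only, a minimal PyTorch sketch of a feature-level two-layer fusion head of the kind mentioned above: per-modality feature vectors are concatenated and passed through two fully connected layers before classification. The dimensions and layer choices are assumptions, not the MRML architecture.

```python
import torch
import torch.nn as nn

class TwoLayerFusion(nn.Module):
    """Minimal two-layer fusion head: concatenate text / audio / visual
    feature vectors, fuse them with two fully connected layers, then classify.
    All sizes are illustrative."""
    def __init__(self, dims=(768, 74, 35), hidden=128, n_classes=3):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(sum(dims), hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, text, audio, visual):
        z = torch.cat([text, audio, visual], dim=-1)   # simple feature-level fusion
        return self.head(self.fuse(z))

# Usage with a batch of random per-modality features
model = TwoLayerFusion()
logits = model(torch.randn(8, 768), torch.randn(8, 74), torch.randn(8, 35))
print(logits.shape)   # torch.Size([8, 3])
```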
Performance Analysis and Optimization of Energy Harvesting Modulation for Multi-User Integrated Data and Energy Transfer
6
Authors: Yizhe Zhao, Yanliang Wu, Jie Hu, Kun Yang 《China Communications》 SCIE CSCD, 2024, No. 1, pp. 148-162 (15 pages)
Integrated data and energy transfer (IDET) enables electromagnetic waves to transmit wireless energy at the same time as data delivery for low-power devices. In this paper, an energy harvesting modulation (EHM) assisted multi-user IDET system is studied, where all the received signals at the users are exploited for energy harvesting without degrading wireless data transfer (WDT) performance. The joint IDET performance is then analysed theoretically by conceiving a practical time-dependent wireless channel. With the aid of an AO-based algorithm, the average effective data rate among users is maximized while ensuring the BER and wireless energy transfer (WET) performance. Simulation results validate and evaluate the IDET performance of the EHM-assisted system and demonstrate that the optimal number of user clusters and IDET time slots should be allocated in order to improve the WET and WDT performance.
Keywords: energy harvesting modulation (EHM); integrated data and energy transfer (IDET); performance analysis; wireless data transfer (WDT); wireless energy transfer (WET)
Enhancing Data Analysis and Automation: Integrating Python with Microsoft Excel for Non-Programmers
7
Authors: Osama Magdy Ali Mohamed Breik +2 more authors: Tarek Aly, Atef Tayh Nour El-Din Raslan, Mervat Gheith 《Journal of Software Engineering and Applications》, 2024, No. 6, pp. 530-540 (11 pages)
Microsoft Excel is essential for the End-User Approach (EUA), offering versatility in data organization, analysis, and visualization, as well as widespread accessibility. It fosters collaboration and informed decision-making across diverse domains. Python, in turn, is indispensable for professional programming due to its versatility, readability, extensive libraries, and robust community support. It enables efficient development, advanced data analysis, data mining, and automation, catering to diverse industries and applications. However, one primary issue when using Microsoft Excel with Python libraries is compatibility and interoperability. While Excel is a widely used tool for data storage and analysis, it may not seamlessly integrate with Python libraries, leading to challenges in reading and writing data, especially in complex or large datasets. Additionally, manipulating Excel files with Python may not always preserve formatting or formulas accurately, potentially affecting data integrity. Moreover, dependency on Excel's graphical user interface (GUI) for automation can limit scalability and reproducibility compared to Python's scripting capabilities. This paper presents an integration solution that empowers non-programmers to leverage Python's capabilities within the familiar Excel environment, enabling users to perform advanced data analysis and automation tasks without extensive programming knowledge. Based on feedback solicited from non-programmers who tested the integration solution, the case study evaluates the ease of implementation, performance, and compatibility of Python with different Excel versions.
Keywords: Python; end-user approach; Microsoft Excel; data analysis; integration; spreadsheet; programming; data visualization
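As one common way to bridge the two tools discussed above, the sketch below reads a worksheet into pandas, aggregates it, and writes the result back as a new sheet via the openpyxl engine. The workbook, sheet, and column names are hypothetical.

```python
import pandas as pd

# Hypothetical workbook, sheet, and column names; requires `pip install pandas openpyxl`.
src = "sales.xlsx"

df = pd.read_excel(src, sheet_name="Orders")               # load one sheet into a DataFrame
summary = (df.groupby("Region", as_index=False)["Amount"]
             .sum()
             .sort_values("Amount", ascending=False))

# Append the summary as a new sheet, leaving the original sheet untouched
with pd.ExcelWriter(src, engine="openpyxl", mode="a", if_sheet_exists="replace") as writer:
    summary.to_excel(writer, sheet_name="RegionSummary", index=False)
```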
Data envelopment analysis for scale elasticity measurement in the stochastic case: with an application to Indian banking
8
Authors: Alireza Amirteimoori, Biresh K. Sahoo, Saber Mehdizadeh 《Financial Innovation》, 2023, No. 1, pp. 955-990 (36 pages)
In the nonparametric data envelopment analysis literature, scale elasticity is evaluated in two alternative ways: using either the technical efficiency model or the cost efficiency model. This evaluation becomes problematic in several situations, for example (a) when input proportions change in the long run, (b) when inputs are heterogeneous, and (c) when firms face ex-ante price uncertainty in making their production decisions. To address these situations, a scale elasticity evaluation was performed using a value-based cost efficiency model. However, this alternative value-based scale elasticity evaluation is sensitive to the uncertainty and variability underlying input and output data. Therefore, in this study, we introduce a stochastic cost-efficiency model based on chance-constrained programming to develop a value-based measure of the scale elasticity of firms facing data uncertainty. An illustrative empirical application to the Indian banking industry, comprising 71 banks over eight years (1998–2005), was made to compare inferences about their efficiency and scale properties. The key findings are as follows. First, the deterministic model and our proposed stochastic model yield distinctly different efficiency and scale elasticity scores at various tolerance levels of the chance constraints. However, both models yield the same results at a tolerance level of 0.5, implying that the deterministic model is a special case of the stochastic model in that it reveals the same efficiency and returns-to-scale characterizations of banks. Second, the stochastic model generates higher efficiency scores for inefficient banks than its deterministic counterpart. Third, public banks exhibit higher efficiency than private and foreign banks. Finally, public and old private banks mostly exhibit either decreasing or constant returns to scale, whereas foreign and new private banks experience either increasing or decreasing returns to scale. Although the application of our proposed stochastic model is illustrative, it can potentially be applied to all firms in information- and distribution-intensive industries with high fixed costs, which have ample potential for reaping scale and scope benefits.
Keywords: data envelopment analysis; stochastic data envelopment analysis; technical efficiency; returns to scale; economies of scale; scale elasticity; Indian banking; econometrics; economics
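For readers unfamiliar with the machinery behind such models, here is a plain input-oriented CCR envelopment LP sketched with scipy. This is the textbook building block with toy data, not the paper's value-based chance-constrained model (which the abstract notes reduces to its deterministic counterpart at a tolerance level of 0.5).

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_oriented(X, Y):
    """Deterministic input-oriented CCR efficiency for each DMU.
    X: inputs (n_dmu, n_inputs), Y: outputs (n_dmu, n_outputs)."""
    n, m = X.shape
    _, s = Y.shape
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                    # minimize theta
        # inputs:  sum_j lam_j * x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])
        # outputs: -sum_j lam_j * y_rj <= -y_ro  (i.e. outputs at least y_o)
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Tiny illustrative data: 5 DMUs, 2 inputs, 1 unit output
X = np.array([[4., 3.], [7., 3.], [8., 1.], [4., 2.], [2., 4.]])
Y = np.array([[1.], [1.], [1.], [1.], [1.]])
print(np.round(ccr_input_oriented(X, Y), 3))
```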
Comparison of R and Excel in the Field of Data Analysis
9
Author: Jue Wang 《Journal of Electronic Research and Application》, 2024, No. 3, pp. 178-184 (7 pages)
This research paper compares Excel and the R language for data analysis and concludes that R is more suitable for complex data analysis tasks. R's open-source nature makes it accessible to everyone, and its powerful data management and analysis tools make it suitable for handling complex data analysis tasks. It is also highly customizable, allowing users to create custom functions and packages to meet their specific needs. Additionally, R provides high reproducibility, making it easy to replicate and verify research results, and it has excellent collaboration capabilities, enabling multiple users to work on the same project simultaneously. These advantages make R a more suitable choice for complex data analysis tasks, particularly in scientific research and business applications. The findings of this study will help readers understand that R is not just a language that can handle more data than Excel, and demonstrate that R is essential to the field of data analysis. They will also help users and organizations make informed decisions regarding their data analysis needs and software preferences.
Keywords: Excel; R language; data analysis; open source; comparison; data management; advantages; disadvantages; function
Application of Bayesian Analysis Based on Neural Network and Deep Learning in Data Visualization
10
Authors: Jiying Yang, Qi Long, Xiaoyun Zhu, Yuan Yang 《Journal of Electronic Research and Application》, 2024, No. 4, pp. 88-93 (6 pages)
This study explores the application of Bayesian analysis based on neural networks and deep learning in data visualization. The background is that, with the increasing amount and complexity of data, traditional data analysis methods can no longer meet the need. The research methods include building neural network and deep learning models, optimizing and improving them through Bayesian analysis, and applying them to the visualization of large-scale data sets. The results show that neural networks combined with Bayesian analysis and deep learning can effectively improve the accuracy and efficiency of data visualization and enhance the intuitiveness and depth of data interpretation. The significance of the research is that it provides a new solution for data visualization in the big data environment and helps to further promote the development and application of data science.
Keywords: neural network; deep learning; Bayesian analysis; data visualization; big data environment
Framework of Data Envelopment Analysis—A Model to Evaluate the Environmental Efficiency of China's Industrial Sectors (cited by 24)
11
Author: TAO ZHANG 《Biomedical and Environmental Sciences》 SCIE CAS CSCD, 2009, No. 1, pp. 8-13 (6 pages)
Objective To evaluate the environmental and technical efficiencies of China's industrial sectors and provide appropriate advice for policy makers in the context of rapid economic growth and the concurrent serious environmental damage caused by industrial pollutants. Methods A data envelopment analysis (DEA) framework crediting both the reduction of pollution outputs and the expansion of good outputs was designed as a model to compute the environmental efficiency of China's regional industrial systems. Results As shown by the geometric mean of environmental efficiency, if other inputs were held constant and good outputs were not to be improved, air pollution outputs could potentially be reduced by about 60% across China. Conclusion Both environmental and technical efficiencies have the potential to be greatly improved in China, which may provide some advice for policy-makers.
Keywords: technical efficiency; environmental efficiency; directional distance function; technical-environmental efficiency; data envelopment analysis; China
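A generic directional distance function LP of the kind the abstract's keywords point to, crediting simultaneous expansion of good outputs and contraction of pollution outputs, sketched with scipy. This is a textbook formulation with toy numbers, not the paper's exact model.

```python
import numpy as np
from scipy.optimize import linprog

def directional_distance(X, Y, B):
    """For each DMU, find the largest beta such that good outputs can expand
    to (1+beta)*y while bad (pollution) outputs contract to (1-beta)*b within
    the technology. X: inputs (n, m), Y: good outputs (n, s), B: bad outputs (n, k).
    Bad outputs are handled with equality constraints (weak disposability)."""
    n, m = X.shape
    s, k = Y.shape[1], B.shape[1]
    betas = []
    for o in range(n):
        c = np.r_[-1.0, np.zeros(n)]                       # maximize beta
        A_ub = np.vstack([
            np.hstack([Y[o].reshape(s, 1), -Y.T]),         # (1+beta)*y_o <= sum lam*y
            np.hstack([np.zeros((m, 1)), X.T]),            # sum lam*x <= x_o
        ])
        b_ub = np.r_[-Y[o], X[o]]
        A_eq = np.hstack([B[o].reshape(k, 1), B.T])        # sum lam*b = (1-beta)*b_o
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=B[o],
                      bounds=[(0, None)] * (n + 1), method="highs")
        betas.append(res.x[0])
    return np.array(betas)

# Toy data: 4 industrial sectors, 1 input, 1 good output, 1 pollutant
X = np.array([[5.], [6.], [4.], [7.]])
Y = np.array([[10.], [8.], [6.], [9.]])
B = np.array([[3.], [5.], [2.], [6.]])
print(np.round(directional_distance(X, Y, B), 3))
```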
Optimal Machine Learning Driven Sentiment Analysis on COVID-19 Twitter Data (cited by 1)
12
Authors: Bahjat Fakieh, Abdullah S. AL-Malaise AL-Ghamdi, Farrukh Saleem, Mahmoud Ragab 《Computers, Materials & Continua》 SCIE EI, 2023, No. 4, pp. 81-97 (17 pages)
The outbreak of the pandemic caused by Coronavirus Disease 2019 (COVID-19) has affected the daily activities of people across the globe. During the COVID-19 outbreak and the successive lockdowns, Twitter was heavily used and the number of tweets regarding COVID-19 increased tremendously. Several studies used Sentiment Analysis (SA) to analyze the emotions expressed through tweets upon COVID-19. Therefore, in the current study, a new Artificial Bee Colony (ABC) with Machine Learning-driven SA (ABCML-SA) model is developed for conducting sentiment analysis of COVID-19 Twitter data. The prime focus of the presented ABCML-SA model is to recognize the sentiments expressed in tweets made upon COVID-19. It involves data pre-processing at the initial stage, followed by n-gram based feature extraction to derive the feature vectors. For identification and classification of the sentiments, the Support Vector Machine (SVM) model is exploited. At last, the ABC algorithm is applied to fine-tune the parameters involved in the SVM. To demonstrate the improved performance of the proposed ABCML-SA model, a sequence of simulations was conducted. The comparative assessment results confirmed the effectual performance of the proposed ABCML-SA model over other approaches.
Keywords: sentiment analysis; Twitter data; data mining; COVID-19; machine learning; artificial bee colony
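A minimal sklearn sketch of the n-gram + SVM part of the pipeline described above. The ABC metaheuristic used for parameter tuning in the paper is replaced here by a plain randomized search, and the tweets and labels are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import RandomizedSearchCV

# Tiny illustrative tweets and sentiment labels (not the study's dataset)
tweets = ["staying home and staying safe", "lockdown again, this is exhausting",
          "grateful for the health workers", "cases rising, very worried",
          "vaccinated today, feeling hopeful", "so tired of this pandemic"]
labels = ["positive", "negative", "positive", "negative", "positive", "negative"]

# n-gram features followed by an SVM classifier; randomized search stands in
# for the ABC-based hyperparameter tuning purely for illustration.
pipe = Pipeline([
    ("ngrams", CountVectorizer(ngram_range=(1, 2))),
    ("svm", SVC(kernel="rbf")),
])
search = RandomizedSearchCV(
    pipe,
    {"svm__C": [0.1, 1, 10, 100], "svm__gamma": ["scale", 0.01, 0.1, 1]},
    n_iter=8, cv=2, random_state=0,
)
search.fit(tweets, labels)
print(search.best_params_, search.best_score_)
```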
Fuzzy data envelopment analysis approach based on sample decision making units (cited by 11)
13
Authors: Muren, Zhanxin Ma, Wei Cui 《Journal of Systems Engineering and Electronics》 SCIE EI CSCD, 2012, No. 3, pp. 399-407 (9 pages)
The conventional data envelopment analysis (DEA) measures the relative efficiencies of a set of decision making units with exact values of inputs and outputs. In real-world problems, however, inputs and outputs typically have some level of fuzziness. To analyze a decision making unit (DMU) with fuzzy input/output data, previous studies provided the fuzzy DEA model and proposed an associated evaluation approach. Nonetheless, numerous deficiencies remain to be addressed, including the α-cut approaches, types of fuzzy numbers, and ranking techniques. Moreover, a fuzzy sample DMU still cannot be evaluated with the fuzzy DEA model. Therefore, this paper proposes a fuzzy DEA model based on sample decision making units (FSDEA). Five evaluation approaches and the related algorithm and ranking methods are provided to test the fuzzy sample DMU of the FSDEA model. A numerical experiment is used to demonstrate and compare the results with those obtained using alternative approaches.
Keywords: fuzzy mathematical programming; sample decision making unit; fuzzy data envelopment analysis; efficiency; α-cut
Bootstrapping Data Envelopment Analysis of Efficiency and Productivity of County Public Hospitals in Eastern, Central, and Western China after the Public Hospital Reform (cited by 5)
14
Authors: 王曼丽, 方海清, 陶红兵, 程兆辉, 林小军, 蔡苗, 许昌, 蒋帅 《Journal of Huazhong University of Science and Technology (Medical Sciences)》 SCIE CAS, 2017, No. 5, pp. 681-692 (12 pages)
China implemented the public hospital reform in 2012. This study utilized bootstrapping data envelopment analysis (DEA) to evaluate the technical efficiency (TE) and productivity of county public hospitals in Eastern, Central, and Western China after the 2012 public hospital reform. Data from 127 county public hospitals (39, 45, and 43 in Eastern, Central, and Western China, respectively) were collected during 2012–2015. Changes in TE and productivity over time were estimated by bootstrapping DEA and bootstrapping Malmquist. The disparities in TE and productivity among public hospitals in the three regions of China were compared by the Kruskal–Wallis H test and Mann–Whitney U test. The average bias-corrected TE values for the four-year period were 0.6442, 0.5785, 0.6099, and 0.6094 in Eastern, Central, and Western China, and the entire country, respectively, indicating that hospitals were on average technically inefficient, with low pure technical efficiency (PTE) and high scale efficiency. Productivity increased by 8.12%, 0.25%, 12.11%, and 11.58% in China and its three regions during 2012–2015, and this increase resulted from progressive technological change of 16.42%, 6.32%, 21.08%, and 21.42%, respectively. The TE and PTE of the county hospitals differed significantly among the three regions of China: Eastern and Western China showed significantly higher TE and PTE than Central China. More than 60% of county public hospitals in China and its three areas operated at decreasing returns to scale. There was considerable room for TE improvement in county hospitals in China and its three regions. During 2012–2015, the hospitals experienced progressive productivity; however, the PTE changed adversely. Moreover, Central China continuously achieved a significantly lower efficiency score than Eastern and Western China. Decision makers and administrators in China should identify the causes of the observed inefficiencies and take appropriate measures to increase the efficiency of county public hospitals in the three areas of China, especially in Central China.
Keywords: county public hospital; data envelopment analysis; technical efficiency; Malmquist productivity index; bootstrapping
Robust data envelopment analysis based MCDM with the consideration of uncertain data (cited by 2)
15
Authors: Ke Wang, Fajie Wei 《Journal of Systems Engineering and Electronics》 SCIE EI CSCD, 2010, No. 6, pp. 981-989 (9 pages)
The application of data envelopment analysis (DEA) as a multiple criteria decision making (MCDM) technique has been gaining more and more attention in recent research. In practice, the appearance of uncertainties in the input and output data of a decision making unit (DMU) might make the nominal solution infeasible and render the efficiency scores meaningless from a practical point of view. This paper analyzes the impact of data uncertainty on DEA evaluation results and proposes several robust DEA models based on the adaptation of recently developed robust optimization approaches, which are immune to input and output data uncertainties. The robust DEA models developed are based on the input-oriented and output-oriented CCR models, respectively, for the cases where uncertainties appear in the output data and the input data separately. Furthermore, the robust DEA models can deal with random symmetric uncertainty and unknown-but-bounded uncertainty, in both of which the distributions of the random data entries are permitted to be unknown. The robust DEA models are implemented in a numerical example, and the efficiency scores and rankings of these models are compared. The results indicate that the robust DEA approach can be a more reliable method for efficiency evaluation and ranking in MCDM problems.
Keywords: data envelopment analysis (DEA); multiple criteria decision making (MCDM); robust optimization; uncertain data; efficiency; ranking
Data envelopment analysis procedure with two non-homogeneous DMU groups (cited by 2)
16
Authors: CHEN Ye, WU Liangpeng, LU Bo 《Journal of Systems Engineering and Electronics》 SCIE EI CSCD, 2018, No. 4, pp. 780-788 (9 pages)
The classic data envelopment analysis (DEA) model is used to evaluate decision-making units' (DMUs) efficiency under the assumption that all DMUs are evaluated with the same criteria setting. Recently, new research has begun to focus on the efficiency analysis of non-homogeneous DMUs arising from real practices, such as the evaluation of departments in a university, where departments argue for the adoption of different criteria based on their disciplinary characteristics. A DEA procedure is proposed in this paper to address the efficiency analysis of two non-homogeneous DMU groups. Firstly, an analytical framework is established to compromise diversified input and output (IO) criteria from the two non-homogeneous groups. Then, a criteria fusion operation is designed to obtain different DEA analysis strategies. Meanwhile, the Friedman test is introduced to analyze the consistency of the efficiency results produced by the different strategies. Next, ordered weighted averaging (OWA) operators are applied to integrate the different information to reach final conclusions. Finally, a numerical example is used to illustrate the proposed method. The results indicate that the proposed method relaxes the restrictions of the classical DEA model and provides more analytical flexibility to address the different decision analysis scenarios arising from practical applications.
Keywords: data envelopment analysis (DEA); non-homogeneous decision-making unit (DMU); criteria fusion; Friedman test; ordered weighted averaging (OWA) operator
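The ordered weighted averaging (OWA) aggregation used in the final step above is simple enough to show directly; a minimal sketch with invented efficiency scores and weights follows.

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging: sort the arguments in descending order,
    then take the weighted sum with position-based weights (weights sum to 1)."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0) and len(w) == len(v)
    return float(v @ w)

# Efficiency scores for one DMU produced by different criteria-fusion strategies
scores = [0.82, 0.74, 0.91, 0.68]
print(owa(scores, [0.4, 0.3, 0.2, 0.1]))   # optimistic weighting (favors high scores)
print(owa(scores, [0.1, 0.2, 0.3, 0.4]))   # pessimistic weighting (favors low scores)
```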
Integrating dual-role variables in data envelopment analysis (cited by 1)
17
Authors: Feng Yang, Liang Liang, Zhaoqiong Li, Shaofu Du 《Journal of Systems Engineering and Electronics》 SCIE EI CSCD, 2010, No. 5, pp. 771-776 (6 pages)
Traditional data envelopment analysis (DEA) theory assumes that decision variables are regarded as inputs or outputs, and that no variable can play the roles of both an input and an output at the same time. In fact, there exist variables that work as inputs and outputs simultaneously; these are called dual-role variables. Traditional DEA models cannot be used to appraise the performance of decision making units containing dual-role variables. This paper analyzes the structure and properties of production systems comprising dual-role variables and proposes a DEA model integrating dual-role variables. Finally, the proposed model is illustrated by evaluating the efficiency of university departments.
Keywords: data envelopment analysis; efficiency evaluation; dual-role variable
What explains the technical efficiency of banks in Tunisia? Evidence from a two-stage data envelopment analysis (cited by 1)
18
Authors: Mohamed Mehdi Jelassi, Ezzeddine Delhoumi 《Financial Innovation》, 2021, No. 1, pp. 1431-1456 (26 pages)
In this study, we examine the potential determinants of technical efficiency for the Tunisian commercial banking sector over the period 1995–2017. First, we estimate banking technical efficiency with radial and non-radial bootstrap data envelopment analysis; for the radial technique we use an input-oriented approach, and for the non-radial one we use the Range Adjusted Measure (RAM). Second, we use a double-bootstrapping regression technique to estimate the influence of a set of potential determinants on technical efficiency. Finally, based on all possible regressions, we gauge the overall effect of each determinant. Our results reveal that the input-oriented and RAM approaches give somewhat similar results. We found that the return on equity, the expense-to-income ratio, the loan-to-deposit ratio, and the growth rate are insignificant for Tunisian banking technical efficiency. In particular, banking technical efficiency increases with capitalization and inflation, whereas it decreases with size, the number of bank branches, the management-to-staff ratio, and the loan-to-asset ratio. In addition, we identified evidence supporting the moderate success of the last decade of reforms and a noticeable success of the post-revolution reforms in helping improve banking technical efficiency. The post-revolution reforms, largely revolving around reinforcing the rules of good governance and banking supervision, coupled with the restructuring of public banks, were found to be insufficient to raise overall banking technical efficiency despite improvement in the technical efficiency of private banks.
Keywords: Tunisian banks; financial intermediation; technical efficiency; data envelopment analysis; bootstrap; bias correction; truncated regression
Intelligent Electrocardiogram Analysis in Medicine: Data, Methods, and Applications
19
Authors: Yu-Xia Guan, Ying An, Feng-Yi Guo, Wei-Bai Pan, Jian-Xin Wang 《Chinese Medical Sciences Journal》 CAS CSCD, 2023, No. 1, pp. 38-48 (11 pages)
The electrocardiogram (ECG) is a low-cost, simple, fast, and non-invasive test. It can reflect the heart's electrical activity and provide valuable diagnostic clues about the health of the entire body. Therefore, ECG has been widely used in various biomedical applications such as arrhythmia detection, disease-specific detection, mortality prediction, and biometric recognition. In recent years, ECG-related studies have been carried out using a variety of publicly available datasets, with many differences in the datasets used, data preprocessing methods, targeted challenges, and modeling and analysis techniques. Here we systematically summarize and analyze ECG-based automatic analysis methods and applications. Specifically, we first review 22 commonly used public ECG datasets and provide an overview of data preprocessing processes. Then we describe some of the most widely used applications of ECG signals and analyze the advanced methods involved in these applications. Finally, we elucidate some of the challenges in ECG analysis and provide suggestions for further research.
Keywords: electrocardiogram; database; preprocessing; machine learning; medical big data analysis
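As an example of the kind of ECG preprocessing step the review surveys, here is a common baseline-wander and noise removal using a zero-phase Butterworth band-pass filter. The cut-off frequencies and the synthetic waveform are generic choices, not values taken from the review.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_ecg(signal, fs, low=0.5, high=40.0, order=4):
    """Zero-phase Butterworth band-pass: removes baseline wander (<0.5 Hz)
    and high-frequency noise (>40 Hz); common textbook cut-offs."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

# Synthetic 10 s signal: a crude spiky waveform plus baseline drift and noise
fs = 360                                       # MIT-BIH-style sampling rate
t = np.arange(0, 10, 1 / fs)
beats = np.sin(2 * np.pi * 1.2 * t) ** 63      # spiky waveform, not a real ECG
drift = 0.5 * np.sin(2 * np.pi * 0.2 * t)      # baseline wander
noise = 0.05 * np.random.default_rng(0).normal(size=t.size)

clean = bandpass_ecg(beats + drift + noise, fs)
print(clean.shape, float(np.abs(clean).max()))
```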
ST-Map: An Interactive Map for Discovering Spatial and Temporal Patterns in Bibliographic Data (cited by 1)
20
Authors: ZUO Chenyu, XU Yifan, DING Lingfang, MENG Liqiu 《Journal of Geodesy and Geoinformation Science》 CSCD, 2024, No. 1, pp. 3-15 (13 pages)
Gaining insight into the spatiotemporal distribution patterns of knowledge innovation is receiving increasing attention from policymakers and economic research organizations. Many studies use bibliometric data to analyze the popularity of certain research topics, well-adopted methodologies, influential authors, and the interrelationships among research disciplines. However, the visual exploration of the patterns of research topics with an emphasis on their spatial and temporal distribution remains challenging. This study combined a Space-Time Cube (STC) and a 3D glyph to represent complex multivariate bibliographic data. We further implemented the visual design by developing an interactive interface. The effectiveness, understandability, and engagement of ST-Map were evaluated by seven experts in geovisualization. The results suggest that it is promising to use three-dimensional visualization to show both an overview and on-demand details on a single screen.
Keywords: space-time cube; bibliographic data; spatiotemporal analysis; user study; interactive map