Journal Articles
28,129 articles found
Assessment of three representative empirical models for zenith tropospheric delay (ZTD) using the CMONOC data
1
Authors: Debao Yuan, Jian Li, Yifan Yao, Fei Yang, Yingying Wang, Ran Chen, Tairan Xu. Geodesy and Geodynamics (EI, CSCD), 2024, No. 5, pp. 488-494.
The precise correction of atmospheric zenith tropospheric delay (ZTD) is significant for Global Navigation Satellite System (GNSS) performance regarding positioning accuracy and convergence time. In the past decades, many empirical ZTD models, based on either gridded or scattered ZTD products, have been proposed and widely used in GNSS positioning applications. However, there has been no comprehensive evaluation of these models for the whole China region, which features complicated topography and climate. In this study, we comprehensively assess three representative empirical models, the IGGtropSH model (gridded, non-meteorology), the SHAtropE model (scattered, non-meteorology), and the GPT3 model (gridded, meteorology), using the Crustal Movement Observation Network of China (CMONOC). In general, the results show that the three models share consistent performance, with RMSE/bias of 37.45/1.63, 37.13/2.20, and 38.27/1.34 mm for the GPT3, SHAtropE, and IGGtropSH models, respectively. However, the models perform distinctly with regard to geographical distribution, elevation, seasonal variations, and daily variation. In the southeastern region of China, RMSE values are around 50 mm, much higher than in the western region, where they are approximately 20 mm. The SHAtropE model performs better in areas with large variations in elevation. The GPT3 and IGGtropSH models are more stable across different months, while the GNSS-based SHAtropE model performs better across the various UTC epochs.
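The RMSE and bias statistics used to compare the models can be computed as follows (a minimal sketch with synthetic ZTD values, not the CMONOC series):

```python
import numpy as np

def rmse_and_bias(model_ztd_mm, reference_ztd_mm):
    """Return (RMSE, bias) of modeled ZTD against a reference series, in mm."""
    diff = np.asarray(model_ztd_mm, float) - np.asarray(reference_ztd_mm, float)
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    bias = float(np.mean(diff))
    return rmse, bias

# Toy check with made-up ZTD values (mm)
model = [2400.0, 2410.0, 2395.0]
ref = [2395.0, 2405.0, 2400.0]
rmse, bias = rmse_and_bias(model, ref)
```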
Keywords: GNSS; zenith tropospheric delay; empirical ZTD model; CMONOC data
A Stochastic Model to Assess the Epidemiological Impact of Vaccine Booster Doses on COVID-19 and Viral Hepatitis B Co-Dynamics with Real Data
2
Authors: Andrew Omame, Mujahid Abbas, Dumitru Baleanu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 3, pp. 2973-3012.
A patient co-infected with COVID-19 and viral hepatitis B can be at more risk of severe complications than one with a single infection. This study develops a comprehensive stochastic model to assess the epidemiological impact of vaccine booster doses on the co-dynamics of viral hepatitis B and COVID-19. The model is fitted to real COVID-19 data from Pakistan and incorporates logistic growth and saturated incidence functions. Rigorous analyses using the tools of stochastic calculus are performed to establish conditions for the existence of unique global solutions, a stationary distribution in the sense of ergodicity, and disease extinction. The stochastic threshold estimated from the data fitting is R_0^S = 3.0651. Numerical assessments illustrate the impact of double-dose vaccination and saturated incidence functions on the dynamics of both diseases. The effects of stochastic white-noise intensities are also highlighted.
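Stochastic epidemic models of this kind are typically simulated with an Euler-Maruyama scheme; below is a minimal sketch for a scalar stochastic SIS model with multiplicative white noise (a toy stand-in, not the paper's co-dynamics model; all parameter values are assumptions):

```python
import numpy as np

def stochastic_sis(beta=0.3, gamma=0.1, sigma=0.05, i0=0.01, T=100.0, dt=0.01, seed=0):
    """Euler-Maruyama simulation of the scalar stochastic SIS model
    dI = (beta*I*(1-I) - gamma*I) dt + sigma*I*(1-I) dW."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    i = i0
    path = np.empty(n + 1)
    path[0] = i
    for k in range(n):
        drift = beta * i * (1 - i) - gamma * i
        diffusion = sigma * i * (1 - i)
        i += drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
        i = min(max(i, 0.0), 1.0)  # keep the infected fraction in [0, 1]
        path[k + 1] = i
    return path

path = stochastic_sis()
```

With beta > gamma the path drifts toward the endemic level 1 - gamma/beta, perturbed by the white-noise term.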
Keywords: viral hepatitis B; COVID-19; stochastic model; extinction; ergodicity; real data
A data and physical model dual-driven based trajectory estimator for long-term navigation
3
Authors: Tao Feng, Yu Liu, Yue Yu, Liang Chen, Ruizhi Chen. Defence Technology (SCIE, EI, CAS, CSCD), 2024, No. 10, pp. 78-90.
Long-term navigation based on consumer-level wearable inertial sensors plays an essential role in various emerging fields, for instance smart healthcare, emergency rescue, and soldier positioning. The performance of existing long-term navigation algorithms is limited by the cumulative error of inertial sensors, disturbed local magnetic fields, and the complex motion modes of the pedestrian. This paper develops a robust data and physical model dual-driven trajectory estimation (DPDD-TE) framework, which can be applied to long-term navigation tasks. A Bi-directional Long Short-Term Memory (Bi-LSTM) based quasi-static magnetic field (QSMF) detection algorithm extracts useful magnetic observations for heading calibration, and a second Bi-LSTM estimates walking speed from hybrid human motion information over a given time period. In addition, a data and physical model dual-driven multi-source fusion model integrates basic INS mechanization with multi-level constraints and observations to maintain accuracy over long-term navigation tasks, enhanced by a loop detection algorithm assisted by magnetic and trajectory features. Real-world experiments indicate that the proposed DPDD-TE outperforms existing algorithms, with final heading and positioning accuracy reaching 5° and less than 2 m, respectively, over a 30-minute period.
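As a rough stand-in for the Bi-LSTM QSMF detector, quasi-static windows can be flagged with a simple moving-variance threshold (the window size, tolerance, and synthetic trace below are all assumptions, not the paper's method):

```python
import numpy as np

def quasi_static_windows(mag_norm, win=50, tol=0.5):
    """Flag windows where the magnetic-field magnitude is quasi-static,
    i.e. its standard deviation stays below tol (in uT)."""
    mag_norm = np.asarray(mag_norm, float)
    n = len(mag_norm) // win
    flags = np.empty(n, dtype=bool)
    for k in range(n):
        flags[k] = mag_norm[k * win:(k + 1) * win].std() < tol
    return flags

# Synthetic magnitude trace: a quiet segment followed by a disturbed one
quiet = 50.0 + 0.1 * np.sin(np.linspace(0, 3, 100))
noisy = 50.0 + np.random.default_rng(0).normal(0, 2.0, 100)
flags = quasi_static_windows(np.concatenate([quiet, noisy]), win=100, tol=0.5)
```

Only the quiet window would be passed on for heading calibration.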
Keywords: long-term navigation; wearable inertial sensors; Bi-LSTM; QSMF; data and physical model dual-driven
Dominant woody plant species recognition with a hierarchical model based on multimodal geospatial data for subtropical forests
4
Authors: Xin Chen, Yujun Sun. Journal of Forestry Research (SCIE, EI, CAS, CSCD), 2024, No. 3, pp. 111-130.
Since its launch in 2010, the Google Earth Engine (GEE) cloud platform has been widely used, yielding a wealth of valuable information. However, the potential of GEE for forest resource management has not been fully exploited. To extract dominant woody plant species, Sentinel-1 (S1) and Sentinel-2 (S2) data were combined in GEE with National Forest Resources Inventory (NFRI) and topographic data, resulting in a 10 m resolution multimodal geospatial dataset for subtropical forests in southeast China. Spectral and texture features, red-edge bands, and vegetation indices of the S1 and S2 data were computed. A hierarchical model obtained information on forest distribution and area and on the dominant woody plant species. The results suggest that combining the S1 winter and S2 yearly ranges enhances accuracy in forest distribution and area extraction compared to using either data source independently. Similarly, for dominant woody species recognition, using S1 winter data and S2 data across all four seasons was accurate. Including terrain factors and removing spatial correlation from NFRI sample points further improved recognition accuracy. The optimal forest extraction achieved an overall accuracy (OA) of 97.4% and a map-level image classification efficacy (MICE) of 96.7%; OA and MICE were 83.6% and 80.7% for dominant species extraction, respectively. The high accuracy and efficacy values indicate that the hierarchical recognition model based on multimodal remote sensing data performed extremely well for extracting information about dominant woody plant species. Visualizing the results in a GEE application allows an intuitive display of forest and species distribution, offering significant convenience for forest resource monitoring.
Keywords: Google Earth Engine; Sentinel; forest resource inventory data; dominant woody plant species; subtropics; model performance
Analysis of Secured Cloud Data Storage Model for Information
5
Authors: Emmanuel Nwabueze Ekwonwune, Udo Chukwuebuka Chigozie, Duroha Austin Ekekwe, Georgina Chekwube Nwankwo. Journal of Software Engineering and Applications, 2024, No. 5, pp. 297-320.
This paper was motivated by existing problems of cloud data storage at Imo State University, Nigeria, where outsourced data led to data loss and misuse of customer information by unauthorized users or hackers, leaving customer/client data visible and unprotected. This exposed clients/customers to enormous risk from defective equipment, bugs, faulty servers, and malicious actions. The aim of this paper is therefore to analyze a model for storing data securely in the cloud using Unicode Transformation Format (UTF) and Base64 algorithms. The Object-Oriented Hypermedia Analysis and Design Methodology (OOHADM) was adopted. Python was used to develop the security model; role-based access control (RBAC) and multi-factor authentication (MFA) were integrated into an information system developed with HTML5, JavaScript, Cascading Style Sheets (CSS) version 3, and PHP 7. The paper also discusses the development of cloud computing, its characteristics, cloud deployment models, and cloud service models. The results showed that the proposed enhanced security model for a corporate-platform information system handles multiple authorization and authentication threats: a single login page directs all login requests from the different modules to one Single Sign-On Server (SSOS), which redirects authenticated users to their requested resources/modules, leveraging geo-location integration for physical location validation. The newly developed system addresses the shortcomings of the existing systems and reduces the time and resources they incur.
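The UTF/Base64 storage transformation can be sketched with Python's standard library. Note that Base64 is an encoding, not encryption; any confidentiality in the paper's model comes from the surrounding RBAC/MFA controls, not from this step (the sample record below is made up):

```python
import base64

def encode_for_storage(data: bytes) -> str:
    """Base64-encode raw bytes into ASCII-safe text before cloud storage."""
    return base64.b64encode(data).decode("ascii")

def decode_from_storage(text: str) -> bytes:
    """Reverse the transformation on retrieval."""
    return base64.b64decode(text.encode("ascii"))

record = "client record: Imo State University".encode("utf-8")  # UTF-8 text
stored = encode_for_storage(record)
restored = decode_from_storage(stored)
```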
Keywords: cloud data; information model; data storage; cloud computing; security system; data encryption
Intelligent Energy Utilization Analysis Using IUA-SMD Model Based Optimization Technique for Smart Metering Data
6
Authors: K. Rama Devi, V. Srinivasan, G. Clara Barathi Priyadharshini, J. Gokulapriya. Journal of Harbin Institute of Technology (New Series) (CAS), 2024, No. 1, pp. 90-98.
Smart metering has gained considerable research attention due to its reliability and energy-efficient nature compared to traditional electromechanical metering systems. Existing methods primarily focus on data management rather than efficiency. Accurate prediction of electricity consumption is crucial for intelligent grid operations, including resource planning and demand-supply balancing, and smart metering solutions let users interpret their energy utilization and optimize costs. Motivated by this, this paper presents an Intelligent Energy Utilization Analysis using Smart Metering Data (IUA-SMD) model to determine energy consumption patterns. The proposed IUA-SMD model comprises three major processes: data pre-processing, feature extraction, and classification with parameter optimization. An extreme learning machine (ELM) based classification approach derives optimal energy utilization labels, and the shell game optimization (SGO) algorithm enhances the classification efficiency of the ELM by optimizing its parameters. The effectiveness of the IUA-SMD model is evaluated on an extensive dataset of smart metering data, and the results are analyzed in terms of accuracy and mean square error (MSE). The proposed model demonstrates superior performance, achieving a maximum accuracy of 65.917% and a minimum MSE of 0.096. These results highlight the potential of the IUA-SMD model for efficient energy utilization through intelligent analysis of smart metering data.
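The ELM classifier at the core of the IUA-SMD model has a compact closed form: a random hidden layer followed by least-squares output weights. A minimal sketch follows (the SGO parameter tuning is omitted, and the toy two-class data stand in for the real smart-metering set):

```python
import numpy as np

def elm_train(X, y_onehot, n_hidden=50, seed=0):
    """Extreme learning machine: fixed random hidden layer, closed-form output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)               # hidden-layer activations
    beta = np.linalg.pinv(H) @ y_onehot  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy, well-separated two-class problem (assumed data)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.repeat([0, 1], 50)
W, b, beta = elm_train(X, np.eye(2)[y])
acc = float(np.mean(elm_predict(X, W, b, beta) == y))
```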
Keywords: electricity consumption; predictive model; data analytics; smart metering; machine learning
Finer topographic data improves distribution modeling of Picea crassifolia in the northern Qilian Mountains
7
Authors: ZHANG Xiang, GAO Linlin, LUO Yu, YUAN Yiyun, MA Baolong, DENG Yang. Journal of Mountain Science (SCIE, CSCD), 2024, No. 10, pp. 3306-3317.
The Qilian Mountains, a national key ecological function zone in Western China, play a pivotal role in ecosystem services. However, the distribution of the dominant tree species, Picea crassifolia (Qinghai spruce), has decreased dramatically in recent decades due to climate change and human activity, which may have impaired its ecological functions; reasonable reforestation is the key measure for restoring them. Previous efforts have predicted the potential distribution of Picea crassifolia, providing guidance for regional reforestation policy, but all were performed at low spatial resolution, ignoring the naturally patchy distribution of the species. Here, we modeled the distribution of Picea crassifolia with species distribution models at high spatial resolutions. For many models, the area under the receiver operating characteristic curve (AUC) exceeds 0.9, indicating excellent precision. The AUC of models at 30 m is higher than that at 90 m, and the predicted current distribution of Picea crassifolia aligns more closely with its actual distribution at 30 m, demonstrating that finer data resolution improves model performance. Moreover, at 90 m resolution annual precipitation (Bio12) had the paramount influence on the distribution of Picea crassifolia, whereas aspect became the most important variable at 30 m, indicating the crucial role of finer topographic data in modeling species with patchy distributions. The current distribution of Picea crassifolia is concentrated in the northern and central parts of the study area, and this pattern will be maintained under future scenarios, although some habitat loss in the central parts and gains in the eastern regions are expected owing to increasing temperatures and precipitation. Our findings can guide protection and restoration strategies for the Qilian Mountains, benefiting the regional ecological balance.
Keywords: species distribution modeling; Picea crassifolia; high-resolution topographic data; climate change; Qilian Mountains Nature Reserve; climate scenarios
Ensemble Modeling for the Classification of Birth Data
8
Authors: Fiaz Majeed, Abdul Razzaq Ahmad Shakir, Maqbool Ahmad, Shahzada Khurram, Muhammad Qaiser Saleem, Muhammad Shafiq, Jin-Ghoo Choi, Habib Hamam, Osama E. Sheta. Intelligent Automation & Soft Computing, 2024, No. 4, pp. 765-781.
Machine learning (ML) and data mining are used in various fields such as data analysis, prediction, and image processing, and especially in healthcare. Over the past decade, researchers have applied ML and data mining to draw conclusions from historical data and improve healthcare systems through outcome prediction. Using ML algorithms, researchers have built decision-support applications, analyzed clinical aspects, extracted informative patterns from historical data, predicted outcomes, and categorized diseases to help physicians make better decisions. Large differences are observed between women depending on region and social circumstances; these differences have encouraged scholars to conduct studies at the local level to better understand the factors affecting maternal health and the expected child. In this study, an ensemble modeling technique is applied to classify birth outcomes as either cesarean section (C-section) or normal delivery. A voting ensemble model for the classification of a birth dataset was built using a Random Forest (RF), a Gradient Boosting Classifier, an Extra Trees Classifier, and a Bagging Classifier as base learners. The voting ensemble of the proposed classifiers provides the best accuracy, 94.78%, compared with the individual classifiers; ensemble models make ML algorithms more accurate by reducing variance and classification errors. Once a suitable classification model has been developed for birth classification, decision support systems can be created to give clinicians in-depth insight into the patterns in the datasets. Such a system will not only allow health organizations to improve maternal health assessment processes, but also open doors for interdisciplinary research across two different fields in the region.
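The hard-voting step that combines the base learners can be sketched as a NumPy majority vote over per-model predictions (the three prediction rows below are hypothetical stand-ins for trained RF, gradient-boosting, and extra-trees classifiers):

```python
import numpy as np

def hard_vote(predictions):
    """Majority vote across base learners.
    predictions: (n_models, n_samples) array of integer class labels."""
    predictions = np.asarray(predictions)
    n_classes = predictions.max() + 1
    # Count votes per class for each sample (column-wise bincount)
    votes = np.apply_along_axis(np.bincount, 0, predictions, minlength=n_classes)
    return votes.argmax(axis=0)

# Hypothetical base-learner labels: C-section (1) vs normal delivery (0)
preds = [[1, 0, 1, 0, 1],
         [1, 0, 0, 0, 1],
         [1, 1, 1, 0, 0]]
combined = hard_vote(preds)  # majority label per case
```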
Keywords: birth data; classification; ensemble model; machine learning
A Study of EM Algorithm as an Imputation Method: A Model-Based Simulation Study with Application to a Synthetic Compositional Data
9
Authors: Yisa Adeniyi Abolade, Yichuan Zhao. Open Journal of Modelling and Simulation, 2024, No. 2, pp. 33-42.
Compositional data, such as relative information, is a crucial aspect of machine learning and related fields. It is typically recorded as closed data, summing to a constant such as 100%. The linear regression model is a widely used statistical technique for identifying hidden relationships between underlying random variables of interest, and when estimating its parameters, which are useful for prediction and for analyzing the partial effects of independent variables, maximum likelihood estimation (MLE) is the method of choice. However, data quality is a significant challenge in machine learning, especially when data are missing, and recovering missing observations can be costly and time-consuming. To address this issue, the expectation-maximization (EM) algorithm has been suggested for situations involving missing data. The EM algorithm iteratively finds maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models that depend on unobserved variables: using the current estimates as input, the expectation (E) step constructs the expected log-likelihood function, and the maximization (M) step finds the parameters that maximize it. This study examined how well the EM algorithm performs on a synthetic compositional dataset with missing observations, using both ordinary least squares and robust least squares regression. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-nearest neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
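A simplified sketch of EM-style imputation under a multivariate-normal assumption: the E-step fills each missing entry with its conditional expectation given the observed entries, and the M-step re-estimates the mean and covariance (the covariance correction term of exact EM is omitted for brevity, and the toy data are made up):

```python
import numpy as np

def em_impute(X, n_iter=50):
    """Iterative conditional-mean imputation for multivariate-normal data."""
    X = np.array(X, float)
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])  # initialize with mean imputation
    for _ in range(n_iter):
        mu = X.mean(axis=0)                 # M-step: re-estimate moments
        S = np.cov(X, rowvar=False)
        for i in range(X.shape[0]):         # E-step: conditional-mean fills
            m = miss[i]
            if not m.any() or m.all():
                continue
            o = ~m
            X[i, m] = mu[m] + S[np.ix_(m, o)] @ np.linalg.solve(
                S[np.ix_(o, o)], X[i, o] - mu[o])
    return X

# Toy compositional-like data (rows sum to 1) with one missing value
X = [[0.2, 0.8], [0.3, 0.7], [0.4, np.nan], [0.5, 0.5]]
Xc = em_impute(X)
```

Because the observed rows lie exactly on the line x2 = 1 - x1, the iteration converges to the compositionally consistent fill 0.6.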
Keywords: compositional data; linear regression model; least squares method; robust least squares method; synthetic data; Aitchison distance; maximum likelihood estimation; expectation-maximization algorithm; k-nearest neighbor and mean imputation
Contribution of the MERISE-Type Conceptual Data Model to the Construction of Monitoring and Evaluation Indicators of the Effectiveness of Training in Relation to the Needs of the Labor Market in the Republic of Congo
10
Authors: Roch Corneille Ngoubou, Basile Guy Richard Bossoto, Régis Babindamana. Open Journal of Applied Sciences, 2024, No. 8, pp. 2187-2200.
This study proposes the use of the MERISE conceptual data model to create indicators for monitoring and evaluating the effectiveness of vocational training in the Republic of Congo. The importance of MERISE for structuring and analyzing data is underlined, as it enables measurement of the match between training and labor-market needs. The innovation of the study lies in:
• Contextual adaptation and local innovation: adapting MERISE to the specific context of the Republic of Congo, considering the local particularities of the labor market.
• Innovative indicators and new measurement tools: creating indicators to assess skills matching and employer satisfaction, which are crucial for evaluating the effectiveness of vocational training.
• A participatory approach and inclusion of stakeholders: actively involving training centers, employers, and recruitment agencies in the evaluation process, so that the perspectives of all stakeholders are considered and outcomes are more relevant and practical.
Using the MERISE model allows for:
• Rigorous data structuring, organization, and standardization: clearly defining entities and relationships facilitates data organization and standardization, crucial for effective analysis.
• Monitoring, analysis, and relevant indicators: developing both quantitative and qualitative indicators helps measure the effectiveness of training relative to the labor market, enabling comprehensive evaluation.
• Improved communication and a common language: by providing a common language for different stakeholders, MERISE enhances communication and collaboration, ensuring a shared understanding.
The study's approach and its contribution to existing research lie in:
• A structured theoretical and practical framework and a holistic approach: the study offers a structured framework for data collection and analysis, covering both quantitative and qualitative aspects and thus providing a comprehensive view of the training system.
• A reproducible methodology and international comparison: the proposed methodology can be replicated in other contexts, facilitating international comparison and the adoption of best practices.
• Extension of knowledge and a new perspective: by integrating a participatory approach and developing indicators adapted to local needs, the study extends existing research and offers new perspectives on vocational training evaluation.
Keywords: MERISE conceptual data model (MCD); monitoring indicators; evaluation of training effectiveness; training-employment adequacy; labor market; information systems analysis; adjustment of training programs; employability; professional skills
A panel data model to predict airline passenger volume
11
Authors: Xiaoting Wang, Junyu Cai, Junyan Wang. Digital Transportation and Safety, 2024, No. 2, pp. 46-52.
Airline passenger volume is an important reference for implementing aviation capacity and route adjustment plans. This paper explores the determinants of airline passenger volume and proposes a comprehensive panel data model for predicting it. First, potential factors influencing airline passenger volume are analyzed from geo-economic and service-related aspects. Second, principal component analysis (PCA) is applied to identify the key factors that affect the airline passenger volume of city pairs. The panel data model is then estimated using 120 sets of data, a collection of observations on multiple subjects at multiple time points. Finally, airline data from Chongqing to Shanghai, from 2003 to 2012, is used as a test case to verify the validity of the prediction model. Results show that railway and highway transportation account for a certain share of passenger volume, and that total retail sales of consumer goods in the departure and arrival cities are significantly associated with airline passenger volume. In the validity test, the prediction accuracies of the model for 10 sets of data are all greater than 90%. The model performs better than a multivariate regression model, helping airport operators decide which routes to adjust and which new routes to introduce.
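The PCA step for identifying key factors can be sketched with an SVD (the 120-row factor matrix below is synthetic and illustrative, not the paper's data; the column meanings are assumed):

```python
import numpy as np

def principal_components(X, k=2):
    """PCA via SVD on the centered matrix: returns the top-k component
    loadings and the share of variance each component explains."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    return Vt[:k], explained[:k]

# Hypothetical factor matrix: rows = city-pair observations,
# columns = candidate drivers (e.g. retail sales, rail share, GDP product)
rng = np.random.default_rng(0)
base = rng.normal(size=(120, 1))
X = np.hstack([base + 0.01 * rng.normal(size=(120, 1)) for _ in range(3)])
loadings, explained = principal_components(X, k=1)
```

With three nearly collinear drivers, the first component captures almost all the variance, which is exactly the dimension-reduction effect PCA is used for here.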
Keywords: airline passenger volume; traffic prediction; panel data model; airline route decision; transportation engineering
A data assimilation-based forecast model of outer radiation belt electron fluxes (Cited by 2)
12
Authors: Yuan Lei, Xing Cao, BinBin Ni, Song Fu, TaoRong Luo, XiaoYu Wang. Earth and Planetary Physics (CAS, CSCD), 2023, No. 6, pp. 620-630.
Because radiation belt electrons can pose a potential threat to the safety of satellites orbiting in space, it is of great importance to develop a reliable model that can predict the highly dynamic variations in outer radiation belt electron fluxes. In the present study, we develop a forecast model of radiation belt electron fluxes based on the data assimilation method, combining Van Allen Probes measurements with three-dimensional radiation belt numerical simulations. Our forecast model covers the entire outer radiation belt with a high temporal resolution (1 hour) and a spatial resolution of 0.25 L over a wide range of both electron energy (0.1-5.0 MeV) and pitch angle (5°-90°). On the basis of this model, we forecast hourly electron fluxes for the next 1, 2, and 3 days during an intense geomagnetic storm and evaluate the corresponding prediction performance. Our model can reasonably predict the storm-time evolution of radiation belt electrons with high prediction efficiency (up to ~0.8-1). The best prediction performance is found for ~0.3-3 MeV electrons at L = ~3.25-4.5, extending to higher L and lower energies with increasing pitch angle. Our results demonstrate that the forecast model can be a powerful tool for predicting the spatiotemporal changes in outer radiation belt electron fluxes, with both scientific significance and practical implications.
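At its core, data assimilation blends a model forecast with observations according to their error variances; a scalar Kalman analysis step illustrates the idea (illustrative numbers, not the Van Allen Probes pipeline):

```python
def kalman_update(x_forecast, P_forecast, y_obs, R_obs):
    """Scalar Kalman analysis step: weight the model forecast and the
    observation by their respective error variances."""
    K = P_forecast / (P_forecast + R_obs)          # Kalman gain in [0, 1]
    x_analysis = x_forecast + K * (y_obs - x_forecast)
    P_analysis = (1 - K) * P_forecast              # analysis error variance shrinks
    return x_analysis, P_analysis

# Forecast flux (arbitrary units) with variance 4, observation with variance 1:
x_a, P_a = kalman_update(x_forecast=10.0, P_forecast=4.0, y_obs=12.0, R_obs=1.0)
```

Because the observation is more trusted (smaller variance), the analysis lands much closer to it than to the forecast.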
Keywords: Earth's outer radiation belt; data assimilation; electron flux forecast; model performance evaluation
Modeling of Optimal Deep Learning Based Flood Forecasting Model Using Twitter Data (Cited by 1)
13
Authors: G. Indra, N. Duraipandian. Intelligent Automation & Soft Computing (SCIE), 2023, No. 2, pp. 1455-1470.
A flood is a significantly damaging natural calamity that causes loss of life and property. Earlier work on flood prediction models aimed to reduce risks, suggest policies, reduce mortality, and limit property damage caused by floods. The massive amount of data generated by social media platforms such as Twitter opens the door to flood analysis. Because of the real-time nature of Twitter data, some government agencies and authorities have used it to track natural catastrophes in order to build more rapid rescue strategies. However, the short length of Tweets makes it difficult to construct an accurate flood prediction model. Machine learning (ML) and deep learning (DL) approaches can be used to statistically develop flood prediction models, and the vast number of Tweets necessitates a big data analytics (BDA) tool. In this regard, this work provides an optimal deep learning-based flood forecasting model with big data analytics (ODLFF-BDA) based on Twitter data. The suggested ODLFF-BDA technique anticipates the existence of floods from tweets in a big data setting. It comprises data pre-processing to convert the input tweets into a usable format; a Bidirectional Encoder Representations from Transformers (BERT) model to generate emotive contextual embeddings from tweets; and a gated recurrent unit (GRU) with a multilayer convolutional neural network (MLCNN) to extract local features and predict the flood. Finally, an Equilibrium Optimizer (EO) fine-tunes the hyperparameters of the GRU and MLCNN models to increase prediction performance. Memory usage stays below 3.5 MB, lower than that of the other techniques compared. The ODLFF-BDA technique's performance was validated on a benchmark Kaggle dataset, and the findings showed that it significantly outperformed other recent approaches.
Keywords: big data analytics; predictive models; deep learning; flood prediction; Twitter data; hyperparameter tuning
Remaining useful life prediction based on nonlinear random coefficient regression model with fusing failure time data (Cited by 1)
14
Authors: WANG Fengfei, TANG Shengjin, SUN Xiaoyan, LI Liang, YU Chuanqiang, SI Xiaosheng. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2023, No. 1, pp. 247-258.
Remaining useful life (RUL) prediction is one of the most crucial elements in prognostics and health management (PHM). To handle imperfect prior information, this paper proposes an RUL prediction method based on a nonlinear random coefficient regression (RCR) model that fuses failure time data. First, some useful properties of parameter estimation under the nonlinear RCR model are derived. Based on these properties, failure time data can reasonably be fused as prior information: the fixed parameters are calculated from the field degradation data of the evaluated equipment, and the prior distribution of the random coefficient is estimated by fusing the failure time data of congeneric equipment. The prior of the random coefficient is then updated online in a Bayesian framework, and the probability density function (PDF) of the RUL, accounting for the failure threshold, is derived. Finally, two case studies provide experimental verification. Compared with the traditional Bayesian method, the proposed method effectively reduces the influence of imperfect prior information and improves the accuracy of RUL prediction.
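The online Bayesian update of the random coefficient can be illustrated with a conjugate normal-normal sketch (hypothetical prior and slope estimates; the paper's nonlinear RCR machinery is not reproduced here):

```python
import numpy as np

def update_random_coefficient(mu0, var0, obs_slopes, noise_var):
    """Conjugate normal-normal update of a random degradation slope:
    prior N(mu0, var0) fused with slope estimates observed with variance noise_var."""
    n = len(obs_slopes)
    var_post = 1.0 / (1.0 / var0 + n / noise_var)
    mu_post = var_post * (mu0 / var0 + np.sum(obs_slopes) / noise_var)
    return mu_post, var_post

# Prior from failure-time data of congeneric units, updated with two
# field-estimated slopes (illustrative numbers):
mu, var = update_random_coefficient(mu0=1.0, var0=0.5,
                                    obs_slopes=[1.2, 1.4], noise_var=0.1)
```

The posterior mean is pulled from the prior toward the field estimates, and the posterior variance shrinks, which is the mechanism by which fused failure-time data tempers an imperfect prior.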
Keywords: remaining useful life (RUL) prediction; imperfect prior information; failure time data; nonlinear random coefficient regression (RCR) model
ETL Maturity Model for Data Warehouse Systems: A CMMI Compliant Framework
15
Authors: Musawwer Khan, Islam Ali, Shahzada Khurram, Salman Naseer, Shafiq Ahmad, Ahmed T. Soliman, Akber Abid Gardezi, Muhammad Shafiq, Jin-Ghoo Choi. Computers, Materials & Continua (SCIE, EI), 2023, No. 2, pp. 3849-3863.
The effectiveness of a Business Intelligence (BI) system mainly depends on the quality of knowledge it produces. The decision-making process is hindered, and the user's trust is lost, if the knowledge offered is undesired or of poor quality. A Data Warehouse (DW) is a huge collection of data gathered from many sources and an important part of any BI solution to assist management in making better decisions. The Extract, Transform, and Load (ETL) process is the backbone of a DW system, responsible for moving data from source systems into the DW. The more mature the ETL process, the more reliable the DW system. In this paper, we propose the ETL Maturity Model (EMM), which assists organizations in achieving a high-quality ETL system and thereby enhances the quality of knowledge produced. The EMM is made up of five levels of maturity, i.e., Chaotic, Acceptable, Stable, Efficient, and Reliable. Each maturity level contains Key Process Areas (KPAs) that have been endorsed by industry experts and include all critical features of a good ETL system. Quality Objectives (QOs) are defined procedures that, when implemented, result in a high-quality ETL process. Each KPA has its own set of QOs, the execution of which meets the requirements of that KPA. Multiple brainstorming sessions with relevant industry experts helped to refine the model. EMM was deployed in two key projects utilizing multiple case studies to supplement the validation process and support our claim. This model can assist organizations in improving their current ETL process and transforming it into a more mature ETL system. It can also provide high-quality information to assist users in making better decisions and gaining their trust.
Keywords: ETL maturity model; CMMI; data warehouse maturity model
Augmented Industrial Data-Driven Modeling Under the Curse of Dimensionality
16
Authors: Xiaoyu Jiang, Xiangyin Kong, Zhiqiang Ge. 《IEEE/CAA Journal of Automatica Sinica》 (SCIE, EI, CSCD), 2023, No. 6, pp. 1445-1461 (17 pages)
The curse of dimensionality refers to the problem of increased sparsity and computational complexity when dealing with high-dimensional data. In recent years, the types and variables of industrial data have increased significantly, making data-driven models more challenging to develop. To address this problem, data augmentation technology has been introduced as an effective tool to solve the sparsity problem of high-dimensional industrial data. This paper systematically explores and discusses the necessity, feasibility, and effectiveness of augmented industrial data-driven modeling in the context of the curse of dimensionality and virtual big data. Then, the process of data augmentation modeling is analyzed, and the concept of data boosting augmentation is proposed. The data boosting augmentation involves designing the reliability weight and actual-virtual weight functions, and developing a double weighted partial least squares model to optimize the three stages of data generation, data fusion, and modeling. This approach significantly improves the interpretability, effectiveness, and practicality of data augmentation in industrial modeling. Finally, the proposed method is verified using practical examples of fault diagnosis systems and virtual measurement systems in industry. The results demonstrate the effectiveness of the proposed approach in improving the accuracy and robustness of data-driven models, making them more suitable for real-world industrial applications.
Keywords: curse of dimensionality; data augmentation; data-driven modeling; industrial processes; machine learning
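The core idea of weighting actual against virtual (generated) samples can be shown with a toy sketch. This is a heavily simplified stand-in for the paper's double weighted partial least squares: a plain weighted least-squares fit with a single down-weighting factor for virtual samples, and all data and weight values below are assumptions.

```python
import numpy as np

def weighted_fit(X, y, w):
    """Weighted least-squares estimate: solve (X'WX) beta = X'Wy."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(1)

# Actual (real) industrial samples
X_real = rng.normal(size=(20, 3))
beta = np.array([1.0, -2.0, 0.5])
y_real = X_real @ beta + rng.normal(0, 0.05, 20)

# Virtual samples: jittered copies of the real ones, noisier labels
X_virt = X_real + rng.normal(0, 0.3, X_real.shape)
y_virt = X_virt @ beta + rng.normal(0, 0.3, 20)

# Actual-virtual weighting: real samples trusted fully, virtual ones less
w = np.concatenate([np.ones(20), 0.3 * np.ones(20)])
b_hat = weighted_fit(np.vstack([X_real, X_virt]),
                     np.concatenate([y_real, y_virt]), w)
```

The augmented fit uses twice as many samples as the real data alone, while the lower weight keeps the noisier virtual samples from dominating the estimate; the paper's reliability weights refine this by scoring each virtual sample individually.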
Developing Blue Spots Model for Tennessee Using GIS, and Advanced Data Analytics: Literature Review
17
Authors: Fasesin Kingsley. 《Journal of Geoscience and Environment Protection》, 2023, No. 6, pp. 145-154 (10 pages)
Climate change and global warming result in natural hazards, including flash floods. Flash floods can create blue spots: areas where transport networks (roads, tunnels, bridges, passageways) and other engineering structures within them are at flood risk. Studies of the economic and social impact of flooding reveal that the damage caused by flash floods leading to blue spots is very high, both in dollar amounts and in direct impacts on people's lives. The impact of flooding within blue spots is either infrastructural or social, affecting lives and properties. Currently, more than 16.1 million properties in the U.S. are vulnerable to flooding, and this is projected to increase by 3.2% within the next 30 years. Several models have been developed for flood risk analysis and management, including hydrological models, algorithms, machine learning, and geospatial models. The models and methods reviewed are based on location data collection, statistical analysis and computation, and visualization (mapping). This research aims to create a blue spots model for the State of Tennessee using the ArcGIS visual programming language (Model Builder) and a data analytics pipeline.
Keywords: blue spots; flood risks and management; GIS; hydrological models; geospatial Model Builder; LiDAR data; remote sensing; data analytics pipeline
Integration of SAR Polarimetric Features and Multi-spectral Data for Object-Based Land Cover Classification (cited: 7)
18
Authors: Yi ZHAO, Mi JIANG, Zhangfeng MA. 《Journal of Geodesy and Geoinformation Science》, 2019, No. 4, pp. 64-72 (9 pages)
An object-based approach is proposed for land cover classification using optimal polarimetric parameters. The ability to identify targets is effectively enhanced by the integration of SAR and optical images. The innovation of the presented method can be summarized in two main points: ① estimating polarimetric parameters (H-A-Alpha decomposition) with the optical image as a driver; ② deploying a multi-resolution segmentation based on the optical image only to refine classification results. The proposed method is verified using Sentinel-1/2 datasets over the Bakersfield area, California. The results are compared against those from pixel-based SVM classification using ground truth from the National Land Cover Database (NLCD). A detailed accuracy assessment over seven classes shows that the proposed method outperforms the conventional approach by around 10%, with an overall accuracy of 92.6% over regions with rich texture.
Keywords: synthetic aperture radar (SAR); polarimetric; multispectral; data fusion; object-based; land cover classification
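The H-A-Alpha (Cloude-Pottier) decomposition named in the abstract reduces to an eigendecomposition of the 3×3 polarimetric coherency matrix T. A minimal sketch follows; the toy diagonal matrix is an assumption for illustration, and real pipelines estimate T by averaging over a spatial window before decomposing it.

```python
import numpy as np

def h_a_alpha(T):
    """Cloude-Pottier H, A, mean alpha from a 3x3 Hermitian coherency matrix."""
    lam, vecs = np.linalg.eigh(T)              # eigenvalues in ascending order
    lam = np.clip(lam[::-1], 1e-12, None)      # descending, guard against zeros
    vecs = vecs[:, ::-1]
    p = lam / lam.sum()                        # pseudo-probabilities
    H = -np.sum(p * np.log(p)) / np.log(3)     # entropy, normalized to [0, 1]
    A = (lam[1] - lam[2]) / (lam[1] + lam[2])  # anisotropy
    # Mean alpha angle: weighted arccos of the first eigenvector components
    alpha = np.sum(p * np.arccos(np.abs(vecs[0, :])))
    return H, A, alpha

T = np.diag([0.7, 0.2, 0.1])  # toy coherency matrix (assumed values)
H, A, alpha = h_a_alpha(T)
```

Low entropy H indicates a single dominant scattering mechanism, while the alpha angle (here in radians) separates surface, volume, and double-bounce scattering; the paper's contribution is estimating these parameters with the optical image as a driver rather than from SAR speckle statistics alone.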
Methodology for local correction of the heights of global geoid models to improve the accuracy of GNSS leveling
19
Authors: Stepan Savchuk, Alina Fedorchuk. 《Geodesy and Geodynamics》 (EI, CSCD), 2024, No. 1, pp. 42-49 (8 pages)
At present, one of the methods used to determine the height of points on the Earth's surface is Global Navigation Satellite System (GNSS) leveling. The orthometric or normal height can be determined by this method only if a geoid or quasi-geoid height model is available. This paper proposes a methodology for local correction of the heights of high-order global geoid models such as EGM08, EIGEN-6C4, GECO, and XGM2019e_2159. The methodology was tested in different areas of the research field, covering various relief forms. The dependence of the corrected height accuracy on the input data was analyzed, and the correction was also conducted for model heights in three tidal systems: "tide free", "mean tide", and "zero tide". The results show that the heights of the EIGEN-6C4 model can be corrected with an accuracy of up to 1 cm for flat and foothill terrains over areas of 1°×1°, 2°×2°, and 3°×3°. The EGM08 model gives an almost identical result. The EIGEN-6C4 model is best suited for mountainous relief and provides an accuracy of 1.5 cm over a 1°×1° area. The height correction accuracy of the GECO and XGM2019e_2159 models is somewhat poorer, with larger numerical fluctuations.
Keywords: GNSS leveling; global geoid model; gravity anomaly; weight data; correcting data
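A common way to realize such a local correction is to fit a low-order corrector surface to the GNSS/leveling-minus-model geoid height residuals at control benchmarks and apply it at new points. The sketch below shows the general idea only, not the paper's exact procedure: the planar trend, the coordinate window, and the residual values are all assumptions.

```python
import numpy as np

def fit_corrector(lon, lat, resid):
    """Fit a planar corrector surface c0 + c1*lon + c2*lat to the
    residuals N_gnss - N_model at control points (least squares)."""
    A = np.column_stack([np.ones_like(lon), lon, lat])
    c, *_ = np.linalg.lstsq(A, resid, rcond=None)
    return c

def corrected_height(N_model, lon, lat, c):
    """Model geoid height plus the local planar correction."""
    return N_model + c[0] + c[1] * lon + c[2] * lat

# Control points: residuals follow an assumed tilted plane plus noise (meters)
rng = np.random.default_rng(2)
lon = rng.uniform(24.0, 25.0, 30)
lat = rng.uniform(49.0, 50.0, 30)
resid = 0.05 + 0.02 * lon - 0.01 * lat + rng.normal(0, 0.002, 30)

c = fit_corrector(lon, lat, resid)
# e.g. correct a model height at an interior point of the area:
N_corr = corrected_height(30.123, 24.5, 49.5, c)
```

With centimeter-level residual noise at the benchmarks, the fitted plane recovers the local systematic offset and tilt of the global model; higher-order polynomial or collocation surfaces follow the same pattern with more terms in the design matrix.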
An Extensive Study and Review of Privacy Preservation Models for the Multi-Institutional Data
20
Authors: Sagarkumar Patel, Rachna Patel, Ashok Akbari, Srinivasa Reddy Mukkala. 《Journal of Information Security》, 2023, No. 4, pp. 343-365 (23 pages)
Deep learning models hold considerable potential for clinical applications, but there are many challenges to training them successfully. Large-scale data collection is required, which is frequently only possible through multi-institutional cooperation. Building large central repositories is one strategy for multi-institution studies. However, this is hampered by issues regarding data sharing, including patient privacy, data de-identification, regulation, intellectual property, and data storage. These difficulties have made central data storage increasingly impractical. In this survey, we look at 24 research publications that concentrate on machine learning approaches linked to privacy preservation techniques for multi-institutional data, highlighting the multiple shortcomings of the existing methodologies. The approaches are organized by a number of factors, such as performance measures, year of publication and journal, and the achievements of the strategies in numerical assessments, making it simpler to compare them. An analysis of the benefits and drawbacks of each strategy is additionally provided. The article also looks at potential areas for future research and the challenges associated with increasing the accuracy of privacy protection techniques. The comparative evaluation of the approaches offers a thorough justification for the research's purpose.
Keywords: privacy preservation models; multi-institutional data; biotechnologies; clinical trial and pharmaceutical industry