Journal Articles
27,614 articles found
A Stochastic Model to Assess the Epidemiological Impact of Vaccine Booster Doses on COVID-19 and Viral Hepatitis B Co-Dynamics with Real Data
1
Authors: Andrew Omame, Mujahid Abbas, Dumitru Baleanu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 3, pp. 2973-3012 (40 pages)
A patient co-infected with COVID-19 and viral hepatitis B can be at more risk of severe complications than one with a single infection. This study develops a comprehensive stochastic model to assess the epidemiological impact of vaccine booster doses on the co-dynamics of viral hepatitis B and COVID-19. The model is fitted to real COVID-19 data from Pakistan. The proposed model incorporates logistic growth and saturated incidence functions. Rigorous analyses using the tools of stochastic calculus are performed to establish conditions for the existence of unique global solutions, a stationary distribution in the sense of ergodicity, and disease extinction. The stochastic threshold estimated from the data fitting is R_0^S = 3.0651. Numerical assessments illustrate the impact of double-dose vaccination and saturated incidence functions on the dynamics of both diseases. The effects of stochastic white noise intensities are also highlighted.
Keywords: viral hepatitis B; COVID-19; stochastic model; extinction; ergodicity; real data
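The extinction and ergodicity analysis summarized in this abstract hinges on a stochastic threshold (the fitted R_0^S = 3.0651). The paper's full COVID-19/hepatitis B co-dynamics model is not reproduced in the listing, but the qualitative role of white noise can be sketched with a minimal Euler-Maruyama simulation of a one-dimensional stochastic SIS model; all parameter values below are illustrative assumptions, not values from the paper.

```python
import random
import math

def simulate_stochastic_sis(beta=0.4, gamma=0.2, sigma=0.1,
                            i0=0.01, dt=0.01, steps=10000, seed=42):
    """Euler-Maruyama simulation of a stochastic SIS fraction-infected model
    dI = (beta*I*(1-I) - gamma*I) dt + sigma*I dB, clipped to [0, 1]."""
    rng = random.Random(seed)
    i = i0
    for _ in range(steps):
        drift = beta * i * (1.0 - i) - gamma * i
        diffusion = sigma * i * rng.gauss(0.0, math.sqrt(dt))
        i = min(max(i + drift * dt + diffusion, 0.0), 1.0)
    return i

# Deterministic threshold is R0 = beta/gamma; multiplicative noise lowers the
# effective stochastic threshold, so strong noise can push the disease to
# extinction even when R0 > 1.
```

With beta > gamma the trajectory settles near the endemic level, while beta < gamma drives the infected fraction toward zero, mirroring the persistence-versus-extinction dichotomy the abstract describes.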
Analysis of Secured Cloud Data Storage Model for Information
2
Authors: Emmanuel Nwabueze Ekwonwune, Udo Chukwuebuka Chigozie, Duroha Austin Ekekwe, Georgina Chekwube Nwankwo. Journal of Software Engineering and Applications, 2024, No. 5, pp. 297-320 (24 pages)
This paper was motivated by existing problems of cloud data storage at Imo State University, Nigeria, where outsourced data led to data loss and to misuse of customer information by unauthorized users or hackers, leaving customer/client data visible and unprotected. Clients/customers were also exposed to enormous risk from defective equipment, bugs, faulty servers, and malicious actions. The aim of this paper is therefore to analyze a secure model using Unicode Transformation Format (UTF) Base64 algorithms for storing data securely in the cloud. The Object-Oriented Hypermedia Analysis and Design Methodology (OOHADM) was adopted. Python was used to develop the security model; role-based access control (RBAC) and multi-factor authentication (MFA) were integrated to enhance the security of the information system, which was developed with HTML5, JavaScript, Cascading Style Sheets (CSS) version 3, and PHP 7. The paper also discusses related concepts, including the development of cloud computing, its characteristics, cloud deployment models, and cloud service models. The results showed that the proposed enhanced security model for corporate-platform information systems handles multiple authorization and authentication threats: a single login page directs all login requests from the different modules to one Single Sign-On Server (SSOS), which redirects authenticated users to their requested resources/modules, leveraging geo-location integration for physical location validation. The newly developed system addresses the shortcomings of the existing systems and reduces the time and resources they incur.
Keywords: cloud data; information model; data storage; cloud computing; security system; data encryption
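The storage model above rests on UTF-8/Base64 encoding of records before upload. As a minimal, hypothetical sketch (the function names are ours, not the paper's), and noting that Base64 is an encoding rather than encryption, the round trip looks like this:

```python
import base64

def encode_for_storage(plaintext: str) -> str:
    """Encode a record as Base64 over its UTF-8 bytes before cloud upload.
    Base64 only obscures data; a real deployment would pair it with proper
    encryption and the access controls (RBAC, MFA) the paper describes."""
    return base64.b64encode(plaintext.encode("utf-8")).decode("ascii")

def decode_from_storage(stored: str) -> str:
    """Reverse the Base64/UTF-8 encoding after retrieval."""
    return base64.b64decode(stored.encode("ascii")).decode("utf-8")

record = "student_id=IMSU-2024-001;name=Ada"   # illustrative record
stored = encode_for_storage(record)
assert decode_from_storage(stored) == record
```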
Intelligent Energy Utilization Analysis Using IUA-SMD Model Based Optimization Technique for Smart Metering Data
3
Authors: K. Rama Devi, V. Srinivasan, G. Clara Barathi Priyadharshini, J. Gokulapriya. Journal of Harbin Institute of Technology (New Series) (CAS), 2024, No. 1, pp. 90-98 (9 pages)
Smart metering has gained considerable attention as a research focus due to its reliability and energy-efficient nature compared to traditional electromechanical metering systems. Existing methods primarily focus on data management rather than efficiency. Accurate prediction of electricity consumption is crucial for enabling intelligent grid operations, including resource planning and demand-supply balancing. Smart metering solutions offer users the benefits of effectively interpreting their energy utilization and optimizing costs. Motivated by this, this paper presents an Intelligent Energy Utilization Analysis using Smart Metering Data (IUA-SMD) model to determine energy consumption patterns. The proposed IUA-SMD model comprises three major processes: data pre-processing, feature extraction, and classification, with parameter optimization. We employ the extreme learning machine (ELM) based classification approach within the IUA-SMD model to derive optimal energy utilization labels. Additionally, we apply the shell game optimization (SGO) algorithm to enhance the classification efficiency of the ELM by optimizing its parameters. The effectiveness of the IUA-SMD model is evaluated on an extensive smart metering dataset, and the results are analyzed in terms of accuracy and mean square error (MSE). The proposed model demonstrates superior performance, achieving a maximum accuracy of 65.917% and a minimum MSE of 0.096. These results highlight the potential of the IUA-SMD model for enabling efficient energy utilization through intelligent analysis of smart metering data.
Keywords: electricity consumption; predictive model; data analytics; smart metering; machine learning
Analysis of Gestational Diabetes Mellitus (GDM) and Its Impact on Maternal and Fetal Health: A Comprehensive Dataset Study Using Data Analytic Tool Power BI
4
Authors: Shahistha Jabeen Hashim, Arthur McAdams. Journal of Data Analysis and Information Processing, 2024, No. 2, pp. 232-247 (16 pages)
Gestational Diabetes Mellitus (GDM) is a significant health concern affecting pregnant women worldwide. It is characterized by elevated blood sugar levels during pregnancy and poses risks to both maternal and fetal health. Maternal complications of GDM include an increased risk of developing type 2 diabetes later in life, as well as hypertension and preeclampsia during pregnancy. Fetal complications may include macrosomia (large birth weight), birth injuries, and an increased risk of developing metabolic disorders later in life. Understanding the demographics, risk factors, and biomarkers associated with GDM is crucial for effective management and prevention strategies. This research aims to address these aspects comprehensively through the analysis of a dataset comprising 600 pregnant women. By exploring the demographics of the dataset and employing data modeling techniques, the study seeks to identify key risk factors associated with GDM. Moreover, by analyzing various biomarkers, the research aims to gain insights into the physiological mechanisms underlying GDM and its implications for maternal and fetal health. The significance of this research lies in its potential to inform clinical practice and public health policies related to GDM. By identifying demographic patterns and risk factors, healthcare providers can better tailor screening and intervention strategies for pregnant women at risk of GDM. Additionally, insights into biomarkers associated with GDM may contribute to the development of novel diagnostic tools and therapeutic approaches. Ultimately, by enhancing our understanding of GDM, this research aims to improve maternal and fetal outcomes and reduce the burden of this condition on healthcare systems and society.
However, it is important to acknowledge the limitations of the dataset used in this study. Further research utilizing larger and more diverse datasets, perhaps employing advanced data analysis tools such as Power BI, is warranted to corroborate and expand upon these findings. This underscores the ongoing need for continued investigation into GDM to refine our understanding and improve clinical management strategies.
Keywords: gestational diabetes; visualization; data analytics; data modelling; pregnancy; Power BI
A Study of EM Algorithm as an Imputation Method: A Model-Based Simulation Study with Application to a Synthetic Compositional Data
5
Authors: Yisa Adeniyi Abolade, Yichuan Zhao. Open Journal of Modelling and Simulation, 2024, No. 2, pp. 33-42 (10 pages)
Compositional data, such as relative information, is a crucial aspect of machine learning and other related fields. It is typically recorded as closed data that sums to a constant, like 100%. The statistical linear model is the most used technique for identifying hidden relationships between underlying random variables of interest. However, data quality is a significant challenge in machine learning, especially when missing data is present. The linear regression model is a commonly used statistical modeling technique for finding relationships between variables of interest in various applications. When estimating linear regression parameters, which are useful for tasks such as future prediction and partial-effects analysis of independent variables, maximum likelihood estimation (MLE) is the method of choice. However, many datasets contain missing observations, and recovering that data can be costly and time-consuming. To address this issue, the expectation-maximization (EM) algorithm has been suggested for situations involving missing data. The EM algorithm iteratively finds the best estimates of parameters in statistical models that depend on unobserved variables or data, in the maximum likelihood or maximum a posteriori (MAP) sense. Using the current estimate as input, the expectation (E) step constructs the expected log-likelihood function, and the maximization (M) step finds the parameters that maximize it. This study examined how well the EM algorithm performed on a synthetic compositional dataset with missing observations, using both ordinary least squares and robust least squares regression techniques.
The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-nearest neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
Keywords: compositional data; linear regression model; least square method; robust least square method; synthetic data; Aitchison distance; maximum likelihood estimation; expectation-maximization algorithm; k-nearest neighbor; mean imputation
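The E and M steps described in the abstract can be made concrete with a minimal sketch for the simplest possible case, a univariate normal with missing values; the paper works with compositional regression data, so this toy example (our names and data) only illustrates the mechanics:

```python
def em_normal_impute(data, iters=200):
    """EM for a univariate Normal(mu, var) with missing values (None).
    E-step: replace the sufficient statistics of missing entries by their
    conditional expectations E[x] = mu and E[x^2] = mu^2 + var.
    M-step: re-maximise the expected log-likelihood via sample moments."""
    obs = [x for x in data if x is not None]
    n, n_obs = len(data), len(obs)
    n_mis = n - n_obs
    mu = sum(obs) / n_obs                                   # initial guesses
    var = sum((x - mu) ** 2 for x in obs) / n_obs or 1.0
    for _ in range(iters):
        # E-step: expected sufficient statistics over the full sample
        s1 = sum(obs) + n_mis * mu
        s2 = sum(x * x for x in obs) + n_mis * (mu * mu + var)
        # M-step: update the parameters
        mu = s1 / n
        var = s2 / n - mu * mu
    imputed = [mu if x is None else x for x in data]
    return mu, var, imputed
```

Here EM converges to the observed-data MLE, so missing entries end up imputed with the posterior mean, which is the behaviour the mean-imputation comparison in the abstract generalises.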
Research Progress on Building Daylight Environments Based on CBDM
6
Authors: Liang Shuying, Yang Yi, Chen Guiqi. China Illuminating Engineering Journal (照明工程学报), 2024, No. 2, pp. 86-95 (10 pages)
Daylight, as a green energy source, is of great significance to sustainable building development. Correctly evaluating a building's natural daylighting is key to its active use, yet current daylighting research is mainly based on static calculation and evaluation, neglecting the highly dynamic nature of daylight and the influence of regional differences on indoor daylighting. Climate-Based Daylight Modeling (CBDM) therefore places new requirements on the applicability of building daylighting assessment. This paper reviews domestic and international CBDM research over the past decade from three perspectives: quantitative research, technical applications, and metrics. It summarizes the progress of CBDM-based studies of the building daylight environment and identifies future development trends and the open research questions in natural daylighting.
Keywords: natural daylighting; climate-based daylight modeling (CBDM); daylight climate data; sky luminance model; dynamic daylighting
Modeling of Optimal Deep Learning Based Flood Forecasting Model Using Twitter Data (cited: 1)
7
Authors: G. Indra, N. Duraipandian. Intelligent Automation & Soft Computing (SCIE), 2023, No. 2, pp. 1455-1470 (16 pages)
A flood is a significantly damaging natural calamity that causes loss of life and property. Earlier work on the construction of flood prediction models aimed to reduce risks, suggest policies, reduce mortality, and limit property damage caused by floods. The massive amount of data generated by social media platforms such as Twitter opens the door to flood analysis. Because of the real-time nature of Twitter data, some government agencies and authorities have used it to track natural catastrophe events in order to build a more rapid rescue strategy. However, due to the short length of tweets, it is difficult to construct a perfect prediction model for determining floods. Machine learning (ML) and deep learning (DL) approaches can be used to statistically develop flood prediction models. At the same time, the vast number of tweets necessitates a big data analytics (BDA) tool for flood prediction. In this regard, this work provides an optimal deep learning-based flood forecasting model with big data analytics (ODLFF-BDA) based on Twitter data. The suggested ODLFF-BDA technique anticipates the existence of floods using tweets in a big data setting. It comprises data pre-processing to convert the input tweets into a usable format. In addition, a Bidirectional Encoder Representations from Transformers (BERT) model is used to generate emotive contextual embeddings from tweets. Furthermore, a gated recurrent unit (GRU) with a multilayer convolutional neural network (MLCNN) is used to extract local data and predict the flood. Finally, an equilibrium optimizer (EO) is used to fine-tune the hyperparameters of the GRU and MLCNN models in order to increase prediction performance. Memory usage is kept below 3.5 MB, lower than that of the other algorithm techniques. The ODLFF-BDA technique's performance was validated on a benchmark Kaggle dataset, and the findings showed that it significantly outperformed other recent approaches.
Keywords: big data analytics; predictive models; deep learning; flood prediction; Twitter data; hyperparameter tuning
A data assimilation-based forecast model of outer radiation belt electron fluxes (cited: 1)
8
Authors: Yuan Lei, Xing Cao, BinBin Ni, Song Fu, TaoRong Luo, XiaoYu Wang. Earth and Planetary Physics (CAS, CSCD), 2023, No. 6, pp. 620-630 (11 pages)
Because radiation belt electrons can pose a potential threat to the safety of satellites orbiting in space, it is of great importance to develop a reliable model that can predict the highly dynamic variations in outer radiation belt electron fluxes. In the present study, we develop a forecast model of radiation belt electron fluxes based on the data assimilation method, combining Van Allen Probe measurements with three-dimensional radiation belt numerical simulations. Our forecast model covers the entire outer radiation belt with a high temporal resolution (1 hour) and a spatial resolution of 0.25 L over a wide range of both electron energy (0.1-5.0 MeV) and pitch angle (5°-90°). On the basis of this model, we forecast hourly electron fluxes for the next 1, 2, and 3 days during an intense geomagnetic storm and evaluate the corresponding prediction performance. Our model can reasonably predict the storm-time evolution of radiation belt electrons with high prediction efficiency (up to ~0.8-1). The best prediction performance is found for ~0.3-3 MeV electrons at L = ~3.25-4.5, and it extends to higher L and lower energies with increasing pitch angle. Our results demonstrate that the forecast model developed here can be a powerful tool to predict the spatiotemporal changes in outer radiation belt electron fluxes, with both scientific significance and practical implications.
Keywords: Earth's outer radiation belt; data assimilation; electron flux; forecast model; performance evaluation
Remaining useful life prediction based on nonlinear random coefficient regression model with fusing failure time data (cited: 1)
9
Authors: WANG Fengfei, TANG Shengjin, SUN Xiaoyan, LI Liang, YU Chuanqiang, SI Xiaosheng. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2023, No. 1, pp. 247-258 (12 pages)
Remaining useful life (RUL) prediction is one of the most crucial elements in prognostics and health management (PHM). Aiming at imperfect prior information, this paper proposes an RUL prediction method based on a nonlinear random coefficient regression (RCR) model that fuses failure time data. Firstly, some useful properties of parameter estimation based on the nonlinear RCR model are given. Based on these properties, failure time data can reasonably be fused as prior information. Specifically, the fixed parameters are calculated from the field degradation data of the evaluated equipment, and the prior of the random coefficient is estimated by fusing the failure time data of congeneric equipment. Then, the prior of the random coefficient is updated online under the Bayesian framework, and the probability density function (PDF) of the RUL is derived with consideration of the failure threshold. Finally, two case studies are used for experimental verification. Compared with the traditional Bayesian method, the proposed method can effectively reduce the influence of imperfect prior information and improve the accuracy of RUL prediction.
Keywords: remaining useful life (RUL) prediction; imperfect prior information; failure time data; nonlinear random coefficient regression (RCR) model
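The Bayesian online update of the random coefficient described above is, in the simplest linear-Gaussian special case, a conjugate update. The paper's model is nonlinear, so the following sketch (our notation and illustrative parameter values) only demonstrates the fuse-then-update idea: a prior built from fleet data is sharpened by the unit's own degradation measurements.

```python
def update_coefficient_posterior(mu0, tau0_sq, sigma_sq, times, obs):
    """Conjugate Bayesian update of a random drift coefficient theta in a
    linear degradation path y(t) = theta*t + eps, eps ~ N(0, sigma_sq),
    with Gaussian prior theta ~ N(mu0, tau0_sq) fused from fleet
    failure-time data. Returns the posterior mean and variance."""
    prec = 1.0 / tau0_sq + sum(t * t for t in times) / sigma_sq
    mean = (mu0 / tau0_sq
            + sum(t * y for t, y in zip(times, obs)) / sigma_sq) / prec
    return mean, 1.0 / prec

def mean_rul(threshold, current_level, theta_mean):
    """Plug-in RUL estimate: time for the mean degradation path to climb
    from the current level to the failure threshold."""
    return max(threshold - current_level, 0.0) / theta_mean
```

As more field observations arrive, the posterior mean moves from the fleet prior toward the unit's true drift and the posterior variance shrinks, which is how imperfect prior information gets corrected online.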
ETL Maturity Model for Data Warehouse Systems:A CMMI Compliant Framework
10
Authors: Musawwer Khan, Islam Ali, Shahzada Khurram, Salman Naseer, Shafiq Ahmad, Ahmed T. Soliman, Akber Abid Gardezi, Muhammad Shafiq, Jin-Ghoo Choi. Computers, Materials & Continua (SCIE, EI), 2023, No. 2, pp. 3849-3863 (15 pages)
The effectiveness of a Business Intelligence (BI) system mainly depends on the quality of knowledge it produces. The decision-making process is hindered, and the user's trust is lost, if the knowledge offered is undesired or of poor quality. A Data Warehouse (DW) is a huge collection of data gathered from many sources and an important part of any BI solution, assisting management in making better decisions. The Extract, Transform, and Load (ETL) process is the backbone of a DW system, responsible for moving data from source systems into the DW system. The more mature the ETL process, the more reliable the DW system. In this paper, we propose the ETL Maturity Model (EMM), which assists organizations in achieving a high-quality ETL system and thereby enhances the quality of knowledge produced. The EMM is made up of five levels of maturity: Chaotic, Acceptable, Stable, Efficient, and Reliable. Each maturity level contains Key Process Areas (KPAs) that have been endorsed by industry experts and include all critical features of a good ETL system. Quality Objectives (QOs) are defined procedures that, when implemented, result in a high-quality ETL process. Each KPA has its own set of QOs, the execution of which meets the requirements of that KPA. Multiple brainstorming sessions with relevant industry experts helped to refine the model. EMM was deployed in two key projects utilizing multiple case studies to supplement the validation process and support our claim. This model can assist organizations in improving their current ETL process and transforming it into a more mature ETL system, and it can provide high-quality information that helps users make better decisions and gains their trust.
Keywords: ETL maturity model; CMMI; data warehouse; maturity model
Augmented Industrial Data-Driven Modeling Under the Curse of Dimensionality
11
Authors: Xiaoyu Jiang, Xiangyin Kong, Zhiqiang Ge. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2023, No. 6, pp. 1445-1461 (17 pages)
The curse of dimensionality refers to the problem of increased sparsity and computational complexity when dealing with high-dimensional data. In recent years, the types and variables of industrial data have increased significantly, making data-driven models more challenging to develop. To address this problem, data augmentation technology has been introduced as an effective tool to solve the sparsity problem of high-dimensional industrial data. This paper systematically explores and discusses the necessity, feasibility, and effectiveness of augmented industrial data-driven modeling in the context of the curse of dimensionality and virtual big data. The process of data augmentation modeling is then analyzed, and the concept of data boosting augmentation is proposed. Data boosting augmentation involves designing reliability-weight and actual-virtual-weight functions, and developing a double-weighted partial least squares model to optimize the three stages of data generation, data fusion, and modeling. This approach significantly improves the interpretability, effectiveness, and practicality of data augmentation in industrial modeling. Finally, the proposed method is verified using practical examples of fault diagnosis systems and virtual measurement systems in industry. The results demonstrate the effectiveness of the proposed approach in improving the accuracy and robustness of data-driven models, making them more suitable for real-world industrial applications.
Keywords: curse of dimensionality; data augmentation; data-driven modeling; industrial processes; machine learning
Developing Blue Spots Model for Tennessee Using GIS, and Advanced Data Analytics: Literature Review
12
Author: Fasesin Kingsley. Journal of Geoscience and Environment Protection, 2023, No. 6, pp. 145-154 (10 pages)
Climate change and global warming result in natural hazards, including flash floods. Flash floods can create blue spots: areas where transport networks (roads, tunnels, bridges, passageways) and other engineering structures within them are at flood risk. Studies of the economic and social impact of flooding reveal that the damage caused by flash floods leading to blue spots is very high in dollar terms and in direct impacts on people's lives. The impact of flooding within blue spots is either infrastructural or social, affecting lives and properties. Currently, more than 16.1 million properties in the U.S. are vulnerable to flooding, and this is projected to increase by 3.2% within the next 30 years. Several models have been developed for flood risk analysis and management, including hydrological models, algorithms, machine learning, and geospatial models. The models and methods reviewed are based on location data collection, statistical analysis and computation, and visualization (mapping). This research aims to create a blue spots model for the State of Tennessee using the ArcGIS visual programming environment (ModelBuilder) and a data analytics pipeline.
Keywords: blue spots; flood risks and management; GIS; hydrological models; geospatial; ModelBuilder; LiDAR data; remote sensing; data analytics pipeline
Methodology for local correction of the heights of global geoid models to improve the accuracy of GNSS leveling
13
Authors: Stepan Savchuk, Alina Fedorchuk. Geodesy and Geodynamics (EI, CSCD), 2024, No. 1, pp. 42-49 (8 pages)
At present, one of the methods used to determine the height of points on the Earth's surface is Global Navigation Satellite System (GNSS) leveling. The orthometric or normal height can be determined by this method only if a geoid or quasi-geoid height model is available. This paper proposes a methodology for local correction of the heights of high-order global geoid models such as EGM08, EIGEN-6C4, GECO, and XGM2019e_2159. The methodology was tested in different areas of the research field, covering various relief forms. The dependence of the corrected height accuracy on the input data was analyzed, and the correction was also conducted for model heights in three tidal systems: "tide free", "mean tide", and "zero tide". The results show that the heights of the EIGEN-6C4 model can be corrected with an accuracy of up to 1 cm for flat and foothill terrains over areas of 1°×1°, 2°×2°, and 3°×3°. The EGM08 model gives an almost identical result. The EIGEN-6C4 model is best suited for mountainous relief, providing an accuracy of 1.5 cm over a 1°×1° area. The height correction accuracy of the GECO and XGM2019e_2159 models is slightly poorer, with some numerical fluctuation.
Keywords: GNSS leveling; global geoid model; gravity anomaly; weight data; correcting data
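The core operation in GNSS leveling is H = h - N: subtracting a geoid model height N from the GNSS ellipsoidal height h. A local correction in its very simplest form, a constant bias estimated at control benchmarks, can be sketched as follows (the paper fits more elaborate corrections per area and tidal system; the function names and numbers here are illustrative):

```python
def local_geoid_correction(benchmarks):
    """Estimate a constant local bias between a global geoid model and
    GNSS/levelling control: d_i = (h_i - H_i) - N_i, bias = mean(d_i).
    benchmarks: list of (h_gnss, H_levelled, N_model) tuples in metres."""
    d = [(h - H) - N for h, H, N in benchmarks]
    return sum(d) / len(d)

def normal_height(h_gnss, n_model, bias):
    """GNSS levelling: orthometric/normal height H = h - (N + bias),
    where the bias locally corrects the global geoid model height."""
    return h_gnss - (n_model + bias)
```

A constant bias is only the zeroth-order case; fitting a tilted plane or surface to the residuals, as the per-area corrections in the abstract imply, follows the same pattern with more parameters.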
An Extensive Study and Review of Privacy Preservation Models for the Multi-Institutional Data
14
Authors: Sagarkumar Patel, Rachna Patel, Ashok Akbari, Srinivasa Reddy Mukkala. Journal of Information Security, 2023, No. 4, pp. 343-365 (23 pages)
Deep learning models hold considerable potential for clinical applications, but there are many challenges to training them successfully. Large-scale data collection is required, which is frequently only possible through multi-institutional cooperation. Building large central repositories is one strategy for multi-institution studies. However, this is hampered by issues regarding data sharing, including patient privacy, data de-identification, regulation, intellectual property, and data storage. These difficulties have lessened the practicality of central data storage. In this survey, we examine 24 research publications that concentrate on machine learning approaches linked to privacy preservation techniques for multi-institutional data, highlighting the shortcomings of the existing methodologies. This makes it simpler to compare the different approaches on a number of factors, such as performance measures, year and venue of publication, and the achievements of the strategies in numerical assessments. A technique analysis that considers the benefits and drawbacks of each strategy is additionally provided. The article also examines potential areas for future research as well as the challenges associated with increasing the accuracy of privacy protection techniques. The comparative evaluation of the approaches offers a thorough justification for the research's purpose.
Keywords: privacy preservation models; multi-institutional data; biotechnologies; clinical trial and pharmaceutical industry
A Support Data-Based Core-Set Selection Method for Signal Recognition
15
Authors: Yang Ying, Zhu Lidong, Cao Changjie. China Communications (SCIE, CSCD), 2024, No. 4, pp. 151-162 (12 pages)
In recent years, deep learning-based signal recognition technology has gained attention and emerged as an important approach for safeguarding the electromagnetic environment. However, training deep learning-based classifiers on large signal datasets with redundant samples requires significant memory and high costs. This paper proposes a support data-based core-set selection method (SD) for signal recognition, aiming to screen a representative subset that approximates the large signal dataset. Specifically, this subset can be identified by exploiting the label information during the early stages of model training, as some training samples are repeatedly labeled as supporting data. This support data is crucial for model training and can be found using a border sample selector. Simulation results demonstrate that the SD method minimizes the impact on model recognition performance while reducing the dataset size, and it outperforms five other state-of-the-art core-set selection methods when the fraction of training samples kept is at most 0.3 on the RML2016.04C dataset or 0.5 on the RML22 dataset. The SD method is particularly helpful for signal recognition tasks with limited memory and computing resources.
Keywords: core-set selection; deep learning; model training; signal recognition; support data
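The idea of screening samples that repeatedly act as support data during early training can be illustrated with a toy margin-based selector on a linear classifier; this is our simplification for illustration, not the paper's border sample selector:

```python
def select_core_set(samples, labels, fraction=0.3, epochs=5, lr=0.1):
    """Toy sketch of support-data core-set selection: while training a
    linear classifier for a few early epochs, count how often each sample
    violates the margin (i.e. acts as 'support data'), then keep the most
    frequently violating fraction as the representative subset."""
    dim = len(samples[0])
    w = [0.0] * dim
    b = 0.0
    support_count = [0] * len(samples)
    for _ in range(epochs):
        for i, (x, y) in enumerate(zip(samples, labels)):   # y in {-1, +1}
            score = sum(wj * xj for wj, xj in zip(w, x)) + b
            if y * score < 1.0:        # margin violation -> support data
                support_count[i] += 1
                w = [wj + lr * y * xj for wj, xj in zip(w, x)]
                b += lr * y
    k = max(1, int(fraction * len(samples)))
    ranked = sorted(range(len(samples)), key=lambda i: -support_count[i])
    return sorted(ranked[:k])
```

Easy samples far from the decision boundary quickly stop triggering updates, so the retained subset concentrates on the border samples that shape the classifier, which is the intuition behind the SD method.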
Review of Recent Trends in the Hybridisation of Preprocessing-Based and Parameter Optimisation-Based Hybrid Models to Forecast Univariate Streamflow
16
Authors: Baydaa Abdul Kareem, Salah L. Zubaidi, Nadhir Al-Ansari, Yousif Raad Muhsen. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 1, pp. 1-41 (41 pages)
Forecasting river flow is crucial for optimal planning, management, and sustainability of freshwater resources. Many machine learning (ML) approaches have been enhanced to improve streamflow prediction. Hybrid techniques are viewed as a viable method for enhancing the accuracy of univariate streamflow estimation compared to standalone approaches, and current researchers have emphasised using hybrid models to improve forecast accuracy. Accordingly, this paper conducts an updated literature review of applications of hybrid models in estimating streamflow over the last five years, summarising data pre-processing, univariate machine learning modelling strategies, advantages and disadvantages of standalone ML techniques, hybrid models, and performance metrics. The study focuses on two types of hybrid models: parameter optimisation-based hybrid models (OBH) and the hybridisation of parameter optimisation-based and preprocessing-based hybrid models (HOPH). Overall, this research supports the idea that meta-heuristic approaches precisely improve ML techniques. It is also one of the first efforts to comprehensively examine the efficiency of various meta-heuristic approaches (classified into four primary classes) hybridised with ML techniques. The review revealed that previous research applied swarm, evolutionary, physics, and hybrid metaheuristics in 77%, 61%, 12%, and 12% of cases, respectively. Finally, there is still room to improve OBH and HOPH models by examining different data pre-processing techniques and metaheuristic algorithms.
Keywords: univariate streamflow; machine learning; hybrid model; data pre-processing; performance metrics
Health diagnosis of ultrahigh arch dam performance using heterogeneous spatial panel vector model
17
Authors: Er-feng Zhao, Xin Li, Chong-shi Gu. Water Science and Engineering, EI CAS CSCD, 2024, Issue 2, pp. 177-186 (10 pages)
Currently, more than ten ultrahigh arch dams have been constructed or are being constructed in China. Safety control is essential to the long-term operation of these dams. This study employed the flexibility coefficient and plastic complementary energy norm to assess the structural safety of arch dams. A comprehensive analysis was conducted, focusing on differences among conventional methods in characterizing the structural behavior of the Xiaowan arch dam in China. Subsequently, the spatiotemporal characteristics of the measured performance of the Xiaowan dam were explored, including periodicity, convergence, and time-effect characteristics. These findings revealed the governing mechanism of the main factors. Furthermore, a heterogeneous spatial panel vector model was developed, considering both common factors and specific factors affecting the safety and performance of arch dams. This model aims to comprehensively illustrate spatial heterogeneity between the entire structure and local regions, introducing a specific effect quantity to characterize local deformation differences. Ultimately, the proposed model was applied to the Xiaowan arch dam, accurately quantifying the spatiotemporal heterogeneity of dam performance. Additionally, the spatiotemporal distribution characteristics of environmental load effects on different parts of the dam were reasonably interpreted. Validation of the model prediction enhances its credibility, leading to the formulation of health diagnosis criteria for future long-term operation of the Xiaowan dam. The findings not only enhance the predictive ability and timely control of ultrahigh arch dams' performance but also provide a crucial basis for assessing the effectiveness of engineering treatment measures.
Keywords: ultrahigh arch dam; structural performance; deformation behavior; diagnosis criterion; panel data model
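The common-factor/specific-factor split the abstract describes can be illustrated with a toy panel decomposition: take the cross-section mean of all monitoring points at each time step as the common effect, and each point's residual from that mean as its specific effect. The three synthetic displacement series and the plain-mean decomposition are illustrative assumptions, far simpler than the paper's panel vector model.

```python
import math

def panel_decompose(panel):
    """Split each series into a common effect (cross-section mean at each
    time step) and a point-specific effect (residual from that mean)."""
    n_points = len(panel)
    n_steps = len(panel[0])
    common = [sum(series[t] for series in panel) / n_points for t in range(n_steps)]
    specific = [[series[t] - common[t] for t in range(n_steps)] for series in panel]
    return common, specific

# Synthetic radial-displacement panel: a shared seasonal term plus a
# location-dependent offset for three hypothetical monitoring points.
shared = [2.0 * math.sin(2 * math.pi * t / 12) for t in range(24)]
panel = [
    [s + 0.5 for s in shared],   # crown cantilever
    [s - 0.2 for s in shared],   # left abutment
    [s - 0.3 for s in shared],   # right abutment
]
common, specific = panel_decompose(panel)
```

By construction the specific effects sum to zero across points at every time step, so any persistent nonzero residual at one point flags a local deformation difference of the kind the specific effect quantity is meant to capture.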
Enhancing Relational Triple Extraction in Specific Domains: Semantic Enhancement and Synergy of Large Language Models and Small Pre-Trained Language Models
18
Authors: Jiakai Li, Jianpeng Hu, Geng Zhang. Computers, Materials & Continua, SCIE EI, 2024, Issue 5, pp. 2481-2503 (23 pages)
In the process of constructing domain-specific knowledge graphs, the task of relational triple extraction plays a critical role in transforming unstructured text into structured information. Existing relational triple extraction models face multiple challenges when processing domain-specific data, including insufficient utilization of semantic interaction information between entities and relations, difficulties in handling challenging samples, and the scarcity of domain-specific datasets. To address these issues, our study introduces three innovative components: relation semantic enhancement, data augmentation, and a voting strategy, all designed to significantly improve the model's performance in tackling domain-specific relational triple extraction tasks. We first propose an innovative attention interaction module. This method significantly enhances the semantic interaction capabilities between entities and relations by integrating semantic information from relation labels. Second, we propose a voting strategy that effectively combines the strengths of large language models (LLMs) and fine-tuned small pre-trained language models (SLMs) to reevaluate challenging samples, thereby improving the model's adaptability in specific domains. Additionally, we explore the use of LLMs for data augmentation, aiming to generate domain-specific datasets to alleviate the scarcity of domain data. Experiments conducted on three domain-specific datasets demonstrate that our model outperforms existing comparative models in several aspects, with F1 scores exceeding the state-of-the-art models by 2%, 1.6%, and 0.6%, respectively, validating the effectiveness and generalizability of our approach.
Keywords: relational triple extraction; semantic interaction; large language models; data augmentation; specific domains
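The voting strategy above, keeping the small model's answer when it is confident and deferring hard samples to a vote that includes LLM opinions, can be sketched as a confidence-gated majority vote. The confidence threshold, the stub predictors, and the relation labels are illustrative assumptions; the paper's actual prompting and tie-breaking are not reproduced here.

```python
from collections import Counter

def vote(sample, slm_predict, llm_predictors, threshold=0.8):
    """Keep the small model's relation when it is confident; otherwise
    re-evaluate the hard sample by majority vote among LLM calls."""
    label, conf = slm_predict(sample)
    if conf >= threshold:
        return label
    votes = [predict(sample) for predict in llm_predictors]
    votes.append(label)  # the fine-tuned SLM still gets one vote
    winner, _ = Counter(votes).most_common(1)[0]
    return winner

# Stub predictors standing in for real models.
def slm(sample):
    return ("located_in", 0.9) if "in" in sample else ("unknown", 0.4)

llms = [
    lambda s: "manufactured_by",
    lambda s: "located_in",
    lambda s: "located_in",
]

easy = vote("Plant A in City B", slm, llms)   # SLM is confident, no LLM calls
hard = vote("Plant A, City B", slm, llms)     # low confidence, resolved by vote
```

The gate keeps LLM calls off the easy majority of samples, which is what makes the hybrid cheaper than routing everything through the large model.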
Multi-source heterogeneous data access management framework and key technologies for electric power Internet of Things
19
Authors: Pengtian Guo, Kai Xiao, Xiaohui Wang, Daoxing Li. Global Energy Interconnection, EI CSCD, 2024, Issue 1, pp. 94-105 (12 pages)
The power Internet of Things (IoT) is a significant trend in technology and a requirement for national strategic development. With the deepening digital transformation of the power grid, China's power system has initially built a power IoT architecture comprising a perception, network, and platform application layer. However, owing to the structural complexity of the power system, the construction of the power IoT continues to face problems such as complex access management of massive heterogeneous equipment, diverse IoT protocol access methods, high concurrency of network communications, and weak data security protection. To address these issues, this study optimizes the existing architecture of the power IoT and designs an integrated management framework for the access of multi-source heterogeneous data in the power IoT, comprising cloud, pipe, edge, and terminal parts. It further reviews and analyzes the key technologies involved in the power IoT, such as the unified management of the physical model, high concurrent access, multi-protocol access, multi-source heterogeneous data storage management, and data security control, to provide a more flexible, efficient, secure, and easy-to-use solution for multi-source heterogeneous data access in the power IoT.
Keywords: power Internet of Things; object model; high concurrency access; zero trust mechanism; multi-source heterogeneous data
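The multi-protocol access problem the abstract raises, normalising devices that speak different IoT protocols into one object model, is commonly solved with an adapter registry: each protocol registers a parser that maps its raw payload into a uniform record. The protocol names, payload shapes, and field names below are illustrative assumptions, not the framework's actual object model.

```python
import json

ADAPTERS = {}

def adapter(protocol):
    """Register a parser that turns a raw payload into the unified model."""
    def register(fn):
        ADAPTERS[protocol] = fn
        return fn
    return register

@adapter("mqtt")
def parse_mqtt(raw):
    # Hypothetical JSON payload from an MQTT-attached sensor.
    msg = json.loads(raw)
    return {"device_id": msg["id"], "metric": msg["m"], "value": msg["v"]}

@adapter("modbus")
def parse_modbus(raw):
    # Hypothetical fixed layout: (unit id, register address, value).
    unit, register, value = raw
    return {"device_id": f"modbus-{unit}", "metric": f"reg{register}", "value": value}

def ingest(protocol, raw):
    """Route a payload through the adapter registered for its protocol."""
    try:
        parse = ADAPTERS[protocol]
    except KeyError:
        raise ValueError(f"no adapter registered for {protocol!r}")
    return parse(raw)

rec1 = ingest("mqtt", '{"id": "pmu-7", "m": "voltage", "v": 229.8}')
rec2 = ingest("modbus", (3, 40001, 1523))
```

Once every protocol lands in the same record shape, downstream storage and security policy can be written once against the unified model instead of per protocol.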
Intrusion Detection Model Using Chaotic MAP for Network Coding Enabled Mobile Small Cells
20
Authors: Chanumolu Kiran Kumar, Nandhakumar Ramachandran. Computers, Materials & Continua, SCIE EI, 2024, Issue 3, pp. 3151-3176 (26 pages)
Wireless network security management is difficult because of the ever-increasing number of wireless network malfunctions, vulnerabilities, and assaults. Complex security systems, such as intrusion detection systems (IDS), are essential due to the limitations of simpler security measures, such as cryptography and firewalls. Due to their compact nature and low energy reserves, wireless networks present a significant challenge for security procedures. The features of small cells can pose threats to the network. Network coding (NC) enabled small cells are vulnerable to various types of attacks, and avoiding attacks while performing secure peer-to-peer data transmission is a challenging task in small cells. Due to its low power and memory requirements, the proposed model is well suited to constrained small cells. An attacker cannot change the contents of data and generate a new hashed homomorphic message authentication code (HHMAC) hash between transmissions, since the HMAC function is generated using the shared secret. In this research, a secure peer-to-peer data transmission model with accurate intrusion detection is proposed, based on chaotic sequence mapping with a low-overhead 1D improved logistic map and a lightweight hashed HMAC (1D-LM-P2P-LHHMAC). The proposed model is evaluated against traditional models on metrics including vector set generation accuracy, key pair generation time, chaotic map accuracy, and intrusion detection accuracy; the results show that the proposed model reaches 98% chaotic map accuracy and 98.2% intrusion detection accuracy, and that its secure data transmission levels are high compared with the traditional models.
Keywords: network coding; small cells; data transmission; intrusion detection model; hashed message authentication code; chaotic sequence mapping; secure transmission
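The pairing above, a chaotic sequence from a 1D logistic map feeding a lightweight MAC, can be sketched with the classic map x ← r·x·(1−x): both peers iterate from a shared seed to derive a key, which then keys a standard HMAC. The map parameters, the byte quantisation, and the use of SHA-256 are illustrative assumptions; the paper's improved logistic map and homomorphic HHMAC construction are not reproduced here.

```python
import hashlib
import hmac

def logistic_sequence(x0, r=3.99, n=32, burn_in=100):
    """Iterate the logistic map x <- r*x*(1-x); discard a burn-in, then
    quantise each state to one byte to form a key stream."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    out = bytearray()
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def tag_message(shared_x0, message):
    """MAC a message with a key derived from the shared chaotic seed."""
    key = logistic_sequence(shared_x0)
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(shared_x0, message, tag):
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(tag_message(shared_x0, message), tag)

tag = tag_message(0.4321, b"peer-to-peer payload")
ok = verify(0.4321, b"peer-to-peer payload", tag)
tampered = verify(0.4321, b"peer-to-peer payload!", tag)
```

Because the map is chaotic at r = 3.99, a tiny change in the shared seed yields an entirely different key stream after the burn-in, which is the sensitivity property that makes the seed act like a secret.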