Journal Articles
335,708 articles found
1. A novel method for clustering cellular data to improve classification
Authors: Diek W. Wheeler, Giorgio A. Ascoli. Neural Regeneration Research, SCIE CAS, 2025, Issue 9, pp. 2697-2705 (9 pages)
Many fields, such as neuroscience, are experiencing a vast proliferation of cellular data, underscoring the need for organizing and interpreting large datasets. A popular approach partitions data into manageable subsets via hierarchical clustering, but objective methods to determine the appropriate classification granularity are missing. We recently introduced a technique to systematically identify when to stop subdividing clusters based on the fundamental principle that cells must differ more between than within clusters. Here we present the corresponding protocol to classify cellular datasets by combining data-driven unsupervised hierarchical clustering with statistical testing. These general-purpose functions are applicable to any cellular dataset that can be organized as two-dimensional matrices of numerical values, including molecular, physiological, and anatomical datasets. We demonstrate the protocol using cellular data from the Janelia MouseLight project to characterize morphological aspects of neurons.
Keywords: cellular data clustering; dendrogram; data classification; Levene's one-tailed statistical test; unsupervised hierarchical clustering
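The stopping rule described in the abstract (cells must differ more between than within clusters) can be pictured with generic tools. The sketch below is not the authors' published protocol: it uses SciPy hierarchical clustering on synthetic data and, in place of the paper's one-tailed Levene's test, a one-sided Mann-Whitney U comparison of between-cluster versus within-cluster pairwise distances.

```python
# Minimal sketch, not the authors' protocol: accept a split of a cluster only
# if cells differ more BETWEEN the two child clusters than WITHIN them.
# The paper uses a one-tailed Levene's test; this sketch substitutes a
# one-sided Mann-Whitney U test on pairwise distances for simplicity.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, cdist
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Synthetic "cellular" matrix: rows = cells, columns = numerical features.
data = np.vstack([rng.normal(0, 1, (60, 4)), rng.normal(3, 1, (60, 4))])

def split_is_justified(X, alpha=0.01):
    """Return True if splitting X into two subclusters is statistically supported."""
    labels = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")
    a, b = X[labels == 1], X[labels == 2]
    if len(a) < 2 or len(b) < 2:
        return False
    within = np.concatenate([pdist(a), pdist(b)])
    between = cdist(a, b).ravel()
    # One-sided test: between-cluster distances should exceed within-cluster ones.
    return mannwhitneyu(between, within, alternative="greater").pvalue < alpha

print("first split justified:", split_is_justified(data))
```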
2. A New Encryption Mechanism Supporting the Update of Encrypted Data for Secure and Efficient Collaboration in the Cloud Environment
Authors: Chanhyeong Cho, Byeori Kim, Haehyun Cho, Taek-Young Youn. Computer Modeling in Engineering & Sciences, SCIE EI, 2025, Issue 1, pp. 813-834 (22 pages)
With the rise of remote collaboration, the demand for advanced storage and collaboration tools has rapidly increased. However, traditional collaboration tools primarily rely on access control, leaving data stored on cloud servers vulnerable due to insufficient encryption. This paper introduces a novel mechanism that encrypts data in 'bundle' units, designed to meet the dual requirements of efficiency and security for frequently updated collaborative data. Each bundle includes update information, allowing only the updated portions to be re-encrypted when changes occur. The encryption method proposed in this paper addresses the inefficiencies of traditional encryption modes, such as Cipher Block Chaining (CBC) and Counter (CTR), which require decrypting and re-encrypting the entire dataset whenever updates occur. The proposed method leverages update-specific information embedded within data bundles and metadata that maps the relationship between these bundles and the plaintext data. By utilizing this information, the method accurately identifies the modified portions and selectively re-encrypts only those sections. This approach significantly enhances the efficiency of data updates while maintaining high performance, particularly in large-scale data environments. To validate this approach, we conducted experiments measuring execution time as both the size of the modified data and the total dataset size varied. Results show that the proposed method significantly outperforms CBC and CTR modes in execution speed, with greater performance gains as data size increases. Additionally, our security evaluation confirms that this method provides robust protection against both passive and active attacks.
Keywords: Cloud collaboration; mode of operation; data update; efficiency
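The bundle idea can be illustrated with a generic sketch: the data is split into fixed-size bundles, each encrypted independently, so an edit re-encrypts only the affected bundles. This is an illustrative approximation using AES-GCM from the `cryptography` package, not the scheme defined in the paper; the bundle size is an invented value.

```python
# Illustrative sketch only (not the paper's scheme): encrypt data in
# independent "bundles" so that an update re-encrypts only changed bundles.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

BUNDLE = 1024  # bytes per bundle (illustrative size)
key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

def encrypt_bundles(plaintext: bytes):
    bundles = [plaintext[i:i + BUNDLE] for i in range(0, len(plaintext), BUNDLE)]
    out = []
    for b in bundles:
        nonce = os.urandom(12)              # fresh nonce per bundle
        out.append((nonce, aead.encrypt(nonce, b, None)))
    return out

def update_bundle(encrypted, index: int, new_plaintext_bundle: bytes):
    """Re-encrypt only the modified bundle; all other bundles stay untouched."""
    nonce = os.urandom(12)
    encrypted[index] = (nonce, aead.encrypt(nonce, new_plaintext_bundle, None))

doc = os.urandom(5000)                       # stand-in collaborative document
enc = encrypt_bundles(doc)
update_bundle(enc, 2, os.urandom(BUNDLE))    # the edit touches bundle 2 only
print(f"{len(enc)} bundles stored; 1 bundle re-encrypted after the edit")
```

Compared with CBC or CTR over the whole file, the cost of an update here scales with the number of modified bundles rather than with the total data size, which is the efficiency argument the abstract makes.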
3. Impact of ocean data assimilation on the seasonal forecast of the 2014/15 marine heatwave in the Northeast Pacific Ocean
Authors: Tiantian Tang, Jiaying He, Huihang Sun, Jingjia Luo. Atmospheric and Oceanic Science Letters, 2025, Issue 1, pp. 24-31 (8 pages)
A remarkable marine heatwave, known as the "Blob", occurred in the Northeast Pacific Ocean from late 2013 to early 2016, displaying strong warm anomalies extending from the surface to a depth of 300 m. This study employed two assimilation schemes based on the global Climate Forecast System of Nanjing University of Information Science and Technology (NUIST-CFS 1.0) to investigate the impact of ocean data assimilation on the seasonal prediction of this extreme marine heatwave. The sea surface temperature (SST) nudging scheme assimilates SST only, while the deterministic ensemble Kalman filter (EnKF) scheme assimilates observations from the surface to the deep ocean. The latter notably improves the forecasting skill for subsurface temperature anomalies, especially at depths of 100-300 m (the lower layer), outperforming the SST nudging scheme. It excels in predicting both horizontal and vertical heat transport in the lower layer, contributing to improved forecasts of the lower-layer warming during the Blob. These improvements stem from the assimilation of subsurface observational data, which are important in predicting the upper-ocean conditions. The results suggest that assimilating ocean data with the EnKF scheme significantly enhances the accuracy in predicting subsurface temperature anomalies during the Blob and offers better understanding of its underlying mechanisms.
Keywords: Seasonal forecast; Ocean data assimilation; Marine heatwave; Subsurface temperature
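For readers unfamiliar with the EnKF analysis step referenced here, a toy stochastic-EnKF update in NumPy is sketched below. It is a generic textbook form on synthetic data, not the NUIST-CFS 1.0 assimilation system; the state size, observation operator, and error covariances are made-up illustration values.

```python
# Toy ensemble Kalman filter (EnKF) analysis step -- generic textbook sketch,
# not the NUIST-CFS 1.0 system. All dimensions and covariances are invented.
import numpy as np

rng = np.random.default_rng(1)
n_state, n_obs, n_ens = 40, 10, 20

H = np.zeros((n_obs, n_state))               # observe every 4th state variable
H[np.arange(n_obs), np.arange(0, n_state, 4)] = 1.0
R = 0.5 * np.eye(n_obs)                      # observation-error covariance

truth = np.sin(np.linspace(0, 2 * np.pi, n_state))
y = H @ truth + rng.normal(0, np.sqrt(0.5), n_obs)

Xf = truth[:, None] + rng.normal(0, 1.0, (n_state, n_ens))   # forecast ensemble

def enkf_update(Xf, y, H, R, rng):
    """Stochastic EnKF: K = P H^T (H P H^T + R)^-1 with perturbed observations."""
    Xm = Xf.mean(axis=1, keepdims=True)
    A = Xf - Xm                                           # ensemble anomalies
    P = A @ A.T / (Xf.shape[1] - 1)                       # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    Yp = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, Xf.shape[1]).T
    return Xf + K @ (Yp - H @ Xf)

Xa = enkf_update(Xf, y, H, R, rng)
print("prior RMSE    :", np.sqrt(np.mean((Xf.mean(1) - truth) ** 2)))
print("posterior RMSE:", np.sqrt(np.mean((Xa.mean(1) - truth) ** 2)))
```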
4. Synthetic data as an investigative tool in hypertension and renal diseases research
Authors: Aleena Jamal, Som Singh, Fawad Qureshi. World Journal of Methodology, 2025, Issue 1, pp. 9-13 (5 pages)
There is a growing body of clinical research on the utility of synthetic data derivatives, an emerging research tool in medicine. In nephrology, clinicians can use machine learning and artificial intelligence as powerful aids in their clinical decision-making while also preserving patient privacy. This is especially important given the epidemiology of chronic kidney disease, renal oncology, and hypertension worldwide. However, there remains a need to create a framework for guidance on how to better utilize synthetic data as a practical application in this research.
Keywords: Synthetic data; Artificial intelligence; Nephrology; Blood pressure; Research; Editorial
5. User location privacy protection mechanism for location-based services (Cited: 6)
Authors: Yan He, Jiageng Chen. Digital Communications and Networks, SCIE CSCD, 2021, Issue 2, pp. 264-276 (13 pages)
With the rapid development of the Internet of Things (IoT), Location-Based Services (LBS) are becoming more and more popular. However, for the users being served, how to protect their location privacy has become a growing concern. This has led to great difficulty in establishing trust between users and service providers, hindering the development of LBS with more comprehensive functions. In this paper, we first establish a strong identity verification mechanism to ensure the authentication security of the system and then design a new location privacy protection mechanism based on the privacy proximity test problem. This mechanism not only guarantees the confidentiality of the user's information during subsequent information interaction and dynamic data transmission, but also meets the service provider's requirements for related data.
Keywords: Internet of Things; location-based services; Location privacy; Privacy protection mechanism; Confidentiality
6. Location-Based Routing Protocols for Wireless Sensor Networks: A Survey (Cited: 6)
Authors: Arun Kumar, Hnin Yu Shwe, Kai Juan Wong, Peter H. J. Chong. Wireless Sensor Network, 2017, Issue 1, pp. 25-72 (48 pages)
Recently, location-based routing in wireless sensor networks (WSNs) has attracted a lot of interest in the research community, especially because of its scalability. In location-based routing, the network size is scalable without increasing the signalling overhead, as routing decisions are inherently localized. Here, each node is aware of its position in the network through some positioning device such as GPS and uses this information in the routing mechanism. In this paper, we first discuss the basics of WSNs, including the architecture of the network and the energy consumption of the components of a typical sensor node, and draw a detailed picture of the classification of location-based routing protocols. Then, we present a systematic and comprehensive taxonomy of location-based routing protocols, mostly for sensor networks. All the schemes are subsequently discussed in depth. Finally, we conclude the paper with some insights on potential research directions for location-based routing in WSNs.
Keywords: Location-based protocol; Geographic routing; Wireless sensor networks; Energy conservation; Routing
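As a concrete instance of the localized, position-based decisions this survey classifies, the sketch below implements plain greedy geographic forwarding: each node relays to the in-range neighbor closest to the destination. The node positions and radio range are made-up values for illustration only.

```python
# Greedy geographic forwarding sketch: each node forwards to the in-range
# neighbor that is closest to the destination; decisions are purely local.
# Node coordinates and the radio range are made-up illustrative values.
import math

nodes = {  # node id -> (x, y) position, e.g. obtained from GPS
    "A": (0, 0), "B": (2, 1), "C": (4, 0), "D": (5, 2), "E": (7, 2),
}
RADIO_RANGE = 3.0

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_route(src, dst):
    path, current = [src], src
    while current != dst:
        neighbors = [n for n in nodes
                     if n != current and dist(nodes[n], nodes[current]) <= RADIO_RANGE]
        if not neighbors:
            return path, False                    # void region: greedy failure
        nxt = min(neighbors, key=lambda n: dist(nodes[n], nodes[dst]))
        if dist(nodes[nxt], nodes[dst]) >= dist(nodes[current], nodes[dst]):
            return path, False                    # no progress toward destination
        path.append(nxt)
        current = nxt
    return path, True

print(greedy_route("A", "E"))                     # (['A', 'B', 'C', 'D', 'E'], True)
```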
7. A Relocation-based Initialization Scheme to Improve Track-forecasting of Tropical Cyclones (Cited: 2)
Authors: GAO Feng, Peter P. CHILDS, Xiang-Yu HUANG, Neil A. JACOBS, Jinzhong MIN. Advances in Atmospheric Sciences, SCIE CAS CSCD, 2014, Issue 1, pp. 27-36 (10 pages)
A relocation procedure to initialize tropical cyclones was developed to improve the representation of the initial conditions and the track forecast for Panasonic Weather Solutions Tropical Operational Forecasts. This scheme separates the vortex perturbation and environment field from the first guess, then relocates the initial vortex perturbations to the observed position by merging them with the environment field. The relationships of wind vector components with stream function and velocity potential are used for separating the vortex disturbance from the first guess. For the separation of scalars, a low-pass Barnes filter is employed. The irregular-shaped relocation area corresponding to the specific initial conditions is determined by mapping the edge of the vortex radius in 36 directions. Then, the non-vortex perturbations in the relocation area are removed by a two-pass Barnes filter to retain the vortex perturbations, while the variable fields outside the perimeter of the modified vortex are kept identical to the original first guess. The potential impacts of this scheme on track forecasts were examined for three hurricane cases in the 2011-12 hurricane season. The experimental results demonstrate that the initialization scheme is able to effectively separate the vortex field from the environment field and maintain a relatively balanced and accurate relocated first guess. As the initial track error is reduced, the subsequent track forecasts are considerably improved. The 72-h average track forecast error was reduced by 32.6% for the cold-start cases, and by 38.4% when using the full-cycling data assimilation because of the accumulated improvements from the initialization scheme.
Keywords: tropical cyclone; vortex relocation; data assimilation; Barnes filtering
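The Barnes filtering used for scale separation can be illustrated generically. The sketch below applies a single-pass Barnes (Gaussian-weighted) low-pass filter to a synthetic 1D field and treats the residual as the separated perturbation; the grid spacing and smoothing parameter are arbitrary illustration values, not those of the operational scheme, and the two-pass variant is omitted.

```python
# Generic single-pass Barnes low-pass filter on a 1D field; the residual
# (field minus smoothed field) plays the role of the separated perturbation.
# Grid spacing and the kappa smoothing parameter are illustrative only.
import numpy as np

def barnes_lowpass(x, field, kappa):
    """Barnes-weighted smoothing: w_j = exp(-(x_i - x_j)^2 / kappa)."""
    smoothed = np.empty_like(field)
    for i, xi in enumerate(x):
        w = np.exp(-((x - xi) ** 2) / kappa)
        smoothed[i] = np.sum(w * field) / np.sum(w)
    return smoothed

x = np.linspace(0, 1000, 201)                           # km, illustrative grid
environment = 10 * np.sin(2 * np.pi * x / 1000)         # large-scale signal
vortex = 5 * np.exp(-((x - 600) ** 2) / (2 * 30 ** 2))  # compact disturbance
field = environment + vortex

low_pass = barnes_lowpass(x, field, kappa=2 * 100 ** 2)  # damps short scales
perturbation = field - low_pass                          # retains the vortex-scale part

print("max perturbation near x = 600 km:", perturbation.max().round(2))
```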
8. Location Privacy in Device-Dependent Location-Based Services: Challenges and Solution (Cited: 1)
Authors: Yuhang Wang, Yanbin Sun, Shen Su, Zhihong Tian, Mohan Li, Jing Qiu, Xianzhi Wang. Computers, Materials & Continua, SCIE EI, 2019, Issue 6, pp. 983-993 (11 pages)
With the evolution of location-based services (LBS), a new type of LBS has gained a lot of attention and implementation; we name this kind of LBS the Device-Dependent LBS (DLBS). In DLBS, the service provider (SP) not only sends information according to the user's location but, more significantly, also provides a service device that is carried by the user. DLBS has been successfully practised in some of the large cities around the world, for example, the shared bicycles in Beijing and London. In this paper, we, for the first time, blow the whistle on the new location privacy challenges caused by DLBS, since the service device is able to perform localization without the permission of the user. To counter these threats, we design a service architecture along with a credit system between the DLBS provider and the user. The credit system ties the usability of the DLBS device to curious behaviour toward the user's location privacy: the DLBS provider has to sacrifice revenue in order to gain extra location information from its device. We simulate the proposed scheme, and the results confirm its effectiveness.
Keywords: Location privacy; device-dependent location-based service; location-based service; credit system; location privacy preserving mechanism; shared bicycle
9. A Comparative Study of Scientific Data Repository Platforms in China and the UK Based on re3data (Cited: 1)
Authors: Yuan Ye, Chen Yuanyuan. 数字图书馆论坛 (Digital Library Forum), CSSCI, 2024, Issue 2, pp. 13-23 (11 pages)
Using re3data as the data source, 406 scientific data repositories from China and the United Kingdom are selected as research objects, and the development of the two countries' repositories is compared across five aspects and eleven indicators, including distribution characteristics, responsibility types, repository licensing, technical standards, and quality standards. Based on this analysis, suggestions are offered for the sustainable development of China's data repositories: broadly connect heterogeneous institutions at home and abroad, promote exchange and cooperation across disciplinary fields, effectively expand repository licensing permissions and types, optimize the current application of technical standards, and improve the flexibility of metadata use.
Keywords: scientific data; data repository platform; re3data; China; United Kingdom
10. A Framework for Improving the Location-Based Service Using Cassandra Technology (Cited: 1)
Authors: B. Temuujin, Jaewon Park, Eui-In Choi. Journal of Computer and Communications, 2019, Issue 12, pp. 152-157 (6 pages)
Recently, much research on positioning technology using LBS (Location-Based Services) has been conducted alongside the development of wearable devices. In addition, the data generated by these devices helps to perform LBS using Big Data technology. The existing methods for finding a specific location are not suitable for collecting and processing all of this data [1] [2] [3] [4]. Therefore, in order to process all streaming data in real time, it is necessary to use Big Data processing technology. In this paper, we use NoSQL technology to solve this problem and propose a framework for improving the performance of LBS using NoSQL.
Keywords: Big data; NoSQL; Cassandra; LBS
11. Data Secure Storage Mechanism for IIoT Based on Blockchain (Cited: 2)
Authors: Jin Wang, Guoshu Huang, R. Simon Sherratt, Ding Huang, Jia Ni. Computers, Materials & Continua, SCIE EI, 2024, Issue 3, pp. 4029-4048 (20 pages)
With the development of Industry 4.0 and big data technology, the Industrial Internet of Things (IIoT) is hampered by inherent issues such as privacy, security, and fault tolerance, which pose certain challenges to its rapid development. Blockchain technology has immutability, decentralization, and autonomy, which can greatly mitigate the inherent defects of the IIoT. In the traditional blockchain, data is stored in a Merkle tree. As data continues to grow, the scale of the proofs used to validate it grows, threatening the efficiency, security, and reliability of blockchain-based IIoT. Accordingly, this paper first analyzes the inefficiency of the traditional blockchain structure in verifying the integrity and correctness of data. To solve this problem, a new Vector Commitment (VC) structure, Partition Vector Commitment (PVC), is proposed by improving the traditional VC structure. Secondly, this paper uses PVC instead of the Merkle tree to store big data generated by IIoT. PVC can improve the efficiency of traditional VC in the process of commitment and opening. Finally, this paper uses PVC to build a blockchain-based IIoT data secure storage mechanism and carries out a comparative experimental analysis. This mechanism can greatly reduce communication loss and maximize the rational use of storage space, which is of great significance for maintaining the security and stability of blockchain-based IIoT.
Keywords: Blockchain; IIoT; data storage; cryptographic commitment
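To make the proof-size issue concrete, the sketch below builds a conventional Merkle tree over data blocks and produces a membership proof whose length grows roughly as log2 of the number of blocks. It illustrates the baseline that the paper's Partition Vector Commitment is designed to improve on, not PVC itself.

```python
# Baseline Merkle-tree commitment sketch: proof length grows with log2(n),
# the behavior the paper's Partition Vector Commitment (PVC) aims to improve.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    level = [h(b) for b in blocks]
    tree = [level]
    while len(level) > 1:
        if len(level) % 2:                       # duplicate last node if odd
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree

def prove(tree, index):
    """Collect sibling hashes from leaf to root (the membership proof)."""
    proof = []
    for level in tree[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[index ^ 1])
        index //= 2
    return proof

def verify(root, block, index, proof):
    node = h(block)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

blocks = [f"IIoT record {i}".encode() for i in range(1000)]
tree = build_tree(blocks)
root, proof = tree[-1][0], prove(tree, 42)
print("proof length:", len(proof), "hashes; valid:", verify(root, blocks[42], 42, proof))
```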
12. Hadoop-based secure storage solution for big data in cloud computing environment (Cited: 1)
Authors: Shaopeng Guan, Conghui Zhang, Yilin Wang, Wenqing Liu. Digital Communications and Networks, SCIE CSCD, 2024, Issue 1, pp. 227-236 (10 pages)
In order to address the problems of a single encryption algorithm, such as low encryption efficiency and unreliable metadata, for static data storage on big data platforms in the cloud computing environment, we propose a Hadoop-based big data secure storage scheme. Firstly, in order to disperse the NameNode service from a single server to multiple servers, we combine the HDFS federation and HDFS high-availability mechanisms and use the Zookeeper distributed coordination mechanism to coordinate each node and achieve dual-channel storage. Then, we improve the ECC encryption algorithm for the encryption of ordinary data and adopt a homomorphic encryption algorithm to encrypt data that needs to be computed on. To accelerate the encryption, we adopt a dual-thread encryption mode. Finally, the HDFS control module is designed to combine the encryption algorithm with the storage model. Experimental results show that the proposed solution solves the problem of a single point of failure of metadata, performs well in terms of metadata reliability, and can realize server fault tolerance. The improved encryption algorithm, integrated with the dual-channel storage mode, improves encryption storage efficiency by 27.6% on average.
Keywords: Big data security; data encryption; Hadoop; Parallel encrypted storage; Zookeeper
13. Defect Detection Model Using Time Series Data Augmentation and Transformation (Cited: 1)
Authors: Gyu-Il Kim, Hyun Yoo, Han-Jin Cho, Kyungyong Chung. Computers, Materials & Continua, SCIE EI, 2024, Issue 2, pp. 1713-1730 (18 pages)
Time-series data provide important information in many fields, and their processing and analysis have been the focus of much research. However, detecting anomalies is very difficult due to data imbalance, temporal dependence, and noise. Therefore, methodologies for data augmentation and for converting time series data into images for analysis have been studied. This paper proposes a fault detection model that uses time series data augmentation and transformation to address the problems of data imbalance, temporal dependence, and robustness to noise. The data augmentation method is the addition of noise: Gaussian noise, with the noise level set to 0.002, is added to maximize the generalization performance of the model. In addition, we use the Markov Transition Field (MTF) method to effectively visualize the dynamic transitions of the data while converting the time series data into images. This enables the identification of patterns in time series data and assists in capturing their sequential dependencies. For anomaly detection, the PatchCore model is applied and shows excellent performance, and the detected anomaly areas are represented as heat maps. By applying an anomaly map to the original image, it is possible to capture the areas where anomalies occur. The performance evaluation shows that both F1-score and Accuracy are high when time series data are converted to images. Additionally, when processed as images rather than as time series data, there was a significant reduction in both the size of the data and the training time. The proposed method can provide an important springboard for research in the field of anomaly detection using time series data. Besides, it helps solve problems such as analyzing complex patterns in data in a lightweight manner.
Keywords: Defect detection; time series; deep learning; data augmentation; data transformation
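The preprocessing pipeline can be sketched with generic NumPy code: Gaussian noise at the stated level (0.002) is added, and a simple Markov Transition Field is computed from quantile-binned values. This is a schematic reimplementation for illustration, not the authors' code, and the PatchCore detector is not shown.

```python
# Sketch of the two preprocessing ideas: Gaussian-noise augmentation (level
# 0.002, as stated in the abstract) and a simple Markov Transition Field (MTF)
# image built from quantile bins. Illustrative only; PatchCore is not shown.
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 6 * np.pi, 256)) + 0.05 * rng.standard_normal(256)

# 1) Data augmentation: add low-level Gaussian noise.
augmented = series + rng.normal(0.0, 0.002, size=series.shape)

# 2) Markov Transition Field: quantile-bin the series, estimate the bin-to-bin
#    transition matrix W, then MTF[i, j] = W[bin(x_i), bin(x_j)].
def markov_transition_field(x, n_bins=8):
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    q = np.digitize(x, edges)                     # bin index of every sample
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(q[:-1], q[1:]):               # count observed transitions
        W[a, b] += 1
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1)   # row-normalize
    return W[np.ix_(q, q)]                        # n x n image

mtf_image = markov_transition_field(augmented)
print("MTF image shape:", mtf_image.shape)        # (256, 256)
```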
14. Spatiotemporal deformation characteristics of Outang landslide and identification of triggering factors using data mining (Cited: 2)
Authors: Beibei Yang, Zhongqiang Liu, Suzanne Lacasse, Xin Liang. Journal of Rock Mechanics and Geotechnical Engineering, SCIE CSCD, 2024, Issue 10, pp. 4088-4104 (17 pages)
Since the impoundment of the Three Gorges Reservoir (TGR) in 2003, numerous slopes have experienced noticeable movement or destabilization owing to reservoir level changes and seasonal rainfall. One case is the Outang landslide, a large-scale and active landslide on the south bank of the Yangtze River. The latest monitoring data and site investigations available are analyzed to establish the spatial and temporal landslide deformation characteristics. Data mining technology, including two-step clustering and the Apriori algorithm, is then used to identify the dominant triggers of landslide movement. In the data mining process, the two-step clustering method clusters the candidate triggers and displacement rate into several groups, and the Apriori algorithm generates correlation criteria for the cause-and-effect relationships. The analysis considers multiple locations of the landslide and incorporates two time scales: long-term deformation on a monthly basis and short-term deformation on a daily basis. This analysis shows that the deformations of the Outang landslide are driven by both rainfall and reservoir water, while its deformation varies spatiotemporally mainly due to differences in local responses to hydrological factors. The data mining results reveal different dominant triggering factors depending on the monitoring frequency: the monthly and bi-monthly cumulative rainfall control the monthly deformation, and the 10-d cumulative rainfall and the 5-d cumulative drop of the reservoir water level dominate the daily deformation of the landslide. It is concluded that the spatiotemporal deformation pattern and data mining rules associated with precipitation and reservoir water level have the potential to be broadly implemented for improving landslide prevention and control in dam reservoirs and other landslide-prone areas.
Keywords: Landslide; Deformation characteristics; Triggering factor; Data mining; Three Gorges Reservoir
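The cause-and-effect mining step can be illustrated with a toy Apriori pass over discretized monitoring records. The categories, thresholds, and records below are invented for the example and do not reproduce the Outang dataset or the rules reported in the paper.

```python
# Toy Apriori-style association mining over discretized monitoring records.
# Categories and records are invented for illustration; they do not reproduce
# the Outang monitoring dataset or the rules reported in the paper.
from itertools import combinations

records = [  # each record: discretized triggers + observed displacement class
    {"rain=high", "reservoir_drop=fast", "disp=high"},
    {"rain=high", "reservoir_drop=slow", "disp=high"},
    {"rain=low", "reservoir_drop=fast", "disp=medium"},
    {"rain=low", "reservoir_drop=slow", "disp=low"},
    {"rain=high", "reservoir_drop=fast", "disp=high"},
    {"rain=low", "reservoir_drop=slow", "disp=low"},
]

def support(itemset):
    return sum(itemset <= r for r in records) / len(records)

items = sorted(set().union(*records))
min_support, min_confidence = 0.3, 0.8

# Frequent 1- and 2-itemsets (enough for this toy example).
frequent = [frozenset([i]) for i in items if support({i}) >= min_support]
frequent += [frozenset(c) for c in combinations(items, 2)
             if support(set(c)) >= min_support]

# Rules of the form {trigger...} -> {disp=...}
for itemset in frequent:
    for k in range(1, len(itemset)):
        for antecedent in map(frozenset, combinations(itemset, k)):
            if any(i.startswith("disp=") for i in antecedent):
                continue
            consequent = itemset - antecedent
            conf = support(itemset) / support(antecedent)
            if conf >= min_confidence and any(i.startswith("disp=") for i in consequent):
                print(set(antecedent), "->", set(consequent), f"(conf={conf:.2f})")
```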
15. Assimilation of GOES-R Geostationary Lightning Mapper Flash Extent Density Data in GSI 3DVar, EnKF, and Hybrid En3DVar for the Analysis and Short-Term Forecast of a Supercell Storm Case (Cited: 1)
Authors: Rong KONG, Ming XUE, Edward R. MANSELL, Chengsi LIU, Alexandre O. FIERRO. Advances in Atmospheric Sciences, SCIE CAS CSCD, 2024, Issue 2, pp. 263-277 (15 pages)
Capabilities to assimilate Geostationary Operational Environmental Satellite "R-series" (GOES-R) Geostationary Lightning Mapper (GLM) flash extent density (FED) data within the operational Gridpoint Statistical Interpolation ensemble Kalman filter (GSI-EnKF) framework were previously developed and tested with a mesoscale convective system (MCS) case. In this study, such capabilities are further developed to assimilate GOES GLM FED data within the GSI ensemble-variational (EnVar) hybrid data assimilation (DA) framework. The results of assimilating the GLM FED data using 3DVar and pure En3DVar (PEn3DVar, using 100% ensemble covariance and no static covariance) are compared with those of EnKF/DfEnKF for a supercell storm case. The focus of this study is to validate the correctness and evaluate the performance of the new implementation rather than to compare the performance of FED DA among different DA schemes. Only the results of 3DVar and PEn3DVar are examined and compared with EnKF/DfEnKF. Assimilation of a single FED observation shows that the magnitude and horizontal extent of the analysis increments from PEn3DVar are generally larger than those from EnKF, which is mainly caused by the different localization strategies used in EnKF/DfEnKF and PEn3DVar as well as the integration limits of the graupel mass in the observation operator. Overall, the forecast performance of PEn3DVar is comparable to EnKF/DfEnKF, suggesting a correct implementation.
Keywords: GOES-R; Lightning; data assimilation; EnKF; EnVar
16. An Imbalanced Data Classification Method Based on Hybrid Resampling and Fine Cost Sensitive Support Vector Machine (Cited: 2)
Authors: Bo Zhu, Xiaona Jing, Lan Qiu, Runbo Li. Computers, Materials & Continua, SCIE EI, 2024, Issue 6, pp. 3977-3999 (23 pages)
When building a classification model, the scenario where the samples of one class significantly outnumber those of the other class is called data imbalance. Data imbalance causes the trained classification model to favor the majority class (usually defined as the negative class), which may harm the accuracy of the minority class (usually defined as the positive class) and then lead to poor overall performance of the model. A method called MSHR-FCSSVM for solving imbalanced data classification is proposed in this article, based on a new hybrid resampling approach (MSHR) and a new fine cost-sensitive support vector machine (CS-SVM) classifier (FCSSVM). The MSHR measures the separability of each negative sample through its Silhouette value calculated with the Mahalanobis distance between samples; based on this, the so-called pseudo-negative samples are screened out to generate new positive samples through linear interpolation (over-sampling step) and are finally deleted (under-sampling step). This approach replaces pseudo-negative samples with newly generated positive samples one by one to clear up the inter-class overlap on the borderline, without changing the overall scale of the dataset. The FCSSVM is an improved version of the traditional CS-SVM. It simultaneously considers the influence of both the imbalance in sample numbers and the class distribution on classification, and finely tunes the class cost weights using the efficient optimization algorithm based on the physical phenomenon of rime ice (the RIME algorithm), with cross-validation accuracy as the fitness function, to accurately adjust the classification borderline. To verify the effectiveness of the proposed method, a series of experiments are carried out on 20 imbalanced datasets, including both mildly and extremely imbalanced datasets. The experimental results show that the MSHR-FCSSVM method performs better than the comparison methods in most cases, and that both the MSHR and the FCSSVM play significant roles.
Keywords: Imbalanced data classification; Silhouette value; Mahalanobis distance; RIME algorithm; CS-SVM
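Two ingredients named in the abstract can be pictured separately with standard tools: a Mahalanobis-distance-based, silhouette-style score for the negative samples, and an SVM whose class cost weights are set explicitly. The sketch below uses scikit-learn and SciPy on synthetic data and is not the MSHR-FCSSVM implementation; the RIME weight-tuning step and the resampling loop are omitted, and the class weights are arbitrary illustration values.

```python
# Sketch of two ingredients from the abstract: a Mahalanobis-based
# silhouette-style score per negative sample, and a cost-sensitive SVM with
# explicit class weights. Not the MSHR-FCSSVM implementation; RIME tuning
# and the hybrid resampling loop are omitted.
import numpy as np
from scipy.spatial.distance import mahalanobis
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=5, n_informative=5,
                           n_redundant=0, weights=[0.9, 0.1], random_state=0)

VI = np.linalg.inv(np.cov(X, rowvar=False))      # inverse covariance for Mahalanobis

def mahalanobis_silhouette(x, own, other):
    """Silhouette-style score using mean Mahalanobis distances to each class."""
    a = np.mean([mahalanobis(x, p, VI) for p in own])
    b = np.mean([mahalanobis(x, p, VI) for p in other])
    return (b - a) / max(a, b)

neg, pos = X[y == 0], X[y == 1]
scores = np.array([mahalanobis_silhouette(x, neg, pos) for x in neg[:50]])
print("negative samples with low separability:", int((scores < 0).sum()), "of 50")

# Cost-sensitive SVM: penalize errors on the minority (positive) class more.
clf = SVC(kernel="rbf", class_weight={0: 1.0, 1: 9.0}).fit(X, y)
print("training accuracy:", round(clf.score(X, y), 3))
```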
17. EDSUCh: A robust ensemble data summarization method for effective medical diagnosis (Cited: 1)
Authors: Mohiuddin Ahmed, A.N.M. Bazlur Rashid. Digital Communications and Networks, SCIE CSCD, 2024, Issue 1, pp. 182-189 (8 pages)
Identifying rare patterns for medical diagnosis is a challenging task due to heterogeneity and the volume of data. Data summarization can create a concise version of the original data that can be used for effective diagnosis. In this paper, we propose an ensemble summarization method that combines clustering and sampling to create a summary of the original data that ensures the inclusion of rare patterns. To the best of our knowledge, no such technique has been available to augment the performance of anomaly detection techniques and simultaneously increase the efficiency of medical diagnosis. The performance of popular anomaly detection algorithms increases significantly in terms of accuracy and computational complexity when the summaries are used. Therefore, medical diagnosis becomes more effective, and our experimental results show that the combination of the proposed summarization scheme and all underlying algorithms used in this paper outperforms the most popular anomaly detection techniques.
Keywords: Data summarization; Ensemble; Medical diagnosis; Sampling
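A generic version of the cluster-then-sample idea: group the records, then draw from every cluster so that small clusters, which may hold rare patterns, are guaranteed representation in the summary. KMeans is used as a stand-in; this is not the EDSUCh ensemble itself, and the cluster count and sample sizes are illustrative choices.

```python
# Generic cluster-then-sample summarization sketch: every cluster contributes
# to the summary, so small clusters holding rare patterns are not lost.
# KMeans is a stand-in; this is not the EDSUCh ensemble itself.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

rng = np.random.default_rng(0)
X, _ = make_blobs(n_samples=[900, 80, 20], centers=None, n_features=4,
                  random_state=0)                # one dominant and two rare groups

def summarize(X, n_clusters=5, per_cluster=10):
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    picks = []
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        take = min(per_cluster, len(idx))        # keep all of a tiny cluster
        picks.append(rng.choice(idx, size=take, replace=False))
    return X[np.concatenate(picks)]

summary = summarize(X)
print("summary size:", len(summary), "of", len(X), "records")
```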
18. Enhanced prediction of anisotropic deformation behavior using machine learning with data augmentation (Cited: 1)
Authors: Sujeong Byun, Jinyeong Yu, Seho Cheon, Seong Ho Lee, Sung Hyuk Park, Taekyung Lee. Journal of Magnesium and Alloys, SCIE EI CAS CSCD, 2024, Issue 1, pp. 186-196 (11 pages)
Mg alloys possess an inherent plastic anisotropy owing to the selective activation of deformation mechanisms depending on the loading condition. This characteristic results in a diverse range of flow curves that vary with the deformation condition. This study proposes a novel approach for accurately predicting the anisotropic deformation behavior of wrought Mg alloys using machine learning (ML) with data augmentation. The developed model combines four key strategies from data science: learning the entire flow curves, generative adversarial networks (GAN), algorithm-driven hyperparameter tuning, and the gated recurrent unit (GRU) architecture. The proposed model, namely GAN-aided GRU, was extensively evaluated for various predictive scenarios, such as interpolation, extrapolation, and a limited dataset size. The model exhibited significant predictability and improved generalizability for estimating the anisotropic compressive behavior of ZK60 Mg alloys under 11 annealing conditions and for three loading directions. The GAN-aided GRU results were superior to those of previous ML models and constitutive equations. The superior performance was attributed to hyperparameter optimization, GAN-based data augmentation, and the inherent predictivity of the GRU for extrapolation. As a first attempt to employ ML techniques other than artificial neural networks, this study proposes a novel perspective on predicting the anisotropic deformation behaviors of wrought Mg alloys.
Keywords: Plastic anisotropy; Compression; Annealing; Machine learning; Data augmentation
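As an illustration of the GRU backbone mentioned above, the sketch below fits a small PyTorch GRU that maps a sequence of strain and condition inputs to stress values, standing in for flow-curve prediction. The data are synthetic, and the paper's GAN augmentation and hyperparameter tuning are not included.

```python
# Minimal PyTorch GRU regression sketch: map a sequence of (strain, condition)
# inputs to stress values, as a stand-in for flow-curve prediction. Synthetic
# data only; the GAN augmentation and hyperparameter tuning are omitted.
import torch
from torch import nn

torch.manual_seed(0)
n_curves, seq_len, n_features = 64, 50, 3     # e.g. [strain, temperature, direction]

X = torch.rand(n_curves, seq_len, n_features)
# Synthetic "flow curves": a saturating function of strain, shifted by condition.
y = 100 * (1 - torch.exp(-3 * X[:, :, :1])) + 20 * X[:, :, 1:2]

class FlowCurveGRU(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.gru(x)          # (batch, seq, hidden)
        return self.head(out)         # stress at every strain step

model = FlowCurveGRU()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print("final training MSE:", float(loss))
```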
19. Using ontology and rules to retrieve the semantics of disaster remote sensing data (Cited: 1)
Authors: DONG Yumin, LI Ziyang, LI Xuesong, LI Xiaohui. Journal of Systems Engineering and Electronics, SCIE CSCD, 2024, Issue 5, pp. 1211-1218 (8 pages)
Remote sensing data plays an important role in natural disaster management. However, with the increase in the variety and quantity of remote sensors, the problem of "knowledge barriers" arises when data users in the disaster field retrieve remote sensing data. To address this problem, this paper proposes an ontology and rule based retrieval (ORR) method for disaster remote sensing data. The method introduces ontology technology to express earthquake disaster and remote sensing knowledge and, on this basis, realizes task-suitability reasoning for earthquake disaster remote sensing data, mining the semantic relationships between remote sensing metadata and disasters. A prototype system is built according to the ORR method and compared with the traditional method; using the ORR method to retrieve disaster remote sensing data reduces the knowledge requirements placed on data users in the retrieval process and improves data retrieval efficiency.
Keywords: remote sensing data; disaster; ontology; semantic reasoning
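A minimal sketch of the ontology-plus-rule idea using rdflib: disaster task requirements and sensor metadata are stated as triples, and a SPARQL query acts as the suitability rule linking a disaster task to matching remote sensing data. The vocabulary, namespace, and thresholds are invented for the example, not the paper's ontology or rules.

```python
# Minimal ontology-and-rule sketch with rdflib: an invented vocabulary links
# an earthquake task to remote sensing datasets via a SPARQL "suitability"
# rule. This is not the paper's ontology or its reasoning rules.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/disaster#")
g = Graph()

# Task requirement: earthquake damage mapping needs optical data <= 1 m resolution.
g.add((EX.EarthquakeDamageMapping, RDF.type, EX.DisasterTask))
g.add((EX.EarthquakeDamageMapping, EX.requiresSensorType, Literal("optical")))
g.add((EX.EarthquakeDamageMapping, EX.maxResolutionMeters, Literal(1.0)))

# Remote sensing metadata (invented scenes).
for name, stype, res in [("SceneA", "optical", 0.5),
                         ("SceneB", "optical", 10.0),
                         ("SceneC", "SAR", 1.0)]:
    s = EX[name]
    g.add((s, RDF.type, EX.RemoteSensingData))
    g.add((s, EX.sensorType, Literal(stype)))
    g.add((s, EX.resolutionMeters, Literal(res)))

# Suitability rule expressed as a SPARQL query.
rule = """
PREFIX ex: <http://example.org/disaster#>
SELECT ?scene WHERE {
  ?task a ex:DisasterTask ;
        ex:requiresSensorType ?stype ;
        ex:maxResolutionMeters ?maxres .
  ?scene a ex:RemoteSensingData ;
         ex:sensorType ?stype ;
         ex:resolutionMeters ?res .
  FILTER(?res <= ?maxres)
}
"""
for row in g.query(rule):
    print("suitable data:", row.scene)
```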
20. Design and implementation of user information sharing system using location-based services for social network services
Authors: Donsu Lee, Junghoon Shin, Sangjun Lee. Journal of Measurement Science and Instrumentation, 2012, Issue 2, pp. 169-172 (4 pages)
The Internet serves as a place for communication between people, beyond being simply a space for the acquisition of information. Recently, social network services (SNS), which reflect the basic human desire to talk and communicate with others, have drawn attention around the world. Location-based services (LBS) provide various everyday conveniences, such as improved productivity, through location information obtained from GPS and WiFi. This paper suggests an application combining LBS and SNS based on the Android OS. Using the smartphone, a personal mobile information device, it combines location information with user information and SNS so that the service can be developed. It also maximizes the sharing and use of information via tweets based on the locations of friends. The proposed system aims to let users present their online identity more actively and more conveniently.
Keywords: Android OS; social network service (SNS); location-based service (LBS); Google Maps; Twitter; Open API