Journal Articles
13 articles found
1. Inverse Estimation on Trigger Factors of Simultaneous Slope Failures with Purification of Training Data Sets
Authors: Hirohito Kojima, Ryo Sekine, Tomoya Yoshida, Ryo Nozaki. Journal of Earth Science and Engineering, 2013, Issue 9, pp. 594-602.
This paper presents a procedure for purifying training data sets (i.e., past occurrences of slope failures) for the inverse estimation of unobserved trigger factors of "different types of simultaneous slope failures". Because pixel-by-pixel observation of trigger factors is difficult, the authors previously proposed an inverse analysis algorithm for trigger factors based on SEM (structural equation modeling). Through a measurement equation, the trigger factor is inversely estimated, and a TFI (trigger factor influence) map can also be produced. As a subsequent task, a purification procedure for training data sets should be constructed to improve the accuracy of the TFI map, which depends on the representativeness of the given training data sets for different types of slope failures. The proposed procedure resamples the pixels matched between the original groups of past slope failures (i.e., surface slope failures, deep-seated slope failures, landslides) and the three groups obtained by K-means clustering of all pixels corresponding to those slope failures. For all three types of slope failures, an improvement in success rates with the resampled training data sets was confirmed. As a final outcome, the differences between the TFI maps produced with the original and resampled training data sets are delineated on a DIF map (difference map), which is useful for analyzing trigger factor influence in terms of "risky-side" and "safe-side" assessment sub-areas with respect to "different types of simultaneous slope failures".
Keywords: purification of training data; simultaneous slope failures; inverse analysis of unobserved trigger factor; spatial data integration; structural equation modeling
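The purification step lends itself to a short illustration. Below is a minimal sketch, assuming NumPy/scikit-learn and toy arrays (the function name, the majority-vote cluster matching, and the data are illustrative, not the paper's exact algorithm): pixels are clustered by K-means into three groups, and only pixels whose original failure-type label matches the majority label of their cluster are retained as the resampled training set.

```python
# Hypothetical sketch of training-data purification via K-means; array names
# and the cluster-to-class matching rule are assumptions, not from the paper.
import numpy as np
from sklearn.cluster import KMeans

def purify_training_pixels(features, labels, n_types=3, seed=0):
    """Keep only pixels whose original failure-type label agrees with
    the majority label of the K-means cluster they fall into."""
    clusters = KMeans(n_clusters=n_types, random_state=seed, n_init=10).fit_predict(features)
    keep = np.zeros(len(labels), dtype=bool)
    for c in range(n_types):
        members = clusters == c
        # Majority original label inside this cluster.
        majority = np.bincount(labels[members]).argmax()
        # Retain only the "matched" pixels.
        keep |= members & (labels == majority)
    return keep

# Toy usage: 200 pixels, 5 features, 3 failure types (0, 1, 2).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 3, size=200)
mask = purify_training_pixels(X, y)
print(f"kept {mask.sum()} of {len(y)} pixels")
```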
2. Security Vulnerability Analyses of Large Language Models (LLMs) through Extension of the Common Vulnerability Scoring System (CVSS) Framework
Authors: Alicia Biju, Vishnupriya Ramesh, Vijay K. Madisetti. Journal of Software Engineering and Applications, 2024, Issue 5, pp. 340-358.
Large Language Models (LLMs) have revolutionized Generative Artificial Intelligence (GenAI) tasks, becoming an integral part of various applications in society, including text generation, translation, summarization, and more. However, their widespread usage emphasizes the critical need to enhance their security posture to ensure the integrity and reliability of their outputs and minimize harmful effects. Prompt injections and training data poisoning attacks are two of the most prominent vulnerabilities in LLMs, which could potentially lead to unpredictable and undesirable behaviors, such as biased outputs, misinformation propagation, and even malicious content generation. The Common Vulnerability Scoring System (CVSS) framework provides a standardized approach to capturing the principal characteristics of vulnerabilities, facilitating a deeper understanding of their severity within the security and AI communities. By extending the current CVSS framework, we generate scores for these vulnerabilities so that organizations can prioritize mitigation efforts, allocate resources effectively, and implement targeted security measures to defend against potential risks.
Keywords: Common Vulnerability Scoring System (CVSS); Large Language Models (LLMs); DALL-E; prompt injections; training data poisoning; CVSS metrics
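As a concrete illustration of CVSS-style scoring, here is a sketch of the CVSS v3.1 base-score computation for the scope-unchanged case, following the public specification; the metric values chosen for a prompt-injection vulnerability at the end are assumptions for demonstration, not the scores derived in the paper.

```python
# Sketch of the CVSS v3.1 base score (scope unchanged), per the public spec.
import math

AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required (scope unchanged)
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality / Integrity / Availability

def roundup(x):
    """CVSS 'round up to one decimal place' helper."""
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Illustrative (assumed) rating of a network-reachable prompt injection
# needing no privileges or user interaction, with high integrity impact.
print(base_score("N", "L", "N", "N", "L", "H", "N"))  # -> 8.1
```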
3. Blockchain for Education: Verification and Management of Lifelong Learning Data (Cited by 1)
Authors: Ba-Lam Do, Van-Thanh Nguyen, Hoang-Nam Dinh, Thanh-Chung Dao, Binh Minh Nguyen. Computer Systems Science & Engineering (SCIE, EI), 2022, Issue 11, pp. 591-604.
In recent years, blockchain technology has been applied in the educational domain because of its salient advantages, i.e., transparency, decentralization, and immutability. Available systems typically use public blockchain networks such as Ethereum and Bitcoin to store learning results. However, the cost of writing data on these networks is significant, leading educational institutions to limit the data sent to the target network, typically to only hash codes of the issued certificates. In this paper, we present a system based on a private blockchain network for lifelong learning data authentication and management named B4E (Blockchain For Education). B4E stores not only certificates but also learners' training data such as transcripts and educational programs in order to create a complete record of the lifelong education of each user and to verify the certificates they have obtained. As a result, B4E can address two types of fake certificates, i.e., certificates printed by unlawful organizations and certificates issued by educational institutions for learners who have not met the training requirements. In addition, B4E is designed to allow all participants to easily deploy software packages to manage, share, and check stored information without depending on a single point of access. As such, the system enhances the transparency and reliability of the stored data. Our experiments show that B4E meets the expectations for real-world deployment.
Keywords: training data; certificate; verification and management; private blockchain
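A hash-linked chain of full learning records can be sketched in a few lines. The following is a hypothetical illustration in the spirit of B4E (storing certificates and transcripts, not just certificate hashes); the block layout and field names are assumptions, not B4E's actual data model.

```python
# Hypothetical sketch of hash-linked learning records; fields are assumptions.
import hashlib, json, time

def make_block(record, prev_hash):
    """Wrap a learning record (certificate, transcript, program) in a block
    whose hash commits to both the record and the previous block."""
    block = {"timestamp": time.time(), "record": record, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

chain = [make_block({"genesis": True}, "0" * 64)]
chain.append(make_block(
    {"learner": "alice", "certificate": "BSc CS",
     "transcript": {"Algorithms": "A", "Databases": "B+"}},
    chain[-1]["hash"]))

def verify(chain):
    """Recompute each hash and check the back-links; any tampering breaks it."""
    for prev, cur in zip(chain, chain[1:]):
        body = {k: cur[k] for k in ("timestamp", "record", "prev_hash")}
        payload = json.dumps(body, sort_keys=True).encode()
        if cur["prev_hash"] != prev["hash"] or cur["hash"] != hashlib.sha256(payload).hexdigest():
            return False
    return True

print(verify(chain))  # True
```

Because every block commits to the full record rather than a bare certificate hash, a verifier can check both kinds of fake certificate the abstract mentions: a forged document (hash mismatch) and a genuine-looking certificate with no supporting transcript history on the chain.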
4. High-resolution land cover classification: cost-effective approach for extraction of reliable training data from existing land cover datasets
Authors: Gorica Bratic, Vasil Yordanov, Maria Antonia Brovelli. International Journal of Digital Earth (SCIE, EI), 2023, Issue 1, pp. 3618-3636.
There has been a significant increase in the availability of global high-resolution land cover (HRLC) datasets due to growing demand and favorable technological advancements. However, this has brought forth the challenge of collecting reference data with a high level of detail for global extents. While photo-interpretation is considered optimal for collecting quality training data for global HRLC mapping, some producers of existing HRLCs use less trustworthy sources, such as existing land cover at a lower resolution, to reduce costs. This work proposes a methodology to extract the most accurate parts of existing HRLCs in response to the challenge of providing reliable reference data at a low cost. The methodology combines existing HRLCs by intersection, and the output represents a Map Of Land Cover Agreement (MOLCA) that can be utilized for selecting training samples. MOLCA's effectiveness was demonstrated through HRLC map production in Africa, in which it generated 48,000 samples. The best classification test had an overall accuracy of 78%. This level of accuracy is comparable to or better than that of existing HRLCs obtained from more expensive sources of training data, such as photo-interpretation, highlighting the cost-effectiveness and reliability potential of the developed methodology in supporting global HRLC production.
Keywords: high-resolution land cover; global land cover; training data; reference data; data quality
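The map-intersection idea behind MOLCA is simple to illustrate. Below is a minimal sketch, assuming co-registered land cover rasters as NumPy arrays with toy class codes: pixels where all input maps agree keep their class and become candidate training samples; the rest are set to nodata.

```python
# Minimal sketch of a Map Of Land Cover Agreement built by intersection;
# the arrays and class codes are illustrative toy data.
import numpy as np

def molca(maps, nodata=255):
    """Where all input land cover maps agree, keep the class; elsewhere nodata."""
    stacked = np.stack(maps)
    agree = np.all(stacked == stacked[0], axis=0)
    out = np.full(stacked[0].shape, nodata, dtype=stacked.dtype)
    out[agree] = stacked[0][agree]
    return out

# Three toy 3x3 land cover maps with classes {1: forest, 2: water, 3: urban}.
a = np.array([[1, 1, 2], [3, 2, 2], [1, 3, 3]], dtype=np.uint8)
b = np.array([[1, 2, 2], [3, 2, 1], [1, 3, 3]], dtype=np.uint8)
c = np.array([[1, 1, 2], [3, 2, 2], [2, 3, 3]], dtype=np.uint8)
print(molca([a, b, c]))  # agreement pixels keep their class; the rest are 255
```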
5. Face recognition using both visible light image and near-infrared image and a deep network (Cited by 3)
Authors: Kai Guo, Shuai Wu, Yong Xu. CAAI Transactions on Intelligence Technology, 2017, Issue 1, pp. 39-47.
In recent years, deep networks have achieved outstanding performance in computer vision, especially in the field of face recognition. For a face recognition model based on a deep network, there are two main, closely related factors: 1) the structure of the deep neural network, and 2) the number and quality of training data. In real applications, illumination change is one of the most important factors that significantly affect the performance of face recognition algorithms. Deep network models can achieve the expected performance only if there is sufficient training data covering various illumination intensities; however, such training data are hard to collect in the real world. In this paper, focusing on the illumination change challenge, we propose a deep network model that takes both visible light images and near-infrared images into account to perform face recognition. Near-infrared images, as we know, are much less sensitive to illumination. Visible light face images contain abundant texture information that is very useful for face recognition. Thus, we design an adaptive score fusion strategy with little information loss and use the nearest neighbor algorithm to conduct the final classification. The experimental results demonstrate that the model is very effective in real-world scenarios and performs much better under illumination change than other state-of-the-art models.
Keywords: deep network; face recognition; illumination change; insufficient training data
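Score-level fusion followed by nearest-neighbor matching can be sketched briefly. The fixed-weight fusion below is a simplification of the paper's adaptive strategy, and all scores and labels are toy values.

```python
# Sketch of score-level fusion of visible-light (VIS) and near-infrared (NIR)
# matching scores plus nearest-neighbor classification; the equal weighting
# is an assumption, not the paper's adaptive fusion rule.
import numpy as np

def fused_nearest_neighbor(vis_scores, nir_scores, labels, alpha=0.5):
    """vis_scores/nir_scores: similarity of a probe to each gallery sample.
    Returns the label of the gallery sample with the best fused score."""
    fused = alpha * vis_scores + (1 - alpha) * nir_scores
    return labels[np.argmax(fused)]

# Toy gallery of 4 samples belonging to subjects 0, 0, 1, 2.
labels = np.array([0, 0, 1, 2])
vis = np.array([0.60, 0.55, 0.90, 0.30])   # probe vs gallery, VIS branch
nir = np.array([0.80, 0.65, 0.40, 0.35])   # probe vs gallery, NIR branch
print(fused_nearest_neighbor(vis, nir, labels))  # -> 0
```

On VIS scores alone the probe would match subject 1; the illumination-insensitive NIR evidence shifts the fused decision to subject 0, which is the kind of correction the two-modality design aims to provide.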
6. Fall detection system in enclosed environments based on single Gaussian model
Authors: Adel Rhuma, Jonathon A. Chambers. Journal of Measurement Science and Instrumentation (CAS), 2012, Issue 2, pp. 123-128.
In this paper, we propose an efficient fall detection system in enclosed environments based on a single Gaussian model using the maximum likelihood method. Online video clips are used to extract the features from two cameras. After the model is constructed, a threshold is set, and the probability of an incoming sample under the single Gaussian model is compared with that threshold to make a decision. Experimental results show that if a proper threshold is set, a good recognition rate for fall activities can be achieved.
Keywords: humans; fall detection; enclosed environments; one-class support vector machine (OCSVM); imperfect training data; shape analysis; maximum likelihood (ML); background subtraction; codebook; voxel person
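The decision rule in the abstract amounts to a likelihood threshold on a fitted Gaussian. A minimal sketch, assuming two-dimensional shape features and a 1%-quantile threshold rule (both assumptions; the paper's features come from two-camera video):

```python
# Sketch of the single-Gaussian decision rule: fit mean/covariance by maximum
# likelihood on normal-activity features, then flag low-likelihood samples.
# Feature extraction is out of scope here and the data are toy values.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
normal_features = rng.normal(loc=[1.0, 0.5], scale=0.1, size=(500, 2))  # e.g. shape descriptors

# Maximum-likelihood estimates of the single Gaussian.
mu = normal_features.mean(axis=0)
cov = np.cov(normal_features, rowvar=False)
model = multivariate_normal(mean=mu, cov=cov)

# Threshold chosen so that 1% of training samples would be flagged (assumed rule).
threshold = np.quantile(model.logpdf(normal_features), 0.01)

incoming = np.array([0.2, 1.4])  # an atypical sample, e.g. a fall posture
print("fall detected" if model.logpdf(incoming) < threshold else "normal activity")
```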
7. COLLECTING AND ANALYSING OUTCOMES DATA FROM ACUPUNCTURE TRAINING INSTITUTIONS: A COLLABORATIVE PROJECT
Authors: Mark Bovey. World Journal of Traditional Chinese Medicine, 2015, Issue 4, pp. 69-70.
If clinical research is to be relevant to real-world decision making, it requires interventions that reflect usual practice. Observational studies may provide the best approach to defining this. Siting studies in the student clinics of acupuncture teaching institutions (ATIs) has potential benefits for the institutions as well as for researchers. This is the first such multi-centre study in accredited ATIs.
Keywords: collaborative project; collecting and analysing outcomes data; acupuncture training institutions
8. Consistency of the k-Nearest Neighbor Classifier for Spatially Dependent Data
Authors: Ahmad Younso, Ziad Kanaya, Nour Azhari. Communications in Mathematics and Statistics (SCIE, CSCD), 2023, Issue 3, pp. 503-518.
The purpose of this paper is to investigate the k-nearest neighbor classification rule for spatially dependent data. Some spatial mixing conditions are considered, and under such spatial structures, the well-known k-nearest neighbor rule is suggested to classify spatial data. We establish consistency and strong consistency of the classifier under mild assumptions. Our main results extend the consistency result in the i.i.d. case to the spatial case.
Keywords: Bayes rule; spatial data; training data; k-nearest neighbor rule; mixing condition; consistency
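For reference, the rule whose consistency is studied can be stated in standard textbook notation (not the authors' own):

```latex
% k-NN classification rule with training pairs (X_i, Y_i), Y_i \in \{0,1\}:
g_n(x) =
  \begin{cases}
    1 & \text{if } \sum_{i=1}^{n} w_{ni}(x)\,\mathbf{1}\{Y_i = 1\}
        \ge \sum_{i=1}^{n} w_{ni}(x)\,\mathbf{1}\{Y_i = 0\}, \\
    0 & \text{otherwise,}
  \end{cases}
\qquad
w_{ni}(x) = \frac{1}{k}\,\mathbf{1}\{X_i \text{ is among the } k \text{ nearest neighbors of } x\}.
```

Consistency then means the error probability of g_n converges to the Bayes risk L* as n tends to infinity (with k growing and k/n vanishing); the paper's contribution is to establish this under spatial mixing conditions rather than the classical i.i.d. assumption.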
9. Hybrid Model of Power Transformer Fault Classification Using C-set and MFCM-MCSVM
Authors: Ali Abdo, Hongshun Liu, Yousif Mahmoud, Hongru Zhang, Ying Sun, Qingquan Li, Jian Guo. CSEE Journal of Power and Energy Systems (SCIE, EI, CSCD), 2024, Issue 2, pp. 672-685.
This paper aims to increase the diagnosis accuracy of fault classification for power transformers by introducing a new off-line hybrid model based on the combination subset of set method (C-set) and modified fuzzy C-means algorithm (MFCM) together with an optimizable multiclass SVM (MCSVM). The innovation of this paper lies in solving the predicaments of outliers, boundary proportion, and unequal data that exist in both traditional and intelligent models. Taking into consideration the closeness of dissolved gas analysis (DGA) data, the C-set method is implemented to partition the DGA data samples, based on their fault type, into non-repeating subsets. Then, the MFCM is used to remove outliers from the DGA samples by combining highly similar data for every subset within the same cluster, yielding the optimized training data (OTD) set. It also serves to reduce the dimensionality of the DGA samples and the uncertainty of transformer condition monitoring. After that, the optimized MCSVM is trained on the OTD. The proposed model's diagnosis accuracy is 93.3%. The obtained results indicate that our model significantly improves fault identification accuracy in power transformers when compared with other conventional and intelligent models.
Keywords: combination subset of set (C-set) method; modified fuzzy C-means (MFCM); optimizable multiclass-SVM (MCSVM); optimized training data (OTD)
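The outlier-removal-then-SVM pipeline can be sketched compactly. Here a plain distance-to-centroid filter stands in for the modified fuzzy C-means step, and synthetic gas-ratio-like vectors stand in for DGA data, so this is a simplified analogue of the paper's model, not the MFCM-MCSVM itself.

```python
# Simplified analogue of the hybrid pipeline: per-fault-class outlier removal
# on DGA-like features, then a multiclass SVM; the filter and data are assumptions.
import numpy as np
from sklearn.svm import SVC

def remove_outliers_per_class(X, y, keep_frac=0.9):
    """Within each fault class, drop the samples farthest from the class centroid."""
    keep = np.zeros(len(y), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
        keep[idx[np.argsort(d)[: int(keep_frac * len(idx))]]] = True
    return keep

# Toy DGA-like features (e.g. gas-ratio vectors) for 3 fault classes.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(m, 0.5, size=(60, 4)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 60)

mask = remove_outliers_per_class(X, y)      # "optimized training data" stand-in
clf = SVC(kernel="rbf").fit(X[mask], y[mask])  # multiclass SVM stand-in
print(clf.score(X, y))
```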
10. Production of global land cover data-GLCNMO (Cited by 12)
Authors: Ryutaro Tateishi, Bayaer Uriyangqai, Hussam Al-Bilbisi, Mohamed Aboel Ghar, Javzandulam Tsend-Ayush, Toshiyuki Kobayashi, Alimujiang Kasimu, Nguyen Thanh Hoan, Adel Shalaby, Bayan Alsaaideh, Tsevenge Enkhzaya, Gegentana, Hiroshi P. Sato. International Journal of Digital Earth (SCIE), 2011, Issue 1, pp. 22-49.
Global land cover is one of the fundamental contents of Digital Earth. The Global Mapping project, coordinated by the International Steering Committee for Global Mapping, has produced a 1-km global land cover dataset, Global Land Cover by National Mapping Organizations (GLCNMO). It has 20 land cover classes defined using the Land Cover Classification System. Of them, 14 classes were derived using supervised classification. The remaining six were classified independently: urban, tree open, mangrove, wetland, snow/ice, and water. Primary source data of this land cover mapping were eight periods of 16-day composite 7-band 1-km MODIS data of 2003. Training data for supervised classification were collected using Landsat images, MODIS NDVI seasonal change patterns, Google Earth, Virtual Earth, existing regional maps, and experts' comments. The overall accuracy is 76.5%, and the overall accuracy weighted by the mapped area coverage is 81.2%. The data are available from the Global Mapping project website (http://www.iscgm.org/). The MODIS data used, the land cover training data, and a list of existing regional maps are also available from the CEReS website. This mapping attempt demonstrates that training/validation data accumulation from different mapping projects must be promoted to support future global land cover mapping.
Keywords: land cover; remote sensing; Digital Earth; training data
11. Generating Chinese named entity data from parallel corpora (Cited by 2)
Authors: Ruiji Fu, Bing Qin, Ting Liu. Frontiers of Computer Science (SCIE, EI, CSCD), 2014, Issue 4, pp. 629-641.
Annotating named entity recognition (NER) training corpora is a costly but necessary process for supervised NER approaches. This paper presents a general framework to generate large-scale NER training data from parallel corpora. In our method, we first employ a high-performance NER system on one side of a bilingual corpus. Then, we project the named entity (NE) labels to the other side according to the word-level alignments. Finally, we propose several strategies to select high-quality auto-labeled NER training data. We apply our approach to Chinese NER using an English-Chinese parallel corpus. Experimental results show that our approach can collect high-quality labeled data and can help improve Chinese NER.
Keywords: named entity recognition; Chinese named entity; training data generating; parallel corpora
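Label projection through word alignments is the core of the framework and fits in a few lines. A minimal sketch with toy BIO tags and a toy alignment (real systems add the selection strategies the paper proposes):

```python
# Minimal sketch of projecting NE labels across a word-aligned sentence pair;
# the tags, alignment, and sentence are illustrative toy data.
def project_ne_labels(src_tags, alignment, tgt_len):
    """alignment: list of (src_index, tgt_index) word-alignment pairs.
    Copies entity tags from source tokens onto aligned target tokens,
    keeping the first tag when several source tokens align to one target."""
    tgt_tags = ["O"] * tgt_len
    for s, t in alignment:
        if src_tags[s] != "O" and tgt_tags[t] == "O":
            tgt_tags[t] = src_tags[s]
    return tgt_tags

# English side, tagged by a high-performance NER system:
# "Barack Obama visited sunny Beijing"
en_tags = ["B-PER", "I-PER", "O", "O", "B-LOC"]
# English -> Chinese alignments; both name tokens map to one Chinese token.
align = [(0, 0), (1, 0), (2, 1), (4, 2)]
print(project_ne_labels(en_tags, align, tgt_len=3))  # ['B-PER', 'O', 'B-LOC']
```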
12. Curriculum Development for FAIR Data Stewardship
Authors: Francisca Oladipo, Sakinat Folorunso, Ezekiel Ogundepo, Obinna Osigwe, Akinyinka Tosin Akindele. Data Intelligence (EI), 2022, Issue 4, pp. 991-1012, 1033.
The FAIR Guidelines attempt to make digital data Findable, Accessible, Interoperable, and Reusable (FAIR). To prepare FAIR data, a new data science discipline known as data stewardship is emerging and, as the FAIR Guidelines gain more acceptance, an increase in the demand for data stewards is expected. Consequently, there is a need to develop curricula to foster professional skills in data stewardship through effective knowledge communication. There have been a number of initiatives aimed at bridging the gap in FAIR data management training through both formal and informal programmes. This article describes the experience of developing a digital initiative for FAIR data management training under the Digital Innovations and Skills Hub (DISH) project. The FAIR Data Management course offers 6 short on-demand certificate modules over 12 weeks. The modules are divided into two sets: FAIR data and data science. The core subjects cover elementary topics in data science, regulatory frameworks, FAIR data management, intermediate to advanced topics in FAIR Data Point installation, and FAIR data in the management of healthcare and semantic data. Each week, participants are required to devote 7-8 hours of self-study to the modules, based on the resources provided. Once they have satisfied all requirements, students are certified as FAIR data scientists and qualified to serve as both FAIR data stewards and analysts. It is expected that in-depth and focused curricula development with diverse participants will build a core of FAIR data scientists for Data Competence Centres and encourage the rapid adoption of the FAIR Guidelines for research and development.
Keywords: data steward; data science; FAIR Guidelines; FAIR; digital technology; FDP installation; FAIR Data Train; semantic web; Personal Health Train
13. Coastal wetland hyperspectral classification under the collaborative of subspace partition and infinite probabilistic latent graph ranking
Authors: HU YaBin, REN GuangBo, MA Yi, YANG JunFang, WANG JianBu, AN JuBai, LIANG Jian, MA YuanQing, SONG XiuKai. Science China (Technological Sciences) (SCIE, EI, CAS, CSCD), 2022, Issue 4, pp. 759-777.
The abundance of spectral information provided by hyperspectral imagery offers great benefits for many applications. However, processing such high-dimensional data volumes is a challenge because there may be redundant bands owing to the high interband correlation. This study aimed to reduce the possibility of "dimension disaster" in the classification of coastal wetlands using hyperspectral images with limited training samples. The study developed a hyperspectral classification algorithm for coastal wetlands using a combination of subspace partitioning and infinite probabilistic latent graph ranking in a random patch network (the SSP-IPLGR-RPnet model). The SSP-IPLGR-RPnet approach applies SSP techniques and an IPLGR algorithm to reduce the dimensions of hyperspectral data. The RPnet model overcomes the problem of dimension disaster caused by the mismatch between the dimensionality of hyperspectral bands and the small number of training samples. The results showed that the proposed algorithm had better classification performance and was more robust with limited training data than several other state-of-the-art methods. The overall accuracy was nearly 4% higher on average than that of multi-kernel SVM and RF algorithms. Compared with the EMAP, MSTV, ERF, ERW, RMKL, and 3D-CNN algorithms, the SSP-IPLGR-RPnet algorithm provided better classification performance in a shorter time.
Keywords: coastal wetland; hyperspectral image; dimensionality reduction; classification; limited training data
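The dimensionality-reduction step can be illustrated with a generic band-grouping sketch. Splitting the spectrum into contiguous subspaces and keeping one representative band per group is a simplified stand-in for the subspace-partition step described above, not the SSP-IPLGR-RPnet algorithm itself.

```python
# Simplified stand-in for band-subspace partitioning on a hyperspectral cube;
# the grouping rule (equal-width contiguous blocks) and data are assumptions.
import numpy as np

def subspace_partition(cube, n_subspaces=5):
    """cube: (rows, cols, bands) hyperspectral image.
    Returns a reduced cube with one mean band per contiguous subspace."""
    rows, cols, bands = cube.shape
    edges = np.linspace(0, bands, n_subspaces + 1, dtype=int)
    reduced = [cube[:, :, lo:hi].mean(axis=2) for lo, hi in zip(edges[:-1], edges[1:])]
    return np.stack(reduced, axis=2)

# Toy 10x10 image with 50 spectral bands reduced to 5 features per pixel.
cube = np.random.default_rng(3).random((10, 10, 50))
print(subspace_partition(cube).shape)  # (10, 10, 5)
```

Collapsing highly correlated neighboring bands like this is what lets a classifier cope with a small training set, which is the mismatch the RPnet component targets.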