Journal Articles
81 articles found
Impact of Data Quality on Question Answering System Performances
1
Authors: Rachid Karra, Abdelali Lasfar 《Intelligent Automation & Soft Computing》 SCIE 2023, No. 1, pp. 335-349 (15 pages)
In contrast with the research of new models, little attention has been paid to the impact of low- or high-quality data feeding a dialogue system. The present paper makes the first attempt to fill this gap by extending our previous work on question-answering (QA) systems, investigating the effect of misspelling on QA agents and how context changes can enhance the responses. Instead of using large language models trained on huge datasets, we propose a method that enhances the model's score by modifying only the quality and structure of the data fed to the model. It is important to identify the features that modify the agent's performance, because a high rate of wrong answers can make students lose interest in using the QA agent as an additional tool for distance learning. The results demonstrate that the accuracy of the proposed context simplification exceeds 85%. These findings shed light on the importance of question data quality and context complexity as key dimensions of a QA system. In conclusion, the experimental results on questions and contexts showed that controlling and improving the various aspects of data quality around the QA system can significantly enhance its robustness and performance.
Keywords: dataOps; data quality; QA system; NLP; context simplification
Download PDF
Correlation Analysis of Turbidity and Total Phosphorus in Water Quality Monitoring Data
2
Authors: Wenwu Tan, Jianjun Zhang, Xing Liu, Jiang Wu, Yifu Sheng, Ke Xiao, Li Wang, Haijun Lin, Guang Sun, Peng Guo 《Journal on Big Data》 2023, No. 1, pp. 85-97 (13 pages)
At present, water pollution has become an important factor affecting and restricting national and regional economic development. Total phosphorus is one of the main sources of water pollution and eutrophication, so predicting total phosphorus in water quality has good research significance. This paper selects total phosphorus and turbidity data for analysis by crawling data from a water quality monitoring platform. By constructing an attribute-object mapping relationship, the correlation between the two indicators was analyzed and used to predict future data. First, the monthly mean and daily mean concentrations of total phosphorus and turbidity were calculated after cleaning outliers, and the correlation between them was analyzed. Second, the correlation coefficients at different times and frequencies were used to predict values for the next five days, and the data trend was visualized in Python. Finally, the real values were compared with the predicted values, and the results showed that the correlation between total phosphorus and turbidity is useful in predicting water quality.
Keywords: correlation analysis; clustering; water quality prediction; water quality monitoring data
Download PDF
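The core computation in a study like this, correlating two indicator series and using the fitted relationship for short-term prediction, can be sketched in a few lines of Python. The synthetic data, function names, and the simple linear predictor below are illustrative assumptions, not the paper's actual pipeline:

```python
import random
import statistics

def pearson(x, y):
    """Sample Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Synthetic daily means: turbidity (NTU) and total phosphorus (mg/L),
# loosely coupled to mimic the correlation the paper exploits.
random.seed(42)
turbidity = [5 + 0.1 * d + random.gauss(0, 0.5) for d in range(30)]
phosphorus = [0.02 * t + random.gauss(0, 0.01) for t in turbidity]

r = pearson(turbidity, phosphorus)
# Predict phosphorus from turbidity via the regression line implied by r.
slope = r * (statistics.stdev(phosphorus) / statistics.stdev(turbidity))
intercept = statistics.mean(phosphorus) - slope * statistics.mean(turbidity)
predicted = [slope * t + intercept for t in turbidity[-5:]]
print(f"r = {r:.3f}")
```

A strong r on historical data is what justifies using turbidity, which is cheap to measure, as a proxy predictor for total phosphorus.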
Research on Welding Quality Traceability Model of Offshore Platform Block Construction Process
3
Authors: Jinghua Li, Wenhao Yin, Boxin Yang, Qinghua Zhou 《Computer Modeling in Engineering & Sciences》 SCIE EI 2023, No. 1, pp. 699-730 (32 pages)
Quality traceability plays an essential role in assembling and welding offshore platform blocks. Improving the welding quality traceability system is conducive to improving the durability of offshore platforms and the process level of the offshore industry. Currently, quality management remains in the era of primary information, and there is a lack of effective tracking and recording of welding quality data. When welding defects are encountered, it is difficult to rapidly and accurately determine the root cause of the problem from complex and scattered quality data. In this paper, a composite welding quality traceability model for the offshore platform block construction process is proposed; it contains a quality early-warning method based on long short-term memory and a quality data backtracking query optimization algorithm. By training the early-warning model and implementing the query optimization algorithm, the quality traceability model can assist enterprises in rapidly identifying and locating quality problems. Furthermore, the model and the quality traceability algorithm are checked against cases from actual working conditions. Verification analyses suggest that the proposed early-warning model for welding quality and the algorithm for optimizing backtracking queries are effective and can be applied to the actual construction process.
Keywords: quality traceability model; block construction process; welding quality management; long short-term memory; quality data backtracking query optimization algorithm
Download PDF
Digital Continuity Guarantee Approach of Electronic Record Based on Data Quality Theory (Cited by 7)
4
Authors: Yongjun Ren, Jian Qi, Yaping Cheng, Jin Wang, Osama Alfarraj 《Computers, Materials & Continua》 SCIE EI 2020, No. 6, pp. 1471-1483 (13 pages)
Since the British National Archives put forward the concept of digital continuity in 2007, several developed countries have worked out digital continuity action plans. However, technologies for guaranteeing digital continuity are still lacking. This paper first analyzes the requirements of digital continuity guarantees for electronic records based on data quality theory, then points out the necessity of data quality guarantees for electronic records. Moreover, we convert the digital continuity guarantee of electronic records into ensuring the consistency, completeness, and timeliness of electronic records, and construct the first technology framework of digital continuity guarantees for electronic records. Finally, temporal functional dependency technology is utilized to build the first integration method to ensure the consistency, completeness, and timeliness of electronic records.
Keywords: electronic record; digital continuity; data quality
Download PDF
Modeling data quality for risk assessment of GIS (Cited by 1)
5
Authors: Su Ying, Jin Zhanming, Peng Jie 《Journal of Southeast University (English Edition)》 EI CAS 2008, No. S1, pp. 37-42 (6 pages)
This paper presents a methodology to determine three data quality (DQ) risk characteristics: accuracy, comprehensiveness, and non-membership. The methodology provides a set of quantitative models to confirm the information quality risks for the database of a geographical information system (GIS). Four quantitative measures are introduced to examine how the quality risks of source information affect the quality of information outputs produced using the relational algebra operations Selection, Projection, and Cubic Product. It can be used to determine how quality risks associated with diverse data sources affect the derived data. The GIS is the prime source of information on the location of cables, and in the construction business detection time strongly depends on whether maps indicate the presence of cables. Poor data quality in the GIS can contribute to increased risk or higher risk-avoidance costs. A case study provides a numerical example of the calculation of the trade-offs between risk and detection costs, and an example of the calculation of the costs of data quality. We conclude that the model contributes valuable new insight.
Keywords: risk assessment; data quality; geographical information system; probability; spatial data quality
Download PDF
Novel method for the evaluation of data quality based on fuzzy control (Cited by 1)
6
Authors: Ban Xiaojuan, Ning Shurong, Xu Zhaolin, Cheng Peng 《Journal of Systems Engineering and Electronics》 SCIE EI CSCD 2008, No. 3, pp. 606-610 (5 pages)
One of the goals of data collection is preparing for decision-making, so high quality requirements must be satisfied. Rational evaluation of data quality is an effective way to identify data problems in time, so that the quality of data after evaluation satisfies the requirements of decision makers. A data quality evaluation method based on a fuzzy neural network is proposed. First, the criteria for the evaluation of data quality are selected to construct the fuzzy sets of evaluation grades; then, using the learning ability of a neural network, an objective evaluation of membership is carried out, which can be used for effective evaluation of data quality. This research has been used in the platform of the "data report of national compulsory education outlay guarantee" from the Chinese Ministry of Education. This method can be used for the effective evaluation of data quality worldwide, and with it the data quality situation can be determined more completely, objectively, and in a more timely manner.
Keywords: data quality evaluation system; fuzzy control theory; neural network
Download PDF
On Statistical Measures for Data Quality Evaluation (Cited by 1)
7
Authors: Xiaoxia Han 《Journal of Geographic Information System》 2020, No. 3, pp. 178-187 (10 pages)
Most GIS databases contain data errors. The quality of data sources, such as traditional paper maps or more recent remote sensing data, determines spatial data quality. Over the past several decades, different statistical measures have been developed to evaluate data quality for different types of data, such as nominal categorical data, ordinal categorical data, and numerical data. Although these methods were originally proposed for medical or psychological research, they have been widely used to evaluate spatial data quality. In this paper, we first review statistical methods for evaluating data quality and discuss under what conditions they should be used and how to interpret the results, followed by a brief discussion of statistical software and packages that can be used to compute these data quality measures.
Keywords: GIS; data quality; sensitivity; specificity; kappa; weighted kappa; Bland-Altman analysis; intra-class correlation coefficient
Download PDF
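As a concrete illustration of one measure this paper reviews, Cohen's kappa for nominal categorical data can be computed directly from two label series. The land-cover labels and the helper function below are hypothetical, not drawn from the paper:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two nominal ratings, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = sorted(set(rater_a) | set(rater_b))
    # Observed agreement: fraction of items both sources label identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each source's marginal label frequencies.
    expected = sum(
        (rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels
    )
    return (observed - expected) / (1 - expected)

# Hypothetical example: mapped land-cover labels vs. ground truth for 10 parcels.
mapped = ["urban", "urban", "water", "forest", "forest",
          "urban", "water", "forest", "urban", "water"]
truth  = ["urban", "forest", "water", "forest", "forest",
          "urban", "water", "water", "urban", "water"]
print(f"kappa = {cohens_kappa(mapped, truth):.3f}")
```

Kappa near 1 indicates agreement well beyond chance; values near 0 mean the map is no better than random labeling with the same class proportions.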
The quality examination of observative data at Geomagnetic observatories
8
Authors: 程安龙, 周锦屏, 高玉芬, 赵学敏, 赵永芬, 黄蔚北 《Earthquake Science》 CSCD 1994, No. S1, pp. 71-79 (9 pages)
The basic task of a geomagnetic observatory is to produce accurate, reliable, continuous, and complete observational data. The aim of examination is to judge the quality status of the data. According to the operating principles of geomagnetic instruments and the operating status they should achieve, and the characteristics of geomagnetic activity in the time and spatial domains, the authors propose a complete set of data quality examinations. The paper discusses the physical basis, examination method, and results for scale values, baseline values, monthly mean values, daily mean values, maximum and minimum values of the daily range, magnetic storms, and the K index. Practice has proved that this set of examinations is feasible and useful for raising and guaranteeing the quality of observational data.
Keywords: geomagnetic observation; geomagnetic data; data quality; quality examination
Download PDF
A Short Review of the Literature on Automatic Data Quality
9
Authors: Deepak R. Chandran, Vikram Gupta 《Journal of Computer and Communications》 2022, No. 5, pp. 55-73 (19 pages)
Several organizations have migrated to the cloud for better quality in business engagements and security. Data quality is crucial in present-day activities. Information is generated and collected from data representing real-time facts and activities. Poor data quality affects organizational decision-making policy and customer satisfaction, and negatively influences the organization's scheme of execution. Data quality also has a massive influence on the accuracy, complexity, and efficiency of machine learning and deep learning tasks. There are several methods and tools to evaluate data quality to ensure smooth incorporation in model development. The bulk of data quality tools permit the assessment of data sources only at a certain point in time, so arrangement and automation are consequently the user's obligation. Ensuring automatic data quality involves several steps, from gathering data from different sources to monitoring data quality, and any problems with the data quality must be adequately addressed. There was a gap in the literature, as no attempt had previously been made to collate all the advances in the different dimensions of automatic data quality. This limited narrative review of the existing literature sought to address that gap by correlating the different steps and advancements related to automatic data quality systems. The six crucial data quality dimensions in organizations are discussed, and big data are compared and classified. This review highlights existing data quality models and strategies that can contribute to the development of automatic data quality systems.
Keywords: data quality; monitoring; toolkit; dimension; organization
Download PDF
Design of a Web Crawler for Water Quality Monitoring Data and Data Visualization
10
Authors: Ziwen Yu, Jianjun Zhang, Wenwu Tan, Ziyi Xiong, Peilun Li, Liangqing Meng, Haijun Lin, Guang Sun, Peng Guo 《Journal on Big Data》 2022, No. 2, pp. 135-143 (9 pages)
Many countries are paying more and more attention to the protection of water resources, and how to protect water resources has received extensive attention from society. Water quality monitoring is key to water resources protection, and efficiently collecting and analyzing water quality monitoring data is an important aspect of it. In this paper, Python programming tools and regular expressions were used to design a web crawler for acquiring water quality monitoring data from Global Freshwater Quality Database (GEMStat) sites, with multi-threaded parallelism added to improve efficiency in downloading and parsing. To analyze and process the crawled water quality data, Pandas and Pyecharts are used to visualize the data and show its intrinsic correlations and spatiotemporal relationships.
Keywords: water quality monitoring data; web crawler; data visualization
Download PDF
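The crawling pattern this paper describes, regular-expression parsing combined with multi-threaded processing, can be sketched as follows. The row markup, regex, and sample pages are placeholders; the real GEMStat HTML differs, and a real crawler would add an HTTP-fetch step before parsing:

```python
import re
from concurrent.futures import ThreadPoolExecutor

# Regex for rows like "<td>2022-05-01</td><td>7.8</td>" -- the real GEMStat
# markup differs; this pattern is illustrative only.
ROW_RE = re.compile(r"<td>(\d{4}-\d{2}-\d{2})</td>\s*<td>([\d.]+)</td>")

def parse_page(html):
    """Extract (date, value) measurement pairs from one page of HTML."""
    return [(d, float(v)) for d, v in ROW_RE.findall(html)]

# Stand-ins for downloaded pages; a real crawler would fetch each URL first.
pages = [
    "<tr><td>2022-05-01</td><td>7.8</td></tr><tr><td>2022-05-02</td><td>8.1</td></tr>",
    "<tr><td>2022-05-03</td><td>7.9</td></tr>",
]

# Parse pages in parallel, mirroring the paper's multi-threaded design;
# ThreadPoolExecutor.map preserves the input order of pages.
with ThreadPoolExecutor(max_workers=4) as pool:
    records = [row for rows in pool.map(parse_page, pages) for row in rows]

print(records)
```

Threads pay off mainly during the network-bound download phase; for pure parsing, the same `map` call works unchanged without the executor.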
Assessment of Knowledge and Practices of Community Health Nurses on Data Quality in the Ho Municipality of Ghana
11
Authors: Fidelis Zumah, John Lapah Niyi, Patrick Freeman Eweh, Benjamin Noble Adjei, Martin Alhassan Ajuik, Emmanuel Amaglo, Wisdom Kwami Takramah, Livingstone Asem 《Open Journal of Nursing》 2022, No. 6, pp. 428-443 (16 pages)
Background: High data quality provides correct and up-to-date information, which is critical not only for maintaining health care at an optimal level, but also for the provision of high-quality clinical care, continuing health care, clinical and health service research, and the planning and management of health systems. Good data are core to attaining achievable improvements in the health sector. Aim/Objective: To assess the level of knowledge and practices of Community Health Nurses on data quality in the Ho municipality, Ghana. Methods: A descriptive cross-sectional study was employed, using a standard Likert-scale questionnaire. A census was used to collect information on 77 Community Health Nurses. Data were entered in Epi-Data 3.1 and exported to STATA 12.0 for analysis. Chi-square and logistic analyses were performed to establish associations between categorical variables, and a p-value of less than 0.05 at a 95% confidence interval was considered statistically significant. Results: Of the 77 Community Health Nurses studied, 49 (63.64%) had good knowledge of data accuracy, 51 (66.23%) had poor knowledge of data completeness, and 64 (83.12%) had poor knowledge of data timeliness. Also, 16 (20.78%) and 33 (42.86%) responded that there was no designated staff for data quality review and no feedback from the health directorate, respectively. Of the 16 health facilities studied for data quality practices, half (8, 50.00%) had missing values on copies of their previous months' report forms, and 10 (62.50%) had no reminders (monthly data submission itineraries) at the facility level. Conclusion: Overall, the general level of knowledge of Community Health Nurses on data quality was poor, and their practices for improving data quality at the facility level were woefully inadequate. Community Health Nurses therefore need on-the-job training and proper education on data quality and its dimensions. The health directorate should also intensify its continuous supportive supervisory visits to all facilities, and feedback should be given to the Community Health Nurses on the data submitted.
Keywords: Community Health Nurses; data quality; Ho Municipality; Ghana
Download PDF
Improve Data Quality by Processing Null Values and Semantic Dependencies
12
Authors: Houda Zaidi, Faouzi Boufarès, Yann Pollet 《Journal of Computer and Communications》 2016, No. 5, pp. 78-85 (8 pages)
Today, the quantity of data continues to increase; furthermore, the data are heterogeneous, come from multiple sources (structured, semi-structured, and unstructured), and have different levels of quality. It is therefore very likely that data are manipulated without knowledge of their structure and semantics. In fact, the metadata may be insufficient or totally absent, and data anomalies may be due to the poverty of their semantic descriptions, or even the absence of any description. In this paper, we propose an approach to better understand the semantics and structure of data. Our approach helps to automatically correct intra-column and inter-column anomalies. We aim to improve data quality by processing null values and the semantic dependencies between columns.
Keywords: data quality; big data; contextual semantics; semantic dependencies; functional dependencies; null values; data cleaning
Download PDF
Data reliability of the emerging citizen science in the Greater Bay Area of China
13
Authors: Xilin Huang, Yihong Wang, Yang Liu, Lyu Bing Zhang 《Avian Research》 SCIE CSCD 2023, No. 3, pp. 354-360 (7 pages)
The potential of citizen science projects in research has been increasingly acknowledged, but the substantial engagement of these projects is restricted by the quality of citizen science data. Based on the largest emerging citizen science project in the country, the Birdreport Online Database (BOD), we examined the biases of birdwatching data from the Greater Bay Area of China. The results show that sampling effort is disparate among land cover types due to contributors' preference for urban and suburban areas, indicating that environments suitable for species presence could be underrepresented in the BOD data. We tested the contributors' species-identification skill via a questionnaire targeting citizen birders in the Greater Bay Area. The questionnaire shows that most citizen birdwatchers could correctly identify the common species widely distributed in southern China and the less common species with conspicuous morphological characteristics, while failing to identify species from the Alaudidae, Caprimulgidae, Emberizidae, Phylloscopidae, Scolopacidae, and Scotocercidae. With a study example, we demonstrate that spatially clustered birdwatching visits can cause underestimation of species richness in insufficiently sampled areas, and that species richness mapping is sensitive to the contributors' bird-identification skill. Our results address how avian research can be influenced by the reliability of citizen science data in a region of generally high accessibility, and highlight the necessity of pre-analysis scrutiny of data reliability with regard to research aims at all spatial and temporal scales. To improve data quality, we suggest equipping the BOD's data collection frame with a flexible filter for bird abundance, and questionnaires that collect information on contributors' bird-identification skill. Statistical modelling approaches are encouraged for correcting the bias in sampling effort.
Keywords: bird identification skill; citizen science; data quality; sampling bias; species richness; the Greater Bay Area of China
Download PDF
Data Quality Assurance Techniques for a Monitoring and Diagnosis System
14
Authors: ZHANG Qing, XU Guang-hua 《International Journal of Plant Engineering and Management》 2007, No. 2, pp. 107-115 (9 pages)
By researching the data quality problem in monitoring and diagnosis systems (MDS), a method of detecting non-condition data based on the development trend of equipment condition is proposed, and three requirements for non-condition data detection criteria are discussed: dynamics, synthesis, and simplicity. According to the general mode of data management in an MDS, a data quality assurance system (DQAS) comprising data quality monitoring, data quality diagnosis, detection criteria adjustment, and manual confirmation is set up. A route inspection system called MTREE realizes the DQAS. For the vibration data of route inspections, two detection criteria are defined. One is a quality monitoring parameter, found by combining and optimizing fundamental parameters using genetic programming (GP). The other is a quality diagnosis criterion, a pseudo-distance of the Spectral Energy Vector (SEV) named adjacent J-divergence, which indicates the degree of variation in the spectral energy distribution of adjacent data. Results show that the DQAS, including these two criteria, is effective in improving the data quality of an MDS.
Keywords: data quality assurance system; monitoring and diagnosis; non-condition data
Download PDF
College Basic Development Status Data Management System Based on Data Governance Framework
15
Authors: 刘琳琅, 卢林珍, 吴清红, 徐中其 《Journal of Donghua University (English Edition)》 CAS 2023, No. 4, pp. 446-453 (8 pages)
In the era of big data, data applications based on data governance have become an inevitable trend in the construction of smart campuses in higher education. In this paper, a data governance system framework covering the whole data life cycle and suitable for higher education is proposed. Based on this framework, and drawing on the data governance practice of Donghua University, the ideas and methods of data governance are applied to the construction of a data management system for the basic development status of faculties. The system forms a closed loop over all aspects of the college's basic development status data, such as collection, information feedback, and statistical analysis. While optimizing the management business of higher education, the system provides a scientific and reliable basis for precise decision-making and strategic development in higher education.
Keywords: big data; data governance; data quality; smart campus
Download PDF
Imagery Data Quality of ZY Satellite Reached International Level
16
《Aerospace China》 2012, No. 2, p. 23 (1 page)
The in-orbit commissioning of the ZY-1 02C satellite is proceeding smoothly. According to relevant experts in the field, the imagery quality of the satellite has reached, or nearly reached, the level of international satellites of the same kind. The ZY-1 02C and ZY-3 satellites were successfully launched on December 22, 2011 and January 9, 2012, respectively. The China Centre for Resources Satellite Data and Application (CRSDA) was responsible for the building of a ground ...
Keywords: Imagery data quality of ZY Satellite Reached International Level
Download PDF
Comprehensive Evaluation Method for Traffic Flow Data Quality Based on Grey Correlation Analysis and Particle Swarm Optimization
17
Authors: Wei Ba, Baojun Chen, Qi Li 《Journal of Systems Science and Systems Engineering》 SCIE EI CSCD 2024, No. 1, pp. 106-128 (23 pages)
Nowadays, data are more and more used for intelligent modeling and prediction, and comprehensive evaluation of data quality is receiving more and more attention as a necessary means of measuring whether data are usable. However, comprehensive data quality evaluation methods mostly contain the subjective factors of the evaluator, so evaluating data comprehensively and objectively has become a bottleneck in research on comprehensive evaluation methods. In order to evaluate data more comprehensively, objectively, and with better differentiation, a novel comprehensive evaluation method based on particle swarm optimization (PSO) and grey correlation analysis (GCA) is presented in this paper. First, an improved GCA evaluation model based on the technique for order preference by similarity to an ideal solution (TOPSIS) is proposed. Then, an objective function model of the maximum difference of the comprehensive evaluation values is built, and the PSO algorithm is used to optimize the weights of the improved GCA evaluation model based on the objective function model. Finally, the performance of the proposed method is investigated through parameter analysis. A performance comparison on traffic flow data is carried out, and the simulation results show that the maximum average difference between the evaluation results and their mean value (MDR) of the proposed comprehensive evaluation method is 33.24% higher than that of TOPSIS-GCA and 6.86% higher than that of GCA. The proposed method has better differentiation than the other methods, meaning that it objectively and comprehensively evaluates the data in terms of both relevance and differentiation, and the results more effectively reflect differences in data quality, providing more effective data support for intelligent modeling, prediction, and other applications.
Keywords: data quality; comprehensive evaluation; particle swarm optimization; grey correlation analysis; traffic flow data
Full Text Delivery
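The grey correlation (relational) analysis at the heart of such a method can be illustrated with a minimal sketch. The reference series, candidate series, and the standard resolution coefficient ρ = 0.5 are assumptions for illustration; the paper additionally optimizes indicator weights with PSO, which is omitted here:

```python
def grey_relational_grades(reference, candidates, rho=0.5):
    """Grey relational grade of each candidate series against a reference series."""
    # Absolute differences of each candidate from the reference, point by point.
    deltas = [[abs(r - c) for r, c in zip(reference, cand)] for cand in candidates]
    d_min = min(min(row) for row in deltas)
    d_max = max(max(row) for row in deltas)
    grades = []
    for row in deltas:
        # Grey relational coefficient at each point, then the mean as the grade.
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# Hypothetical normalized data-quality indicator series: the reference is the
# ideal series; each candidate is one traffic-flow detector's indicators.
reference = [1.0, 1.0, 1.0, 1.0]
candidates = [
    [0.9, 0.8, 0.95, 0.85],   # detector A: close to the ideal
    [0.4, 0.5, 0.45, 0.55],   # detector B: far from the ideal
]
grades = grey_relational_grades(reference, candidates)
print([round(g, 3) for g in grades])
```

A higher grade means a series tracks the ideal more closely, which is what makes GCA usable as a ranking score for data quality.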
A blockchain-based audit approach for encrypted data in federated learning (Cited by 1)
18
Authors: Zhe Sun, Junping Wan, Lihua Yin, Zhiqiang Cao, Tianjie Luo, Bin Wang 《Digital Communications and Networks》 SCIE CSCD 2022, No. 5, pp. 614-624 (11 pages)
The development of data-driven artificial intelligence technology has given birth to a variety of big data applications, and data has become an essential factor in improving these applications. Federated learning, a privacy-preserving machine learning method, is proposed to leverage data from different data owners. It is typically used in conjunction with cryptographic methods, in which data owners train the global model by sharing encrypted model updates. However, data encryption makes it difficult to identify the quality of these model updates, and malicious data owners may launch attacks such as data poisoning and free-riding. To defend against such attacks, an approach to audit encrypted model updates is needed. In this paper, we propose a blockchain-based audit approach for encrypted gradients. It uses a behavior chain to record the encrypted gradients from data owners, and an audit chain to evaluate the gradients' quality. Specifically, we propose a privacy-preserving homomorphic noise mechanism in which the noise on each gradient sums to zero after aggregation, ensuring the availability of the aggregated gradient. In addition, we design a joint audit algorithm that can locate malicious data owners without decrypting individual gradients. Through security analysis and experimental evaluation, we demonstrate that our approach can defend against malicious gradient attacks in federated learning.
Keywords: audit; data quality; blockchain; secure aggregation; federated learning
Download PDF
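The zero-sum noise idea, where each owner's mask cancels out in aggregation so individual updates stay hidden while the aggregate remains exact, can be sketched as follows. This toy version uses plain scalars and Gaussian noise rather than the paper's homomorphic construction:

```python
import random

def zero_sum_noise(n, scale=1.0, rng=random):
    """Generate n noise values that sum exactly to zero."""
    noise = [rng.gauss(0, scale) for _ in range(n - 1)]
    noise.append(-sum(noise))  # last share cancels the rest
    return noise

# Hypothetical scalar gradients from three data owners.
gradients = [0.5, -0.2, 0.8]
noise = zero_sum_noise(len(gradients))
masked = [g + r for g, r in zip(gradients, noise)]  # what each owner shares

# Individual masked gradients hide the true values, but the aggregate is exact.
assert abs(sum(masked) - sum(gradients)) < 1e-9
print(sum(masked))
```

Any single masked value reveals nothing useful on its own; only the sum over all owners reconstructs the true aggregate, which is what makes per-gradient auditing without decryption a non-trivial design goal.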
Forest aboveground biomass estimates in a tropical rainforest in Madagascar: new insights from the use of wood specific gravity data (Cited by 2)
19
Authors: Tahiana Ramananantoandro, Herimanitra P. Rafidimanantsoa, Miora F. Ramanakoto 《Journal of Forestry Research》 SCIE CAS CSCD 2015, No. 1, pp. 47-55 (9 pages)
To generate carbon credits under the Reducing Emissions from Deforestation and forest Degradation program (REDD+), accurate estimates of forest carbon stocks are needed. Carbon accounting efforts have focused on carbon stocks in aboveground biomass (AGB). Although wood specific gravity (WSG) is known to be an important variable in AGB estimates, there is currently a lack of WSG data for Malagasy tree species. This study aimed to determine whether estimates of carbon stocks calculated from literature-based WSG values differed from those based on WSG values measured on wood core samples. Carbon stocks in forest biomass were assessed using two WSG data sets: (i) values measured from 303 wood core samples extracted in the study area, and (ii) values derived from international databases. The results suggest that the field-based and literature-based WSG values differ at the 0.05 level, with the latter on average 16% higher than the former. However, carbon stocks calculated from the two data sets did not differ significantly at the 0.05 level. This finding can be attributed to the form of the allometric equation used, which gives more weight to tree diameter and tree height than to WSG. The choice of dataset should depend on the level of accuracy (Tier II or III) desired by REDD+. As higher levels of accuracy are rewarded by higher prices, species-specific WSG data would be highly desirable.
Keywords: Biomass estimates; Carbon stocks; Data quality; Madagascar; REDD+; Wood specific gravity
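The abstract does not state which allometric equation the authors used; the pantropical model of Chave et al. (2014), AGB = 0.0673 (ρ D² H)^0.976, is a common choice and serves to illustrate the weighting argument. The tree dimensions below are hypothetical; because diameter enters squared while WSG (ρ) enters once, a relative error in diameter moves the estimate roughly twice as much as the same relative error in WSG.

```python
def agb_chave2014(wsg, dbh_cm, height_m):
    """Pantropical allometric model (Chave et al. 2014):
    AGB in kg = 0.0673 * (rho * D^2 * H) ** 0.976."""
    return 0.0673 * (wsg * dbh_cm ** 2 * height_m) ** 0.976

field_wsg = 0.60             # hypothetical field-measured value
lit_wsg = field_wsg * 1.16   # literature value, ~16% higher as in the study

agb_field = agb_chave2014(field_wsg, 25.0, 18.0)
agb_lit = agb_chave2014(lit_wsg, 25.0, 18.0)

# WSG enters the model once, so a 16% WSG difference shifts AGB by
# 1.16**0.976 (about 15.6%), while a 16% diameter error would shift it
# by (1.16**2)**0.976 (about 34%): diameter and height carry more weight.
wsg_sensitivity = agb_lit / agb_field            # = 1.16 ** 0.976
dbh_sensitivity = agb_chave2014(field_wsg, 25.0 * 1.16, 18.0) / agb_field
```

At the plot level, WSG errors of opposite sign also tend to cancel across species, which is consistent with the stock-level estimates not differing significantly.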
Data Migration Need, Strategy, Challenges, Methodology, Categories, Risks, Uses with Cloud Computing, and Improvements in Its Using with Cloud Using Suggested Proposed Model (DMig 1) (Cited by: 1)
20
Author: Abou_el_ela Abdou Hussein. Journal of Information Security, 2021, No. 1, pp. 79-103.
Data migration is a multi-step process that begins with analyzing old data and culminates in data uploading and reconciliation in new applications. With the rapid growth of data, organizations constantly need to migrate data. Data migration can be a complex process, as testing must be done to ensure data quality. Migration can also be very costly if best practices are not followed and hidden costs are not identified at an early stage. On the other hand, many organizations today, instead of buying IT equipment (hardware and/or software) and managing it themselves, prefer to buy services from IT service providers. The number of service providers is increasing dramatically, and the cloud is becoming the preferred tool for cloud storage services. However, as more information and personal data are transferred to the cloud, to social media sites, DropBox, Baidu WangPan, etc., data security and privacy issues are called into question. So academia and industry strive to find an effective way to secure data migration in the cloud, and various resolution methods and encryption techniques have been implemented. In this work, we cover many important aspects of data migration: need, strategy, challenges, methodology, categories, risks, and uses with cloud computing. Finally, we discuss the data migration security and privacy challenge and how to address it through a suggested model that enhances data security and privacy by combining Advanced Encryption Standard-256 (AES-256), data dispersion algorithms, and Secure Hash Algorithm-512 (SHA-512). This model achieves verifiable security ratings and fast execution times.
Keywords: Cloud; Organizations; Migration; Data quality; Advanced Encryption Standard
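Two of the model's three building blocks, data dispersion and SHA-512 integrity tagging, can be sketched with Python's standard library. The AES-256 step is only indicated in a comment, since symmetric encryption needs a third-party package (e.g. `cryptography`); the payload, fragment count, and function names here are illustrative, not the paper's implementation.

```python
import hashlib

def disperse(data: bytes, n: int):
    """Split data into n roughly equal fragments: a simple stand-in
    for the model's data-dispersion algorithm."""
    size = -(-len(data) // n)  # ceiling division
    return [data[i * size:(i + 1) * size] for i in range(n)]

def integrity_tag(data: bytes) -> str:
    """SHA-512 digest used to verify the data after migration."""
    return hashlib.sha512(data).hexdigest()

payload = b"customer records to migrate to the cloud"  # hypothetical payload
tag = integrity_tag(payload)

# In the proposed model each fragment would additionally be encrypted
# with AES-256 before upload and spread across storage locations.
fragments = disperse(payload, 4)

# After migration: reassemble the fragments and verify integrity.
restored = b"".join(fragments)
assert integrity_tag(restored) == tag
```

Dispersing encrypted fragments across locations means a breach of any single store yields neither the whole ciphertext nor the plaintext, while the SHA-512 tag detects corruption during transfer.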