A knowledge-based network for Section Yidong Bridge, Dongyang River, a tributary of the Qiantang River, Zhejiang Province, China, is established to model water quality in areas with small data. Then, based on normal transformation of variables with routine monitoring data and a normality assumption for variables without routine monitoring data, a conditional linear Gaussian Bayesian network is constructed. A "two-constraint selection" procedure is proposed to estimate potential parameter values under small data. Among all potential parameter values, the most probable ones are selected as the "representatives". Finally, the risks of pollutant concentrations exceeding national water quality standards are calculated, and pollution reduction decisions are proposed for decision-making reference. The final results show that the conditional linear Gaussian Bayesian network and the "two-constraint selection" procedure are very useful in evaluating risks when data are limited and can help managers make sound decisions under small data.
Funding: National Natural Science Foundation of China (Project 50809058).
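The paper's fitted parameters are not given above, so the following is a minimal sketch of the core computation only: a conditional linear Gaussian node whose mean is linear in its continuous parent, with made-up coefficients and threshold.

```python
# Minimal conditional linear Gaussian node: child | parent ~ N(b0 + b1*parent, sigma^2).
# All numbers here are hypothetical, not the paper's fitted values.
from scipy.stats import norm

b0, b1, sigma = 0.4, 0.8, 0.15   # hypothetical regression coefficients and noise
upstream = 1.2                   # observed upstream concentration (mg/L)
standard = 1.5                   # hypothetical national standard limit (mg/L)

mean = b0 + b1 * upstream
# Exceedance risk = P(concentration > standard) under the conditional Gaussian.
risk = norm.sf(standard, loc=mean, scale=sigma)
print(f"Risk of exceeding the standard: {risk:.3f}")
```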
In the face of data scarcity in the optimization of maintenance strategies for civil aircraft, traditional failure-data-driven methods are encountering challenges owing to the increasing reliability of aircraft design. This study addresses the issue by presenting a novel combined data fusion algorithm, which enhances the accuracy and reliability of failure rate analysis for a specific aircraft model by integrating historical failure data from similar models as supplementary information. Through a comprehensive analysis of two different maintenance projects, the study illustrates the application process of the algorithm. Building upon the analysis results, the paper introduces the innovative equal integral value method as a replacement for the conventional equal interval method in maintenance schedule optimization. A Monte Carlo simulation example validates that the equal integral value method surpasses the traditional method by over 20% in inspection efficiency ratio. This indicates that the equal integral value method not only upholds maintenance efficiency but also substantially decreases workload and maintenance costs. The findings open up novel perspectives for airlines grappling with data scarcity, offer fresh strategies for the optimization of aviation maintenance practices, and chart a new course toward more efficient and cost-effective maintenance schedule optimization through refined data analysis.
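The combined fusion algorithm itself is not reproduced above. One standard way to fold similar-model history into a sparse target-model estimate, shown here purely as an assumed illustration, is conjugate Bayesian updating with a Gamma prior on the failure rate:

```python
# Illustrative Gamma-Poisson fusion (an assumption, not the paper's algorithm):
# similar-model history forms the prior; the target model's sparse data updates it.
prior_failures = 12        # failures observed across similar aircraft models
prior_hours = 60_000.0     # flight hours accumulated by those models
own_failures = 1           # failures on the target model
own_hours = 8_000.0        # flight hours on the target model

# Gamma(alpha, beta) prior on the rate => posterior Gamma(alpha + k, beta + t).
post_alpha = prior_failures + own_failures
post_beta = prior_hours + own_hours

rate = post_alpha / post_beta  # posterior mean failure rate per flight hour
print(f"Fused failure rate estimate: {rate:.2e} per flight hour")
```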
Data is becoming increasingly personal. Individuals regularly interact with a variety of structured data, ranging from SQLite databases on the phone to personal sensors and open government data. The "digital traces left by individuals through these interactions" are sometimes referred to as "small data". Examples of "small data" include driving records, biometric measurements, search histories, weather forecasts, and usage alerts. In this paper, we present a flexible protocol called LoRaCTP, based on LoRa technology, that allows data "chunks" to be transferred over large distances with very low energy expenditure. LoRaCTP provides all the mechanisms necessary to make LoRa transfer reliable by introducing a lightweight connection setup and allowing a data message to be as long as necessary. We designed this protocol as communication support for small-data edge-based IoT solutions, given its stability, low power usage, and ability to cover long distances. We evaluated the protocol using various data content sizes and communication distances to demonstrate its performance and reliability.
Funding: "Conselleria de Innovación, Universidades, Ciencia y Sociedad Digital", Proyectos AICO/2020, Spain (Grant AICO/2020/302); "Ministerio de Ciencia, Innovación y Universidades, Programa Estatal de Investigación, Desarrollo e Innovación Orientada a los Retos de la Sociedad, Proyectos I+D+I 2018", Spain (Grant RTI2018-096384-B-I00).
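The LoRaCTP message format and API are not detailed above, so the sketch below only illustrates the general reliable-chunk-transfer idea as a stop-and-wait loop; `radio_send` and `radio_recv_ack` are hypothetical placeholders for the underlying LoRa primitives, not LoRaCTP functions:

```python
# Stop-and-wait chunked transfer sketch (illustrative, not the LoRaCTP spec).
CHUNK_SIZE = 200   # bytes per frame payload (illustrative figure)
MAX_RETRIES = 3

def send_message(data: bytes, radio_send, radio_recv_ack) -> bool:
    """Split `data` into chunks and send each until acknowledged."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    for seq, chunk in enumerate(chunks):
        for _attempt in range(MAX_RETRIES):
            radio_send(seq, chunk)          # transmit chunk with sequence number
            if radio_recv_ack(seq):         # block until ACK or timeout
                break
        else:
            return False                    # chunk never acknowledged; abort
    return True
```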
The rapid development of ocean observation technology has resulted in the accumulation of a large amount of data, and this is pushing ocean science towards being data-driven. Based on the types and distribution of oceanographic data, this paper analyzes the present and makes predictions for the future regarding the use of big and small data in ocean science. Ocean science has not yet fully entered the era of big data. There are two ways to expand the amount of oceanographic data for better understanding and management of the ocean. On the data level, fully exploiting the potential value of big and small ocean data, and transforming the limited small data into rich big data, will help achieve this. On the application level, oceanographic data are of great value if a federation of the core data owners and the consumers is realized. Oceanographic data will provide not only a reliable scientific basis for climate, ecological, disaster, and other scientific research, but also an unprecedentedly rich source of information that can be used to make predictions about the future.
Funding: National Natural Science Foundation of China (Nos. 41906182, L1824025/XK2018DXC002, 42030406); Shandong Province's Marine S&T Fund for Pilot National Laboratory for Marine Science and Technology (Qingdao) (Nos. 2018SDKJ0102-8, 2018SDKJ102); National Key Research and Development Program of China (Nos. 2019YFD0901001, 2018YFC1407003, 2017YFC1405300).
The small-scale drilling technique can be a fast and reliable method to estimate rock strength parameters. It requires linking the operational drilling parameters to the strength properties of rock. Parameters such as bit geometry, bit movement, contact frictions, and the crushed zone affect the estimated values. An analytical model considering operational drilling data and these effective parameters can be used for this purpose. In this research, an analytical model was developed based on the limit equilibrium of forces on a T-shaped drag bit, considering effective parameters such as bit geometry, the crushed zone, and contact frictions in the drilling process. Based on the model, a method was used to estimate rock strength parameters such as cohesion, internal friction angle, and uniaxial compressive strength of different rock types from operational drilling data. Drilling tests were conducted with a portable and powerful drilling machine developed for this work. The strength properties of different rock types obtained from the drilling experiments based on the proposed model are in good agreement with the results of standard tests. Experimental results show that the contact friction between the cutting face and rock is close to that between the bit end wearing face and rock, owing to the same bit material. In this case, the strength parameters, especially internal friction angle and cohesion, can be estimated using only blunt-bit drilling data, and bit bluntness does not affect the estimated results.
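The limit-equilibrium bit model itself is beyond this summary, but the final step, turning the estimated cohesion c and internal friction angle φ into a uniaxial compressive strength, follows the standard Mohr-Coulomb relation UCS = 2c·cos φ / (1 − sin φ); a small sketch with hypothetical inputs:

```python
# Standard Mohr-Coulomb relation between cohesion, friction angle, and UCS.
import math

def ucs_from_mohr_coulomb(cohesion_mpa: float, friction_deg: float) -> float:
    """UCS = 2c*cos(phi) / (1 - sin(phi))."""
    phi = math.radians(friction_deg)
    return 2.0 * cohesion_mpa * math.cos(phi) / (1.0 - math.sin(phi))

# Hypothetical values for illustration, not the paper's measured results.
print(f"UCS ≈ {ucs_from_mohr_coulomb(10.0, 35.0):.1f} MPa")
```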
This paper presents MySAS, a package verified on Windows XP that can easily convert two-dimensional data in small-angle neutron and X-ray scattering analysis, operate each step individually, and execute one particular operation such as numerical data reduction, analysis, or graphical visualization. The MySAS package implements the input and output routines by scanning certain properties, thus recalling complete sets of repeated inputs and selecting the input files. Starting from the two-dimensional files, MySAS can correct anisotropic or isotropic data for physical interpretation and select the relevant pixels. Over 50 model functions can be fitted by the POWELL code using χ² as the figure-of-merit function.
Funding: Science and Technology Foundation of the China Academy of Engineering Physics (No. 2010A0103002); Innovation Foundation of the Institute of Nuclear Physics and Chemistry, CAEP (No. 2009CX01).
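A fit of this kind can be reproduced with standard tools: Powell's derivative-free method minimizing a χ² figure of merit. The Guinier-style model below is chosen only for illustration and is not claimed to be one of MySAS's built-in functions:

```python
# Chi-squared minimization with Powell's method (model choice is illustrative).
import numpy as np
from scipy.optimize import minimize

q = np.linspace(0.01, 0.2, 50)                       # scattering vector values
i_obs = (100.0 * np.exp(-(q * 30.0) ** 2 / 3.0)
         + np.random.normal(0.0, 1.0, q.size))       # synthetic noisy intensity
sigma = np.full_like(q, 1.0)                         # per-point uncertainty

def chi2(params):
    i0, rg = params
    i_model = i0 * np.exp(-(q * rg) ** 2 / 3.0)      # Guinier approximation
    return np.sum(((i_obs - i_model) / sigma) ** 2)

result = minimize(chi2, x0=[80.0, 20.0], method="Powell")
print(result.x)                                      # fitted [I0, Rg]
```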
As sandstone layers in a thin interbedded section are difficult to identify, conventional model-driven seismic inversion and data-driven seismic prediction methods have low precision in predicting them. To solve this problem, a model-data-driven seismic AVO (amplitude variation with offset) inversion method based on a space-variant objective function has been worked out. In this method, the zero-delay cross-correlation function and the F-norm are used to establish the objective function. Based on inverse distance weighting theory, the objective function is varied according to the location of the target CDP (common depth point), changing the constraint weights of training samples, initial low-frequency models, and seismic data on the inversion. Hence, the proposed method can obtain high-resolution, high-accuracy velocity and density from inversion of small sample data, and is suitable for identifying thin interbedded sand bodies. Tests with thin interbedded geological models show that the proposed method has high inversion accuracy and resolution for small sample data, and can identify sandstone and mudstone layers about one-thirtieth of the dominant wavelength thick. Tests on field data from the Lishui sag show that the inversion results have small relative error with respect to well-log data, and can identify thin interbedded sandstone layers about one-fifteenth of the dominant wavelength thick with small sample data.
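The exact balance of the three constraint terms is the paper's own, but the inverse-distance-weighting backbone is standard: the closer a training sample is to the target CDP, the larger its constraint weight. A generic sketch with hypothetical well locations:

```python
# Generic inverse distance weighting of training samples around a target CDP.
import numpy as np

def idw_weights(sample_xy: np.ndarray, target_xy: np.ndarray, power: float = 2.0):
    """Normalized inverse-distance weights of samples relative to one target point."""
    d = np.linalg.norm(sample_xy - target_xy, axis=1)
    d = np.maximum(d, 1e-6)          # guard against division by zero at a sample
    w = 1.0 / d ** power
    return w / w.sum()               # weights sum to 1

# Three hypothetical well locations (m) and one target CDP position.
wells = np.array([[0.0, 0.0], [500.0, 0.0], [250.0, 400.0]])
print(idw_weights(wells, np.array([100.0, 50.0])))
```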
In this paper, consensus problems of heterogeneous multi-agent systems based on sampled data with a small sampling delay are considered. First, a consensus protocol based on sampled data with a small sampling delay is proposed for heterogeneous multi-agent systems. Then, algebraic graph theory, the matrix method, the stability theory of linear systems, and other techniques are employed to derive necessary and sufficient conditions guaranteeing that heterogeneous multi-agent systems asymptotically achieve stationary consensus. Finally, simulations are performed to demonstrate the correctness of the theoretical results.
Funding: National Natural Science Foundation of China (Grant Nos. 61203147, 61374047, 61203126, 61104092); Humanities and Social Sciences Youth Funds of the Ministry of Education, China (Grant No. 12YJCZH218).
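As a toy illustration only (a generic first-order protocol on a fixed graph, not the heterogeneous protocol derived in the paper), sampled-data consensus with a one-step delay can be simulated directly:

```python
# Toy sampled-data consensus where each update uses the state sampled one step back.
import numpy as np

L = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]], dtype=float)   # Laplacian of a 3-node path graph
h = 0.1                                      # sampling period
x = [np.array([1.0, 3.0, -2.0])] * 2         # states at steps k-1 and k

for _ in range(200):
    x.append(x[-1] - h * L @ x[-2])          # update driven by the delayed sample

print(x[-1])   # all agents settle near the initial average (about 0.667)
```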
Data processing of small samples is an important and valuable research problem in electronic equipment testing. Because it is difficult and complex to determine the probability distribution of small samples, traditional probability theory is hard to apply for processing the samples and assessing their degree of uncertainty. Using grey relational theory and norm theory, this article proposes the grey distance information approach, based on the grey distance information quantity of a sample and the average grey distance information quantity of the samples. The definitions of these two quantities, with their characteristics and algorithms, are introduced. Related problems, including the algorithm for the estimated value, the standard deviation, and the acceptance and rejection criteria for the samples and estimated results, are also addressed. Moreover, the information whitening ratio is introduced to select the weight algorithm and to compare different samples. Several examples demonstrate the application of the proposed approach and show that it is feasible and effective, with no requirement on the probability distribution of small samples.
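The paper's grey distance information quantities are not reproduced here; as background, the standard grey relational coefficient from the grey relational theory it builds on can be sketched as follows (reference sequence and observations are hypothetical):

```python
# Standard grey relational coefficients from grey relational theory
# (background only; not the paper's derived "grey distance" quantities).
import numpy as np

def grey_relational_coeffs(reference: np.ndarray, sample: np.ndarray, zeta: float = 0.5):
    """Coefficient = (d_min + zeta*d_max) / (delta_k + zeta*d_max), zeta ~ 0.5."""
    delta = np.abs(reference - sample)
    d_min, d_max = delta.min(), delta.max()
    return (d_min + zeta * d_max) / (delta + zeta * d_max)

ref = np.array([1.0, 1.0, 1.0, 1.0])      # idealized reference sequence
obs = np.array([0.9, 1.1, 0.8, 1.05])     # small-sample observations
coeffs = grey_relational_coeffs(ref, obs)
print(coeffs, coeffs.mean())              # the mean is the grey relational grade
```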
Technological shifts, coupled with infrastructure, techniques, and applications for big data, have created many new opportunities, business models, and industry expansion that benefit entrepreneurs. At the same time, however, entrepreneurs are often unprepared for cybersecurity needs, and the policymakers, industry, and nonprofit groups that support them also face technological and knowledge constraints in keeping up with those needs. To improve the ability of entrepreneurship research to understand, identify, and ultimately help address cybersecurity challenges, we conduct a literature review on the state of cybersecurity. The review highlights the need for additional investigation to help small businesses secure their confidential data and client information against cyber threats, thereby preventing the potential shutdown of the business.
This paper explores data mining issues in Small and Medium Enterprises (SMEs), first examining the relationship between data mining and economic development. SMEs contribute most employment prospects and output within any emerging economy, such as the Kingdom of Saudi Arabia, and adopting technology will improve their potential for effective decision making and efficient operations. Hence, it is important that SMEs have access to data mining techniques and implement those best suited to their business to improve their business intelligence (BI). The paper critically reviews the existing literature on data mining in the field of SMEs to provide a theoretical underpinning for future work. Data mining has been found to be complicated and fragmented, with a multitude of options available for businesses, from quite basic systems implemented within Excel or Access to more sophisticated cloud-based systems. For any business, data mining is a trade-off between the need for data analysis and intelligence against the cost and resource use of the system put in place. Multiple challenges to data mining have been identified, most notably the resource-intensive nature of systems (both in terms of labor and capital) and the security issues of data collection, analysis, and storage, with the General Data Protection Regulation (GDPR) a key focus for Kingdom of Saudi Arabia businesses. Given these challenges, the paper suggests that any SME start small with an internal data mining exercise to digitalize and analyze its customer data, scaling up over time as the business grows and acquires the resources needed to properly manage any system.