Seeing is an important index for evaluating the quality of an astronomical site. To estimate seeing at the Muztagh-Ata site quantitatively as a function of height and time, the European Centre for Medium-Range Weather Forecasts reanalysis database (ERA5) is used. Seeing calculated from ERA5 is consistent with the Differential Image Motion Monitor seeing at a height of 12 m. Results show that seeing decays exponentially with height at the Muztagh-Ata site. In 2021, seeing decayed with height fastest in fall and most slowly in summer. The seeing condition is better in fall than in summer. The median value of seeing at 12 m is 0.89 arcsec; the monthly maximum is 1.21 arcsec in August and the minimum is 0.66 arcsec in October. The median value of seeing at 12 m is 0.72 arcsec in the nighttime and 1.08 arcsec in the daytime. Seeing is a combination of annual and roughly biannual variations with the same phase as temperature and wind speed, indicating that the variation of seeing with time is influenced by temperature and wind speed. The Richardson number Ri is used to analyze atmospheric stability, and the variations of seeing are consistent with Ri between layers. These quantitative results can provide an important reference for telescope observation strategies.
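The gradient Richardson number used in this abstract can be computed directly from vertical profiles of potential temperature and wind. A minimal sketch, assuming ERA5-style profiles on a height grid (variable names and the toy profile are illustrative, not the paper's code):

```python
import numpy as np

def richardson_number(z, theta, u, v, g=9.81):
    """Gradient Richardson number between levels:
    Ri = (g / theta) * d(theta)/dz / ((du/dz)**2 + (dv/dz)**2).
    Large positive Ri suggests a stable, weakly turbulent layer;
    small or negative Ri suggests turbulence (worse seeing)."""
    dtheta_dz = np.gradient(theta, z)
    du_dz = np.gradient(u, z)
    dv_dz = np.gradient(v, z)
    shear2 = du_dz**2 + dv_dz**2
    return (g / theta) * dtheta_dz / np.maximum(shear2, 1e-12)

# Toy profile: stable stratification with weak shear aloft (illustrative)
z = np.linspace(10.0, 5000.0, 50)     # height above ground, m
theta = 290.0 + 0.004 * z             # potential temperature, K
u = 5.0 + 0.002 * z                   # zonal wind, m/s
v = 1.0 + 0.001 * z                   # meridional wind, m/s
print(richardson_number(z, theta, u, v)[:5])
```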
To overcome the fact that multi-well typical-curve analysis of shale gas reservoirs is rarely applied in engineering, this study proposes a robust production data analysis method based on deconvolution for investigating inter-well interference. A multi-well conceptual trilinear seepage model for multi-stage fractured horizontal wells was established, and its Laplace-domain solutions under two different outer boundary conditions were obtained. An improved pressure deconvolution algorithm was then used to normalize the scattered production data. Typical curve fitting was carried out using the production data and the seepage model solution. Finally, reservoir and fracturing parameters were interpreted, and the intensity of inter-well interference was compared. The effectiveness of the method was verified by analyzing the production dynamics of six shale gas wells in the Duvernay area. The results showed that the fit of the typical curves was greatly improved by the mutual constraint between tuning the deconvolution parameters and tuning the seepage model parameters. Moreover, by using the morphological characteristics of the log-log typical curves and the time at which the typical curves of the two models with different outer boundary conditions intersect, the strength of interference between wells on the same pad was reliably judged. This work can provide a reference for optimizing well spacing and hydraulic fracturing measures for shale gas wells.
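Laplace-domain solutions like the trilinear model's must be inverted numerically to produce time-domain typical curves. A minimal sketch of the Stehfest algorithm, a common choice in well-test analysis; the placeholder transform below stands in for the trilinear solution, which is not reproduced here:

```python
import math

def stehfest_invert(F, t, N=12):
    """Numerically invert a Laplace-domain function F(s) at time t
    using Stehfest's weights (N even, typically 8-16)."""
    total = 0.0
    for i in range(1, N + 1):
        v = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            v += (k ** (N // 2) * math.factorial(2 * k)) / (
                math.factorial(N // 2 - k) * math.factorial(k)
                * math.factorial(k - 1) * math.factorial(i - k)
                * math.factorial(2 * k - i))
        v *= (-1) ** (N // 2 + i)
        total += v * F(i * math.log(2.0) / t)
    return total * math.log(2.0) / t

# Placeholder transform: F(s) = 1/s^2 inverts to f(t) = t
print(stehfest_invert(lambda s: 1.0 / s**2, t=3.0))  # ~3.0
```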
Peanut allergy is a major cause of severe food-induced allergic reactions. Several foods, including cow's milk, hen's eggs, soy, wheat, peanuts, tree nuts (walnuts, hazelnuts, almonds, cashews, pecans, and pistachios), fish, and shellfish, are responsible for more than 90% of food allergies. Here, we provide promising insights from a large-scale data-driven analysis comparing the mechanistic features and biological relevance of different ingredients present in peanuts, tree nuts (walnuts, almonds, cashews, pecans, and pistachios), and soybean. Additionally, we have analyzed the chemical composition of peanuts in different processed forms: raw, boiled, and dry-roasted. Using the data-driven approach, we are able to generate new hypotheses to explain why nuclear receptors such as the peroxisome proliferator-activated receptors (PPARs) and their isoforms, and their interaction with dietary lipids, may have a significant effect on the allergic response. The results of this study will direct future experimental and clinical studies on how dietary lipids and PPAR isoforms exert pro-inflammatory or anti-inflammatory functions on cells of the innate immunity and influence antigen presentation to cells of the adaptive immunity.
Microsoft Excel is essential for the End-User Approach (EUA), offering versatility in data organization, analysis, and visualization, as well as widespread accessibility. It fosters collaboration and informed decision-making across diverse domains. Conversely, Python is indispensable for professional programming due to its versatility, readability, extensive libraries, and robust community support. It enables efficient development, advanced data analysis, data mining, and automation, catering to diverse industries and applications. However, one primary issue when using Microsoft Excel with Python libraries is compatibility and interoperability. While Excel is a widely used tool for data storage and analysis, it may not seamlessly integrate with Python libraries, leading to challenges in reading and writing data, especially in complex or large datasets. Additionally, manipulating Excel files with Python may not always preserve formatting or formulas accurately, potentially affecting data integrity. Moreover, dependency on Excel's graphical user interface (GUI) for automation can limit scalability and reproducibility compared to Python's scripting capabilities. This paper presents an integration solution that empowers non-programmers to leverage Python's capabilities within the familiar Excel environment, enabling users to perform advanced data analysis and automation tasks without extensive programming knowledge. Based on feedback solicited from non-programmers who tested the integration solution, a case study evaluates the solution's ease of implementation, performance, and compatibility of Python with different Excel versions.
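As one concrete illustration of the interoperability issue discussed here, reading a worksheet into Python with pandas loads computed cell values but drops formulas and formatting, while openpyxl can edit a workbook with formulas preserved. A minimal sketch, assuming a local workbook report.xlsx with a sheet named Sales and columns region/amount (all names are hypothetical):

```python
import pandas as pd
from openpyxl import load_workbook

# pandas reads computed values only: formulas and formatting are lost
df = pd.read_excel("report.xlsx", sheet_name="Sales")
summary = df.groupby("region", as_index=False)["amount"].sum()

# openpyxl keeps existing formulas as strings when editing a workbook
wb = load_workbook("report.xlsx")
ws = wb["Sales"]
ws["D1"] = "=SUM(C2:C100)"      # write a formula, not a computed value
wb.save("report_with_totals.xlsx")

print(summary)
```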
Objective: To explain the use of concept mapping in a study about family members' experiences in taking care of people with cancer. Methods: This study used a phenomenological study design. We describe the analytical process of using concept mapping in our phenomenological studies about family members' experiences in taking care of people with cancer. Results: We developed several concept maps that aided us in analyzing our collected data from the interviews. Conclusions: The use of concept mapping is suggested to researchers who intend to analyze their data in any qualitative studies, including those using a phenomenological design, because it is a time-efficient way of dealing with large amounts of qualitative data during the analytical process.
This research paper compares Excel and the R language for data analysis and concludes that R is more suitable for complex data analysis tasks. R's open-source nature makes it accessible to everyone, and its powerful data management and analysis tools make it suitable for handling complex data analysis tasks. It is also highly customizable, allowing users to create custom functions and packages to meet their specific needs. Additionally, R provides high reproducibility, making it easy to replicate and verify research results, and it has excellent collaboration capabilities, enabling multiple users to work on the same project simultaneously. These advantages make R a more suitable choice for complex data analysis tasks, particularly in scientific research and business applications. The findings of this study will help people understand that R is not just a language that can handle more data than Excel, and demonstrate that R is essential to the field of data analysis. At the same time, they will help users and organizations make informed decisions regarding their data analysis needs and software preferences.
Gravitational wave detection is one of the most cutting-edge research areas in modern physics, with its success relying on advanced data analysis and signal processing techniques. This study provides a comprehensive review of data analysis methods and signal processing techniques in gravitational wave detection. The research begins by introducing the characteristics of gravitational wave signals and the challenges faced in their detection, such as extremely low signal-to-noise ratios and complex noise backgrounds. It then systematically analyzes the application of time-frequency analysis methods in extracting transient gravitational wave signals, including wavelet transforms and Hilbert-Huang transforms. The study focuses on discussing the crucial role of matched filtering techniques in improving signal detection sensitivity and explores strategies for template bank optimization. Additionally, the research evaluates the potential of machine learning algorithms, especially deep learning networks, in rapidly identifying and classifying gravitational wave events. The study also analyzes the application of Bayesian inference methods in parameter estimation and model selection, as well as their advantages in handling uncertainties. However, the research also points out the challenges faced by current technologies, such as dealing with non-Gaussian noise and improving computational efficiency. To address these issues, the study proposes a hybrid analysis framework combining physical models and data-driven methods. Finally, the research looks ahead to the potential applications of quantum computing in future gravitational wave data analysis. This study provides a comprehensive theoretical foundation for the optimization and innovation of gravitational wave data analysis methods, contributing to the advancement of gravitational wave astronomy.
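Matched filtering, which this review highlights, correlates the data stream against a waveform template weighted by the noise spectrum. A minimal time-domain sketch under a white-noise assumption (real pipelines work in the frequency domain with a measured power spectral density; the chirp template and injection strength below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 4096                                    # sample rate, Hz
t = np.arange(0, 0.25, 1 / fs)               # 0.25 s template

# Toy chirp template with rising frequency, loosely like an inspiral
template = np.sin(2 * np.pi * (50 * t + 200 * t**2)) * np.hanning(t.size)
template /= np.linalg.norm(template)         # unit-norm template

# One second of white noise with the template injected at a known offset
data = rng.normal(0.0, 1.0, 4 * t.size)
offset = 1500
data[offset:offset + template.size] += 5.0 * template

# Matched filter under white noise = sliding correlation with the template
stat = np.correlate(data, template, mode="valid")
print("peak statistic", stat.max(), "at sample", stat.argmax())  # ~offset
```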
The connectivity of sandbodies is a key constraint on the exploration effectiveness of the Bohai A Oilfield. Conventional connectivity studies often use methods such as seismic attribute fusion, but the development of contiguous composite sandbodies in this area makes it challenging to characterize connectivity changes with conventional seismic attributes. To address this problem in the Bohai A Oilfield, this study proposes a big data analysis method based on the Deep Forest algorithm to predict sandbody connectivity. First, by compiling the abundant exploration and development sandbody data in the study area, typical sandbodies with reliable connectivity were selected. Then, sensitive seismic attributes were extracted to obtain training samples. Finally, based on the Deep Forest algorithm, a mapping model between attribute combinations and sandbody connectivity was established through machine learning. This method achieves the first quantitative determination of connectivity for contiguous composite sandbodies in the Bohai Oilfield. Compared with conventional connectivity discrimination methods such as high-resolution processing and seismic attribute analysis, this method can incorporate the sandbody characteristics of the study area during machine learning and judge connectivity jointly from multiple seismic attributes. The results show that the method has high accuracy and timeliness in predicting connectivity for contiguous composite sandbodies. Applied to the Bohai A Oilfield, it successfully identified multiple sandbody connectivity relationships and provided strong support for subsequent exploration potential assessment and well placement optimization. The method also provides a new approach for studying sandbody connectivity under similar complex geological conditions.
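Deep Forest (gcForest) stacks layers of random forests, feeding each layer's class-probability outputs, together with the raw features, into the next layer. A schematic sketch built from scikit-learn forests on synthetic stand-ins for seismic attribute vectors; the real algorithm uses out-of-fold probabilities and adaptive layer growth, and this is not the paper's implementation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for attribute vectors labeled connected / not connected
X, y = make_classification(n_samples=600, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def cascade_fit_predict(X_tr, y_tr, X_te, n_layers=3):
    """Tiny cascade: each layer's forests append class probabilities
    to the raw attribute vector fed to the next layer."""
    A_tr, A_te = X_tr, X_te
    for _ in range(n_layers):
        layer = [RandomForestClassifier(n_estimators=100, random_state=i)
                 for i in range(2)]
        p_tr = [clf.fit(A_tr, y_tr).predict_proba(A_tr) for clf in layer]
        p_te = [clf.predict_proba(A_te) for clf in layer]
        A_tr = np.hstack([X_tr] + p_tr)   # raw features + layer outputs
        A_te = np.hstack([X_te] + p_te)
    # Final prediction: average the last layer's class probabilities
    return np.mean(p_te, axis=0).argmax(axis=1)

pred = cascade_fit_predict(X_tr, y_tr, X_te)
print("test accuracy:", (pred == y_te).mean())
```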
The advent of the big data era has made data visualization a crucial tool for enhancing the efficiency and insights of data analysis. This theoretical research delves into the current applications and potential future trends of data visualization in big data analysis. The article first systematically reviews the theoretical foundations and technological evolution of data visualization, and thoroughly analyzes the challenges faced by visualization in the big data environment, such as massive data processing, real-time visualization requirements, and multi-dimensional data display. Through extensive literature research, it explores innovative application cases and theoretical models of data visualization in multiple fields including business intelligence, scientific research, and public decision-making. The study reveals that interactive visualization, real-time visualization, and immersive visualization technologies may become the main directions for future development and analyzes the potential of these technologies in enhancing user experience and data comprehension. The paper also delves into the theoretical potential of artificial intelligence technology in enhancing data visualization capabilities, such as automated chart generation, intelligent recommendation of visualization schemes, and adaptive visualization interfaces. The research also focuses on the role of data visualization in promoting interdisciplinary collaboration and data democratization. Finally, the paper proposes theoretical suggestions for promoting data visualization technology innovation and application popularization, including strengthening visualization literacy education, developing standardized visualization frameworks, and promoting open-source sharing of visualization tools. This study provides a comprehensive theoretical perspective for understanding the importance of data visualization in the big data era and its future development directions.
This study investigates university English teachers' acceptance and willingness to use learning management system (LMS) data analysis tools in their teaching practices. The research employs a mixed-method approach, combining quantitative surveys and qualitative interviews to understand teachers' perceptions and attitudes, and the factors influencing their adoption of LMS data analysis tools. The findings reveal that perceived usefulness, perceived ease of use, technical literacy, organizational support, and data privacy concerns significantly impact teachers' willingness to use these tools. Based on these insights, the study offers practical recommendations for educational institutions to enhance the effective adoption of LMS data analysis tools in English language teaching.
Maintaining the integrity and longevity of structures is essential in many industries, such as aerospace, nuclear, and petroleum. To achieve cost-effectiveness in large-scale petroleum drilling systems, a strong emphasis on structural durability and monitoring is required. This study focuses on the mechanical vibrations that occur in rotary drilling systems, which have a substantial impact on the structural integrity of drilling equipment. It specifically investigates axial, torsional, and lateral vibrations, which can lead to negative consequences such as bit-bounce, chaotic whirling, and high-frequency stick-slip. These events not only hinder drilling efficiency but also fatigue and damage the system's components, since they are difficult to detect and control in real time. The study investigates the dynamic interactions of these vibrations, specifically in their high-frequency modes, using field data obtained from measurement while drilling. The findings demonstrate the effect of strong coupling between the high-frequency modes of these vibrations on drilling system performance. The results highlight the importance of considering the interconnected impacts of these vibrations when designing and implementing robust control systems. Integrating these components can increase the durability of drill bits and drill strings and improve the ability to monitor and detect damage. Moreover, by exploiting these findings, the assessment of structural resilience in rotary drilling systems can be enhanced. Furthermore, the study demonstrates the capacity of structural health monitoring to improve the quality, dependability, and efficiency of rotary drilling systems in the petroleum industry.
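The high-frequency modes discussed here are typically identified by spectral analysis of downhole measurement channels. A minimal sketch of a power-spectrum check on a synthetic acceleration signal standing in for measurement-while-drilling data (the tone frequencies are illustrative, not values from the paper):

```python
import numpy as np

fs = 2000                                   # sample rate, Hz
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic channel: torsional stick-slip tone + whirl-related tone + noise
accel = (0.8 * np.sin(2 * np.pi * 120 * t)   # high-frequency stick-slip
         + 0.5 * np.sin(2 * np.pi * 35 * t)  # whirl-related component
         + rng.normal(0, 0.3, t.size))

# One-sided power spectrum via the FFT, with a Hann window
spectrum = np.abs(np.fft.rfft(accel * np.hanning(t.size))) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Report the two dominant spectral peaks
peaks = freqs[np.argsort(spectrum)[-2:]]
print("dominant frequencies (Hz):", np.sort(peaks))   # ~35 and ~120
```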
There are some limitations when we apply conventional methods to analyze the massive amounts of seismic data acquired with high-density spatial sampling since processors usually obtain the properties of raw data from common shot gathers or other datasets located at certain points or along lines. We propose a novel method in this paper to observe seismic data on time slices from spatial subsets. The composition of a spatial subset and the unique character of orthogonal or oblique subsets are described and pre-stack subsets are shown by 3D visualization. In seismic data processing, spatial subsets can be used for the following aspects: (1) to check the trace distribution uniformity and regularity; (2) to observe the main features of ground-roll and linear noise; (3) to find abnormal traces from slices of datasets; and (4) to QC the results of pre-stack noise attenuation. The field data application shows that seismic data analysis in spatial subsets is an effective method that may lead to a better discrimination among various wavefields and help us obtain more information.
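The spatial subsets and time slices described above map naturally onto array slicing. A minimal sketch, assuming a volume indexed as (inline, crossline, time sample) with an assumed 2 ms sampling interval (the cube and the QC threshold are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
cube = rng.normal(size=(200, 150, 1000))    # (inline, xline, time) stand-in
dt = 0.002                                  # 2 ms sampling (assumed)

t_index = int(0.8 / dt)                     # time slice at t = 0.8 s
time_slice = cube[:, :, t_index]            # shape (200, 150)

subset_orth = cube[::4, ::4, t_index]       # every 4th inline and crossline

n = min(cube.shape[0], cube.shape[1])
subset_obl = cube[np.arange(n), np.arange(n), t_index]  # diagonal traces

# Simple QC: flag anomalously strong traces across the whole record
rms = cube.std(axis=2)                      # per-trace amplitude level
print("anomalous traces:", int((rms > 3 * np.median(rms)).sum()))
```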
The rapidly increasing demand and complexity of manufacturing processes favor using manufacturing data, rather than simplified physical models and human expertise, to achieve precise analysis and control. In the era of data-driven manufacturing, the explosion in data volume has revolutionized how data is collected and analyzed. This paper reviews the advance of technologies developed for in-process manufacturing data collection and analysis. It concludes that groundbreaking sensing technology to facilitate direct measurement is one important trend for advanced data collection, given the complexity and uncertainty of indirect measurement. On the other hand, physical model-based data analysis involves inevitable simplifications and sometimes ill-posed solutions because of its limited capacity to describe complex manufacturing processes. Machine learning, especially deep learning, has great potential to make better decisions and automate the process when fed with abundant data, while recent data-driven manufacturing approaches have succeeded in reaching similar or even better decisions from limited data. These trends are demonstrated by analyzing typical applications in manufacturing processes.
Objective To evaluate the environmental and technical efficiencies of China's industrial sectors and provide appropriate advice for policy makers in the context of rapid economic growth and the serious environmental damage caused by industrial pollutants. Methods A data envelopment analysis (DEA) framework crediting both the reduction of pollution outputs and the expansion of good outputs was designed as a model to compute the environmental efficiency of China's regional industrial systems. Results As shown by the geometric mean of environmental efficiency, if other inputs were held constant and good outputs were not to be improved, air pollution outputs could potentially be decreased by about 60% across China. Conclusion Both environmental and technical efficiencies have the potential to be greatly improved in China, which may provide some advice for policy makers.
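Each DEA efficiency score is the solution of one linear program per decision-making unit. A minimal sketch of the classical input-oriented CCR model with scipy on toy data; the paper's framework additionally credits reductions in pollution ("bad") outputs, which this sketch does not model:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 5 units, 2 inputs (rows of X), 1 output (rows of Y)
X = np.array([[4.0, 2.0, 6.0, 3.0, 5.0],
              [3.0, 5.0, 2.0, 4.0, 6.0]])
Y = np.array([[2.0, 2.0, 3.0, 1.0, 4.0]])

def ccr_efficiency(o):
    """Input-oriented CCR score for unit o: minimize theta subject to
    sum_j lam_j * x_j <= theta * x_o  and  sum_j lam_j * y_j >= y_o."""
    n = X.shape[1]
    c = np.r_[1.0, np.zeros(n)]                      # minimize theta
    A_in = np.hstack([-X[:, [o]], X])                # inputs <= theta*x_o
    b_in = np.zeros(X.shape[0])
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])  # outputs >= y_o
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for o in range(X.shape[1]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```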
The conventional data envelopment analysis (DEA) measures the relative efficiencies of a set of decision making units with exact values of inputs and outputs. In real-world problems, however, inputs and outputs typically have some level of fuzziness. To analyze a decision making unit (DMU) with fuzzy input/output data, previous studies provided the fuzzy DEA model and proposed an associated evaluation approach. Nonetheless, numerous deficiencies remain to be addressed, including the α-cut approaches, the types of fuzzy numbers, and the ranking techniques. Moreover, a fuzzy sample DMU still cannot be evaluated with the fuzzy DEA model. Therefore, this paper proposes a fuzzy DEA model based on sample decision making units (FSDEA). Five evaluation approaches and the related algorithm and ranking methods are provided to test the fuzzy sample DMU of the FSDEA model. A numerical experiment is used to demonstrate and compare the results with those obtained using alternative approaches.
In recent years, improper allocation of safety input has prevailed in coal mines in China, resulting in frequent accidents in coal mining operations. A comprehensive assessment of the input efficiency of coal mine safety should lead to improved efficiency in the use of funds and management resources. This helps government and enterprise managers better understand how safety inputs are used and how to optimize the allocation of resources. A study on the efficiency assessment of coal mine safety input was conducted in this paper. A C^2R model with a non-Archimedean infinitesimal vector based on output is established after consideration of the input characteristics and the model properties. An assessment of an operating mine was done using a specific set of input and output criteria. It is found that the safety input was efficient in 2002 and 2005 and weakly efficient in 2003. However, efficiency was relatively low in both 2001 and 2004. The safety input resources can be optimized and adjusted by means of projection theory. Such analysis shows that, on average in 2001 and 2004, 45% of the expended funds could have been saved. Likewise, 10% of the safety management and technical staff could have been eliminated, and working hours devoted to safety could have been reduced by 12%. These conditions could have given the same results.
Space debris poses a serious threat to human space activities and needs to be measured and cataloged. As a new technology for space target surveillance, diffuse reflection laser ranging (DRLR) achieves much higher measurement accuracy than microwave radar and optoelectronic measurement. Based on the laser ranging data of space debris from the DRLR system at Shanghai Astronomical Observatory acquired in March-April 2013, the characteristics and precision of the laser ranging data are analyzed and their applications in orbit determination of space debris are discussed, which is implemented for the first time in China. The experiment indicates that the precision of the laser ranging data can reach 39-228 cm. When the data are sufficient (four arcs measured over three days), the orbital accuracy of space debris can reach 50 m.
The quantized kernel least mean square (QKLMS) algorithm is an effective nonlinear adaptive online learning algorithm with good performance in constraining the growth of network size through quantization of the input space. It can serve as a powerful tool for complex computing in network services and applications. With the purpose of compressing the input to further improve learning performance, this article proposes a novel QKLMS with entropy-guided learning, called EQ-KLMS. Under the consecutive square entropy learning framework, the basic idea of the entropy-guided learning technique is to measure the uncertainty of the input vectors used for QKLMS and delete those data with larger uncertainty, which are insignificant or prone to causing learning errors. The dataset is thereby compressed. Consequently, by using square entropy, the learning performance of the proposed EQ-KLMS is improved with high precision and low computational cost. The proposed EQ-KLMS is validated using a weather-related dataset, and the results demonstrate the desirable performance of our scheme.
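QKLMS constrains network growth by quantizing the input space: a sample farther than a threshold from every existing center becomes a new center; otherwise the nearest center's coefficient absorbs the update. A minimal sketch with a Gaussian kernel (step size, kernel width, and quantization threshold are illustrative; the entropy-guided pruning of EQ-KLMS is not included):

```python
import numpy as np

def gauss(x, c, sigma=1.0):
    return np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2))

def qklms(X, d, eta=0.5, eps_q=0.8, sigma=1.0):
    """Quantized KLMS: returns the centers and coefficients of the network."""
    centers, alphas = [X[0]], [eta * d[0]]
    for x, target in zip(X[1:], d[1:]):
        y = sum(a * gauss(x, c, sigma) for a, c in zip(alphas, centers))
        err = target - y
        dists = [np.linalg.norm(x - c) for c in centers]
        j = int(np.argmin(dists))
        if dists[j] <= eps_q:
            alphas[j] += eta * err          # merge into nearest center
        else:
            centers.append(x)               # grow the network
            alphas.append(eta * err)
    return np.array(centers), np.array(alphas)

# Toy nonlinear regression: d = sin(3x) + noise
rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(400, 1))
d = np.sin(3 * X[:, 0]) + rng.normal(0, 0.05, 400)
C, A = qklms(X, d)
print("network size:", len(C), "of", len(X), "samples")
```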
China implemented the public hospital reform in 2012. This study utilized bootstrapping data envelopment analysis (DEA) to evaluate the technical efficiency (TE) and productivity of county public hospitals in Eastern, Central, and Western China after the 2012 public hospital reform. Data from 127 county public hospitals (39, 45, and 43 in Eastern, Central, and Western China, respectively) were collected during 2012-2015. Changes in TE and productivity over time were estimated by bootstrapping DEA and bootstrapping Malmquist indices. The disparities in TE and productivity among public hospitals in the three regions of China were compared by the Kruskal-Wallis H test and the Mann-Whitney U test. The average bias-corrected TE values for the four-year period were 0.6442, 0.5785, 0.6099, and 0.6094 in Eastern, Central, and Western China, and the entire country, respectively, indicating that hospitals were on average technically inefficient, with low pure technical efficiency (PTE) and high scale efficiency. Productivity increased by 8.12%, 0.25%, 12.11%, and 11.58% in China and its three regions during 2012-2015, and this increase resulted from progressive technological changes of 16.42%, 6.32%, 21.08%, and 21.42%, respectively. The TE and PTE of the county hospitals differed significantly among the three regions of China. Eastern and Western China showed significantly higher TE and PTE than Central China. More than 60% of county public hospitals in China and its three areas operated at decreasing returns to scale. There is considerable room for TE improvement in county hospitals in China and its three regions. During 2012-2015, the hospitals experienced productivity growth; however, PTE changed adversely. Moreover, Central China continuously achieved a significantly lower efficiency score than Eastern and Western China. Decision makers and administrators in China should identify the causes of the observed inefficiencies and take appropriate measures to increase the efficiency of county public hospitals in the three areas of China, especially in Central China.
Water is one of the basic resources for human survival. Water pollution monitoring and protection have become a major problem for many countries all over the world. Most traditional water quality monitoring systems, however, generally focus only on water quality data collection, ignoring data analysis and data mining. In addition, dirty data and data loss may occur due to power or transmission failures, further affecting data analysis and its application. To meet these needs, using Internet of Things, cloud computing, and big data technologies, we designed and implemented a water quality monitoring data intelligent service platform in C# and PHP. The platform includes modules for monitoring point addition, monitoring point map labeling, monitoring data uploading, monitoring data processing, and early warning when monitoring indicators exceed standards, among other functions. Using this platform, we can realize automatic collection of water quality monitoring data, data cleaning, data analysis, intelligent early warning and early warning information push, and other functions. For better security and convenience, we deployed the system in the Tencent Cloud and tested it. The testing results showed that the data analysis platform runs well and will provide decision support for water resource protection.
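The exceed-the-standard early-warning module described here reduces to comparing incoming readings against limit values and pushing alerts. A minimal sketch of that threshold logic; the platform itself is written in C# and PHP, and the limits and field names below are hypothetical, not the platform's schema:

```python
# Hypothetical limits loosely modeled on surface-water quality standards
LIMITS = {"pH_max": 9.0, "pH_min": 6.0,
          "dissolved_oxygen_min": 5.0,      # mg/L
          "ammonia_nitrogen_max": 1.0}      # mg/L

def check_reading(reading):
    """Return warning messages for any indicator exceeding its limit."""
    warnings = []
    if not LIMITS["pH_min"] <= reading["pH"] <= LIMITS["pH_max"]:
        warnings.append(f"pH out of range: {reading['pH']}")
    if reading["dissolved_oxygen"] < LIMITS["dissolved_oxygen_min"]:
        warnings.append(f"low dissolved oxygen: {reading['dissolved_oxygen']} mg/L")
    if reading["ammonia_nitrogen"] > LIMITS["ammonia_nitrogen_max"]:
        warnings.append(f"high ammonia nitrogen: {reading['ammonia_nitrogen']} mg/L")
    return warnings

sample = {"pH": 5.4, "dissolved_oxygen": 4.1, "ammonia_nitrogen": 0.6}
for w in check_reading(sample):
    print("WARNING:", w)                    # would be pushed to subscribers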