Condensed and hydrolysable tannins are non-toxic natural polyphenols that are a commercial commodity, industrialized for tanning hides to obtain leather and for a growing number of other industrial applications, mainly as substitutes for petroleum-based products. They are a distinct class of sustainable materials from the forestry industry. They have been used for hundreds of years to manufacture leather and are now used in a growing number of applications in a variety of other industries, such as wood adhesives, metal coatings, pharmaceutical/medical applications and several others. This review presents the main sources, either already commercial or potentially so, of these forestry by-products; their industrial and laboratory extraction systems; and their methods of analysis with their advantages and drawbacks, whether these methods are simple enough to appear primitive yet of proven effectiveness, or thoroughly modern and instrumental. It constitutes a basic but essential summary of what one needs to know about these sustainable materials. In doing so, the review highlights some of the main challenges that remain to be addressed to deliver the quality and economics of tannin supply necessary to fulfill industrial production requirements for some materials-based uses.
Seeing is an important index for evaluating the quality of an astronomical site. To estimate seeing at the Muztagh-Ata site quantitatively, as a function of height and time, the European Centre for Medium-Range Weather Forecasts reanalysis database (ERA5) is used. Seeing calculated from ERA5 is consistent with the Differential Image Motion Monitor seeing at a height of 12 m. Results show that seeing decays exponentially with height at the Muztagh-Ata site. In 2021, seeing decayed with height fastest in fall and most slowly in summer, and the seeing condition was better in fall than in summer. The median value of seeing at 12 m is 0.89 arcsec; the maximum monthly value is 1.21 arcsec in August and the minimum is 0.66 arcsec in October. The median value of seeing at 12 m is 0.72 arcsec in the nighttime and 1.08 arcsec in the daytime. Seeing is a combination of annual and approximately biannual variations with the same phase as temperature and wind speed, indicating that the variation of seeing with time is influenced by temperature and wind speed. The Richardson number Ri is used to analyze atmospheric stability, and the variations of seeing are consistent with Ri between layers. These quantitative results can provide an important reference for telescope observation strategies. (Funding: National Natural Science Foundation of China (NSFC) and the Chinese Academy of Sciences (CAS), grant No. U2031209; NSFC, grant Nos. 11872128, 42174192, and 91952111.)
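Seeing estimates of this kind typically rest on two standard relations: the Fried parameter computed from the integrated refractive-index structure constant Cn², and the gradient Richardson number for layer stability. A minimal sketch, assuming a hypothetical Cn² profile and idealized temperature/wind profiles (ERA5-derived profiles would be used in practice):

```python
import numpy as np

# Minimal sketch: seeing from a refractive-index structure profile Cn^2(h),
# and the gradient Richardson number used to gauge atmospheric stability.
# The profiles below are hypothetical; the paper derives them from ERA5.

lam = 500e-9                          # wavelength [m]
h = np.linspace(12, 20000, 500)       # heights above the site [m]
cn2 = 1e-16 * np.exp(-h / 3000.0)     # hypothetical Cn^2 profile [m^(-2/3)]

# Fried parameter r0 and FWHM seeing (standard relations):
# r0 = [0.423 (2*pi/lam)^2 * integral(Cn^2 dh)]^(-3/5), seeing = 0.98 lam / r0
J = np.trapz(cn2, h)
r0 = (0.423 * (2 * np.pi / lam) ** 2 * J) ** (-3.0 / 5.0)
seeing_arcsec = 0.98 * lam / r0 * 206265.0
print(f"seeing ~ {seeing_arcsec:.2f} arcsec")

# Gradient Richardson number between adjacent levels:
# Ri = (g / theta) * (d theta / dz) / (du/dz)^2, theta: potential temperature
g = 9.81
theta = 280 + 0.003 * h               # hypothetical potential temperature [K]
u = 5 + 0.001 * h                     # hypothetical wind speed [m/s]
Ri = (g / theta[:-1]) * np.diff(theta) / np.diff(h) / (np.diff(u) / np.diff(h)) ** 2
```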
To obtain more stable spectral data for accurate quantitative multi-element analysis, especially for large-area in-situ detection of elements in soils, we propose a method for multi-element quantitative analysis of soils using calibration-free laser-induced breakdown spectroscopy (CF-LIBS) based on data filtering. In this study, we analyze a standard soil sample doped with two heavy metal elements, Cu and Cd, with a specific focus on the Cu I 324.75 nm line for filtering the experimental data of multiple sample sets. After data filtering, the relative standard deviation for Cu decreased from 30% to 10%, and the limits of detection (LOD) for Cu and Cd decreased by 5% and 4%, respectively. Through CF-LIBS, a quantitative analysis was conducted to determine the relative content of elements in soils. Using Cu as a reference, the concentration of Cd was accurately calculated. The results show that after data filtering the average relative error for Cd decreases from 11% to 5%, indicating the effectiveness of data filtering in improving the accuracy of quantitative analysis. Moreover, the content of Si, Fe and other elements can be accurately calculated using this method, and the results for Cd were used to further refine the calculation. This approach is of great importance for large-area in-situ detection of heavy metals and trace elements in soil, as well as for rapid and accurate quantitative analysis. (Funding: Major Science and Technology Project of Gansu Province, No. 22ZD6FA021-5; Industrial Support Project of Gansu Province, Nos. 2023CYZC-19 and 2021CYZC-22; Science and Technology Project of Gansu Province, Nos. 23YFFA0074, 22JR5RA137 and 22JR5RA151.)
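The filtering step can be pictured as rejecting laser shots whose reference-line intensity strays too far from the ensemble mean. A minimal sketch with synthetic spectra and a hypothetical pixel index standing in for the Cu I 324.75 nm line (the paper's actual filtering criterion may differ):

```python
import numpy as np

# Minimal sketch of the data-filtering idea. Assumptions: `spectra` holds
# shot-to-shot LIBS spectra and `cu_idx` indexes the Cu I 324.75 nm line;
# the 1-standard-deviation threshold is illustrative.

rng = np.random.default_rng(0)
spectra = rng.normal(1000, 300, size=(200, 2048))     # hypothetical spectra
cu_idx = 750                                          # hypothetical line pixel

line = spectra[:, cu_idx]
mask = np.abs(line - line.mean()) < 1.0 * line.std()  # keep shots near the mean
filtered = spectra[mask]

def rsd(x):
    """Relative standard deviation in percent."""
    return 100.0 * x.std() / x.mean()

print(f"RSD before: {rsd(line):.1f}%  after: {rsd(filtered[:, cu_idx]):.1f}%")
```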
The recent pandemic crisis has highlighted the importance of the availability and management of health data for responding quickly and effectively to health emergencies while respecting the fundamental rights of every individual. In this context, it is essential to find a balance between the protection of privacy and the safeguarding of public health, using tools that guarantee transparency and consent to the processing of data by the population. This work, starting from a pilot investigation conducted in the Polyclinic of Bari as part of the Horizon Europe Seeds project entitled “Multidisciplinary analysis of technological tracing models of contagion: the protection of rights in the management of health data”, has the objective of promoting greater patient awareness regarding the processing of their health data and the protection of privacy. The methodology used the PHICAT (Personal Health Information Competence Assessment Tool); through the administration of a questionnaire, patients’ ability to express their consent to the release and processing of health data was evaluated. The results were analyzed in relation to the four domains into which the process is divided, which allow evaluation of patients’ ability to express a conscious choice, and also in relation to the socio-demographic and clinical characteristics of the patients themselves. This study can contribute to understanding patients’ ability to give their consent and to improving information regarding the management of health data, increasing confidence in granting the use of their data for research and clinical management.
To address the fact that multi-well typical-curve analysis of shale gas reservoirs is rarely applied in engineering, this study proposes a robust production data analysis method based on deconvolution for studying inter-well interference. A multi-well conceptual trilinear seepage model for multi-stage fractured horizontal wells was established, and its Laplace-domain solutions under two different outer boundary conditions were obtained. An improved pressure deconvolution algorithm was then used to normalize the scattered production data, and typical-curve fitting was carried out using the production data and the seepage model solution. Finally, reservoir and fracturing parameters were interpreted, and the intensity of inter-well interference was compared. The effectiveness of the method was verified by analyzing the production dynamics of six shale gas wells in the Duvernay area. The results showed that the fitting of the typical curves was greatly improved by the mutual constraint between the tuning of the deconvolution calculation parameters and that of the seepage model parameters. Moreover, by using the morphological characteristics of the log-log typical curves and the time corresponding to the intersection point of the log-log typical curves of the two models under different outer boundary conditions, the strength of interference between wells on the same pad was reliably judged. This work can serve as a reference for optimizing well spacing and hydraulic fracturing measures for shale gas wells. (Funding: PetroChina Innovation Foundation.)
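Typical-curve generation from a Laplace-domain seepage solution requires a numerical inversion step. As a minimal illustration of one standard route, here is a Gaver-Stehfest sketch; the placeholder transform below stands in for the trilinear-flow solution, which the paper inverts instead:

```python
import math

def stehfest_invert(F, t, N=12):
    """Numerically invert a Laplace-domain function F(s) at time t (Stehfest)."""
    ln2 = math.log(2.0)
    total = 0.0
    for i in range(1, N + 1):
        v = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            v += (k ** (N // 2) * math.factorial(2 * k)
                  / (math.factorial(N // 2 - k) * math.factorial(k)
                     * math.factorial(k - 1) * math.factorial(i - k)
                     * math.factorial(2 * k - i)))
        v *= (-1) ** (N // 2 + i)
        total += v * F(i * ln2 / t)
    return ln2 / t * total

# Placeholder transform: F(s) = 1/(s*(s+1)) inverts exactly to 1 - exp(-t).
for t in (0.5, 1.0, 2.0):
    print(t, stehfest_invert(lambda s: 1.0 / (s * (s + 1.0)), t), 1 - math.exp(-t))
```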
How can we efficiently store and mine dynamically generated dense tensors to model the behavior of multidimensional dynamic data? Much of the multidimensional dynamic data in the real world is generated in the form of time-growing tensors. For example, air quality tensor data consists of multiple sensory values gathered from many locations over a long time. Such data, accumulated over time, is redundant and consumes a lot of memory in its raw form. We need a way to efficiently store dynamically generated tensor data that grows over time and to model its behavior on demand between arbitrary time blocks. To this end, we propose a Block Incremental Dense Tucker Decomposition (BID-Tucker) method for efficient storage and on-demand modeling of multidimensional spatiotemporal data. Assuming that tensors arrive in unit blocks where only the time domain changes, our proposed BID-Tucker first slices the blocks into matrices and decomposes them via singular value decomposition (SVD). The SVDs of the time×space sliced matrices are stored instead of the raw tensor blocks to save space. When modeling is required at particular time blocks, the SVDs of the corresponding time blocks are retrieved and incremented for use in Tucker decomposition. The factor matrices and core tensor of the decomposed results can then be used for further data analysis. We compared our proposed BID-Tucker with D-Tucker, which our method extends, and with vanilla Tucker decomposition. We show that BID-Tucker is faster than both and uses less memory for storage, with a comparable reconstruction error. We applied BID-Tucker to model the spatial and temporal trends of air quality data collected in South Korea from 2018 to 2022. We were able to model those trends and to verify unusual events, such as chronic ozone alerts and large fire events. (Funding: Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT), No. 2022-0-00369; National Research Foundation of Korea grants funded by the Korean government, 2018R1A5A1060031 and 2022R1F1A1065664.)
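The storage idea can be sketched in a few lines: slice each incoming block into a time×space matrix, keep only a truncated SVD, and reconstruct on demand. The block shape and rank below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# Minimal sketch of the storage idea behind BID-Tucker. Each incoming block
# is assumed to be a (time x location x pollutant) tensor.

rng = np.random.default_rng(1)
block = rng.normal(size=(24, 30, 6))          # one hypothetical unit block
rank = 5

# Slice the block into a (time x space) matrix and keep a truncated SVD
# instead of the raw block.
mat = block.reshape(block.shape[0], -1)       # 24 x 180
U, s, Vt = np.linalg.svd(mat, full_matrices=False)
stored = (U[:, :rank], s[:rank], Vt[:rank])   # compact storage

# On demand, reconstruct approximately and hand off to Tucker decomposition.
approx = (stored[0] * stored[1]) @ stored[2]
err = np.linalg.norm(mat - approx) / np.linalg.norm(mat)
print(f"relative reconstruction error: {err:.3f}")
```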
Peanut allergy is a major cause of severe food-induced allergic reactions. Several foods, including cow's milk, hen's eggs, soy, wheat, peanuts, tree nuts (walnuts, hazelnuts, almonds, cashews, pecans and pistachios), fish and shellfish, are responsible for more than 90% of food allergies. Here, we provide promising insights from a large-scale data-driven analysis comparing the mechanistic features and biological relevance of different ingredients present in peanuts, tree nuts (walnuts, almonds, cashews, pecans and pistachios) and soybean. Additionally, we have analysed the chemical composition of peanuts in different processed forms: raw, boiled and dry-roasted. Using this data-driven approach, we are able to generate new hypotheses to explain why nuclear receptors such as the peroxisome proliferator-activated receptors (PPARs) and their isoforms, and their interaction with dietary lipids, may have a significant effect on allergic response. The results obtained from this study will direct future experimental and clinical studies on the role of dietary lipids and PPAR isoforms in exerting pro-inflammatory or anti-inflammatory functions on cells of the innate immunity and influencing antigen presentation to cells of the adaptive immunity.
Multimodal sentiment analysis utilizes multimodal data such as text, facial expressions and voice to detect people’s attitudes. With the advent of distributed data collection and annotation, we can easily obtain and share such multimodal data. However, due to professional discrepancies among annotators and lax quality control, noisy labels might be introduced. Recent research suggests that deep neural networks (DNNs) will overfit noisy labels, leading to poor performance. To address this challenging problem, we present a Multimodal Robust Meta Learning framework (MRML) for multimodal sentiment analysis that resists noisy labels and correlates distinct modalities simultaneously. Specifically, we propose a two-layer fusion net to deeply fuse the different modalities and improve the quality of the multimodal data features for label correction and network training. Besides, a multiple meta-learner (label corrector) strategy is proposed to enhance the label correction approach and prevent models from overfitting to noisy labels. We conducted experiments on three popular multimodal datasets to verify the superiority of our method by comparing it with four baselines. (Funding: STI 2030-Major Projects, 2021ZD0200400; National Natural Science Foundation of China, 62276233 and 62072405; Key Research Project of Zhejiang Province, 2023C01048.)
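As a rough illustration of what a two-layer fusion net can look like, here is a minimal PyTorch sketch that concatenates three modality features and passes them through two fusion layers; the dimensions and structure are assumptions for illustration, not the MRML architecture itself:

```python
import torch
import torch.nn as nn

# Minimal two-layer fusion sketch for three modalities (text, audio, vision).
# All dimensions are hypothetical placeholders.

class TwoLayerFusion(nn.Module):
    def __init__(self, d_text=768, d_audio=74, d_vision=35, d_hidden=128, n_classes=3):
        super().__init__()
        self.fuse1 = nn.Sequential(nn.Linear(d_text + d_audio + d_vision, d_hidden), nn.ReLU())
        self.fuse2 = nn.Sequential(nn.Linear(d_hidden, d_hidden), nn.ReLU())
        self.head = nn.Linear(d_hidden, n_classes)

    def forward(self, text, audio, vision):
        z = torch.cat([text, audio, vision], dim=-1)  # early concatenation
        z = self.fuse2(self.fuse1(z))                 # two fusion layers
        return self.head(z)

model = TwoLayerFusion()
logits = model(torch.randn(4, 768), torch.randn(4, 74), torch.randn(4, 35))
print(logits.shape)  # torch.Size([4, 3])
```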
This paper investigates superconvergence properties of the direct discontinuous Galerkin (DDG) method with interface corrections and of the symmetric DDG method for diffusion equations. We apply the Fourier analysis technique to symbolically compute eigenvalues and eigenvectors of the amplification matrices for both DDG methods with different coefficient settings in the numerical fluxes. Based on the eigen-structure analysis, we carry out error estimates of the DDG solutions, which can be decomposed into three parts: (i) dissipation errors of the physically relevant eigenvalue, which grow linearly in time and are of order 2k for P^k (k = 2, 3) approximations; (ii) the projection error from a special projection of the exact solution, which decreases over time and is related to the eigenvector corresponding to the physically relevant eigenvalue; (iii) dissipative errors of the non-physically relevant eigenvalues, which decay exponentially with respect to the spatial mesh size Δx. We observe that the errors are sensitive to the choice of the numerical flux coefficient for even-degree P^2 approximations, but not for odd-degree P^3 approximations. Numerical experiments are provided to verify the theoretical results. (Funding: National Natural Science Foundation of China, Grant Nos. 11871428, 12071214 and 12272347; Natural Science Foundation for Colleges and Universities of Jiangsu Province of China, Grant No. 20KJB110011; National Science Foundation, Grant No. DMS-1620335; Simons Foundation, Grant No. 637716.)
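For orientation, the Fourier (von Neumann) analysis referred to above proceeds roughly as follows; this is a generic sketch for the heat equation u_t = u_xx on a uniform mesh, not the paper's exact derivation:

```latex
% On cell I_j, expand the numerical solution in a local basis of degree k and
% assume a single Fourier mode in space:
\[
  u_h^n\big|_{I_j} = \hat{\mathbf{u}}^n \, e^{i\kappa x_j},
  \qquad \hat{\mathbf{u}}^n \in \mathbb{C}^{k+1},
\]
% so the fully discrete scheme reduces to a linear recursion driven by the
% (k+1) x (k+1) amplification matrix G:
\[
  \hat{\mathbf{u}}^{n+1} = G(\kappa, \Delta x, \Delta t)\, \hat{\mathbf{u}}^n,
  \qquad G = V \Lambda V^{-1},\quad
  \Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_{k+1}).
\]
% One eigenvalue is "physically relevant": \lambda_1 \approx e^{-\kappa^2 \Delta t},
% and its deviation from e^{-\kappa^2 \Delta t} accumulates linearly in time
% (the dissipation error), while the remaining eigenvalues satisfy
% |\lambda_j| < 1 uniformly, so their contributions decay exponentially.
```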
Microsoft Excel is essential for the End-User Approach (EUA), offering versatility in data organization, analysis, and visualization, as well as widespread accessibility. It fosters collaboration and informed decision-making across diverse domains. Conversely, Python is indispensable for professional programming due to its versatility, readability, extensive libraries, and robust community support. It enables efficient development, advanced data analysis, data mining, and automation, catering to diverse industries and applications. However, one primary issue when using Microsoft Excel with Python libraries is compatibility and interoperability. While Excel is a widely used tool for data storage and analysis, it may not seamlessly integrate with Python libraries, leading to challenges in reading and writing data, especially in complex or large datasets. Additionally, manipulating Excel files with Python may not always preserve formatting or formulas accurately, potentially affecting data integrity. Moreover, dependency on Excel’s graphical user interface (GUI) for automation can limit scalability and reproducibility compared to Python’s scripting capabilities. This paper presents an integration solution that empowers non-programmers to leverage Python’s capabilities within the familiar Excel environment, enabling them to perform advanced data analysis and automation tasks without extensive programming knowledge. Based on feedback solicited from non-programmers who tested the integration solution, the case study evaluates the ease of implementation, performance, and compatibility of Python with different Excel versions.
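A minimal sketch of the kind of Excel-Python round trip at issue, using pandas with the openpyxl engine (file, sheet, and column names are hypothetical); note that formulas and rich formatting in the source workbook are generally not preserved on such a round trip, which is one of the interoperability caveats described above:

```python
import pandas as pd

# Read a hypothetical workbook; openpyxl is a common engine for .xlsx files.
df = pd.read_excel("sales.xlsx", sheet_name="Q1", engine="openpyxl")

# A typical analysis step done in Python rather than in Excel formulas.
summary = (
    df.groupby("region", as_index=False)["revenue"].sum()
      .sort_values("revenue", ascending=False)
)

# Writing creates a new workbook; it does not edit the original in place,
# and any formulas/formatting in the source are not carried over.
with pd.ExcelWriter("sales_summary.xlsx", engine="openpyxl") as writer:
    summary.to_excel(writer, sheet_name="summary", index=False)
```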
According to the World Health Organization (WHO), meningitis is a severe infection of the meninges, the membranes covering the brain and spinal cord. It is a devastating disease and remains a significant public health challenge. This study investigates a bacterial meningitis model in deterministic and stochastic versions. Four-compartment population dynamics describe the concept, namely the susceptible, carrier, infected, and recovered populations. The model predicts the non-negative equilibrium points and the reproduction number, i.e., the Meningitis-Free Equilibrium (MFE) and the Meningitis-Existing Equilibrium (MEE). For the stochastic version of the existing deterministic model, the two methodologies studied are transition probabilities and non-parametric perturbations. Positivity, boundedness, extinction, and disease persistence are also studied rigorously with the help of well-known theorems. Standard and nonstandard techniques, such as Euler-Maruyama, stochastic Euler, stochastic Runge-Kutta, and stochastic nonstandard finite difference in the sense of delay, are presented for the computational analysis of the stochastic model. Unfortunately, the standard methods fail to preserve the biological properties of the model, so the stochastic nonstandard finite difference approximation is offered as an efficient, low-cost method that is independent of the time step size. In addition, the convergence and the local and global stability around the equilibria of the nonstandard computational method are studied by assuming that the perturbation effect is zero. Simulations and a comparison of the methods are presented to support the theoretical results and provide the best visualization of the results. (Funding: Deanship of Research and Graduate Studies at King Khalid University, large Research Project, Grant Number RGP2/302/45; Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia, Grant Number A426.)
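As a minimal illustration of the standard Euler-Maruyama scheme mentioned above, applied to a four-compartment (S, C, I, R) system with multiplicative noise; all rates and the noise intensity are hypothetical placeholders, and the crude clipping at the end illustrates exactly the positivity problem that motivates the nonstandard scheme:

```python
import numpy as np

# Euler-Maruyama sketch for a hypothetical stochastic S-C-I-R model.
rng = np.random.default_rng(42)
beta, alpha, gamma, mu, sigma = 0.4, 0.2, 0.1, 0.05, 0.02  # placeholder rates
dt, steps = 0.01, 10000
x = np.array([0.9, 0.05, 0.05, 0.0])  # S, C, I, R fractions

def drift(x):
    s, c, i, r = x
    return np.array([
        mu - beta * s * (c + i) - mu * s,       # dS
        beta * s * (c + i) - (alpha + mu) * c,  # dC
        alpha * c - (gamma + mu) * i,           # dI
        gamma * i - mu * r,                     # dR
    ])

for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=4)
    x = x + drift(x) * dt + sigma * x * dW      # Euler-Maruyama step
    x = np.clip(x, 0.0, None)                   # crude positivity guard

print("S, C, I, R at T =", steps * dt, "->", np.round(x, 4))
```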
Integrated data and energy transfer (IDET) enables electromagnetic waves to deliver wireless energy and data simultaneously to low-power devices. In this paper, an energy harvesting modulation (EHM) assisted multi-user IDET system is studied, where all the received signals at the users are exploited for energy harvesting without degrading the wireless data transfer (WDT) performance. The joint IDET performance is then analysed theoretically under a practical time-dependent wireless channel. With the aid of an AO-based algorithm, the average effective data rate among users is maximized while guaranteeing the bit error rate (BER) and wireless energy transfer (WET) performance. Simulation results validate and evaluate the IDET performance of the EHM-assisted system, and demonstrate that the number of user clusters and IDET time slots should be optimally allocated in order to improve the WET and WDT performance. (Funding: MOST Major Research and Development Project, Grant No. 2021YFB2900204; National Natural Science Foundation of China (NSFC), Grant Nos. 62201123, 62132004 and 61971102; China Postdoctoral Science Foundation, Grant No. 2022TQ0056; Sichuan Science and Technology Program, Grant No. 2022YFH0022; Sichuan Major R&D Project, Grant No. 22QYCX0168; Municipal Government of Quzhou, Grant No. 2022D031.)
Cable-stayed bridges have been widely used in high-speed railway infrastructure. The accurate determination of the cables’ representative temperatures is vital during the intricate processes of design, construction, and maintenance of cable-stayed bridges. However, the representative temperatures of stay cables are not specified in the existing design codes. To address this issue, this study investigates the distribution of cable temperature and determines its representative value. First, an experimental investigation spanning one year was carried out near the bridge site to obtain temperature data. Statistical analysis of the measured data reveals that the temperature distribution is generally uniform along the cable cross-section, without a significant temperature gradient. Then, based on the limited data, the Monte Carlo, gradient boosted regression trees (GBRT), and univariate linear regression (ULR) methods are employed to predict the cables’ representative temperature throughout the service life. These methods effectively overcome the limitation of insufficient monitoring data and accurately predict the representative temperature of the cables, although each has its own advantages and limitations in terms of applicability and accuracy. A comprehensive evaluation of the performance of these methods is conducted, and practical recommendations are provided for their application. The proposed methods and representative temperatures provide a good basis for the operation and maintenance of in-service long-span cable-stayed bridges. (Funding: Project of the Science and Technology Research and Development Program of China Railway Corporation, Project 2017G006-N.)
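A minimal GBRT sketch of the prediction task, using scikit-learn on synthetic data (the features, target relation, and hyperparameters are illustrative assumptions, not the paper's monitored dataset or tuned model):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical ambient measurements as features, cable temperature as target.
rng = np.random.default_rng(7)
n = 2000
air_temp = rng.uniform(-10, 40, n)            # ambient air temperature [deg C]
solar = rng.uniform(0, 1000, n)               # solar radiation [W/m^2]
hour = rng.integers(0, 24, n)                 # hour of day
cable_temp = air_temp + 0.01 * solar + rng.normal(0, 1.0, n)  # synthetic target

X = np.column_stack([air_temp, solar, hour])
X_tr, X_te, y_tr, y_te = train_test_split(X, cable_temp, random_state=0)

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X_tr, y_tr)
print(f"R^2 on held-out data: {model.score(X_te, y_te):.3f}")
```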
The inter-city linkage heat data provided by Baidu Migration is employed as a characterization of inter-city linkages in order to study the network linkage characteristics and hierarchical structure of the urban agglomeration in the Greater Bay Area using social network analysis. This is the first application of big data based on location-based services (LBS) to the study of urban agglomeration network structure, and it represents a novel research perspective on the topic. The study reveals that the density of network linkages in the Greater Bay Area urban agglomeration has reached 100%, indicating a mature network-like spatial structure. This structure has given rise to three distinct communities: Shenzhen-Dongguan-Huizhou, Guangzhou-Foshan-Zhaoqing, and Zhuhai-Zhongshan-Jiangmen. Additionally, cities within the Greater Bay Area urban agglomeration play different roles, suggesting that varying development strategies may be necessary to achieve staggered development. The study demonstrates that large datasets represented by LBS can offer novel insights and methodologies for the examination of urban agglomeration network structures, contingent on the appropriate mining and processing of the data.
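A minimal sketch of the social-network-analysis step with networkx, using hypothetical linkage weights in place of the Baidu Migration heat data:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical inter-city linkage weights standing in for migration heat data.
links = [
    ("Shenzhen", "Dongguan", 95), ("Dongguan", "Huizhou", 60),
    ("Guangzhou", "Foshan", 90), ("Foshan", "Zhaoqing", 40),
    ("Zhuhai", "Zhongshan", 55), ("Zhongshan", "Jiangmen", 45),
    ("Guangzhou", "Shenzhen", 80), ("Shenzhen", "Huizhou", 50),
]

G = nx.Graph()
G.add_weighted_edges_from(links)

# Network density and weighted community detection, as in the analysis above.
print(f"network density: {nx.density(G):.2f}")
for i, comm in enumerate(greedy_modularity_communities(G, weight="weight")):
    print(f"community {i}: {sorted(comm)}")
```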
This research paper compares Excel and the R language for data analysis and concludes that R is more suitable for complex data analysis tasks. R’s open-source nature makes it accessible to everyone, and its powerful data management and analysis tools make it suitable for handling complex data analysis tasks. It is also highly customizable, allowing users to create custom functions and packages to meet their specific needs. Additionally, R provides high reproducibility, making it easy to replicate and verify research results, and it has excellent collaboration capabilities, enabling multiple users to work on the same project simultaneously. These advantages make R a more suitable choice for complex data analysis tasks, particularly in scientific research and business applications. The findings of this study will help people understand that R is not just a language that can handle more data than Excel, and they demonstrate that R is essential to the field of data analysis. They will also help users and organizations make informed decisions regarding their data analysis needs and software preferences.
This study aims to explore the application of Bayesian analysis based on neural networks and deep learning to data visualization. The background of the research is that, with the increasing volume and complexity of data, traditional data analysis methods can no longer meet analysis needs. The research methods include building neural network and deep learning models, optimizing and improving them through Bayesian analysis, and applying them to the visualization of large-scale data sets. The results show that neural networks combined with Bayesian analysis and deep learning methods can effectively improve the accuracy and efficiency of data visualization and enhance the intuitiveness and depth of data interpretation. The significance of the research is that it provides a new solution for data visualization in the big data environment and helps further promote the development and application of data science.
This study investigates university English teachers’ acceptance of and willingness to use learning management system (LMS) data analysis tools in their teaching practices. The research employs a mixed-method approach, combining quantitative surveys and qualitative interviews, to understand teachers’ perceptions and attitudes and the factors influencing their adoption of LMS data analysis tools. The findings reveal that perceived usefulness, perceived ease of use, technical literacy, organizational support, and data privacy concerns significantly impact teachers’ willingness to use these tools. Based on these insights, the study offers practical recommendations for educational institutions to enhance the effective adoption of LMS data analysis tools in English language teaching.
Classical Chinese characters, presented through calligraphy, seal engraving, or painting, can exhibit different aesthetics and essences of Chinese characters, making them the most important asset of the Chinese people. Calligraphy and seal engraving, two closely related systems in traditional Chinese art, have developed through the ages. Due to changes in lifestyle and advancements in modern technology, their original functions of daily writing and verification have gradually diminished; instead, they increasingly play a significant role in commercial art. This study utilizes the Evaluation Grid Method (EGM) and the Analytic Hierarchy Process (AHP) to research the key preference factors in the application of calligraphy and seal engraving imagery. Unlike the traditional 5-point equal-interval semantic questionnaire, this study employs a non-equal-interval semantic questionnaire with a golden-ratio scale, distinguishing the importance ratios of adjacent semantic levels and highlighting the weighted emphasis on visual aesthetics. Additionally, the study uses Importance-Performance Analysis (IPA) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to obtain the key preference sequence of calligraphy and seal engraving culture, with the Choquet integral comprehensive evaluation used as a reference for comparison with IPA. It is hoped that this study can provide cultural imagery references and research methods, injecting further creativity into industrial design.
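For reference, TOPSIS itself reduces to a few vectorized steps: normalize the decision matrix, weight it, and rank alternatives by relative closeness to the ideal solution. A minimal sketch with a hypothetical decision matrix and weights (the study derives its weights via AHP instead):

```python
import numpy as np

# Hypothetical decision matrix: rows = alternatives, columns = criteria.
X = np.array([
    [7.0, 9.0, 6.0],
    [8.0, 6.0, 7.0],
    [6.0, 8.0, 9.0],
])
w = np.array([0.5, 0.3, 0.2])           # criteria weights (sum to 1)
benefit = np.array([True, True, True])  # all criteria benefit-type here

V = X / np.linalg.norm(X, axis=0) * w   # vector-normalize, then weight
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal solution
closeness = d_neg / (d_pos + d_neg)        # higher = better

print("preference order (best first):", np.argsort(-closeness) + 1)
```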
Electrocardiogram (ECG) is a low-cost, simple, fast, and non-invasive test. It can reflect the heart’s electrical activity and provide valuable diagnostic clues about the health of the entire body. ECG has therefore been widely used in various biomedical applications such as arrhythmia detection, disease-specific detection, mortality prediction, and biometric recognition. In recent years, ECG-related studies have been carried out using a variety of publicly available datasets, with many differences in the datasets used, data preprocessing methods, targeted challenges, and modeling and analysis techniques. Here we systematically summarize and analyze ECG-based automatic analysis methods and applications. Specifically, we first review 22 commonly used public ECG datasets and provide an overview of data preprocessing processes. We then describe some of the most widely used applications of ECG signals and analyze the advanced methods involved in these applications. Finally, we elucidate some of the challenges in ECG analysis and provide suggestions for further research. (Funding: NSFC-Zhejiang Joint Fund for the Integration of Industrialization and Informatization, U1909208; Science and Technology Major Project of Changsha, kh2202004; Changsha Municipal Natural Science Foundation, kq2202106.)
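As one concrete example of the preprocessing mentioned above, a common first step is zero-phase band-pass filtering to suppress baseline wander and high-frequency noise; the band and sampling rate below are typical illustrative choices, not prescriptions from any particular dataset:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Zero-phase band-pass filtering of a synthetic ECG-like signal.
fs = 360.0                                   # sampling rate [Hz]
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)

# 0.5-40 Hz band: removes slow baseline wander and high-frequency noise.
b, a = butter(N=3, Wn=[0.5, 40.0], btype="bandpass", fs=fs)
clean = filtfilt(b, a, ecg)                  # filtfilt = zero phase distortion
print(clean.shape)
```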
Big data on product sales are an emerging resource for supporting modular product design to meet diversified customers’ requirements for product specification combinations. To better facilitate decision-making in modular product design, correlations among specifications and components, originating from customers’ conscious and subconscious preferences, can be investigated using big data on product sales. This study proposes a framework and associated methods for supporting modular product design decisions based on correlation analysis of product specifications and components using big sales data. The correlations of the product specifications are determined by analyzing the collected product sales data. By building the relations between the product components and specifications, a matrix for measuring the correlation among product components is formed for component clustering. Six rules for supporting decision-making in modular product design are proposed based on frequency analysis of the specification values per component cluster. A case study of electric vehicles illustrates the application of the proposed method. (Funding: National Key R&D Program of China, Grant No. 2018YFB1701701; Sailing Talent Program; Guangdong Provincial Science and Technologies Program of China, Grant No. 2017B090922008; Special Grand Grant from the Tianjin City Government of China.)
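The correlation-then-cluster idea can be sketched compactly: compute a specification correlation matrix from sales records, convert it to a distance, and cluster hierarchically. The data, the number of specifications, and the distance threshold below are hypothetical:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical sales records: rows = sales, columns = specification values.
rng = np.random.default_rng(3)
n_sales, n_specs = 500, 6
specs = rng.normal(size=(n_sales, n_specs))
specs[:, 1] = specs[:, 0] + 0.1 * rng.normal(size=n_sales)  # correlated pair

corr = np.corrcoef(specs, rowvar=False)      # spec-spec correlation matrix

# Treat 1 - |corr| as a distance and cluster hierarchically.
dist = 1 - np.abs(corr)
condensed = dist[np.triu_indices(n_specs, k=1)]
labels = fcluster(linkage(condensed, method="average"), t=0.5, criterion="distance")
print("spec clusters:", labels)               # correlated specs share a label
```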