Long runout landslides involve a massive amount of energy and can be extremely hazardous owing to their long movement distance, high mobility and strong destructive power. Numerical methods have been widely used to predict the landslide runout, but a fundamental problem that remains is how to determine reliable numerical parameters. This study proposes a framework to predict the runout of potential landslides through multi-source data collaboration and numerical analysis of historical landslide events. Specifically, for historical landslide cases, the landslide-induced seismic signal, geophysical surveys, and possible in-situ drone/phone videos (multi-source data collaboration) can validate the numerical results in terms of landslide dynamics and deposit features and help calibrate the numerical (rheological) parameters. Subsequently, the calibrated numerical parameters can be used to numerically predict the runout of potential landslides in the region with a geological setting similar to the recorded events. Application of the runout prediction approach to the 2020 Jiashanying landslide in Guizhou, China gives reasonable results in comparison to the field observations. The numerical parameters are determined from the multi-source data collaboration analysis of a historical case in the region (the 2019 Shuicheng landslide). The proposed framework for landslide runout prediction can be of great utility for landslide risk assessment and disaster reduction in mountainous regions worldwide.
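The calibrated-parameter runout idea can be illustrated with a far simpler model than the rheological codes the abstract refers to. Below is a minimal point-mass sliding-block sketch (not the authors' method): a block accelerates down a slope under Coulomb friction and decelerates on the flat, with the friction coefficient `mu` playing the role of the calibrated parameter. All names and values are illustrative.

```python
import math

def runout_distance(height_m, slope_deg, mu):
    """Point-mass Coulomb-friction runout (energy balance, no rheology).

    The block slides down a slope of vertical drop `height_m`, then
    decelerates on a horizontal plane; `mu` is the friction coefficient.
    Returns the horizontal runout distance on the flat section.
    """
    theta = math.radians(slope_deg)
    if mu >= math.tan(theta):
        return 0.0  # friction prevents sliding altogether
    slope_length = height_m / math.sin(theta)
    # kinetic energy per unit mass at the slope toe: g*L*(sin(theta) - mu*cos(theta))
    v2 = 2 * 9.81 * slope_length * (math.sin(theta) - mu * math.cos(theta))
    # dissipate on the flat: 0.5*v^2 = g*mu*runout
    return v2 / (2 * 9.81 * mu)
```

For a Coulomb block the total horizontal reach equals H/mu exactly (Heim's energy line), which gives a quick sanity check on any calibrated `mu`.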
Distribution networks denote important public infrastructure necessary for people’s livelihoods. However, extreme natural disasters, such as earthquakes, typhoons, and mudslides, severely threaten the safe and stable operation of distribution networks and the power supplies needed for daily life. Therefore, considering the requirements for distribution network disaster prevention and mitigation, there is an urgent need for in-depth research on risk assessment methods for distribution networks under extreme natural disaster conditions. This paper accesses multi-source data, presents data quality improvement methods for distribution networks, and conducts data-driven active fault diagnosis and disaster damage analysis and evaluation using data-driven theory. Furthermore, the paper realizes real-time, accurate access to distribution network disaster information. A case study shows that the proposed approach performs an accurate and rapid assessment of cross-sectional risk and that the minimal average annual outage time can be reduced to 3 h/a in the ring network. The approach proposed in this paper can provide technical support for further improving the ability of distribution networks to cope with extreme natural disasters.
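As a toy illustration of the outage metric quoted above (hours per year, h/a), the sketch below computes a SAIDI-style average annual outage time from a list of outage records; the record schema `(customers_affected, duration_hours)` is an assumption for the example, not the paper's data model.

```python
def average_annual_outage_hours(outages, total_customers, years):
    """SAIDI-style index: customer-hours interrupted / (customers * years).

    `outages` is a list of (customers_affected, duration_hours) tuples
    covering the observation period of `years` years.
    """
    customer_hours = sum(n * d for n, d in outages)
    return customer_hours / (total_customers * years)
```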
The power Internet of Things (IoT) is a significant trend in technology and a requirement for national strategic development. With the deepening digital transformation of the power grid, China’s power system has initially built a power IoT architecture comprising a perception, network, and platform application layer. However, owing to the structural complexity of the power system, the construction of the power IoT continues to face problems such as complex access management of massive heterogeneous equipment, diverse IoT protocol access methods, high concurrency of network communications, and weak data security protection. To address these issues, this study optimizes the existing architecture of the power IoT and designs an integrated management framework for the access of multi-source heterogeneous data in the power IoT, comprising cloud, pipe, edge, and terminal parts. It further reviews and analyzes the key technologies involved in the power IoT, such as unified management of the physical model, highly concurrent access, multi-protocol access, multi-source heterogeneous data storage management, and data security control, to provide a more flexible, efficient, secure, and easy-to-use solution for multi-source heterogeneous data access in the power IoT.
Seeing is an important index to evaluate the quality of an astronomical site. To estimate seeing at the Muztagh-Ata site quantitatively as a function of height and time, the European Centre for Medium-Range Weather Forecasts reanalysis database (ERA5) is used. Seeing calculated from ERA5 is consistent with the Differential Image Motion Monitor seeing at a height of 12 m. Results show that seeing decays exponentially with height at the Muztagh-Ata site. In 2021, seeing decayed with height fastest in fall and most slowly in summer, and the seeing condition was better in fall than in summer. The median value of seeing at 12 m is 0.89 arcsec; the maximum is 1.21 arcsec in August and the minimum is 0.66 arcsec in October. The median value of seeing at 12 m is 0.72 arcsec in the nighttime and 1.08 arcsec in the daytime. Seeing is a combination of annual and roughly biannual variations with the same phase as temperature and wind speed, indicating that seeing variation with time is influenced by temperature and wind speed. The Richardson number Ri is used to analyze the atmospheric stability, and the variations of seeing are consistent with Ri between layers. These quantitative results can provide an important reference for telescope observation strategies.
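The exponential decay of seeing with height reported above can be fitted with a simple log-linear least-squares regression. The sketch below assumes the model seeing(h) = s0·exp(−h/h0); it illustrates the fitting step only and is not the paper's actual procedure.

```python
import math

def fit_exponential_decay(heights, values):
    """Least-squares fit of values = s0 * exp(-h / h0) via the log transform.

    Regress ln(values) on heights: the intercept gives ln(s0) and the
    slope gives -1/h0 (the e-folding height).
    """
    xs, ys = heights, [math.log(v) for v in values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    s0 = math.exp(my - slope * mx)
    return s0, -1.0 / slope  # (seeing at h = 0, e-folding height h0)
```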
How can we efficiently store and mine dynamically generated dense tensors for modeling the behavior of multidimensional dynamic data? Much of the multidimensional dynamic data in the real world is generated in the form of time-growing tensors. For example, air quality tensor data consists of multiple sensory values gathered from wide locations for a long time. Such data, accumulated over time, is redundant and consumes a lot of memory in its raw form. We need a way to efficiently store dynamically generated tensor data that increases over time and to model its behavior on demand between arbitrary time blocks. To this end, we propose a Block Incremental Dense Tucker Decomposition (BID-Tucker) method for efficient storage and on-demand modeling of multidimensional spatiotemporal data. Assuming that tensors come in unit blocks where only the time domain changes, our proposed BID-Tucker first slices the blocks into matrices and decomposes them via singular value decomposition (SVD). The SVDs of the time×space sliced matrices are stored instead of the raw tensor blocks to save space. When modeling from data is required at particular time blocks, the SVDs of the corresponding time blocks are retrieved and incremented to be used for Tucker decomposition. The factor matrices and core tensor of the decomposed results can then be used for further data analysis. We compared our proposed BID-Tucker with D-Tucker, which our method extends, and with vanilla Tucker decomposition. We show that BID-Tucker is faster than both D-Tucker and vanilla Tucker decomposition and uses less memory for storage with a comparable reconstruction error. We applied BID-Tucker to model the spatial and temporal trends of air quality data collected in South Korea from 2018 to 2022, and we were able to verify unusual events, such as chronic ozone alerts and large fire events.
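The storage idea behind BID-Tucker, keeping SVDs of the time×space slices instead of the raw blocks, can be sketched with a pure-Python power iteration that extracts the leading singular triplet of a slice matrix. A real implementation would use a full truncated SVD routine; this rank-1 version is illustrative only.

```python
import random

def top_singular_triplet(A, iters=100, seed=0):
    """Leading singular triplet (u, sigma, v) of a dense matrix A via power
    iteration on A^T A. Storing (u, sigma, v) instead of A is the
    storage-saving idea behind keeping SVDs of time x space slices."""
    rng = random.Random(seed)
    rows, cols = len(A), len(A[0])
    v = [rng.random() + 0.1 for _ in range(cols)]
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(cols)) for i in range(rows)]  # w = A v
        v = [sum(A[i][j] * w[i] for i in range(rows)) for j in range(cols)]  # v = A^T w
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    w = [sum(A[i][j] * v[j] for j in range(cols)) for i in range(rows)]
    sigma = sum(x * x for x in w) ** 0.5
    u = [x / sigma for x in w]
    return u, sigma, v
```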
Peanut allergy is majorly related to severe food-induced allergic reactions. Several foods, including cow's milk, hen's eggs, soy, wheat, peanuts, tree nuts (walnuts, hazelnuts, almonds, cashews, pecans and pistachios), fish and shellfish, are responsible for more than 90% of food allergies. Here, we provide promising insights using a large-scale data-driven analysis, comparing the mechanistic features and biological relevance of different ingredients present in peanuts, tree nuts (walnuts, almonds, cashews, pecans and pistachios) and soybean. Additionally, we have analysed the chemical compositions of peanuts in different processed forms: raw, boiled and dry-roasted. Using the data-driven approach, we are able to generate new hypotheses to explain why nuclear receptors like the peroxisome proliferator-activated receptors (PPARs) and their isoforms, and their interaction with dietary lipids, may have a significant effect on allergic response. The results obtained from this study will direct future experimental and clinical studies to understand the role of dietary lipids and PPAR isoforms in exerting pro-inflammatory or anti-inflammatory functions on cells of the innate immunity and influencing antigen presentation to the cells of the adaptive immunity.
Multimodal sentiment analysis utilizes multimodal data such as text, facial expressions and voice to detect people’s attitudes. With the advent of distributed data collection and annotation, we can easily obtain and share such multimodal data. However, due to professional discrepancies among annotators and lax quality control, noisy labels might be introduced. Recent research suggests that deep neural networks (DNNs) will overfit noisy labels, leading to poor performance. To address this challenging problem, we present a Multimodal Robust Meta Learning framework (MRML) for multimodal sentiment analysis that resists noisy labels and correlates distinct modalities simultaneously. Specifically, we propose a two-layer fusion net to deeply fuse different modalities and improve the quality of the multimodal data features for label correction and network training. Besides, a multiple meta-learner (label corrector) strategy is proposed to enhance the label correction approach and prevent models from overfitting to noisy labels. We conducted experiments on three popular multimodal datasets to verify the superiority of our method by comparing it with four baselines.
Integrated data and energy transfer (IDET) enables electromagnetic waves to transmit wireless energy at the same time as data delivery for low-power devices. In this paper, an energy harvesting modulation (EHM) assisted multi-user IDET system is studied, where all the signals received at the users are exploited for energy harvesting without degrading the wireless data transfer (WDT) performance. The joint IDET performance is then analysed theoretically by conceiving a practical time-dependent wireless channel. With the aid of an AO-based algorithm, the average effective data rate among users is maximized while ensuring the BER and the wireless energy transfer (WET) performance. Simulation results validate and evaluate the IDET performance of the EHM-assisted system and demonstrate that an optimal number of user clusters and IDET time slots should be allocated in order to improve the WET and WDT performance.
This research paper compares Excel and the R language for data analysis and concludes that R is more suitable for complex data analysis tasks. R’s open-source nature makes it accessible to everyone, and its powerful data management and analysis tools make it suitable for handling complex data analysis tasks. It is also highly customizable, allowing users to create custom functions and packages to meet their specific needs. Additionally, R provides high reproducibility, making it easy to replicate and verify research results, and it has excellent collaboration capabilities, enabling multiple users to work on the same project simultaneously. These advantages make R a more suitable choice for complex data analysis tasks, particularly in scientific research and business applications. The findings of this study will help people understand that R is not just a language that can handle more data than Excel, and they demonstrate that R is essential to the field of data analysis. At the same time, they will also help users and organizations make informed decisions regarding their data analysis needs and software preferences.
Urban functional area (UFA) identification is a core scientific issue affecting urban sustainability. The current knowledge gap is mainly reflected in the lack of multi-scale quantitative interpretation methods from the perspective of human-land interaction. In this paper, based on multi-source big data, including 250 m×250 m resolution cell phone data, 1.81×10⁵ Points of Interest (POI) data and administrative boundary data, we built a UFA identification method and demonstrated it empirically in Shenyang City, China. We argue that the method can effectively identify multi-scale, multi-type UFAs based on human activity and further reveal the spatial correlation between urban facilities and human activity. The empirical study suggests that the employment functional zones in Shenyang City are more concentrated in the central city than other single functional zones. There are more mixed functional areas in the central city areas, while the planned industrial new cities of Shenyang need to develop comprehensive functions. UFAs exhibit scale effects and human-land interaction patterns. We suggest that city decision makers apply multi-source big data to measure urban functional services in a more refined manner from a supply-demand perspective.
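One common way to quantify the "mixed functional areas" mentioned above is the Shannon entropy of POI category shares within an analysis cell (higher entropy means a more mixed-function area). The sketch below is a generic illustration, not the paper's identification method, and the category names are invented.

```python
import math

def functional_mix_entropy(poi_counts):
    """Shannon entropy (bits) of POI category shares within one analysis cell.

    `poi_counts` maps category name -> count of POIs in the cell.
    0.0 means a single-function cell; log2(k) is the maximum for k categories.
    """
    total = sum(poi_counts.values())
    probs = [c / total for c in poi_counts.values() if c > 0]
    return -sum(p * math.log(p, 2) for p in probs)
```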
Multi-source data plays an important role in the evolution of media convergence. Its fusion processing enables the further mining of data and utilization of data value, and broadens the path for the sharing and dissemination of media data. However, it also faces serious problems in terms of protecting user and data privacy. Many privacy protection methods have been proposed to solve the problem of privacy leakage during the process of data sharing, but they suffer from two flaws: 1) the lack of algorithmic frameworks for specific scenarios such as dynamic datasets in the media domain; 2) the inability to solve the problem of the high computational complexity of ciphertext in multi-source data privacy protection, resulting in long encryption and decryption times. In this paper, we propose a multi-source data privacy protection method based on homomorphic encryption and blockchain technology, which solves the privacy protection problem of multi-source heterogeneous data in media dissemination and reduces ciphertext processing time. We deployed the proposed method on the Hyperledger platform for testing and compared it with privacy protection schemes based on k-anonymity and differential privacy. The experimental results show that the key generation, encryption, and decryption times of the proposed method are lower than those of data privacy protection methods based on k-anonymity technology and differential privacy technology. This significantly reduces the processing time of multi-source data, which gives it potential for use in many applications.
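The abstract does not name the homomorphic scheme used, so as an illustration the sketch below implements a toy Paillier cryptosystem, a standard additively homomorphic scheme: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The fixed small primes make this insecure; it is for demonstration only.

```python
import math
import random

def paillier_keypair(p=1000003, q=1000033):
    """Toy Paillier key pair from fixed small primes (insecure, demo only)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)  # valid because we use the generator g = n + 1
    return n, (lam, mu, n)

def encrypt(n, m):
    """Encrypt plaintext m < n: c = (n+1)^m * r^n mod n^2."""
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    x = pow(c, lam, n * n)
    return (x - 1) // n * mu % n

def add_encrypted(n, c1, c2):
    """Multiplying ciphertexts mod n^2 adds the underlying plaintexts."""
    return c1 * c2 % (n * n)
```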
Very low frequency (VLF) signals propagate between the ground and the ionosphere. At night, multimode interference causes the phase to oscillate with propagation distance, leading to abnormalities in the received VLF signal. This study uses the VLF signal received in Qingdao City, Shandong Province, from the Russian Alpha navigation system to explore the multimode interference problem of VLF signal propagation. The characteristics of the effect of multimode interference on the phase are analyzed according to the variation of the phase of the VLF signal. However, the phase of VLF signals is also affected by the X-rays and energetic particles released during solar flare eruptions, so both phenomena are studied in this work. It is concluded that X-rays do not affect the phase of VLF signals at night, but energetic particles do affect the phase change, and their influence should be excluded in the study of multimode interference phenomena. Avoiding the influence of multimode interference and improving positioning accuracy are of great practical significance for VLF navigation systems, which can provide navigation positioning when GPS is degraded or unavailable.
Highly turbulent water flows, often encountered near human constructions like bridge piers, spillways, and weirs, display intricate dynamics characterized by the formation of eddies and vortices. These formations, varying in size and lifespan, significantly influence the distribution of fluid velocities within the flow. Subsequently, the rapid velocity fluctuations in highly turbulent flows lead to elevated shear and normal stress levels. For this reason, physical modeling is more often than not employed to meticulously study these dynamics and the impact of turbulent flows on the stability and longevity of nearby structures. Despite the effectiveness of physical modeling, various monitoring challenges arise, including flow disruption, the necessity for concurrent gauging at multiple locations, and the duration of measurements. Addressing these challenges, image velocimetry emerges as an ideal method in fluid mechanics, particularly for studying turbulent flows. To account for measurement duration, a probabilistic approach utilizing a probability density function (PDF) is suggested to mitigate uncertainty in estimated average and maximum values. However, it becomes evident that deriving the PDF is not straightforward for all turbulence-induced stresses. In response, this study proposes a novel approach that combines image velocimetry with a stochastic model to provide a generic yet accurate description of flow dynamics in such applications. This integration enables an approach based on the probability of failure, facilitating a more comprehensive analysis of turbulent flows. Such an approach is essential for estimating both short- and long-term stresses on hydraulic constructions under assessment.
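As a minimal illustration of the probabilistic treatment of turbulence-induced stresses, the sketch below estimates an exceedance probability from stress samples under a Gaussian assumption. The Gaussian PDF is a simplifying assumption made here for the example; as the text notes, the true PDF is not straightforward for all stresses.

```python
import math
import statistics

def exceedance_probability(samples, threshold):
    """P(X > threshold) assuming the stress samples are ~ Normal(mu, sigma).

    A parametric stand-in for a PDF-based failure-probability analysis:
    fit mean and standard deviation, then use the Gaussian tail (erfc).
    """
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    z = (threshold - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))
```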
The study involved the evaluation of the hydrocarbon potential of FORMAT Field, coastal swamp depobelt, Niger Delta, Nigeria, to obtain a more efficient reservoir characterization and fluid properties identification. Despite advances in traditional 3D seismic data interpretation, obtaining adequate reservoir characteristics at the finest level had proved very challenging, with often disappointing results. A method that integrates amplitude variation with offset (AVO) analysis is hereby proposed to better illuminate the reservoir. Hampson Russell 10.3 was used to integrate and study the available seismic and well data. The reservoir of interest was delineated using the available suite of petrophysical data. It was marked by low gamma ray, high resistivity, and low acoustic impedance within a true vertical depth subsea (TVDss) range of 10,350 - 10,450 ft. The AVO fluid substitution yielded a decrease in the density values for pure gas (2.3 - 1.6 g/cc) and pure oil (2.3 - 1.8 g/cc), while the density for pure brine increased (2.3 to 2.8 g/cc). Results from the FORMAT 26 plots yielded a negative intercept and negative gradient at the top and a positive intercept and positive gradient at the base, which conforms to a Class III AVO anomaly. The FORMAT 30 plots yielded a negative intercept and positive gradient at the top and a positive intercept and negative gradient at the base, which conforms to a Class IV AVO anomaly. AVO attribute volume slices showed a decrease in the Poisson ratio (0.96 to -1.0), indicating that the reservoir contains hydrocarbon. The S-wave reflectivity and the product of the intercept and gradient further clarified that there is a Class III gas sand in the reservoir and possibly a Class IV gas sand anomaly in the same reservoir.
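The intercept/gradient pairs discussed above follow the two-term Shuey approximation R(θ) ≈ A + B·sin²θ. The sketch below computes A (intercept) and B (gradient) across an interface from the layer properties; the shale-over-gas-sand velocities and densities in the example are hypothetical, not values from FORMAT Field.

```python
def shuey_intercept_gradient(vp1, vs1, rho1, vp2, vs2, rho2):
    """Two-term Shuey (1985) approximation R(theta) ~ A + B*sin^2(theta).

    Inputs are P velocity, S velocity and density of the upper (1) and
    lower (2) layers; returns (A, B). A Class III anomaly (e.g. gas sand
    under shale) shows A < 0 and B < 0 at the top of the sand.
    """
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    A = 0.5 * (dvp / vp + drho / rho)
    k = (vs / vp) ** 2
    B = 0.5 * dvp / vp - 2 * k * (drho / rho + 2 * dvs / vs)
    return A, B
```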
Traditional global sensitivity analysis (GSA) neglects the epistemic uncertainties associated with the probabilistic characteristics (i.e. distribution type and its parameters) of input rock properties, which emanate from the small size of datasets, while mapping the relative importance of properties to the model response. This paper proposes an augmented Bayesian multi-model inference (BMMI) coupled with GSA methodology (BMMI-GSA) to address this issue by estimating the imprecision in the moment-independent sensitivity indices of rock structures arising from the small size of input data. The methodology employs BMMI to quantify the epistemic uncertainties associated with the model type and parameters of input properties. The estimated uncertainties are propagated into the imprecision of moment-independent Borgonovo’s indices by employing a reweighting approach on candidate probabilistic models. The proposed methodology is showcased for a rock slope prone to stress-controlled failure in the Himalayan region of India. It was superior to conventional GSA (which neglects all epistemic uncertainties) and Bayesian coupled GSA (B-GSA) (which neglects model uncertainty) owing to its capability to incorporate the uncertainties in both the model type and the parameters of properties. The imprecise Borgonovo’s indices estimated via the proposed methodology provide confidence intervals of the sensitivity indices instead of fixed-point estimates, which makes the user more informed in data collection efforts. Analyses performed with varying sample sizes suggested that the uncertainties in sensitivity indices reduce significantly with increasing sample size; accurate importance ranking of properties was only possible with large samples. Further, the impact of prior knowledge in terms of prior ranges and distributions was significant; hence, any related assumption should be made carefully.
The outbreak of the pandemic caused by Coronavirus Disease 2019 (COVID-19) has affected the daily activities of people across the globe. During the COVID-19 outbreak and the successive lockdowns, Twitter was heavily used, and the number of tweets regarding COVID-19 increased tremendously. Several studies used Sentiment Analysis (SA) to analyze the emotions expressed through tweets upon COVID-19. Therefore, in the current study, a new Artificial Bee Colony (ABC) with Machine Learning-driven SA (ABCML-SA) model is developed for conducting sentiment analysis of COVID-19 Twitter data. The prime focus of the presented ABCML-SA model is to recognize the sentiments expressed in tweets made upon COVID-19. It involves data pre-processing at the initial stage, followed by n-gram based feature extraction to derive the feature vectors. For identification and classification of the sentiments, the Support Vector Machine (SVM) model is exploited. At last, the ABC algorithm is applied to fine-tune the parameters involved in the SVM. To demonstrate the improved performance of the proposed ABCML-SA model, a sequence of simulations was conducted. The comparative assessment results confirmed the effectual performance of the proposed ABCML-SA model over other approaches.
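The n-gram feature extraction step of such a pipeline can be sketched as follows; the lowercase/whitespace tokenization here is an assumption for the example, not the paper's exact pre-processing.

```python
from collections import Counter

def ngram_features(text, n=2):
    """Word n-gram counts for a tweet, i.e. the sparse feature vector that
    would be fed to a downstream classifier such as an SVM."""
    tokens = text.lower().split()
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
```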
This study sought to conduct a bibliometric analysis of acupuncture studies focusing on heart rate variability (HRV) and to investigate the correlation between various acupoints and their effects on HRV by utilizing association rule mining and network analysis. A total of 536 publications on acupuncture studies related to HRV were analyzed. The disease keyword analysis revealed that HRV-related acupuncture studies were mainly related to pain, inflammation, emotional disorders, gastrointestinal function, and hypertension. A separate analysis was conducted on acupuncture prescriptions, and Neiguan (PC6) and Zusanli (ST36) were the most frequently used acupoints. The core acupoints for HRV regulation were identified as PC6, ST36, Shenmen (HT7), Hegu (LI4), Sanyinjiao (SP6), Jianshi (PC5), Taichong (LR3), Quchi (LI11), Guanyuan (CV4), Baihui (GV20), and Taixi (KI3). Additionally, the research encompassed 46 reports on acupuncture animal experiments conducted on HRV, with ST36 being the most frequently utilized acupoint. The research presented in this study offers valuable insights into the global research trends and hotspots in acupuncture-based HRV studies, as well as identifying frequently used combinations of acupoints. The findings may be helpful for further research in this field and provide valuable information about the potential use of acupuncture for improving HRV in both humans and animals.
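Association rule mining over acupoint prescriptions can be sketched with plain support/confidence computations over itemsets; the example prescriptions in the test are hypothetical.

```python
def rule_metrics(prescriptions, antecedent, consequent):
    """Support and confidence for the rule antecedent -> consequent.

    `prescriptions` is a list of sets of acupoint codes (one set per
    prescription); `antecedent` and `consequent` are sets of codes.
    """
    n = len(prescriptions)
    both = sum(1 for p in prescriptions if antecedent <= p and consequent <= p)
    ante = sum(1 for p in prescriptions if antecedent <= p)
    support = both / n
    confidence = both / ante if ante else 0.0
    return support, confidence
```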
Android smartphones largely dominate the smartphone market. For this reason, it is very important to examine these smartphones in terms of digital forensics, since they are often used as evidence in trials. It is possible to acquire a physical or logical image of these devices. Physical and logical image acquisition each have advantages and disadvantages compared to the other. Creating the logical image is done at the file system level, and analysis can be made on this logical image. Both logical image acquisition and analysis of the image can be done by software tools. In this study, the differences between logical image and physical image acquisition in Android smartphones, their advantages and disadvantages compared to each other, the difficulties that may be encountered in obtaining physical images, which type of image contributes to obtaining more useful and effective data, which one should be preferred for different conditions, and the benefits of having root authority are discussed. The practice of acquiring the logical image of an Android smartphone and performing an analysis on the image is also included. Although root privileges are not required for logical image acquisition, it has been observed that very limited data will be obtained with a logical image created without root privileges. Nevertheless, logical image acquisition still has advantages over physical image acquisition.
Social media is an essential component of our personal and professional lives. We use it extensively to share various things, including our opinions on daily topics and feelings about different subjects. This sharing of posts provides insights into someone’s current emotions. In artificial intelligence (AI) and deep learning (DL), researchers emphasize opinion mining and sentiment analysis, particularly on social media platforms such as Twitter (currently known as X), which has a global user base. This research work focuses on a comparison between two popular approaches: lexicon-based and deep learning-based. To conduct this study, we used a Twitter dataset called sentiment140, which contains over 1.5 million data points. The primary focus was the Long Short-Term Memory (LSTM) deep learning sequence model. First, we applied particular techniques to preprocess the data and divided the dataset into training and test data. We evaluated the performance of our model using the test data. Simultaneously, we applied the lexicon-based approach to the same test data and recorded the outputs. Finally, we compared the two approaches by creating confusion matrices based on their respective outputs. This allows us to assess their precision, recall, and F1-score, enabling us to determine which approach yields better accuracy. This research achieved 98% model accuracy for the deep learning algorithm and 95% model accuracy for the lexicon-based approach.
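A minimal version of the lexicon-based approach looks like the sketch below; the toy lexicon and the simple negation rule are assumptions made for the example (real systems use curated resources such as VADER or SentiWordNet).

```python
# Toy lexicon and negator list; both are illustrative stand-ins.
LEXICON = {"good": 1, "great": 2, "happy": 1, "bad": -1, "awful": -2, "sad": -1}
NEGATORS = {"not", "no", "never"}

def lexicon_sentiment(text):
    """Sum word polarities, flipping the sign of the word right after a
    negator -- a minimal sketch of the lexicon-based approach."""
    score, flip = 0, 1
    for w in text.lower().split():
        if w in NEGATORS:
            flip = -1
            continue
        score += flip * LEXICON.get(w, 0)
        flip = 1
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```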
Sentiment analysis is a method to identify and understand the emotion in text through natural language processing (NLP) and text analysis. In the era of information technology, there is often a certain discrepancy between the comments on a movie website and the actual score of the movie, and sentiment analysis technology provides a new way to address this problem. In this paper, Python is used to obtain movie review data from the Douban platform, and models are constructed and trained using naive Bayes and Bi-LSTM. Based on the evaluation metrics, the better-performing Bi-LSTM model is selected to classify the sentiment of users’ movie reviews; the reviews are then scored according to the classification results and compared with the real ratings on the website. Based on the error of the final comparison, the feasibility of this technology for film review scoring is verified. By applying this technology, the phenomenon of film rating distortion in the information age can be prevented and the rights and interests of film and television works can be safeguarded.
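The naive Bayes baseline mentioned above can be sketched as a multinomial naive Bayes classifier with add-one smoothing. This is a generic stand-in, not the paper's implementation, and the training snippets in the example are invented.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesSentiment:
    """Multinomial naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        words = text.lower().split()
        total_docs = sum(self.label_counts.values())
        best_label, best_logp = None, -math.inf
        for label, counts in self.word_counts.items():
            # log prior + sum of smoothed log likelihoods
            logp = math.log(self.label_counts[label] / total_docs)
            denom = sum(counts.values()) + len(self.vocab)
            logp += sum(math.log((counts[w] + 1) / denom) for w in words)
            if logp > best_logp:
                best_label, best_logp = label, logp
        return best_label
```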
Funding: Supported by the National Natural Science Foundation of China (41977215).
Abstract: Long-runout landslides involve a massive amount of energy and can be extremely hazardous owing to their long travel distance, high mobility and strong destructive power. Numerical methods have been widely used to predict landslide runout, but a fundamental problem remains: how to determine reliable numerical parameters. This study proposes a framework to predict the runout of potential landslides through multi-source data collaboration and numerical analysis of historical landslide events. Specifically, for historical landslide cases, landslide-induced seismic signals, geophysical surveys, and possible in-situ drone/phone videos (multi-source data collaboration) can validate the numerical results in terms of landslide dynamics and deposit features and help calibrate the numerical (rheological) parameters. Subsequently, the calibrated numerical parameters can be used to numerically predict the runout of potential landslides in regions with a geological setting similar to the recorded events. Application of the runout prediction approach to the 2020 Jiashanying landslide in Guizhou, China gives reasonable results in comparison to the field observations. The numerical parameters are determined from the multi-source data collaboration analysis of a historical case in the region (the 2019 Shuicheng landslide). The proposed framework for landslide runout prediction can be of great utility for landslide risk assessment and disaster reduction in mountainous regions worldwide.
Abstract: Distribution networks are important public infrastructure necessary for people’s livelihoods. However, extreme natural disasters, such as earthquakes, typhoons, and mudslides, severely threaten the safe and stable operation of distribution networks and the power supplies needed for daily life. Therefore, considering the requirements for distribution network disaster prevention and mitigation, there is an urgent need for in-depth research on risk assessment methods for distribution networks under extreme natural disaster conditions. This paper accesses multi-source data, presents data quality improvement methods for distribution networks, and conducts data-driven active fault diagnosis and disaster damage analysis and evaluation. Furthermore, the paper realizes real-time, accurate access to distribution network disaster information. A case study shows that the proposed approach performs an accurate and rapid assessment of cross-sectional risk and that the minimum average annual outage time can be reduced to 3 h/a in the ring network. The approach proposed in this paper can provide technical support for further improving the ability of distribution networks to cope with extreme natural disasters.
Funding: Supported by the National Key Research and Development Program of China (grant number 2019YFE0123600).
Abstract: The power Internet of Things (IoT) is a significant trend in technology and a requirement for national strategic development. With the deepening digital transformation of the power grid, China’s power system has initially built a power IoT architecture comprising a perception layer, a network layer, and a platform application layer. However, owing to the structural complexity of the power system, the construction of the power IoT continues to face problems such as complex access management of massive heterogeneous equipment, diverse IoT protocol access methods, high concurrency of network communications, and weak data security protection. To address these issues, this study optimizes the existing architecture of the power IoT and designs an integrated management framework for the access of multi-source heterogeneous data in the power IoT, comprising cloud, pipe, edge, and terminal parts. It further reviews and analyzes the key technologies involved in the power IoT, such as unified management of the physical model, highly concurrent access, multi-protocol access, multi-source heterogeneous data storage management, and data security control, to provide a more flexible, efficient, secure, and easy-to-use solution for multi-source heterogeneous data access in the power IoT.
Funding: Funded by the National Natural Science Foundation of China (NSFC) and the Chinese Academy of Sciences (CAS) (grant No. U2031209), and the National Natural Science Foundation of China (NSFC, grant Nos. 11872128, 42174192, and 91952111).
Abstract: Seeing is an important index to evaluate the quality of an astronomical site. To estimate seeing at the Muztagh-Ata site as a function of height and time, the European Centre for Medium-Range Weather Forecasts reanalysis database (ERA5) is used. Seeing calculated from ERA5 is consistent with the Differential Image Motion Monitor seeing at a height of 12 m. Results show that seeing decays exponentially with height at the Muztagh-Ata site. In 2021, seeing decayed fastest with height in fall and most slowly in summer, and the seeing condition was better in fall than in summer. The median value of seeing at 12 m is 0.89 arcsec, with a maximum of 1.21 arcsec in August and a minimum of 0.66 arcsec in October. The median value of seeing at 12 m is 0.72 arcsec in the nighttime and 1.08 arcsec in the daytime. Seeing is a combination of annual and roughly biannual variations with the same phase as temperature and wind speed, indicating that the variation of seeing with time is influenced by temperature and wind speed. The Richardson number Ri is used to analyze atmospheric stability, and the variations of seeing are consistent with Ri between layers. These quantitative results can provide an important reference for telescope observation strategies.
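The exponential height dependence reported above can be written as ε(h) = ε₀·exp(−h/H). A small sketch, using invented numbers rather than the site's published profile, inverts seeing values at two heights to recover the scale height H and then interpolates:

```python
import math

def scale_height(h1, s1, h2, s2):
    """Solve s(h) = s0 * exp(-h / H) from seeing measured at two heights (m, arcsec)."""
    H = (h2 - h1) / math.log(s1 / s2)   # decay scale height
    s0 = s1 * math.exp(h1 / H)          # extrapolated ground-level seeing
    return H, s0

# Hypothetical seeing values at 12 m and 50 m (illustrative, not the site's data)
H, s0 = scale_height(12.0, 0.89, 50.0, 0.60)
seeing_30m = s0 * math.exp(-30.0 / H)   # interpolated seeing at 30 m
```

With a fitted H per season, the same one-liner gives the expected seeing at any proposed telescope height, which is the practical use of the decay law.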
Funding: Supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2022-0-00369) and by the National Research Foundation of Korea grant funded by the Korean government (2018R1A5A1060031, 2022R1F1A1065664).
Abstract: How can we efficiently store and mine dynamically generated dense tensors for modeling the behavior of multidimensional dynamic data? Much of the multidimensional dynamic data in the real world is generated in the form of time-growing tensors. For example, air quality tensor data consist of multiple sensory values gathered from many locations over a long time. Such data, accumulated over time, are redundant and consume a lot of memory in their raw form. We need a way to efficiently store dynamically generated tensor data that grow over time and to model their behavior on demand between arbitrary time blocks. To this end, we propose a Block Incremental Dense Tucker Decomposition (BID-Tucker) method for efficient storage and on-demand modeling of multidimensional spatiotemporal data. Assuming that tensors come in unit blocks where only the time domain changes, our proposed BID-Tucker first slices the blocks into matrices and decomposes them via singular value decomposition (SVD). The SVDs of the time×space sliced matrices are stored instead of the raw tensor blocks to save space. When modeling is required at particular time blocks, the SVDs of the corresponding time blocks are retrieved and incremented to be used for Tucker decomposition. The factor matrices and core tensor of the decomposed results can then be used for further data analysis. We compared our proposed BID-Tucker with D-Tucker, which our method extends, and with vanilla Tucker decomposition. We show that BID-Tucker is faster than both D-Tucker and vanilla Tucker decomposition and uses less memory for storage with a comparable reconstruction error. We applied BID-Tucker to model the spatial and temporal trends of air quality data collected in South Korea from 2018 to 2022, and we were able to model those trends and to verify unusual events, such as chronic ozone alerts and large fire events.
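The storage saving behind keeping truncated SVD factors instead of raw time×space slices is simple bookkeeping. The sketch below only does that arithmetic; the block shape (1000 time steps × 500 locations) and the retained rank are invented for illustration and are not the paper's settings:

```python
def raw_cost(rows, cols):
    """Entries needed to store a dense time-by-space matrix block."""
    return rows * cols

def svd_cost(rows, cols, rank):
    """Entries for rank-r truncated SVD factors: U (rows x r), s (r), V (cols x r)."""
    return rows * rank + rank + cols * rank

# Hypothetical block: 1000 time steps x 500 sensor locations, truncated at rank 20
raw = raw_cost(1000, 500)              # 500000 entries
compressed = svd_cost(1000, 500, 20)   # 30020 entries
ratio = raw / compressed               # ~16.7x smaller per block
```

The actual compression in BID-Tucker depends on how fast the singular values decay, since the rank must be large enough to keep the reconstruction error comparable to the raw data.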
Abstract: Peanut allergy is a major cause of severe food-induced allergic reactions. Several foods, including cow’s milk, hen’s eggs, soy, wheat, peanuts, tree nuts (walnuts, hazelnuts, almonds, cashews, pecans and pistachios), fish and shellfish, are responsible for more than 90% of food allergies. Here, we provide promising insights using a large-scale data-driven analysis comparing the mechanistic features and biological relevance of different ingredients present in peanuts, tree nuts (walnuts, almonds, cashews, pecans and pistachios) and soybean. Additionally, we have analysed the chemical compositions of peanuts in different processed forms: raw, boiled and dry-roasted. Using the data-driven approach, we are able to generate new hypotheses to explain why nuclear receptors such as the peroxisome proliferator-activated receptors (PPARs) and their isoforms, and their interaction with dietary lipids, may have a significant effect on the allergic response. The results obtained from this study will direct future experimental and clinical studies to understand the role of dietary lipids and PPAR isoforms in exerting pro-inflammatory or anti-inflammatory functions on cells of the innate immunity and in influencing antigen presentation to cells of the adaptive immunity.
Funding: Supported by STI 2030-Major Projects 2021ZD0200400, the National Natural Science Foundation of China (62276233 and 62072405), and the Key Research Project of Zhejiang Province (2023C01048).
Abstract: Multimodal sentiment analysis utilizes multimodal data such as text, facial expressions and voice to detect people’s attitudes. With the advent of distributed data collection and annotation, we can easily obtain and share such multimodal data. However, due to professional discrepancies among annotators and lax quality control, noisy labels might be introduced. Recent research suggests that deep neural networks (DNNs) will overfit noisy labels, leading to poor performance. To address this challenging problem, we present a Multimodal Robust Meta Learning framework (MRML) for multimodal sentiment analysis that resists noisy labels and correlates distinct modalities simultaneously. Specifically, we propose a two-layer fusion net to deeply fuse different modalities and improve the quality of the multimodal data features for label correction and network training. Besides, a multiple meta-learner (label corrector) strategy is proposed to enhance the label correction approach and prevent models from overfitting to noisy labels. We conducted experiments on three popular multimodal datasets to verify the superiority of our method by comparing it with four baselines.
Funding: Supported in part by the MOST Major Research and Development Project (Grant No. 2021YFB2900204), the National Natural Science Foundation of China (NSFC) (Grant Nos. 62201123, 62132004, 61971102), the China Postdoctoral Science Foundation (Grant No. 2022TQ0056), the Sichuan Science and Technology Program (Grant No. 2022YFH0022), the Sichuan Major R&D Project (Grant No. 22QYCX0168), and the Municipal Government of Quzhou (Grant No. 2022D031).
Abstract: Integrated data and energy transfer (IDET) enables electromagnetic waves to transmit wireless energy at the same time as data delivery for low-power devices. In this paper, an energy harvesting modulation (EHM) assisted multi-user IDET system is studied, where all the received signals at the users are exploited for energy harvesting without degrading the wireless data transfer (WDT) performance. The joint IDET performance is then analysed theoretically by conceiving a practical time-dependent wireless channel. With the aid of an AO-based algorithm, the average effective data rate among users is maximized while ensuring the BER and the wireless energy transfer (WET) performance. Simulation results validate and evaluate the IDET performance of the EHM assisted system, and demonstrate that the optimal number of user clusters and IDET time slots should be allocated in order to improve the WET and WDT performance.
Abstract: This research paper compares Excel and the R language for data analysis and concludes that R is more suitable for complex data analysis tasks. R’s open-source nature makes it accessible to everyone, and its powerful data management and analysis tools make it suitable for handling complex data analysis tasks. It is also highly customizable, allowing users to create custom functions and packages to meet their specific needs. Additionally, R provides high reproducibility, making it easy to replicate and verify research results, and it has excellent collaboration capabilities, enabling multiple users to work on the same project simultaneously. These advantages make R a more suitable choice for complex data analysis tasks, particularly in scientific research and business applications. The findings of this study will help people understand that R is not just a language that can handle more data than Excel, and demonstrate that R is essential to the field of data analysis. At the same time, they will help users and organizations make informed decisions regarding their data analysis needs and software preferences.
Funding: Under the auspices of the Natural Science Foundation of China (No. 41971166).
Abstract: The urban functional area (UFA) is a core scientific issue affecting urban sustainability. The current knowledge gap mainly lies in the lack of multi-scale quantitative interpretation methods from the perspective of human-land interaction. In this paper, based on multi-source big data including 250 m × 250 m resolution cell phone data, 1.81×10^5 Points of Interest (POI) data and administrative boundary data, we built a UFA identification method and demonstrated it empirically in Shenyang City, China. We argue that the method can effectively identify multi-scale, multi-type UFAs based on human activity and further reveal the spatial correlation between urban facilities and human activity. The empirical study suggests that the employment functional zones in Shenyang City are more concentrated in the central city than other single functional zones. There are more mixed functional areas in the central city areas, while the planned industrial new cities need to develop comprehensive functions. UFAs have scale effects and human-land interaction patterns. We suggest that city decision makers apply multi-source big data to measure urban functional services in a more refined manner from a supply-demand perspective.
Funding: Funded by the High-Quality and Cutting-Edge Discipline Construction Project for Universities in Beijing (Internet Information, Communication University of China).
Abstract: Multi-source data play an important role in the evolution of media convergence. Their fusion processing enables further mining of data and utilization of data value and broadens the path for the sharing and dissemination of media data. However, it also faces serious problems in terms of protecting user and data privacy. Many privacy protection methods have been proposed to solve the problem of privacy leakage during data sharing, but they suffer from two flaws: 1) the lack of algorithmic frameworks for specific scenarios such as dynamic datasets in the media domain; 2) the inability to solve the problem of the high computational complexity of ciphertext in multi-source data privacy protection, resulting in long encryption and decryption times. In this paper, we propose a multi-source data privacy protection method based on homomorphic encryption and blockchain technology, which solves the privacy protection problem of multi-source heterogeneous data in media dissemination and reduces ciphertext processing time. We deployed the proposed method on the Hyperledger platform for testing and compared it with privacy protection schemes based on k-anonymity and differential privacy. The experimental results show that the key generation, encryption, and decryption times of the proposed method are lower than those of data privacy protection methods based on k-anonymity and differential privacy technology. This significantly reduces the processing time of multi-source data, which gives it potential for use in many applications.
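The homomorphic-encryption idea the abstract relies on, computing on ciphertexts so that the result decrypts to a function of the plaintexts, can be illustrated with a toy Paillier-style additively homomorphic scheme. The tiny primes below are deliberately insecure and the code is only a teaching sketch, not the paper's scheme or production cryptography:

```python
import math
import random

# Toy Paillier keypair with tiny, INSECURE primes (illustration only)
p, q = 11, 13
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u):
    """The Paillier L-function: L(u) = (u - 1) / n for u ≡ 1 (mod n)."""
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse via 3-arg pow (Python 3.8+)

def encrypt(m):
    """Enc(m) = g^m * r^n mod n^2, with random r coprime to n."""
    r = random.choice([x for x in range(2, n) if math.gcd(x, n) == 1])
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Dec(c) = L(c^lambda mod n^2) * mu mod n."""
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 37, 58
c_sum = (encrypt(a) * encrypt(b)) % n2   # multiplying ciphertexts...
assert decrypt(c_sum) == (a + b) % n     # ...adds the plaintexts
```

The additive property is what makes it possible to aggregate shared media data without ever exposing individual plaintexts; the cost, as the abstract notes, is heavy ciphertext arithmetic, which motivates the paper's effort to cut encryption and decryption times.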
Funding: Supported by the National Natural Science Foundation of China (U1704134).
Abstract: Very low frequency (VLF) signals propagate in the waveguide between the ground and the ionosphere. At night, multimode interference causes the phase to oscillate with distance during propagation, leading to abnormalities in the received VLF signal. This study uses the VLF signal received in Qingdao City, Shandong Province, from the Russian Alpha navigation system to explore the multimode interference problem of VLF signal propagation. The characteristics of the effect of multimode interference on the phase are analyzed according to the variation of the phase of the VLF signal. However, the phase of VLF signals is also affected by the X-rays and energetic particles released during solar flare eruptions, so the two phenomena are studied together in this work. It is concluded that X-rays do not affect the phase of VLF signals at night, but energetic particles do affect the phase change, and their influence should be excluded in the study of multimode interference phenomena. Using VLF signals for navigation and positioning when GPS is degraded or unavailable is of great practical significance; avoiding the influence of multimode interference can improve positioning accuracy for VLF navigation systems.
Abstract: Highly turbulent water flows, often encountered near human constructions like bridge piers, spillways, and weirs, display intricate dynamics characterized by the formation of eddies and vortices. These formations, varying in size and lifespan, significantly influence the distribution of fluid velocities within the flow. Subsequently, the rapid velocity fluctuations in highly turbulent flows lead to elevated shear and normal stress levels. For this reason, to study these dynamics meticulously, physical modeling is more often than not employed for studying the impact of turbulent flows on the stability and longevity of nearby structures. Despite the effectiveness of physical modeling, various monitoring challenges arise, including flow disruption, the necessity for concurrent gauging at multiple locations, and the duration of measurements. Addressing these challenges, image velocimetry emerges as an ideal method in fluid mechanics, particularly for studying turbulent flows. To account for measurement duration, a probabilistic approach utilizing a probability density function (PDF) is suggested to mitigate uncertainty in estimated average and maximum values. However, it becomes evident that deriving the PDF is not straightforward for all turbulence-induced stresses. In response, this study proposes a novel approach that combines image velocimetry with a stochastic model to provide a generic yet accurate description of flow dynamics in such applications. This integration enables an approach based on the probability of failure, facilitating a more comprehensive analysis of turbulent flows. Such an approach is essential for estimating both short- and long-term stresses on hydraulic constructions under assessment.
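The probability-of-failure idea above reduces, in its simplest form, to estimating the chance that an instantaneous stress exceeds a design threshold from the empirical distribution of measured samples. The sketch below uses synthetic Gaussian stress samples; the mean, spread, and threshold are invented, and real turbulence-induced stresses need not be Gaussian:

```python
import random

def exceedance_probability(samples, threshold):
    """Empirical P(X > threshold) estimated from measured samples."""
    return sum(1 for x in samples if x > threshold) / len(samples)

random.seed(42)
# Hypothetical shear-stress samples (Pa): mean 50, turbulence-induced spread 10
stresses = [random.gauss(50.0, 10.0) for _ in range(10_000)]
p_fail = exceedance_probability(stresses, 70.0)  # roughly the 2-sigma tail
```

Fitting a parametric PDF (or the paper's stochastic model) to the same samples extends this short-record estimate to the long-duration exceedance probabilities needed for structural assessment.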
Abstract: The study evaluated the hydrocarbon potential of the FORMAT Field, coastal swamp depobelt, Niger Delta, Nigeria, to obtain a more efficient reservoir characterization and identification of fluid properties. Despite advances in seismic data interpretation, obtaining adequate reservoir characteristics at the finest level using traditional 3D seismic interpretation had proved very challenging, with often disappointing results. A method that integrates amplitude variation with offset (AVO) analysis is hereby proposed to better illuminate the reservoir. Hampson Russell 10.3 was used to integrate and study the available seismic and well data. The reservoir of interest was delineated using the available suite of petrophysical data, marked by low gamma ray, high resistivity, and low acoustic impedance within a true vertical depth subsea (TVDss) range of 10,350 - 10,450 ft. The AVO fluid substitution yielded a decrease in the density values of pure gas (2.3 - 1.6 g/cc) and pure oil (2.3 - 1.8 g/cc), while pure brine increased (2.3 to 2.8 g/cc). Results from the FORMAT 26 plots yielded a negative intercept and negative gradient at the top and a positive intercept and positive gradient at the base, which conforms to a Class III AVO anomaly. The FORMAT 30 plots yielded a negative intercept and positive gradient at the top and a positive intercept and negative gradient at the base, which conforms to a Class IV AVO anomaly. AVO attribute volume slices showed a decrease in the Poisson ratio (0.96 to -1.0), indicating that the reservoir contains hydrocarbon. The S-wave reflectivity and the product of the intercept and gradient further clarified that there was Class 3 gas sand in the reservoir and the possibility of a Class 4 gas sand anomaly in the same reservoir.
Abstract: Traditional global sensitivity analysis (GSA) neglects the epistemic uncertainties associated with the probabilistic characteristics (i.e., distribution type and its parameters) of input rock properties emanating from the small size of datasets while mapping the relative importance of properties to the model response. This paper proposes an augmented Bayesian multi-model inference (BMMI) coupled with GSA methodology (BMMI-GSA) to address this issue by estimating the imprecision in the moment-independent sensitivity indices of rock structures arising from the small size of input data. The methodology employs BMMI to quantify the epistemic uncertainties associated with the model type and parameters of input properties. The estimated uncertainties are propagated to estimate the imprecision in moment-independent Borgonovo’s indices by employing a reweighting approach on candidate probabilistic models. The proposed methodology is showcased for a rock slope prone to stress-controlled failure in the Himalayan region of India. It was superior to the conventional GSA (which neglects all epistemic uncertainties) and Bayesian-coupled GSA (B-GSA) (which neglects model uncertainty) owing to its capability to incorporate the uncertainties in both the model type and the parameters of properties. Imprecise Borgonovo’s indices estimated via the proposed methodology provide confidence intervals of the sensitivity indices instead of fixed-point estimates, which makes the user better informed in data collection efforts. Analyses performed with varying sample sizes suggested that the uncertainties in sensitivity indices reduce significantly with increasing sample size, and accurate importance ranking of properties was only possible with large sample sizes. Further, the impact of prior knowledge in terms of prior ranges and distributions was significant; hence, any related assumption should be made carefully.
Funding: The Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia has funded this project, under Grant No. (FP-205-43).
Abstract: The outbreak of the pandemic caused by Coronavirus Disease 2019 (COVID-19) has affected the daily activities of people across the globe. During the COVID-19 outbreak and the successive lockdowns, Twitter was heavily used, and the number of tweets regarding COVID-19 increased tremendously. Several studies used Sentiment Analysis (SA) to analyze the emotions expressed through tweets about COVID-19. Therefore, in the current study, a new Artificial Bee Colony (ABC) with Machine Learning-driven SA (ABCML-SA) model is developed for conducting sentiment analysis of COVID-19 Twitter data. The prime focus of the presented ABCML-SA model is to recognize the sentiments expressed in tweets made upon COVID-19. It involves data pre-processing at the initial stage, followed by n-gram based feature extraction to derive the feature vectors. For identification and classification of the sentiments, the Support Vector Machine (SVM) model is exploited. At last, the ABC algorithm is applied to fine-tune the parameters involved in the SVM. To demonstrate the improved performance of the proposed ABCML-SA model, a sequence of simulations was conducted. The comparative assessment results confirmed the effectual performance of the proposed ABCML-SA model over other approaches.
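The n-gram feature-extraction step named above can be sketched as a bag-of-bigrams count vector; the whitespace tokenisation, n = 2, and the sample tweet are illustrative choices, not the paper's exact configuration:

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def feature_vector(text, n=2):
    """Bag-of-n-grams counts, usable as input features for a classifier such as an SVM."""
    tokens = text.lower().split()
    return Counter(ngrams(tokens, n))

vec = feature_vector("stay home stay safe during covid")
# e.g. the bigram ("stay", "home") appears once
```

In the full pipeline, these count vectors feed the SVM, and the ABC algorithm then searches the SVM hyperparameter space (e.g. the regularization constant) for the best cross-validated fitness.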
Funding: Supported by the Natural Science Foundation of Sichuan Province (2023NSFSC1799), the Science and Technology Development Fund of the Affiliated Hospital of Chengdu University of Traditional Chinese Medicine (21ZS05, 23YY07), and the Chengdu University of Traditional Chinese Medicine Xinglin Scholar Postdoctoral Program (BSH2023010).
Abstract: This study sought to conduct a bibliometric analysis of acupuncture studies focusing on heart rate variability (HRV) and to investigate the correlation between various acupoints and their effects on HRV by utilizing association rule mining and network analysis. A total of 536 publications on acupuncture studies based on HRV were analyzed. The disease keyword analysis revealed that HRV-related acupuncture studies were mainly related to pain, inflammation, emotional disorders, gastrointestinal function, and hypertension. A separate analysis was conducted on acupuncture prescriptions, in which Neiguan (PC6) and Zusanli (ST36) were the most frequently used acupoints. The core acupoints for HRV regulation were identified as PC6, ST36, Shenmen (HT7), Hegu (LI4), Sanyinjiao (SP6), Jianshi (PC5), Taichong (LR3), Quchi (LI11), Guanyuan (CV4), Baihui (GV20), and Taixi (KI3). Additionally, the research encompassed 46 reports on acupuncture animal experiments on HRV, with ST36 being the most frequently utilized acupoint. The research presented in this study offers valuable insights into global research trends and hotspots in acupuncture-based HRV studies, as well as identifying frequently used combinations of acupoints. The findings may be helpful for further research in this field and provide valuable information about the potential use of acupuncture for improving HRV in both humans and animals.
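The association-rule idea behind finding frequently used acupoint combinations reduces to counting pair co-occurrences (support) and conditional frequencies (confidence) across prescriptions. The four toy prescriptions below are invented for illustration; only the acupoint codes follow the abstract's naming:

```python
from collections import Counter
from itertools import combinations

def pair_support(prescriptions):
    """Count how often each acupoint pair co-occurs across prescriptions."""
    pairs = Counter()
    for points in prescriptions:
        for a, b in combinations(sorted(set(points)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical prescriptions (acupoint codes as in the abstract)
rx = [["PC6", "ST36", "HT7"],
      ["PC6", "ST36"],
      ["ST36", "LI4"],
      ["PC6", "HT7"]]
support = pair_support(rx)
# confidence of the rule PC6 -> ST36: co-occurrences / prescriptions containing PC6
conf_pc6_st36 = support[("PC6", "ST36")] / sum(1 for p in rx if "PC6" in p)
```

Ranking pairs by support and filtering by a confidence threshold yields exactly the kind of core-combination list the study reports (e.g. PC6 with ST36).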
Abstract: Android smartphones largely dominate the smartphone market. For this reason, it is very important to examine these smartphones in terms of digital forensics, since they are often used as evidence in trials. It is possible to acquire a physical or logical image of these devices, and each acquisition type has advantages and disadvantages compared to the other. Creating the logical image is done at the file system level, and analysis can be made on this logical image. Both logical image acquisition and analysis of the image can be done with software tools. In this study, the differences between logical and physical image acquisition on Android smartphones, their relative advantages and disadvantages, the difficulties that may be encountered in obtaining physical images, which type of image contributes to obtaining more useful and effective data, which one should be preferred under different conditions, and the benefits of having root authority are discussed. The practice of getting the logical image of Android smartphones and analyzing the image is also included. Although root privileges are not required for logical image acquisition, it has been observed that very limited data will be obtained from a logical image created without root privileges. Nevertheless, logical image acquisition still has advantages over physical image acquisition.