Quantum key distribution provides an unconditionally secure key-sharing method in theory, but imperfections in practical devices introduce security vulnerabilities. In this paper, we characterize the imperfections of the sender and analyze the possible attack strategies of Eve. First, we present a quantized model for the distinguishability of decoy states caused by intensity modulation. Second, considering that Eve may control the preparation of states through hidden variables, we evaluate the security of state preparation in a practical quantum key distribution (QKD) scheme based on the weak-randomness model. Finally, we analyze the influence of the distinguishability of decoy states on the secure key rate, since Eve may conduct a beam-splitting attack and control the channel attenuation of different parts. The simulation shows that the secure key rate is sensitive to the distinguishability of decoy states and to weak randomness, especially when Eve can control the channel attenuation.
Random numbers are one of the key foundations of cryptography. This work implements a discrete quantum random number generator (QRNG) based on the tunneling effect of electrons in an avalanche photodiode. Without any post-processing or conditioning, this QRNG can output raw sequences at a rate of 100 Mbps. Remarkably, the statistical min-entropy of an 8,000,000-bit sequence reaches 0.9944 bits/bit, and the min-entropy validated by NIST SP 800-90B reaches 0.9872 bits/bit. This is the highest value we have found for QRNG raw sequences. Moreover, this QRNG can continuously and stably output raw sequences with high randomness over extended periods. The system produced a continuous 1,174 Gbit raw sequence over a duration of 11,744 s; with every 8 Mbits forming a unit, the resulting statistical min-entropy distribution has an average value of 0.9892 bits/bit, and the statistical min-entropy of all the data (1,174 Gbits) reaches 0.9951 bits/bit. This QRNG produces high-quality raw sequences with good randomness and stability, and has the potential to meet the high demand in cryptography for high-quality random numbers.
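The "bits/bit" figures quoted in this abstract can be illustrated with a minimal sketch: the per-bit statistical min-entropy of a binary sequence is H_min = -log2(p_max), where p_max is the frequency of the most common symbol. This is only the textbook definition applied to single bits, not the full NIST SP 800-90B estimator suite the paper uses.

```python
# Sketch (assumption): per-bit statistical min-entropy of a 0/1 sequence,
# H_min = -log2(p_max). Not the NIST SP 800-90B estimators themselves.
import math

def min_entropy_per_bit(bits):
    """Return -log2(max symbol probability) for a 0/1 sequence."""
    n = len(bits)
    ones = sum(bits)
    p_max = max(ones, n - ones) / n
    return -math.log2(p_max)

# A perfectly balanced sequence attains the ideal 1 bit/bit.
print(min_entropy_per_bit([0, 1] * 4000))  # → 1.0
```

A biased sequence scores lower: three zeros and one one give p_max = 0.75, so about 0.415 bits/bit.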
Is it true that there is an implicit understanding that Brownian motion or fractional Brownian motion is the driving force behind stock price fluctuations? An analysis of daily prices and volumes of a particular stock revealed the following findings: 1) the logarithms of the moving averages of stock prices and volumes have a strong positive correlation, even though price and volume appear to fluctuate independently of each other; 2) price and volume fluctuations are messy, but replacing each daily value by 1 or –1 according to whether it rose or fell relative to the previous day's value shows that these time series are not necessarily Brownian motion; and 3) frequency analysis shows that the difference between the previous day's volume and the current day's volume is periodic. Using these findings, we constructed differential equations for stock prices, the number of buy orders, and the number of sell orders. These equations include terms for both randomness and periodicity. It is apparent that both randomness and periodicity are essential for stock price fluctuations to be sustainable, and that stock prices stochastically show large hill-like or valley-like fluctuations without any increasing or decreasing trend, repeating themselves over a certain range.
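The sign transform described in finding 2) above is simple to state in code: each daily value is replaced by +1 or -1 according to whether it rose or fell against the previous day. The handling of ties (no change) is not specified in the abstract, so this sketch drops them, which is an assumption.

```python
# Sketch of the up/down sign transform from the abstract above.
# Assumption: ties (no change from the previous day) are dropped.
def updown_signs(series):
    signs = []
    for prev, cur in zip(series, series[1:]):
        if cur != prev:
            signs.append(1 if cur > prev else -1)
    return signs

print(updown_signs([100, 101, 99, 99, 103]))  # → [1, -1, 1]
```

The resulting ±1 sequence is what one would then inspect for deviations from a symmetric random walk.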
Six samples of linear, highly random 60PHB/PET thermotropic liquid crystal copolyesters were made by melt copolymerization at 290 ℃; their randomness, about 0.955, was measured by <sup>1</sup>H-NMR spectrometry. High-tenacity, high-modulus fiber was prepared by melt spinning in the liquid crystal phase. The effects of molecular weight, shear rate, temperature, and spinning draw ratio on the mechanical behavior of the 60PHB/PET copolyester fiber show that a lower shear rate (2–10 s<sup>-1</sup>), higher melting temperature (300 ℃), lower spinning temperature (280 ℃), and higher molecular weight are favourable to improving the fiber's mechanical properties. As the draw ratio varies, the fiber's mechanical properties show a transition point due to the changeover from shear orientation to draw orientation. The copolyester fiber has high crystallinity, high orientation in the crystalline region, high chain orientation, and a highly regular fibrillar structure.
Quantum randomness amplification protocols have increasingly attracted attention for their fantastic ability to amplify weak randomness to almost ideal randomness by utilizing quantum systems. Recently, a realistic noise-tolerant randomness amplification protocol using a finite number of untrusted devices was proposed. The protocol has composable security against non-signalling eavesdroppers and can produce a single bit of randomness from weak randomness sources, certified by the violation of certain Bell inequalities. However, the protocol has a non-negligible limitation on the min-entropy of the independent sources. In this paper, we further develop the randomness amplification method and present a novel quantum randomness amplification protocol based on an explicit non-malleable two-independent-source randomness extractor, which remarkably relaxes the above-mentioned limitation. Moreover, the composable security of our improved protocol is also established. Our results could significantly expand the application range of practical quantum randomness amplification and provide new insight into practical design methods for randomness extraction.
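For readers unfamiliar with two-independent-source extractors, here is a toy illustration of the interface: the classic inner-product-mod-2 extractor of Chor and Goldreich, which outputs one bit from two independent n-bit weak sources. The abstract's protocol relies on a stronger, explicitly non-malleable two-source extractor; this minimal variant only shows the general idea, not that construction.

```python
# Toy sketch (assumption: illustrative only): the Chor-Goldreich
# inner-product-mod-2 two-source extractor. One output bit from two
# independent weak bit strings of equal length.
def inner_product_extractor(x_bits, y_bits):
    assert len(x_bits) == len(y_bits)
    return sum(a & b for a, b in zip(x_bits, y_bits)) % 2

print(inner_product_extractor([1, 0, 1, 1], [1, 1, 0, 1]))  # → 0
```

When both inputs have sufficient min-entropy and are independent, the output bit is close to uniform; a non-malleable extractor additionally resists tampering with one source.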
Cronbach's alpha coefficient is the most popular method of examining reliability. It is typically used when the researcher has several Likert-type items that are summed or averaged to make a composite score. The distribution of the alpha coefficient has been the subject of many studies. This study investigates the relationship between randomness and Cronbach's alpha coefficient, examining the question "What is the distribution of the coefficient alpha when a Likert-type scale is answered randomly?" Data were generated in the form of five-point Likert-type items, and a Monte Carlo simulation was run 5000 times for different numbers of items.
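A minimal version of this Monte Carlo question can be sketched directly: compute Cronbach's alpha for five-point Likert items answered uniformly at random. The item count, sample size, and replication count below are illustrative placeholders, not the study's settings.

```python
# Sketch (assumptions: 10 items, 200 respondents, 100 replications are
# illustrative, not the paper's design). Cronbach's alpha for purely
# random five-point Likert answers should center near zero.
import random
import statistics

def cronbach_alpha(rows):
    """rows: list of respondents, each a list of k item scores."""
    k = len(rows[0])
    item_vars = [statistics.pvariance([r[i] for r in rows]) for i in range(k)]
    total_var = statistics.pvariance([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

random.seed(0)
alphas = [
    cronbach_alpha([[random.randint(1, 5) for _ in range(10)] for _ in range(200)])
    for _ in range(100)
]
print(round(statistics.mean(alphas), 3))
```

With independent random answers, item variances sum to roughly the total variance, so alpha fluctuates around zero rather than approaching one.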
We describe here a comprehensive framework for intelligent information management (IIM) of data collection and decision-making actions for reliable and robust event processing and recognition. This is driven by algorithmic information theory (AIT) in general, and algorithmic randomness and Kolmogorov complexity (KC) in particular. The processing and recognition tasks addressed include data discrimination and multilayer open-set data categorization, change detection, data aggregation, clustering and data segmentation, data selection and link analysis, data cleaning and data revision, and prediction and identification of critical states. The unifying theme throughout the paper is that of "compression entails comprehension", which is realized using the interrelated concepts of randomness vs. regularity and Kolmogorov complexity. The constructive and all-encompassing active learning (AL) methodology, which mediates and supports the above theme, is context-driven and takes advantage of statistical learning in general, and semi-supervised learning and transduction in particular. Active learning employs explore and exploit actions characteristic of closed-loop control for evidence accumulation in order to revise its prediction models and to reduce uncertainty. The set-based similarity scores, driven by algorithmic randomness and Kolmogorov complexity, employ strangeness/typicality and p-values. We propose the application of the IIM framework to critical-state prediction for complex physical systems; in particular, the prediction of cyclone genesis and intensification.
This paper deals with the effect of randomness in the pressure of carbonic gas on the carbonation of reinforced concrete. The analysis concentrates on the probabilistic evaluation of the carbonation depth (Xc) and the carbonation time (T1), i.e., the time necessary for the carbonation front to reach the reinforcement. Monte Carlo simulations are realized under the assumption that the carbonic gas pressure on the surface of the concrete is a random variable with a log-normal probability distribution.
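In the spirit of the study above, a Monte Carlo sketch can show how a log-normal CO2 pressure propagates to an exceedance probability. The carbonation law Xc = k·sqrt(P)·sqrt(t), the coefficient k, the distribution moments, and the cover depth below are all placeholder assumptions for illustration, not the paper's model.

```python
# Sketch (assumptions: the square-root carbonation law, coefficient k,
# log-normal parameters, and 30 mm cover are illustrative placeholders).
import math
import random

def carbonation_exceedance(cover_mm=30.0, t_years=50.0, k=3.0,
                           mu=0.0, sigma=0.3, n=100_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        p = rng.lognormvariate(mu, sigma)           # random CO2 pressure
        xc = k * math.sqrt(p) * math.sqrt(t_years)  # carbonation depth (mm)
        if xc >= cover_mm:
            hits += 1
    return hits / n  # estimated probability the front reaches the rebar

prob = carbonation_exceedance()
print(0.0 <= prob <= 1.0)  # → True
```

The same loop, run over a grid of times t, yields an empirical distribution for the carbonation time T1.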
The objectives of this paper are to demonstrate the algorithms employed by three statistical software programs (R, Real Statistics using Excel, and SPSS) for calculating the exact two-tailed probability of the Wald-Wolfowitz one-sample runs test for randomness, to present a novel approach for computing this probability, and to compare the four procedures by generating samples of 10 and 11 data points, varying the parameters n<sub>0</sub> (number of zeros) and n<sub>1</sub> (number of ones), as well as the number of runs. Fifty-nine samples were created to replicate the behavior of the distribution of the number of runs with 10 and 11 data points. The exact two-tailed probabilities for the four procedures were compared using Friedman's test. Given the significant difference in central tendency, post-hoc comparisons were conducted using Conover's test with Benjamini-Yekutieli correction. It is concluded that the procedures of Real Statistics using Excel and R exhibit some inadequacies in the calculation of the exact two-tailed probability, whereas the new proposal and the SPSS procedure are deemed more suitable. The proposed robust algorithm has a more transparent rationale than the SPSS one, albeit being somewhat more conservative. We recommend its implementation for this test and its application to others, such as the binomial and sign tests.
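The exact null distribution underlying the runs test is fully combinatorial, so a small sketch can make the comparison concrete. The two-tailed rule used here (summing all outcomes no more probable than the observed one) is one common convention; as the abstract notes, the compared packages differ precisely in how this tail is formed.

```python
# Sketch: exact null pmf of the number of runs R for n0 zeros and n1
# ones, plus one possible exact two-tailed rule (assumption: the
# "sum of all outcomes no more probable than observed" convention).
from math import comb

def runs_pmf(r, n0, n1):
    denom = comb(n0 + n1, n0)
    if r % 2 == 0:
        k = r // 2
        return 2 * comb(n0 - 1, k - 1) * comb(n1 - 1, k - 1) / denom
    k = (r - 1) // 2
    return (comb(n0 - 1, k - 1) * comb(n1 - 1, k)
            + comb(n0 - 1, k) * comb(n1 - 1, k - 1)) / denom

def exact_two_tailed(r_obs, n0, n1):
    p_obs = runs_pmf(r_obs, n0, n1)
    return sum(runs_pmf(r, n0, n1)
               for r in range(2, 2 * min(n0, n1) + 2)
               if runs_pmf(r, n0, n1) <= p_obs + 1e-12)

print(exact_two_tailed(2, 5, 5))
```

For n0 = n1 = 5 and 2 observed runs, only R = 2 and R = 10 are as improbable, giving an exact two-tailed probability of 4/252.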
This paper investigates low earth orbit (LEO) satellite-enabled coded compressed sensing (CCS) unsourced random access (URA) in an orthogonal frequency division multiple access (OFDMA) framework, where a massive uniform planar array (UPA) is equipped on the satellite. In LEO satellite communications, unavoidable timing and frequency offsets cause phase shifts in the transmitted signals, substantially diminishing the decoding performance of current terrestrial CCS URA receivers. To cope with this issue, we expand the inner codebook with predefined timing and frequency offsets and formulate the inner decoding as a tractable compressed sensing (CS) problem. Additionally, we leverage the inherent sparsity of the UPA-equipped LEO satellite angular-domain channels, thereby enabling the outer decoder to support more active devices. Furthermore, the outputs of the outer decoder are used to reduce the search space of the inner decoder, which cuts down the computational complexity and accelerates the convergence of the inner decoding. Simulation results verify the effectiveness of the proposed scheme.
In the practical environment, the simultaneous occurrence of base excitation and crosswind is very common, and scavenging the combined energy of vibration and wind with a single energy harvesting structure is fascinating. For this purpose, the effects of the wind speed and random excitation level are investigated with the stochastic averaging method (SAM) based on the energy envelope. The results of the analytical prediction are verified with the Monte Carlo method (MCM). The numerical simulation shows that the introduction of wind can reduce the critical excitation level for triggering an inter-well jump and enable a bi-stable energy harvester (BEH) to achieve enhanced performance under weak base excitation. However, as the strength of the wind increases to a particular level, the influence of the random base excitation on the dynamic responses is weakened, and the system exhibits a periodic galloping response. A comparison between a BEH and a linear energy harvester (LEH) indicates that the BEH demonstrates inferior performance for high-speed wind. Relevant experiments are conducted to investigate the validity of the theoretical prediction and numerical simulation. The experimental findings also show that strong random excitation is favorable for the BEH in the range of low wind speeds. However, once the speed of the incoming wind reaches a particular level, the disadvantage of the BEH becomes clear and evident.
The aim of this study is to investigate the impacts of the sampling strategy for landslide and non-landslide samples on the performance of landslide susceptibility assessment (LSA). The study area is the Feiyun catchment in Wenzhou City, Southeast China. Two types of landslide samples, combined with seven non-landslide sampling strategies, resulted in a total of 14 scenarios. The corresponding landslide susceptibility map (LSM) for each scenario was generated using the random forest model. The receiver operating characteristic (ROC) curve and statistical indicators were calculated and used to assess the impact of the dataset sampling strategy. The results showed that higher accuracies were achieved when using the landslide core as positive samples, combined with non-landslide sampling from the very low zone or buffer zone. The results reveal the influence of landslide and non-landslide sampling strategies on the accuracy of LSA, which provides a reference for subsequent researchers aiming to obtain a more reasonable LSM.
This paper examines the impacts of information about COVID-19 on pig farmers' production willingness by using endorsement experiments and follow-up surveys conducted in 2020 and 2021 in China. Our results show that, first, farmers were less willing to scale up production when they received information about COVID-19. The information in 2020 that a second wave of COVID-19 might occur without a vaccine reduced farmers' willingness to scale up by 13.4%, while the information in 2021 that COVID-19 might continue to spread despite the introduction of vaccines reduced farmers' willingness by 4.4%. Second, farmers whose production was affected by COVID-19 were considerably less willing to scale up, given access to COVID-19 information. Third, farmers' production willingness can predict their actual production behavior.
Experiments are conducted on the evacuation rate of pedestrians through exits with a queued evacuation pattern and a random evacuation pattern. The experimental results show that the pedestrian flow rate is larger with the random evacuation pattern than with the queued evacuation pattern. Therefore, the exit width calculated based on the minimum evacuation clear width for every 100 persons, which assumes that pedestrians pass through the exit in one or several queues, is conservative. The number of people crossing the exit simultaneously is greater in the random evacuation experiments than in the queued evacuation experiments, and the time interval between the front row and rear row of people is shortened in large-exit conditions when pedestrians evacuate randomly. The difference between the flow rates of the two evacuation patterns is related to the surplus width of the exit, i.e., the width exceeding the total width of all accommodated pedestrian streams. Two dimensionless quantities are defined to explore this relationship. It is found that the difference in flow rate between the two evacuation patterns stays at a low level when the surplus width of the exit is no more than 45% of the width of a single pedestrian stream. There is a great difference between the two flow rates in scenarios with a larger surplus width of the exit; meanwhile, pedestrians crowd severely at the exit in these conditions, since the number of pedestrians who want to evacuate through the exit simultaneously greatly exceeds the accommodated level. Therefore, the surplus width of the exit should be limited, especially in narrow-exit conditions, and the relationship between the two dimensionless quantities mentioned above can provide a basis for this to some extent.
Cell migration plays a significant role in physiological and pathological processes. Understanding the characteristics of cell movement is crucial for comprehending biological processes such as cell functionality, cell migration, and cell-cell interactions. One of the fundamental characteristics of cell movement is the specific distribution of cell speed, which contains valuable information that still requires comprehensive understanding. This article investigates the distribution of mean velocities along cell trajectories, with a focus on optimizing the efficiency of cell food search in the context of the entire colony. We confirm that the specific velocity distribution in the experiments corresponds to an optimal search efficiency when spatial weighting is considered. The simulation results indicate that the distribution of average velocity does not align with the optimal search efficiency when employing average spatial weighting. However, when considering the distribution of central spatial weighting, the specific velocity distribution in the experiment is shown to correspond to the optimal search efficiency. Our simulations reveal that for any given distribution of average velocity, a specific central spatial weighting can be identified among the possible central spatial weightings that aligns with the optimal search strategy. Additionally, our work presents a method for determining the spatial weights embedded in the velocity distribution of cell movement. Our results provide new avenues for further investigation of significant topics, such as the relationship between cell behavior and environmental conditions throughout their evolutionary history, and how cells achieve collective cooperation through cell-cell communication.
In the context of global mean square error with respect to the number of random variables in the representation, the Karhunen–Loève (KL) expansion is the optimal series expansion method for random field discretization. The computational efficiency and accuracy of the KL expansion are contingent upon the accurate resolution of the Fredholm integral eigenvalue problem (IEVP). This paper proposes an interpolation method based on different interpolation basis functions, such as moving least squares (MLS), least squares (LS), and the finite element method (FEM), to solve the IEVP. Compared with the Galerkin method based on finite elements or Legendre polynomials, the main advantage of the interpolation method is that, in the calculation of eigenvalues and eigenfunctions in one-dimensional random fields, the integral matrix containing the covariance function only requires a single integral, rather than the two-fold integral of the Galerkin method. The effectiveness and computational efficiency of the proposed interpolation method are verified through various one-dimensional examples. Furthermore, based on the KL expansion and polynomial chaos expansion, stochastic analysis of two-dimensional regular and irregular domains is conducted, and the basis function of the extended finite element method (XFEM) is introduced as the interpolation basis function in two-dimensional irregular domains to solve the IEVP.
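The Fredholm IEVP discussed above reduces, after discretization, to a matrix eigenvalue problem. The following is a minimal Nyström-style sketch for an exponential covariance kernel C(x, y) = exp(-|x - y| / l) on [0, 1]; the grid size, correlation length, and midpoint quadrature are illustrative choices, not the MLS/LS/FEM interpolation bases of the paper.

```python
# Sketch (assumptions: exponential kernel, midpoint quadrature, grid
# size and correlation length are placeholders). Nyström discretization
# of the Fredholm integral eigenvalue problem behind the KL expansion.
import numpy as np

def kl_eigenpairs(n=200, corr_len=0.5):
    # Midpoint quadrature nodes and uniform weights on [0, 1].
    x = (np.arange(n) + 0.5) / n
    w = 1.0 / n
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    # Eigen-decomposition of the weighted kernel matrix w * C.
    lam, vec = np.linalg.eigh(w * C)
    return lam[::-1], vec[:, ::-1]  # descending order

lam, _ = kl_eigenpairs()
# Eigenvalues are positive and sum to the integrated variance, here 1.
print(round(float(lam.sum()), 6))  # → 1.0
```

Truncating the expansion after the leading eigenpairs gives the finite KL representation; the eigenvalue decay controls how many random variables are needed.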
Objective: This study explored potentially modifiable factors for depression and major depressive disorder (MDD) from the MR-Base database and further evaluated the associations between drug targets and MDD. Methods: We performed two-sample Mendelian randomization (2SMR) analyses using genetic variants for depression (n=113,154) and MDD (n=208,811) from genome-wide association studies (GWAS). Separate calculations were performed with modifiable risk factors from MR-Base for 1,001 genomes. MR analysis was performed by screening drug targets associated with MDD in the DrugBank database to explore therapeutic targets for MDD. Inverse variance weighted (IVW), fixed-effect inverse variance weighted (FE-IVW), MR-Egger, weighted median, and weighted mode methods were used for complementary calculation. Results: The potential causal relationships between modifiable risk factors and depression comprised 459 results for depression and 424 for MDD. The associations between drug targets and MDD showed that SLC6A4, GRIN2A, GRIN2C, SCN10A, and IL1B expression is associated with an increased risk of depression. In contrast, the ADRB1, CHRNA3, HTR3A, GSTP1, and GABRG2 genes are candidate protective factors against depression. Conclusion: This study identified risk factors causally associated with depression and MDD, and estimated 10 drug targets with significant impact on MDD, providing essential information for formulating strategies to prevent and treat depression.
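The IVW estimator named in the Methods above combines per-variant Wald ratios with inverse-variance weights. A minimal sketch of that formula is below; the input numbers are illustrative placeholders, not GWAS summary statistics.

```python
# Sketch (assumption: example effect sizes are fabricated for
# illustration). Inverse-variance-weighted (IVW) estimate used in
# two-sample Mendelian randomization:
#   beta_IVW = sum(bx*by/se^2) / sum(bx^2/se^2)
def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    num = sum(bx * by / se**2
              for bx, by, se in zip(beta_exposure, beta_outcome, se_outcome))
    den = sum(bx**2 / se**2 for bx, se in zip(beta_exposure, se_outcome))
    return num / den

# If every outcome effect is exactly half the exposure effect,
# the IVW estimate recovers the ratio 0.5.
bx = [0.1, 0.2, 0.15]
print(ivw_estimate(bx, [0.05, 0.10, 0.075], [0.01, 0.02, 0.015]))  # → 0.5
```

MR-Egger, weighted median, and weighted mode differ in how they weight or robustify these same per-variant ratios.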
The article introduces a finite element procedure using the bilinear quadrilateral element, or four-node rectangular element (namely the Q4 element), based on a refined first-order shear deformation theory (rFSDT) and Monte Carlo simulation (MCS), a so-called refined stochastic finite element method, to investigate the random vibration of functionally graded material (FGM) plates subjected to a moving load. The advantage of the proposed method is that it uses the rFSDT to improve the accuracy of the classical FSDT, satisfies the stress-free condition at the plate boundaries, and combines with MCS to analyze the vibration of the FGM plate when the input parameters are random quantities following a normal distribution. The obtained results show that the distribution characteristics of the vibration response of the FGM plate depend on the standard deviation of the input parameters and the velocity of the moving load. Furthermore, the numerical results in this study are expected to contribute to improving the understanding of FGM plates subjected to moving loads with uncertain input parameters.
Infection of leukemia in humans causes many complications in its later stages, as it impairs the bone marrow's ability to produce blood. Morphological diagnosis of human blood cells is a well-known and well-proven technique for diagnosis in this case. Binary classification is employed to distinguish between normal and leukemia-infected cells. In addition, various subtypes of leukemia require different treatments, so these subclasses must also be detected to obtain an accurate diagnosis; this entails using multi-class classification to determine the leukemia subtype. This is usually done through microscopic examination of the blood cells. Because a trained pathologist is required and the decision process is critical, automated software frameworks for diagnosis have been developed. Researchers have utilized state-of-the-art machine learning approaches, such as Support Vector Machine (SVM), Random Forest (RF), Naïve Bayes, K-Nearest Neighbor (KNN), and others, achieving limited classification accuracies. More advanced deep learning methods have also been utilized, but due to constrained dataset sizes, these approaches suffer from over-fitting, reducing their otherwise outstanding performance. This study introduces a combined deep learning and machine learning approach for leukemia diagnosis. It uses deep transfer learning frameworks to extract features, which are then classified using state-of-the-art machine learning classifiers. Transfer learning frameworks such as VGGNet, Xception, InceptionResV2, DenseNet, and ResNet are employed as feature extractors. The extracted features are given to RF and XGBoost classifiers for the binary and multi-class classification of leukemia cells. For the experimentation, the very popular ALL-IDB dataset is used, approaching a maximum accuracy of 100%. A private real-image dataset with three subclasses of leukemia images, including Acute Myeloid Leukemia (AML), Chronic Lymphocytic Leukemia (CLL), and Chronic Myeloid Leukemia (CML), is also employed to test generalization; on this dataset, the method achieves an impressive multi-class classification accuracy of 97.08%. The proposed approach is shown to be robust and generalizable on a standardized dataset and a real-image dataset with a limited sample size (520 images). Hence, this method can be explored further for leukemia diagnosis with a limited number of dataset samples.
Insertional mutagenesis, phenotypic evaluation, and mutated-gene cloning are widely used to clone genes from scratch. Exogenous genes can be integrated into the genome during non-homologous end joining (NHEJ) of DNA double-strand breaks, causing insertional mutation. A random insertional mutant library constructed using this method has become a forward-genetics approach for gene cloning. However, establishing a random insertional mutant library requires a high transformation efficiency of exogenous genes, and many microalgal species show a low transformation efficiency, making it difficult to construct such libraries. In this study, we established a highly efficient transformation method for constructing a random insertional mutant library of Nannochloropsis oceanica, and tentatively isolated genes from it to prove the feasibility of the method. A gene that may control the growth rate and cell size was identified. This method will facilitate genetic studies of N. oceanica and should also serve as a reference for other microalgal species.
Funding: the National Key Research and Development Program of China (Grant No. 2020YFA0309702); NSAF (Grant No. U2130205); the National Natural Science Foundation of China (Grant Nos. 62101597, 61605248, and 61505261); the China Postdoctoral Science Foundation (Grant No. 2021M691536); the Natural Science Foundation of Henan (Grant Nos. 202300410534 and 202300410532); and the Anhui Initiative in Quantum Information Technologies.
Funding: supported by the National Natural Science Foundation of China (Grant No. 51727805).
文摘Random numbers are one of the key foundations of cryptography.This work implements a discrete quantum random number generator(QRNG)based on the tunneling effect of electrons in an avalanche photo diode.Without any post-processing and conditioning,this QRNG can output raw sequences at a rate of 100 Mbps.Remarkably,the statistical min-entropy of the 8,000,000 bits sequence reaches 0.9944 bits/bit,and the min-entropy validated by NIST SP 800-90B reaches 0.9872 bits/bit.This metric is currently the highest value we have investigated for QRNG raw sequences.Moreover,this QRNG can continuously and stably output raw sequences with high randomness over extended periods.The system produced a continuous output of 1,174 Gbits raw sequence for a duration of 11,744 s,with every 8 Mbits forming a unit to obtain a statistical min-entropy distribution with an average value of 0.9892 bits/bit.The statistical min-entropy of all data(1,174 Gbits)achieves the value of0.9951 bits/bit.This QRNG can produce high-quality raw sequences with good randomness and stability.It has the potential to meet the high demand in cryptography for random numbers with high quality.
Abstract: Is it true that there is an implicit understanding that Brownian motion or fractional Brownian motion is the driving force behind stock price fluctuations? An analysis of daily prices and volumes of a particular stock revealed the following findings: 1) the logarithms of the moving averages of stock prices and volumes have a strong positive correlation, even though price and volume appear to fluctuate independently of each other; 2) price and volume fluctuations are messy, but when each daily value is replaced by 1 or −1 according to whether it rose or fell relative to the previous day, the resulting time series are not necessarily Brownian motion; and 3) frequency analysis shows that the difference between the previous day's volume and the current day's volume is periodic. Using these findings, we constructed differential equations for the stock price, the number of buy orders, and the number of sell orders. These equations include terms for both randomness and periodicity. It is apparent that both randomness and periodicity are essential for stock price fluctuations to be sustainable, and that stock prices stochastically exhibit large hill-like or valley-like fluctuations without any increasing or decreasing trend, repeating themselves over a certain range.
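The constructed equations combine a noise term with a periodic term. A toy Euler discretization of that idea is sketched below; the additive form, the coefficients, and the period are illustrative assumptions, not the paper's fitted equations:

```python
import math
import random

def simulate_price(n=1000, p0=100.0, sigma=0.5, amp=0.3, period=20, seed=0):
    """Euler steps of a toy price process driven by Gaussian noise plus a
    periodic forcing term, the two ingredients the paper finds necessary
    for sustainable, range-bound fluctuations."""
    rng = random.Random(seed)
    path = [p0]
    for t in range(1, n):
        step = sigma * rng.gauss(0.0, 1.0) + amp * math.sin(2 * math.pi * t / period)
        path.append(path[-1] + step)
    return path

path = simulate_price()
# The paper's +/-1 coding: 1 if the value rose versus the previous day, else -1.
signs = [1 if b > a else -1 for a, b in zip(path, path[1:])]
print(len(path), signs[:5])
```

Dropping either term changes the character of the path: pure noise drifts without the periodic structure, and the pure periodic term is trivially predictable.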
Abstract: Six samples of linear, high-randomness 60PHB/PET thermotropic liquid-crystal copolyesters were made by melt copolymerization at 290 ℃; their randomness of about 0.955 was measured by <sup>1</sup>H-NMR spectroscopy. High-tenacity, high-modulus fiber was prepared by melt spinning in the liquid-crystal phase. The effects of molecular weight, shear rate, temperature, and spin-draw ratio on the mechanical behavior of the 60PHB/PET copolyester fiber show that a lower shear rate (2–10 s<sup>-1</sup>), a higher melting temperature (300 ℃), a lower spinning temperature (280 ℃), and a higher molecular weight are favourable to improved fiber mechanical properties. As the draw ratio varies, the fiber's mechanical properties show a transition point due to the changeover from shear orientation to draw orientation. The copolyester fiber has high crystallinity, high orientation in the crystalline region, high chain orientation, and a highly regular fibrillar structure.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 61775185).
Abstract: Quantum randomness amplification protocols have attracted increasing attention for their remarkable ability to amplify weak randomness to almost ideal randomness by utilizing quantum systems. Recently, a realistic noise-tolerant randomness amplification protocol using a finite number of untrusted devices was proposed. The protocol has composable security against non-signalling eavesdroppers and can produce a single bit of randomness from weak randomness sources, certified by the violation of certain Bell inequalities. However, the protocol has a non-negligible limitation on the min-entropy of the independent sources. In this paper, we further develop the randomness amplification method and present a novel quantum randomness amplification protocol based on an explicit non-malleable two-independent-source randomness extractor, which remarkably relaxes the above-mentioned limitation. Moreover, the composable security of our improved protocol is established. Our results could significantly expand the application range of practical quantum randomness amplification and provide new insight into practical design methods for randomness extraction.
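For intuition about two-independent-source extraction, the classical inner-product construction is the simplest example. It is far weaker than the explicit non-malleable extractor the protocol builds on and is shown here only as an illustration of the idea of combining two independent weak sources:

```python
def inner_product_extractor(x, y):
    """One output bit from two independent weak sources x, y (equal-length
    bit lists): the GF(2) inner product <x, y> mod 2."""
    return sum(a & b for a, b in zip(x, y)) % 2

# Example: two 8-bit samples, one from each independent source.
print(inner_product_extractor([1, 0, 1, 1, 0, 0, 1, 0],
                              [0, 1, 1, 0, 1, 0, 1, 1]))  # prints 0
```

The output bit is close to uniform whenever both inputs independently carry enough min-entropy; the non-malleable property required by the protocol is an additional, much stronger guarantee.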
Abstract: Cronbach's alpha coefficient is the most popular method of examining reliability. It is typically used when the researcher has several Likert-type items that are summed or averaged to make a composite score. The distribution of the alpha coefficient has been the subject of many studies. This study investigated the relationship between randomness and Cronbach's alpha coefficient, and in this context examined the question: "What is the distribution of the coefficient alpha when a Likert-type scale is answered randomly?" Data were generated in the form of five-point Likert-type items, and a Monte Carlo simulation was run 5000 times for different numbers of items.
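The simulation question above can be reproduced in a few lines. This sketch uses assumed dimensions (10 items, 200 random respondents, 100 replications; the study itself runs 5000 replications and varies the item count):

```python
import random
import statistics

def cronbach_alpha(rows):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(totals)),
    where rows is a list of per-respondent item-score lists."""
    k = len(rows[0])
    item_vars = [statistics.pvariance(col) for col in zip(*rows)]
    total_var = statistics.pvariance([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

rng = random.Random(42)
alphas = [
    cronbach_alpha([[rng.randint(1, 5) for _ in range(10)] for _ in range(200)])
    for _ in range(100)  # Monte Carlo replications
]
print(round(statistics.mean(alphas), 3))  # centered near 0 for random answers
```

With purely random answering the items are uncorrelated, so the variance of the totals is close to the sum of the item variances and alpha scatters around zero rather than near the conventional 0.7 threshold.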
Abstract: We describe here a comprehensive framework for intelligent information management (IIM) of data collection and decision-making actions for reliable and robust event processing and recognition. This is driven by algorithmic information theory (AIT) in general, and algorithmic randomness and Kolmogorov complexity (KC) in particular. The processing and recognition tasks addressed include data discrimination and multilayer open-set data categorization, change detection, data aggregation, clustering and data segmentation, data selection and link analysis, data cleaning and data revision, and prediction and identification of critical states. The unifying theme throughout the paper is that "compression entails comprehension", which is realized using the interrelated concepts of randomness vs. regularity and Kolmogorov complexity. The constructive and all-encompassing active learning (AL) methodology, which mediates and supports this theme, is context-driven and takes advantage of statistical learning in general, and semi-supervised learning and transduction in particular. Active learning employs the explore-and-exploit actions characteristic of closed-loop control for evidence accumulation, in order to revise its prediction models and reduce uncertainty. The set-based similarity scores, driven by algorithmic randomness and Kolmogorov complexity, employ strangeness/typicality and p-values. We propose the application of the IIM framework to critical-state prediction for complex physical systems; in particular, the prediction of cyclone genesis and intensification.
Abstract: This paper addresses the effect of randomness in the carbon dioxide pressure on the carbonation of reinforced concrete. The analysis concentrates on the probabilistic evaluation of the carbonation depth (Xc) and the carbonation time (T1), i.e., the time required for the carbonation front to reach the reinforcement. Monte Carlo simulations are performed under the assumption that the carbon dioxide pressure at the concrete surface is a random variable with a log-normal probability distribution.
Abstract: The objectives of this paper are to demonstrate the algorithms employed by three statistical software programs (R, Real Statistics using Excel, and SPSS) for calculating the exact two-tailed probability of the Wald-Wolfowitz one-sample runs test for randomness, to present a novel approach for computing this probability, and to compare the four procedures by generating samples of 10 and 11 data points, varying the parameters n<sub>0</sub> (number of zeros) and n<sub>1</sub> (number of ones), as well as the number of runs. Fifty-nine samples were created to replicate the behavior of the distribution of the number of runs with 10 and 11 data points. The exact two-tailed probabilities of the four procedures were compared using Friedman's test. Given the significant difference in central tendency, post-hoc comparisons were conducted using Conover's test with the Benjamini-Yekutieli correction. It is concluded that the procedures of Real Statistics using Excel and R exhibit some inadequacies in the calculation of the exact two-tailed probability, whereas the new proposal and the SPSS procedure are more suitable. The proposed robust algorithm has a more transparent rationale than the SPSS one, albeit being somewhat more conservative. We recommend its implementation for this test and its application to others, such as the binomial and sign tests.
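The exact probabilities being compared all derive from the closed-form distribution of the number of runs. This sketch gives that pmf using standard combinatorics; the tail-summation convention for turning it into a two-tailed p-value is exactly where the four procedures in the paper differ, and is not reproduced here:

```python
from math import comb

def runs_pmf(n0, n1):
    """Exact pmf of the number of runs R in a random arrangement of
    n0 zeros and n1 ones (Wald-Wolfowitz one-sample runs test)."""
    total = comb(n0 + n1, n0)
    pmf = {}
    for r in range(2, n0 + n1 + 1):
        if r % 2 == 0:                  # r = 2k: k runs of each symbol
            k = r // 2
            ways = 2 * comb(n0 - 1, k - 1) * comb(n1 - 1, k - 1)
        else:                           # r = 2k+1: k+1 runs of one symbol, k of the other
            k = (r - 1) // 2
            ways = (comb(n0 - 1, k) * comb(n1 - 1, k - 1)
                    + comb(n0 - 1, k - 1) * comb(n1 - 1, k))
        if ways:
            pmf[r] = ways / total
    return pmf

pmf = runs_pmf(5, 5)  # one of the 10-data-point configurations from the study
print(round(sum(pmf.values()), 10))  # a valid pmf sums to 1.0
```

For n0 = n1 = 5 there are C(10, 5) = 252 equally likely arrangements, and, for example, exactly two of them (00000 11111 and its reverse) have R = 2.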
Funding: Supported by the National Key R&D Program of China under Grant 2023YFB2904703; the National Natural Science Foundation of China under Grants 62341110, 62371122, and 62322104; the Jiangsu Province Basic Research Project under Grant BK20192002; and the Fundamental Research Funds for the Central Universities under Grants 2242022k30005 and 2242023K5003.
Abstract: This paper investigates low-earth-orbit (LEO) satellite-enabled coded compressed sensing (CCS) unsourced random access (URA) in an orthogonal frequency division multiple access (OFDMA) framework, where a massive uniform planar array (UPA) is equipped on the satellite. In LEO satellite communications, unavoidable timing and frequency offsets cause phase shifts in the transmitted signals, substantially diminishing the decoding performance of current terrestrial CCS URA receivers. To cope with this issue, we expand the inner codebook with predefined timing and frequency offsets and formulate the inner decoding as a tractable compressed sensing (CS) problem. Additionally, we leverage the inherent sparsity of the UPA-equipped LEO satellite angular-domain channels, thereby enabling the outer decoder to support more active devices. Furthermore, the outputs of the outer decoder are used to reduce the search space of the inner decoder, which cuts down the computational complexity and accelerates the convergence of the inner decoding. Simulation results verify the effectiveness of the proposed scheme.
Funding: Project supported by the National Natural Science Foundation of China (Nos. 12272355, 12025204, and 11902294), the Opening Foundation of the Shanxi Provincial Key Laboratory for Advanced Manufacturing Technology of China (No. XJZZ202304), and the Shanxi Provincial Graduate Innovation Project of China (No. 2023KY629).
Abstract: In practical environments, the simultaneous occurrence of base excitation and crosswind is very common, and scavenging the combined energy of vibration and wind with a single energy-harvesting structure is attractive. To this end, the effects of wind speed and random excitation level are investigated with the stochastic averaging method (SAM) based on the energy envelope. The analytical predictions are verified with the Monte Carlo method (MCM). The numerical simulation shows that the introduction of wind can reduce the critical excitation level for triggering an inter-well jump, enabling a bi-stable energy harvester (BEH) to achieve enhanced performance under weak base excitation. However, as the strength of the wind increases to a certain level, the influence of the random base excitation on the dynamic responses is weakened, and the system exhibits a periodic galloping response. A comparison between a BEH and a linear energy harvester (LEH) indicates that the BEH performs worse under high-speed wind. Experiments are conducted to validate the theoretical prediction and numerical simulation. The experimental findings also show that strong random excitation is favorable for the BEH in the low-wind-speed range; however, once the speed of the incoming wind reaches a certain level, the disadvantage of the BEH becomes evident.
Abstract: The aim of this study is to investigate the impact of landslide and non-landslide sampling strategies on the performance of landslide susceptibility assessment (LSA). The study area is the Feiyun catchment in Wenzhou City, Southeast China. Two types of landslide samples, combined with seven non-landslide sampling strategies, resulted in a total of 14 scenarios. The corresponding landslide susceptibility map (LSM) for each scenario was generated using the random forest model. The receiver operating characteristic (ROC) curve and statistical indicators were calculated and used to assess the impact of each dataset sampling strategy. The results showed that higher accuracies were achieved when using the landslide core as positive samples, combined with non-landslide sampling from the very-low-susceptibility zone or the buffer zone. The results reveal the influence of landslide and non-landslide sampling strategies on the accuracy of LSA, providing a reference for subsequent researchers aiming to obtain a more reasonable LSM.
Funding: Supported by the National Social Science Fund of China (23&ZD045); the Humanities and Social Sciences Youth Foundation of the Ministry of Education of China (21YJC790087); the Center for Social Welfare and Public Governance of Zhejiang University, China; and the Fundamental Research Funds for the Central Universities, China.
Abstract: This paper examines the impact of information about COVID-19 on pig farmers' production willingness, using endorsement experiments and follow-up surveys conducted in 2020 and 2021 in China. Our results show that, first, farmers were less willing to scale up production when they received information about COVID-19. The information in 2020 that a second wave of COVID-19 might occur without a vaccine reduced farmers' willingness to scale up by 13.4%, while the information in 2021 that COVID-19 might continue to spread despite the introduction of vaccines reduced farmers' willingness by 4.4%. Second, farmers whose production was affected by COVID-19 were considerably less willing to scale up, given access to COVID-19 information. Third, farmers' production willingness predicts their actual production behavior.
Funding: Project supported by the Special Funds for Basic Operating Expenses of the Central Universities of China (Grant No. 23ZYJS006).
Abstract: Experiments were conducted on the evacuation rate of pedestrians through exits under a queued evacuation pattern and a random evacuation pattern. The experimental results show that the pedestrian flow rate is larger with the random evacuation pattern than with the queued one. Therefore, the exit width calculated from the minimum evacuation clear width per 100 persons, which assumes that pedestrians pass through the exit in one or several queues, is conservative. The number of people crossing the exit simultaneously is greater in the random evacuation experiments than in the queued ones, and the time interval between the front and rear rows of people is shortened in large-exit conditions when pedestrians evacuate randomly. The difference between the flow rates of the two evacuation patterns is related to the surplus width of the exit, i.e., the width exceeding the total width of all accommodated pedestrian streams. Two dimensionless quantities are defined to explore this relationship. It is found that the difference in flow rate between the two evacuation patterns remains at a low level when the surplus width of the exit is no more than 45% of the width of a single pedestrian stream. When the surplus width is larger, there is a great difference between the flow rates of the queued and random evacuation patterns; meanwhile, pedestrians crowd heavily at the exit in these conditions, since the number of pedestrians who want to evacuate through the exit simultaneously greatly exceeds what it can accommodate. Therefore, the surplus width of the exit should be limited, especially for narrow exits, and the relationship between the two dimensionless quantities could provide a basis for doing so.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 31971183).
Abstract: Cell migration plays a significant role in physiological and pathological processes. Understanding the characteristics of cell movement is crucial for comprehending biological processes such as cell functionality, cell migration, and cell-cell interactions. One of the fundamental characteristics of cell movement is the specific distribution of cell speed, which contains valuable information that still requires comprehensive understanding. This article investigates the distribution of mean velocities along cell trajectories, with a focus on optimizing the efficiency of the cells' food search in the context of the entire colony. We confirm that the specific velocity distribution in the experiments corresponds to an optimal search efficiency when spatial weighting is considered. The simulation results indicate that the distribution of average velocity does not align with the optimal search efficiency when average spatial weighting is employed; however, under central spatial weighting, the specific velocity distribution in the experiment is shown to correspond to the optimal search efficiency. Our simulations reveal that, for any given distribution of average velocity, a specific central spatial weighting can be identified that aligns with the optimal search strategy. Additionally, our work presents a method for determining the spatial weights embedded in the velocity distribution of cell movement. Our results open new avenues for investigating significant topics such as the relationship between cell behavior and environmental conditions throughout their evolutionary history, and how cells achieve collective cooperation through cell-cell communication.
Funding: The authors gratefully acknowledge the support provided by the Postgraduate Research & Practice Program of Jiangsu Province (Grant No. KYCX18_0526), the Fundamental Research Funds for the Central Universities (Grant No. 2018B682X14), and the Guangdong Basic and Applied Basic Research Foundation (No. 2021A1515110807).
Abstract: In terms of global mean-square error for a given number of random variables in the representation, the Karhunen–Loève (KL) expansion is the optimal series expansion method for random field discretization. The computational efficiency and accuracy of the KL expansion are contingent upon the accurate resolution of the Fredholm integral eigenvalue problem (IEVP). This paper proposes an interpolation method, based on interpolation basis functions such as moving least squares (MLS), least squares (LS), and the finite element method (FEM), to solve the IEVP. Compared with the Galerkin method based on finite elements or Legendre polynomials, the main advantage of the interpolation method is that, when calculating the eigenvalues and eigenfunctions of one-dimensional random fields, the integral matrix containing the covariance function requires only a single integral rather than the two-fold integral of the Galerkin method. The effectiveness and computational efficiency of the proposed interpolation method are verified through various one-dimensional examples. Furthermore, based on the KL expansion and polynomial chaos expansion, stochastic analysis of two-dimensional regular and irregular domains is conducted, and the basis functions of the extended finite element method (XFEM) are introduced as the interpolation basis functions in two-dimensional irregular domains to solve the IEVP.
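For a one-dimensional exponential covariance kernel, the Fredholm IEVP underlying the KL expansion can be sketched with a plain midpoint-collocation (Nyström-type) discretization and power iteration. This is a simple stand-in for the MLS/LS/FEM interpolation bases the paper develops; the grid size, correlation length, and kernel are illustrative assumptions:

```python
import math

def kl_leading_eigenpair(n=100, length=1.0, corr_len=0.5, iters=100):
    """Discretize the Fredholm IEVP  int C(s,t) phi(t) dt = lam * phi(s)
    for C(s,t) = exp(-|s-t|/corr_len) on [0, length] at n midpoints, then
    extract the leading KL eigenpair by power iteration."""
    h = length / n
    pts = [(i + 0.5) * h for i in range(n)]
    # Midpoint quadrature folds the weight h into the kernel matrix.
    K = [[math.exp(-abs(s - t) / corr_len) * h for t in pts] for s in pts]
    v = [1.0 / math.sqrt(n)] * n  # unit start vector
    lam = 0.0
    for _ in range(iters):
        w = [sum(Ki[j] * v[j] for j in range(n)) for Ki in K]
        lam = math.sqrt(sum(x * x for x in w))  # ||K v|| with ||v|| = 1
        v = [x / lam for x in w]
    return lam, v

lam, phi = kl_leading_eigenpair()
print(lam)  # leading eigenvalue, bounded above by the trace (= length here)
```

Because only point values of the covariance enter the matrix, this collocation route shares the interpolation method's key property: no double integral of the covariance function is required.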
Funding: Supported by the Natural Science Foundation of Shandong Province, China [ZR2022MH115], and the National Natural Science Foundation of China [81301479, 82202593].
Abstract: Objective: This study explored potentially modifiable factors for depression and major depressive disorder (MDD) from the MR-Base database and further evaluated the associations of drug targets with MDD. Methods: We performed two-sample Mendelian randomization (2SMR) analyses using genetic variants for depression (n=113,154) and MDD (n=208,811) from genome-wide association studies (GWAS). Separate calculations were performed with modifiable risk factors from MR-Base for 1,001 genomes. MR analysis was performed by screening drug targets associated with MDD in the DrugBank database to explore therapeutic targets for MDD. Inverse variance weighted (IVW), fixed-effect inverse variance weighted (FE-IVW), MR-Egger, weighted median, and weighted mode methods were used for complementary calculation. Results: The analysis of potential causal relationships between modifiable risk factors and depression yielded 459 results for depression and 424 for MDD. Regarding the associations between drug targets and MDD, SLC6A4, GRIN2A, GRIN2C, SCN10A, and IL1B expression is associated with an increased risk of depression, whereas the ADRB1, CHRNA3, HTR3A, GSTP1, and GABRG2 genes are candidate protective factors against depression. Conclusion: This study identified risk factors causally associated with depression and MDD and identified 10 drug targets with a significant impact on MDD, providing essential information for formulating strategies to prevent and treat depression.
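The fixed-effect IVW estimator listed among the complementary methods has a simple closed form: a weighted regression through the origin of SNP-outcome effects on SNP-exposure effects. The sketch below uses hypothetical toy summary statistics, not values from the GWAS data:

```python
def ivw_estimate(beta_exp, beta_out, se_out):
    """Fixed-effect inverse-variance-weighted MR estimate and its standard
    error, with per-SNP weights 1/se_out^2."""
    w = [1.0 / s ** 2 for s in se_out]
    den = sum(wi * bx ** 2 for wi, bx in zip(w, beta_exp))
    est = sum(wi * bx * by for wi, bx, by in zip(w, beta_exp, beta_out)) / den
    se = (1.0 / den) ** 0.5
    return est, se

# Toy instruments in which every SNP implies the same causal effect of 0.5.
bx = [0.10, 0.08, 0.12, 0.05]   # SNP-exposure effects (hypothetical)
by = [0.05, 0.04, 0.06, 0.025]  # SNP-outcome effects (hypothetical)
se = [0.01, 0.012, 0.011, 0.02] # outcome standard errors (hypothetical)
est, se_est = ivw_estimate(bx, by, se)
print(round(est, 3))  # 0.5
```

Methods such as MR-Egger and the weighted median relax the IVW assumption that all instruments are valid; they are used as sensitivity analyses alongside this estimate.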
Abstract: The article introduces a finite element procedure, a so-called refined stochastic finite element method, that uses the bilinear quadrilateral (four-node rectangular, Q4) element based on a refined first-order shear deformation theory (rFSDT) together with Monte Carlo simulation (MCS) to investigate the random vibration of functionally graded material (FGM) plates subjected to a moving load. The advantage of the proposed method is that the rFSDT improves the accuracy of the classical FSDT and satisfies the stress-free condition at the plate boundaries, while MCS allows analysis of the vibration of the FGM plate when the input parameters are random quantities following a normal distribution. The obtained results show that the distribution characteristics of the vibration response of the FGM plate depend on the standard deviation of the input parameters and the velocity of the moving load. Furthermore, the numerical results in this study are expected to contribute to improving the understanding of FGM plates subjected to moving loads with uncertain input parameters.
Funding: Supported by the Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), University of Technology Sydney; the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2023R1A2C1007742); and in part by the Researchers Supporting Project Number RSP-2023/14, King Saud University.
Abstract: Leukemia in humans causes many complications in its later stages and impairs the bone marrow's ability to produce blood. Morphological diagnosis of human blood cells is a well-known and well-proven technique in this case. Binary classification is employed to distinguish between normal and leukemia-infected cells. In addition, the various subtypes of leukemia require different treatments, so these subclasses must also be detected to obtain an accurate diagnosis of the type of leukemia. This entails multi-class classification to determine the leukemia subtype, usually via microscopic examination of the blood cells. Because a trained pathologist is required and the decision process is critical, automated software frameworks for diagnosis have been developed. Researchers have utilized state-of-the-art machine learning approaches, such as Support Vector Machine (SVM), Random Forest (RF), Naïve Bayes, K-Nearest Neighbor (KNN), and others, achieving limited classification accuracies. More advanced deep learning methods have also been utilized, but due to constrained dataset sizes these approaches over-fit, reducing their otherwise outstanding performance. This study introduces a combined deep learning and machine learning approach for leukemia diagnosis. It uses deep transfer learning frameworks to extract features, which are then classified by state-of-the-art machine learning classifiers. Transfer learning frameworks such as VGGNet, Xception, InceptionResNetV2, DenseNet, and ResNet are employed as feature extractors. The extracted features are given to RF and XGBoost classifiers for the binary and multi-class classification of leukemia cells. For the experimentation, the very popular ALL-IDB dataset is used, approaching a maximum accuracy of 100%. A private dataset of real images with three subclasses of leukemia, namely Acute Myeloid Leukemia (AML), Chronic Lymphocytic Leukemia (CLL), and Chronic Myeloid Leukemia (CML), is also employed to generalize the system; it achieves an impressive multi-class classification accuracy of 97.08%. The proposed approach is shown to be robust and generalizable on both a standardized dataset and a real-image dataset with a limited sample size (520 images). Hence, this method can be explored further for leukemia diagnosis with a limited number of dataset samples.
Funding: Supported by the National Key R&D Program of China (Nos. 2018YFD0901506, 2018YFD0900305) and the Marine S&T Fund of Shandong Province for the Pilot National Laboratory for Marine Science and Technology (Qingdao) (No. 2018SDKJ0406-3).
Abstract: Insertional mutagenesis, phenotypic evaluation, and mutated-gene cloning are widely used to clone genes from scratch. Exogenous genes can be integrated into the genome during non-homologous end joining (NHEJ) at DNA double-strand breaks, causing insertional mutations. A random insertional mutant library constructed with this method has become a forward-genetics tool for gene cloning. However, establishing such a library requires a high transformation efficiency for exogenous genes, and many microalgal species show low transformation efficiency, making the construction of random insertional mutant libraries difficult. In this study, we established a highly efficient transformation method for constructing a random insertional mutant library of Nannochloropsis oceanica, and tentatively isolated genes from it to demonstrate the feasibility of the method. A gene that may control growth rate and cell size was identified. This method will facilitate genetic studies of N. oceanica and may also serve as a reference for other microalgal species.