The research purpose is the invention (construction) of a formal logical inference of the Law of Conservation of Energy within a logically formalized axiomatic epistemology-and-axiology theory Sigma, starting from a precisely defined assumption of the a-priori-ness of knowledge. To realize this aim, the following work has been done: 1) a two-valued algebraic system of formal axiology has been defined precisely and applied to the proper philosophy of physics, namely, to an almost unknown (unrecognized) formal-axiological aspect of the physical law of conservation of energy; 2) the formal axiomatic epistemology-and-axiology theory Sigma has been defined precisely and applied to proper physics for realizing the above-indicated purpose. Thus, a discrete mathematical model of the relationship between the philosophy of physics and universal epistemology united with formal axiology has been constructed. Results: 1) By accurately computing relevant compositions of evaluation-functions within the discrete mathematical model, it is demonstrated that a formal-axiological analog of the great conservation law of proper physics is a formal-axiological law of the two-valued algebra of metaphysics. (A precise algorithmic definition of the unfamiliar (not-well-known) notion of a "formal-axiological law of the algebra of metaphysics" is given.) 2) The hitherto unpublished, significantly new, nontrivial scientific result presented in this article is a formal logical inference of the law of conservation of energy within the formal axiomatic theory Sigma from the conjunction of the formal-axiological analog of the law of conservation of energy and the assumption of the a-priori-ness of knowledge.
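The two-valued algebra of evaluation-functions described above can be sketched in a few lines. The function names below are illustrative stand-ins, not the paper's exact tables; a composed evaluation-function counts as a formal-axiological law when it returns "good" under every valuation of its variables.

```python
from itertools import product

G, B = "good", "bad"            # the two axiological values

# Hypothetical unary evaluation-functions (illustrative, not the paper's tables):
def conservation(x):            # "conservation of x" keeps the value of x
    return x

def annihilation(x):            # "annihilation of x" inverts the value of x
    return G if x == B else B

def is_law(f, arity):
    """A composed evaluation-function is a formal-axiological law iff it
    evaluates to "good" under every assignment of values to its variables."""
    return all(f(*vals) == G for vals in product((G, B), repeat=arity))

# Double annihilation restores the original value, so the composed function
# "x is equivalent to annihilation of annihilation of x" is constantly good:
double_neg = lambda x: G if annihilation(annihilation(x)) == x else B
print(is_law(double_neg, 1))    # True: a formal-axiological law
print(is_law(conservation, 1))  # False: conservation("bad") is "bad"
```

The `is_law` check is the algorithmic core: exhaustive evaluation over the finite two-valued domain decides lawhood mechanically.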
Human Immunodeficiency Virus (HIV) dynamics in Africa are largely characterised by sparse sampling of DNA sequences from infected individuals. Some sub-groups are more at risk than the general population; these sub-groups have higher infectivity rates. We developed a likelihood inference model of a multi-type birth-death process that can be used to make inferences about the HIV epidemic in an African setting. The likelihood incorporates a probability of removal from the infectious pool. We simulated trees and performed parameter inference on them, and investigated whether the model distinguishes between heterogeneous and homogeneous dynamics. The model makes fairly good parameter inferences and distinguishes between heterogeneous and homogeneous dynamics well. Parameter estimation was also performed under a sparse sampling scenario. We further investigated whether trees obtained from a structured host population are more balanced than those from a non-structured one, using tree statistics that measure tree balance and imbalance. Trees from the non-structured population were more balanced according to the Colless and Sackin indices.
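A multi-type birth-death transmission process of the kind described can be simulated forward with a Gillespie algorithm. This is a toy simulator under assumed rates (two host types, one high-risk), not the paper's likelihood machinery:

```python
import random

def simulate_two_type(rates, removal, t_end=5.0, seed=7):
    """Gillespie simulation of a two-type birth-death (transmission) process.
    rates[i][j] = rate at which one type-i case infects a new type-j case;
    removal[i] = removal rate of type i.  All numbers here are assumptions."""
    random.seed(seed)
    t, pop = 0.0, [1, 1]                     # one initial case of each type
    while t < t_end and sum(pop) > 0:
        # enumerate every possible next event with its current total rate
        events = [(rates[i][j] * pop[i], ("birth", j))
                  for i in range(2) for j in range(2)]
        events += [(removal[i] * pop[i], ("removal", i)) for i in range(2)]
        total = sum(r for r, _ in events)
        t += random.expovariate(total)       # exponential waiting time
        if t >= t_end:
            break
        pick = random.uniform(0, total)      # choose an event ∝ its rate
        for r, (kind, i) in events:
            pick -= r
            if pick < 0:
                pop[i] += 1 if kind == "birth" else -1
                break
    return pop

# High-risk type 0 transmits faster than type 1 (illustrative rates):
final = simulate_two_type(rates=[[1.5, 0.2], [0.2, 0.4]], removal=[0.5, 0.5])
print(final)
```

Repeating such simulations and recording the transmission tree is what produces the simulated trees on which parameter inference is tested.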
Mobile phones are becoming a primary platform for information access. A major aspect of ubiquitous computing is context-aware applications, which collect information about the user's environment and use it to provide better service and improve the user experience. Location awareness makes certain applications possible, e.g., recommending nearby businesses and tracking estimated routes. An Android application is able to collect useful Wi-Fi information without registering a location listener with a network-based provider. We passively collected the IDs of Wi-Fi access points and the received signal strengths. We developed and implemented an algorithm to analyse the data, and designed heuristics to infer the location of the device over time, all without ever connecting to the network, thus maximally preserving the user's privacy.
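One simple heuristic of the kind described — inferring distinct "places" purely from passively scanned AP sets — can be sketched as follows; the similarity threshold and AP IDs are assumptions, not the paper's exact algorithm:

```python
def jaccard(a, b):
    """Overlap between two sets of Wi-Fi access-point IDs."""
    return len(a & b) / len(a | b) if a | b else 1.0

def infer_places(scans, threshold=0.5):
    """Group consecutive scans into 'places': a new place starts whenever the
    set of visible APs changes too much.  The threshold is an assumption."""
    places, current = [], [scans[0]]
    for scan in scans[1:]:
        if jaccard(scan, current[-1]) >= threshold:
            current.append(scan)        # same APs still dominate: same place
        else:
            places.append(current)      # AP set changed: device has moved
            current = [scan]
    places.append(current)
    return places

scans = [{"ap1", "ap2"}, {"ap1", "ap2", "ap3"},
         {"ap7", "ap8"}, {"ap7", "ap8", "ap10"}]
print(len(infer_places(scans)))  # 2 inferred places
```

Because only locally observed AP IDs are compared, no network connection or location provider is ever involved, matching the privacy goal.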
Deciding the penalty of a law case has always been a complex process, which may involve much coordination. Apart from judicial study based on rules and conditions, artificial intelligence and machine learning have rarely been used to study the problem of penalty inference, leaving the large number of law cases, as well as the various factors among them, untouched. This paper incorporates state-of-the-art artificial intelligence methods to explore to what extent this problem can be alleviated. We first analyze 145 000 law cases and observe that there are two sorts of labels, temporal labels and spatial labels, which have unique characteristics: temporal and spatial labels tend to converge towards the final penalty, provided the cases are of the same category. In light of this, we propose a latent-class probabilistic generative model, namely the Penalty Topic Model (PTM), to infer the topics of law cases and the temporal and spatial patterns of topics embedded in the case judgments. The learnt knowledge is then utilized to automatically cluster all cases in a unified way. We conduct extensive experiments to evaluate the performance of the proposed PTM on a real large-scale dataset of law cases. The experimental results show the superiority of the proposed PTM.
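As a minimal stand-in for a latent-class generative model over penalties (far simpler than the proposed PTM), a two-component Poisson mixture fitted by EM shows how latent case classes with converging penalties can be recovered from the penalty values alone:

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def em_two_poisson(data, iters=50):
    """EM for a two-component Poisson mixture: a toy latent-class model in the
    spirit of (but far simpler than) the proposed Penalty Topic Model."""
    lams, weights = [float(min(data)), float(max(data))], [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each latent class for each case
        resp = []
        for x in data:
            p = [w * poisson_pmf(x, l) for w, l in zip(weights, lams)]
            s = sum(p)
            resp.append([pi / s for pi in p])
        # M-step: re-estimate mixture weights and per-class penalty rates
        for j in range(2):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / len(data)
            lams[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
    return sorted(lams)

# Penalty lengths (months) drawn from two latent case classes, roughly 3 vs 24:
penalties = [2, 3, 4, 3, 2, 25, 22, 26, 24, 23]
print(em_two_poisson(penalties))
```

The recovered rates sit near the two class means, illustrating the "convergence towards the final penalty within a category" observation that motivates the PTM.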
Traditional global sensitivity analysis (GSA) neglects the epistemic uncertainties associated with the probabilistic characteristics (i.e., the type of distribution and its parameters) of input rock properties emanating from the small size of datasets while mapping the relative importance of properties to the model response. This paper proposes an augmented Bayesian multi-model inference (BMMI) coupled with a GSA methodology (BMMI-GSA) to address this issue by estimating the imprecision in the moment-independent sensitivity indices of rock structures arising from the small size of input data. The methodology employs BMMI to quantify the epistemic uncertainties associated with the model type and parameters of input properties. The estimated uncertainties are propagated into the imprecision in moment-independent Borgonovo's indices by employing a reweighting approach on candidate probabilistic models. The methodology is showcased for a rock slope prone to stress-controlled failure in the Himalayan region of India. The proposed methodology is superior to conventional GSA (which neglects all epistemic uncertainties) and Bayesian coupled GSA (B-GSA) (which neglects model uncertainty) due to its capability to incorporate the uncertainties in both the model type and the parameters of properties. The imprecise Borgonovo's indices estimated via the proposed methodology provide confidence intervals of the sensitivity indices instead of fixed-point estimates, which makes the user better informed in data collection efforts. Analyses performed with varying sample sizes suggested that the uncertainties in sensitivity indices reduce significantly with increasing sample size, and an accurate importance ranking of properties was only possible with large samples. Further, the impact of prior knowledge, in terms of prior ranges and distributions, was significant; hence, any related assumption should be made carefully.
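A rough histogram-based estimate of Borgonovo's moment-independent index — the quantity whose imprecision BMMI-GSA bounds — can be sketched as follows. Bin and slice counts are assumptions, and the reweighting over candidate probabilistic models is not reproduced:

```python
import random

def borgonovo_delta(xs, ys, bins=20, slices=5):
    """Histogram estimate of Borgonovo's delta: half the expected L1 distance
    between the unconditional output density and the density conditional on
    the input, averaged over input slices.  A rough sketch only."""
    lo, hi = min(ys), max(ys)
    def hist(sample):
        h = [0] * bins
        for y in sample:
            h[min(bins - 1, int((y - lo) / (hi - lo) * bins))] += 1
        return [c / len(sample) for c in h]
    base = hist(ys)                                   # unconditional density
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    m, delta = len(ys) // slices, 0.0
    for s in range(slices):
        cond = hist([ys[i] for i in order[s * m:(s + 1) * m]])
        delta += 0.5 * sum(abs(p - q) for p, q in zip(base, cond)) / slices
    return delta

random.seed(3)
x1 = [random.gauss(0, 1) for _ in range(5000)]
x2 = [random.gauss(0, 1) for _ in range(5000)]
y = [a + 0.1 * b for a, b in zip(x1, x2)]       # output depends mostly on x1
print(borgonovo_delta(x1, y) > borgonovo_delta(x2, y))  # True
```

Repeating this estimate under each candidate input distribution (with posterior model weights) is what turns the point index into the interval-valued, imprecise index of the paper.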
A novel extended Lindley lifetime model that exhibits unimodal or decreasing density shapes as well as increasing, bathtub, or unimodal-then-bathtub failure rates, named the Marshall-Olkin-Lindley (MOL) model, is studied. In this research, using a progressively Type-II censored sample, various inferences on the MOL model parameters are introduced. Utilizing the maximum likelihood method as a classical approach, the estimators of the model parameters and various reliability measures are investigated. Under both symmetric and asymmetric loss functions, the Bayesian estimates are obtained using the Markov chain Monte Carlo (MCMC) technique with the assumption of independent gamma priors. From the Fisher information matrix and the simulated Markovian chains, the approximate asymptotic interval and the highest posterior density interval, respectively, of each unknown parameter are calculated. Via an extensive simulation study, the usefulness of the various suggested strategies is assessed with respect to evaluation metrics such as mean squared errors, mean relative absolute biases, average confidence lengths, and coverage percentages. Comparing the Bayesian estimations based on the asymmetric loss function with the traditional technique and with the symmetric loss function-based Bayesian estimations, the analysis demonstrates that asymmetric loss function-based Bayesian estimations are preferred. Finally, two data sets, representing vinyl chloride and repairable mechanical equipment items, have been investigated to support the proposed approaches and to show the superiority of the proposed model over fourteen other lifetime models.
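Using the standard Lindley and Marshall-Olkin forms (the paper's exact parameterisation may differ), the MOL survival and hazard functions can be sketched as:

```python
import math

def lindley_sf(x, theta):
    """Lindley survival function S(x) = (1 + θx/(1+θ)) e^{-θx}."""
    return (1 + theta * x / (1 + theta)) * math.exp(-theta * x)

def mol_sf(x, alpha, theta):
    """Marshall-Olkin-Lindley survival: the Marshall-Olkin transform
    S_MO(x) = α·S(x) / (1 - (1-α)·S(x)) applied to a Lindley baseline
    (standard forms; treating this as the paper's parameterisation is
    an assumption)."""
    s = lindley_sf(x, theta)
    return alpha * s / (1 - (1 - alpha) * s)

def mol_hazard(x, alpha, theta, h=1e-6):
    """Numerical hazard rate h(x) = -d/dx log S_MO(x)."""
    return -(math.log(mol_sf(x + h, alpha, theta))
             - math.log(mol_sf(x, alpha, theta))) / h

print(mol_sf(0.0, alpha=0.5, theta=1.0))              # 1.0: proper survival at 0
print(mol_sf(2.0, 0.5, 1.0) < mol_sf(1.0, 0.5, 1.0))  # True: decreasing
print(mol_hazard(1.0, 0.5, 1.0) > 0)                  # True: positive hazard
```

Plugging `mol_sf` into a progressively censored likelihood is the starting point for both the maximum likelihood and the MCMC-based Bayesian estimation described above.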
Remotely sensed data are frequently used for predicting and mapping ecosystem characteristics, and spatially explicit wall-to-wall information is sometimes proposed as the best possible source of information for decision-making. However, wall-to-wall information typically relies on model-based prediction, and several features of model-based prediction should be understood before extensively relying on this type of information. One such feature is that model-based predictors can be considered both unbiased and biased at the same time, which has important implications in several areas of application. In this discussion paper, we first describe the conventional model-unbiasedness paradigm that underpins most prediction techniques using remotely sensed (or other) auxiliary data. From this point of view, model-based predictors are typically unbiased. Secondly, we show that for specific domains, identified based on their true values, the same model-based predictors can be considered biased, and sometimes severely so. We suggest distinguishing between conventional model-bias, defined in the statistical literature as the difference between the expected value of a predictor and the expected value of the quantity being predicted, and design-bias of model-based estimators, defined as the difference between the expected value of a model-based estimator and the true value of the quantity being predicted. We show that model-based estimators (or predictors) are typically design-biased, and that there is a trend in the design-bias from overestimating small true values to underestimating large true values. Further, we give examples of applications where it is important to acknowledge this and potentially to make adjustments to correct for the design-bias trend. We argue that relying entirely on conventional model-unbiasedness may lead to mistakes in several areas of application that use predictions from remotely sensed data.
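The design-bias trend described above is easy to reproduce in a toy simulation: an ordinary least-squares predictor is model-unbiased overall, yet overestimates in the domain of small true values and underestimates in the domain of large ones (all numbers below are illustrative):

```python
import random

random.seed(11)
n = 20000
# True values y and a noisy remote-sensing covariate x = y + noise:
y = [random.gauss(100, 30) for _ in range(n)]
x = [yi + random.gauss(0, 30) for yi in y]

# Ordinary least-squares fit of y on x (model-unbiased overall):
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
    / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx
pred = [a + b * xi for xi in x]

overall = sum(p - yi for p, yi in zip(pred, y)) / n
small = [p - yi for p, yi in zip(pred, y) if yi < 70]    # small-true-value domain
large = [p - yi for p, yi in zip(pred, y) if yi > 130]   # large-true-value domain
print(abs(overall) < 1e-6)            # True: no conventional model-bias
print(sum(small) / len(small) > 0)    # True: small values are overestimated
print(sum(large) / len(large) < 0)    # True: large values are underestimated
```

The mechanism is shrinkage toward the mean: the fitted slope is below one whenever the covariate is noisy, so domains defined by true values inherit a systematic design-bias even though the predictor is unbiased on average.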
In the tag recommendation task on academic platforms, existing methods disregard users' customized preferences in favor of extracting tags based solely on the content of the articles. Moreover, they rely on co-occurrence techniques and attempt to combine nodes' textual content for modelling, but still do not directly model many of the interactions in network learning. To address these issues, we present a novel framework that more thoroughly integrates user preferences and citation networks into article tag recommendation. Specifically, we first employ path similarity to quantify the degree of similarity between user tagging preferences and articles in the citation network. Then, the commuting matrix over massive node-pair paths is used to improve computational performance. Finally, the two commonalities mentioned above are combined with the paper-label interactions based on the additivity of the Poisson distribution. In addition, we solve the model's parameters by variational inference. Experimental results demonstrate that our framework significantly outperforms the state-of-the-art baselines on two real datasets by efficiently merging the three relational data sources. Evaluated by Area Under the Curve (AUC) and Mean Average Precision (MAP), the proposed approach also shows greater solving efficiency than current techniques.
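The commuting-matrix idea can be illustrated on a toy user-tag graph: multiplying the bipartite adjacency by its transpose counts meta-path instances, from which a PathSim-style similarity follows (the formula is the standard one from the PathSim literature; treating it as the paper's exact measure is an assumption):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

# Toy user-tag bipartite adjacency: U[u][t] = 1 if user u used tag t.
U = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]

# Commuting matrix for the meta-path user→tag→user: M = U · Uᵀ.
# M[i][j] counts path instances, i.e. tags shared by users i and j.
M = matmul(U, transpose(U))

def pathsim(M, i, j):
    """PathSim-style normalised meta-path similarity between nodes i and j."""
    return 2 * M[i][j] / (M[i][i] + M[j][j])

print(M[0][1])                     # 1 shared tag between users 0 and 1
print(round(pathsim(M, 0, 1), 2))  # 0.5
```

Precomputing `M` once is what makes repeated path-similarity queries cheap, which is the computational-performance point made above.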
An accurate plasma current profile has irreplaceable value for the steady-state operation of the plasma. In this study, plasma current tomography based on Bayesian inference is applied to the HL-2A device and used to reconstruct the plasma current profile. Two different Bayesian probability priors are tried, namely the Conditional Auto-Regressive (CAR) prior and the Advanced Squared Exponential (ASE) kernel prior. Compared with the CAR prior, the ASE kernel prior adopts nonstationary hyperparameters and introduces the current profile of a reference discharge into the hyperparameters, which makes the shape of the current profile more flexible in space. The results indicate that the ASE prior incorporates more information, reduces the probability of unreasonable solutions, and achieves higher reconstruction accuracy.
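The contrast between stationary and nonstationary kernels can be sketched with a Gibbs-form squared-exponential kernel, whose position-dependent length scale captures the idea of nonstationary hyperparameters; the paper's exact ASE construction (including the reference-discharge profile) is not reproduced here:

```python
import math

def se_kernel(xs, sigma=1.0, length=0.2):
    """Stationary squared-exponential kernel matrix."""
    return [[sigma ** 2 * math.exp(-0.5 * ((a - b) / length) ** 2)
             for b in xs] for a in xs]

def nonstationary_se_kernel(xs, sigma=1.0, lengths=None):
    """Gibbs-form SE kernel with a position-dependent length scale — a sketch
    of the nonstationarity idea behind the ASE prior (assumed profile)."""
    L = lengths or [0.1 + 0.3 * x for x in xs]      # assumed length-scale profile
    K = []
    for i, a in enumerate(xs):
        row = []
        for j, b in enumerate(xs):
            li, lj = L[i], L[j]
            pre = math.sqrt(2 * li * lj / (li ** 2 + lj ** 2))
            row.append(sigma ** 2 * pre * math.exp(-((a - b) ** 2) / (li ** 2 + lj ** 2)))
        K.append(row)
    return K

xs = [i / 10 for i in range(11)]
K = nonstationary_se_kernel(xs)
print(all(abs(K[i][j] - K[j][i]) < 1e-12
          for i in range(11) for j in range(11)))   # True: symmetric
print(K[0][0])                                      # 1.0: σ² on the diagonal
```

Short length scales where the profile varies quickly and long ones where it is smooth are what make the reconstructed current shape "more flexible in space".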
Recent advancements in satellite technologies and the declining cost of access to space have led to the emergence of large satellite constellations in Low Earth Orbit (LEO). However, these constellations often rely on a bent-pipe architecture, resulting in high communication costs. Existing onboard inference architectures suffer from low accuracy and inflexibility in the deployment and management of in-orbit applications. To address these challenges, we propose a cloud-native satellite design specifically tailored for Earth observation tasks, enabling diverse computing paradigms. In this work, we present a case study of a satellite-ground collaborative inference system deployed in the Tiansuan constellation, demonstrating a remarkable 50% accuracy improvement and a substantial 90% data reduction. Our work also sheds light on in-orbit energy consumption, where in-orbit computing accounts for 17% of the total onboard energy. Our approach represents a significant advancement in cloud-native satellites, aiming to enhance the accuracy of in-orbit computing while simultaneously reducing communication costs.
The genus Silurus, an important group of catfish, exhibits a heterogeneous distribution in Eurasian freshwater systems. This group includes economically important and endangered species, thereby attracting considerable scientific interest. Despite this interest, the lack of a comprehensive phylogenetic framework impedes our understanding of the mechanisms underlying the extensive diversity found within this genus. Herein, we analyzed 89 newly sequenced and 20 previously published mitochondrial genomes (mitogenomes) from 13 morphological species to reconstruct the phylogenetic relationships, biogeographic history, and species diversity of Silurus. Our phylogenetic reconstructions identified eight clades, supported by both maximum-likelihood and Bayesian inference. Sequence-based species delimitation analyses yielded multiple molecular operational taxonomic units (MOTUs) in several taxa, including the Silurus asotus complex (four MOTUs) and Silurus microdorsalis (two MOTUs), suggesting that species diversity is underestimated in the genus. A reconstructed time-calibrated tree of Silurus species provided an age estimate for the most recent common ancestor of approximately 37.61 million years ago (Ma), with divergences among clades within the genus occurring between 11.56 Ma and 29.44 Ma, and divergences among MOTUs within species occurring between 3.71 Ma and 11.56 Ma. Biogeographic reconstructions suggested that the ancestral area for the genus likely encompassed China and the Korean Peninsula, with multiple inferred dispersal events to Europe and Central and Western Asia between 21.78 Ma and 26.67 Ma and to Japan between 2.51 Ma and 18.42 Ma. Key factors such as the Eocene-Oligocene extinction event, the onset and intensification of the monsoon system, and glacial cycles associated with sea-level fluctuations have likely played significant roles in shaping the evolutionary history of the genus Silurus.
The estimation of sparse underwater acoustic (UWA) channels can be regarded as an inference problem involving hidden variables within the Bayesian framework. While classical sparse Bayesian learning (SBL), derived through the expectation-maximization (EM) algorithm, has been widely employed for UWA channel estimation, it still deviates from the true posterior expectation of the channel. In this paper, we propose an approach that combines variational inference (VI) and Markov chain Monte Carlo (MCMC) methods to provide a more accurate posterior estimate. Specifically, SBL is first re-derived with VI, allowing us to replace the posterior distribution of the hidden variables with a variational distribution. We then determine the full conditional probability distribution of each variable in the variational distribution and iteratively perform random Gibbs sampling in MCMC until the Markov chain converges. Simulation and experimental results indicate that our estimation method achieves a lower mean square error and bit error rate than the classical SBL approach, while demonstrating an acceptable convergence speed.
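The Gibbs-sampling step can be illustrated on a one-dimensional SBL-style hierarchy (one channel tap, unit noise variance — both assumptions): each full conditional is conjugate, so the chain simply alternates Gaussian and Gamma draws:

```python
import math
import random

def gibbs_sbl_tap(y, iters=5000, a=1e-3, b=1e-3, seed=5):
    """Minimal Gibbs sampler for one 'channel tap' under an SBL-style prior:
    y = h + N(0,1),  h ~ N(0, 1/gamma),  gamma ~ Gamma(a, b).
    A one-dimensional sketch of the VI+MCMC idea, not the full channel model."""
    random.seed(seed)
    gamma, hs = 1.0, []
    for _ in range(iters):
        # h | gamma, y  ~  N(y / (1 + gamma), 1 / (1 + gamma))
        var = 1.0 / (1.0 + gamma)
        h = random.gauss(y * var, math.sqrt(var))
        # gamma | h  ~  Gamma(a + 1/2, rate = b + h^2 / 2)
        gamma = random.gammavariate(a + 0.5, 1.0 / (b + h * h / 2))
        hs.append(h)
    burn = iters // 2                       # discard burn-in samples
    return sum(hs[burn:]) / (iters - burn)  # posterior-mean estimate

print(abs(gibbs_sbl_tap(0.05)) < 0.5)  # True: a weak observation is shrunk to ~0
```

The adaptive precision `gamma` is what produces sparsity: taps with weak evidence are shrunk toward zero, while strongly observed taps keep their magnitude.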
Media convergence works by processing information from different modalities and applying it to different domains. It is difficult for a conventional knowledge graph to utilise multi-media features, because introducing a large amount of information from other modalities reduces the effectiveness of representation learning and makes knowledge graph inference less effective. To address this issue, an inference method based on the Media Convergence and Rule-guided Joint Inference model (MCRJI) is proposed. The authors not only converge multi-media features of entities but also introduce logic rules to improve the accuracy and interpretability of link prediction. First, a multi-headed self-attention approach is used to obtain the attention of different media features of entities during semantic synthesis. Second, logic rules of different lengths are mined from the knowledge graph to learn new entity representations. Finally, knowledge graph inference is performed based on entity representations that converge multi-media features. Numerous experimental results show that MCRJI outperforms other advanced baselines in using multi-media features for knowledge graph inference, demonstrating that MCRJI provides an excellent approach for knowledge graph inference with converged multi-media features.
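The semantic-synthesis step can be sketched with single-head scaled dot-product attention over an entity's media-feature vectors — a simplification of the multi-headed version, with toy feature values:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    """Scaled dot-product attention over a set of feature vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = softmax([sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                          for k in keys])
        out.append([sum(w * v[i] for w, v in zip(scores, values))
                    for i in range(len(values[0]))])
    return out

# Three media features of one entity (e.g. text, image, audio — toy 4-d vectors):
feats = [[1.0, 0.0, 0.0, 0.0],
         [0.9, 0.1, 0.0, 0.0],
         [0.0, 0.0, 1.0, 0.0]]
fused = attention(feats, feats, feats)  # self-attention: Q = K = V
# The two similar media features reinforce each other in the fused vector:
print(fused[0][0] > fused[0][2])  # True
```

A multi-headed version would run several such attentions with different learned projections and concatenate the outputs; the weighting principle is the same.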
Recently, deep learning-based semantic communication has garnered widespread attention, with numerous systems designed for transmitting diverse data sources, including text, images, and speech. While efforts have been directed toward improving system performance, many studies have concentrated on enhancing the structure of the encoder and decoder. However, this often overlooks the resulting increase in model complexity, imposing additional storage and computational burdens on smart devices. Furthermore, existing work tends to prioritize explicit semantics, neglecting the potential of implicit semantics. This paper aims to easily and effectively enhance the receiver's decoding capability without modifying the encoder and decoder structures. We propose a novel semantic communication system with variational neural inference for text transmission. Specifically, we introduce a simple but effective variational neural inferer at the receiver to infer the latent semantic information within the received text. This information is then utilized to assist the decoding process. Simulation results show a significant enhancement in system performance and improved robustness.
When designing solar systems and assessing the effectiveness of their many uses, estimating solar irradiance is a crucial first step. This study examined three approaches (ANN, GA-ANN, and ANFIS) for estimating daily global solar radiation (GSR) in the south of Algeria: Adrar, Ouargla, and Bechar. The hybrid GA-ANN model, based on genetic-algorithm optimization, was developed to improve the ANN model. The GA-ANN and ANFIS models performed better than the standalone ANN model, with GA-ANN better suited for forecasting at all sites; in the testing phase it achieved a Coefficient of Determination (R) of 0.9005, a Mean Absolute Percentage Error (MAPE) of 8.40%, and a Relative Root Mean Square Error (rRMSE) of 12.56%. Nevertheless, the ANFIS model outperformed the GA-ANN model in forecasting daily GSR, with the best indicator values during testing being R = 0.9374, MAPE = 7.78%, and rRMSE = 10.54%. Overall, we may conclude that the performance of the initial standalone ANN model for forecasting solar radiation was improved, and the results obtained after injecting the genetic algorithm into the ANN to optimize its weights were satisfactory. The model can be used to forecast daily GSR in dry and other climates and may also be helpful in selecting solar energy system installations and sizes.
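The core of a GA-ANN hybrid — evolving network weights rather than training them by gradient descent — can be sketched on a toy curve-fitting task; the network size, GA settings, and target function are all assumptions standing in for the GSR data:

```python
import math
import random

random.seed(2)
# Toy regression target the 'network' must learn (stand-in for GSR data):
data = [(x / 10, math.sin(x / 10)) for x in range(20)]

def net(w, x):
    """A tiny 1-2-1 tanh network; w holds its 7 weights."""
    h1 = math.tanh(w[0] * x + w[1])
    h2 = math.tanh(w[2] * x + w[3])
    return w[4] * h1 + w[5] * h2 + w[6]

def mse(w):
    return sum((net(w, x) - y) ** 2 for x, y in data) / len(data)

# Plain GA: elitist selection, blend crossover, Gaussian mutation.
pop = [[random.uniform(-1, 1) for _ in range(7)] for _ in range(40)]
init = min(mse(w) for w in pop)          # best fitness before evolution
for gen in range(60):
    pop.sort(key=mse)
    elite = pop[:10]                     # keep the 10 fittest weight vectors
    children = []
    while len(children) < 30:
        p1, p2 = random.sample(elite, 2)
        children.append([(a + b) / 2 + random.gauss(0, 0.1)
                         for a, b in zip(p1, p2)])
    pop = elite + children
best = min(pop, key=mse)
print(mse(best) <= init)  # True: elitism never degrades the best candidate
```

In the paper's setting the same loop would evolve the weights of the GSR-forecasting ANN, with the training-set error as the fitness function.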
The Stokes production coefficient (E_6) constitutes a critical parameter within Mellor-Yamada type (MY-type) Langmuir turbulence (LT) parameterization schemes, significantly affecting the simulation of turbulent kinetic energy, the turbulent length scale, and the vertical diffusivity coefficient for turbulent kinetic energy in the upper ocean. However, the accurate determination of its value remains a pressing scientific challenge. This study adopted an innovative approach by leveraging deep learning to address the challenge of inferring E_6. By integrating the information of the turbulent length scale equation into a physics-informed neural network (PINN), we achieved an accurate and physically meaningful inference of E_6. Multiple cases were examined to assess the feasibility of PINN for this task, revealing that under optimal settings the average mean squared error of the E_6 inference was only 0.01, attesting to the effectiveness of PINN. The optimal hyperparameter combination was identified using the Tanh activation function, along with a spatiotemporal sampling interval of 1 s and 0.1 m. This resulted in a substantial reduction in the average bias of the E_6 inference, by a factor of O(10^1) to O(10^2) compared with other combinations. This study underscores the potential application of PINN in intricate marine environments, offering a novel and efficient method for optimizing MY-type LT parameterization schemes.
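The underlying idea — inferring an unknown coefficient by minimising the residual of a governing equation evaluated on data — can be illustrated with a toy relaxation equation in place of the turbulence closure. This is a stand-in for PINN training (a grid scan instead of a neural network), and every number below is illustrative:

```python
import math

# Synthetic 'observations' of a toy equation dk/dt = -k + E6·S with S = 1,
# k(0) = 0, and true E6 = 6 (all values illustrative):
E6_TRUE, DT = 6.0, 0.01
k = [E6_TRUE * (1 - math.exp(-i * DT)) for i in range(500)]

def physics_loss(e6):
    """Mean squared residual of the governing equation — the quantity a PINN
    would minimise jointly with a data-fit term."""
    res = [(k[i + 1] - k[i]) / DT + k[i] - e6 for i in range(len(k) - 1)]
    return sum(r * r for r in res) / len(res)

# Infer E6 by scanning the physics residual over candidate values:
candidates = [c / 10 for c in range(0, 101)]
best = min(candidates, key=physics_loss)
print(best)  # 6.0
```

A PINN replaces the grid scan with gradient descent on network weights plus the unknown coefficient, but the physics-residual term being minimised plays exactly this role.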
Recently, weak supervision has received growing attention in the field of salient object detection due to the convenience of labelling. However, there is a large performance gap between weakly supervised and fully supervised salient object detectors, because scribble annotations can only provide very limited foreground/background information. Therefore, an intuitive idea is to infer annotations that cover more complete object and background regions for training. To this end, a label inference strategy is proposed based on the assumption that pixels with similar colours and close positions should have consistent labels. Specifically, we first performed k-means clustering on both the colours and coordinates of the original annotations, and then assigned the same labels to points whose colours are similar to a colour cluster centre and whose positions are near a coordinate cluster centre. Next, the same annotations were further assigned to pixels with similar colours within each kernel neighbourhood. Extensive experiments on six benchmarks demonstrate that our method significantly improves performance and achieves state-of-the-art results.
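The label inference strategy above can be sketched with plain k-means over stacked (colour, coordinate) features, followed by nearest-centre label propagation; feature scaling and the kernel-neighbourhood step are omitted, and the pixel values are toy data:

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k=2, iters=20, seed=4):
    """Plain k-means on stacked (intensity, x, y) feature vectors."""
    random.seed(seed)
    centres = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:   # assign each point to its nearest centre
            clusters[min(range(k), key=lambda c: dist2(p, centres[c]))].append(p)
        centres = [[sum(p[d] for p in cl) / len(cl) for d in range(len(points[0]))]
                   if cl else centres[j] for j, cl in enumerate(clusters)]
    return centres

# Scribble-annotated pixels as (intensity, x, y); two well-separated groups:
fg = [[0.9, 10, 10], [0.8, 11, 12], [0.85, 9, 11]]   # foreground scribbles
bg = [[0.1, 50, 52], [0.2, 51, 50], [0.15, 49, 51]]  # background scribbles
centres = kmeans(fg + bg, k=2)
fg_idx = max(range(2), key=lambda c: centres[c][0])  # brighter centre = fg (toy assumption)

def infer_label(pixel):
    """Propagate the label of the nearest (colour, position) cluster centre."""
    return "fg" if dist2(pixel, centres[fg_idx]) < dist2(pixel, centres[1 - fg_idx]) else "bg"

print(infer_label([0.82, 12, 9]))   # fg
print(infer_label([0.12, 48, 53]))  # bg
```

Unlabelled pixels near a cluster centre in both colour and position inherit its label, which is exactly how the sparse scribbles are grown into fuller pseudo-annotations.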
Aiming at the shortcoming that a traditional industrial manipulator using off-line programming cannot adapt to changes in the external environment, key technologies such as machine vision and manipulator control are studied, and a complete manipulator vision tracking system is designed. Firstly, the Denavit-Hartenberg (D-H) parameter method is used to construct the model of the manipulator and analyze its forward and inverse kinematics equations. At the same time, a binocular camera is used to obtain the three-dimensional position of the target. Secondly, in order to make the manipulator track the target more accurately, a fuzzy adaptive square root unscented Kalman filter (FSRUKF) is proposed to estimate the target state. Finally, the manipulator tracking system is built using position-based visual servoing. Simulation experiments show that FSRUKF converges faster and with less error than the square root unscented Kalman filter (SRUKF), which meets the application requirements of the manipulator tracking system; these requirements are also basically met in practical experiments.
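The D-H modelling step can be sketched directly: each joint contributes one standard Denavit-Hartenberg transform, and chaining them yields the end-effector pose. The link parameters below are illustrative, not those of the paper's manipulator:

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform (theta, d, a, alpha)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0]]

def matmul4(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_rows):
    """Chain the per-joint D-H transforms; return the end-effector position."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]  # identity
    for row in dh_rows:
        T = matmul4(T, dh_matrix(*row))
    return [T[0][3], T[1][3], T[2][3]]

# A toy planar 2-link arm (link lengths 1.0 and 0.5; parameters illustrative):
links = [(math.pi / 2, 0.0, 1.0, 0.0),   # joint 1 rotated 90°
         (0.0,         0.0, 0.5, 0.0)]   # joint 2 straight
x, y, z = forward_kinematics(links)
print(round(x, 6), round(y, 6), round(z, 6))  # ≈ 0.0 1.5 0.0
```

Inverse kinematics then solves the reverse problem — joint angles from a target pose — which, combined with the binocular 3D measurement, closes the position-based visual-servoing loop.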
Funding: supported by the National Natural Science Foundation of China (61471343) and the National Key Technology Research and Development Program of the Ministry of Science and Technology of China (2014BAK14B03).
Funding: This work is supported in part by the National Key Research and Development Program of China under Grant No. 2016YFC0800805 and the National Natural Science Foundation of China under Grant No. 61690201.
Abstract: Deciding the penalty of a law case has always been a complex process, which may involve much coordination. Despite judicial studies based on rules and conditions, artificial intelligence and machine learning have rarely been applied to the problem of penalty inference, leaving the large number of law cases, as well as the various factors among them, untouched. This paper aims to incorporate state-of-the-art artificial intelligence methods to explore to what extent this problem can be alleviated. We first analyze 145,000 law cases and observe that there are two sorts of labels, temporal labels and spatial labels, which have unique characteristics. Temporal and spatial labels tend to converge towards the final penalty, on condition that the cases are of the same category. In light of this, we propose a latent-class probabilistic generative model, namely the Penalty Topic Model (PTM), to infer the topic of law cases and the temporal and spatial patterns of topics embedded in the case judgment. The learnt knowledge is then utilized to automatically cluster all cases in a unified way. We conduct extensive experiments to evaluate the performance of the proposed PTM on a real large-scale dataset of law cases. The experimental results show the superiority of our proposed PTM.
Abstract: Traditional global sensitivity analysis (GSA) neglects the epistemic uncertainties associated with the probabilistic characteristics (i.e. distribution type and its parameters) of input rock properties, which emanate from the small size of datasets, while mapping the relative importance of properties to the model response. This paper proposes an augmented Bayesian multi-model inference (BMMI) coupled with GSA methodology (BMMI-GSA) to address this issue by estimating the imprecision in the moment-independent sensitivity indices of rock structures arising from the small size of input data. The methodology employs BMMI to quantify the epistemic uncertainties associated with the model type and parameters of input properties. The estimated uncertainties are propagated when estimating the imprecision in the moment-independent Borgonovo's indices by employing a reweighting approach on candidate probabilistic models. The proposed methodology is showcased for a rock slope prone to stress-controlled failure in the Himalayan region of India. It was superior to conventional GSA (which neglects all epistemic uncertainties) and Bayesian coupled GSA (B-GSA) (which neglects model uncertainty) owing to its capability to incorporate the uncertainties in both the model type and the parameters of properties. The imprecise Borgonovo's indices estimated via the proposed methodology provide confidence intervals of the sensitivity indices instead of fixed-point estimates, which makes the user better informed in data collection efforts. Analyses performed with varying sample sizes suggested that the uncertainties in sensitivity indices reduce significantly with increasing sample size, and accurate importance ranking of properties was only possible with samples of large sizes. Further, the impact of prior knowledge, in terms of prior ranges and distributions, was significant; hence, any related assumption should be made carefully.
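The reweighting idea can be illustrated with a minimal sketch of posterior model weights over candidate distribution types; this is a simplified stand-in for the full BMMI machinery, and the function and model names are illustrative.

```python
import math

def posterior_model_weights(log_likes, priors=None):
    """Posterior probabilities of candidate distribution models.

    log_likes: {model_name: total log-likelihood of the observed data}.
    priors:    optional {model_name: prior probability}; uniform if omitted.
    Each model's weight is prior * likelihood, normalised over the
    candidate set (Bayes' rule over a discrete model space).
    """
    if priors is None:
        priors = {m: 1.0 / len(log_likes) for m in log_likes}
    # subtract the max log-likelihood for numerical stability
    ll_max = max(log_likes.values())
    un = {m: priors[m] * math.exp(ll - ll_max) for m, ll in log_likes.items()}
    z = sum(un.values())
    return {m: w / z for m, w in un.items()}
```

In a BMMI-style workflow, sensitivity indices computed under each candidate model would then be averaged using these weights, yielding intervals rather than point estimates.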
Abstract: A novel extended Lindley lifetime model, named the Marshall-Olkin-Lindley (MOL) model, that exhibits unimodal or decreasing density shapes as well as increasing, bathtub or unimodal-then-bathtub failure rates is studied. In this research, using a progressively Type-II censored sample, various inferences for the MOL model parameters are introduced. Utilizing the maximum likelihood method as a classical approach, the estimators of the model parameters and various reliability measures are investigated. Under both symmetric and asymmetric loss functions, the Bayesian estimates are obtained using the Markov chain Monte Carlo (MCMC) technique with the assumption of independent gamma priors. From the Fisher information and the simulated Markovian chains, the approximate asymptotic interval and the highest posterior density interval, respectively, of each unknown parameter are calculated. Via an extensive simulation study, the usefulness of the various suggested strategies is assessed with respect to evaluation metrics such as mean squared errors, mean relative absolute biases, average confidence lengths, and coverage percentages. Comparing the Bayesian estimations based on the asymmetric loss function to the traditional technique or the symmetric-loss-function-based Bayesian estimations, the analysis demonstrates that asymmetric-loss-function-based Bayesian estimations are preferred. Finally, two data sets, representing vinyl chloride and repairable mechanical equipment items, have been investigated to support the proposed approaches and show the superiority of the proposed model compared to fourteen other lifetime models.
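The MCMC step can be illustrated with a minimal random-walk Metropolis sketch on a simpler conjugate problem (exponential lifetimes with a gamma prior) rather than the full MOL posterior; because the posterior is known in closed form here, the chain can be sanity-checked. All settings are illustrative.

```python
import math, random

def metropolis_rate(data, a0=2.0, b0=1.0, n_iter=20000, seed=0):
    """Random-walk Metropolis for the rate of exponential lifetimes.

    Prior: rate ~ Gamma(a0, b0).  The posterior is conjugate,
    Gamma(a0 + n, b0 + sum(data)), which lets us check the chain.
    Returns the chain of sampled rates after burn-in.
    """
    rng = random.Random(seed)
    n, s = len(data), sum(data)

    def log_post(lam):
        if lam <= 0:
            return -math.inf
        # gamma log-prior + exponential log-likelihood (up to a constant)
        return (a0 - 1) * math.log(lam) - b0 * lam + n * math.log(lam) - lam * s

    lam, chain = 1.0, []
    for it in range(n_iter):
        prop = lam + rng.gauss(0.0, 0.3)   # symmetric random-walk proposal
        if math.log(rng.random()) < log_post(prop) - log_post(lam):
            lam = prop                      # accept
        if it >= n_iter // 4:               # discard burn-in
            chain.append(lam)
    return chain
```

For the MOL model the same accept/reject pattern applies, just with the MOL log-likelihood and independent gamma log-priors in `log_post`.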
Funding: Part of the programme Mistra Digital Forests and of the Center for Research-based Innovation Smart Forest: Bringing Industry 4.0 to the Norwegian forest sector (NFR SFI project no. 309671, smartforest.no).
Abstract: Remotely sensed data are frequently used for predicting and mapping ecosystem characteristics, and spatially explicit wall-to-wall information is sometimes proposed as the best possible source of information for decision-making. However, wall-to-wall information typically relies on model-based prediction, and several features of model-based prediction should be understood before extensively relying on this type of information. One such feature is that model-based predictors can be considered both unbiased and biased at the same time, which has important implications in several areas of application. In this discussion paper, we first describe the conventional model-unbiasedness paradigm that underpins most prediction techniques using remotely sensed (or other) auxiliary data. From this point of view, model-based predictors are typically unbiased. Secondly, we show that for specific domains, identified based on their true values, the same model-based predictors can be considered biased, and sometimes severely so. We suggest distinguishing between conventional model-bias, defined in the statistical literature as the difference between the expected value of a predictor and the expected value of the quantity being predicted, and design-bias of model-based estimators, defined as the difference between the expected value of a model-based estimator and the true value of the quantity being predicted. We show that model-based estimators (or predictors) are typically design-biased, and that there is a trend in the design-bias from overestimating small true values to underestimating large true values. Further, we give examples of applications where it is important to acknowledge this and to potentially make adjustments to correct for the design-bias trend. We argue that relying entirely on conventional model-unbiasedness may lead to mistakes in several areas of application that use predictions from remotely sensed data.
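The design-bias trend described above is easy to reproduce numerically. The following sketch (illustrative, not from the paper) fits an ordinary least squares model and shows that, for domains defined by small or large true values, the predictor over- and underestimates respectively, even though it is model-unbiased overall.

```python
import random

def design_bias_demo(n=5000, seed=42):
    """Returns (mean bias for small-true-value domain,
                mean bias for large-true-value domain)."""
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in range(n)]              # auxiliary variable
    y = [2.0 + 0.5 * xi + rng.gauss(0, 1) for xi in x]   # true values

    # ordinary least squares fit of y on x
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    pred = [a + b * xi for xi in x]

    ys = sorted(y)
    lo, hi = ys[n // 4], ys[3 * n // 4]  # quartiles of the TRUE values
    bias_small = (sum(p - t for p, t in zip(pred, y) if t < lo)
                  / sum(1 for t in y if t < lo))
    bias_large = (sum(p - t for p, t in zip(pred, y) if t > hi)
                  / sum(1 for t in y if t > hi))
    return bias_small, bias_large
```

Selecting units by small true values selects negative noise, which the regression prediction cannot see, so those units are overestimated on average (and symmetrically for large true values).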
Funding: Supported by the National Natural Science Foundation of China (No. 62271274).
Abstract: In the tag recommendation task on academic platforms, existing methods disregard users' customized preferences and extract tags based solely on the content of the articles. Moreover, they use co-occurrence techniques and try to combine nodes' textual content for modelling, but still do not directly model the many interactions in network learning. To address these issues, we present a novel framework that more thoroughly integrates user preferences and citation networks into article labelling recommendations. Specifically, we first employ path similarity to quantify the degree of similarity between user labelling preferences and articles in the citation network. Then, the Commuting Matrix for massive node-pair paths is used to improve computational performance. Finally, the two commonalities mentioned above are combined with the interaction between papers and labels based on the additivity of the Poisson distribution. In addition, we solve the model's parameters by applying variational inference. Experimental results demonstrate that our suggested framework significantly outperforms the state-of-the-art baselines on two real datasets by efficiently merging the three types of relational data. Based on Area Under Curve (AUC) and Mean Average Precision (MAP) analysis, the performance of the suggested method is evaluated and demonstrated to have greater solving efficiency than current techniques.
Funding: Supported by the National MCF Energy R&D Program of China (Nos. 2018YFE0301105, 2022YFE03010002 and 2018YFE0302100), the National Key R&D Program of China (Nos. 2022YFE03070004 and 2022YFE03070000), and the National Natural Science Foundation of China (Nos. 12205195, 12075155 and 11975277).
Abstract: An accurate plasma current profile has irreplaceable value for the steady-state operation of the plasma. In this study, plasma current tomography based on Bayesian inference is applied to an HL-2A device and used to reconstruct the plasma current profile. Two different Bayesian probability priors are tried, namely the Conditional Auto-Regressive (CAR) prior and the Advanced Squared Exponential (ASE) kernel prior. Compared to the CAR prior, the ASE kernel prior adopts nonstationary hyperparameters and introduces the current profile of a reference discharge into the hyperparameters, which makes the shape of the current profile more flexible in space. The results indicate that the ASE prior incorporates more information, reduces the probability of unreasonable solutions, and achieves higher reconstruction accuracy.
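Although the priors in the study (CAR and ASE) are more elaborate, the core computation of Bayesian current tomography can be sketched as a linear-Gaussian inversion with a squared-exponential kernel prior; the function names and kernel settings below are illustrative assumptions, not the HL-2A implementation.

```python
import numpy as np

def se_kernel(n, length=1.0, amp=1.0):
    """Squared-exponential covariance over n profile nodes."""
    idx = np.arange(n)
    cov = amp * np.exp(-0.5 * (idx[:, None] - idx[None, :]) ** 2 / length ** 2)
    return cov + 1e-8 * np.eye(n)  # jitter for numerical stability

def map_current_profile(A, d, sigma_noise, K):
    """MAP estimate of a discretised current profile j from magnetic data d.

    Model: d = A @ j + noise, noise ~ N(0, sigma_noise^2 I);
    prior: j ~ N(0, K) with K a smoothing kernel.
    Posterior mean: (A^T A / s^2 + K^{-1})^{-1} A^T d / s^2.
    """
    s2 = sigma_noise ** 2
    precision = A.T @ A / s2 + np.linalg.inv(K)
    return np.linalg.solve(precision, A.T @ d / s2)
```

The choice of kernel (stationary here, nonstationary for the ASE prior) is exactly what controls how flexible the reconstructed profile can be in space.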
Funding: Supported by the National Natural Science Foundation of China (62032003).
Abstract: Recent advancements in satellite technologies and the declining cost of access to space have led to the emergence of large satellite constellations in Low Earth Orbit (LEO). However, these constellations often rely on a bent-pipe architecture, resulting in high communication costs. Existing onboard inference architectures suffer from low accuracy and inflexibility in the deployment and management of in-orbit applications. To address these challenges, we propose a cloud-native satellite design specifically tailored for Earth observation tasks, enabling diverse computing paradigms. In this work, we present a case study of a satellite-ground collaborative inference system deployed in the Tiansuan constellation, demonstrating a remarkable 50% accuracy improvement and a substantial 90% data reduction. Our work also sheds light on in-orbit energy use, where in-orbit computing accounts for 17% of the total onboard energy consumption. Our approach represents a significant advancement in cloud-native satellites, aiming to enhance the accuracy of in-orbit computing while simultaneously reducing communication costs.
Funding: National Natural Science Foundation of China (32000306); Project of Innovation Team of Survey and Assessment of the Pearl River Fishery Resources (2023TD-10); Natural Science Foundation of Shaanxi Province (2023-JC-YB-325).
Abstract: The genus Silurus, an important group of catfish, exhibits a heterogeneous distribution in Eurasian freshwater systems. This group includes economically important and endangered species, thereby attracting considerable scientific interest. Despite this interest, the lack of a comprehensive phylogenetic framework impedes our understanding of the mechanisms underlying the extensive diversity found within this genus. Herein, we analyzed 89 newly sequenced and 20 previously published mitochondrial genomes (mitogenomes) from 13 morphological species to reconstruct the phylogenetic relationships, biogeographic history, and species diversity of Silurus. Our phylogenetic reconstructions identified eight clades, supported by both maximum-likelihood and Bayesian inference. Sequence-based species delimitation analyses yielded multiple molecular operational taxonomic units (MOTUs) in several taxa, including the Silurus asotus complex (four MOTUs) and Silurus microdorsalis (two MOTUs), suggesting that species diversity is underestimated in the genus. A reconstructed time-calibrated tree of Silurus species provided an age estimate for the most recent common ancestor of approximately 37.61 million years ago (Ma), with divergences among clades within the genus occurring between 11.56 Ma and 29.44 Ma, and divergences among MOTUs within species occurring between 3.71 Ma and 11.56 Ma. Biogeographic reconstructions suggested that the ancestral area of the genus likely encompassed China and the Korean Peninsula, with multiple inferred dispersal events to Europe and Central and Western Asia between 21.78 Ma and 26.67 Ma, and to Japan between 2.51 Ma and 18.42 Ma. Key factors such as the Eocene-Oligocene extinction event, the onset and intensification of the monsoon system, and glacial cycles associated with sea-level fluctuations have likely played significant roles in shaping the evolutionary history of the genus Silurus.
Funding: Funded by the Excellent Youth Science Fund of Heilongjiang Province (Grant No. YQ2022F001).
Abstract: The estimation of sparse underwater acoustic (UWA) channels can be regarded as an inference problem involving hidden variables within the Bayesian framework. While classical sparse Bayesian learning (SBL), derived through the expectation-maximization (EM) algorithm, has been widely employed for UWA channel estimation, it still differs from the true posterior expectation of the channel. In this paper, we propose an approach that combines variational inference (VI) and Markov chain Monte Carlo (MCMC) methods to provide a more accurate posterior estimate. Specifically, SBL is first re-derived with VI, allowing us to replace the posterior distribution of the hidden variables with a variational distribution. Then, we determine the full conditional probability distribution for each variable in the variational distribution and iteratively perform random Gibbs sampling in MCMC to converge the Markov chain. The results of simulations and experiments indicate that our estimation method achieves a lower mean square error and bit error rate than the classic SBL approach, while demonstrating an acceptable convergence speed.
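The Gibbs step can be illustrated on a toy target rather than the full SBL posterior: the sketch below iterates the full conditionals of a bivariate normal, which is the same sampling pattern (draw each variable from its full conditional in turn) that the method applies to the variational distribution's variables.

```python
import random

def gibbs_bivariate_normal(rho, n_iter=20000, seed=0):
    """Gibbs sampler for a zero-mean, unit-variance bivariate normal.

    Full conditionals:
        x | y ~ N(rho * y, 1 - rho^2),   y | x ~ N(rho * x, 1 - rho^2)
    Returns the sampled (x, y) pairs after burn-in.
    """
    rng = random.Random(seed)
    sd = (1 - rho ** 2) ** 0.5
    x = y = 0.0
    out = []
    for it in range(n_iter):
        x = rng.gauss(rho * y, sd)  # sample x from its full conditional
        y = rng.gauss(rho * x, sd)  # sample y from its full conditional
        if it >= n_iter // 4:       # discard burn-in
            out.append((x, y))
    return out
```

In the channel-estimation setting, each channel tap and hyperparameter plays the role of `x` and `y`, with its full conditional derived from the variational distribution.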
Funding: National College Students' Training Programs of Innovation and Entrepreneurship (Grant/Award Number: S202210022060); the CACMS Innovation Fund (Grant/Award Number: CI2021A00512); the National Natural Science Foundation of China (Grant/Award Number: 62206021).
Abstract: Media convergence works by processing information from different modalities and applying it to different domains. It is difficult for a conventional knowledge graph to utilise multi-media features because the introduction of a large amount of information from other modalities reduces the effectiveness of representation learning and makes knowledge graph inference less effective. To address this issue, an inference method based on the Media Convergence and Rule-guided Joint Inference model (MCRJI) is proposed. The authors not only converge multi-media features of entities but also introduce logic rules to improve the accuracy and interpretability of link prediction. First, a multi-headed self-attention approach is used to obtain the attention of different media features of entities during semantic synthesis. Second, logic rules of different lengths are mined from the knowledge graph to learn new entity representations. Finally, knowledge graph inference is performed based on the entity representations that converge multi-media features. Numerous experimental results show that MCRJI outperforms other advanced baselines in using multi-media features and knowledge graph inference, demonstrating that MCRJI provides an excellent approach for knowledge graph inference with converged multi-media features.
Funding: Supported in part by the National Science Foundation of China (NSFC) with grant no. 62271514; in part by the Science, Technology and Innovation Commission of Shenzhen Municipality with grant nos. JCYJ20210324120002007 and ZDSYS20210623091807023; and in part by the State Key Laboratory of Public Big Data with grant no. PBD2023-01.
Abstract: Recently, deep learning-based semantic communication has garnered widespread attention, with numerous systems designed for transmitting diverse data sources, including text, images, and speech. While efforts have been directed toward improving system performance, many studies have concentrated on enhancing the structure of the encoder and decoder. However, this often overlooks the resulting increase in model complexity, imposing additional storage and computational burdens on smart devices. Furthermore, existing work tends to prioritize explicit semantics, neglecting the potential of implicit semantics. This paper aims to easily and effectively enhance the receiver's decoding capability without modifying the encoder and decoder structures. We propose a novel semantic communication system with variational neural inference for text transmission. Specifically, we introduce a simple but effective variational neural inferer at the receiver to infer the latent semantic information within the received text. This information is then utilized to assist in the decoding process. The simulation results show a significant enhancement in system performance and improved robustness.
Abstract: When designing solar systems and assessing the effectiveness of their many uses, estimating solar irradiance is a crucial first step. This study examined three approaches (ANN, GA-ANN, and ANFIS) for estimating daily global solar radiation (GSR) in the south of Algeria: Adrar, Ouargla, and Bechar. The proposed hybrid GA-ANN model, based on genetic-algorithm optimization, was developed to improve the ANN model. The GA-ANN and ANFIS models performed better than the standalone ANN-based model, with GA-ANN being better suited for forecasting at all sites; it performed best in the testing phase with a Coefficient of Determination (R) of 0.9005, a Mean Absolute Percentage Error (MAPE) of 8.40%, and a Relative Root Mean Square Error (rRMSE) of 12.56%. Nevertheless, the ANFIS model outperformed the GA-ANN model in forecasting daily GSR, with the best testing values of R = 0.9374, MAPE = 7.78%, and rRMSE = 10.54%. Overall, we may conclude that the performance of the initial stand-alone ANN model for forecasting solar radiation has been improved, and the results obtained after injecting the genetic algorithm into the ANN to optimize its weights were satisfactory. The model can be used to forecast daily GSR in dry and other climates, and may also be helpful in selecting solar energy system installations and sizes.
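The idea of injecting a genetic algorithm to optimize model weights can be sketched on a deliberately tiny "network" (a line fit); the population size, operators, and rates below are illustrative choices, not those of the study.

```python
import random

def ga_fit_line(xs, ys, pop_size=60, gens=120, seed=0):
    """Fit y ~ w0 + w1*x by a simple genetic algorithm.

    Individuals are (w0, w1) pairs; fitness is negative mean squared
    error; evolution uses tournament selection, blend crossover,
    Gaussian mutation, and elitism.
    """
    rng = random.Random(seed)

    def mse(w):
        return sum((w[0] + w[1] * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

    pop = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(gens):
        def pick():  # tournament selection of size 2
            a, b = rng.sample(pop, 2)
            return a if mse(a) < mse(b) else b
        children = []
        while len(children) < pop_size:
            p, q = pick(), pick()
            alpha = rng.random()  # blend crossover
            child = tuple(alpha * pi + (1 - alpha) * qi for pi, qi in zip(p, q))
            child = tuple(c + rng.gauss(0, 0.1) for c in child)  # mutation
            children.append(child)
        children[0] = min(pop, key=mse)  # elitism: keep the best so far
        pop = children
    return min(pop, key=mse)
```

In the GA-ANN setting, the chromosome would hold all ANN weights and the fitness would be the training-set forecasting error, but the evolutionary loop is the same.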
Funding: The National Key Research and Development Program of China under contract No. 2022YFC3105002; the National Natural Science Foundation of China under contract No. 42176020; a project from the Key Laboratory of Marine Environmental Information Technology, Ministry of Natural Resources, under contract No. 2023GFW-1047.
Abstract: The Stokes production coefficient (E_(6)) constitutes a critical parameter within Mellor-Yamada type (MY-type) Langmuir turbulence (LT) parameterization schemes, significantly affecting the simulation of turbulent kinetic energy, turbulent length scale, and the vertical diffusivity coefficient for turbulent kinetic energy in the upper ocean. However, the accurate determination of its value remains a pressing scientific challenge. This study adopted an innovative approach by leveraging deep learning technology to address the challenge of inferring E_(6). Through the integration of the turbulent length scale equation into a physics-informed neural network (PINN), we achieved an accurate and physically meaningful inference of E_(6). Multiple cases were examined to assess the feasibility of PINN for this task, revealing that under optimal settings the average mean squared error of the E_(6) inference was only 0.01, attesting to the effectiveness of PINN. The optimal hyperparameter combination was identified using the Tanh activation function, along with spatiotemporal sampling intervals of 1 s and 0.1 m. This resulted in a substantial reduction in the average bias of the E_(6) inference, by a factor of O(10^(1)) to O(10^(2)) compared with other combinations. This study underscores the potential application of PINNs in intricate marine environments, offering a novel and efficient method for optimizing MY-type LT parameterization schemes.
Abstract: Recently, weak supervision has received growing attention in the field of salient object detection due to the convenience of labelling. However, there is a large performance gap between weakly supervised and fully supervised salient object detectors, because scribble annotations can only provide very limited foreground/background information. Therefore, an intuitive idea is to infer annotations that cover more complete object and background regions for training. To this end, a label inference strategy is proposed based on the assumption that pixels with similar colours and close positions should have consistent labels. Specifically, the k-means clustering algorithm is first performed on both the colours and coordinates of the original annotations, and the same labels are then assigned to points whose colours are similar to a colour cluster centre and whose positions are near a coordinate cluster centre. Further, pixels with similar colours within each kernel neighbourhood are given the same annotations. Extensive experiments on six benchmarks demonstrate that our method can significantly improve performance and achieve state-of-the-art results.
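A minimal sketch of this label-inference strategy (simplified beyond the abstract: scalar colours, k = 2, and majority voting per cluster are all assumptions made for illustration) might look as follows.

```python
import random

def kmeans(points, k, n_iter=50, seed=0):
    """Plain Lloyd's k-means on feature tuples."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(p, centres[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # recompute centre as the cluster mean
                centres[i] = tuple(sum(v) / len(cl) for v in zip(*cl))
    return centres

def infer_labels(scribbles, pixels, k=2):
    """Propagate sparse scribble labels to unlabelled pixels.

    scribbles: [((colour, x, y), label)] sparse annotated points.
    pixels:    [(colour, x, y)] pixels to label.
    Clusters the scribbled points on colour + coordinates, gives each
    centre the majority label of its members, then labels every pixel by
    its nearest centre -- the 'similar colour and close position imply
    the same label' assumption.
    """
    feats = [f for f, _ in scribbles]
    centres = kmeans(feats, k)

    def nearest(p):
        return min(range(len(centres)),
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centres[c])))

    votes = [{} for _ in centres]
    for f, lab in scribbles:
        v = votes[nearest(f)]
        v[lab] = v.get(lab, 0) + 1
    centre_label = [max(v, key=v.get) if v else None for v in votes]
    return [centre_label[nearest(p)] for p in pixels]
```

The inferred dense labels would then replace the sparse scribbles as training targets for the detector.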
Funding: Supported by the Natural Science Basic Research Program of Shaanxi (2022JQ-593) and the Key Research and Development Program of Shaanxi (2022GY-089).
Abstract: To address the limitation that traditional industrial manipulators using off-line programming cannot adapt to changes in the external environment, key technologies such as machine vision and manipulator control are studied, and a complete manipulator vision tracking system is designed. First, the Denavit-Hartenberg (D-H) parameter method is used to construct the model of the manipulator and analyze its forward and inverse kinematics equations, while a binocular camera is used to obtain the three-dimensional position of the target. Second, in order to make the manipulator track the target more accurately, a fuzzy adaptive square root unscented Kalman filter (FSRUKF) is proposed to estimate the target state. Finally, the manipulator tracking system is built using position-based visual servoing. Simulation experiments show that FSRUKF converges faster and with less error than the square root unscented Kalman filter (SRUKF), and the system meets the application requirements of manipulator tracking in both simulation and practical experiments.
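The D-H modelling step can be illustrated concretely: the sketch below (illustrative, not the paper's code) builds the standard per-joint D-H homogeneous transform and chains the transforms to obtain the end-effector position, verified on a two-link planar arm.

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_rows):
    """Chain per-joint D-H transforms; return the end-effector (x, y, z)."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for row in dh_rows:
        T = mat_mul(T, dh_matrix(*row))
    return T[0][3], T[1][3], T[2][3]
```

Inverse kinematics then solves the reverse problem (joint angles from a desired end-effector pose), which for simple chains can be done in closed form from the same D-H model.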