Traditional global sensitivity analysis (GSA) neglects the epistemic uncertainties associated with the probabilistic characteristics (i.e., the distribution type and its parameters) of input rock properties, which arise from small datasets, while mapping the relative importance of properties to the model response. This paper proposes an augmented Bayesian multi-model inference (BMMI) coupled with GSA methodology (BMMI-GSA) to address this issue by estimating the imprecision in the moment-independent sensitivity indices of rock structures arising from the small size of input data. The methodology employs BMMI to quantify the epistemic uncertainties associated with the model type and parameters of input properties. The estimated uncertainties are propagated into the imprecision of moment-independent Borgonovo's indices by employing a reweighting approach on candidate probabilistic models. The proposed methodology is showcased for a rock slope prone to stress-controlled failure in the Himalayan region of India. It proved superior to conventional GSA (which neglects all epistemic uncertainties) and Bayesian coupled GSA (B-GSA) (which neglects model uncertainty) owing to its capability to incorporate uncertainties in both the model type and the parameters of properties. The imprecise Borgonovo's indices estimated via the proposed methodology provide confidence intervals for the sensitivity indices instead of fixed-point estimates, which makes the user better informed in data collection efforts. Analyses performed with varying sample sizes suggested that the uncertainties in sensitivity indices reduce significantly with increasing sample size; accurate importance ranking of the properties was only possible with large samples. Further, the impact of prior knowledge in terms of prior ranges and distributions was significant; hence, any related assumption should be made carefully.
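The core quantity here, Borgonovo's moment-independent index delta = 0.5 * E_X[ integral |f_Y(y) - f_{Y|X}(y)| dy ], can be estimated from a single Monte Carlo sample with a histogram estimator. The sketch below shows that estimator on a toy model; the model, input distributions, and bin counts are illustrative assumptions, and the paper's BMMI reweighting over candidate distributions would wrap this with importance weights rather than resampling.

```python
import numpy as np

def borgonovo_delta(x, y, n_xbins=20, n_ybins=50):
    """Single-loop histogram estimator of Borgonovo's moment-independent index
    delta = 0.5 * E_X[ integral |f_Y(y) - f_{Y|X}(y)| dy ]."""
    edges = np.histogram_bin_edges(y, bins=n_ybins)
    f_y, _ = np.histogram(y, bins=edges, density=True)
    widths = np.diff(edges)
    quantiles = np.quantile(x, np.linspace(0, 1, n_xbins + 1))
    delta = 0.0
    for lo, hi in zip(quantiles[:-1], quantiles[1:]):
        mask = (x >= lo) & (x <= hi)
        if mask.sum() < 2:
            continue
        f_cond, _ = np.histogram(y[mask], bins=edges, density=True)
        # L1 distance between conditional and unconditional PDFs,
        # weighted by the probability mass of this x-bin
        delta += 0.5 * np.sum(np.abs(f_y - f_cond) * widths) * mask.mean()
    return delta

# Toy model: y = x1^2 + 0.5*x2 with lognormal/normal inputs (illustrative only)
rng = np.random.default_rng(0)
x1, x2 = rng.lognormal(size=20000), rng.normal(size=20000)
y = x1**2 + 0.5 * x2
print(borgonovo_delta(x1, y), borgonovo_delta(x2, y))  # x1 should dominate
```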
An accurate plasma current profile has irreplaceable value for the steady-state operation of the plasma. In this study, plasma current tomography based on Bayesian inference is applied to the HL-2A device and used to reconstruct the plasma current profile. Two different Bayesian probability priors are tried, namely the Conditional Auto-Regressive (CAR) prior and the Advanced Squared Exponential (ASE) kernel prior. Compared to the CAR prior, the ASE kernel prior adopts nonstationary hyperparameters and introduces the current profile of a reference discharge into the hyperparameters, which makes the shape of the current profile more flexible in space. The results indicate that the ASE prior couples more information, reduces the probability of unreasonable solutions, and achieves higher reconstruction accuracy.
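For intuition, Bayesian current tomography with a Gaussian-process prior reduces, in the linear-Gaussian case, to the standard posterior-mean formula. The numpy sketch below uses a stationary squared-exponential kernel on a hypothetical 1-D filament grid with a random stand-in response matrix; the paper's ASE prior additionally makes the hyperparameters nonstationary and informed by a reference discharge.

```python
import numpy as np

def se_kernel(r, length=0.1, sigma_f=1.0):
    """Squared-exponential covariance over 1-D grid positions r (stationary case)."""
    d2 = (r[:, None] - r[None, :]) ** 2
    return sigma_f**2 * np.exp(-0.5 * d2 / length**2)

def gp_posterior_mean(G, d, K, sigma_n=1e-2):
    """Posterior mean of j for d = G @ j + noise, under a zero-mean GP prior N(0, K)."""
    S = G @ K @ G.T + sigma_n**2 * np.eye(len(d))
    return K @ G.T @ np.linalg.solve(S, d)

# Hypothetical toy setup: 50 current "filaments" on a 1-D grid, 12 magnetic probes
rng = np.random.default_rng(1)
r = np.linspace(0, 1, 50)
G = rng.normal(size=(12, 50)) / 50   # stand-in response matrix (the real G comes from field geometry)
j_true = np.exp(-0.5 * ((r - 0.5) / 0.15) ** 2)
d = G @ j_true + 1e-3 * rng.normal(size=12)
j_hat = gp_posterior_mean(G, d, se_kernel(r))
```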
Recently, deep learning-based semantic communication has garnered widespread attention, with numerous systems designed for transmitting diverse data sources, including text, image, and speech. While efforts have been directed toward improving system performance, many studies have concentrated on enhancing the structure of the encoder and decoder. However, this often overlooks the resulting increase in model complexity, imposing additional storage and computational burdens on smart devices. Furthermore, existing work tends to prioritize explicit semantics, neglecting the potential of implicit semantics. This paper aims to easily and effectively enhance the receiver's decoding capability without modifying the encoder and decoder structures. We propose a novel semantic communication system with variational neural inference for text transmission. Specifically, we introduce a simple but effective variational neural inferer at the receiver to infer the latent semantic information within the received text. This information is then utilized to assist in the decoding process. The simulation results show a significant enhancement in system performance and improved robustness.
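A variational inferer of this kind can be as small as two linear heads producing the mean and log-variance of a latent semantic vector, sampled via the reparameterization trick. Below is a minimal PyTorch sketch; the feature and latent dimensions, and the standard-normal prior, are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class VariationalInferer(nn.Module):
    """Sketch: infer a latent semantic vector from received-text features and
    expose the KL term used to regularize it (standard-normal prior assumed)."""
    def __init__(self, d_feat=128, d_latent=32):
        super().__init__()
        self.mu = nn.Linear(d_feat, d_latent)
        self.logvar = nn.Linear(d_feat, d_latent)

    def forward(self, h):
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, kl

h = torch.randn(8, 128)          # features of the received noisy text (hypothetical)
z, kl = VariationalInferer()(h)
# z would be fed to the decoder to assist decoding; kl joins the training loss
```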
The Stokes production coefficient (E_(6)) constitutes a critical parameter within Mellor-Yamada type (MY-type) Langmuir turbulence (LT) parameterization schemes, significantly affecting the simulation of turbulent kinetic energy, turbulent length scale, and the vertical diffusivity coefficient for turbulent kinetic energy in the upper ocean. However, the accurate determination of its value remains a pressing scientific challenge. This study adopted an innovative approach, leveraging deep learning technology to address the challenge of inferring E_(6). By integrating the information of the turbulent length scale equation into a physics-informed neural network (PINN), we achieved an accurate and physically meaningful inference of E_(6). Multiple cases were examined to assess the feasibility of PINN for this task, revealing that under optimal settings the average mean squared error of the E_(6) inference was only 0.01, attesting to the effectiveness of PINN. The optimal hyperparameter combination was identified as the Tanh activation function along with a spatiotemporal sampling interval of 1 s and 0.1 m. This resulted in a substantial reduction in the average bias of the E_(6) inference, by a factor of O(10^1) to O(10^2) compared with other combinations. This study underscores the potential application of PINNs in intricate marine environments, offering a novel and efficient method for optimizing MY-type LT parameterization schemes.
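The mechanism, inferring a physical coefficient by making it a trainable parameter in the PINN loss, can be illustrated on a toy diffusion equation u_t = c * u_xx. The paper instead embeds the MY turbulent-length-scale equation; the network size, observations, and equation below are illustrative assumptions only.

```python
import torch

# Surrogate u(x, t) and the unknown coefficient (stand-in for E6) learned jointly
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
coeff = torch.nn.Parameter(torch.tensor(0.5))  # inferred parameter, initial guess
opt = torch.optim.Adam(list(net.parameters()) + [coeff], lr=1e-3)

# Hypothetical observations of u on scattered (x, t) points
xt_obs = torch.rand(200, 2)
u_obs = torch.sin(torch.pi * xt_obs[:, :1]) * torch.exp(-xt_obs[:, 1:])

for step in range(5000):
    opt.zero_grad()
    # Data misfit on observed points
    data_loss = (net(xt_obs) - u_obs).pow(2).mean()
    # Physics residual of the toy PDE u_t = coeff * u_xx on collocation points
    xt = torch.rand(256, 2, requires_grad=True)
    u = net(xt)
    g = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = g[:, :1], g[:, 1:]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, :1]
    phys_loss = (u_t - coeff * u_xx).pow(2).mean()
    (data_loss + phys_loss).backward()
    opt.step()
```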
Recently, weak supervision has received growing attention in the field of salient object detection due to the convenience of labelling. However, there is a large performance gap between weakly supervised and fully supervised salient object detectors because scribble annotations can only provide very limited foreground/background information. Therefore, an intuitive idea is to infer annotations that cover more complete object and background regions for training. To this end, a label inference strategy is proposed based on the assumption that pixels with similar colours and close positions should have consistent labels. Specifically, the k-means clustering algorithm is first performed on both the colours and the coordinates of the original annotations; the same labels are then assigned to points whose colours are similar to the colour cluster centres and which lie near the coordinate cluster centres. Next, the same annotations are further propagated to pixels with similar colours within each kernel neighbourhood. Extensive experiments on six benchmarks demonstrate that our method can significantly improve performance and achieve state-of-the-art results.
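A minimal sketch of the clustering step with scikit-learn's KMeans follows; the distance thresholds and the majority-label rule per colour cluster are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np
from sklearn.cluster import KMeans

def infer_labels(colors, coords, labels, n_clusters=8, col_tol=0.1, pos_tol=20.0):
    """Propagate scribble labels: cluster annotated pixels on colour and position,
    then label unannotated pixels close to both kinds of cluster centre.
    colors: (N,3) in [0,1]; coords: (N,2); labels: (N,) with -1 = unannotated."""
    ann = labels >= 0
    km_col = KMeans(n_clusters=n_clusters, n_init=10).fit(colors[ann])
    km_pos = KMeans(n_clusters=n_clusters, n_init=10).fit(coords[ann])
    cluster_label = []
    for c in range(n_clusters):  # majority label of each colour cluster
        members = labels[ann][km_col.labels_ == c]
        cluster_label.append(np.bincount(members).argmax() if members.size else -1)
    cluster_label = np.array(cluster_label)
    out = labels.copy()
    for i in np.where(~ann)[0]:
        dc = np.linalg.norm(colors[i] - km_col.cluster_centers_, axis=1)
        dp = np.linalg.norm(coords[i] - km_pos.cluster_centers_, axis=1)
        if dc.min() < col_tol and dp.min() < pos_tol and cluster_label[dc.argmin()] >= 0:
            out[i] = cluster_label[dc.argmin()]
    return out

# Synthetic usage: 5% of pixels carry scribble labels, the rest are -1
rng = np.random.default_rng(0)
colors, coords = rng.random((500, 3)), rng.random((500, 2)) * 100
labels = np.where(rng.random(500) < 0.05, (colors[:, 0] > 0.5).astype(int), -1)
full = infer_labels(colors, coords, labels)
```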
Media convergence works by processing information from different modalities and applying it to different domains. It is difficult for a conventional knowledge graph to utilise multi-media features because the introduction of a large amount of information from other modalities reduces the effectiveness of representation learning and makes knowledge graph inference less effective. To address this issue, an inference method based on the Media Convergence and Rule-guided Joint Inference model (MCRJI) is proposed. The authors not only converge multi-media features of entities but also introduce logic rules to improve the accuracy and interpretability of link prediction. First, a multi-headed self-attention approach is used to obtain the attention among an entity's different media features during semantic synthesis. Second, logic rules of different lengths are mined from the knowledge graph to learn new entity representations. Finally, knowledge graph inference is performed based on entity representations that converge multi-media features. Numerous experimental results show that MCRJI outperforms other advanced baselines in using multi-media features for knowledge graph inference, demonstrating that MCRJI provides an excellent approach for knowledge graph inference with converged multi-media features.
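The first step, multi-headed self-attention across an entity's media features, can be sketched with PyTorch's built-in module; treating each modality as one token and the chosen embedding size are illustrative assumptions.

```python
import torch

# Hypothetical per-entity media features projected to a shared dimension:
# one token each for text, image, and structural (graph) embeddings
d_model, n_heads = 64, 4
text_f = torch.randn(1, 1, d_model)
image_f = torch.randn(1, 1, d_model)
graph_f = torch.randn(1, 1, d_model)
tokens = torch.cat([text_f, image_f, graph_f], dim=1)  # (batch, 3 modalities, d_model)

mha = torch.nn.MultiheadAttention(d_model, n_heads, batch_first=True)
fused, attn_weights = mha(tokens, tokens, tokens)      # self-attention across modalities
entity_repr = fused.mean(dim=1)                        # pooled converged representation
print(attn_weights.shape)                              # (batch, 3, 3): attention among media
```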
The present research work attempted to delineate and characterize the reservoir facies of the Dawson Canyon Formation in the Penobscot field, Scotian Basin. An integrated study of instantaneous frequency, P-impedance, volume of clay, and neutron-porosity attributes, together with the structural framework, was carried out to unravel the Late Cretaceous depositional system and the reservoir facies distribution patterns within the study area. Fault strikes were found in the E-W and NEE-SWW directions, indicating the dominant course of tectonic activity during the Late Cretaceous period in the region. P-impedance was estimated using model-based seismic inversion. Petrophysical properties such as neutron porosity (NPHI) and volume of clay (VCL) were estimated with high accuracy using a multilayer perceptron neural network. Comparatively, a combination of low instantaneous frequency (15-30 Hz), moderate to high impedance (7000-9500 gm/cc*m/s), low neutron porosity (27%-40%), and low volume of clay (40%-60%) suggests fair-to-good sandstone development in the Dawson Canyon Formation. After calibration with the well-log data, it was found that a further lowering of these attribute responses signifies clean sandstone facies possibly containing hydrocarbons. The present study suggests that shale lithofacies dominate the Late Cretaceous deposition (Dawson Canyon Formation) in the Penobscot field, Scotian Basin. Major faults and overlying shale facies provide structural and stratigraphic seals and act as a suitable hydrocarbon entrapment mechanism in the Dawson Canyon Formation's reservoirs. The present research advocates the integrated analysis of multi-attributes estimated using different methods to minimize the risk involved in hydrocarbon exploration.
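The petrophysical-prediction step maps seismic attributes to NPHI and VCL with a multilayer perceptron. Below is a hedged scikit-learn sketch on synthetic stand-in data; the real workflow would train on attributes extracted at well locations against measured logs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Hypothetical training table: seismic attributes at wells as predictors,
# measured NPHI and VCL as the two regression targets
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(15, 60, 500),       # instantaneous frequency (Hz)
    rng.uniform(6000, 11000, 500),  # P-impedance (gm/cc * m/s)
])
y = np.column_stack([
    0.5 - X[:, 1] / 40000 + 0.02 * rng.normal(size=500),  # synthetic NPHI
    0.9 - X[:, 0] / 100 + 0.02 * rng.normal(size=500),    # synthetic VCL
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000))
model.fit(X_tr, y_tr)
print("R^2 on held-out wells:", model.score(X_te, y_te))
```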
Hyperparameter tuning is a key step in developing high-performing machine learning models, but searching large hyperparameter spaces requires extensive computation using standard sequential methods. This work analyzes the performance gains from parallel versus sequential hyperparameter optimization. Using scikit-learn's RandomizedSearchCV, this project tuned a Random Forest classifier for fake news detection via randomized search. Setting n_jobs to -1 enabled full parallelization across CPU cores. Results show the parallel implementation achieved over 5× faster CPU times and 3× faster total run times compared to sequential tuning. However, test accuracy dropped slightly, from 99.26% sequentially to 99.15% with parallelism, indicating a trade-off between evaluation efficiency and model performance. Still, the significant computational gains allow more extensive hyperparameter exploration within reasonable timeframes, outweighing the small accuracy decrease. Further analysis could better quantify this trade-off across different models, tuning techniques, tasks, and hardware.
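The setup described takes only a few lines to reproduce. The sketch below times sequential (n_jobs=1) against fully parallel (n_jobs=-1) tuning on a synthetic stand-in for the fake-news features; the dataset, search space, and iteration count are illustrative assumptions.

```python
from time import perf_counter
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=5000, n_features=40, random_state=0)
param_dist = {
    "n_estimators": randint(100, 500),
    "max_depth": randint(3, 20),
    "min_samples_split": randint(2, 11),
}

for n_jobs in (1, -1):  # sequential vs. all CPU cores
    search = RandomizedSearchCV(
        RandomForestClassifier(random_state=0),
        param_distributions=param_dist,
        n_iter=20, cv=3, n_jobs=n_jobs, random_state=0,
    )
    t0 = perf_counter()
    search.fit(X, y)
    print(f"n_jobs={n_jobs}: {perf_counter() - t0:.1f}s, "
          f"best CV accuracy={search.best_score_:.4f}")
```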
The challenge of transitioning from temporary humanitarian settlements to more sustainable human settlements stems from a significant increase in the number of forcibly displaced people over recent decades, difficulties in providing social services that meet the required standards, and the prolongation of emergencies. Despite this challenging context, short-term considerations continue to guide planning and management rather than more integrated, longer-term perspectives, thus preventing viable, sustainable development. Over the years, the design of humanitarian settlements has not been adapted to local contexts and perspectives, nor to the dynamics of urbanization, population growth, and data. In addition, the current approach to temporary settlement harms the environment and can strain limited resources. Inefficient land use and ad hoc development models have compounded difficulties and generated new challenges. As a result, living conditions in settlements have deteriorated over the last few decades and continue to pose new challenges. The stakes are such that major shortcomings have emerged along the way, leading to disruption and budget overruns in a context marked by a steady decline in funding. Some attempts have been made to shift towards more sustainable approaches, but these have mainly focused on vague, sector-oriented themes, failing to take systematic and integrated views. This study contributes to addressing these shortcomings by designing a model-driven solution that emphasizes an integrated system conceptualized as a system of systems. This paper proposes a new methodology for designing an integrated and sustainable human settlement model, based on Model-Based Systems Engineering and the Systems Modeling Language, to provide valuable insights toward sustainable solutions for displaced populations in line with the United Nations 2030 Agenda for Sustainable Development.
Knowledge graph technology has distinct advantages in fault diagnosis. In this study, the control rod drive mechanism (CRDM) of the liquid fuel thorium molten salt reactor (TMSR-LF1) was taken as the research object, and a fault diagnosis system based on a knowledge graph was proposed. Subject-relation-object triples are defined from CRDM unstructured data, including the design specification, the operation and maintenance manual, the alarm list, and other forms of expert experience. We constructed a fault event ontology model to label the entities and relationships involved in the corpus of CRDM fault events. A three-layer robustly optimized bidirectional encoder representation from transformers (RBT3) pre-training approach combined with a text convolutional neural network (TextCNN) was introduced to facilitate the application of the constructed CRDM fault diagnosis graph database for fault queries. The RBT3-TextCNN model, along with the Jieba tool, is proposed for extracting entities and recognizing the fault query intent simultaneously. Experiments on the dataset collected from TMSR-LF1 CRDM fault diagnosis unstructured data demonstrate that this model has the potential to improve intent recognition and entity extraction. Additionally, a fault alarm monitoring module was developed based on the WebSocket protocol to automatically deliver detailed information about a detected fault to the operator. Furthermore, the Bayesian inference method combined with the variable elimination algorithm was proposed to enable a relatively intelligent and reliable fault diagnosis system. Finally, a CRDM fault diagnosis Web interface integrated with graph data visualization was constructed, making the CRDM fault diagnosis process intuitive and effective.
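Bayesian inference with variable elimination on a small discrete fault network can be sketched with the pgmpy library (one possible choice). The two-alarm structure and all conditional probabilities below are hypothetical stand-ins for the CRDM fault graph.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hypothetical toy network: a CRDM fault can trigger a motor alarm and a position alarm
model = BayesianNetwork([("Fault", "MotorAlarm"), ("Fault", "PositionAlarm")])
model.add_cpds(
    TabularCPD("Fault", 2, [[0.95], [0.05]]),                 # P(no fault) = 0.95
    TabularCPD("MotorAlarm", 2, [[0.99, 0.2], [0.01, 0.8]],
               evidence=["Fault"], evidence_card=[2]),
    TabularCPD("PositionAlarm", 2, [[0.98, 0.4], [0.02, 0.6]],
               evidence=["Fault"], evidence_card=[2]),
)
posterior = VariableElimination(model).query(
    variables=["Fault"], evidence={"MotorAlarm": 1, "PositionAlarm": 1}
)
print(posterior)  # P(Fault | both alarms fired)
```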
The estimation of model parameters is an important subject in engineering. In this area of work, the prevailing approach is to estimate or calculate these as deterministic parameters. In this study, we consider the model parameters from the perspective of random variables and describe the general form of the parameter distribution inference problem. Under this framework, we propose an ensemble Bayesian method by introducing Bayesian inference and the Markov chain Monte Carlo (MCMC) method. Experiments on a finite cylindrical reactor and a 2D IAEA benchmark problem show that the proposed method converges quickly and can estimate parameters effectively, even for several correlated parameters simultaneously. Our experiments include cases involving engineering software calls, demonstrating that the method can be applied in engineering fields such as nuclear reactor engineering.
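A minimal random-walk Metropolis sampler conveys the MCMC ingredient. The toy forward model (standing in for an engineering-code call), the prior box, and the noise level are assumptions; the two parameters are deliberately correlated through the likelihood, mirroring the correlated-parameter case in the abstract.

```python
import numpy as np

def metropolis(log_post, theta0, n_steps=20000, step=0.1, seed=0):
    """Random-walk Metropolis sampler over the parameter posterior."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_steps):
        prop = theta + step * rng.normal(size=theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Toy forward model standing in for an engineering-code call: d = k1 + k2^2
d_obs, sigma = 1.3, 0.05
def log_post(theta):
    k1, k2 = theta
    if not (0 < k1 < 2 and 0 < k2 < 2):   # uniform prior box
        return -np.inf
    return -0.5 * ((k1 + k2**2 - d_obs) / sigma) ** 2

chain = metropolis(log_post, [1.0, 0.5])
print(chain[5000:].mean(axis=0))   # posterior means after burn-in
```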
In this work, we perform a Bayesian inference of the crust-core transition density ρ_(t) of neutron stars based on neutron-star radius and neutron-skin thickness data using a thermodynamical method. Uniform and Gaussian distributions were adopted as priors for ρ_(t) in the Bayesian approach. ρ_(t) has a larger probability of taking values above 0.1 fm^(−3) when the uniform prior and the neutron-star radius data are used. This was found to be controlled by the curvature K_(sym) of the nuclear symmetry energy: the phenomenon does not occur if K_(sym) is not extremely negative, namely K_(sym) > −200 MeV. The value of ρ_(t) obtained was 0.075_(−0.01)^(+0.005) fm^(−3) at a confidence level of 68% when both the neutron-star radius and neutron-skin thickness data were considered. Strong anti-correlations were observed between ρ_(t), the slope L, and the curvature of the nuclear symmetry energy. The dependence of the three L-K_(sym) correlations predicted in the literature on the crust-core transition density and pressure was quantitatively investigated. The most probable value of 0.08 fm^(−3) for ρ_(t) was obtained from the L-K_(sym) relationship proposed by Holt et al., while larger values were preferred for the other two relationships.
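The effect of the prior choice can be seen in a tiny grid-based Bayes update; the likelihood below is a hypothetical Gaussian stand-in for the radius and skin-thickness constraints, not the paper's thermodynamical calculation.

```python
import numpy as np

rho = np.linspace(0.02, 0.14, 601)   # candidate crust-core densities (fm^-3)
# Hypothetical Gaussian likelihood from "data", peaked near 0.08 fm^-3
like = np.exp(-0.5 * ((rho - 0.08) / 0.015) ** 2)

priors = {
    "uniform": np.ones_like(rho),
    "gaussian": np.exp(-0.5 * ((rho - 0.07) / 0.02) ** 2),
}
for name, prior in priors.items():
    post = like * prior
    post /= np.trapz(post, rho)            # normalize the posterior on the grid
    mean = np.trapz(rho * post, rho)
    print(f"{name} prior: posterior mean = {mean:.4f} fm^-3")
```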
Estimating the intention of space objects plays an important role in aircraft design, aviation safety, the military, and other fields, and is an important reference basis for air situation analysis and command decision-making. This paper studies an intention estimation method based on fuzzy theory, combined with probability, to calculate the intention between two objects. The method takes a space object as the origin of coordinates and observes the target's distance, speed, relative heading angle, altitude difference, steering trend, and so on; the specific calculation methods for these parameters are then introduced. The calculated values are input into the fuzzy inference model, and finally the action intention of the target is obtained through the fuzzy rule table and historically weighted probability. Verified by simulation experiments, the target intention inferred by this method is roughly the same as the actual behavior of the target, which proves that the method for identifying target intention is effective.
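A stripped-down version of such a fuzzy inference step uses triangular membership functions, a product t-norm, a rule table, and optional historical weights. Every membership range, rule, and intention label below is a hypothetical illustration, not the paper's rule table.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def memberships(distance_km, closing_speed_ms):
    """Degree of each antecedent pair via a product t-norm (ranges illustrative)."""
    near, far = tri(distance_km, 0, 5, 20), tri(distance_km, 10, 50, 100)
    fast = tri(closing_speed_ms, 100, 300, 500)
    slow = tri(closing_speed_ms, -100, 0, 150)
    return {("near", "fast"): near * fast, ("near", "slow"): near * slow,
            ("far", "fast"): far * fast, ("far", "slow"): far * slow}

# Hypothetical rule table mapping antecedents to intentions
rules = {("near", "fast"): "attack", ("near", "slow"): "surveil",
         ("far", "fast"): "approach", ("far", "slow"): "patrol"}

def infer_intention(distance_km, closing_speed_ms, history_weight=None):
    m = memberships(distance_km, closing_speed_ms)
    scores = {}
    for antecedent, intent in rules.items():
        scores[intent] = scores.get(intent, 0.0) + m[antecedent]
    if history_weight:  # optional historically weighted probability per intent
        scores = {k: v * history_weight.get(k, 1.0) for k, v in scores.items()}
    return max(scores, key=scores.get)

print(infer_intention(3.0, 280.0))  # -> "attack" under these toy rules
```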
In the field of model-based system assessment, mathematical models are used to interpret system behaviors. However, industrial systems in this intelligent era will be more manageable: various management operations will be set dynamically, and the system will no longer be static as initially designed. Thus, the static model generated by the traditional model-based safety assessment (MBSA) approach cannot be used to accurately assess dependability. Three main problems exist. Complex: huge and complex behaviors make modeling a tedious manual task. Dynamic: although there are thousands of states and transitions, the previous model must be resubmitted for assessment whenever a new management operation arrives. Unreusable: for a different system, the model must be resubmitted by reconsidering both the management and the system itself, even though the management is the same. Motivated by these problems, this research studies a formal management-specifying approach with the advantages of agile modeling, dynamic modeling, and reusable specification design. Finally, three typical managements are specified in a series-parallel system as a demonstration of the approach's potential.
Regression is a widely used econometric tool in research. In observational studies, based on a number of assumptions, regression-based statistical control methods attempt to analyze the causation between treatment and outcome by adding control variables. However, this approach may not produce reliable estimates of causal effects. In addition to the shortcomings of the method, this lack of confidence is mainly related to ambiguous formulations in econometrics, such as the definition of selection bias, the selection of core control variables, and the method of testing for robustness. Within the framework of causal models, we clarify the assumptions of causal inference using regression-based statistical controls, as described in econometrics, and discuss how to select core control variables to satisfy these assumptions and how to conduct robustness tests for regression estimates.
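The role of a core control variable is easy to see in simulation: omitting a confounder on the backdoor path biases the regression coefficient of the treatment, while controlling for it recovers the true effect. Below is a small numpy sketch with an assumed data-generating process.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000
z = rng.normal(size=n)                        # confounder (e.g., unobserved ability)
t = 0.8 * z + rng.normal(size=n)              # treatment depends on the confounder
y = 2.0 * t + 1.5 * z + rng.normal(size=n)    # true treatment effect = 2.0

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

print("naive (omits z):", ols(t[:, None], y)[1])                # biased upward (~2.7)
print("controls for z:", ols(np.column_stack([t, z]), y)[1])    # ~2.0
```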
Context-Sensitive Task (CST) is a complex task type in crowdsourcing, covering tasks such as handwriting recognition, route planning, and audio transcription. Current result inference algorithms perform well on simple crowdsourcing tasks but cannot obtain high-quality inference results for CSTs. The conventional method for solving CSTs is to divide a CST into multiple independent simple subtasks for crowdsourcing, but this ignores the context correlation among subtasks and reduces the quality of result inference. To solve this problem, we propose a result inference algorithm based on the Partially ordered set and Tree augmented naive Bayes Infer (P&T-Inf) for CSTs. First, we screen the candidate results of context-sensitive tasks based on a partially ordered set. If there are parallel candidate sets, the conditional mutual information among subtasks containing context information is calculated from external knowledge (such as the Google n-gram corpus, the Corpus of Contemporary American English, etc.). Combined with the tree augmented naive (TAN) Bayes model, the maximum weighted spanning tree is used to model the dependencies among subtasks in each CST. We collect two crowdsourcing datasets, of handwriting recognition tasks and audio transcription tasks, from a real crowdsourcing platform. The experimental results show that our approach improves the quality of result inference in CSTs and reduces the time cost compared with the latest methods.
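The tree-construction step, a maximum weighted spanning tree over pairwise dependence, can be sketched with scipy by negating the edge weights. For simplicity this uses plain mutual information estimated from hypothetical worker answers rather than the corpus-based conditional mutual information described above.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mutual_information(x, y):
    """Plug-in MI estimate between two discrete label sequences."""
    xv, x_idx = np.unique(x, return_inverse=True)
    yv, y_idx = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xv), len(yv)))
    np.add.at(joint, (x_idx, y_idx), 1.0)
    joint /= joint.sum()
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

# Hypothetical subtask answers: rows = workers, cols = subtasks of one CST
answers = np.array([[0, 1, 1], [0, 1, 0], [1, 0, 0], [1, 0, 1], [0, 1, 1]])
n = answers.shape[1]
W = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        W[i, j] = mutual_information(answers[:, i], answers[:, j])

# Maximum weighted spanning tree = minimum spanning tree on negated weights
tree = minimum_spanning_tree(-W).toarray()
edges = [(i, j) for i in range(n) for j in range(n) if tree[i, j] != 0]
print("dependency tree edges among subtasks:", edges)
```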
In this proceeding, some highlight results on constraints of the nuclear matter equation of state (EOS) from data on nucleus resonances and neutron-skin thickness, obtained using the Bayesian approach based on the Skyrme-Hartree-Fock model and its extensions, are presented. In particular, the anti-correlation and positive correlation between the slope parameter and the value of the symmetry energy at saturation density, under the constraints of the neutron-skin thickness and the isovector giant dipole resonance respectively, are discussed. It is shown that the Bayesian analysis can help find a compromise for the "PREX-II puzzle" and the "soft Tin puzzle". The possible modifications to the constraints on lower-order EOS parameters, as well as the relevant correlations when higher-order EOS parameters are incorporated as independent variables, are further illustrated. For a given model and parameter space, the Bayesian approach serves as a good analysis tool suitable for multi-messengers versus multi-variables, and is helpful for quantitatively constraining the model parameters and their correlations.
Spectral unmixing helps identify the different components present in spectral mixtures, which occur in the uppermost layer of an area owing to the low spatial resolution of hyperspectral images. Most spectral unmixing methods are global and do not consider the spectral variability among endmembers that arises from illumination, atmospheric, and environmental conditions. Here, endmember bundle extraction plays a major role in overcoming these limitations, leading to more accurate abundance fractions. Accordingly, a two-stage approach is proposed to extract endmembers through endmember bundles in hyperspectral images. In the first step, the divide-and-conquer method is applied to subset images containing only the non-redundant bands to extract endmembers using the Vertex Component Analysis (VCA) and N-FINDR algorithms. In the second step, a fuzzy rule-based inference system utilizing spectral matching parameters is proposed to categorize the endmembers; the endmember with the minimum error is chosen as the final endmember in each category. The proposed method is simple and automatically accounts for endmember variability in hyperspectral images. Its efficiency is evaluated using two real hyperspectral datasets, with the average spectral angle and abundance angle used as performance measures.
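The spectral matching parameter used for evaluation, the spectral angle, is the arc-cosine of the cosine similarity between two spectra. A short numpy sketch on hypothetical signatures, picking the candidate with the minimum angle as a minimum-error criterion would:

```python
import numpy as np

def spectral_angle(s1, s2):
    """Spectral angle (radians) between two spectra; 0 means identical shape."""
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical bundle: candidate endmember spectra vs. a reference signature
bands = np.linspace(0.4, 2.5, 200)                 # wavelengths (microns)
reference = np.exp(-((bands - 1.0) / 0.3) ** 2)
candidates = [reference * 1.2,                     # scaled copy -> angle near 0
              reference + 0.1 * np.sin(5 * bands), # distorted copy
              np.exp(-((bands - 2.0) / 0.3) ** 2)] # different material
angles = [spectral_angle(reference, c) for c in candidates]
best = int(np.argmin(angles))  # candidate with minimum spectral angle wins
print(angles, "-> pick candidate", best)
```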
In social science, health care, digital therapeutics, and similar fields, smartphone data have played an important role in inferring users' daily lives. However, smartphone data collection systems have not been used effectively and widely because they did not exploit any Internet of Things (IoT) standards (e.g., oneM2M) or class labeling methods for machine learning (ML) services. Therefore, in this paper, we propose a novel Android IoT lifelog system complying with oneM2M standards to collect various lifelog data on smartphones and provide two class labeling methods, one manual and one automated, for inference of users' daily lives. The proposed system consists of an Android IoT client application, a oneM2M-compliant IoT server, and an ML server, whose high-level functional architecture was carefully designed to be open, accessible, and internationally recognized in accordance with the oneM2M standards. In particular, we explain implementation details of the activity diagrams for the Android IoT client application, the primary component of the proposed system. Experimental results verified that this application works normally with the oneM2M-compliant IoT server and provides the corresponding class labels properly. As an application of the proposed system, we also propose motion inference based on three multi-class ML classifiers (k-nearest neighbors, Naive Bayes, and support vector machine) created using only motion and location data (acceleration force, gyroscope rate of rotation, and speed) and motion class labels (driving, cycling, running, walking, and stilling). Comparing the confusion matrices of the ML classifiers, the k-nearest neighbors classifier outperformed the other two overall. Furthermore, we evaluated its output quality by analyzing the receiver operating characteristic (ROC) curves with area under the curve (AUC) values. The AUC values of the ROC curves for all motion classes were above 0.9, and the macro-average and micro-average ROC curves achieved very high AUC values of 0.96 and 0.99, respectively.
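The evaluation recipe, a k-nearest-neighbours classifier scored with one-vs-rest ROC/AUC per motion class plus macro and micro averages, looks like the scikit-learn sketch below; synthetic features stand in for the real acceleration, rotation-rate, and speed data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_auc_score

# Stand-in for motion/location features (acceleration, rotation rate, speed)
X, y = make_classification(n_samples=3000, n_features=3, n_informative=3,
                           n_redundant=0, n_classes=5, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
proba = knn.predict_proba(X_te)

# Per-class AUC via one-vs-rest binarization, plus macro/micro averages
y_bin = label_binarize(y_te, classes=np.arange(5))
print("per-class AUC:",
      [round(roc_auc_score(y_bin[:, k], proba[:, k]), 3) for k in range(5)])
print("macro AUC:", round(roc_auc_score(y_bin, proba, average="macro"), 3))
print("micro AUC:", round(roc_auc_score(y_bin, proba, average="micro"), 3))
```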