Abstract: Artificial intelligence (AI) is making significant strides in revolutionizing the detection of Barrett's esophagus (BE), a precursor to esophageal adenocarcinoma. In the research article by Tsai et al, researchers utilized endoscopic images to train an AI model, challenging the traditional distinction between endoscopic and histological BE. This approach yielded remarkable results, with the AI system achieving an accuracy of 94.37%, a sensitivity of 94.29%, and a specificity of 94.44%. The study's extensive dataset enhances the AI model's practicality, offering valuable support to endoscopists by minimizing unnecessary biopsies. However, questions remain about its applicability to different endoscopic systems. The study underscores the potential of AI in BE detection while highlighting the need for further research to assess its adaptability to diverse clinical settings.
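The reported figures follow directly from a binary confusion matrix. Below is a minimal sketch showing how accuracy, sensitivity, and specificity are computed; the counts are hypothetical placeholders, not data from Tsai et al.

```python
# Binary classification metrics of the kind reported in the abstract.
# The counts below are hypothetical placeholders, not the study's data.
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall on BE-positive cases
    specificity = tn / (tn + fp)   # recall on BE-negative cases
    return {"accuracy": accuracy, "sensitivity": sensitivity, "specificity": specificity}

if __name__ == "__main__":
    # Hypothetical example: 66 true positives, 4 false negatives, 68 true negatives, 4 false positives.
    print(classification_metrics(tp=66, fp=4, tn=68, fn=4))
```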
Funding: Supported by the National Key Research and Development Program of China (2023YFC3041500), Shenzhen Medical Research Funding (D2301014), Shenzhen High-level Hospital Construction Fund (23250G1001, XKJS-CRGRK-005), Shenzhen Clinical Research Center for Emerging Infectious Diseases (No. LCYSSQ20220823091203007), and the Science and Technology Innovation and Entrepreneurship Special Foundation of Shenzhen (JSGG20220226090203006).
Abstract: In the wake of the largest-ever recorded mpox outbreak in terms of magnitude and geographical spread in human history, ongoing since May 2022, we developed an automated online sewage virus enrichment and concentration robot for disease tracking. Coupled with an artificial intelligence (AI) model, our research aims to estimate mpox cases based on the concentration of the monkeypox virus (MPXV) in wastewater. Our research has revealed a compelling link between the levels of MPXV in wastewater and the number of clinically confirmed mpox infections, a finding reinforced by the ability of our AI prediction model to forecast cases with remarkable precision, capturing 87% of the data's variability. However, it is worth noting that this high precision may be related to the relatively high frequency of data acquisition and the relatively non-mobile, isolated environment of the hospital itself. In conclusion, this study represents a significant step forward in our ability to track and respond to mpox outbreaks. It has the potential to revolutionize public health surveillance by utilizing innovative technologies for disease surveillance and prediction.
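The abstract does not specify the AI model's architecture, so the sketch below uses a plain regression from wastewater MPXV concentration to confirmed case counts purely as a stand-in, with the "variability captured" quantified as R² (0.87 in the study). All data are synthetic placeholders.

```python
# Illustrative only: a simple regression from wastewater viral concentration to
# confirmed case counts, with explained variability reported as R^2.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical daily observations: log10 MPXV concentration and confirmed cases.
log_conc = np.log10(rng.uniform(1e2, 1e5, size=60)).reshape(-1, 1)
cases = 12 * log_conc.ravel() - 20 + rng.normal(0, 3, size=60)  # synthetic relationship

model = LinearRegression().fit(log_conc, cases)
predicted = model.predict(log_conc)

print(f"R^2 = {r2_score(cases, predicted):.2f}")  # fraction of variability captured
```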
Funding: The work described in this paper was enabled by equipment funded via the EU European Regional Development Fund project "Research Infrastructure for Campus-based Laboratories at the University of Rijeka – RISK" (Project RC.2.2.06-0001) and by the support of the University of Rijeka, Croatia, grant "Advanced mechatronics devices for smart technological solutions" (Grant uniri-tehnic-18-32).
Abstract: A recent systematic experimental characterisation of technological thin films, based on an elaborate design of experiments as well as probe calibration and correction procedures, allowed for the first time the determination of nanoscale friction under the concurrent influence of several process parameters, comprising normal forces, sliding velocities, and temperature, thus providing an indication of the intricate correlations induced by their interactions and mutual effects. This created the preconditions to undertake in this work an effort to model friction in the nanometric domain, with the goal of overcoming the limitations of currently available models in ascertaining the effects of the physicochemical processes and phenomena involved in nanoscale contacts. Due to the stochastic nature of nanoscale friction and the relatively sparse available experimental data, however, meta-modelling tools fail at predicting the factual behaviour. Based on the acquired experimental data, data mining incorporating various state-of-the-art machine learning (ML) numerical regression algorithms is therefore used. The results of the numerical analyses are assessed on an unseen test dataset via a comparative statistical validation. It is shown that the black-box ML methods provide effective predictions of the studied correlations with rather good accuracy levels, but the intrinsic nature of such algorithms prevents their usage in most practical applications. Genetic programming-based artificial intelligence (AI) methods are consequently used. Despite the marked complexity of the analysed phenomena and the inherent dispersion of the measurements, the developed AI-based symbolic regression models attain excellent predictive performance, with prediction accuracies between 72% and 91% depending on the sample type, and also yield an extremely simple functional description of the multidimensional dependence of nanoscale friction on the studied process parameters. An effective tool for nanoscale friction prediction, adaptive control purposes, and further scientific and technological nanotribological analyses is thus obtained.
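As a minimal sketch of genetic-programming-based symbolic regression, the snippet below uses the gplearn library's SymbolicRegressor as one possible implementation; it is not the authors' tool, and the process-parameter data are synthetic placeholders rather than the study's measurements.

```python
# Symbolic regression via genetic programming, assuming the gplearn library.
# The data below are synthetic placeholders, not the nanoscale friction measurements.
import numpy as np
from gplearn.genetic import SymbolicRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical process parameters: normal force (nN), sliding velocity (um/s), temperature (K).
X = rng.uniform([5, 0.1, 290], [50, 10, 350], size=(200, 3))
y = 0.3 * X[:, 0] + 0.05 * X[:, 0] * np.log(X[:, 1]) - 0.002 * X[:, 2] + rng.normal(0, 0.5, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

est = SymbolicRegressor(population_size=1000, generations=20,
                        function_set=("add", "sub", "mul", "div", "log"),
                        parsimony_coefficient=0.01, random_state=1)
est.fit(X_train, y_train)

print(est._program)                       # the evolved closed-form expression
print("test R^2:", est.score(X_test, y_test))
```

The appeal of this approach, as the abstract notes, is that the output is an explicit, compact formula rather than an opaque black-box predictor.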
Funding: Supported by the Fundamental Research Funds for the Central Universities [Grant No. JBK1507159].
Abstract: In this article, we present an application of the Adaptive Genetic Algorithm Energy Demand Estimation (AGAEDE) optimal model to improve the efficiency of energy demand prediction. The coefficients of the two forms of the model (linear and quadratic) are optimized by the AGA using factors that affect demand, such as GDP, population, urbanization rate, and R&D inputs, together with energy consumption structure. Since the spurious regression phenomenon occurs across a wide range of time series analyses in econometrics, we also discuss this problem for the current artificial intelligence model. The simulation results show that the proposed model is more accurate and reliable than other existing methods, and that China's energy demand will be 5.23 billion TCE in 2020 according to the average results of the AGAEDE optimal model. Further discussion illustrates that there will be great pressure for China to fulfill the planned goal of controlling energy demand set in the National Energy Demand Project (2014-2020).
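The core idea is evolutionary optimization of the demand model's coefficients against historical data. The sketch below illustrates this with SciPy's differential_evolution standing in for the paper's adaptive genetic algorithm, on a linear demand form with entirely synthetic drivers and observations.

```python
# Illustrative sketch: evolutionary optimization of the coefficients of a linear
# energy-demand model. differential_evolution stands in for the paper's adaptive GA;
# all data below are synthetic placeholders.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)

# Hypothetical drivers: GDP, population, urbanization rate, R&D input (normalized).
X = rng.uniform(0.5, 1.5, size=(20, 4))
true_coeffs = np.array([2.0, 1.0, 0.5, 0.3])
demand = X @ true_coeffs + rng.normal(0, 0.05, 20)  # observed energy demand (synthetic)

def objective(coeffs):
    """Sum of squared prediction errors of the linear demand model."""
    return np.sum((X @ coeffs - demand) ** 2)

result = differential_evolution(objective, bounds=[(0, 5)] * 4, seed=2)
print("fitted coefficients:", np.round(result.x, 3))
```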
Funding: Supported by the National Natural Science Foundation of China (42141022, 42272189), a Project of the Ministry of Natural Resources of China (QGYQZYPJ2022-1), and a CNPC Core Project (2021ZG12).
Abstract: Using gas and rock samples from major petroliferous basins around the world, the helium content, composition, isotopic compositions, and the U and Th contents in rocks are analyzed to clarify the helium enrichment mechanism, the distribution pattern, and the exploration ideas for helium-rich gas reservoirs. It is believed that the formation of helium-rich gas reservoirs depends on the amount of helium supplied to the reservoir and the degree of helium dilution by natural gas, and that the reservoir-forming process can be summarized as "multi-source helium supply, main-source helium enrichment, helium-nitrogen coupling, and homogeneous symbiosis". Helium mainly comes from the radioactive decay of U and Th in rocks. All rocks contain trace amounts of U and Th, so they are effective helium sources; in particular, large-scale ancient basement dominated by granite or metamorphic rocks is the main helium source. The helium generated by the decay of U and Th in the ancient basement over a long geologic history, together with the nitrogen generated by the cracking of inorganic nitrogenous compounds in the basement rocks, is dissolved in water and preserved. With tectonic uplift, the groundwater is transported upward along fractures to the gas reservoirs, where helium and nitrogen are released. Thus, the reservoirs are enriched in both helium and nitrogen, which present a clear concomitant and coupling relationship. In tensional basins in eastern China, where tectonic activities are strong, a certain proportion of mantle-derived helium is mixed into the natural gas. The helium-rich gas reservoirs are mostly located in normal- or low-pressure zones above ancient basement with fracture communication, which later experienced substantial tectonic uplift and present relatively weak seals, low intensity of natural gas charging, and active groundwater. Helium exploration should therefore focus on gas reservoirs with fractures connecting the ancient basement, large tectonic uplift, relatively weak sealing capacity, insufficient natural gas charging intensity, and abundant ancient formation water, guided by the characteristics of helium enrichment rather than the traditional idea of simultaneously searching for natural gas sweet spots and high-yield giant gas fields.
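The dilution argument above reduces to a simple mass balance: for a given helium supply, the weaker the natural gas charge, the higher the helium fraction in the reservoir. A back-of-envelope sketch with purely hypothetical volumes:

```python
# Back-of-envelope sketch of the dilution relationship described in the abstract.
# All volumes are hypothetical placeholders; units cancel in the ratio.
def helium_fraction(helium_volume: float, natural_gas_volume: float) -> float:
    """Helium fraction of the gas mixture after dilution by the natural gas charge."""
    return helium_volume / (helium_volume + natural_gas_volume)

# Same helium supply, two charging intensities: weak charging favors helium enrichment.
print(f"{helium_fraction(1.0, 200.0):.3%}")   # weak gas charge   -> ~0.5% He
print(f"{helium_fraction(1.0, 2000.0):.3%}")  # strong gas charge -> ~0.05% He
```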
Funding: Supported by the National Talent Fund of the Ministry of Science and Technology of China (20230240011) and the China University of Geosciences (Wuhan) Research Fund (162301192687).
Abstract: A large language model (LLM) is constructed to address the sophisticated demands of data retrieval and analysis, detailed well profiling, computation of key technical indicators, and the solution of complex problems in reservoir performance analysis (RPA). The LLM is constructed for RPA scenarios through incremental pre-training, fine-tuning, and the coupling of functional subsystems. Functional subsystems and efficient coupling methods are proposed based on named entity recognition (NER), tool invocation, and Text-to-SQL construction, all aimed at resolving pivotal challenges in developing this specific application of LLMs for RPA. This study conducted a detailed accuracy test on the feature extraction, tool classification, data retrieval, and analysis recommendation models. The results indicate that these models perform well across the key aspects of RPA. The research takes several injection and production well groups in the PK3 Block of the Daqing Oilfield as a test case. The testing results show that the model has significant potential and practical value in assisting reservoir engineers with RPA. The research results provide powerful support for the application of LLMs in reservoir performance analysis.
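To make the subsystem coupling concrete, here is a highly simplified, hypothetical sketch of the NER, tool-invocation, and Text-to-SQL steps the abstract names. None of the table names, tool names, or patterns come from the paper; they are placeholders for illustration only.

```python
# Toy pipeline: recognize entities in a request, route to a tool, or fall back to SQL.
# Table names, tools, and regex patterns are hypothetical, not from the paper.
import re

TOOLS = {"profile": "well_profiling_tool", "indicator": "indicator_calculator"}

def extract_entities(query: str) -> dict:
    """Toy NER step: pull a well identifier such as 'PK3-12' out of the request."""
    match = re.search(r"\b[A-Z]+\d+(?:-\d+)?\b", query)
    return {"well": match.group(0) if match else None}

def route(query: str) -> str:
    """Toy tool-invocation / Text-to-SQL step driven by simple keyword rules."""
    entities = extract_entities(query)
    for keyword, tool in TOOLS.items():
        if keyword in query.lower():
            return f"invoke {tool} for well {entities['well']}"
    # Fallback: data retrieval via a generated SQL statement (hypothetical schema).
    return f"SELECT * FROM production_history WHERE well_id = '{entities['well']}';"

print(route("Show the production history of PK3-12"))
print(route("Compute the key indicator values for PK3-12"))
```

In the paper's system this routing is learned by the fine-tuned LLM rather than hard-coded rules; the sketch only shows where NER, tool selection, and SQL generation sit relative to one another.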
Funding: Supported by the ONR MURI project (No. N00014-16-1-2007), the DARPA XAI Award (No. N66001-17-2-4029), and NSF IIS (No. 1423305).
Abstract: This paper reviews recent studies on understanding neural-network representations and on learning neural networks with interpretable/disentangled middle-layer representations. Although deep neural networks have exhibited superior performance in various tasks, interpretability has always been the Achilles' heel of deep neural networks. At present, deep neural networks obtain high discrimination power at the cost of a low interpretability of their black-box representations. We believe that high model interpretability may help people break several bottlenecks of deep learning, e.g., learning from a few annotations, learning via human-computer communications at the semantic level, and semantically debugging network representations. We focus on convolutional neural networks (CNNs) and revisit the visualization of CNN representations, methods of diagnosing representations of pre-trained CNNs, approaches for disentangling pre-trained CNN representations, the learning of CNNs with disentangled representations, and middle-to-end learning based on model interpretability. Finally, we discuss prospective trends in explainable artificial intelligence.
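One of the simplest techniques in the visualization family the review covers is inspecting a CNN's middle-layer feature maps. The sketch below captures them with a PyTorch forward hook; the model and layer choice (a torchvision ResNet-18, layer2) and the random input are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: capture a CNN's middle-layer representations with a forward hook.
# Model, layer, and input are illustrative; assumes a recent torchvision (weights=None API).
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model.layer2.register_forward_hook(save_activation("layer2"))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))  # dummy input in place of a real image

fmap = activations["layer2"][0]         # shape: (channels, H, W)
print(fmap.shape, "strongest channel:", fmap.mean(dim=(1, 2)).argmax().item())
```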