To assess whether a development strategy will be profitable enough, production forecasting is a crucial and difficult step in the process. The development history of other reservoirs in the same class tends to be studied to make predictions accurate. However, the permeability field, well patterns, and development regime must all be similar for two reservoirs to be considered in the same class. Because such similar reservoirs are difficult to find, very few experiences from other reservoirs are available, even though there is abundant historical information on numerous reservoirs. This paper proposes a learn-to-learn method, which can better utilize a vast amount of historical data from various reservoirs. Intuitively, the proposed method first learns how to learn samples before directly learning rules in samples. Technically, by utilizing gradients from networks with independent parameters and copied structure in each class of reservoirs, the proposed network obtains the optimal shared initial parameters, which are regarded as transferable information across different classes. On this basis, the network is able to predict future production indices for the target reservoir by training with only very limited samples collected from reservoirs in the same class. Two cases further demonstrate its superiority in accuracy over other widely used network methods.
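The idea of learning shared initial parameters across tasks, then adapting with a few samples, can be illustrated with a first-order meta-learning sketch. This is a toy stand-in, not the paper's network: each "reservoir class" is reduced to a one-parameter linear trend, and the inner/outer learning rates and task generator are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "reservoir classes": each task is a linear production trend y = a * x,
# with a different slope per class. The meta-learner seeks shared initial
# parameters that adapt to any class in a single gradient step.
def make_task():
    a = rng.uniform(0.5, 2.0)          # class-specific trend (hypothetical)
    x = rng.uniform(0, 1, 20)
    return x, a * x

def grad(w, x, y):                      # gradient of mean squared error
    return 2 * np.mean((w * x - y) * x)

w_meta, inner_lr, outer_lr = 0.0, 0.5, 0.1
for step in range(500):
    x, y = make_task()
    w_adapted = w_meta - inner_lr * grad(w_meta, x, y)   # inner-loop adaptation
    # First-order meta-update: move the shared init toward the adapted weights
    w_meta += outer_lr * (w_adapted - w_meta)

# After meta-training, one inner step on a tiny target-class sample suffices.
x_t, y_t = make_task()
w_t = w_meta - inner_lr * grad(w_meta, x_t[:5], y_t[:5])  # only 5 samples
err = np.mean((w_t * x_t - y_t) ** 2)
```

The outer update here is the Reptile-style first-order approximation; the shared initialization ends up near the center of the task distribution, so a handful of target-class samples is enough to adapt.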
With the rapid development of virtual reality technology, it has been widely used in the field of education. It can promote learning transfer, which is an effective way for learners to learn effectively. Therefore, this paper describes how to use virtual reality technology to achieve learning transfer in order to achieve teaching goals and improve learning efficiency.
The Indian Himalayan region is frequently experiencing climate change-induced landslides. Thus, landslide susceptibility assessment assumes greater significance for lessening the impact of a landslide hazard. This paper attempts to assess landslide susceptibility in Shimla district of the northwest Indian Himalayan region. It examined the effectiveness of random forest (RF), multilayer perceptron (MLP), sequential minimal optimization regression (SMOreg) and bagging ensemble (B-RF, B-SMOreg, B-MLP) models. A landslide inventory map comprising 1052 locations of past landslide occurrences was classified into training (70%) and testing (30%) datasets. The site-specific influencing factors were selected by employing a multicollinearity test. The relationship between past landslide occurrences and influencing factors was established using the frequency ratio method. The effectiveness of the machine learning models was verified through performance assessors. The landslide susceptibility maps were validated by the area under the receiver operating characteristic curve (ROC-AUC), accuracy, precision, recall and F1-score. The key performance metrics and map validation demonstrated that the B-RF model (correlation coefficient: 0.988, mean absolute error: 0.010, root mean square error: 0.058, relative absolute error: 2.964, ROC-AUC: 0.947, accuracy: 0.778, precision: 0.819, recall: 0.917 and F1-score: 0.865) outperformed the single classifiers and other bagging ensemble models for landslide susceptibility. The results show that the largest area falls in the very high susceptibility zone (33.87%), followed by the low (27.30%), high (20.68%) and moderate (18.16%) susceptibility zones. Average annual rainfall, slope, lithology, soil texture and earthquake magnitude have been identified as the influencing factors for very high landslide susceptibility. Soil texture, lineament density and elevation have been attributed to high and moderate susceptibility. Thus, the study calls for devising suitable landslide mitigation measures in the study area. Structural measures, an immediate response system, community participation and coordination among stakeholders may help lessen the detrimental impact of landslides. The findings from this study could aid decision-makers in mitigating future catastrophes and devising suitable strategies in other geographical regions with similar geological characteristics.
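The frequency ratio method named above relates past landslide occurrences to factor classes: for each class of a factor, FR is the share of landslide pixels in that class divided by the share of total area in that class (FR > 1 marks a landslide-prone class). A minimal sketch, with illustrative counts rather than the study's data:

```python
import numpy as np

# Frequency ratio (FR) for one influencing factor, e.g. slope reclassified
# into four bins. Pixel counts below are illustrative, not the study's data.
area_pixels      = np.array([5000, 3000, 1500, 500])   # pixels per slope class
landslide_pixels = np.array([ 100,  150,  120,   80])  # landslide pixels per class

pct_landslide = landslide_pixels / landslide_pixels.sum()   # share of landslides
pct_area      = area_pixels / area_pixels.sum()             # share of study area
fr = pct_landslide / pct_area   # FR > 1 => class is landslide-prone
```

Here the steepest class (the last bin) has the highest FR even though it hosts fewer landslides in absolute terms, because it covers so little of the study area.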
With the prevalence of Internet of Things (IoT) systems, smart cities comprise complex networks including sensors, actuators, appliances, and cyber services. Their complexity and heterogeneity have made smart cities vulnerable to sophisticated cyber-attacks, especially privacy-related attacks such as inference and data poisoning. Federated Learning (FL) has been regarded as a promising method to enable distributed learning with privacy-preserved intelligence in IoT applications. Even though developing privacy-preserving FL has drawn great research interest, current research concentrates mainly on FL with independent identically distributed (i.i.d.) data, and few studies have addressed the non-i.i.d. setting. FL is known to be vulnerable to Generative Adversarial Network (GAN) attacks, where an adversary can pose as a contributor participating in the training process to acquire the private data of other contributors. This paper proposes an innovative Privacy Protection-based Federated Deep Learning (PP-FDL) framework, which accomplishes data protection against privacy-related GAN attacks along with high classification rates on non-i.i.d. data. PP-FDL is designed to enable fog nodes to cooperate in training the FDL model in a way that ensures contributors have no access to each other's data, where class probabilities are protected using a private identifier generated for each class. The PP-FDL framework is evaluated for image classification using simple convolutional networks trained on the MNIST and CIFAR-10 datasets. The empirical results reveal that PP-FDL achieves data protection and outperforms three state-of-the-art models with accuracy improvements of 3%–8%.
When data privacy is imposed as a necessity, federated learning (FL) emerges as a relevant artificial intelligence field for developing machine learning (ML) models in a distributed and decentralized environment. FL allows ML models to be trained on local devices without any need for centralized data transfer, thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties. This paradigm has gained momentum in the last few years, spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources. By virtue of FL, models can be learned from all such distributed data sources while preserving data privacy. The aim of this paper is to provide a practical tutorial on FL, including a short methodology and a systematic analysis of existing software frameworks. Furthermore, the tutorial provides exemplary case studies from three complementary perspectives: i) foundations of FL, describing its main components, from key elements to FL categories; ii) implementation guidelines and exemplary case studies, systematically examining the functionalities provided by existing software frameworks for FL deployment, devising a methodology to design an FL scenario, and providing exemplary case studies with source code for different ML approaches; and iii) trends, briefly reviewing a non-exhaustive list of research directions under active investigation in the current FL landscape. The ultimate purpose of this work is to establish itself as a referential work for researchers, developers, and data scientists willing to explore the capabilities of FL in practical applications.
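The core aggregation step common to most FL deployments described above is federated averaging (FedAvg): the server combines client model weights, weighted by each client's local sample count, so raw data never leaves the device. A minimal sketch with simulated client updates (client names and weight vectors are illustrative):

```python
import numpy as np

# Federated averaging (FedAvg): the server aggregates locally trained model
# weights, weighted by each client's sample count. Clients are simulated here.
clients = {
    "client_a": (np.array([1.0, 2.0]), 100),   # (local weights, n_samples)
    "client_b": (np.array([3.0, 0.0]), 300),
}

def fedavg(clients):
    total = sum(n for _, n in clients.values())
    # Weighted average: clients with more data pull the global model harder.
    return sum(w * (n / total) for w, n in clients.values())

global_w = fedavg(clients)   # client_b holds 3/4 of the data
```

In a full round, the server would broadcast `global_w` back to the clients, each would train locally for a few epochs, and the cycle would repeat.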
Forest habitats are critical for biodiversity, ecosystem services, human livelihoods, and well-being. Capacity to conduct theoretical and applied forest ecology research addressing direct (e.g., deforestation) and indirect (e.g., climate change) anthropogenic pressures has benefited considerably from new field and statistical techniques. We used machine learning and bibliometric structural topic modelling to identify 20 latent topics comprising four principal fields from a corpus of 16,952 forest ecology/forestry articles published in eight ecology and five forestry journals between 2010 and 2022. Articles published per year increased from 820 in 2010 to 2,354 in 2021, shifting toward more applied topics. Publications from China and some countries in North America and Europe dominated, with relatively fewer articles from some countries in West and Central Africa and West Asia, despite globally important forest resources. Most study sites were in some countries in North America, Central Asia, South America, and Australia. Articles utilizing R statistical software predominated, increasing from 29.5% in 2010 to 71.4% in 2022. The most frequently used packages included lme4, vegan, nlme, MuMIn, ggplot2, car, MASS, mgcv, multcomp and raster. R was more often used in forest ecology than in applied forestry articles. R software offers advantages in script and workflow sharing compared with other statistical packages. Our findings demonstrate that the disciplines of forest ecology/forestry are expanding both in number and scope, aided by more sophisticated statistical tools, to tackle the challenges of redressing forest habitat loss and the socio-economic impacts of deforestation.
Neural networks are becoming ubiquitous in various areas of physics as a successful machine learning (ML) technique for addressing different tasks. Based on ML techniques, we propose and experimentally demonstrate an efficient method for state reconstruction of the widely used Sagnac polarization-entangled photon source. By properly modeling the target states, a multi-output fully connected neural network is well trained using only six of the sixteen measurement bases in the standard tomography technique; hence our method reduces the resource consumption without loss of accuracy. We demonstrate the ability of the neural network to predict state parameters with high precision using both simulated and experimental data. Explicitly, the mean absolute error for all the parameters is below 0.05 for the simulated data, and a mean fidelity of 0.99 is achieved for experimentally generated states. Our method could be generalized to estimate other kinds of states, as well as other quantum information tasks.
Many magnetohydrodynamic stability analyses require generation of a set of equilibria with a fixed safety factor q-profile while varying other plasma parameters. A neural network (NN)-based approach is investigated that facilitates such a process. Both multilayer perceptron (MLP)-based NN and convolutional neural network (CNN) models are trained to map the q-profile to the plasma current density J-profile, and vice versa, while satisfying the Grad–Shafranov radial force balance constraint. When the initial target models are trained using a database of semianalytically constructed numerical equilibria, an initial CNN with one convolutional layer is found to perform better than an initial MLP model. In particular, a trained initial CNN model can also predict the q- or J-profile for experimental tokamak equilibria. The performance of both initial target models is further improved by fine-tuning the training database, i.e. by adding realistic experimental equilibria with Gaussian noise. The fine-tuned target models, referred to as fine-tuned MLP and fine-tuned CNN, well reproduce the target q- or J-profile across multiple tokamak devices. As an important application, these NN-based equilibrium profile convertors can be utilized to provide a good initial guess for iterative equilibrium solvers, where the desired input quantity is the safety factor instead of the plasma current density.
Magnetic resonance (MR) imaging is a widely employed medical imaging technique that produces detailed anatomical images of the human body. The segmentation of MR images plays a crucial role in medical image analysis, as it enables accurate diagnosis, treatment planning, and monitoring of various diseases and conditions. Due to the lack of sufficient medical images, it is challenging to achieve an accurate segmentation, especially with the application of deep learning networks. The aim of this work is to study transfer learning from T1-weighted (T1-w) to T2-weighted (T2-w) MR sequences to enhance bone segmentation with minimal required computation resources. Using an excitation-based convolutional neural network, four transfer learning mechanisms are proposed: transfer learning without fine-tuning, open fine-tuning, conservative fine-tuning, and hybrid transfer learning. Moreover, a multi-parametric segmentation model is proposed using T2-w MR as an intensity-based augmentation technique. The novelty of this work emerges in the hybrid transfer learning approach, which overcomes the overfitting issue and preserves the features of both modalities with minimal computation time and resources. The segmentation results are evaluated using 14 clinical 3D brain MR and CT images. The results reveal that hybrid transfer learning is superior for bone segmentation in terms of performance and computation time, with DSCs of 0.5393 ± 0.0007. Although T2-w-based augmentation has no significant impact on the performance of T1-w MR segmentation, it helps in improving T2-w MR segmentation and in developing a multi-sequence segmentation model.
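The DSC figures quoted above are Dice similarity coefficients, the standard overlap metric for segmentation masks: twice the intersection over the sum of the two mask sizes. A self-contained sketch on toy binary masks (not clinical data):

```python
import numpy as np

# Dice similarity coefficient (DSC) between a predicted and a ground-truth
# binary segmentation mask. Returns 1.0 for perfect overlap, 0.0 for none.
def dice(pred, gt, eps=1e-7):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])   # toy predicted mask
gt   = np.array([[1, 0, 0], [0, 1, 1]])   # toy ground-truth mask
score = dice(pred, gt)   # 2*2 / (3+3) = 0.666...
```

For 3D volumes the same formula applies voxel-wise; the `eps` term only guards against division by zero when both masks are empty.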
This study delves into the latest advancements in machine learning and deep learning applications in geothermal resource development, extending the analysis up to 2024. It focuses on artificial intelligence's transformative role in the geothermal industry, analyzing recent literature from Scopus and Google Scholar to identify emerging trends, challenges, and future opportunities. The results reveal a marked increase in artificial intelligence (AI) applications, particularly in reservoir engineering, with significant advancements observed post-2019. This study highlights AI's potential in enhancing drilling and exploration, emphasizing the integration of detailed case studies and practical applications. It also underscores the importance of ongoing research and tailored AI applications, in light of the rapid technological advancements and future trends in the field.
When existing deep learning models are used for road extraction from high-resolution images, they are easily affected by noise such as tree and building occlusion and complex backgrounds, resulting in incomplete road extraction and low accuracy. We propose introducing spatial and channel attention modules into the convolutional neural network ConvNeXt. ConvNeXt is then used as the backbone network, cooperating with the perceptual analysis network UPerNet and retaining the semantic segmentation detection head, to build a new model, ConvNeXt-UPerNet, that suppresses noise interference. Training on the open-source DeepGlobe and CHN6-CUG datasets and introducing DiceLoss on top of CrossEntropyLoss solves the problem of positive and negative sample imbalance. Experimental results show that the new network model achieves the following performance on the DeepGlobe dataset: 79.40% precision (Pre), 97.93% accuracy (Acc), 69.28% intersection over union (IoU), and 83.56% mean intersection over union (MIoU). On the CHN6-CUG dataset, the model achieves 78.17% Pre, 97.63% Acc, 65.4% IoU, and 81.46% MIoU. Compared with other network models, the fused ConvNeXt-UPerNet model extracts road information better in the presence of noise in high-resolution remote sensing images. It also unifies multiscale image feature perception, ultimately improving the generalization ability of deep learning techniques in extracting complex roads from high-resolution remote sensing images.
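The combined loss described above addresses class imbalance: cross-entropy treats every pixel equally, so rare road pixels are drowned out, while the Dice term scores overlap on the positive class directly. A minimal numpy sketch of the combination on a binary toy example (the probability vectors are illustrative, and this is a per-pixel simplification of the full 2D losses):

```python
import numpy as np

# Binary cross-entropy over all pixels (dominated by the background class).
def bce(p, y, eps=1e-7):
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Soft Dice loss: penalizes poor overlap on the (rare) positive class.
def dice_loss(p, y, eps=1e-7):
    inter = (p * y).sum()
    return 1 - (2 * inter + eps) / (p.sum() + y.sum() + eps)

y = np.array([0, 0, 0, 0, 0, 0, 1, 1], dtype=float)   # road pixels are rare
p_good = np.array([.1, .1, .1, .1, .1, .1, .9, .9])    # finds the roads
p_bad  = np.full(8, 0.25)                              # hedges toward background

loss_good = bce(p_good, y) + dice_loss(p_good, y)
loss_bad  = bce(p_bad, y) + dice_loss(p_bad, y)
```

The Dice term sharply punishes the background-hedging prediction, which plain cross-entropy penalizes only mildly on such imbalanced data.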
A comparative study of two force perception skill learning approaches for robot-assisted spinal surgery, the impedance model method and the imitation learning (IL) method, is presented. The impedance model method develops separate models for the surgeon and patient, incorporating spring-damper and bone-grinding models. Expert surgeons' feature parameters are collected and mapped using support vector regression and image navigation techniques. The imitation learning approach utilises long short-term memory (LSTM) networks and addresses accurate data labelling challenges with custom models. Experimental results demonstrate skill recognition rates of 63.61%–74.62% for the impedance model approach, which relies on manual feature extraction. Conversely, the imitation learning approach achieves a force perception recognition rate of 91.06%, outperforming the impedance model on curved bone surfaces. The findings demonstrate the potential of imitation learning to enhance skill acquisition in robot-assisted spinal surgery by eliminating the laborious process of manual feature extraction.
One of the biggest dangers to society today is terrorism, where attacks have become one of the most significant risks to international peace and national security. Big data, information analysis, and artificial intelligence (AI) have become the basis for making strategic decisions in many sensitive areas, such as fraud detection, risk management, medical diagnosis, and counter-terrorism. However, there is still a need to assess how terrorist attacks are related, initiated, and detected. For this purpose, we propose a novel framework for classifying and predicting terrorist attacks. The proposed framework posits that neglected text attributes included in the Global Terrorism Database (GTD) can influence the accuracy of the model's classification of terrorist attacks, where each part of the data can provide vital information to enrich the ability of classifier learning. Each data point in a multiclass taxonomy has one or more tags attached to it, referred to as "related tags." We applied machine learning classifiers to classify terrorist attack incidents obtained from the GTD. A transformer-based technique called DistilBERT extracts and learns contextual features from text attributes to acquire more information from the text data. The extracted contextual features are combined with the "key features" of the dataset and used to perform the final classification. The study explored different experimental setups with various classifiers to evaluate the model's performance. The experimental results show that the proposed framework outperforms the latest techniques for classifying terrorist attacks, with an accuracy of 98.7% using a combined feature set and an extreme gradient boosting classifier.
In this editorial, we comment on the article by Zhang et al entitled "Development of a machine learning-based model for predicting the risk of early postoperative recurrence of hepatocellular carcinoma." Hepatocellular carcinoma (HCC), which is characterized by high incidence and mortality rates, remains a major global health challenge, primarily due to the critical issue of postoperative recurrence. Early recurrence, defined as recurrence that occurs within 2 years post-treatment, is linked to the hidden spread of the primary tumor and significantly impacts patient survival. Traditional predictive factors, including both patient- and treatment-related factors, have limited predictive ability with respect to HCC recurrence. The integration of machine learning algorithms is fueled by the exponential growth of computational power and has revolutionized HCC research. The study by Zhang et al demonstrated the use of a groundbreaking preoperative prediction model for early postoperative HCC recurrence. Challenges persist, including sample size constraints, issues with handling data, and the need for further validation and interpretability. This study emphasizes the need for collaborative efforts, multicenter studies and comparative analyses to validate and refine the model. Overcoming these challenges and exploring innovative approaches, such as multi-omics integration, will enhance personalized oncology care. This study marks a significant stride toward precise, efficient, and personalized oncology practices, thus offering hope for improved patient outcomes in the field of HCC treatment.
The prediction of fundus fluorescein angiography (FFA) images from fundus structural images is a cutting-edge research topic in ophthalmological image processing. Prediction comprises estimating FFA from fundus camera imaging, single-phase FFA from scanning laser ophthalmoscopy (SLO), and three-phase FFA also from SLO. Although many deep learning models are available, a single model can only perform one or two of these prediction tasks. To accomplish all three prediction tasks with a unified method, we propose a unified deep learning model for predicting FFA images from fundus structure images using a supervised generative adversarial network. The three prediction tasks are processed as follows: data preparation, network training under FFA supervision, and FFA image prediction from fundus structure images on a test set. By comparing the FFA images predicted by our model, pix2pix, and CycleGAN, we demonstrate the remarkable progress achieved by our proposal. The high performance of our model is validated in terms of the peak signal-to-noise ratio, structural similarity index, and mean squared error.
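Of the validation metrics listed above, the peak signal-to-noise ratio is the simplest to state: it is a log-scaled ratio of the maximum possible pixel value to the mean squared error between the predicted and reference images. A self-contained sketch on synthetic 8-bit data (not the paper's images):

```python
import numpy as np

# Peak signal-to-noise ratio (PSNR) in dB for images with a given peak value
# (255 for 8-bit). Higher is better; identical images give infinite PSNR.
def psnr(img, ref, peak=255.0):
    mse = np.mean((img.astype(float) - ref.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

ref  = np.zeros((4, 4)) + 100
pred = ref + 5                 # uniform error of 5 gray levels -> MSE = 25
value = psnr(pred, ref)        # 10*log10(255^2/25) ≈ 34.15 dB
```

The structural similarity index complements PSNR by comparing local luminance, contrast, and structure rather than raw pixel differences.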
BACKGROUND Non-small cell lung cancer (NSCLC) is the primary form of lung cancer, and the combination of chemotherapy with immunotherapy offers promising treatment options for patients suffering from this disease. However, the emergence of drug resistance significantly limits the effectiveness of these therapeutic strategies. Consequently, it is imperative to devise methods for accurately detecting and evaluating the efficacy of these treatments. AIM To identify the metabolic signatures associated with neutrophil extracellular traps (NETs) and chemoimmunotherapy efficacy in NSCLC patients. METHODS In total, 159 NSCLC patients undergoing first-line chemoimmunotherapy were enrolled. We first investigated the characteristics influencing clinical efficacy. Circulating levels of NETs and cytokines were measured by commercial kits. Liquid chromatography tandem mass spectrometry quantified plasma metabolites, and differential metabolites were identified. Least absolute shrinkage and selection operator, support vector machine-recursive feature elimination, and random forest algorithms were employed. Using plasma metabolic profiles and machine learning algorithms, predictive metabolic signatures were established. RESULTS First, the levels of circulating interleukin-8, neutrophil-to-lymphocyte ratio, and NETs were closely related to poor efficacy of first-line chemoimmunotherapy. Patients were classified into a low NET group or a high NET group. A total of 54 differential plasma metabolites were identified. These metabolites were primarily involved in arachidonic acid and purine metabolism. Three key metabolites were identified as crucial variables: 8,9-epoxyeicosatrienoic acid, L-malate, and bis(monoacylglycerol)phosphate (18:1/16:0). Using metabolomic sequencing data and machine learning methods, key metabolic signatures were screened to predict NET level as well as chemoimmunotherapy efficacy. CONCLUSION The identified metabolic signatures may effectively distinguish NET levels and predict clinical benefit from chemoimmunotherapy in NSCLC patients.
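The abstract screens key metabolites with LASSO, SVM-RFE, and random forests. As a minimal stand-in for that screening idea, the sketch below ranks synthetic features by absolute correlation with the group label; the data, effect size, and metabolite names are all invented for illustration and are not the study's.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
net_level = rng.integers(0, 2, n).astype(float)   # simulated low/high NET group
X = rng.normal(size=(n, 4))                       # four synthetic "metabolites"
X[:, 2] += 1.5 * net_level                        # one informative feature
names = ["met_A", "met_B", "met_C", "met_D"]      # hypothetical names

# Univariate screening: rank features by |correlation| with the group label.
corr = [abs(np.corrcoef(X[:, j], net_level)[0, 1]) for j in range(4)]
ranked = [names[j] for j in np.argsort(corr)[::-1]]
```

Multivariate methods such as LASSO or SVM-RFE refine this by accounting for correlated features, but the goal is the same: a short list of predictive variables.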
High-quality datasets are critical for the development of advanced machine-learning algorithms in seismology. Here, we present an earthquake dataset based on the ChinArray Phase I records (X1). ChinArray Phase I was deployed in the southern north-south seismic zone (20°N–32°N, 95°E–110°E) in 2011–2013 using 355 portable broadband seismic stations. CREDIT-X1local, the first release of the ChinArray Reference Earthquake Dataset for Innovative Techniques (CREDIT), includes comprehensive information for the 105,455 local events that occurred in the southern north-south seismic zone during array observation, incorporating them into a single HDF5 file. Original 100-Hz sampled three-component waveforms are organized by event for stations within epicentral distances of 1,000 km, and records of ≥200 s are included for each waveform. Two types of phase labels are provided. The first includes manually picked labels for 5,999 events with magnitudes ≥2.0, providing 66,507 Pg, 42,310 Sg, 12,823 Pn, and 546 Sn phases. The second contains automatically labeled phases for 105,442 events with magnitudes of −1.6 to 7.6. These phases were picked using a recurrent neural network phase picker and screened using the corresponding travel-time curves, resulting in 1,179,808 Pg, 884,281 Sg, 176,089 Pn, and 22,986 Sn phases. Additionally, first-motion polarities are included for 31,273 Pg phases. The event and station locations are provided, so that deep learning networks for both conventional phase picking and phase association can be trained and validated. The CREDIT-X1local dataset is the first million-scale dataset constructed from a dense seismic array. It is designed to support various multi-station deep-learning methods, high-precision focal mechanism inversion, and seismic tomography studies. Additionally, owing to the high seismicity in the southern north-south seismic zone in China, this dataset has great potential for future scientific discoveries.
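The automatic phase labels above were screened against travel-time curves. A minimal sketch of that screening step: keep a candidate pick only if it falls within a tolerance of the arrival time predicted from the source-station distance. The constant velocity, origin time, and tolerance here are illustrative assumptions, not the dataset's actual screening parameters.

```python
# Screen candidate phase picks against a predicted travel time.
# Assumes a constant apparent velocity; real curves vary with distance/depth.
def screen_picks(picks, dist_km, v_kms=6.0, origin_t=0.0, tol_s=2.0):
    predicted = origin_t + dist_km / v_kms   # predicted arrival (s)
    return [t for t in picks if abs(t - predicted) <= tol_s]

picks = [15.8, 16.7, 42.0]                  # candidate Pg picks (s after origin)
kept = screen_picks(picks, dist_km=100.0)   # predicted arrival ~16.7 s
```

Outliers far from the curve (here the 42.0 s pick) are discarded; in the actual pipeline this filtering removed false detections from the neural-network picker.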
Foreign language teaching practice is developing rapidly, but research on foreign language teacher learning is currently relatively fragmented and unstructured. The book Foreign Language Teacher Learning, written by Professor Kang Yan of Capital Normal University and published in September 2022, provides a systematic introduction to foreign language teacher learning, which to some extent makes up for this shortcoming. The book presents the lineage of foreign language teacher learning research at home and abroad, analyzes both theoretical and practical aspects, reviews cutting-edge research results, and foresees future development trends, painting a complete research picture for researchers in the field of foreign language teaching and teacher education as well as front-line teachers interested in foreign language teacher learning. This is an important inspiration for conducting foreign language teacher learning research in the future. This paper reviews the book in terms of its content, major characteristics, contributions and limitations.
Content-based instruction (CBI) effectively combines language skills with subject content. According to the CBI teaching concept, teachers can improve students' academic English ability by effectively designing academic English classroom activities. In this paper, the author integrates the CBI teaching philosophy into academic English teaching so as to cultivate students' ability to communicate in academic English.
This article reviews the psychological and neuroscience achievements in concept learning since 2010 from the perspectives of individual learning and social learning, and discusses several issues related to concept learning, including the assistance of machine learning in concept learning. In terms of individual learning, current evidence shows that the brain tends to process concrete concepts through typical (shared) features, whereas for abstract concepts, semantic processing is the most important cognitive route. In terms of social learning, interpersonal neural synchrony (INS) is considered the main indicator of efficient knowledge transfer (such as teaching activities between teachers and students), but this phenomenon only broadens the channels for concept sources and does not change the basic mode of individual concept learning. Ultimately, this article argues that the way the human brain processes concepts depends on the concept's own characteristics, so there are no "better" strategies in teaching, only more "suitable" strategies.
Funding: This work is supported by the National Natural Science Foundation of China under Grants 52274057, 52074340, and 51874335; the Major Scientific and Technological Projects of CNPC under Grant ZD2019-183-008; the Major Scientific and Technological Projects of CNOOC under Grant CCL2022RCPS0397RSN; the Science and Technology Support Plan for Youth Innovation of University in Shandong Province under Grant 2019KJH002; and the 111 Project under Grant B08028.
Abstract: Production forecasting is a crucial and difficult step in assessing whether a development strategy will be profitable enough. The development history of other reservoirs in the same class is usually studied to make predictions accurate. However, the permeability field, well patterns, and development regime must all be similar for two reservoirs to be considered in the same class. Because such similar reservoirs are difficult to find, very little experience from other reservoirs is actually usable, even though a great deal of historical information on numerous reservoirs exists. This paper proposes a learn-to-learn method, which can better utilize the vast amount of historical data from various reservoirs. Intuitively, the proposed method first learns how to learn from samples before directly learning rules from samples. Technically, by utilizing gradients from networks with independent parameters and a copied structure in each class of reservoirs, the proposed network obtains optimal shared initial parameters, which are regarded as transferable information across different classes. Based on that, the network is able to predict future production indices for the target reservoir by training with only very limited samples collected from reservoirs in the same class. Two cases further demonstrate its superiority in accuracy over other widely used network methods.
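The shared-initialization idea behind such learn-to-learn methods can be sketched with a Reptile-style meta-update on toy linear-regression "tasks" standing in for reservoir classes. This is a minimal sketch under assumed data, step sizes, and task counts, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    # each "class" is a toy linear task y = a*x + b with its own (a, b)
    a, b = rng.uniform(0.5, 2.0), rng.uniform(-1.0, 1.0)
    x = rng.uniform(-1, 1, size=20)
    return x, a * x + b

def sgd_adapt(w, x, y, steps=10, lr=0.1):
    # inner loop: adapt a copy of the shared parameters to one task
    w = w.copy()
    for _ in range(steps):
        pred = w[0] * x + w[1]
        grad = np.array([np.mean(2 * (pred - y) * x), np.mean(2 * (pred - y))])
        w -= lr * grad
    return w

w_meta = np.zeros(2)                     # shared initial parameters
for _ in range(200):                     # outer loop over sampled tasks
    x, y = make_task()
    w_task = sgd_adapt(w_meta, x, y)
    w_meta += 0.1 * (w_task - w_meta)    # Reptile meta-update toward adapted weights

# after meta-training, a few inner steps fit a new task with limited data
x, y = make_task()
w_fast = sgd_adapt(w_meta, x, y, steps=5)
mse = float(np.mean((w_fast[0] * x + w_fast[1] - y) ** 2))
```

The meta-update moves the shared initialization toward each task's adapted parameters, so that a handful of gradient steps suffices on a new task from the same distribution, mirroring the paper's use of per-class gradients to learn a transferable initialization.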
Abstract: With the rapid development of virtual reality technology, it has been widely used in the field of education, where it can promote learning transfer, an effective way for learners to learn. Therefore, this paper describes how to use virtual reality technology to achieve learning transfer in order to reach teaching goals and improve learning efficiency.
Abstract: The Indian Himalayan region frequently experiences climate change-induced landslides. Landslide susceptibility assessment therefore assumes great significance for lessening the impact of landslide hazards. This paper attempts to assess landslide susceptibility in the Shimla district of the northwest Indian Himalayan region. It examined the effectiveness of random forest (RF), multilayer perceptron (MLP), sequential minimal optimization regression (SMOreg), and bagging ensemble (B-RF, B-SMOreg, B-MLP) models. A landslide inventory map comprising 1052 locations of past landslide occurrences was classified into training (70%) and testing (30%) datasets. The site-specific influencing factors were selected by employing a multicollinearity test. The relationship between past landslide occurrences and influencing factors was established using the frequency ratio method. The effectiveness of the machine learning models was verified through performance assessors. The landslide susceptibility maps were validated by the area under the receiver operating characteristic curve (ROC-AUC), accuracy, precision, recall, and F1-score. The key performance metrics and map validation demonstrated that the B-RF model (correlation coefficient: 0.988, mean absolute error: 0.010, root mean square error: 0.058, relative absolute error: 2.964, ROC-AUC: 0.947, accuracy: 0.778, precision: 0.819, recall: 0.917, and F1-score: 0.865) outperformed the single classifiers and other bagging ensemble models for landslide susceptibility. The results show that the largest area falls in the very high susceptibility zone (33.87%), followed by the low (27.30%), high (20.68%), and moderate (18.16%) susceptibility zones. Average annual rainfall, slope, lithology, soil texture, and earthquake magnitude were identified as the influencing factors for very high landslide susceptibility, while soil texture, lineament density, and elevation were attributed to high and moderate susceptibility. The study thus calls for devising suitable landslide mitigation measures in the study area. Structural measures, an immediate response system, community participation, and coordination among stakeholders may help lessen the detrimental impact of landslides. The findings could aid decision-makers in mitigating future catastrophes and devising suitable strategies in other geographical regions with similar geological characteristics.
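The frequency ratio method used above can be sketched as follows: for each class of an influencing factor, FR is the share of landslide occurrences falling in that class divided by the share of total area that class covers, so FR > 1 flags above-average susceptibility. The slope classes and counts below are illustrative toy numbers, not the study's data:

```python
# hypothetical counts per slope class: (landslide cells, total cells)
classes = {
    "0-15 deg":  (20, 5000),
    "15-30 deg": (60, 3000),
    ">30 deg":   (120, 2000),
}

total_slides = sum(s for s, _ in classes.values())
total_cells = sum(c for _, c in classes.values())

# FR = (% of landslides in class) / (% of area in class)
fr = {
    name: (s / total_slides) / (c / total_cells)
    for name, (s, c) in classes.items()
}
```

Summing the FR values of all factor classes at a given cell yields the landslide susceptibility index used to rank map units from low to very high susceptibility.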
Abstract: With the prevalence of Internet of Things (IoT) systems, smart cities comprise complex networks including sensors, actuators, appliances, and cyber services. Their complexity and heterogeneity have made smart cities vulnerable to sophisticated cyber-attacks, especially privacy-related attacks such as inference and data poisoning. Federated learning (FL) has been regarded as a promising method to enable distributed learning with privacy-preserved intelligence in IoT applications. Even though developing privacy-preserving FL has drawn great research interest, current research concentrates only on FL with independent identically distributed (i.i.d.) data, and few studies have addressed the non-i.i.d. setting. FL is known to be vulnerable to Generative Adversarial Network (GAN) attacks, where an adversary can pose as a contributor participating in the training process to acquire the private data of other contributors. This paper proposes an innovative Privacy Protection-based Federated Deep Learning (PP-FDL) framework, which accomplishes data protection against privacy-related GAN attacks along with high classification rates on non-i.i.d. data. PP-FDL is designed to enable fog nodes to cooperate in training the FDL model in a way that ensures contributors have no access to each other's data, where class probabilities are protected using a private identifier generated for each class. The PP-FDL framework is evaluated for image classification using simple convolutional networks trained on the MNIST and CIFAR-10 datasets. The empirical results reveal that PP-FDL achieves data protection and outperforms three state-of-the-art models with accuracy improvements of 3%–8%.
Funding: Supported by the R&D&I, Spain, grants PID2020-119478GB-I00 and PID2020-115832GB-I00, funded by MCIN/AEI/10.13039/501100011033. N. Rodríguez-Barroso was supported by grant FPU18/04475 funded by MCIN/AEI/10.13039/501100011033 and by "ESF Investing in your future", Spain. J. Moyano was supported by a postdoctoral Juan de la Cierva Formación grant FJC2020-043823-I funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR. J. Del Ser acknowledges funding support from the Spanish Centro para el Desarrollo Tecnológico Industrial (CDTI) through the AI4ES project and from the Department of Education of the Basque Government (consolidated research group MATHMODE, IT1456-22).
Abstract: When data privacy is imposed as a necessity, federated learning (FL) emerges as a relevant artificial intelligence field for developing machine learning (ML) models in a distributed and decentralized environment. FL allows ML models to be trained on local devices without any need for centralized data transfer, thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties. This paradigm has gained momentum in the last few years, spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources. By virtue of FL, models can be learned from all such distributed data sources while preserving data privacy. The aim of this paper is to provide a practical tutorial on FL, including a short methodology and a systematic analysis of existing software frameworks. Furthermore, the tutorial provides exemplary cases of study from three complementary perspectives: i) foundations of FL, describing its main components, from key elements to FL categories; ii) implementation guidelines and exemplary cases of study, systematically examining the functionalities provided by existing software frameworks for FL deployment, devising a methodology to design an FL scenario, and providing exemplary cases of study with source code for different ML approaches; and iii) trends, briefly reviewing a non-exhaustive list of research directions under active investigation in the current FL landscape. The ultimate purpose of this work is to establish itself as a referential work for researchers, developers, and data scientists willing to explore the capabilities of FL in practical applications.
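The FL training loop at the heart of such tutorials can be sketched as federated averaging (FedAvg), one common aggregation rule: each client fits the global model on its private data, and the server averages the returned parameters. Client counts, data, and step sizes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_W = np.array([2.0, -1.0])

def client_data(n=50):
    # each client holds a private local dataset; only model updates are shared
    x = rng.normal(size=(n, 2))
    y = x @ TRUE_W + rng.normal(scale=0.1, size=n)
    return x, y

def local_update(w, x, y, lr=0.1, epochs=5):
    # client-side gradient descent on local data, starting from the global model
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * x.T @ (x @ w - y) / len(y)
        w -= lr * grad
    return w

clients = [client_data() for _ in range(5)]
w_global = np.zeros(2)
for _ in range(20):                                  # communication rounds
    updates = [local_update(w_global, x, y) for x, y in clients]
    w_global = np.mean(updates, axis=0)              # server averages the models

err = float(np.linalg.norm(w_global - TRUE_W))
```

Raw samples never leave the clients; only parameter vectors cross the network, which is the privacy-preserving property the paradigm rests on.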
Funding: Financially supported by the National Natural Science Foundation of China (31971541).
Abstract: Forest habitats are critical for biodiversity, ecosystem services, human livelihoods, and well-being. The capacity to conduct theoretical and applied forest ecology research addressing direct (e.g., deforestation) and indirect (e.g., climate change) anthropogenic pressures has benefited considerably from new field and statistical techniques. We used machine learning and bibliometric structural topic modelling to identify 20 latent topics comprising four principal fields from a corpus of 16,952 forest ecology/forestry articles published in eight ecology and five forestry journals between 2010 and 2022. Articles published per year increased from 820 in 2010 to 2,354 in 2021, shifting toward more applied topics. Publications from China and some countries in North America and Europe dominated, with relatively fewer articles from some countries in West and Central Africa and West Asia, despite their globally important forest resources. Most study sites were in some countries in North America, Central Asia, and South America, and in Australia. Articles utilizing the R statistical software predominated, increasing from 29.5% in 2010 to 71.4% in 2022. The most frequently used packages included lme4, vegan, nlme, MuMIn, ggplot2, car, MASS, mgcv, multcomp, and raster. R was more often used in forest ecology than in applied forestry articles. R offers advantages in script and workflow sharing compared with other statistical packages. Our findings demonstrate that the disciplines of forest ecology/forestry are expanding in both number and scope, aided by more sophisticated statistical tools, to tackle the challenges of redressing forest habitat loss and the socio-economic impacts of deforestation.
Funding: Project supported by the National Key Research and Development Program of China (Grant No. 2019YFA0705000), the Leading-edge Technology Program of Jiangsu Natural Science Foundation (Grant No. BK20192001), and the National Natural Science Foundation of China (Grant No. 11974178).
Abstract: Neural networks are becoming ubiquitous in various areas of physics as a successful machine learning (ML) technique for addressing different tasks. Based on an ML technique, we propose and experimentally demonstrate an efficient method for state reconstruction of the widely used Sagnac polarization-entangled photon source. By properly modeling the target states, a multi-output fully connected neural network is well trained using only six of the sixteen measurement bases of the standard tomography technique, and hence our method reduces resource consumption without loss of accuracy. We demonstrate the ability of the neural network to predict state parameters with high precision using both simulated and experimental data. Explicitly, the mean absolute error for all parameters is below 0.05 for the simulated data, and a mean fidelity of 0.99 is achieved for experimentally generated states. Our method could be generalized to estimate other kinds of states, as well as to other quantum information tasks.
Funding: Supported by the National Natural Science Foundation of China (Nos. 12205033, 12105317, 11905022, and 11975062), the Dalian Youth Science and Technology Project (No. 2022RQ039), the Fundamental Research Funds for the Central Universities (No. 3132023192), and the Young Scientists Fund of the Natural Science Foundation of Sichuan Province (No. 2023NSFSC1291).
Abstract: Many magnetohydrodynamic stability analyses require generating a set of equilibria with a fixed safety-factor q-profile while varying other plasma parameters. A neural network (NN)-based approach that facilitates such a process is investigated. Both multilayer perceptron (MLP)-based NN and convolutional neural network (CNN) models are trained to map the q-profile to the plasma current density J-profile, and vice versa, while satisfying the Grad–Shafranov radial force balance constraint. When the initial target models are trained using a database of semi-analytically constructed numerical equilibria, an initial CNN with one convolutional layer is found to perform better than an initial MLP model. In particular, a trained initial CNN model can also predict the q- or J-profile for experimental tokamak equilibria. The performance of both initial target models is further improved by fine-tuning the training database, i.e., by adding realistic experimental equilibria with Gaussian noise. The fine-tuned target models, referred to as fine-tuned MLP and fine-tuned CNN, reproduce the target q- or J-profile well across multiple tokamak devices. As an important application, these NN-based equilibrium profile converters can be utilized to provide a good initial guess for iterative equilibrium solvers, where the desired input quantity is the safety factor instead of the plasma current density.
Funding: Swiss National Science Foundation (Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung), Grant/Award Number: 320030_176052.
Abstract: Magnetic resonance (MR) imaging is a widely employed medical imaging technique that produces detailed anatomical images of the human body. The segmentation of MR images plays a crucial role in medical image analysis, as it enables accurate diagnosis, treatment planning, and monitoring of various diseases and conditions. Due to the lack of sufficient medical images, it is challenging to achieve accurate segmentation, especially with the application of deep learning networks. The aim of this work is to study transfer learning from T1-weighted (T1-w) to T2-weighted (T2-w) MR sequences to enhance bone segmentation with minimal required computational resources. Using an excitation-based convolutional neural network, four transfer learning mechanisms are proposed: transfer learning without fine-tuning, open fine-tuning, conservative fine-tuning, and hybrid transfer learning. Moreover, a multi-parametric segmentation model is proposed using T2-w MR as an intensity-based augmentation technique. The novelty of this work lies in the hybrid transfer learning approach, which overcomes the overfitting issue and preserves the features of both modalities with minimal computation time and resources. The segmentation results are evaluated using 14 clinical 3D brain MR and CT images. The results reveal that hybrid transfer learning is superior for bone segmentation in terms of performance and computation time, with DSCs of 0.5393±0.0007. Although T2-w-based augmentation has no significant impact on the performance of T1-w MR segmentation, it helps improve T2-w MR segmentation and develop a multi-sequence segmentation model.
Abstract: This study delves into the latest advancements in machine learning and deep learning applications in geothermal resource development, extending the analysis up to 2024. It focuses on artificial intelligence's transformative role in the geothermal industry, analyzing recent literature from Scopus and Google Scholar to identify emerging trends, challenges, and future opportunities. The results reveal a marked increase in artificial intelligence (AI) applications, particularly in reservoir engineering, with significant advancements observed post-2019. The study highlights AI's potential for enhancing drilling and exploration, emphasizing the integration of detailed case studies and practical applications. It also underscores the importance of ongoing research and tailored AI applications in light of the rapid technological advancements and future trends in the field.
Funding: This work was supported in part by the Key Project of Natural Science Research of the Anhui Provincial Department of Education under Grant KJ2017A416, and in part by the Fund of the National Sensor Network Engineering Technology Research Center (No. NSNC202103).
Abstract: When existing deep learning models are used for road extraction from high-resolution images, they are easily affected by noise factors such as tree and building occlusion and complex backgrounds, resulting in incomplete road extraction and low accuracy. We propose introducing spatial and channel attention modules into the convolutional neural network ConvNeXt. ConvNeXt is then used as the backbone network in cooperation with the perceptual analysis network UPerNet, retaining the semantic segmentation detection head, to build a new model, ConvNeXt-UPerNet, that suppresses noise interference. Training on the open-source DeepGlobe and CHN6-CUG datasets and introducing DiceLoss on top of CrossEntropyLoss solves the problem of positive and negative sample imbalance. Experimental results show that the new network model achieves the following performance on the DeepGlobe dataset: 79.40% precision (Pre), 97.93% accuracy (Acc), 69.28% intersection over union (IoU), and 83.56% mean intersection over union (MIoU). On the CHN6-CUG dataset, the model achieves 78.17% Pre, 97.63% Acc, 65.4% IoU, and 81.46% MIoU. Compared with other network models, the fused ConvNeXt-UPerNet model extracts road information better in the presence of the noise contained in high-resolution remote sensing images. It also captures multiscale image feature information with unified perception, ultimately improving the generalization ability of deep learning techniques for extracting complex roads from high-resolution remote sensing images.
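The class-imbalance remedy described above, adding a Dice term to cross-entropy, can be sketched for the binary road/background case as follows. This is a generic formulation with illustrative equal weights, not the authors' exact implementation:

```python
import numpy as np

def dice_ce_loss(probs, targets, w_dice=1.0, w_ce=1.0, eps=1e-7):
    """Combined loss for binary segmentation.

    probs   -- predicted foreground probabilities in (0, 1)
    targets -- ground-truth mask of 0s and 1s
    """
    probs = np.clip(probs, eps, 1 - eps)
    # pixel-wise binary cross-entropy: treats every pixel equally
    ce = -np.mean(targets * np.log(probs) + (1 - targets) * np.log(1 - probs))
    # soft Dice: overlap-based, so the background majority cannot dominate it
    inter = np.sum(probs * targets)
    dice = 1 - (2 * inter + eps) / (np.sum(probs) + np.sum(targets) + eps)
    return w_ce * ce + w_dice * dice

targets = np.array([0, 0, 0, 0, 0, 0, 1, 1], dtype=float)  # sparse road pixels
good = np.array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.9, 0.9])
bad = np.array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])   # misses the road
```

Because roads occupy few pixels, an all-background prediction scores a deceptively low cross-entropy; the Dice term penalizes it heavily, which is why the combination counters positive/negative imbalance.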
Funding: National Key Research and Development Program of China, Grant/Award Number: 2022YFB4700701; National Natural Science Foundation of China, Grant/Award Numbers: 52375035, U21A20489; CAMS Innovation Fund for Medical Sciences, Grant/Award Number: 2022-I2M-C&T-A-005; Shenzhen Science and Technology Program, Grant/Award Numbers: JSGG20220831100202004, JCYJ20220818101412026.
Abstract: A comparative study of two force perception skill learning approaches for robot-assisted spinal surgery, the impedance model method and the imitation learning (IL) method, is presented. The impedance model method develops separate models for the surgeon and patient, incorporating spring-damper and bone-grinding models. Expert surgeons' feature parameters are collected and mapped using support vector regression and image navigation techniques. The imitation learning approach utilizes long short-term memory (LSTM) networks and addresses accurate data labelling challenges with custom models. Experimental results demonstrate skill recognition rates of 63.61%–74.62% for the impedance model approach, which relies on manual feature extraction. Conversely, the imitation learning approach achieves a force perception recognition rate of 91.06%, outperforming the impedance model on curved bone surfaces. The findings demonstrate the potential of imitation learning to enhance skill acquisition in robot-assisted spinal surgery by eliminating the laborious process of manual feature extraction.
Abstract: One of the biggest dangers to society today is terrorism, whose attacks have become one of the most significant risks to international peace and national security. Big data, information analysis, and artificial intelligence (AI) have become the basis for making strategic decisions in many sensitive areas, such as fraud detection, risk management, medical diagnosis, and counter-terrorism. However, there is still a need to assess how terrorist attacks are related, initiated, and detected. For this purpose, we propose a novel framework for classifying and predicting terrorist attacks. The proposed framework posits that neglected text attributes included in the Global Terrorism Database (GTD) can influence the accuracy of the model's classification of terrorist attacks, where each part of the data can provide vital information to enrich classifier learning. Each data point in a multiclass taxonomy has one or more tags attached to it, referred to as "related tags." We applied machine learning classifiers to classify terrorist attack incidents obtained from the GTD. A transformer-based technique called DistilBERT extracts and learns contextual features from text attributes to acquire more information from the text data. The extracted contextual features are combined with the "key features" of the dataset and used to perform the final classification. The study explored different experimental setups with various classifiers to evaluate the model's performance. The experimental results show that the proposed framework outperforms the latest techniques for classifying terrorist attacks, with an accuracy of 98.7% using a combined feature set and an extreme gradient boosting classifier.
Abstract: In this editorial, we comment on the article by Zhang et al entitled "Development of a machine learning-based model for predicting the risk of early postoperative recurrence of hepatocellular carcinoma." Hepatocellular carcinoma (HCC), which is characterized by high incidence and mortality rates, remains a major global health challenge, primarily due to the critical issue of postoperative recurrence. Early recurrence, defined as recurrence that occurs within 2 years post-treatment, is linked to the hidden spread of the primary tumor and significantly impacts patient survival. Traditional predictive factors, including both patient- and treatment-related factors, have limited ability to predict HCC recurrence. The integration of machine learning algorithms, fueled by the exponential growth of computational power, has revolutionized HCC research. The study by Zhang et al demonstrated the use of a groundbreaking preoperative prediction model for early postoperative HCC recurrence. Challenges persist, including sample size constraints, issues with data handling, and the need for further validation and interpretability. The study emphasizes the need for collaborative efforts, multicenter studies, and comparative analyses to validate and refine the model. Overcoming these challenges and exploring innovative approaches, such as multi-omics integration, will enhance personalized oncology care. This study marks a significant stride toward precise, efficient, and personalized oncology practice, offering hope for improved patient outcomes in HCC treatment.
Funding: Supported in part by the Gusu Innovation and Entrepreneurship Leading Talents Program in Suzhou City, grant numbers ZXL2021425 and ZXL2022476; the Doctor of Innovation and Entrepreneurship Program in Jiangsu Province, grant number JSSCBS20211440; the Jiangsu Province Key R&D Program, grant number BE2019682; the Natural Science Foundation of Jiangsu Province, grant number BK20200214; the National Key R&D Program of China, grant number 2017YFB0403701; the National Natural Science Foundation of China, grant numbers 61605210, 61675226, and 62075235; the Youth Innovation Promotion Association of the Chinese Academy of Sciences, grant number 2019320; the Frontier Science Research Project of the Chinese Academy of Sciences, grant number QYZDB-SSW-JSC03; and the Strategic Priority Research Program of the Chinese Academy of Sciences, grant number XDB02060000.
Abstract: The prediction of fundus fluorescein angiography (FFA) images from fundus structural images is a cutting-edge research topic in ophthalmological image processing. Prediction comprises estimating FFA from fundus camera imaging, single-phase FFA from scanning laser ophthalmoscopy (SLO), and three-phase FFA also from SLO. Although many deep learning models are available, a single model can only perform one or two of these prediction tasks. To accomplish all three prediction tasks with a unified method, we propose a unified deep learning model for predicting FFA images from fundus structure images using a supervised generative adversarial network. The three prediction tasks are processed as follows: data preparation, network training under FFA supervision, and FFA image prediction from fundus structure images on a test set. By comparing the FFA images predicted by our model, pix2pix, and CycleGAN, we demonstrate the remarkable progress achieved by our proposal. The high performance of our model is validated in terms of the peak signal-to-noise ratio, structural similarity index, and mean squared error.
Funding: Supported by the National Natural Science Foundation of Hunan Province, No. 2023JJ60039; the Natural Science Foundation of Hunan Province National Health Commission, No. B202303027655; the Natural Science Foundation of Changsha Science and Technology Bureau, No. Kq2208150; the Wu Jieping Foundation of China, Nos. 320.6750.2022-22-59 and 320.6750.2022-17-41; and the Guangdong Association of Clinical Trials (GACT)/Chinese Thoracic Oncology Group (CTONG) and Guangdong Provincial Key Lab of Translational Medicine in Lung Cancer, No. 2017B030314120.
Abstract: BACKGROUND: Non-small cell lung cancer (NSCLC) is the primary form of lung cancer, and the combination of chemotherapy with immunotherapy offers promising treatment options for patients suffering from this disease. However, the emergence of drug resistance significantly limits the effectiveness of these therapeutic strategies. Consequently, it is imperative to devise methods for accurately detecting and evaluating the efficacy of these treatments. AIM: To identify the metabolic signatures associated with neutrophil extracellular traps (NETs) and chemoimmunotherapy efficacy in NSCLC patients. METHODS: In total, 159 NSCLC patients undergoing first-line chemoimmunotherapy were enrolled. We first investigated the characteristics influencing clinical efficacy. Circulating levels of NETs and cytokines were measured by commercial kits. Liquid chromatography tandem mass spectrometry quantified plasma metabolites, and differential metabolites were identified. Least absolute shrinkage and selection operator, support vector machine-recursive feature elimination, and random forest algorithms were employed. Using plasma metabolic profiles and machine learning algorithms, predictive metabolic signatures were established. RESULTS: First, the levels of circulating interleukin-8, neutrophil-to-lymphocyte ratio, and NETs were closely related to poor efficacy of first-line chemoimmunotherapy. Patients were classed into a low-NET group or a high-NET group. A total of 54 differential plasma metabolites were identified. These metabolites were primarily involved in arachidonic acid and purine metabolism. Three key metabolites were identified as crucial variables: 8,9-epoxyeicosatrienoic acid, L-malate, and bis(monoacylglycerol)phosphate (18:1/16:0). Using metabolomic sequencing data and machine learning methods, key metabolic signatures were screened to predict NET level as well as chemoimmunotherapy efficacy. CONCLUSION: The identified metabolic signatures may effectively distinguish NET levels and predict clinical benefit from chemoimmunotherapy in NSCLC patients.
Funding: Funded by the National Key R&D Program of China (No. 2021YFC3000702), the Special Fund of the Institute of Geophysics, China Earthquake Administration (No. DQJB20B15), the National Natural Science Foundation of China Youth Grant (No. 41804059), the Joint Funds of the National Natural Science Foundation of China (No. U223920029), and the Science for Earthquake Resilience Program of the China Earthquake Administration (No. XH211103).
Abstract: High-quality datasets are critical for the development of advanced machine-learning algorithms in seismology. Here, we present an earthquake dataset based on the ChinArray Phase I records (X1). ChinArray Phase I was deployed in the southern north-south seismic zone (20°N–32°N, 95°E–110°E) in 2011–2013 using 355 portable broadband seismic stations. CREDIT-X1local, the first release of the ChinArray Reference Earthquake Dataset for Innovative Techniques (CREDIT), includes comprehensive information for the 105,455 local events that occurred in the southern north-south seismic zone during array observation, incorporating them into a single HDF5 file. Original 100-Hz sampled three-component waveforms are organized by event for stations within epicentral distances of 1,000 km, and records of ≥200 s are included for each waveform. Two types of phase labels are provided. The first includes manually picked labels for 5,999 events with magnitudes ≥2.0, providing 66,507 Pg, 42,310 Sg, 12,823 Pn, and 546 Sn phases. The second contains automatically labeled phases for 105,442 events with magnitudes of −1.6 to 7.6. These phases were picked using a recurrent neural network phase picker and screened using the corresponding travel-time curves, resulting in 1,179,808 Pg, 884,281 Sg, 176,089 Pn, and 22,986 Sn phases. Additionally, first-motion polarities are included for 31,273 Pg phases. Event and station locations are provided, so that deep learning networks for both conventional phase picking and phase association can be trained and validated. The CREDIT-X1local dataset is the first million-scale dataset constructed from a dense seismic array and is designed to support various multi-station deep-learning methods, high-precision focal mechanism inversion, and seismic tomography studies. Owing to the high seismicity of the southern north-south seismic zone in China, the dataset also has great potential for future scientific discoveries.