The scale and complexity of big data are growing continuously, posing severe challenges to traditional data processing methods, especially in the field of clustering analysis. To address this issue, this paper introduces a new method named Big Data Tensor Multi-Cluster Distributed Incremental Update (BDTMCDIncreUpdate), which combines distributed computing, storage technology, and incremental update techniques to provide an efficient and effective means for clustering analysis. Firstly, the original dataset is divided into multiple sub-blocks, and distributed computing resources are utilized to process the sub-blocks in parallel, enhancing efficiency. Then, initial clustering is performed on each sub-block using tensor-based multi-clustering techniques to obtain preliminary results. When new data arrive, incremental update technology is employed to update the core tensor and factor matrix, ensuring that the clustering model can adapt to changes in the data. Finally, by combining the updated core tensor and factor matrix with historical computational results, refined clustering results are obtained, achieving real-time adaptation to dynamic data. In experiments on the Aminer dataset, the BDTMCDIncreUpdate method demonstrated outstanding performance on the accuracy (ACC) and normalized mutual information (NMI) metrics, achieving an accuracy of 90% and an NMI score of 0.85, outperforming existing methods such as TClusInitUpdate and TKLClusUpdate in most scenarios. The BDTMCDIncreUpdate method therefore offers an innovative solution for big data analysis, integrating distributed computing, incremental updates, and tensor-based multi-clustering techniques. It not only improves efficiency and scalability in processing large-scale, high-dimensional datasets but has also been validated for effectiveness and accuracy through experiments. The method shows great potential in real-world applications where dynamic data growth is common and is of significant importance for advancing the development of data analysis technology.
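The incremental-update idea, folding newly arrived data into existing cluster statistics without reprocessing historical data, can be illustrated with a much-simplified sketch. This is a running-mean centroid update, not the paper's tensor decomposition; the function name and the centroid-based simplification are illustrative assumptions.

```python
import numpy as np

def incremental_cluster_update(centroids, counts, new_points):
    """Fold new points into existing clusters without revisiting old data.

    centroids: (k, d) array of current cluster means
    counts:    (k,) array of points already absorbed per cluster
    """
    for x in new_points:
        k = np.argmin(np.linalg.norm(centroids - x, axis=1))  # nearest cluster
        counts[k] += 1
        centroids[k] += (x - centroids[k]) / counts[k]        # running-mean update
    return centroids, counts
```

The same pattern, "update sufficient statistics in place as data arrives", is what makes incremental schemes cheaper than re-clustering the full dataset.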
Time-Sensitive Networking (TSN), with its deterministic transmission capability, is increasingly used in many emerging fields. It mainly guarantees the Quality of Service (QoS) of applications with strict requirements on time and security. One of the core features of TSN is traffic scheduling with bounded low delay in the network. However, traffic scheduling schemes in TSN are usually synthesized offline and lack dynamism. To implement incremental scheduling of newly arrived traffic in TSN, we propose a Dynamic Response Incremental Scheduling (DR-IS) method for time-sensitive traffic and deploy it on a software-defined time-sensitive network architecture. Under the premise of meeting the traffic scheduling requirements, we adopt two modes, traffic shift and traffic exchange, to dynamically adjust the time-slot injection positions of the traffic in the original scheme, and we determine the sending offset of the new time-sensitive traffic to minimize the global traffic transmission jitter. The evaluation results show that the DR-IS method can effectively limit the growth of traffic transmission jitter during incremental scheduling without affecting transmission delay, thus realizing dynamic incremental scheduling of time-sensitive traffic in TSN.
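The underlying slot-placement problem can be sketched in miniature: given slots already occupied in a scheduling cycle, find an injection offset whose periodic repetitions are all free. The single-resource, unit-slot model and the greedy search are simplifying assumptions, not the DR-IS algorithm itself.

```python
def find_injection_offset(occupied, cycle, period):
    """Earliest offset whose periodic slots (offset, offset+period, ...) are all free.

    occupied: set of slot indices already used in the cycle
    cycle:    hyperperiod length in slots; period divides cycle
    """
    for offset in range(period):
        slots = {(offset + k * period) % cycle for k in range(cycle // period)}
        if not (slots & occupied):
            return offset
    return None  # no feasible offset; existing flows would need shifting
```

DR-IS's traffic-shift and traffic-exchange modes address exactly the `None` case: instead of rejecting the flow, slots of already-scheduled flows are moved to open a feasible offset.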
Currently, distributed routing protocols are constrained by offering a single path between any pair of nodes, thereby limiting the potential throughput and overall network performance. This approach not only restricts the flow of data but also makes the network susceptible to failures if the primary path is disrupted. In contrast, routing protocols that leverage multiple paths within the network offer a more resilient and efficient solution. Multipath routing, as a fundamental concept, surpasses the limitations of traditional shortest-path-first protocols. It not only redirects traffic to unused resources, effectively mitigating network congestion, but also ensures load balancing across the network. This optimization significantly improves network utilization and boosts overall performance, making it a widely recognized method for enhancing network reliability. To further strengthen network resilience against failures, we introduce a routing scheme known as Multiple Nodes with at least Two Choices (MNTC). This approach aims to significantly enhance network availability by providing each node with at least two routing choices. By doing so, it not only reduces the dependency on a single path but also creates redundant paths that can be utilized in case of failures, thereby enhancing the overall resilience of the network. To ensure the optimal placement of nodes, we propose three incremental deployment algorithms. These algorithms carefully select the most suitable set of nodes for deployment, taking into account factors such as node connectivity, traffic patterns, and network topology. By deploying MNTC on a carefully chosen set of nodes, we can significantly enhance network reliability without the need for a complete overhaul of the existing infrastructure. We have conducted extensive evaluations of MNTC in diverse topological spaces, demonstrating its effectiveness in maintaining high network availability with minimal path stretch. The results show that even when implemented on just 60% of nodes, our incremental deployment method significantly boosts network availability. This underscores the potential of MNTC in enhancing network resilience and performance, making it a viable solution for modern networks facing increasing demands and complexities. The algorithms OSPF, TBFH, DC and LFC perform fast rerouting under strict conditions, while MNTC is not restricted by these conditions. In five real network topologies, the average network availability of MNTC is improved by 14.68%, 6.28%, 4.76% and 2.84%, respectively, compared with OSPF, TBFH, DC and LFC.
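A back-of-envelope calculation shows why "at least two choices" pays off: if each next-hop choice fails independently with probability p, a hop survives with probability 1 − p^k given k choices. The independence assumption is an idealization and not part of the paper's availability model.

```python
def hop_availability(p_fail, choices):
    """Probability that at least one of `choices` independent next-hops survives."""
    return 1.0 - p_fail ** choices
```

With p_fail = 0.1, a single choice gives 0.90 availability while two choices give 0.99, which is the qualitative effect MNTC exploits.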
Hyperspectral images typically have high spectral resolution but low spatial resolution, which impacts the reliability and accuracy of subsequent applications, for example, remote sensing classification and mineral identification. In traditional methods based on deep convolutional neural networks, indiscriminately extracting and fusing spectral and spatial features makes it challenging to utilize the differentiated information across adjacent spectral channels. We therefore propose a multi-branch interleaved iterative upsampling hyperspectral image super-resolution reconstruction network (MIIUSR) to address these problems. We reinforce spatial feature extraction by integrating detailed features from different receptive fields across adjacent channels. Furthermore, we propose an interleaved iterative upsampling process during the reconstruction stage, which progressively fuses incremental information among adjacent frequency bands. Additionally, we add two parallel three-dimensional (3D) feature extraction branches to the backbone network to extract spectral and spatial features of varying granularity. We further enhance the backbone network's reconstruction results by leveraging the difference between two-dimensional (2D) channel-grouping spatial features and 3D multi-granularity features. Applying the proposed network to the CAVE test set shows that, at a scaling factor of ×4, the peak signal-to-noise ratio, spectral angle mapping, and structural similarity are 37.310 dB, 3.525 and 0.9438, respectively. Moreover, extensive experiments conducted on the Harvard and Foster datasets demonstrate the superior potential of the proposed model for hyperspectral super-resolution reconstruction.
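Two of the reported metrics can be computed directly from the reconstructed cube. The sketch below assumes image cubes shaped (height, width, bands) with intensities in [0, peak]; it implements standard PSNR and the spectral angle mapper (SAM), not any code from the paper.

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB between reference and estimate."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def sam_degrees(ref, est):
    """Mean spectral angle (degrees) between per-pixel spectra of (H, W, B) cubes."""
    r = ref.reshape(-1, ref.shape[-1])
    e = est.reshape(-1, est.shape[-1])
    cos = (r * e).sum(1) / (np.linalg.norm(r, axis=1) * np.linalg.norm(e, axis=1))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean())
```

Higher PSNR and lower SAM both indicate better reconstruction; SAM specifically captures spectral fidelity, which plain PSNR can miss.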
Humans are experiencing the inclusion of artificial agents in their lives, such as unmanned vehicles, service robots, voice assistants, and intelligent medical care. If artificial agents cannot align with social values or make ethical decisions, they may not meet the expectations of humans. Traditionally, an ethical decision-making framework is constructed by rule-based or statistical approaches. In this paper, we propose an ethical decision-making framework based on incremental ILP (Inductive Logic Programming), which can overcome the brittleness of rule-based approaches and the limited interpretability of statistical approaches. As current incremental ILP systems find it difficult to resolve conflicts, we propose a novel ethical decision-making framework that accounts for conflicts and adopts our proposed incremental ILP system. The framework consists of two processes: a learning process and a deduction process. The first process records bottom clauses with their score functions and learns rules guided by entailment and the score function. The second process derives an ethical decision based on the rules. In an ethical scenario about chatbots for teenagers' mental health, we verify that our framework can learn ethical rules and make ethical decisions. In addition, we extract the incremental ILP component from the framework and compare it with state-of-the-art ILP systems based on ASP (Answer Set Programming), focusing on conflict resolution. The comparisons show that our proposed system can generate better-quality rules than most other systems.
To improve the prediction accuracy of chaotic time series and reconstruct a more reasonable phase-space structure for the prediction network, we propose a convolutional neural network-long short-term memory (CNN-LSTM) prediction model based on an incremental attention mechanism. Firstly, a traversal search is conducted through the traversal layer over the finite parameters of the phase space. Then, an incremental attention layer is utilized for parameter judgment based on the dimension weight criteria (DWC). The phase-space parameters that best meet the DWC are selected and fed into the input layer. Finally, the constructed CNN-LSTM network extracts spatio-temporal features and provides the final prediction results. The model is verified using the Logistic, Lorenz, and sunspot chaotic time series, and performance is compared along two dimensions: prediction accuracy and network phase-space structure. Additionally, the CNN-LSTM network based on incremental attention is compared with long short-term memory (LSTM), convolutional neural network (CNN), recurrent neural network (RNN), and support vector regression (SVR) models for prediction accuracy. The experimental results indicate that the proposed composite network model possesses enhanced capability in extracting temporal features and achieves higher prediction accuracy. The phase-space parameter estimation algorithm is also compared with the CAO, false nearest neighbor, and C-C methods, three typical approaches for determining chaotic phase-space parameters. The experiments reveal that the phase-space parameter estimation algorithm based on the incremental attention mechanism yields better prediction accuracy than traditional phase-space reconstruction methods across five networks: CNN-LSTM, LSTM, CNN, RNN, and SVR.
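The phase-space reconstruction these models build on is a Takens-style delay embedding: a scalar series is unfolded into vectors using an embedding dimension m and delay τ, which is exactly the parameter pair the incremental attention layer selects. A minimal sketch, with the Logistic map as a test series; the function names are illustrative.

```python
import numpy as np

def delay_embed(series, m, tau):
    """Delay embedding: rows are [x(t), x(t+tau), ..., x(t+(m-1)tau)]."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (m - 1) * tau
    return np.stack([series[i * tau : i * tau + n] for i in range(m)], axis=1)

def logistic_map(x0=0.4, mu=4.0, n=100):
    """Generate a chaotic Logistic-map series, one of the paper's test systems."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(mu * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)
```

Each embedded row then serves as one input sample for the downstream predictor, so the choice of (m, τ) directly shapes what the CNN-LSTM sees.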
The visions of Industry 4.0 and 5.0 have reinforced the industrial environment and made artificial intelligence a major facilitator. Diagnosing machine faults has become a solid foundation for automatically recognizing machine failures, so that timely maintenance can ensure safe operation. Transfer learning is a promising solution that can enhance a machine fault diagnosis model by borrowing pre-trained knowledge from a source model and applying it to a target model, which typically involves two datasets. In response to the availability of multiple datasets, this paper proposes selective and adaptive incremental transfer learning (SA-ITL), which fuses three algorithms: a hybrid selective algorithm, a transferability enhancement algorithm, and an incremental transfer learning algorithm. The selective algorithm enables selecting and ordering appropriate datasets for transfer learning and selecting useful knowledge to avoid negative transfer. The algorithm also adaptively adjusts the portion of training data to balance learning rate and training time. The proposed algorithm is evaluated and analyzed using ten benchmark datasets. Compared with algorithms from existing works, SA-ITL improves accuracy on all datasets. Ablation studies present the accuracy enhancements of SA-ITL's components: the hybrid selective algorithm (1.22%-3.82%), the transferability enhancement algorithm (1.91%-4.15%), and the incremental transfer learning algorithm (0.605%-2.68%). They also show the benefit of enhancing the target model with heterogeneous image datasets that widen the range of domain selection between source and target domains.
We investigated parametric optimization of incremental sheet forming of stainless steel using Grey Relational Analysis (GRA) coupled with Principal Component Analysis (PCA). AISI 316L stainless steel sheets were used to form double-wall-angle pyramids with the aid of a tungsten carbide tool. GRA coupled with PCA was used to plan the experimental conditions. The effects of control factors such as Tool Diameter (TD), Step Depth (SD), Bottom Wall Angle (BWA), Feed Rate (FR) and Spindle Speed (SS) on Top Wall Angle (TWA) and Top Wall Angle Surface Roughness (TWASR) were studied. Wall angle increases with increasing tool diameter due to the larger contact area between tool and workpiece. As step depth, feed rate and spindle speed increase, TWASR decreases with increasing tool diameter. As step depth increases, the hydrostatic stress rises, causing severe cracks in the deformed surface. It was concluded that the proposed hybrid method is suitable for optimizing the factors and responses.
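The grey relational grade at the heart of GRA can be sketched as follows. This version uses equal response weights and larger-is-better normalization; the paper's PCA step, which derives the weights for combining TWA and TWASR, is omitted for brevity, so this is a simplified stand-in rather than the full hybrid method.

```python
import numpy as np

def grey_relational_grades(responses, zeta=0.5):
    """Equal-weight grey relational grades; rows = experiments, cols = responses.

    Assumes larger-is-better responses; `zeta` is the distinguishing coefficient.
    """
    X = np.asarray(responses, dtype=float)
    norm = (X - X.min(0)) / (X.max(0) - X.min(0))     # normalize each response to [0, 1]
    delta = np.abs(1.0 - norm)                        # deviation from the ideal sequence
    coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coef.mean(axis=1)                          # grade per experiment
```

The experiment with the highest grade is closest to the ideal across all responses, which is how GRA ranks candidate parameter settings.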
Objective: Robotic-assisted spine surgery (RASS) has been shown to enhance precision, reduce operative time, prevent complications, facilitate minimally invasive spinal surgery, and decrease revision surgery rates, leading to improved patient outcomes. This study aimed to compare the cost-effectiveness of RASS and non-robotic-assisted surgery for degenerative spine disease at a single center. Methods: This retrospective study, including 122 patients, was conducted at a single center from March 2015 to February 2022. Patients who underwent robot-assisted surgery were assigned to the robot group, and patients who underwent non-robotic-assisted surgery were assigned to the non-robot group. Various data, including demographic information, surgical details, outcomes, and cost-effectiveness, were collected for both groups. Cost-effectiveness was determined using the incremental cost-effectiveness ratio (ICER), and subgroup analysis was conducted for patients with 1 or 2 levels of spinal instrumentation. The analysis was performed using STATA SE version 15 and TreeAge Pro 2020, with Monte Carlo simulations for the cost-effectiveness acceptability curve. Results: The overall ICER was $22,572, but it decreased to $16,980 when considering cases with only 1 or 2 levels of instrumentation. RASS is deemed cost-effective when the willingness to pay is $3,000-$4,000 if fewer than 2 levels of the spine are instrumented. Conclusions: The cost-effectiveness of robotic assistance becomes apparent when there is a reduced need for open surgeries, leading to decreased revision rates caused by complications such as misplaced screws or infections. Therefore, it is advisable to allocate healthcare budget resources to spine robots, as RASS proves to be cost-effective, particularly when only two or fewer spinal levels require instrumentation.
Objective: This study aims to estimate the cost-effectiveness of a combined chemotherapy regimen containing Bedaquiline (BR) versus the conventional treatment regimen (CR, not containing Bedaquiline) for the treatment of adults with multidrug-resistant tuberculosis (MDR-TB) in China. Methods: A combination of a decision tree and a Markov model was developed to estimate the costs and effects of BR and CR for MDR-TB patients over ten years. Model parameters were synthesized from the literature, the national TB surveillance information system, and consultation with experts. The incremental cost-effectiveness ratio (ICER) of BR vs. CR was determined. Results: BR (vs. CR) had a higher sputum culture conversion rate and cure rate and prevented many premature deaths (a 12.8% decrease), thereby yielding more quality-adjusted life years (QALYs) (an increase of 2.31 years). The per capita cost of BR was as high as 138,000 yuan, roughly double that of CR. The ICER for BR was 33,700 yuan/QALY, which was lower than China's 1× per capita Gross Domestic Product (GDP) in 2020 (72,400 yuan). Conclusion: BR is shown to be cost-effective. When the unit price of Bedaquiline reaches or falls below 57.21 yuan per unit, BR is expected to be the dominant strategy over CR in China.
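The ICER used in both cost-effectiveness abstracts above is simply the incremental cost divided by the incremental effect. A minimal sketch, with hypothetical numbers (not the studies' actual inputs):

```python
def icer(cost_new, cost_ref, effect_new, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect (e.g., per QALY)."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)
```

An intervention is judged cost-effective when this ratio falls below the willingness-to-pay threshold, which the BR-vs-CR study sets at 1× per capita GDP.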
Flavonoids are the primary functional components of the flowers of Hibiscus manihot L. (HMLF). In this study, an efficient and green ionic liquid-high-speed homogenization coupled with microwave-assisted extraction (IL-HSH-MAE) technique was established and applied to extract seven target flavonoids from HMLF. Single-factor experiments and a Box-Behnken design (BBD) were utilized to optimize the extraction conditions of IL-HSH-MAE, which were as follows: 0.1 M [C4mim]Br, homogenization speed of 7,000 rpm, homogenization time of 120 s, liquid/solid ratio of 24 mL/g, extraction temperature of 62 °C and extraction time of 15 min. The maximal total extraction yield of the seven target flavonoids reached 22.04 mg/g, considerably greater than the yields obtained by IL-HSH, IL-MAE, 60% ethanol-HSH-MAE and 60% ethanol-MAE. These findings suggest that IL-HSH-MAE can be exploited as a rapid and efficient approach for extracting natural products from plants. The process is also environmentally friendly and highly efficient, and is expected to be a promising extraction technology.
BACKGROUND: Over the past years, patient-specific instrumentation (PSI) for total knee arthroplasty (TKA) has been implemented and routinely used. No clear answer has been given on its associated cost and cost-effectiveness compared with conventional instrumentation (CI) for TKA. AIM: To compare the cost and cost-effectiveness of PSI TKA with CI TKA. METHODS: A literature search was performed in healthcare, health-economic, and medical databases (MEDLINE, EMBASE, CINAHL, Web of Science, Cochrane Library, EconLit). It was conducted in April 2021 and again in January 2022. Relevant literature included randomised controlled trials, retrospective studies, prospective studies, observational studies, and case-control studies. All studies were assessed for methodological quality. Relevant outcomes included the incremental cost-effectiveness ratio, quality-adjusted life years, total costs, imaging costs, production costs, sterilization-associated costs, surgery duration costs and readmission-rate costs. All eligible studies were assessed for risk of bias. Meta-analysis was performed for outcomes with sufficient data. RESULTS: Thirty-two studies were included in the systematic review, two of them in the meta-analysis. The sample comprised 3,994 PSI TKAs and 13,267 CI TKAs. The methodological quality of the included studies, based on Consensus on Health Economic Criteria scores and risk of bias, ranged from average to good. PSI TKA costs less than CI TKA when considering mean operating room time and its associated costs and tray sterilization per patient case. PSI TKA costs more than CI TKA when considering imaging and production costs. Considering total costs per patient case, PSI TKA is more expensive than CI TKA. Meta-analysis comparing total costs for PSI TKA and CI TKA showed a significantly higher cost for PSI TKA. CONCLUSION: Costs for PSI and CI TKA differ depending on which aspects of their implementation are considered. Total costs per patient case are higher for PSI TKA than for CI TKA.
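A meta-analysis of total costs pools per-study effect estimates. A minimal fixed-effect, inverse-variance sketch (the review's actual model choice and data are not reproduced here; this only illustrates the pooling arithmetic):

```python
def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooled estimate and its variance.

    effects:   per-study effect estimates (e.g., mean cost differences)
    variances: per-study variances of those estimates
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    estimate = sum(e * w for e, w in zip(effects, weights)) / total
    return estimate, 1.0 / total
```

Studies with smaller variance get larger weight, so precise studies dominate the pooled cost difference.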
Prunus serotina and Robinia pseudoacacia are the most widespread invasive trees in Central Europe. In addition, according to climate models, decreased growth of many economically and ecologically important native trees will likely be observed in the future. We aimed to assess the impact of these two neophytes, which differ in biomass range and nitrogen-fixing ability under Central European conditions, on the relative aboveground biomass increments of the native oaks Quercus robur and Q. petraea and Scots pine Pinus sylvestris. We aimed to increase our understanding of the relationship between facilitation and competition between woody alien species and overstory native trees. We established 72 circular plots (0.05 ha) in two different forest habitat types and stands varying in age in western Poland. We chose plots with different abundances of the studied neophytes to determine how effects scaled along a quantitative invasion gradient. Furthermore, we collected growth cores of the studied native species and calculated aboveground biomass increments at the tree and stand levels. We then used generalized linear mixed-effects models to assess the impact of invasive species abundance on the relative aboveground biomass increments of the native tree species. We did not find a biologically or statistically significant impact of invasive R. pseudoacacia or P. serotina on the relative aboveground biomass increments of native oaks and pines along the quantitative gradient of invader biomass, or on the proportion of total stand biomass accounted for by invaders. The neophytes did not act as growth stimulators for the native trees, but neither did they compete with them for resources in a way that would escalate the negative impact of climate change on pines and oaks. The neophytes should therefore not significantly modify the carbon sequestration capacity of the native species. Our work combines elements of the per capita effect of invasion with research on mixed forest management.
Deep Convolutional Neural Networks (DCNNs) can capture discriminative features from large datasets. However, how to incrementally learn new samples without forgetting old ones and recognize novel classes that arise in a dynamically changing world, e.g., classifying newly discovered fish species, remains an open problem. We address an even more challenging and realistic setting of this problem in which new class samples are insufficient, i.e., Few-Shot Class-Incremental Learning (FSCIL). Current FSCIL methods augment the training data to alleviate overfitting on novel classes. By contrast, we propose Filter Bank Networks (FBNs), which augment the learnable filters to capture fine-detailed features for adapting to future new classes. In the forward pass, FBNs augment each convolutional filter to a virtual filter bank containing the canonical filter, i.e., itself, and multiple transformed versions. During back-propagation, FBNs explicitly stimulate fine-detailed features to emerge and collectively align all gradients of each filter bank to learn the canonical filter. FBNs capture pattern variants that do not yet exist in the pretraining session, making it easy to incorporate new classes in the incremental learning phase. Moreover, FBNs introduce model-level prior knowledge to efficiently utilize the limited few-shot data. Extensive experiments on the MNIST, CIFAR100, CUB200, and Mini-ImageNet datasets show that FBNs consistently outperform the baseline by a significant margin, reporting new state-of-the-art FSCIL results. In addition, we contribute a challenging FSCIL benchmark, Fishshot1K, which contains 8,261 underwater images covering 1,000 ocean fish species. The code is included in the supplementary materials.
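The "virtual filter bank" idea, a canonical filter plus transformed copies, can be sketched for a 2D filter. The abstract does not specify which transforms FBNs use, so the rotations and flips below are illustrative assumptions standing in for the learned transformed versions.

```python
import numpy as np

def virtual_filter_bank(f):
    """Canonical 2D filter plus rotated/flipped variants (illustrative transforms)."""
    f = np.asarray(f)
    return [f, np.rot90(f, 1), np.rot90(f, 2), np.rot90(f, 3),
            np.fliplr(f), np.flipud(f)]
```

Convolving an input with every member of such a bank exposes pattern variants (rotated or mirrored strokes, fins, textures) without adding independently learned parameters for each variant.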
Recently, deep convolutional neural networks (DCNNs) have achieved remarkable results in image classification tasks. Despite convolutional networks' great successes, their training process relies on a large amount of data prepared in advance, which is often challenging in real-world applications, such as streaming data and concept drift. For this reason, incremental learning (continual learning) has attracted increasing attention from scholars. However, incremental learning is associated with the challenge of catastrophic forgetting: the performance on previous tasks drastically degrades after learning a new task. In this paper, we propose a new strategy to alleviate catastrophic forgetting when neural networks are trained in continual domains. Specifically, two components are applied: data translation based on transfer learning and knowledge distillation. The former translates a portion of new data to reconstruct the partial data distribution of the old domain. The latter uses an old model as a teacher to guide a new model. The experimental results on three datasets have shown that our work can effectively alleviate catastrophic forgetting by a combination of the two methods aforementioned.
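The knowledge-distillation component can be sketched as a cross-entropy between temperature-softened teacher and student outputs. This is the standard Hinton-style formulation; whether the paper uses exactly this loss and temperature is an assumption.

```python
import numpy as np

def softened(logits, T):
    """Temperature-softened softmax over a logit vector."""
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy of the student's softened outputs against the teacher's."""
    p = softened(teacher_logits, T)   # soft targets from the old (teacher) model
    q = softened(student_logits, T)
    return float(-(p * np.log(q)).sum() * T * T)  # T^2 rescales gradient magnitude
```

Minimizing this loss alongside the new task's loss pulls the new model's outputs toward the old model's, which is what counteracts forgetting.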
Background: With the development of information technology, there has been a significant increase in the volume of network traffic logs mixed with various types of cyberattacks. Traditional intrusion detection systems (IDSs) are limited in detecting new, changing patterns and identifying malicious traffic traces in real time. Therefore, there is an urgent need for more effective intrusion detection technologies to protect computer security. Methods: In this study, we designed a hybrid IDS by combining our incremental learning model (KAN-SOINN) and active learning to learn new log patterns and detect various network anomalies in real time. Conclusions: Experimental results on the NSL-KDD dataset showed that KAN-SOINN can be continuously improved and can effectively detect malicious logs. Meanwhile, comparative experiments proved that using a hybrid query strategy in active learning can improve model learning efficiency.
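The active-learning half of such a hybrid IDS queries labels for the traffic samples the model is least sure about. A minimal entropy-based query sketch; the paper's hybrid query strategy is more elaborate, so this only illustrates the uncertainty-sampling building block.

```python
import numpy as np

def entropy_query(class_probs, budget):
    """Indices of the `budget` samples with the highest predictive entropy.

    class_probs: (n_samples, n_classes) predicted class probabilities
    """
    p = np.clip(np.asarray(class_probs, dtype=float), 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)
    return np.argsort(-entropy)[:budget].tolist()
```

Only the queried samples are sent to a human analyst for labeling, which is how active learning keeps labeling cost bounded while the incremental model adapts.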
Attribute reduction, also known as feature selection, for decision information systems is one of the most pivotal issues in machine learning and data mining. Approaches based on rough set theory and some of its extensions have proved efficient for attribute reduction. Unfortunately, methods based on intuitionistic fuzzy sets have not received much interest, although they are well known as a very powerful approach to noisy decision tables, i.e., data tables with low initial classification accuracy. This paper therefore provides a novel incremental attribute reduction method to deal more effectively with noisy decision tables, especially high-dimensional ones. In particular, we define a new reduct and then design an original attribute reduction method based on the distance measure between two intuitionistic fuzzy partitions. It should be noted that the intuitionistic fuzzy partition distance is well known as an effective measure for determining important attributes. More interestingly, an incremental formula is also developed to quickly compute the intuitionistic fuzzy partition distance when the number of objects in the decision table increases. This formula is then applied to construct an incremental attribute reduction algorithm for handling such dynamic tables. In addition, experiments on real datasets show that our method is far superior to fuzzy-rough-set-based methods in terms of reduct size and classification accuracy.
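For orientation, the classical crisp rough-set version of attribute reduction (not the paper's intuitionistic fuzzy partition distance) greedily adds the attribute that most improves the dependency of the decision on the selected attributes:

```python
from collections import defaultdict

def dependency(rows, attrs, decision):
    """Fraction of objects in equivalence classes that are pure w.r.t. the decision."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].append(i)
    pure = sum(len(b) for b in blocks.values()
               if len({rows[i][decision] for i in b}) == 1)
    return pure / len(rows)

def greedy_reduct(rows, attrs, decision):
    """Greedily select attributes until they determine the decision as well as the full set."""
    target, reduct = dependency(rows, attrs, decision), []
    while dependency(rows, reduct, decision) < target:
        reduct.append(max((a for a in attrs if a not in reduct),
                          key=lambda a: dependency(rows, reduct + [a], decision)))
    return reduct
```

The intuitionistic fuzzy variant replaces the crisp equivalence classes and dependency with fuzzy partitions and a partition distance, which is what makes it robust on noisy tables.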
In the traditional incremental analysis update (IAU) process, all analysis increments are treated as constant forcing in a model's prognostic equations over a certain time window. This approach effectively reduces high-frequency oscillations introduced by data assimilation. However, as increments at different scales have unique evolutionary speeds and life histories in a numerical model, the traditional IAU scheme cannot fully meet the requirements of short-term forecasting for damping high-frequency noise and may even cause systematic drifts. Therefore, a multi-scale IAU scheme is proposed in this paper. Analysis increments were divided into different scale parts using a spatial filtering technique. For each scale of increment, the optimal relaxation time in the IAU scheme was determined by the skill of the forecasting results. Finally, the different scales of analysis increments were added to the model integration during their optimal relaxation times. The multi-scale IAU scheme can effectively reduce noise and further improve the balance between large-scale and small-scale increments in the model initialization stage. To evaluate its performance, several numerical experiments were conducted to simulate the path and intensity of Typhoon Mangkhut (2018), showing that: (1) the multi-scale IAU scheme had an obvious noise-control effect at the initial stage of data assimilation; (2) the optimal relaxation times for large-scale and small-scale increments were estimated as 6 h and 3 h, respectively; and (3) the forecast performance of the multi-scale IAU scheme in predicting Typhoon Mangkhut (2018) was better than that of the traditional IAU scheme. The results demonstrate the superiority of the multi-scale IAU scheme.
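The core IAU mechanic, spreading an analysis increment as constant forcing over a relaxation window instead of inserting it at once, can be sketched for a toy scalar model. The multi-scale scheme additionally filters the increment into scale bands, each with its own window (6 h for large scales, 3 h for small scales in the paper); that filtering step is not shown here.

```python
def integrate_with_iau(x0, tendency, increment, n_steps, window):
    """Integrate a toy model, applying `increment` as constant forcing over `window` steps.

    tendency: function giving the model's own per-step change, e.g. physics/dynamics
    """
    x = x0
    for step in range(n_steps):
        forcing = increment / window if step < window else 0.0
        x = x + tendency(x) + forcing
    return x
```

With a quiescent model (zero tendency) the full increment is absorbed by the end of the window; with an active model, the gradual forcing is what suppresses the shock-like high-frequency response of direct insertion.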
Multispecies forests have received increased scientific attention, driven by the hypothesis that biodiversity improves ecological resilience. However, greater species diversity presents challenges for forest management and research. Our study aims to develop basal area growth models for tree species cohorts. The analysis is based on a dataset of 423 permanent plots (2,500 m²) located in temperate forests in Durango, Mexico. First, we define tree species cohorts based on individual and neighborhood-based variables using a combination of principal component and cluster analyses. Then, we estimate the basal area increment of each cohort through a generalized additive model to describe the effect of tree size, competition, stand density and site quality. The principal component and cluster analyses assign a total of 37 tree species to eight cohorts that differed primarily with regard to the distribution of tree size and vertical position within the community. The generalized additive models provide satisfactory estimates of tree growth for the species cohorts, explaining between 19 and 53 percent of the total variation of basal area increment, and highlight the following results: i) most cohorts show a "rise-and-fall" effect of tree size on tree growth; ii) surprisingly, the competition index "basal area of larger trees" showed a positive effect in four of the eight cohorts; iii) stand density had a negative effect on basal area increment, though the effect was minor in medium- and high-density stands; and iv) basal area growth was positively correlated with site quality except for an oak cohort. The developed species cohorts and growth models provide insight into their particular ecological features and growth patterns that may support the development of sustainable management strategies for temperate multispecies forests.
Gene synthesis has provided important contributions in various fields including genomics and medicine. Current gene synthesis costs 7 - 30 cents per base depending on the assembly and sequencing methods performed. Demand for gene synthesis has been increasing for the past few decades, yet available methods remain expensive. A solution to this problem involves microchip-derived oligonucleotides (oligos): an oligo pool with a substantial number of oligo fragments. Microchips have been proposed as a tool for gene synthesis, but this approach has been criticized for its high error rate during sequencing. This study tests a possible cost-effective method for gene synthesis utilizing fragment assembly and golden gate assembly, which can be employed for quicker manufacturing and efficient execution of genes in the near future. The droplet method was tested in two trials to determine its viability through the accuracy of the oligos sequenced. A preliminary experiment was performed to determine the efficacy of oligo lengths ranging from two to four overlapping oligos through Gibson assembly. Of the three oligo lengths tested, only two-fragment oligos were correctly sequenced. Two-fragment oligos were used for the second experiment, which determined the efficacy of the droplet method in reducing gene synthesis cost and time. The first trial utilized a high-fidelity polymerase and resulted in 3% correctly sequenced oligos, so the second trial utilized a non-high-fidelity polymerase, resulting in 8% correctly sequenced oligos. After calculation, the cost of gene synthesis drops to 0.8 cents/base. The final calculated cost of 0.8 cents/base is significantly cheaper than other manufacturing costs of 7 - 30 cents/base. Reducing the cost of gene synthesis provides new insight into the cost-effectiveness of present technologies and protocols and has the potential to benefit the fields of bioengineering and gene therapy.
Funding: sponsored by the National Natural Science Foundation of China (Nos. 61972208, 62102194 and 62102196), the National Natural Science Foundation of China (Youth Project) (No. 62302237), the Six Talent Peaks Project of Jiangsu Province (No. RJFW-111), the China Postdoctoral Science Foundation Project (No. 2018M640509), the Postgraduate Research and Practice Innovation Program of Jiangsu Province (Nos. KYCX22_1019, KYCX23_1087, KYCX22_1027, SJCX24_0339 and SJCX24_0346), and the Nanjing University of Posts and Telecommunications College Students Innovation Training Program (Nos. XZD2019116, XYB2019331).
Abstract: The scale and complexity of big data are growing continuously, posing severe challenges to traditional data processing methods, especially in the field of clustering analysis. To address this issue, this paper introduces a new method named Big Data Tensor Multi-Cluster Distributed Incremental Update (BDTMCDIncreUpdate), which combines distributed computing, storage technology, and incremental update techniques to provide an efficient and effective means for clustering analysis. First, the original dataset is divided into multiple sub-blocks, and distributed computing resources are utilized to process the sub-blocks in parallel, enhancing efficiency. Then, initial clustering is performed on each sub-block using tensor-based multi-clustering techniques to obtain preliminary results. When new data arrives, incremental update technology is employed to update the core tensor and factor matrices, ensuring that the clustering model can adapt to changes in the data. Finally, by combining the updated core tensor and factor matrices with historical computational results, refined clustering results are obtained, achieving real-time adaptation to dynamic data. Through experimental simulation on the Aminer dataset, the BDTMCDIncreUpdate method has demonstrated outstanding performance in terms of accuracy (ACC) and normalized mutual information (NMI) metrics, achieving an accuracy rate of 90% and an NMI score of 0.85, which outperforms existing methods such as TClusInitUpdate and TKLClusUpdate in most scenarios. Therefore, the BDTMCDIncreUpdate method offers an innovative solution for big data analysis, integrating distributed computing, incremental updates, and tensor-based multi-clustering techniques. It not only improves efficiency and scalability in processing large-scale high-dimensional datasets but has also been validated for effectiveness and accuracy through experiments. The method shows great potential in real-world applications where dynamic data growth is common and is of significant importance for advancing the development of data analysis technology.
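The abstract does not spell out the core-tensor update; one standard building block is projecting newly arrived data onto the current factor matrices to obtain its core coefficients, refreshing the factors themselves only periodically. A hedged sketch for a 3-way case (the method's actual update rules may differ):

```python
import numpy as np

def project_new_slice(x_new, u1, u2):
    """Project one new data slice (i1 x i2) onto fixed factor matrices
    u1 (i1 x r1), u2 (i2 x r2) to obtain its core coefficients (r1 x r2).
    This is the Tucker projection step; the factor matrices would be
    re-estimated less frequently as data accumulates."""
    return u1.T @ x_new @ u2

def reconstruct(g, u1, u2):
    """Map core coefficients back to the data space."""
    return u1 @ g @ u2.T
```

When the factor matrices have orthonormal columns and the new slice lies in their span, the projection recovers the core coefficients exactly, which is what makes the incremental update cheap compared with a full re-decomposition.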
Funding: supported by the Innovation Scientists and Technicians Troop Construction Projects of Henan Province (224000510002).
Abstract: Time-Sensitive Networking (TSN) with deterministic transmission capability is increasingly used in many emerging fields. It mainly guarantees the Quality of Service (QoS) of applications with strict requirements on time and security. One of the core features of TSN is traffic scheduling with bounded low delay in the network. However, traffic scheduling schemes in TSN are usually synthesized offline and lack dynamism. To implement incremental scheduling of newly arrived traffic in TSN, we propose a Dynamic Response Incremental Scheduling (DR-IS) method for time-sensitive traffic and deploy it on a software-defined time-sensitive network architecture. Under the premise of meeting the traffic scheduling requirements, we adopt two modes, traffic shift and traffic exchange, to dynamically adjust the time-slot injection position of the traffic in the original scheme, and determine the sending offset time of the new time-sensitive traffic to minimize the global traffic transmission jitter. The evaluation results show that the DR-IS method can effectively control the large increase of traffic transmission jitter in incremental scheduling without affecting the transmission delay, thus realizing dynamic incremental scheduling of time-sensitive traffic in TSN.
Funding: supported by the Fundamental Research Program of Shanxi Province (No. 20210302123444), the Research Project at the College Level of China Institute of Labor Relations (No. 23XYJS018), the ICH Digitalization and Multi-Source Information Fusion Fujian Provincial University Engineering Research Center 2022 Open Fund Project (G3-KF2207), the China University Industry University Research Innovation Fund (No. 2021FNA02009), and the Key R&D Program (International Science and Technology Cooperation Project) of Shanxi Province, China (No. 201903D421003).
Abstract: Currently, distributed routing protocols are constrained by offering a single path between any pair of nodes, thereby limiting the potential throughput and overall network performance. This approach not only restricts the flow of data but also makes the network susceptible to failures if the primary path is disrupted. In contrast, routing protocols that leverage multiple paths within the network offer a more resilient and efficient solution. Multipath routing surpasses the limitations of traditional shortest-path-first protocols: it redirects traffic to unused resources, mitigating network congestion, and ensures load balancing across the network, significantly improving network utilization and overall performance. To further strengthen network resilience against failures, we introduce a routing scheme known as Multiple Nodes with at least Two Choices (MNTC). This approach aims to significantly enhance network availability by providing each node with at least two routing choices. By doing so, it not only reduces the dependency on a single path but also creates redundant paths that can be utilized in case of failures, enhancing the overall resilience of the network. To ensure the optimal placement of nodes, we propose three incremental deployment algorithms. These algorithms select the most suitable set of nodes for deployment, taking into account factors such as node connectivity, traffic patterns, and network topology. By deploying MNTC on a carefully chosen set of nodes, we can significantly enhance network reliability without a complete overhaul of the existing infrastructure. We have conducted extensive evaluations of MNTC in diverse topological spaces, demonstrating its effectiveness in maintaining high network availability with minimal path stretch. The results show that even when implemented on just 60% of nodes, our incremental deployment method significantly boosts network availability. This underscores the potential of MNTC in enhancing network resilience and performance, making it a viable solution for modern networks facing increasing demands and complexities. The algorithms OSPF, TBFH, DC and LFC perform fast rerouting based on strict conditions, while MNTC is not restricted by these conditions. In five real network topologies, the average network availability of MNTC is improved by 14.68%, 6.28%, 4.76% and 2.84%, respectively, compared with OSPF, TBFH, DC and LFC.
Funding: the National Natural Science Foundation of China (Nos. 61471263, 61872267 and U21B2024), the Natural Science Foundation of Tianjin, China (No. 16JCZDJC31100), and the Tianjin University Innovation Foundation (No. 2021XZC0024).
Abstract: Hyperspectral images typically have high spectral resolution but low spatial resolution, which impacts the reliability and accuracy of subsequent applications, for example, remote sensing classification and mineral identification. In traditional methods based on deep convolutional neural networks, indiscriminately extracting and fusing spectral and spatial features makes it challenging to utilize the differentiated information across adjacent spectral channels. Thus, we propose a multi-branch interleaved iterative upsampling hyperspectral image super-resolution reconstruction network (MIIUSR) to address the above problems. We reinforce spatial feature extraction by integrating detailed features from different receptive fields across adjacent channels. Furthermore, we propose an interleaved iterative upsampling process during the reconstruction stage, which progressively fuses incremental information among adjacent frequency bands. Additionally, we add two parallel three-dimensional (3D) feature extraction branches to the backbone network to extract spectral and spatial features of varying granularity. We further enhance the backbone network's reconstruction results by leveraging the difference between two-dimensional (2D) channel-grouping spatial features and 3D multi-granularity features. The results obtained by applying the proposed network model to the CAVE test set show that, at a scaling factor of ×4, the peak signal-to-noise ratio, spectral angle mapping, and structural similarity are 37.310 dB, 3.525 and 0.9438, respectively. Besides, extensive experiments conducted on the Harvard and Foster datasets demonstrate the superior potential of the proposed model in hyperspectral super-resolution reconstruction.
Funding: This work was funded by the National Natural Science Foundation of China (Nos. U22A2099, 61966009 and 62006057) and the Graduate Innovation Program (No. YCSW2022286).
Abstract: Humans are experiencing the inclusion of artificial agents in their lives, such as unmanned vehicles, service robots, voice assistants, and intelligent medical care. If artificial agents cannot align with social values or make ethical decisions, they may not meet the expectations of humans. Traditionally, an ethical decision-making framework is constructed by rule-based or statistical approaches. In this paper, we propose an ethical decision-making framework based on incremental ILP (Inductive Logic Programming), which can overcome the brittleness of rule-based approaches and the limited interpretability of statistical approaches. As current incremental ILP makes it difficult to resolve conflicts, we propose a novel ethical decision-making framework that considers conflicts and adopts our proposed incremental ILP system. The framework consists of two processes: the learning process and the deduction process. The first process records bottom clauses with their score functions and learns rules guided by the entailment and the score function. The second process obtains an ethical decision based on the rules. In an ethical scenario about chatbots for teenagers' mental health, we verify that our framework can learn ethical rules and make ethical decisions. Besides, we extract the incremental ILP from the framework and compare it with state-of-the-art ILP systems based on ASP (Answer Set Programming), focusing on conflict resolution. The comparisons show that our proposed system can generate better-quality rules than most other systems.
Abstract: To improve the prediction accuracy of chaotic time series and reconstruct a more reasonable phase-space structure of the prediction network, we propose a convolutional neural network-long short-term memory (CNN-LSTM) prediction model based on the incremental attention mechanism. First, a traversal search is conducted through the traversal layer over finite parameters of the phase space. Then, an incremental attention layer is utilized for parameter judgment based on the dimension weight criteria (DWC). The phase-space parameters that best meet the DWC are selected and fed into the input layer. Finally, the constructed CNN-LSTM network extracts spatio-temporal features and provides the final prediction results. The model is verified using the Logistic, Lorenz, and sunspot chaotic time series, and the performance is compared along the two dimensions of prediction accuracy and network phase-space structure. Additionally, the CNN-LSTM network based on incremental attention is compared with long short-term memory (LSTM), convolutional neural network (CNN), recurrent neural network (RNN), and support vector regression (SVR) models for prediction accuracy. The experimental results indicate that the proposed composite network model possesses enhanced capability in extracting temporal features and achieves higher prediction accuracy. Also, the algorithm to estimate the phase-space parameters is compared with the CAO, false nearest neighbor, and C-C methods, three typical methods for determining chaotic phase-space parameters. The experiments reveal that the phase-space parameter estimation algorithm based on the incremental attention mechanism yields superior prediction accuracy compared with the traditional phase-space reconstruction methods in five networks: CNN-LSTM, LSTM, CNN, RNN, and SVR.
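The phase-space reconstruction underlying such models is the time-delay embedding; the traversal layer searches over the embedding dimension and delay. A minimal sketch of the embedding itself (the DWC-based parameter selection is not reproduced here):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Phase-space reconstruction by time-delay embedding:
    row i of the result is (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (dim, tau)")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
```

Each row of the embedded matrix is one reconstructed phase-space point; a candidate (dim, tau) pair would then be scored by the attention layer's weight criteria.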
Abstract: The visions of Industry 4.0 and 5.0 have reshaped the industrial environment and established artificial intelligence as a major facilitator. Diagnosing machine faults has become a solid foundation for automatically recognizing machine failure, so that timely maintenance can ensure safe operations. Transfer learning is a promising solution that can enhance a machine fault diagnosis model by borrowing pre-trained knowledge from a source model and applying it to a target model, which typically involves two datasets. In response to the availability of multiple datasets, this paper proposes selective and adaptive incremental transfer learning (SA-ITL), which fuses three algorithms: the hybrid selective algorithm, the transferability enhancement algorithm, and the incremental transfer learning algorithm. It is a selective algorithm that enables selecting and ordering appropriate datasets for transfer learning and selecting useful knowledge to avoid negative transfer. The algorithm also adaptively adjusts the portion of training data to balance the learning rate and training time. The proposed algorithm is evaluated and analyzed using ten benchmark datasets. Compared with other algorithms from existing works, SA-ITL improves the accuracy on all datasets. Ablation studies present the accuracy enhancements contributed by the hybrid selective algorithm (1.22%-3.82%), the transferability enhancement algorithm (1.91%-4.15%), and the incremental transfer learning algorithm (0.605%-2.68%). These also show the benefits of enhancing the target model with heterogeneous image datasets that widen the range of domain selection between source and target domains.
Abstract: We investigated the parametric optimization of incremental sheet forming of stainless steel using Grey Relational Analysis (GRA) coupled with Principal Component Analysis (PCA). AISI 316L stainless steel sheets were used to develop a double-wall-angle pyramid with the aid of a tungsten carbide tool. GRA coupled with PCA was used to plan the experimental conditions. The effects of control factors such as Tool Diameter (TD), Step Depth (SD), Bottom Wall Angle (BWA), Feed Rate (FR) and Spindle Speed (SS) on Top Wall Angle (TWA) and Top Wall Angle Surface Roughness (TWASR) have been studied. Wall angle increases with increasing tool diameter due to the large contact area between tool and workpiece. As the step depth, feed rate and spindle speed increase, TWASR decreases with increasing tool diameter. As the step depth increases, the hydrostatic stress is raised, causing severe cracks in the deformed surface. Hence it was concluded that the proposed hybrid method is suitable for optimizing the factors and responses.
Abstract: Objective: Robotic-assisted spine surgeries (RASS) have been shown to enhance precision, reduce operative time, prevent complications, facilitate minimally invasive spinal surgery, and decrease revision surgery rates, leading to improved patient outcomes. This study aimed to compare the cost-effectiveness of RASS and non-robotic-assisted surgery for degenerative spine disease at a single center. Methods: This retrospective study, including 122 patients, was conducted at a single center from March 2015 to February 2022. Patients who underwent robot-assisted surgery were assigned to the robot group, and patients who underwent non-robotic-assisted surgery were assigned to the non-robot group. Various data, including demographic information, surgical details, outcomes, and cost-effectiveness, were collected for both groups. Cost-effectiveness was determined using the incremental cost-effectiveness ratio (ICER), and subgroup analysis was conducted for patients with 1 or 2 levels of spinal instrumentation. The analysis was performed using STATA SE version 15 and TreeAge Pro 2020, with Monte Carlo simulations for the cost-effectiveness acceptability curve. Results: The overall ICER was $22,572, but it decreased to $16,980 when considering cases with only 1 or 2 levels of instrumentation. RASS is deemed cost-effective when the willingness to pay is $3,000-$4,000 if 2 or fewer levels of the spine are instrumented. Conclusions: The cost-effectiveness of robotic assistance becomes apparent when there is a reduced need for open surgeries, leading to decreased revision rates caused by complications such as misplaced screws or infections. Therefore, it is advisable to allocate healthcare budget resources to spine robots, as RASS proves to be cost-effective, particularly when only two or fewer spinal levels require instrumentation.
Funding: supported by The National 13th Five-Year Mega-Scientific Projects of Infectious Diseases in China [Grant Number: 2017ZX10201302001004].
Abstract: Objective: This study aims to estimate the cost-effectiveness of the combined chemotherapy regimen containing Bedaquiline (BR) and the conventional treatment regimen (CR, not containing Bedaquiline) for the treatment of adults with multidrug-resistant tuberculosis (MDR-TB) in China. Methods: A combination of a decision tree and a Markov model was developed to estimate the costs and effects for MDR-TB patients under BR and CR within ten years. The model parameter data were synthesized from the literature, the national TB surveillance information system, and consultation with experts. The incremental cost-effectiveness ratio (ICER) of BR vs. CR was determined. Results: BR (vs. CR) had a higher sputum culture conversion rate and cure rate and prevented many premature deaths (decreased by 12.8%), thereby obtaining more quality-adjusted life years (QALYs) (increased by 2.31 years). The per capita cost in BR was as high as 138,000 yuan, roughly double that of CR. The ICER for BR was 33,700 yuan/QALY, which was lower than China's 1× per capita Gross Domestic Product (GDP) in 2020 (72,400 yuan). Conclusion: BR is shown to be cost-effective. When the unit price of Bedaquiline reaches or falls below 57.21 yuan per unit, BR is expected to be the dominant strategy in China over CR.
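The ICER reported in such analyses is simply the incremental cost divided by the incremental effect. A minimal sketch; the CR cost below is back-solved for illustration from the reported 2.31 ΔQALY and 33,700 yuan/QALY ICER, not a figure stated in the study:

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of effect (here, yuan per QALY gained)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# BR per-capita cost of 138,000 yuan comes from the abstract; the
# 60,153-yuan CR cost is a hypothetical value chosen so the ratio
# reproduces the reported 33,700 yuan/QALY.
example = icer(138_000.0, 60_153.0, 2.31, 0.0)
```

An intervention is then judged cost-effective by comparing the ICER against a willingness-to-pay threshold, such as 1× per capita GDP.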
Funding: support from the China Postdoctoral Science Foundation (2021M692893, 2021M702927), the National Natural Science Foundation of China (82204552), the Natural Science Foundation of Zhejiang Province (LQ22H280007), the Research Project of Zhejiang Chinese Medical University (2022JKZKTS10), and Zhejiang Province Traditional Chinese Medicine Science and Technology (2023ZR079, 2023ZR087).
Abstract: Flavonoids are the primary functional components in the flowers of Hibiscus manihot L. (HMLF). In this study, an efficient and green ionic liquid-high-speed homogenization coupled with microwave-assisted extraction (IL-HSH-MAE) technique was first established and implemented to extract seven target flavonoids from HMLF. Single-factor experiments and a Box-Behnken design (BBD) were utilized to optimize the extraction conditions of IL-HSH-MAE, which were as follows: 0.1 M [C4mim]Br, homogenate speed of 7,000 rpm, homogenate time of 120 s, liquid/solid ratio of 24 mL/g, extraction temperature of 62 °C and extraction time of 15 min. The maximal total extraction yield of the seven target flavonoids reached 22.04 mg/g, considerably greater than the yields obtained by IL-HSH, IL-MAE, 60% ethanol-HSH-MAE and 60% ethanol-MAE. These findings suggest that IL-HSH-MAE can be exploited as a rapid and efficient approach for extracting natural products from plants. The process is also environmentally friendly, offers high extraction efficiency, and is expected to be a promising extraction technology.
Abstract: BACKGROUND: Over the past years, patient-specific instrumentation (PSI) for total knee arthroplasty (TKA) has been implemented and routinely used. No clear answer has been given on its associated cost and cost-effectiveness when compared to conventional instrumentation (CI) for TKA. AIM: To compare the cost and cost-effectiveness of PSI TKA with CI TKA. METHODS: A literature search was performed in healthcare, economic healthcare, and medical databases (MEDLINE, EMBASE, CINAHL, Web of Science, Cochrane Library, EconLit). It was conducted in April 2021 and again in January 2022. Relevant literature included randomised controlled trials, retrospective studies, prospective studies, observational studies, and case-control studies. All studies were assessed for methodological quality. Relevant outcomes included incremental cost-effectiveness ratio, quality-adjusted life years, total costs, imaging costs, production costs, sterilization-associated costs, surgery duration costs and readmission rate costs. All eligible studies were assessed for risk of bias. Meta-analysis was performed for outcomes with sufficient data. RESULTS: Thirty-two studies were included in the systematic review, two of which were included in the meta-analysis. 3,994 PSI TKAs and 13,267 CI TKAs were included in the sample. The methodological quality of the included studies, based on Consensus on Health Economic Criteria scores and risk of bias, ranged from average to good. PSI TKA costs less than CI TKA when considering mean operating room time and its associated costs and tray sterilization per patient case. PSI TKA costs more than CI TKA when considering imaging and production costs. Considering total costs per patient case, PSI TKA is more expensive than CI TKA. Meta-analysis comparing total costs for PSI TKA and CI TKA showed a significantly higher cost for PSI TKA. CONCLUSION: Costs for PSI and CI TKA can differ when considering distinct aspects of their implementation. Total costs per patient case are increased for PSI TKA when compared to CI TKA.
Funding: financed by the National Science Centre, Poland, under project No. 2019/35/B/NZ8/01381, entitled "Impact of invasive tree species on ecosystem services: plant biodiversity, carbon and nitrogen cycling and climate regulation", and by the Institute of Dendrology, Polish Academy of Sciences.
Abstract: Prunus serotina and Robinia pseudoacacia are the most widespread invasive trees in Central Europe. In addition, according to climate models, decreased growth of many economically and ecologically important native trees will likely be observed in the future. We aimed to assess the impact of these two neophytes, which differ in biomass range and nitrogen-fixing ability under Central European conditions, on the relative aboveground biomass increments of the native oaks Quercus robur and Q. petraea and Scots pine Pinus sylvestris. We aimed to increase our understanding of the relationship between facilitation and competition between woody alien species and overstory native trees. We established 72 circular plots (0.05 ha) in two different forest habitat types and stands varying in age in western Poland. We chose plots with different abundances of the studied neophytes to determine how effects scaled along the quantitative invasion gradient. Furthermore, we collected growth cores of the studied native species and calculated aboveground biomass increments at the tree and stand levels. Then, we used generalized linear mixed-effects models to assess the impact of invasive species abundances on the relative aboveground biomass increments of native tree species. We did not find a biologically or statistically significant impact of invasive R. pseudoacacia or P. serotina on the relative aboveground biomass increments of native oaks and pines, either along the quantitative gradient of invader biomass or along the proportion of total stand biomass accounted for by invaders. The neophytes did not act as native tree growth stimulators, but they also did not compete with the natives for resources in a way that would escalate the negative impact of climate change on pines and oaks. The neophytes should not significantly modify the carbon sequestration capacity of the native species. Our work combines elements of the per capita effect of invasion with research on mixed forest management.
Funding: support from the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDA27000000.
Abstract: Deep Convolutional Neural Networks (DCNNs) can capture discriminative features from large datasets. However, how to incrementally learn new samples without forgetting old ones and recognize novel classes that arise in the dynamically changing world, e.g., classifying newly discovered fish species, remains an open problem. We address an even more challenging and realistic setting of this problem where new class samples are insufficient, i.e., Few-Shot Class-Incremental Learning (FSCIL). Current FSCIL methods augment the training data to alleviate the overfitting of novel classes. By contrast, we propose Filter Bank Networks (FBNs) that augment the learnable filters to capture fine-detailed features for adapting to future new classes. In the forward pass, FBNs augment each convolutional filter to a virtual filter bank containing the canonical one, i.e., itself, and multiple transformed versions. During back-propagation, FBNs explicitly stimulate fine-detailed features to emerge and collectively align all gradients of each filter bank to learn the canonical one. FBNs capture pattern variants that do not yet exist in the pretraining session, thus making it easy to incorporate new classes in the incremental learning phase. Moreover, FBNs introduce model-level prior knowledge to efficiently utilize the limited few-shot data. Extensive experiments on the MNIST, CIFAR100, CUB200, and Mini-ImageNet datasets show that FBNs consistently outperform the baseline by a significant margin, reporting new state-of-the-art FSCIL results. In addition, we contribute a challenging FSCIL benchmark, Fishshot1K, which contains 8,261 underwater images covering 1,000 ocean fish species. The code is included in the supplementary materials.
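The virtual filter bank can be illustrated by expanding one canonical filter into transformed copies. Flips and rotations are an assumed transform set here, since the abstract does not name the transforms used:

```python
import numpy as np

def virtual_filter_bank(f):
    """Expand one canonical 2-D conv filter into a bank containing
    itself plus flipped and rotated variants (a plausible transform
    set; the paper's actual choice may differ). The canonical filter
    stays at index 0, mirroring the idea that gradients from the whole
    bank are aligned back onto it."""
    return np.stack([
        f,                 # canonical filter
        np.flip(f, 0),     # vertical flip
        np.flip(f, 1),     # horizontal flip
        np.rot90(f),       # 90-degree rotation
        np.rot90(f, 2),    # 180-degree rotation
        np.rot90(f, 3),    # 270-degree rotation
    ])
```

Each transformed copy responds to a pattern variant the canonical filter alone would miss, which is the intuition behind capturing variants that do not yet exist in the pretraining session.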
Abstract: Recently, deep convolutional neural networks (DCNNs) have achieved remarkable results in image classification tasks. Despite convolutional networks' great successes, their training process relies on a large amount of data prepared in advance, which is often challenging in real-world applications, such as streaming data and concept drift. For this reason, incremental learning (continual learning) has attracted increasing attention from scholars. However, incremental learning is associated with the challenge of catastrophic forgetting: the performance on previous tasks drastically degrades after learning a new task. In this paper, we propose a new strategy to alleviate catastrophic forgetting when neural networks are trained in continual domains. Specifically, two components are applied: data translation based on transfer learning and knowledge distillation. The former translates a portion of new data to reconstruct the partial data distribution of the old domain. The latter uses the old model as a teacher to guide the new model. The experimental results on three datasets show that our approach can effectively alleviate catastrophic forgetting by combining these two methods.
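The knowledge-distillation component typically minimizes the divergence between temperature-softened teacher and student outputs. A sketch under that common formulation (the paper's exact loss and weighting are not specified here):

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled softmax; higher t softens the distribution."""
    z = np.asarray(z, dtype=float) / t
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, t=2.0):
    """KL divergence between softened teacher and student outputs.
    The old (teacher) model guides the new (student) model so that
    behaviour on old-domain inputs is preserved."""
    p = softmax(teacher_logits, t)  # teacher targets
    q = softmax(student_logits, t)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

In training this term would be added to the ordinary cross-entropy on new-task labels, trading off plasticity on the new domain against stability on the old one.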
Funding: Supported by the SJTU-HUAWEI TECH Cybersecurity Innovation Lab.
Abstract: Background: With the development of information technology, there is a significant increase in the number of network traffic logs mixed with various types of cyberattacks. Traditional intrusion detection systems (IDSs) are limited in detecting new, inconstant patterns and identifying malicious traffic traces in real time. Therefore, there is an urgent need for more effective intrusion detection technologies to protect computer security. Methods: In this study, we designed a hybrid IDS by combining our incremental learning model (KAN-SOINN) and active learning to learn new log patterns and detect various network anomalies in real time. Conclusions: Experimental results on the NSL-KDD dataset showed that KAN-SOINN can be continuously improved and can effectively detect malicious logs. Meanwhile, comparative experiments proved that using a hybrid query strategy in active learning can improve the model's learning efficiency.
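A hybrid query strategy in active learning usually builds on uncertainty sampling. A minimal sketch of the entropy-based ingredient (the study's actual hybrid strategy combines further criteria):

```python
import numpy as np

def entropy(p):
    """Shannon entropy of each row of class probabilities."""
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def query_most_uncertain(probs, k):
    """Active-learning query: return the indices of the k unlabelled
    logs whose predicted class distribution has the highest entropy,
    i.e., the ones the current model is least sure about."""
    return np.argsort(-entropy(np.asarray(probs, dtype=float)))[:k]
```

The selected logs would be sent to an analyst for labelling and then fed to the incremental learner, concentrating labelling effort where the model is weakest.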
Funding: Funded by Hanoi University of Industry under Grant Number 27-2022-RD/HD-DHCN (URL: https://www.haui.edu.vn/).
Abstract: Attribute reduction, also known as feature selection, for decision information systems is one of the most pivotal issues in machine learning and data mining. Approaches based on rough set theory and some of its extensions have proved efficient for dealing with the problem of attribute reduction. Unfortunately, methods based on intuitionistic fuzzy sets have not received much interest, even though these methods are well known as a very powerful approach to noisy decision tables, i.e., data tables with low initial classification accuracy. Therefore, this paper provides a novel incremental attribute reduction method to deal more effectively with noisy decision tables, especially high-dimensional ones. In particular, we define a new reduct and then design an original attribute reduction method based on the distance measure between two intuitionistic fuzzy partitions. It should be noted that the intuitionistic fuzzy partition distance is well known as an effective measure for determining important attributes. More interestingly, an incremental formula is also developed to quickly compute the intuitionistic fuzzy partition distance when the decision table grows in the number of objects. This formula is then applied to construct an incremental attribute reduction algorithm for handling such dynamic tables. Besides, experiments conducted on real datasets show that our method is far superior to fuzzy rough set based methods in terms of both the size of the reduct and the classification accuracy.
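The incremental idea can be illustrated with a standard normalized Hamming distance between intuitionistic fuzzy sets: because the distance is an average of per-object terms, it can be updated from a stored aggregate when new objects arrive, without rescanning the history. The exact partition distance used in the paper may differ; this is a minimal sketch:

```python
def if_distance(A, B):
    """Normalized Hamming distance between two intuitionistic fuzzy
    sets, each a list of (mu, nu) membership/non-membership pairs;
    the hesitancy degree is pi = 1 - mu - nu."""
    n = len(A)
    total = 0.0
    for (ma, na), (mb, nb) in zip(A, B):
        pa, pb = 1 - ma - na, 1 - mb - nb
        total += abs(ma - mb) + abs(na - nb) + abs(pa - pb)
    return total / (2 * n)

def incremental_if_distance(old_dist, n_old, A_new, B_new):
    """Update the distance when new objects arrive, reusing the old
    aggregate instead of recomputing over all historical objects."""
    m = len(A_new)
    added = if_distance(A_new, B_new) * 2 * m  # un-normalized new part
    return (old_dist * 2 * n_old + added) / (2 * (n_old + m))

# Incremental update agrees with recomputing from scratch.
A = [(0.7, 0.2), (0.5, 0.4)]
B = [(0.6, 0.3), (0.4, 0.5)]
d_old = if_distance(A, B)
A_new, B_new = [(0.9, 0.05)], [(0.8, 0.1)]
d_inc = incremental_if_distance(d_old, len(A), A_new, B_new)
d_batch = if_distance(A + A_new, B + B_new)
```

The incremental update touches only the m new objects, which is what makes the approach attractive for dynamic, high-dimensional tables.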
Funding: Jointly sponsored by the Shenzhen Science and Technology Innovation Commission (Grant No. KCXFZ20201221173610028) and the key program of the National Natural Science Foundation of China (Grant No. 42130605).
Abstract: In the traditional incremental analysis update (IAU) process, all analysis increments are treated as constant forcing in a model's prognostic equations over a certain time window. This approach effectively reduces high-frequency oscillations introduced by data assimilation. However, as increments at different scales have unique evolutionary speeds and life histories in a numerical model, the traditional IAU scheme cannot fully meet the requirements of short-term forecasting for damping high-frequency noise and may even cause systematic drifts. Therefore, a multi-scale IAU scheme is proposed in this paper. Analysis increments were divided into parts at different scales using a spatial filtering technique. For each scale of increment, the optimal relaxation time in the IAU scheme was determined by the skill of the forecasting results. Finally, the different scales of analysis increments were added to the model integration during their optimal relaxation times. The multi-scale IAU scheme can effectively reduce the noise and further improve the balance between large-scale and small-scale increments in the model initialization stage. To evaluate its performance, several numerical experiments were conducted to simulate the path and intensity of Typhoon Mangkhut (2018). They showed that: (1) the multi-scale IAU scheme had an obvious effect on noise control at the initial stage of data assimilation; (2) the optimal relaxation times for large-scale and small-scale increments were estimated as 6 h and 3 h, respectively; and (3) the forecast performance of the multi-scale IAU scheme in the prediction of Typhoon Mangkhut (2018) was better than that of the traditional IAU scheme. The results demonstrate the superiority of the multi-scale IAU scheme.
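The multi-scale IAU idea can be sketched in a few lines: a low-pass spatial filter splits the analysis increment into a large-scale part and a small-scale residual, and each part is inserted as forcing over its own relaxation window (6 h and 3 h, the optima estimated in the paper). The moving-average filter and 1-D field below are illustrative stand-ins for the actual spatial filtering technique and model fields:

```python
def moving_average(field, k):
    """Simple spatial low-pass filter (window size k, edge-padded)."""
    n = len(field)
    half = k // 2
    padded = [field[0]] * half + list(field) + [field[-1]] * half
    return [sum(padded[i:i + k]) / k for i in range(n)]

def multiscale_iau_forcing(increment, step, t_large=6, t_small=3, k=5):
    """Forcing added at forecast hour `step`: the large-scale part of
    the analysis increment is spread over t_large hours, and the
    small-scale residual over t_small hours."""
    large = moving_average(increment, k)
    small = [inc - lg for inc, lg in zip(increment, large)]
    forcing = [0.0] * len(increment)
    if step < t_large:
        forcing = [f + lg / t_large for f, lg in zip(forcing, large)]
    if step < t_small:
        forcing = [f + sm / t_small for f, sm in zip(forcing, small)]
    return forcing

# Over the full 6 h window, the entire increment is inserted.
inc = [1.0, 2.0, 3.0, 2.0, 1.0]
applied = [sum(multiscale_iau_forcing(inc, s)[i] for s in range(6))
           for i in range(len(inc))]
```

Because each scale is relaxed over its own window, fast small-scale signals are inserted quickly while slowly evolving large-scale signals are spread out, which is what suppresses the initialization noise.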
Funding: The National Forestry Commission of Mexico and the Mexican National Council for Science and Technology (CONAFOR-CONACYT-115900).
Abstract: Multispecies forests have received increased scientific attention, driven by the hypothesis that biodiversity improves ecological resilience. However, greater species diversity presents challenges for forest management and research. Our study aims to develop basal area growth models for tree species cohorts. The analysis is based on a dataset of 423 permanent plots (2,500 m²) located in temperate forests in Durango, Mexico. First, we define tree species cohorts based on individual and neighborhood-based variables using a combination of principal component and cluster analyses. Then, we estimate the basal area increment of each cohort through a generalized additive model to describe the effects of tree size, competition, stand density and site quality. The principal component and cluster analyses assign a total of 37 tree species to eight cohorts that differ primarily with regard to the distribution of tree size and vertical position within the community. The generalized additive models provide satisfactory estimates of tree growth for the species cohorts, explaining between 19 and 53 percent of the total variation in basal area increment, and highlight the following results: i) most cohorts show a "rise-and-fall" effect of tree size on tree growth; ii) surprisingly, the competition index "basal area of larger trees" showed a positive effect in four of the eight cohorts; iii) stand density had a negative effect on basal area increment, though the effect was minor in medium- and high-density stands; and iv) basal area growth was positively correlated with site quality, except for an oak cohort. The developed species cohorts and growth models provide insight into their particular ecological features and growth patterns and may support the development of sustainable management strategies for temperate multispecies forests.
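The cohort-building step (dimension reduction followed by clustering) can be illustrated with a minimal k-means on species-level feature vectors. The two features and data points below are invented stand-ins; the study's actual pipeline combines principal component analysis with cluster analysis on individual and neighborhood-based variables:

```python
import random

def kmeans(points, k, iters=50, seed=1):
    """Minimal k-means: group species-level feature vectors
    (e.g. mean diameter, vertical-position score) into cohorts."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center (squared distance).
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            groups[j].append(p)
        # Recompute centers as group means; keep old center if empty.
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Two invented, well-separated species groups fall into two cohorts.
pts = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
       [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]]
centers, groups = kmeans(pts, 2)
```

Once cohorts are fixed, a separate growth model (here, the paper's generalized additive model) is fitted per cohort rather than per species, which keeps the number of models tractable for 37 species.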
Abstract: Gene synthesis has made important contributions to various fields, including genomics and medicine. Gene synthesis currently costs 7-30 cents per base, depending on the assembly and sequencing methods used. Demand for gene synthesis has been increasing for the past few decades, yet available methods remain expensive. A solution to this problem involves microchip-derived oligonucleotides (oligos): an oligo pool containing a substantial number of oligo fragments. Microchips have been proposed as a tool for gene synthesis, but this approach has been criticized for its high error rate during sequencing. This study tests a possible cost-effective method for gene synthesis utilizing fragment assembly and Golden Gate assembly, which could enable quicker and more efficient manufacturing of genes in the near future. The droplet method was tested in two trials to determine its viability through the accuracy of the oligos sequenced. A preliminary experiment was performed to determine the efficacy of sets of two to four overlapping oligos assembled through Gibson assembly. Of the three oligo lengths tested, only the two-fragment oligos were correctly sequenced. Two-fragment oligos were therefore used for the second experiment, which assessed the efficacy of the droplet method in reducing the cost and time of gene synthesis. The first trial utilized a high-fidelity polymerase and resulted in 3% correctly sequenced oligos, so the second trial utilized a non-high-fidelity polymerase, resulting in 8% correctly sequenced oligos. Based on these results, the calculated cost of gene synthesis drops to 0.8 cents per base, significantly cheaper than other manufacturing costs of 7-30 cents per base. Reducing the cost of gene synthesis provides new insight into the cost-effectiveness of present technologies and protocols and has the potential to benefit the fields of bioengineering and gene therapy.
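The headline figure (0.8 cents per base) follows from dividing the total run cost by the number of bases that pass sequencing verification. The sketch below shows that arithmetic; the 8% yield is the abstract's second-trial figure, but the 640-cent run cost and 10,000-base total are assumed values chosen only to illustrate how such a per-base cost is derived, not the study's actual inputs:

```python
def cost_per_correct_base(run_cost_cents, total_bases, correct_fraction):
    """Effective cost per correctly sequenced base: total run cost
    divided by the bases that pass sequencing verification.
    The inputs used below are illustrative, not the study's figures."""
    return run_cost_cents / (total_bases * correct_fraction)

# Assumed run cost and pool size; 0.08 is the second trial's yield.
cost = cost_per_correct_base(run_cost_cents=640.0,
                             total_bases=10_000,
                             correct_fraction=0.08)
```

Note how strongly the per-base cost depends on yield: at the first trial's 3% yield the same run would cost roughly 2.7 times more per correct base, which is why the polymerase choice mattered.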