To effectively reduce cache defragmentation overhead and improve cache hit rates under data-intensive workflows, this work proposes DFS-Cache (Distributed File System Cache), a persistent client-side cache for distributed file systems. DFS-Cache is designed and implemented on non-volatile memory (NVM); it guarantees data persistence and crash consistency and greatly shortens cold-start time. DFS-Cache consists of a cache defragmentation mechanism based on virtual-memory remapping and a cache space management policy based on time-to-live (TTL). The former exploits the fact that NVM can be directly addressed by the memory controller, dynamically modifying the mapping between virtual and physical addresses to achieve zero-copy defragmentation; the latter is a hot/cold-separated group management policy that leverages the remapping-based defragmentation mechanism to improve the efficiency of cache space management. In experiments on real Intel Optane persistent memory devices with standard benchmarks such as Fio and Filebench, DFS-Cache improves system throughput by up to 5.73x and 1.89x over the commercial distributed file systems MooseFS and GlusterFS, respectively.
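To make the TTL-driven, hot/cold-separated space management concrete, the following minimal Python sketch groups entries into a hot set and a cold set and evicts cold entries first. The class name, eviction rule, and promotion-on-access behavior are illustrative assumptions; DFS-Cache itself works on NVM with virtual-memory remapping, which this sketch does not model.

```python
import time

class TTLGroupCache:
    """Toy cold/hot grouped cache keyed by time-to-live (illustrative only)."""

    def __init__(self, capacity, ttl_seconds):
        self.capacity = capacity
        self.ttl = ttl_seconds
        self.hot = {}    # recently touched entries: key -> (value, last_access)
        self.cold = {}   # entries whose TTL expired since last access

    def get(self, key):
        now = time.time()
        for group in (self.hot, self.cold):
            if key in group:
                value, _ = group.pop(key)
                self.hot[key] = (value, now)   # promotion on access
                return value
        return None                            # miss: caller fetches from the DFS server

    def put(self, key, value):
        now = time.time()
        self._demote_expired(now)
        if len(self.hot) + len(self.cold) >= self.capacity:
            # Evict from the cold group first; fall back to the oldest hot entry.
            victim_group = self.cold if self.cold else self.hot
            oldest = min(victim_group, key=lambda k: victim_group[k][1])
            victim_group.pop(oldest)
        self.hot[key] = (value, now)

    def _demote_expired(self, now):
        for key in [k for k, (_, t) in self.hot.items() if now - t > self.ttl]:
            self.cold[key] = self.hot.pop(key)
```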
This paper proposes MDWC, a multilevel data cache model built on Web caches and driven by network cost in a data grid. By constructing a communication tree of grid sites based on network cost and electing a single leader for each data segment within each region, MDWC makes full use of the Web caches of other sites whose bandwidth is broad enough to reach the job-executing site. Experimental results indicate that MDWC reduces data response time and data update cost by avoiding network congestion, with its design parameters derived from the application environment.
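The abstract's two key steps, building a communication tree over grid sites by network cost and electing one leader per region, can be sketched as below. Prim's algorithm and the bandwidth-based leader rule are stand-ins chosen for illustration; the cost matrix, `bandwidth` map, and function names are assumptions, not MDWC's actual procedure.

```python
import heapq

def build_cost_tree(costs, root=0):
    """Prim's algorithm: a spanning tree over sites that minimizes total network cost.

    costs[i][j] is an assumed symmetric pairwise network cost between sites i and j.
    Returns a parent map {child: parent} describing the communication tree.
    """
    n = len(costs)
    parent, seen = {}, {root}
    frontier = [(costs[root][j], root, j) for j in range(n) if j != root]
    heapq.heapify(frontier)
    while frontier and len(seen) < n:
        cost, u, v = heapq.heappop(frontier)
        if v in seen:
            continue
        seen.add(v)
        parent[v] = u
        for w in range(n):
            if w not in seen:
                heapq.heappush(frontier, (costs[v][w], v, w))
    return parent

def pick_region_leader(region_sites, bandwidth):
    """One leader per region: the site with the broadest bandwidth (illustrative rule)."""
    return max(region_sites, key=lambda s: bandwidth[s])

# Toy usage with 4 sites in 2 regions.
costs = [[0, 2, 9, 4], [2, 0, 3, 7], [9, 3, 0, 1], [4, 7, 1, 0]]
bandwidth = {0: 100, 1: 250, 2: 80, 3: 300}
print(build_cost_tree(costs))                 # {1: 0, 2: 1, 3: 2}
print(pick_region_leader([0, 1], bandwidth))  # 1
```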
To optimize multi-core processor performance, this paper studies management policies for the shared cache of multi-core processors and proposes MT-FTP (Memory Time based Fair and Throughput Partitioning), a shared-cache partitioning algorithm driven by cache-time fairness and throughput. A mathematical model is built around these two evaluation metrics, and the partitioning workflow of the algorithm is analyzed. Simulation results show that MT-FTP performs well in system throughput: its average IPC (Instructions Per Cycle) is 1.3% higher than that of the UCP (utility-based cache partitioning) algorithm and 11.6% higher than that of LRU (Least Recently Used). Its average fairness is 17% higher than that of LRU and 16.5% higher than that of UCP. The algorithm achieves fair shared-cache partitioning while preserving system throughput.
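A rough Python sketch of the general idea of partitioning shared cache ways while trading throughput against fairness is given below. The profiled IPC-versus-ways curves, the normalization, and the scoring rule `alpha*throughput + (1-alpha)*fairness` are invented for illustration and are not MT-FTP's actual model.

```python
def partition_ways(ipc_curves, total_ways, alpha=0.5):
    """Pick a per-core way allocation that trades off throughput against fairness.

    ipc_curves[c][w] is the (assumed, profiled) IPC of core c when given w ways.
    alpha weights throughput vs. fairness; both terms are normalized to [0, 1].
    """
    cores = len(ipc_curves)
    best_alloc, best_score = None, float("-inf")

    def allocations(remaining, cores_left):
        if cores_left == 1:
            yield (remaining,)
            return
        for w in range(1, remaining - cores_left + 2):
            for rest in allocations(remaining - w, cores_left - 1):
                yield (w,) + rest

    for alloc in allocations(total_ways, cores):
        ipcs = [ipc_curves[c][alloc[c]] for c in range(cores)]
        throughput = sum(ipcs) / sum(curve[total_ways] for curve in ipc_curves)
        fairness = min(ipcs) / max(ipcs)          # 1.0 means perfectly even performance
        score = alpha * throughput + (1 - alpha) * fairness
        if score > best_score:
            best_alloc, best_score = alloc, score
    return best_alloc

# Toy IPC-vs-ways curves for two cores sharing an 8-way cache (index = number of ways).
core_a = [0, 0.6, 0.9, 1.1, 1.2, 1.25, 1.28, 1.30, 1.31]
core_b = [0, 0.3, 0.5, 0.8, 1.0, 1.15, 1.25, 1.32, 1.38]
print(partition_ways([core_a, core_b], total_ways=8))
```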
The influences of biological, chemical, and flow processes on soil structure through microbially induced carbonate precipitation (MICP) are not yet fully understood. In this study, we use a multi-level thresholding segmentation algorithm, genetic-algorithm-enhanced Kapur entropy (GAE-KE), to quantitatively characterize sandy soil structure altered by MICP cementation. A sandy soil sample was treated using the MICP method and scanned by synchrotron radiation (SR) micro-CT at a resolution of 6.5 μm. After validation, tri-level thresholding segmentation using GAE-KE successfully separated the precipitated calcium carbonate crystals from sand particles and pores. The spatial distributions of porosity, pore structure parameters, and flow characteristics were then calculated for quantitative characterization. The results offer pore-scale insights into the effect of MICP treatment, and the quantitative understanding confirms the feasibility of the GAE-KE multi-level thresholding segmentation algorithm.
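The Kapur-entropy criterion behind GAE-KE can be written down compactly. The sketch below scores a pair of thresholds by the summed class entropies and searches for the best tri-level split over a synthetic histogram; a coarse exhaustive search stands in for the paper's genetic algorithm, and NumPy plus the toy three-mode histogram are assumptions.

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    """Sum of Shannon entropies of the classes induced by the thresholds (Kapur's criterion)."""
    p = hist / hist.sum()
    edges = [0, *thresholds, len(hist)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            return -np.inf
        q = p[lo:hi] / w
        q = q[q > 0]
        total += -(q * np.log(q)).sum()
    return total

def trilevel_thresholds(hist, step=8):
    """Coarse exhaustive search over (t1, t2) pairs; a GA would explore this space instead."""
    best, best_h = None, -np.inf
    for t1 in range(step, 256 - step, step):
        for t2 in range(t1 + step, 256, step):
            h = kapur_entropy(hist, (t1, t2))
            if h > best_h:
                best, best_h = (t1, t2), h
    return best

# Toy histogram with three intensity modes (pores, sand grains, carbonate) -- synthetic data.
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(40, 10, 5000), rng.normal(130, 12, 5000), rng.normal(220, 8, 3000)])
hist, _ = np.histogram(np.clip(pixels, 0, 255), bins=256, range=(0, 255))
print(trilevel_thresholds(hist.astype(float)))
```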
Early screening for diabetic retinopathy (DR) plays an important role in preventing irreversible blindness. Existing research has failed to fully exploit the effective DR lesion information in fundus images. Moreover, traditional attention schemes do not consider the impact of lesion-type differences on grading, resulting in unreasonable extraction of important lesion features. This paper therefore proposes a DR diagnosis scheme that integrates a multi-level patch attention generator (MPAG) and a lesion localization module (LLM). First, the MPAG predicts patches of different sizes and generates a weighted attention map based on the prediction scores and the types of lesions contained in the patches; by fully accounting for the impact of lesion-type differences on grading, it addresses the problem that lesion attention maps cannot otherwise be refined and adapted to the final DR diagnosis task. Second, the LLM generates a localization-based global attention map. Finally, the weighted attention map and the global attention map are combined with the fundus image to fully exploit effective DR lesion information and increase the classification network's attention to lesion details. Extensive experiments on the public DDR dataset demonstrate the effectiveness of the proposed method, which achieves an accuracy of 0.8064.
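The final weighting step, combining the patch-level attention map and the localization-based global attention map with the fundus image, could look roughly like the NumPy sketch below. The array shapes, the 0.5/0.5 blend, and the elementwise product are assumed details, not the paper's exact fusion rule.

```python
import numpy as np

def fuse_attention(fundus, patch_attention, global_attention, w_patch=0.5, w_global=0.5):
    """Blend the two attention maps and re-weight the fundus image channel-wise.

    fundus:            (H, W, 3) RGB image in [0, 1]
    patch_attention:   (H, W) map built from patch-level lesion predictions
    global_attention:  (H, W) map from the lesion localization module
    The 0.5/0.5 blend and the elementwise product are illustrative choices.
    """
    attention = w_patch * patch_attention + w_global * global_attention
    attention = (attention - attention.min()) / (attention.max() - attention.min() + 1e-8)
    return fundus * attention[..., None]       # broadcast over the RGB channels

# Toy example: a 64x64 image with stronger attention in the center.
h = w = 64
fundus = np.random.default_rng(1).uniform(size=(h, w, 3))
yy, xx = np.mgrid[0:h, 0:w]
global_att = np.exp(-((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (2 * 15 ** 2))
patch_att = np.ones((h, w))
weighted = fuse_attention(fundus, patch_att, global_att)
print(weighted.shape)   # (64, 64, 3)
```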
With the explosive growth of high-definition video streaming data, network traffic has increased substantially. The emergence of mobile edge caching (MEC) can not only alleviate the burden on the core network but also significantly improve user experience. By integrating MEC with satellite networks, popular content can be delivered ubiquitously and seamlessly. Addressing the research gap between multilayer satellite networks and MEC, we study the caching placement problem in this paper. We first introduce a three-layer distributed network caching management architecture designed for efficient and flexible handling of large-scale networks. Considering the constraints on satellite capacity and content propagation delay, the cache placement problem is then formulated and transformed into a Markov decision process (MDP), in which a content coded caching mechanism is used to improve the efficiency of content delivery. Furthermore, a new generic metric, content delivery cost, is proposed to characterize the performance of caching decisions in large-scale networks. We then introduce a graph convolutional network (GCN)-based multi-agent advantage actor-critic (A2C) algorithm to optimize the caching decision. Finally, extensive simulations are conducted to evaluate the proposed algorithm in terms of content delivery cost and transferability.
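A toy version of the caching-placement MDP with a content-delivery-cost reward is sketched below to make the state/action/reward wiring concrete. The Zipf-like demand, cost constants, and class name are invented, and the GCN-based multi-agent A2C agent that would actually choose the caching action is omitted.

```python
import random

class CachingMDP:
    """Toy MDP: at each step, choose which contents a satellite caches for the next epoch."""

    def __init__(self, n_contents=20, cache_slots=5, backhaul_cost=10.0, hit_cost=1.0):
        self.n_contents = n_contents
        self.cache_slots = cache_slots
        self.backhaul_cost = backhaul_cost   # cost of fetching through the core network
        self.hit_cost = hit_cost             # cost of serving from the satellite cache
        self.popularity = [1.0 / (i + 1) for i in range(n_contents)]  # Zipf-like demand

    def step(self, cached_set):
        """cached_set: the action, a set of at most `cache_slots` content ids."""
        assert len(cached_set) <= self.cache_slots
        requests = random.choices(range(self.n_contents), weights=self.popularity, k=100)
        delivery_cost = sum(
            self.hit_cost if c in cached_set else self.backhaul_cost for c in requests
        )
        reward = -delivery_cost          # the agent minimizes content delivery cost
        next_state = [requests.count(c) for c in range(self.n_contents)]  # observed demand
        return next_state, reward

env = CachingMDP()
state, reward = env.step(set(range(5)))   # cache the 5 most popular contents
print(round(reward, 1))
```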
Existing glass segmentation networks have high computational complexity and large memory occupation, leading to high hardware requirements and large time overheads for model inference, which is unsuitable for efficiency-critical real-time tasks such as autonomous driving. The inefficiency of these models stems mainly from using homogeneous modules to process features of different layers; such modules require computationally intensive convolutions and weight-calculation branches with numerous parameters to accommodate the differences in information across layers. We propose EGSNet, an efficient glass segmentation network based on a multi-level heterogeneous architecture and boundary awareness, to balance model performance and efficiency. EGSNet divides the feature layers from different stages into low-level understanding, semantic-level understanding, and global understanding with boundary guidance. Based on the information differences among these layers, we further propose a multi-angle collaborative enhancement (MCE) module, which extracts detailed information from shallow features, and a large-scale contextual feature extraction (LCFE) module, which captures semantic logic from deep features. The models are trained and evaluated on the glass segmentation datasets HSO (Home-Scene-Oriented) and Trans10k-stuff, and EGSNet achieves the best trade-off between efficiency and performance compared with advanced methods. On the HSO test set, the IoU, Fβ, MAE (Mean Absolute Error), and BER (Balance Error Rate) of EGSNet are 0.804, 0.847, 0.084, and 0.085, while its cost is only 27.15 GFLOPs. Experimental results show that EGSNet significantly improves the efficiency of the glass segmentation task with better performance.
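The reported metrics are standard, and a small NumPy sketch of IoU, MAE, and BER on binary masks is given below for reference; the 0.5 binarization threshold and the random test masks are assumptions, and the Fβ score is omitted for brevity.

```python
import numpy as np

def segmentation_metrics(pred, gt, threshold=0.5):
    """IoU, MAE and BER for a predicted soft mask `pred` and a binary ground truth `gt`."""
    pred_bin = pred >= threshold
    gt_bin = gt >= 0.5

    inter = np.logical_and(pred_bin, gt_bin).sum()
    union = np.logical_or(pred_bin, gt_bin).sum()
    iou = inter / union if union else 1.0

    mae = np.abs(pred - gt).mean()

    tp = inter
    tn = np.logical_and(~pred_bin, ~gt_bin).sum()
    pos, neg = gt_bin.sum(), (~gt_bin).sum()
    ber = 1.0 - 0.5 * (tp / pos + tn / neg)   # balanced error rate over the two classes
    return iou, mae, ber

rng = np.random.default_rng(2)
gt = (rng.uniform(size=(128, 128)) > 0.7).astype(float)
pred = np.clip(gt + rng.normal(0, 0.2, size=gt.shape), 0, 1)
print([round(v, 3) for v in segmentation_metrics(pred, gt)])
```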
As users' network access has evolved from obtaining IP addresses to acquiring massive amounts of content, the IP network architecture based on end-to-end communication can no longer meet users' needs, and Information-Centric Networking (ICN) has emerged. From a technical point of view, ICN is a promising future network architecture, and researching and customizing a reasonable pricing mechanism plays a positive role in promoting its deployment. Current research on ICN pricing mechanisms focuses on paid content. We therefore study an ICN pricing model for free content, analyzed with game theory based on Nash equilibrium. In this work, advertisers are taken into account, and an advertiser model is established to describe the economic interaction between advertisers and ICN entities. This solution can formulate the best pricing strategy for all ICN entities and maximize the benefit of each entity. Our extensive analysis and numerical results show that the proposed pricing framework is significantly better than existing solutions for free content.
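To illustrate the Nash-equilibrium flavor of the pricing analysis, the toy game below alternates best responses between an advertiser choosing ad volume and an ICN provider choosing an ad price until the pair stops moving. The quadratic payoff functions and all constants are invented for illustration and are not the paper's model.

```python
def advertiser_best_response(price, view_value=2.0, saturation=0.05):
    """Advertiser chooses ad volume q to maximize view_value*q - saturation*q**2 - price*q."""
    return max((view_value - price) / (2 * saturation), 0.0)

def provider_best_response(volume, churn=10.0):
    """ICN provider chooses the ad price p to maximize p*volume - churn*p**2."""
    return volume / (2 * churn)

# Alternate best responses until the strategies stop moving: the fixed point is a Nash
# equilibrium of this toy advertiser-vs-ICN-provider game.
price, volume = 0.1, 0.0
for _ in range(50):
    volume = advertiser_best_response(price)
    price = provider_best_response(volume)
print(round(price, 3), round(volume, 2))    # converges to about (0.667, 13.33)
```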
Structured illumination microscopy (SIM) is a popular and powerful super-resolution (SR) technique in biomedical research. However, the conventional reconstruction algorithm for SIM relies heavily on accurate prior knowledge of the illumination patterns and the signal-to-noise ratio (SNR) of the raw images. To obtain high-quality SR images, several raw images must be captured at a high fluorescence level, which further restricts SIM's temporal resolution and its applications. Deep learning (DL) is a data-driven technology that has been used to push the limits of optical microscopy. In this study, we propose a deep neural network based on multi-level wavelets and an attention mechanism (MWAM) for SIM. Our results show that the MWAM network can extract the high-frequency information contained in SIM raw images and accurately integrate it into the output, yielding SR images superior to those generated from wide-field images as input. We also demonstrate that the number of SIM raw images can be reduced to three, one per illumination orientation, to achieve an optimal trade-off between temporal and spatial resolution. Furthermore, the MWAM network exhibits superior reconstruction ability on low-SNR images compared with conventional SIM algorithms. We have also analyzed the adaptability of this network to other biological samples and successfully applied the pretrained model to other SIM systems.
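As a reference point for the wavelet front end, the sketch below performs one level of the 2-D Haar decomposition and its exact inverse in NumPy. This is a generic transform, not the MWAM architecture; the network would operate on such sub-bands rather than be replaced by them.

```python
import numpy as np

def haar_decompose(img):
    """One level of the 2-D Haar transform: low-pass (LL) plus three detail sub-bands."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0          # approximation
    lh = (a + b - c - d) / 4.0          # horizontal detail
    hl = (a - b + c - d) / 4.0          # vertical detail
    hh = (a - b - c + d) / 4.0          # diagonal detail
    return ll, lh, hl, hh

def haar_reconstruct(ll, lh, hl, hh):
    """Invert haar_decompose exactly."""
    h, w = ll.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = ll + lh + hl + hh
    img[0::2, 1::2] = ll + lh - hl - hh
    img[1::2, 0::2] = ll - lh + hl - hh
    img[1::2, 1::2] = ll - lh - hl + hh
    return img

raw = np.random.default_rng(3).uniform(size=(64, 64))
ll, lh, hl, hh = haar_decompose(raw)
print(np.allclose(haar_reconstruct(ll, lh, hl, hh), raw))   # True
```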
The emergence of various new services has posed a huge challenge to the existing network architecture. To reduce network delay and backhaul pressure, caching popular content at the edge of the network has been considered a feasible scheme. However, efficiently utilizing limited caching resources to cache diverse content has proven to be a tough problem over the past decade. In this paper, considering time-varying user requests and heterogeneous content sizes, we propose a user-preference-aware hierarchical cooperative caching strategy for an edge-user caching architecture. The caching strategy is divided into three phases: content placement, content delivery, and content update. In the content placement phase, a cooperative content placement algorithm based on local content popularity is designed to cache content proactively. In the content delivery phase, a cooperative delivery algorithm is proposed to deliver the cached content. In the content update phase, a content update algorithm is proposed according to content popularity. Finally, the proposed caching strategy is validated on the MovieLens dataset, and the results reveal that it improves delay performance by at least 35.3% compared with three benchmark strategies.
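The three phases named in the abstract can be mocked up in a few lines of Python: popularity-based placement, delivery that falls back from the local cache to a neighbor and then the cloud, and a popularity-driven update. The popularity counter, swap rule, and class names are assumptions, not the proposed algorithms.

```python
from collections import Counter

class EdgeCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = set()
        self.popularity = Counter()

    # --- content placement phase: proactively cache the locally most popular contents ---
    def place(self, predicted_popularity):
        ranked = sorted(predicted_popularity, key=predicted_popularity.get, reverse=True)
        self.store = set(ranked[: self.capacity])

    # --- content delivery phase: serve locally, then from a neighbor, else from the cloud ---
    def deliver(self, content, neighbor=None):
        self.popularity[content] += 1
        if content in self.store:
            return "local hit"
        if neighbor is not None and content in neighbor.store:
            return "neighbor hit"
        return "cloud fetch"

    # --- content update phase: swap in contents that became more popular than cached ones ---
    def update(self):
        for content, count in self.popularity.most_common(self.capacity):
            if content not in self.store and len(self.store) >= self.capacity:
                coldest = min(self.store, key=lambda c: self.popularity[c])
                if count > self.popularity[coldest]:
                    self.store.remove(coldest)
                    self.store.add(content)

edge_a, edge_b = EdgeCache(2), EdgeCache(2)
edge_a.place({"m1": 9, "m2": 7, "m3": 1})
edge_b.place({"m3": 8, "m4": 6, "m1": 2})
print(edge_a.deliver("m3", neighbor=edge_b))   # neighbor hit
```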
Achieving reliable and efficient weather classification for autonomous vehicles is crucial for ensuring safety and operational effectiveness. However, accurately classifying diverse and complex weather conditions remains a significant challenge. While advanced techniques such as Vision Transformers have been developed, they face key limitations, including high computational cost and limited generalization across varying weather conditions. These challenges present a critical research gap, particularly in applications that need scalable, efficient solutions for handling the intricate and dynamic nature of weather phenomena in real time. To address this gap, we propose a Multi-level Knowledge Distillation (MLKD) framework that leverages the complementary strengths of state-of-the-art pre-trained models to enhance classification performance while minimizing computational overhead. Specifically, we employ ResNet50V2 and EfficientNetV2B3 as teacher models, known for their ability to capture complex image features, and distil their knowledge into a custom lightweight Convolutional Neural Network (CNN) student model. This framework balances the trade-off between high classification accuracy and efficient resource consumption, ensuring real-time applicability in autonomous systems. Our Response-based Multi-level Knowledge Distillation (R-MLKD) approach effectively transfers rich, high-level feature representations from the teacher models to the student model, allowing the student to perform robustly with significantly fewer parameters and lower computational demands. The proposed method was evaluated on three public datasets (DAWN, BDD100K, and CITS traffic alerts), each containing seven weather classes with 2000 samples per class. The results demonstrate the effectiveness of MLKD, achieving 97.3% accuracy and surpassing conventional deep learning models. This work improves classification accuracy while tackling the practical challenges of model complexity, resource consumption, and real-time deployment, offering a scalable solution for weather classification in autonomous driving systems.
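Response-based distillation itself is compact; assuming PyTorch, a minimal sketch of a loss that blends a softened teacher-ensemble KL term with cross-entropy is shown below. The temperature, the 0.7/0.3 weighting, and the random stand-in logits for the ResNet50V2/EfficientNetV2B3 teachers are assumptions.

```python
import torch
import torch.nn.functional as F

def response_distillation_loss(student_logits, teacher_logits_list, labels, T=4.0, alpha=0.7):
    """KL(student || averaged softened teachers) blended with ordinary cross-entropy."""
    teacher_probs = torch.stack(
        [F.softmax(t / T, dim=1) for t in teacher_logits_list]
    ).mean(dim=0)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1), teacher_probs,
                  reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy batch: 8 images, 7 weather classes; real teachers would be frozen pretrained networks.
batch, classes = 8, 7
student_logits = torch.randn(batch, classes, requires_grad=True)
teacher_logits = [torch.randn(batch, classes), torch.randn(batch, classes)]
labels = torch.randint(0, classes, (batch,))
loss = response_distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(float(loss))
```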
One of the challenges of Information-Centric Networking (ICN) is finding the optimal locations for caching content and processing users' requests. In this paper, we address this challenge by leveraging Software-Defined Networking (SDN) for efficient ICN management. To achieve this, we formulate the problem as a mixed-integer nonlinear programming (MINLP) model incorporating caching, routing, and load-balancing decisions. We explore two distinct scenarios. First, we solve the problem in an offline mode using the GAMS environment, assuming a stable network state, to demonstrate the superior performance of a cache-enabled network over non-cache networks. We then investigate the problem in an online mode where the network state changes dynamically over time. Given the computational complexity of MINLP, we propose the software-defined caching, routing, and load balancing (SDCRL) algorithm as an efficient and scalable solution. Our evaluation demonstrates that SDCRL significantly reduces computation time while producing results that closely match those achieved by GAMS.
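As a rough counterpart to the MINLP formulation, the greedy heuristic below jointly picks cache placements and the cheapest route (local cache, caching neighbor, or origin) under a per-node capacity. The topology, demand matrix, and greedy hop-saving rule are illustrative assumptions and are not the SDCRL algorithm.

```python
import itertools

def greedy_cache_and_route(demand, hops_to_origin, hops, capacity):
    """Greedily cache the (node, content) pairs that save the most total hop count.

    demand[node][content]  : request rate observed at each edge node (assumed known)
    hops_to_origin[node]   : hop count from an edge node to the content origin server
    hops[a][b]             : hop count between edge nodes a and b
    capacity               : number of contents each node can cache
    """
    nodes, contents = list(demand), {c for d in demand.values() for c in d}
    cached = {n: set() for n in nodes}

    def route_cost(node, content):
        """Cheapest way to serve one request: local cache, a caching neighbor, or the origin."""
        options = [hops_to_origin[node]]
        options += [hops[node][m] for m in nodes if content in cached[m]]
        return min(options)

    def total_cost():
        return sum(d * route_cost(n, c) for n in nodes for c, d in demand[n].items())

    for _ in range(capacity * len(nodes)):
        base = total_cost()
        best = None
        for n, c in itertools.product(nodes, contents):
            if c in cached[n] or len(cached[n]) >= capacity:
                continue
            cached[n].add(c)
            saving = base - total_cost()
            cached[n].remove(c)
            if best is None or saving > best[0]:
                best = (saving, n, c)
        if best is None or best[0] <= 0:
            break
        cached[best[1]].add(best[2])
    return cached

demand = {"A": {"v1": 10, "v2": 3}, "B": {"v2": 8, "v3": 5}}
hops = {"A": {"A": 0, "B": 2}, "B": {"A": 2, "B": 0}}
print(greedy_cache_and_route(demand, {"A": 6, "B": 6}, hops, capacity=1))
```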
The cache-based covert channel is one of the common vulnerabilities exploited in Spectre attacks. Current mitigation strategies focus on blocking the eviction-based channel by using a random or encrypted mapping function to translate memory addresses to cache addresses, while the update-based channel remains vulnerable. In addition, some mitigation strategies are costly because they require both software and hardware modifications. In this paper, our objective is to devise low-cost, comprehensive protection techniques for mitigating Spectre attacks. We propose a novel cache structure, named EBCache, which targets the RISC-V processor and applies address encryption and a blacklist to resist Spectre attacks. The address encryption mechanism increases the difficulty of pruning a minimal eviction set. The blacklist mechanism makes cache lines loaded by malicious updates invisible. Our experiments demonstrate that EBCache can prevent malicious modifications. EBCache reduces the processor's performance by about 23% but requires only a low-cost hardware modification.
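The two mechanisms named in the abstract, a keyed (encrypted) set-index mapping and a blacklist for speculatively updated lines, can be imitated in a plain-Python toy as below. The BLAKE2 keyed hash, the direct-mapped geometry, and the commit interface are assumptions; this is a behavioral illustration, not the EBCache hardware design.

```python
import hashlib

class ToyEncryptedCache:
    """Direct-mapped toy cache with a keyed set-index mapping and a speculative blacklist."""

    def __init__(self, n_sets=64, key=b"secret"):
        self.n_sets = n_sets
        self.key = key
        self.lines = {}          # set index -> address stored there
        self.blacklist = set()   # set indices filled by speculative (untrusted) updates

    def _index(self, addr):
        # A keyed hash instead of plain address bits: remapping the set index makes it hard
        # for an attacker to assemble a minimal eviction set for a chosen target address.
        digest = hashlib.blake2b(addr.to_bytes(8, "little"), key=self.key).digest()
        return int.from_bytes(digest[:4], "little") % self.n_sets

    def access(self, addr, speculative=False):
        idx = self._index(addr)
        hit = self.lines.get(idx) == addr and idx not in self.blacklist
        self.lines[idx] = addr
        if speculative:
            self.blacklist.add(idx)     # the update stays invisible until it is committed
        return hit

    def commit(self, addr):
        self.blacklist.discard(self._index(addr))

cache = ToyEncryptedCache()
cache.access(0x1000, speculative=True)      # transient (mis-speculated) fill
print(cache.access(0x1000))                 # False: blacklisted line is not observable
cache.commit(0x1000)
print(cache.access(0x1000))                 # True once the update is committed
```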
Emerging mobile edge computing (MEC) is considered a feasible solution for offloading the computation-intensive request tasks generated by mobile wireless equipment (MWE) with limited computational resources and energy. Because the request tasks from one MWE are homogeneous over a long-term period, it is vital to pre-deploy the particular service cachings required by those tasks at the MEC server. In this paper, we model a service-caching-assisted MEC framework that accounts for the constraint on the number of service cachings hosted by each edge server and for the migration of request tasks from the current edge server to another edge server that hosts the required service caching. Furthermore, we propose a multi-agent deep reinforcement learning-based computation offloading and task migration decision-making scheme (MBOMS) to minimize the long-term average weighted cost. MBOMS learns a near-optimal offloading and migration policy through centralized training and decentralized execution. Systematic and comprehensive simulation results reveal that MBOMS converges well after training and outperforms five baseline algorithms.
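The objective MBOMS minimizes, a long-term average weighted cost, can be illustrated with a toy per-slot cost that mixes delay and energy for local execution versus offloading. The weights, CPU and radio constants, and the random stand-in policy are assumptions; a trained multi-agent DRL policy would replace the random choice.

```python
import random

def slot_cost(decision, task_cycles=4e8, data_bits=2e6, w_delay=0.6, w_energy=0.4):
    """Weighted delay + energy for one task under a given offloading decision (toy numbers)."""
    if decision == "local":
        delay = task_cycles / 1e9                  # local CPU at 1 GHz
        energy = 1e-27 * (1e9 ** 2) * task_cycles  # classic kappa * f^2 * cycles model
    else:                                          # offload to an edge server over the radio link
        delay = data_bits / 20e6 + task_cycles / 10e9
        energy = 0.3 * (data_bits / 20e6)          # 0.3 W transmit power during the upload
    return w_delay * delay + w_energy * energy

# Long-term average weighted cost over many slots; a trained agent would replace the
# random policy below with learned offloading/migration decisions.
random.seed(0)
costs = [slot_cost(random.choice(["local", "offload"])) for _ in range(10000)]
print(round(sum(costs) / len(costs), 4))
```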
Funding (MDWC paper): Supported by the SEC E-Institute: Shanghai High Institutions Grid Project.
Funding (MICP/GAE-KE paper): Supported by the National Natural Science Foundation of China (Grant Nos. 42077232 and 42077235) and the Key Research and Development Plan of Jiangsu Province (Grant No. BE2022156).
Funding (DR diagnosis paper): Supported in part by the Research on the Application of Multimodal Artificial Intelligence in Diagnosis and Treatment of Type 2 Diabetes under Grant No. 2020SK50910, and in part by the Hunan Provincial Natural Science Foundation of China under Grant 2023JJ60020.
Funding (satellite MEC caching paper): Supported by the National Key Research and Development Program of China under Grant 2020YFB1807700, the National Natural Science Foundation of China (NSFC) under Grants 62201414 and 62201432, the Qinchuangyuan Project (OCYRCXM-2022-362), the Fundamental Research Funds for the Central Universities and the Innovation Fund of Xidian University under Grant YJSJ24017, and the Guangzhou Science and Technology Program under Grant 202201011732.
Funding (ICN pricing paper): Supported by the Key R&D Program of Anhui Province in 2020 under Grant No. 202004a05020078 and the China Environment for Network Innovations (CENI) under Grant No. 2016-000052-73-01-000515.
Funding (MWAM SIM paper): Supported by the National Natural Science Foundation of China (Grant Nos. 62005307 and 61975228).
Funding (hierarchical cooperative caching paper): Supported by the Natural Science Foundation of China (Grants 61901070, 61801065, 62271096, 61871062, U20A20157 and 62061007), in part by the Science and Technology Research Program of Chongqing Municipal Education Commission (Grants KJQN202000603 and KJQN201900611), in part by the Natural Science Foundation of Chongqing (Grants CSTB2022NSCQMSX0468, cstc2020jcyjzdxmX0024 and cstc2021jcyjmsxmX0892), in part by the University Innovation Research Group of Chongqing (Grant CxQT20017), in part by the Youth Innovation Group Support Program of the ICE Discipline of CQUPT (SCIE-QN-2022-04), and in part by the Chongqing Graduate Student Scientific Research Innovation Project (CYB22246).
Funding (EBCache paper): Supported in part by the China Ministry of Science and Technology under Grant 2015GA600002.
Funding (MBOMS paper): Supported by the Jilin Provincial Science and Technology Department Natural Science Foundation of China (20210101415JC) and the Jilin Provincial Science and Technology Department Free Exploration Research Project of China (YDZJ202201ZYTS642).