Although AI and quantum computing (QC) are fast emerging as key enablers of the future Internet, experts believe they pose an existential threat to humanity. Responding to the frenzied release of ChatGPT/GPT-4, thousa...Although AI and quantum computing (QC) are fast emerging as key enablers of the future Internet, experts believe they pose an existential threat to humanity. Responding to the frenzied release of ChatGPT/GPT-4, thousands of alarmed tech leaders recently signed an open letter to pause AI research to prepare for the catastrophic threats to humanity from uncontrolled AGI (Artificial General Intelligence). Perceived as an “epistemological nightmare”, AGI is believed to be on the anvil with GPT-5. Two computing rules appear responsible for these risks. 1) Mandatory third-party permissions that allow computers to run applications at the expense of introducing vulnerabilities. 2) The Halting Problem of Turing-complete AI programming languages potentially renders AGI unstoppable. The double whammy of these inherent weaknesses remains invincible under the legacy systems. A recent cybersecurity breakthrough shows that banning all permissions reduces the computer attack surface to zero, delivering a new zero vulnerability computing (ZVC) paradigm. Deploying ZVC and blockchain, this paper formulates and supports a hypothesis: “Safe, secure, ethical, controllable AGI/QC is possible by conquering the two unassailable rules of computability.” Pursued by a European consortium, testing/proving the proposed hypothesis will have a groundbreaking impact on the future digital infrastructure when AGI/QC starts powering the 75 billion internet devices by 2025.展开更多
By pushing computation,cache,and network control to the edge,mobile edge computing(MEC)is expected to play a leading role in fifth generation(5G)and future sixth generation(6G).Nevertheless,facing ubiquitous fast-grow...By pushing computation,cache,and network control to the edge,mobile edge computing(MEC)is expected to play a leading role in fifth generation(5G)and future sixth generation(6G).Nevertheless,facing ubiquitous fast-growing computational demands,it is impossible for a single MEC paradigm to effectively support high-quality intelligent services at end user equipments(UEs).To address this issue,we propose an air-ground collaborative MEC(AGCMEC)architecture in this article.The proposed AGCMEC integrates all potentially available MEC servers within air and ground in the envisioned 6G,by a variety of collaborative ways to provide computation services at their best for UEs.Firstly,we introduce the AGC-MEC architecture and elaborate three typical use cases.Then,we discuss four main challenges in the AGC-MEC as well as their potential solutions.Next,we conduct a case study of collaborative service placement for AGC-MEC to validate the effectiveness of the proposed collaborative service placement strategy.Finally,we highlight several potential research directions of the AGC-MEC.展开更多
The utilization of processing capabilities within the detector holds significant promise in addressing energy consumption and latency challenges. Especially in the context of dynamic motion recognition tasks, where su...The utilization of processing capabilities within the detector holds significant promise in addressing energy consumption and latency challenges. Especially in the context of dynamic motion recognition tasks, where substantial data transfers are necessitated by the generation of extensive information and the need for frame-by-frame analysis. Herein, we present a novel approach for dynamic motion recognition, leveraging a spatial-temporal in-sensor computing system rooted in multiframe integration by employing photodetector. Our approach introduced a retinomorphic MoS_(2) photodetector device for motion detection and analysis. The device enables the generation of informative final states, nonlinearly embedding both past and present frames. Subsequent multiply-accumulate (MAC) calculations are efficiently performed as the classifier. When evaluating our devices for target detection and direction classification, we achieved an impressive recognition accuracy of 93.5%. By eliminating the need for frame-by-frame analysis, our system not only achieves high precision but also facilitates energy-efficient in-sensor computing.展开更多
The conventional computing architecture faces substantial chal-lenges,including high latency and energy consumption between memory and processing units.In response,in-memory computing has emerged as a promising altern...The conventional computing architecture faces substantial chal-lenges,including high latency and energy consumption between memory and processing units.In response,in-memory computing has emerged as a promising alternative architecture,enabling computing operations within memory arrays to overcome these limitations.Memristive devices have gained significant attention as key components for in-memory computing due to their high-density arrays,rapid response times,and ability to emulate biological synapses.Among these devices,two-dimensional(2D)material-based memristor and memtransistor arrays have emerged as particularly promising candidates for next-generation in-memory computing,thanks to their exceptional performance driven by the unique properties of 2D materials,such as layered structures,mechanical flexibility,and the capability to form heterojunctions.This review delves into the state-of-the-art research on 2D material-based memristive arrays,encompassing critical aspects such as material selection,device perfor-mance metrics,array structures,and potential applications.Furthermore,it provides a comprehensive overview of the current challenges and limitations associated with these arrays,along with potential solutions.The primary objective of this review is to serve as a significant milestone in realizing next-generation in-memory computing utilizing 2D materials and bridge the gap from single-device characterization to array-level and system-level implementations of neuromorphic computing,leveraging the potential of 2D material-based memristive devices.展开更多
Powered by advanced information technology,more and more complex systems are exhibiting characteristics of the cyber-physical-social systems(CPSS).In this context,computational experiments method has emerged as a nove...Powered by advanced information technology,more and more complex systems are exhibiting characteristics of the cyber-physical-social systems(CPSS).In this context,computational experiments method has emerged as a novel approach for the design,analysis,management,control,and integration of CPSS,which can realize the causal analysis of complex systems by means of“algorithmization”of“counterfactuals”.However,because CPSS involve human and social factors(e.g.,autonomy,initiative,and sociality),it is difficult for traditional design of experiment(DOE)methods to achieve the generative explanation of system emergence.To address this challenge,this paper proposes an integrated approach to the design of computational experiments,incorporating three key modules:1)Descriptive module:Determining the influencing factors and response variables of the system by means of the modeling of an artificial society;2)Interpretative module:Selecting factorial experimental design solution to identify the relationship between influencing factors and macro phenomena;3)Predictive module:Building a meta-model that is equivalent to artificial society to explore its operating laws.Finally,a case study of crowd-sourcing platforms is presented to illustrate the application process and effectiveness of the proposed approach,which can reveal the social impact of algorithmic behavior on“rider race”.展开更多
In the past decade,there has been tremendous progress in integrating chalcogenide phase-change materials(PCMs)on the silicon photonic platform for non-volatile memory to neuromorphic in-memory computing applications.I...In the past decade,there has been tremendous progress in integrating chalcogenide phase-change materials(PCMs)on the silicon photonic platform for non-volatile memory to neuromorphic in-memory computing applications.In particular,these non von Neumann computational elements and systems benefit from mass manufacturing of silicon photonic integrated circuits(PICs)on 8-inch wafers using a 130 nm complementary metal-oxide semiconductor line.Chip manufacturing based on deep-ultraviolet lithography and electron-beam lithography enables rapid prototyping of PICs,which can be integrated with high-quality PCMs based on the wafer-scale sputtering technique as a back-end-of-line process.In this article,we present an overview of recent advances in waveguide integrated PCM memory cells,functional devices,and neuromorphic systems,with an emphasis on fabrication and integration processes to attain state-of-the-art device performance.After a short overview of PCM based photonic devices,we discuss the materials properties of the functional layer as well as the progress on the light guiding layer,namely,the silicon and germanium waveguide platforms.Next,we discuss the cleanroom fabrication flow of waveguide devices integrated with thin films and nanowires,silicon waveguides and plasmonic microheaters for the electrothermal switching of PCMs and mixed-mode operation.Finally,the fabrication of photonic and photonic–electronic neuromorphic computing systems is reviewed.These systems consist of arrays of PCM memory elements for associative learning,matrix-vector multiplication,and pattern recognition.With large-scale integration,the neuromorphic photonic computing paradigm holds the promise to outperform digital electronic accelerators by taking the advantages of ultra-high bandwidth,high speed,and energy-efficient operation in running machine learning algorithms.展开更多
BACKGROUND Lymphovascular invasion(LVI)and perineural invasion(PNI)are important prognostic factors for gastric cancer(GC)that indicate an increased risk of metastasis and poor outcomes.Accurate preoperative predictio...BACKGROUND Lymphovascular invasion(LVI)and perineural invasion(PNI)are important prognostic factors for gastric cancer(GC)that indicate an increased risk of metastasis and poor outcomes.Accurate preoperative prediction of LVI/PNI status could help clinicians identify high-risk patients and guide treatment deci-sions.However,prior models using conventional computed tomography(CT)images to predict LVI or PNI separately have had limited accuracy.Spectral CT provides quantitative enhancement parameters that may better capture tumor invasion.We hypothesized that a predictive model combining clinical and spectral CT parameters would accurately preoperatively predict LVI/PNI status in GC patients.AIM To develop and test a machine learning model that fuses spectral CT parameters and clinical indicators to predict LVI/PNI status accurately.METHODS This study used a retrospective dataset involving 257 GC patients(training cohort,n=172;validation cohort,n=85).First,several clinical indicators,including serum tumor markers,CT-TN stages and CT-detected extramural vein invasion(CT-EMVI),were extracted,as were quantitative spectral CT parameters from the delineated tumor regions.Next,a two-step feature selection approach using correlation-based methods and information gain ranking inside a 10-fold cross-validation loop was utilized to select informative clinical and spectral CT parameters.A logistic regression(LR)-based nomogram model was subsequently constructed to predict LVI/PNI status,and its performance was evaluated using the area under the receiver operating characteristic curve(AUC).RESULTS In both the training and validation cohorts,CT T3-4 stage,CT-N positive status,and CT-EMVI positive status are more prevalent in the LVI/PNI-positive group and these differences are statistically significant(P<0.05).LR analysis of the training group showed preoperative CT-T stage,CT-EMVI,single-energy CT values of 70 keV of venous phase(VP-70 keV),and the ratio of standardized iodine concentration of equilibrium phase(EP-NIC)were independent influencing factors.The AUCs of VP-70 keV and EP-NIC were 0.888 and 0.824,respectively,which were slightly greater than those of CT-T and CT-EMVI(AUC=0.793,0.762).The nomogram combining CT-T stage,CT-EMVI,VP-70 keV and EP-NIC yielded AUCs of 0.918(0.866-0.954)and 0.874(0.784-0.936)in the training and validation cohorts,which are significantly higher than using each of single independent factors(P<0.05).CONCLUSION The study found that using portal venous and EP spectral CT parameters allows effective preoperative detection of LVI/PNI in GC,with accuracy boosted by integrating clinical markers.展开更多
BACKGROUND Gastric cancer(GC)is the most common malignant tumor and ranks third for cancer-related deaths among the worldwide.The disease poses a serious public health problem in China,ranking fifth for incidence and ...BACKGROUND Gastric cancer(GC)is the most common malignant tumor and ranks third for cancer-related deaths among the worldwide.The disease poses a serious public health problem in China,ranking fifth for incidence and third for mortality.Knowledge of the invasive depth of the tumor is vital to treatment decisions.AIM To evaluate the diagnostic performance of double contrast-enhanced ultrasonography(DCEUS)for preoperative T staging in patients with GC by comparing with multi-detector computed tomography(MDCT).METHODS This single prospective study enrolled patients with GC confirmed by preoperative gastroscopy from July 2021 to March 2023.Patients underwent DCEUS,including ultrasonography(US)and intravenous contrast-enhanced ultrasonography(CEUS),and MDCT examinations for the assessment of preoperative T staging.Features of GC were identified on DCEUS and criteria developed to evaluate T staging according to the 8th edition of AJCC cancer staging manual.The diagnostic performance of DCEUS was evaluated by comparing it with that of MDCT and surgical-pathological findings were considered as the gold standard.RESULTS A total of 229 patients with GC(80 T1,33 T2,59 T3 and 57 T4)were included.Overall accuracies were 86.9%for DCEUS and 61.1%for MDCT(P<0.001).DCEUS was superior to MDCT for T1(92.5%vs 70.0%,P<0.001),T2(72.7%vs 51.5%,P=0.041),T3(86.4%vs 45.8%,P<0.001)and T4(87.7%vs 70.2%,P=0.022)staging of GC.CONCLUSION DCEUS improved the diagnostic accuracy of preoperative T staging in patients with GC compared with MDCT,and constitutes a promising imaging modality for preoperative evaluation of GC to aid individualized treatment decision-making.展开更多
BACKGROUND This study presents an evaluation of the computed tomography lymphangio-graphy(CTL)features of lymphatic plastic bronchitis(PB)and primary chylotho-rax to improve the diagnostic accuracy for these two disea...BACKGROUND This study presents an evaluation of the computed tomography lymphangio-graphy(CTL)features of lymphatic plastic bronchitis(PB)and primary chylotho-rax to improve the diagnostic accuracy for these two diseases.AIM To improve the diagnosis of lymphatic PB or primary chylothorax,a retrospective analysis of the clinical features and CTL characteristics of 71 patients diagnosed with lymphatic PB or primary chylothorax was performed.METHODS The clinical and CTL data of 71 patients(20 with lymphatic PB,41 with primary chylothorax,and 10 with lymphatic PB with primary chylothorax)were collected retrospectively.CTL was performed in all patients.The clinical manifestations,CTL findings,and conventional chest CT findings of the three groups of patients were compared.The chi-square test or Fisher's exact test was used to compare the differences among the three groups.A difference was considered to be statistically significant when P<0.05.RESULTS(1)The percentages of abnormal contrast medium deposits on CTL in the three groups were as follows:Thoracic duct outlet in 14(70.0%),33(80.5%)and 8(80.0%)patients;peritracheal region in 18(90.0%),15(36.6%)and 8(80.0%)patients;pleura in 6(30.0%),33(80.5%)and 9(90.0%)patients;pericardium in 6(30.0%),6(14.6%)and 4(40.0%)patients;and hilum in 16(80.0%),11(26.8%)and 7(70.0%)patients;and(2)the abnormalities on conven-tional chest CT in the three groups were as follows:Ground-glass opacity in 19(95.0%),18(43.9%)and 8(80.0%)patients;atelectasis in 4(20.0%),26(63.4%)and 7(70.0%)patients;interlobular septal thickening in 12(60.0%),11(26.8%)and 3(30.0%)patients;bronchovascular bundle thickening in 14(70.0%),6(14.6%)and 4(40.0%)patients;localized mediastinal changes in 14(70.0%),14(34.1%),and 7(70.0%)patients;diffuse mediastinal changes in 6(30.0%),5(12.2%),and 3(30.0%)patients;cystic lesions in the axilla in 2(10.0%),6(14.6%),and 2(20.0%)patients;and cystic lesions in the chest wall in 0(0%),2(4.9%),and 2(4.9%)patients.CONCLUSION CTL is well suited to clarify the characteristics of lymphatic PB and primary chylothorax.This method is an excellent tool for diagnosing these two diseases.展开更多
BACKGROUND Neoadjuvant chemotherapy(NAC)has become the standard care for advanced adenocarcinoma of esophagogastric junction(AEG),although a part of the patients cannot benefit from NAC.There are no models based on ba...BACKGROUND Neoadjuvant chemotherapy(NAC)has become the standard care for advanced adenocarcinoma of esophagogastric junction(AEG),although a part of the patients cannot benefit from NAC.There are no models based on baseline computed tomography(CT)to predict response of Siewert type II or III AEG to NAC with docetaxel,oxaliplatin and S-1(DOS).AIM To develop a CT-based nomogram to predict response of Siewert type II/III AEG to NAC with DOS.METHODS One hundred and twenty-eight consecutive patients with confirmed Siewert type II/III AEG underwent CT before and after three cycles of NAC with DOS,and were randomly and consecutively assigned to the training cohort(TC)(n=94)and the validation cohort(VC)(n=34).Therapeutic effect was assessed by disease-control rate and progressive disease according to the Response Evaluation Criteria in Solid Tumors(version 1.1)criteria.Possible prognostic factors associated with responses after DOS treatment including Siewert classification,gross tumor volume(GTV),and cT and cN stages were evaluated using pretherapeutic CT data in addition to sex and age.Univariate and multivariate analyses of CT and clinical features in the TC were performed to determine independent factors associated with response to DOS.A nomogram was established based on independent factors to predict the response.The predictive performance of the nomogram was evaluated by Concordance index(C-index),calibration and receiver operating characteristics curve in the TC and VC.RESULTS Univariate analysis showed that Siewert type(52/55 vs 29/39,P=0.005),pretherapeutic cT stage(57/62 vs 24/32,P=0.028),GTV(47.3±27.4 vs 73.2±54.3,P=0.040)were significantly associated with response to DOS in the TC.Multivariate analysis of the TC also showed that the pretherapeutic cT stage,GTV and Siewert type were independent predictive factors related to response to DOS(odds ratio=4.631,1.027 and 7.639,respectively;all P<0.05).The nomogram developed with these independent factors showed an excellent performance to predict response to DOS in the TC and VC(C-index:0.838 and 0.824),with area under the receiver operating characteristic curve of 0.838 and 0.824,respectively.The calibration curves showed that the practical and predicted response to DOS effectively coincided.CONCLUSION A novel nomogram developed with pretherapeutic cT stage,GTV and Siewert type predicted the response of Siewert type II/III AEG to NAC with DOS.展开更多
In this paper,we consider mobile edge computing(MEC)networks against proactive eavesdropping.To maximize the transmission rate,IRS assisted UAV communications are applied.We take the joint design of the trajectory of ...In this paper,we consider mobile edge computing(MEC)networks against proactive eavesdropping.To maximize the transmission rate,IRS assisted UAV communications are applied.We take the joint design of the trajectory of UAV,the transmitting beamforming of users,and the phase shift matrix of IRS.The original problem is strong non-convex and difficult to solve.We first propose two basic modes of the proactive eavesdropper,and obtain the closed-form solution for the boundary conditions of the two modes.Then we transform the original problem into an equivalent one and propose an alternating optimization(AO)based method to obtain a local optimal solution.The convergence of the algorithm is illustrated by numerical results.Further,we propose a zero forcing(ZF)based method as sub-optimal solution,and the simulation section shows that the proposed two schemes could obtain better performance compared with traditional schemes.展开更多
Practical real-world scenarios such as the Internet,social networks,and biological networks present the challenges of data scarcity and complex correlations,which limit the applications of artificial intelligence.The ...Practical real-world scenarios such as the Internet,social networks,and biological networks present the challenges of data scarcity and complex correlations,which limit the applications of artificial intelligence.The graph structure is a typical tool used to formulate such correlations,it is incapable of modeling highorder correlations among different objects in systems;thus,the graph structure cannot fully convey the intricate correlations among objects.Confronted with the aforementioned two challenges,hypergraph computation models high-order correlations among data,knowledge,and rules through hyperedges and leverages these high-order correlations to enhance the data.Additionally,hypergraph computation achieves collaborative computation using data and high-order correlations,thereby offering greater modeling flexibility.In particular,we introduce three types of hypergraph computation methods:①hypergraph structure modeling,②hypergraph semantic computing,and③efficient hypergraph computing.We then specify how to adopt hypergraph computation in practice by focusing on specific tasks such as three-dimensional(3D)object recognition,revealing that hypergraph computation can reduce the data requirement by 80%while achieving comparable performance or improve the performance by 52%given the same data,compared with a traditional data-based method.A comprehensive overview of the applications of hypergraph computation in diverse domains,such as intelligent medicine and computer vision,is also provided.Finally,we introduce an open-source deep learning library,DeepHypergraph(DHG),which can serve as a tool for the practical usage of hypergraph computation.展开更多
With the rapid development of cloud computing,edge computing,and smart devices,computing power resources indicate a trend of ubiquitous deployment.The traditional network architecture cannot efficiently leverage these...With the rapid development of cloud computing,edge computing,and smart devices,computing power resources indicate a trend of ubiquitous deployment.The traditional network architecture cannot efficiently leverage these distributed computing power resources due to computing power island effect.To overcome these problems and improve network efficiency,a new network computing paradigm is proposed,i.e.,Computing Power Network(CPN).Computing power network can connect ubiquitous and heterogenous computing power resources through networking to realize computing power scheduling flexibly.In this survey,we make an exhaustive review on the state-of-the-art research efforts on computing power network.We first give an overview of computing power network,including definition,architecture,and advantages.Next,a comprehensive elaboration of issues on computing power modeling,information awareness and announcement,resource allocation,network forwarding,computing power transaction platform and resource orchestration platform is presented.The computing power network testbed is built and evaluated.The applications and use cases in computing power network are discussed.Then,the key enabling technologies for computing power network are introduced.Finally,open challenges and future research directions are presented as well.展开更多
Serverless computing is a promising paradigm in cloud computing that greatly simplifies cloud programming.With serverless computing,developers only provide function code to serverless platform,and these functions are ...Serverless computing is a promising paradigm in cloud computing that greatly simplifies cloud programming.With serverless computing,developers only provide function code to serverless platform,and these functions are invoked by its driven events.Nonetheless,security threats in serverless computing such as vulnerability-based security threats have become the pain point hindering its wide adoption.The ideas in proactive defense such as redundancy,diversity and dynamic provide promising approaches to protect against cyberattacks.However,these security technologies are mostly applied to serverless platform based on“stacked”mode,as they are designed independent with serverless computing.The lack of security consideration in the initial design makes it especially challenging to achieve the all life cycle protection for serverless application with limited cost.In this paper,we present ATSSC,a proactive defense enabled attack tolerant serverless platform.ATSSC integrates the characteristic of redundancy,diversity and dynamic into serverless seamless to achieve high-level security and efficiency.Specifically,ATSSC constructs multiple diverse function replicas to process the driven events and performs cross-validation to verify the results.In order to create diverse function replicas,both software diversity and environment diversity are adopted.Furthermore,a dynamic function refresh strategy is proposed to keep the clean state of serverless functions.We implement ATSSC based on Kubernetes and Knative.Analysis and experimental results demonstrate that ATSSC can effectively protect serverless computing against cyberattacks with acceptable costs.展开更多
In a network environment composed of different types of computing centers that can be divided into different layers(clod,edge layer,and others),the interconnection between them offers the possibility of peer-to-peer t...In a network environment composed of different types of computing centers that can be divided into different layers(clod,edge layer,and others),the interconnection between them offers the possibility of peer-to-peer task offloading.For many resource-constrained devices,the computation of many types of tasks is not feasible because they cannot support such computations as they do not have enough available memory and processing capacity.In this scenario,it is worth considering transferring these tasks to resource-rich platforms,such as Edge Data Centers or remote cloud servers.For different reasons,it is more exciting and appropriate to download various tasks to specific download destinations depending on the properties and state of the environment and the nature of the functions.At the same time,establishing an optimal offloading policy,which ensures that all tasks are executed within the required latency and avoids excessive workload on specific computing centers is not easy.This study presents two alternatives to solve the offloading decision paradigm by introducing two well-known algorithms,Graph Neural Networks(GNN)and Deep Q-Network(DQN).It applies the alternatives on a well-known Edge Computing simulator called PureEdgeSimand compares them with the two defaultmethods,Trade-Off and Round Robin.Experiments showed that variants offer a slight improvement in task success rate and workload distribution.In terms of energy efficiency,they provided similar results.Finally,the success rates of different computing centers are tested,and the lack of capacity of remote cloud servers to respond to applications in real-time is demonstrated.These novel ways of finding a download strategy in a local networking environment are unique as they emulate the state and structure of the environment innovatively,considering the quality of its connections and constant updates.The download score defined in this research is a crucial feature for determining the quality of a download path in the GNN training process and has not previously been proposed.Simultaneously,the suitability of Reinforcement Learning(RL)techniques is demonstrated due to the dynamism of the network environment,considering all the key factors that affect the decision to offload a given task,including the actual state of all devices.展开更多
The growing development of the Internet of Things(IoT)is accelerating the emergence and growth of new IoT services and applications,which will result in massive amounts of data being generated,transmitted and pro-cess...The growing development of the Internet of Things(IoT)is accelerating the emergence and growth of new IoT services and applications,which will result in massive amounts of data being generated,transmitted and pro-cessed in wireless communication networks.Mobile Edge Computing(MEC)is a desired paradigm to timely process the data from IoT for value maximization.In MEC,a number of computing-capable devices are deployed at the network edge near data sources to support edge computing,such that the long network transmission delay in cloud computing paradigm could be avoided.Since an edge device might not always have sufficient resources to process the massive amount of data,computation offloading is significantly important considering the coop-eration among edge devices.However,the dynamic traffic characteristics and heterogeneous computing capa-bilities of edge devices challenge the offloading.In addition,different scheduling schemes might provide different computation delays to the offloaded tasks.Thus,offloading in mobile nodes and scheduling in the MEC server are coupled to determine service delay.This paper seeks to guarantee low delay for computation intensive applica-tions by jointly optimizing the offloading and scheduling in such an MEC system.We propose a Delay-Greedy Computation Offloading(DGCO)algorithm to make offloading decisions for new tasks in distributed computing-enabled mobile devices.A Reinforcement Learning-based Parallel Scheduling(RLPS)algorithm is further designed to schedule offloaded tasks in the multi-core MEC server.With an offloading delay broadcast mechanism,the DGCO and RLPS cooperate to achieve the goal of delay-guarantee-ratio maximization.Finally,the simulation results show that our proposal can bound the end-to-end delay of various tasks.Even under slightly heavy task load,the delay-guarantee-ratio given by DGCO-RLPS can still approximate 95%,while that given by benchmarked algorithms is reduced to intolerable value.The simulation results are demonstrated the effective-ness of DGCO-RLPS for delay guarantee in MEC.展开更多
AI development has brought great success to upgrading the information age.At the same time,the large-scale artificial neural network for building AI systems is thirsty for computing power,which is barely satisfied by ...AI development has brought great success to upgrading the information age.At the same time,the large-scale artificial neural network for building AI systems is thirsty for computing power,which is barely satisfied by the conventional computing hardware.In the post-Moore era,the increase in computing power brought about by the size reduction of CMOS in very large-scale integrated circuits(VLSIC)is challenging to meet the growing demand for AI computing power.To address the issue,technical approaches like neuromorphic computing attract great attention because of their feature of breaking Von-Neumann architecture,and dealing with AI algorithms much more parallelly and energy efficiently.Inspired by the human neural network architecture,neuromorphic computing hardware is brought to life based on novel artificial neurons constructed by new materials or devices.Although it is relatively difficult to deploy a training process in the neuromorphic architecture like spiking neural network(SNN),the development in this field has incubated promising technologies like in-sensor computing,which brings new opportunities for multidisciplinary research,including the field of optoelectronic materials and devices,artificial neural networks,and microelectronics integration technology.The vision chips based on the architectures could reduce unnecessary data transfer and realize fast and energy-efficient visual cognitive processing.This paper reviews firstly the architectures and algorithms of SNN,and artificial neuron devices supporting neuromorphic computing,then the recent progress of in-sensor computing vision chips,which all will promote the development of AI.展开更多
Memtransistors in which the source-drain channel conductance can be nonvolatilely manipulated through the gate signals have emerged as promising components for implementing neuromorphic computing.On the other side,it ...Memtransistors in which the source-drain channel conductance can be nonvolatilely manipulated through the gate signals have emerged as promising components for implementing neuromorphic computing.On the other side,it is known that the complementary metal-oxide-semiconductor(CMOS)field effect transistors have played the fundamental role in the modern integrated circuit technology.Therefore,will complementary memtransistors(CMT)also play such a role in the future neuromorphic circuits and chips?In this review,various types of materials and physical mechanisms for constructing CMT(how)are inspected with their merits and need-to-address challenges discussed.Then the unique properties(what)and poten-tial applications of CMT in different learning algorithms/scenarios of spiking neural networks(why)are reviewed,including super-vised rule,reinforcement one,dynamic vision with in-sensor computing,etc.Through exploiting the complementary structure-related novel functions,significant reduction of hardware consuming,enhancement of energy/efficiency ratio and other advan-tages have been gained,illustrating the alluring prospect of design technology co-optimization(DTCO)of CMT towards neuro-morphic computing.展开更多
Secure and efficient outsourced computation in cloud computing environments is crucial for ensuring data confidentiality, integrity, and resource optimization. In this research, we propose novel algorithms and methodo...Secure and efficient outsourced computation in cloud computing environments is crucial for ensuring data confidentiality, integrity, and resource optimization. In this research, we propose novel algorithms and methodologies to address these challenges. Through a series of experiments, we evaluate the performance, security, and efficiency of the proposed algorithms in real-world cloud environments. Our results demonstrate the effectiveness of homomorphic encryption-based secure computation, secure multiparty computation, and trusted execution environment-based approaches in mitigating security threats while ensuring efficient resource utilization. Specifically, our homomorphic encryption-based algorithm exhibits encryption times ranging from 20 to 1000 milliseconds and decryption times ranging from 25 to 1250 milliseconds for payload sizes varying from 100 KB to 5000 KB. Furthermore, our comparative analysis against state-of-the-art solutions reveals the strengths of our proposed algorithms in terms of security guarantees, encryption overhead, and communication latency.展开更多
Mobile edge computing(MEC)-enabled satellite-terrestrial networks(STNs)can provide Internet of Things(IoT)devices with global computing services.Sometimes,the network state information is uncertain or unknown.To deal ...Mobile edge computing(MEC)-enabled satellite-terrestrial networks(STNs)can provide Internet of Things(IoT)devices with global computing services.Sometimes,the network state information is uncertain or unknown.To deal with this situation,we investigate online learning-based offloading decision and resource allocation in MEC-enabled STNs in this paper.The problem of minimizing the average sum task completion delay of all IoT devices over all time periods is formulated.We decompose this optimization problem into a task offloading decision problem and a computing resource allocation problem.A joint optimization scheme of offloading decision and resource allocation is then proposed,which consists of a task offloading decision algorithm based on the devices cooperation aided upper confidence bound(UCB)algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method.Simulation results validate that the proposed scheme performs better than other baseline schemes.展开更多
文摘Although AI and quantum computing (QC) are fast emerging as key enablers of the future Internet, experts believe they pose an existential threat to humanity. Responding to the frenzied release of ChatGPT/GPT-4, thousands of alarmed tech leaders recently signed an open letter to pause AI research to prepare for the catastrophic threats to humanity from uncontrolled AGI (Artificial General Intelligence). Perceived as an “epistemological nightmare”, AGI is believed to be on the anvil with GPT-5. Two computing rules appear responsible for these risks. 1) Mandatory third-party permissions that allow computers to run applications at the expense of introducing vulnerabilities. 2) The Halting Problem of Turing-complete AI programming languages potentially renders AGI unstoppable. The double whammy of these inherent weaknesses remains invincible under the legacy systems. A recent cybersecurity breakthrough shows that banning all permissions reduces the computer attack surface to zero, delivering a new zero vulnerability computing (ZVC) paradigm. Deploying ZVC and blockchain, this paper formulates and supports a hypothesis: “Safe, secure, ethical, controllable AGI/QC is possible by conquering the two unassailable rules of computability.” Pursued by a European consortium, testing/proving the proposed hypothesis will have a groundbreaking impact on the future digital infrastructure when AGI/QC starts powering the 75 billion internet devices by 2025.
基金supported in part by the National Natural Science Foundation of China under Grant 62171465,62072303,62272223,U22A2031。
文摘By pushing computation,cache,and network control to the edge,mobile edge computing(MEC)is expected to play a leading role in fifth generation(5G)and future sixth generation(6G).Nevertheless,facing ubiquitous fast-growing computational demands,it is impossible for a single MEC paradigm to effectively support high-quality intelligent services at end user equipments(UEs).To address this issue,we propose an air-ground collaborative MEC(AGCMEC)architecture in this article.The proposed AGCMEC integrates all potentially available MEC servers within air and ground in the envisioned 6G,by a variety of collaborative ways to provide computation services at their best for UEs.Firstly,we introduce the AGC-MEC architecture and elaborate three typical use cases.Then,we discuss four main challenges in the AGC-MEC as well as their potential solutions.Next,we conduct a case study of collaborative service placement for AGC-MEC to validate the effectiveness of the proposed collaborative service placement strategy.Finally,we highlight several potential research directions of the AGC-MEC.
基金supported by the National Natural Science Foundation of China (52322210, 52172144, 22375069, 21825103, and U21A2069)National Key R&D Program of China (2021YFA1200501)+2 种基金Shenzhen Science and Technology Program (JCYJ20220818102215033, JCYJ20200109105422876)the Innovation Project of Optics Valley Laboratory (OVL2023PY007)Science and Technology Commission of Shanghai Municipality (21YF1454700)。
文摘The utilization of processing capabilities within the detector holds significant promise in addressing energy consumption and latency challenges. Especially in the context of dynamic motion recognition tasks, where substantial data transfers are necessitated by the generation of extensive information and the need for frame-by-frame analysis. Herein, we present a novel approach for dynamic motion recognition, leveraging a spatial-temporal in-sensor computing system rooted in multiframe integration by employing photodetector. Our approach introduced a retinomorphic MoS_(2) photodetector device for motion detection and analysis. The device enables the generation of informative final states, nonlinearly embedding both past and present frames. Subsequent multiply-accumulate (MAC) calculations are efficiently performed as the classifier. When evaluating our devices for target detection and direction classification, we achieved an impressive recognition accuracy of 93.5%. By eliminating the need for frame-by-frame analysis, our system not only achieves high precision but also facilitates energy-efficient in-sensor computing.
基金This work was supported by the National Research Foundation,Singapore under Award No.NRF-CRP24-2020-0002.
文摘The conventional computing architecture faces substantial chal-lenges,including high latency and energy consumption between memory and processing units.In response,in-memory computing has emerged as a promising alternative architecture,enabling computing operations within memory arrays to overcome these limitations.Memristive devices have gained significant attention as key components for in-memory computing due to their high-density arrays,rapid response times,and ability to emulate biological synapses.Among these devices,two-dimensional(2D)material-based memristor and memtransistor arrays have emerged as particularly promising candidates for next-generation in-memory computing,thanks to their exceptional performance driven by the unique properties of 2D materials,such as layered structures,mechanical flexibility,and the capability to form heterojunctions.This review delves into the state-of-the-art research on 2D material-based memristive arrays,encompassing critical aspects such as material selection,device perfor-mance metrics,array structures,and potential applications.Furthermore,it provides a comprehensive overview of the current challenges and limitations associated with these arrays,along with potential solutions.The primary objective of this review is to serve as a significant milestone in realizing next-generation in-memory computing utilizing 2D materials and bridge the gap from single-device characterization to array-level and system-level implementations of neuromorphic computing,leveraging the potential of 2D material-based memristive devices.
基金the National Key Research and Development Program of China(2021YFF0900800)the National Natural Science Foundation of China(61972276,62206116,62032016)+2 种基金the New Liberal Arts Reform and Practice Project of National Ministry of Education(2021170002)the Open Research Fund of the State Key Laboratory for Management and Control of Complex Systems(20210101)Tianjin University Talent Innovation Reward Program for Literature and Science Graduate Student(C1-2022-010)。
文摘Powered by advanced information technology,more and more complex systems are exhibiting characteristics of the cyber-physical-social systems(CPSS).In this context,computational experiments method has emerged as a novel approach for the design,analysis,management,control,and integration of CPSS,which can realize the causal analysis of complex systems by means of“algorithmization”of“counterfactuals”.However,because CPSS involve human and social factors(e.g.,autonomy,initiative,and sociality),it is difficult for traditional design of experiment(DOE)methods to achieve the generative explanation of system emergence.To address this challenge,this paper proposes an integrated approach to the design of computational experiments,incorporating three key modules:1)Descriptive module:Determining the influencing factors and response variables of the system by means of the modeling of an artificial society;2)Interpretative module:Selecting factorial experimental design solution to identify the relationship between influencing factors and macro phenomena;3)Predictive module:Building a meta-model that is equivalent to artificial society to explore its operating laws.Finally,a case study of crowd-sourcing platforms is presented to illustrate the application process and effectiveness of the proposed approach,which can reveal the social impact of algorithmic behavior on“rider race”.
基金the support of the National Natural Science Foundation of China(Grant No.62204201)。
文摘In the past decade,there has been tremendous progress in integrating chalcogenide phase-change materials(PCMs)on the silicon photonic platform for non-volatile memory to neuromorphic in-memory computing applications.In particular,these non von Neumann computational elements and systems benefit from mass manufacturing of silicon photonic integrated circuits(PICs)on 8-inch wafers using a 130 nm complementary metal-oxide semiconductor line.Chip manufacturing based on deep-ultraviolet lithography and electron-beam lithography enables rapid prototyping of PICs,which can be integrated with high-quality PCMs based on the wafer-scale sputtering technique as a back-end-of-line process.In this article,we present an overview of recent advances in waveguide integrated PCM memory cells,functional devices,and neuromorphic systems,with an emphasis on fabrication and integration processes to attain state-of-the-art device performance.After a short overview of PCM based photonic devices,we discuss the materials properties of the functional layer as well as the progress on the light guiding layer,namely,the silicon and germanium waveguide platforms.Next,we discuss the cleanroom fabrication flow of waveguide devices integrated with thin films and nanowires,silicon waveguides and plasmonic microheaters for the electrothermal switching of PCMs and mixed-mode operation.Finally,the fabrication of photonic and photonic–electronic neuromorphic computing systems is reviewed.These systems consist of arrays of PCM memory elements for associative learning,matrix-vector multiplication,and pattern recognition.With large-scale integration,the neuromorphic photonic computing paradigm holds the promise to outperform digital electronic accelerators by taking the advantages of ultra-high bandwidth,high speed,and energy-efficient operation in running machine learning algorithms.
基金Supported by Science and Technology Project of Fujian Province,No.2022Y0025.
文摘BACKGROUND Lymphovascular invasion(LVI)and perineural invasion(PNI)are important prognostic factors for gastric cancer(GC)that indicate an increased risk of metastasis and poor outcomes.Accurate preoperative prediction of LVI/PNI status could help clinicians identify high-risk patients and guide treatment deci-sions.However,prior models using conventional computed tomography(CT)images to predict LVI or PNI separately have had limited accuracy.Spectral CT provides quantitative enhancement parameters that may better capture tumor invasion.We hypothesized that a predictive model combining clinical and spectral CT parameters would accurately preoperatively predict LVI/PNI status in GC patients.AIM To develop and test a machine learning model that fuses spectral CT parameters and clinical indicators to predict LVI/PNI status accurately.METHODS This study used a retrospective dataset involving 257 GC patients(training cohort,n=172;validation cohort,n=85).First,several clinical indicators,including serum tumor markers,CT-TN stages and CT-detected extramural vein invasion(CT-EMVI),were extracted,as were quantitative spectral CT parameters from the delineated tumor regions.Next,a two-step feature selection approach using correlation-based methods and information gain ranking inside a 10-fold cross-validation loop was utilized to select informative clinical and spectral CT parameters.A logistic regression(LR)-based nomogram model was subsequently constructed to predict LVI/PNI status,and its performance was evaluated using the area under the receiver operating characteristic curve(AUC).RESULTS In both the training and validation cohorts,CT T3-4 stage,CT-N positive status,and CT-EMVI positive status are more prevalent in the LVI/PNI-positive group and these differences are statistically significant(P<0.05).LR analysis of the training group showed preoperative CT-T stage,CT-EMVI,single-energy CT values of 70 keV of venous phase(VP-70 keV),and the ratio of standardized iodine concentration of equilibrium phase(EP-NIC)were independent influencing factors.The AUCs of VP-70 keV and EP-NIC were 0.888 and 0.824,respectively,which were slightly greater than those of CT-T and CT-EMVI(AUC=0.793,0.762).The nomogram combining CT-T stage,CT-EMVI,VP-70 keV and EP-NIC yielded AUCs of 0.918(0.866-0.954)and 0.874(0.784-0.936)in the training and validation cohorts,which are significantly higher than using each of single independent factors(P<0.05).CONCLUSION The study found that using portal venous and EP spectral CT parameters allows effective preoperative detection of LVI/PNI in GC,with accuracy boosted by integrating clinical markers.
基金This study was reviewed and approved by the Ethics Committee of Sun Yat-sen University Cancer Center(Approval No.B2023-219-03).
文摘BACKGROUND Gastric cancer(GC)is the most common malignant tumor and ranks third for cancer-related deaths among the worldwide.The disease poses a serious public health problem in China,ranking fifth for incidence and third for mortality.Knowledge of the invasive depth of the tumor is vital to treatment decisions.AIM To evaluate the diagnostic performance of double contrast-enhanced ultrasonography(DCEUS)for preoperative T staging in patients with GC by comparing with multi-detector computed tomography(MDCT).METHODS This single prospective study enrolled patients with GC confirmed by preoperative gastroscopy from July 2021 to March 2023.Patients underwent DCEUS,including ultrasonography(US)and intravenous contrast-enhanced ultrasonography(CEUS),and MDCT examinations for the assessment of preoperative T staging.Features of GC were identified on DCEUS and criteria developed to evaluate T staging according to the 8th edition of AJCC cancer staging manual.The diagnostic performance of DCEUS was evaluated by comparing it with that of MDCT and surgical-pathological findings were considered as the gold standard.RESULTS A total of 229 patients with GC(80 T1,33 T2,59 T3 and 57 T4)were included.Overall accuracies were 86.9%for DCEUS and 61.1%for MDCT(P<0.001).DCEUS was superior to MDCT for T1(92.5%vs 70.0%,P<0.001),T2(72.7%vs 51.5%,P=0.041),T3(86.4%vs 45.8%,P<0.001)and T4(87.7%vs 70.2%,P=0.022)staging of GC.CONCLUSION DCEUS improved the diagnostic accuracy of preoperative T staging in patients with GC compared with MDCT,and constitutes a promising imaging modality for preoperative evaluation of GC to aid individualized treatment decision-making.
文摘BACKGROUND This study presents an evaluation of the computed tomography lymphangio-graphy(CTL)features of lymphatic plastic bronchitis(PB)and primary chylotho-rax to improve the diagnostic accuracy for these two diseases.AIM To improve the diagnosis of lymphatic PB or primary chylothorax,a retrospective analysis of the clinical features and CTL characteristics of 71 patients diagnosed with lymphatic PB or primary chylothorax was performed.METHODS The clinical and CTL data of 71 patients(20 with lymphatic PB,41 with primary chylothorax,and 10 with lymphatic PB with primary chylothorax)were collected retrospectively.CTL was performed in all patients.The clinical manifestations,CTL findings,and conventional chest CT findings of the three groups of patients were compared.The chi-square test or Fisher's exact test was used to compare the differences among the three groups.A difference was considered to be statistically significant when P<0.05.RESULTS(1)The percentages of abnormal contrast medium deposits on CTL in the three groups were as follows:Thoracic duct outlet in 14(70.0%),33(80.5%)and 8(80.0%)patients;peritracheal region in 18(90.0%),15(36.6%)and 8(80.0%)patients;pleura in 6(30.0%),33(80.5%)and 9(90.0%)patients;pericardium in 6(30.0%),6(14.6%)and 4(40.0%)patients;and hilum in 16(80.0%),11(26.8%)and 7(70.0%)patients;and(2)the abnormalities on conven-tional chest CT in the three groups were as follows:Ground-glass opacity in 19(95.0%),18(43.9%)and 8(80.0%)patients;atelectasis in 4(20.0%),26(63.4%)and 7(70.0%)patients;interlobular septal thickening in 12(60.0%),11(26.8%)and 3(30.0%)patients;bronchovascular bundle thickening in 14(70.0%),6(14.6%)and 4(40.0%)patients;localized mediastinal changes in 14(70.0%),14(34.1%),and 7(70.0%)patients;diffuse mediastinal changes in 6(30.0%),5(12.2%),and 3(30.0%)patients;cystic lesions in the axilla in 2(10.0%),6(14.6%),and 2(20.0%)patients;and cystic lesions in the chest wall in 0(0%),2(4.9%),and 2(4.9%)patients.CONCLUSION CTL is well suited to clarify the characteristics of lymphatic PB and primary chylothorax.This method is an excellent tool for diagnosing these two diseases.
文摘BACKGROUND Neoadjuvant chemotherapy(NAC)has become the standard care for advanced adenocarcinoma of esophagogastric junction(AEG),although a part of the patients cannot benefit from NAC.There are no models based on baseline computed tomography(CT)to predict response of Siewert type II or III AEG to NAC with docetaxel,oxaliplatin and S-1(DOS).AIM To develop a CT-based nomogram to predict response of Siewert type II/III AEG to NAC with DOS.METHODS One hundred and twenty-eight consecutive patients with confirmed Siewert type II/III AEG underwent CT before and after three cycles of NAC with DOS,and were randomly and consecutively assigned to the training cohort(TC)(n=94)and the validation cohort(VC)(n=34).Therapeutic effect was assessed by disease-control rate and progressive disease according to the Response Evaluation Criteria in Solid Tumors(version 1.1)criteria.Possible prognostic factors associated with responses after DOS treatment including Siewert classification,gross tumor volume(GTV),and cT and cN stages were evaluated using pretherapeutic CT data in addition to sex and age.Univariate and multivariate analyses of CT and clinical features in the TC were performed to determine independent factors associated with response to DOS.A nomogram was established based on independent factors to predict the response.The predictive performance of the nomogram was evaluated by Concordance index(C-index),calibration and receiver operating characteristics curve in the TC and VC.RESULTS Univariate analysis showed that Siewert type(52/55 vs 29/39,P=0.005),pretherapeutic cT stage(57/62 vs 24/32,P=0.028),GTV(47.3±27.4 vs 73.2±54.3,P=0.040)were significantly associated with response to DOS in the TC.Multivariate analysis of the TC also showed that the pretherapeutic cT stage,GTV and Siewert type were independent predictive factors related to response to DOS(odds ratio=4.631,1.027 and 7.639,respectively;all P<0.05).The nomogram developed with these independent factors showed an excellent performance to predict response to DOS in the TC and VC(C-index:0.838 and 0.824),with area under the receiver operating characteristic curve of 0.838 and 0.824,respectively.The calibration curves showed that the practical and predicted response to DOS effectively coincided.CONCLUSION A novel nomogram developed with pretherapeutic cT stage,GTV and Siewert type predicted the response of Siewert type II/III AEG to NAC with DOS.
基金This work was supported by the Key Scientific and Technological Project of Henan Province(Grant Number 222102210212)Doctoral Research Start Project of Henan Institute of Technology(Grant Number KQ2005)Key Research Projects of Colleges and Universities in Henan Province(Grant Number 23B510006).
文摘In this paper,we consider mobile edge computing(MEC)networks against proactive eavesdropping.To maximize the transmission rate,IRS assisted UAV communications are applied.We take the joint design of the trajectory of UAV,the transmitting beamforming of users,and the phase shift matrix of IRS.The original problem is strong non-convex and difficult to solve.We first propose two basic modes of the proactive eavesdropper,and obtain the closed-form solution for the boundary conditions of the two modes.Then we transform the original problem into an equivalent one and propose an alternating optimization(AO)based method to obtain a local optimal solution.The convergence of the algorithm is illustrated by numerical results.Further,we propose a zero forcing(ZF)based method as sub-optimal solution,and the simulation section shows that the proposed two schemes could obtain better performance compared with traditional schemes.
文摘Practical real-world scenarios such as the Internet,social networks,and biological networks present the challenges of data scarcity and complex correlations,which limit the applications of artificial intelligence.The graph structure is a typical tool used to formulate such correlations,it is incapable of modeling highorder correlations among different objects in systems;thus,the graph structure cannot fully convey the intricate correlations among objects.Confronted with the aforementioned two challenges,hypergraph computation models high-order correlations among data,knowledge,and rules through hyperedges and leverages these high-order correlations to enhance the data.Additionally,hypergraph computation achieves collaborative computation using data and high-order correlations,thereby offering greater modeling flexibility.In particular,we introduce three types of hypergraph computation methods:①hypergraph structure modeling,②hypergraph semantic computing,and③efficient hypergraph computing.We then specify how to adopt hypergraph computation in practice by focusing on specific tasks such as three-dimensional(3D)object recognition,revealing that hypergraph computation can reduce the data requirement by 80%while achieving comparable performance or improve the performance by 52%given the same data,compared with a traditional data-based method.A comprehensive overview of the applications of hypergraph computation in diverse domains,such as intelligent medicine and computer vision,is also provided.Finally,we introduce an open-source deep learning library,DeepHypergraph(DHG),which can serve as a tool for the practical usage of hypergraph computation.
基金supported by the National Science Foundation of China under Grant 62271062 and 62071063by the Zhijiang Laboratory Open Project Fund 2020LCOAB01。
文摘With the rapid development of cloud computing,edge computing,and smart devices,computing power resources indicate a trend of ubiquitous deployment.The traditional network architecture cannot efficiently leverage these distributed computing power resources due to computing power island effect.To overcome these problems and improve network efficiency,a new network computing paradigm is proposed,i.e.,Computing Power Network(CPN).Computing power network can connect ubiquitous and heterogenous computing power resources through networking to realize computing power scheduling flexibly.In this survey,we make an exhaustive review on the state-of-the-art research efforts on computing power network.We first give an overview of computing power network,including definition,architecture,and advantages.Next,a comprehensive elaboration of issues on computing power modeling,information awareness and announcement,resource allocation,network forwarding,computing power transaction platform and resource orchestration platform is presented.The computing power network testbed is built and evaluated.The applications and use cases in computing power network are discussed.Then,the key enabling technologies for computing power network are introduced.Finally,open challenges and future research directions are presented as well.
基金supported by the Foundation for Innovative Research Groups of the National Natural Science Foundation of China under Grant No.61521003the National Natural Science Foundation of China under Grant No.62072467 and 62002383.
文摘Serverless computing is a promising paradigm in cloud computing that greatly simplifies cloud programming.With serverless computing,developers only provide function code to serverless platform,and these functions are invoked by its driven events.Nonetheless,security threats in serverless computing such as vulnerability-based security threats have become the pain point hindering its wide adoption.The ideas in proactive defense such as redundancy,diversity and dynamic provide promising approaches to protect against cyberattacks.However,these security technologies are mostly applied to serverless platform based on“stacked”mode,as they are designed independent with serverless computing.The lack of security consideration in the initial design makes it especially challenging to achieve the all life cycle protection for serverless application with limited cost.In this paper,we present ATSSC,a proactive defense enabled attack tolerant serverless platform.ATSSC integrates the characteristic of redundancy,diversity and dynamic into serverless seamless to achieve high-level security and efficiency.Specifically,ATSSC constructs multiple diverse function replicas to process the driven events and performs cross-validation to verify the results.In order to create diverse function replicas,both software diversity and environment diversity are adopted.Furthermore,a dynamic function refresh strategy is proposed to keep the clean state of serverless functions.We implement ATSSC based on Kubernetes and Knative.Analysis and experimental results demonstrate that ATSSC can effectively protect serverless computing against cyberattacks with acceptable costs.
基金funding from TECNALIA,Basque Research and Technology Alliance(BRTA)supported by the project aOptimization of Deep Learning algorithms for Edge IoT devices for sensorization and control in Buildings and Infrastructures(EMBED)funded by the Gipuzkoa Provincial Council and approved under the 2023 call of the Guipuzcoan Network of Science,Technology and Innovation Program with File Number 2023-CIEN-000051-01.
文摘In a network environment composed of different types of computing centers that can be divided into different layers(clod,edge layer,and others),the interconnection between them offers the possibility of peer-to-peer task offloading.For many resource-constrained devices,the computation of many types of tasks is not feasible because they cannot support such computations as they do not have enough available memory and processing capacity.In this scenario,it is worth considering transferring these tasks to resource-rich platforms,such as Edge Data Centers or remote cloud servers.For different reasons,it is more exciting and appropriate to download various tasks to specific download destinations depending on the properties and state of the environment and the nature of the functions.At the same time,establishing an optimal offloading policy,which ensures that all tasks are executed within the required latency and avoids excessive workload on specific computing centers is not easy.This study presents two alternatives to solve the offloading decision paradigm by introducing two well-known algorithms,Graph Neural Networks(GNN)and Deep Q-Network(DQN).It applies the alternatives on a well-known Edge Computing simulator called PureEdgeSimand compares them with the two defaultmethods,Trade-Off and Round Robin.Experiments showed that variants offer a slight improvement in task success rate and workload distribution.In terms of energy efficiency,they provided similar results.Finally,the success rates of different computing centers are tested,and the lack of capacity of remote cloud servers to respond to applications in real-time is demonstrated.These novel ways of finding a download strategy in a local networking environment are unique as they emulate the state and structure of the environment innovatively,considering the quality of its connections and constant updates.The download score defined in this research is a crucial feature for determining the quality of a download path in the GNN training process and has not previously been proposed.Simultaneously,the suitability of Reinforcement Learning(RL)techniques is demonstrated due to the dynamism of the network environment,considering all the key factors that affect the decision to offload a given task,including the actual state of all devices.
基金supported in part by the National Natural Science Foundation of China under Grant 61901128,62273109the Natural Science Foundation of the Jiangsu Higher Education Institutions of China(21KJB510032).
文摘The growing development of the Internet of Things(IoT)is accelerating the emergence and growth of new IoT services and applications,which will result in massive amounts of data being generated,transmitted and pro-cessed in wireless communication networks.Mobile Edge Computing(MEC)is a desired paradigm to timely process the data from IoT for value maximization.In MEC,a number of computing-capable devices are deployed at the network edge near data sources to support edge computing,such that the long network transmission delay in cloud computing paradigm could be avoided.Since an edge device might not always have sufficient resources to process the massive amount of data,computation offloading is significantly important considering the coop-eration among edge devices.However,the dynamic traffic characteristics and heterogeneous computing capa-bilities of edge devices challenge the offloading.In addition,different scheduling schemes might provide different computation delays to the offloaded tasks.Thus,offloading in mobile nodes and scheduling in the MEC server are coupled to determine service delay.This paper seeks to guarantee low delay for computation intensive applica-tions by jointly optimizing the offloading and scheduling in such an MEC system.We propose a Delay-Greedy Computation Offloading(DGCO)algorithm to make offloading decisions for new tasks in distributed computing-enabled mobile devices.A Reinforcement Learning-based Parallel Scheduling(RLPS)algorithm is further designed to schedule offloaded tasks in the multi-core MEC server.With an offloading delay broadcast mechanism,the DGCO and RLPS cooperate to achieve the goal of delay-guarantee-ratio maximization.Finally,the simulation results show that our proposal can bound the end-to-end delay of various tasks.Even under slightly heavy task load,the delay-guarantee-ratio given by DGCO-RLPS can still approximate 95%,while that given by benchmarked algorithms is reduced to intolerable value.The simulation results are demonstrated the effective-ness of DGCO-RLPS for delay guarantee in MEC.
基金Project supported in part by the National Key Research and Development Program of China(Grant No.2021YFA0716400)the National Natural Science Foundation of China(Grant Nos.62225405,62150027,61974080,61991443,61975093,61927811,61875104,62175126,and 62235011)+2 种基金the Ministry of Science and Technology of China(Grant Nos.2021ZD0109900 and 2021ZD0109903)the Collaborative Innovation Center of Solid-State Lighting and Energy-Saving ElectronicsTsinghua University Initiative Scientific Research Program.
文摘AI development has brought great success to upgrading the information age.At the same time,the large-scale artificial neural network for building AI systems is thirsty for computing power,which is barely satisfied by the conventional computing hardware.In the post-Moore era,the increase in computing power brought about by the size reduction of CMOS in very large-scale integrated circuits(VLSIC)is challenging to meet the growing demand for AI computing power.To address the issue,technical approaches like neuromorphic computing attract great attention because of their feature of breaking Von-Neumann architecture,and dealing with AI algorithms much more parallelly and energy efficiently.Inspired by the human neural network architecture,neuromorphic computing hardware is brought to life based on novel artificial neurons constructed by new materials or devices.Although it is relatively difficult to deploy a training process in the neuromorphic architecture like spiking neural network(SNN),the development in this field has incubated promising technologies like in-sensor computing,which brings new opportunities for multidisciplinary research,including the field of optoelectronic materials and devices,artificial neural networks,and microelectronics integration technology.The vision chips based on the architectures could reduce unnecessary data transfer and realize fast and energy-efficient visual cognitive processing.This paper reviews firstly the architectures and algorithms of SNN,and artificial neuron devices supporting neuromorphic computing,then the recent progress of in-sensor computing vision chips,which all will promote the development of AI.
基金supported by the National Key Research and Development Program of China(No.2023YFB4502200)Natural Science Foundation of China(Nos.92164204 and 62374063)the Science and Technology Major Project of Hubei Province(No.2022AEA001).
文摘Memtransistors in which the source-drain channel conductance can be nonvolatilely manipulated through the gate signals have emerged as promising components for implementing neuromorphic computing.On the other side,it is known that the complementary metal-oxide-semiconductor(CMOS)field effect transistors have played the fundamental role in the modern integrated circuit technology.Therefore,will complementary memtransistors(CMT)also play such a role in the future neuromorphic circuits and chips?In this review,various types of materials and physical mechanisms for constructing CMT(how)are inspected with their merits and need-to-address challenges discussed.Then the unique properties(what)and poten-tial applications of CMT in different learning algorithms/scenarios of spiking neural networks(why)are reviewed,including super-vised rule,reinforcement one,dynamic vision with in-sensor computing,etc.Through exploiting the complementary structure-related novel functions,significant reduction of hardware consuming,enhancement of energy/efficiency ratio and other advan-tages have been gained,illustrating the alluring prospect of design technology co-optimization(DTCO)of CMT towards neuro-morphic computing.
文摘Secure and efficient outsourced computation in cloud computing environments is crucial for ensuring data confidentiality, integrity, and resource optimization. In this research, we propose novel algorithms and methodologies to address these challenges. Through a series of experiments, we evaluate the performance, security, and efficiency of the proposed algorithms in real-world cloud environments. Our results demonstrate the effectiveness of homomorphic encryption-based secure computation, secure multiparty computation, and trusted execution environment-based approaches in mitigating security threats while ensuring efficient resource utilization. Specifically, our homomorphic encryption-based algorithm exhibits encryption times ranging from 20 to 1000 milliseconds and decryption times ranging from 25 to 1250 milliseconds for payload sizes varying from 100 KB to 5000 KB. Furthermore, our comparative analysis against state-of-the-art solutions reveals the strengths of our proposed algorithms in terms of security guarantees, encryption overhead, and communication latency.
基金supported by National Key Research and Development Program of China(2018YFC1504502).
文摘Mobile edge computing(MEC)-enabled satellite-terrestrial networks(STNs)can provide Internet of Things(IoT)devices with global computing services.Sometimes,the network state information is uncertain or unknown.To deal with this situation,we investigate online learning-based offloading decision and resource allocation in MEC-enabled STNs in this paper.The problem of minimizing the average sum task completion delay of all IoT devices over all time periods is formulated.We decompose this optimization problem into a task offloading decision problem and a computing resource allocation problem.A joint optimization scheme of offloading decision and resource allocation is then proposed,which consists of a task offloading decision algorithm based on the devices cooperation aided upper confidence bound(UCB)algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method.Simulation results validate that the proposed scheme performs better than other baseline schemes.