Abstract
The emergence of large-scale models, such as GPT-3, has become increasingly prominent in the field of natural language processing (NLP) and has also significantly advanced other artificial intelligence (AI) applications. Despite their many benefits, these models require massive amounts of computational resources and energy, making them difficult to deploy in real-world scenarios. According to OpenAI, ChatGPT was trained on a dataset of over 8 million web pages, allowing it to capture important semantic links and hidden information for text-generation tasks. Furthermore, the GPT-3 model has 175 billion parameters, and training it required over 3 million GPU hours, which is beyond the reach of most individuals and organizations. Edge Artificial Intelligence (Edge AI), which refers to the practice of processing AI training tasks on local devices rather than in the cloud, has emerged as a promising solution to address these challenges. The most distinctive feature of Edge AI is that it brings high-performance computing capabilities to the edge, where sensors and IoT devices are located. Under such settings, it reduces latency by processing data locally and expands computing power and data sources by integrating different end devices.