Journal Articles
3 articles found
1. Face recognition by decision fusion of two-dimensional linear discriminant analysis and local binary pattern (Cited: 1)
Authors: Qicong WANG, Binbin WANG, Xinjie HAO, Lisheng CHEN, Jingmin CUI, Rongrong JI, Yunqi LEI. Frontiers of Computer Science (SCIE, EI, CSCD), 2016, Issue 6, pp. 1118-1129 (12 pages).
Keywords: face recognition algorithm, linear discriminant analysis, decision fusion, binary-valued pattern, two-dimensional, facial expression, local features, binary pattern
2. Survey of visual sentiment prediction for social media analysis (Cited: 1)
Authors: Rongrong JI, Donglin CAO, Yiyi ZHOU, Fuhai CHEN. Frontiers of Computer Science (SCIE, EI, CSCD), 2016, Issue 4, pp. 602-611 (10 pages).
3. A commentary of Multi-skilled AI in MIT Technology Review 2021
Author: Rongrong JI. Fundamental Research (CAS), 2021, Issue 6, pp. 844-845 (2 pages).
Abstract: Towards the end of 2012, artificial intelligence (AI) scientists first figured out how to impart "vision" to neural networks. Later, they also mastered how to enable neural networks to mimic human reasoning, hearing, speaking, and writing. Although AI has become similar or even superior to humans at accomplishing specific tasks, it still does not possess the "flexibility" of the human brain, which can apply skills learned in one situation to another. Taking cues from the growth process of children, we consider the following question: if senses and language can be combined, and AI can perform at a level closer to humans in collecting and processing information, will it be able to develop an understanding of the world? The answer is yes. "Multi-modal" systems can simultaneously acquire human senses and language, thereby generating significantly stronger AI and making it easier for AI to adapt to new situations and solve new problems. Hence, such algorithms can be used to solve more complex problems, or be implanted into robots for communication and collaboration with humans in our daily lives. In September 2020, researchers from the Allen Institute for AI (AI2) created a model that could generate images from captions, demonstrating the algorithm's ability to associate words with visual information. In November, scientists from the University of North Carolina at Chapel Hill developed a method of incorporating images into existing language models, which significantly enhanced the models' ability to comprehend text. Early in 2021, OpenAI extended GPT-3 and released two visual language models: one associates the objects in an image with the words in its description, and the other generates a digital image from a combination of concepts it has learned. In the long run, the progress made by "multi-modal" systems will help break through the limits of AI. It will not only unlock new AI applications, but also make these applications safer and more reliable. More sophisticated multi-modal systems will also aid the development of more advanced robot assistants. Ultimately, multi-modal systems may prove to be the first AI that we can trust.
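The "associating objects in an image with words in its description" behaviour mentioned in the abstract is commonly trained contrastively, by embedding images and captions into one shared space so that matching pairs score higher than mismatched ones. Below is a minimal toy sketch of that idea in PyTorch; the linear "encoders", feature dimensions, and random stand-in features are all hypothetical illustrations, not the architecture of any model named above.

```python
# Toy sketch of contrastive image-text ("multi-modal") training.
# All encoders and dimensions here are hypothetical stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMultiModalModel(nn.Module):
    def __init__(self, image_dim=512, text_dim=300, embed_dim=128):
        super().__init__()
        # Stand-in encoders: real systems use a CNN or vision
        # transformer for images and a transformer for text.
        self.image_proj = nn.Linear(image_dim, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)

    def forward(self, image_feats, text_feats):
        # Project both modalities into one space and L2-normalize,
        # so dot products become cosine similarities.
        img = F.normalize(self.image_proj(image_feats), dim=-1)
        txt = F.normalize(self.text_proj(text_feats), dim=-1)
        return img @ txt.t()  # (batch, batch) similarity matrix

def contrastive_loss(similarity):
    # Matching image-caption pairs lie on the diagonal; train the
    # model to rank them above mismatches, in both directions.
    targets = torch.arange(similarity.size(0))
    loss_i = F.cross_entropy(similarity, targets)      # image -> text
    loss_t = F.cross_entropy(similarity.t(), targets)  # text -> image
    return (loss_i + loss_t) / 2

# Usage with random stand-in features for 8 image-caption pairs.
model = ToyMultiModalModel()
images = torch.randn(8, 512)    # pretend image features
captions = torch.randn(8, 300)  # pretend caption features
loss = contrastive_loss(model(images, captions))
loss.backward()
```

After such training, ranking captions by their similarity to an image (or vice versa) is what lets a system "associate words with visual information" as the commentary describes.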