Journal Articles
2 articles found
1. Bone fragility in type 2 diabetes mellitus (cited by: 6)
Author: Toru Yamaguchi. World Journal of Orthopedics, 2010, No. 1, pp. 3-9 (7 pages)
The number of patients with osteoporosis or type 2 diabetes mellitus (T2DM) is increasing in aging and westernized societies. Both disorders predispose elderly people to disabling conditions by causing fractures and vascular complications, respectively. It is well documented that bone metabolism and glucose/fat metabolism are etiologically related to each other through osteocalcin action and Wnt signaling. Bone fragility in T2DM, which is not reflected by bone mineral density (BMD), depends on bone quality deterioration rather than bone mass reduction. Thus, surrogate markers are needed to replace the insensitivity of BMD in assessing fracture risks of T2DM patients. Pentosidine, the endogenous secretory receptor for advanced glycation end products, and insulin-like growth factor-I seem to be such candidates, although further studies are required to clarify whether or not these markers could predict the occurrence of new fractures in T2DM patients in a prospective fashion.
Keywords: osteoporosis; type 2 diabetes mellitus; fracture risk; osteocalcin; Wnt signaling
2. A System of Associated Intelligent Integration for Human State Estimation
Authors: Akihiro Matsufuji, Wei-Fen Hsieh, Eri Sato-Shimokawara, Toru Yamaguchi. Journal of Mechanics Engineering and Automation, 2019, No. 3, pp. 92-99 (8 pages)
We propose a learning architecture for integrating multi-modal information, e.g., vision and audio. In recent years, artificial intelligence (AI) has made major progress on key tasks such as language, vision, and voice recognition, and most studies focus on how AI can achieve human-like abilities. In the human-robot interaction research field in particular, some researchers attempt to make robots converse with humans in daily life. The key challenges in making robots talk naturally in conversation are the need to consider multi-modal non-verbal information, as humans do, and to learn from a small amount of labeled multi-modal data. Previous multi-modal learning approaches require large amounts of labeled data, yet labeled multi-modal data are scarce and difficult to collect. In this research, we address these challenges by integrating single-modal classifiers, each trained on its own modality. Our architecture exploits this knowledge by using a bi-directional associative memory. Furthermore, we conducted a conversation experiment to collect multi-modal non-verbal information. We verify our approach by comparing accuracies between our system and a conventional system trained on multi-modal information directly.
Keywords: multi-modal learning; bi-directional associative memory; non-verbal; human-robot interaction
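The bi-directional associative memory (BAM) named in the abstract can be illustrated with a minimal sketch: a weight matrix built from outer products of bipolar pattern pairs, which can then be queried in either direction. This is a generic textbook-style BAM under assumed toy encodings, not the authors' actual system; all function names and patterns here are illustrative.

```python
# Minimal sketch of a bipolar bi-directional associative memory (BAM).
# Assumption: patterns are bipolar (+1/-1) vectors; the "vision" and
# "audio" codes below are invented toy examples.

def train_bam(pairs):
    """Build the BAM weight matrix as the sum of outer products
    of associated bipolar pattern pairs (x, y)."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    w = [[0] * m for _ in range(n)]
    for x, y in pairs:
        for i in range(n):
            for j in range(m):
                w[i][j] += x[i] * y[j]
    return w

def sign(v):
    return 1 if v >= 0 else -1

def recall_forward(w, x):
    """Recall the y-pattern associated with x (x -> y direction)."""
    return [sign(sum(w[i][j] * x[i] for i in range(len(x))))
            for j in range(len(w[0]))]

def recall_backward(w, y):
    """Recall the x-pattern associated with y (y -> x direction)."""
    return [sign(sum(w[i][j] * y[j] for j in range(len(y))))
            for i in range(len(w))]

# Associate a toy "vision" code with a toy "audio" code.
pairs = [([1, -1, 1], [1, -1]),
         ([-1, 1, -1], [-1, 1])]
w = train_bam(pairs)
print(recall_forward(w, [1, -1, 1]))   # [1, -1]
print(recall_backward(w, [1, -1]))     # [1, -1, 1]
```

Because recall runs in both directions, the same matrix lets one modality's code retrieve its partner, which is the associative role the abstract assigns to the BAM when fusing single-modal classifiers.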