Abstract: The number of patients with osteoporosis or type 2 diabetes mellitus (T2DM) is increasing in aging and westernized societies. Both disorders predispose elderly people to disabling conditions by causing fractures and vascular complications, respectively. It is well documented that bone metabolism and glucose/fat metabolism are etiologically related to each other through osteocalcin action and Wnt signaling. Bone fragility in T2DM, which is not reflected by bone mineral density (BMD), depends on deterioration of bone quality rather than reduction of bone mass. Thus, surrogate markers are needed to compensate for the insensitivity of BMD in assessing fracture risk in T2DM patients. Pentosidine, the endogenous secretory receptor for advanced glycation end products, and insulin-like growth factor I appear to be such candidates, although further studies are required to clarify whether these markers can prospectively predict the occurrence of new fractures in T2DM patients.
Abstract: We propose a learning architecture for integrating multi-modal information, e.g., vision and audio. In recent years, artificial intelligence (AI) has made major progress in key tasks such as language, vision, and voice recognition. Most studies focus on how AI can achieve human-like abilities. In the field of human-robot interaction in particular, some researchers attempt to make robots talk with humans in daily life. The key challenges in making robots converse naturally are the need to consider multi-modal non-verbal information, as humans do, and to learn from small amounts of labeled multi-modal data. Previous multi-modal learning approaches require large amounts of labeled data, while labeled multi-modal data are scarce and difficult to collect. In this research, we address these challenges by integrating single-modal classifiers, each trained on its own modality. Our architecture exploits this knowledge by using a bidirectional associative memory. Furthermore, we conducted a conversation experiment to collect multi-modal non-verbal information. We verify our approach by comparing accuracy between our system and a conventional system trained directly on multi-modal information.
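The abstract above mentions linking single-modal classifiers through a bidirectional associative memory, but gives no details of the mechanism. As a rough illustration only (not the authors' implementation), the following is a minimal Kosko-style bidirectional associative memory: pattern pairs are stored as a sum of outer products, and recall iterates between the two layers until the associated pattern is retrieved. The bipolar vectors here are hypothetical stand-ins for the outputs of two single-modal classifiers (e.g., vision and audio).

```python
import numpy as np

def train_bam(pairs):
    """Store pattern pairs in a weight matrix as a sum of outer products x y^T."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = np.zeros((n, m))
    for x, y in pairs:
        W += np.outer(x, y)
    return W

def recall(W, x, steps=10):
    """Recall the associated pattern by bouncing activations between layers."""
    x = np.asarray(x, dtype=float)
    for _ in range(steps):
        y = np.sign(W.T @ x)  # forward pass: modality A -> modality B
        x = np.sign(W @ y)    # backward pass: modality B -> modality A
    return x, y

# Two hypothetical bipolar (+1/-1) pattern pairs, modality A <-> modality B.
pairs = [
    (np.array([1, -1, 1, -1]), np.array([1, 1, -1])),
    (np.array([-1, 1, -1, 1]), np.array([-1, -1, 1])),
]
W = train_bam(pairs)
x_out, y_out = recall(W, pairs[0][0])  # retrieves the partner pattern [1, 1, -1]
```

In the paper's setting, each stored pair would presumably associate the label representations produced by separately trained single-modal classifiers, so that observing one modality recalls its counterpart without requiring jointly labeled multi-modal training data.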