Context: The advent of Artificial Intelligence (AI) requires that most human skills be modeled before they can be implemented in algorithms. This requires a detailed and precise understanding of the interfaces of verbal and emotional communication. AI has made significant progress at the verbal level but only modest progress in recognizing facial emotions, even though this faculty is one of the oldest in humans and is omnipresent in our daily lives. Dysfunction of facial emotional expression is present in many brain pathologies encountered by psychiatrists, neurologists, psychotherapists, and other mental-health professionals, including social workers. It cannot be objectively verified and measured for lack of reliable, valid, and consistently sensitive tools. Indeed, the scientific literature on Visual-Facial-Emotions-Recognition (ViFaEmRe) suffers from the absence of 1) consensual, rational tools for continuous quantified measurement and 2) operational concepts. We have developed software that uses computer morphing in an attempt to overcome these two obstacles. It is called the Method of Analysis and Research of the Integration of Emotions (M.A.R.I.E.). Our primary goal is to use M.A.R.I.E. to understand the physiology of ViFaEmRe in normal, healthy subjects by standardizing the measurements; this will then allow us to focus on subjects who show abnormalities in this ability. Our second goal is to contribute to the progress of AI in the hope of adding the dimension of facial emotional expression recognition.
Objective: To study 1) categorical vs. dimensional aspects of ViFaEmRe, 2) universality vs. idiosyncrasy, 3) immediate vs. ambivalent Emotional-Decision-Making, 4) the Emotional-Fingerprint of a face, and 5) the creation of population reference data. Methods: M.A.R.I.E. enables the rational, quantified measurement of Emotional-Visual-Acuity (EVA) in an individual observer and in a population aged 20 to 70 years. It can also measure the range and intensity of expressed emotions through three Face-Tests, quantify the performance of a sample of 204 observers with hypernormal measures of cognition and “thymia” (defined elsewhere) and low levels of anxiety, and analyze the six primary emotions. Results: We have individualized the following continuous parameters: 1) “Emotional-Visual-Acuity”, 2) “Visual-Emotional-Feeling”, 3) “Emotional-Quotient”, 4) “Emotional-Decision-Making”, 5) “Emotional-Decision-Making Graph” or “Individual-Gun-Trigger”, 6) “Emotional-Fingerprint” or “Key-Graph”, 7) “Emotional-Fingerprint-Graph”, 8) detection of “misunderstanding”, and 9) detection of “error”. This allowed us to build a taxonomy with coding of each face-emotion pair. Each face has specific measurements and graphics. EVA improves from ages 20 to 55 years, then declines. It depends neither on the sex of the observer nor on the face studied. In addition, 1% of people with normal intelligence do not recognize emotions. The categorical dimension varies from person to person. The range and intensity of ViFaEmRe are idiosyncratic, not universally uniform. The recognition of emotions is purely categorical for a single individual and dimensional for a population sample. Conclusions: First, M.A.R.I.E. has made it possible to bring out new concepts and new continuous measurement variables. Comparison between healthy and abnormal individuals shows the significance of this line of study.
From now on, these new functional parameters will allow us to identify and name “emotional” disorders* or illnesses, adding a dimension to behavioral disorders in all pathologies that affect the brain. Second, ViFaEmRe is idiosyncratic, categorical, and a function of the identities of both the observer and the observed face. These findings challenge Artificial Intelligence: no global or regional algorithm for this ability can simply be programmed into a robot, nor can AI yet compete with human abilities and judgment in this domain. *Here, “emotional disorders” refers to disorders of emotional expression and recognition.
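The abstract describes computer morphing as the mechanism for producing graded emotional stimuli that make a continuous EVA measurement possible. As a minimal sketch of the general idea only (the abstract does not specify M.A.R.I.E.'s actual morphing procedure; the linear cross-fade, the flattened toy images, and the 21-step ladder below are all assumptions for illustration), an intensity ladder from a neutral to a fully emotional face could be generated like this:

```python
def morph_faces(neutral, emotional, intensity):
    """Linear cross-fade between a neutral face and a fully emotional one,
    pixel by pixel. `intensity` runs from 0.0 (neutral) to 1.0 (full
    emotion); intermediate values give the graded stimuli a continuous
    EVA-style threshold measurement needs."""
    return [(1.0 - intensity) * n + intensity * e
            for n, e in zip(neutral, emotional)]

# Toy flattened grayscale "faces" standing in for real photographs.
neutral_face = [0.0, 0.0, 0.0, 0.0]
emotional_face = [100.0, 100.0, 100.0, 100.0]

# A 21-step intensity ladder (0%, 5%, ..., 100% expression).
ladder = [morph_faces(neutral_face, emotional_face, step / 20)
          for step in range(21)]
```

Presenting such a ladder to an observer and recording the lowest intensity at which the emotion is named correctly is one way a continuous "visual acuity for emotion" score could be defined.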
Objective: To explore the clinical significance of changes in plasma brain natriuretic peptide (BNP), cardiac troponin I (cTnI), and D-dimer levels in patients with acute pulmonary embolism (APE). Methods: Sixty-four patients with acute pulmonary embolism admitted to our hospital from January 2009 to December 2013 were divided, according to disease severity, into a massive pulmonary embolism group (n = 27) and a non-massive pulmonary embolism group (n = 37). Plasma cTnI, BNP, and D-dimer levels were measured in both groups, and the changes in each index, right-heart function, and mortality were compared. Results: BNP and plasma cTnI levels were significantly higher in the massive group than in the non-massive group (P < 0.05); the difference in D-dimer concentration between the two groups was not statistically significant (P > 0.05); the incidence of right-heart dysfunction and the mortality rate were both higher in the massive group than in the non-massive group, and the differences were statistically significant (P < 0.05). Conclusion: Measuring BNP, cTnI, and D-dimer levels is of significant clinical value for the diagnosis, clinical decision-making, and prognosis assessment of APE patients.
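The group comparisons above rest on standard two-sample significance tests. As an illustrative sketch (the abstract does not state which test the authors used, and the sample values below are invented for demonstration, not the study's data), Welch's t statistic for comparing a biomarker between two independent groups can be computed as:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal
    variances -- the usual first step before reading off a p-value."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

# Hypothetical BNP values (pg/mL), invented for illustration only; the
# abstract does not reproduce the study's raw data.
massive = [850, 920, 780, 1010, 890]
non_massive = [310, 280, 350, 400, 290]

t_stat = welch_t(massive, non_massive)  # large |t| -> small p-value
```

A large absolute t value against the Welch-Satterthwaite degrees of freedom corresponds to the reported P < 0.05; a near-zero t would correspond to the non-significant D-dimer comparison.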
The capability of embedded piezoelectric wafer active sensors (PWAS) to perform in-situ nondestructive evaluation (NDE) for structural health monitoring (SHM) of reinforced concrete (RC) structures strengthened with fiber-reinforced polymer (FRP) composite overlays is explored. First, the disbond detection method was developed on coupon specimens consisting of concrete blocks covered with an FRP composite layer. It was found that the presence of a disbond crack drastically changes the electromechanical (E/M) impedance spectrum measured at the PWAS terminals. The spectral changes depend on the distance between the PWAS and the crack tip. Second, large-scale experiments were conducted on an RC beam strengthened with a carbon fiber reinforced polymer (CFRP) composite overlay. The beam was subjected to an accelerated fatigue load regime in a three-point bending configuration up to a total of 807,415 cycles. During these fatigue tests, the CFRP overlay experienced disbonding beginning at about 500,000 cycles. The PWAS were able to detect the disbonding before it could be reliably seen by visual inspection. Good correlation between the PWAS readings and the position and extent of disbond damage was observed. These preliminary results demonstrate the potential of PWAS technology for SHM of RC structures strengthened with FRP composite overlays.
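A common way to turn the E/M impedance-spectrum changes described above into a scalar damage indicator is a root-mean-square-deviation (RMSD) metric between a baseline spectrum and a later measurement. The abstract does not say which metric was used in this study, so the following is a sketch of the standard RMSD index with toy, illustrative spectra:

```python
import math

def rmsd_index(baseline, current):
    """Root-mean-square-deviation damage metric between a baseline E/M
    impedance spectrum and a later one: 0 for identical spectra, growing
    as the spectrum shifts (e.g., with a spreading disbond)."""
    num = sum((c - b) ** 2 for b, c in zip(baseline, current))
    den = sum(b ** 2 for b in baseline)
    return math.sqrt(num / den)

# Toy spectra (real part of impedance at a few frequencies); values are
# illustrative, not measured data from the paper.
pristine = [100.0, 120.0, 90.0, 110.0]
after_fatigue = [100.0, 135.0, 80.0, 112.0]

damage = rmsd_index(pristine, after_fatigue)
```

Tracking such an index over fatigue cycles gives a single monotone-ish curve whose upturn can flag disbond initiation before visual inspection would.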
To address the problem of safety risk assessment for position construction and installation projects, a safety risk assessment method based on the evidential reasoning algorithm is proposed. Safety risk factors are decomposed level by level down to an indicator layer, the correlations among indicators are sorted out, and a safety risk assessment indicator system for position construction and installation projects is constructed. Based on the assessment results, the RIMER (belief rule base inference methodology using evidential reasoning) method is applied to handle the diverse types of bottom-level input indicators and incomplete assessment information. A case study shows that the method offers good interpretability and traceability, and provides a useful reference for safety work in civil engineering and position management.
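Evidential-reasoning methods such as RIMER aggregate belief degrees from multiple pieces of evidence over a set of assessment grades. As a much-simplified sketch (RIMER's full belief-rule-base machinery is considerably more elaborate, and the risk grades and mass values below are hypothetical, for illustration only), Dempster's rule for fusing two assessments over singleton grades looks like this:

```python
def combine(m1, m2, frame):
    """Dempster's rule for two mass functions restricted to singleton
    hypotheses: multiply agreeing masses, discard conflicting ones, and
    renormalize by 1 - K, where K is the total conflict."""
    k = sum(m1[a] * m2[b] for a in frame for b in frame if a != b)
    if k >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {a: m1[a] * m2[a] / (1.0 - k) for a in frame}

# Hypothetical risk grades and expert assessments, for illustration only.
frame = ["low", "medium", "high"]
expert1 = {"low": 0.6, "medium": 0.3, "high": 0.1}
expert2 = {"low": 0.5, "medium": 0.4, "high": 0.1}

fused = combine(expert1, expert2, frame)
```

The fused masses still sum to one, and agreement between the two assessments sharpens the belief in the grade they both favor, which is the basic effect the full ER aggregation exploits.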
Funding: the National Science Foundation through grants NSF #CMS-9908293 and NSF INT-9904493, the Federal Highway Administration, and the South Carolina Department of Transportation (project Number 614).