In an era dominated by artificial intelligence (AI), establishing customer confidence is crucial for the integration and acceptance of AI technologies. This interdisciplinary study examines factors influencing customer trust in AI systems through a mixed-methods approach, blending quantitative analysis with qualitative insights to create a comprehensive conceptual framework. Quantitatively, the study analyzes responses from 1248 participants using structural equation modeling (SEM), exploring interactions between technological factors like perceived usefulness and transparency, psychological factors including perceived risk and domain expertise, and organizational factors such as leadership support and ethical accountability. The results confirm the model, showing significant impacts of these factors on consumer trust and AI adoption attitudes. Qualitatively, the study includes 35 semi-structured interviews and five case studies, providing deeper insight into the dynamics shaping trust. Key themes identified include the necessity of explainability, domain competence, corporate culture, and stakeholder engagement in fostering trust. The qualitative findings complement the quantitative data, highlighting the complex interplay between technology capabilities, human perceptions, and organizational practices in establishing trust in AI. By integrating these findings, the study proposes a novel conceptual model that elucidates how various elements collectively influence consumer trust in AI. This model not only advances theoretical understanding but also offers practical implications for businesses and policymakers.
The research contributes to the discourse on trust creation and decision-making in technology, emphasizing the need for interdisciplinary efforts to address societal challenges associated with technological advancements. It lays the groundwork for future research, including longitudinal, cross-cultural, and industry-specific studies, to further explore consumer trust in AI.
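The full study estimates the model with SEM over latent constructs. As a minimal stand-in for a single structural path, the sketch below regresses a "trust" score on "usefulness" and "transparency" factor scores by ordinary least squares on synthetic data; all variable names, coefficients, and sample values are invented for illustration, and a real SEM would additionally model latent variables and measurement error.

```python
# Sketch of one structural path (trust ~ usefulness + transparency),
# fitted by OLS via the 2x2 normal equations. Synthetic data only.
import random

random.seed(7)
n = 1248  # sample size matching the study

usefulness = [random.gauss(0, 1) for _ in range(n)]
transparency = [random.gauss(0, 1) for _ in range(n)]
# True (invented) path coefficients 0.5 and 0.3, plus noise.
trust = [0.5 * u + 0.3 * t + random.gauss(0, 0.2)
         for u, t in zip(usefulness, transparency)]

def ols2(x1, x2, y):
    """Fit y = b1*x1 + b2*x2 (no intercept; inputs are centered scores)
    by solving the normal equations with Cramer's rule."""
    s11 = sum(a * a for a in x1)
    s22 = sum(b * b for b in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s1y = sum(a * c for a, c in zip(x1, y))
    s2y = sum(b * c for b, c in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return (s1y * s22 - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

b_usefulness, b_transparency = ols2(usefulness, transparency, trust)
```

With a sample of this size the estimates land close to the generating coefficients, which is the property SEM path estimation relies on at scale.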
New challenges arise when building a general-purpose mobile agent middleware in a Grid environment. In this paper, an instance-oriented security mechanism is proposed to deal with possible security threats in such mobile agent systems. The current security support in the Grid Security Infrastructure (GSI) requires users to delegate their privileges to certain hosts. This host-oriented solution is insecure and inflexible for mobile agent applications because it can neither prevent delegation abuse nor properly contain the spread of damage. The proposed solution introduces the security instance, an encapsulation of one set of authorizations and their validity specifications with respect to the agent's specific code segments, or even its states and requests. Applications can flexibly establish and configure their security framework on the same platform by defining instances and operations according to their own logic. Mechanisms are provided to allow users to delegate their identity to these instances instead of to certain hosts. Adopting this instance-oriented security mechanism, a Grid-based general-purpose mobile agent middleware, Everest, is developed to enhance the Globus Toolkit's security support for mobile agent applications.
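The core idea, an authorization set with validity constraints scoped to specific code segments rather than to a host, can be sketched as follows. The Everest middleware's actual API is not shown in the abstract, so the class and method names here are assumptions for illustration only.

```python
# Sketch of a "security instance": one bundle of authorizations plus a
# validity window, bound to specific agent code segments. Names are
# illustrative, not the Everest middleware's real API.
from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityInstance:
    name: str
    authorizations: frozenset   # actions this instance permits
    code_segments: frozenset    # agent code segments it applies to
    not_after: float            # validity deadline (epoch seconds)

    def permits(self, segment: str, action: str, now: float) -> bool:
        """A request is allowed only if the segment is in scope, the
        action is authorized, and the instance is still valid."""
        return (segment in self.code_segments
                and action in self.authorizations
                and now <= self.not_after)

# A user delegates identity to this instance, not to a host: only the
# named code segment may perform "read", and only before the deadline.
inst = SecurityInstance("query-task",
                        authorizations=frozenset({"read"}),
                        code_segments=frozenset({"segment-1"}),
                        not_after=100.0)
```

The point of the design is that compromise of one instance exposes only its narrow authorization set for its validity window, unlike host-oriented delegation, which hands the host the user's full privileges.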
Security has recently been a focus of grid systems. The Grid Security Infrastructure (GSI) provides authentication and authorization services, among others. These mechanisms are mostly objective factors and have not fully met security needs. As a subjective factor, the trust model plays an important role in the security field. A new two-level reputation trust architecture for grids is presented that greatly reduces system management costs: trust relationships among virtual organizations (VOs) are built on domain trust managers (DTMs) rather than on resource nodes (RNs). Taking inter-domain trust propagation as an example, the trust model is improved by integrating global reputation and each recommender's subjective trust into a synthesized final trust value. Moreover, before the grid begins to interact with trustworthy entities, a pre-measure scheme based on accuracy and honesty is presented to further filter out distrustful entities. Experimental results indicate that the model better withstands malicious attacks.
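The synthesis step, combining global reputation with recommenders' subjective trust after a credibility pre-filter, can be sketched as below. The abstract does not give the paper's actual formula, so the weighting scheme (accuracy x honesty weights, a fixed blend factor) and all thresholds are assumptions for illustration.

```python
# Sketch of final-trust synthesis: filter recommenders by accuracy and
# honesty (the "pre-measure" idea), then blend their weighted subjective
# trust with global reputation. Weights/thresholds are invented.
def synthesize_trust(global_rep, recommendations, alpha=0.5,
                     min_accuracy=0.6, min_honesty=0.6):
    """recommendations: iterable of (trust_value, accuracy, honesty),
    each in [0, 1]. Returns the synthesized final trust value."""
    # Pre-measure: drop recommenders below the credibility thresholds.
    credible = [(t, a, h) for t, a, h in recommendations
                if a >= min_accuracy and h >= min_honesty]
    if not credible:
        return global_rep  # no credible recommenders: fall back
    # Weight each surviving recommender by accuracy * honesty.
    total_w = sum(a * h for _, a, h in credible)
    subjective = sum(t * a * h for t, a, h in credible) / total_w
    return alpha * global_rep + (1 - alpha) * subjective
```

A DTM would run this once per peer domain, rather than every resource node maintaining its own trust table, which is where the claimed management-cost reduction comes from.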
Current trusted computing platforms only verify an application's static hash value; they cannot prevent dynamic attacks on the application. This paper presents a static-analysis-based behavior-model building method for trusted computing dynamic verification, comprising control flow graph (CFG) building, finite state automaton (FSA) construction, ε-cycle removal, ε-transition removal, deterministic finite automaton (DFA) construction, trivial-FSA removal, and global pushdown automaton (PDA) construction. Experiments show that the resulting model is a reduced model for dynamic verification that covers all possible paths, because it is built from static analysis of the binary file.
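The middle of this pipeline (ε-transition removal followed by DFA construction) is the classical subset construction with ε-closures. A minimal sketch, with the automaton encoding chosen here for illustration rather than taken from the paper:

```python
# Subset construction with epsilon-closures: convert an NFA that has
# epsilon-transitions into an equivalent DFA over sets of NFA states.
def eps_closure(states, eps):
    """All NFA states reachable from `states` via epsilon-transitions.
    eps: {state: set_of_states} for the epsilon edges."""
    stack, seen = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in eps.get(s, ()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return frozenset(seen)

def nfa_to_dfa(start, delta, eps, alphabet):
    """delta: {(state, symbol): set_of_states}. Returns the DFA start
    state (a frozenset of NFA states) and its transition table."""
    start_set = eps_closure({start}, eps)
    dfa, todo = {}, [start_set]
    while todo:
        cur = todo.pop()
        if cur in dfa:
            continue
        dfa[cur] = {}
        for sym in alphabet:
            nxt = set()
            for s in cur:
                nxt |= delta.get((s, sym), set())
            if nxt:
                closed = eps_closure(nxt, eps)
                dfa[cur][sym] = closed
                todo.append(closed)
    return start_set, dfa
```

In the paper's setting, the NFA states come from the binary's CFG, so determinizing and pruning trivial automata yields the reduced model the verifier matches runtime behavior against.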
Funding (mobile agent middleware paper): Project (No. 602032) supported by the Natural Science Foundation of Zhejiang Province, China.
Funding (two-level reputation trust architecture paper): the National Natural Science Foundation of China (60573141, 60773041); the Hi-Tech Research and Development Program of China (2006AA01Z439); the Natural Science Foundation of Jiangsu Province (BK2005146); the High Technology Research Program of Jiangsu Province (BG2004004, BG2005037, BG2006001); the Key Laboratory of Information Technology Processing of Jiangsu Province (kjs05001, kjs0606); the High Technology Research Program of Nanjing City (2006RZ105); the State Key Laboratory of Modern Communication (9140C1101010603); and the Jiangsu provincial research scheme of natural science for higher education institutions (07KJB520083).
Funding (trusted computing dynamic verification paper): the National High Technology Research and Development Program of China (863 Program) (2006AA01Z442, 2007AA01Z411); the National Natural Science Foundation of China (60673071, 60970115); and the Open Foundation of the State Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, China (AISTC2008Q03).