Abstract: In an era dominated by artificial intelligence (AI), establishing customer confidence is crucial for the integration and acceptance of AI technologies. This interdisciplinary study examines the factors influencing customer trust in AI systems through a mixed-methods approach, blending quantitative analysis with qualitative insights to create a comprehensive conceptual framework. Quantitatively, the study analyzes responses from 1,248 participants using structural equation modeling (SEM), exploring interactions between technological factors such as perceived usefulness and transparency, psychological factors including perceived risk and domain expertise, and organizational factors such as leadership support and ethical accountability. The results support the model, showing that these factors significantly affect consumer trust and attitudes toward AI adoption. Qualitatively, the study includes 35 semi-structured interviews and five case studies, providing deeper insight into the dynamics shaping trust. Key themes include the necessity of explainability, domain competence, corporate culture, and stakeholder engagement in fostering trust. The qualitative findings complement the quantitative data, highlighting the complex interplay among technological capabilities, human perceptions, and organizational practices in establishing trust in AI. By integrating these findings, the study proposes a novel conceptual model that explains how these elements collectively influence consumer trust in AI. The model advances theoretical understanding and offers practical implications for businesses and policymakers. The research contributes to the discourse on trust creation and decision-making in technology, emphasizing the need for interdisciplinary efforts to address the societal challenges associated with technological advancement, and it lays the groundwork for future longitudinal, cross-cultural, and industry-specific studies of consumer trust in AI.
Funding: This work was partly supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea Government (MSIT) (No. 2020-0-00952, Development of 5G edge security technology for ensuring 5G+ service stability and availability, 50%) and by the IITP grant funded by the MSIT (Ministry of Science and ICT), Korea (No. IITP-2022-2020-0-01602, ITRC (Information Technology Research Center) support program, 50%).
Abstract: Nowadays, with the significant growth of the mobile market, security issues on the Android operating system have become an urgent matter. Trusted execution environment (TEE) technologies are considered an option for satisfying inviolability requirements by taking advantage of hardware security. However, for Android, TEE technologies still have restrictions and limitations. The first issue is that non-original-equipment-manufacturer developers have limited access to the functionality of hardware-based TEEs. Another issue of hardware-based TEEs is the cross-platform problem: since mobile devices support different TEE vendors, it is difficult for developers to migrate their trusted applications to other Android devices. A software-based TEE solution is a potential approach that allows developers to customize, package, and deliver the product efficiently. Motivated by that idea, this paper introduces the VTEE model, a software-based TEE solution for Android devices. This research analyzes the feasibility of using a virtualized TEE on Android devices by considering two metrics: computing performance and security. The experiments show that the VTEE model can host other software-based TEE services and deliver various cryptographic TEE functions in the Android environment. The security evaluation shows that adding the VTEE model to the existing Android system does not introduce more security issues than the traditional design. Overall, this paper shows applicable solutions for adjusting the balance between computing performance and security.
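To make the shape of such a software-based TEE service more concrete, the sketch below shows a minimal, hypothetical client-facing interface for sealing and unsealing data through an in-process service; the interface name, the key handling, and the AES-GCM choice are illustrative assumptions, not the actual VTEE API described in the paper.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

/** Hypothetical client-facing interface of a software-based TEE service. */
interface SoftwareTeeService {
    byte[] seal(byte[] plaintext) throws Exception;   // encrypt on the "trusted" side
    byte[] unseal(byte[] sealed) throws Exception;    // decrypt on the "trusted" side
}

/** Toy in-process stand-in: keeps the AES key private to the service object,
 *  mimicking how a virtualized TEE would keep key material out of the normal world. */
class InProcessTee implements SoftwareTeeService {
    private final SecretKey key;
    private final SecureRandom rng = new SecureRandom();

    InProcessTee() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        key = kg.generateKey();
    }

    @Override
    public byte[] seal(byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        rng.nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(plaintext);
        byte[] out = new byte[iv.length + ct.length];     // output = IV || ciphertext
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    @Override
    public byte[] unseal(byte[] sealed) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, sealed, 0, 12));
        return c.doFinal(sealed, 12, sealed.length - 12);
    }
}

public class VteeClientSketch {
    public static void main(String[] args) throws Exception {
        SoftwareTeeService tee = new InProcessTee();
        byte[] sealed = tee.seal("secret token".getBytes(StandardCharsets.UTF_8));
        String roundTrip = new String(tee.unseal(sealed), StandardCharsets.UTF_8);
        System.out.println("round trip ok: " + roundTrip.equals("secret token"));
    }
}
```

In a real deployment the key material would live inside the virtualized trusted environment rather than in the calling process; the in-process class here only stands in for that boundary.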
Funding: Supported by the National Natural Science Foundation of China (60503020, 60503033, 60703086), the Natural Science Foundation of Jiangsu Province (BK2006094), the Opening Foundation of the Jiangsu Key Laboratory of Computer Information Processing Technology at Soochow University (KJS0714), the Research Foundation of Nanjing University of Posts and Telecommunications (NY207052, NY207082, NY207084), and Microsoft Research Asia Internet Services Theme 2008.
Abstract: Software systems in distributed environments are changing from a closed and relatively static form, in which users are familiar with each other, to an open and highly dynamic mode that can be accessed by the public. In such circumstances, trust evaluation models have become a focus of intense research. A trust evaluation model establishes a management framework for trust relationships between entities, involving the expression and measurement of trust, the comprehensive calculation of direct and recommended trust values, and the recognition of malicious entities and recommendations. Based on an analysis of several typical trust evaluation models, the classification of trust evaluation ideas and modes is discussed, and the open questions in current research and the directions for future research are pointed out.
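As a worked illustration of the "comprehensive calculation of direct trust value and recommended trust value" that such models perform, the sketch below blends direct experience with credibility-weighted recommendations; the weighting scheme, parameter names, and example values are illustrative assumptions rather than any specific surveyed model.

```java
import java.util.Map;

/** Illustrative combination of direct and recommended trust values in [0, 1]. */
public class TrustCombination {

    /**
     * Final trust = alpha * directTrust + (1 - alpha) * recommendedTrust,
     * where each recommendation is discounted by its recommender's credibility.
     */
    static double finalTrust(double directTrust,
                             Map<String, Double> recommendations,        // recommender -> reported trust
                             Map<String, Double> recommenderCredibility, // recommender -> credibility
                             double alpha) {
        double weightedSum = 0.0, weightTotal = 0.0;
        for (Map.Entry<String, Double> e : recommendations.entrySet()) {
            double cred = recommenderCredibility.getOrDefault(e.getKey(), 0.0);
            weightedSum += cred * e.getValue();
            weightTotal += cred;
        }
        double recommendedTrust = weightTotal > 0 ? weightedSum / weightTotal : 0.0;
        return alpha * directTrust + (1 - alpha) * recommendedTrust;
    }

    public static void main(String[] args) {
        double t = finalTrust(
                0.8,                          // own experience with the target entity
                Map.of("r1", 0.9, "r2", 0.4), // what each recommender reports
                Map.of("r1", 0.7, "r2", 0.2), // how credible each recommender is
                0.6);                         // weight given to direct experience
        System.out.printf("final trust = %.3f%n", t);
    }
}
```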
Funding: Supported by the National High Technology Research and Development Program of China (863 Program) (2006AA01Z442, 2007AA01Z411), the National Natural Science Foundation of China (60673071, 60970115), and the Open Foundation of the State Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, China (AISTC2008Q03).
Abstract: Current trusted computing platforms only verify an application's static hash value; they cannot prevent the application from being attacked dynamically. This paper presents a static-analysis-based behavior model building method for trusted computing dynamic verification, covering control flow graph (CFG) building, finite state automaton (FSA) construction, ε-cycle removal, ε-transition removal, deterministic finite automaton (DFA) construction, trivial FSA removal, and global push-down automaton (PDA) construction. Experiments show that the resulting model is a reduced model for dynamic verification that covers all possible paths, because it is built by static analysis of the binary file.
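As a small illustration of one step in this pipeline, the sketch below computes the ε-closure of a state, which is the core operation behind ε-transition removal before DFA construction; the automaton representation is an assumption for illustration, not the paper's actual data structures.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Illustrative epsilon-closure computation, the core step of epsilon-transition removal. */
public class EpsilonClosure {

    // epsilonEdges.get(s) = states reachable from s by a single epsilon transition
    static Set<Integer> closure(int start, Map<Integer, Set<Integer>> epsilonEdges) {
        Set<Integer> seen = new HashSet<>();
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            int s = stack.pop();
            if (seen.add(s)) {                                   // first visit of s
                for (int next : epsilonEdges.getOrDefault(s, Set.of())) {
                    stack.push(next);
                }
            }
        }
        return seen;  // all states reachable from start via epsilon transitions only
    }

    public static void main(String[] args) {
        Map<Integer, Set<Integer>> eps = new HashMap<>();
        eps.put(0, Set.of(1));
        eps.put(1, Set.of(2));
        System.out.println(closure(0, eps));  // contains 0, 1, 2
    }
}
```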
Funding: Project (No. 602032) supported by the Natural Science Foundation of Zhejiang Province, China.
Abstract: New challenges arise when building general-purpose mobile agent middleware in a Grid environment. In this paper, an instance-oriented security mechanism is proposed to deal with possible security threats in such mobile agent systems. The current security support in the Grid Security Infrastructure (GSI) requires users to delegate their privileges to certain hosts. This host-oriented solution is insecure and inflexible for mobile agent applications because it cannot prevent delegation abuse or effectively contain the spread of damage. The proposed solution introduces the security instance, an encapsulation of a set of authorizations and their validity specifications with respect to an agent's specific code segments, or even its states and requests. Applications can establish and configure their security framework flexibly on the same platform by defining instances and operations according to their own logic. Mechanisms are provided that allow users to delegate their identity to these instances instead of to certain hosts. By adopting this instance-oriented security mechanism, a Grid-based general-purpose mobile agent middleware, Everest, is developed to enhance the Globus Toolkit's security support for mobile agent applications.
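A minimal sketch of what a security instance might look like as a data structure, assuming illustrative field names (code segment identifiers, authorization strings, and a validity window); the actual encapsulation used in Everest may differ.

```java
import java.time.Instant;
import java.util.Set;

/** Illustrative "security instance": a bundle of authorizations bound to specific
 *  agent code segments, valid only within a time window. Field names are assumptions. */
public class SecurityInstance {
    private final String instanceId;
    private final Set<String> codeSegmentIds;   // agent code segments this instance covers
    private final Set<String> authorizations;   // e.g. "read:/data", "invoke:storageService"
    private final Instant notBefore;
    private final Instant notAfter;

    public SecurityInstance(String instanceId, Set<String> codeSegmentIds,
                            Set<String> authorizations, Instant notBefore, Instant notAfter) {
        this.instanceId = instanceId;
        this.codeSegmentIds = Set.copyOf(codeSegmentIds);
        this.authorizations = Set.copyOf(authorizations);
        this.notBefore = notBefore;
        this.notAfter = notAfter;
    }

    /** A request is allowed only if it comes from a covered code segment,
     *  asks for a granted authorization, and falls inside the validity window. */
    public boolean permits(String codeSegmentId, String requestedAuthorization, Instant now) {
        return codeSegmentIds.contains(codeSegmentId)
                && authorizations.contains(requestedAuthorization)
                && !now.isBefore(notBefore)
                && !now.isAfter(notAfter);
    }

    public String id() { return instanceId; }

    public static void main(String[] args) {
        SecurityInstance si = new SecurityInstance("inst-1",
                Set.of("segment-A"), Set.of("invoke:storageService"),
                Instant.now().minusSeconds(60), Instant.now().plusSeconds(3600));
        System.out.println(si.permits("segment-A", "invoke:storageService", Instant.now())); // true
    }
}
```

Delegating identity to an instance rather than to a host then amounts to signing over this bounded bundle, so a compromised host only gains the authorizations the instance itself carries.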
Funding: Supported by the National Natural Science Foundation of China (60573141, 60773041), the Hi-Tech Research and Development Program of China (2006AA01Z439), the Natural Science Foundation of Jiangsu Province (BK2005146), the High Technology Research Program of Jiangsu Province (BG2004004, BG2005037, BG2006001), the Key Laboratory of Information Technology Processing of Jiangsu Province (kjs05001, kjs0606), the High Technology Research Program of Nanjing City (2006RZ105), the State Key Laboratory of Modern Communication (9140C1101010603), and a project sponsored by the Jiangsu Provincial Research Scheme of Natural Science for Higher Education Institutions (07KJB520083).
Abstract: Security has recently been a focus of grid systems. As a tool, the Grid Security Infrastructure (GSI) provides authentication and authorization services, among others. These mechanisms are mostly objective factors and do not fully meet security needs. As a subjective factor, the trust model plays an important role in the security field. A new two-level reputation trust architecture for the grid is presented to greatly reduce system management costs, in which trust relationships among virtual organizations (VOs) are built on domain trust managers (DTMs) rather than on resource nodes (RNs). Taking inter-domain trust propagation as an example, the trust model is improved by integrating global reputation and each recommender's subjective trust into the synthesis of the final trust value. Moreover, before the grid interacts with entities it deems trustworthy, a pre-measure scheme based on accuracy and honesty is presented to further filter out distrustful entities. Experimental results indicate that the model resists malicious attacks better.
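As an illustration of the pre-measure idea, the sketch below filters out recommenders whose accuracy or honesty fall below thresholds before their recommendations are aggregated; the record fields, identifiers, and threshold values are assumptions for illustration, not the paper's actual scheme.

```java
import java.util.List;
import java.util.stream.Collectors;

/** Illustrative pre-measure filter: drop recommenders whose accuracy or honesty
 *  fall below thresholds before their recommendations enter trust aggregation. */
public class PreMeasureFilter {

    record Recommender(String id, double accuracy, double honesty) {}

    static List<Recommender> filter(List<Recommender> candidates,
                                    double minAccuracy, double minHonesty) {
        return candidates.stream()
                .filter(r -> r.accuracy() >= minAccuracy && r.honesty() >= minHonesty)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Recommender> all = List.of(
                new Recommender("dtm-1", 0.92, 0.88),
                new Recommender("dtm-2", 0.40, 0.95),   // inaccurate: filtered out
                new Recommender("dtm-3", 0.85, 0.30));  // dishonest: filtered out
        System.out.println(filter(all, 0.6, 0.6));      // keeps only dtm-1
    }
}
```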