Funding: Supported in part by the National Natural Science Foundation of China (NSFC) under grant numbers U22A2007 and 62171010, and by the Beijing Natural Science Foundation under grant number L212003.
Abstract: Space/air communications are envisioned as an essential part of next-generation mobile communication networks, providing high-quality global connectivity. However, the inherent broadcast nature of the wireless propagation environment, together with the broad coverage area, poses severe threats to the protection of private data. Emerging covert communication techniques offer a promising path toward robust communication security. To facilitate the practical implementation of covert communications in space/air networks, we present a tutorial overview of its potential, scenarios, and key technologies. Specifically, we first introduce the commonly used covertness constraint model, covert performance metrics, and potential application scenarios. We then summarize several efficient methods for introducing uncertainty into the covert system, followed by critical enabling technologies, including joint resource allocation and deployment/trajectory design, multi-antenna and beamforming techniques, reconfigurable intelligent surfaces (RIS), and artificial intelligence algorithms. Finally, we highlight open issues for future investigation.
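The covertness constraint model mentioned in this abstract is commonly formalized in the covert communications literature as a bound on the Kullback-Leibler divergence between the warden's observations with and without a transmission, D(P1‖P0) ≤ 2ε², which via Pinsker's inequality lower-bounds the warden's minimum detection error probability by 1 − √(D/2). The following is a minimal numerical sketch of that standard model, assuming a zero-mean AWGN channel at the warden; the function names, signatures, and parameter values are illustrative and not taken from the paper:

```python
import math

def kl_gaussian(v1: float, v0: float) -> float:
    """KL divergence D(N(0, v1) || N(0, v0)) per channel use."""
    r = v1 / v0
    return 0.5 * (r - 1.0 - math.log(r))

def covert_ok(tx_power: float, noise_var: float, n: int, eps: float) -> bool:
    """Check the covertness constraint D(P1 || P0) <= 2*eps^2 over n
    channel uses, where P1 is signal-plus-noise and P0 is noise-only
    at the warden."""
    d = n * kl_gaussian(noise_var + tx_power, noise_var)
    return d <= 2.0 * eps * eps

def min_detection_error_lb(tx_power: float, noise_var: float, n: int) -> float:
    """Pinsker-style lower bound on the warden's minimum detection
    error probability (false alarm + missed detection): 1 - sqrt(D/2)."""
    d = n * kl_gaussian(noise_var + tx_power, noise_var)
    return max(0.0, 1.0 - math.sqrt(d / 2.0))
```

For example, with unit noise variance, a transmit power of 0.001 over 100 channel uses satisfies the constraint for ε = 0.1 and keeps the warden's detection error probability above 0.99, while a transmit power of 1.0 violates it, illustrating why covert throughput must shrink as the observation window grows.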
Abstract: People learn causal relations from childhood using counterfactual reasoning. Counterfactual reasoning uses counterfactual examples, which take the form of "what if this had happened differently?". Counterfactual examples are also the basis of counterfactual explanation in explainable artificial intelligence (XAI). However, a framework that relies solely on optimization algorithms to find and present counterfactual samples cannot help users gain a deeper understanding of the system. Without a way to verify their understanding, users can even be misled by such explanations. These limitations can be overcome through an interactive and iterative framework that allows users to explore their desired "what-if" scenarios. The purpose of our research is to develop such a framework. In this paper, we present our "what-if" XAI framework (WiXAI), which visualizes the artificial intelligence (AI) classification model from the perspective of the user's sample and guides their "what-if" exploration. We also formulate how to use the WiXAI framework to generate counterfactuals and to understand, in depth, the feature-feature and feature-output relations for a local sample. These relations help move users toward causal understanding.
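The abstract does not specify WiXAI's counterfactual-generation algorithm, but the underlying idea of a counterfactual example can be sketched concretely: starting from a user's sample, perturb features until the classifier's decision flips. The following is a minimal illustration using a hypothetical linear classifier and a greedy single-feature search; `predict` and `nearest_counterfactual` are illustrative names, not the paper's API:

```python
import numpy as np

def predict(x: np.ndarray, w: np.ndarray, b: float) -> int:
    """Toy linear classifier: label 1 if w.x + b > 0, else 0."""
    return int(np.dot(w, x) + b > 0)

def nearest_counterfactual(x, w, b, step=0.01, max_iter=10_000):
    """Greedy 'what-if' search: repeatedly nudge the single most
    influential feature (largest |weight|) in the direction that
    pushes the score across the decision boundary, and return the
    first perturbed sample whose predicted label flips."""
    y0 = predict(x, w, b)
    cf = np.asarray(x, dtype=float).copy()
    direction = -np.sign(w) if y0 == 1 else np.sign(w)
    i = int(np.argmax(np.abs(w)))  # most influential feature
    for _ in range(max_iter):
        cf[i] += step * direction[i]
        if predict(cf, w, b) != y0:
            return cf
    return None  # no counterfactual found within the step budget
```

For instance, with weights (2, 1), bias −1, and sample (1, 0) classified as 1, the search returns a sample with the first feature reduced to roughly 0.5, i.e. the answer to "what if this feature had been smaller?". An interactive framework like the one described would let the user steer which features are perturbed rather than relying on a fixed search rule.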