In air traffic control communications (ATCC), misunderstandings between pilots and controllers can result in fatal aviation accidents. Advanced automatic speech recognition technology has emerged as a promising means of preventing miscommunication and enhancing aviation safety. However, most existing speech recognition methods merely incorporate external language models on the decoder side, leading to insufficient semantic alignment between the speech and text modalities during the encoding phase. Furthermore, modeling long-distance acoustic context dependencies is challenging because speech sequences are far longer than their text counterparts, especially for extended ATCC data. To address these issues, we propose a speech-text multimodal dual-tower architecture for speech recognition. It employs cross-modal interactions to achieve close semantic alignment during the encoding stage and to strengthen its capability to model long-distance acoustic context dependencies. In addition, a two-stage training strategy is carefully designed to derive semantics-aware acoustic representations effectively. The first stage pre-trains the speech-text multimodal encoding module to enhance inter-modal semantic alignment and long-distance acoustic context modeling. The second stage fine-tunes the entire network to bridge the gap in input modality between the training and inference phases and to boost generalization performance. Extensive experiments demonstrate the effectiveness of the proposed speech-text multimodal speech recognition method on the ATCC and AISHELL-1 datasets. It reduces the character error rate to 6.54% and 8.73%, respectively, a substantial relative improvement of 28.76% and 23.82% over the best baseline model. Case studies indicate that the resulting semantics-aware acoustic representations help accurately recognize terms with similar pronunciations but distinct semantics. This research provides a novel modeling paradigm for semantics-aware speech recognition in air traffic control communications and could contribute to more intelligent and efficient aviation safety management.
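Character error rate (CER), the metric reported above, is the character-level edit distance between a recognizer's output and the reference transcript, divided by the reference length. A minimal sketch (the function names and example strings are illustrative, not taken from the paper):

```python
def edit_distance(ref, hyp):
    # Dynamic-programming Levenshtein distance over characters,
    # using a single rolling row to keep memory at O(len(hyp)).
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # deletion
                        dp[j - 1] + 1,                      # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))  # substitution
            prev = cur
    return dp[n]

def cer(reference, hypothesis):
    # Character error rate = edit distance / reference length.
    return edit_distance(reference, hypothesis) / len(reference)

# One substitution in a five-character reference gives a CER of 0.2.
print(cer("hello", "hallo"))  # 0.2
```

As a back-of-envelope check on the reported relative gain: a 28.76% relative reduction ending at 6.54% implies a baseline CER of roughly 6.54 / (1 - 0.2876) ≈ 9.18%.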
As Artificial Intelligence (AI) tools become essential across industries, distinguishing AI-generated from human-authored text is increasingly challenging. This study assesses the coherence of AI-generated titles and corresponding abstracts in anticipation of rising AI-assisted document production. Our main goal is to examine the correlation between original and AI-generated titles, emphasizing semantic depth and similarity measures, particularly in the context of Large Language Models (LLMs). We argue that LLMs have transformed research focus, dissemination, and citation patterns across five selected knowledge areas: Business Administration and Management (BAM), Computer Science and Information Technology (CS), Engineering and Material Science (EMS), Medicine and Healthcare (MH), and Psychology and Behavioral Sciences (PBS). We collected 15,000 titles and abstracts and narrowed the selection to 2,000 through a rigorous multi-stage screening process adhering to our study's criteria. The results show insufficient evidence that LLMs outperform human authors in article title generation, or that articles from the LLM era demonstrate a marked difference in semantic richness and readability compared with those from the pre-LLM era. Instead, we find that an LLM is a valuable tool that can assist researchers in generating titles. With an LLM's assistance, researchers can ensure that a title reflects the finalized abstract and core research themes, potentially increasing the impact, accessibility, and readability of the academic work.
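Title-abstract similarity of the kind examined above is commonly scored with cosine similarity between vector representations of the two texts. The sketch below is illustrative only: the study's actual similarity measures are not specified here, and a real pipeline would typically use sentence embeddings rather than raw word counts.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase word tokens; a crude stand-in for a real embedding model.
    return re.findall(r"[a-z]+", text.lower())

def cosine_similarity(a, b):
    # Cosine of the angle between two bag-of-words count vectors:
    # 1.0 for identical token distributions, 0.0 for disjoint vocabularies.
    va, vb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical original vs. AI-generated title pair.
print(cosine_similarity(
    "Semantic coherence of AI-generated titles",
    "Coherence of titles generated by AI"))
```

A study like the one described would aggregate such scores per knowledge area and compare pre- and post-LLM-era distributions.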
Funding: This research was funded by the Shenzhen Science and Technology Program (Grant No. RCBS20221008093121051), the General Higher Education Project of Guangdong Provincial Education Department (Grant No. 2020ZDZX3085), the China Postdoctoral Science Foundation (Grant No. 2021M703371), and the Post-Doctoral Foundation Project of Shenzhen Polytechnic (Grant No. 6021330002K).