Journal Articles
862 articles found
Audio-Text Multimodal Speech Recognition via Dual-Tower Architecture for Mandarin Air Traffic Control Communications
1
Authors: Shuting Ge, Jin Ren, Yihua Shi, Yujun Zhang, Shunzhi Yang, Jinfeng Yang 《Computers, Materials & Continua》 SCIE EI 2024, Issue 3, pp. 3215-3245 (31 pages)
In air traffic control communications (ATCC), misunderstandings between pilots and controllers could result in fatal aviation accidents. Fortunately, advanced automatic speech recognition technology has emerged as a promising means of preventing miscommunications and enhancing aviation safety. However, most existing speech recognition methods merely incorporate external language models on the decoder side, leading to insufficient semantic alignment between speech and text modalities during the encoding phase. Furthermore, it is challenging to model acoustic context dependencies over long distances due to the longer speech sequences than text, especially for the extended ATCC data. To address these issues, we propose a speech-text multimodal dual-tower architecture for speech recognition. It employs cross-modal interactions to achieve close semantic alignment during the encoding stage and strengthen its capabilities in modeling auditory long-distance context dependencies. In addition, a two-stage training strategy is elaborately devised to derive semantics-aware acoustic representations effectively. The first stage focuses on pre-training the speech-text multimodal encoding module to enhance inter-modal semantic alignment and aural long-distance context dependencies. The second stage fine-tunes the entire network to bridge the input modality variation gap between the training and inference phases and boost generalization performance. Extensive experiments demonstrate the effectiveness of the proposed speech-text multimodal speech recognition method on the ATCC and AISHELL-1 datasets. It reduces the character error rate to 6.54% and 8.73%, respectively, and exhibits substantial performance gains of 28.76% and 23.82% compared with the best baseline model. The case studies indicate that the obtained semantics-aware acoustic representations aid in accurately recognizing terms with similar pronunciations but distinctive semantics. The research provides a novel modeling paradigm for semantics-aware speech recognition in air traffic control communications, which could contribute to the advancement of intelligent and efficient aviation safety management.
Keywords: speech-text multimodal; automatic speech recognition; semantic alignment; air traffic control communications; dual-tower architecture
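As a rough illustration of the dual-tower idea described in this abstract, the sketch below (PyTorch) encodes speech and text in separate towers and lets the acoustic states attend to the text states through cross-modal attention. All module names, layer counts, and dimensions are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (PyTorch) of a dual-tower encoder with cross-modal attention.
# Module and parameter names are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class DualTowerEncoder(nn.Module):
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.speech_tower = nn.TransformerEncoder(enc_layer, num_layers=4)
        self.text_tower = nn.TransformerEncoder(enc_layer, num_layers=4)
        # Cross-modal attention: speech frames query text tokens, pulling in
        # semantic context to align the two modalities during encoding.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, speech_feats, text_embeds):
        s = self.speech_tower(speech_feats)      # (B, T_speech, d_model)
        t = self.text_tower(text_embeds)         # (B, T_text, d_model)
        fused, _ = self.cross_attn(query=s, key=t, value=t)
        return self.norm(s + fused)              # semantics-aware acoustic states

# Example: an 80-frame acoustic sequence aligned against a 20-token transcript.
enc = DualTowerEncoder()
out = enc(torch.randn(2, 80, 256), torch.randn(2, 20, 256))
print(out.shape)  # torch.Size([2, 80, 256])
```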
A Robust Conformer-Based Speech Recognition Model for Mandarin Air Traffic Control
2
Authors: Peiyuan Jiang, Weijun Pan, Jian Zhang, Teng Wang, Junxiang Huang 《Computers, Materials & Continua》 SCIE EI 2023, Issue 10, pp. 911-940 (30 pages)
This study aims to address the deviation in downstream tasks caused by inaccurate recognition results when applying Automatic Speech Recognition (ASR) technology in the Air Traffic Control (ATC) field. This paper presents a novel cascaded model architecture, namely Conformer-CTC/Attention-T5 (CCAT), to build a highly accurate and robust ATC speech recognition model. To tackle the challenges posed by noise and fast speech rate in ATC, the Conformer model is employed to extract robust and discriminative speech representations from raw waveforms. On the decoding side, the Attention mechanism is integrated to facilitate precise alignment between input features and output characters. The Text-To-Text Transfer Transformer (T5) language model is also introduced to handle particular pronunciations and code-mixing issues, providing more accurate and concise textual output for downstream tasks. To enhance the model's robustness, transfer learning and data augmentation techniques are utilized in the training strategy. The model's performance is optimized by performing hyperparameter tuning, such as adjusting the number of attention heads, encoder layers, and the weights of the loss function. The experimental results demonstrate the significant contributions of data augmentation, hyperparameter tuning, and error correction models to the overall model performance. On the Our ATC Corpus dataset, the proposed model achieves a Character Error Rate (CER) of 3.44%, representing a 3.64% improvement compared to the baseline model. Moreover, the effectiveness of the proposed model is validated on two publicly available datasets. On the AISHELL-1 dataset, the CCAT model achieves a CER of 3.42%, showcasing a 1.23% improvement over the baseline model. Similarly, on the LibriSpeech dataset, the CCAT model achieves a Word Error Rate (WER) of 5.27%, demonstrating a performance improvement of 7.67% compared to the baseline model. Additionally, this paper proposes an evaluation criterion for assessing the robustness of ATC speech recognition systems. In robustness evaluation experiments based on this criterion, the proposed model demonstrates a performance improvement of 22% compared to the baseline model.
Keywords: air traffic control; automatic speech recognition; Conformer; robustness evaluation; T5; error correction model
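The cascade combines a CTC branch with an attention decoder; a common way to train such hybrids is a weighted sum of the two losses, sketched below in PyTorch. The weight, padding convention, and tensor shapes are assumptions for illustration, not the CCAT paper's actual settings.

```python
# Minimal sketch of a weighted CTC + attention (cross-entropy) training loss,
# in the style of hybrid CTC/attention models; 0.3 is an illustrative weight.
import torch
import torch.nn as nn
import torch.nn.functional as F

ctc_loss_fn = nn.CTCLoss(blank=0, zero_infinity=True)

def hybrid_loss(ctc_log_probs, input_lens, att_logits, targets, target_lens,
                ctc_weight=0.3, pad_id=0):
    # ctc_log_probs: (T, B, V) log-softmax outputs of the CTC head
    # att_logits:    (B, L, V) decoder logits; targets: (B, L) padded with pad_id
    # Assumed convention: index 0 is shared as CTC blank and padding symbol.
    loss_ctc = ctc_loss_fn(ctc_log_probs, targets, input_lens, target_lens)
    loss_att = F.cross_entropy(att_logits.transpose(1, 2), targets, ignore_index=pad_id)
    return ctc_weight * loss_ctc + (1.0 - ctc_weight) * loss_att
```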
Mapping methods for output-based objective speech quality assessment using data mining (Cited: 2)
3
Authors: 王晶, 赵胜辉, 谢湘, 匡镜明 《Journal of Central South University》 SCIE EI CAS 2014, Issue 5, pp. 1919-1926 (8 pages)
Objective speech quality is difficult to measure without the input reference speech. Mapping methods using data mining are investigated and designed to improve the output-based speech quality assessment algorithm. The degraded speech is firstly separated into three classes (unvoiced, voiced and silence), and then the consistency measurement between the degraded speech signal and the pre-trained reference model for each class is calculated and mapped to an objective speech quality score using data mining. A fuzzy Gaussian mixture model (GMM) is used to generate the artificial reference model trained on perceptual linear predictive (PLP) features. The mean opinion score (MOS) mapping methods including multivariate non-linear regression (MNLR), fuzzy neural network (FNN) and support vector regression (SVR) are designed and compared with the standard ITU-T P.563 method. Experimental results show that the assessment methods with data mining perform better than ITU-T P.563. Moreover, FNN and SVR are more efficient than MNLR, and FNN performs best with a 14.50% increase in the correlation coefficient and a 32.76% decrease in the root-mean-square MOS error.
Keywords: speech quality; data mining; mapping methods; quality assessment; fuzzy neural network; multivariate non-linear regression; consistency measurement; ITU-T
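The final step of the method, mapping per-class consistency measures to a MOS prediction, can be prototyped with an off-the-shelf regressor; the scikit-learn sketch below uses SVR with made-up toy data, so the feature layout and hyperparameters are assumptions, not the paper's configuration.

```python
# Minimal sketch (scikit-learn) of regressing MOS from per-class consistency
# measures; the training rows and hyperparameters are toy values.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# One consistency score per class (unvoiced, voiced, silence) against
# subjective MOS labels; real features would come from the FGMM stage.
X_train = np.array([[0.82, 0.75, 0.90],
                    [0.40, 0.35, 0.60],
                    [0.65, 0.55, 0.80],
                    [0.20, 0.25, 0.50]])
y_train = np.array([4.2, 2.1, 3.3, 1.5])  # subjective MOS

mos_mapper = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
mos_mapper.fit(X_train, y_train)

x_test = np.array([[0.70, 0.60, 0.85]])
print("Predicted objective MOS:", mos_mapper.predict(x_test)[0])
```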
Non-Intrusive Objective Speech Quality Measurement Based on Fuzzy GMM and SVR for Narrowband Speech
4
Authors: 王晶, 张莹, 赵胜辉, 匡镜明 《Journal of Beijing Institute of Technology》 EI CAS 2010, Issue 1, pp. 76-81 (6 pages)
Based on fuzzy Gaussian mixture model (FGMM) and support vector regression (SVR), an improved version of non-intrusive objective measurement for assessing quality of output speech without inputting clean speech is proposed for narrowband speech. Its perceptual linear predictive (PLP) features extracted from clean speech and clustered by FGMM are used as an artificial reference model. Input speech is separated into three classes; for each, a consistency parameter between each feature pair from test speech signals and its counterpart in the pre-trained FGMM reference model is calculated and mapped to an objective speech quality score using the SVR method. The correlation degree between subjective mean opinion score (MOS) and objective MOS is analyzed. Experimental results show that the proposed method offers an effective technique and can give better performances than the ITU-T P.563 method under most of the test conditions for narrowband speech.
Keywords: non-intrusive measurement; objective speech quality; fuzzy Gaussian mixture model (FGMM); support vector regression (SVR)
Evaluation of VoIP Speech Quality Using Neural Network
5
Authors: Angel Garabitov, Aleksandar Tsenov 《通讯和计算机(中英文版)》 2015, Issue 5, pp. 237-243 (7 pages)
Keywords: speech quality evaluation; neural network; VoIP; quality assessment; parameter influence; cost-effectiveness; statistical data; client
Impact of Languages and Accent on Perceived Speech Quality Predicted by Perceptual Evaluation of Speech Quality (PESQ) and Perceptual Objective Listening Quality Assessment (POLQA): Case of Moore, Dioula, French and English
6
Authors: Daouda Konane, Sibiri Tiemounou, Wend Yam Serge Boris Ouedraogo 《Open Journal of Applied Sciences》 2021, Issue 12, pp. 1324-1332 (9 pages)
Perceptual Objective Listening Quality Assessment (POLQA) and Perceptual Evaluation of Speech Quality (PESQ) are commonly used objective standards for evaluating speech quality. These methods were developed and trained on native speakers' speech sequences of some western languages. One can then wonder how these methods perform if they are applied to other languages or if the speaker is non-native. This paper deals with the evaluation of PESQ and POLQA on languages that were not considered when setting up these methods, with emphasis on Moore and Dioula, two local languages of Burkina Faso. Another aspect is the evaluation of these two methods in the case of non-native speakers. For this purpose, on the one hand, the Mean Opinion Score-Listening Quality Objective (MOS-LQO) scores of PESQ and POLQA, computed for Moore and Dioula, are compared to those of French and English. On the other hand, the MOS-LQO scores of French and English are compared for native and non-native speakers, to evaluate the effect of the accent of speakers.
Keywords: speech quality; PESQ; POLQA; language; accent
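For readers who want to reproduce the PESQ side of such comparisons, the open-source `pesq` package (an ITU-T P.862 implementation) computes MOS-LQO from a clean reference and a degraded signal, as sketched below; the file names are placeholders, and POLQA (ITU-T P.863) is proprietary, so no equivalent snippet is shown for it.

```python
# Minimal sketch using the open-source `pesq` package; file paths are
# placeholders. Signals must be 8 kHz ('nb') or 16 kHz ('wb') mono WAV files.
from scipy.io import wavfile
from pesq import pesq

rate, reference = wavfile.read("clean_moore_utterance.wav")    # clean original
rate, degraded = wavfile.read("degraded_moore_utterance.wav")  # coded/processed version

# 'wb' returns MOS-LQO on the wideband scale; use 'nb' for narrowband signals.
mos_lqo = pesq(rate, reference, degraded, 'wb')
print("PESQ MOS-LQO:", mos_lqo)
```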
Control Emotion Intensity for LSTM-Based Expressive Speech Synthesis
7
Authors: Xiaolian Zhu, Liumeng Xue 《国际计算机前沿大会会议论文集》 2019, Issue 2, pp. 654-656 (3 pages)
To improve the performance of human-computer interaction interfaces, emotion is considered to be one of the most important factors. The major objective of expressive speech synthesis is to inject various expressions reflecting different emotions into the synthesized speech. To effectively model and control emotion, emotion intensity is introduced into the expressive speech synthesis model to generate speech conveying delicate and complicated emotional states. The system is composed of an emotion analysis module that extracts a control emotion intensity vector and a speech synthesis module responsible for mapping text characters to the speech waveform. The proposed continuous-valued "perception vector" is a data-driven approach to controlling the model to synthesize speech with different emotion intensities. Compared with a system using a one-hot vector to control emotion intensity, the model using the perception vector is able to learn high-level emotion information from low-level acoustic features. In terms of model controllability and flexibility, both the objective and subjective evaluations demonstrate that the perception vector outperforms the one-hot vector.
Keywords: emotion intensity; expressive speech synthesis; controllable; text-to-speech; neural networks
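A minimal way to picture the "perception vector" idea is to inject a continuous intensity embedding into the text-side hidden states of a synthesis model, as in the PyTorch sketch below; dimensions, module names, and the additive conditioning scheme are illustrative assumptions rather than the paper's architecture.

```python
# Minimal sketch (PyTorch) of conditioning a synthesis encoder on a continuous
# emotion-intensity ("perception") vector instead of a one-hot emotion label.
import torch
import torch.nn as nn

class IntensityConditioner(nn.Module):
    def __init__(self, text_dim=256, intensity_dim=8):
        super().__init__()
        # Projects the data-driven intensity vector into the text feature space.
        self.proj = nn.Linear(intensity_dim, text_dim)

    def forward(self, text_states, intensity_vec):
        # text_states: (B, T, text_dim); intensity_vec: (B, intensity_dim)
        bias = self.proj(intensity_vec).unsqueeze(1)  # (B, 1, text_dim)
        return text_states + bias                     # broadcast over time steps

cond = IntensityConditioner()
states = cond(torch.randn(2, 50, 256), torch.rand(2, 8))  # e.g. a "mildly happy" setting
print(states.shape)  # torch.Size([2, 50, 256])
```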
Design of a Robot Voice Interaction System Based on Speech SDK (Cited: 8)
8
Authors: 陈景帅, 周风余 《北京联合大学学报》 CAS 2010, Issue 1, pp. 25-29 (5 pages)
This paper presents a robot voice interaction system based on Microsoft Speech SDK 5.1. Speech recognition is performed through the SAPI application programming interface provided by Speech SDK 5.1, the recognition results are processed by the application logic, and InterPhonic 5.0 speech synthesis is used in place of the SDK's TTS engine to synthesize speech, realizing voice dialogue and voice control for the AHRR-I reception robot.
Keywords: reception robot; Speech SDK; speech recognition; voice control; SAPI
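For context, the SAPI exposed by Microsoft Speech SDK is a COM interface, so it can be driven from script as well as from C++; the sketch below shows the minimal COM call pattern from Python via pywin32 using the SpVoice object. This only illustrates SAPI access on Windows; the paper's system pairs SAPI-based recognition with InterPhonic 5.0 synthesis, neither of which is reproduced here.

```python
# Minimal sketch of driving SAPI through COM with pywin32 (Windows only).
# It demonstrates the COM call pattern, not the paper's recognition pipeline.
import win32com.client

speaker = win32com.client.Dispatch("SAPI.SpVoice")  # SAPI text-to-speech COM object
for voice in speaker.GetVoices():                   # enumerate installed SAPI voices
    print(voice.GetDescription())
speaker.Speak("Voice interaction system ready.")    # synchronous speak call
```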
Application of Speech SDK in Voice Robot Development (Cited: 6)
9
Authors: 初琦 《北京工业职业技术学院学报》 2008, Issue 4, pp. 32-36 (5 pages)
This paper first describes the role of speech recognition in robot control systems, then focuses on how to develop a SAPI-based speech recognition software system and use it to implement voice command control of a robot and simple human-machine dialogue; it serves as a reference for designing intelligent robots with speech recognition capability.
Keywords: voice robot; speech recognition; SAPI; robot control system; SDK
Free Speech: A Method of Quality-Oriented English Education
10
Authors: 王军, 董亚军, 逄然 《中国环境管理干部学院学报》 CAS 2007, Issue 3, pp. 115-117 (3 pages)
Free speech is an important method of quality-oriented English education. It aims to improve students' oral proficiency, stimulate their interest in learning, and lay a solid foundation for subsequent textbook study. It takes three concrete forms: self-presentation (initial stage), which focuses on encouraging students to speak actively and raising their interest; performance and question-and-answer (development stage), which further develops students' ability and raises the level; and topic discussion (advanced stage), which develops students' ability of self-expression and process control so as to achieve full mastery.
Keywords: quality-oriented English education; oral proficiency; free speech
Design and Implementation of Voice-Controlled Applications Based on Speech SDK (Cited: 40)
11
Authors: 李禹材, 左友东, 郑秀清, 王玲 《计算机应用》 CSCD, Peking University Core Journal 2004, Issue 6, pp. 114-116 (3 pages)
This paper analyzes the structure and working principles of the Speech Application Programming Interface (SAPI) in Microsoft Speech SDK 5.1 and proposes a design method for voice-controlled applications. Taking the design of the speech recognition interface for the "Z+Z intelligent teaching platform" as an example, it presents the main framework and key techniques of such systems.
Keywords: speech recognition; COM; SAPI; voice control
Application of Speech SDK-Based Speech Recognition in 3D Simulation (Cited: 4)
12
Authors: 林鸣霄 《计算机技术与发展》 2011, Issue 11, pp. 160-162, 166 (4 pages)
With the continuous development of 3D simulation technology, simple modes of human-computer interaction can no longer meet users' demands for realism and immersion in simulated environments. To address this, the paper proposes applying Speech SDK 5.1-based speech recognition to a 3D simulation platform, analyzes the working principles of Speech SDK 5.1 with emphasis on its speech recognition interface, and studies the feasibility and key techniques of applying speech recognition in 3D simulation programs. A method for dynamic vocabulary recognition is proposed, and a simple example demonstrates the framework and approach for implementing this kind of technology, offering a useful reference for designing 3D simulation programs with speech recognition capability.
Keywords: speech recognition; 3D simulation; Speech SDK; COM; voice control
Influence of Collective Esophageal Speech Training on Self-efficacy in Chinese Laryngectomees: A Pretest-posttest Group Study (Cited: 2)
13
Authors: Qing CHEN, Jing LUO, Jun-ping LI, Dan-ni JIAN, Yong YUCHI, Hong-xia RUAN, Xiao-li HUANG, Miao WANG 《Current Medical Science》 SCIE CAS 2019, Issue 5, pp. 810-815 (6 pages)
Total laryngectomy affects the speaking functions of many patients. Speech deprivation has great impacts on the quality of life of patients, especially on self-efficacy. Learning esophageal speech represents a way to help laryngectomees speak again. The purpose of this study was to determine the influence of collective esophageal speech training on self-efficacy of laryngectomees. In this study, 28 patients and 30 family members were included. The participants received information about training via telephone or a WeChat group. Collective esophageal speech training was used to educate laryngectomees on esophageal speech. Before and after collective esophageal speech training, all participants completed the General Self-Efficacy Scale (GSES) to assess their perceptions on self-efficacy. Through the training, laryngectomees recovered their speech. After the training, the self-efficacy scores of laryngectomees were higher than those before the training, with significant differences noted (P<0.05). However, family members' scores did not change significantly. In conclusion, collective esophageal speech training is not only convenient and economical, but also improves self-efficacy and confidence of laryngectomees. Greater self-efficacy is helpful for laryngectomees to master esophageal speech and improve their quality of life. In addition, more attention should be focused on improving the self-efficacy of family members and making them give full play to their talent and potential in laryngectomees' voice rehabilitation.
Keywords: laryngectomees; esophageal speech; collective training; self-efficacy; quality of life
Development of Ship Machinery Damage Case Query Software Based on Speech SDK
14
Authors: 刘江, 汪士丰, 徐善林 《机电设备》 2011, Issue 3, pp. 42-44 (3 pages)
Building on an analysis of the Chinese and English engines in Microsoft Speech SDK, a query tool for ship machinery damage cases was implemented in Visual Basic, with the query results read aloud by speech.
Keywords: Speech SDK; voice control; ship machinery damage
Transmission Considerations with QoS Support to Deliver Real-Time Distributed Speech Recognition Applications
15
Authors: Zhu Xiao-gang, Zhu Hong-wen, Rong Meng-tian 《Wuhan University Journal of Natural Sciences》 EI CAS 2002, Issue 1, pp. 65-70 (6 pages)
Distributed speech recognition (DSR) applications have certain QoS (Quality of Service) requirements in terms of latency, packet loss rate, etc. To deliver quality-guaranteed DSR applications over wireline or wireless links, some QoS mechanisms should be provided. We put forward an RTP/RSVP transmission scheme with DSR-specific payload and QoS parameters by modifying the present WAP protocol stack. The simulation result shows that this scheme will provide adequate network bandwidth to keep the real-time transport of DSR data over either wireline or wireless channels.
Keywords: distributed speech recognition; quality of service; real-time transport protocol; resource reservation protocol; wireless application protocol
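To make the transport idea concrete, the sketch below packs a DSR feature frame into a packet with the fixed 12-byte RTP header (RFC 3550 layout); the payload-type value and the 44-byte dummy payload are arbitrary illustrations, and RSVP signalling and the WAP-stack modifications are not covered.

```python
# Minimal sketch of wrapping a DSR feature payload in a fixed 12-byte RTP header.
import struct

def rtp_packet(payload: bytes, seq: int, timestamp: int, ssrc: int, pt: int = 96) -> bytes:
    version, padding, extension, csrc_count, marker = 2, 0, 0, 0, 0
    byte0 = (version << 6) | (padding << 5) | (extension << 4) | csrc_count
    byte1 = (marker << 7) | pt  # pt=96 is an illustrative dynamic payload type
    header = struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc)
    return header + payload

# Example: wrap an illustrative 44-byte DSR feature payload per packet.
pkt = rtp_packet(b"\x00" * 44, seq=1, timestamp=160, ssrc=0x12345678)
print(len(pkt))  # 56 bytes: 12-byte header + 44-byte payload
```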
SELECTION OF PROPER EMBEDDING DIMENSION IN PHASE SPACE RECONSTRUCTION OF SPEECH SIGNALS
16
Authors: Lin Jiayu, Huang Zhiping, Wang Yueke, Shen Zhenken (Dept. 4 and Dept. 8, National University of Defence Technology, Changsha 410073) 《Journal of Electronics (China)》 2000, Issue 2, pp. 161-169 (9 pages)
In phase space reconstruction of time series, the selection of embedding dimension is important. Based on the idea of checking the behavior of near neighbors in the reconstruction dimension, a new method to determine the proper minimum embedding dimension is constructed. This method has a sound theoretical basis and can lead to good results. It can indicate the noise level in the data to be reconstructed, and estimate the reconstruction quality. It is applied to speech signal reconstruction and the generic embedding dimension of speech signals is deduced.
Keywords: speech signals; chaos; phase space reconstruction; embedding dimension; false nearest neighbor; noise level estimation; reconstruction quality
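The abstract's starting point, checking how near neighbors behave as the dimension grows, is the classical false-nearest-neighbor test; a plain NumPy version is sketched below on a synthetic signal. The threshold, delay, and test signal are illustrative, and this is the textbook Kennel-style criterion rather than the paper's improved method.

```python
# Minimal sketch (NumPy) of the false-nearest-neighbor test for choosing the
# embedding dimension; thresholds and the test signal are illustrative.
import numpy as np

def delay_embed(x, dim, tau):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def false_neighbour_fraction(x, dim, tau=1, rtol=15.0):
    emb = delay_embed(x, dim, tau)
    emb_next = delay_embed(x, dim + 1, tau)
    n = len(emb_next)
    emb = emb[:n]
    false = 0
    for i in range(n):
        dists = np.linalg.norm(emb - emb[i], axis=1)
        dists[i] = np.inf
        j = np.argmin(dists)                  # nearest neighbour in dim dimensions
        d_dim = dists[j]
        d_extra = abs(emb_next[i, -1] - emb_next[j, -1])
        if d_dim > 0 and (d_extra / d_dim) > rtol:
            false += 1                        # neighbour separates when dimension grows
    return false / n

# Example on a noisy sine "signal": the fraction should drop as dim increases.
t = np.linspace(0, 20 * np.pi, 2000)
sig = np.sin(t) + 0.01 * np.random.randn(len(t))
for d in range(1, 6):
    print(d, round(false_neighbour_fraction(sig, d, tau=10), 3))
```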
Web Voice Browser Based on an ISLPC Text-to-Speech Algorithm
17
Authors: LIAO Rikun, JI Yuefeng, LI Hui 《Wuhan University Journal of Natural Sciences》 CAS 2006, Issue 5, pp. 1157-1160 (4 pages)
A Web voice browser based on an improved synchronous linear predictive coding (ISLPC) Text-to-Speech (TTS) algorithm and Internet application is proposed. The paper analyzes the features of a TTS system with ISLPC speech synthesis and discusses the design and implementation of the ISLPC TTS-based Web voice browser. The browser integrates Web technology, Chinese information processing, artificial intelligence and the key technology of Chinese ISLPC speech synthesis. It is a visual and audible web browser that can improve information precision for network users. The evaluation results show that the ISLPC-based TTS model has better performance than other browsers in voice quality and in the capability of identifying Chinese characters.
Keywords: improved synchronous linear predictive coding (ISLPC); text-to-speech (TTS); Web voice browser; voice quality
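To illustrate the LPC principle underlying the browser's synthesis back end, the sketch below runs plain LPC analysis and all-pole resynthesis on a synthetic vowel-like frame with librosa and SciPy; the improved synchronous LPC (ISLPC) algorithm itself is not reproduced, and the signal and prediction order are arbitrary.

```python
# Minimal sketch of plain LPC analysis/resynthesis (not the paper's ISLPC).
import numpy as np
import librosa
from scipy.signal import lfilter

sr = 16000
t = np.arange(sr) / sr
# Synthetic vowel-like frame: a few harmonics of a 120 Hz source.
frame = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in range(1, 6)).astype(np.float32)

order = 16
a = librosa.lpc(frame, order=order)   # LPC coefficients [1, a1, ..., a_order]
residual = lfilter(a, [1.0], frame)   # inverse filtering -> excitation signal
resynth = lfilter([1.0], a, residual) # all-pole synthesis filter reconstructs the frame

print("reconstruction error:", float(np.max(np.abs(resynth - frame))))
```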
Effects of Music Therapy on Speech Recognition Ability and Negative Emotions in Patients with Presbycusis
18
Authors: 刘亚珍, 刘烨松, 仇顺锋 《中国听力语言康复科学杂志》 2024, Issue 3, pp. 290-293 (4 pages)
Objective: To analyze the effects of music therapy on negative emotions and speech recognition ability in patients with presbycusis. Methods: Eighty patients with presbycusis admitted to our hospital from January 2021 to March 2023 were enrolled and randomly divided into a control group (40 cases, speech recognition training) and a music group (40 cases, speech recognition training plus music therapy). Auditory function, negative emotions, cognitive function, and quality of life were compared between the two groups before and after training. Results: After training, the mean hearing threshold and Hearing Handicap Inventory for the Elderly-Screening (HHIE-S) scores of the music group were significantly lower than those of the control group (P<0.05); the Self-Rating Anxiety Scale (SAS) and Self-Rating Depression Scale (SDS) scores of the music group were significantly lower than those of the control group (P<0.05); the Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA) scores of the music group were significantly higher than those of the control group (P<0.05); and the music group scored significantly higher than the control group on all dimensions (material, social, physical, psychological) of the Generic Quality of Life Inventory-74 (GQOLI-74) (P<0.05). Conclusion: Music therapy can significantly improve speech recognition ability in patients with presbycusis, relieve negative emotions, and enhance cognitive function and quality of life.
Keywords: music therapy; speech recognition ability; negative emotions; quality of life; cognitive function
Design of a Speech Recognition and Control Software System for a Spectrum Analyzer (Cited: 1)
19
Authors: 赵元琪, 尹永柯, 王洪君, 房明 《现代电子技术》 Peking University Core Journal 2024, Issue 6, pp. 27-31 (5 pages)
With advances in data processing technology and the rapid development of artificial intelligence, users of instruments increasingly pursue more efficient and convenient modes of operation while also placing great value on flexibility and accuracy in use, and speech data are widely used because of their practicality and efficiency. This paper therefore proposes a speech recognition and control software system for spectrum analyzers. The system supports Ubuntu 18.04 and later operating systems and controls the spectrum analyzer through voice commands, providing functions such as voice wake-up, voice recording and saving, and offline speech recognition with conversion to text or executable commands.
Keywords: speech recognition; control software; spectrum analyzer; Ubuntu; voice wake-up; voice dictation
A Style-Controlled Speech Synthesis Algorithm Based on Temporal Alignment
20
Authors: 郭傲, 许柏炎, 蔡瑞初, 郝志峰 《广东工业大学学报》 CAS 2024, Issue 2, pp. 84-92 (9 pages)
The goal of style control in speech synthesis is to convert natural language into correspondingly expressive audio output. Transformer-based style-controlled speech synthesis algorithms improve synthesis speed while maintaining quality, but two shortcomings remain: first, when the style reference audio and the text differ greatly in length, part of the style is missing from the synthesized audio; second, decoding based on ordinary attention is prone to repeated, skipped, or missed readings. To address these problems, this paper proposes a temporal-alignment-based style-controlled speech synthesis algorithm (Temporal Alignment Text-to-Speech, TATTS) that makes effective use of temporal information in both encoding and decoding. In encoding, TATTS introduces a temporally aligned cross-attention module to jointly train style audio and text representations, solving the alignment problem between audio and text of unequal lengths. In decoding, TATTS takes the temporal monotonicity of audio into account and introduces a stepwise monotonic multi-head attention mechanism into the Transformer decoder, solving the misreading problems in synthesized audio. Compared with the baseline model, TATTS improves the naturalness of the synthesized audio by 3.8% and 4.8% on the LJSpeech and VCTK datasets respectively, and improves style similarity by 10% on the VCTK dataset, verifying the effectiveness of the proposed speech synthesis algorithm and demonstrating its style control and transfer capability.
Keywords: speech synthesis; temporal alignment; style control; Transformer; style transfer