Attacks on in-vehicle Controller Area Network (CAN) bus messages severely disrupt normal vehicle communication. Research on intrusion detection models for CAN therefore has clear practical value for vehicle security, and intrusion detection for CAN bus messages can effectively protect the in-vehicle network from unlawful attacks. Previous machine learning-based models cannot effectively identify intrusive abnormal messages because of their inherent shortcomings. To address these shortcomings, we propose a novel method using an Attention Mechanism and AutoEncoder for Intrusion Detection (AMAEID). The AMAEID model first converts the raw hexadecimal message data into binary format to obtain a better input representation. It then encodes and decodes the binary message data with a multi-layer denoising autoencoder to obtain a hidden representation that captures the deeper latent features behind the message data. Finally, AMAEID uses an attention mechanism and a fully connected network to infer whether a message is abnormal. Experimental results on a real in-vehicle CAN bus message dataset, evaluated with three metrics, show that AMAEID outperforms several traditional machine learning algorithms, demonstrating its effectiveness.
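The hexadecimal-to-binary preprocessing step described above can be sketched as follows. This is an illustrative Python sketch (the function name and example payload are hypothetical), not the authors' implementation:

```python
def can_hex_to_bits(payload_hex: str) -> list[int]:
    """Convert a hexadecimal CAN payload (e.g. "2A FF 01") into a flat
    list of bits, giving a fixed-width binary vector for a model input."""
    clean = payload_hex.replace(" ", "")
    bits = []
    for ch in clean:
        value = int(ch, 16)  # each hex digit encodes 4 bits
        bits.extend((value >> i) & 1 for i in (3, 2, 1, 0))
    return bits

# An 8-byte CAN data field yields a 64-bit feature vector.
vec = can_hex_to_bits("2A FF 01 00 00 00 00 00")
```

Binarizing in this way gives every message a uniform bit-level representation, regardless of which bytes of the data field a given CAN ID actually uses.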
With the vigorous development of the automobile industry, in-vehicle networks are constantly upgraded to meet the data transmission requirements of emerging applications. The main requirements are low latency and determinism, especially for autonomous driving. Time-sensitive networking (TSN), based on Ethernet, offers a possible solution. Previous surveys usually investigated TSN from a general perspective, covering many application fields. In this paper, we focus on the application of TSN to in-vehicle networks. We discuss all related TSN standards specified by the IEEE 802.1 working group to date, and we overview and analyze recent literature on various aspects of TSN for automotive applications, including synchronization, resource reservation, scheduling, determinism, software, and hardware. Application scenarios of TSN for in-vehicle networks are analyzed one by one. Since TSN for in-vehicle networks is still at a very early stage, this paper also gives insights on open issues, future research directions, and possible solutions.
Visual media have dominated sensory communication for decades, and the resulting "visual hegemony" has prompted calls for an "auditory return" to restore a holistic balance in cultural reception. Romance of the Three Kingdoms, a classic Chinese literary work, has received significant attention and promotion from leading audio platforms. However, the commercialization of digital audio publishing faces unprecedented challenges because the dissemination of long-form content on digital audio platforms is mismatched with the current trend toward short, fast information consumption. Drawing on the Business Model Canvas and taking Romance of the Three Kingdoms as its main case, this paper argues that a business model for the audio publishing of classical books should be built from three aspects: user evaluation of digital audio platforms, the establishment of value propositions based on the principle of "creative transformation and innovative development," and the improvement of audio publishing infrastructure. Together, these can ensure the healthy operation and development of digital audio platforms, improve their current state, and expand the boundaries of cultural heritage.
Background: Considerable research has addressed audio-driven virtual-character gestures and facial animation with some success. However, few methods generate full-body animations, and the portability of virtual-character gestures and facial animations has received insufficient attention. Methods: We therefore propose a deep-learning-based audio-to-animation-and-blendshape (Audio2AB) network that generates gesture animations and ARKit's 52 facial-expression blendshape weights from audio, the corresponding text, emotion labels, and semantic-relevance labels, producing parametric data for full-body animation. This parameterization can drive full-body animations of virtual characters and improve their portability. In the experiments, we first downsampled the gesture and facial data so that the input, output, and facial data shared the same temporal resolution. The Audio2AB network then encoded the audio, text, emotion labels, and semantic-relevance labels, and fused the text, emotion, and semantic-relevance labels into the audio to obtain richer audio features. Finally, we established links between the body, gesture, and facial decoders and generated the corresponding animation sequences through our proposed GAN-GF loss function. Results: Given audio, the corresponding text, and emotion and semantic-relevance labels, the trained Audio2AB network generated gesture animation data containing blendshape weights, so different 3D virtual-character animations could be created through parameterization. Conclusions: The experimental results showed that the proposed method can generate expressive gestures and facial animations.
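The downsampling step that brings the streams to one shared temporal resolution could, for example, use nearest-frame resampling. The following is a sketch under that assumption (NumPy, hypothetical names), not the paper's exact procedure:

```python
import numpy as np

def downsample_to(frames: np.ndarray, target_len: int) -> np.ndarray:
    """Nearest-frame resampling of a (T, D) feature sequence to target_len
    frames, so that audio, gesture, and blendshape streams can share one
    temporal resolution before training."""
    idx = np.linspace(0, len(frames) - 1, num=target_len)
    return frames[np.round(idx).astype(int)]

# e.g. align 480 frames of 64-dim audio features to a 120-frame gesture clip
audio_feats = np.random.randn(480, 64)
aligned = downsample_to(audio_feats, 120)
```

Nearest-frame selection keeps each output frame identical to some input frame; an averaging or interpolating scheme would be an equally plausible choice here.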
Depression is a common mental health disorder. With current detection methods, specialized physicians often rely on conversations and physiological examinations based on standardized scales as auxiliary measures for depression assessment. Non-biological markers, typically classified as verbal or non-verbal and considered crucial evaluation criteria for depression, have not been effectively utilized, and physicians usually require extensive training and experience to capture changes in these features. Advances in deep learning have provided technical support for capturing such non-biological markers, and several researchers have proposed automatic depression estimation (ADE) systems based on audio and video to assist physicians in capturing these features and conducting depression screening. This article summarizes commonly used public datasets and recent research on audio- and video-based ADE from three perspectives: datasets, deficiencies in existing research, and future development directions.
The types and quantities of volatile organic compounds (VOCs) inside vehicles were determined in one new vehicle and two used vehicles under static conditions using a Thermodesorber-Gas Chromatograph/Mass Spectrometer (TD-GC/MS). Air sampling and analysis were conducted following USEPA Method TO-17. A room-sized environmental test chamber provided stable and accurate control of the required conditions (temperature, humidity, horizontal and vertical airflow velocity, and background VOC concentration). Static testing showed that although the total volatile organic compound (TVOC) level differed markedly between vehicles (4940 μg/m³ in new vehicle A, 1240 μg/m³ in used vehicle B, and 132 μg/m³ in used vehicle C), toluene, xylene, other aromatic compounds, and various C7-C12 alkanes were among the predominant VOC species in all three vehicles. In addition, tetramethyl succinonitrile, possibly derived from foam cushions, was detected in vehicle B. The types and quantities of VOCs varied considerably with factors such as vehicle age, vehicle model, temperature, air exchange rate, and ambient airflow velocity. For example, when the airflow velocity increased from 0.1 m/s to 0.7 m/s, the vehicle's air exchange rate increased from 0.15 h⁻¹ to 0.67 h⁻¹, and the in-vehicle TVOC concentration decreased from 1780 to 1201 μg/m³.
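The link between air exchange rate and cabin TVOC concentration can be illustrated with a textbook well-mixed, single-zone dilution model. This is a sketch for intuition only, not the study's method; it ignores the ongoing interior emissions that keep real cabin concentrations elevated, which is why the measured decrease above is smaller than pure dilution would predict:

```python
import math

def tvoc_after_ventilation(c0_ug_m3: float, ach_per_h: float, hours: float) -> float:
    """Well-mixed single-zone dilution with no ongoing emission source:
    C(t) = C0 * exp(-lambda * t), where lambda is the air exchange rate in h^-1."""
    return c0_ug_m3 * math.exp(-ach_per_h * hours)

# At an air exchange rate of 0.67 h^-1, an initial 1780 ug/m3 would fall to
# roughly half after about one hour of ventilation if nothing kept emitting.
c1 = tvoc_after_ventilation(1780.0, 0.67, 1.0)
```

Comparing this idealized decay against measured concentrations is one way to estimate how strongly interior materials continue to emit.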
This paper presents the embedded-system design of the In-Vehicle System (IVS) for the European Union (EU) emergency call (eCall) system. The IVS transmitter modules are designed, developed, and implemented on a field-programmable gate array (FPGA) device. The modules are simulated, synthesized, and optimized to be loaded onto a reconfigurable device as a system-on-chip (SoC) for the IVS electronic device. All transmitter modules are designed as a single embedded module, and a bench-top test verifies the developed modules. The hardware architecture and interfaces are discussed, and the IVS signal-processing time is analyzed at multiple frequencies. An appropriate frequency range and two hardware interfaces are proposed. A state-of-the-art FPGA design serves as a first implementation approach for the IVS prototyping platform and as an initial step toward implementing all IVS modules on a single SoC chip.
Funding (AMAEID intrusion detection study): Supported by the Chongqing Big Data Engineering Laboratory for Children, the Chongqing Electronics Engineering Technology Research Center for Interactive Learning, and the Project of the Science and Technology Research Program of the Chongqing Education Commission of China (No. KJZD-K201801601).
Funding (audio publishing study): This study is a phased achievement of the project "Research on Innovative Communication of Romance of the Three Kingdoms under Audio Empowerment" (No. 23ZGL16), funded by the Zhuge Liang Research Center, a key research base of social sciences in Sichuan Province.
Funding (Audio2AB study): Supported by the National Natural Science Foundation of China (62277014), the National Key Research and Development Program of China (2020YFC1523100), and the Fundamental Research Funds for the Central Universities of China (PA2023GDSK0047).
Funding (depression estimation review): Supported by the Shandong Province Key R&D Program (No. 2021SFGC0504), the Shandong Provincial Natural Science Foundation (No. ZR2021MF079), and the Science and Technology Development Plan of Jinan (Clinical Medicine Science and Technology Innovation Plan, No. 202225054).