Advanced Driver Assistance Systems (ADAS) technologies can assist drivers or be part of automatic driving systems to support the driving process and improve the level of safety and comfort on the road. The Traffic Sign Recognition System (TSRS) is one of the most important components of ADAS. Among the challenges with TSRS is recognizing road signs with the highest accuracy and the shortest processing time. Accordingly, this paper introduces a new real-time methodology for recognizing speed limit signs based on a trio of developed modules. Firstly, the Speed Limit Detection (SLD) module uses the Haar Cascade technique to generate a new SL detector in order to localize SL signs within captured frames. Secondly, the Speed Limit Classification (SLC) module, featuring machine learning classifiers alongside a newly developed model called DeepSL, harnesses the power of a CNN architecture to extract intricate features from speed limit sign images, ensuring efficient and precise recognition. In addition, a new Speed Limit Classifiers Fusion (SLCF) module has been developed by combining the trained ML classifiers and the DeepSL model using the Dempster-Shafer theory of belief functions and ensemble learning's voting technique. Through rigorous software and hardware validation, the proposed methodology achieved highly significant F1 scores of 99.98% and 99.96% for DS theory and the voting method, respectively. Furthermore, a prototype encompassing all components demonstrates outstanding reliability and efficacy, with processing times of 150 ms on the Raspberry Pi board and 81.5 ms on the Jetson Nano board, marking a significant advancement in TSRS technology.
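The Dempster-Shafer combination step mentioned above can be illustrated in isolation. The sketch below applies Dempster's rule of combination to two hypothetical mass functions over speed-limit classes; the class labels and mass values are invented for illustration and this is not the authors' SLCF code:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule of combination.
    Each mass function maps frozenset hypotheses to a belief mass."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass that falls on the empty set
    k = 1.0 - conflict  # normalisation constant
    return {h: v / k for h, v in combined.items()}

# Hypothetical outputs of two speed-limit classifiers over classes "30"/"50";
# frozenset({"30", "50"}) is the "don't know" (full-frame) hypothesis.
m1 = {frozenset({"30"}): 0.8, frozenset({"30", "50"}): 0.2}
m2 = {frozenset({"30"}): 0.6, frozenset({"50"}): 0.1, frozenset({"30", "50"}): 0.3}
fused = dempster_combine(m1, m2)
```

After combination the fused masses again sum to one, and agreement between the two classifiers on class "30" is reinforced.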
Recently, virtual realities and simulations have played important roles in the development of automated driving functionalities. By an appropriate abstraction, they help to design, investigate and communicate real traffic scenario complexity. Especially for edge-case investigations of interactions between vulnerable road users (VRU) and highly automated driving functions, valid virtual models are essential for the quality of results. The aim of this study is to measure, process and integrate real human movement behaviour into a virtual test environment for highly automated vehicle functionalities. The overall system consists of a georeferenced virtual city model and a vehicle dynamics model, including probabilistic sensor descriptions. Using motion capture hardware, real humanoid behaviour is applied to a virtual human avatar in the test environment. Through retargeting methods, which make the avatar's dimensions independent of those of the person under test (PuT), the virtual avatar diversity is increased. To verify the biomechanical behaviour of the virtual avatars, a qualitative study is performed, which is founded on a representative movement sequence. The results confirm the functionality of the methodology and enable PuT-independent control of the virtual avatars in real time.
Lane detection is a fundamental aspect of most current advanced driver assistance systems (ADASs). A large number of existing results focus on the study of vision-based lane detection methods due to the extensive knowledge background and the low cost of camera devices. In this paper, previous vision-based lane detection studies are reviewed in terms of three aspects: lane detection algorithms, integration, and evaluation methods. Next, considering the inevitable limitations of a camera-based lane detection system, the system integration methodologies for constructing more robust detection systems are reviewed and analyzed. The integration methods are further divided into three levels, namely, algorithm, system, and sensor. The algorithm level combines different lane detection algorithms, while the system level integrates other object detection systems to comprehensively detect lane positions. The sensor level uses multi-modal sensors to build a robust lane recognition system. In view of the complexity of evaluating the detection system, and the lack of a common evaluation procedure and uniform metrics in past studies, the existing evaluation methods and metrics are analyzed and classified to propose a better evaluation of the lane detection system. Next, a comparison of representative studies is performed. Finally, the limitations of current lane detection systems are discussed, and a future development trend toward an artificial-society, computational-experiment-based parallel lane detection framework is proposed.
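The algorithm-level integration described above (combining the outputs of different lane detection algorithms) could take many forms; one minimal sketch is a confidence-weighted average of each detector's lateral-offset estimate. The detector outputs below are hypothetical, not taken from any reviewed study:

```python
def fuse_lane_estimates(estimates):
    """Confidence-weighted fusion of lane-offset estimates (metres) from
    several detectors -- one simple form of algorithm-level integration.
    `estimates` is a list of (offset, confidence) pairs."""
    num = sum(offset * conf for offset, conf in estimates)
    den = sum(conf for _, conf in estimates)
    if den == 0:
        raise ValueError("no confident estimate available")
    return num / den

# Hypothetical outputs of e.g. a Hough-based, a model-fitting and a
# learning-based detector for the same frame.
fused_offset = fuse_lane_estimates([(0.20, 0.9), (0.30, 0.6), (0.10, 0.5)])
```

A real system would also have to reconcile full lane models (curvature, width) rather than single point estimates, but the weighting idea is the same.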
The driver’s cognitive and physiological states affect his/her ability to control the vehicle. Thus, these driver states are essential to the safety of automobiles. The design of advanced driver assistance systems (ADAS) or autonomous vehicles will depend on their ability to interact effectively with the driver. A deeper understanding of the driver state is, therefore, paramount. Electroencephalography (EEG) has proven to be one of the most effective methods for driver state monitoring and human error detection. This paper discusses EEG-based driver state detection systems and their corresponding analysis algorithms over the last three decades. First, the commonly used EEG system setup for driver state studies is introduced. Then, the EEG signal preprocessing, feature extraction, and classification algorithms for driver state detection are reviewed. Finally, EEG-based driver state monitoring research is reviewed in depth, and its future development is discussed. It is concluded that the current EEG-based driver state monitoring algorithms are promising for safety applications. However, many improvements are still required in EEG artifact reduction, real-time processing, and between-subject classification accuracy.
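Band-power features of the kind commonly reviewed in such EEG work (e.g., relative alpha-band power as a drowsiness indicator) can be sketched with a plain FFT. This is a generic illustration with a synthetic signal, not the pipeline of any specific system from the review:

```python
import numpy as np

def band_power(signal, fs, band):
    """Relative power of a signal in a frequency band [low, high) Hz,
    computed from the FFT power spectrum -- a minimal spectral feature."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].sum() / psd.sum()

fs = 256                          # sampling rate in Hz (illustrative)
t = np.arange(fs) / fs            # one second of samples
eeg = np.sin(2 * np.pi * 10 * t)  # synthetic pure 10 Hz "alpha" tone
alpha = band_power(eeg, fs, (8, 13))
```

For the synthetic 10 Hz tone essentially all spectral power falls inside the 8-13 Hz alpha band, so the relative alpha power is close to 1.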
New approaches for testing autonomous driving functions use Virtual Reality (VR) to analyze the behavior of automated vehicles in various scenarios. The real-time simulation of environment sensors is still a challenge. In this paper, the conception, development and validation of an automotive radar raw data sensor model is shown. For the implementation, the Unreal VR engine developed by Epic Games is used. The model consists of a sending antenna model, a propagation model and a receiving antenna model. The microwave field propagation is simulated by a ray tracing approach, using the method of shooting and bouncing rays to cover the field. A diffuse scattering model is implemented to simulate the influence of rough structures on the reflection of rays. To parameterize the model, simple reflectors are used. The validation is done by comparing measured radar patterns of pedestrians and cyclists with simulated values. The outcome is that the developed model shows valid results, even though it still has performance deficits: diffusely scattered rays can currently be bounced only once, which produces inadequacies in some scenarios. In summary, the paper shows a high potential for real-time simulation of radar sensors by using ray tracing in virtual reality.
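The elementary operation behind the shooting-and-bouncing-rays coverage described above is specular reflection of a ray direction about a surface normal. A minimal sketch follows; the paper's actual model additionally applies antenna patterns and the diffuse scattering term:

```python
import numpy as np

def reflect(direction, normal):
    """Specular reflection of a ray direction about a surface normal:
    r = d - 2 (d . n) n, with n normalised to unit length."""
    d = np.asarray(direction, dtype=float)
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

# A ray travelling forward and downward bounces off flat ground (normal +z):
# the horizontal component is preserved, the vertical component flips sign.
r = reflect([1.0, 0.0, -1.0], [0.0, 0.0, 1.0])
```

In a shooting-and-bouncing-rays solver, this step is applied at every surface hit until a ray leaves the scene or an iteration limit is reached.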
Sight obstructions along road curves can lead to a crash if the driver is not able to stop the vehicle in time. This is a particular issue along curves with limited available sight, where speed management is necessary to avoid unsafe situations (e.g., driving off the road or invading the opposite traffic lane). To address this issue, we proposed a novel intelligent speed adaptation (ISA) system for visibility, called V-ISA. It estimates real-time safe speed limits based on the prevailing sight conditions. V-ISA comes in three variants with specific feedback modalities: (1) visual information, (2) auditory information, and (3) direct intervention to assume control over the vehicle speed. Here, we investigated the efficiency of each of the three V-ISA variants on driving speed choice and lateral behavioural response along road curves with limited and unsafe available sight distances, using a driving simulator. We also considered curve geometry (curve direction: rightward vs. leftward). Sixty active drivers were recruited for the study. While half of them (experimental group) tested the three V-ISA variants (and a V-ISA-off condition), the other half always drove with V-ISA off (validation group). We used a linear mixed-effect model to evaluate the influence of V-ISA on driver behaviour. All V-ISA variants were efficient at reducing speeds at curve entrance points, with no discernible negative impact on driver lateral behaviour. On rightward curves, the intervening V-ISA variant appeared to be the most effective at adapting to sight limitations. The results imply that V-ISA might assist drivers in adjusting their operating speed to the prevailing sight conditions and, consequently, establish safer driving conditions.
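The abstract does not specify how V-ISA computes its safe speed, but a classical stopping-sight-distance relation conveys the idea: the largest speed whose reaction-plus-braking distance fits within the available sight distance. The reaction-time and deceleration defaults below are illustrative AASHTO-style values, not V-ISA's parameters:

```python
import math

def safe_speed(sight_distance, reaction_time=2.5, decel=3.4):
    """Largest speed v (m/s) such that the stopping distance
    d = v * t_r + v^2 / (2 * a) does not exceed the available sight
    distance; obtained by solving the quadratic for v."""
    t, a = reaction_time, decel
    return a * (-t + math.sqrt(t * t + 2.0 * sight_distance / a))

# With 100 m of available sight, the resulting safe speed is roughly
# 19 m/s (about 68 km/h) under these assumed parameters.
v = safe_speed(100.0)
```

As sight distance shrinks along an obstructed curve, the computed safe speed drops, which is the behaviour a visibility-based ISA needs.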
This paper describes the analysis and design of an assistive device for elderly people under development at the Egypt-Japan University of Science and Technology (E-JUST), named the E-JUST assistive device (EJAD). Several experiments were carried out using a motion capture system (VICON) and inertial sensors to identify the human posture during the sit-to-stand motion. The EJAD uses only two inertial measurement units (IMUs) fused through an adaptive neuro-fuzzy inference system (ANFIS) algorithm to imitate the real motion of the caregiver. The EJAD consists of two main parts, a robot arm and an active walker. The robot arm is a 2-degree-of-freedom (2-DOF) planar manipulator. In addition, a back support with a passive joint is used to support the patient's back. The IMUs on the leg and trunk of the patient are used to compensate for and adapt to the EJAD system motion depending on the obtained patient posture. The ANFIS algorithm is used to train the fuzzy system that converts the IMU signals to the right posture of the patient. A control scheme is proposed to control the system motion based on practical measurements taken from the experiments. A computer simulation showed a relatively good performance of the EJAD in assisting the patient.
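The paper fuses its two IMUs through ANFIS; as a much simpler stand-in for an IMU fusion stage, a complementary filter blending a gyro-integrated angle with an accelerometer-derived tilt angle looks like this (all numeric values hypothetical, and this is not the authors' method):

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update of a complementary filter: trust the integrated gyro rate
    at short time scales and the accelerometer tilt at long time scales.
    Angles in degrees, gyro_rate in deg/s, dt in seconds."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

angle = 0.0
# Gyro reports 10 deg/s over a 0.1 s step; accelerometer tilt reads 1.2 deg.
angle = complementary_filter(angle, 10.0, 1.2, 0.1)
```

The blend weight `alpha` trades gyro drift against accelerometer noise; ANFIS, by contrast, learns the mapping from IMU signals to posture from training data.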
In conjunction with the NSF Engineering Research Center for Collaborative Adaptive Sensing of the Atmosphere (CASA), the Department of Electrical and Computer Engineering at the University of Massachusetts Amherst invites applications for a tenure-track position in Integrative Systems Engineering (ISE) at the Assistant Professor level, to begin September 2009.
In this study, we explore a human activity recognition (HAR) system using computer vision for assisted living systems (ALS). Most existing HAR systems are implemented using wired or wireless sensor networks. These systems have limitations such as cost, power issues, weight, and the inability of the elderly to wear and carry them comfortably. These issues could be overcome by a computer-vision-based HAR system, but such systems typically require a memory-intensive image dataset that takes a long time to train. The proposed computer-vision-based system overcomes the shortcomings of existing systems. The authors have used key-joint angles, distances between the key joints, and slopes between the key joints to create a numerical dataset instead of an image dataset. All these parameters in the dataset are recorded via real-time event simulation. The dataset has 780,000 calculated feature values from 20,000 images and is used to train and detect five different human postures: sitting, standing, walking, lying, and falling. The implementation encompasses four distinct algorithms: the decision tree (DT), random forest (RF), support vector machine (SVM), and an ensemble approach. Remarkably, the ensemble technique exhibited exceptional performance metrics, with 99% accuracy, 98% precision, 97% recall, and an F1 score of 99%.
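A key-joint angle feature of the kind described above can be computed from three 2D joint coordinates; a minimal sketch with hypothetical coordinates (the actual system also records inter-joint distances and slopes):

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b formed by points a-b-c, from the dot
    product of the two limb vectors b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_t = dot / (math.hypot(*v1) * math.hypot(*v2))
    cos_t = max(-1.0, min(1.0, cos_t))  # guard against rounding drift
    return math.degrees(math.acos(cos_t))

# Hip-knee-ankle on a straight vertical line: an extended (standing) leg.
standing_knee = joint_angle((0, 2), (0, 1), (0, 0))
# A right-angle bend, as when sitting.
sitting_knee = joint_angle((0, 1), (0, 0), (1, 0))
```

Numerical features like these make the posture dataset far smaller than an equivalent image dataset, which is the paper's central efficiency argument.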
Advanced driver assistance systems, especially autonomous emergency braking and forward collision warnings, have become popular in Japan. To reduce the number of road traffic accidents, safety information should be provided to a driver earlier than avoidance or warning messages, so as to avoid a risky situation. A series of actual running tests was conducted to evaluate the activation timing and effectiveness of awareness messages. Objective analysis showed that the drivers could avoid an obstacle with a sufficient safety margin thanks to any of the awareness messages. Subjective ratings showed that the best timing is 10 s before encountering the obstacle. The results of the objective analysis are limited in the present paper, and further analyses are required.
The past decade has witnessed an acceleration of autonomous vehicle research and development, with technological advances contributed by academia, government, and the industrial and consumer sectors. These advancements hold the potential to improve society by enhancing transportation safety and throughput, where decreased congestion saves time and reduces vehicle emissions. Two of the key technologies enabling vehicle-infrastructure interaction, advanced traffic management, and automated vehicles are automated roadway mapping and reliable vehicle state estimation. In this paper, we present an overview of and new methods for the problem of automated roadway mapping, plus a discussion of the extension of these methods to the problem of vehicle state estimation. Results from the application of these methods to feature mapping and state estimation are presented.
Purpose–Two-handed automobile steering at low vehicle speeds may lead to reduced steering ability at large steering wheel angles and shoulder injury at high steering wheel rates (SWRs). As a first step toward solving these problems, this study aims, firstly, to design a surface electromyography (sEMG) controlled steering assistance interface that enables hands-free steering wheel rotation and, secondly, to validate the effect of this rotation on path-following accuracy. Design/methodology/approach–A total of 24 drivers used biceps brachii sEMG signals to control the steering assistance interface at a maximized SWR in three driving simulator scenarios: U-turn, 90° turn and 45° turn. For comparison, the scenarios were repeated with a slower SWR and with a game steering wheel in place of the steering assistance interface. The path-following accuracy of the steering assistance interface would be validated if it was at least comparable to that of the game steering wheel. Findings–Overall, the steering assistance interface with a maximized SWR was comparable to a game steering wheel. For the U-turn, 90° turn and 45° turn, the sEMG-based human–machine interface (HMI) had median lateral errors of 0.55, 0.3 and 0.2 m, respectively, whereas the game steering wheel had median lateral errors of 0.7, 0.4 and 0.3 m, respectively. The higher accuracy of the sEMG-based HMI was statistically significant in the case of the U-turn. Originality/value–Although production automobiles do not use sEMG-based HMIs, and few studies have proposed sEMG-controlled steering, the results of the current study warrant further development of an sEMG-based HMI for an actual automobile.
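The abstract does not describe the sEMG signal processing chain. A common minimal pattern for turning a raw sEMG trace into an on/off actuation command is rectification followed by a moving-average envelope and a threshold; the window, threshold and sample values below are invented for illustration and are not the authors' interface:

```python
def semg_command(samples, window=5, threshold=0.3):
    """Rectify a raw sEMG trace, smooth it with a trailing moving-average
    envelope, and threshold the envelope into a boolean rotate command."""
    rectified = [abs(s) for s in samples]
    commands = []
    for i in range(len(rectified)):
        lo = max(0, i - window + 1)
        envelope = sum(rectified[lo:i + 1]) / (i + 1 - lo)
        commands.append(envelope > threshold)
    return commands

# A short burst of muscle activity (hypothetical normalised samples): the
# command switches on once the envelope clears the threshold.
burst = [0.0, 0.05, -0.6, 0.7, -0.8, 0.75, 0.1, 0.0]
cmds = semg_command(burst)
```

The envelope smoothing is what keeps the zero-mean, oscillatory raw signal from toggling the command on every sample.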
Many advanced driver assistance systems have entered the market, and automated driving technologies have been developed. Many of them may not work in adverse weather conditions. A forward-looking camera, for example, is the most popular system used for lane detection but does not work on a snow-covered road. The present paper proposes a self-localization system for snowy roads where the roadsides are covered with snow. The system employs a four-layer laser scanner and onboard sensors, and uses only the pre-existing roadside snow poles provided for drivers in snowy regions, without any other road infrastructure. Because the landscape changes greatly in a short time during snowstorms and snow removal operations, it is necessary to restrict the landmarks used for self-localization to tall objects, such as snow poles. A system incorporating this technology will support a driver’s efforts to keep to a lane even in a heavy snowstorm.
Purpose–An individual’s driving style significantly affects overall traffic safety. However, driving style is difficult to identify due to temporal and spatial differences and scene heterogeneity in driving behavior data. As such, the study of real-time driving-style identification methods is of great significance for formulating personalized driving strategies, improving traffic safety and reducing fuel consumption. This study aims to establish a driving style recognition framework based on longitudinal driving operation conditions (DOCs), using a machine learning model and natural driving data collected by a vehicle equipped with an advanced driving assistance system (ADAS). Design/methodology/approach–Specifically, a driving style recognition framework based on longitudinal DOCs was established. To train the model, a real-world driving experiment was conducted. First, the driving styles of 44 drivers were preliminarily identified through natural driving data and video data; drivers were categorized through a subjective evaluation as conservative, moderate or aggressive. Second, based on the ADAS driving data, a criterion for extracting longitudinal DOCs was developed. Third, taking the ADAS data from 47 km of the two test expressways as the research object, six DOCs were calibrated and the characteristic data sets of the different DOCs were extracted and constructed. Finally, four machine learning classification (MLC) models were used to classify and predict driving style based on the natural driving data. Findings–The results showed that six longitudinal DOCs were calibrated according to the proposed calibration criterion. Conservative drivers undertook the largest proportion of the free cruise condition (FCC), while aggressive drivers primarily undertook the FCC, the following-steady condition and the relative-approximation condition. Compared with conservative and moderate drivers, aggressive drivers adopted a smaller time headway (THW) and distance headway (DHW). THW, time-to-collision (TTC) and DHW showed highly significant differences in driving style identification, while longitudinal acceleration (LA) showed no significant difference. Speed and TTC showed no significant difference between moderate and aggressive drivers. In consideration of the cross-validation results and model prediction results, the overall prediction performance ranking of the four studied machine learning models under the current sample data set was extreme gradient boosting > multi-layer perceptron > logistic regression > support vector machine. Originality/value–The contribution of this research is to propose a criterion and solution for using longitudinal driving behavior data to label longitudinal DOCs and to rapidly identify driving styles based on those DOCs and MLC models. This study provides a reference for real-time online driving style identification in vehicles equipped with onboard data acquisition equipment, such as ADAS.
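The THW and TTC indicators found most discriminative above are simple kinematic ratios of the car-following state; a sketch (speeds in m/s, gap in metres, all values hypothetical):

```python
def headway_features(gap_m, v_follow, v_lead):
    """Time headway (THW = gap / own speed) and time-to-collision
    (TTC = gap / closing speed; infinite when not closing)."""
    thw = gap_m / v_follow if v_follow > 0 else float("inf")
    closing = v_follow - v_lead
    ttc = gap_m / closing if closing > 0 else float("inf")
    return thw, ttc

# Following at 25 m/s, 30 m behind a lead vehicle doing 20 m/s.
thw, ttc = headway_features(30.0, 25.0, 20.0)
```

Aggressive drivers in the study above showed smaller THW values, so features like these feed naturally into the MLC models used for style classification.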
Purpose–Level 3 automated driving, as defined by the Society of Automotive Engineers, may cause driver drowsiness or a lack of situation awareness, which can make it difficult for the driver to recognize where he/she is. Therefore, the purpose of this study was to conduct an experimental study with a driving simulator to investigate whether automated driving affects the driver’s own localization compared to manual driving. Design/methodology/approach–Seventeen drivers were divided into an automated operation group and a manual operation group. Drivers in each group were instructed to travel along the expressway and proceed to the specified destinations. The automated operation group was forced to select a course after receiving a Request to Intervene (RtI) from an automated driving system. Findings–A driver who used the automated operation system tended not to take over the driving operation correctly when a lane change was immediately required after the RtI. Originality/value–This is fundamental research that examined how automated driving operation affects the driver’s own localization. The experimental results suggest that it is not enough to simply issue an RtI; it is also necessary to tell the driver what kind of circumstances he/she is in and what to do next through the HMI. This conclusion can be taken into consideration by engineers who design automated driving vehicles.
Purpose–Analysis of characteristic driving operations can help develop supports for drivers with different driving skills. However, the existing knowledge on the analysis of driving skills focuses only on single driving operations and cannot reflect differences in the proficiency of coordinating driving operations. Thus, the purpose of this paper is to analyze driving skills through driving coordinating operations. Design/methodology/approach–AdaBoost was used to extract features critical for the coordinating operations of experienced and inexperienced drivers, and the combined features method was used to combine two or more different driving operations at the same location into candidate combined features. Findings–A series of experiments based on a driving simulator and a specific course with several different curves was carried out, and the results indicated the feasibility of analyzing driving behavior through AdaBoost and the combined features method. Originality/value–There are two main contributions: the first is a method for feature extraction based on AdaBoost, which selects features critical for the coordinating operations of experienced and inexperienced drivers; the second is a generating method for candidate features, called the combined features method, through which two or more different driving operations at the same location are combined into a candidate combined feature.
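The combined-features idea, as described, pairs different driving operations recorded at the same course location into candidate features; a sketch built only from that description (the operation log below is invented, and the real method then feeds such candidates to AdaBoost for selection):

```python
from collections import defaultdict
from itertools import combinations

def combine_features(operations):
    """Build candidate combined features by pairing every two different
    driving operations recorded at the same course location.
    `operations` is a list of (location, operation_name, value) tuples."""
    by_loc = defaultdict(dict)
    for loc, op, value in operations:
        by_loc[loc][op] = value
    combined = {}
    for loc, ops in by_loc.items():
        for (op1, v1), (op2, v2) in combinations(sorted(ops.items()), 2):
            combined[(loc, op1, op2)] = (v1, v2)
    return combined

# Hypothetical log: three operations at course location 10, one at 20.
log = [(10, "steer", 0.3), (10, "brake", 0.1), (10, "throttle", 0.0),
       (20, "steer", 0.5)]
feats = combine_features(log)
```

Only locations with at least two distinct operations yield combined candidates, so location 20 here contributes nothing.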
文摘Advanced DriverAssistance Systems(ADAS)technologies can assist drivers or be part of automatic driving systems to support the driving process and improve the level of safety and comfort on the road.Traffic Sign Recognition System(TSRS)is one of themost important components ofADAS.Among the challengeswith TSRS is being able to recognize road signs with the highest accuracy and the shortest processing time.Accordingly,this paper introduces a new real time methodology recognizing Speed Limit Signs based on a trio of developed modules.Firstly,the Speed Limit Detection(SLD)module uses the Haar Cascade technique to generate a new SL detector in order to localize SL signs within captured frames.Secondly,the Speed Limit Classification(SLC)module,featuring machine learning classifiers alongside a newly developed model called DeepSL,harnesses the power of a CNN architecture to extract intricate features from speed limit sign images,ensuring efficient and precise recognition.In addition,a new Speed Limit Classifiers Fusion(SLCF)module has been developed by combining trained ML classifiers and the DeepSL model by using the Dempster-Shafer theory of belief functions and ensemble learning’s voting technique.Through rigorous software and hardware validation processes,the proposedmethodology has achieved highly significant F1 scores of 99.98%and 99.96%for DS theory and the votingmethod,respectively.Furthermore,a prototype encompassing all components demonstrates outstanding reliability and efficacy,with processing times of 150 ms for the Raspberry Pi board and 81.5 ms for the Nano Jetson board,marking a significant advancement in TSRS technology.
文摘Recently, virtual realities and simulations play important roles in the development of automated driving functionalities. By an appropriate abstraction, they help to design, investigate and communicate real traffic scenario complexity. Especially, for edge cases investigations of interactions between vulnerable road users (VRU) and highly automated driving functions, valid virtual models are essential for the quality of results. The aim of this study is to measure, process and integrate real human movement behaviour into a virtual test environment for highly automated vehicle functionalities. The overall system consists of a georeferenced virtual city model and a vehicle dynamics model, including probabilistic sensor descriptions. By motion capture hardware, real humanoid behaviour is applied to a virtual human avatar in the test environment. Through retargeting methods, which enable the independency of avatar and person under test (PuT) dimensions, the virtual avatar diversity is increased. To verify the biomechanical behaviour of the virtual avatars, a qualitative study is performed, which funds on a representative movement sequence. The results confirm the functionality of the used methodology and enable PuT independence control of the virtual avatars in real-time.
文摘Lane detection is a fundamental aspect of most current advanced driver assistance systems(ADASs). A large number of existing results focus on the study of vision-based lane detection methods due to the extensive knowledge background and the low-cost of camera devices. In this paper, previous visionbased lane detection studies are reviewed in terms of three aspects, which are lane detection algorithms, integration, and evaluation methods. Next, considering the inevitable limitations that exist in the camera-based lane detection system, the system integration methodologies for constructing more robust detection systems are reviewed and analyzed. The integration methods are further divided into three levels, namely, algorithm, system,and sensor. Algorithm level combines different lane detection algorithms while system level integrates other object detection systems to comprehensively detect lane positions. Sensor level uses multi-modal sensors to build a robust lane recognition system. In view of the complexity of evaluating the detection system, and the lack of common evaluation procedure and uniform metrics in past studies, the existing evaluation methods and metrics are analyzed and classified to propose a better evaluation of the lane detection system. Next, a comparison of representative studies is performed. Finally, a discussion on the limitations of current lane detection systems and the future developing trends toward an Artificial Society, Computational experiment-based parallel lane detection framework is proposed.
文摘The driver’s cognitive and physiological states affect his/her ability to control the vehicle.Thus,these driver states are essential to the safety of automobiles.The design of advanced driver assistance systems(ADAS)or autonomous vehicles will depend on their ability to interact effectively with the driver.A deeper understanding of the driver state is,therefore,paramount.Electroencephalography(EEG)is proven to be one of the most effective methods for driver state monitoring and human error detection.This paper discusses EEG-based driver state detection systems and their corresponding analysis algorithms over the last three decades.First,the commonly used EEG system setup for driver state studies is introduced.Then,the EEG signal preprocessing,feature extraction,and classification algorithms for driver state detection are reviewed.Finally,EEG-based driver state monitoring research is reviewed in-depth,and its future development is discussed.It is concluded that the current EEGbased driver state monitoring algorithms are promising for safety applications.However,many improvements are still required in EEG artifact reduction,real-time processing,and between-subject classification accuracy.
文摘New approaches for testing of autonomous driving functions are using Virtual Reality (VR) to analyze the behavior of automated vehicles in various scenarios. The real time simulation of the environment sensors is still a challenge. In this paper, the conception, development and validation of an automotive radar raw data sensor model is shown. For the implementation, the Unreal VR engine developed by Epic Games is used. The model consists of a sending antenna, a propagation and a receiving antenna model. The microwave field propagation is simulated by a raytracing approach. It uses the method of shooting and bouncing rays to cover the field. A diffused scattering model is implemented to simulate the influence of rough structures on the reflection of rays. To parameterize the model, simple reflectors are used. The validation is done by a comparison of the measured radar patterns of pedestrians and cyclists with simulated values. The outcome is that the developed model shows valid results, even if it still has deficits in the context of performance. It shows that the bouncing of diffuse scattered field can only be done once. This produces inadequacies in some scenarios. In summary, the paper shows a high potential for real time simulation of radar sensors by using ray tracing in a virtual reality.
Abstract: Sight obstructions along road curves can lead to a crash if the driver is not able to stop the vehicle in time. This is a particular issue along curves with limited available sight, where speed management is necessary to avoid unsafe situations (e.g., driving off the road or invading the other traffic lane). To solve this issue, we proposed a novel intelligent speed adaptation (ISA) system for visibility, called V-ISA (intelligent speed adaptation for visibility). It estimates real-time safe speed limits based on the prevailing sight conditions. V-ISA comes in three variants with specific feedback modalities: (1) visual and (2) auditory information, and (3) direct intervention to assume control over the vehicle speed. Here, we investigated the efficiency of each of the three V-ISA variants on driving speed choice and lateral behavioural response along road curves with limited and unsafe available sight distances, using a driving simulator. We also considered curve road geometry (curve direction: rightward vs. leftward). Sixty active drivers were recruited for the study. While half of them (experimental group) tested the three V-ISA variants (and a V-ISA-off condition), the other half always drove with V-ISA off (validation group). We used a linear mixed-effect model to evaluate the influence of V-ISA on driver behaviour. All V-ISA variants were efficient at reducing speeds at entrance points, with no discernible negative impact on driver lateral behaviour. On rightward curves, the V-ISA intervening variant appeared to be the most effective at adapting to sight limitations. The results of the current study imply that V-ISA might assist drivers in adjusting their operating speed to the prevailing sight conditions and, consequently, establish safer driving conditions.
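The core of a visibility-based ISA of this kind is inverting a stopping-sight-distance relation: the safe speed is the largest speed whose reaction distance plus braking distance fits within the currently available sight distance. A back-of-the-envelope sketch, using generic textbook values for reaction time and deceleration (not V-ISA's actual parameters):

```python
import math

def safe_speed(sight_distance_m, reaction_time_s=1.5, decel_mps2=3.4):
    """Largest speed v (m/s) with v*t_r + v^2/(2a) <= available sight distance."""
    a, tr = decel_mps2, reaction_time_s
    # Solve v^2/(2a) + v*t_r - D = 0 for the positive root.
    return a * (math.sqrt(tr ** 2 + 2.0 * sight_distance_m / a) - tr)

v = safe_speed(65.0)                    # ~65 m of available sight
print(f"{v * 3.6:.1f} km/h")            # convert m/s to km/h
```

Feeding a continuously updated sight-distance estimate into such a relation yields the real-time speed limit that the feedback modalities then communicate or enforce.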
Funding: Supported in part by a scholarship provided by the Mission Department, Ministry of Higher Education of the Government of Egypt.
Abstract: This paper describes the analysis and design of an assistive device for elderly people under development at the Egypt-Japan University of Science and Technology (E-JUST), named the E-JUST assistive device (EJAD). Several experiments were carried out using a motion capture system (VICON) and inertial sensors to identify the human posture during the sit-to-stand motion. The EJAD uses only two inertial measurement units (IMUs) fused through an adaptive neuro-fuzzy inference system (ANFIS) algorithm to imitate the real motion of the caregiver. The EJAD consists of two main parts, a robot arm and an active walker. The robot arm is a 2-degree-of-freedom (2-DOF) planar manipulator. In addition, a back support with a passive joint is used to support the patient's back. The IMUs on the leg and trunk of the patient are used to compensate for and adapt to the EJAD system motion depending on the obtained patient posture. The ANFIS algorithm is used to train the fuzzy system that converts the IMU signals to the right posture of the patient. A control scheme is proposed to control the system motion based on practical measurements taken from the experiments. A computer simulation showed a relatively good performance of the EJAD in assisting the patient.
Abstract: In conjunction with the NSF Engineering Research Center for Collaborative Adaptive Sensing of the Atmosphere (CASA), the Department of Electrical and Computer Engineering at the University of Massachusetts Amherst invites applications for a tenure-track position in Integrative Systems Engineering (ISE) at the Assistant Professor level, to begin September 2009.
Abstract: In this study, we explore a human activity recognition (HAR) system using computer vision for assisted living systems (ALS). Most existing HAR systems are implemented using wired or wireless sensor networks. These systems have limitations such as cost, power issues, weight, and the inability of the elderly to wear and carry them comfortably. These issues could be overcome by a computer-vision-based HAR system, but such systems typically require a highly memory-consuming image dataset, and training on such a dataset takes a long time. The proposed computer-vision-based system overcomes the shortcomings of existing systems. The authors have used key-joint angles, distances between the key joints, and slopes between the key joints to create a numerical dataset instead of an image dataset. All these parameters in the dataset are recorded via real-time event simulation. The dataset has 780,000 calculated feature values from 20,000 images. This dataset is used to train and detect five different human postures: sitting, standing, walking, lying, and falling. The implementation encompasses four distinct algorithms: the decision tree (DT), random forest (RF), support vector machine (SVM), and an ensemble approach. Remarkably, the ensemble technique exhibited exceptional performance metrics, with 99% accuracy, 98% precision, 97% recall, and an F1 score of 99%.
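The two ideas above, numerical key-joint features in place of images, and a DT/RF/SVM ensemble, can be sketched as follows. The keypoint geometry helper and the three-cluster stand-in dataset are illustrative assumptions; the paper's actual 780,000-value dataset and five posture classes are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def joint_features(p1, p2, p3):
    """Angle at joint p2, end-to-end distance, and slope between key joints."""
    v1, v2 = p1 - p2, p3 - p2
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    angle = float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
    dist = float(np.linalg.norm(p1 - p3))
    slope = (p3[1] - p1[1]) / (p3[0] - p1[0] + 1e-9)
    return [angle, dist, slope]

# Illustrative stand-in for the numerical posture dataset: three separable clusters.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 3)) + np.repeat(np.arange(3), 200)[:, None] * 2.0
y = np.repeat(np.arange(3), 200)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)

# Soft-voting ensemble over the three base classifiers named in the abstract.
ensemble = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(Xtr, ytr)
print(f"ensemble accuracy: {ensemble.score(Xte, yte):.2f}")
```

Because each sample is a short numeric vector rather than an image, training is far cheaper in memory and time, which is the design point the abstract emphasizes.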
Abstract: Advanced driver assistance systems, especially autonomous emergency braking and forward collision warnings, have become popular in Japan. To reduce the number of road traffic accidents, safety information should be provided to a driver earlier than avoidance or warning messages, so as to avoid a risky situation. A series of actual running tests was conducted to evaluate the activation timing and effectiveness of awareness messages. Objective analysis showed that the drivers could avoid an obstacle with a sufficient safety margin thanks to any of the awareness messages. Subjective ratings showed that the best timing is 10 s before encountering the obstacle. The results of the objective analysis are limited in the present paper, and further analyses are required.
Funding: Supported in part by the US Department of Transportation Federal Highway Administration [grant numbers DTFH61-09-C-00018 and DTFH61-06-D-00006] and the California Department of Transportation [grant number 65A0261].
Abstract: The past decade has witnessed an acceleration of autonomous vehicle research and development, with technological advances contributed by academia, government, and the industrial and consumer sectors. These advancements hold the potential to improve society by enhancing transportation safety and throughput, where decreased congestion saves time and reduces vehicle emissions. Two of the key technologies enabling vehicle-infrastructure interaction, advanced traffic management, and automated vehicles are automated roadway mapping and reliable vehicle state estimation. In this paper, we present an overview of and new methods for the problem of automated roadway mapping, plus a discussion of the extension of these methods to the problem of vehicle state estimation. Results from the application of these methods to feature mapping and state estimation are presented.
Abstract: Purpose – Two-handed automobile steering at low vehicle speeds may lead to reduced steering ability at large steering wheel angles and shoulder injury at high steering wheel rates (SWRs). As a first step toward solving these problems, this study aims, firstly, to design a surface electromyography (sEMG) controlled steering assistance interface that enables hands-free steering wheel rotation and, secondly, to validate the effect of this rotation on path-following accuracy. Design/methodology/approach – A total of 24 drivers used biceps brachii sEMG signals to control the steering assistance interface at a maximized SWR in three driving simulator scenarios: U-turn, 90° turn and 45° turn. For comparison, the scenarios were repeated with a slower SWR and a game steering wheel in place of the steering assistance interface. The path-following accuracy of the steering assistance interface would be validated if it was at least comparable to that of the game steering wheel. Findings – Overall, the steering assistance interface with a maximized SWR was comparable to a game steering wheel. For the U-turn, 90° turn and 45° turn, the sEMG-based human–machine interface (HMI) had median lateral errors of 0.55, 0.3 and 0.2 m, respectively, whereas the game steering wheel had median lateral errors of 0.7, 0.4 and 0.3 m, respectively. The higher accuracy of the sEMG-based HMI was statistically significant in the case of the U-turn. Originality/value – Although production automobiles do not use sEMG-based HMIs, and few studies have proposed sEMG-controlled steering, the results of the current study warrant further development of an sEMG-based HMI for an actual automobile.
Funding: This work was supported by the Japan Institute of Country-ology and Engineering (JICE 2017 and 2018).
Abstract: Many advanced driver assistance systems have entered the market, and automated driving technologies have been developed. Many of them may not work in adverse weather conditions. A forward-looking camera, for example, is the most popular system used for lane detection, but it does not work for a snow-covered road. The present paper proposes a self-localization system for snowy roads when the roadsides are covered with snow. The system employs a four-layer laser scanner and onboard sensors and uses only pre-existing roadside snow poles provided for drivers in a snowy region, without any other road infrastructure. Because the landscape changes greatly within a short time during a snowstorm and snow removal work, it is necessary to restrict the landmarks used for self-localization to tall objects, like snow poles. A system incorporating this technology will support a driver’s efforts to keep to a lane even in a heavy snowstorm.
Funding: This research was funded by the National Natural Science Foundation of China (No. 52072290), the Hubei Province Science Fund for Distinguished Young Scholars (No. 2020CFA081) and the Fundamental Research Funds for the Central Universities (No. 191044003, No. 2020-YB-028).
Abstract: Purpose – An individual’s driving style significantly affects overall traffic safety. However, driving style is difficult to identify due to temporal and spatial differences and scene heterogeneity of driving behavior data. As such, the study of real-time driving-style identification methods is of great significance for formulating personalized driving strategies, improving traffic safety and reducing fuel consumption. This study aims to establish a driving style recognition framework based on longitudinal driving operation conditions (DOCs), using a machine learning model and natural driving data collected by a vehicle equipped with an advanced driving assistance system (ADAS). Design/methodology/approach – Specifically, a driving style recognition framework based on longitudinal DOCs was established. To train the model, a real-world driving experiment was conducted. First, the driving styles of 44 drivers were preliminarily identified through natural driving data and video data; drivers were categorized through a subjective evaluation as conservative, moderate or aggressive. Then, based on the ADAS driving data, a criterion for extracting longitudinal DOCs was developed. Third, taking the ADAS data from 47 km of the two test expressways as the research object, six DOCs were calibrated and the characteristic data sets of the different DOCs were extracted and constructed. Finally, four machine learning classification (MLC) models were used to classify and predict driving style based on the natural driving data. Findings – The results showed that six longitudinal DOCs were calibrated according to the proposed calibration criterion. Cautious drivers undertook the largest proportion of the free cruise condition (FCC), while aggressive drivers primarily undertook the FCC, following steady condition and relative approximation condition. Compared with cautious and moderate drivers, aggressive drivers adopted a smaller time headway (THW) and distance headway (DHW). THW, time-to-collision (TTC) and DHW showed highly significant differences in driving style identification, while longitudinal acceleration (LA) showed no significant difference. Speed and TTC showed no significant difference between moderate and aggressive drivers. In consideration of the cross-validation results and model prediction results, the overall prediction performance ranking of the four studied machine learning models under the current sample data set was extreme gradient boosting > multi-layer perceptron > logistic regression > support vector machine. Originality/value – The contribution of this research is to propose a criterion and solution for using longitudinal driving behavior data to label longitudinal DOCs and to rapidly identify driving styles based on those DOCs and MLC models. This study provides a reference for real-time online driving style identification in vehicles equipped with onboard data acquisition equipment, such as ADAS.
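The four-model comparison described above can be sketched with cross-validation over headway-style features. Everything here is a stand-in: the THW/TTC/DHW samples are synthetic two-class data (not the 44-driver dataset), and scikit-learn's GradientBoostingClassifier substitutes for XGBoost so the example needs no third-party XGBoost install.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 300
# Hypothetical THW / TTC / DHW samples: aggressive (0) vs cautious (1) drivers.
thw = np.concatenate([rng.normal(1.0, 0.3, n), rng.normal(2.2, 0.4, n)])
ttc = np.concatenate([rng.normal(3.0, 1.0, n), rng.normal(6.0, 1.2, n)])
dhw = np.concatenate([rng.normal(20, 5, n), rng.normal(45, 8, n)])
X = np.column_stack([thw, ttc, dhw])
y = np.repeat([0, 1], n)

models = {
    "gradient boosting": GradientBoostingClassifier(random_state=0),
    "mlp": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)),
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression()),
    "svm": make_pipeline(StandardScaler(), SVC()),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f}")
```

On real, less separable data the ranking would depend on the dataset; the abstract's ranking (XGBoost first) reflects the authors' own experiments, not this sketch.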
Funding: This work was supported by the Council for Science, Technology and Innovation (CSTI), Cross-ministerial Strategic Innovation Promotion Program (SIP), entitled “Human Factors and HMI Research for Automated Driving”.
Abstract: Purpose – Level 3 automated driving, as defined by the Society of Automotive Engineers, may cause driver drowsiness or a lack of situation awareness, which can make it difficult for the driver to recognize where he/she is. Therefore, the purpose of this study was to conduct an experimental study with a driving simulator to investigate whether automated driving affects the driver’s own localization compared to manual driving. Design/methodology/approach – Seventeen drivers were divided into an automated operation group and a manual operation group. Drivers in each group were instructed to travel along the expressway and proceed to the specified destinations. The automated operation group was forced to select a course after receiving a Request to Intervene (RtI) from an automated driving system. Findings – A driver who used the automated operation system tended not to take over the driving operation correctly when a lane change was immediately required after the RtI. Originality/value – This is fundamental research examining how automated driving operation affects the driver's own localization. The experimental results suggest that it is not enough to simply issue an RtI; it is also necessary to tell the driver what kind of circumstances he/she is in and what to do next through the HMI. This conclusion can be taken into consideration by engineers who design automated driving vehicles.
Funding: This work is also supported by the Fundamental Research Funds for the Central Universities (YJ 201621) at Sichuan University and the National Natural Science Foundation of China (U1664263).
Abstract: Purpose – Analysis of characteristic driving operations can help develop support for drivers with different driving skills. However, existing work on the analysis of driving skills focuses only on single driving operations and cannot reflect differences in the proficiency of coordinating driving operations. Thus, the purpose of this paper is to analyze driving skills through coordinated driving operations. Design/methodology/approach – AdaBoost was used to extract features, and the combined features method was used to combine two or more different driving operations at the same location. Findings – A series of experiments based on a driving simulator and a specific course with several different curves was carried out, and the results indicated the feasibility of analyzing driving behavior through AdaBoost and the combined features method. Originality/value – There are two main contributions: the first is a method for feature extraction based on AdaBoost, which selects features critical to the coordinated operations of experienced and inexperienced drivers; the second is a generating method for candidate features, called the combined features method, through which two or more different driving operations at the same location are combined into a candidate combined feature.
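The value of a combined feature can be seen when the class depends on how two operations are coordinated rather than on either alone. The sketch below uses scikit-learn's AdaBoostClassifier on synthetic data; the steering/braking signals, the product-style combined feature, and the coordination-based label are illustrative assumptions, not the paper's actual operations or labeling.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
# Hypothetical per-location operations (not the paper's signals): steering and braking.
steer = rng.normal(size=n)
brake = rng.normal(size=n)
# The class depends on the *coordination* of both operations, not on either alone.
y = (steer * brake > 0).astype(int)

X_single = np.column_stack([steer, brake])                    # single-operation features
X_combined = np.column_stack([steer, brake, steer * brake])   # plus a combined feature

acc_single = cross_val_score(AdaBoostClassifier(random_state=0), X_single, y, cv=5).mean()
acc_combined = cross_val_score(AdaBoostClassifier(random_state=0), X_combined, y, cv=5).mean()
print(f"single: {acc_single:.2f}  combined: {acc_combined:.2f}")
```

Boosted stumps on the two single-operation features stay near chance on this XOR-like label, while the combined feature makes the problem trivially separable, which is the intuition behind constructing candidate combined features before boosting.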