Closed-loop real time SSVEP-based heads-up display to control in vehicle features using deep learning

Inventors

Shishavan, Hossein Hamidi; Kim, Insoo; Behzadi, Mohammad; Dede, Ercan; Lohan, Danny

Assignees

Toyota Motor Engineering and Manufacturing North America Inc; University of Connecticut Health Center

Publication Number

US-12358370-B2

Publication Date

2025-07-15

Expiration Date

2042-12-16



Abstract

A vehicle system includes a controller programmed to display a plurality of icons on a heads-up-display (HUD) of the vehicle, receive electroencephalography (EEG) data from a driver of the vehicle, perform a Fast Fourier Transform of the EEG data to obtain an EEG spectrum, input the EEG spectrum into a trained machine learning model, determine which of the plurality of icons the driver is viewing based on an output of the trained machine learning model, and perform one or more vehicle operations based on the output of the trained machine learning model.

Core Innovation

The invention discloses a vehicle system that includes a controller programmed to display multiple icons on a heads-up-display (HUD) of the vehicle, receive electroencephalography (EEG) data from the driver, perform a Fast Fourier Transform (FFT) of the EEG data to obtain an EEG spectrum, and input that EEG spectrum into a trained machine learning model. The model then predicts which icon on the HUD the driver is viewing based on the EEG spectrum. This allows the vehicle system to perform one or more vehicle operations based on the output of the machine learning model in real time.

The problem addressed is distracted driving caused by drivers engaging with or manually operating in-vehicle features. The system aims to let the driver interact with vehicle functions without manual input or taking their eyes off the road, thereby reducing distraction and enhancing driving safety. By detecting steady state visually evoked potentials (SSVEP) in the driver's brain corresponding to viewing specific icons, the system provides a hands-free and eyes-forward method of controlling in-vehicle features.
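The SSVEP principle underlying the system can be illustrated with a minimal simulation: each HUD icon flickers at a distinct rate, and the viewed icon's flicker frequency appears as a peak in the FFT of the driver's EEG. The sampling rate, window length, and flicker frequencies below are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

# Assumed parameters: two HUD icons flickering at 10 Hz and 12 Hz.
fs = 250.0                       # EEG sampling rate (Hz), an assumed value
t = np.arange(0, 2.0, 1 / fs)    # 2-second EEG window
viewed_freq = 12.0               # flicker rate of the icon being viewed

# Synthetic "EEG": an SSVEP component at the viewed flicker rate plus noise.
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * viewed_freq * t) + 0.5 * rng.standard_normal(t.size)

# FFT -> magnitude spectrum; the SSVEP shows up as a peak at 12 Hz.
spectrum = np.abs(np.fft.rfft(eeg))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

icon_freqs = [10.0, 12.0]
# Score each icon by the spectral magnitude at its flicker frequency.
scores = [spectrum[np.argmin(np.abs(freqs - f))] for f in icon_freqs]
print(icon_freqs[int(np.argmax(scores))])  # prints 12.0: the viewed icon
```

The patent's approach replaces this simple peak-picking with a trained neural network, which classifies the whole spectrum rather than a single bin.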

Claims Coverage

The patent includes three independent claims covering a vehicle system, a method of operating that system, and a method of training the machine learning model. The main inventive features relate to the integration of EEG data processing with a trained machine learning model to determine driver intent via HUD icon viewing and subsequent vehicle operation.

Real-time vehicle system using EEG and trained machine learning model

A vehicle system programmed to display icons on a HUD, receive EEG data from a driver, perform a Fast Fourier Transform (FFT) on the EEG data, input the EEG spectrum into a trained machine learning model to predict the icon the driver is viewing, determine which icon is being viewed, and perform vehicle operations based on this prediction.

EEG data preprocessing with filtering and segmentation

The system applies a band pass filter to the EEG data, performs data segmentation, and then performs an FFT on the segmented and filtered data before inputting it into the machine learning model.
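The preprocessing chain described above (band-pass filter, segmentation, FFT) can be sketched as follows. The filter order, passband, and segment length are assumptions for illustration; the patent does not commit this summary to specific values.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(eeg, fs=250.0, band=(5.0, 35.0), seg_len=1.0):
    """Band-pass filter, segment, and FFT raw single-channel EEG.
    Parameter values here are illustrative, not taken from the patent."""
    # 4th-order Butterworth band-pass spanning typical SSVEP frequencies.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg)          # zero-phase filtering

    # Segment into fixed-length windows (drop any trailing partial window).
    n = int(seg_len * fs)
    segments = filtered[: len(filtered) // n * n].reshape(-1, n)

    # Magnitude spectrum of each segment becomes the model input.
    return np.abs(np.fft.rfft(segments, axis=1))

spectra = preprocess(np.random.default_rng(1).standard_normal(1000))
print(spectra.shape)  # (4, 126): four 1-second segments, 126 frequency bins
```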

Training the machine learning model with EEG data and ground truth

Receiving training EEG data collected from multiple subjects viewing specific icons, performing FFT on the training data to produce EEG spectrum data, and training a machine learning model to predict the viewed icon based on the EEG spectrum data.
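One way to picture the training-set assembly is below: each trial is an EEG window recorded while a subject fixates a known icon, so the icon index serves as the ground-truth label. The subject count, icon flicker rates, and window length are hypothetical.

```python
import numpy as np

# Hypothetical training-set assembly: one labeled trial per (subject, icon).
rng = np.random.default_rng(2)
icon_freqs = [10.0, 12.0, 15.0]    # assumed flicker rates, one per icon
fs, dur = 250.0, 1.0
t = np.arange(0, dur, 1 / fs)

X, y = [], []
for subject in range(3):                    # multiple subjects
    for label, f in enumerate(icon_freqs):  # each icon viewed in turn
        trial = np.sin(2 * np.pi * f * t) + rng.standard_normal(t.size)
        X.append(np.abs(np.fft.rfft(trial)))  # FFT -> EEG spectrum feature
        y.append(label)                       # ground-truth icon index

X, y = np.stack(X), np.array(y)
print(X.shape, y.shape)  # (9, 126) (9,): 9 labeled spectra for training
```

The (spectrum, label) pairs are what the classifier is trained on; the patent's model learns to map such spectra to icon identities.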

Machine learning model architecture with convolutional neural network and SE-Res blocks

The trained machine learning model comprises a convolutional neural network with a residual neural network (ResNet) architecture and one or more squeeze and excite (SE) blocks. Each SE-Res block includes 2D convolutional layers, batch normalization, activation layers, and SE blocks.

Structure of the SE blocks within the model

Each SE block includes a global max pooling layer, a fully connected layer with a rectified linear unit (ReLU) activation function, and a fully connected layer with a sigmoid activation function, designed to weight feature maps based on importance.
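The SE block structure described above can be sketched in plain numpy: global max pooling squeezes each feature map to a scalar, a bottleneck fully connected layer with ReLU and a second fully connected layer with sigmoid produce per-channel gates, and the gates reweight the feature maps. The weight matrices and dimensions below stand in for learned parameters and are assumptions.

```python
import numpy as np

def se_block(feature_maps, w1, w2):
    """Squeeze-and-excite recalibration of C x H x W feature maps, following
    the structure in the patent summary (global max pool -> FC+ReLU ->
    FC+sigmoid -> channel reweighting). w1/w2 stand in for learned weights."""
    # Squeeze: global max pooling collapses each H x W map to one scalar.
    squeezed = feature_maps.max(axis=(1, 2))        # shape (C,)
    # Excite: bottleneck FC with ReLU, then FC with sigmoid gating.
    hidden = np.maximum(0, squeezed @ w1)           # ReLU activation
    gates = 1 / (1 + np.exp(-(hidden @ w2)))        # sigmoid, shape (C,)
    # Reweight each channel's feature map by its learned importance.
    return feature_maps * gates[:, None, None]

rng = np.random.default_rng(3)
C, H, W, r = 8, 4, 4, 2   # channels, spatial dims, reduction ratio (assumed)
maps = rng.standard_normal((C, H, W))
out = se_block(maps, rng.standard_normal((C, C // r)),
               rng.standard_normal((C // r, C)))
print(out.shape)  # (8, 4, 4): same shape, channels rescaled by importance
```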

Model output and classification layers

The model includes dropout layers to prevent overfitting and a Softmax classification layer for predicting one of multiple icons the driver is viewing.
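The softmax classification head turns the network's per-icon scores into a probability distribution, and the predicted icon is the one with the highest probability. A minimal sketch, with hypothetical logits for four icons:

```python
import numpy as np

def softmax(logits):
    """Softmax over per-icon logits; outputs are positive and sum to 1."""
    z = logits - logits.max()   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits for four HUD icons (e.g. audio, navigation,
# settings, temperature).
probs = softmax(np.array([2.0, 0.5, 1.0, -1.0]))
print(int(np.argmax(probs)))  # prints 0: index of the predicted icon
```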

The inventive features comprehensively cover the system and methods for detecting driver gaze on HUD icons using EEG data processed via FFT and analyzed by an advanced machine learning architecture to control vehicle operations in real time.

Stated Advantages

Allows control of in-vehicle features without distracting the driver, thereby reducing distracted driving.

Enables hands-free and eyes-forward vehicle operation through EEG-based detection of driver intent.

Provides more accurate and quicker prediction of icons being viewed compared to known methods, facilitating real-time operation.

Improves feature extraction and model performance by combining residual neural network architecture with squeeze and excite blocks.

Incorporates adaptive icon color adjustment to assist drivers who have difficulty recognizing icons, reducing erroneous selections.

Documented Applications

Facilitating driver control of vehicle functions such as audio options, navigation, settings, and temperature through HUD icon selection detected by EEG signals.

Providing sub-menu navigation on the HUD based on icons viewed by the driver, enabling context-specific vehicle operations.

Adjusting air conditioning and other vehicle environmental controls by interpreting driver gaze via EEG.
