University of Minnesota
University Relations

Minnesota Supercomputing Institute


Research Abstracts Online
January - December 2011


University of Minnesota Twin Cities
College of Science and Engineering
Department of Electrical and Computer Engineering

PI: Keshab K. Parhi

Analysis, Feature Selection, and Classification of MEG and EEG Data

Epilepsy and schizophrenia are two of the most common neurological disorders. The goal of this project is to develop a seizure prediction model with low computational complexity that is feasible to implement in real time in implantable or wearable devices. The first step is to identify a subset of the feature set that increases the prediction accuracy of the model. The group will investigate feature selection algorithms such as SVM-RFE and AdaBoost to find the best features. Different classification techniques will then be explored to develop a seizure prediction algorithm with low computational complexity. Multi-dimensional analyses of MEG oscillations will be performed using Support Vector Machine Recursive Feature Elimination (SVM-RFE) to select the spectro-temporo-spatial features that distinguish schizophrenia patients from controls at each language level.
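The SVM-RFE approach mentioned above can be sketched with scikit-learn's `RFE` wrapper around a linear SVM; the feature matrix and labels here are synthetic stand-ins, not the group's MEG data, and the number of retained features is an assumption.

```python
# Hypothetical SVM-RFE sketch: a linear SVM is fit repeatedly, and the
# lowest-weight features are pruned until a target count remains.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 54))    # 100 synthetic trials x 54 features
y = rng.integers(0, 2, size=100)  # synthetic patient-vs-control labels

# step=1 removes one feature per iteration; 10 retained features is
# an illustrative choice, not the project's actual setting.
selector = RFE(SVC(kernel="linear"), n_features_to_select=10, step=1)
selector.fit(X, y)

top_features = np.flatnonzero(selector.support_)
print(top_features)  # indices of the retained features
```

The same loop structure applies to any linear classifier that exposes per-feature weights; AdaBoost-based selection would instead rank features by how often they are chosen by the weak learners.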

For epilepsy, the power in nine spectral bands from six EEG electrodes is used as the feature set, for a total of 54 features. For schizophrenia, behavioral and MEG signal analysis is used to gain insight into the level-specific neurophysiology and across-level neurodynamics of language. The computational power required to compute and classify these features is very high. Feature selection algorithms are used to select a subset of features that contains most of the relevant information for making decisions. This subset generally improves the accuracy of the model because irrelevant features are no longer taken into consideration. Feature selection algorithms fall on a spectrum. At one end are combinatorial algorithms that exhaustively evaluate every subset of features and select the set yielding the best model performance. At the other end are algorithms that rank the features by some criterion and keep the highest-ranked ones. Alternatively, a threshold can be set so that features scoring below (or above) it are removed.
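The 54-feature construction (nine band powers per EEG channel, six channels) can be sketched as follows; the sampling rate, band edges, and Welch-based power estimate are assumptions for illustration, since the abstract does not specify them.

```python
# Hedged sketch: band-power features from synthetic EEG, using
# Welch's method to estimate the power spectral density per channel
# and summing it within each of nine assumed frequency bands.
import numpy as np
from scipy.signal import welch

fs = 256                                  # assumed sampling rate (Hz)
bands = [(0.5, 4), (4, 8), (8, 12), (12, 16), (16, 20),
         (20, 30), (30, 40), (40, 60), (60, 100)]  # 9 assumed bands
eeg = np.random.default_rng(1).normal(size=(6, 10 * fs))  # 6 channels

features = []
for channel in eeg:
    f, psd = welch(channel, fs=fs, nperseg=fs)  # 1 Hz resolution
    for lo, hi in bands:
        mask = (f >= lo) & (f < hi)
        features.append(psd[mask].sum())        # band-power proxy

features = np.array(features)
print(features.shape)  # 6 channels x 9 bands = 54 features
```

A simple ranking-based selector would then score each of these 54 features (e.g., by a univariate statistic or SVM weight) and keep only those above a chosen threshold, trading a small accuracy risk for a large reduction in computation.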

Group Members

Manohar Ayinala, Graduate Student
Te-Lung Kung, Graduate Student
Sohini Roychowdhury, Graduate Student
Tingting Xu, Graduate Student
Bo Yuan, Graduate Student
Zisheng Zhang, Graduate Student