08 oktober 2007

BCI and Covariate shift

An annoying problem in BCI is that the statistical distribution of the data tends to shift between the training and testing phases. Masashi Sugiyama and coauthors from Germany may have come up with a solution that makes BCI training more robust. Read the pdf here.

Masashi Sugiyama (1), Benjamin Blankertz (2), Matthias Krauledat (2, 3), Guido Dornhege (2) and Klaus-Robert Müller (2, 3)

(1) Department of Computer Science, Tokyo Institute of Technology, Tokyo, Japan
(2) Fraunhofer FIRST.IDA, Berlin, Germany
(3) Department of Computer Science, University of Potsdam, Potsdam, Germany

Abstract

A common assumption in supervised learning is that the input points in the training set follow the same probability distribution as the input points used for testing. However, this assumption is not satisfied, for example, when we extrapolate outside the training region. The situation where the training input points and test input points follow different distributions is called covariate shift. Under covariate shift, standard machine learning techniques such as empirical risk minimization or cross-validation do not work well, since their unbiasedness is no longer maintained. In this paper, we propose a new method called importance-weighted cross-validation, which is still unbiased even under covariate shift. The usefulness of our proposed method is successfully tested on toy data and furthermore demonstrated in the brain-computer interface, where strong non-stationarity effects can be seen between calibration and feedback sessions.
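The core idea of importance-weighted cross-validation is simple: weight each held-out loss by w(x) = p_test(x) / p_train(x), so the cross-validation score estimates the error under the test distribution rather than the training one. Here is a minimal toy sketch of that idea in Python (not the authors' implementation): a 1-D regression problem where training and test inputs come from different Gaussians, with the densities assumed known so the weights can be computed exactly; in practice the density ratio must be estimated. All names and parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy covariate-shift setup: training and test inputs are drawn
# from different Gaussians (densities known here by construction).
x_train = rng.normal(1.0, 0.5, 200)
y_train = np.sin(x_train) + rng.normal(0.0, 0.1, 200)

def gauss_pdf(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

# Importance weights w(x) = p_test(x) / p_train(x) on the training inputs.
# In a real application this ratio would have to be estimated from data.
w = gauss_pdf(x_train, 2.0, 0.5) / gauss_pdf(x_train, 1.0, 0.5)

def iwcv_error(degree, k=5):
    """Importance-weighted k-fold CV error of a polynomial fit."""
    idx = rng.permutation(len(x_train))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        val = folds[i]
        trn = np.concatenate([folds[j] for j in range(k) if j != i])
        coef = np.polyfit(x_train[trn], y_train[trn], degree)
        pred = np.polyval(coef, x_train[val])
        # Each held-out squared error is weighted by w(x), so the score
        # reflects performance under the (shifted) test distribution.
        errs.append(np.mean(w[val] * (pred - y_train[val]) ** 2))
    return float(np.mean(errs))

# Model selection under covariate shift: pick the polynomial degree
# that minimizes the importance-weighted CV error.
best_degree = min([1, 2, 3, 5], key=iwcv_error)
```

Note that the weights enter only the validation loss, not the fitting step: the point of IWCV is to debias the *estimate* of generalization error, which ordinary cross-validation gets wrong when training and test inputs are distributed differently.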

1 comment:

Will Dwinnell said

Extrapolation is always a problem, though I would think that, at least in cases where data is not too scarce, stratified sampling of the training data should address this issue.

-Will Dwinnell
Data Mining in MATLAB