The first chapter of the thesis contains an introduction to epilepsy and EEG analysis, as well as a motivation for estimating time delays. Thereafter we present some delay-estimation methods.
In the second chapter we extensively discuss the mutual information method for estimating time delays caused by the finite propagation velocity of the epileptiform activity. This method resembles the cross-correlation method for estimating time delays, except that the cross-covariance is replaced by the mutual information. Several aspects of the mutual information method are discussed: a histogram-based mutual information estimator, and the bias as well as the variance of that estimator. Finally, we apply this estimator to EEGs of an epilepsy patient and of a dog made epileptic by electrical stimulation.
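A minimal sketch of such a histogram-based mutual information estimator, and of a delay estimator that replaces the cross-covariance by the mutual information, might look as follows. The bin count and the lag handling are illustrative choices of ours, not the thesis's actual implementation.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in (histogram-based) estimate of I(x; y) in nats.

    The finite-sample bias and variance of this estimator are exactly
    the aspects discussed in the chapter; bins=16 is an arbitrary choice.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x (column vector)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y (row vector)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px * py)[mask])))

def delay_by_mutual_information(x, y, max_lag, bins=16):
    """Lag maximizing the estimated mutual information between x(t) and
    y(t + lag): the cross-correlation recipe with the cross-covariance
    replaced by the mutual information."""
    best_lag, best_mi = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            xs, ys = x[:len(x) - lag], y[lag:]
        else:
            xs, ys = x[-lag:], y[:len(y) + lag]
        mi = mutual_information(xs, ys, bins)
        if mi > best_mi:
            best_lag, best_mi = lag, mi
    return best_lag
```

Note that even for independent signals the plug-in estimate is positive on average; this upward bias grows with the number of bins and shrinks with the number of samples, which is why the bias and variance of the estimator matter in practice.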
Because of the unsatisfactory, and especially ambiguous, results obtained with the mutual information method, in the third chapter we study the properties of certain delay definitions. As is done for the mathematical distance, we define the properties a procedure measuring a delay should have. This approach yields new delay-estimation methods, such as the method based on a delay definition by the information-theoretical criterion. That delay definition searches for the lag with minimum mutual information between the future and the past. We have proved that this criterion unambiguously defines a delay. The problem of unambiguously measuring a delay is thus replaced by the problem of interpreting the measured delay. We have constructed an estimator and applied it to EEGs; the results are presented. Due to a lack of time, we could not investigate the statistical properties of this method.
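By analogy with the distance axioms, candidate properties of a delay-measuring procedure can be encoded as executable checks. The specific properties below (zero self-delay, antisymmetry, additivity along a chain) are our own illustrative assumptions, not necessarily the ones defined in the chapter, and the cross-correlation estimator is only a stand-in used to exercise the checks.

```python
import numpy as np

def xcorr_delay(x, y, max_lag):
    """Lag maximizing the raw cross-correlation; a stand-in delay
    estimator used only to exercise the property checks below."""
    best_lag, best_c = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.dot(x[:len(x) - lag], y[lag:])
        else:
            c = np.dot(x[-lag:], y[:len(y) + lag])
        if c > best_c:
            best_lag, best_c = lag, c
    return best_lag

def check_delay_properties(delay, signals, tol=0):
    """Checks, in the spirit of the distance axioms, that a delay-
    measuring procedure D might be required to satisfy (illustrative)."""
    x, y, z = signals
    assert abs(delay(x, x)) <= tol                               # D(x, x) = 0
    assert abs(delay(x, y) + delay(y, x)) <= tol                 # antisymmetry
    assert abs(delay(x, z) - delay(x, y) - delay(y, z)) <= tol   # additivity
```

A procedure that passes such checks on pure time-shifted signals can still produce delays whose physiological interpretation is open, which is the interpretation problem mentioned above.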
In chapter four we study the problem of testing models to determine whether two EEG signals are dependent or not. To this end we reinterpret the maximum likelihood method and conclude that this method, after some extensions, can be used to compare generating models with unequal numbers of parameters. We introduce the mean of the log-likelihood as a criterion to measure the acceptability of models, and we introduce its estimate: the average log-likelihood. In the case of independent observations, the average log-likelihood is proportional to the classical log-likelihood. The maximum of the average log-likelihood is a biased estimate of the corresponding maximum of the mean log-likelihood, and this estimate is inaccurate due to its variance. The (1/N) bias correction on the maximum of the average log-likelihood resembles the Akaike information-theoretical criterion, which we discuss. We suggest some improvements of the Akaike criterion; we have not yet been able to test these improvements in practice. When testing the dependence of two EEG signals, given certain assumptions, the statistic to be evaluated is the mutual information estimate.
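For Gaussian signals this connection can be made concrete: the difference between the maximized average log-likelihoods of the dependent and independent models equals the Gaussian mutual information estimate, -(1/2) ln(1 - r^2), where r is the sample correlation. The sketch below (the function names and the choice of Gaussian models are our own illustration, not the thesis's general setting) shows this identity.

```python
import numpy as np

def avg_loglik_gaussian(x, y, dependent):
    """Maximized average log-likelihood (per observation, in nats) of a
    bivariate Gaussian model; with dependent=False the two channels are
    modelled as independent Gaussians (one parameter fewer)."""
    vx, vy = np.var(x), np.var(y)        # maximum likelihood variances
    det = vx * vy
    if dependent:
        r = np.corrcoef(x, y)[0, 1]
        det *= 1.0 - r**2                # determinant of the fitted covariance
    # closed form of the average log-likelihood at the MLE
    return -0.5 * (np.log((2.0 * np.pi) ** 2 * det) + 2.0)

def gaussian_mi(x, y):
    """Gaussian mutual information estimate: -0.5 * ln(1 - r^2)."""
    r = np.corrcoef(x, y)[0, 1]
    return -0.5 * np.log(1.0 - r**2)
```

Under an Akaike-style comparison, the dependent model (which has one extra parameter) would be preferred when 2N times this log-likelihood difference exceeds 2, i.e. when the mutual information estimate is large enough to pay for the additional parameter.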
The last chapter contains some remarks and general conclusions; we also propose some subjects to be investigated in the near future.