Seizure patterns in the EEG recordings were observed 30 to 60 minutes after the start of incubation.
Single-channel analog EEG was recorded with standard amplifier settings.(7) Signals were
simultaneously recorded on the computer hard disk
following digitization of the traces at 256 Hz with the
help of an analog-to-digital converter (ADLiNK,
8112HG, NuDAQ, Taiwan) and its supporting
software (VISUAL LAB-M, Version 2.0c, Blue Pearl
Laboratory, USA). The digitized data were
fragmented into 1-second epochs (256 data points)
and stored in separate files. Each epoch was pre-
processed for noise reduction before the final FFT or
power spectrum analysis. First, the DC value was
subtracted from the data, and then the baseline
movement was reduced. In the final pre-processing
step, the data were band-pass filtered with
cutoff frequencies of 0.25 and 30 Hz, as the
maximum frequency component of interest in
anesthetized animals is less than 25 Hz.(8) These
filtered data epochs were processed for FFT or power
spectrum calculation before being used as input for the
ANN.
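The pre-processing chain above (DC subtraction, baseline reduction, 0.25-30 Hz band-pass, power spectrum) can be sketched as follows. The original work was implemented in C++; this Python sketch is illustrative only, and the 4th-order Butterworth filter and linear detrend are assumptions not specified in the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # sampling rate in Hz, as stated in the text

def preprocess_epoch(epoch):
    """Pre-process one 1-second EEG epoch (256 samples):
    DC subtraction, baseline (linear drift) reduction,
    0.25-30 Hz band-pass, then power spectrum via FFT."""
    x = np.asarray(epoch, dtype=float)
    x = x - x.mean()                      # subtract the DC value
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    x = x - (slope * t + intercept)       # reduce baseline movement
    # band-pass filter, cutoffs 0.25 and 30 Hz
    # (filter order is an assumption; the paper does not specify it)
    b, a = butter(4, [0.25, 30.0], btype="band", fs=FS)
    x = filtfilt(b, a, x)
    return np.abs(np.fft.rfft(x)) ** 2    # power spectrum
```

For a 1-second epoch sampled at 256 Hz, each FFT bin corresponds directly to 1 Hz, so the resulting 129-point power spectrum covers 0-128 Hz.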
A three-layered feed-forward back-propagation
network was used for detecting the seizures. The
network was implemented in software using the
C++ programming language on a computer.(9) The
individual computational elements that make up
most artificial neural system models are most often
referred to as processing elements (PEs). Like a
neuron, a PE has many inputs but only a single output,
which can fan out to many other PEs in the network.
The input the ith PE receives from the jth PE is denoted
xj. Each connection to the ith PE has an associated
quantity called a weight or connection strength. The
weight on the connection from the jth node to the ith
node is denoted wij. Each PE determines a net
input value based on all its input connections.(10)
The net input is calculated by summing the input
values, gated (multiplied) by their corresponding
weights. In other words, the net input to the ith unit
can be written as:
net_i = Σ_j x_j w_ij
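The net-input computation just defined reduces to a dot product between the input vector and the row of the weight matrix belonging to the ith PE. A minimal illustration (the values are made up, not taken from the paper):

```python
import numpy as np

def net_input(x, w, i):
    """Net input to the ith PE: net_i = sum over j of x_j * w_ij,
    i.e. the inputs gated (multiplied) by their connection weights."""
    return float(np.dot(w[i], x))

# illustrative values only
x = np.array([0.5, -1.0, 2.0])          # inputs x_j
w = np.array([[0.1, 0.2, 0.3],          # w[i][j]: weight from node j to PE i
              [0.4, 0.5, 0.6]])
net0 = net_input(x, w, 0)               # 0.5*0.1 + (-1.0)*0.2 + 2.0*0.3
```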
Backpropagation network: Back-propagation
learning involves propagating the error backwards
from the output layer to the hidden layers in order to
determine the updates for the weights leading to the
units in a hidden layer. The network does not have feedback
connections, but errors are back-propagated during
training using the least mean square (LMS) error.
The error at the output determines measures of the hidden-layer
output errors, which are used as the basis for
adjusting the connection weights between the input
and hidden layers. Adjusting the two sets of weights
between the pairs of layers and recalculating the
outputs is an iterative process that is carried on until
the error falls below a tolerance level. The learning rate
parameter scales the adjustments to the weights.
The input to a particular element was calculated as
the sum of the input values multiplied by the connection
strengths (synaptic weights).(11) The ANN was trained with
FFT data from selected EEG data files. During training,
the network was provided with the inputs and the desired
outputs, and the weights were adjusted accordingly
so as to minimize the error between the actual and
desired outputs. After training, the network was
tested with unknown input patterns that were not
present in the training set.
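The training procedure described above, iterative weight adjustment by back-propagation of the LMS error through a three-layer feed-forward network, can be sketched as follows. The original implementation was in C++; this Python sketch is a simplified illustration, and the sigmoid activation, weight initialization, and per-pattern update scheme are assumptions not detailed in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, n_hidden, lr, epochs):
    """Train a three-layer feed-forward network by back-propagating
    the output error (LMS) to adjust both sets of weights."""
    W1 = rng.normal(0.0, 0.5, (n_hidden, X.shape[1]))  # input -> hidden
    W2 = rng.normal(0.0, 0.5, (1, n_hidden))           # hidden -> output
    for _ in range(epochs):
        for x, t in zip(X, y):
            h = sigmoid(W1 @ x)                 # hidden-layer output
            o = sigmoid(W2 @ h)                 # network output
            d_o = (o - t) * o * (1 - o)         # output error term
            d_h = (W2.T @ d_o) * h * (1 - h)    # back-propagated hidden error
            W2 -= lr * np.outer(d_o, h)         # adjust hidden->output weights
            W1 -= lr * np.outer(d_h, x)         # adjust input->hidden weights
    return W1, W2

def predict(W1, W2, x):
    return float(sigmoid(W2 @ sigmoid(W1 @ x)))
```

In the study itself the inputs were the FFT/power-spectrum features of each epoch, with a 40-12-1 structure and a learning rate of 0.1 reported in the Results; the toy sizes used when calling this sketch would be chosen to match.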
Results
The parameters of the ANN were set to obtain optimized
performance of the network program over the entire
set of EEG data. Training of the ANN was attempted
with a variable number of hidden neurons as well as
with different learning rate parameters
in the range 0.01 to 0.5. The optimized
performance of the ANN was obtained with a structure of
40-12-1 (input, hidden and output nodes) and
a learning rate of 0.1. A schematic diagram
of the neural network used in the present study is
shown in Fig. 1.