In this experiment, a number of algorithms that can be used for training artificial neural
networks are compared on a binary classification task. The experiment uses the
*Spambase* dataset. The classification accuracy of the
resulting classifiers is determined, and the interpretation of the related graphs is reviewed.

*Spambase* dataset [UCI Machine Learning Repository]

Before fitting the models, the dataset is partitioned by the `Data Partition` operator
at rates of 60/20/20 into training, validation, and test datasets.
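As a rough sketch, a comparable 60/20/20 split could be produced in Python with scikit-learn; the random data below merely stands in for *Spambase* (which has 57 input attributes), and all variable names are illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Illustrative stand-in for the Spambase data (57 input attributes)
rng = np.random.default_rng(0)
X = rng.random((1000, 57))
y = rng.integers(0, 2, 1000)

# First take 60% for training, then halve the remainder into 20%/20%
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, train_size=0.6, stratify=y, random_state=0)
X_valid, X_test, y_valid, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=0)
```

Stratifying on the class label keeps the spam/non-spam proportions comparable across the three partitions, which the partitioning operator typically also does.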

First, a standard artificial neural network is fitted by the `NeuralNetwork` operator,
where the network topology of a multilayer perceptron is defined as `3` hidden neurons in one
hidden layer. The goodness of fit of the resulting model can be verified using standard statistics
(misclassification rate, the number of incorrectly classified cases) and graphics (response and lift curves).
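A minimal sketch of this topology in Python, assuming scikit-learn's `MLPClassifier` as a stand-in for the operator; the training data here is random and only the `hidden_layer_sizes=(3,)` setting mirrors the text:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Illustrative training partition (random stand-in for Spambase)
rng = np.random.default_rng(0)
X_train = rng.random((600, 57))
y_train = rng.integers(0, 2, 600)

# One hidden layer with 3 neurons, as in the experiment's topology
mlp = MLPClassifier(hidden_layer_sizes=(3,), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)

# Misclassification rate = 1 - accuracy
error_rate = 1 - mlp.score(X_train, y_train)
```

On real data one would evaluate `score` on the validation partition rather than the training set.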

In addition to the standard goodness-of-fit tests, we obtain results that are meaningful for artificial neural networks only. These include a graph of the neuron weights and a graph of the training history, in which the misclassification rate is shown as a function of the iteration number for the training and validation datasets.
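Such a training-history curve can be sketched in Python by refitting one iteration at a time with `warm_start`; the data and iteration count are illustrative assumptions, not the experiment's actual settings.

```python
import warnings
import numpy as np
from sklearn.neural_network import MLPClassifier

# Illustrative training and validation partitions
rng = np.random.default_rng(1)
X_tr, y_tr = rng.random((600, 57)), rng.integers(0, 2, 600)
X_va, y_va = rng.random((200, 57)), rng.integers(0, 2, 200)

# warm_start=True makes each fit() continue for max_iter more iterations,
# so we can record the misclassification rate after every iteration
mlp = MLPClassifier(hidden_layer_sizes=(3,), max_iter=1,
                    warm_start=True, random_state=0)
train_err, valid_err = [], []
with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # max_iter=1 raises convergence warnings
    for _ in range(50):
        mlp.fit(X_tr, y_tr)
        train_err.append(1 - mlp.score(X_tr, y_tr))
        valid_err.append(1 - mlp.score(X_va, y_va))
```

Plotting `train_err` and `valid_err` against the iteration index yields the kind of history graph described above.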

Similar graphs were obtained for the other two neural network fitting operators, namely the
`DMNeural` operator and the `AutoNeural` operator. For the first of these,
the exception is the following stepwise optimization statistics.

Finally, the three models can be compared by the `Model Comparison` operator.
As a result, we obtain the following statistics and graphs.
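The comparison step amounts to scoring each candidate on the held-out validation partition and keeping the model with the lowest misclassification rate. A sketch in Python, where the three candidate networks and their topologies are hypothetical stand-ins for the three operators:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Illustrative training and validation partitions
rng = np.random.default_rng(2)
X_tr, y_tr = rng.random((600, 57)), rng.integers(0, 2, 600)
X_va, y_va = rng.random((200, 57)), rng.integers(0, 2, 200)

# Three hypothetical candidate networks (topologies are assumptions)
candidates = {
    "mlp_3":   MLPClassifier(hidden_layer_sizes=(3,), max_iter=300, random_state=0),
    "mlp_8":   MLPClassifier(hidden_layer_sizes=(8,), max_iter=300, random_state=0),
    "mlp_3x3": MLPClassifier(hidden_layer_sizes=(3, 3), max_iter=300, random_state=0),
}

# Validation misclassification rate for each candidate
scores = {}
for name, model in candidates.items():
    model.fit(X_tr, y_tr)
    scores[name] = 1 - model.score(X_va, y_va)

# Select the model with the lowest validation error
best = min(scores, key=scores.get)
```

The selected model would then be evaluated once on the untouched test partition for the final accuracy estimate.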

The above statistics and figures clearly show that the best model is the first artificial neural network with the multilayer perceptron architecture, which has one hidden layer with 3 neurons.