By default, EYE uses regression and searches for the best general model of the data, just as we saw with the example of the gardening data. Sometimes, however, the data falls into a special category: it represents a classification problem. Here each datapoint falls into one of a finite number of classes, and the goal is to predict which class a new datapoint will belong to.
For instance, suppose our friend the gardener had carried out experiments on growing hybrids. Perhaps the color of the flowers on the hybrid plants varied: some had yellow flowers, some had orange flowers, and some had red flowers. Now the gardener would like to predict what color flowers will result from particular hybrid experiments. Each experiment produces a result belonging to one of a finite number of classes: yellow, orange, or red. We represent this by assigning one output variable to each class, and setting that output variable to be 1 if the result belongs to that class, and 0 otherwise.
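As an illustration of this one-output-per-class encoding (this is a sketch in Python, not part of EYE itself, and the class names are just the flower colors from the example):

```python
# Sketch of the encoding described above: one output variable per class,
# set to 1.0 if the result belongs to that class and 0.0 otherwise.

CLASSES = ["yellow", "orange", "red"]

def one_hot(color: str) -> list[float]:
    """Return one output value per class: 1.0 for the matching class, 0.0 for the rest."""
    return [1.0 if color == c else 0.0 for c in CLASSES]

# A few example experiment results and their encoded outputs:
for color in ["yellow", "red", "orange"]:
    print(color, one_hot(color))
# yellow [1.0, 0.0, 0.0]
# red    [0.0, 0.0, 1.0]
# orange [0.0, 1.0, 0.0]
```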
The user can switch on the Use classification mode if their data conforms to a classification problem (i.e. for each datapoint exactly one output has the value 1.0, corresponding to that datapoint's class, and all the other outputs are 0.0). EYE will then constrain its own predictions and models so that they also conform to this classification format.
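A quick way to check whether a dataset's outputs satisfy this format is sketched below (again, an illustrative helper rather than an EYE feature):

```python
# Check that every row of outputs has exactly one value equal to 1.0
# and that all other values are 0.0, as the classification mode requires.

def is_classification_data(output_rows: list[list[float]]) -> bool:
    """True if each datapoint's outputs contain a single 1.0 and zeros elsewhere."""
    return all(
        row.count(1.0) == 1 and all(v in (0.0, 1.0) for v in row)
        for row in output_rows
    )

print(is_classification_data([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]))  # True
print(is_classification_data([[0.5, 0.5, 0.0]]))                   # False
```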