As mentioned above, an alternative to training the PDP on the automatically derived auto-SLU-success feature is to train it on the hand-labelled SLU-success feature while still testing on the automatic feature. This second method is referred to as ``hand-labelled-training'' and the resulting feature is hlt-SLU-success. Training on hand-labelled data may provide a more accurate model, but it may not capture the characteristics of the automatic feature in the test set. Table 8 gives results for the two methods. One can see from this table that there is a slight, statistically insignificant increase in accuracy for Exchange 1 and for the whole dialogue using the hand-labelled-training method. However, the fully automated method yields a better result (79.2% compared to 77.4%) for Exchanges 1&2, which, as mentioned above, is the most important result for these experiments. This increase indicates a trend but is not statistically significant (df=866, t=1.8, p=0.066). The final row of the table gives the results of using the hand-labelled feature SLU-success in both training and testing; this is taken as the topline result.
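The comparison above can be illustrated with a minimal sketch, assuming the two training conditions are compared with a paired t-test over per-dialogue correctness indicators; the vector length, the simulated accuracies, and the choice of test are assumptions for illustration and do not reproduce the reported statistic.

\begin{verbatim}
# Minimal sketch (assumed setup): paired t-test on per-dialogue
# 0/1 correctness indicators for the two training conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical correctness vectors, one entry per test dialogue:
# fully automatic training (~79.2% correct) vs. hand-labelled-training
# (~77.4% correct). Sizes and accuracies are illustrative only.
auto_correct = rng.binomial(1, 0.792, size=867)
hlt_correct = rng.binomial(1, 0.774, size=867)

t_stat, p_value = stats.ttest_rel(auto_correct, hlt_correct)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
\end{verbatim}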