Keystroke Dynamics - Methodology Survey Results

Web Supplement to “Should Security Researchers Experiment More and Draw More Inferences?” (CSET-2011)


by
Kevin Killourhy and Roy Maxion


This webpage supplements “Should Security Researchers Experiment More and Draw More Inferences?” by Kevin Killourhy and Roy Maxion, published in CSET 2011.
Kevin Killourhy and Roy Maxion. “Should Security Researchers Experiment More and Draw More Inferences?” in 4th Workshop on Cyber Security Experimentation and Test (CSET-2011), (August 8, 2011, San Francisco, CA), USENIX Association, Berkeley, CA, 2011.
In the paper, we explain why the failure to use standard scientific practices is endangering security research. Practices such as (1) conducting comparative experiments and (2) drawing statistical inferences from the results are common in other scientific fields, but our perception is that they are applied only haphazardly in computer-security research. To investigate whether this perception is accurate, we surveyed a segment of the literature to determine how often researchers actually conduct comparative experiments and draw statistical inferences.

This webpage provides citations for each of the papers in the survey and our decisions regarding the use of comparative experiments and statistical inferences. The review methodology is presented below, but for more detailed information, please refer to our paper.

Review Methodology

Keystroke dynamics—the study of whether genuine users and impostors can be distinguished by their typing rhythms—was chosen as the topic of the survey. To obtain a large and representative sample of keystroke-dynamics research papers, we consulted the IEEE Xplore Digital Library of articles and conference proceedings published by the IEEE. We conducted two keyword searches: one for “keystroke dynamics” and one for “keystroke biometrics.” In total, the two searches returned 101 unique papers: 13 journal articles and 88 conference or workshop papers.

We screened these papers to identify those that described the evaluation of a keystroke-dynamics classifier and reported the evaluation results. This screening excluded 21 papers, leaving 80 papers for review.

We read each of the remaining 80 papers to assess whether a comparative experiment was performed. Specifically, we recognized a paper as having performed a comparative experiment if, in the section describing the evaluation and its results (including tables and figures), the researchers compared the performance of multiple classifiers on the same keystroke-dynamics data set.

We consider this definition to be lenient. In fields such as medicine, a new treatment would be compared to an established baseline treatment, not to another new treatment. However, we recognized a paper as having a comparative experiment even if two new classifiers were compared. We considered using a stricter criterion, but it can be surprisingly tricky to determine whether a classifier is intended to be new (e.g., support vector machines have been independently proposed for keystroke dynamics by several researchers).

While we recognized papers that evaluated multiple classifiers as comparative, we did not recognize papers that evaluated multiple tunings of a single classifier. An exploration of how error rates change with different amounts of training or with different anomaly-score thresholds would not be recognized as a comparative experiment. Otherwise, any paper with an ROC curve would need to be recognized as performing a comparison across different tunings. We felt that including such papers would grossly distort the value of what we aimed to measure.

Note that some of the papers include other kinds of comparisons. For instance, classifier performance is compared across different typing tasks and different biometric modes (e.g., keystroke dynamics vs face recognition). For this study, we have focused on assessing how often multiple classifiers are compared. As such, while other kinds of comparisons are valuable and have been noted, we do not count such papers toward the percentage of comparative experiments.
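
To make this criterion concrete, the decision rule can be expressed as a short, purely illustrative sketch. The review itself was done by hand; the field and function names below are hypothetical and not part of any review tooling.

```python
# Illustrative sketch of the review criterion for a "comparative experiment."
# The actual review was performed manually; these fields are hypothetical.

from dataclasses import dataclass

@dataclass
class Evaluation:
    # Number of distinct classifiers evaluated on the same keystroke-dynamics
    # data set. Different tunings of one classifier (thresholds, training
    # amounts, ROC operating points) count as a single classifier, and
    # comparisons across typing tasks or biometric modes do not add to it.
    distinct_classifiers: int

def is_comparative_experiment(evaluation: Evaluation) -> bool:
    """A paper counts as comparative only if two or more classifiers
    are compared on the same keystroke-dynamics data set."""
    return evaluation.distinct_classifiers >= 2

# An ROC curve for a single anomaly detector is not comparative;
# evaluating a new classifier against an established baseline is.
print(is_comparative_experiment(Evaluation(distinct_classifiers=1)))  # False
print(is_comparative_experiment(Evaluation(distinct_classifiers=2)))  # True
```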

For the same 80 papers, we also assessed whether a statistical inference was made. Specifically, we recognized a paper as having performed a statistical inference if, in the section describing the evaluation results and analysis (including tables and figures), the researchers reported the results of a hypothesis test (e.g., a p-value) or reported confidence intervals. A few authors used the word “significant” without, to the best of our knowledge, meaning it in a statistically precise way; we did not count such informal usage as a statistical inference. Note that we recognized all statistical inferences, not just those concerning the relative performance of multiple classifiers. For instance, if a researcher performed a hypothesis test to establish that error rates were lower on long passwords than on short passwords, we recognized it as a statistical inference.
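
As an illustration of the kind of inference we looked for, the sketch below runs a paired Wilcoxon signed-rank test (the test used in entry 3 of the table below) on invented per-subject error rates for two classifiers. The numbers and the use of scipy are assumptions made purely for illustration, not results from any surveyed paper.

```python
# Hypothetical example of drawing a statistical inference: a paired Wilcoxon
# signed-rank test on per-subject equal-error rates of two classifiers
# evaluated on the same data set. All numbers are invented.

from scipy import stats

classifier_a = [0.120, 0.090, 0.150, 0.110, 0.100, 0.140, 0.130, 0.080]
classifier_b = [0.105, 0.083, 0.128, 0.119, 0.096, 0.111, 0.119, 0.077]

statistic, p_value = stats.wilcoxon(classifier_a, classifier_b)
print(f"Wilcoxon signed-rank test: p = {p_value:.3f}")

# Reporting a p-value (or a confidence interval) is what we recognized as a
# statistical inference; an informal use of the word "significant" is not.
```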

Summary

Of the 80 papers, only 43 (53.75%) performed a comparative experiment, and only 6 (7.5%) drew statistical inferences. Our intent in surveying the literature is not to criticize individual papers, but to demonstrate the absence of standard scientific practices across the field. These results confirm our suspicion that comparative experiments and statistical inferences are not part of the field's common methodology. Because of the importance of comparative experiments and inferential statistics for scientific discovery, we hope that these results will promote discussion of these practices, of the problems that arise when they are not used, and of when they should be required. Our opinion is that comparative experiments and statistical inferences are necessary for a science of security.
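
The percentages follow directly from the counts; the minimal check below reproduces them.

```python
# Reproducing the survey percentages from the counts reported above.
papers_reviewed = 80   # papers remaining after screening
comparative = 43       # papers performing a comparative experiment
inferences = 6         # papers drawing a statistical inference

print(f"Comparative experiments: {100 * comparative / papers_reviewed:.2f}%")  # 53.75%
print(f"Statistical inferences:  {100 * inferences / papers_reviewed:.2f}%")   # 7.50%
```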

Citation Comparative Experiments? Statistical Inferences? Notes
1 N. Bartlow and B. Cukic. “Evaluating the Reliability of Credential Hardening through Keystroke Dynamics,” in 17th IEEE International Symposium on Software Reliability Engineering (ISSRE 2006), (November 6–11, 2006, Raleigh, NC), pp. 117–126, IEEE Computer Society, Los Alamitos, CA, 2006. Yes Yes A random-forest classifier is evaluated in a variety of configurations and on multiple typing tasks (i.e., password length and complexity). Evaluations of other classifiers were performed as well; they were only briefly described, but the error rates are presented as points in a figure. Statistical t-tests were conducted to compare performance across typing tasks.
2 R. Giot, B. Hemery, and C. Rosenberger. “Low Cost and Usable Multimodal Biometric System Based on Keystroke Dynamics and 2D Face Recognition,” in 20th International Conference on Pattern Recognition (ICPR 2010), (August 23–26, 2010, Istanbul, Turkey), pp. 1128–1131, IEEE Computer Society, Los Alamitos, CA, 2010. Yes Yes Three different classifiers are evaluated, and their error rates are compared. Comparisons are also made across biometric modalities (e.g., keystroke dynamics and face recognition). In Table 1, substantial differences in error rates are denoted with symbols (e.g., ‘*’ and ‘=’). These symbols correspond to the conventional indicators of various significance levels in hypothesis testing.
3 K. S. Killourhy and R. A. Maxion. “Comparing Anomaly-Detection Algorithms for Keystroke Dynamics,” in IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2009), (June 29–July 2, 2009, Estoril, Lisbon, Portugal), pp. 125–134, IEEE Computer Society, Los Alamitos, CA, 2009. Yes Yes Fourteen classifiers are evaluated, and the error rates are compared. Wilcoxon signed-rank tests are used to draw statistical inferences.
4 N. Yager and T. Dunstone. “The Biometric Menagerie,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 220–230, 2010. Yes Yes Two keystroke-dynamics classifiers are evaluated alongside classifiers for various other biometrics, and the error rates are compared. Kruskal-Wallis tests are employed to test whether the “animals” are the same across classifiers (i.e., users whose error rates are likened to animals such as lambs, goats, and wolves).
5 M. El-Abed, R. Giot, B. Hemery, and C. Rosenberger. “A Study of Users' Acceptance and Satisfaction of Biometric Systems,” in 44th IEEE International Carnahan Conference on Security Technology (ICCST 2010), (October 5–8, 2010, San Jose, CA), pp. 170–178, IEEE, Piscataway, NJ, 2010. No Yes An existing keystroke-dynamics classifier is re-evaluated along with a face-based recognition system, but technically, multiple keystroke-dynamics classifiers are not compared. The two biometric modes are compared not only by error rates but also by acceptability, as measured by a user survey. Kruskal-Wallis tests are used to draw statistical inferences.
6 M. Villani, C. Tappert, G. Ngo, J. Simone, H. S. Fort, and S. Cha. “Keystroke Biometric Recognition Studies on Long-Text Input under Ideal and Application-Oriented Conditions,” in Conference on Computer Vision and Pattern Recognition Workshop (CVPRW 2006), (June 17–22, 2006, New York, NY), pp. 39–46, IEEE Computer Society Press, 2006. No Yes Comparisons are made across data sets with and without outliers, and with different training amounts, tasks, keyboards, and numbers of subjects, but multiple classifiers are not compared. χ²-tests are used to draw statistical inferences.
7 G. L. F. B. G. Azevedo, G. D. C. Cavalcanti, and E. C. B. C. Filho. “An Approach to Feature Selection for Keystroke Dynamics Systems Based on PSO and Feature Weighting,” in IEEE Congress on Evolutionary Computation (CEC 2007), (September 25–28, 2007), pp. 3577–3584, IEEE, Piscataway, NJ, 2007. Yes No A standard genetic algorithm and several particle-swarm-optimization classifiers are evaluated, and the error rates are compared. No statistical inferences are drawn.
8 G. L. F. B. G. Azevedo, G. D. C. Cavalcanti, and E. C. B. C. Filho. “Hybrid Solutions for the Feature Selection in Personal Identification Problems through Keystroke Dynamics,” in International Joint Conference on Neural Networks (IJCNN 2007), (August 12–17, 2007, Orlando, FL), pp. 1947–1952, IEEE, Piscataway, NJ, 2007. Yes No Genetic-algorithm and particle-swarm-optimization classifiers are evaluated, and the error rates are compared. While descriptive measures of uncertainty are presented (e.g., standard deviations), no formal statistical inferences are drawn.
9 S. Bleha, C. Slivinsky, and B. Hussien. “Computer-Access Security Systems Using Keystroke Dynamics,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 12, pp. 1217–1222, 1990. Yes No Multiple classifiers are evaluated (e.g., minimum-distance, Bayes, and a combination of the two), and the error rates are compared. No statistical inferences are drawn.
10 M. Brown and S. J. Rogers. “A Practical Approach to User Authentication,” in 10th Annual Computer Security Applications Conference, (December 5–9, 1994, Orlando, FL), pp. 108–116, IEEE Computer Society, Los Alamitos, CA, 1994. Yes No ADALINE and back-propagation neural networks are evaluated, and the error rates are compared. The word "significantly" is used, but the context does not suggest a statistically precise sense of the word.
11 P. Campisi, E. Maiorana, M. L. Bosco, and A. Neri. “User Authentication Using Keystroke Dynamics For Cellular Phones,” IET Signal Processing, vol. 3, no. 4, pp. 333–341, 2009. Yes No In addition to comparisons of different tunings and tasks (e.g., amount of training and length of password), different classifiers are evaluated and the error rates compared. Different classifiers are distinguished by different transformation functions (e.g., min-max, z-score, sigmoid). The line between different tunings of a single classifier and multiple classifiers is admittedly gray. Transformations of similar complexity are presented as distinct classifiers in other papers, and we consider them distinct in this paper as well. No statistical inferences are drawn.
12 W. Chang. “Improving Hidden Markov Models with a Similarity Histogram for Typing Pattern Biometrics,” in IEEE International Conference on Information Reuse and Integration (IRI 2005), (August 15–17, 2005, Las Vegas, NV), pp. 487–493, IEEE, Piscataway, NJ, 2005. Yes No A previously proposed classifier is modified, adding a preprocessing step involving a similarity histogram; the new classifier is evaluated and compared to the old one. The modification seems substantial, and the comparison of a new technology to a current baseline is sound practice. No statistical inferences are drawn.
13 H. Davoudi and E. Kabir. “A New Distance Measure for Free Text Keystroke Authentication,” in 14th International CSI Computer Conference (CSICC 2009), (October 20–21, 2009, Tehran, Iran), pp. 570–575, IEEE, Piscataway, NJ, 2009. Yes No Three different distance measures are evaluated (i.e., a previously used one, a new one, and a combination), and their error rates are compared. The different measures seem substantial, and the comparison of a new technology to a current baseline is sound practice. No statistical inferences are drawn.
14 H. Davoudi and E. Kabir. “Modification of the Relative Distance for Free Text Keystroke Authentication,” in 5th International Symposium on Telecommunications (IST 2010), (December 4–6, 2010, Tehran, Iran), pp. 547–551, IEEE, Piscataway, NJ, 2010. Yes No Two different distance measures are evaluated (i.e., a previously used one and a new one), and their error rates are compared. The different distance measures seem substantially different, and the comparison of a new technology to a current baseline is sound practice. No statistical inferences are drawn.
15 S. T. de Magalhães, K. Revett, and H. M. D. Santos. “Password Secured Sites—Stepping Forward with Keystroke Dynamics,” in International Conference on Next Generation Web Services Practices (NWeSP 2005), (August 22–26, 2005, Seoul, Korea), pp. 293–298, IEEE Computer Society, Los Alamitos, CA, 2005. Yes No A previously proposed “lightweight” classifier is modified by treating latencies for keystroke pairs differently, depending on their location on a keyboard; the new classifier is evaluated and compared to the old one. The modification seems substantial, and the comparison of a new technology to a current baseline is sound practice. No statistical inferences are drawn.
16 H. Dozono and M. Nakakuni. “An Integration Method of Multi-Modal Biometrics Using Supervised Pareto Learning Self Organizing Maps,” in International Joint Conference on Neural Networks (IJCNN 2008), part of the IEEE World Congress on Computational Intelligence (WCCI 2008), (June 1–8, 2008, Hong Kong), pp. 602–606, IEEE, Piscataway, NJ, 2008. Yes No Standard self-organizing maps and a new kind of self-organizing map are evaluated, and the error rates are compared. No statistical inferences are drawn.
17 R. Giot, M. El-Abed, and C. Rosenberger. “Keystroke Dynamics with Low Constraints SVM Based Passphrase Enrollment,” in IEEE Third International Conference on Biometrics: Theory, Applications and Systems (BTAS 2009), (September 28–30, 2009, Washington, DC), pp. 425–430, IEEE, Piscataway, NJ, 2009. Yes No A newly proposed classifier (SVM) and several state-of-the-art classifiers are evaluated, and the error rates are compared. Comparisons are also made of different tuning parameters and tasks (e.g., different keyboards, amount of training, adaptation mechanisms, thresholds, and number of enrolled users). The best equal-error rate is presented in bold, a convention which sometimes signifies a statistical test, but no inferential procedures are described.
18 R. Giot, M. El-Abed, and C. Rosenberger. “Keystroke Dynamics Authentication for Collaborative Systems,” in International Symposium on Collaborative Technologies and Systems (CTS 2009), (May 18–22, 2009, Baltimore, MD), pp. 172–179, IEEE, Piscataway, NJ, 2009. Yes No Four or five classifiers are evaluated, and their error rates are compared. No statistical inferences are drawn.
19 S. Haider, A. Abbas, and A. K. Zaidi. “A Multi-Technique Approach for User Identification through Keystroke Dynamics,” in IEEE International Conference on Systems, Man and Cybernetics (SMC 2000), (October 8–11, 2000, Nashville, TN), vol. 2, pp. 1336–1341, IEEE, Piscataway, NJ, 2000. Yes No Three classifiers are evaluated (i.e., fuzzy-logic, neural networks, and statistics), and the error rates are compared. No statistical inferences are drawn.
20 N. Harun, S. S. Dlay, and W. L. Woo. “Performance of Keystroke Biometrics Authentication System Using Multilayer Perceptron Neural Network (MLP NN),” in 7th International Symposium on Communication Systems, Networks & Digital Signal Processing, (July 21–23, 2010, Newcastle upon Tyne, UK), pp. 711–714, IEEE, Piscataway, NJ, 2010. Yes No Two neural networks are evaluated, each using a different preprocessing step (e.g., principal component analysis), and the error rates are compared. This difference between the classifiers seems substantial. No statistical inferences are drawn.
21 N. Harun, W. L. Woo, and S. S. Dlay. “Performance of Keystroke Biometrics Authentication System Using Artificial Neural Network (ANN) and Distance Classifier Method,” in International Conference on Computer and Communication Engineering (ICCCE 2010), (May 11–13, 2010, Kuala Lumpur, Malaysia), pp. 1–6, IEEE, Piscataway, NJ, 2010. Yes No Different neural networks and distance measures are evaluated, and their error rates are compared. While descriptive measures of uncertainty are presented (e.g., mean-squared errors and standard deviations), no formal statistical inferences are drawn.
22 S. Hocquet, J. Ramel, and H. Cardot. “Fusion of Methods for Keystroke Dynamics Authentication,” in 4th IEEE Workshop on Automatic Identification Advanced Technologies (AutoID-2005), (October 17–18, 2005, Buffalo, NY), pp. 224–229, IEEE Computer Society, Los Alamitos, CA, 2005. Yes No Different classifiers, composed of different normalization functions and transformations, are evaluated, and the error rates are compared. The differences in these functions and transformations seem substantial. No statistical inferences are drawn.
23 S. Hocquet, J. Ramel, and H. Cardot. “Estimation of User Specific Parameters in One-class Problems,” in Proceedings of the 18th International Conference on Pattern Recognition (ICPR 2006), (August 20–24, 2006, Hong Kong, China), vol. 4, pp. 449–452, IEEE Computer Society, Los Alamitos, CA, 2006. Yes No Several different classifiers (e.g., k-nearest neighbors, neural networks, and SVMs) are evaluated, and the error rates are compared. No statistical inferences are drawn.
24 J. Hu, D. Gingrich, and A. Sentosa. “A k-Nearest Neighbor Approach for User Authentication through Biometric Keystroke Dynamics,” in IEEE International Conference on Communications (ICC 2008), (May 19–23, 2008, Beijing, China), pp. 1556–1560, IEEE, Piscataway, NJ, 2008. Yes No A new clustering-based classifier is evaluated alongside a previously proposed classifier, and the error rates are compared. No statistical inferences are drawn.
25 S. S. Joshi and V. V. Phoha. “Competition Between SOM Clusters to Model User Authentication System in Computer Networks,” in 2nd International Conference on Communication System Software and Middleware (COMSWARE 2007), (January 7–12, 2007, Bangalore, India), pp. 284–291, IEEE, Piscataway, NJ, 2007. Yes No A new classifier is evaluated alongside a previously proposed decision-tree classifier, and the error rates are compared. No statistical inferences are drawn.
26 M. Karnan and M. Akila. “Identity Authentication based on Keystroke Dynamics using Genetic Algorithm and Particle Swarm Optimization,” in 2nd IEEE International Conference on Computer Science and Information Technology (ICCSIT 2009), (August 8–11, 2009, Beijing, China), pp. 203–207, IEEE, Piscataway, NJ, 2009. Yes No Genetic-algorithm and particle-swarm-optimization classifiers are evaluated, and the error rates are compared. Additional comparisons are made across tuning parameters (e.g., feature sets). No statistical inferences are drawn.
27 M. Karnan and M. Akila. “Personal Authentication based on Keystroke Dynamics using Soft Computing Techniques,” in 2nd International Conference on Communication Software and Networks, (February 26–28, 2010), pp. 334–338, IEEE Computer Society, Los Alamitos, CA, 2010. Yes No Three classifiers (i.e., genetic algorithm, particle-swarm optimization, and a newly proposed ant-colony algorithm) are evaluated, and the error rates are compared. No statistical inferences are drawn.
28 M. Karnan and N. Krishnaraj. “Bio Password—Keystroke Dynamic Approach to Secure Mobile Devices,” in IEEE International Conference on Computational Intelligence and Computing Research, (December 28–29, 2010, Tamilnadu, India), pp. 1–4, IEEE, Piscataway, NJ, 2010. Yes No Different preprocessing steps for feature selection are evaluated, and the error rates are compared. No statistical inferences are drawn.
29 C. C. Loy, W. K. Lai, and C. P. Lim. “Keystroke Patterns Classification using the ARTMAP-FD Neural Network,” in 3rd International Conference on Intelligent Information Hiding and Multimedia Signal Processing, (November 26–28, 2007, Kaohsiung, Taiwan), vol. 1, pp. 61–64, IEEE, Piscataway, NJ, 2007. Yes No Multiple classifiers are evaluated (e.g., Gaussian, k-nearest neighbor, and ARTMAP), and the error rates are compared. Additional comparisons are made across features (e.g., timing vs pressure). The best error rate is presented in bold, a convention which sometimes signifies a statistical test, but no inferential procedures are described.
30 H. Lv, Z. Lin, W. Yin, and J. Dong. “Emotion Recognition Based on Pressure Sensor Keyboards,” in IEEE International Conference on Multimedia and Expo (ICME 2008), (June 23–26, 2008, Hannover, Germany), pp. 1089–1092, IEEE, Piscataway, NJ, 2008. Yes No Three algorithms intended to identify typist emotion are evaluated, and the error rates are compared. The best error rate is presented in bold, a convention which sometimes signifies a statistical test, but no inferential procedures are described.
31 H. Lv and W. Wang. “Biologic Verification Based on Pressure Sensor Keyboards and Classifier Fusion Techniques,” IEEE Transactions on Consumer Electronics, vol. 52, no. 3, pp. 1059–1063, 2006. Yes No Three algorithms are evaluated, and the error rates are compared. No statistical inferences are drawn.
32 A. Mészáros, Z. Bankó, and L. Czúni. “Strengthening Passwords by Keystroke Dynamics,” in IEEE International Workshop on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications, (September 6–8, 2007, Dortmund, Germany), pp. 574–577, IEEE, Piscataway, NJ, 2007. Yes No A principal-component-analysis-based classifier and a Euclidean-distance classifier are evaluated, and the error rates are compared. No statistical inferences are drawn.
33 S. Modi and S. Elliott. “Keystroke Dynamics Verification Using a Spontaneously Generated Password,” in 40th IEEE International Carnahan Conference on Security Technology, (October 16–19, 2006, Lexington, KY), pp. 116–121, IEEE, Piscataway, NJ, 2006. Yes No Three classifiers are evaluated (one previously proposed, and two newly developed), and their error rates are compared. No statistical inferences are drawn.
34 J. Montalvão, C. A. S. Almeida, and E. O. Freire. “Equalization of Keystroke Timing Histograms for Improved Identification Performance,” in International Telecommunications Symposium, (September 3–6, 2006, Fortaleza, Brazil), pp. 560–565, IEEE, Piscataway, NJ, 2006. Yes No Multiple classifiers are evaluated, and their error rates are compared. No statistical inferences are drawn.
35 M. S. Obaidat and B. Sadoun. “Verification of Computer Users Using Keystroke Dynamics,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 27, no. 2, pp. 261–269, 1997. Yes No Many different classifiers (including both traditional pattern-recognition algorithms and neural networks) are evaluated, and the error rates are compared. No statistical inferences are drawn.
36 S. Park, J. Park, and S. Cho. “User Authentication Based on Keystroke Analysis of Long Free Texts with a Reduced Number of Features,” in 2nd International Conference on Communication Systems, Networks and Applications (ICCSNA 2010), (June 29–July 1, 2010, Hong Kong), pp. 433–435, IEEE, Piscataway, NJ, 2010. Yes No A newly proposed classifier is evaluated alongside some previous approaches; the error rates are compared. No statistical inferences are drawn.
37 N. Pavaday and K. M. S. Soyjaudah. “Investigating Performance of Neural Networks in Authentication using Keystroke Dynamics,” in IEEE AFRICON 2007, (September 26–28, 2007, Windhoek, South Africa), pp. 1–8, IEEE, Piscataway, NJ, 2007. Yes No A neural network is evaluated alongside previously proposed fuzzy-logic and statistical classifiers, and the error rates are compared. No statistical inferences are drawn.
38 G. Z. Pedernera, S. Sznur, G. S. Ovando, S. García, and G. Meschino. “Revisiting Clustering Methods to their Application on Keystroke Dynamics for Intruder Classification,” in IEEE Workshop on Biometric Measurements and Systems for Security and Medical Applications (BioMS 2010), (September 9, 2010, Taranto, Italy), pp. 36–40, IEEE, Piscataway, NJ, 2010. Yes No Multiple clustering methods are evaluated (e.g., adapted k-means and adapted subtractive clustering), and the error rates are compared. No statistical inferences are drawn.
39 J. A. Robinson, V. M. Liang, J. A. M. Chambers, and C. L. MacKenzie. “Computer User Verification Using Login String Keystroke Dynamics,” IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, vol. 28, no. 2, pp. 236–241, 1998. Yes No Three different classifiers are evaluated, and the results are compared. No statistical inferences are drawn.
40 T. Shimshon, R. Moskovitch, L. Rokach, and Y. Elovici. “Continuous Verification Using Keystroke Dynamics,” in 6th International Conference on Computational Intelligence and Security (CIS 2010), (Dec 11–14, 2010, Nanning, China), pp. 411–415, IEEE Computer Society, Los Alamitos, CA, 2010. Yes No A new classifier is evaluated alongside a previously proposed one, and the error rates are compared. Additional comparisons are made across tasks (e.g., length of session). No statistical inferences are drawn.
41 T. Shimshon, R. Moskovitch, L. Rokach, and Y. Elovici. “Clustering Di-Graphs for Continuously Verifying Users according to their Typing Patterns,” in IEEE 26th Convention of Electrical and Electronics Engineers in Israel, (November 17–20, 2010, Eilat, Israel), pp. 445–449, IEEE, Piscataway, NJ, 2010. Yes No A new classifier is evaluated alongside a previously proposed one, and the error rates are compared. No statistical inferences are drawn.
42 S. Sinthupinyo, W. Roadrungwasinkul, and C. Chantan. “User Recognition Via Keystroke Latencies Using SOM and Backpropagation Neural Network,” in ICROS-SICE International Joint Conference, (August 18–21, 2009, Fukuoka City, Japan), pp. 3160–3165, SICE, Tokyo, Japan, 2009. Yes No Several different self-organizing maps, leveraging neural networks and decision trees, are evaluated, and the error rates are compared. While descriptive measures of uncertainty are presented (e.g., standard deviations), no formal statistical inferences are drawn.
43 P. S. Teh, A. B. J. Teoh, T. S. Ong, and H. F. Neo. “Statistical Fusion Approach on Keystroke Dynamics,” in 3rd International IEEE Conference on Signal-Image Technologies and Internet-Based Systems (SITIS 2007), (December 16–19, 2007, Jiangong Jinjiang, Shanghai, China), pp. 918–923, IEEE, Piscataway, NJ, 2007. Yes No A Gaussian classifier and a direct-similarity-measure classifier are evaluated (along with their combination), and the error rates are compared. No statistical inferences are drawn.
44 E. Yu and S. Cho. “GA-SVM Wrapper Approach for Feature Subset Selection in Keystroke Dynamics Identity Verification,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN), (July 20–24, 2003, Portland, OR), vol. 3, pp. 2253–2257, IEEE, Piscataway, NJ, 2003. Yes No A new feature-selection method is proposed and evaluated alongside a classifier without feature selection; the error rates are compared. No statistical inferences are drawn.
45 Y. Zhang, G. Chang, L. Liu, and J. Jia. “Authenticating User's Keystroke Based on Statistical Models,” in 4th International Conference on Genetic and Evolutionary Computing, (December 13–15, 2010, Shenzhen, China), pp. 578–581, IEEE Computer Society, Los Alamitos, CA, 2010. Yes No One classifier with a time-series model and another without such a model are evaluated, and their error rates are compared. No statistical inferences are drawn.
46 A. A. E. Ahmed and I. Traore. “Anomaly Intrusion Detection Based on Biometrics,” in 6th Annual IEEE Systems, Man and Cybernetics (SMC) Information Assurance Workshop (IAW 2005), (June 15–17, 2005, West Point, NY), pp. 452–453, IEEE, Piscataway, NJ, 2005. No No The classifier is evaluated in isolation. No statistical inferences are drawn.
47 A. Ali, Wahyudi, and M. J. E. Salami. “Keystroke Pressure Based Typing Biometrics Authentication System by Combining ANN and ANFIS-Based Classifiers,” in 5th International Colloquium on Signal Processing and Its Applications (CSPA 2009), (March 6–8, 2009, Kuala Lumpur, Malaysia), pp. 198–203, IEEE, Piscataway, NJ, 2009. No No Two classifiers are proposed (ANN and ANFIS), but error rates are only reported for the combined classifier. No statistical inferences are drawn.
48 L. C. F. Araújo, L. H. R. Sucupira Jr., M. G. Lizárraga, L. L. Ling, and J. B. T. Yabu-uti. “User Authentication through Typing Biometrics Features,” IEEE Transactions on Signal Processing, vol. 53, no. 2, pp. 851–855, 2005. No No Error rates are compared across different tunings and environmental conditions (e.g., feature sets, typed strings, number of attempts, use of updating, timing accuracy, and amount of training), but multiple classifiers are not compared. No statistical inferences are drawn.
49 G. C. Boechat, J. C. Ferreira, and E. C. B. C. Filho. “Authentication Personal,” in International Conference on Intelligent and Advanced Systems (ICAIS 2007), (November 25–28, 2007, Kuala Lumpur, Malaysia), pp. 254–256, IEEE, Piscataway, NJ, 2007. No No Different thresholds and features are compared, but no comparisons are made among multiple classifiers. No statistical inferences are drawn.
50 Z. Changshui and S. Yanhua. “AR Model for Keystroker Verification,” in IEEE International Conference on Systems, Man, and Cybernetics, (October 8–11, 2000, Nashville, TN), vol. 4, pp. 2887–2890, IEEE, Piscataway, NJ, 2000. No No Error rates are compared across different tunings (e.g., estimation methods and order parameters for fitting auto-regressive models) but not across multiple classifiers. No statistical inferences are drawn.
51 W. Chen and W. Chang. “Applying Hidden Markov Models to Keystroke Pattern Analysis for Password Verification,” in IEEE International Conference on Information Reuse and Integration (IRI 2004), (November 8–10, 2004, Las Vegas, NV), pp. 467–474, IEEE, Piscataway, NJ, 2004. No No Error rates are compared across different tunings (e.g., detection thresholds), but not across multiple classifiers. No statistical inferences are drawn.
52 N. L. Clarke, S. M. Furnell, P. L. Reynolds, and P. M. Rodwell. “Advanced Subscriber Authentication Approaches for Third Generation Mobile Systems,” in 3rd International Conference on 3G Mobile Communication Technologies, (May 8–10, 2002, London, UK), pp. 319–323, Institution of Electrical Engineers, London, UK, 2002. No No Error rates are compared across different tunings and tasks (e.g., PIN vs phone number), but only one neural-network classifier is evaluated. No statistical inferences are drawn.
53 W. G. de Ru and J. H. P. Eloff. “Enhanced Password Authentication through Fuzzy Logic,” IEEE Expert, vol. 12, no. 6, pp. 38–45, 1997. No No Error rates for a single fuzzy-logic classifier are reported. No statistical inferences are drawn.
54 W. E. Eltahir, M. J. E. Salami, A. F. Ismail, and W. K. Lai. “Dynamic Keystroke Analysis Using AR Model,” in IEEE International Conference on Industrial Technology (ICIT 2004), (December 8–10, 2004, Hammamet, Tunisia), vol. 3, pp. 1555–1560, IEEE, Piscataway, NJ, 2004. No No Error rates are presented only for the proposed time-series-based classifier. No statistical inferences are drawn.
55 S. Giroux, R. Wachowiak-Smolikova, and M. P. Wachowiak. “Keystroke-Based Authentication by Key Press Intervals as a Complementary Behavioral Biometric,” in IEEE International Conference on Systems, Man and Cybernetics (SMC 2009), (October 11–14, 2009, San Antonio, TX), pp. 80–85, IEEE, Piscataway, NJ, 2009. No No The proposed classifier is evaluated in isolation. Error bars and similar measures of uncertainty are presented for intermediate values in the evaluation (e.g., inter-keystroke latencies) but no statistical inferences are drawn from the evaluation results.
56 S. Giroux, R. Wachowiak-Smolikova, and M. P. Wachowiak. “Keypress Interval Timing Ratios as Behavioral Biometrics for Authentication in Computer Security,” in 1st International Conference on Networked Digital Technologies (NDT 2009), (July 28–31, 2009, Ostrava, The Czech Republic), pp. 195–200, IEEE, Piscataway, NJ, 2009. No No The proposed classifier is evaluated in isolation. Error bars and similar measures of uncertainty are presented for intermediate values in the evaluation (e.g., inter-keystroke latencies) but no statistical inferences are drawn from the evaluation results.
57 N. J. Grabham and N. M. White. “Use of a Novel Keypad Biometric for Enhanced User Identity Verification,” in International Instrumentation and Measurement Technology Conference (I²MTC 2008), (May 12–15, 2008, Victoria, British Columbia, Canada), pp. 12–16, IEEE, Piscataway, NJ, 2008. No No The proposed classifier is evaluated in isolation. No statistical inferences are drawn.
58 D. Hosseinzadeh, S. Krishnan, and A. Khademi. “Keystroke Identification Based on Gaussian Mixture Models,” in IEEE International Conference on Acoustics, Speech, and Signal Processing, (May 14–19, 2006, Toulouse, France), vol. 3, pp. 1144–1147, IEEE, Piscataway, NJ, 2006. No No Comparisons are made across tunings of the classifier (e.g., feature sets), but multiple classifiers are not compared. No statistical inferences are drawn.
59 D. Hosseinzadeh and S. Krishnan. “Gaussian Mixture Modeling of Keystroke Patterns for Biometric Applications,” IEEE Transactions on Systems, Man, and Cybernetics—Part C: Applications and Reviews, vol. 38, no. 6, pp. 816–826, 2008. No No Comparisons are made across different tunings (e.g., feature sets, thresholds, and number of authentication attempts), but multiple classifiers are not compared. The best error rates are presented in bold, a convention which sometimes signifies a statistical test, but no inferential procedures are described.
60 R. Koch and G. D. Rodosek. “User Identification in Encrypted Network Communications,” in International Conference on Network and Service Management, (October 25–29, 2010, Niagara Falls, ON, Canada), pp. 246–249, IEEE, Piscataway, NJ, 2010. No No A classifier is evaluated with and without a cluster verification step, and the error rates are compared. However, the authors explain that cluster verification acts like an oracle. Comparing performance with and without this cluster verification is intended to compare the classifier's performance if clustering were done perfectly to its actual performance. The oracle is not a practical option, and so multiple classifiers are not compared. No statistical inferences are drawn.
61 D. Lin. “Computer-Access Authentication with Neural Network Based Keystroke Identity Verification,” in International Conference on Neural Networks (ICNN 1997), (June 9–12, 1997, Houston, TX), vol. 1, pp. 174–178, IEEE, Piscataway, NJ, 1997. No No Comparisons are made across tunings and user groups (e.g., number of hidden nodes and typing proficiency), but multiple classifiers are not compared. No statistical inferences are drawn.
62 S. Mandujano and R. Soto. “Deterring Password Sharing: User Authentication via Fuzzy c-Means Clustering Applied to Keystroke Biometric Data,” in 5th Mexican International Conference on Computer Science, (September 20–24, 2004, Colima, Mexico), pp. 181–187, IEEE Computer Society, Los Alamitos, CA, 2004. No No The proposed classifier is evaluated in isolation. No statistical inferences are drawn.
63 R. A. Maxion and K. S. Killourhy. “Keystroke Biometrics with Number-Pad Input,” in Proceedings of the 40th IEEE/IFIP International Conference on Dependable Systems and Networks (DSN-2010), (June 28–July 1, 2010, Chicago, IL), pp. 201–210, IEEE, Los Alamitos, CA, 2010. No No Comparisons are made across tasks and tunings (e.g., different numbers of attempts and different outlier-handling strategies), but multiple classifiers are not compared. No statistical inferences are drawn.
64 J. R. Montalvão Filho and E. O. Freire. “Multimodal Biometric Fusion—Joint Typist (Keystroke) and Speaker Verification,” in International Telecommunications Symposium, (September 3–6, 2006), pp. 609–614, IEEE, Piscataway, NJ, 2006. No No Error rates for a keystroke-dynamics classifier are compared to error rates for different biometrics (e.g., vocal pitch), and the various biometrics are combined to compare unimodal and multimodal biometrics. However, multiple keystroke-dynamics classifiers are not compared. No statistical inferences are drawn.
65 A. Ogihara, H. Matsumura, and A. Shiozaki. “Biometric Verification Using Keystroke Motion and Key Press Timing for ATM User Verification,” in Proceedings of the International Symposium on Intelligent Signal Processing and Communication (ISPACS'06), (December 12–15, 2006, Tottori, Japan), pp. 223–226, IEEE, Piscataway, NJ, 2006. No No Comparisons are made across tunings (e.g., feature sets), but multiple classifiers are not compared. No statistical inferences are drawn.
66 N. Pavaday and K. M. S. Soyjaudah. “Comparative Study of Secret Code Variants in Terms of Keystroke Dynamics,” in 3rd International Conference on Risks and Security of Internet Systems (CRiSIS 2008), (October 28–30, 2008, Tozeur, Tunisia), pp. 133–140, IEEE, Piscataway, NJ, 2008. No No Comparisons are made across typing tasks (e.g., letters vs numbers), but the proposed neural-network classifier is evaluated in isolation. No statistical inferences are drawn.
67 K. Revett. “A Bioinformatics Based Approach to Behavioural Biometrics,” in Frontiers in the Convergence of Bioscience and Information Technology, (October 11–13, 2007, Jeju Island, Korea), pp. 665–670, IEEE Computer Society, Los Alamitos, CA, 2007. No No Classifiers for two different biometric tasks are evaluated (i.e., identification and verification), but multiple classifiers are not compared. No statistical inferences are drawn.
68 K. Revett, S. T. de Magalhães, and H. Santos. “Data Mining a Keystroke Dynamics Based Biometrics Database Using Rough Sets,” in 12th Portuguese Conference on Artificial Intelligence (EPIA 2005), (December 5–8, 2005), pp. 188–191, IEEE, Piscataway, NJ, 2005. No No Comparisons are made across tuning parameters (e.g., amount of rule pruning), but multiple classifiers are not compared. No statistical inferences are drawn.
69 M. Rybnik, M. Tabedzki, and K. Saeed. “A Keystroke Dynamics Based System for User Identification,” in 7th Computer Information Systems and Industrial Management Applications (CISIM 2008), (June 26–28, 2008, Ostrava, The Czech Republic), pp. 225–230, IEEE Computer Society, Los Alamitos, CA, 2008. No No Comparisons are made across feature sets, feature weighting, and voting mechanisms, but multiple classifiers are not compared. No statistical inferences are drawn.
70 M. Rybnik, P. Panasiuk, and K. Saeed. “User Authentication with Keystroke Dynamics using Fixed Text,” in International Conference on Biometrics and Kansei Engineering (ICBAKE 2009), (June 25–28, 2009, Cieszyn, Poland), pp. 70–75, IEEE Computer Society, Los Alamitos, CA, 2009. No No Comparisons are made across tunings and typing tasks, but multiple classifiers are not compared. The word "significantly" is used, but the context does not suggest a statistically precise sense of the word.
71 H. Saevanee and P. Bhattarakosol. “User Authentication Using Combination of Behavioral Biometrics Over the Touchpad Acting Like Touch Screen of Mobile Device,” in International Conference on Computer and Electrical Engineering (ICCEE 2008), (December 20–22, 2008, Phuket, Thailand), pp. 82–86, IEEE Computer Society, Los Alamitos, CA, 2008. No No Comparisons are made across feature sets, but multiple classifiers are not compared. No statistical inferences are drawn.
72 H. Saevanee and P. Bhattarakosol. “Authenticating User Using Keystroke Dynamics and Finger Pressure,” in 6th IEEE Consumer Communications and Networking Conference (CCNC 2009), (January 10–13, 2009, Las Vegas, NV), pp. 1–2, IEEE, Piscataway, NJ, 2009. No No Comparisons are made across feature sets, but multiple classifiers are not compared. No statistical inferences are drawn.
73 T. Samura and H. Nishimura. “Keystroke Timing Analysis for Individual Identification in Japanese Free Text Typing,” in ICROS-SICE International Joint Conference, (August 18–21, 2009, Fukuoka City, Japan), pp. 3166–3170, Society of Instrument and Control Engineers (SICE), Tokyo, Japan, 2009. No No Comparisons are made across user groups and feature sets, but multiple classifiers are not compared. No statistical inferences are drawn.
74 M. Sharif, T. Faiz, and M. Raza. “Time Signatures—An Implementation of Keystroke and Click Patterns for Practical and Secure Authentication,” in 3rd International Conference on Digital Information Management (ICDIM 2008), (November 13–16, 2008, London, UK), pp. 559–562, IEEE, Piscataway, NJ, 2008. No No Comparisons are made across typing task (e.g., long and short passwords) and user groups (e.g., beginner vs expert), but multiple classifiers are not compared. No statistical inferences are drawn.
75 S. J. Shepherd. “Continuous Authentication by Analysis of Keyboard Typing Characteristics,” in European Convention on Security and Detection, (May 16–18, 1995), pp. 111–114, IEE, London, 1995. No No The classifier is evaluated in isolation. No statistical inferences are drawn.
76 T. Sim and R. Janakiraman. “Are Digraphs Good for Free-Text Keystroke Dynamics?” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (June 17–22, 2007, Minneapolis, MN), pp. 1–6, IEEE Computer Society, Los Alamitos, CA, 2007. No No Comparisons are made across typing task (e.g., different words), but multiple classifiers are not compared. While descriptive measures of uncertainty are presented (e.g., standard deviations), no formal statistical inferences are drawn.
77 A. Sulong, Wahyudi, and M. U. Siddiqi. “Intelligent Keystroke Pressure-Based Typing Biometrics Authentication System using Radial Basis Function Network,” in 5th International Colloquium on Signal Processing & Its Applications (CSPA 2009), (March 6–8, 2009, Kuala Lumpur, Malaysia), pp. 151–155, IEEE, Piscataway, NJ, 2009. No No Comparisons are made across impostor scenarios (e.g., known and unknown), but multiple classifiers are not compared. No statistical inferences are drawn.
78 C. Tseng, F. Liu, and T. Lin. “Design and Implementation of a RFID-based Authentication System by using Keystroke Dynamics,” in IEEE International Conference on Systems, Man and Cybernetics (SMC 2010), (October 10–13, 2010, Istanbul, Turkey), pp. 3926–3929, IEEE, Piscataway, NJ, 2010. No No The classifier is evaluated in isolation. No statistical inferences are drawn.
79 Y. Wang, G. Du, and F. Sun. “A Model for User Authentication Based on Manner of Keystroke and Principal Component Analysis,” in 5th International Conference on Machine Learning and Cybernetics, (August 13–16, 2006, Dalian, China), pp. 2788–2792, IEEE, Piscataway, NJ, 2006. No No The classifier is evaluated in isolation. No statistical inferences are drawn.
80 R. S. Zack, C. C. Tappert, and S. Cha. “Performance of a Long-Text-Input Keystroke Biometric Authentication System Using an Improved k-Nearest-Neighbor Classification Method,” in 4th International Conference on Biometrics: Theory, Applications and Systems (BTAS 2010), (September 27–29, 2010, Washington, DC), pp. 1–6, IEEE, Piscataway, NJ, 2010. No No Comparisons are made across different tunings (e.g., the k in kNN), different amounts of training data, and different data sets, but multiple classifiers are not compared. No statistical inferences are drawn.
81 J. Ashbourn. “Practical Implementation of Biometrics Based on Hand Geometry,” in IEE Colloquium on Image Processing for Biometric Measurement, (April 20, 1994), pp. 5/1–5/6, Institution of Electrical Engineers, London, UK, 1994. Out of scope The technology of keystroke dynamics is referenced, but no keystroke-dynamics classifiers are evaluated.
82 A. M. P. Canuto, F. Pintro, A. F. Neto, and M. C. Fairhurst. “Enhancing Performance of Cancellable Fingerprint Biometrics using Classifier Ensembles,” in Brazilian Symposium on Neural Networks (SBRN 2010), (October 23–28, 2010, São Paulo, Brazil), pp. 55–60, IEEE Computer Society, Los Alamitos, CA, 2010. Out of scope Keystroke dynamics are only referenced as an example biometric.
83 H. Crawford. “Keystroke Dynamics: Characteristics and Opportunities,” in 8th Annual International Conference on Privacy Security and Trust (PST 2010), (August 17–19, 2010, Ottawa, Canada), pp. 205–212, IEEE, Piscataway, NJ, 2010. Out of scope Existing keystroke-dynamics studies are reviewed and future recommendations are made.
84 V. C. Estrada, A. Nakao, and E. C. Segura. “Classifying Computer Session Data Using Self-Organizing Maps,” in International Conference on Computational Intelligence and Security, (December 11–14, 2009, Beijing, China), vol. 1, pp. 48–53, IEEE Computer Society, Los Alamitos, CA, 2009. Out of scope Keystroke-dynamics classifiers are compared to other insider-detection methods, including a new one that the authors propose. However, since no keystroke-dynamics classifier is empirically evaluated, the work falls outside the scope of the survey.
85 E. Flior and K. Kowalski. “Continuous Biometric User Authentication in Online Examinations,” in 7th International Conference on Information Technology (ITNG 2010), (April 12–14, 2010, Las Vegas, NV), pp. 488–492, IEEE Computer Society, Los Alamitos, CA, 2010. Out of scope A new classifier is presented, but it is not empirically evaluated.
86 R. Giot, M. El-Abed, and C. Rosenberger. “GREYC Keystroke: a Benchmark for Keystroke Dynamics Biometric Systems,” in IEEE Third International Conference on Biometrics: Theory, Applications and Systems (BTAS 2009), (September 28–30, 2009, Washington, DC), pp. 419–424, IEEE, Piscataway, NJ, 2009. Out of scope A new data set for evaluating keystroke-dynamics classifiers is described, but no new classifier is evaluated.
87 G. Herbst and S. F. Bocklisch. “Classification of Keystroke Dynamics—A Case Study of Fuzzified Discrete Event Handling,” in 9th International Workshop on Discrete Event Systems (WODES 2008), (May 28–30, 2008, Göteborg, Sweden), pp. 394–399, IEEE, Piscataway, NJ, 2008. Out of scope A new fuzzy-logic-based classifier is presented, but it is not empirically evaluated.
88 E. S. Imsand and J. A. Hamilton Jr. “GUI Usage Analysis for Masquerade Detection,” in IEEE SMC Information Assurance and Security Workshop (IAW 2007), (June 20–22, 2007, West Point, NY), pp. 270–276, IEEE, Piscataway, NJ, 2007. Out of scope Keystroke dynamics are described and differentiated from a newly proposed behavioral biometric.
89 A. J. Ko and J. O. Wobbrock. “Cleanroom: Edit-Time Error Detection with the Uniqueness Heuristic,” in IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC 2010), (September 21–25, 2010, Madrid, Spain), pp. 7–14, IEEE Computer Society, Los Alamitos, CA, 2010. Out of scope An editor that detects errors using keystroke timing is described, but the topic of the paper is not classifying typists using keystroke dynamics.
90 A. Kolakowska. “Generating Training Data for SART-2 Keystroke Analysis Module,” in 2nd International Conference on Information Technology (ICIT 2010), (June 28–30, 2010, Gdansk, Poland), pp. 57–60, IEEE, Piscataway, NJ, 2010. Out of scope A new classifier is presented, but it is not empirically evaluated.
91 C. S. Leberknight, G. R. Widmeyer, and M. L. Recce. “An Investigation into the Efficacy of Keystroke Analysis for Perimeter Defense and Facility Access,” in IEEE International Conference on Technologies for Homeland Security, (May 12–13, 2008, Waltham, MA), pp. 345–350, IEEE, Piscataway, NJ, 2008. Out of scope A new classifier is presented, but it is not empirically evaluated.
92 B. Miller. “Vital Signs of Identity,” IEEE Spectrum, vol. 31, no. 2, pp. 22–30, 1994. Out of scope Keystroke dynamics are described as part of a review of biometric modes.
93 R. Moskovitch, C. Feher, A. Messerman, N. Kirschnick, T. Mustafić, A. Camtepe, B. Löhlein, U. Heister, S. Möller, L. Rokach, and Y. Elovici. “Identity Theft, Computers and Behavioral Biometrics,” in IEEE International Conference on Intelligence and Security Informatics (ISI 2009), (June 8–10, 2009, Dallas, TX), pp. 155–160, IEEE, Piscataway, NJ, 2009. Out of scope Keystroke-dynamics research literature is reviewed, but no new classifier is proposed and empirically evaluated.
94 A. Peacock, X. Ke, and M. Wilkerson. “Typing Patterns: A Key to User Identification,” IEEE Security and Privacy, vol. 2, no. 5, pp. 40–47, 2004. Out of scope Keystroke-dynamics research literature is reviewed, but no new classifier is proposed and empirically evaluated.
95 C. Rosenberger. “Emerging Trends in Biometric Authentication,” in International Conference on High Performance Computing & Simulation (HPCS 2009), (June 21–24, 2009, Leipzig, Germany), p. 256, IEEE, Piscataway, NJ, 2009. Out of scope Keystroke dynamics is described as part of a discussion of trends in biometrics.
96 E. Shakshuki, Z. Luo, J. Gong, and Q. Chen. “Multi-Agent System for Security Service,” in 18th International Conference on Advanced Information Networking and Applications (AINA 2004), (March 29–31, 2004, Fukuoka, Japan), pp. 303–308, IEEE Computer Society, Los Alamitos, CA, 2004. Out of scope A new classifier is presented, but it is not empirically evaluated.
97 K. N. Shiv Subramaniam, S. Raj Bharath, and S. Ravinder. “Improved Authentication Mechanism Using Keystroke Analysis,” in International Conference on Information and Communication Technology, (March 7–9, 2007, Dhaka, Bangladesh), pp. 258–261, IEEE, Piscataway, NJ, 2007. Out of scope A new classifier is presented, but it is not empirically evaluated.
98 Z. Tao, F. Ming-Yu, and F. Bo. “Side-Channel Attack on Biometric Cryptosystem Based on Keystroke Dynamics,” in 1st International Symposium on Data, Privacy, and E-Commerce (ISDPE 2007), (November 1–3, 2007), pp. 221–223, IEEE Computer Society, Los Alamitos, CA, 2007. Out of scope Keystroke dynamics are employed as a means of hardening a cryptographic key so that the authors can explore whether the key can be recovered via power-consumption analysis.
99 S. P. Venkatachalam, P. Muthu Kannan, and V. Palanisamy. “Combining Cryptography with Biometrics for Enhanced Security,” in International Conference on Control, Automation, Communications and Energy Conservation (INCACEC 2009), (June 4–6, 2009), pp. 1–6, IEEE, Piscataway, NJ, 2009. Out of scope Keystroke dynamics are proposed as a means of generating cryptographic keys, but no classifier is empirically evaluated.
100 A. C. Weaver. “Biometric Authentication,” Computer, vol. 39, no. 2, pp. 96–97, 2006. Out of scope In a review of biometric modes, keystroke dynamics is described, but no system is evaluated.
101 Z. Zhang, C. Xiao, D. Zhao, H. Sun, X. Kail, and G. Tian. “Identity Authentication System Based on Improved PR-RP Model,” in 2nd IEEE International Conference on Advanced Computer Control (ICACC 2010), (March 27–29, 2010, Shenyang, China), vol. 5, pp. 65–69, IEEE, Piscataway, NJ, 2010. Out of scope A new classifier is presented. Data are collected with which to evaluate the classifier, and graphs of the data are produced. However, the data are not used to empirically evaluate the classifier, and no error rates are reported.


This material is based upon work supported by the National Science Foundation under grant number CNS-0716677. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors, and do not necessarily reflect the views of the National Science Foundation.