Machine learning studies the question "How can we build computer programs that automatically improve their performance through experience?" This includes learning to perform many types of tasks based on many types of experience. For example, it includes robots learning to better navigate based on experience gained by roaming their environments, medical decision aids that learn to predict which therapies work best for which diseases based on data mining of historical health records, and speech recognition systems that learn to better understand your speech based on experience listening to you.
This course is designed to give PhD students a thorough grounding in the methods, mathematics, and algorithms needed to do research and build applications in machine learning. Students entering the class with a pre-existing working knowledge of probability, statistics, and algorithms will be at an advantage, but the class has been designed so that anyone with a strong quantitative background can catch up and fully participate.
Given that the brain is the only system that achieves many of the goals of AI, it is natural to ask whether and how knowledge about the brain can inform the way we design artificial systems. In this seminar, we will read and discuss recent research at the interface of cognitive neuroscience and AI. We will delve into topics related to learning paradigms and to representations, such as task-specific, task-invariant, and cross-modal representations. The class requires participation in discussions and a final paper. There are no formal prerequisites, but a background in machine learning and cognitive neuroscience will be helpful; students without that background will be given readings to cover at the beginning of the semester.
Machine learning is a subfield of computer science whose goal is to explore, study, and develop learning systems, methods, and algorithms that can improve their performance by learning from data. This course gives undergraduate students a one-semester introduction to the main principles, algorithms, and applications of machine learning and is specifically designed for SCS undergraduate majors. After completing the course, students will be able to:
- select and apply an appropriate supervised learning algorithm for classification problems and understand its underlying assumptions (e.g., naive Bayes, perceptron, support vector machine, logistic regression).
- select and apply an appropriate supervised learning algorithm for regression problems and understand its underlying assumptions (e.g., linear regression, ridge regression).
- recognize different types of unsupervised learning problems, and select and apply appropriate algorithms (e.g., clustering, linear and nonlinear dimensionality reduction).
- work with probabilities (Bayes rule, conditioning, expectations, independence), linear algebra (vector and matrix operations, eigenvectors), and calculus (gradients) to derive machine learning methods such as linear regression, naive Bayes, and principal components analysis.
- understand machine learning principles such as model selection, overfitting, and underfitting, and techniques such as cross-validation and regularization.
- implement machine learning algorithms such as logistic regression via stochastic gradient descent, linear regression (using a linear algebra toolbox), perceptron, or k-means clustering (see the sketch after this list).
- run appropriate supervised and unsupervised learning algorithms on real and synthetic data sets and interpret the results.
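To give a concrete flavor of the implementation outcomes above, here is a minimal k-means clustering sketch in Python using only NumPy; the synthetic data, the choice of k, and the stopping rule are illustrative assumptions, not course materials.

```python
# Minimal k-means sketch on synthetic 2-D data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated Gaussian blobs stand in for a real data set.
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)),
               rng.normal(5.0, 1.0, (100, 2))])

def kmeans(X, k, n_iters=100):
    """Plain k-means: alternate assignment and centroid-update steps."""
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: each point joins its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its assigned points.
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

centroids, labels = kmeans(X, k=2)
print("estimated cluster centers:\n", centroids)
```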
10-606: This course provides a place for students to practice the necessary mathematical background for further study in machine learning. Topics covered include probability (random variables, modeling with continuous and discrete distributions), linear algebra (inner product spaces, linear operators), and multivariate differential calculus (partial derivatives, matrix differentials). The course assumes some background in each of the above, but will review and give practice in each. (It does not provide from-scratch coverage of all of the above, which would be impossible in a course of this length.) Some coding will be required: the course will provide practice with translating the above mathematical concepts into concrete programs.
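As one example of the kind of "mathematical concept to concrete program" exercise this description alludes to, the following hedged sketch checks a hand-derived gradient against finite differences; the loss function, data sizes, and step size are illustrative choices rather than course material.

```python
# Illustrative sketch: verifying a hand-derived gradient with central differences.
import numpy as np

def loss(w, X, y):
    """Squared-error loss (1/2) * ||Xw - y||^2."""
    r = X @ w - y
    return 0.5 * r @ r

def grad(w, X, y):
    """Gradient derived by hand with matrix calculus: X^T (Xw - y)."""
    return X.T @ (X @ w - y)

rng = np.random.default_rng(1)
X, y, w = rng.normal(size=(20, 3)), rng.normal(size=20), rng.normal(size=3)

# Numerical gradient, one coordinate at a time.
eps = 1e-6
num_grad = np.array([(loss(w + eps * e, X, y) - loss(w - eps * e, X, y)) / (2 * eps)
                     for e in np.eye(3)])
print(np.allclose(num_grad, grad(w, X, y), atol=1e-4))  # expect True
```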
10-607: This course provides a place for students to practice the necessary computational background for further study in machine learning. Topics covered include computational complexity, analysis of algorithms, proof techniques, optimization, dynamic programming, recursion, and data structures. The course assumes some background in each of the above, but will review and give practice in each. (It does not provide from-scratch coverage of all of the above, which would be impossible in a course of this length.) Some coding will be required: the course will provide practice with translating the above computational concepts into concrete programs.
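To illustrate how the computational topics above fit together (recursion, dynamic programming, and analysis of algorithms), here is a small hedged sketch contrasting naive recursion with memoization; the Fibonacci recurrence is a stand-in example, not an assignment from the course.

```python
# Illustrative sketch: the same recurrence with and without dynamic programming.
from functools import lru_cache

def fib_naive(n):
    """Naive recursion: exponential time, since subproblems are recomputed."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Top-down dynamic programming: each subproblem is solved once, O(n) time."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(20), fib_memo(20))  # both print 6765
```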
These two mini courses are intended to prepare students for further study in machine learning, particularly for taking 10-601 and 10-701. One course (10-606) focuses on mathematical background, and the other (10-607) focuses on computational background. Most students take both mini courses, but this is not required; 10-606 is not a prerequisite for 10-607.
In this course, students will gain exposure to practical aspects of machine learning and statistical data analysis. Through a series of case studies of real problems, students will learn to appreciate the intricacies involved in applying machine learning in practice. The course will focus on formalizing research questions, exploring data, identifying potential pitfalls, using machine learning for science and decision making, reproducibility, and fairness. The outcome of the course will be a write-up of the various case studies, which will be shared among all students and possibly posted online (subject to agreement among the students).
The human brain is a complex information processing system and is currently the topic of multiple fascinating branches of research. Understanding how it works is a very challenging scientific task. In recent decades, multiple techniques for imaging the activity of the working brain have been invented, allowing the field of cognitive neuroscience to flourish. Cognitive neuroscience studies the neural mechanisms underlying various aspects of cognition by relating the activity in the brain to the tasks it performs. This work typically involves exciting collaborations with other disciplines (e.g., psychology, biology, physics, computer science).
You should take this course if you’re interested in how the brain works and how you can use cutting-edge brain imaging and data analysis tools to study it. During this course, you will learn tools based on the Python programming language to understand, manipulate, and explore human brain recordings (such as ECoG, EEG, MEG, and fMRI). You will learn to formulate hypotheses about how the brain represents information and then test these hypotheses on real-world data. You will also learn analysis methods that help you draw conclusions from brain recording data.
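As a hedged preview of that kind of Python workflow, the sketch below estimates the power spectrum of a synthetic signal standing in for a single recording channel; the sampling rate, the 10 Hz oscillation, and the noise level are illustrative assumptions, not real data.

```python
# Illustrative sketch: power spectrum of a synthetic, EEG-like channel.
import numpy as np

fs = 250.0                        # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)      # 10 seconds of samples
# Synthetic channel: a 10 Hz oscillation buried in noise (stands in for real data).
signal = np.sin(2 * np.pi * 10 * t) + np.random.default_rng(2).normal(0.0, 1.0, t.size)

# Plain FFT-based periodogram.
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2 / signal.size

peak = freqs[np.argmax(power[1:]) + 1]  # skip the DC bin
print(f"dominant frequency: {peak:.1f} Hz")  # expect about 10 Hz
```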
By giving you first-hand experience analyzing brain data, this course will provide you with insight into the experiments and data used in the cognitive neuroscience field. It will help you build a better understanding of current cutting-edge research in cognitive neuroscience, so you will be able to keep up with recent advances in the field and/or apply your knowledge by doing research here at Berkeley. Additionally, the data analysis techniques and investigation approaches you learn will transfer readily to research in other disciplines.