Call for Papers
We invite researchers to submit their recent work on
scalable non-parametric methods, including, for example,
Gaussian processes, Dirichlet processes, Indian buffet
processes, and support vector machines. Full
details appear below, in the workshop overview.
Submissions should take the form of 2-4 page
abstracts (unlimited references) in the NIPS style.
Author names do not need to be
anonymized. Accepted papers will be presented as
posters or contributed talks.
Submissions should be e-mailed to scalable.nips2015@gmail.com.
Key Dates
Paper submission (extended): 20 October 2015
Acceptance: 30 October 2015
Travel award notification: 30 October 2015
Camera-ready: 20 November 2015
Workshop Overview
In 2015, every minute of the day,
users share hundreds of thousands of pictures, videos,
tweets, reviews, and blog posts. More than ever before, we
have access to massive datasets in almost every area of
science and engineering, including genomics, robotics, and
climate science. This wealth of information provides
an unprecedented opportunity to automatically learn rich
representations of data, which not only greatly improves
performance on predictive tasks but also provides
a mechanism for scientific discovery. That is, by
automatically learning expressive representations of data,
rather than carefully hand-crafting features, we can gain a
new theoretical understanding of our modelling
problems. Recently, deep learning architectures have
had success at such representation learning, particularly
in computer vision and natural language processing.
Expressive non-parametric methods also have great
potential for large-scale structure discovery; indeed,
these methods can be highly flexible, and have an
information capacity that grows with the amount of
available data. However, there are practical
challenges involved in developing non-parametric methods
for large-scale representation learning.
Consider, for example, kernel methods. The kernel
controls the generalisation properties of these
methods, and a well-chosen kernel leads to impressive
empirical performance. Difficulties arise when the kernel
is a priori unknown and the number of datapoints is
large. One must then develop an expressive kernel
learning approach, and scaling such an approach poses
different challenges than scaling a standard kernel
method: one faces additional computational constraints,
and must retain significant model structure for
expressing the rich information available in a large
dataset. Yet the need for expressive kernel
learning on large datasets is especially great, since such
datasets often provide more information to automatically
learn an appropriate statistical representation.
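
To make the kernel learning setting concrete, here is a minimal
sketch (an illustration only, using scikit-learn's Gaussian process
API; the toy data and hyperparameter choices are hypothetical) in
which kernel hyperparameters are learned by maximising the marginal
likelihood, rather than fixed a priori.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy data: noisy samples of a smooth function (purely illustrative).
rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(50, 1))
y = np.sin(X).ravel() + 0.1 * rng.randn(50)

# The RBF length-scale and the noise level are not fixed a priori;
# they are learned from the data by maximising the marginal likelihood.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=5)
gp.fit(X, y)
print(gp.kernel_)  # the kernel with learned hyperparameters

# Exact GP inference costs O(n^3) time and O(n^2) memory in the number
# of datapoints n, which is why scalable approximations are needed once
# n grows large.

This exact approach breaks down at large scale for precisely the
computational reasons above, which is the gap the workshop addresses.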
This one-day workshop is about non-parametric methods for
large-scale structure learning, including automatic
pattern discovery, extrapolation, manifold learning,
kernel learning, metric learning, data compression,
feature extraction, trend filtering, and dimensionality
reduction. Non-parametric methods include, for example,
Gaussian processes, Dirichlet processes, Indian buffet
processes, and support vector machines. We are
particularly interested in developing scalable and
expressive methods to derive new scientific insights from
large datasets. A poster session, coffee breaks, and a
panel-guided discussion will encourage interaction between
attendees. This workshop aims to bring
together researchers wishing to explore alternatives to
neural networks for learning rich non-linear function
classes, with an emphasis on non-parametric methods,
representation learning, and scalability. We wish to
carefully review and enumerate modern approaches to these
challenges, share insights into the underlying properties
of these methods, and discuss future directions.