Least Squares Support Vector Machines
Time:
2001-09-12 10:15-12:00 Lecture
2001-09-12 13:15-15:00 Discussion
2001-09-13 10:15-12:00 Lecture
2001-09-13 13:15-15:00 Discussion
2001-09-14 10:15-12:00 Lecture
2001-09-14 13:15-15:00 Discussion

Speaker:
Johan Suykens, Katholieke Universiteit Leuven
Suitable reading:
Johan Suykens, "Nonlinear Modeling and Support Vector Machines",
IEEE Instrumentation and Measurement Technology Conference,
Budapest, Hungary, May 21-23, 2001, and the references therein.
Language: English
Abstract:
In the last decade, neural networks have proven to be a powerful
methodology in a wide range of fields and applications. Although
neural nets have often been presented as a kind of miracle approach,
reliable training methods now exist, thanks mainly to interdisciplinary
studies and insights from several fields, including statistics, circuit,
systems, and control theory, signal processing, information theory, and
physics. Despite these advances, a number of weak points remain, such as
the existence of many local minima and the difficulty of choosing the
number of hidden units. Major breakthroughs on these points have been
obtained with a new class of neural networks called support vector
machines (SVMs), developed within the area of statistical learning theory
and structural risk minimization. SVM solutions are characterized by convex
optimization problems (typically quadratic programming). Moreover, the model
complexity (e.g. the number of hidden units) also follows from solving this
convex optimization problem. The method is kernel based and allows
for linear, polynomial, spline, RBF, and MLP kernels, among others.
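The kernel-based formulation above rests on Gram (kernel) matrices. As a
rough numerical sketch (the function name, data, and parameter values below
are illustrative, not taken from the lecture), an RBF kernel matrix over a
small data set can be computed and sanity-checked like this:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    # RBF (Gaussian) kernel: K[i, j] = exp(-||x_i - x_j||^2 / (2*sigma^2))
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

X = np.random.default_rng(0).normal(size=(5, 2))
K = rbf_kernel(X, X)

# A valid RBF kernel matrix is symmetric with a unit diagonal
print(np.allclose(K, K.T), np.allclose(np.diag(K), 1.0))
```

Other kernels mentioned in the abstract (linear, polynomial, spline, MLP)
drop into the same role: only the entries of K change, not the optimization
problem built on top of it.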

In the first part of this course we explain the theory of linear and
nonlinear SVMs for solving classification and nonlinear function estimation
problems. In the second part, we focus on least squares support vector
machines (LS-SVMs), which involve solving linear systems instead of QP
problems. The method is capable of solving highly nonlinear and noisy
black-box modelling problems, even in high-dimensional input spaces.
Issues of robust nonlinear estimation and sparse approximation will be
discussed, together with hyperparameter selection methods. Several
frameworks will be explained, including Bayesian learning and VC theory.
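The LS-SVM idea of replacing the QP with a linear system can be sketched as
follows, assuming the standard LS-SVM function-estimation formulation with
an RBF kernel (the toy data, variable names, and hyperparameter values are
our own illustrative choices):

```python
import numpy as np

def rbf(X1, X2, sigma=1.0):
    # 1-D RBF kernel matrix
    return np.exp(-(X1[:, None] - X2[None, :]) ** 2 / (2.0 * sigma ** 2))

# Toy 1-D function estimation problem: learn y = sin(x)
X = np.linspace(0.0, 2.0 * np.pi, 20)
y = np.sin(X)
n = len(X)

gamma = 100.0          # regularization constant (illustrative value)
Omega = rbf(X, X)      # kernel (Gram) matrix

# LS-SVM dual: one linear system instead of a QP,
# [ 0      1^T          ] [ b     ]   [ 0 ]
# [ 1   Omega + I/gamma ] [ alpha ] = [ y ]
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = Omega + np.eye(n) / gamma
sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
b, alpha = sol[0], sol[1:]

# Prediction: y_hat(x) = sum_i alpha_i * K(x, x_i) + b
y_hat = rbf(X, X) @ alpha + b
print("training residual:", np.max(np.abs(y - y_hat)))
```

By construction the training errors equal alpha/gamma, so the solution is
dense (every alpha_i is generally nonzero); the sparse approximation
techniques mentioned above address exactly this point.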
In the third part we present initial extensions of LS-SVM methods to
recurrent networks and their use in optimal control problems. The broad
potential of (LS-)SVM methodologies will be illustrated throughout with a
wide variety of examples and case studies.