**Camera Modeling and the use of vision information in sensor fusion**

*Time:* 13:15-14:00. *Place:* Glashuset.

Given the increased use of cameras in various control-related applications, in this seminar we will discuss how to make use of vision information in estimation problems. A camera is simply another sensor, one that we already use (e.g., in MATRIS, SEFS, MOVIII, VR) and will use more or less routinely in the future. In order to make use of this information we have to understand the basic principles underlying the image formation process in a camera. Hence, we will start out in a "lecture format", where we build a mathematical model of a camera from its basic physical construction and estimate the physical parameters (e.g., focal length, pixel sizes) in this model. After this introductory part we will discuss how information from cameras can be incorporated into various estimation problems. Finally, we will give some examples from our work on using cameras as sensors.
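As a sketch of the kind of model the lecture part builds, the standard pinhole projection maps a 3D point (in camera coordinates) to pixel coordinates through an intrinsic matrix assembled from the focal length, pixel sizes and principal point. All numerical values below are invented for illustration:

```python
import numpy as np

# Pinhole camera model: a 3D point X (camera coordinates, metres) is projected
# to pixel coordinates via the intrinsic matrix K. The parameter values are
# hypothetical; in practice they are estimated by camera calibration.
f = 0.008              # focal length [m] (assumed)
su, sv = 8e-6, 8e-6    # pixel sizes [m/pixel] (assumed)
u0, v0 = 320.0, 240.0  # principal point [pixels] (assumed)

K = np.array([[f / su, 0.0,    u0],
              [0.0,    f / sv, v0],
              [0.0,    0.0,    1.0]])

def project(X):
    """Project a 3D point in camera coordinates to pixel coordinates."""
    x = K @ X            # homogeneous image coordinates
    return x[:2] / x[2]  # perspective division

u, v = project(np.array([0.1, 0.05, 2.0]))
```

Calibration then amounts to estimating `f`, the pixel sizes and the principal point from known scene points and their observed pixel positions.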

**Interior point methods in finance**

*Time:* 13:15-14:00. *Place:* Glashuset.

The seminar will give an overview of optimization in finance, and in particular how primal-dual interior-point methods can be applied to solve problems with different structures. Many financial investment problems require interest rates. Future interest rates need to be consistent with market prices, and they are also often assumed to follow a smooth curve. It is shown how this problem can be solved in discrete time with a primal-dual algorithm that minimizes the first- and second-order derivatives subject to nonlinear constraints. In a fashion similar to Model Predictive Control, an investment problem will then be studied over time. By developing a realistic model of equity prices, estimated by maximum likelihood, realistic scenarios can be generated with Monte Carlo simulation. Together with a primal-dual solver for stochastic programming, these scenarios can be used to build a model that controls the wealth in an appealing way. Throughout the presentation, results will be presented both from back-testing against historical data and from using Reuters to build a real-time system.
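To illustrate the curve-fitting step in its simplest form (a sketch under my own assumptions, not the speaker's actual formulation, and with equality constraints in place of the talk's nonlinear ones), a smooth discrete rate curve can be obtained by minimizing squared second-order differences subject to matching a few observed market points, solved here via the KKT system of the equality-constrained QP:

```python
import numpy as np

# Fit a smooth discrete curve r[0..n-1] that matches a few observed rates
# exactly while minimizing squared second differences (a discrete curvature
# penalty). Observation indices and values are hypothetical.
n = 20
D2 = np.diff(np.eye(n), n=2, axis=0)   # second-difference operator, (n-2) x n
H = D2.T @ D2                          # quadratic smoothness cost r' H r

obs = {0: 0.02, 9: 0.03, 19: 0.035}    # index -> observed market rate (made up)
A = np.zeros((len(obs), n))
b = np.zeros(len(obs))
for row, (i, r) in enumerate(obs.items()):
    A[row, i] = 1.0
    b[row] = r

# KKT system of  min 0.5 r'Hr  s.t.  Ar = b:   [H A'; A 0][r; lam] = [0; b]
KKT = np.block([[H, A.T], [A, np.zeros((len(obs), len(obs)))]])
rhs = np.concatenate([np.zeros(n), b])
curve = np.linalg.solve(KKT, rhs)[:n]
```

An interior-point method becomes necessary once inequality or nonlinear constraints (e.g., positivity of forward rates) are added to this basic structure.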

**Using motion-controlled industrial robots for high-speed vision and contact force control**

*Time:* 13:15-14:00. *Place:* Glashuset.

Many promising robotics research results were obtained during the late 1970s and early 1980s. Some examples include Cartesian force control and advanced motion planning. Many of these technologies have still not been fully exploited in industrial applications, while others have only recently reached industrial usage. An important question to consider is how this situation can be improved for future deployment of necessary technologies. Today, modern robot control systems used in industry provide highly optimized motion control that works well in a variety of standard applications. However, applications that are considered non-standard today motivate a variety of research efforts in order to package results in a usable form. At the robotics lab in Lund a number of researchers, mainly from the departments of Automatic Control and Computer Science, are active in this area. For several years, there has been an active cooperation with ABB Robotics in developing techniques that permit real-time motion controllers to be extended for control using external sensors, most importantly force sensors and digital cameras. These efforts have resulted in products for high-performance contact force control being released by ABB, and have stimulated university research on sensor-based robot control.

In this talk, some of the results of these collaborative efforts will be presented. A number of different applications will be discussed, together with some control-oriented background. The applications presented include a novel system for force-controlled robotic drilling, based on force feedback for controlling the contact with the drilled surface. Other applications that have been handled can be found in robotized assembly and machining. Further, methods for high-speed visual tracking and vision-based control are presented, which are shown to outperform similar static methods in dynamical vision applications. Finally, there will be a discussion of our current activities and future research directions.

**Integer quadratic programming for control and communication**

*Time:* 10:15-12:00. *Place:* Visionen.

The main topic of this thesis is integer quadratic programming with applications to problems arising in the areas of automatic control and communication. One of the most widespread modern control methods is Model Predictive Control (MPC). At each sampling instant, MPC requires the solution of a Quadratic Programming (QP) problem. To be able to use MPC for large systems, and at high sampling rates, optimization routines tailored for MPC are used. In recent years, the range of application of MPC has been extended to so-called hybrid systems. Hybrid systems are systems where continuous dynamics interact with logic. When this extension is made, binary variables are introduced in the problem. As a consequence, the QP problem has to be replaced by a far more challenging Mixed Integer Quadratic Programming (MIQP) problem, whose computational complexity is known to grow exponentially in the number of binary optimization variables. In modern communication systems, multiple users share a so-called multi-access channel. To estimate the information originally sent, a maximum likelihood problem involving binary variables can be solved. The process of simultaneously estimating the information sent by multiple users is called Multiuser Detection (MUD). In this thesis, the problem of efficiently solving MIQP problems originating from MPC and MUD is addressed. Four different algorithms are presented. First, a polynomial-complexity preprocessing algorithm for binary quadratic programming problems is presented. By using the algorithm, some, or all, binary variables can be computed efficiently already in the preprocessing phase. In numerical experiments, the algorithm is applied to unconstrained MPC problems with a mixture of real-valued and binary-valued control signals, and the results show that the performance gain can be significant compared to solving the problem using branch and bound.
The preprocessing algorithm has also been applied to the MUD problem, where simulations have shown that the bit error rate can be significantly reduced compared to using common suboptimal algorithms. Second, an MIQP algorithm tailored for MPC is presented. The algorithm uses a branch and bound method where the relaxed node problems are solved by a dual active set QP algorithm. In this QP algorithm, the KKT systems are solved using Riccati recursions in order to decrease the computational complexity. Simulation results show that both the proposed QP solver and MIQP solver have lower computational complexity compared to corresponding generic solvers. Third, the dual active set QP algorithm is enhanced using ideas from gradient projection methods. The performance of this enhanced algorithm is shown to be comparable with the existing commercial state-of-the-art QP solver CPLEX for some random linear MPC problems. Fourth, an algorithm for efficient computation of the search directions in an SDP solver for a proposed alternative SDP relaxation applicable to MPC problems with binary control signals is presented. The SDP relaxation considered has the potential to give a tighter lower bound on the optimal objective function value compared to the QP relaxation that is traditionally used in branch and bound for these problems, and its computational performance is better than the ordinary SDP relaxation for the problem. Furthermore, the tightness of the different relaxations is investigated both theoretically and in numerical experiments.
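To make the branch-and-bound structure concrete, here is a minimal sketch for an unconstrained binary QP, min 0.5 x'Hx + f'x over x in {0,1}^n with H positive definite (my illustration, not any of the thesis algorithms). The lower bound at each node is the unconstrained minimum over the free variables with the branched variables fixed, which is valid because relaxing {0,1} to the reals can only decrease the optimum:

```python
import numpy as np

def solve_node(H, f, fixed):
    """Minimize 0.5 x'Hx + f'x over the free variables, entries in `fixed`
    held at 0 or 1. Returns the minimizer and its (lower-bound) value."""
    n = len(f)
    free = [i for i in range(n) if i not in fixed]
    x = np.zeros(n)
    for i, v in fixed.items():
        x[i] = v
    if free:
        Hff = H[np.ix_(free, free)]
        rhs = f[free] + H[np.ix_(free, list(fixed))] @ x[list(fixed)]
        x[free] = np.linalg.solve(Hff, -rhs)
    return x, 0.5 * x @ H @ x + f @ x

def bnb(H, f):
    best_x, best_val = None, np.inf
    stack = [{}]                          # start at the root: nothing fixed
    while stack:
        fixed = stack.pop()
        x, lb = solve_node(H, f, fixed)
        if lb >= best_val:
            continue                      # prune: bound no better than incumbent
        free = [i for i in range(len(f)) if i not in fixed]
        if not free:
            best_x, best_val = x, lb      # leaf: all binaries fixed, feasible
            continue
        i = free[0]                       # branch on the first free variable
        stack.append({**fixed, i: 0.0})
        stack.append({**fixed, i: 1.0})
    return best_x, best_val

H = np.array([[2.0, 0.5], [0.5, 1.0]])
f = np.array([-2.0, 0.3])
x_opt, val = bnb(H, f)
```

The thesis' contribution lies in making each node cheap (Riccati recursions, dual active-set QP) and in computing binaries before branching even starts; the skeleton above only shows where those pieces plug in.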

**Control of hybrid systems: Theory, computation and applications**

*Time:* 14:15-15:00. *Place:* Visionen.

Theory, computation and applications define the evolution of the field of control. This premise is illustrated with the emerging area of hybrid systems, which can be viewed, loosely speaking, as dynamical systems with switches. Many practical problems can be formulated in the hybrid system framework. Power electronics are hybrid systems by their very nature, systems with hard bounds and/or friction can be described in this manner and problems from other domains, as diverse as driver assistance systems, anesthesia and active vibration control can be put in this form.

I will describe the theoretical basis of some of the tools that have been proposed to synthesize controllers for hybrid systems. Parametric programming has received a lot of attention in the control literature in the past few years because model predictive control (MPC) problems can be posed in a parametric framework and hence pre-solved offline, resulting in a significant decrease in on-line computational effort. I will describe recent work on parametric linear programming (pLP) from the point of view of the control engineer. I will survey various types of algorithms and identify a new approach, based on standard convex hull computation, that offers significant potential for approximation of pLPs for the purpose of control. The resulting algorithm, based on the beneath/beyond paradigm, computes low-complexity approximate controllers that guarantee stability and feasibility.

Many industrial applications will serve to highlight the theoretical developments and the extensive software that helps to bring the theory to bear on practical examples. This is joint work with Colin Jones, Miroslav Baric and Melanie Zeilinger.

Biographical Information:

Manfred Morari was appointed head of the Automatic Control Laboratory at ETH Zurich in 1994. Before that he was the McCollum-Corcoran Professor of Chemical Engineering and Executive Officer for Control and Dynamical Systems at the California Institute of Technology. He obtained the diploma from ETH Zurich and the Ph.D. from the University of Minnesota, both in chemical engineering. His interests are in hybrid systems and the control of biomedical systems. In recognition of his research contributions, he received numerous awards, among them the Donald P. Eckman Award and the John Ragazzini Award of the Automatic Control Council, the Allan P. Colburn Award and the Professional Progress Award of the AIChE, the Curtis W. McGraw Research Award of the ASEE, Doctor Honoris Causa from Babes-Bolyai University, Fellow of IEEE, the IEEE Control Systems (Technical Field) Award, and was elected to the National Academy of Engineering (U.S.). Professor Morari has held appointments with Exxon and ICI plc and serves on the technical advisory boards of several major corporations.

**Bayesian Signal Processing**

*Time:* 09:15-12:00. *Place:* Visionen.

The vast majority of tasks we seek to address in statistical signal processing are inductive inference tasks. In other words, we are seeking knowledge in conditions of uncertainty. Probability quantifies degrees of belief amid uncertainty, and the calculus of probability is available to us as a consistent framework for manipulating degrees of belief in order to arrive at answers to problems of interest. This is the Bayesian paradigm which I will advocate in this course. It presents some challenges but more rewards, and the aim of this course is to examine these challenges and rewards critically in the context of signal processing.

In terms of challenges, perhaps the greatest is to overcome the frequentist mindset that still dominates the signal processing field. Thereafter, we must elicit probability functions for all unknowns, most notoriously expressed in the need for priors. Finally, we must develop tractable procedures for computing and manipulating probability functions. A main aim of the course will be to present the Variational Bayes method for approximating distributions, and to examine how it contrasts and cooperates with the stochastic approximation methods that dominate Bayesian signal processing at present.

The reward of such effort is, first and foremost, the fact that the Bayesian approach is a principled and prescriptive pathway to solving signal processing problems properly. If non-Bayesian solutions are consistent, they can always be characterized as special cases of Bayesian solutions. The unique armoury of the Bayesian includes, of course, the prior, which can be used to regularize an inference and to exploit external information. This is well known. Less well known, but perhaps more powerful, is the availability of the marginalization operator, conferred uniquely by the measure nature of probability functions. Among the compelling advantages of marginalization are the automatic embrace of Ockham's razor, and the ability to compare sets of hypotheses, including competing model structures.
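A toy illustration of marginalization and Ockham's razor (my example, not the lecturer's): comparing a "fair coin" model against a "biased coin" model with a uniform prior on the bias, after observing h heads in n tosses. Marginalizing the bias gives the evidence in closed form, since the integral of the binomial likelihood against the uniform prior is a Beta function:

```python
from math import comb

# Model evidences for h heads in n tosses.
#   P(D | fair)   = C(n,h) * 0.5^n
#   P(D | biased) = integral_0^1 C(n,h) p^h (1-p)^(n-h) dp
#                 = C(n,h) * B(h+1, n-h+1) = 1/(n+1)   (for any h)

def evidence_fair(n, h):
    return comb(n, h) * 0.5 ** n

def evidence_biased(n, h):
    return 1.0 / (n + 1)

n, h = 20, 10   # balanced data: the simpler (fair) model should win
bayes_factor = evidence_fair(n, h) / evidence_biased(n, h)
```

With balanced data the Bayes factor favours the simpler fair-coin model even though the biased model contains it as a special case; with lopsided data (say h = 18) the factor swings the other way. No complexity penalty was added by hand, which is the Ockham point.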

All these ideas will be explored in this course, and illustrated via important representative problems, such as sinusoidal identification, principal component analysis and nonlinear filtering. Radiotherapy, functional medical imaging and speech processing will be among the applications to be considered.

The main sections of the course will be:

1. How to be a Bayesian (Bayesian Ways)

2. Why to be a Bayesian (Bayesian Gains)

3. A Question of Priors

4. The Need for Approximation: The Variational Bayes Method

5. Going On-Line: Nonlinear Filtering and Variational Bayesian Filtering

The following texts would make excellent companions for this course:

- *The Variational Bayes Method in Signal Processing*, V. Smidl and A. Quinn, Springer, 2006
- *The Bayesian Choice*, C.P. Robert, Springer, 2007
- *Data Analysis: A Bayesian Tutorial*, D.S. Sivia, Oxford, 2006
- *Bayesian Statistics: An Introduction*, P.M. Lee, Arnold, 2004
- *Probability Theory: The Logic of Science*, E.T. Jaynes, Cambridge, 2003
- *Bayesian Theory*, J.M. Bernardo and A.F.M. Smith, Wiley, 1994

**Simulation and Stochastic Analysis of Complex Biochemical Systems**

*Time:* 14:00-14:45. *Place:* Glashuset.

Life processes and the ways organisms function boil down to a wide variety of molecular chemical reactions that constitute nonlinear time-varying dynamical systems. The intricate networks that describe these systems are very difficult to analyze without computers. For instance, the interactions of molecules encoded within the genome that define gene regulatory networks, protein interaction networks, or metabolic networks are often so complex that it is very difficult to identify the structural properties of the networks and their relationship with cell behavior. Traditionally, computational approaches for studying biochemical systems have been based on the assumption that the systems are deterministic in nature. It is well known, however, that the modeling of these systems can be improved with stochastic models and that many processes of signal transduction and gene expression can only be analyzed with them. In this talk we address the forward and inverse problems of biochemical systems using the stochastic approach. In particular, we discuss the development of a methodology for simulating a complex system by using the principle of divide and conquer, and we address the estimation of unknowns in the system from limited experimental time series measurements.
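The standard stochastic approach to chemical kinetics is Gillespie's stochastic simulation algorithm (SSA); a minimal version for a toy birth-death process is sketched below (the talk's divide-and-conquer methodology and inverse-problem machinery are not reproduced, and the rate constants are invented):

```python
import random

# Gillespie SSA for a birth-death process:
#   reaction 1:  0 -> X   with propensity k1
#   reaction 2:  X -> 0   with propensity k2 * x
def ssa(x0, k1, k2, t_end, seed=1):
    rng = random.Random(seed)
    t, x = 0.0, x0
    while True:
        a1, a2 = k1, k2 * x
        a0 = a1 + a2                  # total propensity
        if a0 == 0.0:
            return x
        t += rng.expovariate(a0)      # exponentially distributed waiting time
        if t > t_end:
            return x
        if rng.random() * a0 < a1:    # pick which reaction fires
            x += 1
        else:
            x -= 1

x_final = ssa(x0=0, k1=10.0, k2=1.0, t_end=50.0)
```

For this process the stationary copy number is Poisson with mean k1/k2, so individual runs fluctuate around 10; a deterministic ODE model would report only that mean, which is exactly the information loss the stochastic approach avoids.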

Biography:

Petar M. Djurić received his B.S. and M.S. degrees in electrical engineering from the University of Belgrade, in 1981 and 1986, respectively, and his Ph.D. degree in electrical engineering from the University of Rhode Island, in 1990. From 1981 to 1986 he was a Research Associate with the Institute of Nuclear Sciences, Vinča, Belgrade. Since 1990 he has been with Stony Brook University, where he is Professor in the Department of Electrical and Computer Engineering. He works in the area of statistical signal processing, and his primary interests are in the theory of modeling, detection, estimation, and time series analysis and Monte Carlo–based methods for signal processing. He applies the theory to problems that arise in a wide variety of disciplines including wireless communications, sensor networks, medicine, and biology. Prof. Djurić has been elected Distinguished Lecturer of the IEEE Signal Processing Society for the period 2008-2009. In 2007, he received the Best Paper Award of the IEEE Signal Processing Magazine. He has served on numerous technical committees and has been on the editorial boards of various journals. Prof. Djurić is a Fellow of IEEE.

**Performance and Implementation Aspects of Nonlinear Filtering**

*Time:* 10:15-12:00. *Place:* Visionen.

Nonlinear filtering is an important standard tool for information and sensor fusion applications, e.g., localization, navigation, and tracking. It is an essential component in surveillance systems and of increasing importance for standard consumer products, such as cellular phones with localization, car navigation systems, and augmented reality. This thesis addresses several issues related to nonlinear filtering, including performance analysis of filtering and detection, algorithm analysis, and various implementation details.

The most commonly used measure of filtering performance is the root mean square error (RMSE), which is bounded from below by the Cramér-Rao lower bound (CRLB). This thesis presents a methodology to determine the effect different noise distributions have on the CRLB. This leads up to an analysis of the intrinsic accuracy (IA), the informativeness of a noise distribution. For linear systems the resulting expressions are direct and can be used to determine whether a problem is feasible or not, and to indicate the efficacy of nonlinear methods such as the particle filter (PF). A similar analysis is used for change detection performance analysis, which once again shows the importance of IA.
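The intrinsic-accuracy idea can be illustrated numerically (a sketch of the concept, not the thesis' derivations): the Fisher information of a noise distribution, I = E[(d/de log p(e))^2], computed here by quadrature; for Gaussian noise with variance s^2 the exact value is 1/s^2, and distributions with larger information are the ones for which nonlinear filters can beat the Kalman filter:

```python
import numpy as np

def fisher_information(logpdf, grid):
    """Quadrature approximation of E[(d/de log p(e))^2] on a dense grid."""
    p = np.exp(logpdf(grid))
    score = np.gradient(logpdf(grid), grid)   # numerical d/de log p(e)
    return np.trapz(score**2 * p, grid)

s2 = 2.0
grid = np.linspace(-15, 15, 20001)
gauss_logpdf = lambda e: -0.5 * e**2 / s2 - 0.5 * np.log(2 * np.pi * s2)
ia = fisher_information(gauss_logpdf, grid)   # should be close to 1/s2 = 0.5
```

The same routine applied to a heavier-tailed or bimodal `logpdf` returns a larger value than a Gaussian of equal variance, which is the sense in which such noise is "more informative".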

A problem with RMSE evaluation is that it captures only one aspect of the resulting estimate; the distributions of the estimates can differ substantially. To address this, the Kullback divergence has been evaluated, demonstrating the shortcomings of pure RMSE evaluation.
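A small numerical illustration of this point (my example): two candidate Gaussian densities can have the same mean-square error about the truth while differing sharply in Kullback-Leibler divergence from it, using the closed form for univariate Gaussians:

```python
import numpy as np

# KL(N(m0,v0) || N(m1,v1)) = 0.5*(v0/v1 + (m1-m0)^2/v1 - 1 + ln(v1/v0))
def kl_gauss(m0, v0, m1, v1):
    return 0.5 * (v0 / v1 + (m1 - m0) ** 2 / v1 - 1.0 + np.log(v1 / v0))

# Truth: N(0, 1). Both candidates have bias^2 + variance = 2, i.e. equal RMSE:
kl_tight  = kl_gauss(0.0, 1.0, 1.0, 1.0)   # biased, small spread
kl_spread = kl_gauss(0.0, 1.0, 0.0, 2.0)   # unbiased, large spread
```

The biased-but-tight candidate is much farther from the truth in the KL sense than the unbiased-but-wide one, even though RMSE cannot tell them apart.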

Two estimation algorithms have been analyzed in more detail: the Rao-Blackwellized particle filter (RBPF), by some authors referred to as the marginalized particle filter (MPF), and the unscented Kalman filter (UKF). The RBPF analysis leads to a new way of presenting the algorithm, thereby making it easier to implement. In addition, the presentation can give new intuition for the RBPF as being a stochastic Kalman filter bank. In the analysis of the UKF the focus is on the unscented transform (UT). The results include several simulation studies and a comparison with the Gauss approximation of first and second order in the limit case.

This thesis presents an implementation of a parallelized PF and outlines an object-oriented framework for filtering. The PF has been implemented on a graphics processing unit (GPU), i.e., a graphics card. The GPU is an inexpensive parallel computational resource available in most modern computers and is rarely used to its full potential. Being able to implement the PF in parallel makes possible new applications where speed and good performance are important. The object-oriented filtering framework provides the flexibility and performance needed for large-scale Monte Carlo simulations using modern software design methodology. It can also help to efficiently turn a prototype into a finished product.
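To show why the PF parallelizes so naturally, here is a minimal bootstrap particle filter for a classic scalar benchmark model, written in vectorized NumPy (a sketch with made-up measurements, not the thesis' GPU code): every step below acts on all particles at once, which is exactly the per-particle parallelism a GPU exploits.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5000                                   # number of particles

def pf_step(particles, y, q=1.0, r=0.5):
    # propagate through the dynamics x' = 0.5*x + 25*x/(1+x^2) + process noise
    particles = 0.5 * particles + 25 * particles / (1 + particles**2) \
                + rng.normal(0, np.sqrt(q), N)
    # weight by the measurement likelihood for y = x^2/20 + measurement noise
    w = np.exp(-0.5 * (y - particles**2 / 20) ** 2 / r)
    w /= w.sum()
    # systematic resampling keeps the particle set unweighted
    u = (rng.random() + np.arange(N)) / N
    idx = np.minimum(np.searchsorted(np.cumsum(w), u), N - 1)
    return particles[idx]

particles = rng.normal(0, 1, N)
for y in [0.05, 1.2, 3.0]:                 # hypothetical measurements
    particles = pf_step(particles, y)
estimate = particles.mean()
```

The only step with a sequential flavour is the cumulative sum in the resampling, which is why resampling schemes get special attention in parallel PF implementations.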

**Time Varying Matrix Estimation for Stochastic Systems using the "Equivalent Control" Concept**

*Time:* 15:15-16:00. *Place:* Glashuset.

This work deals with the estimation of time-varying parameters of stochastic systems under "white" and "colored" perturbations. A two-step method is proposed. First, a tracking process is designed, based on the "equivalent control" technique, providing finite-time equivalence of the original stochastic process with unknown parameters to an auxiliary one. This step does not eliminate the noise, but it allows the model to be identified to be written in regression form. In the second step, the least squares method with a scalar forgetting factor is applied to estimate the time-varying parameters of the model.
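The second step can be sketched on its own: once the model is in regression form y_k = phi_k' theta_k + e_k, recursive least squares with a scalar forgetting factor lam tracks a drifting parameter (the equivalent-control step that produces the regression form is not reproduced here, and the model below is invented for illustration):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.95):
    """One step of recursive least squares with forgetting factor lam."""
    K = P @ phi / (lam + phi @ P @ phi)      # gain
    theta = theta + K * (y - phi @ theta)    # parameter update
    P = (P - np.outer(K, phi @ P)) / lam     # covariance with forgetting
    return theta, P

rng = np.random.default_rng(0)
theta_hat, P = np.zeros(2), 100 * np.eye(2)
for k in range(400):
    true_theta = np.array([1.0 + 0.001 * k, -0.5])  # slowly drifting parameter
    phi = rng.normal(size=2)                        # regressor
    y = phi @ true_theta + 0.01 * rng.normal()      # noisy measurement
    theta_hat, P = rls_update(theta_hat, P, phi, y)
```

The forgetting factor trades tracking speed against noise sensitivity: lam near 1 averages over a long window (slow tracking, low variance), smaller lam shortens the effective memory.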

**Hybrid Systems: The Continuous Meets the Discrete in Systems and Control**

*Time:* 10:15-11:00. *Place:* Glashuset.

Hybrid systems have both continuous and discrete states which evolve subject to continuous (ODE governed) and discrete (automata governed) controlled dynamics. Such systems play a central role in contemporary control engineering due to the standard feedback architecture of digital devices controlling systems with both continuous and discrete properties. Examples of hybrid systems are to be found in chemical and automotive engineering, space vehicle control and communication networks; moreover, hybrid behaviour can be identified in optics and in thermodynamic systems. In this talk we give a Hybrid Pontryagin Maximum Principle for general hybrid systems, a Hybrid Dynamic Programming theorem for regional hybrid systems (where the discrete state depends on the continuous state value) and present optimal control algorithms based on these results.

*Time:* 13:15-15:15. *Place:* Glashuset.

A receding horizon control problem is formulated to minimize the entropy of an estimate distribution by controlling a mobile sensor. Computational methods are developed for the case of a range-limited sensor tracking a moving target using a particle filter representation. A control that maximizes the probability of detection is shown to be mathematically equivalent to the entropy-minimizing control when the probability of detection is low, leading to a more reliable and efficient control calculation.

*Time:* 13:15-15:15. *Place:* Glashuset.

Binary-valued or quantized sensors are employed in many practical systems. Typical examples include switching sensors for exhaust gas oxygen, traffic condition indicators in ATM (asynchronous transfer mode) networks, and neural networks. More importantly, the new paradigms of sensor networks, networked systems and control, e-health systems for remote monitoring and diagnosis, etc., mandate that signals be sent over a communication network, and hence be quantized. In other words, pursuing modeling and control of systems that involve communication channels will require, as a foundation, identification and complexity analysis of system identification with quantized observations.

In this talk, recent advances will be presented on system identification with binary or quantized observations. We will start with the fundamental aspects of identification algorithms, strong convergence, convergence rates, and algorithm efficiency (optimality). Findings from these fundamental issues are then employed to understand such identification problems in various system and environment settings, including different system models (gain, finite impulse response, and rational systems), joint identification of systems and noise distributions, impact of communication channels on identification accuracy and speed, selection of quantization thresholds, etc.
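The basic mechanism for the simplest case, a gain system, can be sketched as follows (a standard construction in this literature, shown here under my own assumptions; the talk's algorithms and convergence analysis are not reproduced). With y_k = theta*u + d_k observed only through a binary sensor s_k = 1{y_k <= C} and noise of known distribution F, the empirical frequency of s_k = 1 converges to F(C - theta*u), so inverting F identifies theta:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
theta, u, C = 2.0, 1.5, 4.0          # true gain, known input, known threshold
N = 200_000
d = rng.normal(0.0, 1.0, N)          # noise with known CDF F (standard normal)
s = (theta * u + d <= C).astype(float)   # binary observations only
xi = s.mean()                        # empirical frequency -> F(C - theta*u)
theta_hat = (C - NormalDist().inv_cdf(xi)) / u
```

Convergence rate and optimality questions then concern how fast `xi` concentrates and how the threshold `C` should be placed relative to theta*u, since the inversion is most accurate where the CDF is steep.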

**Optimal Control and Model Reduction of Nonlinear DAE Models**

*Time:* 10:15-12:00. *Place:* Visionen.

In this thesis, different topics for models that consist of both differential and algebraic equations are studied. The interest in such models, denoted DAE models, has increased substantially in recent years. One of the major reasons is that several modern object-oriented modeling tools used to model large physical systems yield models in this form. The DAE models will, at least locally, be assumed to be described by a decoupled set of ordinary differential equations and purely algebraic equations. In theory, this assumption is not very restrictive, because index reduction techniques can be used to rewrite rather general DAE models to satisfy it.

One of the topics considered in this thesis is optimal feedback control. For state-space models, it is well known that the Hamilton-Jacobi-Bellman (HJB) equation can be used to calculate the optimal solution. For DAE models, a similar result exists where a Hamilton-Jacobi-Bellman-like equation is solved. This equation has an extra term in order to incorporate the algebraic equations, and it is investigated how the extra term must be chosen in order to obtain the same solution from the different equations.

A problem when using the HJB equation to find the optimal feedback law is that it involves solving a nonlinear partial differential equation. Often, this equation cannot be solved explicitly. An easier problem is to compute a locally optimal feedback law. For analytic nonlinear time-invariant state-space models, this problem was solved in the 1960s, and in the 1970s the time-varying case was solved as well. In both cases, the optimal solution is described by convergent power series. In this thesis, both of these results are extended to analytic DAE models.

Usually, the power series solution of the optimal feedback control problem consists of an infinite number of terms. In practice, an approximation with a finite number of terms is used. For certain problems, however, the region in which the approximate solution is accurate may be small. Therefore, another parametrization of the optimal solution, namely rational functions, is studied. It is shown that for some problems this parametrization gives a substantially better result than the power series approximation, in terms of approximating the optimal cost over a larger region.

A problem with the power series method is that the computational complexity grows rapidly both in the number of states and in the order of approximation. However, for DAE models where the underlying state-space model is control-affine, the computations can be simplified. Therefore, conditions under which this property holds are derived.

Another major topic considered is how to include stochastic processes in nonlinear DAE models. Stochastic processes are used to model uncertainties and noise in physical processes, and are often an important part of, for example, state estimation. Therefore, conditions are presented under which noise can be introduced in a DAE model such that the model becomes well-posed. For well-posed models, it is then discussed how particle filters can be implemented for estimating the time-varying variables in the model.

The final topic in the thesis is model reduction of nonlinear DAE models. The objective of model reduction is to reduce the number of states while not affecting the input-output behavior too much. Three different approaches are studied: balanced truncation, balanced truncation using minimization of the co-observability function, and balanced residualization. To compute the reduced model for the different approaches, a method originally derived for nonlinear state-space models is extended to DAE models.

**Gripen DEMO fuel system development practice**

*Time:* 13:15-14:00. *Place:* Glashuset.

With the development of the Gripen DEMO fuel system as an example, this is a presentation of current practice in developing fluid dynamical systems in highly integrated aircraft. To achieve a good balance between performance, weight, cost and physical restrictions, model-based methods are necessary to keep track of the impact of system changes. Even with long experience, it is impossible to predict how some changes will affect the system.

For fluid dynamical systems, the system models tend to be very large, up to 300 continuous states, since the systems are distributed throughout the whole aircraft. These models are mainly used for simulation studies, for three main reasons: the models operate in their nonlinear range, in some systems there is never a fixed working point, and personnel lack training in system analysis. The complexity of the system, environment and model makes it hard to perform a valuable validation. Typical studies in the concept evaluation and detailed design phases will be described.

Present development is toward connecting more and more system models, to achieve better trade-offs between systems and better control of the interfaces between them. The presentation will give a brief overview of the challenges that remain when using model-based methods, among them how to keep track of the impact of model uncertainties in very large, nonlinear models derived from physical modeling.

**Electro mechanical braking and road friction estimation**

*Time:* 13:15-14:00. *Place:* Glashuset.

The presentation will summarize experiences from the Road Friction Estimation project performed within the IVSS framework. In particular, problems encountered while using force-slip based algorithms are addressed. The presentation will also give some insight into Haldex's work with brake-by-wire systems.

**Real-time markerless camera tracking for mobile augmented reality**

*Time:* 10:15-11:00. *Place:* Glashuset.

The presentation will give some insight into mobile augmented reality and focus on markerless camera tracking, which is one enabling technology for this research area. Different approaches to camera tracking, such as model-based tracking, sequential structure from motion, visual-inertial tracking and SLAM, will be summarized with respect to both image processing and 3D geometry estimation.

**Use of optimization in power system analysis**

*Time:* 13:15-14:15. *Place:* Glashuset.

The seminar will first present the challenges facing the power system. These include network analysis including dynamics, power system production planning, and electricity market analysis. In the power system field, optimization is used for, e.g.,

- Optimal operation of the grid in order to minimize losses (optimal load flow including use of reactive resources and controllable devices)

- Optimal operation of the production system (planning according to power prices and uncertainties)

- Power market simulation (assumptions of perfect competition, or other assumptions including market power, environmental restrictions, market rules, etc., lead to a formulation of an optimization problem)

- Optimal expansion of power system (valid for both grid expansion and power production expansion)

- Use of optimization to solve large nonlinear systems of equations

- Parameter estimation of dynamic power system models

- Solving system identification tasks relevant to power systems

- Design and tuning of optimal controllers for power system applications

- Development of reduced-order linear models of nonlinear power system components

The main use of optimization at KTH, Division of Electric Power Systems, is in formulating problems to be solved with optimization. The focus is mainly on modifying the problems in order to make them solvable, not so much on developing new solution algorithms. Examples from these areas will be presented.

**Interior-point methods for nuclear norm minimization**

*Time:* 13:15-14:00. *Place:* Algoritmen.

The nuclear norm (sum of singular values) of a matrix is often used in convex heuristics for rank minimization problems in control, system identification, signal processing, and statistics. These heuristics can be viewed as generalizations of 1-norm minimization methods for compressed sensing.

In this talk we discuss the implementation of interior-point methods for the solution of linear nuclear norm approximation problems. The problem can be formulated as a semidefinite program that includes large auxiliary matrix variables and is difficult to solve with general-purpose solvers. By exploiting problem structure, we show that the cost per iteration of an interior-point method can be reduced to roughly the cost of solving the approximation problem in the Frobenius norm.
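For concreteness, here is the nuclear norm itself together with its proximal operator, singular value thresholding, which is the basic building block of many heuristics for rank minimization (an illustrative sketch; the talk's structured interior-point method is not reproduced, and the test matrix is random):

```python
import numpy as np

def nuclear_norm(A):
    """Sum of singular values."""
    return np.linalg.svd(A, compute_uv=False).sum()

def svt(A, tau):
    """Proximal operator of tau*||.||_*: shrink each singular value by tau."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3)) @ rng.normal(size=(3, 8))   # rank-3 matrix
A += 0.01 * rng.normal(size=A.shape)                    # plus a little noise
B = svt(A, tau=1.0)     # shrinking small singular values lowers the rank
```

Just as soft-thresholding of entries is the prox of the 1-norm in compressed sensing, soft-thresholding of singular values is the prox of the nuclear norm, which is the sense in which the heuristic generalizes 1-norm minimization.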

**A Structure Utilizing Inexact Primal-Dual Interior-Point Method for Analysis of Linear Differential Inclusions**

*Time:* 10:15-12:00. *Place:* Visionen.

The ability to analyze system properties for large-scale systems is an important part of modern engineering. Although computer power increases constantly, there is still a need to develop tailored methods for large-scale systems, since standard methods sometimes cannot handle the problem sizes that occur.

In this thesis the focus is on system analysis, in particular analysis methods that result in optimization problems with a specific problem structure. In order to solve these optimization problems, primal-dual interior-point methods have been tailored to the specific structure. A convergence proof for the suggested algorithm is also presented.

It is the structure utilization, together with the use of an iterative solver for the search directions, that enables the algorithm to be applied to optimization problems with a large number of variables. However, using an iterative solver to find the search directions gives infeasible iterates in the optimization algorithm. This makes an infeasible method desirable, and hence such a method is proposed. Using an iterative solver also requires a good preconditioner. In this work two different preconditioners are used at different stages of the algorithm: the first is used in the initial stage, while the second is applied when the iterates are close to the boundary of the feasible set. The proposed algorithm is evaluated in a simulation study, where it is shown to solve problems that are unsolvable for a standard solver.
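As background on the kind of iterative solver mentioned above, here is a generic preconditioned conjugate gradient sketch (not the thesis's tailored method; the Jacobi preconditioner and the tiny example system are chosen purely for illustration):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for A x = b with A symmetric
    positive definite; M_inv applies the inverse of the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Diagonal (Jacobi) preconditioner on a small SPD system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
print(x)  # approximate solution of A x = b
```

The choice of preconditioner is the crux in practice: a cheap one may suffice early on, while ill-conditioning near the boundary of the feasible set calls for a stronger one, which matches the two-stage strategy described in the abstract.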

**A Fix-Up for the EKF Parameter Estimator**

*Time:* 13:15-14:00. *Place:* Algoritmen.

We have reduced recursive parameter estimation to Kalman filtering, with a few added fixes. By incorporating projections in the parameter gain updates and parameter variance estimates, the recursive maximum likelihood method asymptotically becomes a reformulation and fix-up of the extended Kalman filter used as a parameter estimator (EKFPE), except that an additional n x n symmetric matrix must also be updated for each parameter estimate. Estimates of both the process and measurement noise variances, as well as of structural parameters, have been proven to converge to a maximum of the likelihood function. This obviates the usual guesswork in choosing noise variances when fitting data with the EKFPE, and assures the existence of the innovations representation for the recursive maximum likelihood method. Parameters of slightly nonlinear systems, slightly unstable linear systems, and drastically time-varying stable linear systems can be estimated even in severe noise environments. On average, the rate of convergence of the parameter estimates appears to be faster than that of other methods, provided no projection limit is hit.
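The basic idea of Kalman filtering as a parameter estimator can be sketched in a few lines: an unknown parameter is modeled as a constant state and updated with the standard measurement update. The model and numbers below are invented for illustration and are far simpler than the EKFPE setting discussed in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 2.5
# Treat the unknown parameter as a constant state:
#   theta_{t+1} = theta_t,   y_t = u_t * theta_t + e_t
theta_hat, P = 0.0, 100.0   # initial estimate and its variance
R = 0.01                    # measurement noise variance
for _ in range(200):
    u = rng.normal()
    y = theta_true * u + rng.normal(scale=np.sqrt(R))
    # Kalman measurement update with "observation matrix" H = u
    S = u * P * u + R
    K = P * u / S
    theta_hat += K * (y - u * theta_hat)
    P -= K * u * P
print(theta_hat)  # close to 2.5
```

In this linear-in-the-parameter case the recursion is exact; the difficulties the talk addresses arise when the model is nonlinear in the parameters and the noise variances themselves are unknown.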

**Astronomy, Cosmology, and Control Systems: Instrumentation for Understanding the Universe through Adaptive Optics**

*Time:* 13:15-14:00. *Place:* Glashuset.

Astronomy and cosmology are hot research areas of science now because 94% of our universe is unknown, consisting of dark matter and dark energy. To investigate these elusive and challenging concepts, new instrumentation on earth-based telescopes is needed. The Thirty Meter Telescope (with 10 times the primary-mirror area of the Keck telescope, presently the world's largest), which is almost completely designed and is now receiving construction funding ($200,000,000 from Gordon and Betty Moore), depends on adaptive optics (AO) to probe much farther into space. Both Multi-Conjugate AO and Multi-Object AO can be effectively improved by predictive AO. Predictive AO is motivated by the study of LQG optimal control on a Hilbert space, which results in a finite-dimensional PID controller as an exact solution. This leads to the problem of multilevel bulk wind velocity estimation from the residuals of the AO controller. These results, and cool pictures of the cosmos, will be shown.

**Pose Estimation and Calibration Algorithms for Vision and Inertial Sensors**

*Time:* 10:15-12:00. *Place:* Visionen.

This thesis deals with estimating position and orientation in real-time, using measurements from vision and inertial sensors. A system has been developed to solve this problem in unprepared environments, assuming that a map or scene model is available. Compared to "camera-only" systems, the combination of the complementary sensors yields an accurate and robust system which can handle periods with uninformative or no vision data and reduces the need for high frequency vision updates.

The system achieves real-time pose estimation by fusing vision and inertial sensors using the framework of nonlinear state estimation for which state space models have been developed. The performance of the system has been evaluated using an augmented reality application where the output from the system is used to superimpose virtual graphics on the live video stream. Furthermore, experiments have been performed where an industrial robot providing ground truth data is used to move the sensor unit. In both cases the system performed well.

Calibration of the relative position and orientation of the camera and the inertial sensor turns out to be essential for proper operation of the system. A new, easy-to-use algorithm for estimating these quantities has been developed using a gray-box system identification approach. Experimental results show that the algorithm works well in practice.

**Homogeneous polynomial Lyapunov functions for robustness analysis of uncertain systems**

*Time:* 15:15-16:00. *Place:* Glashuset.

Lyapunov functions are a widely used tool for assessing robust stability of systems affected by structured uncertainties. The aim of the talk is to illustrate the potential of homogeneous polynomial Lyapunov functions in several robustness analysis problems.

A sum-of-squares parameterization of homogeneous polynomial forms is exploited to formulate sufficient conditions for robust stability in terms of LMI optimization problems. It is shown that the use of the proposed classes of Lyapunov functions enhances robust stability tests and robust performance evaluation, both for time-invariant and time-varying uncertainties, with respect to several classes of Lyapunov functions considered in the literature (quadratic, piecewise, linear parameter-dependent, etc.).
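To fix ideas, the standard sum-of-squares parameterization underlying such LMI conditions can be sketched as follows (a generic setup, not necessarily the talk's exact formulation):

```latex
% Let z(x) collect the monomials of degree m in x, so that a
% homogeneous form V of degree 2m can be written as
V(x) = z(x)^{\top}\bigl(P + L(\alpha)\bigr)\,z(x),
\qquad z(x)^{\top} L(\alpha)\,z(x) \equiv 0,
% where L(\alpha) parameterizes the ambiguity in the Gram matrix.
% V is a sum of squares if and only if
\exists\,\alpha:\quad P + L(\alpha) \succeq 0,
% an LMI feasibility problem in P and \alpha. Imposing the analogous
% condition on -\dot{V} along the vertex systems of a polytopic
% uncertainty yields sufficient LMI tests for robust stability.
```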

**System Identification for Interconnected Nonlinear Systems**

*Time:* 13:15-14:00. *Place:* Glashuset.

Models of dynamical systems are important in many disciplines of science, ranging from physics and traditional mechanical and electrical engineering to life sciences, computer science and economics. Engineers, for example, use models for development, analysis and control of complex technical systems. Dynamical models can be derived from physical insight, for example known laws of nature (which are themselves models), or, as considered here, by fitting unknown model parameters to measurements from an experiment. The latter approach is what we call system identification. A model is always (at best) an approximation of the true system, and for a model to be useful, we need some characterization of how large the model error is. In this thesis we consider model errors originating from stochastic (random) disturbances that the system was subject to during the experiment.

Stochastic model errors, known as variance-errors, are usually analyzed under the assumption of an infinite number of data. In this context the variance-error can be expressed as a (complicated) function of the spectra (and cross-spectra) of the disturbances and the excitation signals, a description of the true system, and the model structure (i.e., the parametrization of the model). The primary contribution of this thesis is an alternative geometric interpretation of this expression. This geometric approach consists in viewing the asymptotic variance as an orthogonal projection on a vector space that to a large extent is defined from the model structure. This approach is useful in several ways. Primarily, it facilitates structural analysis of how, for example, model structure and model order, and possible feedback mechanisms, affect the variance-error. Moreover, simple upper bounds on the variance-error can be obtained, which are independent of the employed model structure.
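For context, the classical prediction-error expression behind this asymptotic analysis (standard identification theory, not a result specific to the thesis) is

```latex
\sqrt{N}\,\bigl(\hat{\theta}_N - \theta_0\bigr)
\;\xrightarrow{d}\; \mathcal{N}(0, P),
\qquad
P \;=\; \lambda_0 \Bigl[\,\overline{E}\,
\psi(t,\theta_0)\,\psi(t,\theta_0)^{\top}\Bigr]^{-1},
```

where $\psi(t,\theta_0)$ is the gradient of the one-step-ahead predictor with respect to the parameters and $\lambda_0$ is the innovations variance. The geometric viewpoint interprets scalar variance quantities derived from $P$ as squared norms of orthogonal projections onto the subspace spanned by the components of $\psi$, which is what makes structural conclusions possible without evaluating the full expression.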

Perhaps the most important contribution of this thesis, and of the geometric approach, is the analysis method as such. Hopefully the methodology presented in this work will be useful in future research on the accuracy of identified models; in particular non-linear models and models with multiple inputs and outputs, for which there are relatively few results at present.

**On efficient on-line implementation of off-line MPC for hybrid systems**

*Time:* 13:15-14:00. *Place:* Glashuset.

In Model Predictive Control (MPC), if the model of the plant is constrained and linear, and the performance index is based on linear vector norms, it can be shown that the underlying optimization problem can be formulated and solved as a multi-parametric linear program. The resulting closed-form solution, which can be interpreted as a lookup table, is a piecewise affine (PWA) control law defined over polyhedral regions of the state space. An advantage of the closed-form solutions is that their on-line application reduces to a simple set-membership test, which can be performed much faster compared to traditional on-line optimization-based techniques. However, the time needed to evaluate the lookup table significantly limits the minimal admissible sampling time of the control system.

In this talk we first review basic properties of parametric solutions to MPC problems for hybrid systems. Once the solution is obtained, we subsequently introduce various strategies which help to increase the on-line evaluation speed. One of the schemes is based on constructing search trees using so-called bounding boxes. In this approach the lookup table can be evaluated by answering a series of stabbing queries.

The second approach is based on approximating the optimal PWA feedback law by a single polynomial in a way such that stability of the closed-loop system is preserved. To achieve this goal, we propose first to calculate a parameterization of a set of stabilizing feedback laws for hybrid systems and then (in the second step) to find a multivariate polynomial contained in such a set. If the polynomial exists and is applied as a state feedback control law to the system, closed-loop stability and constraint satisfaction are guaranteed.
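The set-membership evaluation of a PWA lookup table can be sketched in a few lines; the regions and gains below are invented toy values, not taken from any real explicit MPC solution:

```python
import numpy as np

# Each region is a polyhedron {x : H x <= k} with an affine law
# u = F x + g valid on it.  (Illustrative values only.)
regions = [
    (np.array([[ 1.0, 0.0]]), np.array([0.0]),   # region 1: x1 <= 0
     np.array([[ 0.5, 0.0]]), np.array([1.0])),  # u = 0.5*x1 + 1
    (np.array([[-1.0, 0.0]]), np.array([0.0]),   # region 2: x1 >= 0
     np.array([[-0.2, 0.1]]), np.array([0.0])),  # u = -0.2*x1 + 0.1*x2
]

def pwa_control(x):
    """Point location by sequential set-membership tests."""
    for H, k, F, g in regions:
        if np.all(H @ x <= k + 1e-9):
            return F @ x + g
    raise ValueError("x outside the stored partition")

print(pwa_control(np.array([-2.0, 1.0])))  # region 1: 0.5*(-2) + 1 = 0.0
```

The linear scan over regions is what the search-tree and bounding-box schemes mentioned above are designed to replace, reducing the evaluation cost from linear to roughly logarithmic in the number of regions.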