**Multi-User Transmit Beamforming using Convex Optimization**

*Time:* 13:15-14:00. *Place:* Glashuset.

This talk will provide an overview of advanced convex optimization approaches to multi-sensor transmit beamforming. Beamforming is a versatile and powerful approach to receive, transmit, or relay signals of interest in a spatially selective way in the presence of interference and noise. The focus of the talk is on multi-user transmit beamforming, which arises in the cellular multi-user downlink when the transmitter is equipped with multiple transmit antennas. In this case, multiple beamforming weight vectors are used to carry different co-channel unicast transmissions, one per user. These vectors are then jointly designed to balance the interference between the different transmissions. The weight vector designed for a given user should have a large inner product with the steering vector of this user, and small inner products with the steering vectors of all other users. The problem of interest is convex, since it can be formulated as a second-order cone program (SOCP).
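
The inner-product design goal can be illustrated with a zero-forcing sketch, which is a simpler baseline than the SOCP design discussed in the talk; the antenna count and the random channels below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_users = 8, 3

# Rows of H hold the conjugate-transposed channel (steering) vectors h_i^H,
# one per user (synthetic stand-ins).
H = (rng.standard_normal((n_users, n_tx))
     + 1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)

# Zero-forcing beamvectors: one column per user, chosen so that
# h_i^H w_j = delta_ij -- a large inner product with the intended user's
# steering vector and zero with all others.
W = np.conj(H).T @ np.linalg.inv(H @ np.conj(H).T)

G = np.abs(H @ W)      # |h_i^H w_j|: gain of stream j at user i
print(np.round(G, 6))  # close to the identity matrix
```

Zero-forcing ignores noise and transmit power, which is exactly what the jointly optimized SOCP design improves upon; the sketch only shows the inner-product structure the abstract describes.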

What if we wish to transmit common information to many users? The core problem is a multicast generalization of the multiuser downlink beamforming problem. Such a generalization is relevant and timely, e.g., in the context of the emerging WiMAX and UMTS-LTE wireless networks. The joint multicast beamforming problem is in general NP-hard, motivating the pursuit of computationally efficient quasi-optimal solutions. For arbitrary channel vectors, it is shown that semidefinite relaxation coupled with suitable randomization / cochannel multicast power control yields computationally efficient, high-quality approximate solutions. The multicast beamforming problem is then revisited for the important special case when the channel vectors are Vandermonde. This arises when a uniform linear antenna array is used at the transmitter under far-field line-of-sight propagation conditions, as provisioned in 802.16e and related wireless backhaul scenarios. It is shown that for Vandermonde channel vectors it is possible to recast the optimization in terms of the autocorrelation sequences of the sought beamvectors, yielding an equivalent convex reformulation. This affords efficient optimal solution using modern interior point methods. The optimal beamvectors can then be recovered using spectral factorization.
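
The Gaussian randomization step can be sketched as follows. This is only an illustration: the covariance `X` below is a synthetic stand-in for the solution of the semidefinite relaxation (which in the actual method comes from an SDP solver), and the power-control step is reduced to a simple rescaling:

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_users, n_draws = 4, 3, 500

# Synthetic user channels h_i (rows).
H = rng.standard_normal((n_users, n_tx)) + 1j * rng.standard_normal((n_users, n_tx))

# Stand-in for the relaxed solution X >= 0 of the multicast SDP.
A = rng.standard_normal((n_tx, n_tx)) + 1j * rng.standard_normal((n_tx, n_tx))
X = A @ np.conj(A).T / n_tx

# Gaussian randomization: draw w ~ CN(0, X), scale so that the weakest
# user's received power |h_i^H w|^2 meets the target, keep the cheapest draw.
L = np.linalg.cholesky(X + 1e-9 * np.eye(n_tx))
target = 1.0
best_w, best_power = None, np.inf
for _ in range(n_draws):
    w = L @ (rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)) / np.sqrt(2)
    gains = np.abs(np.conj(H) @ w) ** 2
    w_scaled = w / np.sqrt(gains.min() / target)  # boost until weakest user hits target
    power = np.linalg.norm(w_scaled) ** 2
    if power < best_power:
        best_w, best_power = w_scaled, power

print(best_power, np.abs(np.conj(H) @ best_w) ** 2)
```

In the full method the candidate generation is interleaved with cochannel power control across several multicast groups; the single-group rescaling above is the simplest special case.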

**Collaborating Swarms, Multi-network Topologies and Constrained Coalitional Games**

*Time:* 13:15-14:00. *Place:* Visionen.

We consider the problem of autonomous collaboration in groups of robots, vehicles, or agents in general. We describe methods for deriving local coordination rules using techniques from time-varying Markov random fields, which result in distributed asynchronous coordination algorithms using parallel Gibbs samplers. The algorithms circumvent the well-known problem of traditional potential methods, which get stuck in locally optimal paths. We show that under reasonable and mild assumptions, globally optimal coordination paths emerge from these local strategies. We then consider the tradeoffs between performance and execution time, and develop and analyze two additional distributed coordination algorithms to speed up convergence: a hybrid one which mixes deterministic gradient coordination with randomized Gibbs samplers, and another which adds memory to this hybrid algorithm. We demonstrate that these algorithms converge much faster while still producing nearly optimal paths.

We then investigate the role of the communication topology among the collaborating agents in improving the performance of distributed algorithms on graphs, such as convergence speed. We rigorously demonstrate that Small World graphs emerge as a good tradeoff between performance and efficiency in consensus problems, where the latter serve as a prototypical coordination problem. We discuss extensions to expander graphs and the significance of separating the collaboration topology from the communication topology in collaborating swarms.

Next we introduce constrained coalitional games and show that they capture in a fundamental way the basic tradeoff of benefits vs. cost of collaboration in networked collaborating systems. We demonstrate that various simple models of constrained coalitional games can explain network formation and the emergence, or not, of collaboration.
We close with conclusions on autonomic networked swarms and examples from biology, engineering, social and economic networks, and provide a brief list of interesting future research directions.
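
The effect of topology on consensus speed is easy to exhibit numerically. In this hedged sketch (graph size, neighbourhood radius, and shortcut count are arbitrary choices, and Metropolis weights stand in for whatever averaging rule one prefers), a ring lattice is compared with a small-world variant obtained by adding a few random long-range links:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50

def averaging_matrix(adj):
    # Metropolis weights: symmetric, doubly stochastic consensus matrix.
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
    np.fill_diagonal(W, 1 - W.sum(axis=1))
    return W

def convergence_rate(W):
    # Second-largest eigenvalue magnitude: smaller means faster consensus.
    ev = np.sort(np.abs(np.linalg.eigvalsh(W)))
    return ev[-2]

# Ring lattice: each agent talks to two neighbours on each side.
ring = np.zeros((n, n), dtype=bool)
for i in range(n):
    for d in (1, 2):
        ring[i, (i + d) % n] = ring[(i + d) % n, i] = True

# Small-world: the same ring plus a few random long-range shortcuts.
small_world = ring.copy()
for _ in range(10):
    i, j = rng.integers(0, n, size=2)
    if i != j:
        small_world[i, j] = small_world[j, i] = True

print(convergence_rate(averaging_matrix(ring)))
print(convergence_rate(averaging_matrix(small_world)))
```

With only a handful of extra links the small-world graph typically has a noticeably smaller rate, which is the tradeoff between performance and (communication) efficiency the abstract refers to.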

**Finding pedestrians in the dark**

*Time:* 14:15-15:00. *Place:* Glashuset.

Active safety is a growing area within the automotive industry, and includes different means of improving safety before an accident occurs. In this talk, we will present Autoliv's pedestrian detection system for night vision, available in BMW's 5 and 7 series and the Audi A8. The system uses a Far Infrared (FIR) camera to detect pedestrians in front of the vehicle during night-time, and consists of three major functional modules - image enhancement, pedestrian detection, and a warning strategy - of which we will focus on the latter two. To develop a successful detection system, a surrounding infrastructure must be built to support activities such as data collection, and some of these aspects will also be touched upon.

**Dirichlet Process for Multi-target Tracking in Application to Dynamic Speech Spectrum Representation and Tracking Vocal Tract Resonance Frequencies**

*Time:* 13:15-14:00. *Place:* Glashuset.

This talk is about a Dirichlet process based multi-target tracking algorithm which aims to track an unknown number of targets in a surveillance region. Dirichlet processes are widely used in classification, clustering, and mixture density estimation problems in statistics. The proposed model relies on an extension of Dirichlet processes, namely time-varying Dirichlet processes. The model naturally handles the track initiation/deletion tasks of the multi-target tracking problem, whereas existing algorithms use either some ad-hoc logic or probabilistic variants thereof. As an application, the formant tracking problem from the speech signal processing literature is chosen. The method involves modeling the speech signal spectrum by a mixture of an unknown number of Gaussians, for which the Dirichlet process mixture model is utilized.
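
The key property of the Dirichlet process prior, that the number of clusters (targets) is not fixed in advance, can be sketched through its Chinese-restaurant-process representation. This is the prior only, not the tracking algorithm from the talk, and the concentration parameter below is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(3)

def crp_partition(n, alpha):
    """Draw a partition of n items from a Dirichlet process prior
    (Chinese restaurant process): item t joins existing cluster k with
    probability n_k / (t + alpha), or opens a new cluster with
    probability alpha / (t + alpha)."""
    labels = [0]
    counts = [1]
    for t in range(1, n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= t + alpha
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)   # a new cluster is born -- the model decides this
        else:
            counts[k] += 1
        labels.append(k)
    return labels, counts

labels, counts = crp_partition(100, alpha=2.0)
print(len(counts), counts)  # number of clusters and their sizes
```

In the tracking context, "opening a new cluster" plays the role of track initiation, which is why the model handles birth/death of targets without ad-hoc logic.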

**Topics in Particle filtering/smoothing**

*Time:* 13:15-14:00. *Place:* Glashuset.

Particle filtering/smoothing is a promising class of algorithms for estimation problems in nonlinear dynamic systems. One of the key issues in particle filtering is a suitable choice of the so-called importance function. The optimal importance function (due to Doucet et al.), which includes the information from the most recent observation, is difficult to obtain in most practical situations. I will describe some recent approaches to approximating this function. Another issue which has received little attention is the MAP estimator in the particle filtering/smoothing context. Some new developments in this direction will be discussed. Finally, I will describe new algorithms for estimating the unknown static parameters of a general state-space model.
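
As a point of reference for the discussion, a bootstrap particle filter, which uses the prior rather than the optimal importance function, can be sketched on the classical scalar benchmark model (noise levels and particle count below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
T, N = 50, 1000  # time steps, particles

def f(x, t):     # classical nonlinear benchmark dynamics
    return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * t)

def h(x):
    return x**2 / 20

# Simulate a trajectory.
x = np.zeros(T)
for t in range(1, T):
    x[t] = f(x[t-1], t) + rng.normal(scale=np.sqrt(10))
y = h(x) + rng.normal(size=T)

# Bootstrap PF: the prior p(x_t | x_{t-1}) serves as importance function
# (the simple choice; the optimal one also conditions on y_t and is
# usually intractable, which motivates the approximations in the talk).
particles = rng.normal(scale=2.0, size=N)
est = np.zeros(T)
for t in range(T):
    if t > 0:
        particles = f(particles, t) + rng.normal(scale=np.sqrt(10), size=N)
    logw = -0.5 * (y[t] - h(particles))**2            # unit-variance Gaussian likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = np.sum(w * particles)                    # posterior-mean estimate
    particles = particles[rng.choice(N, size=N, p=w)] # multinomial resampling

print(np.sqrt(np.mean((est - x)**2)))  # RMSE of the filter estimate
```

Note that on this bimodal benchmark the posterior mean is itself a debatable point estimate, which connects to the MAP-estimation topic of the talk.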

**Smoothing in general hidden Markov models using sequential Monte Carlo methods**

*Time:* 13:15-14:00. *Place:* Glashuset.

A recurring problem when operating on general hidden Markov models is the approximation of the smoothing distribution, that is, the conditional distribution of one or more states given past, present, and future observations. The aim of this talk is to show how sequential Monte Carlo methods (particle filters) can be used for conveniently approximating such distributions; for example, I present a particle-based algorithm that, by avoiding the well-known degeneracy of the genealogical particle tree, efficiently approximates the smoothing distribution at a computational cost that grows only linearly with the number of particles. I will also describe how to establish the convergence of the schemes under consideration, in terms of novel exponential deviation inequalities and central limit theorems.
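
The genealogical degeneracy mentioned above is easy to demonstrate numerically: tracing the ancestry of the final particles back through the resampling steps typically collapses to very few distinct ancestors. The following sketch uses a toy random-walk model (not the algorithm from the talk) purely to exhibit the effect:

```python
import numpy as np

rng = np.random.default_rng(5)
T, N = 100, 500

# Toy random-walk state with noisy observations, to drive a bootstrap PF.
x_true = np.cumsum(rng.normal(size=T))
y = x_true + rng.normal(size=T)

particles = rng.normal(size=N)
ancestors = np.zeros((T, N), dtype=int)
for t in range(T):
    if t > 0:
        particles = particles + rng.normal(size=N)   # propagate
    logw = -0.5 * (y[t] - particles) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                 # multinomial resampling
    ancestors[t] = idx
    particles = particles[idx]

# Trace every surviving particle's lineage back to time 0.
lineage = np.arange(N)
for t in range(T - 1, 0, -1):
    lineage = ancestors[t][lineage]
print(len(np.unique(lineage)), "distinct ancestors at t=0 out of", N)
```

Smoothing estimates that read states off this collapsed tree are therefore supported by only a handful of paths, which is precisely what the linear-cost algorithm of the talk avoids.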

**Structure exploitation in semidefinite programming for control**

*Time:* 10:15-12:00. *Place:* Visionen.

Many control problems can be cast as semidefinite programs. However, since the size of these problems grows quickly, the computational time to solve them can be substantial. To reduce the computational time, many proposals for tailoring algorithms to particular classes of control problems can be found in the literature. In this thesis, two papers with similar ambitions are presented.

The first paper deals with the case where the constraints of the optimization problem are of the type that stems from the Kalman-Yakubovich-Popov lemma, and where some of these constraints are so-called complicating constraints, meaning that the optimization problem would be greatly simplified if they were not present. Using Lagrangian relaxation, the optimization problem is decomposed into smaller ones, which can be solved independently of each other. Computational results show that for some classes of problems, this algorithm can reduce the computational time compared to using a solver which does not take the nature of the complicating constraints into account.

In the second paper, the fact that many control-related semidefinite programs have matrix-valued variables is utilized to speed up computations. Matrix-valued variables imply that the corresponding basis matrices have a certain low-rank structure which can be exploited when formulating the equations for the search directions, something that was discovered in the 90s and is implemented in LMI Lab. However, much has happened in the area of semidefinite programming since the release of LMI Lab, and new, faster algorithms have been developed. The idea of using the low-rank structure in the basis matrices can still be applied, though. We implement this using the publicly available solver SDPT3 in combination with our code for formulating the system of equations for the search directions. To make adoption easier for potential users, we also describe how the modeling language YALMIP is changed so that this low-rank structure can be tracked, and how the code can be easily interfaced. Computational results show that the computational time is reduced.

**Automatic Milking of Cows - with a cool robot!**

*Time:* 13:15-14:00. *Place:* Glashuset.

Dairy farming has gone through a dramatic evolution with robotic milking in recent years. In this presentation the DeLaval Voluntary Milking System (VMS) will be presented. An introduction to the system, its modules, and technical aspects will be given. Conditions such as animal welfare and interaction will be addressed, as well as future challenges.

The VMS is a robotic milking system which in 2009 was voted the coolest robot by the readers of the Swedish journal Ny Teknik.

**Modeling and simulation in practice**

*Time:* 13:15-14:00. *Place:* Glashuset.

Choosing the right level of detail in a model of a specific application is an art, and it is of utmost importance for achieving the intended result within the set-out time frame. Since the start in October 2002, MathCore has worked with modeling of various types of systems, from the human metabolism to gas turbines.

During the talk, Jan Brugård, CEO at MathCore, will give examples of detailed models of gas turbines from Siemens, very simple models to study printing presses from Tetra Pak, and models of the human metabolism developed at Universitat de Barcelona, to illustrate how model development is governed by purpose and application as well as time frame.

**A system identification challenge from biology: to combine identifiability with interpretability**

*Time:* 13:15-14:00. *Place:* Glashuset.

Having the opportunity to speak before one of the leading system identification groups, I will present a major identification challenge from my work in systems biology - modeling biological systems - with the hope that more people will contribute to its solution.

In systems biology, the emphasis is not on understanding the system in order to create a better predictor, but on using prediction-error based tools to evolve the understanding of the system. This leads to two apparently conflicting goals for the models: i) to include the known mechanistic details and sub-processes, and ii) to find out what can be concluded, and not only proposed, about the system based on the given observations. The reason for the apparent conflict is that the first goal argues for large models, whereas the second argues for minimal and/or identifiable models.

I will present a few examples from my work that illustrate this problem, along with some partial methodological results I have already achieved. I will also demonstrate how these developments point towards a comprehensive new modeling framework. This framework includes modifications of classical modeling tasks - such as the characterization of all acceptable parameters instead of a single identifiable one, and model reduction that combines optimized identifiability with preserved interpretability - but also potentially novel tasks such as the mapping of identifiable parameters and properties to corresponding interpretable parameters and predictions, taken from two general non-nested models describing the same system and data.

**Challenges in Digital Signal Processing for improved performance of AD-conversion**

*Time:* 13:15-14:00. *Place:* Glashuset.

The talk will give an overview of techniques available for improving performance in AD-conversion. We focus on state-of-the-art interleaving and linearization for high-end performance ADCs, e.g., from Texas Instruments (550 MSps, 12 bits) and National Semiconductor (130 MSps, 16 bits). Both theory and real-life data will be discussed.
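
As a toy illustration of interleaving correction (not the methods used in the cited converters), the following sketch blindly matches the sample mean and RMS of one sub-ADC to the other in a two-way time-interleaved converter with a hypothetical gain/offset mismatch:

```python
import numpy as np

n = 4096
t = np.arange(n)
sig = np.sin(2 * np.pi * 0.1037 * t)   # test tone at a non-coherent frequency

# Two-way time-interleaved ADC: the second sub-ADC (odd samples) has a
# hypothetical gain error of 3% and an offset of 0.02.
raw = sig.copy()
raw[1::2] = 1.03 * raw[1::2] + 0.02

# Blind correction: match the odd channel's sample mean and RMS to the
# even (reference) channel.
corr = raw.copy()
ref_mean, ref_rms = raw[0::2].mean(), raw[0::2].std()
m, s = raw[1::2].mean(), raw[1::2].std()
corr[1::2] = (raw[1::2] - m) / s * ref_rms + ref_mean

print(np.abs(raw - sig).max(), np.abs(corr - sig).max())
```

Gain/offset mismatch shows up as spurious tones at fs/2 ± fin; practical calibration also handles timing-skew and nonlinearity mismatch, which this simple mean/RMS matching does not address.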

**Guidance of sounding rockets**

*Time:* 13:15-14:00. *Place:* Glashuset.

RUAG Space develops and manufactures guidance and control systems for sounding rockets and small satellites. More than 200 sounding rockets have been guided by our control systems.

Sounding rockets are relatively small rockets that are launched into so-called suborbital trajectories. These are used for scientific purposes (microgravity and astronomy). A guidance system can be used to reduce the impact dispersion and to relax the wind restrictions at launch.

S19 is a family of guidance systems for sounding rockets, typically two-stage rockets that can reach apogees of 200-400 km. The guidance system is located in the payload and uses canards to aerodynamically control the flight of the rocket during about 18 seconds.

Maxus GCS is used for guiding the one-stage Castor 4B rocket using thrust vector control. This rocket is launched from Esrange and can reach altitudes above 800 km.

A presentation of the development, design, analysis, implementation and verification of these systems will be given.

**Control and Identification of Distributed Systems**

*Time:* 10:15-11:00. *Place:* Glashuset.

Two approaches to analysing the robust control synthesis problem for "discretized" infinite-dimensional systems will be presented. The first is the structured matrix approach, which allows both heterogeneous and homogeneous distributed systems specified on a regular grid to be solved efficiently. The second is the decomposable systems approach, which considers identical systems in an arbitrary configuration (network). For this second class, we conclude with a number of challenges from an identification perspective.

**Data Driven Fault Detection and Identification Connected to Subspace Identification**

*Time:* 13:15-14:00. *Place:* Glashuset.

Classical methods for fault detection and identification (FDI) require knowledge of a model. However, modeling from first principles is often a time-consuming and expensive task, especially when the system is of a rather large scale. On the other hand, models can also be identified from data. System identification techniques usually require defining model structures and selecting model orders, which may not always be easy for engineers. In order to avoid explicit modeling and provide more usability to practitioners, this work aims at developing design algorithms which directly extract FDI filters from the I/O data measured from a plant, without explicitly realizing a transfer-function or state-space type model. These algorithms are connected to closed-loop subspace identification techniques, and only require identifying a sequence of Markov parameters.

This presentation shall mainly focus on the fault detection method, which includes residual generation and evaluation. First, we shall show how to identify a residual generator from data. Next, we shall analyze the covariance structure of the residual vector in terms of both the innovation signals and the parameter identification errors. We then show how to robustify the residual evaluation scheme by incorporating the contribution of the parameter errors, which can be categorized into cautious filtering methods. We shall also show the advantages of this data-driven fault detection scheme over a data-driven parity space approach, in terms of the covariance analysis. Finally, an example will be shown to verify the improved detection performance.
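
The basic idea, building a residual generator directly from Markov parameters estimated by least squares, without realizing a state-space model, can be sketched for a simple open-loop single-input single-output example (the plant, fault size, and FIR length below are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
L, n = 20, 400   # number of Markov parameters, data length

def simulate(u, fault=0.0, t_fault=None, noise=0.01):
    # Stable first-order plant; an additive sensor fault enters the output.
    y = np.zeros(len(u))
    for t in range(1, len(u)):
        y[t] = 0.8 * y[t-1] + u[t-1]
    y += rng.normal(scale=noise, size=len(u))
    if t_fault is not None:
        y[t_fault:] += fault
    return y

# 1. Identify the first L Markov parameters g_k from healthy I/O data by
#    least squares on y_t ~ sum_k g_k u_{t-k} (no explicit model realized).
u_id = rng.normal(size=n)
y_id = simulate(u_id)
Phi = np.column_stack([np.roll(u_id, k)[L:] for k in range(1, L + 1)])
g = np.linalg.lstsq(Phi, y_id[L:], rcond=None)[0]

# 2. Residual generation on fresh data with a sensor bias fault at t=200.
u_test = rng.normal(size=n)
y_test = simulate(u_test, fault=0.5, t_fault=200)
Phi_t = np.column_stack([np.roll(u_test, k)[L:] for k in range(1, L + 1)])
r = y_test[L:] - Phi_t @ g   # residual: near zero when healthy

print(np.abs(r[:150]).mean(), np.abs(r[190:]).mean())
```

The residual here is perturbed both by the innovation noise and by the Markov-parameter estimation error, which is exactly the covariance structure the cautious evaluation scheme in the talk accounts for.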

**Generalized Principal Component Analysis: An Introduction**

*Time:* 13:15-14:00. *Place:* Glashuset.

In a large number of settings we encounter the problem of modeling large amounts of data by what is referred to as a "mixture of models", that is to say that the data can be segmented into finitely many subcomponents, each of which can be separately modeled. In the context of the identification of hybrid systems, it is easy to see how this arises when the input-output behavior depends on the "discrete state" of the hybrid system. Of course, the applications in computer vision, signal and image processing, and indeed more generally in statistics, are extremely numerous. This area of work has seen a tremendous outpouring of effort and methods in recent years across the machine learning, signal processing, hybrid systems, statistics, and adaptive systems literature. However, it is our perception that the conceptual and theoretical underpinnings of the bulk of the literature could be improved.

In the course of a recent set of papers with Yi Ma of the University of Illinois, Urbana Champaign and Rene Vidal of Johns Hopkins University and their students, we have developed what we believe to be an interesting new approach to simultaneously segmenting and modeling data from mixtures of models. The heart of our approach lies in what is called "Generalized Principal Component Analysis". This in turn has many connections with such classical problems as Hilbert's Nullstellensatz and some other open problems in algebraic geometry and statistics. In my talk at this workshop, I will give a brief overview of the approaches and their applications to date. The work is being incorporated into a monograph to be published by Springer Verlag in 2011 (?).
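
The algebraic core of GPCA can be sketched for the simplest case of two lines through the origin in the plane: every data point is a root of a single quadratic, whose coefficients are found from the null space of the Veronese-embedded data, and whose gradient at each point recovers the normal of that point's subspace (the noise level and clustering threshold below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)

# Data drawn from a mixture of two lines through the origin in R^2,
# with normals b1 and b2, plus a little noise.
b1, b2 = np.array([1.0, -1.0]), np.array([1.0, 2.0])
pts = []
for _ in range(200):
    b = b1 if rng.random() < 0.5 else b2
    d = np.array([b[1], -b[0]])          # direction of the line b.x = 0
    pts.append(rng.normal() * d + 0.01 * rng.normal(size=2))
X = np.array(pts)

# Veronese embedding of degree 2: every point satisfies the quadratic
# q(x) = (b1.x)(b2.x) = 0, so the embedded data matrix has a null vector.
V = np.column_stack([X[:, 0]**2, X[:, 0] * X[:, 1], X[:, 1]**2])
_, _, Vt = np.linalg.svd(V)
c = Vt[-1]                               # coefficients of q

def grad_q(x):
    # grad q(x) is parallel to the normal of the line containing x.
    return np.array([2*c[0]*x[0] + c[1]*x[1], c[1]*x[0] + 2*c[2]*x[1]])

normals = np.array([grad_q(x) for x in X])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
# Segment by comparing normals (up to sign) with the first point's normal.
same = np.abs(normals @ normals[0]) > 0.9
print(same.sum(), (~same).sum())         # sizes of the two recovered groups
```

The same recipe, with higher-degree Veronese maps and higher-dimensional subspaces, is what connects the method to polynomial algebra and the Nullstellensatz mentioned above.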

**Docentföreläsning: Nonlinear Filtering with Sensor Fusion Applications**

*Time:* 13:15-14:00. *Place:* Visionen.

Sensor fusion problems such as positioning and tracking applications naturally lead to a recursive Bayesian estimation formulation. For nonlinear systems, or systems where the underlying probability distribution is non-Gaussian or highly multi-modal, traditional methods based on first- and second-order statistics, such as the Extended Kalman Filter (EKF), may perform badly. The Particle Filter (PF) provides a numerical solution to the otherwise intractable recursive Bayesian estimation problem. The theory will be discussed and compared to traditional methods, where it will be shown that the PF can handle nonlinearities and is not restricted to any particular assumption on the noise. It also removes the restriction to an explicit state-space formulation.

For many practical applications such as positioning, a combination of the KF and the PF can be applied, referred to as the Rao-Blackwellized PF (RBPF). In the lecture, the algorithmic complexity of this filter is addressed, where the KF solves a linear subsystem subject to Gaussian noise efficiently and optimally. It is also discussed how to structure the RBPF for efficient implementation, and it is demonstrated how to apply the method to Simultaneous Localization And Mapping (SLAM), where a map is constructed from measurement data and used for navigation at the same time. Several positioning and tracking applications are compared to traditional approaches, for instance underwater navigation, UAV/robot SLAM, and sensor quantization. In parallel, fundamental limits such as the Cramer-Rao Lower Bound (CRLB) are discussed, and alternative analysis methods based on information-theoretic results are highlighted.

**Verification and Control of Hybrid Systems using Reachability Analysis**

*Time:* 13:15-14:15. *Place:* Visionen.

This talk will present reachability analysis as a tool for model checking and controller synthesis for hybrid systems. We will consider the problem of guaranteeing reachability to a given desired subset of the state space. We allow for nonlinear dynamics in each discrete mode, and possibly non-convex state constraints. We make use of techniques from hybrid system verification to compute reachable sets, under bounded model disturbances that vary continuously. We also consider the effects of sampling and quantization. The resulting control policy is an explicit feedback law involving both a selection of continuous inputs and discrete switching commands at each time instant, based upon measurement of system state. We discuss real time implementations of this, and present several examples from multiple UAV control.
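
For a one-dimensional system with monotone dynamics, a reachable-set over-approximation under bounded disturbances reduces to propagating interval endpoints; the following sketch (the dynamics and disturbance bound are made up, and far simpler than the hybrid setting of the talk) also checks containment by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(8)

def f(x):
    # Monotone (increasing) scalar dynamics: f'(x) = 0.9 + 0.1 cos(x) > 0,
    # so an interval [lo, hi] maps exactly onto [f(lo), f(hi)].
    return 0.9 * x + 0.1 * np.sin(x)

d_max, T = 0.05, 30
lo, hi = -1.0, 1.0                       # initial set
bounds = [(lo, hi)]
for _ in range(T):
    lo, hi = f(lo) - d_max, f(hi) + d_max  # reachable-set over-approximation
    bounds.append((lo, hi))

# Monte Carlo check: every disturbed trajectory stays inside the tube.
ok = True
for _ in range(200):
    x = rng.uniform(-1.0, 1.0)
    for k in range(1, T + 1):
        x = f(x) + rng.uniform(-d_max, d_max)
        l, h = bounds[k]
        ok &= (l <= x <= h)
print(ok, bounds[-1])
```

Guaranteeing reachability to a target set, with non-convex constraints and discrete switching as in the talk, requires much richer set representations; the interval tube above only conveys the flavor of set-valued propagation under bounded disturbances.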

**Past, present and the future of the Gripen Control System**

*Time:* 13:15-14:00. *Place:* Glashuset.

A short overview of control systems from yesterday until today, followed by experiences from the JA 37 Viggen and the fly-by-wire demonstrator that influenced the design of the modern lightweight multirole combat fighter JAS 39 Gripen, with its inherent negative stability. Other important control system design parameters are also discussed, as is the basic strategy of the JAS 39 Gripen flight control system design.

To provide the pilot with maximum aircraft performance in every situation, without requiring extra attention to aircraft limits such as structural overloading, departure situations, and, for example, engine air-intake envelope limits, the JAS 39 Gripen has built-in flight control system limit functions that make it carefree to manoeuvre.

The flight control system also handles the millions of external store combinations that can be mounted on this small aircraft; we describe how the effects of the external stores are compensated, and how the aircraft is controlled in the high-angle-of-attack region.

The JAS 39 Gripen also has wake vortex detection and reduction functions implemented. This technique was used in simulations of a light civil airliner encountering wake vortices during landing, to study the effect of the encounter on the airliner and how it could be reduced. The talk closes with a short overview of flight control system development tools and simulators, some other completed projects, and an example of a future flight control system experimental task.

**Optimal experiment design for open and closed-loop identification**

*Time:* 09:30-10:00. *Place:* Visionen.

Professor Michel Gevers, Université Catholique de Louvain, is appointed honorary doctor of Technology at Linköping University in May 2010. He is an internationally renowned researcher in the field of automatic control and systems engineering and has, for example, led high-profile projects to develop models for the regulation of industrial processes.

He has also contributed to technical research at Linköping University as an advisor and through organised research exchanges.

**Optimal experiment design for open and closed-loop identification, cont'**

*Time:* 14:15-14:45. *Place:* Algoritmen.


**The dynamics of confusion and consensus in cooperative multi-agent systems**

*Time:* 13:15-14:00. *Place:* Visionen.

Interest in the dynamics of coordinated and cooperative systems has grown significantly in recent years. Such systems appear in many disciplines: biology (synchronized firefly displays, flocking and schooling behavior), operations management (decentralized dynamic task assignment), environmental science (mobile environmental sensors), aeronautics (autonomous aerial vehicle formations), and computer science (distributed computing consensus) are just a few examples.

The focus of this work is high-precision formations, and the motivating application is the decentralized control of deep-space interferometric imaging formations. I will examine the issues of confusion and consensus when vehicles estimate each other's behavior in the attempt to achieve a common goal, and determine the minimum amount of communication required to "control" confusion. The design of distributed estimators over lossy communication channels will be presented.

**Dirty radio - a smorgasbord with modeling, measurements, pre- and post-distortion**

*Time:* 13:15-14:00. *Place:* Glashuset.

Developing future wireless communication systems requires not only steadily increasing the spectral efficiency and approaching the Shannon bound, but also combating the technological limits set by analog RF front-ends. In this seminar, some issues in modeling, measuring, and digitally compensating radio frequency components such as analog-to-digital converters (ADCs), in-phase/quadrature (IQ) mixers, oscillators, and power amplifiers (PAs) are discussed. The presentation is far from a complete survey of the topic, but rather reflects the speaker's own interests in the area.

**Opportunistic networks meet infrastructure**

*Time:* 09:15-10:00. *Place:* Glashuset.

Opportunistic networks are composed of mobile wireless devices that communicate when within range of each other, i.e. when there is a contact. No end-to-end connectivity is assumed to exist, so data is stored and carried until an opportunity for forwarding appears. However, since user mobility is slow compared to the data transmission speed, such networks experience higher delivery delay compared to cellular networks. On the other hand, the traffic load of cellular infrastructure networks is increasing due to user demand for content in the Internet.

I will present recent experimental results on the performance enhancements when merging infrastructure with an opportunistic network. From a more theoretical perspective, I will show how a worm propagating in an opportunistic network can spread faster if the infrastructure starts sending security alerts.

**My first billion control loops**

*Time:* 10:15-11:00. *Place:* Glashuset.

I will talk about some impressions from my 9 years as an expert at Ericsson where I worked with the launch and optimization of the WCDMA 3G system and the design of the LTE 4G system. I will also briefly mention the research projects I work with at the moment.