Gesellschaft für Informatik e.V.

Lecture Notes in Informatics


INFORMATIK 2006, Informatik für Menschen, Band 1, Beiträge der 36. Jahrestagung der Gesellschaft für Informatik e.V. (GI), 2. - 6. Oktober 2006 in Dresden P-93, 323-328 (2006).




Editors

Christian Hochberger, Rüdiger Liskowsky (eds.)


Contents

The PMHT: solutions for some of its problems

Monika Wieneke and Wolfgang Koch

Abstract


Tracking multiple targets in a cluttered environment is a challenging task. The Probabilistic Multi-Hypothesis Tracker (PMHT) is an efficient approach to cope with it; linearity in the number of targets and measurements is the main motivation for its continued study. Unfortunately, the PMHT has not yet shown its superiority in terms of a better track-loss statistic, and the problem of track extraction is not satisfactorily solved. This work focuses on mitigating the PMHT's main problems and presents the integration of a sequential likelihood-ratio test for track extraction.

The PMHT works on a sliding data window. For each window position it iteratively applies a Kalman smoother using synthetic measurements. In [WRS02] three properties are held responsible for the PMHT's problems in track maintenance and its limited practical acceptance: Non-Adaptivity, Hospitality and Narcissism. We derive new synthesis weights governed by innovation covariances to make the PMHT adaptive. To avoid hospitality we introduce a spurious measurement representing a missing detection. Finally, we resize the estimation errors of the start iteration to rouse the PMHT from its narcissism.

To introduce our notation we continue with a formal description of tracking. The tracking scenario is defined as follows: a sensor observes S point targets in its field of view (FoV). It generates measurements $Z = z_{1:T} = \{z_t, N_t\}_{t=1}^{T}$ for a period $[1:T]$. The sensor output at a scan t consists not only of the set of measurements $z_t$ but also of the number of measurements $N_t$; thus we model the measured data as a pair $\{z_t, N_t\}$. Measurements $z_t^n \in \mathbb{R}^2$ with $n \in [1:N_t]$ are assumed to be Cartesian position data. The spurious measurement $n = 0$ denotes a missing detection, as already mentioned. The task of tracking consists in estimating the kinematic states $X = x_{1:T}$ of the observed targets. The states $x_t^s \in \mathbb{R}^6$ with $s \in [1:S]$ comprise position, velocity and acceleration.
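The sensor model described above can be illustrated with a minimal simulation sketch. All names and the clutter rate are our own assumptions for illustration; only the dimensions ($x \in \mathbb{R}^6$, $z \in \mathbb{R}^2$), the detection probability and the FoV radius follow the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# State x in R^6: (px, py, vx, vy, ax, ay); measurement z in R^2 is position only.
H = np.hstack([np.eye(2), np.zeros((2, 4))])   # measurement matrix
R = np.diag([50.0**2, 50.0**2])                # measurement error (sigma = 50 m)

def simulate_scan(states, p_d=0.7, n_clutter_mean=2.0, fov_radius=25000.0):
    """Generate one scan {z_t, N_t}: detections of each target plus clutter."""
    zs = []
    for x in states:                             # target-originated measurements
        if rng.random() < p_d:
            zs.append(H @ x + rng.multivariate_normal(np.zeros(2), R))
    for _ in range(rng.poisson(n_clutter_mean)):  # clutter, uniform in the FoV
        r = fov_radius * np.sqrt(rng.random())
        phi = 2 * np.pi * rng.random()
        zs.append(np.array([r * np.cos(phi), r * np.sin(phi)]))
    z_t = np.array(zs) if zs else np.empty((0, 2))
    return z_t, len(zs)                          # the pair {z_t, N_t}
```

Note that $N_t$ is itself informative: it mixes the number of detected targets with the clutter count, which is why the paper treats it as part of the sensor output rather than a nuisance.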
Difficulties arise from unknown associations $A = a_{1:T} = \{a_t\}_{t=1}^{T}$ of measurements to targets. They are modeled as random variables $a_t = \{a_t^n\}_{n=0}^{N_t}$ that map each measurement $n \in [0:N_t]$ to one of the targets $s \in [0:S]$ by assigning $a_t^n = s$. The target $s = 0$ is a spurious planar target representing clutter; it corresponds to the FoV. Mathematically expressed, the optimization problem $\arg\max_X \, p(X \mid Z)$ is to be solved. Expectation-Maximization is an efficient method for this task.

The remainder of this work starts with a brief explanation of Expectation-Maximization, in which we introduce new posterior weights. Section 2 continues with the derivation of the PMHT and presents our ideas to solve its problems. In section 3 we explain the integration of likelihood-ratio testing into the PMHT.

Expectation-Maximization (EM) is an iterative method for localizing posterior modes. At each iteration, EM first calculates posterior weights $p(A \mid Z, X^l)$. They define an optimal lower bound $Q(X; X^l)$ of $p(X \mid Z)$ at the current guess $X^l$. In contrast to the conventional approach we regard the estimation error covariances $P^l$ as intrinsic information of $X^l$. To make this explicit we use extended posterior weights $p(A \mid Z, X^l, P^l)$ in our work (eqn. 1).

$$Q(X; X^l) = \log p(X) + \sum_{A} p(A \mid Z, X^l, P^l) \, \log p(A, Z \mid X) \qquad (1)$$

As $Q(X; X^l)$ is expressed as an expectation, this step is called the E-step. In the following M-step, EM maximizes the bound with respect to the free variable $X$. How this is done depends on the application. The PMHT is the application of EM to the tracking problem. It results in estimates $x_t^s$ for each target $s \in [1:S]$ at each time $t \in [1:T]$. Covariance matrices $P_t^s$ occur as a by-product; we interpret them as estimation error covariances of $x_t^s$.

2 Derivation of the modified PMHT

The Q-function contains all available information: the statistical models of the detection process, the measurement process and the target dynamics.
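The generic E-step/M-step alternation behind eqn. 1 can be made concrete with a deliberately tiny toy problem (our own construction, not from the paper): estimating a scalar target position from measurements mixed with uniform clutter. The E-step computes posterior association weights, the M-step maximizes the bound, which here reduces to a weighted centroid.

```python
import numpy as np

def em_target_mean(z, sigma=1.0, clutter_density=0.05, pi_target=0.5, iters=20):
    """Toy EM: scalar target position under unknown target/clutter association.
    E-step: posterior weights w_n = p(a_n = target | z_n, mu);
    M-step: mu = weighted centroid (the maximizer of the lower bound)."""
    mu = np.mean(z)                                   # initial guess
    for _ in range(iters):
        # E-step: Gaussian target likelihood vs. flat clutter likelihood
        lik_t = pi_target * np.exp(-0.5 * ((z - mu) / sigma) ** 2) \
                / (sigma * np.sqrt(2 * np.pi))
        lik_c = (1 - pi_target) * clutter_density
        w = lik_t / (lik_t + lik_c)
        # M-step: maximize the bound -> weighted mean
        mu = np.sum(w * z) / np.sum(w)
    return mu
```

The PMHT follows exactly this pattern, except that the M-step is a full Kalman smoother over the window and the weights carry the structure of eqn. 2.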
A series of calculations is required to make this information visible. We pass over the derivation of the dynamics and sensor models and proceed with our new posterior weights; interested readers are referred to [SL95].

Adaptive Posterior Weights $w_t^{lns}$

This section addresses the PMHT's problem of Non-Adaptivity. As we model the sensor output as a pair $\{z_t, N_t\}$, we can split it and treat $N_t$ separately. Some simpler calculations followed by Bayes' rule and the product formula A.1 finally yield eqn. 2:

$$p(A \mid Z, X^l, P^l) = \prod_{t=1}^{T} \prod_{n=0}^{N_t} \frac{\pi_t^{ns} \, \mathcal{N}(z_t^n; H x_t^{ls}, S_t^{ls})}{\sum_{s'=0}^{S} \pi_t^{ns'} \, \mathcal{N}(z_t^n; H x_t^{ls'}, S_t^{ls'})} =: \prod_{t=1}^{T} \prod_{n=0}^{N_t} w_t^{lns} \qquad (2)$$

Our weights are controlled by the innovation covariances $S_t^{ls} := H P_t^{ls} H^T + R$ with measurement matrix H and measurement error covariance R. Using these weights the PMHT works adaptively, because it takes the quality $P_t^{ls}$ of the current track estimation into account: if $P_t^{ls}$ blows up soon enough, a track rescue is possible. The weights comprise two kinds of measures that evaluate the relevance of a measurement with respect to a target estimation: a distance measure $\mathcal{N}(z_t^n; H x_t^{ls}, S_t^{ls})$ and a `visibility measure' denoted as $\pi_t^{ns} := p(a_t^n = s \mid N_t)$. For $n > 0$ the latter reflects how likely it is to hit a target, without taking concrete position data into account. The weight $\pi_t^{0s}$ simply is the probability of missing a target. Note that our visibility weights are posteriors depending on $N_t$; the original PMHT uses priors $p(a_t^n = s)$ instead and hence is less flexible. For the calculation of $\pi_t^{ns}$ we apply Bayes' rule; its inputs $p(a_t^n = s)$ and $p(N_t \mid a_t^n = s)$ are easier to handle than $\pi_t^{ns}$ itself. Using binomial coefficients we also come to grips with multiple targets. With increasing $N_t$, the weights $\pi_t^{ns}$ of the real measurements ($n > 0$) converge to a uniform distribution.
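A minimal sketch of these adaptive weights, under simplifying assumptions of our own: the clutter target $s = 0$ and the missing-detection index $n = 0$ are omitted, so normalization runs over the real targets only, and the visibility weights $\pi$ are passed in as a given array.

```python
import numpy as np

def gauss_pdf(z, mean, cov):
    """Multivariate normal density N(z; mean, cov)."""
    d = z - mean
    return np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / \
        np.sqrt((2 * np.pi) ** len(d) * np.linalg.det(cov))

def adaptive_weights(z_t, x_pred, P_pred, H, R, pi):
    """Simplified association weights in the spirit of eqn. 2: each weight
    couples a visibility term pi[n, s] with a distance term whose covariance
    is the innovation covariance S = H P H^T + R, so uncertain tracks
    (large P) gate in more distant measurements."""
    N, S_targets = len(z_t), len(x_pred)
    w = np.zeros((N, S_targets))
    for s in range(S_targets):
        S_innov = H @ P_pred[s] @ H.T + R          # adaptivity enters here
        for n in range(N):
            w[n, s] = pi[n, s] * gauss_pdf(z_t[n], H @ x_pred[s], S_innov)
    return w / w.sum(axis=1, keepdims=True)        # normalize per measurement
```

Because $S_t^{ls}$ grows with $P_t^{ls}$, a poorly estimated track spreads its distance measure and can still claim measurements far from its prediction, which is exactly the track-rescue mechanism described above.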
Maximization of the Q-Function

As the Q-function can be rewritten as a sum over the targets, the maximization problem decomposes into S independent problems: one summand $Q_s(X; X^l)$ per target. Exponentiation and successive application of the product formula A.2 yields relation 3, with evolution matrix F and process noise covariance D. $\bar{z}_t^{ls}$ and $\bar{R}_t^{ls}$ denote the synthetic measurements and their corresponding error covariances, respectively.

$$\exp Q_s(X; X^l) \propto \mathcal{N}(x_0^s; x_{0|0}^s, P_{0|0}^s) \prod_{t=1}^{T} \mathcal{N}(x_t^s; F x_{t-1}^s, D) \, \mathcal{N}(\bar{z}_t^{ls}; H x_t^s, \bar{R}_t^{ls}) \qquad (3)$$

$$\text{with} \quad \bar{z}_t^{ls} = \bar{R}_t^{ls} \sum_{n=0}^{N_t} w_t^{lns} (R_t^n)^{-1} z_t^n \quad \text{and} \quad \bar{R}_t^{ls} = \Big[ \sum_{n=0}^{N_t} w_t^{lns} (R_t^n)^{-1} \Big]^{-1} \qquad (4)$$

At this stage the spurious measurement $n = 0$ makes an impact: obeying the formalism, we have to renormalize the posterior weights with respect to all measurements including the missing detection $n = 0$, exchanging $w_t^{lns}$ for $w_t^{*lns} = w_t^{lns} / \sum_{n'=0}^{N_t} w_t^{ln's}$. Thereby an intermediate result on the way to eqn. 2 enables us to set the weight $w_t^{l0s}$ of the missing detection to $\pi_t^{0s}$. As its error covariance is $R_t^0 = \infty$, the corresponding summands in eqn. 4 vanish. So in a Cartesian system we finally obtain centroid measurements with covariances

$$\bar{R}_t^{ls} = \frac{R}{\sum_{n=1}^{N_t} w_t^{*lns}} \quad \text{with} \quad \sum_{n=1}^{N_t} w_t^{*lns} < 1 \qquad (5)$$

As the sum of weights in eqn. 5 can be greater than 1 in the original PMHT, it suffers from Hospitality: it interprets multiple measurements as one measurement of high accuracy. We have enforced the sum to be less than 1 and hence mitigated the hospitality problem.

Initializing the Iteration Process

The PMHT works on a sliding data window. The initial states of the current window are set to the final estimates of the preceding window ($t_{pre} \in [2:T]$) and the corresponding prediction ($t_{pre} = T + 1$). If the latter states have even a slight tendency to walk off, the estimations of the current window often pursue it. Even a single false alarm near a poor prediction can cause wrong tendencies.
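With a common Cartesian error covariance R for all real measurements, eqn. 4 collapses to the weighted centroid of eqn. 5. A sketch of that computation (function name ours):

```python
import numpy as np

def synthetic_measurement(z_t, w_star, R):
    """Centroid measurement for one target, following eqn. 4/5: with a common
    covariance R for all real measurements n = 1..N_t, the information-weighted
    combination reduces to a weighted centroid.  w_star holds the renormalized
    weights w*_t^{lns} of the real measurements; the missing-detection term
    n = 0 has infinite covariance and has already dropped out of the sums."""
    wsum = w_star.sum()                    # < 1 by construction (eqn. 5)
    R_bar = R / wsum                       # covariance grows as wsum shrinks
    z_bar = (w_star[:, None] * z_t).sum(axis=0) / wsum   # weighted centroid
    return z_bar, R_bar
```

Note the anti-hospitality effect: since the weights sum to less than 1, the synthetic covariance $\bar{R}$ is always inflated relative to R, so several measurements can never masquerade as one super-accurate observation.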
However, for the `narcissistic' PMHT the track is progressing normally: it perfectly fits the estimations to the new data situation, though it should know better from the past. A proper initialization can mitigate the PMHT's Narcissism. Consider a single iteration l before retrodiction: it consists of weight calculation, filtering and prediction (in that order) for each time scan t. We interfere between the weight calculation and the filtering step at each scan of the first iteration ($l = 0$). The goal is to restore certainty about the already estimated states. Especially in the case of missing detections, the ellipse $P_{t|t-1}^{0s}$ of the predicted estimation error comprises too much uncertainty. We propose to use the corresponding final error covariance $P_{t_{pre}+1}^{s}$ of the preceding data window instead. As $P_{t_{pre}+1}^{s}$ is usually smaller, the PMHT is reminded of having already estimated the states $t = 1:T-1$ in the preceding iteration loop. Note that the size does not change for $t = T$.

Experimental Example

We simulated an aircraft with detection probability $P_D = 0.7$, observed by a radar: time interval $\Delta t = 5$ s, clutter density $\rho = 10^{-7.3}/\mathrm{m}^2$, FoV radius 25000 m and measurement error $\sigma = (50\,\mathrm{m}, 50\,\mathrm{m})$. Figure 1 shows our PMHT on the left and the failing PDAF on the right. We chose a window length of 6 time scans and a constant number of 6 iterations. Measurements are marked as blue +, final state estimations as red x; measurements of the real target are bordered. We show the estimation errors at the last two scans ($t = T-1, T$) of the first two iterations ($l = 0, 1$) in each case: error ellipses before filtering in green, after filtering in red. The data situations that make the narcissistic PMHT walk off are marked as `critical'. False alarms are plotted, at all scans, only within a radius of 3000 m around the initial ($l = 0$) estimation of scan $t = T$.

[Figure 1: tracking results of the modified PMHT (left) and the failing PDAF (right)]
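The per-scan filtering step referred to above (weight calculation, then filtering, then prediction) can be sketched as a standard Kalman update consuming the synthetic measurement; the narcissism fix simply substitutes the preceding window's final covariance for P_pred before this update at $l = 0$. Function and variable names are ours.

```python
import numpy as np

def kalman_filter_step(x_pred, P_pred, z_bar, R_bar, H):
    """One filtering step of the PMHT inner loop: update the predicted state
    with the synthetic measurement z_bar and its covariance R_bar.
    At iteration l = 0 the narcissism fix passes the (usually smaller) final
    covariance of the preceding window as P_pred instead of the prediction."""
    S = H @ P_pred @ H.T + R_bar                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x_pred + K @ (z_bar - H @ x_pred)          # filtered state
    P = P_pred - K @ S @ K.T                       # filtered covariance
    return x, P
```

A smaller P_pred yields a smaller gain K, so a single false alarm near a poor prediction pulls the estimate less, which is exactly the intended effect of the reinitialization.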



ISBN 978-3-88579-187-4

