
Time Management

"PHD Life is all about self-motivation .. treat it like a day job. Set strick working hours and study activities, and if you don't complete them in the time alloted then do as you would as a good employee - work overtime" - Duggi Zuram

Thursday, June 14, 2007

Maneuvering Target Tracking Using Cost-Reference Particle Filtering [Bugallo et al. 2004]

Monica F. Bugallo, Shanshan Xu, Joaquín Míguez, Petar M. Djurić.

This paper discusses a new method, CRPF (Cost-Reference Particle Filtering), and compares it to SRPF (Statistical-Reference Particle Filtering). CRPF eliminates all probabilistic assumptions.

The work is compared to SMC methods.
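A rough MATLAB sketch of the cost-based idea (my own notation: quadratic cost with forgetting factor lam; a simplified illustration, not the paper's exact algorithm). Particles carry a cumulative cost instead of a likelihood weight, so no noise pdf is assumed:

N = 200; lam = 0.95;               % number of particles, forgetting factor (assumed)
x = randn(N,1); C = zeros(N,1);    % particle states and cumulative costs
h = @(x) x;                        % observation function (illustrative)
y = 1.0;                           % current observation (illustrative)
x = x + 0.1*randn(N,1);            % propagate with an arbitrary kernel
C = lam*C + (y - h(x)).^2;         % cost update: no probabilistic model needed
w = 1./(C + eps); w = w/sum(w);    % low cost -> high selection probability
c = cumsum(w); idx = zeros(N,1);   % multinomial resampling on the costs
for i = 1:N, idx(i) = find(c >= rand, 1); end
x = x(idx); C = C(idx);
xhat = mean(x);                    % point estimate from the low-cost particles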

Tuesday, June 12, 2007

System Modelling with MATLAB, Basics [Nise, Norman S. 2000]

SYSTEM MODELING

  • Used for analysis and design of feedback control systems
  • Consists of two approaches:

1. Classical or frequency-domain method, which converts a differential-equation system to a transfer function using the Laplace transform or z-transform
o Simplifies the representation of individual subsystems and the modeling of interconnected subsystems
o Disadvantage: limited applicability, to linear, time-invariant systems only
o Advantage: rapidly provides stability and transient-response information, so one can immediately see the effect of varying system parameters until an acceptable design is met
o A few quick calculations or a graphic representation of the data rapidly yields the physical interpretation

2. State-space method, or modern or time-domain approach
o Grew with the increase in space exploration; can also model nonlinear and time-varying differential equations.
o A MIMO (multiple-input multiple-output) system is represented as compactly as a single input-output system
o Can handle systems with non-zero initial conditions
o This time-domain approach can be used to represent systems on a digital computer for digital simulation, where the system response can be obtained for changes in system parameters.
o Numerous state-space software packages are offered for the PC
o Disadvantage: the designer is engaged in several calculations before the physical representation of the model is apparent.

  • LINEAR
  • TIME VARIANT
  • LAPLACE TRANSFORM
  • Use tables and theorems for the transformation from the time domain to the frequency domain
  • Partial fraction expansion can be applied to simplify a complicated function by breaking it down into a sum of simpler terms (see the MATLAB sketch below)
    • F(S) = N(S)/D(S)
    • The order of N(S) must be less than that of D(S); if not, N(S) must first be divided by D(S)
    • Case 1: Roots of the Denominator of F(S) Are Real and Distinct
    • Case 2: Roots of the Denominator of F(S) Are Real and Repeat
    • Case 3: Roots of the Denominator of F(S) Are Complex or Imaginary
  • TRANSFER FUNCTION
  • H(S) = C(S)/R(S), where r = reference input, c = controlled output
  • A differential equation (time domain) is transformed using the Laplace transform (frequency domain), which allows separation of input, system, and output.
  • Algebraically relates a system's output to its input.
  • Allows separation of the input, system, and output into three separate and distinct parts.
  • Algebraically combines mathematical representations of subsystems to yield a total system representation.
  • LINEARIZATION
  • Obtain a linear approximation of a nonlinear system in order to obtain its transfer function
  • STATE SPACE
MATLAB CODE: TRANSFER FUNCTION AND STATE SPACE
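A minimal sketch of the MATLAB commands involved (the numeric system is illustrative, not from Nise; tf, ss, and tf2ss are in the Control System Toolbox):

% Case 1 partial fraction expansion: F(S) = (S+5)/((S+1)(S+2)) = 4/(S+1) - 3/(S+2)
N = [1 5]; D = [1 3 2];
[r, p, k] = residue(N, D)        % residues r, poles p, direct term k
% transfer function <-> state space
G = tf(N, D);                    % transfer function object
[A, B, C, Dm] = tf2ss(N, D);     % equivalent state-space matrices
sys = ss(A, B, C, Dm);           % state-space object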

Monday, June 11, 2007

An Introduction to the Kalman Filter [Greg Welch and Gary Bishop, 2006]

Kalman Filter: an efficient recursive filter that estimates the state of a process from a series of incomplete and noisy measurements, by minimizing the mean squared error.

1. DISCRETE KALMAN FILTER
  • state estimation governed by a linear stochastic difference equation
  • state eqn
    • x_k = A x_{k-1} + B u_{k-1} + w_{k-1}
  • measurement eqn
    • z_k = H x_k + v_k
  • process and measurement noise with probability distributions
    • p(w) ~ N(0,Q) , p(v) ~ N(0,R)
  • a priori and a posteriori estimate errors
    • e_k⁻ = x_k - x̂_k⁻ , e_k = x_k - x̂_k
  • a priori and a posteriori error covariances
    • P_k⁻ = E[e_k⁻ e_k⁻ᵀ] , P_k = E[e_k e_kᵀ]
  1. PREDICT (time update)
    • x̂_k⁻ = A x̂_{k-1} + B u_{k-1}
    • P_k⁻ = A P_{k-1} Aᵀ + Q
  2. CORRECT (measurement update)
    • K_k = P_k⁻ Hᵀ (H P_k⁻ Hᵀ + R)⁻¹
    • x̂_k = x̂_k⁻ + K_k (z_k - H x̂_k⁻)
    • P_k = (I - K_k H) P_k⁻
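A scalar MATLAB sketch of one predict/correct cycle following the equations above (numbers illustrative, in the spirit of Welch and Bishop's random-constant example):

A = 1; B = 0; H = 1;              % random-constant model
Q = 1e-5; R = 0.01;               % process / measurement noise covariances
xhat = 0; P = 1;                  % initial a posteriori estimate and covariance
u = 0;                            % no control input
z = 0.3 + sqrt(R)*randn;          % simulated noisy measurement of the constant 0.3
% predict (time update)
xhatm = A*xhat + B*u;             % a priori state estimate
Pm = A*P*A' + Q;                  % a priori error covariance
% correct (measurement update)
K = Pm*H' / (H*Pm*H' + R);        % Kalman gain
xhat = xhatm + K*(z - H*xhatm);   % a posteriori state estimate
P = (1 - K*H)*Pm;                 % a posteriori error covariance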
2. EXTENDED KALMAN FILTER
  • state estimation governed by a nonlinear stochastic difference equation; the filter linearizes about the current estimate

Sequential Monte Carlo Particle Filtering

Arnaud Doucet, Nando de Freitas, and Neil Gordon

A little background on SMC.

The Monte Carlo method was originally known as the ‘method of statistical sampling’.

ESTIMATION GENERAL CONCEPT

  • Estimating unknown quantities from given observations, i.e., prior knowledge is available.
  • One can formulate a Bayesian model, which is the prior distribution for the unknown quantities and the likelihood function relating these quantities to the observations.
  • Inferences on the unknown quantities are made from the posterior distribution obtained from Bayes’ Theorem.
  • Often observations arrive sequentially in time and one wants to perform inference on-line; it is therefore necessary to update the posterior distribution as data become available.
  • Goal: computational simplicity, i.e., not having to store all the data.
  • Example
    • Data modelled using a linear Gaussian state space: derive the posterior distribution using the Kalman Filter.
    • Data modelled using a partially observed state space Markov chain: obtain an analytical solution using the Hidden Markov Model (HMM) filter.
  • Estimation methods such as the Kalman filter and the Gaussian-sum approximation are based on the normal distribution and fail to cover non-Gaussianity and nonlinearity, while grid-based filters, based on deterministic numerical integration, are too computationally expensive to be used in high dimensions.

SMC SOLUTION

  • Able to handle very complex data, typically involving non-Gaussianity and nonlinearity, conditions which usually preclude analytic solutions.
  • Flexible, easy to implement, parallelizable, and applicable in very general settings.
  • Closely related algorithms: bootstrap filters, condensation, particle filters, Monte Carlo filters, interacting particle approximations, and survival of the fittest.

Thursday, June 7, 2007

An Overview of Recent Developments in Target Tracking in the Active Airborne Sonar Networks Domain

Francis Martineri and Sebastian Brisson, 1993

ABSTRACT
  1. Addresses the target tracking problem from measurements made on passive sonars activated by an active sonar (multistatic networks)
  2. Recalls the principles of classic tracking approaches in the specific context of distributed sensors measuring range, doppler, and azimuth
  3. Includes improvements on the simultaneous tracking of a target and calibration of the sensor field
  4. Introduces an improvement on a 'recently introduced approach in target tracking' and target motion analysis, based on a non-deterministic modelling of the target evolution.
  5. The method, based on Hidden Markov Modelling, leads to multiple and maneuvering target tracking with few hypotheses
CASE INTRODUCTION
  • The increasing difficulty of detection and localisation with a basic sensor, due to the low level of submarine radiated noise, has developed interest in distributed sonar networks.
  • The network consists of an active sonar acting as transmitter/receiver, and N passive sensors acting as receivers
  • Sensors and targets are assumed to be in the same plane
  • Targets are characterised by position and speed in Cartesian coordinates
    • X(t) = (x(t), y(t), vx(t), vy(t))
  • Sensor positions are known: Xs = ((xs1,ys1), ..., (xsN,ysN))
  • False alarms are assumed uniform
  • Measurements are carried out in discrete time: a signal emitted by the active sensor gives rise to a detection on a passive sensor after it has impinged on the targets.
  • The range, doppler, or azimuth measurements corresponding to a detection on a sensor are extracted. (equations given)
CLASSIC APPROACHES IN TRACKING
  • Comes from classic data association and tracking methods and is based on 2 steps
  1. Extraction and Data Association
    • Extracting the measurements corresponding to each single target received from the sensors. Usually extraction is performed by track formation at the sensor level. (Based on the block diagram of track extraction, I presume they use a centralized fusion technique)
    • Track-to-track association, where tracks from different sensors corresponding to the same targets have to be associated.
      • Often performed manually by an operator. However, automatic algorithms based on the Generalized Likelihood Ratio Test have now been developed and validated
  2. Target Parameter Estimation
    • M = M(X(t), Xs) + B, where
      • M is a global vector of measurements corresponding to the same target
      • M(X(t), Xs) is composed of the measurements at the various time instants, computed according to the range, doppler, or azimuth equations.
      • B is a zero-mean Gaussian noise vector of known covariance ∑B.
    • Assuming the target motion model is known (in most cases rectilinear unaccelerated), the target characteristics are then derived from the measurement vector M with the help of maximum likelihood estimators (MLE)
    • The MLE is based on identifying the target characteristics which minimize a least-squares criterion on the measurements, in this case as follows (I read the weights as inverse covariances)
      • J = [M - M(X(t),Xs)]ᵀ ∑B⁻¹ [M - M(X(t),Xs)] + [Xs - Xap]ᵀ ∑ap⁻¹ [Xs - Xap]
      • Xap represents the supposed sensor positions
      • ∑ap represents the covariance of this uncertainty
    • J takes into account the imperfect knowledge of the sonar positions; therefore calibration of the network can be performed simultaneously with the target characteristic estimation, by considering Xs as an unknown and Xap as a parameter.
    • Classic MLEs - the Gauss-Newton (GN) algorithm and the EKF - are extended to this case with few modifications.
    • Difficulties occur for the EKF when the initial parameters have to be set.
    • Results of measuring range and azimuth for both methods are provided.
    • The EKF is able to track a maneuvering target provided the target noise model covariance is well chosen
    • The GN algorithm is significantly degraded due to the fact that it is implicitly non-adaptive
    • Accurate calibration is performed jointly with unbiased target motion estimation, provided at least one sensor position is exactly known and azimuth measurements are made by at least one sensor.
  • The classic approach has limitations on maneuvering targets
THE DIAMANT ALGORITHM
  • DIAMANT (DIscrete Association, Motion ANalysis and Tracking) differs from the classic sonar tracking methods.
  • Uses the Hidden Markov Chain method
Note: No further reading has been made. (seems irrelevant at the moment)

Covariance Matrix

Reference for Covariance and Covariance Matrices from wiki.

Variance: measures how much a single random variable varies
Covariance: measures how much two random variables vary together
Covariance matrix: the covariances in matrix form, for higher-dimensional collections of scalar-valued random variables

Reference for matrices from wiki.
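A quick MATLAB check of these definitions (illustrative data; var and cov are built-in functions):

X = randn(100, 3);          % 100 samples of a 3-dimensional random variable
v = var(X(:,1));            % variance of a single variable
c = cov(X(:,1), X(:,2));    % 2x2 covariance matrix of two variables
S = cov(X);                 % 3x3 covariance matrix: all pairwise covariances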

Wednesday, June 6, 2007

Overview of Problems and Techniques in Target Tracking

Wolfgang Koch

SENSOR SYSTEM
  • The sensing hardware of a sensor or sensor network collects information on time-varying quantities (waveforms). This information is a series of sensor data sets (also called scans or data frames) that are received at discrete instants of time.
  • Via a detector device, the data go through a data-rate reduction process and are forwarded for signal processing.
  • This data set is used to estimate the state of a stochastically driven dynamical system. The signal processing therefore results in estimates of the waveform parameters and produces the final sensor reports (measured quantities), which become the input of the tracking system
TRACKING SYSTEM
  • Tracking results from the data association and estimation algorithms (sensor data processing), which are used to exploit the data from the sensor resources efficiently and to obtain information not directly produced by the sensor reports.
  • i.e., tracks are estimates of state trajectories which statistically represent the quantities or objects considered, along with their temporal history.
  • Sensor reports that can be associated with existing tracks are used for track maintenance
  • Remaining reports not associated with any existing track are used to initiate new tracks, via multiple-frame track extraction (track initiation)
  • Both track initiation and track maintenance require prior knowledge of the sensor performance, object characteristics, and object environment. This prior knowledge is available in the form of statistical modeling assumptions.
  • The plot-to-track unit is important when dealing with multiple-target tracking systems
  • Stored data (in the Track File Storage) are extracted during track processing and used for track termination/confirmation, object classification/identification, and track-to-track fusion (fusion of tracks representing identical information).
  • Results are displayed through a man-machine interface. It also serves an interaction function, where available information on the sensors, objects of interest, and the environment can be specified, updated, or corrected by direct human interaction or by the track processor itself.
CHALLENGE
  • High false-return background; the sensor can't compress the data
  • Ambiguous correlation between new reports and existing tracks becomes a problem for closely-spaced moving objects. With false returns or unwanted objects, the identity of the individual object tracks might get lost
  • Limited sensor resolution capability makes the data association harder, because closely-spaced objects may continuously change from being resolved to unresolved and back again. Besides, sensor returns may have poor quality, low SNR, or fading phenomena. The scan rate may be low in certain applications, e.g., long-range air surveillance.
  • The underlying dynamics models are restricted to one particular sample out of several sets of alternatives. Sudden switches between the underlying dynamics models do occur, and tracks can get lost in such critical situations.
BAYESIAN APPROACH
  • Most mathematical techniques of tracking systems essentially make use of Bayes' Rule
  • A tracking algorithm is an iterative updating scheme for conditional probability densities that describe the object states given both the accumulated sensor data and all available prior information
  • Optimal state estimators may be derived with respect to various risk functions, provided that the filtering (density iteration) has been done correctly.
  • A generalization of standard smoothing algorithms, retrodiction, provides a backwards iteration scheme for calculating the probability densities of the past object states given all information accumulated up to the current scan.
  • Track maintenance and data acquisition are closely related
    • there exists feedback of tracking information to the sensor system
  • The tracking algorithm must be initiated by appropriately chosen prior densities.
SENSOR FUSION ASPECTS
  • Networks of homogeneous or heterogeneous sensors are preferred compared to single sensors
  • Networks' advantages:
    • The total coverage of suitably distributed sensors is much larger
    • Low-cost sensor networks
    • Redundancy of overlapping fields of view increases data rates and allows observation under several aspects
    • Multiple-sited networks provide information that is in principle not available from a corresponding single-sited sensor
    • More robust against failure or destruction of individual components
  • Centralised data fusion: sensor reports are transmitted to a processing centre without significant delay.
    • The issue of single-sited sensors vs. sensor networks is irrelevant.
    • Practical realization is difficult because of limited data links between the sensors and the fusion centre, synchronisation problems, or misalignment errors.
  • A decentralised fusion architecture or a hybrid solution is proposed.
    • The data of the sensor systems are pre-processed at their individual sites
    • The fusion centre receives higher-level information
    • i.e., sensor-individual tracks which are to be fused with other tracks, resulting in a central track.
EXPERIMENTAL EXAMPLE
  • Real radar data from a single-sited radar.
  • Data compared using IMM-MHT retrodiction and MMSE-MHT retrodiction
  • IMM-MHT performs better, especially in very smooth conditions (accurate speed and heading information)
  • Both filtering and retrodiction produce better results when using model histories of length greater than n = 1 (delayed frames for MMSE-MHT)

Wednesday, May 30, 2007

Continuous Distributions

EXPONENTIAL

Used to model the amount of time until a specific event occurs, or the time between independent events. Examples:
    • the time until the computer locks up
    • the time between arrivals of telephone calls
    • the time until a part fails
MATLAB: expcdf(x,1/λ), where the mean is E(X) = 1/λ.
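For example (illustrative numbers), with mean time between calls 1/λ = 2 minutes, assuming the Statistics Toolbox expcdf, which takes the mean as its second argument:

lambda = 0.5;
prob = expcdf(3, 1/lambda)    % P(X <= 3 minutes) = 1 - exp(-1.5) ≈ 0.7769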

GAMMA f(x;λ,t)

When t is a positive integer, the gamma distribution can be used to model the amount of time one has to wait until t events have occurred.

MATLAB: gampdf(x,t,1/λ).

CHI-SQUARE

A gamma distribution where λ = 0.5 and t = v/2, where v is a positive integer, is called a chi-square distribution with v degrees of freedom. The chi-square distribution is used to derive the distribution of the sample variance and is important for goodness-of-fit tests in statistical analysis.

WEIBULL

Closely related to the Exponential.
Applies to problems of reliability and life testing.
Used to model the distribution of the time it takes for objects to fail.

BETA

Can be used to model random variables that take on values over a bounded interval from 0 to 1.

MULTIVARIATE NORMAL

Tuesday, May 29, 2007

Normal Distribution [Miller 1985 and Martinez 2001]

History:
  1. Also known as the Gaussian Distribution.
  2. First studied in the 18th century, when scientists observed an astonishing degree of regularity in errors of measurement.
  3. The error distributions observed were approximated by a distribution called the 'normal curve of errors' (bell shape), produced by the normal distribution equation, which is determined by the expected value and variance of the normal distribution.
Properties:
  1. The PDF approaches zero as x approaches + or - infinity
  2. Centered at the mean μ; the maximum value occurs at x = μ
  3. The PDF of the normal distribution is symmetric about the mean μ
MATLAB Command:
  1. normcdf(x,mu,sigma)
  2. normpdf(x,mu,sigma)
  3. normspec(specs, mu, sigma)
MATLAB Example:
%Set up parameters for normal distb.
mu = 5;
sigma = 2;
%Set up upper and lower limit specs
specs = [2, 8];
prob = normspec(specs, mu, sigma);

Equations:

GAUSSIAN (NORMAL) DISTRIBUTION (PDF), MEAN, AND VARIANCE

f(x) = (1/(σ√(2π))) e^(-(x-μ)²/(2σ²)) , E(X) = μ , V(X) = σ²

STANDARDIZED RAN VAR, STANDARD NORMAL DISTRIBUTION (CDF)

z = (x - μ)/σ

Definition: the Standard Normal Distribution is a normal probability distribution that has a mean of 0 and a standard deviation of 1

GAUSSIAN APPROXIMATION TO THE BINOMIAL DISTRIBUTION

Used to approximate the binomial distribution when n is large and p is close to 0.5 (not small enough to use the Poisson approximation).
Rule of thumb: use the normal approximation to the binomial distribution only when np and n(1-p) are both greater than 5.

Theorem (stated without proof): if X is a value of a random variable having the binomial distribution with parameters n and p, and if Z = (X - np)/√(np(1-p)), then the limiting form of the distribution function of this standardized random variable as n → ∞ is the standard normal distribution.
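A quick MATLAB check of the approximation (illustrative: 100 fair-coin flips, P(X ≤ 55), with a continuity correction of 0.5):

n = 100; p = 0.5;
exact  = binocdf(55, n, p)                        % ≈ 0.8644
approx = normcdf((55.5 - n*p)/sqrt(n*p*(1-p)))    % ≈ 0.8643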

Friday, May 25, 2007

Uniform Distribution [Miller 1985] MATLAB [Martinez 2001]

The uniform distribution for continuous random variables. The random variables are uniformly distributed over the interval (a,b).

f(x; a, b) = 1/(b-a) for a ≤ x ≤ b (0 elsewhere)
F(x) = (x-a)/(b-a) for a ≤ x ≤ b
E(X) = (a+b)/2 , V(X) = (b-a)²/12
EXAMPLE MATLAB
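A minimal sketch using the Statistics Toolbox functions (the interval (a,b) = (2,10) is illustrative):

a = 2; b = 10;
f = unifpdf(5, a, b)     % density = 1/(b-a) = 0.125
p = unifcdf(7, a, b)     % P(X <= 7) = (7-2)/(10-2) = 0.625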

Thursday, May 24, 2007

Poisson Distribution [Miller and Freund] MATLAB [Martinez]

The Poisson distribution is an approximation to the binomial distribution when n → ∞ and p → 0 (small, so that np is moderate).
Derived from the binomial distribution equation by substituting the variable p with λ/n:

f(x; λ) = λ^x e^(-λ) / x! , x = 0, 1, 2, ...

where λ = np.
Expected value E[X] = λ and variance V(X) = λ (replace np = λ, p → 0)

Example:

5% of the books bound at a certain bindery have defective bindings. Find the probability that 2 of 100 books bound by this bindery are defective:

λ = np = 100(0.05) = 5
f(2; 5) = 5^2 e^(-5) / 2! ≈ 0.0842

MATLAB: prob = poisspdf(2,5) = 0.0842

Poisson Process:

Extending the use of the above formula to processes taking place over a continuous interval of time, i.e., events occurring at points in time or space.

To find the probability of x successes during a time interval of length T, we divide the interval T into n equal parts of length t, with the probability of a success p = αt, where α is the average (mean) number of successes per unit time.

Assumptions:
  1. The probability of a success during a very small interval ∆t is given by p = α∆t.
  2. The probability of more than one success during such a small time interval ∆t is negligible.
  3. The probability of a success during such a time interval does not depend on what happened prior to that time.

The formula for the Poisson distribution can be further extended by expanding the parameter λ:
λ = n·p = (T/t)·(αt) = αT

Note: However, most of the time we use the symbol λ to represent α.

Example:

A bank receives on average λ = 6 bad checks per day. What is the probability that it will receive:
a: 4 bad checks on any given day.

    • f(x; λT) = f(4; 6(1)) = 0.135
    • MATLAB: prob = poisspdf(4,6) = 0.1339
    • or prob = poisscdf(4,6) - poisscdf(3,6) = 0.1339

b: at most 10 bad checks on any two consecutive days.

    • f(x; λT) = f(x; 6(2)) = f(0;12) + f(1;12) + ... + f(10;12) = 0.347
    • MATLAB: prob = sum(poisspdf(0:10,12)) = 0.3472
    • or prob = poisscdf(10,12) = 0.3472

Wednesday, May 23, 2007

Binomial Distribution [Miller and Freund] MATLAB [Martinez 2001]

Repeated trials with x successes in n trials, i.e., x successes and n-x failures in n attempts.
Assumptions involved:
  1. Only 2 possible outcomes for each trial: success and failure
  2. The probability of success is the same for each trial
  3. There are n trials, where n is constant
  4. The n trials are independent
Trials satisfying these assumptions are referred to as Bernoulli trials.

f(x; n, p) = (n choose x) p^x (1-p)^(n-x) , for x = 0, 1, 2, ..., n

Expected value (mean) E[X] = np ; Variance V(X) = np(1-p)
Descriptions:

n : trials
x : successes
n - x : failures
p : probability of success
1-p : probability of failure
(n choose x) : the binomial coefficient, the number of combinations of x objects selected from a set of n objects = n! / (x!(n-x)!)

MATLAB EXAMPLE on the binomial distribution using both the probability mass function and the cumulative distribution function.
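A minimal sketch (illustrative: n = 10 trials with success probability p = 0.3):

n = 10; p = 0.3;
pmf = binopdf(4, n, p)    % P(X = 4)  ≈ 0.2001
cdf = binocdf(4, n, p)    % P(X <= 4) ≈ 0.8497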

Tuesday, May 22, 2007

Computational Statistics [Martinez 2001] [Miller] [Mario F. Triola 2000]

2.2 : PROBABILITY

Random Experiment: a process or action whose outcome can't be predicted with certainty and would likely change when the experiment is repeated.

Random variable, X
: the outcome from a random experiment, or a function defined over the elements of the sample space (Miller et al.)
x is the observed value of a random variable X
discrete random variable - can take values from a finite or countably infinite set
continuous random variable - can take values from an interval of real numbers

Sample space, S
: the set of all outcomes from an experiment. Ex: 6-sided die: sample space S = {1, 2, 3, 4, 5, 6}
AXIOM 2: P(S) = 1

Event, E
: a subset of outcomes in the sample space
AXIOM 1: the probability of event E must be between 0 and 1: 0 ≤ P(E) ≤ 1

Mutually exclusive events:
two events that can't occur simultaneously or jointly. Can be extended to any number of events as long as all pairs of events are considered
AXIOM 3: for mutually exclusive events E1, E2, ..., EK
P(E1 U E2 U ... U EK) = ∑ (i=1 to K) P(Ei)

Probability :
Measure of the likelihood that some event will occur

Probability distribution
f(x): describes the probabilities associated with each possible value of the random variable

Cumulative distribution function, cdf,
F(x): the probability that the random variable X assumes a value less than or equal to a given x. F(x) takes values from zero to one.

Probability density function
: the probability distribution for continuous random variables
P(a ≤ X ≤ b) = ∫ from a to b of f(x) dx ; the total area under the curve = 1
associated cdf: F(x) = P(X ≤ x) = ∫ from -inf to x of f(t) dt

Probability mass function:
the probability distribution of discrete random variables
f(xi) = P(X = xi) ; i = 1, 2, ...
associated cdf: F(a) = ∑ over xi ≤ a of f(xi)

Equal likelihood model:
an experiment where each of n outcomes is equally likely,
and each outcome is assigned a probability mass of 1/n

Relative frequency method:
conduct the experiment n times and record the outcomes;
the probability is assigned by P(E) = f/n, where f is the number of times E occurred

2.3 : CONDITIONAL PROBABILITY AND INDEPENDENCE

Conditional probability: event E given event F

P(E|F) = P(E ∩ F)/P(F) ; P(F) > 0
P(E ∩ F) = P(F) P(E|F) or P(E) P(F|E)

Independent events:
P(E ∩ F) = P(E) P(F) = P(F) P(E)
P(E) = P(E ∩ F)/P(F) = P(E|F)
Therefore P(E) = P(E|F) and P(F) = P(F|E)

If extended to K events

P(E1 ∩ E2 ∩ ... ∩ EK) = ∏ (i=1 to K) P(Ei)

Bayes' Theorem: the initial probability is called the prior probability.
New information is used to update the prior probability to obtain the posterior probability.

P(Er|F) = probability of event Er given event F, i.e., that 'effect' F was 'caused' by event Er

P(Er|F) = P(Er ∩ F) / P(F) = P(Er) P(F|Er) / P(F)
P(F) = P(E1 ∩ F) + P(E2 ∩ F) + ... + P(EK ∩ F)
     = P(E1) P(F|E1) + ... + P(EK) P(F|EK)
therefore

P(Er|F) = P(Er) P(F|Er) / [P(E1) P(F|E1) + ... + P(EK) P(F|EK)]

Example:

Tom services 60% of all cars and fails to wash the windshield 1 in 10 times.
George services 15% of all cars and fails to wash the windshield 1 in 10 times.
Jack services 20% of all cars and fails to wash the windshield 1 in 20 times.
Peter services 5% of all cars and fails to wash the windshield 1 in 20 times.
If a customer complains later that her windshield was not washed,
what is the probability that her car was serviced by Jack?

P(Er|F) = P(Er ∩ F)/P(F)
P(Er|F) = P(Er)P(F|Er) / [P(E1)P(F|E1) + P(E2)P(F|E2) + ... + P(E4)P(F|E4)]
P(Er|F) = (0.2)(1/20) / [(0.6)(1/10) + (0.15)(1/10) + (0.2)(1/20) + (0.05)(1/20)]
P(Er|F) = 0.114

Therefore the probability that the windshield was not washed (effect F) because the car was serviced by Jack (event Er) is 0.114. This shows that even though Jack only fails 1 windshield in 20 cars, 11% of windshield failures are his responsibility.


2.4: EXPECTATION

Mathematical expectations have been playing an increasingly important role in scientific decision making, as it is generally considered rational to select whichever alternative has the most promising mathematical expectation.

Mean provides a measure of central tendency of the distribution.
  • Mean of n measurements: arithmetic mean (data treatment)
    • μ = ∑x / n
  • Mean of grouped data (frequency)
    • μ = ∑(f·x) / N
  • Mean of a probability distribution
    • The mean or expected value of a random variable is defined using the pdf or pmf.
    • The expected value is the sum of all possible values of the random variable, each weighted by the probability that X takes on that value, i.e., the probability of obtaining a1, a2, ..., ai is p1, p2, ..., pi
    • μ = E[X] = a1p1 + a2p2 + ... + aipi = ∑ xi f(xi)
Variance is a measure of dispersion in the distribution (how much a single random variable varies). A large variance means that an observed value is likely to be far from the mean μ.
  • Variance of n observations
    • V(X) = ∑(x-μ)² / (n-1) = [n∑x² - (∑x)²] / [n(n-1)]
  • Variance of grouped data (frequency)
    • V(X) = [n∑x²f - (∑xf)²] / [n(n-1)]
  • Variance of a probability distribution
    • The variance is the sum of the squared distances from the mean,
    • each one weighted by the probability that X = x.
    • V(X) = E[(X-μ)²] = ∑(x-μ)² f(x)
    • V(X) = E[X²] - μ² = E[X²] - (E[X])²
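A quick MATLAB check of the probability-distribution formulas above (an illustrative discrete distribution):

x = [1 2 3 4];               % possible values
f = [0.1 0.2 0.3 0.4];       % their probabilities (sum to 1)
mu = sum(x .* f)             % E[X] = 3.0
v  = sum((x - mu).^2 .* f)   % V(X) = E[(X-mu)^2] = 1.0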
2.5: COMMON DISTRIBUTIONS

Discrete distributions: Binomial, Poisson

Continuous distributions: Uniform, Normal, Exponential, Gamma, Chi-square, Weibull, Beta, Multivariate Normal

Thursday, May 17, 2007

Maneuvering Target Tracking by Using Particle Filter [N. Ikoma et al.]

Hope: A brighter light at the end of my reading. Please, ya Allah... the brighter the better.

Simulation Based
  1. The dynamics of the maneuvering target tracking model, in continuous time, is defined according to Singer 1970.
    1. A Gaussian white noise process is used as the input for the system noise vector in the continuous-time model.
    2. After discretization, the Cauchy distribution, as the typical heavy-tailed distribution, is proposed for the system noise, to cater for a maneuvering target with abrupt changes of its acceleration.
    3. A nonlinear observation model is used to represent the radar measurement process.
  2. To estimate the non-Gaussian nonlinear state space model defined above, particle filtering (SMC) is considered the most effective method, compared to Gaussian-sum approximation and numerical representation.
    1. The Monte Carlo Filter (MCF) method is employed among other SMC methods, such as bootstrap and condensation (conditional density propagation).
    2. State space model: uses a subset of the general class of state space representations estimated by MCF; the same as the state space model defined above.
    3. State estimation: calculate the conditional distribution given the observations. 3 steps: prediction, filtering, and smoothing with fixed lag.
    4. MCF uses an approximation of the non-Gaussian distribution by particles to define the state approximation.
    5. Filtering procedure: find the prediction value, then calculate the likelihood of each particle, then resample the particles (see the MATLAB sketch at the end of this entry).
    6. Smoothing: by augmenting the particles, involving the process of integrating past and current time (prediction and filtering) values.
    7. Likelihood: the 'hyperparameter', consisting of the covariance matrices of the observation noise R and the system noise Q, is determined by maximizing the log-likelihood. This 'hyperparameter' governs the performance of the state estimation: if Q > R, the observation values are more reliable and the state variables change quickly, while if R > Q, the state variables evolve smoothly and the observation values can be ignored.
  3. Simulation:
    1. Simulated with the assumption of small observation noise.
    2. The simulation shows that the Cauchy-MCF model can quickly follow a sudden change, while the Gaussian-EKF model has a delayed response.
  4. Future work:
    1. Larger measurement noise
    2. Application to real data
    3. The high SNR case is a real challenge because acceleration is much more sensitive to the noise in position.
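A minimal bootstrap-style sketch of the predict / weight / resample cycle from the filtering procedure above (a generic scalar model of my own, not Ikoma et al.'s radar model; Cauchy system noise as in the entry, Gaussian observation noise):

N = 500;                               % number of particles
x = randn(N,1);                        % particles for the current state
gam = 0.05; R = 0.1;                   % Cauchy scale, obs. noise variance (assumed)
y = 1.2;                               % current measurement (illustrative)
% prediction: propagate with heavy-tailed (Cauchy) system noise
x = x + gam*tan(pi*(rand(N,1) - 0.5)); % Cauchy draw via the inverse cdf
% filtering: weight each particle by its likelihood
w = exp(-(y - x).^2/(2*R)); w = w/sum(w);
% resampling: multinomial, proportional to the weights
c = cumsum(w); idx = zeros(N,1);
for i = 1:N, idx(i) = find(c >= rand, 1); end
x = x(idx);
xhat = mean(x)                         % filtered state estimate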

Wednesday, May 9, 2007

Object Tracking: A Survey [A. Yilmaz et al]

Object Tracking Method

We have the point tracking method, the kernel tracking method, and the silhouette tracking method.

Point Tracking Method

Deterministic: constrains the correspondence problem using qualitative motion heuristics
[Veenman et al. 2001]
  • The combinations of constraints used are as follows:
    • Proximity - constant background
    • Maximum velocity - defines a maximum displacement r, and the circular area formed from the obtained radius r.
    • Small velocity change
    • Common motion - the multiple points that represent the object have similar movement.
    • Rigidity - assumes the object in 3D is rigid, therefore the distances at times t-2, t-1, and t are the same. (confirm again)
    • Proximal uniformity - a combination of proximity and small velocity change.
  • History
    • 1987: Sethi and Jain solve the correspondence using a greedy approach based on the proximity and rigidity constraints. The algorithm considers two consecutive frames and is initialized by the nearest-neighbor criterion. However, this algorithm can't handle occlusions, entries, or exits.
    • 1990: Salari and Sethi improve the algorithm by first establishing correspondence for the detected points and then extending the tracking of the missing objects by adding a number of hypothetical points
    • 1991: Rangarajan and Shah propose a greedy method using the proximal uniformity constraint. Initial correspondences are obtained by computing optical flow in the first two frames. The algorithm can't handle exits and entries. For occlusions, the problem is solved by establishing the correspondence for the detected objects in the current frame; the positions of the remaining objects are predicted based on the constant velocity assumption.
    • 1997: Intille et al. modify Rangarajan and Shah [1991] for matching object centroids. Objects are detected using background subtraction. Exits and entries are handled explicitly by examining the image region, looking for a door to detect exits or entries
    • 2001: Veenman et al. extend both Sethi and Jain [1987] and Rangarajan and Shah [1991] by introducing the common motion constraint for correspondence. The algorithm is initialized by generating the initial tracks using a two-pass algorithm, and the cost function is minimized by the Hungarian assignment algorithm over two consecutive frames. Can handle occlusion and misdetection, but not exits and entries.
    • 2003: Shafique and Shah propose a multiframe approach to preserve the temporal coherency of speed and position. Finds the best unique path for each point. [not very clear]
Statistical methods: take the object measurements and uncertainties into account to establish correspondence.
Use the state space approach to model object properties such as position, velocity, and acceleration. Measurements are obtained by a detection mechanism and usually consist of the object position in the image.

Single Object State Estimation

  • Kalman Filter: used to estimate the state of a linear system with Gaussian-distributed noise. Composed of prediction and correction steps.
    • The prediction step uses the state model to predict the new state of the variables
    • The correction step uses the current observation to update the object's state
    • MATLAB toolbox: KalmanSrc
  • Particle Filter: used for non-Gaussian distributions [Tanizaki 1987]
    • The conditional state density at time t is represented by a set of sampled particles with weights (sampling probability).
    • The weights define the importance of a sample (observation frequency) [Isard and Blake 1998]
    • The common sampling scheme is importance sampling
      • Selection: select N random samples
      • Prediction: generate new samples with zero-mean Gaussian error and a non-negative function
      • Correction: weights corresponding to the new samples are computed using the measurement equation, which can be modeled as a Gaussian density
    • A particle filter-based tracker is initialized either by using the first measurements or by training the system on sample sequences.
    • MATLAB toolbox available at ParticleFlSrc.
Multiobject Data Association and State Estimation

Requires a joint solution of the data association and state estimation problems.
Before applying the Kalman or particle filter, one must deterministically associate the most likely measurement for a particular object with that object's state.
  • Joint Probability Data Association Filter (JPDAF)
  • Multiple Hypothesis Tracking (MHT)
Note: will continue once I have a better understanding