This paper presents a new robust adaptive Gaussian mixture sigma-point particle filter, obtained by applying the concept of robust adaptive estimation to the Gaussian mixture sigma-point particle filter. The method approximates the state mean and covariance via the Sigma-point transformation combined with newly available measurement information. It adjusts the estimated state mean and covariance via an equivalent weight function and an adaptive factor, thus restraining the disturbances of singular measurements and kinematic model noise. It also obtains efficient predicted prior and posterior density functions via Gaussian mixture approximation to improve the filtering accuracy for nonlinear and non-Gaussian systems. Simulation results and comparison analysis demonstrate that the proposed method effectively restrains the disturbances of abnormal measurements and kinematic model noise on the state estimate, leading to improved estimation accuracy.

Particle filter, Gaussian mixture sigma-point particle filter, Gaussian mixture approximation, Robust adaptive estimation

The problem of nonlinear filtering has its origins in the areas of tracking and signal processing. However, the underlying setting is extremely general and ubiquitous in many applied areas, such as integrated navigation, information fusion, signal processing, target tracking, fault detection, geodetic positioning and automatic control, where random processes are used to model complex dynamic phenomena. In essence, nonlinear filtering aims to estimate the state of a nonlinear and non-Gaussian stochastic system from measurement data.

A significant amount of research effort has been dedicated to nonlinear filtering. The extended Kalman filter (EKF), unscented Kalman filter (UKF) and particle filter (PF) are the typical nonlinear filtering methods. The EKF is an approximation method, in which the nonlinear system equations are linearized by a Taylor expansion and the linear state is assumed to obey a Gaussian distribution. However, the linearization of the system equations involves error, leading to biased or even divergent filtering solutions. Further, the EKF requires calculation of the Jacobian matrix, which is difficult to implement in practical applications [1,2]. The UKF overcomes these problems by conducting nonlinear parameter estimation based on the unscented transform. It can achieve third-order accuracy, whereas the EKF provides only first-order accuracy. However, the UKF performance is sensitive to system model error. Further, when the nonlinearity of a dynamic system is strong, the estimated state error becomes large or the filter may even diverge.

The PF is an optimal recursive Bayesian filtering method based on Monte Carlo simulation by producing a sample of independent random variables according to a conditional probability distribution [3]. It is easy to implement, suitable for high-dimensional problems, and can deal with nonlinear and non-Gaussian models. However, the phenomenon of particle degeneracy may occur in the approximation process, and the accuracy heavily depends on the selection of the importance sampling density and resampling scheme [4-6]. Various methods were reported focusing on improvement of the importance sampling density and resampling scheme, such as the auxiliary PF, regularized PF and marginalized PF. However, these methods suffer from their own problems. The performance of the auxiliary PF will be degraded if the process noise is large. The particles generated by the regularized PF do not share a common distribution with the practical probability function, leading to the increased variance of state estimation. The marginalized PF is only suitable for the nonlinear system containing the linear state component [7,8].
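The degeneracy and resampling behavior described above can be made concrete with a minimal bootstrap (sampling-importance-resampling) particle filter. The scalar model $x_k = \sin(x_{k-1}) + v_k$, $z_k = x_k^2 + n_k$ anticipates the simulation model used later in the paper; the noise levels, seed, particle count and measurement values below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def bootstrap_pf(z_seq, n_particles=500, q=1.0, r=1.0, seed=0):
    """Minimal SIR particle filter for x_k = sin(x_{k-1}) + v_k, z_k = x_k^2 + n_k."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)                 # initial particle cloud
    estimates = []
    for z in z_seq:
        x = np.sin(x) + rng.normal(0.0, np.sqrt(q), n_particles)   # propagate through f
        w = np.exp(-0.5 * (z - x ** 2) ** 2 / r)                   # measurement likelihood
        w /= w.sum()                                               # normalized importance weights
        estimates.append(float(np.dot(w, x)))                      # posterior-mean estimate
        # systematic resampling to counter the degeneracy discussed above
        u = (rng.random() + np.arange(n_particles)) / n_particles
        idx = np.minimum(np.searchsorted(np.cumsum(w), u), n_particles - 1)
        x = x[idx]
    return np.array(estimates)

est = bootstrap_pf([0.04, 0.3, 0.8])
```

Because the transition prior is used as the proposal, the filter's accuracy hinges entirely on the likelihood step, which is exactly the weakness the importance-density improvements discussed next try to address.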

Based on the Gaussian filter reported by Ito and Xiong [9], Van der Merwe, et al. proposed the Sigma-point particle filter (SPPF) and the Gaussian mixture Sigma-point particle filter (GMSPPF) for nonlinear systems [10,11], where Sigma points are used to improve the importance sampling density. The Sigma-point approach calculates the statistics of a random variable under a nonlinear transform. It makes use of the unscented transform for the importance density and obtains third-order statistics for the estimations of the state mean and variance. It can approximate the posterior mean and covariance of a non-Gaussian and nonlinear system with an accuracy of at least a second-order Taylor series expansion. Therefore, both the SPPF and the GMSPPF perform better than the PF [9-11]. The SPPF requires a Gaussian approximation of each particle to conduct a system update, while the GMSPPF uses a multi-component Gaussian mixture to update the entire particle set, making it superior to the SPPF [12-14]. Further, as it moves particles to high-likelihood areas and subsequently uses a finite Gaussian mixture to obtain an appropriate importance density function, the GMSPPF significantly reduces the computational load compared with the PF and SPPF. However, the GMSPPF adopts the UKF to approximate the predicted prior and posterior density functions. If there are uncertainties involved in the system and measurement noises, the UKF approximation deteriorates, leading to a biased GMSPPF solution.

Robust adaptive estimation utilizes an equivalent weight function and an adaptive factor to control the contributions of the kinematic model and the observations, inhibiting the disturbances of systematic noises and leading to more accurate estimations of the mean and covariance [15,16]. It can be combined with the PF by selecting an appropriate importance density function to handle the disturbances of system error on the state estimation.

This paper presents a new robust adaptive Gaussian mixture sigma-point particle filter (RAGMSPPF) by introducing the equivalent weight matrix and adaptive factor into the UKF to improve the accuracy of the Gaussian approximation and further enhance the GMSPPF performance. The method approximates the state mean and covariance by the Sigma-point transformation combined with the information obtained from newly available measurements. Subsequently, it develops the equivalent weight function and adaptive factor to adjust the estimated state mean and covariance to control the disturbances of singular measurements and kinematic model noise. Based on the above, it obtains the predicted prior and posterior density functions via Gaussian mixture approximation to improve the filtering accuracy for nonlinear and non-Gaussian systems. Simulations and comparison analysis have been conducted to comprehensively evaluate the performance of the proposed filtering method.

Consider the following nonlinear system

${x}_{k} = f({x}_{k-1})+{v}_{k-1}$

${z}_{k} = h({x}_{k})+{n}_{k}\text{ }\left(1\right)$

where ${x}_{k}\in {R}^{n}$ is the state vector at epoch k, ${z}_{k}$ the system measurement at epoch k, ${v}_{k}$ the system state noise, assumed to be a white noise with covariance ${Q}_{k}$, and ${n}_{k}$ the measurement noise, assumed to be a white noise with covariance ${R}_{k}$. Both f(·) and h(·) are nonlinear functions, and k = 0,1,… is the sampling time.

The basic idea of the Gaussian mixture sigma point particle filter (GMSPPF) is to use a finite Gaussian mixture model to construct the posterior density function, which is much closer to the actual density function [9-13]. In the time update, the predicted prior density is expressed as

$p({x}_{k}|{z}_{1:k-1})\approx {p}_{G}({x}_{k}|{z}_{1:k-1})\text{}=\text{}{\displaystyle \sum _{{g}^{\text{'}}=1}^{{G}^{\text{'}}}{\alpha}_{k|k-1}^{{g}^{\text{'}}}N({x}_{k};{\overline{x}}_{k|k-1}^{{g}^{\text{'}}},{P}_{k|k-1}^{{g}^{\text{'}}})}\text{}\left(2\right)$

and in the measurement update, the posterior density is expressed as

$p({x}_{k}|{z}_{1:k})\approx {p}_{G}({x}_{k}|{z}_{1:k})\text{}=\text{}{\displaystyle \sum _{{g}^{\text{'}\text{'}}=1}^{{G}^{\text{'}\text{'}}}{\alpha}_{k}^{{g}^{\text{'}\text{'}}}N({x}_{k};{\overline{x}}_{k}^{{g}^{\text{'}\text{'}}},{P}_{k}^{{g}^{\text{'}\text{'}}})}\text{(3)}$

where $({\overline{x}}_{k|k-1}^{{g}^{\text{'}}},{P}_{k|k-1}^{{g}^{\text{'}}})$ and $({\overline{x}}_{k}^{{g}^{\text{'}\text{'}}},{P}_{k}^{{g}^{\text{'}\text{'}}})$ are calculated by constructing a parallel bank of UKFs [11]. ${\alpha}^{g}$ represents the mixing weight, $N({x}_{k};{\overline{x}}_{k}^{g},{P}_{k}^{g})$ represents the Gaussian density function with mean vector ${\overline{x}}_{k}^{g}$ and positive definite covariance matrix ${P}_{k}^{g}$, and ${G}^{\text{'}} = GI$ and ${G}^{\text{'}\text{'}} = {G}^{\text{'}}J = GIJ$, where G, I and J are the numbers of mixing components in the state, process noise, and measurement noise, respectively.
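As a concrete illustration of the mixture approximations in (2) and (3), the sketch below evaluates a scalar Gaussian mixture density; the component weights, means and variances in the example call are made up for illustration.

```python
import numpy as np

def gmm_density(x, alphas, means, variances):
    """Evaluate a scalar Gaussian mixture sum_g alpha_g N(x; mean_g, var_g), as in (2)-(3)."""
    alphas = np.asarray(alphas, float)
    means = np.asarray(means, float)
    variances = np.asarray(variances, float)
    comps = np.exp(-0.5 * (x - means) ** 2 / variances) / np.sqrt(2.0 * np.pi * variances)
    return float(np.dot(alphas, comps))   # mixing weights are assumed to sum to one

p = gmm_density(0.0, [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0])
```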

It can be seen from (2) and (3) that in the GMSPPF the predicted prior and posterior density parameters $({\overline{x}}_{k|k-1}^{{g}^{\text{'}}},{P}_{k|k-1}^{{g}^{\text{'}}})$ and $({\overline{x}}_{k}^{{g}^{\text{'}\text{'}}},{P}_{k}^{{g}^{\text{'}\text{'}}})$ are approximated by the UKF. However, if there are uncertainties involved in the system and measurement noises, the UKF approximation deteriorates, leading to a biased GMSPPF solution. This paper establishes a new RAGMSPPF by introducing the equivalent weight matrix and adaptive factor into the UKF to improve the accuracy of the Gaussian approximation and further enhance the GMSPPF performance.

The RAGMSPPF algorithm can be described as the following steps:

**Step 1:** Initialization: at k = 0, define the initial prior probability density function as an N-component Gaussian mixture, i.e.

$p({x}_{k}|{z}_{1:k-1})\text{}=\text{}{\displaystyle \sum}_{j=1}^{N}{\alpha}_{k-1}^{j}N({x}_{k}:{\overline{x}}_{k|k-1}^{j},{P}_{k|k-1}^{j})\text{(4)}$

where ${\alpha}_{k-1}^{j}>0$ denotes the j^{th} Gaussian weight, and $\sum _{j=1}^{N}{\alpha}_{k-1}^{j}\text{}=\text{}1$.

**Step 2:** The state mean and covariance $({\overline{x}}_{k}^{j},{P}_{k}^{j})$
are computed by the robust adaptive UKF algorithm.

(i) At epoch k (k ≥ 1), calculate the 2n + 1 Sigma points as

${\chi}_{k-1}^{i}\text{}=\text{}[{\overline{x}}_{k-1}^{j},{\overline{x}}_{k-1}^{j}+\sqrt{(n+\lambda ){P}_{k-1}^{j}},{\overline{x}}_{k-1}^{j}-\sqrt{(n+\lambda ){P}_{k-1}^{j}}]\text{(5)}$

where $\lambda = {a}_{x}^{2}(n+\kappa )-n$ is the scaling parameter, ${a}_{x}$ determines the spread of the Sigma points around ${\overline{x}}_{k}^{j}$ and is usually set to a small positive value, and κ is the secondary scaling parameter, which is usually set to zero.

W_{i} is the weight associated with the i^{th} Sigma point such that $\sum {W}_{i} = 1$, i = 0,1,…,2n. The weights W_{i} (i = 0,1,…,2n) are set as

${W}_{0}^{m}\text{}=\text{}\frac{\lambda}{(n+\lambda )}$

${W}_{0}^{c}\text{}=\text{}\frac{\lambda}{(n+\lambda )}+(1-{a}_{x}^{2}+\beta )\text{}i\text{}=\text{}0\text{}\left(6\right)$

${W}_{i}^{c}\text{}=\text{}{W}_{i}^{m}\text{}=\text{}{W}_{i}\text{}=\text{}1/2(n+\lambda )\text{}i\text{}=\text{}1,2,\cdots ,2n\text{(7)}$

where β is used to incorporate the prior knowledge on the distribution of x in the filtering process. For the Gaussian distribution, β = 2 is optimal.
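A minimal sketch of the Sigma-point construction (5) and the weights (6)-(7), assuming a Cholesky factor as the matrix square root (one common choice); the default parameter values follow the conventions stated above.

```python
import numpy as np

def sigma_points(x_mean, P, alpha_x=1e-3, kappa=0.0, beta=2.0):
    """2n+1 Sigma points per (5) and the associated weights per (6)-(7)."""
    n = len(x_mean)
    lam = alpha_x ** 2 * (n + kappa) - n              # scaling parameter lambda
    S = np.linalg.cholesky((n + lam) * P)             # matrix square root of (n+lam)P
    chi = np.column_stack([x_mean]
                          + [x_mean + S[:, i] for i in range(n)]
                          + [x_mean - S[:, i] for i in range(n)])
    Wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))  # (7)
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)                           # (6), mean weight
    Wc[0] = lam / (n + lam) + (1.0 - alpha_x ** 2 + beta)  # (6), covariance weight
    return chi, Wm, Wc

chi, Wm, Wc = sigma_points(np.zeros(2), np.eye(2))
```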

(ii) Considering the current measurement, calculate the Gaussian approximation function $N({x}_{k}:{\overline{x}}_{k}^{j},{P}_{k}^{j})$, and further propagate the approximated mean, covariance and cross covariance via the Sigma-point transformation as follows

${\chi}_{k|k-1}^{i}\text{}=\text{}f({\chi}_{k-1}^{i})\text{}\left(8\right)$

${\overline{x}}_{k|k-1}^{j}\text{}=\text{}{\displaystyle \sum}_{i=0}^{2n}{W}_{i}^{m}{\chi}_{j,k|k-1}^{i}\text{}\left(9\right)$

${Z}_{k|k-1}^{i}=h({\chi}_{k|k-1}^{i})\text{}\left(10\right)$

${\overline{z}}_{k|k-1}^{j}={\displaystyle \sum}_{i=0}^{2n}{W}_{i}^{m}{Z}_{j,k|k-1}^{i}\text{}\left(11\right)$

${P}_{{z}_{k}{z}_{k}}^{j}={\displaystyle \sum}_{i=0}^{2n}{W}_{i}^{c}[{Z}_{j,k|k-1}^{i}-{\overline{z}}_{k|k-1}^{j}]\cdot {[{Z}_{j,k|k-1}^{i}-{\overline{z}}_{k|k-1}^{j}]}^{T}+{R}_{k}\text{}\left(12\right)$

${P}_{{x}_{k}{z}_{k}}^{j}={\displaystyle \sum _{i=0}^{2n}{W}_{i}^{c}}[{\chi}_{j,k|k-1}^{i}-{\overline{x}}_{k|k-1}^{j}]\cdot {[{Z}_{j,k|k-1}^{i}-{\overline{z}}_{k|k-1}^{j}]}^{T}\text{}\left(13\right)$

${P}_{k|k-1}^{j}={\displaystyle \sum}_{i=0}^{2n}{W}_{i}^{c}[{\chi}_{j,k|k-1}^{i}-{\overline{x}}_{k|k-1}^{j}]\cdot {[{\chi}_{j,k|k-1}^{i}-{\overline{x}}_{k|k-1}^{j}]}^{T}+{Q}_{k}\text{}\left(14\right)$

${K}_{k}^{j}={P}_{{x}_{k}{z}_{k}}^{j}{({P}_{{z}_{k}{z}_{k}}^{j})}^{-1}\text{}\left(15\right)$
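Steps (8)-(15) amount to a standard unscented-transform moment propagation. The compact sketch below adds the measurement noise covariance R_k to the innovation covariance and the process noise covariance Q_k to the predicted state covariance, as in the usual UKF formulation; the identity f and h and the weights in the example call are illustrative only.

```python
import numpy as np

def ut_predict_update(chi, Wm, Wc, f, h, Q, R):
    """Propagate Sigma points through f and h and form the moments in (8)-(15)."""
    cols = chi.shape[1]
    chi_pred = np.column_stack([f(chi[:, i]) for i in range(cols)])     # (8)
    x_pred = chi_pred @ Wm                                              # (9)
    Z_pred = np.column_stack([h(chi_pred[:, i]) for i in range(cols)])  # (10)
    z_pred = Z_pred @ Wm                                                # (11)
    dX = chi_pred - x_pred[:, None]
    dZ = Z_pred - z_pred[:, None]
    P_zz = (Wc * dZ) @ dZ.T + R                                         # (12)
    P_xz = (Wc * dX) @ dZ.T                                             # (13)
    P_pred = (Wc * dX) @ dX.T + Q                                       # (14)
    K = P_xz @ np.linalg.inv(P_zz)                                      # (15)
    return x_pred, z_pred, P_pred, P_zz, P_xz, K

x_pred, z_pred, P_pred, P_zz, P_xz, K = ut_predict_update(
    np.array([[0.0, 1.0, -1.0]]), np.array([0.0, 0.5, 0.5]),
    np.array([0.0, 0.5, 0.5]), lambda x: x, lambda x: x,
    np.array([[0.1]]), np.array([[0.1]]))
```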

(iii) Calculate the equivalent weight matrix $\overline{P}$ and the adaptive factor α [16-18]. The equivalent weight matrix is constructed using the IGG scheme [16] to incorporate the robustness constraint in measurement via a descending weight function. In case of independent variables, the equivalent weight matrix is a diagonal matrix $\overline{P}\text{}=\text{}diag({\overline{P}}_{1},{\overline{P}}_{2},\cdots ,{\overline{P}}_{k})$ with the diagonal element ${\overline{P}}_{k}$ represented as

${\overline{P}}_{k}\text{}=\text{}\left\{\begin{array}{ll}{p}_{k}& \left|{V}_{k}\right|\le {k}_{0}\\ {p}_{k}\dfrac{{k}_{0}}{\left|{V}_{k}\right|}& {k}_{0}<\left|{V}_{k}\right|\le {k}_{1}\\ 0& \left|{V}_{k}\right|>{k}_{1}\end{array}\right.\text{}\left(16\right)$

An alternative expression of ${\overline{P}}_{k}$ is ${\overline{P}}_{k}\text{}=\text{}\left\{\begin{array}{ll}{p}_{k}& \left|{V}_{k}\right|\le {k}_{0}\\ {p}_{k}\dfrac{{k}_{0}}{\left|{V}_{k}\right|}\dfrac{{({k}_{1}-\left|{V}_{k}\right|)}^{2}}{{({k}_{1}-{k}_{0})}^{2}}& {k}_{0}\le \left|{V}_{k}\right|<{k}_{1}\\ 0& \left|{V}_{k}\right|\ge {k}_{1}\end{array}\right.\text{}\left(17\right)$

where p_{k} is the k^{th} main diagonal element of the covariance matrix of the measurement vector z_{k}; ${k}_{0}\in (1,1.5)$
and ${k}_{1}\in (3,8)$; and ${V}_{k}\text{}=\text{}{\overline{z}}_{k/k-1}-{z}_{k}$ is the residual vector of z_{k}.

To balance the contributions of the predicted state and measurement to the state estimation, the adaptive factor γ_{k} is represented as the following piecewise decreasing function

${\gamma}_{k}\text{}=\text{}\left\{\begin{array}{ll}1& \left|\Delta {\tilde{x}}_{k}\right|\le {c}_{0}\\ \dfrac{{c}_{0}}{\left|\Delta {\tilde{x}}_{k}\right|}\dfrac{{({c}_{1}-\left|\Delta {\tilde{x}}_{k}\right|)}^{2}}{{({c}_{1}-{c}_{0})}^{2}}& {c}_{0}\le \left|\Delta {\tilde{x}}_{k}\right|<{c}_{1}\\ 0& \left|\Delta {\tilde{x}}_{k}\right|\ge {c}_{1}\end{array}\right.\text{}\left(18\right)$

where ${c}_{0}\in (1,1.5)$, ${c}_{1}\in (3,8)$ and $\Delta {\tilde{x}}_{k}\text{}=\text{}\frac{\Vert {\overline{x}}_{k}-{\overline{x}}_{k/k-1}\Vert}{\sqrt{tr({P}_{k})}}$, in which tr(·) represents the matrix trace, and ${\overline{x}}_{k}$ and P_{k} denote the state estimation and the corresponding covariance [17,18].
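The two piecewise decreasing functions (16) and (18) translate directly into code. In the sketch below the thresholds in the example calls are illustrative values from the stated ranges, and only a single diagonal element of the equivalent weight matrix is computed.

```python
def igg_weight(p_k, v_abs, k0=1.0, k1=3.0):
    """Three-segment IGG equivalent weight element, as in (16).
    p_k: diagonal element of the measurement covariance; v_abs: |V_k|."""
    if v_abs <= k0:
        return p_k                    # keep full weight for small residuals
    if v_abs <= k1:
        return p_k * k0 / v_abs       # down-weight suspicious residuals
    return 0.0                        # reject gross outliers

def adaptive_factor(dx, c0=1.0, c1=3.0):
    """Piecewise decreasing adaptive factor gamma_k, as in (18).
    dx: the discrepancy statistic |Delta x_k|."""
    if dx <= c0:
        return 1.0
    if dx <= c1:
        return (c0 / dx) * ((c1 - dx) / (c1 - c0)) ** 2
    return 0.0

w_ok, w_mid, w_bad = igg_weight(2.0, 0.5), igg_weight(2.0, 2.0), igg_weight(2.0, 5.0)
g_ok, g_mid, g_bad = adaptive_factor(0.5), adaptive_factor(2.0), adaptive_factor(4.0)
```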

It can be seen from the above that the equivalent weight and the adaptive factor take a similar form, and both are adjustment factors. The former is determined from the residual, while the latter is determined from the difference between the state estimation and the state prediction, i.e., $\Delta {\tilde{x}}_{k}$. Using the equivalent weight matrix and the adaptive factor, the estimate ${\overline{x}}_{k}^{j}$ and covariance ${P}_{k}^{j}$ can be obtained as follows

${\overline{x}}_{k}^{j}\text{}=\text{}{\overline{x}}_{k|k-1}^{j}+{K}_{k}^{j}({z}_{k}-{\overline{z}}_{k|k-1}^{j})\text{}\left(19\right)$

${P}_{k}^{j}\text{}=\text{}{\gamma}_{k}{P}_{{x}_{k}{x}_{k}}^{j}-{K}_{k}^{j}{\overline{P}}^{j}{}_{{z}_{k}{z}_{k}}{K}_{k}^{jT}\text{}\left(20\right)$

${K}_{k}^{j}\text{}=\text{}{P}_{{x}_{k}{z}_{k}}^{j}{({\overline{P}}_{{z}_{k}{z}_{k}}^{j})}^{-1}\text{}\left(21\right)$

where ${\overline{P}}_{{z}_{k}{z}_{k}}^{j}\text{}=\text{}{\overline{P}}^{-1}$

It can be seen from the above equations that ${P}_{k}^{j}$
can be adjusted via γ_{k} and ${\overline{P}}^{j}{}_{{z}_{k}{z}_{k}}$, thus effectively controlling the utilization of kinematic model information and measurement information to restrain the disturbances of singular measurements and kinematic model noise.

**Step 3:** Calculate the posterior probability density function.

$p({x}_{k}|{z}_{1:k})\text{}=\text{}{\displaystyle \sum}_{j=1}^{{N}_{f}}{\alpha}_{k}^{j}N({x}_{k}:{\overline{x}}_{k}^{j},{P}_{k}^{j})\text{}\left(22\right)$

where N_{f} = NN_{n} is the total number of mixing components in the measurement update, and N_{n} is the number of mixing components in the measurement noise. The j^{th} posterior probability density $N({x}_{k}:{\overline{x}}_{k}^{j},{P}_{k}^{j})$
can be obtained from (5)-(21), and the mixing weights are

${\alpha}_{k}^{j}\text{}=\text{}\frac{{\alpha}_{k-1}^{j}{\varsigma}_{k}^{j}}{{\displaystyle \sum _{j=1}^{{N}_{f}}{\alpha}_{k-1}^{j}{\varsigma}_{k}^{j}}}\text{}\left(23\right)$

where ${\varsigma}_{k}^{j}\text{}=\text{}N({z}_{k}:{\overline{z}}_{k|k-1}^{j},{P}_{{z}_{k}{z}_{k}}^{j})$
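The mixing-weight update (23) is a prior-weight-times-predictive-likelihood rule followed by renormalization. A scalar sketch, with made-up component parameters in the example call:

```python
import numpy as np

def update_mixing_weights(alpha_prev, z, z_pred, p_zz):
    """Posterior mixing weights per (23): prior weight times the scalar
    predictive likelihood zeta_k^j = N(z_k; z_pred^j, P_zz^j), renormalized."""
    z_pred = np.asarray(z_pred, float)
    p_zz = np.asarray(p_zz, float)
    zeta = np.exp(-0.5 * (z - z_pred) ** 2 / p_zz) / np.sqrt(2.0 * np.pi * p_zz)
    w = np.asarray(alpha_prev, float) * zeta
    return w / w.sum()   # renormalize so the weights sum to one

alpha = update_mixing_weights([0.5, 0.5], 0.0, [0.0, 3.0], [1.0, 1.0])
```

Components whose predicted measurement agrees with the actual measurement gain weight, which is how the mixture tracks the posterior.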

Equation (22) can be viewed as the importance density function. It can be seen from (20) that ${P}_{k}^{j}$ can be adjusted via ${\gamma}_{k}$ and $\overline{P}$, making the importance density function in (22) closely approximate the practical density function. It can be seen from (16) that when a singularity is involved in the measurement model, the residual vector V_{k} is enlarged, which reduces the components of the equivalent weight matrix $\overline{P}$ and enlarges the predicted measurement covariance ${\overline{P}}_{{z}_{k}{z}_{k}}^{j}$. Subsequently, it follows from (21) that the gain matrix ${K}_{k}^{j}$ is reduced. As a result, as can be seen from (19), the use of measurement information in the estimation is reduced, thus restraining the disturbance of the singular measurement on the state estimation.

Similarly, the adaptive factor γ_{k} is reduced when noise is involved in the kinematic model. Consequently, the utilization of state prediction information in the state parameter estimation is reduced, inhibiting the disturbances of the kinematic model noise. If ${\overline{P}}_{{z}_{k}{z}_{k}}=\text{}{P}_{{z}_{k}{z}_{k}}$ and ${\gamma}_{k} = 1$, then $({\overline{x}}_{k}^{j},{P}_{k}^{j})$ are the mean and covariance matrix obtained by the standard UKF.

**Step 4:** Similar to the calculation of the posterior probability density function, the predicted density function is approximated by

$p({x}_{k+1}|{z}_{1:k})\text{}=\text{}{\displaystyle \sum}_{j=1}^{{N}_{p}}{\alpha}_{k+1}^{j}N({x}_{k+1}:{\overline{x}}_{k\text{+}1}^{j},{P}_{k\text{+}1}^{j})\text{}\left(24\right)$

where N_{p} = N_{f}N_{v} is the total number of mixing components in the time update, and N_{v} is the number of mixing components in the process noise. The j^{th} predictive probability density $N({\overline{x}}_{k\text{+}1|k}^{j},{P}_{k\text{+}1|k}^{j})$
can be obtained from (8), (9) and (14), and the mixing weights are represented by

${\alpha}_{k+1}^{j}\text{}=\text{}\frac{{\alpha}_{k}^{j}{\varsigma}_{k+1}^{m}{\gamma}_{k+1}^{m}}{{\displaystyle \sum}_{j=1}^{{N}_{p}}{\displaystyle \sum}_{m=1}^{{N}_{f}}{\alpha}_{k}^{j}{\varsigma}_{k+1}^{m}{\gamma}_{k+1}^{m}}\text{}\left(25\right)$

where ${\gamma}_{k+1}^{m}\text{}=\text{}N({z}_{k+1}:{\overline{z}}_{k+1|k}^{m},{P}_{k+1|k}^{m})$

**Step 5:** Calculate the particle weights.

${w}_{k}^{j}\text{}=\text{}\frac{p({z}_{k}\left|{x}_{k}^{j}){p}_{G}({x}_{k}^{j}\right|{z}_{1:k-1}^{})}{{p}_{G}({x}_{k}^{j}|{z}_{1:}{}_{k})}\text{}\left(26\right)$

Normalize the weights

${\tilde{w}}_{k}^{j}\text{}=\text{}{w}_{k}^{j}/{\displaystyle \sum}_{j=1}^{N}{w}_{k}^{j}\text{}\left(27\right)$
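A scalar sketch of the importance weighting (26)-(27), assuming the quadratic measurement model z_k = x_k^2 + n_k from the simulation section as a stand-in likelihood; the mixture parameters passed in the example call are illustrative.

```python
import numpy as np

def normal_pdf(x, m, v):
    """Scalar Gaussian density N(x; m, v)."""
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2.0 * np.pi * v)

def importance_weights(particles, z, prior_gmm, proposal_gmm, r):
    """Weights per (26): likelihood * predicted-prior GMM / proposal GMM,
    normalized per (27). Each GMM is a (weights, means, variances) triple."""
    a_p, m_p, v_p = prior_gmm
    a_q, m_q, v_q = proposal_gmm
    w = np.empty(len(particles))
    for j, x in enumerate(particles):
        lik = normal_pdf(z, x ** 2, r)   # p(z_k | x_k) under the assumed model
        prior = sum(a * normal_pdf(x, m, v) for a, m, v in zip(a_p, m_p, v_p))
        prop = sum(a * normal_pdf(x, m, v) for a, m, v in zip(a_q, m_q, v_q))
        w[j] = lik * prior / prop
    return w / w.sum()                   # (27)

w = importance_weights([0.0, 1.0], 0.0,
                       ([1.0], [0.0], [1.0]), ([1.0], [0.0], [1.0]), 1.0)
```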

**Step 6:** The PF resamples particles into a new set of equally weighted particles, whereas the RAGMSPPF approximates the density function in a Gaussian form, that is, it obtains the mean and covariance recursively over time. To avoid a massive number of particles due to resampling and to reduce particle degeneracy and computational complexity, the weighted expectation-maximization technique [11] is applied to fit the set of weighted particles with the posterior N-component Gaussian mixture model, which readily gives

${\widehat{x}}_{k}\text{}=\text{}{\displaystyle \sum}_{j=1}^{N}{\tilde{w}}_{k}^{j}{\overline{x}}_{k}^{j}\text{}\left(28\right)$

${\widehat{P}}_{k}\text{}=\text{}{\displaystyle \sum}_{j=1}^{N}{\tilde{w}}_{k}^{j}[{P}_{k}^{j}+({\overline{x}}_{k}^{j}-{\widehat{x}}_{k}){({\overline{x}}_{k}^{j}-{\widehat{x}}_{k})}^{T}]\text{}\left(29\right)$
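The weighted collapse (28)-(29) can be sketched for vector states as follows; the two-component example values are made up.

```python
import numpy as np

def collapse_mixture(weights, means, covs):
    """Overall state estimate and covariance per (28)-(29).
    weights: (N,), means: (N, n), covs: (N, n, n)."""
    weights = np.asarray(weights, float)
    means = np.asarray(means, float)
    covs = np.asarray(covs, float)
    x_hat = weights @ means                               # (28)
    d = means - x_hat                                     # component-mean spread
    P_hat = (np.einsum('j,jab->ab', weights, covs)        # within-component covariance
             + np.einsum('j,ja,jb->ab', weights, d, d))   # between-component covariance
    return x_hat, P_hat

x_hat, P_hat = collapse_mixture([0.5, 0.5],
                                [[1.0], [-1.0]],
                                [[[0.5]], [[0.5]]])
```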

**Step 7:** Repeat Steps 2 to 6 until all samples are processed.

Simulations were conducted to comprehensively evaluate and analyze the performance of the RAGMSPPF method. The comparison analysis with the PF and GMSPPF methods is also discussed in this section.

### Lognormal probability density

For comparison, simulations were conducted under the same conditions by the proposed RAGMSPPF as well as the PF and GMSPPF, respectively. Consider the state and measurement models [19].

$\dot{x}\text{}=\text{}\mathrm{sin}x+\Gamma (t)$

${z}_{k}\text{}=\text{}{x}_{k}^{2}+{v}_{k}\text{}\left(30\right)$

${x}_{0}=0.1N\left(-0.2,1\right)+0.9N\left(0.2,1\right)$

where $\Gamma (t)\sim N(0,Q)$, ${v}_{k}\sim N(0,R)$, Q = 1, and R = 1. The sampling time was Δt = 0.25 s, and measurements were updated every 0.1 s. The particle number for the PF was set to 500. A 5-3-1 scheme (a 5-component GMM (Gaussian Mixture Model) for the state posterior, a 3-component GMM to approximate the process noise, and a 1-component GMM for the measurement noise) was used in the GMSPPF and RAGMSPPF. The estimation accuracy for the nonlinear system was calculated in terms of the root mean square error (RMSE), which is defined as

$RMSE\text{}=\text{}\sqrt{\frac{1}{N}{\displaystyle \sum}_{i=1}^{N}{({x}_{k}^{i}-{\widehat{x}}_{k}^{i})}^{2}}\text{}\left(31\right)$

When the nonlinear system has a multi-dimensional distribution, the state estimations of the PF, GMSPPF and RAGMSPPF are compared in terms of the lognormal probability density defined by

$L\text{}=\text{}{\displaystyle \sum}_{j=1}^{M}\mathrm{log}{\displaystyle \sum}_{i=1}^{N}{w}_{k}^{i}N\left({x}_{k}{}^{j}|{\widehat{x}}_{k}^{i},{P}_{k}^{i}\right)\text{}\left(32\right)$

where ${\widehat{x}}_{k}^{i}$ and ${P}_{k}^{i}$ are obtained from the above three filters, respectively. It can be seen from (32) that the larger the lognormal probability density is, the more accurate the estimation is.
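Both evaluation metrics (31) and (32) are straightforward to compute; a scalar sketch follows, in which the sample values in the example calls are illustrative.

```python
import numpy as np

def rmse(truth, est):
    """Root mean square error, as in (31)."""
    truth = np.asarray(truth, float)
    est = np.asarray(est, float)
    return float(np.sqrt(np.mean((truth - est) ** 2)))

def log_density_score(samples, weights, means, variances):
    """Lognormal probability density score, as in (32), for scalar mixtures."""
    weights = np.asarray(weights, float)
    means = np.asarray(means, float)
    variances = np.asarray(variances, float)
    total = 0.0
    for x in np.asarray(samples, float):
        comps = (weights * np.exp(-0.5 * (x - means) ** 2 / variances)
                 / np.sqrt(2.0 * np.pi * variances))
        total += np.log(comps.sum())   # larger score means a better fit
    return float(total)

err = rmse([0.0, 0.0], [3.0, 4.0])
score = log_density_score([0.0], [1.0], [0.0], [1.0])
```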

As shown in figure 1 and figure 2, the PF has the largest RMSE and smallest lognormal probability density, while the RAGMSPPF has the smallest RMSE and largest lognormal probability density. The RMSE and lognormal probability density of the GMSPPF are between those of the PF and RAGMSPPF, respectively. Thus, it is evident the performance of the proposed RAGMSPPF is better than those of the PF and GMSPPF.

### SINS/SAR integrated navigation

Simulations were also conducted to comprehensively evaluate and analyze the performance of the proposed RAGMSPPF in terms of SINS/SAR (Strap-down Inertial Navigation System/Synthetic Aperture Radar) integrated navigation.

The navigation coordinate is the E-N-U (East-North-Up) geography coordinate system. The kinematic model of the SINS/SAR integrated navigation system is

$\dot{x}(t)\text{}=\text{}f(x(t))+G(t)w(t)\text{}\left(33\right)$

where x(t) is the system state vector; f(·) is the nonlinear function and G(t) is the noise driven matrix, and their specific forms can be found in [20]; and w(t) = [w_{gx},w_{gy},w_{gz},w_{ax},w_{ay},w_{az}]^{T} is the system noise, whose components denote the white noises of the gyroscope and accelerometer along the x,y,z axes. The system state vector is defined as

$x(t)={[{q}_{0},{q}_{1},{q}_{2},{q}_{3},\delta {v}_{E},\delta {v}_{N},\delta {v}_{U},\delta L,\delta \lambda ,\delta h,{\epsilon}_{bx},{\epsilon}_{by},{\epsilon}_{bz},{\epsilon}_{rx},{\epsilon}_{ry},{\epsilon}_{rz},{\nabla}_{bx},{\nabla}_{by},{\nabla}_{bz}]}^{T}\text{}\left(34\right)$

where $({q}_{0},{q}_{1},{q}_{2},{q}_{3})$ is the attitude error quaternion, $(\delta {v}_{E},\delta {v}_{N},\delta {v}_{U})$ the velocity error, $(\delta L,\delta \lambda ,\delta h)$ the position error, $({\epsilon}_{bx},{\epsilon}_{by},{\epsilon}_{bz})$ the gyro constant drift, $({\epsilon}_{rx},{\epsilon}_{ry},{\epsilon}_{rz})$ the gyro first-order Markov drift, and $({\nabla}_{bx},{\nabla}_{by},{\nabla}_{bz})$ the accelerometer zero bias.

The measurement is defined by subtracting the heading angle and position information of SAR and the altitude of the barometric altimeter from those of SINS. The measurement equation of the SINS/SAR integrated navigation system is represented as

$z(t)\text{}=\text{}h(\text{x}(t))+v(t)\text{}\left(35\right)$

where h(·) is the nonlinear function and its specific form can be found in [20], and v(t) is the measurement noise. The measurement of the SINS/SAR integrated navigation system can be further described as

$z(t)\text{}=\text{}\left[\begin{array}{c}{\varphi}_{I}-{\varphi}_{S}\\ {\lambda}_{I}-{\lambda}_{S}\\ {L}_{I}-{L}_{S}\\ {h}_{I}-{h}_{e}\end{array}\right]\text{}=\text{}\left[\begin{array}{c}(\varphi +\delta {\varphi}_{I})-(\varphi +\delta {\varphi}_{S})\\ (\lambda +\delta {\lambda}_{I})-(\lambda +\delta {\lambda}_{S})\\ (L+\delta {L}_{I})-(L+\delta {L}_{S})\\ (h+\delta {h}_{I})-(h+\delta {h}_{e})\end{array}\right]\text{}=\text{}\left[\begin{array}{c}\delta {\varphi}_{I}\\ \delta {\lambda}_{I}\\ \delta {L}_{I}\\ \delta {h}_{I}\end{array}\right]-\left[\begin{array}{c}\delta {\varphi}_{S}\\ \delta {\lambda}_{S}\\ \delta {L}_{S}\\ \delta {h}_{e}\end{array}\right]\text{}\left(36\right)$

where ${\varphi}_{I}$, λ_{I}, L_{I} and h_{I} represent the heading angle, latitude, longitude and altitude of SINS, ${\varphi}_{S}$, λ_{s} and L_{s} represent the heading angle, latitude and longitude of SAR, and h_{e} is the altitude of the barometric altimeter. δ· denotes the corresponding error, which is assumed as a white noise process.

The flight trajectory of the aircraft was designed to reflect flexible and high-speed flying conditions for the purpose of performance evaluation. As shown in figure 3, different maneuvers such as level constant-velocity motion, climbing, left turns, right turns, and diving are involved in the flight process. The simulation parameters are shown in table 1. The simulation time was 1000 s, and the filtering period was 1 s. The particle number in the PF and the numbers of Gaussian mixture components in the GMSPPF and RAGMSPPF were identical to those in the simulation case described in Section 4.1.

For the comparison analysis, simulation trials were conducted under the same conditions by the PF, GMSPPF and proposed RAGMSPPF, respectively. Further, in order to evaluate the performance of the proposed RAGMSPPF under different noise conditions, the standard deviations of both the system and measurement noises were enlarged to twice their initial values during the time period from 500 s to 1000 s.

Figure 4 illustrates the position error obtained by the PF, GMSPPF and proposed RAGMSPPF, respectively. As shown in figure 4, the errors in longitude, latitude and altitude by the PF are within the ranges of ± 19 m, ± 20 m and ± 22 m during the time period (100 s, 500 s) and within the ranges of ± 31 m, ± 33 m and ± 40 m during the time period (500 s, 1000 s). The GMSPPF improves the PF performance. The resultant errors in longitude, latitude and altitude by the GMSPPF are within the ranges of ± 16 m, ± 17 m and ± 20 m for the time period (100 s, 500 s) and within the ranges of ± 26 m, ± 27 m and ± 37 m for the time period (500 s, 1000 s). In contrast, the errors in longitude, latitude and altitude obtained by the RAGMSPPF are within the ranges of ± 9 m, ± 9 m and ± 10 m for the time period (100 s, 500 s) and within the ranges of ± 13 m, ± 14 m and ± 19 m for the time period (500 s, 1000 s), which are much smaller than those of the PF and GMSPPF. This demonstrates that the proposed RAGMSPPF can achieve much higher accuracy than the PF and GMSPPF.

Table 2 summarizes the mean absolute errors (MAE) and standard deviations (STD) of the PF, GMSPPF and proposed RAGMSPPF in the time periods (100 s, 500 s) and (500 s, 1000 s). It can be seen that the MAE and STD of the RAGMSPPF are much smaller than those of the other two filters. The above analysis demonstrates that the proposed RAGMSPPF outperforms the PF and GMSPPF, leading to improved accuracy for SINS/SAR integrated navigation.

This paper presents a new RAGMSPPF that absorbs the merits of the GMSPPF and robust adaptive estimation. The method approximates the state mean and covariance via the Sigma-point transformation combined with new measurement information. It enables the adjustment of the state mean and covariance via the equivalent weight function and adaptive factor to effectively restrain the disturbances of singular measurements and system noise on the state estimation. It also provides more efficient predicted prior and posterior density functions for the time and measurement updates, making it more suitable for the filtering of a nonlinear and non-Gaussian model. Simulations and comparison analysis demonstrate that the proposed RAGMSPPF outperforms the PF and GMSPPF, leading to improved accuracy for SINS/SAR integrated navigation.

Future research work will focus on improvement of the proposed RAGMSPPF. The proposed RAGMSPPF will be combined with advanced artificial intelligence technologies such as pattern recognition, neural networks and advanced expert systems, thus establishing an intelligent algorithm to automatically deal with the disturbances of various kinds of system errors from different sources.

The work of this paper was supported by the National Natural Science Foundation of China (Project Number: 61174193) and the Specialized Research Fund for the Doctoral Program of Higher Education (Project Number: 20136102110036). It was also supported by the Australian Research Council (ARC) Discovery Early Career Award (DECRA) (DE130100274).