neurotools.stats.gaussian module

Routines for working with scalar and multivariate Gaussian distributions in filtering applications

neurotools.stats.gaussian.gaussian_quadrature(p, domain, eps=1e-12)[source]

Numeric integration of probability density p over domain. Values in p smaller than eps (default 1e-12) are set to eps.

Treats p as a density and estimates its mean and precision over the domain.

Parameters:
  • p – 1D iterable of probabilities

  • domain – 1D iterable of domain points corresponding to p

  • eps (positive float; default 1e-12) – Minimum probability value permitted

Returns:

Gaussian object with mean and precision matching the estimated mean and precision of the distribution specified by (p, domain).

Return type:

Gaussian
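For intuition, here is a minimal NumPy sketch of the moment-matching this routine performs; the exact clipping and normalization inside gaussian_quadrature may differ, and the grid and density below are made up for illustration.

    import numpy as np

    domain = np.linspace(-5, 5, 1001)        # integration grid
    p = np.exp(-0.5*(domain - 1.0)**2)       # an unnormalized density peaked at 1

    eps = 1e-12
    p = np.maximum(p, eps)                   # floor tiny values, as documented
    p = p / p.sum()                          # normalize to a discrete density

    mean = np.sum(domain * p)                # first moment
    var  = np.sum((domain - mean)**2 * p)    # second central moment
    tau  = 1.0/var                           # precision = reciprocal of variance
    # gaussian_quadrature(p, domain) should return Gaussian(mean, tau)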

neurotools.stats.gaussian.gaussian_quadrature_logarithmic(logp, domain)[source]

Numeric integration of log-probability. Not yet implemented.

Parameters:
  • logp – 1D iterable of log-probabilities

  • domain – 1D iterable of domain points corresponding to p

Return type:

Gaussian

class neurotools.stats.gaussian.Gaussian(m, t)[source]

Bases: object

Scalar Gaussian model for use in abstracted forward-backward filtering. Supports multiplication of Gaussians.

m

mean

Type:

float

t

precision (reciprocal of variance)

Type:

float

logpdf(x)[source]

The log-PDF of a univariate Gaussian

Parameters:

x (float or np.array) – Point or array of points at which to evaluate the log-PDF

Returns:

log-PDF value at locations specified by x

Return type:

np.array
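As a sketch of how the (m, t) parameterization composes: the product of two scalar Gaussian densities is again Gaussian, with precisions adding and a precision-weighted mean. How the class exposes this (operator or method) is not shown above, so treat the combination rule below as illustrative.

    a = Gaussian(0.0, 1.0)          # mean 0, precision 1
    b = Gaussian(2.0, 4.0)          # mean 2, precision 4

    # Product of the two densities, up to normalization:
    t = a.t + b.t                   # precisions add
    m = (a.m*a.t + b.m*b.t) / t     # precision-weighted mean
    product = Gaussian(m, t)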

neurotools.stats.gaussian.MVG_check(M, C, eps=1e-06)[source]

Checks that a mean and covariance (or precision) represent a valid multivariate Gaussian distribution. The mean must be finite and real-valued. The covariance (or precision) matrix must be symmetric positive definite.

Parameters:
  • M (1D np.array) – Mean vector

  • C (2D np.array) – Covariance matrix

  • eps (positive float; default 1e-6)
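The conditions above can be written out directly; this is a rough equivalent, and MVG_check's exact tolerance handling may differ (is_valid_mvg is a hypothetical name).

    import numpy as np

    def is_valid_mvg(M, C, eps=1e-6):
        M, C = np.asarray(M), np.asarray(C)
        if np.iscomplexobj(M) or not np.all(np.isfinite(M)):
            return False                           # mean must be real and finite
        if not np.allclose(C, C.T, atol=eps):
            return False                           # matrix must be symmetric
        return bool(np.all(np.linalg.eigvalsh(C) > 0))  # and positive definite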

neurotools.stats.gaussian.MVG_logPDF(X, M, P=None, C=None)[source]

Log-PDF of a multivariate Gaussian with mean M and either precision P or covariance C.

N: dimension of distribution
K: number of samples

Parameters:
  • X – KxN array of samples at which to evaluate the log-PDF

  • M – length-N mean vector

  • P – NxN precision matrix (provide either P or C)

  • C – NxN covariance matrix

Example

    import numpy as np
    from numpy.random import randn
    N = 10
    K = 100
    M = randn(N)
    Q = randn(N,N)
    C = Q.dot(Q.T)
    X = randn(K,N)
    # The two parameterizations agree; this difference is ~0
    MVG_logPDF(X, M, C=C) - MVG_logPDF(X, M, P=np.linalg.pinv(C))
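For reference, the quantity computed is the standard multivariate Gaussian log-density, which in precision form (P = inv(C)) reads:

    log p(x) = -0.5 [ N log(2 pi) - log det(P) + (x - M)' P (x - M) ]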

neurotools.stats.gaussian.MVG_PDF(X, M, P=None, C=None)[source]
Parameters:
  • X – KxN array of samples at which to evaluate the PDF

  • M – length-N mean vector

  • P – NxN precision matrix (provide either P or C)

  • C – NxN covariance matrix

neurotools.stats.gaussian.MVG_sample(M, P=None, C=None, N=1, safe=1)[source]

Sample from a multivariate Gaussian.

Parameters:
  • M – mean vector

  • P – NxN precision matrix (provide either P or C)

  • C – NxN covariance matrix

  • N – number of samples to draw (default 1)
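A minimal sketch of the covariance-based sampling path via a Cholesky factor; MVG_sample may use a different factorization, and the precision-based path is not shown (sample_mvg is a hypothetical name).

    import numpy as np

    def sample_mvg(M, C, N=1):
        M = np.asarray(M)
        L = np.linalg.cholesky(C)          # C = L L'
        Z = np.random.randn(len(M), N)     # independent standard normal draws
        return M[:, None] + L @ Z          # correlate, then shift by the mean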

neurotools.stats.gaussian.MVG_multiply(M1, P1, M2, P2, safe=1)[source]

Multiply two multivariate Gaussians based on precision
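In precision form the product has a simple closed form: precisions add, and information vectors (P·M) add. A sketch under that assumption (multiply_mvg is a hypothetical name):

    import numpy as np

    def multiply_mvg(M1, P1, M2, P2):
        P = P1 + P2                                  # precisions add
        M = np.linalg.solve(P, P1 @ M1 + P2 @ M2)    # information vectors add
        return M, P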

neurotools.stats.gaussian.MVG_multiply_C(M1, C1, M2, C2, safe=1)[source]

Multiply two multivariate Gaussians based on covariance. Not implemented.

neurotools.stats.gaussian.MVG_divide(M1, P1, M2, P2, eps=1e-06, handle_negative='repair', verbose=0)[source]

Divide two multivariate Gaussians based on precision

Parameters:

handle_negative –
  • ‘repair’ (default): returns a nearby distribution with positive variance

  • ‘ignore’: can return a distribution with negative variance

  • ‘error’: throws a ValueError if negative variances are produced
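Division is the mirror image of multiplication: precisions subtract, which is why the result can fail to be positive definite and why the handle_negative options above exist. A sketch that ignores the repair logic (divide_mvg is a hypothetical name):

    import numpy as np

    def divide_mvg(M1, P1, M2, P2):
        P = P1 - P2                                  # may be indefinite
        M = np.linalg.solve(P, P1 @ M1 - P2 @ M2)
        return M, P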

neurotools.stats.gaussian.MVG_projection(M, C, A)[source]

Compute a new multivariate Gaussian reflecting the distribution of a projection A.

Parameters:
  • M – length N vector of the mean

  • C – NxN covariance matrix

  • A – KxN projection of the vector space (should be unitary?)
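The standard linear-projection rule for Gaussians: for y = A x, the mean and covariance transform as below. Whether MVG_projection adds handling for non-unitary A is not shown here (project_mvg is a hypothetical name).

    import numpy as np

    def project_mvg(M, C, A):
        return A @ M, A @ C @ A.T    # mean and covariance of y = A x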

neurotools.stats.gaussian.MVG_entropy(M, P=None, C=None)[source]

Differential entropy of a multivariate Gaussian distribution.

Parameters:
  • M – N mean vector

  • P – NxN precision matrix (provide either P or C)

  • C – NxN covariance matrix
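The closed form for Gaussian differential entropy (in nats) given covariance C is standard; a sketch (mvg_entropy is a hypothetical name):

    import numpy as np

    def mvg_entropy(C):
        N = C.shape[0]
        _, logdet = np.linalg.slogdet(C)               # stable log-determinant
        return 0.5*(N*np.log(2*np.pi*np.e) + logdet)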

neurotools.stats.gaussian.MVG_DKL(M0, P0, M1, P1)[source]

KL divergence between two Gaussians

Example

    from numpy.random import randn
    N = 10
    M = randn(N)
    Q = randn(N,N)
    P = Q.dot(Q.T)
    # KL divergence of a distribution from itself is 0
    MVG_DKL(M, P, M, P)
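For reference, the standard closed form, written with precisions to match the signature above:

    DKL(N0 || N1) = 0.5 [ tr(P1 inv(P0)) + (M1 - M0)' P1 (M1 - M0)
                          - N + log det(P0) - log det(P1) ]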

neurotools.stats.gaussian.MVG_DKL_CP(M0, C0, M1, P1)[source]

KL divergence between two Gaussians. The first is specified by its covariance, the second by its precision.

Example

    import numpy as np
    from numpy.random import randn
    N = 10
    M = randn(N)
    Q = randn(N,N)
    P = Q.dot(Q.T)
    C = np.linalg.pinv(P)
    # Same distribution in both parameterizations, so the divergence is ~0
    MVG_DKL_CP(M, C, M, P)

neurotools.stats.gaussian.MVG_conditional(M0, P0, M1, P1)[source]

If (M0, P0) is a multivariate Gaussian and (M1, P1) is a conditional multivariate Gaussian, this function returns the joint density.

NOT IMPLEMENTED

neurotools.stats.gaussian.MVG_kalman(M, C, A, Q)[source]

Performs a Kalman update with linear transform A and covariance Q. Returns the posterior mean and covariance.
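A sketch of the covariance-form step this describes, matching the C2 = A C A' + Q identity in the derivation further below (kalman_step is a hypothetical name):

    import numpy as np

    def kalman_step(M, C, A, Q):
        return A @ M, A @ C @ A.T + Q    # propagate mean and covariance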

neurotools.stats.gaussian.MVG_kalman_P_inverseA(M, P, A, invA, Q)[source]

Performs a Kalman update with linear transform A and covariance Q. Returns the posterior mean and precision.

This variant works in the precision representation and requires the inverse of the forward state transition matrix.

Example

    C2 = A C A' + Q
    P2 = inv[ A inv(P) A' + Q ]
    P2 = inv[ A (inv(P) + inv(A) Q inv(A')) A' ]
    P2 = inv[ A inv(P) (I + P inv(A) Q inv(A')) A' ]
    P2 = inv(A') inv(I + P inv(A) Q inv(A')) P inv(A)

neurotools.stats.gaussian.MVG_kalman_joint(M, C, A, Q, safe=0)[source]

Performs a Kalman update with linear transform A and covariance Q. Keeps track of the joint distribution between the prior and posterior.
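For intuition, the joint covariance of the prior x and the propagated x' = A x + w, with w ~ N(0, Q) independent of x, has the block form below; this is presumably the joint distribution being tracked (kalman_joint_cov is a hypothetical name).

    import numpy as np

    def kalman_joint_cov(C, A, Q):
        # Cov([x; x']) for x' = A x + w, w ~ N(0, Q)
        return np.block([[C,     C @ A.T        ],
                         [A @ C, A @ C @ A.T + Q]])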

neurotools.stats.gaussian.MVG_kalman_joint_P(M, P, A, Q=None, W=None, safe=0)[source]

Performs a Kalman update with linear transform A and covariance Q. Keeps track of the joint distribution between the prior and posterior. Accepts and returns the precision matrix. Q must be invertible.