neurotools.stats.hmm module

Routines for discrete time hidden Markov models

neurotools.stats.hmm.poisson_parameter_guess(X, Y, N)[source]

Based on a sequence of inferred states X, estimate the log of the transition matrix A, the prior vector P, and the state probability matrix B.

Parameters:
  • X (ndarray) – 1D integer array of hidden states with values in 0..N-1

  • Y (ndarray) – 1D integer array of observations

  • N (positive integer) – Number of states

Returns:

  • logP (log Prior for state at first observation)

  • logA (log State transition array)

  • logB (log Poisson process rates)

  • params (p1, p01, p10, mu0, mu1) –
      p1  = np.mean(X)
      p01 = np.sum(np.diff(X)== 1)/(1+np.sum(X==0))
      p10 = np.sum(np.diff(X)==-1)/(1+np.sum(X==1))
      mu0 = np.mean(Y[X==0])
      mu1 = np.mean(Y[X==1])
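
A minimal usage sketch (the data are synthetic and the variable names illustrative), assuming a two-state model with Poisson count observations:

import numpy as np
from neurotools.stats.hmm import poisson_parameter_guess

# Synthetic two-state example: low-rate state 0, high-rate state 1
X = np.array([0, 0, 0, 1, 1, 1, 1, 0, 0, 1])        # inferred hidden states
Y = np.random.poisson(np.where(X == 0, 2.0, 10.0))   # observed counts
logP, logA, logB, params = poisson_parameter_guess(X, Y, 2)
p1, p01, p10, mu0, mu1 = params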

neurotools.stats.hmm.poisson_baum_welch(Y, initial=None)[source]

Fit a hidden Markov model using expectation maximization. For now this is limited to two latent states.

Parameters:
  • Y (ndarray) – One-dimensional array of integer count-process observations; values must range from 0 to N.

  • initial (ndarray) – Optional parameter initializing the guess for the hidden states. If none is provided, a two-component Poisson mixture model fit with EM is used. Please note that this procedure can fail when the frequencies of the latent states are asymmetric, so you may want to provide different initial conditions.

neurotools.stats.hmm.viterbi(Y, P, A, B)[source]

See https://en.wikipedia.org/wiki/Viterbi_algorithm

Parameters:
  • Y (1D array) – Observations (integer states)

  • P (array shape = (nStates ,)) – 1D array of priors for initial state

  • A (array (nStates,nStates)) – State transition matrix

  • B (ndarray K x N) – Conditional probability (emission/observation) matrix: probability of each observation given each state

Returns:

One-dimensional array of most likely states

Return type:

np.array
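
A hedged usage sketch with a small two-state, three-symbol model (the matrices are illustrative and mirror the forward_backward example further down this page):

import numpy as np
from neurotools.stats.hmm import viterbi

P = np.array([0.6, 0.4])                  # prior over the initial state
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])                # state transition matrix
B = np.array([[0.5, 0.4, 0.1],
              [0.1, 0.3, 0.6]])           # P(observation | state)
Y = np.array([0, 1, 2, 0, 0])             # observed symbols
X = viterbi(Y, P, A, B)                   # most likely hidden state sequence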

neurotools.stats.hmm.viterbi_log(Y, logP, logA, logB)[source]

See https://en.wikipedia.org/wiki/Viterbi_algorithm

Parameters:
  • Y (1D array) – Observations (integer states)

  • logP (array shape = (nStates ,)) – 1D array of priors for initial state given in log probability

  • logA (array (nStates,nStates)) – State transition matrix given in log probability

  • logB (ndarray K x N) – Conditional probability (emission/observation) matrix: log probability of each observation given each state

Returns:

One-dimensional array of most likely states

Return type:

np.array
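
The same model can be supplied in log space; a minimal sketch, assuming the illustrative matrices from the viterbi example above:

import numpy as np
from neurotools.stats.hmm import viterbi_log

P = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
Y = np.array([0, 1, 2, 0, 0])
X = viterbi_log(Y, np.log(P), np.log(A), np.log(B))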

neurotools.stats.hmm.hasNaN(x)[source]

Faster way to test whether an array contains NaN.

Parameters:

x (np.array)

Returns:

Whether x contains not-a-number.

Return type:

bool

neurotools.stats.hmm.poisson_viterbi_state_infer(Y, initial=None, maxiter=100)[source]

Fit a hidden Markov model using expectation maximization. For now this is limited to two latent states. The Viterbi algorithm is used to assign the single most likely overall trajectory during expectation maximization. This differs from the Baum-Welch algorithm, which uses the forward-backward algorithm to infer distributions over states rather than the single most likely trajectory.

Parameters:
  • Y (ndarray) – One-dimensional array of integer count-process observations; values must range from 0 to N.

  • initial (ndarray) – Optional parameter initializing the guess for the hidden states. If none is provided, a two-component Poisson mixture model fit with EM is used. Please note that this procedure can fail when the frequencies of the latent states are asymmetric, so you may want to provide different initial conditions.

  • maxiter (int, default 100) – Maximum number of iterations when fitting the model

Returns:

  • X (inferred states)

  • params (inferred model parameters)
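
A minimal sketch on synthetic data, assuming two latent states with well-separated Poisson rates (all names here are illustrative):

import numpy as np
from neurotools.stats.hmm import poisson_viterbi_state_infer

# Synthetic counts alternating between a low-rate and a high-rate state
states = np.repeat([0, 1, 0, 1], 250)
Y = np.random.poisson(np.where(states == 0, 1.0, 8.0))
X, params = poisson_viterbi_state_infer(Y, maxiter=100)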

neurotools.stats.hmm.forward_backward(y, x_0, T, B)[source]

Compute distribution of hidden states conditioned on all time points using the forward-backward algorithm.

Parameters:
  • y (ndarray) – An n_timepoints long vector of state observations

  • x_0 (ndarray) – An n_hidden_states long vector of initial state prior

  • T (ndarray) – An n_hidden_states x n_hidden_states transition matrix

  • B – An n_hidden_states x n_observable_states observation (emission) matrix

Returns:

  • iterable f – forward inference results

  • iterable b – backward inference results

  • iterable pr – posterior inference results (forward * backward)

Example

import numpy as np
from neurotools.stats.hmm import forward_backward

# Initialize model
n_states = 2
x_0 = np.array([0.6, 0.4])
T   = np.array([
       [0.69, 0.30],
       [0.40, 0.59]])
B   = np.array([
       [0.5, 0.4, 0.1],
       [0.1, 0.3, 0.6]])
# Initialize example observations
y = np.array([0, 1, 2, 0, 0])
fwd, bwd, posterior = forward_backward(y, x_0, T, B)
print(posterior)
# Verify that it works with a large number of observations
y = np.random.randint(0, n_states, (10000,))
fwd, bwd, posterior = forward_backward(y, x_0, T, B)
print(posterior)
neurotools.stats.hmm.jump(pobabilities)[source]

Markov jump function: pick a state according to probabilities

Parameters:

probabilities (vector of probabilities, must sum to 1.)
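
A minimal sketch (the return value is taken here to be the sampled state index, per the description above):

import numpy as np
from neurotools.stats.hmm import jump

state = jump(np.array([0.2, 0.5, 0.3]))   # picks 0, 1, or 2 with these probabilities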

neurotools.stats.hmm.sample(L, T, B, x0)[source]

x,y = sample(L,T,B,x0)

Sample from a discrete hidden Markov model.

Parameters:
  • L (number of samples to draw)

  • T (state transition matrix)

  • B (observation (emission) matrix)

  • x0 (initial conditions)

Returns:

  • np.array (x) – Latent states sampled

  • np.array (y) – Observed states sampled
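
A usage sketch with the same illustrative two-state model used elsewhere on this page, assuming x0 is an initial state distribution as in forward_backward:

import numpy as np
from neurotools.stats.hmm import sample

T  = np.array([[0.7, 0.3], [0.4, 0.6]])             # hidden-state transitions
B  = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])   # emission matrix
x0 = np.array([0.6, 0.4])                           # initial conditions
x, y = sample(1000, T, B, x0)                       # latent and observed sequences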

neurotools.stats.hmm.log_likelihood(x, y, T, B, x0)[source]

Likelihood of hidden (x) and observed (y) state sequences for a hidden Markov model with hidden-state transition probabilities T, emission probabilities B, and initial state x0.

Returns the log likelihood, which is more numerically stable and avoids overflow.

Parameters:
  • x – hidden state sequence

  • y – observed state sequence

  • T – hidden-state transition probability matrix

  • B – emission probability matrix

  • x0 – initial state
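
A minimal sketch scoring a sampled trajectory under its generating model (the model is illustrative, and x0 is assumed to be the same initial condition accepted by sample):

import numpy as np
from neurotools.stats.hmm import sample, log_likelihood

T  = np.array([[0.7, 0.3], [0.4, 0.6]])
B  = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
x0 = np.array([0.6, 0.4])
x, y = sample(500, T, B, x0)
ll = log_likelihood(x, y, T, B, x0)   # log P(x, y | T, B, x0)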

neurotools.stats.hmm.baum_welch(y, n_hid, convergence=1e-10, eps=0.0001, miniter=10)[source]

That,Bhat,X0hat,f,b,p,llikelihood = baum_welch(y,n_hid,convergence = 1e-6, eps = 1e-4)

Baum-Welch algorithm

Use expectation maximization to find locally optimal parameters for a discrete hidden Markov model.

Parameters:
  • y – 1D array of state observations

  • n_hid – number of hidden states

  • convergence (float, default 1e-10) – Stop when the largest change in transition or emission matrix parameters is less than this value

  • eps (float, default 1e-4) – Small uniform probability added for regularization, to avoid the probability of any state or transition going to zero

Returns:

  • That (estimated transition matrix)

  • Bhat (estimated observation matrix)

  • X0hat (estimated initial state)

  • f (forward filter of observations with estimated model)

  • b (backward filter of observations with estimated model)

  • p (smoothing estimation of latent state using estimated model)

  • llikelihood (likelihood of data given model)
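
A usage sketch following the call signature quoted above, fitting a two-hidden-state model to observations generated with sample (the generating model is illustrative):

import numpy as np
from neurotools.stats.hmm import sample, baum_welch

T  = np.array([[0.7, 0.3], [0.4, 0.6]])
B  = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
x0 = np.array([0.6, 0.4])
x, y = sample(5000, T, B, x0)                 # only the observations y are used for fitting
That, Bhat, X0hat, f, b, p, llikelihood = baum_welch(y, 2)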

neurotools.stats.hmm.forward_abstract(y, x0, T, B)[source]

Abstracted form of forward filtering algorithm

Parameters:
  • y (iterable of observations) – sequence of observations

  • B (y→(P(x)→P(x))) – Observation model conditioning on observations P(x|y(t)); should accept an observation and return a function mapping a prior distribution to a posterior distribution.

  • x0 (P(x)) – initial state estimate (distribution)

  • T.fwd (P(x)→P(x)) – Linear operator for the forward pass; should be a function that accepts and returns a distribution

  • T.bwd (P(x)→P(x)) – Linear operator for the backward pass; should be a function that accepts and returns a distribution

neurotools.stats.hmm.backward_abstract(y, x0, T, B)[source]

Abstracted form of backward filtering algorithm

Parameters:
  • y (iterable of observations) – sequence of observations

  • B (y→(P(x)→P(x))) – Observation model conditioning on observations P(x|y(t)); should accept an observation and return a function mapping a prior distribution to a posterior distribution.

  • x0 (P(x)) – initial state estimate (distribution)

  • T.fwd (P(x)→P(x)) – Linear operator for the forward pass; should be a function that accepts and returns a distribution

  • T.bwd (P(x)→P(x)) – Linear operator for the backward pass; should be a function that accepts and returns a distribution

neurotools.stats.hmm.forward_backward_abstract(y, x0, T, B, prior=1)[source]

Abstracted form of forward-backward algorithm

Parameters:
  • y (iterable) – sequence of observations

  • x0 (P(x)) – Initial condition

  • T.fwd (P(x)→P(x)) – Operator for the forward pass

  • T.bwd (P(x)→P(x)) – Operator for the backward pass

  • B (y→(P(x)→P(x)),) – conditioning on observations P(x|y(t))

  • prior (Optional, P(x)) – Prior to be multiplied with the latent state on every time-step
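
A sketch of one way the abstracted interface might be satisfied for a discrete model, with distributions represented as NumPy vectors. The DiscreteTransition and make_discrete_observation_model helpers below are hypothetical illustrations, not part of the module, and the structure of the return value is not shown:

import numpy as np
from neurotools.stats.hmm import forward_backward_abstract

class DiscreteTransition:
    # Wraps a transition matrix A as the T.fwd / T.bwd operators P(x) -> P(x)
    def __init__(self, A):
        self.A = np.asarray(A)
    def fwd(self, p):
        q = p @ self.A            # propagate the distribution forward one step
        return q / q.sum()
    def bwd(self, p):
        q = p @ self.A.T          # propagate the distribution backward one step
        return q / q.sum()

def make_discrete_observation_model(B):
    # B[i, k] = P(observation k | hidden state i); returns y -> (P(x) -> P(x))
    B = np.asarray(B)
    def conditioner(y):
        def condition(p):
            q = p * B[:, y]       # multiply the prior by the likelihood of observation y
            return q / q.sum()
        return condition
    return conditioner

T  = DiscreteTransition([[0.7, 0.3], [0.4, 0.6]])
Bm = make_discrete_observation_model([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
x0 = np.array([0.6, 0.4])
y  = [0, 1, 2, 0, 0]
result = forward_backward_abstract(y, x0, T, Bm)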

class neurotools.stats.hmm.DiffusionGaussian(d)[source]

Bases: object

Diffusion operator to use in abstracted forward-backward. Operates on Gaussian densities.

fwd(p)[source]

Forward (and backward) operator of a Gaussian diffusion process (Wiener process).

Parameters:

p (Gaussian object)

Returns:

result

Return type:

New Gaussian object reflecting diffusion

bwd(p)

Forward (and backward) operator of a Gaussian diffusion process (Wiener process).

Parameters:

p (Gaussian object)

Returns:

result

Return type:

New Gaussian object reflecting diffusion

class neurotools.stats.hmm.LogGaussianCoxApproximator(a, b, y)[source]

Bases: Gaussian

Approximate Gaussian distribution to use in abstracted forward-backward. Used to condition Gaussian states on Poisson observations. Uses 1D integration (quadrature) to estimate posterior moments. Assumes log λ = a*x+b.

Parameters:
  • a (log-rate gain parameter)

  • b (log-rate bias parameter)

  • y (The observed count (non-negative integer))

class neurotools.stats.hmm.LogGaussianCoxModel(a, b)[source]

Bases: object

Poisson observation model. Returns a density that, when multiplied by a Gaussian, returns a numeric approximation of the posterior as a Gaussian. See: LogGaussianCoxApproximator

class neurotools.stats.hmm.GaussianCoxApproximator(a, b, y)[source]

Bases: Gaussian

One-dimensional Gaussian Cox-process observation model

Parameters:
  • a (rate gain parameter)

  • b (rate bias parameter)

  • y (The observed count (non-negative integer))

class neurotools.stats.hmm.GaussianCoxModel(a, b)[source]

Bases: object

One-dimensional Gaussian Cox-process observation model. See: GaussianCoxApproximator

class neurotools.stats.hmm.ChisquareCoxApproximator(a, b, y)[source]

Bases: Gaussian

One-dimensional Chi-squared Cox-process observation model

rate = (a·x + b)² = a²x² + 2abx + b²

Parameters:
  • a (rate gain parameter)

  • b (rate bias parameter)

  • y (The observed count (non-negative integer))

class neurotools.stats.hmm.ChisquareCoxModel(a, b)[source]

Bases: object

One-dimensional Chi-squared Cox-process observation model. See: ChisquareCoxApproximator

class neurotools.stats.hmm.BernoulliObservationApproximator(a, b, y)[source]

Bases: Gaussian

Approximate Gaussian distribution to use in abstracted forward-backward. Used to condition Gaussian states on Bernoulli observations.

class neurotools.stats.hmm.BernoulliObservationModel(a, b)[source]

Bases: object

Bernoulli observation model. Returns a density that, when multiplied by a Gaussian, returns a numeric approximation of the posterior as a Gaussian. See: BernoulliObservationApproximator

class neurotools.stats.hmm.TruncatedLogGaussianCoxApproximator(a, b, y)[source]

Bases: Gaussian

Approximate Gaussian distribution to use in abstracted forward-backward. Used to condition Gaussian states on Poisson observations.

class neurotools.stats.hmm.TruncatedLogGaussianCoxModel(a, b)[source]

Bases: object

Poisson observation model. Returns a density that, when multiplied by a Gaussian, returns a numeric approximation of the posterior as a Gaussian. See: LogGaussianCoxApproximator

class neurotools.stats.hmm.MVGaussian(M, T, TM=None)[source]

Bases: object

Multivariate Gaussian model to use in abstracted forward-backward

class neurotools.stats.hmm.MVGUpdate(A, sigma)[source]

Bases: object

A: linear system transition matrix; means evolve as X = AX.
sigma: linear system covariance transition; it is a matrix.

fwd(o)[source]
bwd(o)[source]
neurotools.stats.hmm.lgcp_observation_minimizer(y, px, B)[source]
class neurotools.stats.hmm.MVLogGaussianCox(B)[source]

Bases: object

y: binary observations
B: projection vector from the multivariate system to the log-rate

Uses Laplace approximation for covariance

class MVPoissonApproximator(B, y)[source]

Bases: object

class neurotools.stats.hmm.OUGaussian(var, tau, dt, regularize)[source]

Bases: object

fwd(p)[source]
bwd(p)[source]
class neurotools.stats.hmm.MVGOUUpdate(A, mean, sigma, regularize)[source]

Bases: object

A: linear system transition matrix; means evolve as X = AX.
sigma: linear system covariance transition; it is a matrix.

fwd(o)[source]
bwd(o)[source]
class neurotools.stats.hmm.PoissonObservationApproximator(a, b, y)[source]

Bases: Gaussian

Approximate Gaussian distribution to use in abstracted forward-backward. Used to condition Gaussian states on Poisson observations. Uses 1D integration (quadrature) to estimate posterior moments. Assumes log λ = a*x+b.

Parameters:
  • a (log-rate gain parameter)

  • b (log-rate bias parameter)

  • y (The observed count (non-negative integer))

class neurotools.stats.hmm.PoissonObservationModel(a, b)[source]

Bases: object

Poisson observation model. Returns a density that, when multiplied by a Gaussian, returns a numeric approximation of the posterior as a Gaussian. See: PoissonObservationApproximator

class neurotools.stats.hmm.TruncatedPoissonObservationApproximator(a, b, y)[source]

Bases: Gaussian

Approximate Gaussian distribution to use in abstracted forward-backward. Used to condition Gaussian states on Poisson observations.

class neurotools.stats.hmm.TruncatedPoissonObservationModel(a, b)[source]

Bases: object

Poisson observation model. Returns a density that, when multiplied by a Gaussian, returns a numeric approximation of the posterior as a Gaussian. See: PoissonObservationApproximator