neurotools.linalg.matrix module

Collecting matrix-related subroutines

neurotools.linalg.matrix.triu_elements(M, k=0)[source]

Similar to Matlab’s “diag” function, but for upper-triangular matrices

Pack N*(N+1)/2 elements into the upper triangle of an NxN matrix, or return the N*(N+1)/2 elements of the upper triangle of an NxN matrix as a vector (element counts are for the default k=0)

>>> triu_elements(randn(D*(D+1)//2))
Parameters:
  • M (np.array) – Matrix from which to extract triangular elements, or list of elements to pack into a triangular matrix

  • k (int) – Diagonal offset; forwarded to np.triu_indices if the input is a matrix
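
A minimal usage sketch based on the description above (illustrative only; assumes numpy is imported and that packing and extraction share the ordering of np.triu_indices):

import numpy as np
D = 4
v = np.random.randn(D*(D+1)//2)   # one value per upper-triangular entry (k=0)
U = triu_elements(v)              # vector in: packed into the upper triangle of a DxD matrix
w = triu_elements(U)              # matrix in: upper-triangular entries returned as a vector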

neurotools.linalg.matrix.tril_elements(M, k=0)[source]

Somewhat like Matlab’s “diag” function, but for lower-triangular matrices

Pack N*(N+1)/2 elements into the lower triangle of an NxN matrix, or return the N*(N+1)/2 elements of the lower triangle of an NxN matrix as a vector (element counts are for the default k=0)

>>> tril_elements(randn(D*(D+1)//2))

neurotools.linalg.matrix.column(x)[source]

Ensure that x is a column vector. If x is multidimensional, x.ravel() will be called first.

neurotools.linalg.matrix.row(x)[source]

Ensure that x is a row vector
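
Illustrative sketch of the intended shapes (an assumption based on the descriptions above, not the exact implementation):

import numpy as np
x = np.random.randn(2, 3)
c = column(x)          # expected shape (6, 1); x.ravel() is applied first
r = row(x.ravel())     # expected shape (1, 6)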

neurotools.linalg.matrix.rcond(x)[source]

Reciprocal condition number

neurotools.linalg.matrix.check_finite_real(M)[source]

Check that all entries in array M are finite and real-valued

neurotools.linalg.matrix.check_covmat(C, N=None, eps=1e-06)[source]

Verify that matrix C is an NxN positive definite matrix.

Parameters:
  • C (object) – Object we expect to be a square positive definite matrix.

  • N (int; default is None) – Expected size of array. If None, defaults to C.shape[0]

  • eps (positive float; default 1e-6) – Maximum allowed distance between C and its transpose.

Returns:

C – Reconditioned covariance matrix.

Return type:

NxN np.float64

neurotools.linalg.matrix.check_covmat_fast(C, N=None, eps=1e-06)[source]

Verify that matrix C is an NxN precision or covariance matrix.

Parameters:
  • C (object) – Object we expect to be a square positive definite matrix.

  • N (int; default is None) – Expected size of array. If None, defaults to C.shape[0]

  • eps (positive float; default 1e-6) – Maximum allowed distance between C and its transpose.

neurotools.linalg.matrix.real_eig(M, eps=1e-09)[source]

This code expects a real Hermitian matrix and should raise a ValueError if it is not. This is redundant with scipy’s eigh function (use that one instead!)

Returns:

  • w (eigenvalues)

  • v (eigenvectors)
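
Since the docstring points to scipy’s routine as the preferred alternative, the equivalent call is shown below (the sort order of the eigenvalues may differ from real_eig):

import numpy as np
from scipy.linalg import eigh
M = np.random.randn(5, 5)
M = 0.5*(M + M.T)      # symmetrize so the matrix is real Hermitian
w, v = eigh(M)         # eigenvalues (ascending) and orthonormal eigenvectors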

neurotools.linalg.matrix.psd_eig(M, eps=1e-09)[source]

This code expects a real Hermitian matrix and should raise a ValueError if it is not. This is probably redundant with scipy’s eigh function. Do not use.

neurotools.linalg.matrix.logdet(C, eps=1e-06, safe=0)[source]

Logarithm of the determinant of a matrix. Works only with real-valued positive definite matrices.
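
One standard way to compute this quantity for a positive definite matrix uses the Cholesky factor; this sketch is illustrative and not necessarily the module’s implementation:

import numpy as np
C = np.random.randn(6, 6)
C = C @ C.T + 1e-6*np.eye(6)            # real positive definite matrix
L = np.linalg.cholesky(C)               # C = L @ L.T
ld = 2.0*np.sum(np.log(np.diag(L)))     # log det(C) = 2 * sum(log(diag(L)))
sign, ld2 = np.linalg.slogdet(C)        # cross-check with numpy's slogdet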

neurotools.linalg.matrix.solt(a, b)[source]

Wraps solve_triangular; automatically detects lower vs. upper triangular systems.

neurotools.linalg.matrix.rsolt(a, b)[source]

Wraps solve_triangular for the right-hand solution; solves the system xA = B for triangular A.

neurotools.linalg.matrix.rsolve(a, b)[source]

Wraps solve for the right-hand solution; solves the system xA = B.
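
The right-hand solves reduce to ordinary solves through the transpose identity xA = B ⇔ A.T x.T = B.T; a sketch of that equivalence (assumed, not taken from the module’s source):

import numpy as np
from scipy.linalg import solve
A = np.random.randn(4, 4)
B = np.random.randn(3, 4)
x = solve(A.T, B.T).T             # solves x A = B
err = np.max(np.abs(x @ A - B))   # should be near zero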

neurotools.linalg.matrix.qf(A, S=None)[source]

Matrix quadratic form A*S*A.T. If S is None, computes A*A.T.

neurotools.linalg.matrix.abserr(M1, M2)[source]

Mean absolute element-wise difference between two matrices

neurotools.linalg.matrix.errmx(stuff)[source]

Takes a list of objects and prints out a matrix of the pairwise element-wise mean absolute differences. All objects must have the same shape.

neurotools.linalg.matrix.cholupdate(R, x)[source]

Rank-1 update to a Cholesky factorization. Possibly slower than simply recomputing.

Test:

q  = randn(10,10)
qq = q.T.dot(q)
ch = chol(qq)
x  = randn(10)
xx = outer(x,x)
pp = qq+xx
cp = chol(pp)
c2 = cholupdate(ch,x.T)
print(abserr(c2,cp))

neurotools.linalg.matrix.cholupdate_eye(R)[source]

Identity matrix update to a Cholesky factorization. Probably slower than simply recomputing.

Test:

q  = randn(10,10)
qq = q.T.dot(q)
ch = chol(qq)
pp = qq+eye(10)
cp = chol(pp)
c2 = cholupdate_eye(ch)
print(abserr(c2,cp))

neurotools.linalg.matrix.cartesian_product(*arrays)[source]

Cartesian product of input arrays. Based on https://stackoverflow.com/questions/11144513/numpy-cartesian-product-of-x-and-y-array-points-into-single-array-of-2d-points
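
The linked answer builds the product from an index grid; a rough sketch of the idea (the helper name cartesian_product_sketch is hypothetical, and the module’s implementation may differ):

import numpy as np
def cartesian_product_sketch(*arrays):
    # form every combination of entries, one column per input array
    grids = np.meshgrid(*arrays, indexing='ij')
    return np.stack([g.ravel() for g in grids], axis=-1)

cartesian_product_sketch(np.array([1, 2]), np.array([10, 20, 30]))
# array([[ 1, 10], [ 1, 20], [ 1, 30], [ 2, 10], [ 2, 20], [ 2, 30]])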

neurotools.linalg.matrix.nearPDHigham(A, nit=10)[source]

Computes a positive definite matrix “close” to matrix $X$ using the algorithm of Higham (2000).

This is based on a [Stackoverflow answer](https://stackoverflow.com/a/10940283/900749), and relevant intellectual property rights are retained by user [sega sai](https://stackoverflow.com/users/1269140/sega-sai), [Stackoverflow](https://stackoverflow.com), or Higham (2000).

Parameters:

X (np.array) – Square, real-valued matrix

Returns:

Positive semi-definite matrix close to $X$

Return type:

np.array

neurotools.linalg.matrix.nearPSDRebonatoJackel(A, epsilon=1e-10)[source]

Computes a positive definite matrix “close” to matrix $X$ using the algorithm of Rebonato and Jackel (1999).

This is based on a [Stackoverflow answer](https://stackoverflow.com/a/18542094/900749)

Parameters:
  • X (np.array) – Square, real-valued matrix

  • epsilon (non-negative scalar, default 1e-10) – minimum eigenvalue

Returns:

Positive semi-definite matrix close to $X$

Return type:

np.array
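
The core of the Rebonato–Jackel approach is clipping eigenvalues from below at epsilon; the sketch below shows only that step (the published algorithm, and possibly this routine, additionally rescales to preserve the diagonal of correlation matrices):

import numpy as np
def near_psd_sketch(A, epsilon=1e-10):
    S = 0.5*(A + A.T)            # symmetrize
    w, V = np.linalg.eigh(S)     # eigendecomposition
    w = np.maximum(w, epsilon)   # clip eigenvalues at the floor epsilon
    return V @ np.diag(w) @ V.T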

neurotools.linalg.matrix.cinv(X, repair=False)[source]

Invert a positive definite matrix X using Cholesky factorization and LAPACK’s dtrtri function.

Parameters:
  • X (np.array) – Square, real-valued, symmetric positive definite matrix

  • repair (boolean, default is False) – If true, and if $X$ is not positive definite, then this routine will attempt to operate on a new positive matrix close to $X$

Returns:

The inverse of X computed using Cholesky factorization

Return type:

matrix

neurotools.linalg.matrix.csolve(H, J)[source]

Solve PSD linear system $x = H^{-1}J$ via Cholesky factorization
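
A sketch of the Cholesky-based solve using scipy (this should give the same result up to numerical error, though the module may call LAPACK routines directly):

import numpy as np
from scipy.linalg import cho_factor, cho_solve
H = np.random.randn(5, 5)
H = H @ H.T + np.eye(5)          # symmetric positive definite
J = np.random.randn(5, 2)
c, low = cho_factor(H)           # Cholesky factorization of H
x = cho_solve((c, low), J)       # solves H x = J, i.e. x = H^{-1} J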

neurotools.linalg.matrix.wheremax(a)[source]

Returns the indices of the maximum element in a multi-dimensional array.

Parameters:

a (np.array) – Numpy multi-dimensional array

Returns:

Tuple of indices indicating the maximum element

Return type:

tuple
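
This corresponds to the standard numpy idiom below (an assumption; the module’s implementation may differ):

import numpy as np
a = np.random.randn(3, 4, 5)
idx = np.unravel_index(np.argmax(a), a.shape)   # tuple of indices of the maximum
assert a[idx] == a.max()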

neurotools.linalg.matrix.wheremin(a)[source]

Returns the indices of the minimum element in a multi-dimensional array.

Parameters:

a (np.array) – Numpy multi-dimensional array

Returns:

Tuple of indices indicating the minimum element

Return type:

tuple

neurotools.linalg.matrix.reglstsq(X, Y, reg=1e-15, suppress_transpose_error=False)[source]

Regularized least squares. Solves Y=XM for M with L2 regularization

Parameters:
  • X (two-dimensional numpy array) – Matrix of observations of the explanatory/independent variables Nsamples x Nfeatures

  • Y (two-dimensional numpy array) – Matrix of observations of the response/dependent variables Nsamples x Nfeatures

  • reg (positive float) – Small positive L2 regularization, default is 1e-15

  • suppress_transpose_error (boolean; default False) – Set this to True to fit regression models with more dimensions than training examples.

Returns:

w

Return type:

weight vector
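
The L2-regularized solution corresponds to the ridge normal equations; a minimal sketch of that computation (illustrative only; reglstsq may handle scaling or the underdetermined case differently):

import numpy as np
X = np.random.randn(100, 5)                    # Nsamples x Nfeatures
M_true = np.random.randn(5, 2)
Y = X @ M_true + 0.01*np.random.randn(100, 2)
reg = 1e-15
# ridge solution: (X'X + reg*I)^{-1} X'Y
M = np.linalg.solve(X.T @ X + reg*np.eye(X.shape[1]), X.T @ Y)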

neurotools.linalg.matrix.Ldistance(X, M, L=2, eps=0.001)[source]

L-n norm distance between two vectors

neurotools.linalg.matrix.Llasso(X, M, L=2, eps=0.001)[source]

Lasso-like distance (I think? need to check this one)

neurotools.linalg.matrix.rmatrix(h)[source]

Generate 2D rotation matrix for angle h

Parameters:

h (float) – Angle of rotation in radians

Returns:

M – 2D rotation matrix for h radians.

Return type:

2x2 numpy array
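
Presumably the standard rotation matrix; a sketch (the counter-clockwise sign convention is an assumption):

import numpy as np
def rmatrix_sketch(h):
    # 2D rotation by h radians, counter-clockwise convention assumed
    return np.array([[np.cos(h), -np.sin(h)],
                     [np.sin(h),  np.cos(h)]])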

neurotools.linalg.matrix.ldiv(A, B)[source]

Behaves like Matlab’s A\B; solves AX = B for X = A^{-1}B, i.e. finds the matrix X which, when multiplied on the left by A, is close to B.

Parameters:
  • A (np.array) – Nsamples x Nfeatures array of independent variables

  • B (np.array) – Nsamples x Noutputs array of dependent variables

Returns:

X – Nfeatures x Noutputs array such that AX ≃ B

Return type:

np.array

neurotools.linalg.matrix.rdiv(A, B)[source]

Solves for X = AB^{-1}, i.e. XB = A
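
Both divisions reduce to ordinary linear solves; a sketch of the equivalences for square, well-conditioned matrices (ldiv and rdiv may handle non-square inputs differently, e.g. via least squares):

import numpy as np
A = np.random.randn(4, 4)
B = np.random.randn(4, 4)
X_left  = np.linalg.solve(A, B)          # ldiv: A X = B, so X = A^{-1} B
X_right = np.linalg.solve(B.T, A.T).T    # rdiv: X B = A, so X = A B^{-1}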

neurotools.linalg.matrix.autopredict(z, reg=1e-09)[source]

Solve the mutual prediction problem for T x N covariates z. Approximates z = zQ, where diag(Q) is zero.

neurotools.linalg.matrix.kronsum(A, B)[source]

Kronecker sum
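
Assuming the usual convention, the Kronecker sum of an n×n matrix A and an m×m matrix B is kron(A, I_m) + kron(I_n, B); a sketch:

import numpy as np
def kronsum_sketch(A, B):
    # Kronecker sum: kron(A, I_m) + kron(I_n, B) (convention assumed)
    n, m = A.shape[0], B.shape[0]
    return np.kron(A, np.eye(m)) + np.kron(np.eye(n), B)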

neurotools.linalg.matrix.top_v(C)[source]

Return the leading eigenvector of a covariance matrix

neurotools.linalg.matrix.normedcovariance(C)[source]

Divide a covariance matrix by its largest eigenvalue

neurotools.linalg.matrix.maxeig(C)[source]

Get the largest eigenvalue

neurotools.linalg.matrix.selector_matrix(b)[source]

Create matrix that extracts a subset of elements. Returns selector matrix S such that:

X[b] = S @ X

Parameters:

b (boolean vector)
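
A rough sketch of the idea: the selected rows of the identity matrix form S (the routine itself may build S differently, e.g. as a sparse matrix):

import numpy as np
b = np.array([True, False, True, False])
S = np.eye(len(b))[b]        # rows of the identity picked out by the mask
X = np.random.randn(4, 3)
np.allclose(S @ X, X[b])     # True: S @ X selects the same rows as X[b]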

neurotools.linalg.matrix.laplace_kernel()[source]

Returns a 3x3 Laplacian kernel that is as radially symmetric as possible.

Return type:

3x3 np.array containing the discrete 2D Laplacian kernel

neurotools.linalg.matrix.match_covariance(Q, x, verbose=False, sample_deficient=False)[source]

Adjust data to match target covariance with minimal L2 distortion.

This uses eigendecomposition-based whitening. The following tutorials show that this is a minimum distortion approach:

Eldar YC. Minimum mean-squared error covariance shaping. In 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings.(ICASSP’03). 2003 Apr 6 (Vol. 6, pp. VI-713). IEEE.

Eldar YC, Oppenheim AV. MMSE whitening and subspace whitening. IEEE Transactions on Information Theory. 2003 Jun 25;49(7):1846-51.

Test code (relies on from neurotools.nlab import *):

N = 2    # dimensions
T = 1000 # Samples

# Target covariance structure
q = randn(N,N)
e,v = psd_eig(q@q.T)
e = array([1,0.1])
shuffle(e)
q = v@diag(e**0.5)
Q = q@q.T

# Source covariance structure
w = randn(N,N)
e,v = psd_eig(w@w.T)
e = array([1,0.1])
shuffle(e)
w = v@diag(e**0.5)
W = w@w.T

# Original samples
x1 = w@randn(N,T)

x2 = match_covariance(Q,x1,verbose=True,sample_deficient=False)

subplot(131)
scatter(x1.ravel(),x2.ravel(),s=1)
force_aspect()
simpleaxis()

subplot(132)
scatter(*x1,s=1)
scatter(*x2,s=1)
force_aspect()
simpleaxis()

subplot(133)
plot(*cat([(xi1,xi2,(NaN,NaN)) for (xi1,xi2) in zip(x1.T,x2.T)]).T,lw=0.1)
force_aspect()
simpleaxis()
Parameters:
  • x (np.array) – Nfeatures x Nsamples array of data

  • Q (np.array) – Nfeatures x Nfeatures target covariance matrix

  • verbose (bool, default false) – Whether to print debugging information

  • sample_deficient (bool, default false) – Override sanity checks if number of samples is less than the rank of the data.

Returns:

x2 – Samples transformed to approximately match target covariance while minimizing mean-squared-error distortion

Return type:

transformed samples

neurotools.linalg.matrix.get_whitener(S)[source]