neurotools.signal package

Submodules

Module contents

Routines for signal processing.

neurotools.signal.gaussian_kernel(sigma)[source]

Generate a 1D Gaussian kernel for smoothing.

Parameters:

sigma (positive float) – Standard deviation of kernel. Kernel size is automatically adjusted to ceil(sigma*2)*2+1

Returns:

K – normalized Gaussian kernel

Return type:

vector
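
For reference, a minimal sketch of the construction described above (size rule ceil(sigma*2)*2+1, unit-sum normalization); this is an illustration, not the package's actual implementation:

    import numpy as np

    def gaussian_kernel_sketch(sigma):
        # Kernel length from the size rule above: ceil(sigma*2)*2 + 1 (always odd)
        N = int(np.ceil(sigma * 2)) * 2 + 1
        t = np.arange(N) - N // 2               # symmetric offsets around zero
        K = np.exp(-t**2 / (2.0 * sigma**2))    # unnormalized Gaussian
        return K / K.sum()                      # normalize to unit sum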

neurotools.signal.gaussian_smooth(x, sigma, mode='same')[source]

Smooth signal x with a Gaussian of standard deviation sigma, using edge-clamped boundary conditions.

Parameters:
  • sigma (positive float) – Standard deviation of Gaussian smoothing kernel.

  • x (1D np.array) – Signal to filter.

Return type:

smoothed signal
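
A rough scipy equivalent of edge-clamped Gaussian smoothing (an approximation for illustration; the package's exact boundary handling may differ):

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    x = np.random.randn(500)                    # example signal
    # mode='nearest' repeats the edge samples, i.e. edge-clamped boundaries
    y = gaussian_filter1d(x, sigma=5.0, mode='nearest')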

neurotools.signal.circular_gaussian_smooth(x, sigma)[source]

Smooth signal x with a Gaussian of standard deviation sigma, circularly wrapped using the Fourier transform.

Parameters:
  • x (np.array) – 1D array-like signal

  • sigma (positive float) – Standard deviation

Return type:

smoothed signal
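
A sketch of circular Gaussian smoothing via the FFT (circular convolution is a pointwise product in the Fourier domain); the kernel construction here is an assumption for illustration:

    import numpy as np

    def circular_gaussian_smooth_sketch(x, sigma):
        N = len(x)
        d = np.arange(N)
        d = np.minimum(d, N - d)                 # circular distance from index 0
        K = np.exp(-d**2 / (2.0 * sigma**2))
        K /= K.sum()                             # unit-sum periodic kernel
        # Circular convolution via the Fourier transform
        return np.fft.ifft(np.fft.fft(x) * np.fft.fft(K)).real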

neurotools.signal.mirrored_gaussian_smooth(x, sigma)[source]

Smooth signal x with a Gaussian of standard deviation sigma, using reflected boundary conditions.

Parameters:
  • x (np.array) – 1D array-like signal

  • sigma (positive float) – Standard deviation

Return type:

smoothed signal

neurotools.signal.circular_gaussian_smooth_2D(x, sigma)[source]

Smooth signal x with a Gaussian of standard deviation sigma, circularly wrapped using the Fourier transform.

Parameters:
  • x (np.ndarray) – Smoothing is performed over the last two dimensions, which should have the same length

  • sigma (positive float) – Standard deviation of Gaussian kernel for smoothing, in pixels.

Returns:

x, circularly smoothed over the last two dimensions.

Return type:

np.array

neurotools.signal.nonnegative_bandpass_filter(data, fa=None, fb=None, Fs=1000.0, order=4, zerophase=True, bandstop=False, offset=1.0)[source]

For filtering data that must remain non-negative. Due to ringing, conventional filtering can create values less than zero even for non-negative real inputs. This may be unrealistic for some data.

To compensate, this performs the filtering on the natural logarithm of the input data. For small numbers, this can lead to numeric underflow, so an offset parameter (default 1) is added to the data for stability.

Parameters:
  • data (ndarray) – data, filtering performed over last dimension

  • fa (number) – low-frequency cutoff in Hz. If None, lowpass at fb

  • fb (number) – high-frequency cutoff in Hz. If None, highpass at fa

  • Fs (int) – sample rate in Hz

  • order (1..6) – Butterworth filter order. Default is 4

  • zerophase (boolean) – Use forward-backward filtering? (True)

  • bandstop (boolean) – Do band-stop rather than band-pass

  • offset (positive number) – Offset data to avoid underflow (default 1)

Returns:

Filtered signal

Return type:

filtered
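
One plausible reading of the log-domain procedure described above, sketched in terms of this module's own bandpass_filter; whether the package maps the result back with an exponential exactly like this is an assumption:

    import numpy as np
    from neurotools.signal import bandpass_filter

    def nonnegative_bandpass_sketch(data, fa=None, fb=None, Fs=1000.0, offset=1.0):
        logged   = np.log(data + offset)                     # offset avoids log(0)
        filtered = bandpass_filter(logged, fa=fa, fb=fb, Fs=Fs)
        return np.exp(filtered) - offset                     # back to the original scale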

neurotools.signal.bandpass_filter(data, fa=None, fb=None, Fs=1000.0, order=4, zerophase=True, bandstop=False)[source]

If fa is None, assumes a lowpass filter with cutoff fb. If fb is None, assumes a highpass filter with cutoff fa. The array can be any dimension; filtering is performed over the last dimension.

Parameters:
  • data (ndarray) – data, filtering performed over last dimension

  • fa (number) – low-frequency cutoff. If None, lowpass at fb

  • fb (number) – high-frequency cutoff. If None, highpass at fa

  • order (1..6) – Butterworth filter order. Default 4

  • zerophase (boolean) – Use forward-backward filtering? (true)

  • bandstop (boolean) – Do band-stop rather than band-pass

Returns:

result – Filtered signal.

Return type:

np.array
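
For comparison, a standard zero-phase Butterworth band-pass in scipy looks roughly like this (a sketch of the conventional technique, not necessarily the package's exact code path):

    import numpy as np
    from scipy.signal import butter, filtfilt

    def butter_bandpass_sketch(data, fa, fb, Fs=1000.0, order=4):
        nyq = 0.5 * Fs
        b, a = butter(order, [fa / nyq, fb / nyq], btype='bandpass')
        # Forward-backward filtering cancels the phase shift ("zerophase")
        return filtfilt(b, a, data, axis=-1)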

neurotools.signal.box_filter(data, smoothat, padmode='reflect')[source]

Smooths data by convolving with a box of width smoothat. Provide smoothat in units of frames, i.e. samples.

Parameters:
  • data (np.array) – One-dimensional numpy array of the signal to be filtered

  • smoothat (positive int) – Filtering window length in samples

  • padmode (string, default 'reflect') – Boundary padding mode

Returns:

One-dimensional filtered signal

Return type:

np.array

neurotools.signal.median_filter(x, window=100, mode='same')[source]

Filters a signal by calculating the median in a sliding window of width ‘window’

mode=’same’ will compute median even at the edges, where a full window is not available

mode=’valid’ will compute median only at points where the full window is available

Parameters:
  • x (np.array) – One-dimensional numpy array of the signal to be filtered

  • window (positive int) – Filtering window length in samples

  • mode (string, default 'same') – If ‘same’, the returned signal will have the same time-base and length as the original signal. If ‘valid’, edges which do not have the full window length will be trimmed

Returns:

One-dimensional filtered signal

Return type:

np.array
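
The ‘valid’-mode behaviour (only full windows) can be sketched with numpy's sliding window view; this is an illustration, not the package's implementation:

    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    x = np.random.randn(1000)
    window = 100
    # One median per full window; result has len(x) - window + 1 samples
    y_valid = np.median(sliding_window_view(x, window), axis=-1)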

neurotools.signal.percentile_filter(x, pct, window=100, mode='same')[source]

Filters a signal by calculating the pct-th percentile in a sliding window of width ‘window’

mode=’same’ will compute the percentile even at the edges, where a full window is not available

mode=’valid’ will compute the percentile only at points where the full window is available

Parameters:
  • x (np.array) – One-dimensional numpy array of the signal to be filtered

  • pct (float in 0..100) – Percentile to apply

  • window (positive int) – Filtering window length in samples

  • mode (string, default 'same') – If ‘same’, the returned signal will have the same time-base and length as the original signal. If ‘valid’, edges which do not have the full window length will be trimmed

Returns:

One-dimensional filtered signal

Return type:

np.array

neurotools.signal.variance_filter(x, window=100, mode='same')[source]

Extracts signal variance in a sliding window

mode=’same’ will compute the variance even at the edges, where a full window is not available

mode=’valid’ will compute the variance only at points where the full window is available

Parameters:
  • x (np.array) – One-dimensional numpy array of the signal to be filtered

  • window (positive int) – Filtering window length in samples

  • mode (string, default 'same') – If ‘same’, the returned signal will have the same time-base and length as the original signal. If ‘valid’, edges which do not have the full window length will be trimmed

Returns:

One-dimensional filtered signal

Return type:

np.array

neurotools.signal.stats_block(data, statfunction, N=100, sample_match=None)[source]

Compute a function of the signal in blocks of size $N$ over the last axis of the data.

Parameters:
  • data (np.array) – N-dimensional numpy array. Blocking is performed over the last axis

  • statfunction (function) – Statistical function to compute on each block. Should be, or behave similarly to, the numpy built-ins, e.g. np.mean, np.median, etc.

  • N (positive integer, default 100) – Block size in which to break data. If data cannot be split evenly into blocks of size $N$, then data are truncated to the largest integer multiple of N.

  • sample_match (positive integer, default None) – If not None, then blocks will be sub-sampled to contain sample_match samples. sample_match should not exceed data.shape[-1]//N

Returns:

Blocked data

Return type:

np.array
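
A sketch of the blocking scheme described above: truncate to the largest multiple of N, reshape into blocks, and reduce each block (the sample_match option is omitted here):

    import numpy as np

    def stats_block_sketch(data, statfunction=np.mean, N=100):
        data = np.asarray(data)
        T = (data.shape[-1] // N) * N                # largest multiple of N that fits
        blocked = data[..., :T].reshape(*data.shape[:-1], T // N, N)
        return statfunction(blocked, axis=-1)        # one statistic per block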

neurotools.signal.mean_block(data, N=100, sample_match=None)[source]

Calls stats_block using np.mean. See documentation of stats_block for details.

Parameters:
  • data (1D np.array) – Signal to filter

  • N (positive integer, default 100) – Block size in which to break data. If data cannot be split evenly into blocks of size $N$, then data are truncated to the largest integer multiple of N.

  • sample_match (positive integer, default None) – If not None, then blocks will be sub-sampled to contain sample_match samples. sample_match should not exceed data.shape[-1]//N

Returns:

result – stats_block(data, np.mean, N)

Return type:

np.array

neurotools.signal.var_block(data, N=100)[source]

Calls stats_block using np.var. See documentation of stats_block for details.

Parameters:
  • data (1D np.array) – Signal to filter

  • N (positive integer, default 100) – Block size in which to break data. If data cannot be split evenly into blocks of size $N$, then data are truncated to the largest integer multiple of N.

  • sample_match (positive integer, default None) – If not None, then blocks will be sub-sampled to contain sample_match samples. sample_match should not exceed data.shape[-1]//N

Returns:

result – stats_block(data, np.var, N)

Return type:

np.array

neurotools.signal.median_block(data, N=100)[source]

Calls stats_block using np.median. See documentation of stats_block for details.

Parameters:
  • data (1D np.array) – Signal to filter

  • N (positive integer, default 100) – Block size in which to break data. If data cannot be split evenly into blocks of size $N$, then data are truncated to the largest integer multiple of N.

Returns:

result – stats_block(data, np.median, N)

Return type:

np.array

neurotools.signal.linfilter(A, C, x, initial=None)[source]

Linear response filter on data $x$ for system

$$ \partial_t z = A z + C x(t) $$

Parameters:
  • A (matrix) – K x K matrix defining the linear system

  • C (matrix) – K x N matrix defining projection from signal $x$ to linear system

  • x (vector or matrix) – T x N sequence of states to filter

  • initial (vector) – Optional length N vector of initial filter conditions. Set to 0 by default

Returns:

filtered – filtered data

Return type:

array
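
A simple forward-Euler sketch of integrating ∂t z = A z + C x(t), assuming x is given as a T x N matrix; the package's actual integration scheme and time step are not specified here, so treat this as an illustration:

    import numpy as np

    def linfilter_sketch(A, C, x, initial=None, dt=1.0):
        A, C, x = np.asarray(A), np.asarray(C), np.asarray(x)
        T, K = x.shape[0], A.shape[0]
        z = np.zeros(K) if initial is None else np.asarray(initial, dtype=float)
        out = np.empty((T, K))
        for t in range(T):
            z = z + dt * (A @ z + C @ x[t])          # Euler step of dz/dt = Az + Cx(t)
            out[t] = z
        return out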

neurotools.signal.padout(data)[source]

Generates a reflected version of a 1-dimensional signal.

The original data is placed in the middle, between the mirrored copies. Use the function “padin” to strip the padding.

Parameters:

data (1d np.array)

Returns:

result – length 2*data.shape[0] padded array

Return type:

np.array

neurotools.signal.padin(data)[source]

Removes padding added by the padout function; padin and padout together are used to control the boundary conditions for filtering. See the documentation for padout for details.

Parameters:

data (array-like) – Data array produced by the padout function

Returns:

data with edge padding removed

Return type:

np.array
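
A sketch of how padout and padin are typically paired around a filtering step to get reflected boundary conditions:

    import numpy as np
    from neurotools.signal import padout, padin, box_filter

    x = np.random.randn(400)
    padded   = padout(x)                # reflected copies around the original data
    smoothed = box_filter(padded, 11)   # any filtering step in between
    y        = padin(smoothed)          # strip the padding back off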

neurotools.signal.estimate_padding(fa, fb, Fs=1000)[source]

Estimate the amount of padding needed to address boundary conditions when filtering. Takes into account the filter bandwidth, which is related to the time-locality of the filter, and therefore the amount of padding needed to prevent artifacts at the edge.

neurotools.signal.lowpass_filter(x, cut=10, Fs=1000, order=4)[source]

Execute a Butterworth low-pass filter at cutoff frequency cut. Defaults to order=4 and Fs=1000.

neurotools.signal.highpass_filter(x, cut=40, Fs=1000, order=4)[source]

Execute a Butterworth high-pass filter at cutoff frequency cut. Defaults to order=4 and Fs=1000.

neurotools.signal.fdiff(x, Fs=240.0)[source]

Take the discrete derivative of a signal, correcting the result for the sample rate. This procedure returns a signal two samples shorter than the original.

Parameters:
  • x (1D np.array) – Signal to differentiate

  • Fs (positive number) – Sampling rate of x

neurotools.signal.arenear(b, K=5)[source]

Expand a boolean/binary sequence by K samples in each direction. See also “aresafe”

Parameters:
  • b (1D np.bool) – Boolean array;

  • K (positive int; default 5) – Number of samples to add to each end of spans of b which are True

Returns:

b – Expanded b.

Return type:

np.bool

neurotools.signal.aresafe(b, K=5)[source]

Contract a boolean/binary sequence by K samples in each direction. I.e. trim off K samples from the ends of spans of b that are True.

For example, you may want to test for a condition, but avoid samples close to edges in that condition.

Parameters:
  • b (1D np.bool) – Boolean array;

  • K (positive int; default 5) – Number of samples to shave off each end of spans of b which are True

Returns:

b – Trimmed b

Return type:

np.bool

neurotools.signal.get_edges(signal, pad_edges=True)[source]

Assuming a binary signal, get the start and stop times of each stretch of 1s.

Parameters:
  • signal (1-dimensional array-like)

  • pad_edges (bool; default True) – Should we treat blocks that start or stop at the beginning or end of the signal as valid?

Return type:

2xN array of bin start and stop indices
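
A sketch of the underlying idea using np.diff on a zero-padded copy (illustration only; the package's handling of pad_edges may differ):

    import numpy as np

    b = np.array([0, 0, 1, 1, 1, 0, 1, 0])
    # Pad with zeros so stretches touching the ends are still detected
    d = np.diff(np.concatenate([[0], b, [0]]))
    starts = np.where(d == 1)[0]          # index where each stretch of 1s begins
    stops  = np.where(d == -1)[0]         # index just past where it ends
    edges  = np.array([starts, stops])    # 2 x N array of (start, stop) indices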

neurotools.signal.set_edges(edges, N)[source]

Converts a list of (start, stop) times over a time period N into a 0/1 array which is 1 for any time within a [start, stop) interval. Edge information outside [0, N] results in undefined behavior.

neurotools.signal.remove_gaps(w, cutoff)[source]

Removes gaps (stretches of zeros bordered by ones) from binary signal w that are shorter than cutoff in duration.

Parameters:
  • w (one-dimensional array-like) – Binary signal

  • cutoff (positive int) – Minimum gap duration to keep

Returns:

Copy of w with gaps shorter than cutoff removed

Return type:

array-like

neurotools.signal.remove_short(w, cutoff)[source]

Removes spans of ones bordered by zeros from binary signal w that are shorter than cutoff in duration.

Parameters:
  • w (one-dimensional array-like) – Binary signal

  • cutoff (positive int) – Minimum span duration to keep

Returns:

Copy of w with spans shorter than cutoff removed

Return type:

array-like

neurotools.signal.pieces(x, thr=4)[source]

Chops up x between points that differ by more than thr

Parameters:
  • x (1D np.array)

  • thr (number; default 4) – Derivative threshold for cutting segments.

Returns:

ps – List of (range(a,b),x[a:b]) for each piece of x.

Return type:

list

neurotools.signal.interpolate_NaN(u)[source]

Fill in NaN (missing) data in a one-dimensional timeseries via linear interpolation.

Parameters:

u (np.array) – Signal in which to interpolate NaN values

Returns:

u – Copy of u with NaN values filled in via linear interpolation.

Return type:

np.array
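
A minimal sketch of linear NaN interpolation using np.interp (leading/trailing NaNs are clamped to the nearest finite value here; the package's edge handling may differ):

    import numpy as np

    def interpolate_nan_sketch(u):
        u = np.array(u, dtype=float)              # work on a copy
        bad = np.isnan(u)
        idx = np.arange(len(u))
        # Fill missing samples by linear interpolation between good neighbours
        u[bad] = np.interp(idx[bad], idx[~bad], u[~bad])
        return u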

neurotools.signal.interpolate_NaN_quadratic(u)[source]

Fill in NaN (missing) data in a one-dimensional timeseries via quadratic interpolation.

Parameters:

u (1D np.array)

Returns:

u, with NaN values replaced using locally-quadratic interpolation.

Return type:

np.array

neurotools.signal.killSpikes(x, threshold=1)[source]

Remove times when the signal exceeds a given threshold, in units of the standard deviation of the underlying signal. Removed data are re-interpolated from the edges. This procedure is particularly useful for correcting kinematic velocity trajectories: velocity should be smooth, but motion tracking errors can cause sharp spikes in the signal.

Parameters:
  • x – Signal to correct

  • threshold (number; default 1) – Threshold, in multiples of the signal’s standard deviation

neurotools.signal.drop_nonfinite(x)[source]

Flatten array and remove non-finite values

Parameters:
x (np.float32) – Numpy array from which to remove non-finite values

Returns:

Flattened array with non-finite values removed.

Return type:

1D np.float32

neurotools.signal.virtual_reference_line_noise_removal(lfps, frequency=60, hbw=5)[source]

Accepts an array of LFP recordings (the first dimension should be channel number, the second dimension time). The sample rate is assumed to be 1000 Hz.

Extracts the mean signal within 2.5 Hz of 60Hz. For each channel, removes the projection of the LFP signal onto this estimated line noise signal.

I’ve found this approach sometimes doesn’t work very well, so please inspect the output for quality.

To filter out overtones, see band_stop_line_noise_removal().

Parameters:
  • lfps – LFP channel data

  • frequency (positive number) – Line noise frequency, defaults to 60 Hz (USA).

  • hbw (positive number) – Half-bandwidth settings; Default is 5

Returns:

removed – Signal with the estimated line-noise component removed

Return type:

np.array

neurotools.signal.band_stop_line_noise_removal(lfps, frequency=60.0)[source]

Remove line noise using band-stop at 60Hz and overtones.

Parameters:
  • lfps – LFP channel data

  • frequency (positive number) – Line noise frequency, defaults to 60 Hz (USA).

  • hbw (positive number) – Half-bandwidth settings; Default is 10

Returns:

removed – Band-stop filtered signal

Return type:

np.array

neurotools.signal.local_peak_within(freqs, cc, fa, fb)[source]

For a spectrum, identify the largest local maximum in the frequency range [fa,fb].

Parameters:
  • freqs (np.array) – Frequencies

  • cc (np.array) – Amplitude

  • fa (float) – low-frequency cutoff

  • fb (float) – high-frequency cutoff

Returns:

  • i – index of peak, or None if no local peak found

  • frequency – frequency at peak, or None if no local peak found

  • peak – amplitude at peak, or None if no local peak found

neurotools.signal.local_maxima(x, include_endpoints=False)[source]

Detect local maxima in a 1D signal.

Parameters:
  • x (np.array)

  • include_endpoints (bool; default False) – Whether to include endpoints as local maxima.

Returns:

  • t (np.int32) – Location of local maxima in x

  • x[t] (np.array) – Values of x at these local maxima.

neurotools.signal.local_minima(x, include_endpoints=False)[source]

Detect local minima in a 1D signal.

Parameters:

x (np.array)

Returns:

  • t (np.int32) – Location of local minima in x

  • x[t] (np.array) – Values of x at these local minima.

neurotools.signal.peak_within(freqs, spectrum, fa, fb)[source]

Find maximum within a band

Parameters:
  • freqs (np.array) – Frequencies

  • spectrum

  • fa (float) – low-frequency cutoff

  • fb (float) – high-frequency cutoff

neurotools.signal.interpmax1d(x)[source]

Locate a peak in a 1D array by interpolation; see dspguru.com/dsp/howtos/how-to-interpolate-fft-peak

Parameters:

x (1D np.array; Signal in which to locate the global maximum.)

Returns:

i

Return type:

float; Interpolated index of global maximum in x.

neurotools.signal.peak_fwhm(x, max_fraction=0.5, include_fraction=0.5, include_endpoints=True)[source]

Extract full-width-half-maximum information from the peaks (local maxima) in a series.

This will extract the width of peaks at half maximum (or some other fraction given by max_fraction).

If the nearest valley (trough, local minimum) is higher than half-maximum, this will be used as a boundary for the peak instead.

Parameters:
  • x (1D np.array) – 1D spectrum

  • max_fraction (float ∈(0,1); default 0.5) – Height fraction at which to take the peak width. The default is 0.5 for “half maximum”.

  • include_fraction (float ∈(0,1); default 0.5) – Exclude peaks shorter than include_fraction times the tallest peak.

  • include_endpoints (bool; default True) – Whether to include endpoints as local maxima.

Returns:

  • peak_index (1D np.int32) – Index into x of each peak

  • peak_height (1D np.float32) – Value of x at each peak

  • peak_width (1D np.int32) – Width of peak at max_fraction height

  • peak_start (1D np.int32) – Start of peak above max_fraction height

  • peak_stop (1D np.int32) – End of peak above max_fraction height

class neurotools.signal.PeakInfoResult(index, height, inflection_below, inflection_above, trough_below, trough_above, qfit)[source]

Bases: NamedTuple

index: float

Alias for field number 0

height: float

Alias for field number 1

inflection_below: float

Alias for field number 2

inflection_above: float

Alias for field number 3

trough_below: float

Alias for field number 4

trough_above: float

Alias for field number 5

qfit: list

Alias for field number 6

neurotools.signal.quadratic_peakinfo(x, freqs=None)[source]

Collect information about the location, width, and height of local maxima. Fit a Gaussian model to each peak using data between the edges of the peak, defined as the inflection points below and above each local maximum.

Parameters:
  • x (1D np.array) – 1D spectrum

  • freqs (list (optional)) – List of frequencies for each value of x. If not provided, returned results will be in terms of the index into x (starting at 0).

Returns:

peaks – List of PeakInfoResult containing the fields:

  • index (float) – Index into x of each local maximum (interpolated).

  • height (float) – Value of x at each local maximum.

  • inflection_below (float) – Nearest inflection below each local maximum.

  • inflection_above (float) – Nearest inflection above each local maximum.

  • trough_below (float) – Nearest trough below each local maximum.

  • trough_above (float) – Nearest trough above each local maximum.

  • qfit (list) – Quadratic regression polynomial coefficients [c2, c1, c0].

Return type:

list

neurotools.signal.unitscale(signal, axis=None)[source]

Rescales signal so that its minimum is 0 and its maximum is 1.

Parameters:

signal (np.array) – Array-like real-valued signal

Returns:

Rescaled signal, (signal-min(signal))/(max(signal)-min(signal))

Return type:

np.array

neurotools.signal.topercentiles(x)[source]
neurotools.signal.zeromean(x, axis=0, verbose=False, ignore_nan=True)[source]

Remove the mean trend from data

Parameters:
  • x (np.array) – Data to remove mean trend from

  • axis (int or tuple, default 0) – Axis over which to take the mean; forwarded to the np.mean axis parameter

Returns:

x – Copy of x shifted so that mean is zero.

Return type:

np.array

neurotools.signal.zeromedian(x, axis=0, verbose=False, ignore_nan=True)[source]

Remove the median trend from data

Parameters:
  • x (np.array) – Data to remove median trend from

  • axis (int or tuple, default 0) – Axis over which to take the median; forwarded to the np.median axis parameter

Returns:

x – Copy of x shifted so that median is zero.

Return type:

np.array

neurotools.signal.zscore(x, axis=0, regularization=1e-30, verbose=False, ignore_nan=True, ddof=0)[source]

Z-scores data, defaults to the first axis.

A regularization factor is added to the standard deviation to prevent numerical instability when the standard deviation is small. The default regularization is 1e-30.

Parameters:
  • x – Array-like real-valued signal.

  • axis – Axis to zscore; default is 0.

Returns:

x – (x-mean(x))/std(x)

Return type:

np.ndarray
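
The regularized z-score described above amounts to roughly the following (NaN handling and ddof omitted for brevity):

    import numpy as np

    def zscore_sketch(x, axis=0, regularization=1e-30):
        mu = np.mean(x, axis=axis, keepdims=True)
        sd = np.std(x, axis=axis, keepdims=True)
        # The small constant keeps the division stable when sd is near zero
        return (x - mu) / (sd + regularization)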

neurotools.signal.unitsum(x, axis=None, verbose=False, ignore_nan=True, ddof=0)[source]

Normalize a np.ndarray to sum to 1. The default behavior is to act over all axes.

Parameters:
  • x (np.ndarray) – Array-like real-valued signal.

  • axis (int) – Axis over which to normalize; default is None, i.e. sum over all axes.

Returns:

x

Return type:

np.ndarray

neurotools.signal.gaussianize(x, axis=-1, verbose=False)[source]

Use percentiles to force a timeseries to have a normal distribution.

Parameters:
  • x (np.ndarray)

  • axis (int; default -1)

  • verbose (boolean; default False)

neurotools.signal.uniformize(x, axis=-1, killeps=None)[source]

Use percentiles to force a timeseries to have a uniform [0,1] distribution.

uniformize() was designed to operate on non-negative data and has some quirks that have been retained for archiving and backwards compatibility.

Namely, if the killeps argument is provided, uniformize() assumes that inputs are non-negative and excludes values less than killeps*σ, where σ is the standard deviation of x, from the percentile rankings. The original default for killeps was 1e-3.

This was done because in the T-maze experiments, the mouse often spends quite a bit of time at the beginning of the maze. We don’t want most of the [0,1] dynamic range dedicated to encoding positions or times near the start of trials. So, we estimate the scale of x using its standard deviation, and then clip values that are small relative to this scale. This heuristic works for position/time data from the Dan-Helen-Ethan experiments, but isn’t very general.

Parameters:
  • x (np.float32) – Timeseries

  • axis (int; default -1) – axis argument forwarded to scipy.stats.rankdata

  • killeps (positive float; default None) – If provided, exclude values smaller than killeps times the standard deviation of x from the percentile ranking (see above)

neurotools.signal.normalize(t, method)[source]
Parameters:
  • t (np.ndarray) – Signal to normalize

  • method (str) – One of ‘zeromean’, ‘zscore’, ‘gaussian’, ‘rank’, or ‘percentile’

neurotools.signal.invert_uniformize(x, p, axis=-1, killeps=None)[source]

Inverts the uniformize() function

uniformize() was designed to operate on non-negative data and has some quirks that have been retained for archiving and backwards compatibility.

Namely, if the killeps argument is provided, uniformize() assumes that inputs are non-negative and excludes values less than killeps*σ, where σ is the standard deviation of x, from the percentile rankings. The original default for killeps was 1e-3.

This was done because in the T-maze experiments, the mouse often spends quite a bit of time at the beginning of the maze. We don’t want most of the [0,1] dynamic range dedicated to encoding positions or times near the start of trials. So, we estimate the scale of x using its standard deviation, and then clip values that are small relative to this scale. This heuristic works for position/time data from the Dan-Helen-Ethan experiments, but isn’t very general.

Uniformize processing steps

  • Mark timepoints where abs(x)<killeps*std(x)

  • Rank data excluding these timepoints

  • Check how many timepoints were actually included

  • Normalize ranks to this amount

Parameters:
  • x (np.float32) – Original timeseries passed as argument x to uniformize()

  • p (np.float32 ∈ [0,1]) – Values on [0,1] interval to convert back into raw signal values, based on percentiles of x.

  • axis (int; default -1) – axis argument forwarded to scipy.stats.rankdata

  • killeps (positive float; default None) – Original value of killeps passed to uniformize() (leave blank if you did not specify this argument; it defaults to 1e-3)

Returns:

Reconstructed values. This should be equivalent to the original data, with values less than killeps times the standard deviation σ of the original x set to σ*killeps.

Return type:

np.float32

neurotools.signal.deltaovermean(x, axis=0, regularization=1e-30, verbose=False, ignore_nan=True)[source]

Subtracts, then divides by, the mean. Sometimes called “dF/F”

Parameters:
  • x (np.array) – Array-like real-valued signal.

  • axis (int) – Axis over which to operate; default is 0.

  • regularization (positive number; default 1e-30) – Regularization to avoid division by 0

  • verbose (boolean; default False)

  • ignore_nan (boolean; default True)

Returns:

x – (x-mean(x))/mean(x)

Return type:

np.ndarray

neurotools.signal.span(data)[source]

Get the range of values (min,max) spanned by a dataset

Parameters:

data (np.array)

Returns:

span – np.max(data)-np.min(data)

Return type:

non-negative number

neurotools.signal.mintomax(x, prefix=None, doprint=True)[source]
neurotools.signal.unit_length(x, axis=0)[source]

Interpret given axis of multidimensional array as vectors, and normalize them to unit length.

Parameters:
  • x (np.array)

  • axis (int or tuple, default 0)

Returns:

u – vectors in x normalized to unit length

Return type:

np.array

neurotools.signal.spaced_derivative(x)[source]

Differentiate a 1D timeseries returning a new vector with the same number of samples. This smoothly interpolates between a forward difference at the start of the signal and a backward difference at the end of the signal.

Parameters:

x (1D np.float32) – Signal to differentiate

Returns:

dx

Return type:

1D np.float32

neurotools.signal.upsample(x, factor=4)[source]

Uses the Fourier transform to upsample x by some factor.

Operations:

  1. remove the linear trend

  2. mirror the signal to get a reflected boundary

  3. take the Fourier transform

  4. add padding zeros to the FFT to effectively upsample

  5. take the inverse Fourier transform

  6. remove the mirroring

  7. restore the linear trend

Parameters:
  • factor (int) – Integer upsampling factor. Default is 4.

  • x (array-like) – X is cast to float64 before processing. Complex values are not supported.

Returns:

x – upsampled x

Return type:

array
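
A sketch of the steps listed above, using scipy.signal.resample for the FFT zero-padding (steps 3-5); the detrending and mirroring details are assumptions for illustration:

    import numpy as np
    from scipy.signal import detrend, resample

    def upsample_sketch(x, factor=4):
        x = np.asarray(x, dtype=np.float64)
        trend = x - detrend(x)                       # 1. remove (and remember) the linear trend
        xd = detrend(x)
        xm = np.concatenate([xd, xd[::-1]])          # 2. mirror for a reflected boundary
        ym = resample(xm, len(xm) * factor)          # 3-5. FFT, zero-pad, inverse FFT
        y  = ym[: len(x) * factor]                   # 6. drop the mirrored half
        # 7. restore the linear trend on the finer time grid
        t_up = np.arange(len(y)) / factor
        return y + np.interp(t_up, np.arange(len(x)), trend)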

neurotools.signal.nice_interp(a, b, t)[source]

scipy.interpolate.interp1d with nice defaults

Parameters:
  • a (x values for interpolation)

  • b (y values for interpolation)

  • t (x values to sample at)

Returns:

interpolated values

Return type:

np.array

neurotools.signal.autocorrelation(x, lags=None, center=True, normalize=True)[source]

Computes the normalized autocorrelation over the specified time-lags using convolution. Autocorrelation is normalized such that the zero-lag autocorrelation is 1.

TODO, fix: For long lags it uses FFT, but has a different normalization from the time-domain implementation for short lags. In practice this will not matter.

Parameters:
  • x (1d array) – Data for which to compute autocorrelation function

  • lags (int) – Number of time-lags over which to compute the ACF. Default is min(200, len(x)).

  • center (bool, default True) – Whether to mean-center data before taking autocorrelation

  • normalize (bool, default True) – Whether to normalize by zero-lag signal variance

Returns:

Autocorrelation function, length 2*lags + 1

Return type:

ndarray
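
A time-domain sketch of the normalized autocorrelation (zero-lag value normalized to 1); the FFT path used for long lags is not reproduced here:

    import numpy as np

    def autocorrelation_sketch(x, lags=200, center=True):
        x = np.asarray(x, dtype=float)
        if center:
            x = x - x.mean()
        full = np.correlate(x, x, mode='full')     # all lags, length 2*len(x)-1
        mid  = len(x) - 1                          # position of the zero-lag term
        acf  = full[mid - lags: mid + lags + 1]    # keep 2*lags + 1 values
        return acf / full[mid]                     # normalize so acf[lags] == 1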

neurotools.signal.fftacorr1d(x)[source]

Autocorrelogram via FFT.

Parameters:

x (np.float32)

neurotools.signal.zgrid(L)[source]

2D grid coordinates as complex numbers, ranging from -L/2 to L/2

Parameters:

L (int) – Desired size of LxL grid

Returns:

LxL coordinate grid; center is zero.

Return type:

np.complex64

neurotools.signal.make_lagged(x, NLAGS=5, LAGSPACE=1)[source]

Create shifted/lagged copies of a 1D signal. These are retrospective (causal) features.

Parameters:
  • x (1D np.array length T)

  • NLAGS (positive int; default 5)

  • LAGSPACE (positive int; default 1)

Returns:

result – The first element is the original unshifted signal. Later elements are shifts progressively further back in time.

Return type:

NLAGS×T np.array
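
A sketch of building the lag matrix (row 0 is the unshifted signal, later rows are delayed copies; the zero-padding of the first few samples of each delayed row is an assumption):

    import numpy as np

    def make_lagged_sketch(x, NLAGS=5, LAGSPACE=1):
        x = np.asarray(x)
        T = len(x)
        out = np.zeros((NLAGS, T), dtype=x.dtype)
        for i in range(NLAGS):
            k = i * LAGSPACE                # how far back in time row i looks
            out[i, k:] = x[: T - k]         # x delayed by k samples
        return out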

neurotools.signal.linear_cosine_basis(TIMERES=100, NBASIS=10, normalize=True)[source]

Cosine basis tiling the unit interval

Parameters:
  • TIMERES (int; default 100) – Number of samples used to tile the unit interval.

  • NBASIS (int; default 10) – Number of cosine basis functions to prepare

  • normalize (boolean; default True) – Normalize sum of each basis element to 1?

Returns:

B

Return type:

np.array

neurotools.signal.circular_cosine_basis(N, T)[source]

Periodic raised-cosine basis.

Parameters:
  • N (number of basis functions)

  • T (grid resolution)

Returns:

B – Periodic raised-cosine basis.

Return type:

np.array