Deconvolute
The yass.deconvolute module implements spike deconvolution. yass.deconvolute.legacy contains an old algorithm that will be removed in the future.
yass.deconvolute.run(spike_index, templates, recordings_filename='standarized.bin', function=<function legacy>)

Deconvolute spikes.
Parameters:
- spike_index: numpy.ndarray (n_data, 2), str or pathlib.Path
  A 2D array of all potential spikes whose first column indicates the spike time and whose second column indicates the principal channel, or a path to a .npy file.
- templates: numpy.ndarray (n_channels, waveform_size, n_templates), str or pathlib.Path
  A 3D array with the templates, or a path to a .npy file.
- output_directory: str, optional
  Output directory (relative to CONFIG.data.root_folder) used to load the recordings to generate templates. Defaults to tmp/.
- recordings_filename: str, optional
  Recordings filename (relative to CONFIG.data.root_folder/output_directory) used to draw the waveforms from. Defaults to standarized.bin.
Returns:
- spike_train: numpy.ndarray (n_clear_spikes, 2)
  A 2D array with the spike train; the first column indicates the spike time and the second column the neuron ID.
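As a rough illustration of the documented array shapes, the sketch below builds synthetic stand-ins with NumPy (the sizes and values here are made up for illustration, not real spike data):

```python
import numpy as np

# Illustrative sizes only -- these are not real recording parameters
n_data, n_channels, waveform_size, n_templates = 100, 8, 32, 5

# spike_index: (n_data, 2); column 0 = spike time, column 1 = principal channel
spike_index = np.column_stack([
    np.sort(np.random.randint(0, 20000, size=n_data)),  # spike times (samples)
    np.random.randint(0, n_channels, size=n_data),      # principal channels
])

# templates: (n_channels, waveform_size, n_templates)
templates = np.random.randn(n_channels, waveform_size, n_templates)

print(spike_index.shape)  # (100, 2)
print(templates.shape)    # (8, 32, 5)
```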
Examples
```python
import logging

import numpy as np

import yass
from yass import preprocess
from yass import detect
from yass import cluster
from yass import templates
from yass import deconvolute

np.random.seed(0)

# configure logging module to get useful information
logging.basicConfig(level=logging.INFO)

# set yass configuration parameters
yass.set_config('config_sample.yaml', 'deconv-example')

standarized_path, standarized_params, whiten_filter = preprocess.run()

(spike_index_clear, spike_index_all) = detect.run(standarized_path,
                                                  standarized_params,
                                                  whiten_filter)

spike_train_clear, tmp_loc, vbParam = cluster.run(spike_index_clear)

(templates_, spike_train, groups,
 idx_good_templates) = templates.run(spike_train_clear, tmp_loc)

spike_train = deconvolute.run(spike_index_all, templates_)
```
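Per the parameter descriptions above, spike_index and templates may also be given as paths to .npy files rather than in-memory arrays. A minimal sketch of preparing such files with NumPy (the file names and the commented-out call are hypothetical, mirroring the documented signature):

```python
import tempfile
from pathlib import Path

import numpy as np

# Write arrays to .npy files; the docs state deconvolute.run accepts
# these paths (str or pathlib.Path) in place of in-memory arrays
tmp = Path(tempfile.mkdtemp())
np.save(tmp / 'spike_index_all.npy', np.zeros((10, 2), dtype=int))
np.save(tmp / 'templates.npy', np.zeros((8, 32, 5)))

# Hypothetical call, assuming a valid yass config has been set:
# spike_train = deconvolute.run(tmp / 'spike_index_all.npy',
#                               tmp / 'templates.npy')

# Round-trip check that the saved files carry the documented shapes
loaded = np.load(tmp / 'templates.npy')
print(loaded.shape)  # (8, 32, 5)
```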