Cluster

Clustering spikes

yass.cluster.run(*args, **kwargs)

Spike clustering

Parameters:
scores: numpy.ndarray (n_spikes, n_features, n_channels), str or Path

3D array with the scores for the clear spikes: the first dimension is the number of spikes, the second the number of features, and the third the number of channels. Or a path to an npy file (see the shape sketch after this parameter list).

spike_index: numpy.ndarray (n_clear_spikes, 2), str or Path

2D array with indices for the spikes: the first column contains the spike location in the recording and the second the main channel (the channel whose amplitude is maximum). Or a path to an npy file.

output_directory: str, optional

Location to store/look for the generated spike train, relative to CONFIG.data.root_folder

if_file_exists: str, optional

One of ‘overwrite’, ‘abort’, ‘skip’. Controls the behavior for the spike_train_cluster.npy file. If ‘overwrite’, it replaces the file if it exists; if ‘abort’, it raises a ValueError exception if the file exists; if ‘skip’, it skips the operation if the file exists (and returns the stored file).

save_results: bool, optional

Whether to save the spike train to disk (in CONFIG.data.root_folder/output_directory/spike_train_cluster.npy), defaults to False
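
The sketch below (not part of the library) illustrates the expected layout of the two array inputs; the sizes are hypothetical and chosen only for illustration:

import numpy as np

# hypothetical sizes, for illustration only
n_spikes, n_features, n_channels = 5000, 3, 49

# scores: one (n_features, n_channels) score matrix per clear spike
scores = np.random.rand(n_spikes, n_features, n_channels)

# spike_index: column 0 is the spike location (sample index in the
# recording), column 1 is the main channel (channel with maximum amplitude)
spike_index = np.zeros((n_spikes, 2), dtype='int64')
spike_index[:, 0] = np.sort(np.random.randint(0, 20_000_000, n_spikes))
spike_index[:, 1] = np.random.randint(0, n_channels, n_spikes)

assert scores.shape == (n_spikes, n_features, n_channels)
assert spike_index.shape == (n_spikes, 2)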

Returns:
spike_train: (TODO add documentation)

Examples

import logging

import yass
from yass import preprocess
from yass import detect
from yass import cluster

# configure logging module to get useful information
logging.basicConfig(level=logging.INFO)

# set yass configuration parameters
yass.set_config('config_sample.yaml')

(standarized_path, standarized_params, channel_index,
 whiten_filter) = preprocess.run()

(score, spike_index_clear,
 spike_index_all) = detect.run(standarized_path,
                               standarized_params,
                               channel_index,
                               whiten_filter)


spike_train_clear, tmp_loc, vbParam = cluster.run(
    score, spike_index_clear)
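
Because scores and spike_index may also be passed as paths to npy files, clustering can be run directly from files saved by a previous step. The file names below are hypothetical; the optional arguments follow the parameter descriptions above:

# alternative sketch: pass paths to previously saved npy files instead of
# in-memory arrays (file names are hypothetical)
spike_train_clear, tmp_loc, vbParam = cluster.run(
    'scores_clear.npy',
    'spike_index_clear.npy',
    output_directory='tmp/',
    if_file_exists='skip',   # reuse spike_train_cluster.npy if it already exists
    save_results=True)       # write spike_train_cluster.npy to disk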