Getting started
Using YASS pre-built pipelines
YASS configuration file
YASS is configured using a YAML file; below is an example of such a configuration:
########################################################
# YASS configuration example (all sections and values) #
########################################################
data:
  # project's root folder, data will be loaded and saved here
  # can be an absolute or relative path
  root_folder: data/retina/
  # recordings filename (must be a binary file), details about the recordings
  # are specified in the recordings section
  recordings: data.bin
  # channel geometry filename, supports txt (one x, y pair per line,
  # separated by spaces) or a npy file with shape (n_channels, 2),
  # where every row contains an x, y pair. See yass.geometry.parse for details
  geometry: geometry.npy

resources:
  # maximum memory per batch allowed (only relevant for the preprocess
  # and detection steps, which perform batch processing)
  max_memory: 200MB
  # maximum GPU memory per batch allowed (only relevant for the detection
  # step, which uses TensorFlow if a GPU is available)
  max_memory_gpu: 1GB
  # number of processes to use for operations that support parallel execution;
  # 'max' uses all cores, an int uses that many cores
  processes: max

recordings:
  # precision of the recording, must be a valid numpy dtype
  dtype: int16
  # sampling rate (in Hz)
  sampling_rate: 20000
  # number of channels
  n_channels: 49
  # spatial radius within which two channels are considered neighbors, see
  # yass.geometry.find_channel_neighbors for details
  spatial_radius: 70
  # temporal length of waveforms in ms
  spike_size_ms: 1.5
  # recordings order, one of ('channels', 'samples'). In a dataset with k
  # observations per channel and j channels: 'channels' means the first k
  # contiguous observations come from channel 0, then channel 1, and so on;
  # 'samples' means the first j contiguous values are the first observation
  # from all channels, then the second observation from all channels, and so on
  order: samples
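To make the recordings options concrete, the following numpy sketch builds a toy recording in both layouts and writes the two input files the data section refers to. The sizes, values, and filenames are illustrative only and not part of YASS:

```python
import numpy as np

# toy recording: 3 channels (j), 4 observations per channel (k)
n_channels, n_obs = 3, 4
data = np.arange(n_channels * n_obs, dtype=np.int16).reshape(n_channels, n_obs)

# 'channels' order: all k observations from channel 0, then channel 1, ...
channels_order = data.flatten()

# 'samples' order: observation 0 from every channel, then observation 1, ...
samples_order = data.T.flatten()

# the binary file named by the recordings key (here in 'samples' order)
samples_order.tofile('data.bin')

# a geometry file: one (x, y) pair per channel, shape (n_channels, 2)
geometry = np.array([[0.0, 0.0], [0.0, 70.0], [0.0, 140.0]])
np.save('geometry.npy', geometry)

# spike_size_ms relates to samples via the sampling rate:
# 1.5 ms at 20000 Hz is 30 samples per waveform
spike_size_samples = int(1.5 * 20000 / 1000)
```

Note that the dtype used to write the binary file must match the dtype declared in the configuration, otherwise the recording will be read incorrectly.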
If you want to use a neural network as the detector, you need to provide your own trained network; YASS provides tools for training the model easily, see this tutorial for details.
If you do not want to use a neural network, you can use the threshold detector instead.
For details regarding the configuration file, see YASS configuration file.
Running YASS from the command line
After installing yass, you can sort spikes from the command line:
yass sort path/to/config.yaml
Run the following command for more information:
yass sort --help
Running YASS in a Python script
import logging
import numpy as np
import yass
from yass import preprocess
from yass import detect
from yass import cluster
from yass import templates
from yass import deconvolute
np.random.seed(0)
# configure logging module to get useful information
logging.basicConfig(level=logging.INFO)
# set yass configuration parameters
yass.set_config('config_sample.yaml', 'deconv-example')
standarized_path, standarized_params, whiten_filter = preprocess.run()
(spike_index_clear,
spike_index_all) = detect.run(standarized_path,
standarized_params,
whiten_filter)
spike_train_clear, tmp_loc, vbParam = cluster.run(spike_index_clear)
(templates_, spike_train,
groups, idx_good_templates) = templates.run(
spike_train_clear, tmp_loc)
spike_train = deconvolute.run(spike_index_all, templates_)
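Once the pipeline finishes, you may want to post-process the resulting spike train. The sketch below assumes (and this is an assumption for illustration, not confirmed by this guide; check the YASS documentation for the exact format) that spike_train is an (n_spikes, 2) array whose columns are spike time in samples and unit id:

```python
import numpy as np

# hypothetical spike train: columns assumed to be (time in samples, unit id)
spike_train = np.array([[120, 0],
                        [340, 2],
                        [955, 0]])

sampling_rate = 20000  # Hz, matching the configuration example above

# convert spike times from samples to seconds
spike_times_sec = spike_train[:, 0] / sampling_rate

# select the spikes assigned to one unit
unit_0_times = spike_times_sec[spike_train[:, 1] == 0]
```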
Advanced usage
yass sort is a wrapper around the code in yass.pipeline.run; it provides a pipeline implementation with sensible defaults, but it cannot be customized. If you want to use experimental features, the only way to do so is to build your own pipeline:
"""
Example for creating a custom YASS pipeline
"""
import logging
import numpy as np
import yass
from yass import preprocess
from yass import detect
from yass import cluster
from yass import templates
from yass import deconvolute
from yass.preprocess.experimental import run as experimental_run
from yass.detect import nnet
from yass.detect import nnet_experimental
from yass.detect import threshold
# just for reproducibility
np.random.seed(0)
# configure logging module to get useful information
logging.basicConfig(level=logging.INFO)
# set yass configuration parameters
yass.set_config('config.yaml', 'custom-example')
# run standardization using the stable implementation (the default)
(standarized_path, standarized_params,
whiten_filter) = preprocess.run()
# ...or using the experimental code (see source code for details)
(standarized_path, standarized_params,
whiten_filter) = preprocess.run(function=experimental_run)
# run detection using threshold detector
(spike_index_clear,
spike_index_all) = detect.run(standarized_path,
standarized_params,
whiten_filter,
function=threshold.run)
# ...or using the neural network detector (see the source code for details
# on changing the network to use)
(spike_index_clear,
spike_index_all) = detect.run(standarized_path,
standarized_params,
whiten_filter,
function=nnet.run)
# ...or using the experimental neural network detector (see the source code
# for details on changing the network to use and on the differences
# between this and the stable implementation)
(spike_index_clear,
spike_index_all) = detect.run(standarized_path,
standarized_params,
whiten_filter,
function=nnet_experimental.run)
# the rest of the pipeline is the same; customize it by passing
# different functions
spike_train_clear, tmp_loc, vbParam = cluster.run(spike_index_clear)
(templates_, spike_train,
groups, idx_good_templates) = templates.run(
spike_train_clear, tmp_loc)
spike_train = deconvolute.run(spike_index_all, templates_)