diff --git a/.gitignore b/.gitignore index db8d64aa3..1d06306d5 100644 --- a/.gitignore +++ b/.gitignore @@ -165,7 +165,6 @@ ehthumbs_vista.db Desktop.ini #dev and comparison -/bin #everything from test except the one the user want to send back and an example one. caiman/tests/comparison/tests/ !caiman/tests/comparison/tests/example/* diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 8b300209f..06a8cc57c 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -31,7 +31,7 @@ The workflow for contributing to Caiman is roughly illustrated by the numbers in Below we have instructions on how to do all of the above steps. While all of this may seem like a lot, some of the steps are extremely simple. Also, once you have done it once, you will have the recipe and it will be pretty easy. Finally, it is a very rewarding experience to contribute to an open source project -- we hope you'll take the plunge! ## First, create a dedicated development environment -If you have downloaded Caiman for standard use, you probably installed it using `conda` or `mamba` as described on the README page. As a contributor, you will want to set up a dedicated development environment. This means you will be setting up a version of Caiman you will edit and tweak, uncoupled from your main installation for everyday use. To set up a development environment so you can follow the worflow outlined above, do the following: +If you have downloaded Caiman for standard use, you probably installed it using `conda` or `mamba` as described on the README page. As a contributor, you will want to set up a dedicated development environment. This means you will be setting up a version of Caiman you will edit and tweak, uncoupled from your main installation for everyday use. To set up a development environment so you can follow the workflow outlined above, do the following: 1. Fork and clone the caiman repository Go to the [Caiman repo](https://github.com/flatironinstitute/CaImAn) and hit the `Fork` button at the top right of the page. You now have Caiman on your own GitHub page! On your computer, in your conda prompt, go to a directory where you want Caiman to download, and clone your personal Caiman repo: `git clone https://github.com/<your-username>/CaImAn.git` where `<your-username>` is replaced by your github username. diff --git a/Jenkinsfile b/Jenkinsfile index b7e0ca0ae..92e5c8506 100644 --- a/Jenkinsfile +++ b/Jenkinsfile @@ -94,7 +94,7 @@ Check console output at $BUILD_URL to view full results.
Building $BRANCH_NAME for $CAUSE $JOB_DESCRIPTION -Chages: +Changes: $CHANGES End of build log: diff --git a/bin/caiman_gui.py b/bin/caiman_gui.py index fd21e3baf..abc1121af 100755 --- a/bin/caiman_gui.py +++ b/bin/caiman_gui.py @@ -109,11 +109,11 @@ def selectionchange(self,i): estimates.Cn = cm.local_correlations(mov, swap_dim=False) #Cn = estimates.Cn -# We rotate our components 90 degrees right because of incompatiability of pyqtgraph and pyplot +# We rotate our components 90 degrees right because of incompatibility of pyqtgraph and pyplot def rotate90(img, right=None, vector=None, sparse=False): # rotate the img 90 degrees # we first transpose the img then flip axis - # If right is ture, then rotate 90 degrees right, otherwise, rotate left + # If right is true, then rotate 90 degrees right, otherwise, rotate left # If vector is True, then we first reshape the spatial 1D vector to 2D then rotate # If vector is False, then we directly rotate the matrix global dims diff --git a/caiman/base/movies.py b/caiman/base/movies.py index cb9d79883..ff860be82 100644 --- a/caiman/base/movies.py +++ b/caiman/base/movies.py @@ -129,7 +129,7 @@ def motion_correct(self, Returns: - self: motion corected movie, it might change the object itself + self: motion corrected movie, it might change the object itself shifts : tuple, contains x & y shifts and correlation with template @@ -208,7 +208,7 @@ def motion_correct_3d(self, Returns: - self: motion corected movie, it might change the object itself + self: motion corrected movie, it might change the object itself shifts : tuple, contains x, y, and z shifts and correlation with template @@ -535,7 +535,7 @@ def removeBL(self, windowSize:int=100, quantilMin:int=8, in_place:bool=False, re window size over which to compute the baseline (the larger the faster the algorithm and the less granular quantilMin: float - percentil to be used as baseline value + percentile to be used as baseline value in_place: bool update movie in place returnBL: @@ -764,7 +764,7 @@ def IPCA(self, components: int = 50, batch: int = 1000) -> tuple[np.ndarray, np. 
frame_size = h * w frame_samples = np.reshape(self, (num_frames, frame_size)).T - # run IPCA to approxiate the SVD + # run IPCA to approximate the SVD ipca_f = sklearn.decomposition.IncrementalPCA(n_components=components, batch_size=batch) ipca_f.fit(frame_samples) @@ -1783,7 +1783,7 @@ def sbxread(filename: str, k: int = 0, n_frames=np.inf) -> np.ndarray: # Determine number of frames in whole file max_idx = os.path.getsize(filename + '.sbx') / info['recordsPerBuffer'] / info['sz'][1] * factor / 4 - 1 - # Paramters + # Parameters N = max_idx + 1 # Last frame N = np.minimum(N, n_frames) @@ -1834,7 +1834,7 @@ def sbxreadskip(filename: str, subindices: slice) -> np.ndarray: # Determine number of frames in whole file max_idx = int(os.path.getsize(filename + '.sbx') / info['recordsPerBuffer'] / info['sz'][1] * factor / 4 - 1) - # Paramters + # Parameters if isinstance(subindices, slice): if subindices.start is None: start = 0 diff --git a/caiman/base/rois.py b/caiman/base/rois.py index aa86908fe..1d0ca4b61 100644 --- a/caiman/base/rois.py +++ b/caiman/base/rois.py @@ -720,7 +720,7 @@ def distance_masks(M_s:list, cm_s: list[list], max_dist: float, enclosed_thr: Op for gt_comp, test_comp, cmgt_comp, cmtest_comp in zip(M_s[:-1], M_s[1:], cm_s[:-1], cm_s[1:]): # todo : better with a function that calls itself - # not to interfer with M_s + # not to interfere with M_s gt_comp = gt_comp.copy()[:, :] test_comp = test_comp.copy()[:, :] @@ -753,8 +753,8 @@ def distance_masks(M_s:list, cm_s: list[list], max_dist: float, enclosed_thr: Op # if we don't have even a union this is pointless if union > 0: - # intersection is removed from union since union contains twice the overlaping area - # having the values in this format 0-1 is helpfull for the hungarian algorithm that follows + # intersection is removed from union since union contains twice the overlapping area + # having the values in this format 0-1 is helpful for the hungarian algorithm that follows D[i, j] = 1 - 1. * intersection / \ (union - intersection) if enclosed_thr is not None: @@ -791,7 +791,7 @@ def find_matches(D_s, print_assignment: bool = False) -> tuple[list, list]: matches.append(indexes) DD = D.copy() total = [] - # we want to extract those informations from the hungarian algo + # we want to extract that information from the hungarian algo for row, column in indexes2: value = DD[row, column] if print_assignment: @@ -1095,7 +1095,7 @@ def extract_binary_masks_blob(A, Args: A: scipy.sparse matrix - contains the components as outputed from the CNMF algorithm + contains the components as outputted from the CNMF algorithm neuron_radius: float neuronal radius employed in the CNMF settings (gSiz) diff --git a/caiman/behavior/behavior.py b/caiman/behavior/behavior.py index 9d9b1abcd..2f0a49dc0 100644 --- a/caiman/behavior/behavior.py +++ b/caiman/behavior/behavior.py @@ -178,7 +178,7 @@ def compute_optical_flow(m, mask selecting relevant pixels polar_coord: boolean - wheather to return the coordinate in polar coordinates (or cartesian) + whether to return the coordinate in polar coordinates (or cartesian) do_show: bool show flow movie diff --git a/caiman/cluster.py b/caiman/cluster.py index 7a01d4dff..d4e0068ef 100644 --- a/caiman/cluster.py +++ b/caiman/cluster.py @@ -386,14 +386,14 @@ def setup_cluster(backend:str = 'multiprocessing', 'You have configured the cluster setup to not raise an exception.') else: raise Exception( - 'A cluster is already runnning.
Terminate with dview.terminate() if you want to restart.') + 'A cluster is already running. Terminate with dview.terminate() if you want to restart.') if platform.system() == 'Darwin': try: if 'kernel' in get_ipython().trait_names(): # type: ignore # If you're on OSX and you're running under Jupyter or Spyder, # which already run the code in a forkserver-friendly way, this # can eliminate some setup and make this a reasonable approach. - # Otherwise, seting VECLIB_MAXIMUM_THREADS=1 or using a different + # Otherwise, setting VECLIB_MAXIMUM_THREADS=1 or using a different # blas/lapack is the way to avoid the issues. # See https://github.com/flatironinstitute/CaImAn/issues/206 for more # info on why we're doing this (for now). diff --git a/caiman/components_evaluation.py b/caiman/components_evaluation.py index 39c7eade6..c6fa78bbf 100644 --- a/caiman/components_evaluation.py +++ b/caiman/components_evaluation.py @@ -64,7 +64,7 @@ def compute_event_exceptionality(traces: np.ndarray, value estimate of the quality of components (the lesser the better) erfc: ndarray - probability at each time step of observing the N consequtive actual trace values given the distribution of noise + probability at each time step of observing the N consecutive actual trace values given the distribution of noise noise_est: ndarray the components ordered according to the fitness @@ -394,10 +394,10 @@ def evaluate_components(Y: np.ndarray, value estimate of the quality of components (the lesser the better) on diff(trace) erfc_raw: ndarray - probability at each time step of observing the N consequtive actual trace values given the distribution of noise on the raw trace + probability at each time step of observing the N consecutive actual trace values given the distribution of noise on the raw trace erfc_raw: ndarray - probability at each time step of observing the N consequtive actual trace values given the distribution of noise on diff(trace) + probability at each time step of observing the N consecutive actual trace values given the distribution of noise on diff(trace) r_values: list float values representing correlation between component and spatial mask obtained by averaging important points diff --git a/caiman/mmapping.py b/caiman/mmapping.py index bb6923716..532ae296c 100644 --- a/caiman/mmapping.py +++ b/caiman/mmapping.py @@ -90,7 +90,7 @@ def save_memmap_each(fnames: list[str], list of path to the filenames dview: ipyparallel dview - used to perform computation in parallel. If none it will be signle thread + used to perform computation in parallel. If none it will be single thread base_name str BaseName for the file to be creates. 
If not given the file itself is used @@ -358,7 +358,7 @@ def save_memmap(filenames:list[str], x,y, and z downsampling factors (0.5 means downsampled by a factor 2) remove_init: int - number of frames to remove at the begining of each tif file + number of frames to remove at the beginning of each tif file (used for resonant scanning images if laser in rutned on trial by trial) idx_xy: tuple size 2 [or 3 for 3D data] @@ -449,11 +449,11 @@ def save_memmap(filenames:list[str], if slices is not None: Yr = Yr[tuple(slices)] else: - if idx_xy is None: #todo remove if not used, superceded by the slices parameter + if idx_xy is None: #todo remove if not used, superseded by the slices parameter Yr = Yr[remove_init:] - elif len(idx_xy) == 2: #todo remove if not used, superceded by the slices parameter + elif len(idx_xy) == 2: #todo remove if not used, superseded by the slices parameter Yr = Yr[remove_init:, idx_xy[0], idx_xy[1]] - else: #todo remove if not used, superceded by the slices parameter + else: #todo remove if not used, superseded by the slices parameter Yr = Yr[remove_init:, idx_xy[0], idx_xy[1], idx_xy[2]] else: diff --git a/caiman/motion_correction.py b/caiman/motion_correction.py index 4f77e6bcd..aea84698c 100644 --- a/caiman/motion_correction.py +++ b/caiman/motion_correction.py @@ -113,7 +113,7 @@ def __init__(self, fname, min_mov=None, dview=None, max_shifts=(6, 6), niter_rig will quickly initialize a template with the first frames splits_rig': int - for parallelization split the movies in num_splits chuncks across time + for parallelization split the movies in num_splits chunks across time num_splits_to_process_rig: list, if none all the splits are processed and the movie is saved, otherwise at each iteration @@ -123,13 +123,13 @@ def __init__(self, fname, min_mov=None, dview=None, max_shifts=(6, 6), niter_rig intervals at which patches are laid out for motion correction overlaps: tuple - overlap between pathes (size of patch strides+overlaps) + overlap between patches (size of patch strides+overlaps) pw_rigig: bool, default: False flag for performing motion correction when calling motion_correct splits_els':list - for parallelization split the movies in num_splits chuncks across time + for parallelization split the movies in num_splits chunks across time num_splits_to_process_els: list, if none all the splits are processed and the movie is saved otherwise at each iteration @@ -404,7 +404,7 @@ def apply_shifts_movie(self, fname, rigid_shifts:bool=None, save_memmap:bool=Fal rigid_shifts: bool (True) apply rigid or pw-rigid shifts (must exist in the mc object) - deprectated (read directly from mc.pw_rigid) + deprecated (read directly from mc.pw_rigid) save_memmap: bool (False) flag for saving the resulting file in memory mapped form @@ -2118,7 +2118,7 @@ def tile_and_correct(img, template, strides, overlaps, max_shifts, newoverlaps=N strides of the patches in which the FOV is subdivided overlaps: tuple - amount of pixel overlaping between patches along each dimension + amount of pixel overlapping between patches along each dimension max_shifts: tuple max shifts in x and y @@ -2127,7 +2127,7 @@ def tile_and_correct(img, template, strides, overlaps, max_shifts, newoverlaps=N strides between patches along each dimension when upsampling the vector fields newoverlaps:tuple - amount of pixel overlaping between patches along each dimension when upsampling the vector fields + amount of pixel overlapping between patches along each dimension when upsampling the vector fields 
upsample_factor_grid: int if newshapes or newstrides are not specified this is inferred upsampling by a constant factor the cvector field @@ -2306,7 +2306,7 @@ def tile_and_correct(img, template, strides, overlaps, max_shifts, newoverlaps=N new_img = new_img / normalizer - else: # in case the difference in shift between neighboring patches is larger than 0.5 pixels we do not interpolate in the overlaping area + else: # in case the difference in shift between neighboring patches is larger than 0.5 pixels we do not interpolate in the overlapping area half_overlap_x = int(newoverlaps[0] / 2) half_overlap_y = int(newoverlaps[1] / 2) for (x, y), (idx_0, idx_1), im, (_, _), weight_mat in zip(start_step, xy_grid, imgs, total_shifts, weight_matrix): @@ -2364,7 +2364,7 @@ def tile_and_correct_3d(img:np.ndarray, template:np.ndarray, strides:tuple, over strides of the patches in which the FOV is subdivided overlaps: tuple - amount of pixel overlaping between patches along each dimension + amount of pixel overlapping between patches along each dimension max_shifts: tuple max shifts in x, y, and z @@ -2373,7 +2373,7 @@ def tile_and_correct_3d(img:np.ndarray, template:np.ndarray, strides:tuple, over strides between patches along each dimension when upsampling the vector fields newoverlaps:tuple - amount of pixel overlaping between patches along each dimension when upsampling the vector fields + amount of pixel overlapping between patches along each dimension when upsampling the vector fields upsample_factor_grid: int if newshapes or newstrides are not specified this is inferred upsampling by a constant factor the cvector field @@ -2570,7 +2570,7 @@ def tile_and_correct_3d(img:np.ndarray, template:np.ndarray, strides:tuple, over new_img = new_img / normalizer - else: # in case the difference in shift between neighboring patches is larger than 0.5 pixels we do not interpolate in the overlaping area + else: # in case the difference in shift between neighboring patches is larger than 0.5 pixels we do not interpolate in the overlapping area half_overlap_x = int(newoverlaps[0] / 2) half_overlap_y = int(newoverlaps[1] / 2) half_overlap_z = int(newoverlaps[2] / 2) @@ -2964,7 +2964,7 @@ def motion_correct_batch_pwrigid(fname, max_shifts, strides, overlaps, add_to_mo list of produced templates, one per batch shifts: list - inferred rigid shifts to corrrect the movie + inferred rigid shifts to correct the movie Raises: Exception 'You need to initialize the template with a good estimate. 
See the motion' diff --git a/caiman/source_extraction/cnmf/cnmf.py b/caiman/source_extraction/cnmf/cnmf.py index 520f6e70a..0233dd349 100644 --- a/caiman/source_extraction/cnmf/cnmf.py +++ b/caiman/source_extraction/cnmf/cnmf.py @@ -392,7 +392,7 @@ def fit_file(self, motion_correct=False, indices=None, include_eval=False): def refit(self, images, dview=None): """ - Refits the data using CNMF initialized from a previous interation + Refits the data using CNMF initialized from a previous iteration Args: images diff --git a/caiman/source_extraction/cnmf/deconvolution.py b/caiman/source_extraction/cnmf/deconvolution.py index da459327f..6d185c63a 100644 --- a/caiman/source_extraction/cnmf/deconvolution.py +++ b/caiman/source_extraction/cnmf/deconvolution.py @@ -151,7 +151,7 @@ def constrained_foopsi(fluor, bl=None, c1=None, g=None, sn=None, p=None, metho c1 = c[0] - # remove intial calcium to align with the other foopsi methods + # remove initial calcium to align with the other foopsi methods # it is added back in function constrained_foopsi_parallel of temporal.py c -= c1 * g**np.arange(len(fluor)) elif p == 2: @@ -323,7 +323,7 @@ def cvxpy_foopsi(fluor, g, sn, b=None, c1=None, bas_nonneg=True, solvers=None): should the baseline be estimated solvers: tuple of two strings - primary and secondary solvers to be used. Can be choosen between ECOS, SCS, CVXOPT + primary and secondary solvers to be used. Can be chosen between ECOS, SCS, CVXOPT Returns: c: estimated calcium trace @@ -332,7 +332,7 @@ def cvxpy_foopsi(fluor, g, sn, b=None, c1=None, bas_nonneg=True, solvers=None): c1: esimtated initial calcium value - g: esitmated parameters of the autoregressive model + g: estimated parameters of the autoregressive model sn: estimated noise level @@ -501,7 +501,7 @@ def _nnls(KK, Ky, s=None, mask=None, tol=1e-9, max_iter=None): w = np.argmax(l) P[w] = True - try: # likely unnnecessary try-except-clause for robustness sake + try: # likely unnecessary try-except-clause for robustness sake #mu = np.linalg.inv(KK[P][:, P]).dot(Ky[P]) mu = np.linalg.solve(KK[P][:, P], Ky[P]) except: @@ -552,7 +552,7 @@ def onnls(y, g, lam=0, shift=100, window=None, mask=None, tol=1e-9, max_iter=Non shift : int, optional, default 100 Number of frames by which to shift window from on run of NNLS to the next. - window : int, optional, default None (200 or larger dependend on g) + window : int, optional, default None (200 or larger dependent on g) Window size. mask : array of bool, shape (n,), optional, default (True,)*n @@ -671,7 +671,7 @@ def constrained_oasisAR2(y, g, sn, optimize_b=True, b_nonneg=True, optimize_g=0, shift : int, optional, default 100 Number of frames by which to shift window from on run of NNLS to the next. - window : int, optional, default None (200 or larger dependend on g) + window : int, optional, default None (200 or larger dependent on g) Window size. 
tol : float, optional, default 1e-9 diff --git a/caiman/source_extraction/cnmf/estimates.py b/caiman/source_extraction/cnmf/estimates.py index ee29de2a0..8c48a4ada 100644 --- a/caiman/source_extraction/cnmf/estimates.py +++ b/caiman/source_extraction/cnmf/estimates.py @@ -473,7 +473,7 @@ def nb_view_components_3d(self, Yr=None, image_type='mean', dims=None, image_type: 'mean'|'max'|'corr' image to be overlaid to neurons (average of shapes, - maximum of shapes or nearest neigbor correlation of raw data) + maximum of shapes or nearest neighbor correlation of raw data) max_projection: bool plot max projection along specified axis if True, o/w plot layers @@ -981,7 +981,7 @@ class label for neuron shapes Returns: self: Estimates object self.idx_components contains the indeced of components above - the required treshold. + the required threshold. """ dims = params.get('data', 'dims') gSig = params.get('init', 'gSig') @@ -1415,7 +1415,7 @@ def remove_small_large_neurons(self, min_size_neuro, max_size_neuro, Returns: neurons_to_keep: np.array - indeces of components with size within the acceptable range + indices of components with size within the acceptable range ''' if self.A_thr is None: raise Exception('You need to compute thresholded components before calling remove_duplicates: use the threshold_components method') diff --git a/caiman/source_extraction/cnmf/initialization.py b/caiman/source_extraction/cnmf/initialization.py index b4730126f..cc6e7eeab 100644 --- a/caiman/source_extraction/cnmf/initialization.py +++ b/caiman/source_extraction/cnmf/initialization.py @@ -155,7 +155,7 @@ def initialize_components(Y, K=30, gSig=[5, 5], gSiz=None, ssub=1, tsub=1, nIter SC_kernel='heat', SC_sigma=1, SC_thr=0, SC_normalize=True, SC_use_NN=False, SC_nnn=20, lambda_gnmf=1): """ - Initalize components. This function initializes the spatial footprints, temporal components, + Initialize components. This function initializes the spatial footprints, temporal components, and background which are then further refined by the CNMF iterations. There are four different initialization methods depending on the data you're processing: 'greedy_roi': GreedyROI method used in standard 2p processing (default) @@ -274,7 +274,7 @@ def initialize_components(Y, K=30, gSig=[5, 5], gSiz=None, ssub=1, tsub=1, nIter (d1 * d2 [ * d3]) x nb, initialization of spatial background. 
fin: np.ndarray - nb x T matrix, initalization of temporal background + nb x T matrix, initialization of temporal background Raises: Exception "Unsupported method" @@ -315,11 +315,11 @@ def initialize_components(Y, K=30, gSig=[5, 5], gSiz=None, ssub=1, tsub=1, nIter if method == 'corr_pnr': logging.info("Spatial/Temporal downsampling 1-photon") - # this icrements the performance against ground truth and solves border problems + # this improves the performance against ground truth and solves border problems Y_ds = downscale(Y, tuple([ssub] * len(d) + [tsub]), opencv=False) else: logging.info("Spatial/Temporal downsampling 2-photon") - # this icrements the performance against ground truth and solves border problems + # this improves the performance against ground truth and solves border problems Y_ds = downscale(Y, tuple([ssub] * len(d) + [tsub]), opencv=True) # mean_val = np.mean(Y) # Y_ds = downscale_local_mean(Y, tuple([ssub] * len(d) + [tsub]), cval=mean_val) @@ -379,7 +379,7 @@ def initialize_components(Y, K=30, gSig=[5, 5], gSiz=None, ssub=1, tsub=1, nIter raise Exception('You need to define arguments for local NMF') else: NumCent = options_local_NMF.pop('NumCent', None) - # Max number of centers to import from Group Lasso intialization - if 0, + # Max number of centers to import from Group Lasso initialization - if 0, # we don't run group lasso cent = GetCentersData(Y_ds.transpose([2, 0, 1]), NumCent) sig = Y_ds.shape[:-1] @@ -547,7 +547,7 @@ def sparseNMF(Y_ds, nr, max_iter_snmf=200, alpha=0.5, sigma_smooth=(.5, .5, .5), smoothing along z,x, and y (.5,.5,.5) perc_baseline_snmf: - percentile to remove frmo movie before NMF + percentile to remove from movie before NMF nb: int Number of background components @@ -821,7 +821,7 @@ def greedyROI(Y, nr=30, gSig=[5, 5], gSiz=[11, 11], nIter=5, kernel=None, nb=1, # we define a squared size around it ijSig = [[np.maximum(ij[c] - gHalf[c], 0), np.minimum(ij[c] + gHalf[c] + 1, d[c])] for c in range(len(ij))] - # we create an array of it (fl like) and compute the trace like the pixel ij trough time + # we create an array of it (fl like) and compute the trace like the pixel ij through time dataTemp = np.array( Y[tuple([slice(*a) for a in ijSig])].copy(), dtype=np.float32) traceTemp = np.array(np.squeeze(rho[ij]), dtype=np.float32) @@ -920,7 +920,7 @@ def onclick(event): # we define a squared size around it ijSig = [[np.maximum(ij[c] - gHalf[c], 0), np.minimum(ij[c] + gHalf[c] + 1, d[c])] for c in range(len(ij))] - # we create an array of it (fl like) and compute the trace like the pixel ij trough time + # we create an array of it (fl like) and compute the trace like the pixel ij through time dataTemp = np.array( Y[tuple([slice(*a) for a in ijSig])].copy(), dtype=np.float32) traceTemp = np.array(np.squeeze(rho[tuple(ij)]), dtype=np.float32) @@ -978,7 +978,7 @@ def finetune(Y, cin, nIter=5): Y: D1*d2*T*K patches c: array T*K - the inital calcium traces + the initial calcium traces nIter: int True indicates that time is listed in the last axis of Y (matlab format) @@ -1001,7 +1001,7 @@ def finetune(Y, cin, nIter=5): for _ in range(nIter): a = np.maximum(np.dot(Y, cin), 0) a = a / np.sqrt(np.sum(a**2) + np.finfo(np.float32).eps) # compute the l2/a - # c as the variation of thoses patches + # c as the variation of those patches cin = np.sum(Y * a[..., np.newaxis], tuple(np.arange(Y.ndim - 1))) return a, cin @@ -1451,7 +1451,7 @@ def init_neurons_corr_pnr(data, max_number=None, gSiz=15, gSig=None, S: np.ndarray (K*T) deconvolved calcium traces of all
neurons center: np.ndarray - center localtions of all neurons + center locations of all neurons """ if swap_dim: diff --git a/caiman/source_extraction/cnmf/map_reduce.py b/caiman/source_extraction/cnmf/map_reduce.py index d32089b63..5a629a028 100644 --- a/caiman/source_extraction/cnmf/map_reduce.py +++ b/caiman/source_extraction/cnmf/map_reduce.py @@ -25,7 +25,7 @@ def cnmf_patches(args_in): file_name: string full path to an npy file (2D, pixels x time) containing the movie - shape: tuple of thre elements + shape: tuple of three elements dimensions of the original movie across y, x, and time params: @@ -44,18 +44,18 @@ def cnmf_patches(args_in): 'ipyparallel' or 'single_thread' or SLURM n_processes: int - nuber of cores to be used (should be less than the number of cores started with ipyparallel) + number of cores to be used (should be less than the number of cores started with ipyparallel) memory_fact: double unitless number accounting how much memory should be used. - It represents the fration of patch processed in a single thread. + It represents the fraction of patch processed in a single thread. You will need to try different values to see which one would work low_rank_background: bool if True the background is approximated with gnb components. If false every patch keeps its background (overlaps are randomly assigned to one spatial component only) Returns: - A_tot: matrix containing all the componenents from all the patches + A_tot: matrix containing all the components from all the patches C_tot: matrix containing the calcium traces corresponding to A_tot @@ -150,7 +150,7 @@ def run_CNMF_patches(file_name, shape, params, gnb=1, dview=None, memory_fact: double unitless number accounting how much memory should be used. - It represents the fration of patch processed in a single thread. + It represents the fraction of patch processed in a single thread. You will need to try different values to see which one would work border_pix: int diff --git a/caiman/source_extraction/cnmf/merging.py b/caiman/source_extraction/cnmf/merging.py index 01cd42c15..d63be3d99 100644 --- a/caiman/source_extraction/cnmf/merging.py +++ b/caiman/source_extraction/cnmf/merging.py @@ -153,7 +153,7 @@ def merge_components(Y, A, b, C, R, f, S, sn_pix, temporal_params, for ii in range(nr): overlap_indices = A_corr[ii, :].nonzero()[1] if len(overlap_indices) > 0: - # we chesk the correlation of the calcium traces for eahc overlapping components + # we check the correlation of the calcium traces for each overlapping component corr_values = [scipy.stats.pearsonr(C[ii, :], C[jj, :])[ 0] for jj in overlap_indices] C_corr[ii, overlap_indices] = corr_values diff --git a/caiman/source_extraction/cnmf/oasis.pyx b/caiman/source_extraction/cnmf/oasis.pyx index 83d898e02..b1336e0f2 100644 --- a/caiman/source_extraction/cnmf/oasis.pyx +++ b/caiman/source_extraction/cnmf/oasis.pyx @@ -43,7 +43,7 @@ cdef class OASIS: s_min : float, optional, default 0 Minimal non-zero activity within each bin (minimal 'spike size'). b : float, optional, default 0 - Baseline that is substracted. + Baseline that is subtracted. num_empty_samples : int Number of elapsed frames until neuron is added and OASIS initialized g2 : float @@ -65,7 +65,7 @@ cdef class OASIS: r : float Rise factor. Only for AR(2). g12, g11g11, g11g12 : arrays of float - Precomputed quantitites related to the calcium kernel. Only for AR(2). + Precomputed quantities related to the calcium kernel. Only for AR(2).
References ---------- diff --git a/caiman/source_extraction/cnmf/online_cnmf.py b/caiman/source_extraction/cnmf/online_cnmf.py index 5b0b6896c..472f35b0c 100644 --- a/caiman/source_extraction/cnmf/online_cnmf.py +++ b/caiman/source_extraction/cnmf/online_cnmf.py @@ -686,7 +686,7 @@ def fit_next(self, t, frame_in, num_iters_hals=3): self.estimates.downscale_matrix.dot( self.estimates.b0)) - Ab_.T.dot(self.estimates.b0) - # set the update counter to 0 for components that are overlaping the newly added + # set the update counter to 0 for components that are overlapping the newly added idx_overlap = self.estimates.AtA[nb_:-num_added, -num_added:].nonzero()[0] self.update_counter[idx_overlap] = 0 self.t_detect.append(time() - t_new) @@ -1281,7 +1281,7 @@ def fit_online(self, **kwargs): ' contains NaN') if t % 500 == 0: logging.info('Epoch: ' + str(iter + 1) + '. ' + str(t) + - ' frames have beeen processed in total. ' + + ' frames have been processed in total. ' + str(self.N - old_comps) + ' new components were added. Total # of components is ' + str(self.estimates.Ab.shape[-1] - self.params.get('init', 'nb'))) @@ -1556,7 +1556,7 @@ def seeded_initialization(Y, Ain, dims=None, init_batch=1000, order_init=None, g number of background components order_init: list - order of elements to be initalized using rank1 nmf restricted to the support of + order of elements to be initialized using rank1 nmf restricted to the support of each component Output: @@ -1867,7 +1867,7 @@ def init_shapes_and_sufficient_stats(Y, A, C, b, f, W=None, b0=None, ssub_B=1, b A_smooth = np.transpose([gaussian_filter(np.array(a).reshape( dims, order='F'), 0).ravel(order='F') for a in Ab.T]) A_smooth[A_smooth < 1e-2] = 0 - # set explicity zeros of Ab to small value, s.t. ind_A and Ab.indptr match + # explicitly set zeros of Ab to a small value, s.t. ind_A and Ab.indptr match Ab += 1e-6 * A_smooth Ab = csc_matrix(Ab) ind_A = [Ab.indices[Ab.indptr[m]:Ab.indptr[m + 1]] @@ -2251,7 +2251,7 @@ def update_num_components(t, sv, Ab, Cf, Yres_buf, Y_buf, rho_buf, # number of total components (including background) M = np.shape(Ab)[-1] - N = M - gnb # number of coponents (without background) + N = M - gnb # number of components (without background) if corr_img is None: sv -= rho_buf.get_first() diff --git a/caiman/source_extraction/cnmf/params.py b/caiman/source_extraction/cnmf/params.py index 8ea80bfe6..712834521 100644 --- a/caiman/source_extraction/cnmf/params.py +++ b/caiman/source_extraction/cnmf/params.py @@ -45,7 +45,7 @@ def __init__(self, fnames=None, dims=None, dxy=(1, 1), ): """Class for setting the processing parameters. All parameters for CNMF, online-CNMF, quality testing, and motion correction can be set here and then used in the various processing pipeline steps. - The prefered way to set parameters is by using the set function, where a subclass is determined and a + The preferred way to set parameters is by using the set function, where a subclass is determined and a dictionary is passed. The whole dictionary can also be initialized at once by passing a dictionary params_dict when initializing the CNMFParams object. Direct setting of the positional arguments in CNMFParams is only present for backwards compatibility reasons and should not be used if possible. @@ -106,7 +106,7 @@ def __init__(self, fnames=None, dims=None, dxy=(1, 1), (to be used with one background per patch) del_duplicates: bool, default: False - Delete duplicate components in the overlaping regions between neighboring patches. + Delete duplicate components in the overlapping regions between neighboring patches.
If False, + Delete duplicate components in the overlapping regions between neighboring patches. If False, then merging is used. only_init: bool, default: True @@ -712,7 +712,7 @@ def __init__(self, fnames=None, dims=None, dxy=(1, 1), 'nb': gnb, # number of global background components # whether to pixelwise equalize the movies during initialization 'normalize_init': normalize_init, - # dictionary with parameters to pass to local_NMF initializaer + # dictionary with parameters to pass to local_NMF initializer 'options_local_NMF': options_local_NMF, 'perc_baseline_snmf': 20, 'ring_size_factor': ring_size_factor, @@ -1031,7 +1031,7 @@ def __eq__(self, other): def to_dict(self): """Returns the params class as a dictionary with subdictionaries for each - catergory.""" + category.""" return {'data': self.data, 'spatial_params': self.spatial, 'temporal_params': self.temporal, 'init_params': self.init, 'preprocess_params': self.preprocess, 'patch_params': self.patch, 'online': self.online, 'quality': self.quality, diff --git a/caiman/source_extraction/cnmf/pre_processing.py b/caiman/source_extraction/cnmf/pre_processing.py index d5dae11de..548054ce2 100644 --- a/caiman/source_extraction/cnmf/pre_processing.py +++ b/caiman/source_extraction/cnmf/pre_processing.py @@ -4,7 +4,7 @@ """ A set of pre-processing operations in the input dataset: 1. Interpolation of missing data -2. Indentification of saturated pixels +2. Identification of saturated pixels 3. Estimation of noise level for each imaged voxel 4. Estimation of global time constants @@ -306,7 +306,7 @@ def fft_psd_parallel(Y, sn_s, i, num_pixels, **kwargs): res: ndarray(double) noise associated to each pixel psx: ndarray - position of thoses pixels + position of those pixels """ idxs = list(range(i, i + num_pixels)) res = get_noise_fft(Y[idxs], **kwargs) @@ -335,7 +335,7 @@ def fft_psd_multithreading(args): res: ndarray(double) noise associated to each pixel psx: ndarray - position of thoses pixels + position of those pixels """ (Y, i, num_pixels, kwargs) = args @@ -528,7 +528,7 @@ def preprocess_data(Y, sn=None, dview=None, n_pixels_per_process=100, g: np.ndarray (p x 1) Discrete time constants psx: ndarray - position of thoses pixels + position of those pixels sn_s: ndarray (memory mapped) file where to store the results of computation. """ diff --git a/caiman/source_extraction/cnmf/spatial.py b/caiman/source_extraction/cnmf/spatial.py index 1cbb1a909..329dff2d3 100644 --- a/caiman/source_extraction/cnmf/spatial.py +++ b/caiman/source_extraction/cnmf/spatial.py @@ -498,7 +498,7 @@ def threshold_components(A, dims, medw=None, thr_method='max', maxthr=0.1, nrgth ss = np.ones((3,) * len(dims), dtype='uint8') # dims and nm of neurones d, nr = np.shape(A) - # instanciation of A thresh. + # instantiation of A thresh. 
#Ath = np.zeros((d, nr)) pars = [] # for each neurons diff --git a/caiman/source_extraction/cnmf/utilities.py b/caiman/source_extraction/cnmf/utilities.py index b21484fa4..2fb64b758 100644 --- a/caiman/source_extraction/cnmf/utilities.py +++ b/caiman/source_extraction/cnmf/utilities.py @@ -1027,7 +1027,7 @@ def compute_residuals(Yr_mmap_file, A_, b_, C_, f_, dview=None, block_size=1000, number of pixels processed together num_blocks_per_run: int - nnumber of parallel blocks processes + number of blocks processed in parallel Returns: YrA: ndarray @@ -1154,9 +1154,9 @@ def fast_graph_Laplacian(mmap_file, dims, max_radius=10, kernel='heat', else: res = dview.map(fast_graph_Laplacian_pixel, pars, chunksize=128) indptr = np.cumsum(np.array([0] + [len(r[0]) for r in res])) - indeces = [item for sublist in res for item in sublist[0]] + indices = [item for sublist in res for item in sublist[0]] data = [item for sublist in res for item in sublist[1]] - W = scipy.sparse.csr_matrix((data, indeces, indptr), shape=[Np, Np]) + W = scipy.sparse.csr_matrix((data, indices, indptr), shape=[Np, Np]) D = scipy.sparse.spdiags(W.sum(0), 0, Np, Np) L = D - W else: @@ -1215,9 +1215,9 @@ def fast_graph_Laplacian_pixel(pars): [XX, YY] = np.meshgrid(xx, yy) R = np.sqrt(XX**2 + YY**2) R = R.flatten('F') - indeces = np.where(R < max_radius)[0] + indices = np.where(R < max_radius)[0] Y = load_memmap(mmap_file)[0] - Yind = np.array(Y[indeces]) + Yind = np.array(Y[indices]) y = np.array(Y[i, :]) if normalize: Yind -= Yind.mean(1)[:, np.newaxis] @@ -1238,4 +1238,4 @@ def fast_graph_Laplacian_pixel(pars): else: ind = np.where(w>0)[0] - return indeces[ind].tolist(), w[ind].tolist() + return indices[ind].tolist(), w[ind].tolist() diff --git a/caiman/source_extraction/volpy/atm.py b/caiman/source_extraction/volpy/atm.py index 724923551..7eced5d18 100644 --- a/caiman/source_extraction/volpy/atm.py +++ b/caiman/source_extraction/volpy/atm.py @@ -411,7 +411,7 @@ def get_spiketimes(trace1, thresh1, trace2, thresh2, tlimit): times = np.where((trace1[:tlimit] > thresh1[:tlimit]) & (trace2[:tlimit] > thresh2[:tlimit]))[0] - # group neigbours together + # group neighbours together if (times.size > 0): ls = [[times[0]]] for t in times[1:]: diff --git a/caiman/source_extraction/volpy/mrcnn/model.py b/caiman/source_extraction/volpy/mrcnn/model.py index 8f7d68582..2dee5f23f 100644 --- a/caiman/source_extraction/volpy/mrcnn/model.py +++ b/caiman/source_extraction/volpy/mrcnn/model.py @@ -434,7 +434,7 @@ def call(self, inputs): # Keep track of which box is mapped to which level box_to_level.append(ix) - # Stop gradient propogation to ROI proposals + # Stop gradient propagation to ROI proposals level_boxes = tf.stop_gradient(level_boxes) box_indices = tf.stop_gradient(box_indices) @@ -1091,7 +1091,7 @@ def rpn_bbox_loss_graph(config, target_bbox, rpn_match, rpn_bbox): config: the model config object. target_bbox: [batch, max positive anchors, (dy, dx, log(dh), log(dw))]. - Uses 0 padding to fill in unsed bbox deltas. + Uses 0 padding to fill in unused bbox deltas. rpn_match: [batch, anchors, 1]. Anchor match type. 1=positive, -1=negative, 0=neutral anchor. rpn_bbox: [batch, anchors, (dy, dx, log(dh), log(dw))] @@ -1669,7 +1669,7 @@ class DataGenerator(KU.Sequence): the Mask RCNN part without the RPN. detection_targets: If True, generate detection targets (class IDs, bbox deltas, and masks). Typically for debugging or visualizations because - in trainig detection targets are generated by DetectionTargetLayer.
+ in training detection targets are generated by DetectionTargetLayer. Returns a Python iterable. Upon calling __getitem__() on it, the iterable returns two lists, inputs and outputs. The contents @@ -1909,7 +1909,7 @@ def build(self, mode, config): _, C2, C3, C4, C5 = resnet_graph(input_image, config.BACKBONE, stage5=True, train_bn=config.TRAIN_BN) # Top-down Layers - # TODO: add assert to varify feature map sizes match what's in config + # TODO: add assert to verify feature map sizes match what's in config P5 = KL.Conv2D(config.TOP_DOWN_PYRAMID_SIZE, (1, 1), name='fpn_c5p5')(C5) P4 = KL.Add(name="fpn_p4add")([ KL.UpSampling2D(size=(2, 2), name="fpn_p5upsampled")(P5), @@ -2279,10 +2279,10 @@ def train(self, train_dataset, val_dataset, learning_rate, epochs, layers, train_dataset, val_dataset: Training and validation Dataset objects. learning_rate: The learning rate to train with epochs: Number of training epochs. Note that previous training epochs - are considered to be done alreay, so this actually determines - the epochs to train in total rather than in this particaular + are considered to be done already, so this actually determines + the epochs to train in total rather than in this particular call. - layers: Allows selecting wich layers to train. It can be: + layers: Allows selecting which layers to train. It can be: - A regular expression to match layer names to train - One of these predefined values: heads: The RPN, classifier and mask heads of the network diff --git a/caiman/source_extraction/volpy/mrcnn/neurons.py b/caiman/source_extraction/volpy/mrcnn/neurons.py index 958483ca4..6e59c2c3b 100644 --- a/caiman/source_extraction/volpy/mrcnn/neurons.py +++ b/caiman/source_extraction/volpy/mrcnn/neurons.py @@ -69,7 +69,7 @@ class NeuronsConfig(Config): # Length of square anchor side in pixels RPN_ANCHOR_SCALES = (16, 32, 64, 128, 256) #(8, 16, 32, 64, 128) - # ROIs kept after non-maximum supression (training and inference) + # ROIs kept after non-maximum suppression (training and inference) POST_NMS_ROIS_TRAINING = 1000 POST_NMS_ROIS_INFERENCE = 2000 @@ -158,7 +158,7 @@ def load_neurons(self, dataset_dir, subset): # load_mask() needs the image size to convert polygons to masks. # Unfortunately, VIA doesn't include it in JSON, so we must read - # the image. This is only managable since the dataset is tiny. + # the image. This is only manageable since the dataset is tiny. 
image_path = os.path.join(dataset_dir, a) image = np.load(image_path)['arr_0'] height, width = image.shape[:2] diff --git a/caiman/source_extraction/volpy/mrcnn/utils.py b/caiman/source_extraction/volpy/mrcnn/utils.py index 53a318d65..716ef3478 100644 --- a/caiman/source_extraction/volpy/mrcnn/utils.py +++ b/caiman/source_extraction/volpy/mrcnn/utils.py @@ -136,7 +136,7 @@ def non_max_suppression(boxes, scores, threshold): x2 = boxes[:, 3] area = (y2 - y1) * (x2 - x1) - # Get indicies of boxes sorted by scores (highest first) + # Get indices of boxes sorted by scores (highest first) ixs = scores.argsort()[::-1] pick = [] diff --git a/caiman/source_extraction/volpy/spikepursuit.py b/caiman/source_extraction/volpy/spikepursuit.py index d93900197..c42bc3a47 100644 --- a/caiman/source_extraction/volpy/spikepursuit.py +++ b/caiman/source_extraction/volpy/spikepursuit.py @@ -104,7 +104,7 @@ def volspike(pars): ridge or NMF for weight update do_plot: boolean - if Ture, plot trace of signals and spiketimes, + if True, plot trace of signals and spiketimes, peak triggered average, histogram of heights in the last iteration do_cross_val: boolean @@ -183,7 +183,7 @@ def volspike(pars): output['rawROI'] = {} print(f'Now processing cell number {cell_n}') - # load the movie in C-order mermory mapping file + # load the movie in C-order memory mapping file Yr, dims, T = cm.load_memmap(fnames) if bw.shape == dims: images = np.reshape(Yr.T, [T] + list(dims), order='F') @@ -276,7 +276,7 @@ def volspike(pars): if args['do_cross_val']: # need to add logging.warning('doing cross validation') - raise Exception('cross validation option is not availble yet') + raise Exception('cross validation option is not available yet') else: s_max = 1 l_max = 2 @@ -431,7 +431,7 @@ def denoise_spikes(data, window_length, fr=400, hp_freq=1, clip=100, threshold The real threshold is the value multiply estimated noise level do_plot: boolean - if Ture, will plot trace of signals and spiketimes, peak triggered + if True, will plot trace of signals and spiketimes, peak triggered average, histogram of heights Returns: diff --git a/caiman/source_extraction/volpy/volparams.py b/caiman/source_extraction/volpy/volparams.py index 93e07dafb..0690609ca 100644 --- a/caiman/source_extraction/volpy/volparams.py +++ b/caiman/source_extraction/volpy/volparams.py @@ -12,7 +12,7 @@ def __init__(self, fnames=None, fr=None, index=None, ROIs=None, weights=None, sigmas=np.array([1, 1.5, 2]), n_iter=2, weight_update='ridge', do_plot=False, do_cross_val=False, sub_freq=20, method='spikepursuit', superfactor=10, params_dict={}): """Class for setting parameters for voltage imaging. Including parameters for the data, motion correction and - spike detection. The prefered way to set parameters is by using the set function, where a subclass is determined + spike detection. The preferred way to set parameters is by using the set function, where a subclass is determined and a dictionary is passed. The whole dictionary can also be initialized at once by passing a dictionary params_dict when initializing the CNMFParams object. 
""" diff --git a/caiman/source_extraction/volpy/volpy.py b/caiman/source_extraction/volpy/volpy.py index c81cec027..0039f5612 100644 --- a/caiman/source_extraction/volpy/volpy.py +++ b/caiman/source_extraction/volpy/volpy.py @@ -93,7 +93,7 @@ def __init__(self, n_processes, dview=None, template_size=0.02, context_size=35, ridge or NMF for weight update do_plot: boolean - if Ture, plot trace of signals and spiketimes, + if True, plot trace of signals and spiketimes, peak triggered average, histogram of heights in the last iteration do_cross_val: boolean diff --git a/caiman/tests/comparison/comparison.py b/caiman/tests/comparison/comparison.py index fa363f3d7..034141ac3 100644 --- a/caiman/tests/comparison/comparison.py +++ b/caiman/tests/comparison/comparison.py @@ -3,7 +3,7 @@ """ compare how the elements behave We create a folder ground truth that possess the same thing than the other -in a form of a dictionnary containing nparrays and other info. +in a form of a dictionary containing nparrays and other info. the other files contains every test and the name is the date of the test """ @@ -41,7 +41,7 @@ class Comparison(object): Here it has been made for 3 different functions. for it to compare well you need to set your ground truth with the same computer with which you are comparing the files - class you instanciate to compare the different parts of CaImAn + class you instantiate to compare the different parts of CaImAn @@ -57,7 +57,7 @@ class you instanciate to compare the different parts of CaImAn C_full C_patch information - diffrences + differences params_cnm proc param_movie @@ -87,12 +87,12 @@ class you instanciate to compare the different parts of CaImAn diff_timing isdifferent diff_data - the user to change it manualy + the user to change it manually Methods ------- __init__() - Initialize the function be instanciating a comparison object + Initialize the function be instantiating a comparison object save(istruth) save the comparison object on a file @@ -143,7 +143,7 @@ def save_with_compare(self, istruth=False, params=None, dview=None, Cn=None): """save the comparison as well as the images of the precision recall calculations - depending on if we say this file will be ground truth or not, it wil be saved in either the tests or the ground truth folder + depending on if we say this file will be ground truth or not, it will be saved in either the tests or the ground truth folder if saved in test, a comparison to groundtruth will be added to the object this comparison will be on data : a normized difference of the normalized value of the arrays @@ -155,11 +155,11 @@ def save_with_compare(self, istruth=False, params=None, dview=None, Cn=None): Args: - self: dictionnary - the object of this class tha tcontains every value + self: dictionary + the object of this class that contains every value istruth: Boolean - if we want it ot be the ground truth + if we want it to be the ground truth params: movie parameters @@ -250,7 +250,7 @@ def save_with_compare(self, istruth=False, params=None, dview=None, Cn=None): C_full = dt['C_full'][()] C_patch = dt['C_patch'][()] data = dt['information'][()] - # if we cannot manage to open it or it doesnt exist: + # if we cannot manage to open it or it doesn't exist: except (IOError, OSError): # we save but we explain why there were a problem logging.warning('we were not able to read the file ' + str(file_path) + ' to compare it\n') @@ -364,7 +364,7 @@ def see(filename=None): Args: self: dictionary - the object of this class tha tcontains every value + the 
object of this class that tcontains every value filename: ( just give the number or name) @@ -465,7 +465,7 @@ def cnmf(Cn, A_gt, A_test, C_gt, C_test, dims_gt, dims_test, dview=None, sensiti corrs = np.array( [scipy.stats.pearsonr(C_gt_thr[gt, :], C_test_thr[comp, :])[0] for gt, comp in zip(idx_tp_gt, idx_tp_comp)]) - # todo, change this test when I will have found why I have one additionnal neuron + # todo, change this test when I will have found why I have one additional neuron isdiff = True if ((np.linalg.norm(corrs) < sensitivity) or (performance_off_on['f1_score'] < 0.98)) else False info = { diff --git a/caiman/tests/comparison/create_gt.py b/caiman/tests/comparison/create_gt.py index c204fd497..1954f7c4f 100644 --- a/caiman/tests/comparison/create_gt.py +++ b/caiman/tests/comparison/create_gt.py @@ -50,7 +50,7 @@ 'fname': ['Sue_2x_3000_40_-46.tif'], 'niter_rig': 1, 'max_shifts': (3, 3), # maximum allow rigid shift - 'splits_rig': 20, # for parallelization split the movies in num_splits chuncks across time + 'splits_rig': 20, # for parallelization split the movies in num_splits chunks across time # if none all the splits are processed and the movie is saved 'num_splits_to_process_rig': None, 'p': 1, # order of the autoregressive system @@ -83,11 +83,11 @@ # params_movie = {'fname': [u'./example_movies/demoMovieJ.tif'], # 'max_shifts': (2, 2), # maximum allow rigid shift (2,2) # 'niter_rig': 1, -# 'splits_rig': 14, # for parallelization split the movies in num_splits chuncks across time +# 'splits_rig': 14, # for parallelization split the movies in num_splits chunks across time # 'num_splits_to_process_rig': None, # if none all the splits are processed and the movie is saved # 'strides': (48, 48), # intervals at which patches are laid out for motion correction -# 'overlaps': (24, 24), # overlap between pathes (size of patch strides+overlaps) -# 'splits_els': 14, # for parallelization split the movies in num_splits chuncks across time +# 'overlaps': (24, 24), # overlap between patches (size of patch strides+overlaps) +# 'splits_els': 14, # for parallelization split the movies in num_splits chunks across time # 'num_splits_to_process_els': [14, None], # if none all the splits are processed and the movie is saved # 'upsample_factor_grid': 3, # upsample factor to avoid smearing when merging patches # 'max_deviation_rigid': 1, # maximum deviation allowed for patch with respect to rigid shift diff --git a/caiman/tests/comparison_general.py b/caiman/tests/comparison_general.py index d17359621..b014284c4 100644 --- a/caiman/tests/comparison_general.py +++ b/caiman/tests/comparison_general.py @@ -59,7 +59,7 @@ 'fname': ['Sue_2x_3000_40_-46.tif'], 'niter_rig': 1, 'max_shifts': (3, 3), # maximum allow rigid shift - 'splits_rig': 20, # for parallelization split the movies in num_splits chuncks across time + 'splits_rig': 20, # for parallelization split the movies in num_splits chunks across time # if none all the splits are processed and the movie is saved 'num_splits_to_process_rig': None, # intervals at which patches are laid out for motion correction @@ -92,7 +92,7 @@ # params_movie = {'fname': [u'./example_movies/demoMovieJ.tif'], # 'max_shifts': (2, 2), # maximum allow rigid shift (2,2) # 'niter_rig': 1, -# 'splits_rig': 14, # for parallelization split the movies in num_splits chuncks across time +# 'splits_rig': 14, # for parallelization split the movies in num_splits chunks across time # 'num_splits_to_process_rig': None, # if none all the splits are processed and the movie is saved # 'p': 
1, # order of the autoregressive system
     # 'merge_thresh': 0.8, # merging threshold, max correlation allow
diff --git a/caiman/tests/comparison_humans.py b/caiman/tests/comparison_humans.py
index b8a29c0e6..4ab28b3c0 100644
--- a/caiman/tests/comparison_humans.py
+++ b/caiman/tests/comparison_humans.py
@@ -422,7 +422,7 @@
                                       thresh_subset=0.6)
         gt_estimate.select_components(use_object=True)
         print(gt_estimate.A_thr.shape)
-        # %% prepare CNMF maks
+        # %% prepare CNMF mask
         cnm2.estimates.threshold_spatial_components(maxthr=0.2, dview=dview)
         cnm2.estimates.remove_small_large_neurons(min_size_neuro, max_size_neuro)
         cnm2.estimates.remove_duplicates(r_values=None, dist_thr=0.1, min_dist=10, thresh_subset=0.6)
diff --git a/caiman/tests/test_deconvolution.py b/caiman/tests/test_deconvolution.py
index 2e8a70fb5..b4d53d585 100644
--- a/caiman/tests/test_deconvolution.py
+++ b/caiman/tests/test_deconvolution.py
@@ -19,7 +19,7 @@
 def gen_data(g=[.95], sn=.2, T=1000, framerate=30, firerate=.5, b=10, N=1, seed=0):
     """
-    Generate data from homogenous Poisson Process
+    Generate data from homogeneous Poisson Process
 
     Parameters
     ----------
diff --git a/caiman/utils/nn_models.py b/caiman/utils/nn_models.py
index 3f39dc5d0..4a09483b3 100644
--- a/caiman/utils/nn_models.py
+++ b/caiman/utils/nn_models.py
@@ -157,7 +157,7 @@ def get_mask(gSig=5, r_factor=1.5, width=5):
         radius of average neuron
 
     r_factor: float, default: 1.5
-        expansion factor to deteremine inner radius
+        expansion factor to determine inner radius
 
     width: int, default: 5
         width of ring kernel
@@ -203,7 +203,7 @@ class Hadamard(Layer):
     pointwise multiplication with a set of learnable weights.
 
     Args:
-        initializer: keras initializer, deafult: Constant(0.1)
+        initializer: keras initializer, default: Constant(0.1)
     """
     def __init__(self, initializer=Constant(0.1), **kwargs): #, output_dim):
         self.initializer = initializer
@@ -230,7 +230,7 @@ class Additive(Layer):
     pointwise addition with a set of learnable weights.
 
     Args:
-        initializer: keras initializer, deafult: Constant(0)
+        initializer: keras initializer, default: Constant(0)
     """
     def __init__(self, data=None, initializer=Constant(0), pct=1, **kwargs):
         self.data = data
@@ -309,7 +309,7 @@ def my_total_variation_loss(y_true, y_pred):
         return my_total_variation_loss
 
 def b0_initializer(Y, pct=10):
-    """ Returns a pecentile based initializer for the additive layer (not used)
+    """ Returns a percentile based initializer for the additive layer (not used)
 
     Args:
         Y: np.array
@@ -365,7 +365,7 @@ def create_LN_model(Y=None, shape=(None, None, 1), n_channels=2, gSig=5, r_facto
         radius of average neuron
 
     r_factor: float, default: 1.5
-        expansion factor to deteremine inner radius
+        expansion factor to determine inner radius
 
     width: int, default: 5
         width of ring kernel
@@ -434,7 +434,7 @@ def create_NL_model(Y=None, shape=(None, None, 1), n_channels=8, gSig=5, r_facto
         radius of average neuron
 
     r_factor: float, default: 1.5
-        expansion factor to deteremine inner radius
+        expansion factor to determine inner radius
 
     width: int, default: 5
         width of ring kernel
diff --git a/caiman/utils/stats.py b/caiman/utils/stats.py
index 7bfbec449..5cd9dc041 100644
--- a/caiman/utils/stats.py
+++ b/caiman/utils/stats.py
@@ -168,7 +168,7 @@ def df_percentile(inputData, axis=None):
     """
     Extracting the percentile of the data where the mode occurs and its value.
     Used to determine the filtering level for DF/F extraction. Note that
-    computation can be innacurate for short traces.
+    computation can be inaccurate for short traces.
     """
     if axis is not None:
diff --git a/caiman/utils/visualization.py b/caiman/utils/visualization.py
index 1876a4fb1..9b74579fd 100644
--- a/caiman/utils/visualization.py
+++ b/caiman/utils/visualization.py
@@ -483,7 +483,7 @@ def nb_view_patches3d(Y_r, A, C, dims, image_type='mean', Yr=None,
     image_type: 'mean', 'max' or 'corr'
         image to be overlaid to neurons
-        (average of shapes, maximum of shapes or nearest neigbor correlation of raw data)
+        (average of shapes, maximum of shapes or nearest neighbor correlation of raw data)
 
     Yr: np.ndarray
         movie, only required if image_type=='corr' to calculate correlation image
@@ -1220,7 +1220,7 @@ def get_rectangle_coords(im_dims: ArrayLike,
    """
    Extract rectangle (patch) coordinates: a helper function used by view_quilt().
 
-   Given dimensions of summary image (rows x colums), stride between patches, and overlap
+   Given dimensions of summary image (rows x columns), stride between patches, and overlap
    between patches, returns row coordinates of the patches in patch_rows, and column coordinates
    patches in patch_cols. This is meant to be used by view_quilt().
diff --git a/demos/general/demo_OnACID_mesoscope.py b/demos/general/demo_OnACID_mesoscope.py
index 3fed587dc..cf9bde48f 100755
--- a/demos/general/demo_OnACID_mesoscope.py
+++ b/demos/general/demo_OnACID_mesoscope.py
@@ -3,7 +3,7 @@
 """
 Complete pipeline for online processing using CaImAn Online (OnACID).
-The demo demonstates the analysis of a sequence of files using the CaImAn online
+The demo demonstrates the analysis of a sequence of files using the CaImAn online
 algorithm. The steps include i) motion correction, ii) tracking current 
 components, iii) detecting new components, iv) updating of spatial footprints.
 The script demonstrates how to construct and use the params and online_cnmf
@@ -157,7 +157,7 @@ def main():
     Yr, dims, T = cm.load_memmap(memmap_file)
     images = np.reshape(Yr.T, [T] + list(dims), order='F')
 
-    min_SNR = 2 # peak SNR for accepted components (if above this, acept)
+    min_SNR = 2 # peak SNR for accepted components (if above this, accept)
     rval_thr = 0.85 # space correlation threshold (if above this, accept)
     use_cnn = True # use the CNN classifier
     min_cnn_thr = 0.99 # if cnn classifier predicts below this value, reject
diff --git a/demos/general/demo_caiman_basic.py b/demos/general/demo_caiman_basic.py
index 48848a04a..b5ce1e7e0 100755
--- a/demos/general/demo_caiman_basic.py
+++ b/demos/general/demo_caiman_basic.py
@@ -126,7 +126,7 @@ def main():
     # c) each shape passes a CNN based classifier (this will pick up only neurons
     #    and filter out active processes)
 
-    min_SNR = 2 # peak SNR for accepted components (if above this, acept)
+    min_SNR = 2 # peak SNR for accepted components (if above this, accept)
     rval_thr = 0.85 # space correlation threshold (if above this, accept)
     use_cnn = True # use the CNN classifier
     min_cnn_thr = 0.99 # if cnn classifier predicts below this value, reject
diff --git a/demos/general/demo_pipeline.py b/demos/general/demo_pipeline.py
index af7a2cb9f..4d50ecc94 100755
--- a/demos/general/demo_pipeline.py
+++ b/demos/general/demo_pipeline.py
@@ -83,7 +83,7 @@ def main():
     max_shifts = [int(a/b) for a, b in zip(max_shift_um, dxy)]
     # start a new patch for pw-rigid motion correction every x pixels
     strides = tuple([int(a/b) for a, b in zip(patch_motion_um, dxy)])
-    # overlap between pathes (size of patch in pixels: strides+overlaps)
+    # overlap between patches (size of patch in pixels: strides+overlaps)
     overlaps = (24, 24)
     # maximum deviation allowed for patch with respect to rigid shifts
     max_deviation_rigid = 3
@@ -167,7 +167,7 @@ def main():
     # initialization method (if analyzing dendritic data using 'sparse_nmf')
     method_init = 'greedy_roi'
     ssub = 2 # spatial subsampling during initialization
-    tsub = 2 # temporal subsampling during intialization
+    tsub = 2 # temporal subsampling during initialization
 
     # parameters for component evaluation
     opts_dict = {'fnames': fnames,
diff --git a/demos/general/demo_pipeline_NWB.py b/demos/general/demo_pipeline_NWB.py
index 17fb60c9a..f1335adcd 100644
--- a/demos/general/demo_pipeline_NWB.py
+++ b/demos/general/demo_pipeline_NWB.py
@@ -176,7 +176,7 @@ def main():
     # initialization method (if analyzing dendritic data using 'sparse_nmf')
     method_init = 'greedy_roi'
     ssub = 2 # spatial subsampling during initialization
-    tsub = 2 # temporal subsampling during intialization
+    tsub = 2 # temporal subsampling during initialization
 
     # parameters for component evaluation
     opts_dict = {'fnames': fnames,
diff --git a/demos/general/demo_pipeline_cnmfE.py b/demos/general/demo_pipeline_cnmfE.py
index 697ac8547..e61178876 100755
--- a/demos/general/demo_pipeline_cnmfE.py
+++ b/demos/general/demo_pipeline_cnmfE.py
@@ -76,7 +76,7 @@ def main():
     # change this one if algorithm does not work
     max_shifts = (5, 5) # maximum allowed rigid shift
     strides = (48, 48) # start a new patch for pw-rigid motion correction every x pixels
-    overlaps = (24, 24) # overlap between pathes (size of patch strides+overlaps)
+    overlaps = (24, 24) # overlap between patches (size of patch strides+overlaps)
     # maximum deviation allowed for patch with respect to rigid shifts
     max_deviation_rigid = 3
     border_nan = 'copy'
diff --git a/demos/general/demo_pipeline_voltage_imaging.py b/demos/general/demo_pipeline_voltage_imaging.py
index 151084727..daa9f254e 100644
--- a/demos/general/demo_pipeline_voltage_imaging.py
+++ b/demos/general/demo_pipeline_voltage_imaging.py
@@ -70,7 +70,7 @@ def main():
     # change this one if algorithm does not work
     max_shifts = (5, 5) # maximum allowed rigid shift
     strides = (48, 48) # start a new patch for pw-rigid motion correction every x pixels
-    overlaps = (24, 24) # overlap between pathes (size of patch strides+overlaps)
+    overlaps = (24, 24) # overlap between patches (size of patch strides+overlaps)
     max_deviation_rigid = 3 # maximum deviation allowed for patch with respect to rigid shifts
     border_nan = 'copy'
diff --git a/demos/notebooks/demo_OnACID_mesoscope.ipynb b/demos/notebooks/demo_OnACID_mesoscope.ipynb
index 163e02531..14a8020b4 100644
--- a/demos/notebooks/demo_OnACID_mesoscope.ipynb
+++ b/demos/notebooks/demo_OnACID_mesoscope.ipynb
@@ -7,7 +7,7 @@
    "## Example of online analysis using OnACID\n",
    "\n",
    "Complete pipeline for online processing using CaImAn Online (OnACID).\n",
-   "The demo demonstates the analysis of a sequence of files using the CaImAn online\n",
+   "The demo demonstrates the analysis of a sequence of files using the CaImAn online\n",
    "algorithm. The steps include i) motion correction, ii) tracking current \n",
    "components, iii) detecting new components, iv) updating of spatial footprints.\n",
    "The script demonstrates how to construct and use the params and online_cnmf\n",
diff --git a/demos/notebooks/demo_Ring_CNN.ipynb b/demos/notebooks/demo_Ring_CNN.ipynb
index ce96ce7c7..06149b57a 100644
--- a/demos/notebooks/demo_Ring_CNN.ipynb
+++ b/demos/notebooks/demo_Ring_CNN.ipynb
@@ -71,7 +71,7 @@
   "metadata": {},
   "outputs": [],
   "source": [
-   "reuse_model = False # set to True to re-use an existing ring model\n",
+   "reuse_model = False # set to True to reuse an existing ring model\n",
    "path_to_model = None # specify a pre-trained model here if needed \n",
    "gSig = (7, 7) # expected half size of neurons\n",
    "gnb = 2 # number of background components for OnACID\n",
diff --git a/demos/notebooks/demo_caiman_cnmf_3D.ipynb b/demos/notebooks/demo_caiman_cnmf_3D.ipynb
index e570fc15f..2a0c3be84 100644
--- a/demos/notebooks/demo_caiman_cnmf_3D.ipynb
+++ b/demos/notebooks/demo_caiman_cnmf_3D.ipynb
@@ -253,7 +253,7 @@
    "# motion correction parameters\n",
    "opts_dict = {'fnames': fname,\n",
    " 'strides': (24, 24, 6), # start a new patch for pw-rigid motion correction every x pixels\n",
-   " 'overlaps': (12, 12, 2), # overlap between pathes (size of patch strides+overlaps)\n",
+   " 'overlaps': (12, 12, 2), # overlap between patches (size of patch strides+overlaps)\n",
    " 'max_shifts': (4, 4, 2), # maximum allowed rigid shifts (in pixels)\n",
    " 'max_deviation_rigid': 5, # maximum shifts deviation allowed for patch with respect to rigid shifts\n",
    " 'pw_rigid': False, # flag for performing non-rigid motion correction\n",
diff --git a/demos/notebooks/demo_dendritic.ipynb b/demos/notebooks/demo_dendritic.ipynb
index f9dab58c3..3c69dfdb6 100644
--- a/demos/notebooks/demo_dendritic.ipynb
+++ b/demos/notebooks/demo_dendritic.ipynb
@@ -148,7 +148,7 @@
    "\n",
    "# motion correction parameters\n",
    "strides = (48, 48) # start a new patch for pw-rigid motion correction every x pixels\n",
-   "overlaps = (24, 24) # overlap between pathes (size of patch strides+overlaps)\n",
+   "overlaps = (24, 24) # overlap between patches (size of patch strides+overlaps)\n",
    "max_shifts = (6, 6) # maximum allowed rigid shifts (in pixels)\n",
    "max_deviation_rigid = 3 # maximum shifts deviation allowed for patch with respect to rigid shifts\n",
    "pw_rigid = True # flag for performing non-rigid motion correction\n",
@@ -163,7 +163,7 @@
    "method_init = 'graph_nmf' # initialization method (you could also use 'sparse_nmf' for dendritic data)\n",
    "alpha = 0.5 # sparsity regularizer term (default is 0.5): used only for sparse_nmf\n",
    "ssub = 1 # spatial subsampling during initialization\n",
-   "tsub = 1 # temporal subsampling during intialization\n",
+   "tsub = 1 # temporal subsampling during initialization\n",
    "\n",
    "opts_dict = {'fnames': fnames,\n",
    " 'fr': fr,\n",
diff --git a/demos/notebooks/demo_motion_correction.ipynb b/demos/notebooks/demo_motion_correction.ipynb
index 792aee62b..84cec1967 100644
--- a/demos/notebooks/demo_motion_correction.ipynb
+++ b/demos/notebooks/demo_motion_correction.ipynb
@@ -89,7 +89,7 @@
   "source": [
    "max_shifts = (6, 6) # maximum allowed rigid shift in pixels (view the movie to get a sense of motion)\n",
    "strides = (48, 48) # create a new patch every x pixels for pw-rigid correction\n",
-   "overlaps = (24, 24) # overlap between pathes (size of patch strides+overlaps)\n",
+   "overlaps = (24, 24) # overlap between patches (size of patch strides+overlaps)\n",
    "max_deviation_rigid = 3 # maximum deviation allowed for patch with respect to rigid shifts\n",
    "pw_rigid = False # flag for performing rigid or piecewise rigid motion correction\n",
    "shifts_opencv = True # flag for correcting motion using bicubic interpolation (otherwise FFT interpolation is used)\n",
diff --git a/demos/notebooks/demo_online_cnmfE.ipynb b/demos/notebooks/demo_online_cnmfE.ipynb
index a319981f5..6ca093cf9 100644
--- a/demos/notebooks/demo_online_cnmfE.ipynb
+++ b/demos/notebooks/demo_online_cnmfE.ipynb
@@ -556,7 +556,7 @@
    "#cnm_online.estimates.dview=dview\n",
    "#cnm_online.estimates.compute_residuals(Yr=Yr_online)\n",
    "images_online = np.reshape(Yr_online.T, [T] + list(dims), order='F')\n",
-   "min_SNR = 2 # peak SNR for accepted components (if above this, acept)\n",
+   "min_SNR = 2 # peak SNR for accepted components (if above this, accept)\n",
    "rval_thr = 0.85 # space correlation threshold (if above this, accept)\n",
    "use_cnn = False # use the CNN classifier\n",
    "cnm_online.params.change_params({'min_SNR': min_SNR,\n",
diff --git a/demos/notebooks/demo_pipeline.ipynb b/demos/notebooks/demo_pipeline.ipynb
index 5e9b5c760..f7e93177a 100644
--- a/demos/notebooks/demo_pipeline.ipynb
+++ b/demos/notebooks/demo_pipeline.ipynb
@@ -131,7 +131,7 @@
    "\n",
    "# motion correction parameters\n",
    "strides = (48, 48) # start a new patch for pw-rigid motion correction every x pixels\n",
-   "overlaps = (24, 24) # overlap between pathes (size of patch strides+overlaps)\n",
+   "overlaps = (24, 24) # overlap between patches (size of patch strides+overlaps)\n",
    "max_shifts = (6,6) # maximum allowed rigid shifts (in pixels)\n",
    "max_deviation_rigid = 3 # maximum shifts deviation allowed for patch with respect to rigid shifts\n",
    "pw_rigid = True # flag for performing non-rigid motion correction\n",
@@ -146,7 +146,7 @@
    "gSig = [4, 4] # expected half size of neurons in pixels\n",
    "method_init = 'greedy_roi' # initialization method (if analyzing dendritic data using 'sparse_nmf')\n",
    "ssub = 1 # spatial subsampling during initialization\n",
-   "tsub = 1 # temporal subsampling during intialization\n",
+   "tsub = 1 # temporal subsampling during initialization\n",
    "\n",
    "# parameters for component evaluation\n",
    "min_SNR = 2.0 # signal to noise ratio for accepting a component\n",
diff --git a/demos/notebooks/demo_pipeline_cnmfE.ipynb b/demos/notebooks/demo_pipeline_cnmfE.ipynb
index b5b4dd007..b74e7f3bc 100644
--- a/demos/notebooks/demo_pipeline_cnmfE.ipynb
+++ b/demos/notebooks/demo_pipeline_cnmfE.ipynb
@@ -120,7 +120,7 @@
    "gSig_filt = (3, 3) # size of high pass spatial filtering, used in 1p data\n",
    "max_shifts = (5, 5) # maximum allowed rigid shift\n",
    "strides = (48, 48) # start a new patch for pw-rigid motion correction every x pixels\n",
-   "overlaps = (24, 24) # overlap between pathes (size of patch strides+overlaps)\n",
+   "overlaps = (24, 24) # overlap between patches (size of patch strides+overlaps)\n",
    "max_deviation_rigid = 3 # maximum deviation allowed for patch with respect to rigid shifts\n",
    "border_nan = 'copy' # replicate values along the boundaries\n",
    "\n",
diff --git a/demos/notebooks/demo_pipeline_voltage_imaging.ipynb b/demos/notebooks/demo_pipeline_voltage_imaging.ipynb
index 625344956..3db090b78 100644
--- a/demos/notebooks/demo_pipeline_voltage_imaging.ipynb
+++ b/demos/notebooks/demo_pipeline_voltage_imaging.ipynb
@@ -104,7 +104,7 @@
    " # change this one if algorithm does not work\n",
    "max_shifts = (5, 5) # maximum allowed rigid shift\n",
    "strides = (48, 48) # start a new patch for pw-rigid motion correction every x pixels\n",
-   "overlaps = (24, 24) # overlap between pathes (size of patch strides+overlaps)\n",
+   "overlaps = (24, 24) # overlap between patches (size of patch strides+overlaps)\n",
    "max_deviation_rigid = 3 # maximum deviation allowed for patch with respect to rigid shifts\n",
    "border_nan = 'copy'\n",
    "\n",
diff --git a/demos/notebooks/demo_realtime_cnmfE.ipynb b/demos/notebooks/demo_realtime_cnmfE.ipynb
index e45516bea..0122b9e0b 100644
--- a/demos/notebooks/demo_realtime_cnmfE.ipynb
+++ b/demos/notebooks/demo_realtime_cnmfE.ipynb
@@ -7,8 +7,8 @@
    "## Pipeline for real-time processing of microendoscopic data with CaImAn\n",
    "This demo presents 3 approaches for processing microendoscopic data in real time using CaImAn. \n",
    "1. Sufficiently long initialization phase to identify all ROIs followed by tracking\n",
-   "2. Short initalization phase followed by online processing using OnACID-E \n",
-   "3. Short initalization phase followed by online processing using Ring-CNN+OnACID\n",
+   "2. Short initialization phase followed by online processing using OnACID-E \n",
+   "3. Short initialization phase followed by online processing using Ring-CNN+OnACID\n",
    "\n",
    "All approached include:\n",
    "- Motion Correction using the NoRMCorre algorithm\n",
@@ -145,7 +145,7 @@
   "metadata": {},
   "source": [
    "### Take a break from imaging to process recorded data\n",
-   "Taking a break to keep this demo simple. One could in parallel continue to save the otherwise \"lost\" frames to disk if one was not only intersted in the real-time experiment but post-analysis of the entire session"
+   "Taking a break to keep this demo simple. One could in parallel continue to save the otherwise \"lost\" frames to disk if one was not only interested in the real-time experiment but post-analysis of the entire session"
   ]
  },
 {
@@ -401,7 +401,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-  "## 2. Short initalization phase followed by online processing using OnACID-E "
+  "## 2. Short initialization phase followed by online processing using OnACID-E "
  ]
 },
 {
@@ -432,7 +432,7 @@
   "metadata": {},
   "source": [
    "### Take a break from imaging to process recorded data\n",
-   "Taking a break to keep this demo simple. One could in parallel continue to save the otherwise \"lost\" frames to disk if one was not only intersted in the real-time experiment but post-analysis of the entire session"
+   "Taking a break to keep this demo simple. One could in parallel continue to save the otherwise \"lost\" frames to disk if one was not only interested in the real-time experiment but post-analysis of the entire session"
   ]
  },
 {
@@ -721,7 +721,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-  "## 3. Short initalization phase followed by online processing using Ring-CNN+OnACID"
+  "## 3. Short initialization phase followed by online processing using Ring-CNN+OnACID"
  ]
 },
 {
@@ -752,7 +752,7 @@
   "metadata": {},
   "source": [
    "### Take a break from imaging to process recorded data\n",
-   "Taking a break to keep this demo simple. One could in parallel continue to save the otherwise \"lost\" frames to disk if one was not only intersted in the real-time experiment but post-analysis of the entire session"
+   "Taking a break to keep this demo simple. One could in parallel continue to save the otherwise \"lost\" frames to disk if one was not only interested in the real-time experiment but post-analysis of the entire session"
   ]
  },
 {
@@ -770,7 +770,7 @@
   "metadata": {},
   "outputs": [],
   "source": [
-   "reuse_model = False # set to True to re-use an existing ring model\n",
+   "reuse_model = False # set to True to reuse an existing ring model\n",
    "path_to_model = None # specify a pre-trained model here if needed \n",
    "gSig = (7, 7) # expected half size of neurons\n",
    "gnb = 2 # number of background components for OnACID\n",
diff --git a/demos/notebooks/demo_seeded_CNMF.ipynb b/demos/notebooks/demo_seeded_CNMF.ipynb
index 9bba63828..4f1ad1e60 100644
--- a/demos/notebooks/demo_seeded_CNMF.ipynb
+++ b/demos/notebooks/demo_seeded_CNMF.ipynb
@@ -355,7 +355,7 @@
   "metadata": {},
   "source": [
    "## Component filtering\n",
-   "We can apply our quality tests to filter out these components. Components are evaluated by the same critera:\n",
+   "We can apply our quality tests to filter out these components. Components are evaluated by the same criteria:\n",
    "\n",
    "- the shape of each component must be correlated with the data at the corresponding location within the FOV\n",
    "- a minimum peak SNR is required over the length of a transient\n",
@@ -521,7 +521,7 @@
    "\n",
    "To manually determine neuron locations, a template on which to find neurons has to be created. This template can be constructed with two different methods:\n",
    "1. **Structural channel and average intensity:** If your recording incorporates a calcium-independent structural channel, it can be used to extract locations of neurons by averaging the intensity of each pixel over time. This method can also be applied to the calcium-dependent channel itself. The averaging greatly reduces noise, but any temporal component is eliminated, and it is impossible to tell whether a neuron was active or inactive during the recording. Thus, many neurons selected through this technique will be false positives, which should be filtered out during the component evaluation.\n",
-   "2. **Local correlations:** The arguably more accurate template can be the local correlation image of the calcium-dependent signal. Here, each pixel's value is not determined by it's intensity, but by its intensity **correlation** to its neighbors. Thus, groups of pixels that change intensity together will be brighter. This method incorporates the temporal component of the signal and accentuates firing structures like neurons and dendrites. Features visible in the local correlation image are likely functional units (such as neurons), which is what we are ultimately interested in. The number of false positives should be lower than in method 1, as structual features visible in mean-intensity images are greatly reduced. Additionally, it reduces the donut shape of some somata, making neuron detection easier.\n",
+   "2. **Local correlations:** The arguably more accurate template can be the local correlation image of the calcium-dependent signal. Here, each pixel's value is not determined by it's intensity, but by its intensity **correlation** to its neighbors. Thus, groups of pixels that change intensity together will be brighter. This method incorporates the temporal component of the signal and accentuates firing structures like neurons and dendrites. Features visible in the local correlation image are likely functional units (such as neurons), which is what we are ultimately interested in. The number of false positives should be lower than in method 1, as structural features visible in mean-intensity images are greatly reduced. Additionally, it reduces the donut shape of some somata, making neuron detection easier.\n",
    "\n",
    "A binary seeding mask from this template can be created automatically or manually:\n",
    "1. **Automatic:** The CaImAn function `extract_binary_masks_from_structural_channel()` does basically what it says. It extracts binary masks from a movie or an image with the Adaptive Gaussian Thresholding algorithm provided by the OpenCV library. If the function is provided a movie, it applies the thresholding on the average intensity image of this movie.\n",
diff --git a/pyproject.toml b/pyproject.toml
index 75eaf8028..d9f85e955 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -20,3 +20,10 @@ py-modules = ["_buildhelper_cython"]
 
 [tool.setuptools.cmdclass]
 build_py = "_buildhelper_cython.build_py"
+
+[tool.codespell]
+skip = '.git,*.pdf,*.svg,*.ai'
+check-hidden = true
+ignore-regex = '^\s*"image/\S+": ".*'
+# some ad-hoc options/variable names
+ignore-words-list = 'ans,siz,nd,dirct,dircts,fo,comparitor,shfts,mapp,coo,ptd,manuel,thre,recuse'