# CUDA-Accelerated Implementation of the Cox Method

## Synopsis

The Cox method is based on the theory of modulated renewal processes. It estimates a vector of influence strengths from multiple spike trains (called reference trains) to a selected (target) spike train. Selecting another target spike train and repeating the calculation of the influence strengths from the reference spike trains enables researchers to find all functional connections among multiple spike trains. To study functional connectivity, an “influence function” is identified; this function captures the specificity of neuronal interactions and reflects the dynamics of the postsynaptic potential. Compared with existing techniques, the Cox method has the following advantages:

- It does not use bins (a binless method)
- It is applicable to cases where the sample size is small
- It is sufficiently sensitive to estimate weak influences
- It supports the simultaneous analysis of multiple influences
- It can identify the correct connectivity scheme in difficult cases of “common source” or “indirect” connectivity

The Cox method has been thoroughly tested on multiple sets of data generated by a neural network model of leaky integrate-and-fire neurons with a prescribed architecture of connections (Masud & Borisyuk, 2011).
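The all-pairs estimation procedure described above can be sketched as follows. This is a minimal illustration of the looping structure only, not the library's API: `estimate_influences` is a hypothetical placeholder for the actual Cox regression step, whose likelihood-based estimation is not reproduced here.

```python
import numpy as np

def estimate_influences(target, references):
    # Hypothetical stand-in for the Cox regression step. In the real
    # method, influence strengths are estimated from the modulated
    # renewal process likelihood; zeros are returned here only so the
    # looping structure is runnable.
    return np.zeros(len(references))

def connectivity_matrix(spike_trains):
    # Treat each train in turn as the target and estimate the influence
    # of every other (reference) train on it, yielding an n x n matrix
    # of influence strengths (diagonal left at zero).
    n = len(spike_trains)
    strengths = np.zeros((n, n))
    for t in range(n):
        ref_idx = [r for r in range(n) if r != t]
        refs = [spike_trains[r] for r in ref_idx]
        strengths[t, ref_idx] = estimate_influences(spike_trains[t], refs)
    return strengths

# Example: three toy spike trains (spike times in seconds)
trains = [np.array([0.1, 0.5, 0.9]),
          np.array([0.2, 0.6]),
          np.array([0.3, 0.7, 1.1])]
M = connectivity_matrix(trains)
print(M.shape)  # (3, 3)
```

Each row of the resulting matrix corresponds to one choice of target train; repeating the estimation with every train as the target recovers the full connectivity scheme, exactly as described above.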

## Motivation

The primary interest of a GPU implementation is the gain in computation time, but another important one is that such an implementation requires rethinking the algorithm in a different way from the sequential version; this rethinking in itself opens new optimization possibilities. With this accelerated implementation, the Cox method can be applied to an experimental dataset, e.g. from CRCNS, on a personal computer. This should facilitate observations of biological neural network organization that can provide new insights into memory, learning, and intelligence.

## API Reference

The API documentation is available at ./Sphinx/output/documentation.html; downloading the ./Sphinx/output directory is sufficient to access it.

## Citation

If you use this library in your published research, we suggest that you cite our paper:

Andalibi et al. 2016

## License

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.