Explore the docs »
View Demo
·
Report Bug
·
Request Feature
The ISANet library provides a flexible and modular neural network framework. It is entirely developed in Python, using NumPy as the package for scientific computation. The library was developed during the following projects at the Department of Computer Science of the University of Pisa:
- ML_Project_19_20: developed during the Machine Learning (ML) course held by Professor Alessio Micheli. The aims were to implement an ML model simulator (neural network, SVM, ...), to understand the effect of the hyper-parameters on the model, and to solve a supervised regression learning task using our own library and the CUP dataset provided in the course.
- CM_Project_19_20: developed during the Computational Mathematics for Learning and Data Analysis course held by Professor Antonio Frangioni and Professor Federico Poloni. The aims were to extend the ISANet library to include NCG (with the FR/PR/HS and beta+ variants) and L-BFGS as new optimizers, and to study the objective function used during the learning phase from a mathematical and optimisation point of view.
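For context, the β update rules of the NCG variants mentioned above can be sketched as follows. These are the standard textbook formulas written in NumPy; the function names are illustrative and are not ISANet's API:

```python
import numpy as np

def beta_fr(g_new, g_old):
    # Fletcher-Reeves: ratio of squared gradient norms
    return (g_new @ g_new) / (g_old @ g_old)

def beta_pr(g_new, g_old):
    # Polak-Ribiere: uses the difference between consecutive gradients
    return (g_new @ (g_new - g_old)) / (g_old @ g_old)

def beta_hs(g_new, g_old, d_old):
    # Hestenes-Stiefel: normalizes by the previous search direction
    y = g_new - g_old
    return (g_new @ y) / (d_old @ y)

def beta_plus(beta):
    # "beta+" restart rule: clip negative values of beta to zero
    return max(beta, 0.0)
```

The beta+ clipping acts as an automatic restart: whenever β would become negative, the new search direction falls back to plain steepest descent.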
ISANet is composed of low-level (Keras-like) and high-level (Scikit-learn-like) APIs divided into modules. The idea is to provide an easy but powerful implementation of a neural network library that lets everyone understand it from theory to practice. More importantly, the library leaves room for future work: extensions to JAX, new CNN layers or optimizers, and so on. In addition, the library provides some datasets and a module for model selection (Grid Search and Cross Validation APIs).
NOTE: ISANet only supports SGD, NCG and L-BFGS with mean squared error plus a regularization term as the loss function in the gradient computation.
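As a reference, a loss of this shape (mean squared error plus an L2 penalty on the weights) can be sketched in NumPy as follows. The function name and the exact scaling are illustrative; ISANet's internal implementation may differ (e.g. by a 1/2 factor or per-layer lambdas):

```python
import numpy as np

def regularized_mse(y_true, y_pred, weights, lam):
    # Mean squared error over all outputs and samples
    mse = np.mean((y_true - y_pred) ** 2)
    # L2 penalty: lambda times the sum of squared weights, over all layers
    l2 = lam * sum(np.sum(w ** 2) for w in weights)
    return mse + l2
```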
For more details about the library explore the docs.
This code requires Python 3.5 or later. To download the repository:

```bash
git clone https://github.com/alessandrocuda/ISANet
```
Then install the basic dependencies to run the project on your system:

```bash
cd ISANet
pip install -r requirements.txt
```
An example with the low-level API (Keras-like):

```python
# ...
from isanet.model import Mlp
from isanet.optimizer import SGD, EarlyStopping
from isanet.datasets.monk import load_monk
import numpy as np

X_train, Y_train = load_monk("1", "train")
X_test, Y_test = load_monk("1", "test")

# Create the model
model = Mlp()

# Specify the range for the weights and lambda for regularization.
# Of course, they can be different for each layer.
kernel_initializer = 0.003
kernel_regularizer = 0.001

# Add layers with different numbers of units
model.add(4, input=17,
          kernel_initializer=kernel_initializer,
          kernel_regularizer=kernel_regularizer)
model.add(1,
          kernel_initializer=kernel_initializer,
          kernel_regularizer=kernel_regularizer)

es = EarlyStopping(0.00009, 20)  # eps_GL and s_UP

# Choose the optimizer to use in the learning phase
model.setOptimizer(
    SGD(lr=0.83,        # learning rate
        momentum=0.9,   # alpha for the momentum
        nesterov=True,  # whether to use Nesterov momentum
        sigma=None      # sigma for the accelerated Nesterov
    ))

# Start the learning phase
model.fit(X_train,
          Y_train,
          epochs=600,
          # batch_size=31,
          validation_data=[X_test, Y_test],
          es=es,
          verbose=0)

# Once the model is trained, predictions can be made
# with the predict method
outputNet = model.predict(X_test)
```
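Since `predict` returns the network's continuous outputs, a binary task such as MONK can be scored by thresholding them. This is an illustrative post-processing step in plain NumPy, not part of the library's API:

```python
import numpy as np

# Hypothetical continuous outputs, as returned by model.predict(X_test)
output_net = np.array([[0.91], [0.12], [0.48], [0.77]])
y_test = np.array([[1], [0], [1], [1]])

# Threshold at 0.5 to obtain class labels, then compare with the targets
predictions = (output_net > 0.5).astype(int)
accuracy = np.mean(predictions == y_test)
print(accuracy)  # 0.75
```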
What's next? See USE_CASES.md for more examples with the High-Level API (Scikit-learn-like) and the Model Selection API; some example scripts can also be found here.
- Separate bias from the weight matrix for more clarity.
- Extend with JAX for GPU support.
- Fork it!
- Create your feature branch: `git checkout -b my-new-feature`
- Commit your changes: `git commit -am 'Add some feature'`
- Push to the branch: `git push origin my-new-feature`
- Submit a pull request :D
Alessandro Cudazzo - @alessandrocuda - [email protected]
Giulia Volpi - [email protected]
Project Link: https://github.com/alessandrocuda/ISANet
This library is free software; you can redistribute it and/or modify it under the terms of the MIT license.
- MIT license
- Copyright 2019 © Alessandro Cudazzo - Giulia Volpi