This repository has been archived by the owner on Jul 24, 2024. It is now read-only.

Update README.md
xEricCardozo authored Oct 19, 2023
1 parent 08775c6 commit b6d01f8
# CaberNet C++ Deep Learning Library


## Description

This is a prototype of a full C++ deep learning library inspired by the PyTorch API, with one notable difference: when you perform an operation, the program doesn't execute it immediately. Instead, it allocates a node in a graph and waits for you to call the perform() method on the result (like TensorFlow, but with a dynamic graph). This allows the programmer to chain operations without making new memory allocations.
There is an example [here](examples/model.cpp) using the digit MNIST dataset.

In the future, I plan to rewrite the backend using static polymorphism to avoid the virtual calls that disable compiler optimizations.

Important 18/10: Thank you for all the support! I will be inactive on feature development for a few months while researching how to implement quantization and how to load TorchScript into the computational graph for CPU inference. I just found out how to create a memory pool for quantized tensors using low-level C, and had some interesting ideas using template metaprogramming.

In any case, every contribution is welcome, since I'm planning to reuse the user interface created for this library. If you are not sure about your contribution, just push it into the in-process folder and I will see how I can merge it into the project. We need implementations for convolutions, optimizers, criterions, broadcasting mechanisms, etc.

## To build the project

Please see the [contributing](.github/CONTRIBUTING.md#building-the-library) guide for more information.

Example: see [examples/model.cpp](examples/model.cpp) for the full program.

The Eigen library is used for all operations. The code is also backend-agnostic, meaning you can write your own custom CUDA implementations if needed.


## Join the Discord:

https://discord.gg/aDxCxYEm

If the link doesn't work, just send me an email: [email protected].



## Acknowledgements

Thanks for all your work!:
