This repository has been archived by the owner on Jul 24, 2024. It is now read-only.

Commit b6d01f8 (parent 08775c6): 1 changed file with 16 additions and 12 deletions.
# CaberNet C++ Deep Learning Library

## Description

This is a prototype of a full C++ deep learning library inspired by the PyTorch API. It has one notable difference: when you perform an operation, the program doesn't execute it immediately. Instead, it allocates a node in a graph and waits for you to call the perform() method on the result (like TensorFlow, but with a dynamic graph). This lets the programmer compose operations without making new memory allocations.
There is an example [here](examples/model.cpp) using the digit MNIST dataset.

In the future, I plan to rewrite the backend using static polymorphism, avoiding the virtual calls that prevent compiler optimizations.
Important 18/10: Thank you for all the support! I will be inactive developing features for a few months while I research how to implement quantization and how to load TorchScript modules into the computational graph for CPU inference. I just found out how to create a memory pool for quantized tensors using low-level C, and I have some interesting ideas involving template metaprogramming.

Anyway, every contribution is welcome, since I'm planning to reuse the user interface created for this library. If you are not sure about your contribution, just push it into the in-process folder and I will see how I can merge it into the project. We need implementations for convolutions, optimizers, criterions, broadcasting mechanisms, etc.
## To build the project

Please see the [contributing](.github/CONTRIBUTING.md#building-the-library) guide for more information.
The Eigen library is used for performing all operations. The code is also backend-agnostic, meaning you can write your own custom CUDA implementations if needed.
## Join the Discord:

https://discord.gg/aDxCxYEm

If the link doesn't work, just send me an email: [email protected].
## Acknowledgements

Thanks for all your work!