This repository has been archived by the owner on Jul 24, 2024. It is now read-only.

Commit 998fdb2: Update README.md
Authored by xEricCardozo on Oct 5, 2023
1 parent: 7538f66
Showing 1 changed file (README.md) with 5 additions and 5 deletions.
```diff
@@ -19,12 +19,12 @@ The API is currently inspired by PyTorch, with one notable difference: when you
 int main() {
 
     // You can use enums to set the gradient requirement:
-    net::Tensor x({2,3}, net::requires_gradient::False); x.fill({1,2,3,4,5,6});
-    net::Tensor w({4,3}, net::requires_gradient::True); w.fill({1,2,-3,4,5,6,7,8,-9,10,11,-12});
+    net::Tensor<float> x({2,3}, net::requires_gradient::False); x.fill({1,2,3,4,5,6});
+    net::Tensor<float> w({4,3}, net::requires_gradient::True); w.fill({1,2,-3,4,5,6,7,8,-9,10,11,-12});
 
     // Or use just a boolean. Whatever you prefer.
-    net::Tensor b({1,4}, true); b.fill({1,2,3,4});
-    net::Tensor I({2,4}, false); I.fill(1);
+    net::Tensor<float> b({1,4}, true); b.fill({1,2,3,4});
+    net::Tensor<float> I({2,4}, false); I.fill(1);
 
     x = net::function::linear(x,w,b);
     x = net::function::relu(x);
@@ -61,7 +61,7 @@ struct Autoencoder : public net::Model<Autoencoder> {
         net::layer::LogSoftmax(1/*axis*/)
     };
 
-    net::Tensor forward(net::Tensor x) {
+    net::Tensor<float> forward(net::Tensor<float> x) {
         x = encoder(x);
         x = decoder(x);
         return x;
```
