Hi guys,
I am trying to understand the implementation and architecture of the network model.
There is an option to use Embedding layers to represent the MIDI files. I know their use as vector representations of words in a given vocabulary, but I don't understand how the MIDI samples can take the role of words. We don't want the generator to use the exact same vocabulary as a language model would, right? Otherwise it won't create new note patterns from learned low-level features.
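To make my mental model concrete, here is a minimal sketch of what I understand the embedding approach to look like. It's in PyTorch purely as an assumption (I don't know which framework the repo actually uses), and the event vocabulary is hypothetical:

```python
import torch
import torch.nn as nn

# Hypothetical vocabulary: each distinct MIDI event (e.g. a pitch/duration
# pair) gets an integer id, exactly like words in an NLP vocabulary.
event_to_id = {("C4", "quarter"): 0, ("E4", "quarter"): 1, ("G4", "half"): 2}
vocab_size = len(event_to_id)

# The Embedding layer learns one dense vector per event id.
embedding = nn.Embedding(num_embeddings=vocab_size, embedding_dim=16)

# A short "sentence" of music: a sequence of event ids.
sequence = torch.tensor([[0, 1, 2]])  # shape: (batch=1, time=3)
vectors = embedding(sequence)         # shape: (1, 3, 16)
print(vectors.shape)                  # torch.Size([1, 3, 16])
```

If that is roughly right, then the "vocabulary" is just the set of discrete events seen in training, and any novelty would come from generating new sequences of those events rather than new events themselves. Is that the idea?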
I'm sure there is something I don't quite get, but I can't figure out what, so I'm asking for your help :).
Thanks.