I downloaded the GitHub repo and placed it on a localhost server.
I opened the page and clicked the "Load GPT2 117Mb" model button.
I've been waiting for a few minutes now, with the output stuck on `Loading token embeddings...`. Is that normal behaviour?
Console output:

```
Loading model from folder: gpt2
Loading params...
Warning: Buffer size calc result exceeds GPU limit, are you using this value for a tensor size? 50257 768 1 154389504
bufferSize @ model.js:510
loadParameters @ model.js:298
await in loadParameters (async)
loadModel @ model.js:276
initialize @ model.js:32
await in initialize (async)
loadModel @ gpt/:105
onclick @ gpt/:23
Params: {n_layer: 12, n_head: 12, n_embd: 768, vocab_size: 50257, n_ctx: 1024, …}
Loading token embeddings...
```