ValueError: `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
To Reproduce
1. Quantize a HuggingFace model with bitsandbytes (for example, 8-bit).
2. Call draw_graph on the quantized model; it fails with the ValueError above.
Expected behavior
To be able to draw the quantized graph.
Another point: I can successfully generate the graph using torchviz's make_dot.
Both repositories produce a Graphviz representation of the PyTorch autograd graph, so for now I use make_dot for quantized models.