.. _models_builtin_wizardcoder_python_v1_0:

=======================
WizardCoder-Python-v1.0
=======================

- **Context Length:** 100000
- **Model Name:** wizardcoder-python-v1.0
- **Languages:** en
- **Abilities:** generate, chat

Specifications
^^^^^^^^^^^^^^

Model Spec 1 (pytorch, 7 Billion)
+++++++++++++++++++++++++++++++++

- **Model Format:** pytorch
- **Model Size (in billions):** 7
- **Quantizations:** 4-bit, 8-bit, none
- **Model ID:** WizardLM/WizardCoder-Python-7B-V1.0

Execute the following command to launch the model. Remember to replace ``${quantization}`` with your
chosen quantization method from the options listed above::

   xinference launch --model-name wizardcoder-python-v1.0 --size-in-billions 7 --model-format pytorch --quantization ${quantization}

.. note::

   4-bit quantization is not supported on macOS.
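
The same launch can also be performed programmatically with Xinference's Python
client. The following is a minimal sketch, assuming a running supervisor at the
default endpoint ``http://localhost:9997``; the endpoint, quantization, and
prompt are placeholders to adapt to your deployment::

   from xinference.client import Client

   # Assumes a local Xinference supervisor; adjust the endpoint if needed.
   client = Client("http://localhost:9997")

   # Launch the 7B pytorch variant; pick a quantization from the list above.
   model_uid = client.launch_model(
       model_name="wizardcoder-python-v1.0",
       model_format="pytorch",
       size_in_billions=7,
       quantization="none",
   )

   # The handle exposes the "generate" and "chat" abilities listed above;
   # generate() returns a raw completion dict.
   model = client.get_model(model_uid)
   print(model.generate("Write a Python function that reverses a string."))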
Model Spec 2 (pytorch, 13 Billion)
++++++++++++++++++++++++++++++++++

- **Model Format:** pytorch
- **Model Size (in billions):** 13
- **Quantizations:** 4-bit, 8-bit, none
- **Model ID:** WizardLM/WizardCoder-Python-13B-V1.0

Execute the following command to launch the model. Remember to replace ``${quantization}`` with your
chosen quantization method from the options listed above::

   xinference launch --model-name wizardcoder-python-v1.0 --size-in-billions 13 --model-format pytorch --quantization ${quantization}

.. note::

   4-bit quantization is not supported on macOS.
Model Spec 3 (pytorch, 34 Billion)
++++++++++++++++++++++++++++++++++

- **Model Format:** pytorch
- **Model Size (in billions):** 34
- **Quantizations:** 4-bit, 8-bit, none
- **Model ID:** WizardLM/WizardCoder-Python-34B-V1.0

Execute the following command to launch the model. Remember to replace ``${quantization}`` with your
chosen quantization method from the options listed above::

   xinference launch --model-name wizardcoder-python-v1.0 --size-in-billions 34 --model-format pytorch --quantization ${quantization}

.. note::

   4-bit quantization is not supported on macOS.
Model Spec 4 (ggufv2, 7 Billion)
++++++++++++++++++++++++++++++++

- **Model Format:** ggufv2
- **Model Size (in billions):** 7
- **Quantizations:** Q2_K, Q3_K_L, Q3_K_M, Q3_K_S, Q4_0, Q4_K_M, Q4_K_S, Q5_0, Q5_K_M, Q5_K_S, Q6_K, Q8_0
- **Model ID:** TheBloke/WizardCoder-Python-7B-V1.0-GGUF
- **File Name Template:** wizardcoder-python-7b-v1.0.{quantization}.gguf

Execute the following command to launch the model. Remember to replace ``${quantization}`` with your
chosen quantization method from the options listed above::

   xinference launch --model-name wizardcoder-python-v1.0 --size-in-billions 7 --model-format ggufv2 --quantization ${quantization}
Model Spec 5 (ggufv2, 13 Billion)
+++++++++++++++++++++++++++++++++

- **Model Format:** ggufv2
- **Model Size (in billions):** 13
- **Quantizations:** Q2_K, Q3_K_L, Q3_K_M, Q3_K_S, Q4_0, Q4_K_M, Q4_K_S, Q5_0, Q5_K_M, Q5_K_S, Q6_K, Q8_0
- **Model ID:** TheBloke/WizardCoder-Python-13B-V1.0-GGUF
- **File Name Template:** wizardcoder-python-13b-v1.0.{quantization}.gguf
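
The template names the concrete weight file fetched for the quantization you
pick; for example, choosing ``Q4_K_M`` here would resolve to
``wizardcoder-python-13b-v1.0.Q4_K_M.gguf``.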

Execute the following command to launch the model. Remember to replace ``${quantization}`` with your
chosen quantization method from the options listed above::

   xinference launch --model-name wizardcoder-python-v1.0 --size-in-billions 13 --model-format ggufv2 --quantization ${quantization}
Model Spec 6 (ggufv2, 34 Billion)
+++++++++++++++++++++++++++++++++

- **Model Format:** ggufv2
- **Model Size (in billions):** 34
- **Quantizations:** Q2_K, Q3_K_L, Q3_K_M, Q3_K_S, Q4_0, Q4_K_M, Q4_K_S, Q5_0, Q5_K_M, Q5_K_S, Q6_K, Q8_0
- **Model ID:** TheBloke/WizardCoder-Python-34B-V1.0-GGUF
- **File Name Template:** wizardcoder-python-34b-v1.0.{quantization}.gguf

Execute the following command to launch the model. Remember to replace ``${quantization}`` with your
chosen quantization method from the options listed above::

   xinference launch --model-name wizardcoder-python-v1.0 --size-in-billions 34 --model-format ggufv2 --quantization ${quantization}
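
Once launched, any of the specs above can also be queried through Xinference's
OpenAI-compatible REST API. The following is a minimal sketch using ``requests``;
the endpoint and the model UID (printed by ``xinference launch``) are assumptions
to replace with your own values::

   import requests

   ENDPOINT = "http://localhost:9997"     # assumed default supervisor endpoint
   MODEL_UID = "wizardcoder-python-v1.0"  # assumed UID; use the one printed at launch

   # Chat completion against the OpenAI-compatible endpoint.
   response = requests.post(
       f"{ENDPOINT}/v1/chat/completions",
       json={
           "model": MODEL_UID,
           "messages": [
               {"role": "user", "content": "Write a bubble sort in Python."},
           ],
           "max_tokens": 512,
       },
   )
   print(response.json()["choices"][0]["message"]["content"])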