From 0758eb9cb19b1cf3ed8764226713ecba8aac94c8 Mon Sep 17 00:00:00 2001
From: Ashwin Bharambe
Date: Sat, 23 Nov 2024 00:03:42 -0800
Subject: [PATCH] Try new llama stack image

---
 README.md                                  | 2 ++
 docs/source/distributions/configuration.md | 3 +--
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 03c1de987..fb307a642 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,5 @@
+Llama Stack
+
 # Llama Stack
 
 [![PyPI version](https://img.shields.io/pypi/v/llama_stack.svg)](https://pypi.org/project/llama_stack/)
diff --git a/docs/source/distributions/configuration.md b/docs/source/distributions/configuration.md
index 64c00a7ac..2b05c493b 100644
--- a/docs/source/distributions/configuration.md
+++ b/docs/source/distributions/configuration.md
@@ -3,7 +3,6 @@
 The Llama Stack runtime configuration is specified as a YAML file. Here is a simplied version of an example configuration file for the Ollama distribution:
 
 ```{dropdown} Sample Configuration File
-:closed:
 
 ```yaml
 version: 2
@@ -85,6 +84,6 @@ models:
     provider_id: ollama
     provider_model_id: null
 ```
-A Model is an instance of a "Resource" (see [Concepts](../concepts)) and is associated with a specific inference provider (in this case, the provider with identifier `ollama`). This is an instance of a "pre-registered" model. While we always encourage the clients to always register models before using them, some Stack servers may come up a list of "already known and available" models.
+A Model is an instance of a "Resource" (see [Concepts](../concepts/index)) and is associated with a specific inference provider (in this case, the provider with identifier `ollama`). This is an instance of a "pre-registered" model. While we encourage clients to register models before using them, some Stack servers may come up with a list of "already known and available" models.
 What's with the `provider_model_id` field? This is an identifier for the model inside the provider's model catalog.
 Contrast it with `model_id` which is the identifier for the same model for Llama Stack's purposes. For example, you may want to name "llama3.2:vision-11b" as "image_captioning_model" when you use it in your Stack interactions. When omitted, the server will set `provider_model_id` to be the same as `model_id`.
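The renaming described in that last hunk can be sketched as a hypothetical model entry in the run configuration YAML (the `image_captioning_model` name is the illustrative one from the prose, not a real model identifier):

```yaml
models:
  - metadata: {}
    # Name clients use in Llama Stack interactions
    model_id: image_captioning_model
    # Which inference provider serves this model
    provider_id: ollama
    # The same model's name inside the provider's own catalog;
    # if omitted (null), the server reuses model_id here
    provider_model_id: "llama3.2:vision-11b"
```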