Guide readme fix #552

Merged
merged 4 commits into from
Nov 30, 2024
203 changes: 110 additions & 93 deletions docs/zero_to_hero_guide/README.md
@@ -1,145 +1,159 @@
# Llama Stack: from Zero to Hero

Llama Stack defines and standardizes the set of core building blocks needed to bring generative AI applications to market. These building blocks are presented as interoperable APIs, with a broad set of Providers supplying their implementations. The building blocks are assembled into Distributions that make it easy for developers to go from zero to production.

This guide will walk you through an end-to-end workflow with Llama Stack, using Ollama as the inference provider and ChromaDB as the memory provider. Please note that the steps for configuring your provider and distribution will vary slightly depending on the services you use; however, the user experience remains the same. This is the power of Llama Stack.

If you're looking for more specific topics, we have a [Zero to Hero Guide](#next-steps) that covers everything from Tool Calling to Agents in detail. Feel free to skip to the end to explore the advanced topics you're interested in.

> If you'd prefer not to set up a local server, explore our notebook on [tool calling with the Together API](Tool_Calling101_Using_Together's_Llama_Stack_Server.ipynb). This notebook will show you how to leverage together.ai's Llama Stack Server API, allowing you to get started with Llama Stack without the need for a locally built and running server.

## Table of Contents
1. [Setup and run ollama](#setup-ollama)
2. [Install Dependencies and Set Up Environment](#install-dependencies-and-set-up-environment)
3. [Build, Configure, and Run Llama Stack](#build-configure-and-run-llama-stack)
4. [Test with llama-stack-client CLI](#test-with-llama-stack-client-cli)
5. [Test with curl](#test-with-curl)
6. [Test with Python](#test-with-python)
7. [Next Steps](#next-steps)

---

## Setup ollama

1. **Download Ollama App**:
- Go to [https://ollama.com/download](https://ollama.com/download).
- Follow the instructions for your OS. For example, on macOS, download and unzip `Ollama-darwin.zip`.
- Run the `Ollama` application.

1. **Download the Ollama CLI**:
Ensure you have the `ollama` command line tool by downloading and installing it from the same website.

1. **Start ollama server**:
Open the terminal and run:
```bash
ollama serve
```
1. **Run the model**:

Open the terminal and run:
```bash
ollama run llama3.2:3b-instruct-fp16 --keepalive -1m
```
**Note**:
- The models currently supported by Llama Stack are listed [here](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/remote/inference/ollama/ollama.py#L43).
- `--keepalive -1m` keeps the model loaded in memory indefinitely. Otherwise, Ollama frees the memory after a while and you would have to run `ollama run` again.
- An optional Python check that the server and model are ready is shown below.
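
Before moving on, you can optionally confirm that the Ollama server is up and the model has been pulled. This is a minimal sketch, not part of the official guide; it assumes Ollama is listening on its default port `11434` and uses Ollama's `/api/tags` endpoint, which lists locally available models:

```python
# Optional check: list the models that the local Ollama server is serving.
# Assumes the Ollama server started above is listening on localhost:11434.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"

with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as resp:
    tags = json.load(resp)

# You should see llama3.2:3b-instruct-fp16 among the entries.
for model in tags.get("models", []):
    print(model["name"])
```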

---

## Install Dependencies and Set Up Environment

1. **Create a Conda Environment**:
Create a new Conda environment with Python 3.10:
```bash
conda create -n ollama python=3.10
```
Activate the environment:
```bash
conda activate ollama
```

2. **Install ChromaDB**:
Install `chromadb` using `pip`:
```bash
pip install chromadb
```

3. **Run ChromaDB**:
Start the ChromaDB server:
```bash
chroma run --host localhost --port 8000 --path ./my_chroma_data
```

4. **Install Llama Stack**:
Open a new terminal and install `llama-stack`:
```bash
conda activate ollama
pip install llama-stack==0.0.55
```
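
Before moving on, you can optionally confirm that the dependencies installed above are working. This is a minimal sketch, not part of the official guide; it assumes the ChromaDB server from step 3 is still running on `localhost:8000`:

```python
# Optional check: confirm ChromaDB is reachable and llama-stack is installed.
# Assumes the ChromaDB server from step 3 is running on localhost:8000.
from importlib.metadata import version

import chromadb

chroma_client = chromadb.HttpClient(host="localhost", port=8000)
print("ChromaDB heartbeat:", chroma_client.heartbeat())
print("llama-stack version:", version("llama-stack"))
```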

---

## Build, Configure, and Run Llama Stack

1. **Build the Llama Stack**:
Build the Llama Stack using the `ollama` template:
```bash
llama stack build --template ollama --image-type conda
```
**Expected Output:**
```
...
Build Successful! Next steps:
1. Set the environment variables: LLAMASTACK_PORT, OLLAMA_URL, INFERENCE_MODEL, SAFETY_MODEL
2. `llama stack run /Users/<username>/.llama/distributions/llamastack-ollama/ollama-run.yaml`
```

2. **Set the ENV variables by exporting them to the terminal**:
```bash
export OLLAMA_URL="http://localhost:11434"
export LLAMA_STACK_PORT=5051
export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
export SAFETY_MODEL="meta-llama/Llama-Guard-3-1B"
```

3. **Run the Llama Stack**:
Run the stack with the command provided in the build output earlier:
```bash
llama stack run ollama \
--port $LLAMA_STACK_PORT \
--env INFERENCE_MODEL=$INFERENCE_MODEL \
--env SAFETY_MODEL=$SAFETY_MODEL \
--env OLLAMA_URL=$OLLAMA_URL
```
Note: Every time you run a new model with `ollama run`, you will need to restart the Llama Stack server. Otherwise it won't see the new model.

The server will start and listen on `http://localhost:5051`.
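
If you'd like to confirm the server is actually listening before running the client tests below, here is a small, non-authoritative sketch. It assumes the `LLAMA_STACK_PORT=5051` value exported in this guide; adjust the port if you chose a different one:

```python
# Optional check: verify something is listening on the Llama Stack port.
# Assumes LLAMA_STACK_PORT=5051 as exported earlier in this guide.
import socket

HOST, PORT = "localhost", 5051

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(2.0)
    reachable = sock.connect_ex((HOST, PORT)) == 0

print("Llama Stack server is up" if reachable else f"Nothing is listening on port {PORT}")
```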

---

## Test with `llama-stack-client` CLI
After setting up the server, open a new terminal window and install the llama-stack-client package.

1. Install the llama-stack-client package
```bash
conda activate ollama
pip install llama-stack-client
```
2. Configure the CLI to point to the llama-stack server.
```bash
llama-stack-client configure --endpoint http://localhost:5051
```
**Expected Output:**
```bash
Done! You can now use the Llama Stack Client CLI with endpoint http://localhost:5051
```
3. Test the CLI by running inference:
```bash
llama-stack-client inference chat-completion --message "Write me a 2-sentence poem about the moon"
```
**Expected Output:**
```bash
ChatCompletionResponse(
completion_message=CompletionMessage(
content='Here is a 2-sentence poem about the moon:\n\nSilver crescent shining bright in the night,\nA beacon of wonder, full of gentle light.',
role='assistant',
stop_reason='end_of_turn',
tool_calls=[]
),
logprobs=None
)
```

## Test with `curl`

After setting up the server, open a new terminal window and verify it's working by sending a `POST` request using `curl`:

```bash
curl http://localhost:$LLAMA_STACK_PORT/inference/chat_completion \
-H "Content-Type: application/json" \
-d '{
"model": "Llama3.2-3B-Instruct",
@@ -168,15 +182,16 @@ You can check the available models with the command `llama-stack-client models list`

---

## Test with Python

You can also interact with the Llama Stack server using a simple Python script. Below is an example:

### 1. Activate Conda Environment and Install Required Python Packages
The `llama-stack-client` library offers robust and efficient Python methods for interacting with the Llama Stack server.

```bash
conda activate ollama
pip install llama-stack-client
```

Note: the client library is installed by default when you install the server library.
@@ -188,6 +203,8 @@ touch test_llama_stack.py

### 3. Create a Chat Completion Request in Python

In `test_llama_stack.py`, write the following code:

```python
from llama_stack_client import LlamaStackClient

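# The rest of this script is collapsed in the diff view above. As a rough,
# unofficial sketch (not the exact code from the README), a chat completion
# request with the client typically looks something like this. Depending on
# your llama-stack-client version, the model argument may be named `model`
# or `model_id`; adjust it to match your installed version.

client = LlamaStackClient(base_url="http://localhost:5051")

response = client.inference.chat_completion(
    messages=[
        {"role": "system", "content": "You are a friendly assistant."},
        {"role": "user", "content": "Write a two-sentence poem about llamas."},
    ],
    model_id="meta-llama/Llama-3.2-3B-Instruct",  # matches INFERENCE_MODEL exported earlier
)

print(response.completion_message.content)
```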