The goal of ollama is to wrap the ollama API and provide infrastructure to be used within {gptstudio}.
You can install the development version of ollama like so:

``` r
pak::pak("calderonsamuel/ollama")
```
The user is in charge of installing ollama and providing its networking configuration. We recommend using the official Docker image, which greatly simplifies this process.
The following command downloads the default ollama image and runs an "ollama" container exposing port 11434:

``` sh
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
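Once the container is up, you can check that the API is reachable; the server's root endpoint replies with a short plain-text status message:

``` sh
# Query the root endpoint of the local Ollama server.
# Assumes the container above is running and port 11434 is exposed.
curl http://localhost:11434
```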
By default, this package will use http://localhost:11434 as the API host URL. Although we provide methods to change this, only do so if you are absolutely sure of what it means.
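For illustration, here is a rough sketch of the kind of HTTP request this package wraps, written directly against the public Ollama REST API using {httr2}. This is not the package's own interface, and "llama2" is a placeholder for any model you have already pulled:

``` r
library(httr2)

# Sketch of a completion request to the Ollama REST API
# (not this package's own interface).
resp <- request("http://localhost:11434/api/generate") |>
  req_body_json(list(
    model  = "llama2",   # placeholder: any model pulled with `ollama pull`
    prompt = "Why is the sky blue?",
    stream = FALSE       # ask for a single JSON response instead of a stream
  )) |>
  req_perform()

resp_body_json(resp)$response
```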
This is a basic example which shows you how to solve a common problem:

``` r
library(ollama)
## basic example code
```