Use Ollama for local Llama inference from Alfred.
Ollama must be installed and running on your Mac, and at least one model needs to be installed through the Ollama CLI.
Install the required model (llama2):

```
ollama run llama2
```
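
To confirm the model downloaded correctly, you can list the models installed locally (`ollama list` is part of the standard Ollama CLI; the exact output columns may vary by version):

```
ollama list
```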
The Alfred keyword is `olla`, and you can use it to chat with Ollama.
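
Under the hood, a workflow like this would typically talk to Ollama's local HTTP API, which listens on port 11434 by default. As a minimal sketch (not necessarily how this workflow is implemented internally), you can reproduce a single chat turn from the terminal with `curl`:

```
# Ask the local llama2 model one question via Ollama's /api/generate endpoint.
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

With `"stream": false`, the server returns a single JSON object whose `response` field contains the model's answer.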
- Add support for multiple models (currently only one model, `llama2`, is supported); a sketch of model discovery follows this list.
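
Multi-model support could build on Ollama's model listing endpoint. As a hypothetical sketch, assuming Ollama's default local API, the workflow could query the installed models and offer them as Alfred choices:

```
# List locally installed models; the returned JSON has a "models" array
# with a "name" field for each entry (e.g. "llama2:latest").
curl http://localhost:11434/api/tags
```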