Replies: 4 comments 2 replies
-
Same here. I've been testing local LLMs (mostly 7Bs and 13Bs, occasionally ~30B, fine-tuned for code; most recently Llama 3) for about a month, and everything is a mess. As @phalexo mentions, testing with GPT-4 was great. Obviously I don't expect 7B local LLMs to match the quality of the big-gun LLMs, but judging by HF LLM leaderboard stars like StarCoder, DeepSeek, Llama 3 Instruct Coder, etc., I was honestly expecting a very different outcome. Maybe I'm using the wrong LLM parameters? I use LM Studio but try not to mess with the parameters too much.
-
@phalexo in my case Ollama-Pilot-CasaOs
-
The question is not whether it technically works, but whether it can actually produce useful code. When I looked at it, gpt-pilot couldn't do anything useful. I spent many hours trying to use it with Llama 3 coder models. Maybe the models are better now.
On Sat, Oct 12, 2024, 6:32 AM Fahad Usman wrote:
@phalexo <https://github.com/phalexo> in my case Ollama-Pilot-CasaOs <https://github.com/hqnicolas/Ollama-Pilot-CasaOs/tree/main>. I'm using a Docker container to run GPT Pilot against Ollama. If you like my automation, leave a star ⭐
is it working with ollama?
-
@phalexo https://github.com/hqnicolas/n3backend this is my config file:

```jsonc
// Each agent can use a different model or configuration. The default, as before, is GPT-4 Turbo
// for most tasks and GPT-3.5 Turbo to generate file descriptions. The agent name here should match
// the Python class name.
"agent": {
  "default": {
    "provider": "openai",
    "model": "dolphin-2.9.1-llama-3-70b",
    "temperature": 0.5
  },
  "architect": {
    "provider": "openai",
    "model": "llama3.2:70b-text-fp16",
    "temperature": 0.5
  },
  "error_handler": {
    "provider": "openai",
    "model": "llama3.2:70b-text-fp16",
    "temperature": 0.5
  },
  "legacy_handler": {
    "provider": "openai",
    "model": "llama3.2:70b-text-fp16",
    "temperature": 0.5
  },
  "task_completer": {
    "provider": "openai",
    "model": "llama3.2:70b-text-fp16",
    "temperature": 0.5
  },
  "executor": {
    "provider": "openai",
    "model": "dolphin-2.9.1-llama-3-70b",
    "temperature": 0.5
  },
  "mixins": {
    "provider": "openai",
    "model": "dolphin-2.9.1-llama-3-70b",
    "temperature": 0.5
  },
  "tech_lead": {
    "provider": "openai",
    "model": "llama3.2:70b-text-fp16",
    "temperature": 0.5
  },
  "bug_hunter": {
    "provider": "openai",
    "model": "dolphin-2.9.1-llama-3-70b",
    "temperature": 0.5
  },
  "external_docs": {
    "provider": "openai",
    "model": "dolphin-2.9.1-llama-3-70b",
    "temperature": 0.5
  },
  "orchestrator": {
    "provider": "openai",
    "model": "dolphin-2.9.1-llama-3-70b",
    "temperature": 0.5
  },
  "tech_writer": {
    "provider": "openai",
    "model": "dolphin-2.9.1-llama-3-70b",
    "temperature": 0.5
  },
  "code_monkey": {
    "provider": "openai",
    "model": "dolphin-2.9.1-llama-3-70b",
    "temperature": 0.5
  },
  "human_input": {
    "provider": "openai",
    "model": "dolphin-2.9.1-llama-3-70b",
    "temperature": 0.5
  },
  "problem_solver": {
    "provider": "openai",
    "model": "dolphin-2.9.1-llama-3-70b",
    "temperature": 0.5
  },
  "troubleshooter": {
    "provider": "openai",
    "model": "dolphin-2.9.1-llama-3-70b",
    "temperature": 0.5
  },
  "convo": {
    "provider": "openai",
    "model": "dolphin-2.9.1-llama-3-70b",
    "temperature": 0.5
  },
  "importer": {
    "provider": "openai",
    "model": "dolphin-2.9.1-llama-3-70b",
    "temperature": 0.5
  },
  "response": {
    "provider": "openai",
    "model": "dolphin-2.9.1-llama-3-70b",
    "temperature": 0.5
  },
  "developer": {
    "provider": "openai",
    "model": "gpt-4-turbo",
    "temperature": 0.5
  },
  "spec_writer": {
    "provider": "openai",
    "model": "o1-mini",
    "temperature": 0.5
  }
},
```
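Since the per-agent blocks in this config are almost entirely repetitive (same provider and temperature, only the model name varies), a small helper can generate them from a compact mapping. This is a hypothetical sketch, not part of gpt-pilot; `build_agent_config` and its defaults are my own invention for illustration.

```python
import json

# Hypothetical helper (not part of gpt-pilot): expand a compact
# {agent_name: model_name} mapping into the repetitive per-agent
# blocks shown above, all sharing the same provider and temperature.
def build_agent_config(models, provider="openai", temperature=0.5):
    return {
        name: {
            "provider": provider,
            "model": model,
            "temperature": temperature,
        }
        for name, model in models.items()
    }

agents = build_agent_config({
    "default": "dolphin-2.9.1-llama-3-70b",
    "architect": "llama3.2:70b-text-fp16",
    "developer": "gpt-4-turbo",
    "spec_writer": "o1-mini",
})
print(json.dumps({"agent": agents}, indent=2))
```

This also makes it easy to swap every dolphin agent to a different model in one place instead of editing twenty blocks by hand.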
-
I've tried it with GPT-4-turbo-preview and it was making progress, even with some grave errors like modifying working code into junk.
But all my attempts to use local coding models via Ollama simply do not work. They loop, produce bad JSON, and seem to do a lot of useless and potentially harmful stuff.
The GPT-4 API costs quite a bit of money. I ran up a bill of about $90 just for the login/register UI and three basic UI screens,
without even touching the backend, except the login/register table.
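On the "bad JSON" failure mode: small local models often wrap their JSON answer in prose or markdown fences instead of returning it bare. A minimal salvage helper can sometimes recover the object anyway. This is a hypothetical sketch, not gpt-pilot's actual parser, and it is naive about braces inside string values.

```python
import json

# Hypothetical salvage helper (not gpt-pilot's actual parser): scan a raw
# model response for the first balanced {...} span that parses as JSON,
# skipping any surrounding prose or ``` fences. Naive about "{" or "}"
# characters inside JSON string values.
def extract_json(text):
    start = text.find("{")
    while start != -1:
        depth = 0
        for i in range(start, len(text)):
            if text[i] == "{":
                depth += 1
            elif text[i] == "}":
                depth -= 1
                if depth == 0:
                    try:
                        return json.loads(text[start : i + 1])
                    except json.JSONDecodeError:
                        break  # balanced but invalid; try the next "{"
        start = text.find("{", start + 1)
    return None  # no parsable JSON object found

raw = 'Sure! Here is the plan:\n```json\n{"tasks": ["login", "register"]}\n```\nLet me know.'
print(extract_json(raw))
```

A retry loop that re-prompts the model when this returns None is the usual next step, though with 7B-class models even that often stalls.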