Persona Initiative - Long Live the LLM #297
persona support added to llamacpp: TigreGotico/ovos-solver-plugin-llamacpp@40aca66
The persona service repo is up: https://github.com/OpenVoiceOS/ovos-persona
I have initial code for a memory module for the solver plugins. It keeps context and prepares prompts to feed to an LLM; it is also a first attempt at integration with online services.

```python
c = ChatHistory("this assistant is {persona} and is called {name}",
                persona="evil", name="mycroft")
c.user_says("hello!")
c.llm_says("hello user")
c.user_says("what is your name?")
c.llm_says("my name is {name}")
print(c.prompt)
# this assistant is evil and is called mycroft
# User: hello!
# AI: hello user
# User: what is your name?
# AI: my name is mycroft
```
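For anyone reading along, here is a minimal sketch of how a class with that API could be implemented. This is only a reconstruction from the example above, not the actual plugin code; only `user_says`, `llm_says`, `prompt`, and the keyword-argument placeholders are taken from the example.

```python
class ChatHistory:
    """Minimal sketch (assumed implementation) of the chat-memory API shown above."""

    def __init__(self, initial_prompt, **persona_vars):
        self.variables = dict(persona_vars)
        # Fill persona placeholders such as {persona} and {name} into the system prompt.
        self.lines = [initial_prompt.format(**self.variables)]

    def user_says(self, utterance):
        self.lines.append(f"User: {utterance}")

    def llm_says(self, utterance):
        # LLM output may itself contain placeholders, e.g. "my name is {name}".
        self.lines.append(f"AI: {utterance.format(**self.variables)}")

    @property
    def prompt(self):
        # The full conversation so far, ready to feed back to the LLM.
        return "\n".join(self.lines)


c = ChatHistory("this assistant is {persona} and is called {name}",
                persona="evil", name="mycroft")
c.user_says("hello!")
c.llm_says("hello user")
c.user_says("what is your name?")
c.llm_says("my name is {name}")
print(c.prompt)
```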
```python
def get_wolfram(query):
    # return text or a dict; a dict also populates self.variables
    return {"wolfram_answer": "42"}

c = InstructionHistory("you are an AGI, follow the prompts")
c.instruct("what is the meaning of life", get_wolfram)
print(c.variables)  # {'wolfram_answer': '42'}
c.llm_says("the answer is {wolfram_answer}")
print(c.prompt)
# you are an AGI, follow the prompts
#
# ## INSTRUCTION
#
# what is the meaning of life
#
# ## DATA
#
# wolfram_answer: 42
#
# ## RESPONSE
#
# the answer is 42
```
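As with `ChatHistory`, a minimal sketch of how `InstructionHistory` could work, reconstructed purely from the example above (the real implementation may differ):

```python
class InstructionHistory:
    """Minimal sketch (assumed implementation) of the instruction-memory API shown above."""

    def __init__(self, initial_prompt):
        self.variables = {}
        self.header = initial_prompt
        self.sections = []

    def instruct(self, instruction, handler=None):
        # The optional handler can return text or a dict;
        # a dict also populates self.variables, as in the example.
        data = handler(instruction) if handler else None
        block = f"## INSTRUCTION\n\n{instruction}"
        if isinstance(data, dict):
            self.variables.update(data)
            lines = "\n".join(f"{k}: {v}" for k, v in data.items())
            block += f"\n\n## DATA\n\n{lines}"
        self.sections.append(block)

    def llm_says(self, utterance):
        # Substitute any collected variables into the LLM response.
        self.sections.append(f"## RESPONSE\n\n{utterance.format(**self.variables)}")

    @property
    def prompt(self):
        return "\n\n".join([self.header] + self.sections)


def get_wolfram(query):
    return {"wolfram_answer": "42"}

c = InstructionHistory("you are an AGI, follow the prompts")
c.instruct("what is the meaning of life", get_wolfram)
c.llm_says("the answer is {wolfram_answer}")
print(c.prompt)
```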
I was going to try a bit of this myself over my vacation this week, then finally found your work here. From what I see reading here, the workflow is something like:

I also understand this isn't finished and isn't well documented yet, but as I read it: the persona component you've built talks to the system message bus and acts as a complete fallback when nothing else can respond, then goes out to the NeonAI solver through the fallback or via the direct skill (in theory). Right now the selection of which method to use is based on the fallback skill. This basically means the device/system responds with a single result from the LLM, using whatever context/personality has been configured and passed to it? The main things I see that aren't implemented yet are memory and chatbot-style communication, i.e. where you could continually argue with the same LLM with some hope of it keeping track of the conversation. My original plan was to build something crude on top of langchain as a fallback skill, since it has more pieces pre-built for this, but looking at your work I think it will end up doing more than I would have accomplished directly myself anyway. I'll give all this a shot to set up, see if I can find anything in your work that could use some help, and at the very least try to contribute some initial documentation.
Love the work so far! I don't see a roadmap item/checkbox for
It is there for the solver plugins. The ovos-persona repo will probably have a Dockerfile too for standalone usage, but mainly it will be part of core as a new service in the intent pipeline; you can see it is already listed here: #293
Tracking progress here: OpenVoiceOS/ovos-persona#4
Give OpenVoiceOS some sass with Persona!
Phrases not explicitly handled by other skills will be run by Persona, so nearly every interaction will have some response. But be warned, OpenVoiceOS might become a bit obnoxious...
Large language models are having their Stable Diffusion moment. The next few months will be filled with new useful, and potentially controversial applications, pushing incumbents and startups to innovate once again.
The last weeks were filled with exciting announcements for open-source LLMs:
Instruction fine-tuning is powerful for language models because it allows for targeted training on a specific task or domain, resulting in improved performance on that task and enabling transfer learning to other tasks.
Until recently, powerful LLMs were only accessible through APIs: OpenAI's, and now PaLM's. The open-source community has since shown that even the smallest LLaMA (7B) achieves GPT Davinci-like performance.
OpenVoiceOS is already adopting these technologies; the current stretch goal of our fundraiser aims to bring the Persona project to its first stable release.
https://www.gofundme.com/f/openvoiceos
Persona
Core personality
- defined in mycroft.conf
Skills personality
- new file format: `.jsonl` (jsonl format info: https://jsonlines.org/)
- the `.jsonl` file is used if it exists, else the old `.dialog` file
- mycroft.conf / the current active persona selects which `.jsonl` file is loaded
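The `.jsonl` format referenced above is JSON Lines: one JSON object per line. The actual persona schema isn't documented here, so the keys below are purely illustrative; only the one-object-per-line structure is taken from the format spec.

```python
import json

# Hypothetical persona .jsonl content; the "utterance"/"response" keys are
# assumptions for illustration, not the actual OVOS schema.
jsonl_text = "\n".join([
    '{"utterance": "hello", "response": "greetings, human"}',
    '{"utterance": "what is your name", "response": "I am Mycroft"}',
])

# JSON Lines parsing: decode each non-empty line independently.
entries = [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]
print(entries[0]["response"])  # greetings, human
```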
"Solver" plugins
these plugins automatically handle language support and auto-translation; they provide the base for a "spoken answers" API
each persona loads several of those plugins sorted by priority, similarly to the fallback-skill mechanism; this makes it possible to check internet sources in order of reliability/functionality
create your own chatbot persona by choosing which plugins you install
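The priority-ordered solver loop described above could be sketched like this. This is illustrative only, under the assumption of a `priority` attribute and a `spoken_answer` method; it is not the actual plugin interface.

```python
def ask_persona(query, solvers):
    """Query solver plugins in priority order, like the fallback-skill
    mechanism: the first solver that produces an answer wins."""
    for solver in sorted(solvers, key=lambda s: s.priority):
        answer = solver.spoken_answer(query)
        if answer:
            return answer
    return None  # no solver could answer


class DummySolver:
    """Stand-in for a real solver plugin (names are assumptions)."""

    def __init__(self, priority, answer=None):
        self.priority = priority
        self.answer = answer

    def spoken_answer(self, query):
        return self.answer


solvers = [DummySolver(50, "low-priority answer"),
           DummySolver(10, None),            # most reliable source, but no answer
           DummySolver(20, "first usable answer")]
print(ask_persona("what is the meaning of life", solvers))  # first usable answer
```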
self hosted
"Chatbot" Skills
"increase sarcasm by 20%"
"enable story teller persona"
Inspiration
MycroftAI wanted to start an initiative called ‘Persona’ - a tool to help build distinct personalities for Mycroft. Think Sassy Mycroft, Polite Mycroft and so on.
The technology just wasn't there yet, and the backend implementation was never finished or made public, but the beta skill is still available (non-functional): https://github.com/MycroftAI/skill-fallback-persona