Replies: 2 comments 3 replies
-
Would this be considered prompt engineering? Isn't it just extracting semantics from a given input and creating a structured output from it?
-
I found an easy, maybe naive, example to illustrate what correct prompting is about. If you do - or don't - show the LLM how to "reason", or at least what you expect it to do, say by giving an example, then it does - or doesn't - work. Besides this, the author also uses Instructor, the Elixir version of the Python Instructor. If I understand correctly, the idea is to parse the LLM's response into a schema for validity. Valid means that the response (an ffmpeg command) is actually executable; he tests it with… What I don't get yet is what happens when the changeset is invalid. Is there a loop back? Some kind of response sent back to the LLM saying "hey, this is not valid, please RTF prompt and answer correctly next time 😀. Try again!", repeated until the response looks valid, at least as far as the schema is concerned (which is already a huge advancement)?
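For my own understanding, here is a minimal sketch of what I think the Instructor setup looks like. The FfmpegCommand schema and its validation are hypothetical, not the author's actual code (the real validation apparently checks that the command is executable, which I'm not reproducing here), and max_retries seems to be exactly the "loop back" I'm asking about: when the changeset is invalid, Instructor feeds the validation errors back to the LLM and asks again.

```elixir
defmodule FfmpegCommand do
  # Hypothetical schema: a single string field holding the
  # generated ffmpeg command line.
  use Ecto.Schema
  use Instructor.Validator

  @primary_key false
  embedded_schema do
    field(:command, :string)
  end

  @impl true
  def validate_changeset(changeset) do
    # Simplified stand-in for the "is it executable?" check described above:
    # only verify that the response looks like an ffmpeg invocation.
    Ecto.Changeset.validate_format(changeset, :command, ~r/^ffmpeg\s/,
      message: "must be an ffmpeg command"
    )
  end
end

# When the changeset is invalid, Instructor sends the errors back to the
# model and retries, up to max_retries attempts.
{:ok, %FfmpegCommand{command: cmd}} =
  Instructor.chat_completion(
    model: "gpt-3.5-turbo",
    response_model: FfmpegCommand,
    max_retries: 3,
    messages: [
      %{
        role: "user",
        content: "Give me the ffmpeg command to convert input.mov to a 720p mp4."
      }
    ]
  )
```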
-
I was wrong about prompt engineering. Ignorance, probably... this made me change my mind.
You type a desired ffmpeg-like command in a text input, and the closest matching real ffmpeg command is run against your input file. Stunning...
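If I understand the flow correctly, the last step is presumably just running the validated command against the uploaded file. A rough, hypothetical sketch of that step (not the app's actual code):

```elixir
# Pretend this string came back from the structured-output step above.
cmd = "ffmpeg -i input.mov -vf scale=-2:720 -c:v libx264 -c:a aac output.mp4"

# Naive whitespace split; quoted arguments would need more careful parsing.
[exe | args] = String.split(cmd)

# Run the command and capture its output and exit status.
{output, exit_code} = System.cmd(exe, args, stderr_to_stdout: true)

case exit_code do
  0 -> IO.puts("ffmpeg succeeded:\n" <> output)
  _ -> IO.puts("ffmpeg failed with status #{exit_code}:\n" <> output)
end
```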