Better Handling of Large Methods #12
Out of curiosity, why not leverage GPT-4? It seems a far more capable model, and it's available via the API, I believe.
You may also want to explore DevGPT: https://www.devgpt.com/. It looks interesting, as it's GPT-4 but also aware of your Git repo's context.
@chrisgliddon I looked into GPT-4; it makes small file processing a lot slower, but it is a pretty viable alternative. Theoretically, to try it one would just need to update the YAML. Part of this ticket is to experiment with this: maybe use gpt-4 for large blocks of code and gpt-3.5 for small ones?
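
For illustration, that YAML change might look something like the sketch below. The key names here are hypothetical; the project's actual config schema may differ:

chatgpt:
  # Hypothetical keys -- the real config schema may differ
  small_block_model: gpt-3.5-turbo    # fast and cheap for short methods
  large_block_model: gpt-4            # stronger, slower model for big methods
  large_block_threshold_tokens: 3500  # switch point, measured in tokens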
Less about speed and more about comprehensiveness. Whenever I've experimented with 4, it's outperformed 3.5 in coherence. I get more useful responses from 4.
Why not instead build a private GPT for the specific task (https://www.devzero.io/create-private-chatgpt-devzero) and put some $$ on a few servers? Or am I talking rubbish?
@jpcordovae I think it is an option if someone wants to experiment with it. Generally …
There are better ways of improving the performance without splitting up your huge methods into smaller prompts. There are multiple versions of ChatGPT 3.5: the default only has 4,096 tokens, but there is also a gpt-3.5-turbo-16k model that supports 16,384 tokens.

As for ChatGPT 4, if you can get access to the GPT-4 model API, it supports a max of 8,192 tokens, and there is also a gpt-4-32k model that supports 32,768 tokens.

Tokens aside, ChatGPT 4 will give you much better results than 3.5 when it comes to coding problems, especially if you do proper prompt engineering.

Example prompt: Please help me translate/convert this Unity code into its Unreal Engine equivalent:

You can also specify system prompts that let the LLM understand the task/goal even if the user simply pastes in Unity code without any context. Here is an example script using system prompts:

import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")
# Few-shot chat completion: a system prompt sets the task, and one
# user/assistant exchange shows the model the expected output format
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": "You are a helpful LLM tasked with helping game developers who need help migrating/porting all their Unity projects to Unreal Engine or Godot Engine. If you are unsure of something or the context is unclear, let the user know and ask for clarification."
        },
        {
            "role": "user",
            "content": "Please help me convert this unity code to the unreal engine equivalent:\n```\n#include \"UnityToAirSimCalls.h\"\n\nvoid StartServerThread(std::string sim_mode_name, int port_number)\n{\n key = new SimHUD(sim_mode_name, port_number);\n key->BeginPlay();\n}\n```"
        },
        {
            "role": "assistant",
            "content": "In order to convert this Unity code to Unreal Engine, you would need to create a new C++ class and include the necessary headers. Here's an example of how you can convert the provided code:\n\n1. Create a new C++ class in Unreal Engine editor and name it `ServerThread`.\n\n2. Open the generated `ServerThread.cpp` file and include the appropriate headers for your project:\n\n```cpp\n#include \"ServerThread.h\"\n#include \"SimHUD.h\"\n```\n\n3. Update the `StartServerThread` function in `ServerThread.cpp` as follows:\n\n```cpp\nvoid ServerThread::StartServerThread(FString sim_mode_name, int32 port_number)\n{\n    // Convert sim_mode_name to std::string\n    std::string std_sim_mode_name(TCHAR_TO_UTF8(*sim_mode_name));\n\n    // Create an instance of SimHUD\n    USimHUD* SimHUDInstance = NewObject<USimHUD>();\n\n    // BeginPlay as if in Unity\n    SimHUDInstance->BeginPlay(std_sim_mode_name, port_number);\n}\n```\n\nNow you can use the `StartServerThread` function in Unreal Engine by calling it from a Blueprint or other C++ classes. Don't forget to change the access specifiers based on your"
        }
    ],
    temperature=1,
    max_tokens=256,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)

# Print the assistant's reply
print(response.choices[0].message["content"])

Sources: https://help.openai.com/en/articles/7127966-what-is-the-difference-between-the-gpt-4-models
Using custom system instructions:
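
Tying these suggestions together: the choice of model could be driven by the size of the input. A minimal sketch, assuming your API key has access to the gpt-4 and gpt-3.5-turbo-16k models and using the tiktoken library to count tokens:

import tiktoken

def pick_model(code: str) -> str:
    # Count tokens the same way the chat API will bill them
    encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
    n_tokens = len(encoding.encode(code))
    # Thresholds leave headroom for prompt scaffolding and the reply,
    # since the context window covers prompt plus completion
    if n_tokens < 3000:
        return "gpt-3.5-turbo"      # 4,096-token context, fast and cheap
    if n_tokens < 6000:
        return "gpt-4"              # 8,192-token context, best coding quality
    return "gpt-3.5-turbo-16k"      # 16,384-token context for very large methods

The thresholds are arbitrary starting points and would need tuning against real method sizes.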
ChatGPT 3.5 has a limit of 4,097 tokens per translation, which means that huge methods will fail to translate. Maybe if a method body is larger than, say, 3,500 tokens (measured with ChatGptMixin::count_tokens), we can just comment out the entire body and paste it as-is in the destination file? Let's discuss better options on how to split the method.
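
A minimal sketch of that fallback, assuming a count_tokens helper equivalent to the ChatGptMixin::count_tokens mentioned above and a C++-style destination file (both are assumptions for illustration):

def translate_or_passthrough(method_body: str, count_tokens, translate) -> str:
    # Fallback: a body too large for one prompt is emitted verbatim,
    # commented out, so the developer can port it by hand later
    if count_tokens(method_body) > 3500:
        commented = "\n".join("// " + line for line in method_body.splitlines())
        return "// TODO: method too large for automatic translation\n" + commented
    return translate(method_body)

Splitting strategies (per-statement or per-block chunks that share context) could then replace the passthrough branch incrementally.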