Seems like language models do a really decent job of de-obfuscating the generated code (the same holds for other obfuscation tools, not Carbon in particular). I didn't test it in any depth, but GPT-3.5 gives great hints when fed a function from the example folder: https://chat.openai.com/share/0dd8d626-4de1-4de4-af79-d9acbd66c7b5

So, be careful when you use it to protect important stuff. For larger code bases, at least for now, the limited context length of LLMs may offer a bit of protection.
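For a sense of why this works: obfuscation that renames identifiers and mangles layout still leaves literals, control flow, and call structure intact, which is exactly the signal an LLM reads. Here's a toy sketch in Python (hypothetical, not Carbon's actual output or technique) of the kind of renamed code a model can still see through:

```python
# Hypothetical example of identifier-renaming obfuscation.
# The names are meaningless, but the loop structure and arithmetic
# still make the intent (a factorial) easy for an LLM to recover.

def _0x1a(_0x2b):
    _0x3c = 1
    for _0x4d in range(2, _0x2b + 1):
        _0x3c *= _0x4d
    return _0x3c

print(_0x1a(5))  # 120 -- behavior is unchanged, so intent is recoverable
```

The behavior (and thus the meaning) is unchanged by the renaming, and that invariance is what the model latches onto.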