Where is a Demo for the ChatLaw2-MoE model described in the Readme? #86

Open
MartialTerran opened this issue Nov 6, 2024 · 0 comments

MartialTerran commented Nov 6, 2024

Hi. This is a great project! Where is the public demo, and where are the weights and MoE_model.py for the Mixture-of-Experts ChatLaw2-MoE model? Since the MoE model has a moderate size (? parameters), can you publish a pure-Python MoE_model.py that does not depend on "from transformers import ..."? Providing a standalone MoE_model.py (independent of the bulky transformers library) would facilitate local installation and operation with the user's own RAG dataset (of statutes and caselaw), including edge-device operation.
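For what it's worth, here is a minimal sketch of the kind of standalone module I have in mind, depending only on torch rather than the transformers library. The 4-expert / top-2 routing, the SwiGLU-style experts, and all dimensions below are my own assumptions for illustration, not the actual ChatLaw2-MoE design:

```python
# Illustrative sketch only -- NOT the ChatLaw2-MoE implementation.
# Assumptions: 4 experts, top-2 token routing, SwiGLU-style feed-forward experts.
# Depends only on torch, not on the transformers library.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExpertFFN(nn.Module):
    """One expert: a standard gated feed-forward block."""
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.w_gate = nn.Linear(d_model, d_ff, bias=False)
        self.w_up = nn.Linear(d_model, d_ff, bias=False)
        self.w_down = nn.Linear(d_ff, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))


class MoELayer(nn.Module):
    """Mixture of expert FFNs with learned top-k token routing."""
    def __init__(self, d_model: int, d_ff: int, n_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([ExpertFFN(d_model, d_ff) for _ in range(n_experts)])
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> flatten tokens for routing
        tokens = x.reshape(-1, x.shape[-1])
        logits = self.router(tokens)                             # (n_tokens, n_experts)
        weights, expert_ids = logits.topk(self.top_k, dim=-1)    # pick top-k experts per token
        weights = F.softmax(weights, dim=-1)

        out = torch.zeros_like(tokens)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = expert_ids[:, slot] == e                  # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(tokens[mask])
        return out.reshape(x.shape)


if __name__ == "__main__":
    layer = MoELayer(d_model=512, d_ff=2048)
    demo = torch.randn(2, 16, 512)   # (batch, seq, d_model)
    print(layer(demo).shape)         # torch.Size([2, 16, 512])
```

Even a reference file at roughly this level of abstraction, with the real routing and expert definitions filled in, would make it much easier to load the published weights locally.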

Relevant context, from the survey section “Role of LLMs in the Future of Legal Practice”:

“LLMs’ transformative potential in the legal field is evident from their impressive performance in legal exams. GPT-4 scored in the 90th percentile on the Uniform Bar Examination [61], and ChatGPT autonomously passed four law school final exams at a top law school [383]. These achievements showcase the significant impact of AI language models on legal practice. … LLMs [augmented by document Retrieval] can serve as a valuable tool for initial research, explanations, and improving efficiency in legal practice. ”
LLM_Survey_2015_onwards_arxiv.pdf available at:
https://www.techrxiv.org/doi/full/10.36227/techrxiv.23589741.v6

Can you confirm that the MoE ChatLaw model that exceeded GPT-4's performance on the "Unified Qualification Exam" bar exam had only about 28B (4x7B) parameters (rough arithmetic sketched below)? And did that MoE model have both English and Chinese language tokens and comprehension/pretraining, or only English, corresponding to the text of the "Unified Qualification Exam" bar exam?
ChatLaw, an open-source MoE LLM, boasts even higher performance on bar exams than GPT-4: “Our MoE model outperforms GPT-4 in the Lawbench and Unified Qualification Exam for Legal Professionals by 7.73% in accuracy” [using a model having only _?__B parameters, “Based on the InternLM architecture with a 4x7B Mixture of Experts (MoE) design”] https://github.com/PKU-YuanGroup/ChatLaw/blob/main/README.md
https://arxiv.org/abs/2306.16092 https://huggingface.co/papers/2306.16092
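
For reference, the back-of-the-envelope arithmetic behind the 28B figure in my question above, assuming four independent 7B experts and top-2 routing (both assumptions; shared attention/embedding layers would reduce the stored total below 4 x 7B):

```python
# Rough parameter arithmetic for a hypothetical "4x7B" MoE -- assumptions, not confirmed numbers.
n_experts = 4
params_per_expert = 7e9     # "7B" per expert, if each expert is a full 7B block
top_k = 2                   # assumed top-2 token routing

total_params = n_experts * params_per_expert   # parameters that must be stored (~28B)
active_params = top_k * params_per_expert      # parameters touched per token (~14B)

print(f"stored ~{total_params / 1e9:.0f}B, active per token ~{active_params / 1e9:.0f}B")
```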
