Hi. This is a great project! Where are the public demo, the weights, and MoE_model.py for the Mixture-of-Experts ChatLaw2-MoE model? Since the MoE model is of moderate size (roughly how many parameters?), could you publish a pure-Python MoE_model.py that does not depend on "from transformers import ..."? A standalone MoE_model.py (independent of the bulky transformers library) would make local installation and operation with a user's own RAG dataset of statutes and case law much easier, including operation on edge devices.
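To illustrate what I mean by "standalone": below is a minimal sketch, in plain PyTorch, of a top-2 routed MoE feed-forward block with no transformers dependency. This is not the ChatLaw2-MoE code (that is what this issue is asking for); the class name, dimensions, and routing scheme are my own illustrative assumptions.

```python
# Illustrative sketch only: a plain-PyTorch top-2 routed MoE feed-forward block,
# with no "from transformers import ..." dependency. NOT the ChatLaw2-MoE code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForward(nn.Module):
    def __init__(self, d_model=4096, d_ff=11008, n_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)  # token -> expert logits
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                               # x: (batch, seq, d_model)
        logits = self.router(x)                         # (batch, seq, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)  # keep top-k experts per token
        weights = F.softmax(weights, dim=-1)            # renormalize over selected experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e                 # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

if __name__ == "__main__":
    layer = MoEFeedForward(d_model=64, d_ff=128, n_experts=4, top_k=2)
    print(layer(torch.randn(2, 8, 64)).shape)           # torch.Size([2, 8, 64])
```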
As demonstrated in the survey section “Role of LLMs in the Future of Legal Practice”:
“LLMs’ transformative potential in the legal field is evident from their impressive performance in legal exams. GPT-4 scored in the 90th percentile on the Uniform Bar Examination [61], and ChatGPT autonomously passed four law school final exams at a top law school [383]. These achievements showcase the significant impact of AI language models on legal practice. … LLMs [augmented by document Retrieval] can serve as a valuable tool for initial research, explanations, and improving efficiency in legal practice. ”
LLM_Survey_2015_onwards_arxiv.pdf available at: https://www.techrxiv.org/doi/full/10.36227/techrxiv.23589741.v6
Can you confirm that the MoE ChatLaw model which exceeded GPT-4's performance on the "Unified Qualification Exam" bar exam had only about 28B (4x7B) parameters? Was that MoE model pretrained with both English and Chinese language tokens and comprehension, or only with the language corresponding to the text of the "Unified Qualification Exam"?
ChatLaw, an open-source MoE LLM, reports even higher performance on bar exams than GPT-4: “Our MoE model outperforms GPT-4 in the Lawbench and Unified Qualification Exam for Legal Professionals by 7.73% in accuracy” [using a model with only __?__B parameters, “Based on the InternLM architecture with a 4x7B Mixture of Experts (MoE) design”].
https://github.com/PKU-YuanGroup/ChatLaw/blob/main/README.md
https://arxiv.org/abs/2306.16092
https://huggingface.co/papers/2306.16092
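For context on the "about 28B" figure: here is a back-of-envelope count under the assumption that "4x7B" means four 7B-class experts. Whether attention and embedding weights are shared across experts (as in Mixtral-style designs, where only the feed-forward layers are replicated) is not stated in the README, so the shared fraction below is a placeholder assumption, not a ChatLaw number.

```python
# Rough parameter count for a "4x7B" MoE. shared_fraction is an ASSUMPTION
# (the non-FFN share of a 7B model: attention, embeddings, norms), not a ChatLaw figure.
dense_expert = 7e9
n_experts = 4
shared_fraction = 0.35

naive_total = n_experts * dense_expert                  # ~28B if nothing is shared
mixtral_style = (shared_fraction * dense_expert
                 + n_experts * (1 - shared_fraction) * dense_expert)

print(f"no sharing (upper bound): {naive_total / 1e9:.1f}B")    # 28.0B
print(f"shared non-FFN weights:   {mixtral_style / 1e9:.1f}B")  # ~20.7B under the assumption
```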