# Grading Notes

Grading Notes is a Python package that leverages Large Language Models (LLMs) as automated judges for evaluating AI-generated answers against human-written grading criteria. The repo is based on the awesome post by Databricks. The idea is to guide LLMs with simple grading notes rather than providing full ground-truth answers, thereby lowering the cost of creating ground-truth data.

## Key Features

- **LLM-powered Evaluation**: Use LLMs, guided by human-written grading notes, to evaluate AI-generated answers.
- **Flexible AI Providers**: Support for multiple LLM providers, including Anthropic's Claude and OpenAI's GPT models.
- **Bulk Processing**: Evaluate multiple questions and answers efficiently using CSV files.

## Installation

You can install the Grading Notes package using pip:

```bash
pip install grading-notes
```

## Quick Start

```python
from grading_notes import GradingNote, evaluate, get_judge

# set the ANTHROPIC_API_KEY and OPENAI_API_KEY environment variables

# create a judge
judge = get_judge(provider="anthropic", model="claude-3-5-sonnet-20240620")

# create a grading note
grade_note = GradingNote(
    question="What is the capital of France?",
    grading_note="Answer is 'Paris'. Accept case-insensitive variations."
)

# evaluate an answer
result = evaluate(judge=judge, grading_note=grade_note, answer="Paris")
print(result)  # True
```

## A more complex example

Here is a more involved example inspired by the Databricks post (the answer was generated by Gemini):

```python
grade_note = GradingNote(
    question="How do I drop all tables in a Unity Catalog schema?",
    grading_note=(
        "The response should contain steps to get all table names then "
        "drop each of them. Alternatively the response can suggest dropping "
        "the entire schema with risks explained. The response should not "
        "treat tables as views."
    ),
)

answer = """Unfortunately, Unity Catalog doesn't currently provide a direct
way to drop all tables within a schema with a single command. You'll need to
list the tables in the schema and then drop them individually.

Here's a breakdown of how you could approach this using Databricks SQL:

1. List the Tables in the Schema using Python notebook

You can use the SHOW TABLES command to get a list of tables within your schema:

%sql
SHOW TABLES IN your_db_name.your_schema_name;

2. Drop Each Table
You can use the DROP TABLE command to drop each table:

tables = spark.sql("SHOW TABLES IN your_db_name.your_schema_name").collect()

for table in tables:
    table_name = table["tableName"]
    spark.sql(f"DROP TABLE your_db_name.your_schema_name.{table_name}")
"""

result = evaluate(judge=judge, grading_note=grade_note, answer=answer)
print(result)  # True
```

## Main Components

- `evaluate`: Evaluate a single answer against a grading note.
- `evaluate_from_csv`: Evaluate multiple questions and answers from a CSV file.
- `GradingNote`: Represents the grading criteria for a specific question.
- `Judge`: Represents the judge client for a given AI provider.
- `Evaluation`: Represents the evaluation result (Good or Bad).
- `get_judge`: Create a `Judge` for a given AI provider.
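
To see how these pieces fit together on the failing path, here is a minimal sketch that evaluates an answer violating the grading note. Given the Good/Bad semantics of `Evaluation` above and the `True` results in the earlier examples, the expected result is `False`; this particular example is illustrative, not taken from the package's test suite:

```python
from grading_notes import GradingNote, evaluate, get_judge

judge = get_judge(provider="anthropic", model="claude-3-5-sonnet-20240620")

grade_note = GradingNote(
    question="What is the capital of France?",
    grading_note="Answer is 'Paris'. Accept case-insensitive variations."
)

# an answer that does not satisfy the grading note
result = evaluate(judge=judge, grading_note=grade_note, answer="Lyon")
print(result)  # expected: False (the answer fails the grading criteria)
```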

## CSV Evaluation

You can evaluate multiple questions and answers using a CSV file:

```python
from grading_notes import get_judge, evaluate_from_csv

judge = get_judge(provider="openai", model="gpt-4-turbo-preview")
results = evaluate_from_csv(judge=judge, csv_file="path/to/your/csv_file.csv")
```

The CSV file should have the columns `question`, `grading_note`, and `answer`.
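
For reference, a small input file with the three required columns can be produced with Python's standard `csv` module; the file name and rows below are illustrative:

```python
import csv

# two example rows with the columns evaluate_from_csv expects
rows = [
    {
        "question": "What is the capital of France?",
        "grading_note": "Answer is 'Paris'. Accept case-insensitive variations.",
        "answer": "paris",
    },
    {
        "question": "What is 2 + 2?",
        "grading_note": "Answer is '4'. Accept the word 'four'.",
        "answer": "five",
    },
]

with open("evals.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["question", "grading_note", "answer"])
    writer.writeheader()
    writer.writerows(rows)
```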

## Langchain integration

You can integrate Grading Notes with Langchain's custom string evaluators. Here's how to build a custom evaluator in Langchain using Grading Notes:

```python
from typing import Any, Optional

from langchain.evaluation import StringEvaluator

from grading_notes import GradingNote, evaluate, get_judge


class GradingNotesEvaluator(StringEvaluator):
    """Evaluate predictions using Grading Notes."""

    def __init__(
        self,
        provider: str = "anthropic",
        model: str = "claude-3-5-sonnet-20240620",
    ):
        self.judge = get_judge(provider=provider, model=model)

    @property
    def requires_input(self) -> bool:
        # the question is passed as Langchain's `input`
        return True

    @property
    def requires_reference(self) -> bool:
        # the grading note is passed as Langchain's `reference`
        return True

    def _evaluate_strings(
        self,
        *,
        prediction: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
        **kwargs: Any,
    ) -> dict:
        if not input or not reference or not prediction:
            msg = (
                "'input' (question), 'reference' (grading note) and "
                "'prediction' (answer being evaluated) are *all* required."
            )
            raise ValueError(msg)

        grading_note = GradingNote(question=input, grading_note=reference)
        result = evaluate(
            judge=self.judge,
            grading_note=grading_note,
            answer=prediction,
        )
        return {"score": result}


# Usage
evaluator = GradingNotesEvaluator()
result = evaluator.evaluate_strings(
    prediction="Paris",
    reference="Answer is 'Paris'. Accept case-insensitive variations.",
    input="What is the capital of France?",
)
print(result)  # {'score': True}
```

## Customization

The repo currently supports Anthropic and OpenAI through the `instructor` library.
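
For context, `instructor` wraps a provider client so that chat completions are parsed directly into a Pydantic model, which is the general mechanism behind structured LLM judgments. Below is a minimal sketch of that pattern; the `Verdict` model and prompt are illustrative and are not the package's internal schema:

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel


class Verdict(BaseModel):
    # illustrative response schema; the package's internal model may differ
    good: bool


# wrap the provider client so create() returns a parsed Verdict
client = instructor.from_openai(OpenAI())

verdict = client.chat.completions.create(
    model="gpt-4-turbo-preview",
    response_model=Verdict,
    messages=[
        {"role": "user", "content": "Grade this answer against the note: ..."},
    ],
)
print(verdict.good)
```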

## Environment Variables

Make sure to set the following environment variables:

- `ANTHROPIC_API_KEY`: Your Anthropic API key
- `OPENAI_API_KEY`: Your OpenAI API key
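
If you prefer setting them from Python (for example, in a notebook) rather than your shell, something like the following works; the key values are placeholders:

```python
import os

# set only the key(s) for the provider(s) you use; values are placeholders
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-key"
os.environ["OPENAI_API_KEY"] = "your-openai-key"
```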

## License

This project is licensed under the Apache 2.0 License.

## Contributing

We welcome contributions! Please see our Contributing Guide for more details.