TextMachina


Unifying strategies to build MGT datasets in a single framework

TextMachina is a modular and extensible Python framework, designed to aid in the creation of high-quality, unbiased datasets to build robust models for MGT-related tasks such as:

  • 🔎 Detection: detect whether a text has been generated by an LLM.
  • 🕵️‍♂️ Attribution: identify which LLM generated a text.
  • 🚧 Boundary detection: find the boundary between human and generated text.
  • 🎨 Mixcase: ascertain whether specific text spans are human-written or generated by LLMs.

TextMachina provides a user-friendly pipeline that abstracts away the inherent intricacies of building MGT datasets:

  • 🦜 LLM integrations: easily integrate any LLM provider. Currently, TextMachina supports LLMs from Anthropic, Cohere, OpenAI, Google Vertex AI, Amazon Bedrock, AI21, Azure OpenAI, models deployed on vLLM and TRT inference servers, and any model from HuggingFace deployed either locally or remotely through Inference API or Inference Endpoints. See models to implement your own LLM provider.

  • ✍️ Prompt templating: just write your prompt template with placeholders and let TextMachina's extractors fill the template and prepare a prompt for an LLM (see the example template after this list). See extractors to implement your own extractors and learn more about the placeholders for each extractor.

  • 🔒 Constrained decoding: automatically infer LLM decoding hyper-parameters from the human texts to improve the quality and reduce the biases of your MGT datasets. See constrainers to implement your own constrainers.

  • 🛠️ Post-processing: post-processing functions aimed at improving the quality of any MGT dataset and preventing common biases and artifacts. See postprocessing to add new post-processing functions.

  • 🌈 Bias mitigation: TextMachina is built with bias prevention in mind and helps you throughout the pipeline to avoid introducing spurious correlations in your datasets.

  • 📊 Dataset exploration: explore the generated datasets and quantify their quality with a set of metrics. See metrics and interactive to implement your own metrics and visualizations.
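For example, a prompt template for a news-generation setup could look like the snippet below. The {summary} and {entities} placeholders are filled by extractors; this mirrors the template used in the Quick Tour example later in this README:

Write a news article whose summary is '{summary}' using the entities: {entities}

Article: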

The following diagram depicts TextMachina's pipeline.

TextMachina Pipeline

🔧 Installation


You can install all the dependencies with pip:

pip install text-machina[all]

or only with the dependencies for a specific LLM provider or for development (see setup.py):

pip install text-machina[anthropic,dev]

You can also install directly from source:

pip install .[all]

If you're planning to modify the code for specific use cases, you can install TextMachina in development mode:

pip install -e .[dev]

👀 Quick Tour


Once installed, you are ready to use TextMachina for building MGT datasets, either through the CLI or programmatically.

📟 Using the CLI

The first step is to define a YAML configuration file or a directory tree containing YAML files. Read the examples/learning files to learn how to define configurations using different providers and extractors for different tasks, and take a look at examples/use_cases to see configurations for specific use cases.

Then, we can call the explore and generate endpoints of TextMachina's CLI. The explore endpoint lets you inspect a small dataset generated with a specific configuration through an interactive interface. For instance, suppose we want to check what an MGT detection dataset generated from XSum news articles with gpt-3.5-turbo-instruct looks like, and compute some metrics:

text-machina explore --config-path etc/examples/xsum_gpt-3-5-turbo-instruct_openai.yaml \
--task-type detection \
--metrics-path etc/metrics.yaml \
--max-generations 10

CLI interface showing generated and human text for detection

Great! The dataset looks good: no artifacts, no biases, and high-quality text with this configuration. Let's now generate a whole dataset for MGT detection using that config file. The generate endpoint allows you to do that:

text-machina generate --config-path etc/examples/xsum_gpt-3-5-turbo-instruct_openai.yaml \
--task-type detection

A run name will be assigned to your execution and TextMachina will cache results behind the scenes. If your run is interrupted at any point, you can use --run-name <run-name> to recover the progress and continue generating your dataset.
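For example, assuming the run name printed in the logs was greedy-bear (a placeholder value here), the interrupted generation above could be resumed with:

text-machina generate --config-path etc/examples/xsum_gpt-3-5-turbo-instruct_openai.yaml \
--task-type detection \
--run-name greedy-bear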

👩‍💻 Programmatically

You can also use TextMachina programmatically. To do that, instantiate a dataset generator by calling get_generator with a Config object, and run its generate method. The Config object must contain the input, model, and generation configs, together with the task type for which the MGT dataset will be generated. Let's replicate the previous experiment programmatically:

from text_machina import get_generator
from text_machina import Config, InputConfig, ModelConfig

input_config = InputConfig(
    domain="news",
    language="en",
    quantity=10,
    random_sample_human=True,
    dataset="xsum",
    dataset_text_column="document",
    dataset_params={"split": "test"},
    template=(
        "Write a news article whose summary is '{summary}' "
        "using the entities: {entities}\n\nArticle:"
    ),
    extractor="combined",
    extractors_list=["auxiliary.Auxiliary", "entity_list.EntityList"],
    max_input_tokens=256,
)

model_config = ModelConfig(
    provider="openai",
    model_name="gpt-3.5-turbo-instruct",
    api_type="COMPLETION",
    threads=8,
    max_retries=5,
    timeout=20,
)

generation_config = {"temperature": 0.7, "presence_penalty": 1.0}

config = Config(
    input=input_config,
    model=model_config,
    generation=generation_config,
    task_type="detection",
)
generator = get_generator(config)
dataset = generator.generate()
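The result can then be stored or post-processed as needed. Assuming generate() returns a Hugging Face datasets.Dataset (an assumption here, based on TextMachina's reliance on Hugging Face datasets), it could be saved for later use:

dataset.save_to_disk("xsum_detection_gpt-3-5-turbo-instruct")  # hypothetical output path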

🛠️ Supported tasks


TextMachina can generate datasets for MGT detection, attribution, boundary detection, and mixcase detection:

CLI interface showing generated and human text for detection

Example from a detection task.

CLI interface showing generated and human text for attribution

Example from an attribution task.

CLI interface showing generated and human text for boundary

Example from a boundary detection task.

CLI interface showing generated and human text for sentence-based mixcase

Example from a mixcase task (tagging), interleaving generated sentences with human texts.

CLI interface showing generated and human text for word-span-based mixcase

Example from a mixcase task (tagging), interleaving generated word spans with human texts.

However, users can also build datasets for tasks not included in TextMachina by leveraging the provided task types. For instance, datasets for mixcase classification can be built from datasets for mixcase tagging, or datasets for mixcase attribution can be built using the generation model name as the label (see the sketch below).
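As an illustration, here is a minimal sketch of the first conversion, deriving a mixcase classification dataset from a mixcase tagging one. The column names (text, labels) and the label values are hypothetical; adapt them to the schema of your generated dataset.

from datasets import Dataset

# Toy tagging dataset; in practice this would be the output of a
# TextMachina mixcase (tagging) run. Column names are assumptions.
tagging_dataset = Dataset.from_dict(
    {
        "text": ["First sentence. Second sentence."],
        "labels": [["human", "generated"]],  # one tag per span
    }
)

def tagging_to_classification(example):
    # A text is "mixed" when it interleaves human and generated spans;
    # otherwise it is purely human or purely generated.
    span_labels = set(example["labels"])
    example["class_label"] = "mixed" if len(span_labels) > 1 else span_labels.pop()
    return example

classification_dataset = tagging_dataset.map(tagging_to_classification)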

🔄 Common Use Cases


There is a set of common use cases with TextMachina. Here's how to carry them out using the explore and generate endpoints.

| Use case | Command |
|---|---|
| Explore a dataset of 10 samples for MGT detection and show metrics | `text-machina explore --config-path config.yaml --task-type detection --max-generations 10 --metrics-path metrics.yaml` |
| Explore an existing dataset for MGT detection and show metrics | `text-machina explore --config-path config.yaml --run-name greedy-bear --task-type detection --metrics-path metrics.yaml` |
| Generate a dataset for MGT detection | `text-machina generate --config-path config.yaml --task-type detection` |
| Generate a dataset for MGT attribution | `text-machina generate --config-path config.yaml --task-type attribution` |
| Generate a dataset for boundary detection | `text-machina generate --config-path config.yaml --task-type boundary` |
| Generate a dataset for mixcase detection | `text-machina generate --config-path config.yaml --task-type mixcase` |
| Generate a dataset for MGT detection using config files in a directory tree | `text-machina generate --config-path configs/ --task-type detection` |

💾 Caching

TextMachina caches each dataset it generates through the CLI endpoints under a run name. The run name is printed as the last message in the logs, and can be passed with --run-name <run-name> to continue from interrupted runs. The default cache directory used by TextMachina is /tmp/text_machina_cache; it can be changed by setting the TEXT_MACHINA_CACHE_DIR environment variable to a different path.
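For instance, to keep the cache on a different disk (the path below is just an example):

export TEXT_MACHINA_CACHE_DIR=/data/text_machina_cache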

⚠️ Notes and Limitations


  • Although you can use any kind of extractor to build boundary detection datasets, it is highly recommended to use the sentence_prefix or word_prefix extractors with a random number of sentences/words to avoid biases that lead boundary detection models to just count sentences or words.

  • TextMachina attempts to remove disclosure patterns (e.g., "As an AI language model ...") with a limited set of regular expressions, but these patterns depend on the LLM and the language. We strongly recommend first exploring your dataset to look for such biases, and modifying the post-processing or the prompt template accordingly to remove them.

  • Generating multilingual datasets is not well supported yet. For now, we recommend generating an independent dataset for each language and combining them outside TextMachina (see the sketch after this list).

  • Generating machine-generated code datasets is not well supported yet.
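A minimal sketch of that combination step, assuming each per-language dataset has been saved to disk as a Hugging Face dataset (the paths and the language column below are hypothetical):

from datasets import concatenate_datasets, load_from_disk

# Hypothetical paths to datasets generated by independent per-language runs.
english = load_from_disk("runs/detection_en")
spanish = load_from_disk("runs/detection_es")

# Track the language explicitly so the merged dataset stays traceable.
english = english.add_column("language", ["en"] * len(english))
spanish = spanish.add_column("language", ["es"] * len(spanish))

multilingual = concatenate_datasets([english, spanish]).shuffle(seed=42)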

📖 Citation


@misc{sarvazyan2024textmachina,
      title={TextMachina: Seamless Generation of Machine-Generated Text Datasets}, 
      author={Areg Mikael Sarvazyan and José Ángel González and Marc Franco-Salvador},
      year={2024},
      eprint={2401.03946},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

🤝 Contribute


Feel free to contribute to TextMachina by raising an issue.

Please install and use the dev-tools for correctly formatting the code when contributing to this repo.

๐Ÿญ Commercial Purposes


Please contact [email protected] and [email protected] if you are interested in using TextMachina for commercial purposes.
