v0.6.0: LLaMa (Alpaca), Benchmark Util, T5 ILQL, Tests
The v0.6.0 release includes several new features, bug fixes, and overall improvements to the codebase. Here are the key changes:
## 📏 Benchmarking and Improved Unit Tests
This release introduces a new benchmark util to more easily track regressions in our training pipeline, along with improved unit tests with the help of the `hypothesis` package:
- [feat] Add benchmark tools by @reciprocated in #357
- Add `hypothesis` tests for ILQL and fix edge cases by @cat-state in #370
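To illustrate the style of testing this enables: a property-based test lets `hypothesis` generate many inputs and check an invariant on each. The helper below is a toy stand-in written for this sketch, not code from the trlx test suite.

```python
# Illustrative property-based test in the spirit of the new hypothesis tests.
# `trim_trailing_pad` is a hypothetical stand-in, not trlx's actual code.
from hypothesis import given, strategies as st

def trim_trailing_pad(tokens, pad_id=0):
    """Strip trailing pad tokens from a token list."""
    end = len(tokens)
    while end > 0 and tokens[end - 1] == pad_id:
        end -= 1
    return tokens[:end]

@given(st.lists(st.integers(min_value=0, max_value=100)))
def test_trim_is_idempotent(tokens):
    once = trim_trailing_pad(tokens)
    # Trimming twice must equal trimming once, and no trailing pads remain.
    assert trim_trailing_pad(once) == once
    assert not (once and once[-1] == 0)
```

Edge cases like an all-pad or empty sequence are exactly what `hypothesis` is good at surfacing automatically.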
## 🦙 LLaMa and Alpaca PPO/SFT Support
PPO support and examples for LLaMa are now available, and we’ve baked in an example of instruction fine-tuning models on the Alpaca dataset using our SFT trainer:
- [feat] Add LLaMa Model support for PPO by @PhungVanDuy in #375
- Add Alpaca by @cat-state in #400
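As a rough sketch of what the SFT path looks like (the exact config helpers and prompt template here are illustrative assumptions, and actual training needs trlx installed plus GPU resources, so treat this as a config fragment rather than a runnable script):

```python
# Hedged sketch: instruction fine-tuning via the SFT trainer.
# `default_sft_config` and the Alpaca-style prompt format are assumptions
# based on this release's config-class refactor and Alpaca example.
import trlx
from trlx.data.default_configs import default_sft_config

config = default_sft_config()
config.model.model_path = "gpt2"  # swap in a LLaMa checkpoint you have access to

# Alpaca-style (instruction, response) pairs flattened into plain text samples
samples = [
    "### Instruction:\nName three primary colors.\n"
    "### Response:\nRed, blue, and yellow.",
]

trainer = trlx.train(samples=samples, config=config)
```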
## 5️⃣ T5 ILQL Support
T5 models can now be fine-tuned with ILQL:
- Support ILQL for T5 model, Fix PPO T5 for refactored code by @PhungVanDuy in #290
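Since configs are now Python classes, pointing ILQL at a T5 checkpoint is a config-only change. A minimal sketch, assuming `default_ilql_config` as referenced elsewhere in these notes and a Flan-T5 checkpoint chosen purely for illustration:

```python
# Hedged config fragment: retarget the ILQL defaults at a T5-family model.
from trlx.data.default_configs import default_ilql_config

config = default_ilql_config()
config.model.model_path = "google/flan-t5-small"      # any T5-family checkpoint
config.tokenizer.tokenizer_path = "google/flan-t5-small"
```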
## Fixes
- Remove example usage of deprecated `trlx.train` dataset arg by @jon-tow in #331
- Remove unused `logit_mask` argument by @cat-state in #332
- [fix] Convert the rest of configs from `ymls` by @reciprocated in #346
- Fix `default_ilql_config` in notebook by @xu-song in #350
- hot-fix: update PPOConfig import in examples by @jon-tow in #352
- [fix] Update `AdaptiveKLController` with correct KL by @reciprocated in #361
- [fix] Drop `<eos>` from ILQL sample's phrases by @reciprocated in #362
- Fixes half `exp` not implemented error by @Dahoas in #363
- [fix] ILQL `total_steps` calculation when running distributed by @reciprocated in #374
- [fix] Split for validation by @hzwer in #369
- fix(docs): Update incorrect `PPORLElement` logprob tensor shape hint by @jon-tow in #377
- [fix] Enable HF downloads from a revision by @reciprocated in #382
- [fix] Fix ILQL head sync under ZeRO3 by @reciprocated in #387
- [fix] Preserve `<eos>` token and in-place it after trimming by @reciprocated in #401
- Nemo ILQL fixes by @cat-state in #404
## What's Changed
- Move to Python config classes instead of `ymls` by @cat-state in #306
- Add intermediate checkpointing to `accelerate` trainers by @jon-tow in #349
- Enable infinite dataloader for `prompt_dataloader` in PPO Trainer by @alexandremuzio in #358
- [feat] Add optional dependency list by @reciprocated in #381
- Add some synchronization to the db download in the simulacra example by @dakinggg in #406
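The infinite-dataloader change in #358 boils down to restarting the prompt loader whenever it is exhausted so PPO rollouts never run out of prompts. A generic, library-independent sketch of the pattern (the function name is ours, not trlx's):

```python
from itertools import islice

def infinite_dataloader(dataloader):
    """Yield batches forever, restarting the loader when it is exhausted."""
    while True:
        yielded = False
        for batch in dataloader:
            yielded = True
            yield batch
        if not yielded:  # guard: an empty loader would otherwise spin forever
            return

# Cycling a 3-batch loader for 7 steps wraps around transparently
batches = list(islice(infinite_dataloader([1, 2, 3]), 7))
```

This works with any re-iterable source, including a `torch.utils.data.DataLoader`, since each `for` pass creates a fresh iterator.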
## New Contributors
- @xu-song made their first contribution in #350
- @hzwer made their first contribution in #369
- @alexandremuzio made their first contribution in #358
- @dakinggg made their first contribution in #406
Full Changelog: v0.5.0...v0.6.0