
Transformers4IME

Transformers4IME is a repository for exploring and adapting transformer-based models to input method engines (IME).

PinyinGPT

PinyinGPT is the model from "Exploring and Adapting Chinese GPT to Pinyin Input Method", which appears in ACL 2022.

@inproceedings{tan-etal-2022-exploring,
    title = "Exploring and Adapting {C}hinese {GPT} to {P}inyin Input Method",
    author = "Tan, Minghuan  and
      Dai, Yong  and
      Tang, Duyu  and
      Feng, Zhangyin  and
      Huang, Guoping  and
      Jiang, Jing  and
      Li, Jiwei  and
      Shi, Shuming",
    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.acl-long.133",
    doi = "10.18653/v1/2022.acl-long.133",
    pages = "1899--1909",
    abstract = "While GPT has become the de-facto method for text generation tasks, its application to pinyin input method remains unexplored.In this work, we make the first exploration to leverage Chinese GPT for pinyin input method.We find that a frozen GPT achieves state-of-the-art performance on perfect pinyin.However, the performance drops dramatically when the input includes abbreviated pinyin.A reason is that an abbreviated pinyin can be mapped to many perfect pinyin, which links to even larger number of Chinese characters.We mitigate this issue with two strategies,including enriching the context with pinyin and optimizing the training process to help distinguish homophones. To further facilitate the evaluation of pinyin input method, we create a dataset consisting of 270K instances from fifteen domains.Results show that our approach improves the performance on abbreviated pinyin across all domains.Model analysis demonstrates that both strategiescontribute to the performance boost.",
}

While GPT has become the de-facto method for text generation tasks, its application to pinyin input method remains unexplored. In this work, we make the first exploration to leverage Chinese GPT for pinyin input method. We find that a frozen GPT achieves state-of-the-art performance on perfect pinyin. However, the performance drops dramatically when the input includes abbreviated pinyin. A reason is that an abbreviated pinyin can be mapped to many perfect pinyin, which link to an even larger number of Chinese characters. We mitigate this issue with two strategies, including enriching the context with pinyin and optimizing the training process to help distinguish homophones. To further facilitate the evaluation of pinyin input method, we create a dataset consisting of 270K instances from 15 domains. Results show that our approach improves performance on abbreviated pinyin across all domains. Model analysis demonstrates that both strategies contribute to the performance boost.
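To see why abbreviated pinyin is ambiguous, here is a toy illustration (the mappings below are illustrative, not taken from the paper's data): a single abbreviated initial expands to many perfect pinyin syllables, and each syllable in turn maps to many homophonous characters, so the candidate set grows multiplicatively.

# Toy illustration of abbreviated-pinyin ambiguity (made-up mappings).
abbr_to_pinyin = {"s": ["shi", "si", "shu", "su", "shang", "song"]}
pinyin_to_chars = {
    "shi": ["是", "时", "事", "十", "世", "使"],
    "si": ["四", "死", "思", "似"],
    "shu": ["书", "数", "树", "输"],
}

# Every character below is a legal decoding of the single key "s".
candidates = [ch
              for py in abbr_to_pinyin["s"]
              for ch in pinyin_to_chars.get(py, [])]
print(candidates)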

Figure: overview of the PinyinGPT method (pinyinGPT-method)

Corpus Preparation

Each training instance records a sentence's word segmentation ('words'), per-word character tokens ('tokens'), full pinyin ('pinyin'), and first-letter abbreviations ('abbr'):

{'words': [['观众', '姥爷'], [','], ['如果', '你', '有', '超神', '超', '秀'], ['、'], ['坑爹', '搞笑', '素材'], [','],
           ['欢迎', '给', '苍', '姐', '投稿'], [','], ['采用', '有奖', '哦'], ['!']],
 'tokens': [[['观', '众'], ['姥', '爷']], [','], [['如', '果'], ['你'], ['有'], ['超', '神'], ['超'], ['秀']], ['、'],
            [['坑', '爹'], ['搞', '笑'], ['素', '材']], [','], [['欢', '迎'], ['给'], ['苍'], ['姐'], ['投', '稿']], [','],
            [['采', '用'], ['有', '奖'], ['哦']], ['!']],
 'pinyin': [[['guan', 'zhong'], ['lao', 'ye']], [','],
            [['ru', 'guo'], ['ni'], ['you'], ['chao', 'shen'],
             ['chao'], ['xiu']], ['、'],
            [['keng', 'die'], ['gao', 'xiao'], ['su', 'cai']],
            [','], [['huan', 'ying'], ['gei'], ['cang'], ['jie'],
                    ['tou', 'gao']], [','],
            [['cai', 'yong'], ['you', 'jiang'], ['o']], ['!']],
 'abbr': [[['g', 'z'], ['l', 'y']], [','], [['r', 'g'], ['n'], ['y'], ['c', 's'], ['c'], ['x']], ['、'],
          [['k', 'd'], ['g', 'x'], ['s', 'c']], [','], [['h', 'y'], ['g'], ['c'], ['j'], ['t', 'g']], [','],
          [['c', 'y'], ['y', 'j'], ['o']], ['!']]}
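A minimal preprocessing sketch is given below. It is not the repo's own script; it assumes pre-segmented input and uses the third-party pypinyin package to derive the 'pinyin' and 'abbr' fields, keeping punctuation groups verbatim.

# Hedged sketch, not the repository's preprocessing code.
import string
from pypinyin import Style, lazy_pinyin

PUNCT = set(',。、!?;:') | set(string.punctuation)

def build_record(word_groups):
    """word_groups: e.g. [['观众', '姥爷'], [','], ['采用', '有奖', '哦'], ['!']]"""
    record = {'words': word_groups, 'tokens': [], 'pinyin': [], 'abbr': []}
    for group in word_groups:
        if len(group) == 1 and group[0] in PUNCT:
            # Punctuation groups are stored as-is in every field.
            record['tokens'].append(group)
            record['pinyin'].append(group)
            record['abbr'].append(group)
            continue
        record['tokens'].append([list(word) for word in group])
        record['pinyin'].append([lazy_pinyin(word) for word in group])
        record['abbr'].append(
            [lazy_pinyin(word, style=Style.FIRST_LETTER) for word in group])
    return record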

Model List

  • GPT2
  • PinyinGPT2Concat
    • Directly
    • Segmented (visualjoyce/transformers4ime-pinyingpt-concat) 🤗 models (see the loading sketch after this list)
  • PinyinGPT2Compose
    • PinyinGPT2ComposeBottom
    • PinyinGPT2ComposeTop
      • logits
      • states
      • residual
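The Segmented PinyinGPT2Concat checkpoint is published on the Hugging Face Hub. Below is a hedged loading sketch, assuming the checkpoint is compatible with the stock GPT-2 classes in transformers (the concat variant enriches the input sequence with pinyin rather than changing the backbone); if it is not, the model classes in this repository are the authoritative way to load it.

# Hedged sketch: load the released checkpoint with stock transformers classes.
from transformers import AutoTokenizer, GPT2LMHeadModel

model_id = "visualjoyce/transformers4ime-pinyingpt-concat"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # may require the repo's tokenizer files
model = GPT2LMHeadModel.from_pretrained(model_id)    # use the repo's PinyinGPT2Concat class instead if this raises
model.eval()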

Training Mode

  • AbbrOnly: full abbreviation
  • PinyinOnly: no abbreviation
  • PinyinAbbr: mixed (not covered in this paper)

Pre-training is launched with:

sh pretrain_pinyingpt.sh
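A hedged sketch of how the three modes relate to the corpus record above (the repo's actual sampling logic may differ): AbbrOnly always feeds the first-letter forms, PinyinOnly the full syllables, and PinyinAbbr mixes the two.

# Hedged sketch: choose the pinyin context for a record according to the mode.
import random

def pinyin_context(record, mode):
    if mode == "AbbrOnly":
        return record["abbr"]
    if mode == "PinyinOnly":
        return record["pinyin"]
    if mode == "PinyinAbbr":
        # Mixed: randomly abbreviate each segment (one possible mixing scheme).
        return [abbr if random.random() < 0.5 else full
                for abbr, full in zip(record["abbr"], record["pinyin"])]
    raise ValueError(f"unknown mode: {mode}")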

Benchmarking

The benchmarking dataset is shared via:

99E333F0B1C6D7B67ACB9D9E61A73DA8

PD (People's Daily) benchmarking

python3 benchmarks.py --samples_json data/benchmarks/PD/samples_0.json \
  --pretrained_model_name_or_path data/pretrained_models/gpt2-zh-ours \
  --additional_special_tokens data/pretrained/additional_special_tokens.json \
  --pinyin2char_json data/pretrained/pinyin2char.json \
  --pinyin_logits_processor_cls pinyingpt-compatible \
  --num_beams 16 \
  --abbr_mode none
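Here --abbr_mode none selects the no-abbreviation (perfect pinyin) setting, and --pinyin_logits_processor_cls selects how decoding is constrained by the typed pinyin. Below is a rough sketch of that constrained-decoding idea (not the repo's implementation), written as a Hugging Face LogitsProcessor.

# Hedged sketch of pinyin-constrained decoding; names and structure are
# illustrative, not the repository's pinyin logits processor.
import torch
from transformers import LogitsProcessor

class PinyinConstraintProcessor(LogitsProcessor):
    def __init__(self, allowed_token_ids_per_step):
        # allowed_token_ids_per_step[t]: token ids of characters consistent
        # with the t-th typed pinyin (derivable from pinyin2char.json).
        self.allowed = allowed_token_ids_per_step
        self.prompt_len = None

    def __call__(self, input_ids, scores):
        if self.prompt_len is None:
            self.prompt_len = input_ids.shape[1]
        step = input_ids.shape[1] - self.prompt_len
        if step < len(self.allowed):
            mask = torch.full_like(scores, float("-inf"))
            mask[:, self.allowed[step]] = 0.0  # keep only valid homophones
            scores = scores + mask
        return scores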

Benchmarking with a specific checkpoint

sh benchmarks.sh pinyingpt-concat data/output/pinyingpt \
  data/output/models/ckpt50000/pytorch_model.bin

Acknowledgment

This work was done during an internship at Tencent AI Lab.