# tokenizer-hub

乂卍oO煞氣ㄟtokenizerOo卍乂

Yoctol's tokenizers share the same interface as Jieba:

```python
from tokenizer_hub import XXX_tokenizer

tokenizer = XXX_tokenizer()
tokenizer.lcut('我来到北京清华大学')
> ['我', '来到', '北京', '清华大学']
```
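Because every tokenizer exposes Jieba's interface, a custom tokenizer only needs to implement the same methods to act as a drop-in replacement. The sketch below is illustrative and not part of tokenizer_hub: the class name `WhitespaceTokenizer` is hypothetical, and only `lcut` is taken from the example above.

```python
from typing import List


class WhitespaceTokenizer:
    """Hypothetical tokenizer exposing a Jieba-compatible lcut()."""

    def lcut(self, text: str) -> List[str]:
        # Like Jieba's lcut, return the tokens as a list of strings.
        return text.split()


tokenizer = WhitespaceTokenizer()
print(tokenizer.lcut('a drop-in Jieba-style tokenizer'))
# ['a', 'drop-in', 'Jieba-style', 'tokenizer']
```

Any code written against Jieba's `lcut` can then swap tokenizers without further changes.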