
GPT4Video

GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation

Zhanyu Wang, Longyue Wang*, Zhen Zhao, Minghao Wu, Chenyang Lyu, Huayang Li, Deng Cai, Luping Zhou*, Shuming Shi, Zhaopeng Tu

Tencent AI Lab, University of Sydney (*Correspondence)

✨ Demo video

11.24.1.mp4

Framework

Video Encoding stage: The video encoding module employs a frozen ViT-L/14 model to capture raw video features, while the video abstraction module uses a transformer-based cross-attention layer with two novel learnable tokens to condense information along the temporal and spatial axes.
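As a rough illustration of the abstraction step (not the released implementation), the sketch below condenses frozen ViT patch features with a cross-attention layer driven by learnable temporal and spatial query tokens; all dimensions, token counts, and module names are assumptions.

```python
# Hedged sketch of the video abstraction module: learnable temporal and spatial
# query tokens attend over frozen ViT-L/14 features via cross-attention.
# Shapes, token counts, and the shared attention layer are illustrative choices.
import torch
import torch.nn as nn

class VideoAbstractor(nn.Module):
    def __init__(self, dim=1024, num_heads=8, num_temporal_tokens=32, num_spatial_tokens=32):
        super().__init__()
        # Learnable query tokens that summarize the video along each axis.
        self.temporal_tokens = nn.Parameter(torch.randn(num_temporal_tokens, dim))
        self.spatial_tokens = nn.Parameter(torch.randn(num_spatial_tokens, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, frame_feats):
        # frame_feats: (batch, frames, patches, dim) from the frozen ViT-L/14.
        b, t, p, d = frame_feats.shape
        # Temporal axis: average patches per frame, then attend over frames.
        temporal_ctx = frame_feats.mean(dim=2)                      # (b, t, d)
        q_t = self.temporal_tokens.unsqueeze(0).expand(b, -1, -1)   # (b, Tq, d)
        temporal_out, _ = self.cross_attn(q_t, temporal_ctx, temporal_ctx)
        # Spatial axis: average frames per patch, then attend over patches.
        spatial_ctx = frame_feats.mean(dim=1)                       # (b, p, d)
        q_s = self.spatial_tokens.unsqueeze(0).expand(b, -1, -1)    # (b, Sq, d)
        spatial_out, _ = self.cross_attn(q_s, spatial_ctx, spatial_ctx)
        # Concatenate the condensed tokens as the video representation fed to the LLM.
        return torch.cat([temporal_out, spatial_out], dim=1)
```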

LLM reasoning: The core of GPT4Video is powered by a frozen LLaMA model, efficiently fine-tuned via LoRA. The LLM is trained on custom video-centric and safety-aligned data, enabling it to comprehend videos and generate appropriate video prompts (indicated by underlined text).
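A minimal sketch of the LoRA setup, assuming the Hugging Face PEFT library: the checkpoint name, rank, and target modules below are illustrative choices, since the README only states that the frozen LLaMA backbone is fine-tuned via LoRA.

```python
# Hedged sketch: attaching LoRA adapters to a frozen LLaMA with Hugging Face PEFT.
# Checkpoint name and hyperparameters are assumptions, not the official config.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")  # assumed checkpoint
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # common choice for LLaMA attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)   # base weights stay frozen; only adapters train
model.print_trainable_parameters()
```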

Video Generation: The prompts generated by the LLM are then used as text inputs to the models in the Text-to-Video Model Gallery to create videos. We use ZeroScope as our video generation model in this work.
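For reference, ZeroScope can be driven through the Hugging Face diffusers text-to-video pipeline roughly as follows; the checkpoint name, resolution, and frame count are assumptions, and the hard-coded prompt stands in for one produced by the LLM.

```python
# Hedged sketch: generating a clip with ZeroScope via the diffusers pipeline.
# In GPT4Video, `prompt` would be the video prompt emitted by the LLM.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "cerspense/zeroscope_v2_576w", torch_dtype=torch.float16  # assumed checkpoint
).to("cuda")

prompt = "a golden retriever surfing a wave at sunset"
video_frames = pipe(prompt, num_frames=24, height=320, width=576).frames
export_to_video(video_frames, output_video_path="sample.mp4")
```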

Training

First, install the requirements:

    pip install -r requirements.txt

Then train the model with two GPUs for 10 epochs:

    python train.py --devices 2 --max_epochs 10

Citation

@article{wang2023gpt4video,
  title={GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation},
  author={Zhanyu Wang and Longyue Wang and Minghao Wu and Zhen Zhao and Chenyang Lyu and Huayang Li and Deng Cai and Luping Zhou and Shuming Shi and Zhaopeng Tu},
  journal={CoRR},
  year={2023}
}