- On a machine with a GPU and Docker installed, download the NVIDIA NeMo Docker container using the following command:

  ```bash
  docker pull nvcr.io/nvidia/nemo:23.01
  ```
- Clone this project's repository by running:

  ```bash
  git clone git@github.com:vadam5/patched-dialogue-model.git
  ```
- Download the base GPT-2 model from NVIDIA and place it in `~/patched-dialogue-model/NeMo/models` using the following commands:

  ```bash
  wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/nemo/megatron_gpt_345m/versions/1/zip -O megatron_gpt_345m_1.zip
  unzip megatron_gpt_345m_1.zip
  mv megatron_gpt_345m.nemo ~/patched-dialogue-model/NeMo/models
  ```
- Start the NeMo Docker container with the `NeMo` directory from `patched-dialogue-model` mounted as a volume by running:

  ```bash
  docker run --gpus all -it -v ~/patched-dialogue-model/NeMo:/NeMo --shm-size=8g -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 --device=/dev/snd nvcr.io/nvidia/nemo:23.01
  ```
- From within the container, change to the `/NeMo` directory (an optional environment sanity check is sketched after this list):

  ```bash
  cd /NeMo
  ```
- Run the dialogue manager to chat with the dialogue system on the command line (a minimal sketch of this kind of chat loop appears after this list):

  ```bash
  python dialogue_manager.py
  ```
- Type “STOP” when you are done chatting; a conversation log will then print to the terminal.
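Before launching the dialogue manager, it can help to confirm that the container sees a GPU, that NeMo imports, and that the base model landed where the steps above place it. The snippet below is an optional sanity check and is not part of this repository; the model path simply follows from the volume mount and download steps above.

```python
# Optional sanity check, run from inside the container (not part of this repo).
# The model path follows from the mount (-v ~/patched-dialogue-model/NeMo:/NeMo)
# and the download step above.
import os

import nemo
import torch

print("NeMo version:", nemo.__version__)
print("CUDA available:", torch.cuda.is_available())
print("Base model present:", os.path.exists("/NeMo/models/megatron_gpt_345m.nemo"))
```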
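For orientation, here is a minimal sketch of the kind of STOP-terminated chat loop the dialogue manager drives. It is illustrative only: the real `dialogue_manager.py` in this repository may be structured differently, and `generate_reply` is a hypothetical placeholder for the patched GPT model's generation call, not a function from this repo or from NeMo.

```python
# Illustrative sketch of a STOP-terminated chat loop; the actual
# dialogue_manager.py in this repository may differ.

def generate_reply(history):
    """Hypothetical placeholder for the patched dialogue model's generation step."""
    return "(model reply goes here)"

def main():
    conversation_log = []
    while True:
        user_turn = input("You: ")
        if user_turn.strip() == "STOP":
            break
        conversation_log.append(f"User: {user_turn}")
        reply = generate_reply(conversation_log)
        conversation_log.append(f"Bot: {reply}")
        print(f"Bot: {reply}")

    # As described above, a conversation log prints when the chat ends.
    print("\n=== Conversation log ===")
    print("\n".join(conversation_log))

if __name__ == "__main__":
    main()
```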