
Mac which lacks Nvidia graphics capabilities: AssertionError: Torch not compiled with CUDA enabled #91

Open
SkyForceCoder opened this issue Aug 17, 2023 · 0 comments


@SkyForceCoder

Hi there,

"I am utilizing a Macintosh computer, which lacks Nvidia graphics capabilities. Could someone kindly provide instructions on how to execute tasks using the CPU? Additionally, I am curious if there exists an alternative to CUDA. I've observed that stable diffusion functions smoothly on the CPU, whereas AudiioGPT seems to encounter issues in that regard.

Steps followed:

create a new environment

conda create -n audiogpt python=3.8

prepare the basic environment

pip install -r requirements.txt

download the foundation models you need

bash download.sh

prepare your private OpenAI key

export OPENAI_API_KEY={Your_Private_Openai_Key}

Start AudioGPT!

python audio-chatgpt.py
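
As a quick sanity check after the pip install step (the macOS wheels of torch ship without CUDA support, so printing False here is expected):

import torch
print(torch.__version__, torch.cuda.is_available())  # False on this Mac, since the macOS build has no CUDA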


(audiogpt) Micky@Micky-iMac AudioGPT % python audio-chatgpt.py
/Users/Micky/anaconda3/envs/audiogpt/lib/python3.8/site-packages/whisper/timing.py:58: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
def backtrace(trace: np.ndarray):
Initializing AudioGPT
Initializing T2I to cuda:0
Traceback (most recent call last):
File "audio-chatgpt.py", line 1379, in
bot = ConversationBot()
File "audio-chatgpt.py", line 1057, in init
self.t2i = T2I(device="cuda:0")
File "audio-chatgpt.py", line 116, in init
self.pipe.to(device)
File "/Users/Micky/anaconda3/envs/audiogpt/lib/python3.8/site-packages/diffusers/pipelines/pipeline_utils.py", line 681, in to
module.to(torch_device, torch_dtype)
File "/Users/Micky/anaconda3/envs/audiogpt/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1749, in to
return super().to(*args, **kwargs)
File "/Users/Micky/anaconda3/envs/audiogpt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 927, in to
return self._apply(convert)
File "/Users/Micky/anaconda3/envs/audiogpt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 579, in _apply
module._apply(fn)
File "/Users/Micky/anaconda3/envs/audiogpt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 579, in _apply
module._apply(fn)
File "/Users/Micky/anaconda3/envs/audiogpt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 579, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "/Users/Micky/anaconda3/envs/audiogpt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 602, in _apply
param_applied = fn(param)
File "/Users/Micky/anaconda3/envs/audiogpt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 925, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "/Users/Micky/anaconda3/envs/audiogpt/lib/python3.8/site-packages/torch/cuda/init.py", line 211, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
(audiogpt) Micky@Mickys-iMac AudioGPT %
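
As a hypothetical workaround (I don't see a CPU option exposed by AudioGPT itself), the hard-coded device at line 1057 of audio-chatgpt.py could use the fallback sketched above instead of "cuda:0", e.g.:

self.t2i = T2I(device=pick_device())  # or simply T2I(device="cpu"), which should work, just slowly

Any other places in audio-chatgpt.py that pin "cuda:0" would presumably need the same change.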
