
Question on running the model on mac #26

Open
Tian807 opened this issue Nov 22, 2024 · 3 comments

Comments


Tian807 commented Nov 22, 2024

Dear authors,

Thank you for creating this invaluable resource and for making it open source!

I am currently trying to run the example prediction script and Jupyter notebook on my Mac; however, I keep getting errors when I run the section that loads the model weights.

Here is what it prints out:
Deformable Transformer Encoder is not available.
hostname: illegal option -- I
usage: hostname [-f] [-s | -d] [name-of-host]

Here is the error:
CalledProcessError: Command '['hostname -I']' returned non-zero exit status 1.
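For context, the `-I` flag is Linux-specific (macOS's `hostname` does not support it), which is why the subprocess call fails. A portable stdlib replacement, sketched here under the assumption that the code only needs the machine's local IP address, could look like:

```python
import socket

def get_local_ip() -> str:
    """Portable replacement for `hostname -I` (a Linux-only flag).

    Opens a UDP socket toward a public address to learn which local
    interface/IP the OS would route through. connect() on a UDP socket
    sends no packets, so this works without network traffic.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # hypothetical target; any routable address works
        return s.getsockname()[0]
    except OSError:
        # No network configured: fall back to loopback.
        return "127.0.0.1"
    finally:
        s.close()
```

Where exactly the repository calls `hostname -I` is not shown in this thread; the sketch above only illustrates one way such a call could be made platform-independent.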

I managed to get it to work on Google Colab, but I would like to use it locally, so I was wondering whether the model must use a GPU in order to run. If not, could you direct me to the places where I need to make changes for it to work on CPU?

Thank you!

@Victorouledi

I’m currently trying to run the model locally on macOS. The code uses many functions that call CUDA-specific GPU operations, which macOS doesn’t have. Each error of the type 'torch using cuda unable' specifies which .py file in the project is calling CUDA. The aim is to modify these files so that they don't use CUDA but instead use the macOS CPU. ChatGPT can help with this easily.

For example, in BiomedParse/inference_examples_DICOM, instead of:

model = BaseModel(opt, build_model(opt)).from_pretrained(pretrained_pth).eval().cuda()

use:

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = BaseModel(opt, build_model(opt, device=device))
model = model.from_pretrained(pretrained_pth).eval()
model.to(device)


FrancoisPorcher commented Nov 22, 2024

You also have to:

  • replace retry_if_cuda_oom in the model architecture (for example, seem_model_v1.py or seem_model_demo.py)
  • remove the CUDA device from dilation_kernel = torch.ones((1, 1, dilation, dilation), device=torch.cuda.current_device())
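A device-agnostic version of that kernel creation, sketched here assuming torch is installed and that `dilation` is the integer kernel size used by the model (the value 3 below is only for illustration):

```python
import torch

# Pick the GPU when available, otherwise fall back to CPU.
# Assumption: the rest of the model is moved to the same device.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

dilation = 3  # hypothetical kernel size for illustration
dilation_kernel = torch.ones((1, 1, dilation, dilation), device=device)
```

This avoids torch.cuda.current_device(), which raises when no CUDA device exists, so the same line runs on Linux GPU boxes and on macOS CPUs.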

@theodore-zhao
Contributor

Thanks for the solutions provided in the thread! We developed the model on Linux with CUDA GPUs and have never tried macOS. Let me know if these solutions work for you.
