
Threads picking up cached outputs? #53

Closed
jonathanasdf opened this issue Mar 24, 2016 · 1 comment

Comments

@jonathanasdf

Here is the script I used to reproduce this problem

require 'cudnn'

local model = (require 'loadcaffe').load('deploy.prototxt', 'VGG_ILSVRC_16_layers.caffemodel', 'cudnn')

local nThreads = 8
torch.setnumthreads(nThreads)
local Threads = require 'threads'
Threads.serialization('threads.sharedserialize')
local mutex_id = Threads.Mutex():id()
local threads = Threads(nThreads,
  function()
    require 'cudnn'
  end,
  function()
    -- Every worker thread gets the same (shared-serialized) model and a
    -- handle to the shared mutex.
    _model = model
    _mutex = (require 'threads').Mutex(mutex_id)
  end
)

for t=1,100 do
  for i=2,10 do
    threads:addjob(
      function()
        -- Hold the mutex so only one thread runs a forward pass at a time,
        -- then check that the output batch size matches the input batch size.
        _mutex:lock()
        local inputs = torch.rand(i, 3, 224, 224):cuda()
        local outputs = _model:forward(inputs)
        if i ~= outputs:size(1) then
          print("mismatch!", inputs:size(1), outputs:size(1))
        end
        _mutex:unlock()
      end
    )
  end
  threads:synchronize()
end

When I run this, I see "mismatch" being printed.

I tried replacing VGG with a simpler model (e.g., an nn.Sequential of a few nn.Linear layers) but could not reproduce the issue that way. So maybe the model itself has something to do with the problem?
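For reference, a minimal sketch of the kind of simpler model I mean (the layer sizes here are illustrative, not the exact ones I ran):

require 'nn'
require 'cunn'

-- A stack of fully-connected layers used in place of the VGG model,
-- shared with the worker threads in the same way as above.
local simple = nn.Sequential()
simple:add(nn.Linear(100, 50))
simple:add(nn.ReLU())
simple:add(nn.Linear(50, 10))
simple:cuda()

-- Inside each job the forward pass becomes, e.g.:
--   local outputs = _model:forward(torch.rand(i, 100):cuda())
-- and outputs:size(1) always matched i with this model.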

@jonathanasdf (Author)

It seems to be a problem with cudnn.SpatialConvolution. I filed soumith/cudnn.torch#155 and will close this one.
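For anyone who wants to confirm the isolation, a minimal sketch: swap the VGG model in the repro loop above for a single cudnn.SpatialConvolution layer (the plane and kernel sizes below are arbitrary):

require 'cudnn'

-- A single convolution layer standing in for the VGG model; share it with
-- the worker threads exactly as `model` is shared above.
local conv = cudnn.SpatialConvolution(3, 64, 3, 3, 1, 1, 1, 1):cuda()

-- For a batch of size i, the output's first dimension should also be i;
-- under the threaded setup above it sometimes is not.
local outputs = conv:forward(torch.rand(5, 3, 224, 224):cuda())
assert(outputs:size(1) == 5)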
