Trainable batch normalization #467
Right, batch normalization is not available yet. We started by focusing on language models, where group norm is far more frequent than batch norm. We've just started adding the vision bits, e.g. convolutions, so as to get stable-diffusion to run. We would like to add some actual vision models now, so batch norm is likely to be added soonish (a week or two, I would say).
Not sure if it will be enough for your use case, but I've just merged #508, which adds a batch normalization layer. It could be used in a similar way to
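For readers following along: a batch-norm layer in inference mode (as added in #508) is just a fixed affine transform using the stored running statistics, with no statistics computed from the batch. A minimal standalone sketch in plain Rust; the function name and signature here are illustrative, not candle's actual API:

```rust
/// Inference-mode batch norm for one channel:
/// y = gamma * (x - running_mean) / sqrt(running_var + eps) + beta
/// The running statistics are frozen; nothing is learned or updated.
fn batch_norm_inference(
    x: &[f32],
    running_mean: f32,
    running_var: f32,
    gamma: f32,
    beta: f32,
    eps: f32,
) -> Vec<f32> {
    let scale = gamma / (running_var + eps).sqrt();
    x.iter().map(|&v| scale * (v - running_mean) + beta).collect()
}

fn main() {
    let y = batch_norm_inference(&[1.0f32, 2.0, 3.0], 2.0, 1.0, 1.0, 0.0, 0.0);
    println!("{:?}", y); // prints [-1.0, 0.0, 1.0]
}
```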
I am training networks, so unfortunately this is not enough for my use case.
Interesting, what models do you actually care about?
I am working with ResNets for AlphaZero / MuZero.
Has there been any progress on this front?
I'm using MobileNetV3, which needs trainable batch norms, as well as other mobile-scale realtime classification convnets.
Not much progress I'm afraid. @Awpteamoose do you have some MobileNetV3 or other model code that you could share? It would be very interesting to point at it as an external resource that uses candle. If I understand correctly, you're training these models? I would have assumed that nowadays even mobile-scale vision models have mostly switched to transformers like tinyvit etc.
I was porting my implementation from dfdx (coreylowman/dfdx#794) and halfway through noticed that batchnorms aren't trainable so I don't really have any code to share.
I'm probably just out of date, as the field moves very fast, but the transformers I have looked at also require an order of magnitude more FLOPS. I'm doing inference on tiny single-core CPUs as part of massively parallelised video analysis, so even real-time is too slow for me.
@LaurentMazare This should be closed due to the merge of #1504 |
I am trying to translate some code I wrote with `tch-rs` into `candle` as an experiment to see what the library is like. It looks like I stumbled into a roadblock almost immediately. I have a convolutional neural network made up of many residual blocks, and each residual block internally uses batch normalization.

In `tch-rs`, I could use `nn::batch_norm_2d`. Is batch normalization not implemented by `candle` yet?
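For context, a trainable 2D batch-norm layer of the kind `nn::batch_norm_2d` provides normalizes each channel with the current mini-batch statistics and maintains running estimates for inference. A per-channel sketch in plain Rust; the function name and the momentum convention are illustrative assumptions, not the API of either library:

```rust
/// Training-mode batch norm for a single channel: normalize with the
/// mini-batch mean and variance, and update the running estimates that
/// an inference-mode layer would use later.
fn batch_norm_train(
    x: &[f32],
    running_mean: &mut f32,
    running_var: &mut f32,
    gamma: f32,
    beta: f32,
    eps: f32,
    momentum: f32,
) -> Vec<f32> {
    let n = x.len() as f32;
    let mean = x.iter().sum::<f32>() / n;
    let var = x.iter().map(|&v| (v - mean) * (v - mean)).sum::<f32>() / n;
    // Exponential moving average of the batch statistics.
    *running_mean = (1.0 - momentum) * *running_mean + momentum * mean;
    *running_var = (1.0 - momentum) * *running_var + momentum * var;
    let scale = gamma / (var + eps).sqrt();
    x.iter().map(|&v| scale * (v - mean) + beta).collect()
}

fn main() {
    let (mut rm, mut rv) = (0.0f32, 1.0f32);
    let y = batch_norm_train(&[0.0, 2.0], &mut rm, &mut rv, 1.0, 0.0, 1e-5, 0.1);
    println!("normalized: {:?}, running mean: {}", y, rm);
}
```

The part that makes the layer "trainable" is that `gamma` and `beta` are learned parameters, and gradients must also flow through the batch mean and variance, which is what an inference-only layer omits.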