Adds nn.Normalize #341
Conversation
definitely the name has to be Normalize, not Norm
@szagoruyko Maybe it's not too difficult to extend it to work for other dimensions, by viewing the input as 2D with the last dimension being the desired normalized dimension, but one has to take care about batched/non-batched inputs as well. Maybe we could add a `dim` parameter later. Everybody ok with renaming it to `Normalize`?
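A rough illustration of that reshape-to-2D idea (an editorial sketch, not code from this PR; the function name `normalizeAlongDim` and the epsilon are my own choices):

```lua
require 'torch'

-- Normalize 'input' along dimension 'dim' with an Lp norm by viewing it as 2D.
local function normalizeAlongDim(input, dim, p)
  p = p or 2
  local last = input:dim()
  -- move the requested dimension to the end (no-op if it is already last)
  local x = (dim == last) and input or input:transpose(dim, last)
  x = x:contiguous()
  local d = x:size(last)
  local x2d = x:view(-1, d)
  -- per-row p-norm (N x 1); a small epsilon guards against division by zero
  local norms = x2d:norm(p, 2):add(1e-12)
  local y2d = torch.cdiv(x2d, norms:expandAs(x2d))
  -- restore the original shape and dimension order
  local y = y2d:view(x:size())
  return (dim == last) and y or y:transpose(dim, last):contiguous()
end

-- works for batched (e.g. 3D) and non-batched (2D) inputs alike
local out = normalizeAlongDim(torch.rand(4, 3, 5), 2)
print(out:size())
```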
Why is it obvious that the name should be Normalize and not Norm? As far as I can tell, almost all operations from torch are ported to nn layers without name changes, e.g. analogous to torch.norm → nn.Norm.
torch.norm returns the p-norm. Isn't this module also normalizing the input according to the returned norm?
Good point. I'd almost suggest that there should be both nn.Norm, which does exactly what norm does, and nn.Normalize, which also does a div right afterwards. But perhaps that gets a bit too hairy then :)
That's actually a good idea :)
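To make the Norm/Normalize distinction discussed above concrete, a tiny editorial example using plain torch calls (not code from the PR):

```lua
require 'torch'

local x = torch.Tensor{3, 4}
local p = 2

local norm = x:norm(p)        -- what a hypothetical nn.Norm would return: 5
local normalized = x / norm   -- what nn.Normalize returns: {0.6, 0.8}

print(norm)
print(normalized)
```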
@fmassa looks like it's good to go, is it?
It's good. I just need to squash the commits.
Squash, good to go.
@soumith I just squashed the commits.
Thanks a lot, Francisco, for the excellent PR.
Lots of memory is needed in the backprop of this module. One reason might be creating the large intermediate buffer in updateGradInput.
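As an editorial aside on why the backward pass is memory-hungry: for the L2 case, the Jacobian of the normalization is a d x d matrix per sample, so materializing it for a batch of n samples costs O(n d^2) memory. With y = x / \lVert x \rVert_2,

$$
\frac{\partial y}{\partial x}
= \frac{1}{\lVert x \rVert_2}\left(I - \frac{x\,x^{\top}}{\lVert x \rVert_2^{2}}\right)
= \frac{1}{\lVert x \rVert_2}\left(I - y\,y^{\top}\right).
$$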
@ffmpbgrnn here is a version of the module that should use less memory.
@fmassa thanks for your work. I will give it a try!
@ffmpbgrnn so, does this patch work fine for you? Is it much slower than the previous version?
@fmassa, sorry for my late response. Yes, it passes the test. But when I test it with:

```lua
require 'nn'
require 'cutorch'
require 'cunn'

local module = nn.Normalize(2):cuda()
module:fastMode(false)

local input = torch.rand(64, 2400):cuda()
local t = torch.Timer()
for i = 1, 100 do
  module:forward(input)
  module:backward(input, input)
  print(i)
end
print(t:time().real/100)
```
Generalizes #260 from @karpathy to accept arbitrary L_p norms. A few remarks:

- Is `LpNormalize` or `Normalize` a better name?
- We could add a `dim` parameter to make it equivalent to `torch.norm`. Is it worth it given that we have `SpatialBatchNormalization`?
- `updateGradInput` is quite memory consuming for large `d`.

cc: @bamos
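For completeness, a minimal usage sketch of the module (editorial addition; it assumes the `nn.Normalize(p)` constructor used in the benchmark above):

```lua
require 'nn'

local module = nn.Normalize(2)      -- L2-normalize each row of a 2D input
local input  = torch.rand(8, 10)    -- batch of 8 vectors of dimension 10
local output = module:forward(input)

print(output:norm(2, 2))            -- each row norm should be ~1
```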