I am trying to train a CNN using the `optim` package. I am using this code, obtained from a video tutorial (see around 24:01), as a reference. That particular example uses a plain (fully connected) neural network. I also used this reference.
My code for the CNN can be found here. The problem is that if the input `X` is not a single image, I get an error:

```
In 14 module of nn.Sequential:
/home/ubuntu/torch/install/share/lua/5.1/nn/Linear.lua:57: size mismatch at /tmp/luarocks_cutorch-scm-1-1695/cutorch/lib/THC/generic/THCTensorMathBlas.cu:52
```

When I don't use the GPU, the error is clearer:

```
size mismatch, [1024 x 2048], [409600]
```
I understand that the input to the first `Linear` layer is no longer of size 2048 when the input `X` contains more than one image. But how does `optim.sgd` work correctly with the entire training set passed as the input `X`, as in the plain neural network from the first reference?
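For context, here is a minimal sketch of what I suspect is going on. The actual model code isn't shown here, and the layer sizes (2048 features into `nn.Linear(2048, 1024)`, batch of 200 inferred from 409600 = 200 × 2048) are guesses from the error message. A hard-coded flatten such as `nn.Reshape(2048)` collapses the batch dimension together with the feature dimensions, whereas `nn.View` with `setNumInputDims` flattens only the per-image dimensions and leaves the leading batch dimension intact:

```lua
require 'nn'

local model = nn.Sequential()
-- ... convolution/pooling layers, assumed to produce 2048 features
-- per image (e.g. 128 x 4 x 4 feature maps) ...

-- A plain nn.Reshape(2048) flattens *everything*, so a batch of
-- 200 images becomes a single vector of 409600 = 200 x 2048 elements,
-- which no longer matches the [1024 x 2048] weight of nn.Linear.

-- nn.View(2048):setNumInputDims(3) instead flattens only the last
-- 3 dimensions (the per-image feature maps), so a 200 x 128 x 4 x 4
-- batch becomes 200 x 2048 and nn.Linear handles it as a mini-batch:
model:add(nn.View(2048):setNumInputDims(3))
model:add(nn.Linear(2048, 1024))
```

This would also explain why the fully connected example works with the whole training set as `X`: `nn.Linear` natively accepts 2-D batched input, and `optim.sgd` itself is agnostic to batch size since it only sees the loss and gradients returned by `feval`.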