Error(s) in loading state_dict for ResnetGenerator #296
Comments
Facing the same issue.
I deleted the root project directory and cloned this repository again, and then it worked. I don't know the reason; this was just a quick way to resolve the issue for me.
Yes, please check out the latest commit.
I face the same problem.
I face the same issue. I trained a pix2pix model with a previous version of the code and tried to test it using both the older and the latest commit, and got the same "missing keys in state_dict" error in both.
Could you check if you have used the same normalization (
I used the default normalization (
I faced the same error when applying a pretrained model (CycleGAN) in the newest version.
Sorry, I solved this problem by correcting my Docker settings.
@taesung89 |
The issue with
Thank you very much. I downloaded the new version of the code and fixed the problem. |
Thank you @junyanz, this resolved it for me.
I've had a similar issue while trying to test CycleGAN ( I noticed that the problem is wrong keys in the Based on my error message, I created two lists (missing_list and expected_list) and then replaced the wrong keys with the corresponding correct ones. An example of my snippet is here; I inserted it after line 135 here.
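For readers attempting the same key-replacement fix: the error message at the bottom of this thread shows every checkpoint key shifted down by one inside each conv_block (the saved model has conv_block.5.weight where the freshly built network expects conv_block.6.weight, and so on). A rename pass over the checkpoint's keys, in the spirit of the missing_list/expected_list replacement described above, might look like the sketch below (remap_resnet_keys is a hypothetical helper, not part of the repo; the key patterns are taken from the error message in this thread):

```python
import re

def remap_resnet_keys(state_dict):
    """Shift conv_block sub-module indices in a checkpoint saved without
    dropout so they match a ResnetGenerator built with dropout.
    Hypothetical helper; patterns taken from the error message above."""
    remapped = {}
    for key, value in state_dict.items():
        # conv_block.5.{weight,bias} -> conv_block.6.{weight,bias}
        new_key = re.sub(r"conv_block\.5\.(weight|bias)$",
                         r"conv_block.6.\1", key)
        # conv_block.6.{running_mean,running_var} -> conv_block.7.{...}
        new_key = re.sub(r"conv_block\.6\.(running_mean|running_var)$",
                         r"conv_block.7.\1", new_key)
        remapped[new_key] = value
    return remapped
```

One could apply this to the dict returned by torch.load before calling load_state_dict, but retraining or re-testing with matching options (see the --no_dropout discussion below) is the cleaner fix.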
Another solution to this is to modify
I faced the same issue, but after I added "--no_dropout" when testing, the issue was gone.
Thank you @SunLeL, it works for me!
Amazing! I found that it is fine when I use unet_256; the error happens when I use resnet_6blocks.
Yes, this method works. One has to make the changes in
Thank you! It is working, but the results are not as good as the samples saved during training; they contain a lot of noise.
@vis-opt @omid-ghozatlou I also got a lot of noise when I added
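The reason --no_dropout matters: dropout inserts one extra module inside each resnet block, which shifts the indices (and therefore the state_dict key names) of every module after it. A toy sketch of the ordering, using simplified layer names (the real blocks in networks.py use ReflectionPad2d/Conv2d/InstanceNorm2d modules; this is an illustration, not the actual code):

```python
def resnet_block_layers(use_dropout):
    """Simplified layer ordering of one resnet conv_block; a sketch of
    how an optional dropout layer shifts the indices of later modules."""
    layers = ["pad", "conv", "norm", "relu"]
    if use_dropout:
        layers.append("dropout")  # extra module bumps later indices by one
    layers += ["pad", "conv", "norm"]
    return layers

with_dropout = resnet_block_layers(True)      # second conv at index 6
without_dropout = resnet_block_layers(False)  # second conv at index 5
```

The second conv lands at index 6 with dropout and index 5 without, matching the conv_block.5 vs conv_block.6 key mismatch in the traceback below. Testing with the same dropout setting used during training avoids the error; unet_256 is unaffected because its dropout does not sit inside an indexed sequential block the same way.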
Today I wanted to test my trained model, and there are some errors that did not occur before.
Traceback (most recent call last):
File "test.py", line 19, in <module>
model.setup(opt)
File "/home/t-fayan/vision/pytorch-CycleGAN-and-pix2pix/models/base_model.py", line 43, in setup
self.load_networks(opt.which_epoch)
File "/home/t-fayan/vision/pytorch-CycleGAN-and-pix2pix/models/base_model.py", line 130, in load_networks
net.load_state_dict(state_dict)
File "/home/t-fayan/anaconda2/envs/py27/lib/python2.7/site-packages/torch/nn/modules/module.py", line 721, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for ResnetGenerator:
Missing key(s) in state_dict: "model.10.conv_block.6.bias", "model.10.conv_block.6.weight", "model.10.conv_block.7.running_var", "model.10.conv_block.7.running_mean", "model.11.conv_block.6.bias", "model.11.conv_block.6.weight", "model.11.conv_block.7.running_var", "model.11.conv_block.7.running_mean", "model.12.conv_block.6.bias", "model.12.conv_block.6.weight", "model.12.conv_block.7.running_var", "model.12.conv_block.7.running_mean", "model.13.conv_block.6.bias", "model.13.conv_block.6.weight", "model.13.conv_block.7.running_var", "model.13.conv_block.7.running_mean", "model.14.conv_block.6.bias", "model.14.conv_block.6.weight", "model.14.conv_block.7.running_var", "model.14.conv_block.7.running_mean", "model.15.conv_block.6.bias", "model.15.conv_block.6.weight", "model.15.conv_block.7.running_var", "model.15.conv_block.7.running_mean", "model.16.conv_block.6.bias", "model.16.conv_block.6.weight", "model.16.conv_block.7.running_var", "model.16.conv_block.7.running_mean", "model.17.conv_block.6.bias", "model.17.conv_block.6.weight", "model.17.conv_block.7.running_var", "model.17.conv_block.7.running_mean", "model.18.conv_block.6.bias", "model.18.conv_block.6.weight", "model.18.conv_block.7.running_var", "model.18.conv_block.7.running_mean".
Unexpected key(s) in state_dict: "model.10.conv_block.5.weight", "model.10.conv_block.5.bias", "model.10.conv_block.6.running_mean", "model.10.conv_block.6.running_var", "model.11.conv_block.5.weight", "model.11.conv_block.5.bias", "model.11.conv_block.6.running_mean", "model.11.conv_block.6.running_var", "model.12.conv_block.5.weight", "model.12.conv_block.5.bias", "model.12.conv_block.6.running_mean", "model.12.conv_block.6.running_var", "model.13.conv_block.5.weight", "model.13.conv_block.5.bias", "model.13.conv_block.6.running_mean", "model.13.conv_block.6.running_var", "model.14.conv_block.5.weight", "model.14.conv_block.5.bias", "model.14.conv_block.6.running_mean", "model.14.conv_block.6.running_var", "model.15.conv_block.5.weight", "model.15.conv_block.5.bias", "model.15.conv_block.6.running_mean", "model.15.conv_block.6.running_var", "model.16.conv_block.5.weight", "model.16.conv_block.5.bias", "model.16.conv_block.6.running_mean", "model.16.conv_block.6.running_var", "model.17.conv_block.5.weight", "model.17.conv_block.5.bias", "model.17.conv_block.6.running_mean", "model.17.conv_block.6.running_var", "model.18.conv_block.5.weight", "model.18.conv_block.5.bias", "model.18.conv_block.6.running_mean", "model.18.conv_block.6.running_var".
What does it mean? Also, if I test a pretrained model like horse2zebra, these errors occur too, but I didn't encounter them before.
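As to what the message means: "missing keys" are parameter names the freshly constructed network expects but the checkpoint does not contain, and "unexpected keys" are the reverse. load_state_dict performs essentially this set comparison before raising. A minimal sketch of the check (diff_keys is a hypothetical helper, not part of the repo or of torch):

```python
def diff_keys(model_keys, checkpoint_keys):
    """Mimic the comparison load_state_dict makes: report which parameter
    names are missing from the checkpoint and which are unexpected extras."""
    model_set, ckpt_set = set(model_keys), set(checkpoint_keys)
    missing = sorted(model_set - ckpt_set)
    unexpected = sorted(ckpt_set - model_set)
    return missing, unexpected
```

In the traceback above, both lists are non-empty and pair up one-to-one with indices shifted by one, which is the signature of a network built with slightly different options (here, dropout on versus off) rather than a corrupted checkpoint file.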