Hello, why can't I use V1? #62

Closed
opened 2026-01-29 21:40:25 +00:00 by claunia · 3 comments

Originally created by @HangAround47 on GitHub (Sep 3, 2021).

python inference_gfpgan.py --upscale 2 --test_path inputs/whole_imgs --save_root results --model_path experiments/pretrained_models/GFPGANv1.pth
Traceback (most recent call last):
File "inference_gfpgan.py", line 98, in
main()
File "inference_gfpgan.py", line 57, in main
bg_upsampler=bg_upsampler)
File "W:\MyWork\My_GAN_Work\GFPGAN\gfpgan\utils.py", line 65, in init
self.gfpgan.load_state_dict(loadnet[keyname], strict=True)
File "C:\Users\Creator\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1224, in load_state_dict
self.class.name, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for GFPGANv1Clean:
Missing key(s) in state_dict: "conv_body_first.weight", "conv_body_first.bias", "conv_body_down.0.conv1.weight", "conv_body_down.0.conv1.bias", "conv_body_down.0.conv2.weight", "conv_body_down.0.conv2.bias", "conv_body_down.0.skip.weight", "conv_body_down.1.conv1.weight", "conv_body_down.1.conv1.bias", "conv_body_down.1.conv2.weight", "conv_body_down.1.conv2.bias", "conv_body_down.1.skip.weight", "conv_body_down.2.conv1.weight", "conv_body_down.2.conv1.bias", "conv_body_down.2.conv2.weight", "conv_body_down.2.conv2.bias", "conv_body_down.2.skip.weight", "conv_body_down.3.conv1.weight", "conv_body_down.3.conv1.bias", "conv_body_down.3.conv2.weight", "conv_body_down.3.conv2.bias", "conv_body_down.3.skip.weight", "conv_body_down.4.conv1.weight", "conv_body_down.4.conv1.bias", "conv_body_down.4.conv2.weight", "conv_body_down.4.conv2.bias", "conv_body_down.4.skip.weight", "conv_body_down.5.conv1.weight", "conv_body_down.5.conv1.bias", "conv_body_down.5.conv2.weight", "conv_body_down.5.conv2.bias", "conv_body_down.5.skip.weight", "conv_body_down.6.conv1.weight", "conv_body_down.6.conv1.bias", "conv_body_down.6.conv2.weight", "conv_body_down.6.conv2.bias", "conv_body_down.6.skip.weight", "final_conv.weight", "final_conv.bias", "conv_body_up.0.conv1.weight", "conv_body_up.0.conv1.bias", "conv_body_up.0.conv2.bias", "conv_body_up.1.conv1.weight", "conv_body_up.1.conv1.bias", "conv_body_up.1.conv2.bias", "conv_body_up.2.conv1.weight", "conv_body_up.2.conv1.bias", "conv_body_up.2.conv2.bias", "conv_body_up.3.conv1.weight", "conv_body_up.3.conv1.bias", "conv_body_up.3.conv2.bias", "conv_body_up.4.conv1.weight", "conv_body_up.4.conv1.bias", "conv_body_up.4.conv2.bias", "conv_body_up.5.conv1.weight", "conv_body_up.5.conv1.bias", "conv_body_up.5.conv2.bias", "conv_body_up.6.conv1.weight", "conv_body_up.6.conv1.bias", "conv_body_up.6.conv2.bias", "stylegan_decoder.style_mlp.9.weight", "stylegan_decoder.style_mlp.9.bias", "stylegan_decoder.style_mlp.11.weight", "stylegan_decoder.style_mlp.11.bias", "stylegan_decoder.style_mlp.13.weight", "stylegan_decoder.style_mlp.13.bias", "stylegan_decoder.style_mlp.15.weight", "stylegan_decoder.style_mlp.15.bias", "stylegan_decoder.style_conv1.bias", "stylegan_decoder.style_convs.0.bias", "stylegan_decoder.style_convs.1.bias", "stylegan_decoder.style_convs.2.bias", "stylegan_decoder.style_convs.3.bias", "stylegan_decoder.style_convs.4.bias", "stylegan_decoder.style_convs.5.bias", "stylegan_decoder.style_convs.6.bias", "stylegan_decoder.style_convs.7.bias", "stylegan_decoder.style_convs.8.bias", "stylegan_decoder.style_convs.9.bias", "stylegan_decoder.style_convs.10.bias", "stylegan_decoder.style_convs.11.bias", "stylegan_decoder.style_convs.12.bias", "stylegan_decoder.style_convs.13.bias".
Unexpected key(s) in state_dict: "conv_body_first.0.weight", "conv_body_first.1.bias", "conv_body_down.0.conv1.0.weight", "conv_body_down.0.conv1.1.bias", "conv_body_down.0.conv2.1.weight", "conv_body_down.0.conv2.2.bias", "conv_body_down.0.skip.1.weight", "conv_body_down.1.conv1.0.weight", "conv_body_down.1.conv1.1.bias", "conv_body_down.1.conv2.1.weight", "conv_body_down.1.conv2.2.bias", "conv_body_down.1.skip.1.weight", "conv_body_down.2.conv1.0.weight", "conv_body_down.2.conv1.1.bias", "conv_body_down.2.conv2.1.weight", "conv_body_down.2.conv2.2.bias", "conv_body_down.2.skip.1.weight", "conv_body_down.3.conv1.0.weight", "conv_body_down.3.conv1.1.bias", "conv_body_down.3.conv2.1.weight", "conv_body_down.3.conv2.2.bias", "conv_body_down.3.skip.1.weight", "conv_body_down.4.conv1.0.weight", "conv_body_down.4.conv1.1.bias", "conv_body_down.4.conv2.1.weight", "conv_body_down.4.conv2.2.bias", "conv_body_down.4.skip.1.weight", "conv_body_down.5.conv1.0.weight", "conv_body_down.5.conv1.1.bias", "conv_body_down.5.conv2.1.weight", "conv_body_down.5.conv2.2.bias", "conv_body_down.5.skip.1.weight", "conv_body_down.6.conv1.0.weight", "conv_body_down.6.conv1.1.bias", "conv_body_down.6.conv2.1.weight", "conv_body_down.6.conv2.2.bias", "conv_body_down.6.skip.1.weight", "final_conv.0.weight", "final_conv.1.bias", "conv_body_up.0.conv1.0.weight", "conv_body_up.0.conv1.1.bias", "conv_body_up.0.conv2.activation.bias", "conv_body_up.1.conv1.0.weight", "conv_body_up.1.conv1.1.bias", "conv_body_up.1.conv2.activation.bias", "conv_body_up.2.conv1.0.weight", "conv_body_up.2.conv1.1.bias", "conv_body_up.2.conv2.activation.bias", "conv_body_up.3.conv1.0.weight", "conv_body_up.3.conv1.1.bias", "conv_body_up.3.conv2.activation.bias", "conv_body_up.4.conv1.0.weight", "conv_body_up.4.conv1.1.bias", "conv_body_up.4.conv2.activation.bias", "conv_body_up.5.conv1.0.weight", "conv_body_up.5.conv1.1.bias", "conv_body_up.5.conv2.activation.bias", "conv_body_up.6.conv1.0.weight", "conv_body_up.6.conv1.1.bias", "conv_body_up.6.conv2.activation.bias", "stylegan_decoder.style_mlp.2.weight", "stylegan_decoder.style_mlp.2.bias", "stylegan_decoder.style_mlp.4.weight", "stylegan_decoder.style_mlp.4.bias", "stylegan_decoder.style_mlp.6.weight", "stylegan_decoder.style_mlp.6.bias", "stylegan_decoder.style_mlp.8.weight", "stylegan_decoder.style_mlp.8.bias", "stylegan_decoder.style_conv1.activate.bias", "stylegan_decoder.style_convs.0.activate.bias", "stylegan_decoder.style_convs.1.activate.bias", "stylegan_decoder.style_convs.2.activate.bias", "stylegan_decoder.style_convs.3.activate.bias", "stylegan_decoder.style_convs.4.activate.bias", "stylegan_decoder.style_convs.5.activate.bias", "stylegan_decoder.style_convs.6.activate.bias", "stylegan_decoder.style_convs.7.activate.bias", "stylegan_decoder.style_convs.8.activate.bias", "stylegan_decoder.style_convs.9.activate.bias", "stylegan_decoder.style_convs.10.activate.bias", "stylegan_decoder.style_convs.11.activate.bias", "stylegan_decoder.style_convs.12.activate.bias", "stylegan_decoder.style_convs.13.activate.bias".
size mismatch for conv_body_up.3.conv2.weight: copying a param with shape torch.Size([128, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for conv_body_up.3.skip.weight: copying a param with shape torch.Size([128, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
size mismatch for conv_body_up.4.conv2.weight: copying a param with shape torch.Size([64, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 256, 3, 3]).
size mismatch for conv_body_up.4.skip.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 256, 1, 1]).
size mismatch for conv_body_up.5.conv2.weight: copying a param with shape torch.Size([32, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 128, 3, 3]).
size mismatch for conv_body_up.5.skip.weight: copying a param with shape torch.Size([32, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 128, 1, 1]).
size mismatch for conv_body_up.6.conv2.weight: copying a param with shape torch.Size([16, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 64, 3, 3]).
size mismatch for conv_body_up.6.skip.weight: copying a param with shape torch.Size([16, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 64, 1, 1]).
size mismatch for toRGB.3.weight: copying a param with shape torch.Size([3, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 256, 1, 1]).
size mismatch for toRGB.4.weight: copying a param with shape torch.Size([3, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 128, 1, 1]).
size mismatch for toRGB.5.weight: copying a param with shape torch.Size([3, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 64, 1, 1]).
size mismatch for toRGB.6.weight: copying a param with shape torch.Size([3, 16, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 32, 1, 1]).
size mismatch for stylegan_decoder.style_convs.6.modulated_conv.weight: copying a param with shape torch.Size([1, 256, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 512, 512, 3, 3]).
size mismatch for stylegan_decoder.style_convs.7.modulated_conv.weight: copying a param with shape torch.Size([1, 256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 512, 512, 3, 3]).
size mismatch for stylegan_decoder.style_convs.7.modulated_conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for stylegan_decoder.style_convs.7.modulated_conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for stylegan_decoder.style_convs.8.modulated_conv.weight: copying a param with shape torch.Size([1, 128, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 256, 512, 3, 3]).
size mismatch for stylegan_decoder.style_convs.8.modulated_conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for stylegan_decoder.style_convs.8.modulated_conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for stylegan_decoder.style_convs.9.modulated_conv.weight: copying a param with shape torch.Size([1, 128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 256, 256, 3, 3]).
size mismatch for stylegan_decoder.style_convs.9.modulated_conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for stylegan_decoder.style_convs.9.modulated_conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stylegan_decoder.style_convs.10.modulated_conv.weight: copying a param with shape torch.Size([1, 64, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 128, 256, 3, 3]).
size mismatch for stylegan_decoder.style_convs.10.modulated_conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for stylegan_decoder.style_convs.10.modulated_conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stylegan_decoder.style_convs.11.modulated_conv.weight: copying a param with shape torch.Size([1, 64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 128, 128, 3, 3]).
size mismatch for stylegan_decoder.style_convs.11.modulated_conv.modulation.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
size mismatch for stylegan_decoder.style_convs.11.modulated_conv.modulation.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stylegan_decoder.style_convs.12.modulated_conv.weight: copying a param with shape torch.Size([1, 32, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 64, 128, 3, 3]).
size mismatch for stylegan_decoder.style_convs.12.modulated_conv.modulation.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
size mismatch for stylegan_decoder.style_convs.12.modulated_conv.modulation.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stylegan_decoder.style_convs.13.modulated_conv.weight: copying a param with shape torch.Size([1, 32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 64, 64, 3, 3]).
size mismatch for stylegan_decoder.style_convs.13.modulated_conv.modulation.weight: copying a param with shape torch.Size([32, 512]) from checkpoint, the shape in current model is torch.Size([64, 512]).
size mismatch for stylegan_decoder.style_convs.13.modulated_conv.modulation.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stylegan_decoder.to_rgbs.3.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 512, 1, 1]).
size mismatch for stylegan_decoder.to_rgbs.3.modulated_conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for stylegan_decoder.to_rgbs.3.modulated_conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for stylegan_decoder.to_rgbs.4.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 256, 1, 1]).
size mismatch for stylegan_decoder.to_rgbs.4.modulated_conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for stylegan_decoder.to_rgbs.4.modulated_conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stylegan_decoder.to_rgbs.5.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 128, 1, 1]).
size mismatch for stylegan_decoder.to_rgbs.5.modulated_conv.modulation.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
size mismatch for stylegan_decoder.to_rgbs.5.modulated_conv.modulation.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stylegan_decoder.to_rgbs.6.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 64, 1, 1]).
size mismatch for stylegan_decoder.to_rgbs.6.modulated_conv.modulation.weight: copying a param with shape torch.Size([32, 512]) from checkpoint, the shape in current model is torch.Size([64, 512]).
size mismatch for stylegan_decoder.to_rgbs.6.modulated_conv.modulation.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_scale.3.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for condition_scale.3.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for condition_scale.3.2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for condition_scale.3.2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for condition_scale.4.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for condition_scale.4.0.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for condition_scale.4.2.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for condition_scale.4.2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for condition_scale.5.0.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for condition_scale.5.0.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_scale.5.2.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for condition_scale.5.2.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_scale.6.0.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for condition_scale.6.0.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for condition_scale.6.2.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for condition_scale.6.2.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for condition_shift.3.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for condition_shift.3.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for condition_shift.3.2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for condition_shift.3.2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for condition_shift.4.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for condition_shift.4.0.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for condition_shift.4.2.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for condition_shift.4.2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for condition_shift.5.0.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for condition_shift.5.0.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_shift.5.2.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for condition_shift.5.2.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_shift.6.0.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for condition_shift.6.0.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for condition_shift.6.2.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for condition_shift.6.2.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
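
For anyone landing on the same error: the checkpoint's key names (for example `conv_body_first.0.weight`, with `nn.Sequential`-style numeric indices) and the halved channel sizes in the shape mismatches suggest that GFPGANv1.pth holds weights for the original (paper) GFPGANv1 model with channel multiplier 1, while the script built `GFPGANv1Clean` with its defaults. A quick way to check what a checkpoint actually contains is to print a few of its keys; the sketch below is not from the original thread and only assumes the `params_ema`/`params` wrapping implied by the traceback:

```python
import torch

# Minimal diagnostic sketch (an assumption, not part of the original report):
# load the checkpoint on CPU and list a few parameter names and shapes.
ckpt = torch.load('experiments/pretrained_models/GFPGANv1.pth', map_location='cpu')

# GFPGAN checkpoints wrap their weights under 'params_ema' or 'params',
# matching the loadnet[keyname] access in gfpgan/utils.py above.
state = ckpt.get('params_ema', ckpt.get('params', ckpt))

for name, tensor in list(state.items())[:8]:
    print(name, tuple(tensor.shape))

# Keys like 'conv_body_first.0.weight' (note the Sequential index) belong to the
# original GFPGANv1 architecture rather than GFPGANv1Clean, which is why
# load_state_dict reports missing and unexpected keys with the default arch.
```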


@PennyPeng369 commented on GitHub (Sep 3, 2021):

Add `--channel=1 --arch='v1'` and try.


@xinntao commented on GitHub (Sep 3, 2021):

@chengkeng add `--arch original --channel 1`. Please refer to https://github.com/TencentARC/GFPGAN/blob/master/PaperModel.md for more details.

@PennyPeng369 Thanks.
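
Putting the replies together, the command from the report would presumably become `python inference_gfpgan.py --upscale 2 --test_path inputs/whole_imgs --save_root results --arch original --channel 1 --model_path experiments/pretrained_models/GFPGANv1.pth` (flag names taken from the comments above; PaperModel.md is the authoritative reference). The same fix expressed through the GFPGANer helper seen in the traceback might look like the sketch below; the keyword names are an assumption based on gfpgan/utils.py rather than a quote from this thread:

```python
from gfpgan import GFPGANer

# Hedged sketch: construct the restorer for the paper model directly, assuming
# GFPGANer accepts arch/channel_multiplier keyword arguments.
restorer = GFPGANer(
    model_path='experiments/pretrained_models/GFPGANv1.pth',
    upscale=2,
    arch='original',        # the paper GFPGANv1 model, not the default 'clean'
    channel_multiplier=1,   # GFPGANv1.pth uses channel multiplier 1, per the reply above
    bg_upsampler=None)
```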


@xinntao commented on GitHub (Sep 3, 2021):

@chengkeng add `--arch original --channel 1`. Please refer to https://github.com/TencentARC/GFPGAN/blob/master/PaperModel.md for more details.

Reference: TencentARC/GFPGAN#62