Issue with custom model inferencing #182

Open
opened 2026-01-29 21:45:27 +00:00 by claunia · 1 comment

Originally created by @keshavoct98 on GitHub (Apr 4, 2022).

Hello,

I have trained GFPGAN on my custom dataset to improve the quality of license plates, using train_gfpgan_v1_simple.yml. However, the results I get on my validation data during training differ from the results I get when running inference with the trained model.
Below is the output image from the validation step during training:
![0_120122_5000](https://user-images.githubusercontent.com/32236225/161547637-a68b0dc4-52ad-46d6-b1c0-acd663cbc106.png)

This is the output image I got when running inference with the same model:
![0_120122](https://user-images.githubusercontent.com/32236225/161547718-a76e48eb-d077-4ee7-80f9-239a10af18ed.png)

Training command:

```
BASICSR_EXT=True BASICSR_JIT=True python -m torch.distributed.launch --nproc_per_node=1 --master_port=22021 gfpgan/train.py -opt options/train_gfpgan_v1_simple.yml --launcher pytorch
```

Inference command:

```
python inference_gfpgan.py -i datasets/validation/input/ -o results -v 1 -s 1 --bg_upsampler realesrgan
```

(I added the custom model path inside the inference script.)

Any ideas on why this is happening?


@qianx77 commented on GitHub (Oct 17, 2024):

Because the official code includes face detection, the result you are testing is the output of Real-ESRGAN, not your trained model.
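The comment's point can be sketched as follows. The inference script crops detected faces, restores only those crops with the GFPGAN network, and processes everything else with the background upsampler; an image with no detectable face, such as a license plate, is therefore handled entirely by Real-ESRGAN, and the custom-trained weights are never applied. This is a minimal illustration with hypothetical stand-in functions, not the actual GFPGAN code:

```python
def detect_faces(image):
    # Stand-in for the face detector used by the inference pipeline.
    # For a license-plate image this list is typically empty.
    return []

def gfpgan_restore(face_crop):
    # Stand-in for the GFPGAN network (the custom-trained model).
    return f"restored({face_crop})"

def bg_upsample(image):
    # Stand-in for the Real-ESRGAN background upsampler.
    return f"realesrgan({image})"

def restore_image(image):
    faces = detect_faces(image)
    background = bg_upsample(image)
    if not faces:
        # No faces found: the output comes purely from the background
        # upsampler, so the custom GFPGAN weights never touch the image.
        return background
    restored_faces = [gfpgan_restore(f) for f in faces]
    # The real script pastes restored faces back onto the upsampled background.
    return (background, restored_faces)

print(restore_image("license_plate.png"))  # realesrgan(license_plate.png)
```

This also suggests why the training-time validation looks different: the validation step feeds images straight through the GFPGAN generator without any face-detection gating.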

Reference: TencentARC/GFPGAN#182