Are the decoders finetuned? #153

Open
opened 2026-01-29 21:44:10 +00:00 by claunia · 1 comment
Owner

Originally created by @mchong6 on GitHub (Jan 24, 2022).

From the training script, I don't believe the decoders are being fine-tuned, but when I play with the Colab code I get weird results.

In the Colab code, if I make `conditions` empty, it should return the result without SFT; however, the results are bad.
```python
image, _ = self.stylegan_decoder(
    [style_code],
    [],
    return_latents=return_latents,
    input_is_latent=True,
    randomize_noise=randomize_noise)
```
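For context, a minimal sketch of the comparison being described, assuming `model` is a loaded GFPGANv1-style network whose forward pass has already produced `style_code` and `conditions` as in the snippet above (the variable names are illustrative, not the repository's exact API):

```python
import torch

# Hypothetical comparison: run the StyleGAN decoder with and without the
# SFT conditions and measure how much the output changes. `model`,
# `style_code` and `conditions` are assumed to already exist, mirroring
# the call in the snippet above.
with torch.no_grad():
    img_sft, _ = model.stylegan_decoder(
        [style_code], conditions,
        input_is_latent=True, randomize_noise=False)
    img_plain, _ = model.stylegan_decoder(
        [style_code], [],
        input_is_latent=True, randomize_noise=False)

# A large difference only shows that the SFT layers matter; it does not by
# itself tell whether the decoder weights were also fine-tuned.
print((img_sft - img_plain).abs().mean().item())
```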
![image](https://user-images.githubusercontent.com/20831135/150839458-4bec158f-f51c-414a-818e-2634c8693709.png)
This is the result of setting `conditions` to empty on the test images. If the decoder is not being fine-tuned, this should give a proper face result.
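As a rough way to check the title question directly, one could test whether the decoder parameters are trainable at all, and whether they still match the original pre-trained StyleGAN2 weights. A sketch, assuming `model` exposes `stylegan_decoder` as above and `pretrained_state` is a separately loaded StyleGAN2 checkpoint state dict (both names are assumptions for illustration):

```python
import torch

def decoder_is_trainable(model) -> bool:
    # True if any decoder parameter would receive gradients during training.
    return any(p.requires_grad for p in model.stylegan_decoder.parameters())

def decoder_matches_pretrained(model, pretrained_state, atol=1e-6) -> bool:
    # Compare the current decoder weights against a pre-trained StyleGAN2
    # checkpoint (`pretrained_state`: a dict of tensors with matching
    # parameter names).
    current = model.stylegan_decoder.state_dict()
    return all(
        torch.allclose(current[k].cpu(), v.cpu(), atol=atol)
        for k, v in pretrained_state.items()
        if k in current
    )
```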

Author
Owner

@laodar commented on GitHub (Jul 31, 2023):

@mchong6 Have you solved this problem?


Reference: TencentARC/GFPGAN#153