tanh activation #211

Open
opened 2026-01-29 21:45:45 +00:00 by claunia · 1 comment

Originally created by @g-luo on GitHub (Jul 22, 2022).

Hi, I was curious why the implementation of GFPGAN has no final tanh activation to keep the generator outputs constrained to [-1, 1]. Is there a reason for this, or could GFPGAN benefit from such an activation?

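For context, a minimal PyTorch sketch of what such a final activation would do: `torch.tanh` hard-bounds arbitrary generator outputs to [-1, 1], whereas without it the network must learn to keep outputs in range on its own. The `logits` tensor below is a hypothetical stand-in for the generator's last-layer output, not GFPGAN's actual code:

```python
import torch

# Hypothetical unbounded generator output (batch of 2 RGB 8x8 images).
logits = torch.randn(2, 3, 8, 8) * 10

# A final tanh would squash every value into [-1, 1].
out = torch.tanh(logits)

assert out.min() >= -1 and out.max() <= 1
```

One common argument against a final tanh is that its saturating gradients near ±1 can slow training, which may be why some restoration networks omit it; whether that applies here is exactly the question being asked.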

@g-luo commented on GitHub (Jul 22, 2022):

Additionally, what is the intuition for detaching the fake output when training the discriminator ([see code](https://github.com/TencentARC/GFPGAN/blob/9c3f2d62cb4e63a7ba7ce68648dd1667b2b2ef44/gfpgan/models/gfpgan_model.py#L420))?

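A minimal sketch of the standard reason for the `.detach()` (toy one-layer networks, not GFPGAN's actual models): during the discriminator update, gradients should stop at the fake images so the generator's parameters are not updated by the discriminator's loss:

```python
import torch

# Toy "generator" and "discriminator": one linear layer each.
g = torch.nn.Linear(4, 4)
d = torch.nn.Linear(4, 1)

z = torch.randn(2, 4)
fake = g(z)

# Discriminator step: detach cuts the autograd graph at `fake`,
# so backward() never reaches the generator's parameters.
d_loss = d(fake.detach()).mean()
d_loss.backward()

assert g.weight.grad is None      # generator untouched
assert d.weight.grad is not None  # discriminator received gradients
```

Without the detach, `d_loss.backward()` would also backpropagate through the generator, wasting computation and (if the generator's optimizer later stepped on those gradients) pushing the generator in the wrong direction for its own objective.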

Reference: TencentARC/GFPGAN#211