Mirror of https://github.com/TencentARC/GFPGAN.git (synced 2026-04-25 15:20:58 +00:00)
How to improve GPU-Util in NVIDIA GPU #374
Originally created by @kenZhangCn on GitHub (Jul 13, 2023).
When I use GFPGAN to process some images, I find that GPU utilization (NVIDIA 3090) always stays under 50%, while GPU memory usage is around 3 GB / 24 GB. I want to speed up the whole process, so I think improving GPU utilization might be one way (or if there is a better way, please let me know).
ENV:
Ubuntu 20.04.6 LTS
CUDA Version: 11.4
python 3.7.17
torch==1.13.1
opencv-python==4.1.0.25
GFP paras: -v 1.4 -s 2 --only_center_face --bg_upsampler None
@FXmonkey commented on GitHub (Nov 1, 2023):
Change the for loop in inference_gfpgan.py to use multiprocessing.
exp : https://github.com/FXmonkey/GFPGAN-Speed
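A minimal sketch of the idea, assuming the per-image restoration step can be wrapped in a plain function (here `restore_image` is a hypothetical placeholder for the call to `restorer.enhance(...)` inside the loop in `inference_gfpgan.py`; the worker count and paths are illustrative):

```python
import multiprocessing as mp

def restore_image(img_path):
    # Placeholder for the per-image GFPGAN restoration step.
    # In the real script this would load the image and call
    # restorer.enhance(...) on it, then write the result to disk.
    return f"restored:{img_path}"

def restore_all(img_paths, workers=2):
    # Each worker process holds its own model copy, so GPU memory
    # grows with the worker count; at roughly 3 GB per process, a
    # 24 GB card can fit several workers before running out.
    with mp.Pool(processes=workers) as pool:
        return pool.map(restore_image, img_paths)

if __name__ == "__main__":
    results = restore_all([f"img_{i}.png" for i in range(8)], workers=2)
    print(results)
```

Note that each worker must construct its own GFPGANer instance (CUDA contexts cannot be shared across forked processes), which is why the linked repository restructures the script rather than just wrapping the existing loop.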
@Crestina2001 commented on GitHub (Dec 24, 2023):
This works for me. It speeds up a lot!