How to improve GPU-Util in NVIDIA GPU #374

Open
opened 2026-01-29 21:47:19 +00:00 by claunia · 2 comments

Originally created by @kenZhangCn on GitHub (Jul 13, 2023).

When I use GFPGAN to process some images, I find that GPU utilization (NVIDIA 3090) always stays under 50%, while GPU memory usage is only around 3 GB / 24 GB. I want to speed up the whole process, so I think raising GPU utilization might be one way to do it (or, if there is a better way, please let me know).

ENV:
Ubuntu 20.04.6 LTS
CUDA Version: 11.4
python 3.7.17
torch==1.13.1
opencv-python==4.1.0.25
GFPGAN parameters: -v 1.4 -s 2 --only_center_face --bg_upsampler None
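Low GPU utilization combined with mostly free GPU memory usually means the GPU is starved: it sits idle while the CPU decodes and pre-processes each image sequentially. One common remedy is to overlap loading with inference. Below is a minimal stdlib sketch of that producer/consumer pattern; `loader` and the string transforms are placeholders standing in for `cv2.imread` plus the GFPGAN restoration call (these names are illustrative, not from the GFPGAN repo):

```python
# Sketch (stdlib only): overlap image loading (CPU/disk-bound) with
# inference using a background loader thread and a bounded queue.
import queue
import threading


def loader(paths, q):
    """Producer: decode images ahead of the consumer."""
    for p in paths:
        q.put(f"decoded:{p}")  # placeholder for cv2.imread + preprocessing
    q.put(None)                # sentinel: no more images


def run_pipeline(paths):
    q = queue.Queue(maxsize=4)  # bounded, so the loader stays just ahead
    t = threading.Thread(target=loader, args=(paths, q))
    t.start()
    results = []
    while (item := q.get()) is not None:
        # Placeholder for the GPU inference step; while it runs, the
        # loader thread is already decoding the next image.
        results.append(item.replace("decoded:", "restored:"))
    t.join()
    return results
```

With real code, the win comes from the GPU never waiting on disk I/O; whether it helps here depends on how much of the per-image time is CPU-side.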


@FXmonkey commented on GitHub (Nov 1, 2023):

Change the for loop in inference_gfpgan.py to multiprocessing.
Example: https://github.com/FXmonkey/GFPGAN-Speed
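The linked repo has the actual change; as a rough illustration of the idea, here is a hypothetical sketch that splits the image list across worker processes instead of one sequential loop. `restore_image` is a stand-in for the per-image GFPGAN call (model load + `enhance`); none of these names come from either repository:

```python
# Illustrative sketch: parallelize per-image restoration across processes.
import multiprocessing as mp


def restore_image(path):
    # Placeholder for: create a GFPGANer in this worker, run
    # restorer.enhance() on the image at `path`, write the result.
    return f"restored:{path}"


def restore_all(paths, workers=4):
    # Each worker processes its own share of the image list concurrently,
    # which helps when the bottleneck is CPU-side pre/post-processing.
    with mp.Pool(processes=workers) as pool:
        return pool.map(restore_image, paths)


if __name__ == "__main__":
    print(restore_all([f"img_{i}.png" for i in range(8)], workers=2))
```

Note that in real use each worker needs its own model instance (a CUDA context cannot be shared across forked processes), so memory usage grows with the worker count.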


@Crestina2001 commented on GitHub (Dec 24, 2023):

> Change the for loop in inference_gfpgan.py to multiprocessing. Example: https://github.com/FXmonkey/GFPGAN-Speed

This works for me. It speeds up a lot!


Reference: TencentARC/GFPGAN#374