Improve inference time. #308

Open
opened 2026-01-29 21:46:43 +00:00 by claunia · 12 comments
Owner

Originally created by @rohaantahir on GitHub (Feb 9, 2023).

Hi, I love the work. I want to enhance videos containing facial images. To do that, I am converting the video to frames and processing them one by one. I wanted to know if there is any way to improve the inference time, so that a large video can be processed efficiently.

P.S. I have an RTX 3090 and a Core i9 high-end system.

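One general way to speed up frame-by-frame processing like this is to amortize per-call overhead by feeding the model several frames per call instead of one. A minimal batching helper, as a sketch only — whether GFPGAN's restorer actually accepts batched input depends on the version you run, and `batch_size` here is an illustrative parameter:

```python
def batched(frames, batch_size):
    """Group an iterable of frames into fixed-size batches so the
    model can be called once per batch instead of once per frame.
    The final batch may be smaller than batch_size."""
    batch = []
    for frame in frames:
        batch.append(frame)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush any leftover frames
        yield batch
```

You would then call the restorer once per yielded batch; on a GPU like an RTX 3090, larger batches usually improve throughput until memory becomes the limit.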

@Wong-denis commented on GitHub (Feb 17, 2023):

Same issue here. For me, it takes about 90 ms to process a 120x160 image. I wonder if there is room for improvement.

P.S. I use an RTX 3060.


@rohaantahir commented on GitHub (Mar 21, 2023):

Still waiting for someone to respond and offer guidance.


@dearkafka commented on GitHub (Mar 21, 2023):

@rohaantahir you know that 90 ms is fast, right? GFPGAN is still one of the fastest options given its quality.


@rohaantahir commented on GitHub (Mar 21, 2023):

Yeah, that seems fast, but on my system I am getting more than 90 ms. Could you please tell me what command you are using? I am using this command:

python inference_gfpgan.py -i input_image -o outputPath -v 1.4 -s 2 --only_center_face --bg_upsampler none


@JGooLaaR commented on GitHub (Mar 21, 2023):

@rohaantahir, you can skip frames without faces.

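The skip-frames idea above can be sketched as a simple gate: run a cheap face check first and only invoke the expensive restorer when it fires. `has_face` and `restore` here are placeholder callables (e.g. a lightweight detector and the GFPGAN restorer), not actual GFPGAN APIs:

```python
def process_frames(frames, has_face, restore):
    """Run the (expensive) restorer only on frames where the (cheap)
    face check returns True; pass all other frames through untouched.
    Preserves frame order and count."""
    return [restore(f) if has_face(f) else f for f in frames]
```

Even on a video that is "all faces", a cheap pre-check can still pay off if the detector occasionally misses, since a passed-through frame costs almost nothing.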

@rohaantahir commented on GitHub (Mar 21, 2023):

@JGooLaaR I am already doing that; the video I am using has faces in every frame.


@JGooLaaR commented on GitHub (Mar 21, 2023):

@rohaantahir, do you know the specific bottleneck code region?


@rohaantahir commented on GitHub (Mar 21, 2023):

This is the time I am getting to restore one image at 640x360 resolution.

![image](https://user-images.githubusercontent.com/22777330/226675830-8f7206e5-3ddb-4515-8e6d-52e3668ec7c7.png)


@rohaantahir commented on GitHub (Mar 21, 2023):

No, I am still trying to figure it out. Could you please help me out?


@rohaantahir commented on GitHub (Mar 21, 2023):

@Wong-denis what configuration are you using? Can you please tell me — specifically, which PyTorch and Python versions?


@JGooLaaR commented on GitHub (Mar 21, 2023):

@rohaantahir, you should measure the execution time of each function inside the inference script to find out which method is the bottleneck.

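One standard-library way to measure a function's execution time, as suggested above, is a small decorator you can temporarily wrap around suspect methods in the inference script. This is a generic sketch, not part of GFPGAN itself:

```python
import time
from functools import wraps

def timed(fn):
    """Decorator that records the wall-clock duration of every call
    to fn (in seconds) in a list attached as fn.times."""
    times = []

    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        times.append(time.perf_counter() - start)
        return result

    wrapper.times = times
    return wrapper
```

Wrapping, say, the face-detection step and the restoration step separately and comparing `fn.times` quickly shows which stage dominates the ~90 ms.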

@gg22mm commented on GitHub (Nov 20, 2024):

> @rohaantahir, do you know the specific bottleneck code region?

![1732080231663](https://github.com/user-attachments/assets/9df064c7-e9da-414b-acc0-29fee5e3e3b8)


Reference: TencentARC/GFPGAN#308