CUDA out of memory #148

Open
opened 2026-01-29 21:43:53 +00:00 by claunia · 3 comments
Owner

Originally created by @PAk-CatchFire on GitHub (Jan 9, 2022).

Hello.
Under Anaconda for Windows, I am getting the following message:

File "C:\ProgramData\Anaconda3b\lib\site-packages\torch\nn\functional.py", line 2282, in batch_norm
return torch.batch_norm(
RuntimeError: CUDA out of memory. Tried to allocate 1.96 GiB (GPU 0; 6.00 GiB total capacity; 2.83 GiB already allocated; 916.67 MiB free; 2.85 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Is there anything I can do?
Thank you
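
The error message itself points at the PYTORCH_CUDA_ALLOC_CONF setting. A minimal sketch of applying it before any CUDA allocation happens; the 128 MiB value is only an illustrative guess for a 6 GiB card, not a recommended number:

import os
# Set the allocator option mentioned in the error message before importing torch,
# so the caching allocator sees it. 128 is an arbitrary example value.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

# Running inference inside no_grad avoids keeping autograd buffers in GPU memory.
with torch.no_grad():
    pass  # call the GFPGAN restorer here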


@PAk-CatchFire commented on GitHub (Jan 9, 2022):

I tried to reduce the input image size, however I get:

Traceback (most recent call last):
File "inference_gfpgan.py", line 126, in
main()
File "inference_gfpgan.py", line 89, in main
cropped_faces, restored_faces, restored_img = restorer.enhance(
File "C:\ProgramData\Anaconda3b\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "C:\Users\frasa\ANACONDA\Proyectos\GFPGAN\gfpgan\utils.py", line 91, in enhance
self.face_helper.read_image(img)
File "C:\ProgramData\Anaconda3b\lib\site-packages\facexlib\utils\face_restoration_helper.py", line 106, in read_image
if np.max(img) > 256: # 16-bit image
TypeError: '>' not supported between instances of 'NoneType' and 'int'

Even after converting the image to grayscale.
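
That TypeError usually means cv2.imread returned None (for example when the path points to a missing or unreadable file), so np.max(img) yields None and the comparison with 256 fails. A minimal sketch of checking the image before handing it to the restorer; the path is only an example:

import cv2

img_path = 'inputs/whole_imgs/example.png'  # example path, adjust to your file
img = cv2.imread(img_path, cv2.IMREAD_COLOR)
if img is None:
    # cv2.imread returns None instead of raising when it cannot read the file
    raise FileNotFoundError(f'could not read image: {img_path}')
# only pass a successfully loaded image to restorer.enhance(...)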


@Jojanyu15 commented on GitHub (Jun 9, 2022):

@PAk-CatchFire I fixed this error by changing the code. I don't remember exactly what I changed, but it happens because an operation mixes double and int values at the same time; check line 106 of face_restoration_helper.py.
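
For reference, a hedged sketch of what a defensive version of that line-106 check could look like; this is not the actual upstream fix, just a guard against a None image reaching the comparison:

import numpy as np

def is_16bit_image(img):
    # Guarded version of the `np.max(img) > 256` check from read_image;
    # fail loudly if the image never loaded instead of raising a TypeError.
    if img is None:
        raise ValueError('image is None; cv2.imread probably failed to load the file')
    return np.max(img) > 256  # pixel values above 256 suggest a 16-bit image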


@Phoenix8215 commented on GitHub (Dec 26, 2022):

@PAk-CatchFire I also ran into this problem. I got around it by making sure the input folder contains only the input images, with no subfolders.
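
A minimal sketch of gathering only top-level image files from the input folder and skipping subdirectories; the folder name and extensions are just examples:

from pathlib import Path

input_dir = Path('inputs/whole_imgs')          # example input folder
image_exts = {'.png', '.jpg', '.jpeg', '.bmp'}
# keep plain files with image extensions; subfolders are ignored entirely
image_paths = [p for p in sorted(input_dir.iterdir())
               if p.is_file() and p.suffix.lower() in image_exts]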


Reference: TencentARC/GFPGAN#148