How do I force it to work on the CPU? #302

Closed
opened 2026-01-29 21:46:37 +00:00 by claunia · 7 comments

Originally created by @joe-eis on GitHub (Jan 30, 2023).

How do I force it to work on the CPU?
My GPU RAM is too small at 2 GB; I get the message:

"RuntimeError: CUDA out of memory. Tried to allocate 154.00 MiB (GPU 0; 1.96 GiB total capacity; 927.97 MiB already allocated; 72.44 MiB free; 1.05 GiB reserved in total by PyTorch)"

@medalawi commented on GitHub (Jan 31, 2023):

Try this, or lower the value to 128:

Windows: set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512

Linux: export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
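
A minimal sketch of the same idea applied inside the script rather than the shell (the 128 value and the placement are assumptions, not part of the suggestion above); the variable only takes effect if it is set before PyTorch makes its first CUDA allocation:

import os

# Must be set before torch initializes its CUDA allocator.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch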

@joe-eis commented on GitHub (Jan 31, 2023):

Unfortunately no. Changing the allocation with

export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512

in the shell before I execute inference_gfpgan.py brings only a minor change:

RuntimeError: CUDA out of memory. Tried to allocate 154.00 MiB (GPU 0; 1.96 GiB total capacity; 775.63 MiB already allocated; 99.88 MiB free; 920.00 MiB reserved in total by PyTorch)

The line

os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

in the script does not change anything either.

@medalawi commented on GitHub (Feb 3, 2023):

You have too little VRAM -- 4 GB is the minimum, but try downgrading the torch version to 1.7.

@joe-eis commented on GitHub (Feb 3, 2023):

Thank you for this useful information.

Unfortunately, my notebook has a 2 GB Nvidia card installed.
The program could run on the CPU if there were no Nvidia card at all, but it recognizes the CUDA-capable card even though it has too little memory.

@medalawi commented on GitHub (Feb 3, 2023):

Find this line and remove the "not" after the if statement to force it to use the CPU:

if not torch.cuda.is_available(): #cpu
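
As a hedged sketch (the surrounding lines are assumed for illustration, not copied from the repository), this is the pattern being discussed and what removing "not" achieves:

import torch

# Usual pattern: the CPU branch is reached only when no CUDA device is visible.
if not torch.cuda.is_available():  # cpu
    device = torch.device('cpu')
else:
    device = torch.device('cuda')

# With "not" removed the condition becomes torch.cuda.is_available(), so the
# CPU branch runs exactly when a CUDA card is detected, forcing inference onto
# the CPU on this machine.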

@medalawi commented on GitHub (Feb 3, 2023):

I found this line in gfpgan/utils.py and /GFPGAN/inference_gfpgan.py.

@joe-eis commented on GitHub (Feb 3, 2023):

Thank you, the idea to remove 'not' is great and solved the problem.

Reference: TencentARC/GFPGAN#302