mirror of
https://github.com/TencentARC/GFPGAN.git
synced 2026-02-17 14:54:38 +00:00
How do I force it to work on the CPU? #302
Originally created by @joe-eis on GitHub (Jan 30, 2023).
How do I force it to work on the CPU?
My GPU has only 2 GB of RAM, so I get the message:
"RuntimeError: CUDA out of memory. Tried to allocate 154.00 MiB (GPU 0; 1.96 GiB total capacity; 927.97 MiB already allocated; 72.44 MiB free; 1.05 GiB reserved in total by PyTorch)"

@medalawi commented on GitHub (Jan 31, 2023):
Try this (or a lower value such as 128):
Windows: set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
Linux: export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
@joe-eis commented on GitHub (Jan 31, 2023):
Unfortunately no. Changing the allocation with
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
in the shell before I execute inference_gfpgan.py brings only a minor change:
RuntimeError: CUDA out of memory. Tried to allocate 154.00 MiB (GPU 0; 1.96 GiB total capacity; 775.63 MiB already allocated; 99.88 MiB free; 920.00 MiB reserved in total by PyTorch)
Adding the line
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"
to the script does not change anything either.
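One reason the in-script variant often has no effect: PyTorch reads PYTORCH_CUDA_ALLOC_CONF when CUDA is first initialized, so the assignment must run before torch is imported anywhere in the process. A minimal sketch of the ordering:

```python
import os

# The CUDA caching allocator reads this variable when torch first
# initializes CUDA, so it must be set before "import torch" runs.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

# import torch  # must come only after the assignment above
```

If torch was already imported by the time the variable is set (e.g. at the top of inference_gfpgan.py), the setting is silently ignored.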
@medalawi commented on GitHub (Feb 3, 2023):
Your VRAM is too low -- 4 GB is the minimum -- but you could also try downgrading torch to version 1.7.
@joe-eis commented on GitHub (Feb 3, 2023):
Thank you for this useful information.
Unfortunately, my notebook has a 2 GB Nvidia card installed.
The program can run on the CPU if no Nvidia card is present, but it picks the CUDA-capable card even when it has too little memory.
@medalawi commented on GitHub (Feb 3, 2023):
Find this line and remove the "not" after the if statement to force CPU use:
if not torch.cuda.is_available(): #cpu
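Instead of deleting the "not" in the library source, the same effect can be had with a small device-selection helper. This is a sketch, not GFPGAN's actual API: pick_device and force_cpu are illustrative names, and passing force_cpu=True is the equivalent of the suggested edit.

```python
import torch

def pick_device(force_cpu: bool = False) -> torch.device:
    """Choose the inference device.

    force_cpu=True skips CUDA even when a (too-small) CUDA card
    is detected -- equivalent to removing the "not" in the check.
    """
    if force_cpu or not torch.cuda.is_available():
        return torch.device("cpu")
    return torch.device("cuda")

print(pick_device(force_cpu=True))  # cpu
```

Keeping the choice behind a flag avoids re-patching gfpgan/utils.py after every update.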
@medalawi commented on GitHub (Feb 3, 2023):
I found this line in gfpgan/utils.py and /GFPGAN/inference_gfpgan.py.
@joe-eis commented on GitHub (Feb 3, 2023):
Thank you, removing the 'not' was a great idea and solved the problem.