Training log anomaly #20

Closed
opened 2026-01-29 21:37:08 +00:00 by claunia · 28 comments

Originally created by @SimKarras on GitHub (Jul 6, 2021).

Why is there no output during training after I change 4 GPUs to 2 GPUs? There is nothing in the real-time terminal, and the log file that should appear under the project folder in experiments is also missing. The changes are as follows:

# general settings
name: train_GFPGANv1_512_2gpu
model_type: GFPGANModel
num_gpu: 2     # 4
manual_seed: 0

![2021-07-06 10-33-58 screenshot](https://user-images.githubusercontent.com/57309899/124533868-b5d9ed80-de45-11eb-889c-3e889b19661c.png)
Judging by GPU usage, training does appear to be running.
PS: I ran into a similar situation earlier when reproducing ESRGAN from your BasicSR.


@xinntao commented on GitHub (Jul 6, 2021):

I have not run into this problem before; I need more information.

  1. Does it work fine with four GPUs?
  2. Which PyTorch version?

@SimKarras commented on GitHub (Jul 6, 2021):

@xinntao Thanks for your reply. I have not been able to try four GPUs yet.
pytorch = '1.8.0+cu111'
In the same environment, single-GPU training of ESRGAN had no such problem.
It looks like only the log output is missing: training proceeds normally, models are saved, and wandb works fine.
If you have not seen this with two GPUs, it is probably caused by an environment mismatch.


@SimKarras commented on GitHub (Jul 6, 2021):

To add: ESRGAN is configured for single-GPU training in the first place, so it has no problem. But other projects in BasicSR use four GPUs, and switching them to two GPUs produces the same issue.


@xinntao commented on GitHub (Jul 7, 2021):

This is indeed strange; I did not run into it with PyTorch 1.8 and CUDA 10.2.

Does your run save a .log file at all?


@SimKarras commented on GitHub (Jul 7, 2021):

@xinntao I just checked again: there really is no .log file, and the terminal has no output either.
![2021-07-07 11-23-06 screenshot](https://user-images.githubusercontent.com/57309899/124695462-cce81080-df15-11eb-863a-939b45debd67.png)
Everything else is fine.


@SimKarras commented on GitHub (Jul 8, 2021):

@xinntao Hello, I have some ideas for improving the network and would like to discuss with you whether they are feasible. Could you give me a way to contact you? My email: jiaweishi.cv@qq.com


@SimKarras commented on GitHub (Jul 8, 2021):

With eight GPUs, the log is normal.


@syfbme commented on GitHub (Jul 12, 2021):

same issue...


@xinntao commented on GitHub (Jul 12, 2021):

@syfbme @JiaweiShiCV
I cannot reproduce this issue. Could you help me debug it?

It may be caused by the logging mechanism in BasicSR.

In the BasicSR folder, in basicsr/utils/logger.py (around Line 106 - Line 140):
Could you please add these print lines and post the outputs here? Thanks.

![image](https://user-images.githubusercontent.com/17445847/125223156-e1ede680-e2fd-11eb-9fdb-854fa64e6044.png)

def get_root_logger(logger_name='basicsr', log_level=logging.INFO, log_file=None):
    """Get the root logger.

    The logger will be initialized if it has not been initialized. By default a
    StreamHandler will be added. If `log_file` is specified, a FileHandler will
    also be added.

    Args:
        logger_name (str): root logger name. Default: 'basicsr'.
        log_file (str | None): The log filename. If specified, a FileHandler
            will be added to the root logger.
        log_level (int): The root logger level. Note that only the process of
            rank 0 is affected, while other processes will set the level to
            "Error" and be silent most of the time.

    Returns:
        logging.Logger: The root logger.
    """
    print('Enter get_root_logger')
    logger = logging.getLogger(logger_name)
    # if the logger has been initialized, just return it
    if logger.hasHandlers():
        return logger

    print('logger: add handlers')
    format_str = '%(asctime)s %(levelname)s: %(message)s'
    logging.basicConfig(format=format_str, level=log_level)
    rank, _ = get_dist_info()
    if rank != 0:
        logger.setLevel('ERROR')
    elif log_file is not None:
        file_handler = logging.FileHandler(log_file, 'w')
        file_handler.setFormatter(logging.Formatter(format_str))
        file_handler.setLevel(log_level)
        logger.addHandler(file_handler)
    print('logger: last return')
    return logger
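
For reference, a minimal, self-contained sketch (not from BasicSR itself) of why the `if logger.hasHandlers(): return logger` guard above can fire on the very first call: `hasHandlers()` also counts handlers attached to ancestor loggers, so if anything else in the process has already configured the root logger, `get_root_logger` returns before attaching its own StreamHandler/FileHandler, and INFO-level training messages are silently dropped.

```python
import logging

# Assume some other component (another library, an earlier import) has already
# configured the root logger before get_root_logger() ever runs.
logging.basicConfig(level=logging.WARNING)

logger = logging.getLogger('basicsr')
print(logger.hasHandlers())             # True: the *root* logger has a handler
print(logger.handlers)                  # []   : this logger itself has none
logger.info('training iteration info')  # dropped: effective level is WARNING
logger.warning('mismatch')              # still shown via the root handler,
                                        # e.g. "WARNING:basicsr:mismatch"
```

This pattern would also explain why, in the two-GPU output posted later in this thread, only the WARNING:basicsr lines reach the terminal.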

@syfbme commented on GitHub (Jul 12, 2021):

Hi @xinntao
It only outputs "Enter get_root_logger".


@xinntao commented on GitHub (Jul 12, 2021):

@syfbme Thanks.
That is strange...

Could you please modify this function as follows (skipping the hasHandlers() early return, and only returning early when no log_file is given), and post the outputs?

def get_root_logger(logger_name='basicsr', log_level=logging.INFO, log_file=None):
    """Get the root logger.

    The logger will be initialized if it has not been initialized. By default a
    StreamHandler will be added. If `log_file` is specified, a FileHandler will
    also be added.

    Args:
        logger_name (str): root logger name. Default: 'basicsr'.
        log_file (str | None): The log filename. If specified, a FileHandler
            will be added to the root logger.
        log_level (int): The root logger level. Note that only the process of
            rank 0 is affected, while other processes will set the level to
            "Error" and be silent most of the time.

    Returns:
        logging.Logger: The root logger.
    """
    print('Enter get_root_logger')
    logger = logging.getLogger(logger_name)
    # if the logger has been initialized, just return it
    # if logger.hasHandlers():
    #    return logger
    if log_file is None:
        return logger

    print('logger: add handlers')
    format_str = '%(asctime)s %(levelname)s: %(message)s'
    logging.basicConfig(format=format_str, level=log_level)
    rank, _ = get_dist_info()
    if rank != 0:
        logger.setLevel('ERROR')
    elif log_file is not None:
        file_handler = logging.FileHandler(log_file, 'w')
        file_handler.setFormatter(logging.Formatter(format_str))
        file_handler.setLevel(log_level)
        logger.addHandler(file_handler)
    print('logger: last return')
    return logger

![image](https://user-images.githubusercontent.com/17445847/125224761-c0422e80-e300-11eb-858b-0ab6a6f59fbb.png)


@syfbme commented on GitHub (Jul 12, 2021):

Hi @xinntao
I only used 1 GPU to keep the output cleaner. Below is the output:
![image](https://user-images.githubusercontent.com/13032160/125225224-8291d580-e301-11eb-9880-2304e48f301d.png)
Only the first entry prints "add handlers" and "last return".


@xinntao commented on GitHub (Jul 12, 2021):

@syfbme If it prints "add handlers" and "last return", then the issue has been solved.

So you can see the screen outputs and also have a log file in the experiments folder, right?


@xinntao commented on GitHub (Jul 12, 2021):

@JiaweiShiCV Are you still seeing the problem of "no .log file at all, and no terminal output"?


@SimKarras commented on GitHub (Jul 12, 2021):

@xinntao Eight GPUs and four GPUs both work fine for me now; with two GPUs there is probably still no output.


@xinntao commented on GitHub (Jul 12, 2021):

@JiaweiShiCV
Could you help test the fix below on two GPUs (i.e., the case where no log is produced)? (I cannot reproduce it on my side, so I cannot debug it.)

In the BasicSR folder, change basicsr/utils/logger.py (around Line 106 - Line 140) to:

def get_root_logger(logger_name='basicsr', log_level=logging.INFO, log_file=None):
    """Get the root logger.

    The logger will be initialized if it has not been initialized. By default a
    StreamHandler will be added. If `log_file` is specified, a FileHandler will
    also be added.

    Args:
        logger_name (str): root logger name. Default: 'basicsr'.
        log_file (str | None): The log filename. If specified, a FileHandler
            will be added to the root logger.
        log_level (int): The root logger level. Note that only the process of
            rank 0 is affected, while other processes will set the level to
            "Error" and be silent most of the time.

    Returns:
        logging.Logger: The root logger.
    """
    print('Enter get_root_logger')
    logger = logging.getLogger(logger_name)
    # if the logger has been initialized, just return it
    # if logger.hasHandlers():
    #    return logger
    if log_file is None:
        return logger

    print('logger: add handlers')
    format_str = '%(asctime)s %(levelname)s: %(message)s'
    logging.basicConfig(format=format_str, level=log_level)
    rank, _ = get_dist_info()
    if rank != 0:
        logger.setLevel('ERROR')
    elif log_file is not None:
        file_handler = logging.FileHandler(log_file, 'w')
        file_handler.setFormatter(logging.Formatter(format_str))
        file_handler.setLevel(log_level)
        logger.addHandler(file_handler)
    print('logger: last return')
    return logger

Thanks!


@SimKarras commented on GitHub (Jul 12, 2021):

@xinntao OK.


@syfbme commented on GitHub (Jul 12, 2021):

> @syfbme If it prints "add handlers" and "last return", then the issue has been solved.
>
> So you can see the screen outputs and also have a log file in the experiments folder, right?

Yes. Thanks~


@SimKarras commented on GitHub (Jul 12, 2021):

@xinntao Two-GPU terminal output:

(BasicSR) ➜  GFPGAN git:(master) ✗ python -m torch.distributed.launch --nproc_per_node 2 --master_port 8888 train.py -opt train_gfpgan_v1.yml --launcher pytorch
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
*****************************************
Path already exists. Rename it to /home/sjw/文档/SR/GFPGAN/experiments/train_GFPGANv1_512_2gpu_archived_20210712_132006
Path already exists. Rename it to /home/sjw/文档/SR/GFPGAN/tb_logger/train_GFPGANv1_512_2gpu_archived_20210712_132006
Enter get_root_logger
logger: add handlers
logger: last return
Enter get_root_logger
logger: add handlers
logger: last return
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_loggerEnter get_root_logger

Enter get_root_logger
Enter get_root_logger
Enter get_root_loggerEnter get_root_logger

Enter get_root_loggerEnter get_root_logger

Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
WARNING:basicsr:Current net - loaded net:
WARNING:basicsr:  bn1.num_batches_tracked
WARNING:basicsr:  bn4.num_batches_tracked
WARNING:basicsr:  bn5.num_batches_tracked
WARNING:basicsr:  layer1.0.bn0.num_batches_tracked
WARNING:basicsr:  layer1.0.bn1.num_batches_tracked
WARNING:basicsr:  layer1.0.bn2.num_batches_tracked
WARNING:basicsr:  layer1.1.bn0.num_batches_tracked
WARNING:basicsr:  layer1.1.bn1.num_batches_tracked
WARNING:basicsr:  layer1.1.bn2.num_batches_tracked
WARNING:basicsr:  layer2.0.bn0.num_batches_tracked
WARNING:basicsr:  layer2.0.bn1.num_batches_tracked
WARNING:basicsr:  layer2.0.bn2.num_batches_tracked
WARNING:basicsr:  layer2.0.downsample.1.num_batches_tracked
WARNING:basicsr:  layer2.1.bn0.num_batches_tracked
WARNING:basicsr:  layer2.1.bn1.num_batches_tracked
WARNING:basicsr:  layer2.1.bn2.num_batches_tracked
WARNING:basicsr:  layer3.0.bn0.num_batches_tracked
WARNING:basicsr:  layer3.0.bn1.num_batches_tracked
WARNING:basicsr:  layer3.0.bn2.num_batches_tracked
WARNING:basicsr:  layer3.0.downsample.1.num_batches_tracked
WARNING:basicsr:  layer3.1.bn0.num_batches_tracked
WARNING:basicsr:  layer3.1.bn1.num_batches_tracked
WARNING:basicsr:  layer3.1.bn2.num_batches_tracked
WARNING:basicsr:  layer4.0.bn0.num_batches_tracked
WARNING:basicsr:  layer4.0.bn1.num_batches_tracked
WARNING:basicsr:  layer4.0.bn2.num_batches_tracked
WARNING:basicsr:  layer4.0.downsample.1.num_batches_tracked
WARNING:basicsr:  layer4.1.bn0.num_batches_tracked
WARNING:basicsr:  layer4.1.bn1.num_batches_tracked
WARNING:basicsr:  layer4.1.bn2.num_batches_tracked
WARNING:basicsr:Loaded net - current net:
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
[W reducer.cpp:1050] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters, consider turning this flag off. Note that this warning may be a false positive your model has flow control causing later iterations to have unused parameters. (function operator())
/home/sjw/anaconda3/envs/BasicSR/lib/python3.8/site-packages/torch/nn/functional.py:3499: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. 
  warnings.warn(
[W reducer.cpp:1050] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters, consider turning this flag off. Note that this warning may be a false positive your model has flow control causing later iterations to have unused parameters. (function operator())
/home/sjw/anaconda3/envs/BasicSR/lib/python3.8/site-packages/torch/nn/functional.py:3499: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. 
  warnings.warn(

Log file contents:

2021-07-12 13:20:14,046 WARNING: Current net - loaded net:
2021-07-12 13:20:14,046 WARNING:   bn1.num_batches_tracked
2021-07-12 13:20:14,046 WARNING:   bn4.num_batches_tracked
2021-07-12 13:20:14,046 WARNING:   bn5.num_batches_tracked
2021-07-12 13:20:14,046 WARNING:   layer1.0.bn0.num_batches_tracked
2021-07-12 13:20:14,046 WARNING:   layer1.0.bn1.num_batches_tracked
2021-07-12 13:20:14,046 WARNING:   layer1.0.bn2.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer1.1.bn0.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer1.1.bn1.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer1.1.bn2.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer2.0.bn0.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer2.0.bn1.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer2.0.bn2.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer2.0.downsample.1.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer2.1.bn0.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer2.1.bn1.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer2.1.bn2.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer3.0.bn0.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer3.0.bn1.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer3.0.bn2.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer3.0.downsample.1.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer3.1.bn0.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer3.1.bn1.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer3.1.bn2.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer4.0.bn0.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer4.0.bn1.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer4.0.bn2.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer4.0.downsample.1.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer4.1.bn0.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer4.1.bn1.num_batches_tracked
2021-07-12 13:20:14,047 WARNING:   layer4.1.bn2.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: Loaded net - current net:

@xinntao commented on GitHub (Jul 12, 2021):

@syfbme Thanks for your feedback!


@xinntao commented on GitHub (Jul 12, 2021):

@JiaweiShiCV
It seems that this issue could be solved by the above modification!


@SimKarras commented on GitHub (Jul 12, 2021):

@xinntao ... Isn't the output still just this little bit?


@xinntao commented on GitHub (Jul 12, 2021):

@JiaweiShiCV It did not continue outputting after that...?


@SimKarras commented on GitHub (Jul 12, 2021):

> @JiaweiShiCV It did not continue outputting after that...?

No......


@xinntao commented on GitHub (Jul 12, 2021):

@JiaweiShiCV
This bug has been fixed in BasicSR: https://github.com/xinntao/BasicSR/commit/bf93f27e88940ebe60e6cbdc92e65e0ef1cf3de5
It should be OK now!
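
The commit itself is not reproduced in this thread, but the general idea of a more robust guard is to track which logger names this function has already initialized, instead of relying on hasHandlers() (which is also true when only an ancestor logger has a handler). A rough sketch of that approach, assuming get_dist_info comes from basicsr.utils.dist_util; the actual commit may differ in detail:

```python
import logging

from basicsr.utils.dist_util import get_dist_info  # assumed import path

initialized_logger = {}  # logger names that this function has already set up

def get_root_logger(logger_name='basicsr', log_level=logging.INFO, log_file=None):
    logger = logging.getLogger(logger_name)
    if logger_name in initialized_logger:
        # only skip setup when *this* function did the setup before, not merely
        # because some ancestor logger happens to have handlers
        return logger

    # attach an explicit StreamHandler instead of relying on logging.basicConfig,
    # so a pre-configured root logger cannot mask this one
    format_str = '%(asctime)s %(levelname)s: %(message)s'
    stream_handler = logging.StreamHandler()
    stream_handler.setFormatter(logging.Formatter(format_str))
    logger.addHandler(stream_handler)
    logger.propagate = False  # avoid duplicate output through the root logger

    rank, _ = get_dist_info()
    if rank != 0:
        logger.setLevel('ERROR')
    elif log_file is not None:
        logger.setLevel(log_level)
        file_handler = logging.FileHandler(log_file, 'w')
        file_handler.setFormatter(logging.Formatter(format_str))
        file_handler.setLevel(log_level)
        logger.addHandler(file_handler)

    initialized_logger[logger_name] = True
    return logger
```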


@SimKarras commented on GitHub (Jul 12, 2021):

@xinntao So reinstalling basicsr=1.3.3.5 will be fine, right?


@xinntao commented on GitHub (Jul 12, 2021):

> @xinntao So reinstalling basicsr=1.3.3.5 will be fine, right?

For now the fix is only on the master branch and not in a released version yet. I will publish a new release, 1.3.3.6, now.


@SimKarras commented on GitHub (Jul 12, 2021):

ok!

Reference: TencentARC/GFPGAN#20