I am using PyTorch Lightning with Optuna for hyperparameter search. The
trainer is instantiated as:
trainer = pl.Trainer(
    weights_summary=None,
    gpus=1,
    auto_scale_batch_size="power",
    deterministic=True,
    max_epochs=50,
    logger=False,
    progress_bar_refresh_rate=0,
    callbacks=[checkpoint_CB, earlystopping_CB],
)
For every trial, I get the following messages, which I would like to hide or filter:
GPU available: True, used: True
TPU available: None, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES:
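These lines seem to come from Lightning's internal logger rather than a print to stdout, so the kind of suppression I have in mind is something like the sketch below (assuming the messages are emitted through Python's logging module under the "pytorch_lightning" logger name):

```python
import logging

# Raise the threshold of the pytorch_lightning logger so that INFO-level
# device/rank messages are filtered out; WARNING and above still show.
# (Assumption: Lightning routes these messages through this logger.)
logging.getLogger("pytorch_lightning").setLevel(logging.WARNING)
```

I am not sure whether this is the recommended approach, or whether the Trainer exposes a dedicated flag for silencing these messages.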