Using torchtext's BucketIterator with Lightning

I'm using a BucketIterator instance from torchtext (v0.8.0) as my train loader. My model is a base Transformer used for a translation task.

import pytorch_lightning as pl
from torchtext.data import BucketIterator

train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
    (train_data, valid_data, test_data),
    batch_size=128)

model = Seq2Seq(enc, dec)
trainer = pl.Trainer(model, train_iterator)
trainer.fit()

One solution I can think of is to subclass BucketIterator and implement the on_init_start method (sketched below). Are there any alternatives to this? Thank you.
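
For reference, the subclass I have in mind would look roughly like this (untested; it just stubs out the two hooks visible in the traceback, on_init_start and on_init_end, so Lightning's callback machinery stops raising):

from torchtext.data import BucketIterator

class CallbackSafeBucketIterator(BucketIterator):
    def on_init_start(self, trainer):
        # no-op: Lightning calls this on anything it treats as a callback
        pass

    def on_init_end(self, trainer):
        # no-op: matching hook, see callback_hook.py in the trace below
        pass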

My Error:
AttributeError: 'BucketIterator' object has no attribute 'on_init_start'

Full Trace:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-...> in <module>()
----> 1 trainer = pl.Trainer(model, train_iterator)
2 trainer.fit()

/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/connectors/env_vars_connector.py in overwrite_by_env_vars(self, *args, **kwargs)
     39 
     40         # all args were already moved to kwargs
---> 41         return fn(self, **kwargs)
     42 
     43     return overwrite_by_env_vars

/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py in __init__(self, logger, checkpoint_callback, callbacks, default_root_dir, gradient_clip_val, process_position, num_nodes, num_processes, gpus, auto_select_gpus, tpu_cores, log_gpu_memory, progress_bar_refresh_rate, overfit_batches, track_grad_norm, check_val_every_n_epoch, fast_dev_run, accumulate_grad_batches, max_epochs, min_epochs, max_steps, min_steps, limit_train_batches, limit_val_batches, limit_test_batches, val_check_interval, flush_logs_every_n_steps, log_every_n_steps, accelerator, sync_batchnorm, precision, weights_summary, weights_save_path, num_sanity_val_steps, truncated_bptt_steps, resume_from_checkpoint, profiler, benchmark, deterministic, reload_dataloaders_every_epoch, auto_lr_find, replace_sampler_ddp, terminate_on_nan, auto_scale_batch_size, prepare_data_per_node, plugins, amp_backend, amp_level, distributed_backend, automatic_optimization)
    310 
    311         # hook
--> 312         self.on_init_start()
    313 
    314         # init optimizer + lr scheduler related flags

/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/callback_hook.py in on_init_start(self)
     40         """Called when the trainer initialization begins, model has not yet been set."""
     41         for callback in self.callbacks:
---> 42             callback.on_init_start(self)
     43 
     44     def on_init_end(self):

AttributeError: 'BucketIterator' object has no attribute 'on_init_start'

When you initialize the Trainer you only pass training configuration; the model and the dataloaders go to .fit()/.test():

trainer = pl.Trainer()
trainer.fit(model, train_iterator)
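
For completeness, a minimal sketch of the corrected setup. This assumes your Seq2Seq is a LightningModule and that your Fields are named src and trg — both are assumptions, so adapt the names and the forward pass to your actual model:

import torch
import torch.nn.functional as F
import pytorch_lightning as pl

class Seq2Seq(pl.LightningModule):
    def __init__(self, enc, dec):
        super().__init__()
        self.enc = enc
        self.dec = dec

    def training_step(self, batch, batch_idx):
        # BucketIterator yields torchtext Batch objects whose attributes
        # match your Field names (src/trg here are assumptions)
        src, trg = batch.src, batch.trg
        output = self.dec(self.enc(src), trg)  # hypothetical forward pass
        # flatten to (batch * seq_len, vocab) vs (batch * seq_len,)
        loss = F.cross_entropy(output.view(-1, output.size(-1)), trg.view(-1))
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters())

model = Seq2Seq(enc, dec)
trainer = pl.Trainer(max_epochs=10)
trainer.fit(model, train_iterator)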