How to share replay buffer between workers

Hello!

I’m trying to share a replay buffer between multiple workers so that all new samples are added to the same place, from which a learner can then sample. Basically, I’m implementing data parallelisation.

In my old implementation from scratch (without any high-level modules like Lightning) I managed to do this by creating a tensor for each variable (state, action, reward, …) and moving it into shared memory with share_memory_(). I then passed the replay buffer object to each manually spawned worker.
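
Roughly, the old approach looked like this (just a sketch to show what I mean; the class name SharedReplayBuffer, the sizes, and the dummy collect worker are made up for illustration):

```python
import torch
import torch.multiprocessing as mp


class SharedReplayBuffer:
    """Sketch of the shared-memory buffer; names and sizes are placeholders."""

    def __init__(self, capacity, obs_dim):
        # One pre-allocated tensor per variable, moved into shared memory
        # so every process writes to the same underlying storage.
        self.states = torch.zeros(capacity, obs_dim).share_memory_()
        self.actions = torch.zeros(capacity, dtype=torch.long).share_memory_()
        self.rewards = torch.zeros(capacity).share_memory_()
        # Shared write position / current size counters.
        self.pos = torch.zeros(1, dtype=torch.long).share_memory_()
        self.size = torch.zeros(1, dtype=torch.long).share_memory_()
        self.capacity = capacity

    def add(self, state, action, reward):
        # In a real implementation you'd guard this with a multiprocessing.Lock.
        i = int(self.pos.item()) % self.capacity
        self.states[i] = state
        self.actions[i] = int(action)
        self.rewards[i] = float(reward)
        self.pos += 1
        self.size[0] = min(int(self.size.item()) + 1, self.capacity)

    def sample(self, batch_size):
        idx = torch.randint(0, int(self.size.item()), (batch_size,))
        return self.states[idx], self.actions[idx], self.rewards[idx]


def collect(buffer, n_steps, obs_dim):
    # Dummy worker: pushes random transitions into the shared buffer.
    for _ in range(n_steps):
        buffer.add(torch.randn(obs_dim), torch.randint(0, 4, (1,)), torch.rand(1))


if __name__ == "__main__":
    buf = SharedReplayBuffer(capacity=10_000, obs_dim=8)
    workers = [mp.Process(target=collect, args=(buf, 1_000, 8)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    states, actions, rewards = buf.sample(32)  # the learner samples from the same buffer
```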

This works, but I would like to do the same in Lightning using the PyTorch DataLoader and an IterableDataset.
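
What I’m picturing is something along these lines (again just a sketch, reusing the SharedReplayBuffer from above; ReplayDataset and the numbers are placeholders, not working Lightning code):

```python
from torch.utils.data import DataLoader, IterableDataset


class ReplayDataset(IterableDataset):
    """Streams samples drawn from the shared replay buffer."""

    def __init__(self, buffer, samples_per_epoch, batch_size):
        self.buffer = buffer
        self.samples_per_epoch = samples_per_epoch
        self.batch_size = batch_size

    def __iter__(self):
        # Each pass draws fresh batches from whatever the collector
        # processes have written into the shared buffer so far.
        for _ in range(self.samples_per_epoch // self.batch_size):
            states, actions, rewards = self.buffer.sample(self.batch_size)
            for j in range(self.batch_size):
                yield states[j], actions[j], rewards[j]


# In the LightningModule this would presumably live in train_dataloader(), e.g.:
# def train_dataloader(self):
#     dataset = ReplayDataset(self.buffer, samples_per_epoch=3200, batch_size=32)
#     return DataLoader(dataset, batch_size=32)
```

What I’m unsure about is how to wire the collector processes and the shared buffer into this cleanly.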

Any ideas, please?

I think I found a solution. For anyone else interested, please see the link.