Q&A from the PyTorch forums

Here are some questions and answers I came across on the PyTorch forums. Forum address: https://discuss.pytorch.org/

  1. What’s the difference between nn.ReLU() and nn.ReLU(inplace=True)?

    inplace=True means that it will modify the input directly, without allocating any additional output. It can sometimes slightly decrease the memory usage, but may not always be a valid operation (because the original input is destroyed). However, if you don’t see an error, it means that your use case is valid.
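
    A minimal sketch of both points, using small arbitrary tensors (the sigmoid step is just one example of an op whose backward needs its output):

      import torch
      import torch.nn as nn

      x = torch.randn(4)
      out = nn.ReLU()(x)          # allocates a new output tensor; x is unchanged
      nn.ReLU(inplace=True)(x)    # overwrites x's storage instead, saving the allocation

      # The invalid case: sigmoid saves its output for backward, and the
      # in-place ReLU destroys it, so backward() raises a RuntimeError.
      y = torch.randn(4, requires_grad=True)
      z = torch.sigmoid(y)
      nn.ReLU(inplace=True)(z)
      z.sum().backward()          # RuntimeError: a variable needed for gradient
                                  # computation has been modified by an inplace operation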

  2. Clone and detach in v0.4.0
    How to use clone() and detach()

    tensor.detach() creates a tensor that shares storage with tensor but does not require grad. tensor.clone() creates a copy of tensor that imitates the original tensor’s requires_grad field.
    You should use detach() when attempting to remove a tensor from a computation graph, and clone() as a way to copy the tensor while still keeping the copy as a part of the computation graph it came from.
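
    A small sketch of both behaviors (the tensor values are arbitrary):

      import torch

      t = torch.ones(3, requires_grad=True)

      d = t.detach()            # shares storage with t; d.requires_grad is False
      d[0] = 5.0                # writes through to t, since the storage is shared
      print(t)                  # tensor([5., 1., 1.], requires_grad=True)

      c = t.clone()             # new storage, but still part of t's graph
      print(c.requires_grad)    # True: clone imitates t's requires_grad field
      c.sum().backward()        # gradients flow through the clone back to t
      print(t.grad)             # tensor([1., 1., 1.])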

  3. When to set pin_memory to true? / Pin_memory() on variables?

    If you load your samples in the Dataset on the CPU and would like to push them to the GPU during training, you can speed up the host-to-device transfer by enabling pin_memory. This lets your DataLoader allocate the samples in page-locked memory, which speeds up the transfer. You can find more information on the NVIDIA blog.
    This only applies if you are loading your data on the CPU. It won’t work if your dataset is really small and you push it to the GPU already in the Dataset, but that’s pretty obvious.
    Pinning memory is only useful for CPU tensors that have to be moved to the GPU, so pinning all of a model’s variables/tensors doesn’t make sense at all.
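
    A sketch of the usual pattern; the dataset shape, batch size, and loop body here are made up for illustration:

      import torch
      from torch.utils.data import DataLoader, TensorDataset

      # A CPU-resident dataset (pinning only applies to CPU tensors).
      dataset = TensorDataset(torch.randn(1000, 3, 32, 32),
                              torch.randint(0, 10, (1000,)))
      loader = DataLoader(dataset, batch_size=64, pin_memory=True)

      device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
      for data, target in loader:
          # non_blocking=True lets the copy overlap with computation,
          # which is only possible from pinned (page-locked) host memory.
          data = data.to(device, non_blocking=True)
          target = target.to(device, non_blocking=True)
          # ... training step ...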