GPU
Monitor GPU usage.
watch -n0.1 nvidia-smi
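If the full nvidia-smi dashboard is more than you need, the same tool can print just a few fields in a loop (a sketch using nvidia-smi's standard query interface; the field selection is only an example):
nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total --format=csv -l 1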
PyTorch
Check if PyTorch is using a GPU.
import torch
torch.cuda.current_device()    # index of the currently selected device
torch.cuda.device(0)           # context manager for device 0
torch.cuda.device_count()      # number of visible GPUs
torch.cuda.get_device_name(0)  # human-readable name of device 0
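A common companion to these calls is torch.cuda.is_available(), used to pick a device once and reuse it for the rest of the script (a small sketch; the variable name device is just a convention):
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')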
Check if model is on CUDA.
next(model.parameters()).is_cuda  # returns a boolean
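If the check returns False, the model can be moved onto the GPU with the standard .to() call (a small sketch; device is the object from the snippet above, or simply the string 'cuda'):
model = model.to(device)  # moves parameters and buffers in place and returns the module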
Clear cache on a specific PyTorch CUDA device.
with torch.cuda.device('cuda:1'):
    torch.cuda.empty_cache()
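Note that empty_cache only releases memory the caching allocator is no longer using; tensors that are still referenced keep their memory. The usual pattern is to drop the reference first (a sketch; big_tensor is a hypothetical name):
del big_tensor            # drop the last Python reference
torch.cuda.empty_cache()  # then return the cached blocks to the driver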
Check the number of parameters of a model.
def get_n_params(model):
    # multiply the dimensions of every parameter tensor and sum the results
    pp = 0
    for p in list(model.parameters()):
        nn = 1
        for s in list(p.size()):
            nn = nn * s
        pp += nn
    return pp

def count_parameters(model):
    # counts only trainable parameters (requires_grad=True)
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
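The two helpers agree whenever no parameters are frozen; count_parameters skips anything with requires_grad=False, while get_n_params counts everything. A quick sanity check on a small module (a usage sketch, not from the original source):
import torch.nn as nn

model = nn.Linear(10, 5)        # 10*5 weights + 5 biases = 55 parameters
print(get_n_params(model))      # 55
print(count_parameters(model))  # 55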