GPU
Monitor GPU usage.
watch -n0.1 nvidia-smi
PyTorch
Check if PyTorch is using a GPU.
import torch
torch.cuda.current_device()   # index of the currently selected device
torch.cuda.device(0)          # context manager that selects device 0
torch.cuda.device_count()     # number of GPUs visible to PyTorch
torch.cuda.get_device_name(0) # human-readable name of device 0
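A common companion pattern, sketched under the assumption that the code should fall back to the CPU when no GPU is present:

```python
import torch

# Select the GPU if one is available, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Tensors created with `device=` land directly on the chosen device.
x = torch.zeros(3, 3, device=device)
print(x.device)
```

Passing the resulting `device` object around (rather than hard-coding `'cuda'`) keeps the same script runnable on CPU-only machines.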
Check if model is on CUDA.
next(model.parameters()).is_cuda # returns a boolean
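For context, a minimal sketch of that check on a throwaway model (the `nn.Linear` here is a placeholder, not a model from this document):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                   # placeholder model
print(next(model.parameters()).is_cuda)   # False: parameters start on the CPU

if torch.cuda.is_available():
    model = model.to('cuda')              # moves every parameter to the default GPU
    print(next(model.parameters()).is_cuda)
```

Note the check only inspects the first parameter, so it assumes the whole model lives on one device.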
Clear the cache on a specific PyTorch CUDA device.
with torch.cuda.device('cuda:1'):
    torch.cuda.empty_cache()
Check the number of parameters of a model.
def get_n_params(model):
    pp = 0
    for p in list(model.parameters()):
        nn = 1
        for s in list(p.size()):
            nn = nn * s
        pp += nn
    return pp
def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
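A quick sanity check of the second helper on a small layer: nn.Linear(10, 5) holds a 10x5 weight matrix plus 5 biases, i.e. 55 trainable parameters.

```python
import torch.nn as nn

def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

layer = nn.Linear(10, 5)        # 50 weights + 5 biases
print(count_parameters(layer))  # 55
```

Only parameters with `requires_grad=True` are counted, so frozen layers are excluded; the `get_n_params` variant above counts every parameter regardless.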