In [1]:
# Install PyTorch and torchvision
!pip3 install torch torchvision
Requirement already satisfied: torch in /usr/local/lib/python3.6/dist-packages (1.1.0)
Requirement already satisfied: torchvision in /usr/local/lib/python3.6/dist-packages (0.3.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch) (1.16.4)
Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision) (4.3.0)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from torchvision) (1.12.0)
Requirement already satisfied: olefile in /usr/local/lib/python3.6/dist-packages (from pillow>=4.1.1->torchvision) (0.46)

Tensors, Gradients, and Devices

  • Tensor Creation
  • Gradient Calculation
  • Inference Mode
  • Device Change
In [0]:
# First, import NumPy and PyTorch.
import numpy as np
import torch

Attempting a gradient calculation while gradient tracking is off (Tensor Creation without gradient calculation requirement)

In [3]:
# https://pytorch.org/docs/stable/tensors.html?highlight=requires_grad#torch.Tensor.requires_grad
# You can check whether a tensor tracks gradients via requires_grad.

# Tensor x is initialized with [1,2,3], tensor y with [2,3,4]
x = torch.tensor([1.,2.,3.])
y = torch.tensor([2.,3.,4.])

# Print x and whether gradient tracking is enabled for x
print(x, x.requires_grad)

# Since gradient tracking is off, calling backward raises an error.
z = x + y
z.sum().backward()
tensor([1., 2., 3.]) False
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-3-5b607f7ec4ff> in <module>()
      7 # Since gradient tracking is off, calling backward raises an error.
      8 z = x + y
----> 9 z.sum().backward()

/usr/local/lib/python3.6/dist-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
    105                 products. Defaults to ``False``.
    106         """
--> 107         torch.autograd.backward(self, gradient, retain_graph, create_graph)
    108 
    109     def register_hook(self, hook):

/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
     91     Variable._execution_engine.run_backward(
     92         tensors, grad_tensors, retain_graph, create_graph,
---> 93         allow_unreachable=True)  # allow_unreachable flag
     94 
     95 

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
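
As a side note, gradient tracking on an existing tensor can also be switched on in place with requires_grad_(); a minimal sketch:

In [0]:
import torch

# A freshly created tensor does not track gradients by default.
x = torch.tensor([1., 2., 3.])
print(x.requires_grad)     # False

# requires_grad_() flips the flag in place on a leaf tensor.
x.requires_grad_(True)
print(x.requires_grad)     # True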

Turning gradient tracking on (Tensor Creation with gradient calculation requirement)

In [4]:
# This time we set requires_grad to True to turn on gradient tracking and try the same operation.

x = torch.tensor([1.,2.,3.],requires_grad=True)
y = torch.tensor([2.,3.,4.],requires_grad=True)
print(x, x.requires_grad)

# You can confirm that it works this time.
z = x + y
z.sum().backward()
tensor([1., 2., 3.], requires_grad=True) True
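
To see what backward() actually produced, inspect the .grad attribute of the leaf tensors; since z = x + y, the gradient of z.sum() with respect to every element of x and y is 1. A minimal sketch:

In [0]:
import torch

x = torch.tensor([1., 2., 3.], requires_grad=True)
y = torch.tensor([2., 3., 4.], requires_grad=True)

z = x + y
z.sum().backward()

# d(sum(x + y)) / dx_i = 1 for every element, and likewise for y.
print(x.grad)   # tensor([1., 1., 1.])
print(y.grad)   # tensor([1., 1., 1.])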

Inference Mode

  • Used when you only want to see a model's outputs after training is complete.
  • torch.no_grad()
In [5]:
# https://pytorch.org/docs/stable/autograd.html?highlight=no_grad#torch.autograd.no_grad
# Even if gradient tracking is enabled, you can turn it off with torch.no_grad().
# Using Python's with statement, gradient tracking is disabled only inside that block, letting you run the model in inference mode.
# Additionally, calling model.eval() makes this more reliable; it applies to operations (covered later) that behave differently in training mode and inference mode (e.g. batch normalization, dropout).

print(x.requires_grad,y.requires_grad)

with torch.no_grad():
    z = x + y
    print(z.requires_grad)
True True
False
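
A minimal sketch of the inference pattern mentioned in the comments above, combining model.eval() with torch.no_grad(); the tiny Dropout model here is only an illustrative assumption, not a model defined in this notebook:

In [0]:
import torch
import torch.nn as nn

# A toy model for illustration; Dropout behaves differently in
# training mode and in evaluation mode.
model = nn.Sequential(nn.Linear(3, 3), nn.Dropout(p=0.5))

model.eval()                  # switch Dropout/BatchNorm-style layers to inference behavior
with torch.no_grad():         # disable gradient tracking inside this block
    out = model(torch.tensor([[1., 2., 3.]]))
    print(out.requires_grad)  # False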

Device Allocation of a Tensor

  • torch.cuda.is_available()
  • torch.device()
  • torch.Tensor.to()
In [6]:
# Using torch.device, you can specify whether a tensor goes on the CPU or on a particular GPU.

cpu = torch.device('cpu')
gpu = torch.device('cuda')

# You can also specify the device when creating a tensor.
x = torch.tensor([1.,2.,3.],dtype=torch.float64, device=cpu, requires_grad=True)
print(x.device)

# Using the to method, you can move a tensor created on the CPU onto the GPU.
if torch.cuda.is_available():
    x = x.to(gpu)
    print(x.device)
cpu
cuda:0
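
Building on the calls above, a common device-agnostic pattern is to pick the device once and move tensors with .to(); a minimal sketch:

In [0]:
import torch

# Use the GPU when available, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

x = torch.tensor([1., 2., 3.], device=device)
print(x.device)

# .to() also moves tensors back to the CPU; .cpu() is shorthand for .to('cpu').
x = x.cpu()
print(x.device)   # cpu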
In [0]: