Medical AI
No.  Title  Replies
Professor-recommended topics (continuously updated)
Final project reference materials (news articles, technology outlooks, journal papers, master's/doctoral theses, open source, etc.)
Reference sites
213  The idea behind Actor-Critics and how A2C and A3C improve them  1
212  How Positional Embeddings work in Self-Attention (code in Pytorch)  6
211  Why multi-head self attention works: math, intuitions and 10+1 hidden insights  3
210  Introduction to 3D medical imaging for machine learning: preprocessing and augmentations  2
209  Explainable AI (XAI): A survey of recent methods, applications and frameworks  5
208  In-layer normalization techniques for training very deep neural networks  2
207  Best Graph Neural Network architectures: GCN, GAT, MPNN and more  4
206  How Graph Neural Networks (GNN) work: introduction to graph convolutions from scratch  8
205  GANs in computer vision - Improved training with Wasserstein distance, game theory control ...  1
204  GANs in computer vision - Introduction to generative learning  2
203  How diffusion models work: the math from scratch  0
202  Transformers in computer vision: ViT architectures, tips, tricks and improvements  5
201  How the Vision Transformer (ViT) works in 10 minutes: an image is worth 16x16 words  2
200  How Transformers work in deep learning and NLP: an intuitive introduction  7
199  How Attention works in Deep Learning: understanding the attention mechanism in sequence mod...  4
198  The theory behind Latent Variable Models: formulating a Variational Autoencoder  4
197  How to Generate Images using Autoencoders  0
196  Recurrent neural networks: building a custom LSTM cell  2
195  Best deep CNN architectures and their principles: from AlexNet to EfficientNet  5
194  A journey into Optimization algorithms for Deep Neural Networks  6
+ Search tips: 1) string search 2) OR (|) conditional search 3) AND (&) conditional search