Medical AI
No. | Title | Recommendations
Professor-recommended topics (continuously updated)
Final project reference materials (news articles, technology outlooks, journal papers, master's/doctoral theses, open source, etc.)
Reference sites
213 | The idea behind Actor-Critics and how A2C and A3C improve them | 1
212 | How Positional Embeddings work in Self-Attention (code in Pytorch) | 6
211 | Why multi-head self attention works: math, intuitions and 10+1 hidden insights | 3
210 | Introduction to 3D medical imaging for machine learning: preprocessing and augmentations | 2
209 | Explainable AI (XAI): A survey of recent methods, applications and frameworks | 5
208 | In-layer normalization techniques for training very deep neural networks | 2
207 | Best Graph Neural Network architectures: GCN, GAT, MPNN and more | 4
206 | How Graph Neural Networks (GNN) work: introduction to graph convolutions from scratch | 8
205 | GANs in computer vision - Improved training with Wasserstein distance, game theory control ... | 1
204 | GANs in computer vision - Introduction to generative learning | 2
203 | How diffusion models work: the math from scratch | 0
202 | Transformers in computer vision: ViT architectures, tips, tricks and improvements | 5
201 | How the Vision Transformer (ViT) works in 10 minutes: an image is worth 16x16 words | 2
200 | How Transformers work in deep learning and NLP: an intuitive introduction | 7
199 | How Attention works in Deep Learning: understanding the attention mechanism in sequence mod... | 4
198 | The theory behind Latent Variable Models: formulating a Variational Autoencoder | 4
197 | How to Generate Images using Autoencoders | 0
196 | Recurrent neural networks: building a custom LSTM cell | 2
195 | Best deep CNN architectures and their principles: from AlexNet to EfficientNet | 5
194 | A journey into Optimization algorithms for Deep Neural Networks | 6
