Medical AI

Title | Date
1st Medical AI (MAI) Competition (Hansung University team of Jang Hyeon-gyeom and Shin Hyeon-su won the Grand Prize) | 2024-11-15
A game changer shaking up AI drug discovery has arrived | 2025-02-25
★★★ Is cancer finally about to be conquered? Google DeepMind CEO: "AI will treat every disease" | 2024-10-03
★★ Google DeepMind unveils 'AlphaFold 3': "Expanding beyond generating proteins to predicting biomolecules" | 2024-10-05
Artificial intelligence: more than a technical tool, a core force fundamentally transforming the 'paradigm of medical education' | 2025-01-01
Recent Posts

Title | Date
AI opens a new breakthrough against antibiotic-resistant bacteria!... MIT researchers develop a new class of antibiotics with generative AI | 25-08-26
[Bio-Health Digital Innovation Forum] For global competitiveness in medical AI, rapid market entry and a research ecosystem are urgently needed | 25-08-26
Drug candidates in no time... Professor Woo Youn Kim's team at KAIST develops an AI that designs new drugs optimized for target proteins | 25-08-15
Minister Bae Kyung-hoon: "We will foster advanced biotechnology and accelerate innovation by combining it with artificial intelligence" | 25-07-27
Medical AI grows together with hospitals... 'co-design' from development through clinical use | 25-07-25
"It will contribute innovatively to cancer diagnosis, anticancer drug development, and more!" | 25-07-16
Trillion-won 'big deals' pour into AI drug development... the Korean government picks up the pace as well | 25-07-05
Professor-recommended topics (continuously updated)
Materials for the final project (news articles, technology outlooks, journal papers, master's/doctoral theses, open-source resources, etc.)

Reference Sites
No. | Title | Views
283 | Data preprocessing for deep learning: How to build an efficient big data pipeline | 10
282 | Self-supervised learning tutorial: Implementing SimCLR with pytorch lightning | 20
281 | Unravel Policy Gradients and REINFORCE | 5
280 | The idea behind Actor-Critics and how A2C and A3C improve them | 22
279 | How Positional Embeddings work in Self-Attention (code in Pytorch) | 17
278 | Why multi-head self attention works: math, intuitions and 10+1 hidden insights | 12
277 | Introduction to 3D medical imaging for machine learning: preprocessing and augmentations | 29
276 | Explainable AI (XAI): A survey of recents methods, applications and frameworks | 39
275 | In-layer normalization techniques for training very deep neural networks | 1
274 | Best Graph Neural Network architectures: GCN, GAT, MPNN and more | 12
273 | How Graph Neural Networks (GNN) work: introduction to graph convolutions from scratch | 5
272 | GANs in computer vision - Improved training with Wasserstein distance, game theory control and progre... | 12
271 | GANs in computer vision - Introduction to generative learning | 31
270 | How diffusion models work: the math from scratch | 9
269 | Transformers in computer vision: ViT architectures, tips, tricks and improvements | 9
268 | How the Vision Transformer (ViT) works in 10 minutes: an image is worth 16x16 words | 63
267 | How Transformers work in deep learning and NLP: an intuitive introduction | 20
266 | How Attention works in Deep Learning: understanding the attention mechanism in sequence models | 10
265 | The theory behind Latent Variable Models: formulating a Variational Autoencoder | 30
264 | How to Generate Images using Autoencoders | 14