- Symbolic Chain-of-Thought Distillation: Small Models Can Also “Think” Step-by-Step (summary)
- (T5) Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (summary)
- Specializing Multi-domain NMT via Penalizing Low Mutual Information (summary)
- Do Language Models Understand Measurements? (summary)
- Self-Attention with Relative Position Representations (summary)
- Effective Approaches to Attention-based Neural Machine Translation (summary)
- Word2vec: Distributed Representations of Words and Phrases and their Compositionality (summary)
- FastText: Bag of Tricks for Efficient Text Classification (summary)
- New Intent Discovery with Pre-training and Contrastive Learning (summary)
- Between words and characters: A Brief History of Open-Vocabulary Modeling and Tokenization in NLP (summary)
- Imputing Out-of-Vocabulary Embeddings with LOVE Makes Language Models Robust with Little Cost (summary)
- An Empirical Study of Tokenization Strategies for Various Korean NLP Tasks (summary)
- FastText: Enriching Word Vectors with Subword Information (summary)
- UXLA: A Robust Unsupervised Data Augmentation Framework for Zero-Resource Cross-Lingual NLP (summary)
- [04] GAN: Generative Adversarial Network (summary)
- [02] GPT-1: Improving Language Understanding by Generative Pre-Training (summary)
- [01] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (summary)
- The Illustrated Transformer: Attention is All You Need (summary)