A list of the papers presented in the NLP bootcamp, one of the flipped school courses at 모두의연구소, along with the presentation materials.
- Participants : 윤주성, 박승일, 이윤주, 김정미, 양홍민, 백영상, 김성운, 최병주, 양승무, 허훈, 박성찬, 임한동, 임정섭, 김동완, 류지은, 조원호, 한지윤, 염혜원, 강성현, 이기창, 이영수
- Facilitator : 김보섭
- Orientation
- Attention Is All You Need
- Presenter : 이윤주
- Paper : https://arxiv.org/abs/1706.03762
- Material : Attention Is All You Need_이윤주.pdf
- Universal Sentence Encoder
- Presenter : 염혜원
- Paper : https://arxiv.org/abs/1803.11175
- Material : Universal Sentence Encoder_염혜원.pdf
- Self-Attention with Relative Position Representations
- Presenter : 김성운
- Paper : https://arxiv.org/abs/1803.02155
- Material : Self-Attention with Relative Position Representation_김성운.pdf
- Character-Level Language Modeling with Deeper Self-Attention
- Presenter : 양홍민
- Paper : https://arxiv.org/abs/1808.04444
- Material : Character-Level Language Modeling with Deeper Self-Attention_양홍민.pdf
- Generating Wikipedia by Summarizing Long Sequences
- Presenter : 임한동
- Paper : https://arxiv.org/abs/1801.10198
- Material : Generating Wikipedia by Summarizing Long Sequences_임한동.pdf
- Improving Language Understanding by Generative Pre-Training
- Language Models are Unsupervised Multitask Learners (optional)
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- Presenter : 박승일
- Paper : https://arxiv.org/abs/1810.04805
- Material : BERT_ Pre-training of Deep Bidirectional Transformers for Language Understanding_박승일.pdf
- Multi-Task Deep Neural Networks for Natural Language Understanding
- Presenter : 백영상
- Paper : https://arxiv.org/abs/1901.11504
- Material : Multi-Task Deep Neural Networks for Natural Language Understanding_백영상.pdf
- SpanBERT: Improving Pre-training by Representing and Predicting Spans
- Presenter : 김정미
- Paper : https://arxiv.org/abs/1907.10529
- Material : SpanBERT: Improving Pre-training by Representing and Predicting Spans_김정미.pdf
- ERNIE: Enhanced Representation through Knowledge Integration
- Presenter : 양승무
- Paper : https://arxiv.org/abs/1904.09223
- Material : ERNIE: Enhanced Representation through Knowledge Integration_양승무.pdf
- Pre-Training with Whole Word Masking for Chinese BERT (optional)
- Paper : https://arxiv.org/abs/1906.08101
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
- Presenter : 강성현
- Paper : https://arxiv.org/abs/1907.11692
- Material : RoBERTa: A Robustly Optimized BERT Pretraining Approach_강성현.pdf
- ERNIE 2.0: A Continual Pre-training Framework for Language Understanding
- Presenter : 이기창
- Paper : https://arxiv.org/abs/1907.12412
- Material : ERNIE 2.0: A Continual Pre-training Framework for Language Understanding_이기창.pdf
- Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
- Presenter : 류지은
- Paper : https://arxiv.org/abs/1901.02860
- Material : Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context_류지은.pdf
- XLNet: Generalized Autoregressive Pretraining for Language Understanding
- Presenter : 박성찬
- Paper : https://arxiv.org/abs/1906.08237
- Material : XLNet: Generalized Autoregressive Pretraining for Language Understanding_박성찬.pdf
- Cross-lingual Language Model Pretraining
- Presenter : 한지윤
- Paper : https://arxiv.org/abs/1901.07291
- Material : Cross-lingual Language Model Pretraining_한지윤.pdf
- MASS: Masked Sequence to Sequence Pre-training for Language Generation
- Presenter : 최병주
- Paper : https://arxiv.org/abs/1905.02450
- Material : MASS: Masked Sequence to Sequence Pre-training for Language Generation_최병주.pdf
- Unified Language Model Pre-training for Natural Language Understanding and Generation
- Presenter : 윤주성
- Paper : https://arxiv.org/abs/1905.03197
- Material : Unified Language Model Pre-training for Natural Language Understanding and Generation_윤주성.pdf
- CTRL: A Conditional Transformer Language Model for Controllable Generation
- Presenter : 조원호
- Paper : https://arxiv.org/abs/1909.05858
- Material : CTRL: A Conditional Transformer Language Model for Controllable Generation_조원호.pdf
- TinyBERT: Distilling BERT for Natural Language Understanding
- Presenter : 허훈
- Paper : https://arxiv.org/abs/1909.10351
- Material : TinyBERT_Distilling BERT for Natural Language Understanding_허훈.pdf
- ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
- Presenter : 김동완
- Paper : https://arxiv.org/abs/1909.11942
- Material : ALBERT_A Lite BERT for Self-supervised Learning of Language Representations_김동완.pdf
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter (optional)
- Paper : https://arxiv.org/abs/1910.01108