#paper #implementation
XLNet: Generalized Autoregressive Pretraining for Language Understanding
#XLNet outperforms #BERT on 20 tasks, often by a large margin, and achieves state-of-the-art results on 18 tasks including question answering, natural language inference, sentiment analysis, and document ranking. Code and comparisons here:
Source code (TensorFlow)
https://github.com/zihangdai/xlnet
Paper
https://arxiv.org/abs/1906.08237v1
#NLP
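For quick experimentation, a minimal sketch of loading a pretrained XLNet checkpoint for feature extraction is shown below. It uses the Hugging Face transformers port rather than the linked official TensorFlow repo; the "xlnet-base-cased" checkpoint name is an assumption here.

```python
# Minimal sketch: extract contextual features with a pretrained XLNet.
# NOTE: uses the Hugging Face `transformers` port, not the official
# TensorFlow repo linked above; "xlnet-base-cased" is an assumed
# checkpoint name on the Hugging Face hub.
from transformers import XLNetTokenizer, XLNetModel

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetModel.from_pretrained("xlnet-base-cased")

# Tokenize a sample sentence and run it through the encoder.
inputs = tokenizer("XLNet is a generalized autoregressive pretraining method.",
                   return_tensors="pt")
outputs = model(**inputs)

# last_hidden_state has shape (batch_size, sequence_length, hidden_size).
print(outputs.last_hidden_state.shape)
```

For fine-tuning on the downstream tasks mentioned above (question answering, NLI, sentiment analysis), the official repo provides TensorFlow scripts and pretrained checkpoints.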