Date: 2018-05-23 20:17:13
Topics: Machine learning, Artificial intelligence, Computational neuroscience, Learning, Applied mathematics, Artificial neural networks, Cybernetics, Formal sciences, Deep learning, Convolutional neural network, Multi-task learning, Training/test/validation sets

Universal Language Model Fine-tuning for Text Classification. Jeremy Howard, fast.ai, University of San Francisco. Source URL: arxiv.org. File size: 956.46 KB.
Semi-supervised Multi-task Learning of Structured Prediction Models for Web Information Extraction. Paramveer S. Dhillon, S. Sundararajan. DocID: 1unE8.

Multi-task Self-Supervised Visual Learning. Carl Doersch. arXiv:1708.07860v1 [cs.CV], 25 Aug 2017. DocID: 1tDtE.

Deep multi-task learning with low level tasks supervised at lower layers. Anders Søgaard (University of Copenhagen), Yoav Goldberg. DocID: 1tg1g.
Deep Neural Networks Employing Multi-Task Learning and Stacked Bottleneck Features for Speech Synthesis. Zhizheng Wu, Cassia Valentini-Botinhao. DocIDs: 1rHBq, 1rj8C.