Resources
Lessons
- Lesson 1 Notes Tim Lee (Tim’s GitHub repo)
- Lesson 2: Case Study - A world class image classifier for dogs and cats (err…, anything) Apil Tamang
- Lesson 2 Notes Tim Lee
- Lesson 3 Notes Tim Lee
- Lesson 4 Notes Tim Lee
Blog Sites by Author
Blogs Written by (or recommended by) fastai Fellows
ResNet
- Decoding the ResNet architecture Anand Saha
- Yet Another ResNet Tutorial (or not) Apil Tamang
- An Overview of ResNet and its Variants Vincent Fung
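The ResNet posts above all center on the same core idea: each block learns a residual that is added back onto its input through a skip connection. Below is a minimal sketch of such a block, assuming PyTorch; the layer sizes and class name are illustrative and not taken from any of the linked posts.

```python
# Minimal sketch of a residual block (assumes PyTorch; sizes are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicResidualBlock(nn.Module):
    """y = F(x) + x: the block learns a residual on top of the identity."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # skip connection: add the input back before the final ReLU

x = torch.randn(1, 64, 56, 56)
print(BasicResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```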
Structured Deep Learning
- Structured Deep Learning Kerem Turgutlu (Master's student at USF)
NLP
- Fine-tuned Language Models for Text Classification by Jeremy Howard and Sebastian Ruder
PyTorch
- Transfer Learning using PyTorch — Part 2 Vishnu Subramanian (April 2017)
- A practitioner’s guide to PyTorch by Radek
Learning Rate
- Improving the way we work with learning rate Vitaly Bushaev
- Visualizing Learning rate vs Batch size (Neural Nets basics using Fast.ai tools) Miguel (Nov 2017)
- Estimating an Optimal Learning Rate For a Deep Neural Network Pavel Surmenok
- Cyclical Learning Rate Technique Anand Saha (see the schedule sketch after this list)
- Transfer Learning using differential learning rates Manikanta Yadunanda
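The learning-rate posts above revolve around Leslie Smith's cyclical learning rates and the learning-rate range test that fast.ai builds on. Below is a minimal sketch of the triangular schedule from Smith's paper; the step size and rate bounds are illustrative values I chose, not recommendations from the linked articles.

```python
# Minimal sketch of Leslie Smith's triangular cyclical learning-rate schedule.
import math

def triangular_lr(iteration, step_size=2000, base_lr=1e-4, max_lr=1e-2):
    """Rate climbs from base_lr to max_lr and back over 2 * step_size iterations."""
    cycle = math.floor(1 + iteration / (2 * step_size))
    x = abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

# A single rising half-cycle is essentially the learning-rate range test:
# plot loss against the rate and pick a value just before the loss diverges.
for it in (0, 1000, 2000, 3000, 4000):
    print(it, triangular_lr(it))
```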
CNN
- Convolutional Neural Network in 5 minutes Sanyam Bhutani
- CS231n Convolutional Neural Networks for Visual Recognition
Kaggle
- FastAI Kaggle Starter Kit Tim Lee
Jupyter Notebook and More
- Do smoother areas of the error surface lead to better generalization? (An experiment inspired by the first lecture of the fast.ai MOOC) Radek
- Contributing to fast.ai Wayde Gilliam
- Getting Computers To See Better Than Humans Arjun Rajkumar
- Fun with small image data-sets Nikhil B
- Fun with small image data-sets (Part 2) Nikhil B
- Exploring Stochastic Gradient Descent with Restarts (SGDR) Mark Hoffman (see the sketch after this list)
- How Do We Train Neural Networks? Vitaly Bushaev
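The SGDR post above is about cosine annealing with warm restarts: within each cycle the learning rate decays along a cosine from a maximum down to a minimum, then jumps back to the maximum at the start of the next cycle. Here is a minimal sketch of that schedule; the fixed cycle length and the rate bounds are simplifying assumptions on my part (the original method also allows cycles to grow over time).

```python
# Minimal sketch of cosine annealing with warm restarts (SGDR-style schedule).
import math

def sgdr_lr(epoch, cycle_len=10, lr_max=1e-2, lr_min=1e-5):
    """Within each cycle the rate decays from lr_max to lr_min along a cosine,
    then restarts at lr_max at the beginning of the next cycle."""
    t_cur = epoch % cycle_len
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t_cur / cycle_len))

for e in range(0, 21, 5):
    print(e, round(sgdr_lr(e), 6))  # restarts at epochs 10 and 20
```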
Reference Blogs
- Understanding LSTMs Christopher Olah
- Recurrent Neural Network Tutorial, Part 4 – Implementing a GRU/LSTM RNN with Python and Theano Denny Britz
Research Publications
- A systematic study of the class imbalance problem in convolutional neural networks
- What's your ML Test Score? A rubric for ML production systems (NIPS 2016)
- ADAM: A Method for Stochastic Optimization (ICLR 2015)
- A disciplined approach to neural network hyper-parameters: Part 1 – learning rate, batch size, momentum, and weight decay Leslie Smith, March 2018
- Cyclical Learning Rates for Training Neural Networks (WACV 2017) Leslie Smith
- Fixing Weight Decay Regularization in Adam Ilya Loshchilov, Frank Hutter (Submitted on 14 Nov 2017)
- Learning Distributed Representations of Concepts Geoffrey Hinton, 1986
- Using the Output Embedding to Improve Language Models Ofir Press, Lior Wolf
Key Research Papers
- A disciplined approach to neural network hyper-parameters: Part 1 – learning rate, batch size, momentum, and weight decay, Leslie N. Smith, 2018
- Deep Residual Learning for Image Recognition Kaiming He, ILSVRC 2015 classification task winner
- Visualizing and Understanding Convolutional Networks Zeiler & Fergus, 2013
Videos
- The wonderful and terrifying implications of computers that can learn (TED talk by Jeremy Howard, 2014)
- A Visual and Intuitive Understanding of Deep Learning Otavio Good of Google, AI Conf SF Sep 2017
- Ian Goodfellow - Numerical Computation for Deep Learning - AI With The Best Oct 14-15, 2017
- Ali Rahimi's talk at NIPS (NIPS 2017 Test-of-Time Award presentation)
Source: CSDN
Author: DrogoZhang
Link: https://blog.csdn.net/weixin_40400177/article/details/103606457