Lecture 1: Why do we need machine learning
Lecture 2: What are neural networks
Lecture 3: Some simple models of neurons
Lecture 4: A simple example of learning
Lecture 5: Three types of learning
Lecture 6: An overview of the main types of network architecture
Lecture 7: Perceptrons
Lecture 8: A geometrical view of perceptrons
Lecture 9: Why the learning works
Lecture 10: What perceptrons cannot do
Lecture 11: Learning the weights of a linear neuron
Lecture 12: The error surface for a linear neuron
Lecture 13: Learning the weights of a logistic output neuron
Lecture 14: The backpropagation algorithm
Lecture 15: How to use the derivatives computed by the backpropagation algorithm
Lecture 16: Learning to predict the next word
Lecture 17: A brief diversion into cognitive science
Lecture 18: Another diversion: the softmax output function
Lecture 19: Neuro-probabilistic language models
Lecture 20: Ways to deal with a large number of possible outputs
Lecture 21: Why object recognition is difficult
Lecture 22: Ways to achieve viewpoint invariance
Lecture 23: Convolutional neural networks for hand-written digit recognition
Lecture 24: Convolutional neural networks for object recognition
Lecture 25: Overview of mini-batch gradient descent
Lecture 26: A bag of tricks for mini-batch gradient descent
Lecture 27: The momentum method
Lecture 28: A separate, adaptive learning rate for each connection
Lecture 29: rmsprop: divide the gradient by a running average of its recent magnitude
Lecture 30: Modeling sequences: a brief overview
Lecture 31: Training RNNs with backpropagation
Lecture 32: A toy example of training an RNN
Lecture 33: Why it is difficult to train an RNN
Lecture 34: Long short-term memory
Lecture 35: Modeling character strings with multiplicative connections
Lecture 36: Learning to predict the next character using HF (Hessian-free optimization)
Lecture 37: Echo state networks
Lecture 38: Overview of ways to improve generalization
Lecture 39: Limiting the size of the weights
Lecture 40: Using noise as a regularizer
Lecture 41: Introduction to the Bayesian approach
Lecture 42: The Bayesian interpretation of weight decay
Lecture 43: MacKay's quick and dirty method of fixing weight costs
Lecture 44: Why it helps to combine models
Lecture 45: Mixtures of experts
Lecture 46: The idea of full Bayesian learning
Lecture 47: Making full Bayesian learning practical
Lecture 48: Dropout: an efficient way to combine neural nets
Lecture 49: Hopfield nets
Lecture 50: Dealing with spurious minima in Hopfield nets
Lecture 51: Hopfield nets with hidden units
Lecture 52: Using stochastic units to improve search
Lecture 53: How a Boltzmann machine models data
Lecture 54: The Boltzmann machine learning algorithm
Lecture 55: More efficient ways to get the statistics
Lecture 56: Restricted Boltzmann machines
Lecture 57: An example of contrastive divergence learning
Lecture 58: RBMs for collaborative filtering
Lecture 59: The ups and downs of backpropagation
Lecture 60: Belief nets
Lecture 61: The wake-sleep algorithm
Lecture 62: Learning layers of features by stacking RBMs
Lecture 63: Discriminative fine-tuning for DBNs
Lecture 64: What happens during discriminative fine-tuning
Lecture 65: Modeling real-valued data with an RBM
Lecture 66: RBMs are infinite sigmoid belief nets
Lecture 67: From principal components analysis to autoencoders
Lecture 68: Deep autoencoders
Lecture 69: Deep autoencoders for document retrieval and visualization
Lecture 70: Semantic hashing
Lecture 71: Learning binary codes for image retrieval
Lecture 72: Shallow autoencoders for pre-training
Lecture 73: Learning a joint model of images and captions
Lecture 74: Hierarchical coordinate frames
Lecture 75: Bayesian optimization of neural network hyperparameters
The course comprises 75 lectures in total, with a running time of 12 hours, 4 minutes, and 17 seconds.