Hinton: Neural Networks for Machine Learning
75 lessons, 12 hours 4 minutes 17 seconds total
Overview
Recorded in 2012 and taught by Geoffrey Hinton, one of the founding figures of machine learning, this course is known for its depth and insight. It gives a thorough introduction to neural network methods in machine learning, covering the applications of artificial neural networks to speech recognition, object recognition, image segmentation, and the modeling of language and human motion, as well as the role these networks play in machine learning more broadly.
Geoffrey Hinton, often called the "father of neural networks" and a pioneer of deep learning, earned his PhD in artificial intelligence from the University of Edinburgh and is a distinguished professor at the University of Toronto. In 2012 he received Canada's Killam Prize, the country's highest academic honor, sometimes described as the "Canadian Nobel Prize". In 2013 he joined Google to lead an AI team. His work brought neural networks to the forefront of research and application, turning "deep learning" from a fringe topic into a core technology relied on by Google and other internet giants, and applied the backpropagation algorithm to neural networks and deep learning. He is a recipient of the 2018 Turing Award and a founding figure of deep learning.
Lessons
- Lesson 1: Why do we need machine learning (13:14)
- Lesson 2: What are neural networks (8:30)
- Lesson 3: Some simple models of neurons (8:23)
- Lesson 4: A simple example of learning (5:38)
- Lesson 5: Three types of learning (7:37)
- Lesson 6: An overview of the main types of network architecture (7:08)
- Lesson 7: Perceptrons (7:56)
- Lesson 8: A geometrical view of perceptrons (6:24)
- Lesson 9: Why the learning works (6:02)
- Lesson 10: What perceptrons cannot do (4:38)
- Lesson 11: Learning the weights of a linear neuron (11:55)
- Lesson 12: The error surface for a linear neuron (5:03)
- Lesson 13: Learning the weights of a logistic output neuron (3:56)
- Lesson 14: The backpropagation algorithm (11:51)
- Lesson 15: How to use the derivatives computed by the backpropagation algorithm (9:49)
- Lesson 16: Learning to predict the next word (12:33)
- Lesson 17: A brief diversion into cognitive science (4:26)
- Lesson 18: Another diversion: The softmax output function (7:20)
- Lesson 19: Neuro-probabilistic language models (7:52)
- Lesson 20: Ways to deal with a large number of possible outputs (12:16)
- Lesson 21: Why object recognition is difficult (4:40)
- Lesson 22: Ways to achieve viewpoint invariance (5:58)
- Lesson 23: Convolutional neural networks for hand-written digit recognition (16:01)
- Lesson 24: Convolutional neural networks for object recognition (17:44)
- Lesson 25: Overview of mini-batch gradient descent (8:22)
- Lesson 26: A bag of tricks for mini-batch descent (13:15)
- Lesson 27: The momentum method (8:42)
- Lesson 28: A separate, adaptive learning rate for each connection (5:44)
- Lesson 29: rmsprop: divide the gradient (11:38)
- Lesson 30: Modeling sequences: brief overview (17:23)
- Lesson 31: Training RNNs with backpropagation (6:23)
- Lesson 32: A toy example of training an RNN (6:14)
- Lesson 33: Why it is difficult to train an RNN (7:43)
- Lesson 34: Long term short term memory (9:15)
- Lesson 35: Modeling character strings with multiplicative connections (14:35)
- Lesson 36: Learning to predict the next character using HF (12:24)
- Lesson 37: Echo state networks (9:37)
- Lesson 38: Overview of ways to improve generalization (11:44)
- Lesson 39: Limiting the size of the weights (6:22)
- Lesson 40: Using noise as a regularizer (7:31)
- Lesson 41: Introduction to the Bayesian approach (10:49)
- Lesson 42: The Bayesian interpretation of weight decay (10:52)
- Lesson 43: MacKay's quick and dirty method of fixing weight costs (3:31)
- Lesson 44: Why it helps to combine models (13:10)
- Lesson 45: Mixtures of experts (13:15)
- Lesson 46: The idea of full Bayesian learning (7:27)
- Lesson 47: Making full Bayesian learning practical (6:44)
- Lesson 48: Dropout: an efficient way to combine neural nets (8:35)
- Lesson 49: Hopfield nets (13:01)
- Lesson 50: Dealing with spurious minima in Hopfield nets (11:02)
- Lesson 51: Hopfield nets with hidden units (9:39)
- Lesson 52: Using stochastic units to improve search (10:24)
- Lesson 53: How a Boltzmann machine models data (11:44)
- Lesson 54: The Boltzmann machine learning algorithm (12:15)
- Lesson 55: More efficient ways to get the statistics (14:48)
- Lesson 56: Restricted Boltzmann machines (10:54)
- Lesson 57: An example of contrastive divergence learning (7:14)
- Lesson 58: RBMs for collaborative filtering (8:16)
- Lesson 59: The ups and downs of backpropagation (9:53)
- Lesson 60: Belief nets (12:35)
- Lesson 61: The wake-sleep algorithm (13:14)
- Lesson 62: Learning layers of features by stacking RBMs (17:34)
- Lesson 63: Discriminative fine-tuning for DBNs (9:40)
- Lesson 64: What happens during discriminative fine-tuning (8:39)
- Lesson 65: Modeling real-valued data with an RBM (9:56)
- Lesson 66: RBMs are infinite sigmoid belief nets (17:11)
- Lesson 67: From principal components analysis to autoencoders (7:57)
- Lesson 68: Deep autoencoders (4:10)
- Lesson 69: Deep autoencoders for document retrieval and visualization (8:19)
- Lesson 70: Semantic hashing (8:50)
- Lesson 71: Learning binary codes for image retrieval (9:37)
- Lesson 72: Shallow autoencoders for pre-training (7:02)
- Lesson 73: Learning a joint model of images and captions (9:05)
- Lesson 74: Hierarchical coordinate frames (9:40)
- Lesson 75: Bayesian optimization of neural network hyperparameters (13:29)