Lesson 1: Course Introduction
Lesson 2: Large-Margin Separating Hyperplane
Lesson 3: Standard Large-Margin Problem
Lesson 4: Support Vector Machine
Lesson 5: Reasons behind Large-Margin Hyperplane
Lesson 6: Motivation of Dual SVM
Lesson 7: Lagrange Dual SVM
Lesson 8: Solving Dual SVM
Lesson 9: Messages behind Dual SVM
Lesson 10: Kernel Trick
Lesson 11: Polynomial Kernel
Lesson 12: Gaussian Kernel
Lesson 13: Comparison of Kernels
Lesson 14: Motivation and Primal Problem
Lesson 15: Dual Problem
Lesson 16: Messages behind Soft-Margin SVM
Lesson 17: Model Selection
Lesson 18: Soft-Margin SVM as Regularized Model
Lesson 19: SVM versus Logistic Regression
Lesson 20: SVM for Soft Binary Classification
Lesson 21: Kernel Logistic Regression
Lesson 22: Kernel Ridge Regression
Lesson 23: Support Vector Regression Primal
Lesson 24: Support Vector Regression Dual
Lesson 25: Summary of Kernel Models
Lesson 26: Motivation of Aggregation
Lesson 27: Uniform Blending
Lesson 28: Linear and Any Blending
Lesson 29: Bagging (Bootstrap Aggregation)
Lesson 30: Motivation of Boosting
Lesson 31: Diversity by Re-weighting
Lesson 32: Adaptive Boosting Algorithm
Lesson 33: Adaptive Boosting in Action
Lesson 34: Decision Tree Hypothesis
Lesson 35: Decision Tree Algorithm
Lesson 36: Decision Tree Heuristics in C&RT
Lesson 37: Decision Tree in Action
Lesson 38: Random Forest Algorithm
Lesson 39: Out-Of-Bag Estimate
Lesson 40: Feature Selection
Lesson 41: Random Forest in Action
Lesson 42: Adaptive Boosted Decision Tree
Lesson 43: Optimization View of AdaBoost
Lesson 44: Gradient Boosting
Lesson 45: Summary of Aggregation Models
Lesson 46: Motivation
Lesson 47: Neural Network Hypothesis
Lesson 48: Neural Network Learning
Lesson 49: Optimization and Regularization
Lesson 50: Deep Neural Network
Lesson 51: Autoencoder
Lesson 52: Denoising Autoencoder
Lesson 53: Principal Component Analysis
Lesson 54: RBF Network Hypothesis
Lesson 55: RBF Network Learning
Lesson 56: k-Means Algorithm
Lesson 57: k-Means and RBF Network in Action
Lesson 58: Linear Network Hypothesis
Lesson 59: Basic Matrix Factorization
Lesson 60: Stochastic Gradient Descent
Lesson 61: Summary of Extraction Models
Lesson 62: Feature Exploitation Techniques
Lesson 63: Error Optimization Techniques
Lesson 64: Overfitting Elimination Techniques
Lesson 65: Machine Learning in Action
The course comprises 65 lessons, with a total running time of 16 hours, 4 minutes, 32 seconds.