Lesson 1: Welcome
Lesson 2: What is Machine Learning
Lesson 3: Supervised Learning
Lesson 4: Unsupervised Learning
Lesson 5: Model Representation
Lesson 6: Cost Function
Lesson 7: Cost Function - Intuition I
Lesson 8: Cost Function - Intuition II
Lesson 9: Gradient Descent
Lesson 10: Gradient Descent Intuition
Lesson 11: Gradient Descent For Linear Regression
Lesson 12: What's Next
Lesson 13: Matrices and Vectors
Lesson 14: Addition and Scalar Multiplication
Lesson 15: Matrix Vector Multiplication
Lesson 16: Matrix Matrix Multiplication
Lesson 17: Matrix Multiplication Properties
Lesson 18: Inverse and Transpose
Lesson 19: Multiple Features
Lesson 20: Gradient Descent for Multiple Variables
Lesson 21: Gradient Descent in Practice I - Feature Scaling
Lesson 22: Gradient Descent in Practice II - Learning Rate
Lesson 23: Features and Polynomial Regression
Lesson 24: Normal Equation
Lesson 25: Normal Equation Noninvertibility (Optional)
Lesson 26: Basic Operations
Lesson 27: Moving Data Around
Lesson 28: Computing on Data
Lesson 29: Plotting Data
Lesson 30: Control Statements: for, while, if statements
Lesson 31: Vectorization
Lesson 32: Working on and Submitting Programming Exercises
Lesson 33: Classification
Lesson 34: Hypothesis Representation
Lesson 35: Decision Boundary
Lesson 36: Cost Function
Lesson 37: Simplified Cost Function and Gradient Descent
Lesson 38: Advanced Optimization
Lesson 39: Multiclass Classification: One-vs-all
Lesson 40: The Problem of Overfitting
Lesson 41: Cost Function
Lesson 42: Regularized Linear Regression
Lesson 43: Regularized Logistic Regression
Lesson 44: Non-linear Hypotheses
Lesson 45: Neurons and the Brain
Lesson 46: Model Representation I
Lesson 47: Model Representation II
Lesson 48: Examples and Intuitions I
Lesson 49: Examples and Intuitions II
Lesson 50: Multiclass Classification
Lesson 51: Cost Function
Lesson 52: Backpropagation Algorithm
Lesson 53: Backpropagation Intuition
Lesson 54: Implementation Note: Unrolling Parameters
Lesson 55: Gradient Checking
Lesson 56: Random Initialization
Lesson 57: Putting It Together
Lesson 58: Autonomous Driving
Lesson 59: Deciding What to Try Next
Lesson 60: Evaluating a Hypothesis
Lesson 61: Model Selection and Train-Validation-Test Sets
Lesson 62: Diagnosing Bias vs. Variance
Lesson 63: Regularization and Bias-Variance
Lesson 64: Learning Curves
Lesson 65: Deciding What to Do Next Revisited
Lesson 66: Prioritizing What to Work On
Lesson 67: Error Analysis
Lesson 68: Error Metrics for Skewed Classes
Lesson 69: Trading Off Precision and Recall
Lesson 70: Data For Machine Learning
Lesson 71: Optimization Objective
Lesson 72: Large Margin Intuition
Lesson 73: Mathematics Behind Large Margin Classification (Optional)
Lesson 74: Kernels I
Lesson 75: Kernels II
Lesson 76: Using An SVM
Lesson 77: Unsupervised Learning: Introduction
Lesson 78: K-Means Algorithm
Lesson 79: Optimization Objective
Lesson 80: Random Initialization
Lesson 81: Choosing the Number of Clusters
Lesson 82: Motivation I: Data Compression
Lesson 83: Motivation II: Visualization
Lesson 84: Principal Component Analysis Problem Formulation
Lesson 85: Principal Component Analysis Algorithm
Lesson 86: Choosing the Number of Principal Components
Lesson 87: Reconstruction from Compressed Representation
Lesson 88: Advice for Applying PCA
Lesson 89: Problem Motivation
Lesson 90: Gaussian Distribution
Lesson 91: Algorithm
Lesson 92: Developing and Evaluating an Anomaly Detection System
Lesson 93: Anomaly Detection vs. Supervised Learning
Lesson 94: Choosing What Features to Use
Lesson 95: Multivariate Gaussian Distribution (Optional)
Lesson 96: Anomaly Detection using the Multivariate Gaussian Distribution (Optional)
Lesson 97: Problem Formulation
Lesson 98: Content Based Recommendations
Lesson 99: Collaborative Filtering
Lesson 100: Collaborative Filtering Algorithm
Lesson 101: Vectorization: Low Rank Matrix Factorization
Lesson 102: Implementational Detail: Mean Normalization
Lesson 103: Learning With Large Datasets
Lesson 104: Stochastic Gradient Descent
Lesson 105: Mini-Batch Gradient Descent
Lesson 106: Stochastic Gradient Descent Convergence
Lesson 107: Online Learning
Lesson 108: Map Reduce and Data Parallelism
Lesson 109: Problem Description and Pipeline
Lesson 110: Sliding Windows
Lesson 111: Getting Lots of Data and Artificial Data
Lesson 112: Ceiling Analysis: What Part of the Pipeline to Work on Next
Lesson 113: Summary and Thank You
The course comprises 113 lessons in total, with a combined running time of 19 hours, 28 minutes, 58 seconds.