Machine Learning 101
HOME
Machine Learning
1. Introduction
1.1. Supervised Learning
1.2. Unsupervised Learning
1.3. Semi-Supervised Learning
1.4. Key Terms
1.4.1. Model
1.4.2. Algorithm
1.4.3. Training
1.4.4. Regression
1.4.5. Classification
1.4.6. Target
1.4.7. Feature
1.4.8. Label
1.4.9. Overfitting
1.4.10. Regularization
1.4.11. Parameter and Hyper-Parameter
2. Learning Models
2.1. Regression Algorithms
2.2. Instance-based Algorithms
2.3. Regularization Algorithms
2.4. Decision Tree Algorithms
2.5. Bayesian Algorithms
2.6. Clustering Algorithms
2.7. Association Rule Learning Algorithms
2.8. Dimensionality Reduction Algorithms
2.9. Ensemble Algorithms
3. Bias and Variance
3.1. Bias
3.2. Variance
3.3. Differences
3.4. Mathematical Representation
3.4.1. Irreducible error
3.4.2. Bias error
3.4.3. Variance error
3.5. Bias-Variance Tradeoff
4. Covariance and Correlation
4.1. Covariance
4.2. Correlation
4.3. Difference
5. Model Metrics
5.1. Mean Absolute Error
5.2. Mean Squared Error
5.3. Log Loss
5.3.1. Binary Classification
5.3.2. Multiclass Classification
5.4. Confusion Matrix
5.4.1. Key Terms
5.5. Classification Accuracy
5.6. Precision
5.7. Recall or Sensitivity
5.8. F1 Score
5.9. Receiver operating characteristic (ROC)
5.9.1. True Positive Rate (Sensitivity)
5.9.2. False Positive Rate (Specificity)
5.10. AUC: Area Under the ROC Curve
5.10.1. Properties
6. Underfitting and Overfitting
6.1. Overfitting
6.2. Underfitting
6.3. Example
7. Model Performance
7.1. Data Splitting
7.2. Validation
7.3. Cost function
7.4. High Bias and High Variance
7.4.1. Regime 1 (High Variance)
7.4.2. Regime 2 (High Bias)
7.5. Regularizations
7.5.1. Key points
7.6. Early Stopping
7.7. Hyperparameter Optimization
7.7.1. Grid search
7.7.2. Random search
7.7.3. Bayesian optimization
7.7.4. Gradient-based optimization
8. Gradient Descent
8.1. Gradient
8.2. Cost Function
8.3. Method
8.4. Algorithm
8.4.1. Steps
8.5. Learning Rate
8.6. Convergence
8.7. Types of Gradient Descent
8.7.1. Batch Gradient Descent
8.7.2. Stochastic Gradient Descent (SGD)
8.7.3. Mini-batch Gradient Descent
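The gradient-descent chapter above reduces to one update rule: move each parameter a small step against the gradient of the cost, scaled by the learning rate. A minimal sketch of that rule (the cost function, starting point, and learning rate here are illustrative choices, not taken from the docs):

```python
# Minimal gradient-descent sketch: repeat w <- w - lr * grad(w).
# Cost f(w) = (w - 3)^2, so grad(w) = 2 * (w - 3) and the minimum is at w = 3.
def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Return the parameter after `steps` gradient-descent updates."""
    w = w0
    for _ in range(steps):
        w = w - lr * grad(w)  # step against the gradient
    return w

w_star = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
```

With `lr=0.1` each step contracts the distance to the minimum by a factor of 0.8, so 100 steps leave `w_star` essentially at 3; a larger learning rate converges faster until it overshoots and diverges, which is the tradeoff section 8.5 discusses.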
Regression
1. Regression
1.1. Basic Models
1.1.1. Continuous variables
1.1.1.1. Linear regression
1.1.1.2. Polynomial Regression
1.1.1.3. Ridge regression
1.1.1.4. Lasso regression
1.1.1.5. ElasticNet Regression
1.1.2. Categorical variables
1.1.2.1. Binary Logistic Regression
1.1.2.2. Ordinal Logistic Regression
1.1.2.3. Nominal Logistic Regression
1.1.2.4. Poisson regression
1.2. Selecting Model
2. Simple Linear Regression
2.1. Ordinary Least Squares
2.1.1. Method
2.1.2. Evaluation
3. Example Simple Linear Regression
3.1. Ordinary Least Squares
3.1.1. Python
3.1.2. Scikit
3.2. Gradient Descent
3.2.1. Python
3.2.2. Scikit
4. Multiple Linear Regression
4.1. Least Squares Residual
4.1.1. Method
4.1.1.1. Intercept
4.1.2. Evaluation
5. Example Multiple Linear Regression
5.1. Ordinary Least Squares
5.1.1. Python
5.1.2. Scikit
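As a companion to the simple-linear-regression chapters above, here is a sketch of fitting y ≈ wx + b two ways: the ordinary-least-squares closed form and batch gradient descent on the mean squared error. The toy data and learning rate are invented for illustration; the two fits should nearly agree.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.0, 6.2, 8.1, 9.9])  # roughly y = 2x

# Ordinary least squares closed form:
# slope = cov(x, y) / var(x), intercept = y_bar - slope * x_bar
slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
intercept = y.mean() - slope * x.mean()

# Batch gradient descent on the mean-squared-error cost
w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    error = (w * x + b) - y
    w -= lr * 2 * np.mean(error * x)  # dMSE/dw
    b -= lr * 2 * np.mean(error)      # dMSE/db
# (w, b) should now be close to (slope, intercept)
```

Because the MSE of a linear model is a convex quadratic, gradient descent with a small enough learning rate converges to the same unique minimizer that the closed form computes directly.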