@@ -63,7 +63,7 @@
},
"source": [
"# Regression\n",
"In this section, we will try to solve the problem of **Regression**. In our first step, we will atack the problem from the analytical view."
"In this section, we will try to solve the problem of **Regression**. In our first step, we will attack the problem from the analytical view."
]
},
{
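For reference, a minimal numpy sketch of the analytical view this cell refers to, namely the closed-form normal-equation solution $w = (X^\top X)^{-1} X^\top y$. The toy data and variable names are illustrative assumptions, not taken from the notebook:

```python
import numpy as np

# Toy data generated from y = 3 + 2x plus noise (illustrative values).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 + 2.0 * x + rng.normal(0, 1, size=100)

# Design matrix with a bias column, so that w = [w0, w1].
X = np.column_stack([np.ones_like(x), x])

# Solve the normal equations (X^T X) w = X^T y directly.
w = np.linalg.solve(X.T @ X, X.T @ y)
print(w)  # approximately [3.0, 2.0]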
@@ -377,7 +377,7 @@
"## Gradient Descent:\n",
"In this section, we will use the popular iterative method called **Gradient Descent** to solve the regression problem.\n",
"\n",
"Assuming we need to find $ w_0\\ and\\ w_1 $ in the problem of linear regression, update rule using gradinet descent will be:\n",
"Assuming we need to find $ w_0\\ and\\ w_1 $ in the problem of linear regression, update rule using gradient descent will be:\n",
"$$\n",
"\\begin{array}{l}\n",
"w_0 \\leftarrow w_0 - \\eta \\frac{\\partial J}{\\partial w_0} = w_0 - \\eta \\sum_{i=1}^{m} (h_w(x^{(i)}) - y^{(i)}) \\\\\n",
@@ -660,7 +660,7 @@
"id": "Rc90d9vW7qPX"
},
"source": [
"As shown in the plots above, choosing a large learning rate leads to divergence. In this example, the update rule keeps making weights larger and larger and the weights will never converge. Choosing a small learning rate on the other hand, leads to slow convergence. In this example, learning $ w_0 $ is happening at a slow time because the update rule is being changed almost minimially !"
"As shown in the plots above, choosing a large learning rate leads to divergence. In this example, the update rule keeps making weights larger and larger and the weights will never converge. Choosing a small learning rate on the other hand, leads to slow convergence. In this example, learning $ w_0 $ is happening at a slow time because the update rule is being changed almost minimally!"
]
},
{
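Both failure modes can be reproduced numerically. In this sketch the three learning rates are illustrative choices, not the notebook's values:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * x  # targets from y = 1 + 2x, no noise

for eta in (0.2, 0.01, 0.0001):
    w0 = w1 = 0.0
    for _ in range(200):
        err = (w0 + w1 * x) - y
        w0 -= eta * err.sum()
        w1 -= eta * (err * x).sum()
    # eta=0.2 blows up, eta=0.0001 barely moves, eta=0.01 lands near (1, 2)
    print(f"eta={eta}: w0={w0:.3g}, w1={w1:.3g}")
```

Because the update sums over all examples, the stability threshold for the learning rate depends on the scale of the data, which is why a rate that works on one dataset can diverge on another.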
8 changes: 4 additions & 4 deletions Jupyter_Notebooks/Chapter_01_Supervised_Learning/README.md
@@ -2,7 +2,7 @@ You should work your way through the notebooks in the following order:


# [1. Linear Regression](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_01_Supervised_Learning/01-Linear%20Regression/01-Linear_Regression.ipynb)
- In this notebook we solve the problem of Regression from scratch using only numpy. We also experiment wih Polynomial Regression and use sklearn to solve real-world problems.
+ In this notebook we solve the problem of Regression from scratch using only numpy. We also experiment with Polynomial Regression and use sklearn to solve real-world problems.

**Warning!** If you want to run this notebook online, make sure you run the following code to download the dataset:
```
@@ -40,7 +40,7 @@ In this notebook we use Regression to solve binary classification problems and u


# [3. Logistic Regression](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_01_Supervised_Learning/03-Logistic%20Regression/03-Logistic_Regression.ipynb)
- In this notebook we learn about Logistic Regression and repurpose Regeression to classify data from small datasets.
+ In this notebook we learn about Logistic Regression and repurpose Regression to classify data from small datasets.


[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_01_Supervised_Learning/03-Logistic%20Regression/03-Logistic_Regression.ipynb)
@@ -49,7 +49,7 @@ In this notebook we learn about Logistic Regression and repurpose Regeression to


# [4. K-Nearest Neighbors](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_01_Supervised_Learning/04-kNN/04-kNN.ipynb)
- In this notebook we use implement the kNN algorithm from scratch and use it for classification.
+ In this notebook we implement the kNN algorithm from scratch and use it for classification.


[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_01_Supervised_Learning/04-kNN/04-kNN.ipynb)
@@ -58,7 +58,7 @@ In this notebook we use implement the kNN algorithm from scratch and use it for


# [5. Ensemble Learning](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_01_Supervised_Learning/05-Ensemble%20Learning/05-Ensemble_Learning.ipynb)
- In this notebook we use implement the Decision Tree Classifiers from scratch and explore how ensembling different models improve classification.
+ In this notebook we implement Decision Tree Classifiers from scratch and explore how ensembling different models improves classification.
We also work with Random Forests and XGBoost and compare their results.


22 changes: 11 additions & 11 deletions Jupyter_Notebooks/README.md
@@ -3,29 +3,29 @@
----

# [Chapter 1: Supervised Learning](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/tree/main/Jupyter_Notebooks/Chapter_01_Supervised_Learning)
- In this chapter we over the basics of Supervised Learning, where we train models using labeled data.
+ In this chapter we go over the basics of Supervised Learning, where we train models using labeled data.
To learn more about notebooks from this chapter check out [here](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_01_Supervised_Learning/README.md).

## [1. Linear Regression](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_01_Supervised_Learning/01-Linear%20Regression/01-Linear_Regression.ipynb)
- In this notebook we solve the problem of Regression from scratch using only numpy. We also experiment wih Polynomial Regression and use sklearn to solve real-world problems.
+ In this notebook we solve the problem of Regression from scratch using only numpy. We also experiment with Polynomial Regression and use sklearn to solve real-world problems.

## [2. Linear Classification](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_01_Supervised_Learning/02-Linear%20Classification/02-Linear_Classification.ipynb)
In this notebook we use Regression to solve binary classification problems and understand their limitations.

## [3. Logistic Regression](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_01_Supervised_Learning/03-Logistic%20Regression/03-Logistic_Regression.ipynb)
- In this notebook we learn about Logistic Regression and repurpose Regeression to classify data from small datasets.
+ In this notebook we learn about Logistic Regression and repurpose Regression to classify data from small datasets.

## [4. K-Nearest Neighbors](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_01_Supervised_Learning/04-kNN/04-kNN.ipynb)
- In this notebook we use implement the kNN algorithm from scratch and use it for classification.
+ In this notebook we implement the kNN algorithm from scratch and use it for classification.

## [5. Ensemble Learning](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_01_Supervised_Learning/05-Ensemble%20Learning/05-Ensemble_Learning.ipynb)
- In this notebook we use implement the Decision Tree Classifiers from scratch and explore how ensembling different models improve classification.
+ In this notebook we implement Decision Tree Classifiers from scratch and explore how ensembling different models improves classification.
We also work with Random Forests and XGBoost and compare their results.

----

# [Chapter 2: Unsupervised Learning](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/tree/main/Jupyter_Notebooks/Chapter_02_Unsupervised_Learning)
- In this chapter we over the basics of Unsupervised Learning, where we train models using unlabeled data.
+ In this chapter we go over the basics of Unsupervised Learning, where we train models using unlabeled data.
To learn more about notebooks from this chapter check out [here](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_02_Unsupervised_Learning/README.md).

## [1. Clustering](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_02_Unsupervised_Learning/Clustering.ipynb)
@@ -37,7 +37,7 @@ In this notebook we go over different Dimensionality Reduction algorithms such a
----

# [Chapter 3: Neural Networks](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/tree/main/Jupyter_Notebooks/Chapter_03_Neural_Networks)
- In this chapter we over the basics of Neural Networks, which are the building blocks of modern Deep Learning.
+ In this chapter we go over the basics of Neural Networks, which are the building blocks of modern Deep Learning.
To learn more about notebooks from this chapter check out [here](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_03_Neural_Networks/README.md).

## [1. Neural Networks from Scratch](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_03_Neural_Networks/NNs_from_scratch.ipynb)
@@ -60,8 +60,8 @@ We try to get better results using different optimizers, bigger networks, batch

----

- # [Chapter 4: Compuer Vision](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_04_Compuer_Vision)
- In this chapter we over the basics of Computer Vision, exploring how neural networks can be modified to solve different problems in Computer Vision.
+ # [Chapter 4: Computer Vision](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_04_Compuer_Vision)
+ In this chapter we go over the basics of Computer Vision, exploring how neural networks can be modified to solve different problems in Computer Vision.
To learn more about notebooks from this chapter check out [here](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_04_Compuer_Vision/README.md).

## [1. Convolutional Neural Networks from Scratch](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_04_Computer_Vision/CNNs_from_scratch.ipynb)
@@ -86,7 +86,7 @@ In this notebook we introduce the U-Net architecture and investigate its use cas
----

# [Chapter 5: Natural Language Processing](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_05_Natural_Language_Processing)
- In this chapter we over the basics of Natural Language Processing, exploring how neural networks can be modified to solve different problems in NLP.
+ In this chapter we go over the basics of Natural Language Processing, exploring how neural networks can be modified to solve different problems in NLP.
To learn more about notebooks from this chapter check out [here](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_05_Natural_Language_Processing/README.md).

## [1. Word Embeddings](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_05_Natural_Language_Processing/01-Word%20Embedding/Word%20Embedding.ipynb)
@@ -98,7 +98,7 @@ In this notebook, we are going to implement the Attention layers from scratch us
----

# [Chapter 6: Contrastive Learning](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/tree/main/Jupyter_Notebooks/Chapter_06_Contrastive_Learning)
- In this chapter we over the basics of Contrastive Learning, exploring how models trained on images and languages can be used together to enhance the performance of our models.
+ In this chapter we go over the basics of Contrastive Learning, exploring how models trained on images and languages can be used together to enhance the performance of our models.
To learn more about notebooks from this chapter check out [here](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_06_Contrastive_Learning/README.md).

## [1. Vision Transformers](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/blob/main/Jupyter_Notebooks/Chapter_06_Contrastive_Learning/ViT.ipynb)
@@ -40,7 +40,7 @@
},
"source": [
"# Regression\n",
"In this section, we will try to solve the problem of **Regression**. In our first step, we will atack the problem from the analytical view."
"In this section, we will try to solve the problem of **Regression**. In our first step, we will attack the problem from the analytical view."
]
},
{
@@ -404,7 +404,7 @@
"## Gradient Descent:\n",
"In this section, we will use the popular iterative method called **Gradient Descent** to solve the regression problem.\n",
"\n",
"Assuming we need to find $ w_0\\ and\\ w_1 $ in the problem of linear regression, update rule using gradinet descent will be:\n",
"Assuming we need to find $ w_0\\ and\\ w_1 $ in the problem of linear regression, update rule using gradient descent will be:\n",
"$$\n",
"\\begin{array}{l}\n",
"w_0 \\leftarrow w_0 - \\eta \\frac{\\partial J}{\\partial w_0} = w_0 - \\eta \\sum_{i=1}^{m} (h_w(x^{(i)}) - y^{(i)}) \\\\\n",
@@ -693,7 +693,7 @@
"id": "Rc90d9vW7qPX"
},
"source": [
"As shown in the plots above, choosing a large learning rate leads to divergence. In this example, the update rule keeps making weights larger and larger and the weights will never converge. Choosing a small learning rate on the other hand, leads to slow convergence. In this example, learning $ w_0 $ is happening at a slow time because the update rule is being changed almost minimially !"
"As shown in the plots above, choosing a large learning rate leads to divergence. In this example, the update rule keeps making weights larger and larger and the weights will never converge. Choosing a small learning rate on the other hand, leads to slow convergence. In this example, learning $ w_0 $ is happening at a slow time because the update rule is being changed almost minimally!"
]
},
{
2 changes: 1 addition & 1 deletion README.md
@@ -1,6 +1,6 @@
# Machine Learning Course

- Welcome to the "Machine Learning" course of [Department of Computer Engineering](https://ce.sharif.edu), [Sharif University of Technology](https://www.sharif.edu).
+ Welcome to the "Machine Learning" course at the [Department of Computer Engineering](https://ce.sharif.edu), [Sharif University of Technology](https://www.sharif.edu).

You can access [slides](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/tree/main/Slides), [Jupyter notebooks](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/tree/main/Jupyter_Notebooks), and [exercises](https://github.com/SharifiZarchi/Introduction_to_Machine_Learning/tree/main/Exercises).
