From 739c48e86b8cba35b1545c2b7559e06d650f275d Mon Sep 17 00:00:00 2001
From: JongHyun Choi
Date: Thu, 6 Sep 2018 14:36:09 +0900
Subject: [PATCH] Fix typo and add path
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 Chap08-Dimensionality_Reduction/LLE-study.md | 4 ++--
 README.md                                    | 2 ++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/Chap08-Dimensionality_Reduction/LLE-study.md b/Chap08-Dimensionality_Reduction/LLE-study.md
index 183b6c6..c53b3df 100644
--- a/Chap08-Dimensionality_Reduction/LLE-study.md
+++ b/Chap08-Dimensionality_Reduction/LLE-study.md
@@ -12,7 +12,7 @@
 
 
-![](./images/manifold02.png)
+![](./images/manifold05.PNG)
 
 
@@ -38,7 +38,7 @@ Let's walk through each step of the LLE algorithm in detail.
 
 ### Step 1: Select Neighbors
 
-First, for each data point $X_i$ in a dataset of $m$ points with $N$ dimensions ($N$ features), select the $k$ points $X_j$, $(j=1, \dots, k)$ closest to $X_i$ (its $k$-nearest neighbors). Here, $k$ is a hyper-parmeter whose value must be chosen manually.
+First, for each data point $\vec{x}_i$ in a dataset of $m$ points with $N$ dimensions ($N$ features), select the $k$ points $\vec{x}_j$, $(j=1, \dots, k)$ closest to $\vec{x}_i$ (its $k$-nearest neighbors). Here, $k$ is a hyper-parameter whose value must be chosen manually.
 
diff --git a/README.md b/README.md
index b8d7eb8..a6dee89 100644
--- a/README.md
+++ b/README.md
@@ -16,6 +16,8 @@
 - Chap05 - [Support Vector Machine](https://github.com/ExcelsiorCJH/Hands-On-ML/blob/master/Chap05-SVM/Chap05-SVM.ipynb)
 - Chap06 - [Decision Tree](https://github.com/ExcelsiorCJH/Hands-On-ML/blob/master/Chap06-Decision_Tree/Chap06-Decision_Tree.ipynb)
 - Chap07 - [Ensemble Learning and Random Forests](https://github.com/ExcelsiorCJH/Hands-On-ML/blob/master/Chap07-Ensemble_Learning_and_Random_Forests/Chap07-Ensemble_Learning_and_Random_Forests.ipynb)
+- Chap08 - [Dimensionality Reduction](https://github.com/ExcelsiorCJH/Hands-On-ML/blob/master/Chap08-Dimensionality_Reduction/Chap08-Dimensionality_Reduction.ipynb)
+

 ## 3. References
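
The paragraph patched above describes Step 1 of LLE: for each data point, pick its $k$ nearest neighbors, with $k$ chosen by hand. As a minimal sketch of that step only (not part of the patch), the neighbor search could look like the following, assuming scikit-learn is available; the array `X`, the value of `k`, and all variable names here are illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

X = np.random.rand(100, 3)   # m = 100 points in N = 3 dimensions (illustrative data)
k = 10                       # hyper-parameter chosen by hand

# Ask for k + 1 neighbors because each point is returned as its own
# nearest neighbor (distance 0) and must be dropped afterwards.
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
distances, indices = nn.kneighbors(X)

# Remove the self-match in column 0, leaving the k neighbor indices per point.
neighbor_indices = indices[:, 1:]
print(neighbor_indices.shape)  # (100, 10)
```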