From d9ef6aef1404989e82d7a01802e3be23e94a79e6 Mon Sep 17 00:00:00 2001
From: Saman Nia
Date: Wed, 7 Mar 2018 01:11:49 +0100
Subject: [PATCH] Final
---
README.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index 6e31434..b91f4d1 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,5 @@
# This project is in progress...
In this report, we optimize an idea originally presented in Learning Deep Representations for Graph Clustering (F. Tian, B. Gao, Q. Cui, E. Chen, T. Liu, 2014). The idea is described as follows: “model a simple method that embeds the similarity graph with a deep autoencoder with a sparsity penalty, then run the K-Means algorithm on the embedded graph to obtain the clustering result”. Although our model is based on the original idea, the graph similarity measure, the loss function, and the model training method are different. We also compare our results with the two previous results from the recent papers (F. Tian, B. Gao, Q. Cui, E. Chen, T. Liu, 2014) and (S. Cao, W. Lu, Q. Xu, 2016) on the same datasets.
-Below you will see a range of 10 NMI autoencoder on the datasets:
+Below you will see the autoencoder's embedding of the data into two dimensions:
-![Alt text](https://github.com/saman-nia/Autoencoder_Clustering/blob/master/20%20newsgroups%20text%20dataset/Random%20Factor/AE_Random_Factor.png?raw=true "Title")
+![Alt text](https://github.com/saman-nia/Autoencoder_Clustering/blob/master/Visualizations/2D_Embedded.png?raw=true "Title")
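The pipeline the README describes — embed a similarity graph with an autoencoder, then run K-Means on the embedding — can be sketched as follows. This is a minimal illustration only, not the repository's actual model (which uses a deep architecture with a sparsity penalty): it uses a toy block-structured similarity matrix, a single-hidden-layer autoencoder trained by plain gradient descent, and a bare-bones K-Means loop. All names and hyperparameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "similarity graph": two blocks of nodes with strong within-block
# similarity plus a little noise (a hypothetical stand-in for a real graph).
n = 40
S = np.zeros((n, n))
S[:20, :20] = 0.9
S[20:, 20:] = 0.9
S += 0.05 * rng.random((n, n))

# Single-hidden-layer autoencoder: sigmoid encoder to 2-D, linear decoder.
d_in, d_hid = n, 2
W1 = rng.normal(0.0, 0.1, (d_in, d_hid))
W2 = rng.normal(0.0, 0.1, (d_hid, d_in))
b1 = np.zeros(d_hid)
b2 = np.zeros(d_in)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.5
for _ in range(500):
    H = sigmoid(S @ W1 + b1)            # encoder: 2-D embedding of each row
    Y = H @ W2 + b2                     # decoder: reconstruct the row
    err = Y - S                         # mean-squared-error gradient term
    # Backpropagation through decoder and encoder.
    gW2 = H.T @ err / n
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * H * (1.0 - H)
    gW1 = S.T @ dH / n
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

codes = sigmoid(S @ W1 + b1)            # the 2-D embedded data

# Bare-bones K-Means (k = 2) on the embedded points; for this toy example we
# seed one center near each end of the node ordering.
k = 2
centers = codes[[0, n - 1]].copy()
for _ in range(20):
    dists = ((codes[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    labels = np.argmin(dists, axis=1)
    centers = np.array([
        codes[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
        for j in range(k)
    ])
```

With this block structure the two halves of the graph land in distinct regions of the 2-D embedding, so K-Means recovers the two planted clusters.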
--
GitLab