diff --git a/README.md b/README.md
index 6e3143483b3133e5a2891e2c12f9a6eda5cc39ba..b91f4d1a5979d5b5bc95671ea8520ef75945e57d 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,5 @@
# This project is in progress...
In this report, we optimize an idea previously presented under the title Learning Deep Representations for Graph Clustering (F. Tian, B. Gao, Q. Cui, E. Chen, T. Liu, 2014). The idea is described as follows: “modeling a simple method which embeds the similarity graph by a deep autoencoder with sparsity penalty, then runs the K-Means algorithm on the embedded graph to obtain the clustering result”. Although our model is based on the original idea, the graph similarity measure, the loss function, and the model training method are different. We also compare our results with the two previous results reported in recent papers (F. Tian, B. Gao, Q. Cui, E. Chen, T. Liu, 2014), (S. Cao, W. Lu, Q. Xu, 2016) on the same datasets.
-Below you will see a range of 10 NMI autoencoder on the datasets:
+Below you can see the autoencoder's embedding of the data into two dimensions:
-![Alt text](https://github.com/saman-nia/Autoencoder_Clustering/blob/master/20%20newsgroups%20text%20dataset/Random%20Factor/AE_Random_Factor.png?raw=true "Title")
+![Alt text](https://github.com/saman-nia/Autoencoder_Clustering/blob/master/Visualizations/2D_Embedded.png?raw=true "Title")
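The pipeline described above (autoencoder embedding followed by K-Means) can be sketched as follows. This is a minimal illustration, not the repository's actual code: it uses scikit-learn's `MLPRegressor` trained to reconstruct its input as a stand-in for the deep autoencoder, and the paper's sparsity penalty is only approximated by the L2 regularization term `alpha`. All data and parameter choices here are hypothetical.

```python
# Sketch: embed data with an autoencoder, then cluster the embedding with
# K-Means. An MLPRegressor fit to reconstruct its own input acts as a
# simple autoencoder; `alpha` (L2 penalty) loosely stands in for the
# sparsity penalty mentioned in the text.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy data: two well-separated blobs in 10 dimensions.
X = np.vstack([rng.normal(0.0, 0.3, (50, 10)),
               rng.normal(2.0, 0.3, (50, 10))])

# Autoencoder: reconstruct X through a 2-dimensional hidden layer.
ae = MLPRegressor(hidden_layer_sizes=(2,), activation="relu",
                  alpha=1e-3, max_iter=2000, random_state=0)
ae.fit(X, X)

# Forward pass up to the hidden layer gives the 2-D embedding
# (ReLU of the first layer's affine transform).
embedding = np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])

# K-Means on the embedded data yields the clustering result.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
print(embedding.shape, np.unique(labels))
```

A 2-D hidden layer is used here only so the embedding can be plotted directly, matching the visualization linked above.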