From 454563be9e67f21258f5a4856e682080c70192dc Mon Sep 17 00:00:00 2001
From: Hammouda Elbez <hammouda.elbez@univ-lille.fr>
Date: Sat, 19 Nov 2022 07:51:00 +0100
Subject: [PATCH] Re

---
 README.md | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/README.md b/README.md
index f8c9464..44d2c10 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,5 @@
 # Progressive Layer-based Compression for Convolutional Spiking Neural Network 
-Here you find the code for the paper `Progressive Layer-based Compression for Convolutional Spiking Neural Network` <!--([link](https://hal.archives-ouvertes.fr/hal-03826823))-->
+Here you find the code for the paper `Progressive Layer-based Compression for Convolutional Spiking Neural Network`. <!--([link](https://hal.archives-ouvertes.fr/hal-03826823))-->
 
 ## CSNN
 
@@ -11,16 +11,16 @@ Here you find the code for the paper `Progressive Layer-based Compression for Co
 
 ### Building the binaries
 
-Run the following commands inside CSNN folder:
+Run the following commands inside the CSNN folder:
 
     mkdir build
     cd build
     cmake ../ -G"Unix Makefiles" -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS='-std=c++11'
     make
-Don't forget to build again if you change the source code.
+Remember to rebuild if you change the source code.
 
 ### How to use CSNN
-Once the `make` command is finished, you should see binary files which represent each simulation
+Once the `make` command finishes, you should see binary files, one for each simulation.
 
 Run a simulation:
 
@@ -30,11 +30,11 @@ Run a simulation:
         x = enable PP (pruning) [0 or 1]
         y = enable DSWR (reinforcement) [0 or 1]
 
-To run MNIST simulation without compression and reinforcement:
+For example, to run the MNIST simulation without compression or reinforcement:
 
     ./Mnist 0 0
 
-In `apps` folder you find the source code for each simulation where you can change the architecture, the network parameters, or activate the layerwise compression.
+In the `apps` folder, you will find the source code for each simulation, where you can change the architecture, the network parameters, or activate the [layerwise compression](https://gitlab.univ-lille.fr/hammouda.elbez/progressive-layer-based-compression-for-convolutional-spiking-neural-network/-/blob/main/CSNN-Simulator/apps/Mnist.cpp#L21).
 
 ## Going from CSNN to SpiNNaker
 To transfer the learned weights from CSNN to SpiNNaker, we use the following command:
@@ -46,7 +46,7 @@ For example:
     ./Weight_extractor mnist_params conv1
 > `weights_conv1` is generated
 
-This will generate another binary file (named weights_[name_layer]) which contains only the weights of the selected layer .
+This will generate another binary file (named `weights_[name_layer]`) which contains only the weights of the selected layer.
 
 ## How to use SpiNNaker scripts
 To setup the SpiNNaker board, please check the following link:
@@ -57,11 +57,11 @@ http://spinnakermanchester.github.io/
 
 in SpiNNaker folder:
 
-run the `ConvertTheWeights.ipynb` notebook to adapt the extracted weights from CSNN to a text format readable by PyNN.
+Run the `ConvertTheWeights.ipynb` notebook to adapt the extracted weights from CSNN to a text format readable by PyNN.
 
 ### Using the extracted weights with SpiNNaker
 
-Run the `SpiNNakerRun.ipynb` notebook to deploy the weights on the board and run simulation.
+Run the `SpiNNakerRun.ipynb` notebook to deploy the weights on the board and run the simulation.
 
 ## Folder structure
 
-- 
GitLab